

This is an update to the Drupal-related portion of my 2/7/11 post:

Semantic Web Bus or Semantic Web Bandwagon?


Stéphane Corlosquet posted some background regarding his research, along with a link to his master's thesis, paving (or at least mapping) the way to the inclusion of RDFa in Drupal 7.

The latter does a good job outlining the matter being addressed – in a pretty digestible way, even for the lay person – along with the way to get there.  Of particular note is the emphasis on making the technology easy to leverage, as evidenced by the existence of its Chapter 4, focused on usability and adoption.
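To make the RDFa idea concrete, here is a small sketch (Python, standard library only) that scans a hypothetical snippet of the kind of markup Drupal 7 can wrap around a node – the vocabulary terms and values shown are illustrative, not Drupal's exact output:

```python
from html.parser import HTMLParser

# A hypothetical snippet of the sort of RDFa-annotated HTML a Drupal 7
# node might produce (terms like sioc:Item and dc:title are illustrative).
SNIPPET = """
<div typeof="sioc:Item foaf:Document" about="/node/1">
  <h2 property="dc:title">My first post</h2>
  <span property="dc:date" content="2011-02-07">February 7, 2011</span>
</div>
"""

class RDFaScanner(HTMLParser):
    """Collect (attribute, value) pairs for common RDFa attributes."""
    RDFA_ATTRS = {"about", "typeof", "property", "content"}

    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in self.RDFA_ATTRS:
                self.found.append((name, value))

scanner = RDFaScanner()
scanner.feed(SNIPPET)
print(scanner.found)
```

The point of the thesis work is that a content author never sees any of this – the annotations ride along invisibly in the markup, where a consuming application can pick them out.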

After all, this effort finally represents the technical equipping of content – through the workflow and processes of the non-technicians who generate that content – so that it can be technically consumed.  This is how most of our everyday systems operate (think of the behind-the-scenes code that is incorporated into a Word document, for example, when the bold or italics button is pressed).

Once we arrived at the participatory phase of the Web, this type of invisible facilitation/enablement within everyday processes — in a usable way, no less — became an essential pathway to its semanticization.


Enhanced by Zemanta


I’m finally getting a break from chipping ice and shoveling snow, so before the next round comes in, I wanted to get this post up about our second episode of the Semantic Link podcast.


In brief, we had an interesting discussion around how we anticipate Drupal 7 will impact the landscape and why – specifically its built-in ability to generate semantic annotation of content. To date there has been a chicken-and-egg situation, where the development of semantic-consuming applications has been waiting for consumable content – while efforts to generate semantic content have been awaiting the incentive of there being systems to consume, digest, and expose it. Call it CM-antics or C-Mantics – either way it is easier than saying “CM Semantics” – but perhaps it’ll reduce some of the antics.


Other parts of the conversation included how semantic solutions find their way into companies, and how semantics has influenced the division of labor and the definition of IT roles within companies (CTO vs. CIO) by changing the nature of information itself – making it more of a technology, or part of the machine itself.


Give a listen, and enjoy: Semantic Link – Episode 2



If this isn’t one of the coolest things you’ve ever seen…

You probably thought it was Jetsons material that someone could speak one language into a phone and you could hear it in a different language on the other end.  Pretty great stuff, translation on the fly.  Now think about looking at something that is written in a different language, and being able to see it in another, without having to go look it up somewhere!

That’s exactly what the Word Lens app from Quest Visual does – which you’ve got to see to believe (if not understand)!

I don’t know if this is exactly right, but “bastante salvaje” (roughly, “pretty wild”) if you ask me!



What do you get when you cross a set of technologies with an evangelist, a community activist, a business strategist, a Hungarian from the W3C, an ontologist / library scientist, a standards expert, a seasoned Internet executive, and a Slovenian entrepreneur?

Hopefully, what you get is an interesting discussion.  Eric Franzon from SemanticWeb.com and Paul Miller of  Cloud of Data have organized just such a cross-section of participants for a monthly discussion – The Semantic Link podcast series – on things Semantic and/or Linked – from multiple perspectives.


I had the honor of being included at the table, and at this week’s inaugural conference call and Semantic Link podcast, we covered our different thoughts on the highlights for the space over the past year, and our hopes and dreams for the year to come.


Image via Wikipedia: Complementary Angles

Since Tony Shaw wrote his post about their motivations and intentions in selling the SemTech Conference and Semantic Universe, some others have asked what I was thinking by having proposed it in the first place.  That’s actually pretty easy to explain.

In my initial post about the deal, I touched on my sense that the WebMediaBrands approach to the space, and its efforts to date, complemented what Tony was doing with SemTech.  Sure, they both focused on similar material, and involved many of the same cast of characters, but the interesting part was in their individual strategies and execution.

As background: Outside the more academically focused ISWC, SemTech had pretty much become the annual convention for the community, a good part of which was about migration to the business potential of these technologies.  The energy caught the attention of what was then Jupiter Media, who saw the opportunity to focus right in on what outside business was looking for: how to leverage these capabilities for competitive advantage.

SemTech too was looking to help answer that question – but was doing so within the context of fostering that community and its discovery, with programs structured to focus on sector-specific application.  Jupiter came from the other direction, with the LinkedData Planet conference asking right off how business can make use of these capabilities – a question it sustained in the subsequent Web3.0 Conference, under WebMedia’s Mediabistro.

It is the underlying approaches of the organizers that shine a light on the potential synergies here – the complementary angles – and the benefits should manifest beyond the organizers themselves.  The modus operandi of the SemTech organizers has been methodical community building across academics, standards, and business, while that of WebMedia is vertical integration of offerings for their consumption.  So the thinking was that SemTech’s introspective contemplation of the question, combined with WebMedia’s pragmatic approach, would yield brass tacks.


To put a shine on those tacks, combining the big SemTech event with WebMedia’s year-round and multi-pronged focus-within-the-vertical should also help wash away a subtle but present “us versus them” undercurrent among participants.  For today, the community can ignore any “which team” questions, or what “it” (Semantic Web, Linked Data, Web 3.0, Web of Data) should be called, and who coined which terms.  As one, the combined efforts can focus on furtherance – interoperability, efficiency, usefulness…  Perhaps we’ll see the first signs of this happening at this week’s Semantic Web Summit, in Boston.


Datasets in the Linking Open Data project (image via Wikipedia)

Wow.  If you thought the Linking Open Data cloud had grown between September 2007 (right) and July of 2009 (below), take a look at this to see where we are NOW!

Instance linkages within the Linking Open Data datasets (image via Wikipedia)

As Richard and Anja note on the site linked above, the cloud images show “some of the datasets that have been published in Linked Data format, by the Linking Open Data community project and other organisations.”

Where is this going? Andreas Blumauer of Semantic Web Company, in Vienna, put it well: “15 years ago we all were excited when we published HTML for the first time and it didn’t take a long time until all of us were “on the internet”. Now we are starting to publish data on the web. Based on semantic web technologies professional data management will be possible in distributed environments generating even more network effects than Web 1.0 and Web 2.0 ever did.”

Some might ask where the value derived from this cloud lies, or whether membership in it is just marketing.  Talis’ Tom Heath outlines, in the latest issue of Nodalities Magazine, that without Linked Data there couldn’t be a Semantic Web.  Being linked and of use means having been made available following the Linked Data principles.  These include: things having unique identifiers (URIs); identifiers in the form of hypertext (HTTP) URIs, so they are standardly navigable (dereferenceable); destinations at which there is useful and standardly interpretable information (in RDF/XML) describing the thing; and descriptions which contain links to other things (read: HTTP URIs which also resolve to RDF/XML).  Through explanation of the progression from FOAF files (where the “things” at these “URIs” are individual people, collectively representing the basis for semantic social networks), to working out standards around what constitutes an information vs. non-information resource (via httpRange-14), Tom makes the all-important point that each step along the way is an essential building block toward where we are going.
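Those principles can be seen in miniature below – a sketch, using Python’s standard-library XML parser, that pulls the identifying URI and the outbound links from a small, hypothetical RDF/XML description of the kind one might get back when dereferencing a Linked Data URI (the names and URIs are made up for illustration):

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FOAF = "http://xmlns.com/foaf/0.1/"

# A minimal, hypothetical RDF/XML description of a person, of the sort
# a FOAF file might serve when its URI is dereferenced.
DOC = f"""
<rdf:RDF xmlns:rdf="{RDF}" xmlns:foaf="{FOAF}">
  <foaf:Person rdf:about="http://example.org/people/alice">
    <foaf:name>Alice</foaf:name>
    <foaf:knows rdf:resource="http://example.org/people/bob"/>
  </foaf:Person>
</rdf:RDF>
"""

root = ET.fromstring(DOC)
person = root.find(f"{{{FOAF}}}Person")

# Principle: the thing has a unique HTTP URI as its identifier...
uri = person.get(f"{{{RDF}}}about")

# ...and its description links out to other things (more HTTP URIs).
links = [el.get(f"{{{RDF}}}resource")
         for el in person
         if el.get(f"{{{RDF}}}resource")]

print(uri, links)
```

Follow the link to Bob’s URI, get back more RDF describing Bob, and you are traversing the cloud in the pictures above.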

And where (at this stage) is this?  When Tony Shaw, of Semantic Universe, pointed to Linked Data in his recent article “Nine Ways the Semantic Web Will Change Marketing“, he was pointing to its impact on Marketing.  But beyond that, we can take from his explanation the broader capabilities afforded by it: findability, pullability, mashability, mobility – essentially interoperability, as applicable to any activity, sector or function which involves information (read: data).  Can you think of any that don’t?

Enabling data in this way (with all these building blocks) is “one” thing – moving control closer to the end user, and toward solutions and value.  Making it “usable” is yet another.  Every interaction is marketing (good or bad) for the resources of the interaction.  The opportunity this points to is, through the leveraging of those capabilities, to improve the experience around deriving those solutions and achieving that value.



Today, WebMediaBrands announced that it acquired the Semantic Technology Conference (SemTech) and Semantic Universe.  SemTech has been the main non-academic annual gathering for the Semantic Technology space for six years thus far.  In the past few years, WebMediaBrands has also been active in the space, with its SemanticWeb and MediaBistro arms, and its organizing of related events including the Web3.0 Conference and before that, LinkedData Planet.



The combination of WebMediaBrands’ year-round focus on the space (through regional and sub-sector targeted events), with the annual convention that SemTech has been, should result in driving the space forward.  Together, their now complementary efforts should facilitate momentum on the commercial side of the space.  Perhaps we’ll also see the development of some useful industry-wide resources, as a result.

Update: Press release from Semantic Universe

Enhanced by Zemanta

Early in my career, when working as a data jockey with an economic consulting firm, I was on a team for a particular project where, I’ll always remember, we were referred to (in the New York Times) as “nitpicking zealots”.  While I knew it was meant as a criticism, I took the reference then (as now, for that matter) as a compliment – emphasizing the attention to detail in our analysis.


For me, that focus has long been coupled with a heavy emphasis on usefulness (OK, and logic) as a driving factor in doing or creating anything.  “Stick-in-the-mud” – maybe.  “Drives you nuts” – sure, the family says this sometimes…  But things just need to make sense.

So it shouldn’t surprise me (or anyone else) that, in my recent Experience Design mini-masters project, I had an overriding need for the product idea my team was to come up with to be of real use and value.  The first task was to evaluate whether design principles had been followed in the creation of a particular product (the Roadmaster – a single-line scrolling text display for use on a car).  Then we were to apply these design principles to come up with a different product/application making use of the technology for the context.  We performed our review by considering the Roadmaster’s affordances (what the design suggested about its use); its mapping of controls to meaning or functionality; whether it provided feedback during use; its conceptual model and obviousness of purpose; and any forcing functions, limiters, or defaults.  Having developed a “sense” of the product as it was, we embarked on the design effort by adding interviews/surveys to gather research on potential market need/desire.

Without getting into our conclusions about the Roadmaster product itself, of particular interest is where we ended up going with our design as a result of performing our own contextual inquiry.  Some great ideas emerged among the different teams, each of which prototyped its design (using Axure), performed usability testing, and presented results.  Most of the teams designed mainly for social-media-driven applications.  With our own goals including not just usability but the usefulness factor mentioned above, we discovered potential in re-purposing the device – directing it not at other drivers, but at the driver of the vehicle in which it is installed.  Specifically, to aid hearing-impaired drivers – whether receiving guidance from a driving instructor, instructions from a GPS, or conversation from a passenger.

The design, which at one point we dubbed the “iDrive” (for reasons that will reveal themselves), involves mounting the scrolling text display out in front of and facing the driver, and integrating speech-to-text conversion, so that as words are spoken, the driver sees them displayed out in front of them – without having to turn to see the hands or lips of a person communicating with them, and without having to look away from the road to read directions on a GPS screen.  In its simplest form, the design calls for an iPhone (or similar) application to perform the voice-to-text conversion, transmitting the resulting text to the display for the driver.  An extension of this concept could incorporate detection and display of other sounds, such as a honk, and which direction it is coming from.  Since the program, we’ve found that the required voice-to-text conversion capability, in a mobile app (e.g. for the iPhone) as we called for in the design, does exist – so with the combination of the technologies (display, conversion, mobile application, and GPS capability), serving the hearing-impaired-driver market in this way should be within reach.
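As a rough sketch of that message flow – with a stubbed-out recognizer standing in for the real voice-to-text service, and a made-up display width – the pipeline might look like:

```python
# Sketch of the iDrive flow: a (stubbed) speech-to-text stage feeds a
# fixed-width scrolling display. The recognizer is simulated here; a real
# build would swap in an actual voice-to-text service on the phone.

DISPLAY_WIDTH = 16  # characters visible at once on the in-car display (assumed)

def recognize(audio_chunk: str) -> str:
    """Stand-in for the phone app's voice-to-text conversion."""
    return audio_chunk  # in this sketch, the 'audio' is already text

def scroll_frames(text: str, width: int = DISPLAY_WIDTH):
    """Yield successive display frames as the text scrolls right-to-left."""
    padded = " " * width + text + " " * width
    for i in range(len(padded) - width + 1):
        yield padded[i:i + width]

spoken = recognize("Turn left at the next light")
frames = list(scroll_frames(spoken))
print(frames[DISPLAY_WIDTH])  # the text entering the display window
```

The interesting engineering is all in the `recognize` stub – latency and accuracy of the conversion are what make or break the experience for the driver.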

A side-note to this post: The faculty of the UXD program, Dr. Marilyn Tremaine, Ronnie Battista, and Dr. Alan Milewski, helped to reveal for me that the formal processes of experience design, and particularly contextual inquiry, closely parallel what I’ve sought to achieve through the joining of the disciplines of Usability, Value Network Analysis (perspectival), and a dash of Semantic (extensible and interoperable) thinking.

Reblog this post [with Zemanta]


If you haven’t already encountered Google’s newly released Sidewiki, it is a web annotation feature accessible via browser plug-in or their toolbar – essentially a means for people to comment on pages.  Unlike tools for making notes just for yourself (like sticky notes on your screen, or the electronic equivalent), these comments are visible to others who use it and visit those pages – right on the page with the content.  This isn’t a new concept, but one that gives cause to consider the “traditional” dimensions of web experience.

Generally speaking, users of web resources have typically thought of the pages they view as being depicted in the way intended by the owner of the domain (or page).  If we want to get philosophical, ownership of the rendering of the page, it could be argued, is the user’s – and plug-ins empower such customization, as this is referred to.


Similarly, functionality of a site has typically been considered by users to be provided/delivered by, and/or controlled by, the site owner.  In the context of beginning to think of rendering as being other-webly (i.e. coming from someone other than the provider), the same holds true with respect to functionality.  The functionality being added to the experience here is the ability to comment, and to see the comments of others, about the page.

This starts to bring home the concept that the browser is acting as the actual platform, rather than the page/site itself.  In this case, we’re talking about bringing together the page’s content with thoughts or opinions about the page – or about things that are on the page.  So in essence, what Sidewiki adds is a virtualized forum – where the forum content is in the hands of Google rather than those of the owner of the site – but is displayed alongside the content itself.


This is not altogether different from what AdaptiveBlue’s Glue does – though there are a couple of key differences.  In both cases the user must be using the plug-in in order to see or add content – akin to joining the community.  And in both cases the comment/opinion content that is generated as a result is in the control of the plug-in provider.  The first, and most notable, difference (for now, at least) is that Sidewiki “acts” as if the user-generated content is about the page which it annotates, while Glue’s emphasis is on the asset to which the page refers.  The key benefit of the latter, in cases where the commentary relates to an asset referenced on the page, is that it decouples the item referred to from the location which makes reference to it.  This translates to Glue displaying the comment on any page where the same item is found, as opposed to just the page where the comment was made.  This difference won’t likely persist, and seems more a matter of emphasis/focus and positioning.
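A toy sketch of that keying difference (all names and URLs hypothetical): Sidewiki-style notes attach to the page URL, while Glue-style notes attach to the asset the page refers to, so the latter surface on every page referencing that asset.

```python
page_notes = {}    # Sidewiki-style: page URL -> comments
asset_notes = {}   # Glue-style: asset ID -> comments
page_to_asset = {  # which asset each page refers to (assumed known)
    "http://siteA.example/review": "book:moby-dick",
    "http://siteB.example/listing": "book:moby-dick",
}

def annotate_page(url, comment):
    """Attach a comment to the page itself (Sidewiki-style)."""
    page_notes.setdefault(url, []).append(comment)

def annotate_asset(url, comment):
    """Attach a comment to the asset the page is about (Glue-style)."""
    asset_notes.setdefault(page_to_asset[url], []).append(comment)

def visible_notes(url):
    """Comments a plug-in user would see when visiting `url`."""
    return (page_notes.get(url, [])
            + asset_notes.get(page_to_asset.get(url), []))

annotate_page("http://siteA.example/review", "great layout")
annotate_asset("http://siteA.example/review", "great book")

print(visible_notes("http://siteA.example/review"))
print(visible_notes("http://siteB.example/listing"))
```

The second page never received a comment directly, yet the asset-keyed note follows the book there – exactly the decoupling described above.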

Since annotations are visible only to users of the particular service with which they were made, the more of these services we see, the more fragmented the sea of commentary becomes.  The next level may be about “aboutness” – differentiation by the ability to determine the relatedness of otherwise unassociated commentary and content, and to make the virtual connection between the two for the user.


In the context of marketing and advertising, we’ve heard more during the last year or so in reference to the semantic web and semantic technology.  What does Semantic Advertising really mean?  One interpretation – the one we’re not talking about here – is the selling of something by calling it semantic, which some have done in order to ride the momentum of the space (which I call “meme-entum”), selling something based on a loose association with the concept of “meaning” or “intent”.  So what are we talking about?

The Art of Online Advertising (image by khawaja via Flickr) vs. the New, Improved *Semantic* Web (image by dullhunk via Flickr)

The strategy in the space has long been driven by word association, increasingly on an automated basis.  At one time, placement was done entirely manually – and automation of keyword matching increasingly became the basis for new business models.  That is, after all, the basis of what we now think of as contextual advertising – the alignment of what the user is looking for with the other things they encounter on the page.
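As a baseline for the questions that follow, plain keyword-based contextual matching can be sketched in a few lines – score each ad by how many of its keywords appear in the page text (the ad inventory here is invented for illustration):

```python
# Naive keyword-overlap ad matching: no disambiguation, no sentiment,
# no intent detection - just counting shared words.
ADS = {
    "travel-deals": {"flight", "hotel", "vacation"},
    "coffee-shop":  {"coffee", "espresso", "beans"},
}

def pick_ad(page_text: str) -> str:
    """Return the ad whose keyword set overlaps the page text the most."""
    words = set(page_text.lower().split())
    scores = {ad: len(keywords & words) for ad, keywords in ADS.items()}
    return max(scores, key=scores.get)

print(pick_ad("book a cheap flight and hotel for your vacation"))
```

Everything interesting about the “semantic” question starts exactly where this sketch ends – when words are ambiguous, when the sentiment is wrong for the brand, or when the page carries machine-readable metadata the matcher could trust instead of guessing.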

So to put it simply: What is it that is new and different?  What is it about the inner workings of an advertising mechanism that makes an offering semantic or not?  What are the drivers and opportunities around these differences?  What is real?  These are some of the things we’re looking to learn about in detail at the panel discussion that I’ve been helping to organize for Internet Week in New York – the title of which is Semantic Advertising.

We’ll leave it to our moderators to dig into the nuts and bolts of the subject with the experts that have been gathered.  Going into the discussion, though, here are some of the questions I’m thinking about:

    • Since keyword matching is, well,  keyword matching: what are the main differences between straight-up contextual advertising that uses keyword lookups relative to its semantic brethren?
    • Does the addition of keyword frequency, and therefore the statistical analysis of the text, make the matching on a ranking basis qualify as semantic?
    • Going beyond simply enhancing alignment, predicated upon statistical assumptions, is it the further use of NLP to not just extract concepts to be matched, but to determine the intent by the terms used – to better tune matches when words have multiple potential meanings?  Many of us have encountered the unintentionally matched ads – which can be disastrous for a brand.  What can really be done now, and how?
    • Further on the NLP side, there is the potential for sentiment detection – so even when the correct meaning of a term is understood, determining whether its use is appropriate for matching would be based on the positive or negative connotation of its use (think here in terms of whether you would want your airline advertised next to a story about an aviation mishap, for example).
    • Going at the question from the “semantic-web” side, is embedding (and detection of) metadata on the page just a different flavor of Semantic Advertising – or should we be calling that Semantic Web Advertising instead?  This seems less prone to interpretation errors, but the approach relies upon metadata which is largely not yet there.  (Because of the markup related aspects of this point, I wanted to call this post “Mark(up)eting and (RDF)ertising”, but was talked out of doing so).
    • Is there a difference in strategy and/or scalability when considering whether a semantic approach is more viable when done within the search process, as opposed to on the content of the page being viewed?
    • If ads to be served are stored in semantically compliant architecture, does that itself provide any advantages for the service provider?  And would doing so give rise to the service being referred to as Semantic Advertising?  Does this even enter into the equation at this point?
    • Would increases in the amount of embedded metadata shift the balance of systematically enhanced ad selection and presentation of sponsored content – from one web-interaction phase to another?

I’m looking forward to the panel – to open my mind regarding these and other factors that come into play – and to what elements and trends will be necessary for the viability of the various possible directions here.


