January 2012


[Image: bottlenose dolphin, via Wikipedia]

Our latest Semantic-Link discussion was interesting in that it touched on some distinct but deep topics that tend to recur in our discussions, namely: usability, privacy and the old standby – the definition of semantics itself.

I won’t spend any more time on the definition of semantics beyond noting the consensus (for purposes of this discussion) that it means “meaning”, in two contexts: linguistic/NLP-related word-meaning semantics, and compliance with W3C standards – or architectural Semantics.  In essence, the latter is what enables a machine version of the former.
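For a concrete sense of what a “machine version” of meaning looks like, here’s a minimal sketch using Python’s rdflib – my own illustration, not something from the podcast – encoding a couple of relationships as W3C-standard RDF triples that a machine can then query:

    # A minimal sketch (my illustration, not from the podcast) of
    # "architectural Semantics": meaning encoded as W3C-standard RDF
    # triples that a machine can query.  Requires: pip install rdflib
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")  # hypothetical namespace

    g = Graph()
    g.add((EX.Dolphin, RDF.type, RDFS.Class))
    g.add((EX.BottlenoseDolphin, RDFS.subClassOf, EX.Dolphin))
    g.add((EX.BottlenoseDolphin, RDFS.label, Literal("bottlenose dolphin", lang="en")))

    # The machine version of "what kinds of dolphin are there?"
    for species in g.subjects(RDFS.subClassOf, EX.Dolphin):
        print(species)  # -> http://example.org/BottlenoseDolphin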

The focus was actually a conversation with guest Nova Spivack and his more recent efforts, including Bottlenose and StreamGlider. (Next time we’ll have to let Nova do more of the talking, as we only really had time to dig into the first of those.)  Bottlenose is intended to help people manage and interconnect their interactions across the multiple electronic realms in which they operate.  While Nova mentions that the system doesn’t currently make use of W3C standard architectural Semantics, it does use ontologies to relate topics and navigate meaning.  This is particularly visible in Bottlenose’s Sonar – which renders a visualization of the active topics, hash-tags, and people around you, with an adjustable time horizon.  If you’d like to try it out during the private beta, visit Bottlenose.com and Sign Up using the Invite Code: semanticlink.
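Not having seen under Bottlenose’s hood, here’s a rough sketch of what a Sonar-style “active topics” calculation could involve – recency-weighted hashtag counts over an adjustable time horizon. The function names and decay model are my own; this is illustrative, not Bottlenose’s actual algorithm.

    # Hypothetical sketch of a Sonar-style "active topics" score:
    # recency-weighted hashtag counts over an adjustable time horizon.
    import math
    import time
    from collections import Counter

    def active_topics(messages, horizon_secs=3600, half_life_secs=900, now=None):
        """messages: iterable of (timestamp, [hashtags]) tuples."""
        now = time.time() if now is None else now
        scores = Counter()
        for ts, tags in messages:
            age = now - ts
            if 0 <= age <= horizon_secs:
                # each half_life_secs of age halves a message's weight
                weight = math.exp(-age * math.log(2) / half_life_secs)
                for tag in tags:
                    scores[tag] += weight
        return scores.most_common()

    now = time.time()
    msgs = [(now - 60, ["#semweb", "#bottlenose"]),
            (now - 600, ["#semweb"]),
            (now - 7200, ["#oldnews"])]  # outside the one-hour horizon
    print(active_topics(msgs))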

Listen to the podcast here: Semantic Link Podcast – January 2012

As mentioned above, two key items arose from the discussion – the matter of privacy, and the question of transparency.  In the case of privacy: would it become an issue, from a business-intelligence standpoint, that others could more easily see the topics someone is discussing or investigating – especially if such a tool could cross multiple networks/platforms in finding patterns?

As is often the case in these Semantic-Link discussions, the question of how much should be exposed about the use of semantics arose.  There is, of course, a balance between active and viral evangelizing of semantics: the cost of exposure is simplicity and usability, while the benefit is flexibility and control for those who can handle it.

The answer itself is complicated.  On the one hand, technologies need to evolve in terms of leveraging semantics in order for people to really benefit from the underlying semantic capabilities.  At the same time, those same people we’re talking about shouldn’t have to understand the semantics that enable the experience.  Paul Miller, host of the podcast, also wrote about this issue.  I’ll add that investors do like to hear that their company is using unique and valuable techniques.  So too, though, any company making use of semantics likely sees it as a competitive advantage – a disincentive to sharing details of the secret sauce.

As mentioned during the podcast, this is a matter of which audience is being addressed – the developers or the masses.  And even the mass audience is split (as is the case with almost all other software users): there are casual users, and there are those who are hardcore – and when we’re talking about masses, many, many more people fall into the casual camp.  So from a design standpoint, this is where usability really matters, and that means simplicity.

So in the case of Bottlenose, for the time being they’ve chosen to hide the details of the semantics and simplify the user experience – which will hopefully facilitate broader adoption.  There may also be room for a power-user mode that exposes the inner workings of the black-box algorithms that find and weigh associations between people, places, things… and lets users tweak those settings beyond the time-frame and focus adjustments currently provided.
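To make that concrete, here’s a minimal sketch of the kind of knobs such a power-user mode might expose – user-adjustable weights on the signals that tie two accounts together. The signal names and numbers are invented for illustration; this is not Bottlenose’s real scoring.

    # Hypothetical sketch of power-user knobs: adjustable weights on
    # the signals used to score an association between two accounts.
    DEFAULT_WEIGHTS = {"mentions": 3.0, "shared_hashtags": 1.5, "replies": 2.0}

    def association_score(signals, weights=DEFAULT_WEIGHTS):
        """signals: raw counts, e.g. {"mentions": 4, "replies": 1}."""
        return sum(weights.get(name, 0.0) * count for name, count in signals.items())

    signals = {"mentions": 4, "shared_hashtags": 6, "replies": 1}
    print(association_score(signals))  # default weighting

    # A power user could dial hashtag overlap up and mentions down:
    tweaked = dict(DEFAULT_WEIGHTS, shared_hashtags=4.0, mentions=1.0)
    print(association_score(signals, tweaked))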

Nova also mentioned the LockerProject, in which personal data could be maintained outside any one particular network or platform.  This of course helps on the privacy side, but adds a layer of complexity (until someone else comes along and facilitates easy integration – which will no doubt chip away at some of the privacy value).

Personally, I’d love the ability to combine slices of personal activity from one or multiple platforms with tools such as Bottlenose, so that I could analyze activity around slivers – or Circles, in the case of Google+ – from various networks, in any analytical platform I choose.


[Image: Aurasma logo, via Wikipedia]

[Image: Word Lens logo, via Wikipedia]

In the same vein as Word Lens, which I wrote about here just over a year ago, Aurasma too looks through your lens and “augments reality”.  What does that mean, though?  And why is it interesting?  At the most basic end of augmented reality, think of those times in touristy areas where you’ve had someone take a picture of you sticking your face through a board, on the front of which – surrounding the hole you’re looking through – is painted some well-built body that surely isn’t mistakable for yours.

[Image: Wikitude World Browser logo, via Wikipedia]

Add some basic technology, and you have photo-doctoring capability that puts a border (or mustache) on your photo, or converts it to a sepia or negative view.  Geo-code and/or date-stamp the image file, integrate with information on buildings, locations, people and/or events that occurred there, and you can display that information along with the image when the coordinates correspond, à la Wikitude.  Load up that app, turn it on, walk around pointing your phone at things, and see what it says about your surroundings.  (MagicPlan, an iPhone app from Sensopia, is a practical application of related technology, enabling CAD for making floorplans!)
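For a sense of what the “coordinates correspond” step involves, here’s a minimal sketch: compute the distance from the phone’s coordinates to known points of interest and surface whatever is in range. The POI list and radius are invented for illustration – this is not Wikitude’s actual code.

    # Sketch of the geo-matching step behind a Wikitude-style overlay:
    # find points of interest within range of the phone's coordinates.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        r = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    POIS = [("Empire State Building", 40.7484, -73.9857),
            ("Flatiron Building", 40.7411, -73.9897)]

    def nearby(lat, lon, radius_m=200):
        return [name for name, plat, plon in POIS
                if haversine_m(lat, lon, plat, plon) <= radius_m]

    print(nearby(40.7480, -73.9855))  # -> ['Empire State Building']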

Aurasma adds to this by integrating image recognition (think word recognition, but visual – picking out defined items) and rendering associated audio, video, animation, what have you – much like scanning a QR code would launch an associated action – but in this case, like Word Lens, doing it in place, on the image.  Take a look:

The reality is that, behind the scenes, with text, image or voice recognition, any action could be defined to launch upon encountering triggers.  Going further, imagine using multiple criteria or triggers to launch actions – tweaking the criteria for different scenarios.  For example, a coffee company logo could spawn a video themed “start your morning with a cup” if the logo is seen early in the day, “get a mid-day boost” in the afternoon, or “keep your mind sharp tonight” in the evening (adding “to get your studying done” if the geocode also indicates that the location is on a college campus).  The mantra of late has been “context is king”.  That’s context.
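A quick sketch of that coffee example, just to show the shape of multi-trigger logic – a recognized logo, the time of day, and a geocode-derived flag combine to select the action. All of the rules and names here are invented for illustration.

    # Sketch of multi-trigger actions: a recognized logo plus time of
    # day plus a geocode-derived flag select the video, mirroring the
    # coffee example above.
    from datetime import datetime

    def pick_video(recognized_logo, hour=None, on_campus=False):
        if recognized_logo != "coffee_co":   # image-recognition trigger
            return None
        hour = datetime.now().hour if hour is None else hour
        if hour < 12:
            return "start your morning with a cup"
        if hour < 18:
            return "get a mid-day boost"
        msg = "keep your mind sharp tonight"
        if on_campus:                        # location trigger
            msg += " to get your studying done"
        return msg

    print(pick_video("coffee_co", hour=9))                   # morning
    print(pick_video("coffee_co", hour=21, on_campus=True))  # campus evening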

Here’s another hands-on example of use:



The December episode of the Semantic-Link podcast was a review of the past year, and a look forward.  The framework for the discussion was:

  • What company, technology or issue caught your attention in 2011?
  • Are we “there” yet?
  • What are people watching for in 2012?

Notable attention-grabbers were: schema.org and its impact on who pays attention (i.e. the SEO space); linked data (and open data); increased policy-maker awareness of the need to pay attention to interoperability issues; commercial integration of technology (ontologies plus NLP capabilities) to leverage unstructured content; and of course Siri (a key example of such integration…).

In terms of where we are in the progression of the semantic technology realm, the general sentiment was that Siri represents the beginning of putting UI at the front of the process of leveraging semantics, by making the back-end effort invisible to the user.  And looking forward, the feeling seems to be that we’ll see further improved UI, stronger abilities in analysis and use of unstructured content, greater integration and interoperability, data-driven user navigation, and Siri clones.

Give a listen, and be sure to express your opinion about a) topics that should be covered in the future, and b) the ways you would like to interact or participate in the discussion (see dark survey boxes).

