Our latest Semantic-Link discussion was interesting in that it touched on some distinct but deep topics that tend to recur in our discussions, namely: usability, privacy and the old standby – the definition of semantics itself.

I won’t spend more time on the definition of semantics beyond noting that the consensus (for purposes of this discussion) was that it means “meaning”, in two contexts: linguistic/NLP-related word-meaning semantics, and compliance with W3C standards – or architectural Semantics.  In essence, the latter is what enables a machine version of the former.

The focus was actually a conversation with guest Nova Spivack, and his current efforts, including Bottlenose and StreamGlider. (Next time we’ll have to let Nova do more of the talking, as we only really had time to dig into the first of those.)  Bottlenose is intended to help people manage and interconnect their interactions across the multiple electronic realms in which they operate.  While Nova mentions that the system doesn’t currently make use of W3C-standard architectural Semantics, it does use ontologies to relate topics and navigate meaning.  This is particularly visible in Bottlenose’s Sonar – which renders a visualization of the active topics, hash-tags, and people around you, with an adjustable time horizon.  If you’d like to try it out during the private beta, visit Bottlenose.com and sign up using the invite code: semanticlink.

Listen to podcast here: Semantic Link Podcast – January 2012

As mentioned above, two key items arose from the discussion – the matter of privacy, and the question of transparency.  In the case of privacy: would it become an issue, from a business-intelligence standpoint, that others could more easily see the topics that someone is discussing or investigating – especially if such a tool could cross multiple networks/platforms in finding patterns?

As is often the case in these Semantic-Link discussions, the question of “how much should be exposed about the use of semantics” arose.  There is, of course, a balance between active and viral evangelizing of semantics: the cost of exposure is simplicity and usability, while the benefit is flexibility and control – for those who can handle it.

The answer itself is complicated.  On the one hand, technologies need to evolve in terms of leveraging semantics in order for people to really benefit from the underlying semantic capabilities.  At the same time, those same people we’re talking about getting the benefit shouldn’t have to understand the semantics that enable the experience.  Paul Miller, host of the podcast, also wrote about this issue.  I’ll add that investors do like to hear that their company is using unique and valuable techniques.  So too, though, any company making use of semantics likely feels it is a competitive advantage – a disincentive to sharing details of the secret sauce.

As mentioned during the podcast, this is a matter of which audience is being addressed – the developers or the masses.  And even the mass audience is split (as is the case with almost all other software users).  There are the casual users, and there are those who are hardcore – and when we’re talking about masses, many more people fall into the casual camp.  So from a design standpoint, this is where usability really matters, and that means simplicity.

So in the case of Bottlenose, for the time being they’ve chosen to hide the details of the semantics and simplify the user experience – which will hopefully facilitate broader adoption.  There may also be room for a power-user mode that exposes the inner workings of the black-box algorithms that find and weigh associations between people, places, things… and lets users tweak those settings beyond the time-frame and focus adjustments that are currently provided.

Nova also mentioned the LockerProject, in which personal data could potentially be maintained outside any one particular network or platform.  This of course helps on the privacy side, but adds a layer of complexity (until someone else comes along and facilitates easy integration – which will no doubt chip away some of the privacy value).

Personally, I’d love to see the ability to combine slices of personal activity from one or multiple platforms, with tools such as Bottlenose, so that I could analyze activity around slivers or Circles (in the case of Google+ usage) from various networks, in any analytical platform I choose.

In the same vein as Word Lens, which I wrote about here just over a year ago, Aurasma too looks through your lens and “augments reality”. What does that mean though? And why is it interesting? At the most basic end of augmented reality, think of those times in touristy areas when you’ve had someone take a picture of you sticking your face through a board, on the front side of which – surrounding the hole you’re looking through – is painted some well-built body that surely isn’t mistakable for yours.

Add some basic technology, and you have photo doctoring capability that puts a border (or mustache) on your photo, or converts it to a sepia or negative view. Geo-code and/or date-stamp the image file, and integrate with information on buildings, locations, people and/or events that occurred there, and you can display that information along with the image when the coordinates correspond, a la Wikitude. Load up that app, turn it on, and walk around pointing your phone at things, and see what it says about your surroundings. (MagicPlan is an iPhone App, from Sensopia, that is a practical application of related technology, enabling CAD for making floorplans!)
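
To make the geo part concrete, here’s a rough Python sketch of the kind of coordinate matching that sits behind a Wikitude-style overlay. The landmark table, names and radius here are made up for illustration – a real app queries a large point-of-interest database and adds compass heading – but the core idea is just “what known things are within range of where the camera is?”:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical mini point-of-interest table, for illustration only.
LANDMARKS = [
    {"name": "Empire State Building", "lat": 40.7484, "lon": -73.9857},
    {"name": "Bryant Park", "lat": 40.7536, "lon": -73.9832},
    {"name": "Statue of Liberty", "lat": 40.6892, "lon": -74.0445},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearby(lat, lon, radius_m=1000):
    """Landmarks within the radius, nearest first - the overlay candidates."""
    hits = [(distance_m(lat, lon, p["lat"], p["lon"]), p["name"]) for p in LANDMARKS]
    return [name for d, name in sorted(hits) if d <= radius_m]

# Pointing the phone from 5th Ave & 34th St:
print(nearby(40.7480, -73.9855))  # → ['Empire State Building', 'Bryant Park']
```

The geocode and timestamp narrow the candidate set; the app then only has to decide which of the nearby items you’re actually looking at.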

Aurasma adds to this by integrating image recognition (think: word recognition, but visual, picking up defined items) and rendering associated audio, video, animation, what have you – much like scanning a QR code would launch an associated action – but in this case, like Word Lens, it does so in place, on the image. Take a look:

The reality is that behind the scenes, with text, image or voice recognition, any action could be defined to be launched upon encountering triggers. Going further, imagine using multiple criteria or triggers to launch actions – tweaking the criteria for different scenarios. For example, a coffee company logo could spawn a video themed “start your morning with a cup” if the logo is seen early in the day, “get a mid-day boost” if it is in the afternoon, or “keep your mind sharp tonight” if it is in the evening (adding “to get your studying done” if the geocode also indicates that the location is on a college campus). The mantra of late has been “context is king”. That’s context.
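
That coffee-logo scenario boils down to a simple dispatch over context signals; here’s a hypothetical Python rendering of it (the trigger name and clip titles are just the ones from the example above – nothing here is an actual Aurasma API):

```python
from datetime import datetime

def pick_clip(trigger, when=None, on_campus=False):
    """Choose which overlay video to launch for a recognized trigger,
    based on time-of-day and location context (the coffee-logo scenario)."""
    if trigger != "coffee_logo":
        return None  # no action defined for this trigger
    hour = (when or datetime.now()).hour
    if hour < 12:
        clip = "start your morning with a cup"
    elif hour < 18:
        clip = "get a mid-day boost"
    else:
        clip = "keep your mind sharp tonight"
        if on_campus:  # geocode says we're on a college campus
            clip += " to get your studying done"
    return clip

print(pick_clip("coffee_logo", datetime(2012, 1, 15, 21), on_campus=True))
# → keep your mind sharp tonight to get your studying done
```

Swap in weather, purchase history, or who you’re standing next to, and the same structure keeps working – that’s what “context is king” cashes out to.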

Here’s another hands-on example of use:

The December episode of the Semantic-Link podcast was a review of the past year, and a look forward.  The framework for the discussion was:

  • What company, technology or issue caught your attention in 2011?
  • Are we “there” yet?
  • What are people watching for in 2012?

Notable attention grabbers were: schema.org and its impact on who pays attention (i.e. the SEO space); linked data (and open data); increased policy-maker awareness of the need to pay attention to interoperability issues; commercial integration of technology (ontologies plus NLP capabilities) to leverage unstructured content; and of course Siri (a key example of such integration…).

In terms of where we are in the progression of the semantic technology realm, the general sentiment was that Siri represents the beginning of inserting UI into the process of leveraging semantics, by making the back-end effort invisible to the user.  And looking forward, the feeling seems to be that we’ll see even more improved UI, stronger abilities in the analysis and use of unstructured content, greater integration and interoperability, data-driven user navigation, and Siri clones.

Give a listen, and be sure to express your opinion about a) topics that should be covered in the future, and b) the ways you would like to interact or participate in the discussion (see dark survey boxes).

During the recording of the December podcast of the Semantic-Link (as of this writing, soon to be posted), I emphasized the general need to enable the general public to begin contributing and consuming linked data – without having to have much, if any, technical wherewithal.  The real explosion of the Web itself came as a result of WYSIWYG authoring and the facilitation of posting content and comments by just about anyone with a web connection.  Similarly, de-tech-ification of where the web is going from here is what will pave the way to getting there.

There are standards and tools now for the related underlying componentry; what is needed is the user-interface development that will usher in the explosion of linked-content generation and consumption (as Web 2.0 did before).

Toward this end, Andreas Blumauer writes about a new version of PoolParty’s WordPress plugin that extends an in-page, Apture-like approach to use – and contribute to – the LD ecosystem.  This (coupled with other elements such as SKOSsy) is an example of the type of UI gateway that is needed to enable the general public to participate – with systems that generate and digest the information currency of the linked-data age.

We recently used Moo to get some really nice self-designed cards made, and were really happy with the quality.

Here’s a 10% discount you can use as a new customer, if you like – the equivalent of entering TPX88K as a promo code in the checkout process.

Here’s the latest installment of our Semantic Link podcast, hosted by Paul Miller of Cloud of Data. Joining me were Christine Connors (Trivium RLG, LLC), Eric Franzon (SemanticWeb.com), Bernadette Hyland (3 Roundstones), and Andraz Tori (Zemanta).

Topics covered this month were:

While I’m still actually waiting to get “in”, I have a couple of comments regarding Google+, from outside the Circle.

From descriptions of this Google Social Networking effort (following Orkut, Wave and Buzz), key elements as of now are: Circles (think of them as groups of people within your network); Sparks (which are topics or areas of interest); Hangouts (video chat rooms); Huddles (group chat); and Instant Upload (automatic mobile photo syncing).

The potential for integrating capability across product areas has always been most intriguing to me.  By serving them up “together”, G+ makes it that much more likely for capabilities to be used together.

First, and I think most interesting, is the way that the concept of Circles melds the idea of a network of friends/connections with tagging/categorization so that, without the clunky thinking of classifying or inviting people to groups, the user is able to achieve the elusive sense of having multiple personas representable within one system.  Some people maintain their professional network in one system (LinkedIn, for example), and their personal network in another (e.g. Facebook).  Others maintain multiple accounts in a single system in order to segregate their “work” online presence from their “family” or “personal play” selves.  For those who already maintain multiple Google accounts, G+ lets you log into multiple accounts at once.  I have yet to see how well you can interact in ways that cross over account lines.

The second area of note is the way that Sparks re-frames the idea of Alerts, subtly shifting the nature of the material that results — from one-off emails or links that you might dig into or forward on, to material that relates to particular areas of interest, which presumably parallel or align with the groupings of people you associate with around those topics.  Twine had used the approach of integrating topic areas and social groupings for alerts – but these were groups that potential recipients would have to join.  In G+, the “proximity” to the Circles aspect – and the fact that those Circles are unique to the individual and don’t require reciprocation – makes for a compelling scenario on the “push” side of the equation. (At the same time, I see some potential issues in terms of “pull” and management by those on the receiving end.)

Together, Sparks and Circles could take us a lot closer to a dream system I yearned for a few years back, that I referred to as a Virtual Dynamic Network.  In this, rather than having defined groups that you would need to join (which would send you related material along with much you would prefer to do without), material you both receive and send would be routed based on what it is about and how it is classified. I would love to see distinct sets of controls for in-bound vs out-bound content.

I won’t know until I get to try it, but ideally G+ will enable you to tie Sparks to Circles.  I’m also hoping you’re able to group your Circles – to relate and arrange them, even hierarchically (consider: a large Circle for your work persona, which might contain multiple Circles for various client or team categories; or a large personal Circle, with sub-Circles for family, local friends, remote friends, and classmates – all with overlap management to avoid multiply-sent content).
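
As a minimal sketch of what such hierarchical Circles with overlap management might look like (the class and the names are hypothetical – this is my wish, not anything Google has described):

```python
# A circle holds direct members plus sub-circles; recipients() flattens the
# hierarchy into a set, so someone appearing in several sub-circles gets
# content only once - the "overlap management" mentioned above.

class Circle:
    def __init__(self, name, members=(), subcircles=()):
        self.name = name
        self.members = set(members)
        self.subcircles = list(subcircles)

    def recipients(self):
        """All members of this circle and its sub-circles, de-duplicated."""
        out = set(self.members)
        for sub in self.subcircles:
            out |= sub.recipients()
        return out

family = Circle("family", {"mom", "sis"})
local = Circle("local friends", {"al", "sis"})   # sis is in two circles
personal = Circle("personal", subcircles=[family, local])

print(sorted(personal.recipients()))  # → ['al', 'mom', 'sis']
```

Posting to the big personal Circle would then reach each person exactly once, however many sub-Circles they appear in.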

Hangouts and Huddles are by nature “social” already, for which you’ll presumably be able to seamlessly leverage Circles.  As with topical material, Instant Upload brings your photo content automatically one step closer to where you are sharing.  Success of all this as a social platform depends significantly on integration between the parts for seamless use by a user across capabilities – for example, adding someone who is participating on a video call or chat right into one or more of the Circles touched or represented by the other participants on that call or chat.

Leveraging other capabilities, such as the linguistic processing behind AdSense (and G+ may already have this in the works), it would not be a stretch for the content in your interactions to generate suggestions for Sparks which you could simply validate — places or people in photos, words in chats, terms that show up in content within Spark items.  From there, it wouldn’t be far to being able to interact with your life through what I might call a “SparkMap” — reflecting the relationships between terms within your areas of interest.
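
As a toy illustration of the suggestion side, here’s a sketch that surfaces candidate Sparks by simple term frequency – a crude stand-in for real linguistic processing, with made-up chat snippets and a tiny stopword list:

```python
import re
from collections import Counter

# Terms that carry no topical signal; a real system would use a fuller
# stopword list and actual NLP (entities, phrases) rather than bare words.
STOPWORDS = {"the", "a", "an", "to", "at", "of", "in", "and", "my", "we", "on", "was"}

def suggest_sparks(texts, top_n=3):
    """Count non-stopword terms across a user's content and return the
    most frequent as candidate Sparks for the user to validate."""
    counts = Counter(
        word
        for text in texts
        for word in re.findall(r"[a-z]+", text.lower())
        if word not in STOPWORDS
    )
    return [term for term, _ in counts.most_common(top_n)]

chats = [
    "Great photos of the dolphins at the aquarium",
    "The aquarium trip was fun, dolphins everywhere",
    "Planning another aquarium visit",
]
print(suggest_sparks(chats, top_n=2))  # → ['aquarium', 'dolphins']
```

The user would simply confirm or dismiss each suggestion – the validation step keeps the automation from polluting your interest graph.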

 

UPDATE: I’m now in, as of Friday afternoon, July 8. So now I’ll be playing, with more ideas to come…

Additional links:

  • How to Get Started with Google+… (socialmediaexaminer.com)
  • A good ScobleEncounter listen (scobleizer on cinch.fm)
  • Quite a collection of tips growing on this public google doc

This was to be a pre-conference post to give an overview of what to expect during the week-long, 150-or-so-session Semantic Technologies Conference – a gathering of all things semantic.

I wanted to mention a few “views” by which you can consider the landscape, to help navigate the more than 150 sessions:

  • Sector / Industry (such as e-gov, health/life science and pharma, publishing, financial…)
  • By stack-/layer-cake component (the individual technology or standard)
  • By function performed (search, data integration, dynamic categorization…)
  • Technical level – from highly technical to purely business-focused

And there are related “tracks” that can help you follow any one of these. Whether you’re interested in what the Semantic Web is in general, or in the intricate architectural aspects of the various segments of the semantic web layer cake/stack (RDF, OWL, SPARQL…), it’ll be covered during the week.

Since the conference is now under way, I’ll mention a few of the points made during the Semantic-Link live podcast on Sunday, an opening session that I was part of.  In particular, I wanted to touch on the “Advice to new attendees” (who represented a surprisingly large portion of those who had already checked in for the week), which included [full mp3 here]:

  • Talk to anyone about anything.  This is an extremely diverse, giving, open and accessible group of people.  (Andraz Tori of Zemanta added: while it is great to see people you haven’t seen in a year, don’t talk just to the ones you know.  Meet and talk with new ones!)
  • Try to sample from the uniquely WIDE variety of topical material covered.  It is rare to find such a range of material so accessible in one place.
  • Don’t try to get deeply into the intricacies of each component of the stack.  Instead, get enough of a sense of how the components relate to one another – so you can then consider the context of anything you encounter here.
  • Don’t be afraid to walk out of a session you determine is not for you, and head into another you were considering.
  • Value the hallway conversations as much as the sessions themselves.
  • Decide whether you are trying to learn anything and everything you can – or if you are seeking out specific solutions or material to justify an agenda – and navigate accordingly.

One topic, released too recently to be on the agenda, is the schema.org arrangement between Google, Bing and Yahoo around the common use of the Microdata vocabulary (vs RDFa or Microformats), which is less expressive and easier to implement.  The question put out during the opening panel discussion was whether this is good, bad, important, or unimportant to the Semantic Web community.  The only consensus of the panel was that it will generate much discussion on all sides of the matter during the week – and that is a good thing.  Christine Connors added that the SEO world will likely jump on this standardization for annotating – and a cottage industry might emerge around people offering to annotate pages.  From my own relatively non-technical perspective, it is strategically positive for the Semantic Web.  To the extent that this opens the floodgates and generates masses of annotation, there is then much more to be worked with, and RDFa can be added where higher degrees of expressiveness are still desired – and such needs will surely emerge.

This is an update to the Drupal-related portion of my 2/7/11 post:

Semantic Web Bus or Semantic Web Bandwagon?

Stéphane Corlosquet posted some background regarding his research, along with a link to his master’s thesis, paving (or at least mapping) the way to the inclusion of RDFa in Drupal 7.

The latter does a good job outlining the matter being addressed — in a pretty digestible way, even for the lay person — along with the way to get there.  Of particular note is the emphasis on facilitating the leveraging of it, as evidenced by the existence of its Chapter 4, focused on usability and adoption.

After all, this effort finally represents the technical equipping of content — through the workflow and processes of the non-technicians who generate that content — so that it can be technically consumed.  This is how most of our everyday systems operate (think about the behind-the-scenes code that is incorporated into Word, for example, when the bold or italics button is pressed).

Once we arrived at the participatory phase of the Web, this type of invisible facilitation/enablement within everyday processes — in a usable way, no less — became an essential pathway to its semanticization.

I’m finally getting a break from chipping ice and shoveling snow, so before the next round comes in, I wanted to get this post up about our second episode of the Semantic Link podcast.

In brief, we had an interesting discussion around how we anticipate Drupal 7 will impact the landscape and why – specifically its built-in ability to generate semantic annotation of content. To date there has been a chicken-and-egg situation, where the development of semantic-consuming applications has been waiting for consumable content – while efforts to generate semantic content have been awaiting the incentive of there being systems to consume, digest, and expose it. Call it CM-antics or C-Mantics – either way it is easier than saying “CM Semantics” – but perhaps it’ll reduce some of the antics.

Other parts of the conversation included discussion of how semantic solutions find their way into companies; and about the way that semantics has influenced the division of labor and the definition of IT roles within companies (CTO vs CIO) due to its changing the nature of information itself, and making it more of a technology – or part of the machine itself.

Give a listen, and enjoy: Semantic Link – Episode 2
