Tue 29 Jan 2013
Tue 4 Dec 2012
This is absolutely genius.
You have to watch (and listen to) it twice in order to really appreciate it.
Wed 18 Jan 2012
Our latest Semantic-Link discussion was interesting in that it touched on some distinct but deep topics that tend to recur in our discussions, namely: usability, privacy and the old standby – the definition of semantics itself.
I won’t spend any more time on the definition of semantics, beyond noting that the consensus (for purposes of this discussion) was that it means “meaning”, with two contexts: linguistic/NLP word-meaning semantics, and compliance with W3C standards – or architectural Semantics. In essence, the latter is what enables a machine version of the former.
The focus was actually a conversation with guest Nova Spivack about his current efforts, including Bottlenose and StreamGlider. (Next time we’ll have to let Nova do more of the talking, as we only really had time to dig into the first of those.) Bottlenose is intended to help people manage and interconnect their interactions across the multiple electronic realms in which they operate. While Nova mentions that the system doesn’t currently make use of W3C standard architectural Semantics, it does use ontologies to relate topics and navigate meaning. This is particularly visible in Bottlenose’s Sonar – which renders a visualization of the active topics, hash-tags, and people around you, with an adjustable time-horizon. If you’d like to try it out during the private beta, visit Bottlenose.com and sign up using the invite code: semanticlink.
Listen to podcast here: Semantic Link Podcast – January 2012
As mentioned above, two key items arose from the discussion – the matters of privacy and transparency. In the case of privacy, would it become an issue, from a business intelligence standpoint, that others could more easily see the topics someone is discussing or investigating – especially if such a tool could cross multiple networks/platforms in finding patterns?
As is often the case in these Semantic-Link discussions, the question of “how much should be exposed about the use of semantics” arose. There is, of course, a balance between actively versus virally evangelizing semantics: the cost of exposure is simplicity and usability, while the benefit is flexibility and control – for those who can handle it.
The answer itself is complicated. On the one hand, technologies need to evolve in terms of leveraging semantics in order for people to really benefit from the underlying semantic capabilities. At the same time, those same people we’re talking about getting the benefit shouldn’t have to understand the semantics that enable the experience. Paul Miller, host of the podcast, also wrote about this issue. I’ll add that investors do like to hear that their company is using unique and valuable techniques. At the same time, any company making use of semantics likely considers them a competitive advantage – a disincentive to sharing details of the secret sauce.
As mentioned during the podcast, this is a matter of which audience is being addressed – the developers or the masses. And even the mass audience is split (as is the case with almost all other software users). There are the casual users, and there are those who are hardcore – and when we’re talking about masses, there are many, many more people who fall into the casual camp. So from a design standpoint, this is where usability really matters, and that means simplicity.
So in the case of Bottlenose, for the time being they’ve chosen to hide the details of the semantics and simplify the user experience – which will hopefully facilitate broader adoption. There may also be room for a power-user mode that exposes the inner workings of the black-box algorithms that find and weigh associations between people, places, things… and lets users tweak those settings beyond the time-frame and focus adjustments that are currently provided.
Mentioned by Nova was the LockerProject in which personal data could potentially be maintained outside any one particular network or platform. This of course helps on the privacy side, but adds a layer of complexity (until someone else comes along and facilitates easy integration – which will no doubt chip some of the privacy value).
Personally, I’d love to see the ability to combine slices of personal activity from one or multiple platforms, with tools such as Bottlenose, so that I could analyze activity around slivers or Circles (in the case of Google+ usage) from various networks, in any analytical platform I choose.
Wed 4 Jan 2012
The December episode of the Semantic-Link podcast was a review of the past year, and a look forward. The framework for the discussion: the year’s notable attention grabbers, where we are now, and what’s coming.
Notable attention grabbers were: schema.org and its impact on who pays attention (i.e. the SEO space); linked data (and open data); increased policy-maker awareness of the need to pay attention to interoperability issues; commercial integration of technology (ontologies plus NLP capabilities) to leverage unstructured content; and of course Siri (a key example of such integration…).
In terms of where we are in the progression of the semantic technology realm, the general sentiment was that Siri represents the beginning of inserting UI into the process of leveraging semantics, by making the back-end effort invisible to the user. And looking forward, the feeling seems to be that we’ll see even further improved UI, stronger abilities in the analysis and use of unstructured content, greater integration and interoperability, data-driven user navigation, and Siri clones.
Give a listen, and be sure to express your opinion about a) topics that should be covered in the future, and b) the ways you would like to interact or participate in the discussion (see dark survey boxes).
Mon 12 Dec 2011
During the recording of the December podcast of the Semantic-Link (as of this writing, soon to be posted), I emphasized the general need to enable the general public to begin contributing and consuming linked data – without having to have much, if any, technical wherewithal. The real explosion of the Web itself came as a result of WYSIWYG authoring and the facilitation of posting content and comments by just about anyone with a web connection. Similarly, de-tech-ification of where the web is going is what will pave the way to getting there.
There are standards and tools now for the related underlying componentry, and what is needed is user-interface development that will usher in the explosion of linked-content generation and consumption (as web2.0 did before).
Toward this end, Andreas Blumauer writes about a new version of PoolParty’s WordPress plugin that extends an in-page Apture-like approach, to use and contribute to the LD ecosystem. This (coupled with other elements such as SKOSsy) is an example of the type of UI gateway that is needed in order to enable the general public to participate – with systems that generate and digest the linked-data-age information currency.
Thu 7 Jul 2011
While I’m still actually waiting to get “in”, I have a couple of comments regarding Google+, from outside the Circle.
From descriptions of this Google Social Networking effort (following Orkut, Wave and Buzz), key elements as of now are: Circles (think of them as groups of people within your network); Sparks (which are topics or areas of interest); Hangouts (video chat rooms); Huddles (group chat); and Instant Upload (automatic mobile photo syncing).
Considering the potential for integrating capabilities across product areas has always been most intriguing to me. By serving them up “together”, G+ makes it that much more likely for capabilities to be used together.
The second area of note is the way that Sparks re-frames the idea of Alerts, subtly shifting the nature of the resulting material from one-off emails or links — that you might dig into or forward on — to material that relates to particular areas of interest, which presumably parallel or align with the groupings of people you associate with around those topics. Twine had used the approach of integrating topic areas and social groupings for alerts – but these were groups that potential recipients would have to join. In G+, the “proximity” to the Circles aspect, and the fact that those Circles are unique to the individual and don’t require reciprocation, make for a compelling scenario on the “push” side of the equation. (At the same time, I see some potential issues in terms of “pull” and management by those on the receiving end.)
Hangouts and Huddles are by nature “social” already, for which you’ll presumably be able to seamlessly leverage Circles. As with topical material, Instant Upload brings your photo content automatically one step closer to where you are sharing. Success of all this as a social platform depends significantly on integration between the parts for seamless use by a user across capabilities – for example, adding someone who is participating on a video call or chat right into one or more of the Circles touched or represented by the other participants on that call or chat.
Leveraging other capabilities, such as the linguistic processing behind AdSense (and G+ may already have this in the works), it would not be a stretch for the content in your interactions to generate suggestions for Sparks which you could simply validate — places or people in photos, words in chats, terms that show up in content within Spark items. From there, it wouldn’t be far to being able to interact with your life through what I might call a “SparkMap” — reflecting relationships between terms within your areas of interest.
UPDATE: I’m now in, as of Friday afternoon, July 8. So now I’ll be playing, with more ideas to come…
Fri 17 Dec 2010
Image via Wikipedia
If this isn’t one of the coolest things you’ve ever seen…
You probably thought it was Jetsons material that someone could speak one language into a phone, and you could hear it in a different language on the other end. Pretty great stuff, translation on the fly. Now think about looking at something written in a different language, and being able to see it in another, without having to go look it up somewhere!
That’s exactly what the Word Lens app from Quest Visual does – which you’ve got to see to believe (if not understand)!
I don’t know if this is exactly right, but “bastante salvaje” if you ask me!
Fri 3 Dec 2010
Every now and again, I’m asked why one post or another of mine seems to be off on a tangent from “the usual”. In these cases, it seems that while I’ve stayed true to the theme of connecting ideas to create value, the exchange for that value isn’t as obvious or direct. To me, these are the times that are most interesting – involving translation of the currency, whether to or from knowledge, experience, or goods. It is that value translation that is at the heart of the Second Integral.
I’ll speculate now that this will likely prove to be one of those times.
While walking through Maplewood, NJ last weekend, I came upon a new store in place of one that had recently closed. I ventured in to see what it was about, and discovered it to be an art/craft boutique, with lots of hand crafted and nicely made/decorated items. A woman approached me and asked if I needed any help, and I asked if these were all things made by people locally. She was Cate Lazen, and she turns out to have been the founder of Arts Unbound, the organization that opened this “pop-up” store. She answered my question, saying “well, yes, and everything in the store was made by people dealing with a disability of one sort or another.”
With a part of my brain dedicated full time to triangulation, I found myself automatically thinking about the coalescence of purposes here. On the one hand, people with disabilities, engaging in artistic work as physical therapy, an expressive outlet, to perhaps generate income, while gaining pride, satisfaction, experience… all through their creative art.
Art as therapy is clearly valuable in itself – but what struck me as particularly interesting was its combination here with (at least) two other constituencies. According to Cate, the shop also employs people with disabilities, so it satisfies many of the same therapeutic purposes for the workers as it does for the artists. And of course, being a shop, it brings customers into the mix.
The simple combination of manufacturer + shopkeeper + consumer may not, on the surface, seem so interesting – it is just how a business works. But the dynamic in this case yields some additional benefits beyond the traditional.
Along with the direct purposes noted above for the artists and workers, and obviously the filling of customers’ needs, there are some more subtle byproducts as well – accentuated by the season’s spirit, thanks to the shop’s materialization just in time for the holidays.
Those who find their way to the shop will undoubtedly gain awareness of the overall purposes being served by the organization. Additionally, buying a gift from this store provides the giver the satisfaction of giving twice (at least) – to the recipient of the gift, to the artist, to the shop worker, and even the good feeling of having contributed in some small way. All this can even make you feel a little better about buying something for yourself.
Wed 22 Sep 2010
Image via Wikipedia
Wow. If you thought the Linking Open Data cloud had grown between September 2007 (right) and July of 2009 (below), take a look at this to see where we are NOW!
Image via Wikipedia
As Richard and Anja note on the site linked above: The cloud images show “some of the datasets that have been published in Linked Data format, by the Linking Open Data community project and other organisations.”
Where is this going? Andreas Blumauer of Semantic Web Company, in Vienna, put it well: “15 years ago we all were excited when we published HTML for the first time and it didn’t take a long time until all of us were “on the internet”. Now we are starting to publish data on the web. Based on semantic web technologies professional data management will be possible in distributed environments generating even more network effects than Web 1.0 and Web 2.0 ever did.”
Some might ask where the value in this cloud lies, or whether membership in it is just marketing. Talis’ Tom Heath outlines, in the latest issue of Nodalities Magazine, that without Linked Data, there couldn’t be a Semantic Web. Being linked and of use means having been made available following the Linked Data Principles. These include: things having unique identifiers (URIs); identifiers in the form of hypertext (HTTP) so they are standardly navigable (dereferenceable); destinations at which there is useful and standardly interpretable information (in RDF/XML) describing the thing; and descriptions which contain links to other things (read: HTTP URIs which also contain RDF/XML). Through explanation of the progression from FOAF files (where the “things” at these “URIs” are individual people, collectively representing the basis for semantic social networks), to working out standards around what constitutes an information vs non-information resource (via httpRange-14), Tom makes the all-important point that each step along the way is an essential building block toward where we are going.
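To make those principles concrete, here is a minimal sketch (standard library only) of what a Linked Data description looks like in practice: a FOAF document in which a person is identified by an HTTP URI, described in RDF/XML, and linked to another “thing”. The URIs and names are hypothetical, purely for illustration.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FOAF = "http://xmlns.com/foaf/0.1/"

# A hypothetical FOAF description of a person, following the principles above.
doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person rdf:about="http://example.org/people/alice#me">
    <foaf:name>Alice</foaf:name>
    <foaf:knows rdf:resource="http://example.org/people/bob#me"/>
  </foaf:Person>
</rdf:RDF>"""

root = ET.fromstring(doc)
person = root.find(f"{{{FOAF}}}Person")

# Principle 1 & 2: the thing has a unique, dereferenceable HTTP URI.
uri = person.get(f"{{{RDF}}}about")
# Principle 3: dereferencing yields useful, standardly interpretable RDF.
name = person.find(f"{{{FOAF}}}name").text
# Principle 4: the description links out to other things (more HTTP URIs).
links = [k.get(f"{{{RDF}}}resource") for k in person.findall(f"{{{FOAF}}}knows")]

print(uri, name, links)
```

Following the `foaf:knows` link to Bob’s URI (and the RDF there, and so on) is exactly the navigation that knits individual files into the cloud pictured above.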
And where (at this stage) is this? When Tony Shaw, of Semantic Universe, pointed to Linked Data in his recent article “Nine Ways the Semantic Web Will Change Marketing“, he was pointing to its impact on Marketing. But beyond that, we can take from his explanation the broader capabilities afforded by it: findability, pullability, mashability, mobility – essentially interoperability, as applicable to any activity, sector or function which involves information (read: data). Can you think of any that don’t?
Enabling data in this way (with all these building blocks) is “one” thing – moving control closer to the end user, and toward solutions and value. Making it “usable” is yet another. Every interaction is marketing (good or bad) for the resources involved in the interaction. The opportunity this points to is, through the leveraging of those capabilities, to improve the experience around deriving those solutions and achieving that value.
Mon 25 Jan 2010
Early in my career, when working as a data jockey with an economic consulting firm, I was on a team for a particular project where, I’ll always remember, we were referred to (in the New York Times) as “nitpicking zealots”. While I knew it was meant as a criticism, I took the reference then (as now, for that matter) as a compliment – emphasizing the attention-to-detail in our analysis.
Image via Wikipedia
For me, that focus has long been coupled with a heavy emphasis on usefulness (ok, and logic) as a driving factor in doing or creating anything. “Stick-in-the-mud” – maybe. “Drive you nuts” – sure, the family says this sometimes… But things just need to make sense.
So it shouldn’t surprise me (or anyone else) that, in my recent Experience Design mini-masters project, I had an overriding need for the product idea my team was to come up with to be of real use and value. The first task was to evaluate whether design principles had been followed in the creation of a particular product (the Roadmaster – a single-line scrolling text display for use on a car). Then we were to apply these design principles to come up with a different product/application making use of the technology for the context. We performed our review by considering the Roadmaster’s affordances (what the design suggested about its use); its mapping of controls to meaning or functionality; whether it provided feedback during use; its conceptual model and obviousness of purpose; and any forcing functions, limiters, or defaults. Having developed a “sense” of the product as it was, we embarked on the design effort, adding interviews/surveys to gather research on potential market need/desire.
Without getting into our conclusions about the Roadmaster product itself, of particular interest is where we ended up going with our design as a result of performing our own contextual inquiry. Some great ideas emerged among the different teams, for which each team prototyped their design (using Axure), performed usability testing, and presented results. Most of the teams designed mainly for social-media driven applications. With our own goals including not just usability, but the usefulness factor mentioned above, we discovered potential in re-purposing the device – to be directed not to other drivers, but to the driver of the vehicle in which it is installed. Specifically, to aid hearing impaired drivers – whether for receiving guidance from a driving instructor, instructions from a gps, or conversing with a passenger.
The design, which at one point we dubbed the “iDrive” (for reasons that will reveal themselves), involves mounting the scrolling text display out in front of and facing the driver, and integrating speech-to-text conversion, so that as words are spoken, the driver sees them displayed out in front of them – without having to turn to see the hands or lips of a person communicating with them, and without having to look away from the road to read directions on a GPS screen. In its simplest form, the design calls for an iPhone (or similar) application to perform the voice-to-text conversion, transmitting the resulting text to the display for the driver. An extension of this concept could incorporate detection and display of other sounds, such as a honk, and which direction it is coming from. Since the program, we’ve found that the required voice-to-text conversion capability does exist in a mobile app (e.g. for the iPhone) as we called for in the design, so with the combination of the technologies (display, conversion, mobile application, and GPS capability), serving the hearing-impaired-driver market in this way should be within reach.
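The pipeline described above is simple enough to sketch in a few lines. This is a rough illustration only – the `ScrollingDisplay` class and the recognizer function are hypothetical stand-ins for the real display hardware and a real speech-to-text service, which the design leaves to the mobile app.

```python
from typing import Callable, List


class ScrollingDisplay:
    """Stand-in for the single-line scrolling text unit facing the driver."""

    def __init__(self) -> None:
        self.shown: List[str] = []

    def show(self, text: str) -> None:
        # A real unit would scroll the text; here we just record it.
        self.shown.append(text)


def idrive_pipeline(audio_chunks, recognize: Callable[[bytes], str],
                    display: ScrollingDisplay) -> None:
    """Transcribe each audio chunk as it arrives and push it to the display,
    so the driver reads the words without looking away from the road."""
    for chunk in audio_chunks:
        text = recognize(chunk)
        if text:
            display.show(text)


# Usage with a fake recognizer (a real app would call a speech-to-text API):
fake_transcripts = {b"a": "Turn left", b"b": "in 200 feet"}
display = ScrollingDisplay()
idrive_pipeline([b"a", b"b"], lambda c: fake_transcripts.get(c, ""), display)
print(display.shown)  # ['Turn left', 'in 200 feet']
```

The honk-detection extension would slot in as just another recognizer feeding the same display, which is what makes the streaming structure appealing.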
A side-note to this post: The faculty of the UXD program, Dr. Marilyn Tremaine, Ronnie Battista, and Dr. Alan Milewski, helped to reveal for me that the formal processes of experience design, and particularly contextual inquiry, closely parallel what I’ve sought to achieve through the joining of the disciplines of Usability, Value Network Analysis (perspectival), and a dash of Semantic (extensible and interoperable) thinking.