

Perhaps I’m missing something.


With regard to the issues being argued around Tesla’s direct-to-consumer sales model and its legality – while the concerns of auto dealers (profitability, sales guidance, and service facilities for customers) have merit, the lower-maintenance architecture of all-electric vehicles does give rise to the need for new “thinking” about the models that manage and regulate the related activities and processes.  The point of this post, though, is NOT to argue those merits, but to suggest what seems a relatively straightforward solution for Tesla.

 

State laws at issue appear to prohibit sales through anything other than an independent intermediary. It is not unusual for companies to have exclusive contractual arrangements that also include many other stipulations.  In that regard, would it not be a reasonable solution for the “galleries” through which Tesla displays and facilitates remote purchase of its vehicles to be independent, with territorially exclusive ability to non-electronically “show” the product, in exchange for being bound to strict operating requirements?  Such a contract could also include payment by Tesla of the operating costs it would otherwise spend on the galleries had they been owned by Tesla – protected by the operating requirements (which may be subject to revision, yada yada yada).

 

Perhaps this is too naive an outsider perspective, but in essence, the facilities would be lightweight, lean, virtual art galleries with physical examples as well.  Disruption is sometimes mostly in mindset.  Looking at the requirements (on the Motor Vehicles pages of a few states – for example, NJ or VA), there are specific requirements, but they do not seem insurmountable (NJ requires TWO vehicles) relative to what may already be in place in an existing Tesla gallery.

Enhanced by Zemanta

This is absolutely genius.

You have to watch (and listen to) it twice in order to really appreciate it.

 


google search results (Photo credit: Sean MacEntee)


You say “Semoogle”, I say “Goomantics”. Two made-up words; one meaning. Map the terms to one another, and associations to one can be related to the other.  Do that within the house that Google built, and you can really traverse the knowledge graph (that was MetaWeb’s Freebase).
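The term-mapping idea can be sketched in a few lines. This is a toy illustration – not Freebase’s actual API or data model – in which two labels declared equivalent share a single bag of associations:

```python
# Toy sketch (invented, not Freebase's real API): map surface terms to a
# canonical concept, so facts attached via one label are reachable via another.
canonical = {}     # term -> canonical concept key
associations = {}  # canonical concept key -> set of associated facts

def declare_same(term_a, term_b):
    """Declare that two terms name the same concept."""
    concept = canonical.get(term_a) or canonical.get(term_b) or term_a
    canonical[term_a] = concept
    canonical[term_b] = concept

def associate(term, fact):
    """Attach a fact to the concept behind a term."""
    associations.setdefault(canonical.get(term, term), set()).add(fact)

def related(term):
    """Everything associated with the concept this term names."""
    return associations.get(canonical.get(term, term), set())

declare_same("Semoogle", "Goomantics")
associate("Semoogle", "semantic search")
print(related("Goomantics"))  # facts added via "Semoogle" are found
```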

Keyword matching is just part of what happens inside the Google machine – and more and more, sense is discerned from context – in aligning content (search results or ads) with the searcher’s intent (their meaning, in terms of identifiable entities and relationships).

Read more in a Mashable interview with Google’s Amit Singhal [1].

[1] http://mashable.com/2012/02/13/google-knowledge-graph-change-search/



In the same vein as Word Lens, which I wrote about here just over a year ago, Aurasma too looks through your lens and “augments reality”. What does that mean though? And why is it interesting? At the most basic end of augmented reality, think of those times in touristy areas where you’ve had someone take a picture of you sticking your face through a board, on the front side of which – surrounding the hole you’re looking through – is painted some well-built body that surely isn’t mistakable as yours.


Add some basic technology, and you have photo doctoring capability that puts a border (or mustache) on your photo, or converts it to a sepia or negative view. Geo-code and/or date-stamp the image file, and integrate with information on buildings, locations, people and/or events that occurred there, and you can display that information along with the image when the coordinates correspond, a la Wikitude. Load up that app, turn it on, and walk around pointing your phone at things, and see what it says about your surroundings. (MagicPlan is an iPhone App, from Sensopia, that is a practical application of related technology, enabling CAD for making floorplans!)
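A Wikitude-style lookup boils down to comparing the device’s coordinates against a database of annotated points of interest and surfacing anything within range. A minimal sketch (the POI coordinates and annotations here are invented for illustration):

```python
import math

# Hypothetical points of interest: (lat, lon) -> annotation to display.
POIS = {
    (40.6892, -74.0445): "Statue of Liberty, dedicated 1886",
    (40.7484, -73.9857): "Empire State Building, completed 1931",
}

def nearby_annotations(lat, lon, radius_km=0.5):
    """Return annotations for POIs within radius_km of the given point."""
    hits = []
    for (plat, plon), note in POIS.items():
        # Equirectangular distance approximation -- fine at city scale.
        dx = math.radians(plon - lon) * math.cos(math.radians((plat + lat) / 2))
        dy = math.radians(plat - lat)
        if 6371 * math.hypot(dx, dy) <= radius_km:  # Earth radius in km
            hits.append(note)
    return hits

print(nearby_annotations(40.6893, -74.0446))  # standing by the statue
```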

Aurasma adds to this by integrating image recognition (think word recognition, but visual, picking up defined items) and rendering associated audio, video, animation, what have you – much as scanning a QR code launches an associated action – but in this case, like Word Lens, doing it in place on the image. Take a look:

The reality is that, behind the scenes, with text, image or voice recognition, any action could be defined to be launched upon encountering triggers. Going further, imagine using multiple criteria or triggers to launch actions – tweaking the criteria for different scenarios. For example, a coffee company logo could spawn a video themed “start your morning with a cup” if the logo is seen early in the day, “get a mid-day boost” if it is seen in the afternoon, or “keep your mind sharp tonight” if it is seen in the evening (adding “to get your studying done” if the geocode also indicates that the location is on a college campus). The mantra of late has been “context is king”. That’s context.
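The coffee-logo scenario above can be sketched as a simple trigger table (the logo identifier and theme strings are made up to match the example):

```python
from datetime import datetime

def pick_variant(logo, when, on_campus=False):
    """Choose a themed video for a recognized logo, given time and place."""
    if logo != "coffee_co":  # hypothetical recognized-logo identifier
        return None
    if when.hour < 12:
        theme = "start your morning with a cup"
    elif when.hour < 18:
        theme = "get a mid-day boost"
    else:
        theme = "keep your mind sharp tonight"
        if on_campus:  # geocode says we're on a college campus
            theme += " to get your studying done"
    return theme

print(pick_variant("coffee_co", datetime(2012, 3, 1, 21, 0), on_campus=True))
# -> keep your mind sharp tonight to get your studying done
```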

Here’s another hands-on example of use:



The December episode of the Semantic-Link podcast was a review of the past year, and a look forward.  The framework for the discussion was:

  • What company, technology or issue caught your attention in 2011?
  • Are we “there” yet?
  • What are people watching for 2012?

Notable attention grabbers were: schema.org and its impact on who pays attention (i.e., the SEO space); linked data (and open data); increased policy-maker awareness of the need to pay attention to interoperability issues; commercial integration of technology (ontologies plus NLP capabilities) to leverage unstructured content; and of course Siri (a key example of such integration…).

In terms of where we are in the progression of the semantic technology realm, the general sentiment was that Siri represents the beginning of inserting UI into the process of leveraging semantics, by making the back-end effort invisible to the user.  Looking forward, the feeling seems to be that we’ll see further improved UI, stronger abilities in the analysis and use of unstructured content, greater integration and interoperability, data-driven user navigation, and Siri clones.

Give a listen, and be sure to express your opinion about a) topics that should be covered in the future, and b) the ways you would like to interact or participate in the discussion (see dark survey boxes).



We recently used Moo to get some really nice self-designed cards made, and were very happy with the quality.

Here’s a 10% discount you can use as a new customer, if you like – the equivalent of entering TPX88K as a promo code in the checkout process.


In the context of marketing and advertising, we’ve been hearing more over the last year or so about the semantic web and semantic technology.  What does Semantic Advertising really mean?  One interpretation – the one we’re not talking about here – is the selling of something by calling it semantic, which some have done in order to ride the momentum (which I call “meme-entum”) of the space, selling based on a loose association with the concept of “meaning” or “intent”.  So what are we talking about?

The Art of Online Advertising (Image by khawaja via Flickr)

VS

New, Improved *Semantic* Web! (Image by dullhunk via Flickr)

The strategy in the space has long been driven by word association, increasingly on an automated basis.  At one time, placement was done entirely manually – and automated keyword matching increasingly became the basis for new business models.  That is, after all, the basis of what we now think of as contextual advertising – aligning what the user is looking for with the other things they encounter on the page.
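Stripped to its core, that kind of contextual placement is just keyword overlap. A toy version (the ad inventory and keyword lists are invented for illustration):

```python
# Hypothetical ad inventory: brand -> target keywords.
ADS = {
    "Acme Airlines": {"flight", "travel", "vacation"},
    "BeanCo Coffee": {"coffee", "espresso", "morning"},
}

def pick_ad(page_text):
    """Pick the ad whose keywords overlap the page's words the most."""
    words = set(page_text.lower().split())
    best, best_score = None, 0
    for brand, keywords in ADS.items():
        score = len(keywords & words)
        if score > best_score:
            best, best_score = brand, score
    return best

print(pick_ad("book a cheap flight for your vacation"))  # Acme Airlines
```

Everything that follows in this post is, in one way or another, about what has to change in this loop for the result to deserve the label “semantic”.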

  • So to put it simply:  What is it that is new and different?  What is it about the inner workings of an advertising mechanism that makes an offering semantic or not?  What are the drivers and opportunities around these differences?  What is real?  These are some of the things we’re looking to learn about in detail at the panel discussion that I’ve been helping to organize for Internet Week in New York – the title of which is Semantic Advertising.  We’ll leave it to our moderators to dig into the nuts and bolts of the subject with the experts that have been gathered.  Going into the discussion, though, here are some of the questions I’m thinking about:

    • Since keyword matching is, well, keyword matching: what are the main differences between straight-up contextual advertising that uses keyword lookups and its semantic brethren?
    • Does the addition of keyword frequency – and therefore statistical analysis of the text – make matching on a ranked basis qualify as semantic?
    • Going beyond simply enhancing alignment predicated upon statistical assumptions, is it the further use of NLP – not just to extract concepts to be matched, but to determine intent from the terms used – that better tunes matches when words have multiple potential meanings?  Many of us have encountered unintentionally matched ads – which can be disastrous for a brand.  What can really be done now, and how?
    • Further on the NLP side, there is the potential for sentiment detection – so even when the correct meaning of a term is understood, determining whether its use is appropriate for matching would be based on the positive or negative connotation of its use (think here in terms of whether you would want your airline advertised next to a story about an aviation mishap, for example).
    • Going at the question from the “semantic-web” side, is embedding (and detection of) metadata on the page just a different flavor of Semantic Advertising – or should we be calling that Semantic Web Advertising instead?  This seems less prone to interpretation errors, but the approach relies upon metadata which is largely not yet there.  (Because of the markup related aspects of this point, I wanted to call this post “Mark(up)eting and (RDF)ertising”, but was talked out of doing so).
    • Is there a difference in strategy and/or scalability when considering whether a semantic approach is more viable when done within the search process, as opposed to on the content of the page being viewed?
    • If ads to be served are stored in a semantically compliant architecture, does that itself provide any advantages for the service provider?  And would doing so give rise to the service being referred to as Semantic Advertising?  Does this even enter into the equation at this point?
    • Would increases in the amount of embedded metadata shift the balance of systematically enhanced ad selection and presentation of sponsored content – from one web-interaction phase to another?
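One way to picture several of the questions above at once – entity matching, brand safety, and sentiment – is a sketch where the page carries explicit annotations rather than raw words. The entity types, ad inventory, and sentiment flag here are all invented for illustration; a real system would draw entities from embedded markup (RDFa/schema.org-style) or an NLP pipeline:

```python
# Hypothetical ad inventory: each ad targets entity types, and may opt out
# of negative-sentiment pages (brand safety).
ADS = [
    {"brand": "Acme Airlines", "targets": {"Airline"}, "avoid_negative": True},
    {"brand": "SafeDrive Insurance", "targets": {"Airline", "Accident"}, "avoid_negative": False},
]

def eligible_ads(page):
    """Match ads by entity overlap, honoring sentiment-based vetoes."""
    out = []
    for ad in ADS:
        if not ad["targets"] & page["entities"]:
            continue  # no entity match at all
        if ad["avoid_negative"] and page["sentiment"] == "negative":
            continue  # don't show this brand next to a negative story
        out.append(ad["brand"])
    return out

# A story about an aviation mishap: annotated entities plus a sentiment flag.
page = {"entities": {"Airline", "Accident"}, "sentiment": "negative"}
print(eligible_ads(page))  # the airline ad is vetoed; the insurer is not
```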

I’m looking forward to the panel – to open my mind regarding these and other factors that come into play – and to what elements and trends will be necessary for the viability of the various possible directions here.