1st generation search (1994-1998): Searching for a specific website? Go to a directory, e.g. Yahoo, browse by category and then choose the website you are looking for.
2nd generation search (1998- …): Searching for a piece of information? Go to a search engine, e.g. Google, enter your query, browse through the results and choose your site, blog etc.
3rd generation search (2009- …): Hear a rumour on the web or in a tweet, e.g. Michael Jackson’s death, the plane crash on the Hudson River, the riots in Iran, Tiger Woods’ latest status update on Facebook or the AG’s opinion in the French AdWords cases. Go to a search engine like Google or Bing? BAD IDEA. Why?
According to Clive Thompson of WIRED, Google has, for almost a decade, organized and ranked the Web by figuring out who has authority. Google measures which sites have the most links pointing to them (crucial votes of confidence) and checks to see whether a site grew to prominence slowly and organically, which tends to be a marker of quality. If a site amasses a zillion links overnight, it is almost certainly assumed to be a spam website.
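The "slow and organic vs. overnight" distinction Thompson describes could be sketched as a toy heuristic like the one below. This is purely illustrative: the function name, the 50x spike factor and the window size are my own invented assumptions, not anything Google has published.

```python
def looks_like_link_spam(daily_new_links, window=7, spike_factor=50):
    """Toy heuristic: flag a site whose inbound-link count jumps
    suddenly instead of growing slowly and organically.

    daily_new_links: new inbound links counted per day, oldest first.
    The thresholds are invented for illustration only.
    """
    if len(daily_new_links) <= window:
        return False  # not enough history to judge
    # Average daily growth over the site's history so far
    baseline = sum(daily_new_links[:-1]) / (len(daily_new_links) - 1)
    latest = daily_new_links[-1]
    # "A zillion links overnight" relative to the baseline -> suspicious
    return baseline > 0 and latest > spike_factor * baseline

# Slow, organic growth is left alone; an overnight explosion is flagged
print(looks_like_link_spam([3, 4, 5, 5, 6, 7, 8, 9]))     # False
print(looks_like_link_spam([3, 4, 5, 5, 6, 7, 8, 5000]))  # True
```

The point of the sketch is only the shape of the signal: authority that accumulates gradually is trusted, a sudden spike is not.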
So, for example, when Michael Jackson died on June 25, millions of people surfed to Google News to find the latest information about what had happened. The spike in traffic was so massive that Google suspected a malware attack and began blocking anyone searching for “Michael Jackson” (please see graph below).
As there seems to be a “need” which Google can’t satisfy at the moment, a new generation of search engines like Tweetmeme, OneRiot, Topsy, Scoopler and Collecta is trying to redefine what makes a piece of information important.
– Some of these sites offer a Digg-like indexed front page that displays hot topics, while others just include a simple search field. But most of them rely heavily on Twitter. When a burst of tweets citing a particular subject or URL emerges, it’s a “signaling event,” as Rishab Ghosh of Topsy puts it. To make sure they’re not just getting hoodwinked by spammers, these new search engines employ some clever tricks, like crawling tweeted URLs and discarding those that land on sites containing spamlike language. Most disregard Twitter users who behave like spambots—for example, ones that follow thousands of people but have very few followers themselves.
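The spambot filter described above (accounts that follow thousands of people but have very few followers themselves) could be sketched roughly like this. The thresholds and the function name are my own illustrative assumptions, not the actual rules of Topsy or any other engine:

```python
def looks_like_spambot(following: int, followers: int,
                       min_following: int = 1000,
                       max_ratio: float = 0.05) -> bool:
    """Toy filter: an account that follows thousands of people but
    has very few followers itself is probably a bot, so its tweets
    would be disregarded as ranking signals.
    Thresholds are invented for illustration only."""
    if following < min_following:
        return False  # small accounts get the benefit of the doubt
    # Very lopsided follower/following ratio -> likely spambot
    return followers / following < max_ratio

print(looks_like_spambot(following=5000, followers=40))   # True
print(looks_like_spambot(following=300, followers=250))   # False
```

A real engine would combine such a ratio with other signals (e.g. crawling tweeted URLs for spam-like language, as the article notes), but the ratio alone captures the basic asymmetry being exploited.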
– Others, such as OneRiot, offer a toolbar that lets users flag an interesting post immediately. Collecta actively imports blog posts and tweets so they appear in search results less than a second after they go live, rather than the hours it can take regular search engines to catalog the same information.
The result is something curiously different from regular searching: If you hunt for “Michael Jackson” on a traditional engine like Google.com or Bing, the vast majority of the links remain the same for hours. Authority changes slowly on the “old” Web. But real-time search engines deliver different, updated results almost every time. The creators of these new engines argue that their goal isn’t to answer questions à la Google, but to organize experience into a keyhole glimpse of what the world is doing at this very moment. “It’s exactly what your friends are going to be talking about when you get to the bar tonight.”
Still, what makes this development so interesting for me is that when speaking about the likelihood of confusion while searching, people tend to think of the old-style Google SERP (Search Engine Results Page), for which e.g. French courts believe that users searching for a certain product, company or trademark assume that all the (sponsored) ads on the SERP will somehow be related to the company or trademark owner. [TGI Nanterre, 02.03.2006, Hotels Méridien c/ Google France, RLDI 2006/15 Nr. 438, 28 [Costes].] So if users are already confused when facing a clear and well-structured SERP such as Bing’s or Google’s, what will happen when they try to read the SERP of a real-time search engine? Maybe soon someone will come up with the argument of “initial & perpetual confusion“.
I am glad to see that the AG and the German BGH have adopted a far more modern approach, taking into account not only the reality of how search engines finance, and have always financed, themselves, but also that surfing the web REQUIRES a minimum level of experience and skill, and that the average user is not a 65-year-old civil servant who received a computer from his grandchildren as a retirement present. [I thought at this point about making some pretty harsh remarks about old judges at Supreme Courts … but I am positive that such “novel” issues are mainly dealt with by their much younger clerks.]
And let’s face it, the average search skills of an ordinary user are very, very poor. But why does everybody assume the web has to be as easy to handle as a … toaster? The pace at which the web is moving is steadily increasing, and thus, although new or more sophisticated search engines (e.g. Bing displaying tweets & Facebook status messages) will help to handle the ever-increasing stream of information, users too will have to pick up the pace a little if they want to be able to follow.
[I guess that was a pretty pointless statement, but after typing up all the questionable decisions French courts produce all day, one really gets a bit annoyed ;) ]