Archive for the 'Search Engines' Category

*Empirical Research on Consumers’ Perspective of Keyword Advertising (II)

Ott (Links&Law) was so kind as to point out a recent study by Franklyn/Hyman on consumer expectations and confusion when trademarks are used as search terms.

As a starting point the study correctly states that “there has been little independent empirical work on consumer goals and expectations when they use trademarks as search terms; on whether consumers are actually confused by search results; and on which entities are buying trademarks as keywords. Instead, judges have relied heavily on their own intuitions, based on little more than armchair empiricism, to resolve such matters.” Continue reading ‘*Empirical Research on Consumers’ Perspective of Keyword Advertising (II)’

*Google Slightly Changes Layout of Top-Ads – Further Blurring The Line Between Ads and Search Results?

Google announced on 3 February that Top Ads (the ads shown above the organic search results and placed on a coloured background) will be shown in a slightly different style in the future.

Ads on Google are shown in a layout that is different from the layout of the (organic) search results. The different layouts thus help users to (more easily) distinguish between them. The more similar the layout of the ads is to the layout of the search results, the more difficult it is for a user to correctly differentiate between the two.

Legal aspects:
From a legal point of view the differentiation between ads and search results is important not only with regard to the obligation to label commercial communication as such, but also from an unfair competition law point of view. And as numerous ‘AdWords’ cases have proven in the past, there is also a trademark law aspect to this issue. Continue reading ‘*Google Slightly Changes Layout of Top-Ads – Further Blurring The Line Between Ads and Search Results?’

*’Instant Preview’ – One More ‘Instant’ Function On Google

About a month ago Google launched the ‘Google Instant‘ feature, which rendered the ‘Search’ button all but obsolete. While ‘Google Instant’ might be highly interesting from an unfair competition law point of view (results are already shown while the user is still typing, so users may be especially vulnerable to distractions etc.), the implications of the new ‘Instant Previews‘ feature for currently ongoing TM disputes should also be considered.

The ‘Instant Previews’ feature enables users to see -after clicking on the magnifying glass beside the title of a search result- a preview (Google calls it an ‘image-based snapshot‘) of the result whilst remaining on the SERP (Search Engine Result Page). There is also a YouTube video available in the respective Google blog post.


Continue reading ‘*’Instant Preview’ – One More ‘Instant’ Function On Google’

*You Are What You Query For: The AOL ‘Data Valdez’ Case of Thelma Arnold

A short history lesson: In July 2006 AOL offered a database containing 20 million search queries by 680,000 AOL users for download on its website. Although the data was removed again shortly afterwards, it had already found its way onto the net, where it has remained ever since. This not only proved a PR disaster (the ‘Data Valdez‘ case) but also triggered an interesting legal dispute (Does v. AOL LLC, Case No. C06-5866 SBA (N.D. Cal., June 22, 2010)).

Hendrik Speck already mentioned this case three years ago at the SuMa-eV conference, but it took me until today (thank you, Links&Law) to get hold of the exact facts of the case.

Although the AOL users had been assigned random numbers to protect their identity, it took reporters of the New York Times less than a month to identify at least one user solely on the basis of that user’s search queries:
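The mechanism behind this re-identification is simple enough to sketch in a few lines of Python: all queries sharing one pseudonymous ID form a profile, and any quasi-identifiers inside the queries (place names, surnames) can then be matched against public records. The log entries below are invented for illustration (loosely modelled on the queries reported by the New York Times) and do not reflect AOL’s actual data format:

```python
from collections import defaultdict

# Hypothetical log lines: (pseudonymous_user_id, query).
# NOT AOL's actual data format -- invented for illustration.
log = [
    ("4417749", "landscapers in lilburn ga"),
    ("4417749", "homes sold in shadow lake subdivision gwinnett county"),
    ("4417749", "arnold"),
    ("8273645", "cheap flights vienna"),
]

def profile(log):
    """Group all queries under each pseudonymous user ID."""
    users = defaultdict(list)
    for uid, query in log:
        users[uid].append(query)
    return users

def quasi_identifiers(queries, hints):
    """Return the hint terms (places, surnames, ...) occurring in the queries."""
    return {hint for q in queries for hint in hints if hint in q}

users = profile(log)
hints = {"lilburn", "gwinnett", "arnold"}          # a reporter's small gazetteer
print(quasi_identifiers(users["4417749"], hints))  # all three hints co-occur under one ID
```

A phone book for Lilburn, GA plus the surname is all it then takes to shrink 680,000 “anonymous” users to one real person.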

Continue reading ‘*You Are What You Query For: The AOL ‘Data Valdez’ Case of Thelma Arnold’

*Bergspechte: Take A Look and Decide Yourself: Is This Ad Misleading?

In its Google France decision the ECJ ruled that the question of whether the function of indicating origin is adversely affected particularly depends on the manner in which the ad is presented. The Court furthermore stated that the function of indicating origin may be adversely affected if the ad is so vague that a normally informed and reasonably attentive internet user is unable to ascertain whether the goods or services advertised originate from the trademark proprietor or an undertaking economically connected to it or, on the contrary, from a third party. The same of course is true if the ad suggests an economic connection where there is none.

In the Bergspechte case the Austrian OGH will now have to decide whether the ad shown below meets these requirements. The ad was shown in response to searches for “Bergspechte“.

So, to shorten the waiting time until the judgement, please feel free to decide for yourself and make an “educated guess“. I’d furthermore appreciate it if you left a comment explaining why you decided the way you did.

Rough translation by the author:

“Ethiopia by bike
trip of a lifetime through the north with
lots of culture. Sixteen days from 20.10.”

Note: the ad shown is not necessarily identical to the original ad, as the ad described in the court’s file contains too many characters [sic!] to be shown as a Google ad.

*Dynamic Keyword Insertion – Stairway To Trademark Infringement?

Following the ECJ’s decision in Google France, advertisers should be very cautious about using AdWords functions which enable the automatic insertion of (unpredictable) terms into the text of an ad, as this could create confusion about an economic connection between the advertiser and e.g. the TM proprietor and thus infringe TM law.

Disclaimer: This is quite a specialised post. If you are not at least a bit familiar with keyword options etc., you might find it a bit difficult to follow!
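For readers unfamiliar with the feature: with Dynamic Keyword Insertion the advertiser writes a placeholder such as `{KeyWord:Default Text}` into the ad text, and Google replaces it at serving time with the keyword that triggered the ad, falling back to the default text if the result would exceed the character limit. A minimal sketch of this substitution logic (the 25-character headline limit reflects the AdWords limits at the time of writing; the ad texts and keywords are invented):

```python
import re

HEADLINE_LIMIT = 25  # AdWords headline limit at the time of writing

def insert_keyword(template, triggering_keyword, limit=HEADLINE_LIMIT):
    """Replace a {KeyWord:default} placeholder with the user's search term.
    Falls back to the default text if the result would exceed the limit."""
    match = re.search(r"\{KeyWord:([^}]*)\}", template)
    if not match:
        return template
    default = match.group(1)
    candidate = (template[:match.start()]
                 + triggering_keyword.title()   # mimic AdWords title-casing
                 + template[match.end():])
    if len(candidate) <= limit:
        return candidate
    return template[:match.start()] + default + template[match.end():]

# The advertiser never typed the trademark -- the searcher did:
print(insert_keyword("Buy {KeyWord:Hiking Tours}", "bergspechte"))
print(insert_keyword("Buy {KeyWord:Hiking Tours}", "bergspechte wanderreisen"))
```

The point for the TM analysis: which term ends up in the ad text is decided by the searcher’s query, not by the advertiser, which is exactly what makes the resulting ad text unpredictable.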

Continue reading ‘*Dynamic Keyword Insertion – Stairway To Trademark Infringement?’

*Recap: Deep Search II Conference, Vienna, 28.05.2010

Unfortunately I could only attend the afternoon sessions of the Deep Search II conference, which, as it turned out, was a real shame, as most of the presentations were outstanding.

As I am sure that the organizers will provide a transcript etc., I will just try to convey the look & feel of this conference. The content below thus doesn’t really have a structure but mostly resembles the ideas I scribbled into my notepad while listening. The statements just express my personal opinion; I do not intend to offend anybody or to make anybody feel depressed.


Elizabeth von Couvering held a brilliant presentation on Search Engine Bias and the Public Interest, in which she explained that the ranking of search engines is usually driven by the expectations of the users and that search engine results are thus always somewhat biased. She then went on to the issue of informed citizenship as a kind of precondition for democracy, and later in the discussion stressed that maybe the market itself should also establish (self-binding) rules and ethical standards to guarantee neutrality (or at least less bias, depending…). After an extremely well done historical overview of search engines and their business models (looks like I will have to rewrite the respective chapter in my dissertation; please see the draft chapter of her thesis), she closed with the remark that the issue of search engines is not about information retrieval but about sales (of advertising), and urged a discussion on the role of the public in this respect.

One aspect I’d also like to highlight is that von Couvering indicated (and later confirmed upon my request) that the quality of search engines (or the size of the index) correlates with the expected advertising revenues. Thus, if courts restrict search engines’ ability to generate advertising revenue (“not worthwhile“), this would (taking into account all the costs search engines incur to run their business) effectively have a negative effect on the quality of search results.

All in all I’d like to agree with von Couvering, but as I am convinced that people just ain’t no good, I wonder whether self-binding ethical standards can improve the whole situation. Unfortunately, at the same time, I don’t think laws will do the trick either… (I know this is a depressing thought, but what about this ‘Code‘ that is supposed to solve all the problems of the web?)


Matteo Pasquinelli spoke about the Surplus and the Immaterial: Political Notes on the ‘Industrial Revolution of Data’ and referred for most of his presentation to an article from the Economist. I don’t think I really got the point of his presentation. I agree with the assumption that the mass of data created is steadily increasing and that it might eventually exceed our storage and computing capabilities. Pasquinelli, however, ended up referring to the big service providers (Google, Facebook, etc.) as “capitalists” (or “the new landlords“) who allow indigent users to use (live inside) their services. In return for the right to use these services, the users generate data/information/content which the landlord then exploits for his own benefit.

Although I have to admit that the idea is very interesting, I think Pasquinelli effectively failed to explain his theory in more detail or to consider the fact that users are not (yet) dependent on these services but use them to create extra joy in their lives; the comparison with the poor worker (who is forced to live in the landlord’s house for shelter) is thus a bit far-fetched and not fully convincing.


dr mc schraefel gave a stunning talk about Building Knowledge: What’s Beyond Keyword Search? and, even being aware of the arrogant tone of this statement, I have to admit that her presentation was the first in quite some time that left me speechless: not only was the content of her presentation brilliant, but at the same time her slides were clear and appealing, and I’d almost go as far as saying that they had an artistic touch… (I reckon pretty much everything done on a Mac looks great, right? If you’re curious by now, you can find most of the ideas of her presentation on her blog as well.)

Her (jumpy, active and highly enthusiastic) presenting style pretty much reminded me of Burkhard Schafer, who used this style to teach (or at least try to teach) his sleep-deprived master’s students some basic principles of AI.

Trying to sum up all the aspects Schraefel mentioned (apart from the geek health tips, see the picture above): one point was that data wants to be, and should be, free, as only free data will enable serendipitous (unexpected) discoveries. Another point that caught my attention was the remark that in the future everything will be visible and that it makes no sense to draft laws trying to prevent this inevitable development; instead, the relevant institutions should focus on modifying existing norms, or creating new ones, that penalize the abuse of data.


Dr. Karl H. Müller, in his remarkable talk (From a Tiny Island of Survey Data to the Ocean of Transactional Data), critically questioned the quality of survey data and of graphical representations thereof.

Although everybody in the room would have agreed, even before his talk, with the statement that nobody should believe any study he/she hasn’t falsified him/herself, Müller provided the audience with alarming examples of how questionable the quality of survey data can be. E.g. he presented a case where a question about the survey participant’s general life-happiness had accidentally been used twice in a questionnaire and, surprisingly, led to the result that the person’s perception of his/her general happiness changed significantly within twenty minutes.

Finally there are three remarks I’d like to make:

1: Great location. I had already passed the Hotel Imperial Riding School Vienna a couple of times but had never found my way in. So anybody expecting geeky IT researchers conspiring in nerdy computer-lab facilities would be baffled to find the conference taking place in the luxurious halls of the hotel. Not to mention the buffet…

2: As Austria is still a dreadfully conservative country, the usual ratio of men to women in IT (law) is about three to one. Thus I was pleasantly surprised to see that the majority of the audience was actually female. 🙂

3: IT jurists, like all jurists I suppose, are kind of walking one-man companies, and jurists are thus usually extremely reluctant to state clearly in the course of an academic discussion that they are of a different opinion, as this would at the same time amount to criticism of their colleague’s skills and, far worse, of his business. Thus the average excitement level of an IT-law discussion in Austria is usually doomed to be about as high and breathtaking as that of speeches at UN conferences. Not so, however, at the Deep Search II conference.

Once the discussion had gained some speed it actually got quite exciting, and it was great to see that the people sitting there were not there just to make the name of their law firm appear on the agenda, but because they had very profound knowledge of their subject and, maybe even more importantly, were truly passionate about it. (Yes, I know…)

One aspect that might have further strengthened this impression was the fact that during the discussion some of the panelists were still wearing their head microphones, which broadcast their quiet sighs clearly audibly into the room. 😉

*How Much Information Does A Search Query Reveal About A User?

One search query on its own might actually not reveal too much information about a user. If you, however, keep logging the queries of one particular user, you might very soon be able to gain interesting insights. Early this year Google extended its ‘personalized search’ function to all users, no matter whether they were logged into any Google service or not (explanation found >>here<<).

I was first confronted with this topic at the SuMa-eV conference in Berlin in 2007. Hendrik Speck (an indeed humble man who even has a ‘My Quotes’ section on his website) mentioned search engine log files in his speech and talked in detail about how much information search engines can gain from analysing them. In the example provided by Speck, he talked -as far as I remember- about an overweight, sick old lady who had some kind of fixation on cats. I didn’t really like the example and dismissed the whole idea as ‘Google bashing‘.

My next encounter with this topic was when I was playing around with the Google Dashboard and was surprised to see how precisely Google kept track of what I did; it was even kind enough to tell me on which days I had been lazy, not doing much research for my blog or my thesis.


Then, last week, I stumbled across a ‘cute’ YouTube video on the German Basic Thinking blog, telling a romantic story just by showing the queries a user had typed in.


Cute, as I’ve said, right? But… let’s take the idea a bit further:

I have repeatedly reported at length on a service called TweetPsych, which allows users to create a kind of psychological profile of any Twitter feed by analysing the language used, the topics covered, the frequency of the posts, etc. The first worrying thing about this service is that it works quite well. The second worrying thing is the idea of extending this approach from a person’s Twitter feed, which he deliberately decided to publish freely on the internet, to a user’s search queries. Doing this, we would be able to analyse not only a person’s interests but also his mood and even his living habits.
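To make the worry concrete: even a crude keyword-to-category lookup over a query stream already yields a rough profile of interests, mood and habits. The categories and trigger words below are invented for illustration (TweetPsych’s actual method and dictionaries are of course more sophisticated and not reproduced here):

```python
# Invented category dictionaries -- purely illustrative.
CATEGORIES = {
    "health": {"headache", "doctor", "insomnia", "diet"},
    "mood":   {"lonely", "happy", "tired", "stressed"},
    "habits": {"late night", "delivery", "recipe"},
}

def profile_queries(queries):
    """Count, per category, how many queries contain one of its trigger words."""
    counts = {cat: 0 for cat in CATEGORIES}
    for q in queries:
        q = q.lower()
        for cat, words in CATEGORIES.items():
            if any(w in q for w in words):
                counts[cat] += 1
    return counts

queries = [
    "why can't I sleep insomnia",
    "pizza delivery 3am",
    "am I lonely quiz",
    "headache every morning doctor",
]
print(profile_queries(queries))
```

Four throwaway queries, and the counter already hints at a sleepless, lonely person with health worries; now imagine years of queries instead of four.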


Most of you will now say: yes, that’s what Google (Google Insights) does anyway. True. But the difference is that Google, at least that’s what they’ve communicated, only does this on a large, aggregated scale.

Interest in the search term 'Michael Jackson'... not so much apparently until his unexpected death


Doing the same with just a single user takes the whole thing to a completely new level. I am not saying this because I am just another privacy-prayer hoping to get ‘street cred‘ for his words but to rephrase an idea I’ve heard from THE Austrian privacy activist (Hans Gerhard Zeger). [I know this idea isn’t entirely new, but I do think its worth being repeated many, many times…]


If data/information about everybody is available, authorities will start searching the data for unusual patterns in order to investigate or even predict potentially malicious behaviour. So the second a user types in ‘uncommon’ queries, he/she would be under suspicion. And here comes the point: under such circumstances the whole principle of the “presumption of innocence” (ei incumbit probatio qui dicit, non qui negat) is actually turned on its head. The authority would no longer have to prove that the user has done anything wrong; instead, the user would be under the obligation to prove that he/she hasn’t.
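This reversal is easy to demonstrate in code: once all queries are logged, ‘unusual’ can be operationalised as nothing more than statistical rarity in the whole population, and rarity alone puts a user on a list before any wrongdoing is shown. A toy sketch (the queries and the threshold are invented; real systems would be far more elaborate, which hardly makes the inversion any better):

```python
from collections import Counter

def flag_unusual_users(log, rarity_threshold=1):
    """Flag every user who issued a query term that almost nobody else uses."""
    term_counts = Counter(term for _, query in log for term in query.split())
    flagged = set()
    for uid, query in log:
        if any(term_counts[term] <= rarity_threshold for term in query.split()):
            flagged.add(uid)
    return flagged

log = [
    ("alice", "weather vienna"),
    ("bob",   "weather vienna"),
    ("carol", "ammonium nitrate bulk price"),  # rare terms => automatically suspect
]
print(flag_unusual_users(log))  # {'carol'} -- flagged before any wrongdoing is shown
```

Note that the code never asks *why* carol searched for those terms (she might be a farmer, a chemist, or a journalist); suspicion follows mechanically from rarity, and it is then up to her to explain herself.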


I guess nobody feels comfortable about being tracked/logged. At the same time, we all appreciate the benefits of this technique. So, as always, a compromise has to be found, as stubborn search engine bashing will just blur the whole situation and allow competitors to exploit the confusion and find loopholes to put into practice things that big corporations are still struggling to get permission for. An example? While a 70-year-old Austrian even attacked a Google Street View car with a pickaxe last week, nobody seems to care that an Austrian company (Herold Straßentour) has already recorded most of Vienna’s inner city using pretty much the same technique. So shall Google be punished for at least openly speaking about its plans while others ‘just do it‘?

*How To Freak Out Users: Tell Them They Are Being Watched

A friend of mine who works in the travel industry was so kind as to point out a nice rumour to me, one which highlights how suspicious users become as soon as they have the feeling that they are being watched and that websites are tracking their surfing behaviour.

“Don’t book your flight on the same computer you used to search for a flight. The second time you access the airline’s website the prices will have increased as the website anticipates that you are coming back to book the flight.”

I don’t want to dig deeper into the issue (e.g. is Ryanair‘s cookie really there to make your next search more expensive?) or give you advice on that stuff, but I’d like to draw your attention to how emotional users get as soon as they have the feeling that they are being spied on. Thus, knowing that most users trust Google more than their local authorities (Fallows 2005, p. 27), I wonder which measures websites and internet businesses will have to take to get out of the “creepy behavioural marketing/advertising” corner 😉

*Facebook Asks Microsoft To Integrate The Web Into Their Users’ News Feeds

Facebook and Google are trying to integrate as much content as possible into their services to keep users on their sites.

“Hub and spoke” was the term previously used to describe the navigation patterns of users on the web: users started their “surf sessions” from mainly one point, went directly to another site, and then returned to their homepage afterwards. Then, as search engines got better, the days of “hyperlinks” came, and users indeed used links to move from one site to the next, as good search engines made it increasingly easy to place high-quality links.

What I see happening with Google, and even more so with Facebook, today is that users always follow the path with the lowest transaction costs; and in the same way the search window has doomed your browser’s address line, Google (iGoogle) and Facebook (Facebook’s News Feed) are nowadays working on providing all the information you want “just one mouse-click” away.

This phenomenon can be observed most clearly on Facebook, which is currently working hard on integrating web content right into the News Feeds of its users. Users are thus kept on the site much longer and only leave it (e.g. in the case of a commercial transaction) at a very late stage, giving Facebook a chance to supply them with advertisements all the way.

Integrating interesting content, however, isn’t as easy as it sounds, and thus Facebook relies on two sources:

  • User-generated content (tweets, status updates, etc.)
  • Online content generators (online newspapers, etc.)

As for the first, it seems much easier to supply a stream of tweets and updates than to constantly have a finger on the pulse of the web, almost predicting which information will be of interest to the community a second later. As the community will, however, inevitably demand further information and subsequently search for it, Microsoft’s Bing will be used to provide this information -still within Facebook-. As Google has shown, however, it is still very difficult for search engines to differentiate between a DoS attack and breaking news such as Michael Jackson’s demise.

WIRED: 'Google: Michael Jackson Search Spike Seemed Like an Attack'

As for the second, Facebook keeps pushing content generators to open FB subsidiaries (fan pages) in which they publish their content (along with ads which generate revenue for Facebook). This seems to work just fine… and you might have noticed that the last face-lift of Facebook has definitely also pushed the News Feed into the foreground.
