The Search of Doom

“Google was designed to play the role of a passive observer of the internet: web content was created for people, not specific Google queries, and Google would look around, take inventory of what was available, and give it to people who asked.” –Marco.org

I think we need a bit of disambiguation. 

Indeed, Google was built as a passive observer, but its current purposes are no longer sufficiently well served by this definition alone. 

People usually search for two things: what they know exists and what they don’t know anything about.

What Marco Arment is saying only fits the first category: for what you know exists, a search engine should behave in a passive, neutral way. But when you’re looking for something you don’t know exists, that’s when the magic starts.

Let’s have two examples:

1. Querying “Who wrote <Catcher in the Rye>?” will return “Salinger”; there is only one correct result, and index neutrality is the only option for a search engine in this case. You knew an author had to exist; you just didn’t know his name. That’s a closed question, i.e. you know what you don’t know. Combining the question with the answer yields a logical proposition that can bear a truth value: “Salinger wrote <Catcher in the Rye>” is true.

2. Querying “What is the best mobile OS?” can return many different kinds of results, none of them dependent on what was in your mind when you asked. There is no single correct result here, only results relative to many factors: frame of reference, time, popularity, circumstance, personality, moral values, intelligence and so on. That’s an open question, i.e. you don’t know what you don’t know. If you combine the question with any of the answers to form a statement, you get “X is the best mobile OS”; this proposition is never a truthbearer, because “good” or “best” is never “true” or “false”. That’s a category mistake.
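The distinction above can be sketched in a few lines of Python. The lookup table and function name here are mine, purely illustrative: a closed question maps to exactly one verifiable fact, while an open question has no fact to return at all.

```python
# Hypothetical fact table: closed questions have exactly one correct answer.
FACTS = {
    "who wrote the catcher in the rye": "J. D. Salinger",
}

def answer(query: str):
    """Return the single correct answer for a closed question,
    or None when the question is open and no fact exists to return."""
    return FACTS.get(query.lower().rstrip("?"))

answer("Who wrote The Catcher in the Rye?")  # a truth-apt result
answer("What is the best mobile OS?")        # None: nothing truth-apt exists
```

The point of the sketch is that for the second query there is no entry any honest fact table could contain; whatever an engine returns there is ranking, not truth.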

Neither Google nor, hopefully, any other search engine should be able to return anything but correct results to type 1 queries.

An engine that answers only the first type of query is limited but preferable, while an engine that tries, or pretends, to answer the second type is doomed from the beginning: there is no way for it to place itself outside human intentions. Not only because humans are willing to intervene, but because a machine will never be able to offer answers of this type, which essentially depend on human judgement, emotions and values. (Quantified emotion does not exist and never should: human emotions and values are not in the category of quantifiable things.)

Questions like “Which one is good and which is bad?” should never be answered by a machine, and consequently people should never trust any such answer coming from a machine.

You’d say that nobody uses a search engine to ask emotional or moral questions. Of course nobody will explicitly type emotional questions into the search box, but almost everybody tends to overlook the limits of what a machine is able to know; it’s in our nature to hope, believe, want and so on. If I search for “the most beautiful woman”, I expect to see images that please my eye, so I have an emotional need I’m looking to fulfill. If I look for “the best 2010 movie”, I expect to find valuations of human judgement.

You only need to type “mobile OS” and you are already presented with results that cannot bear truth values, and that’s enough.

Everything comes down to what we are made to believe a search engine can do. Of course, Google is magic, but Google is no exception to the rules of what a search engine should or shouldn’t answer. If we are made to believe Google can surpass these limitations, then we’re in great danger.

By the way, who would you say you’d be angry at if you googled “hot Latino girls” and got “Mother Teresa” instead? At Google, for not showing you the relevant answers? Well, what if that is what’s relevant for Google from a statistical and ranking point of view?

Bending the results to please you is the first sign of human intervention.