Facebook Web Search – 1% of the cost for 80% the quality

If Facebook launched a ‘limited’ Web Search that showed search results only from the web pages shared on Facebook, instead of the entire web – I suspect it would be able to answer up to 80% of the queries Google gets, at 1% of the cost.

People who’ve worked on Web Search (as I did at Yahoo!) know how capital intensive this business is. Just to play and compete with Google, you have to be prepared to crawl and index the entire web – not once, but constantly.

And you can’t cheat – if you opt out of crawling a subset of documents on the web, what’s to say that the results for tomorrow’s most important query (‘qadaffi in iran’) aren’t hiding in those documents?

Also, by competing with Google on their terms, you are forced to be as encyclopaedic as they are. You can’t say – hey – we’ll only answer half the queries we get from our customers, but we’ll have damn fine answers to those queries.

Facebook, the soon-to-be evil empire (just you wait), has no such hang-ups. It doesn’t need to play Google’s game. Let Google be the Library of All Knowledge. All Facebook needs is a Web Search that serves most of its users’ queries – the rest of the queries it can send over to Bing, just like it does now. Except that by signing up to answer the first 80% of queries, it would own the intelligence around the searches and interactions, around the URLs that were shared within Facebook in the first place. This is incredibly useful information, of course, since it now tells Facebook not only what people are sharing, but also what they’re searching for.

And, since it only has to crawl, index and rank documents that were fed to it by its customers in the first place, the capital-intensive nature of Web Search doesn’t matter so much; it can do it on the cheap. What do you think?
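The model above – index only what users share, rank with the social signal, and punt the long tail to Bing – could be sketched roughly as follows. Everything here is hypothetical: `SharedPageIndex`, the share-count ranking, and the `fallback` hook are illustrative names, not any real Facebook or Bing API.

```python
from collections import defaultdict

class SharedPageIndex:
    """A 'limited' search index built only from user-shared pages."""

    def __init__(self):
        self.inverted = defaultdict(set)       # term -> set of URLs
        self.share_counts = defaultdict(int)   # URL -> times shared

    def add_shared_page(self, url, text):
        # Only pages users actually shared get crawled and indexed,
        # which is what keeps the crawl small and cheap.
        self.share_counts[url] += 1
        for term in text.lower().split():
            self.inverted[term].add(url)

    def search(self, query, fallback):
        terms = query.lower().split()
        # Intersect the posting lists for all query terms.
        postings = [self.inverted.get(t, set()) for t in terms]
        hits = set.intersection(*postings) if postings else set()
        if not hits:
            # The ~20% of queries we can't answer go to the external engine.
            return fallback(query)
        # Rank by share count -- the social signal as a cheap relevance proxy.
        return sorted(hits, key=lambda u: -self.share_counts[u])

idx = SharedPageIndex()
idx.add_shared_page("http://example.com/a", "qadaffi speech coverage")
idx.add_shared_page("http://example.com/a", "qadaffi again")
idx.add_shared_page("http://example.com/b", "cat pictures")

print(idx.search("qadaffi", fallback=lambda q: ["<sent to Bing>"]))
print(idx.search("obscure query", fallback=lambda q: ["<sent to Bing>"]))
```

The point of the sketch is the asymmetry: the index only ever grows with what people share, while the fallback absorbs everything else – so comprehensiveness is someone else’s capital expense.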

Amit

(Lovely pic from blog post by Square Pear Design)


2 thoughts on “Facebook Web Search – 1% of the cost for 80% the quality”

  1. Seems like it could work – the idea of using a distributed base of humans as crawlers is not new, but given that Facebook has 100x the scale that delicious/Y!Bookmarks had, it may close the comprehensiveness gap enough to be meaningful for 80% of searches.

  2. So, how does Facebook plan on dealing with sockpuppets who plant spam links into their networks and drive the available content?

    Seems like this might be fairly easy to game, kind of like when Google started to get hit.
