While Google has the full right to build a product that serves its business interests, it’s intellectually dishonest to imply, via technical jargon, that it’s somehow Twitter’s fault that their links don’t show up in Google’s ‘social’ search.
I haven’t (yet) seen a good layperson explanation of what all this ‘rel=nofollow’ business is, so I’ll start with some background. Feel free to skip to the end if you’re an SEO expert 😉
So, what’s this rel=nofollow thing? (aka, what’s Google talking about?)
Typically, Google asks traditional publishers (like the New York Times) to mark links added to their site by, say, people commenting on their articles, as ‘untrustworthy’. The reasoning is that since anyone can post a comment on these sites, there’s the potential that all kinds of spammy links will get posted, too. Google (and other search engines like Bing) need to know which links on the NYT site are legitimate, and which ones are not.
This is achieved by what’s called ‘rel=nofollow’ – a directive that NYT attaches to certain links to tell Google that the link is potentially untrustworthy, and hey, Google, feel free to ignore it.
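To make that concrete, here’s roughly what this looks like in a page’s HTML (the URLs below are placeholders, not real examples from NYT):

```html
<!-- A link an editor added and the publisher vouches for: search engines
     may treat it as an endorsement -->
<a href="https://example.com/cited-source">a source we cite</a>

<!-- A link left by an anonymous commenter: rel="nofollow" tells search
     engines not to count it as an endorsement -->
<a href="https://example.com/miracle-pills" rel="nofollow">BUY CHEAP PILLS!!</a>
```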
But what about Twitter, a site that’s all about links that are shared? More than 25% of all tweets on Twitter contain a link. What is Twitter to do? Should it mark all these links as untrustworthy? Mark some of them as legitimate? How would it know which ones are good and which ones are bad?
Google: When in doubt, rel=nofollow (or beware the nuclear option)
In such cases, Google’s advice is clear – when in doubt, mark a link rel=nofollow. I say ‘advice’, but it really is a warning: if you, as a publisher, host a lot of links and don’t explicitly mark them all as ‘untrustworthy’, you’re treading on thin ice. If it turns out that all these links, or a large number of them, are spam, Google will assume that you are spammy – and can take you off their search engine.
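In practice, ‘when in doubt’ means a blanket default. Here’s a hypothetical, simplified sketch of what a tweet’s rendered HTML would look like under that policy (illustrative markup only, not Twitter’s actual source):

```html
<!-- Every user-shared link gets rel="nofollow" by default, because the
     publisher can't vouch for what millions of users paste into tweets -->
<p class="tweet">
  Check out this article!
  <a href="https://t.co/abc123" rel="nofollow">t.co/abc123</a>
</p>
```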
That’s right. In this example, if Twitter did not mark all user-contributed links (in all the tweets people are sharing) as untrustworthy – or if it did a poor job of figuring out which links are legit and marking only the rest as untrustworthy – Google could make Twitter disappear from search. Presumably, there’d be a lot of frantic phone calls between executives at the two companies, but it could end up that you’d search for ‘twitter’ on Google and the top result would be… a Wikipedia page on bird sounds.
Twitter: damned if you do, damned if you don’t!
So, let’s recap:
- If Twitter exposes all user-contributed links to Google, and tells Google they’re all legit, it is likely to end up getting booted from search
- If Twitter does not expose user-contributed links to Google, they won’t show up in Google search, and Google gets to claim it’s Twitter’s own damn fault!
Technically, Google is in the right. I was a product lead for Web Search at Yahoo! a few years ago, so I can say this with some authority. It really works well for everyone concerned if publishers (like NYT and Twitter) are conservative about marking links that appear on their websites as trusted (i.e., they mark most links as untrusted). If they didn’t do this, hordes of spammers would dump tons of links (via comments, for instance) onto their sites, just for Google to find them. This is not good for Google, but it’s not good for the publishers either, since these spammy links would degrade the quality of the experience they provide to their own readers.
Where I finally get to the intellectual dishonesty.
So what’s the solution? Well, someone has to figure out which links shared on a social networking site like Twitter, Facebook or Google+ are trustworthy, and which ones aren’t. Either the publisher (Twitter, Facebook or Google+) does this analysis and marks bad links appropriately, or the search engine (Google or Bing) figures it out.
Which brings me to why I think Google is not being entirely honest. They already have an algorithm in place to figure out which links shared on Google+ are legitimate, and which ones aren’t. Why can’t they use the same technology to figure out which links shared on Twitter are trustworthy? After all, Twitter isn’t disallowing Google from crawling its website – it’s just defaulting to calling these links untrustworthy, which is precisely the default Google asks for!
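To be clear about the distinction: if Twitter actually wanted to shut Google out, it could block indexing outright – for example with a page-level robots directive like the one sketched below (or a robots.txt rule). It does neither; it just applies the link-level nofollow default shown earlier.

```html
<!-- What actually blocking a search engine would look like (Twitter does
     NOT do this): the page is kept out of the index entirely, and none of
     its links are followed -->
<meta name="robots" content="noindex, nofollow">
```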
Calling a spade a spade – it’s not about technology, just business
The truth is, this is merely a legitimate business decision by Google. They are focusing their resources squarely on the sharing experience around Google+ – and why wouldn’t they, it’s their product after all. However, claiming that content shared on Twitter doesn’t show up on Google through some fault of Twitter’s is not intellectually honest. Worse, by throwing technical jargon at the public, they’re insinuating that there is a technical issue at work here – when there really isn’t.
Sorry Twitter, Google’s merely supporting their business interests. You’ll just have to find another way to get distribution!