
Why #SEOscience Is Always a Massive Fail

Bloggers love to marvel at the wonders of #SEOscience because the guardians of the Google make us believe it’s the key to earning links.

Actual Snake Oil!

The dirty secret is that anyone can be a scientist in this industry, and you don’t need any qualifications in computer science to stake your claim as an #SEOscience expert.

The SEO industry needs scientists because we must test everything Google says, as we can’t in good conscience take their Webmaster Guidelines at face value. You can find the secrets to top rankings within this impenetrable tome for webmasters, but only between the lines of every sentence.

Find those clues and the secrets to ranking will be yours. However, your ultimate test is actually as simple as trying to test anything at all. Once you do that, you’ll truly become an SEO rockstar/data prophet/marketing ninja/social media guru/you name it.

Or Not.

Could you tell I was being a bit facetious there? The reality is that #SEOscience is chock full of massive flaws. While the guidelines search engines have given us aren’t 100% clear, it’s completely unreasonable to expect them to be. Even Googlers can’t predict exactly how their own search engine is going to stack the deck of “search engine results pages” (SERPs). Google’s algorithm is as secret as the recipe for Coca-Cola or how Colonel Sanders makes his chicken so finger lickin’ good.

So-called SEO scientists obsess over the algorithm, when all it really does is deal out a pack of cards in the order of millions of mini-relevancy experiments per second. And because the algorithm is constantly trained on a live set of humans determining relevance for themselves, the truth is that Google is testing us, not the other way around. The billions of click-throughs to websites displayed on billions of different search results comprise the real data Google analyzes to determine what’s relevant to us. After all, we’re the human users Google serves and the constituent parts of the audiences that drive the lion’s share of Google’s revenue.

The way those clicks are even occurring has changed massively over the last 10 years, and yet #SEOscience is still talking about Google’s output rather than search users’ input.

How users actually use search engines is the wild card that #SEOscience refuses to acknowledge. #SEOscience experiments hunt for causes that the experiments themselves cannot measure; at best, it’s chaos theory. In practice, SEO is little more than an attempt to correlate your site’s visibility with the needs of search users. Is it even reasonable to think our studies can reveal anything more than a correlation? No, it’s not.
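
To see how thin that is, here’s a minimal sketch of the kind of analysis most #SEOscience boils down to: correlating one visible signal with ranking positions. The data below is invented for illustration, and even a perfect coefficient would say nothing about causation.

```python
# A minimal #SEOscience-style "study": correlate a visible signal
# (inbound links) with SERP positions. All data here is invented
# for illustration; a strong coefficient still proves no causation.

def ranks(values):
    """Rank values from 1 (smallest) upward. Assumes no ties, for brevity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        result[i] = rank
    return result

def spearman(x, y):
    """Spearman rank correlation coefficient (no-ties formula)."""
    n = len(x)
    d_squared = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical observations for a single query.
inbound_links = [120, 80, 45, 30, 12, 5]
serp_position = [1, 2, 3, 4, 5, 6]  # 1 = top of the results page

# Prints -1.00: more links, better position. A correlation, nothing more.
print(f"Spearman rho: {spearman(inbound_links, serp_position):.2f}")
```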

Moreover, should we really expect experiments to tell us more about what search engines deem relevant about a site, when the ultimate sign of relevance becomes ever more closely matched to the intent of the user? Of course not, given the simple truth that the won click is quite literally the deciding factor in the search algorithm equation.

The outcome is the result of a giant Turing test: a magnificent list-building engine that can play simultaneously against millions of people inputting billions of queries. Google’s dominance in search perpetuates a self-fulfilling prophecy: the algorithm can continue to perfect the relevance of its search results because it is constantly fed a trove of search and user data on which to train. As a result, Google search gets more human with time.

Google aims to democratically elect web pages in response to text-based queries, SEO be damned. That we can observe a correlation between the number of links pointing to a site and how well-recognized it is in the real world does not necessarily point to a cause-and-effect relationship, but it’s no coincidence either.

In fact, we know that links reflect the real world; it’s exactly the presumption upon which Google is built. The links pointing to a site are probably a stronger indicator of that site’s utility than a machine’s understanding of the text on its pages. We also know that once Google atomized the web according to the voting power of the link, people started using search engines more. Google organized the web in a way that made people better at using it. The initial experiment worked, and so the Grand Experiment began perpetuating itself.
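
That presumption is the intuition behind PageRank, and “the voting power of the link” can be sketched in a few lines. The toy graph, damping factor, and iteration count below are illustrative assumptions, nothing like Google’s production setup.

```python
# Minimal PageRank power iteration on a toy link graph.
# Illustrative only: the graph, damping factor, and iteration count
# are assumptions, not Google's actual configuration.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:  # each link is a "vote" that passes on a share of rank
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy web: each page "votes" for the pages it links to.
toy_web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "orphan": ["home"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Run it and the most linked-to page wins. That is the whole “democratic election” of web pages in miniature.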

Google is as guilty of relying on correlation as we SEOs are. That Google was counting on that correlation all along is evident from the fact that they built an entire concept of ‘authority’ around it, a concept whose worth is intuitively measured by its similarity to the real world.

And it really is conceivable that search could be organized differently in the future. Search engine indexes always need to be reorganized, and ranking updates are a testament to that need. The major ranking factors that change with every Google dance are often rooted in the new training sets Google introduces to its search data. And these sets are everywhere: Image, News, and Video results; Google Instant feedback on search queries; human editors in Panda; spam data in Penguin; and Freebase data in the Knowledge Graph. It’s no great mystery how these data sets influence search engines!

#SEOscience experiments are often “janky” and unbalanced by their very nature. If you’re trying to play against the dealer (rather than with the user), you’re probably going to draw junk conclusions from junk data. #SEOscience experiments that don’t analyze user behavior will nearly always tend towards pointlessness.

That is not to point fingers either. It really isn’t anyone’s fault. After all, how can the tiny corner of the internet where you’re visible tell you anything conclusive about Google’s internal workings? The behavior of search end users has changed drastically over the last 10 years. And the entire structure of the web has changed from one of a giant archive to one of an always-on network. If Google has to adapt to the changing behavior of web users, then it should be no surprise that you will too.

Google’s model of clustering topics by authority has gone from a link graph built off the static backbone of DMOZ and the Yahoo! Directory to the Knowledge Graph built off the dynamic ontological net of Freebase data (ontologies describe the relationships between concepts). That alone shows how much Google’s understanding of the web has evolved. And Google’s design priorities for SERPs have shifted from a reflection of structural data about the web to a topic-centric reflection of the average end user’s search intentions.
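
To picture what “ontological” means here, an ontology can be reduced to subject-predicate-object triples, the shape that Freebase-style data takes. The entities and relations below are invented examples, not real Freebase records.

```python
# A toy ontology as subject-predicate-object triples, in the spirit of
# Freebase-style data. Entities and relations are invented examples.

triples = [
    ("Coca-Cola", "is_a", "soft drink"),
    ("soft drink", "is_a", "beverage"),
    ("Coca-Cola", "manufactured_by", "The Coca-Cola Company"),
    ("The Coca-Cola Company", "headquartered_in", "Atlanta"),
]

def related(entity, facts):
    """Return every fact in which the entity appears as subject or object."""
    return [t for t in facts if entity in (t[0], t[2])]

# A knowledge-graph-style lookup: every fact connected to one entity.
for subj, pred, obj in related("Coca-Cola", triples):
    print(f"{subj} --{pred}--> {obj}")
```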

So, if the search index and associated rankings are a reflection of what works for the user, then what do we know about search engines and their ranking algorithms?

Well, not all that much if we’re being completely honest. Ultimately, relevance is in the eye of the beholder and at the end of a mouse click — it’s not in the algorithm.

What we do know is that following the pretty basic points made in the Google Guidelines does tend to yield improved rankings. Yes, some (such as the definition of quality) may be open to interpretation, but as Googlers say themselves, “Webmasters who spend their energies upholding the spirit of the basic principles will provide a much better user experience and subsequently enjoy better ranking than those who spend their time looking for loopholes they can exploit.”

In short, following the spirit of the Google Guidelines really will deliver specific and measurable results. You could test each point and you’d still have only wasted time understanding a computer when your goal should have been to understand people.

#SEOscience perpetuates stagnation. I implore SEOs to concentrate their efforts on the information seeker and relay those insights back to the community, rather than worrying about disavow files or the like. #SEOscience as it currently stands is inherently useless when you consider that the actual mission of an SEO is to engage the search engine user.

Thus, it’s time to reframe our #SEOscience problem as not one of determining the whims of Google, but one of pinpointing the whims of users.

Follow the spirit rather than the letter of the Google Webmaster Guidelines. Who cares what the algorithm is? If you’re not betting on #CreativeSEO solutions, you’re essentially betting against yourself. Without #CreativeSEO remaining a stable part of the larger community discussion, we risk ignoring the most important ingredient in the question of relevance.

Jonathan Allen is the President of Longneck & Thunderfoot, a brand publishing company that is part of the Columbia Startup Lab, an incubator program based in New York City. Formerly Director of Search Engine Watch, Jonathan has spoken at the largest digital marketing conferences in the world and provided search marketing industry commentary on breaking news for the BBC, the Boston Globe, and Deutsche Welle. In May 2012 Search Engine Watch won the Gold Azbee National Award for "Online Excellence, New or Relaunched Web Site" from the American Society of Business Press Editors. An early player in mobile social media, Jonathan is also co-founder of Moblog:tech, a pioneering tech company that created a smartphone photography platform that played a notable role in the emergence of citizen journalism. The community, Moblog, won the Experimental and Innovation Webby Award in 2009 for its collaborative mapping project with Shozu, and Moblog:tech’s build of Channel 4’s Big Art Mob won the MediaGuardian Community Engagement Award (MEGAS), the Royal Television Society’s On The Move Award, and three BAFTA nominations.
