SEO Philosophers: We've got a huge guy theory and a serial crusher theory

There are many ways to make progress in SEO. Almost all of them are flawed in some way, and today we’re going to take a closer look at one method that’s especially risky.
Paul Smecker (played by Willem Dafoe) sums it up best in The Boondock Saints: “So now we’ve got a huge guy theory and a serial crusher theory.” What I want to highlight today is the habit of trying to philosophize your way to an understanding of how Google works. It almost never works.
If you work in or near the SEO industry, you’ve probably encountered this phenomenon before, although you may have missed it. I think it’s easiest to describe it with an example:
Google releases a new update, and they say they prioritize user experience. You sit down and try to work out which websites the update will affect, and how. You know that Google owns Chrome and Google Analytics, so you conclude that Google will favor websites with a long time on site, a low bounce rate, or perhaps even a high conversion rate. Pretty quickly, you’ve decided that conversion rate is the most important signal in this update, because it’s the best indicator of how well the user perceives the page they landed on.
It’s a good theory, but the problem is that you’ve drawn completely the wrong conclusion
The reason the conclusion is almost always wrong is relatively simple, though perhaps hard to accept: Google’s algorithm is “dumber” than you think, or at least less human. The algorithm can’t understand what is good for the visitor; what it does is draw statistical conclusions about websites based on a set of largely predetermined rules.
Sometimes there’s a self-learning element along the lines of “websites with the following elements are typically appreciated by visitors and should therefore be rewarded,” but most of the time it’s relatively simple rules that govern things. There are, however, a lot of these rules, and that creates problems. In fact, Google itself doesn’t know how a change to the algorithm will affect the results. That’s why they run “side-by-side” tests after updates, where real humans blind-test the new algorithm against the old one and report which results were better.
You can’t draw conclusions like that
When Google announces that they’re rolling out an algorithm update that will affect things in a specific way, that is a good starting point for your own investigation. Let’s look at the big Penguin launch that shook up the entire SEO world; it’s a good example of this:
Google announced that they would roll out a filter targeting “spammy” links by identifying purchased links that unfairly influence search results. They had developed an algorithm that could recognize purchased links. Sure enough, the filter was rolled out, and search results were shaken to their core. A lot of conclusions were drawn about how Google determines that a link has been bought, and many panicked, so that’s perhaps forgivable. But when you take a closer look, you see that it was primarily anchor texts they focused on. The most significant factor in the filter was one-sided anchor texts: none of the many more speculative theories had any real foundation. The algorithm simply looked at how large a share of a site’s anchor texts used the same words.
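To make that concrete, here is a minimal sketch in Python of what a check for one-sided anchor texts could look like. This is purely illustrative: the metric, the 50% cutoff, and the function name are my own assumptions for the sake of the example, not anything Google has published about how Penguin actually works.

```python
from collections import Counter

def anchor_text_concentration(anchor_texts):
    """Return the share of links (0.0-1.0) whose anchor text matches
    the single most common anchor text in the link profile."""
    if not anchor_texts:
        return 0.0
    counts = Counter(text.strip().lower() for text in anchor_texts)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(anchor_texts)

# Hypothetical link profile: most links use the exact money keyword.
profile = ["cheap loans"] * 70 + ["example.com", "click here", "this article"] * 10

ratio = anchor_text_concentration(profile)

# The 0.5 threshold is an arbitrary illustration, not a known Google value.
if ratio > 0.5:
    print(f"Suspiciously one-sided anchor texts: {ratio:.0%} identical")
```

The point of the sketch is how little “understanding” is needed: a simple count of identical anchor texts is enough to flag the profile above as unnatural, which matches the observation that the filter reacted to skewed anchor-text distributions rather than to any deeper judgment of link quality.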
There’s a huge difference between how the algorithm should work and how it actually works.
Of course, there are exceptions
There are of course reasons to theorize as well, and sometimes you need to act on a theory even if it’s mostly an educated guess; it’s no fun sitting without revenue while you wait for better data. My point is that it’s important to know when to do this.

Magnus is one of the world's most prominent search marketing specialists and primarily works with management and strategy at his agency Brath AB.