SEO And The Elusive Controlled Experiment

[Image: SEO Experiments (attrib: flickr/cybergibbons)]

Over on Search Engine Land, the perennial discussion of the benefits and disadvantages of using subdomains vs. directories has recently been reignited.
In his post, 5 Whopping Lies That Keep SEO At Status Quo, Ian Lurie presents as his 4th most egregious lie:

“We Can Put The Blog On A Subdomain. It’s Fine.”

The conversation on this issue has been swinging back and forth for a long time, with people at both extremes and quite a number in the middle who claim that it doesn’t matter at all for SEO, which is also Google’s stated position.

Michael Martinez, of the excellent SEO Theory blog, responds in the comments:

“You and [Rand] Fishkin are completely wrong on the subdomain issue. It’s a shame this kind of misinformation is still being shared on major SEO Websites like Search Engine Land.”

Lurie replies that he’s willing to be proven wrong, and will attempt to devise an experiment to test the claim. To which Martinez responds (with a bit more scorn than is probably deserved):

“For every example you think you can show of Google treating subdomains separately from folders I can show you examples of Google treating subdomains exactly like folders.

There is no justification for this kind of naivete in a professional SEO article. That you think you can devise some “test” off the cuff to resolve the issue shows that you weren’t basing your statement on legitimate research to begin with.”

All this gives us an excellent opportunity to highlight a problem that seems common in the SEO world: misunderstanding, or simply ignoring, the need for properly controlled experiments.

To be clear, we’re not putting words in Michael Martinez’s mouth here — who would dare? What follows is simply the result of a line of thought prompted by his comments. Nor are we assuming that Lurie isn’t perfectly well aware of all this.

Let’s think about how we would go about testing whether a subdomain or a directory is the better choice. For an experiment to be rigorous, we need an independent variable (the thing we are changing) and a dependent variable (the thing we are trying to measure).

Here the independent variable is whether we put our blog on a subdomain or in a directory. The thing we are trying to measure, the dependent variable, is our position in the SERPs.
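To make the measurement half of this concrete, here is a minimal sketch (in Python, with entirely invented rank data, and SciPy's Mann-Whitney U test chosen purely for illustration) of what the eventual comparison might look like, assuming we could somehow obtain clean daily positions for each variant. The rest of this post is about why obtaining such clean data is the hard part.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical daily SERP positions for the same target query, one series per
# variant. In a genuinely controlled test these two series would come from
# setups that differ only in where the blog lives, which is exactly the hard part.
ranks_directory = np.array([12, 11, 11, 10, 10, 9, 9, 9, 8, 8])       # example.com/blog
ranks_subdomain = np.array([14, 13, 13, 12, 12, 11, 11, 10, 10, 10])  # blog.example.com

stat, p_value = mannwhitneyu(ranks_directory, ranks_subdomain)
print(f"U = {stat}, p = {p_value:.3f}")

# A small p-value means nothing on its own: if the two series were produced
# under different conditions, the test is comparing confounds, not the treatment.
```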

To be reasonably certain that the independent variable is actually causing a change in the dependent variable, we need to make sure that everything else is exactly the same in the two tests, and this is where the problem arises: it’s very hard to make everything else the same.

For example, say we launch two sites that are totally identical apart from one having its blog at example.com/blog (site 1) and the other having its blog at blog.example.com (site 2). We then add material to the two blogs equally over time and monitor how each site ranks. Unfortunately, this is obviously not going to work: Google will see one of these sites as a duplicate of the other, and that will affect the ranking, so we can’t be sure whether the SERP position is due to our independent variable.

So we do the next best thing and make the sites identical except for the textual content, which we keep as similar as possible while targeting different keywords; then we monitor the SERPs for those keywords. But different sets of keywords have different competitive landscapes; even if we choose keywords that are roughly comparable in terms of competition, other contextual factors will affect ranking.
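As a toy illustration of that confound (all numbers here are invented and the model is deliberately crude), here is a quick simulation in which the subdomain vs. directory choice genuinely makes no difference, yet a naive comparison of average positions still shows a gap, simply because the two sites target keyword sets of slightly different difficulty:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: observed rank = keyword difficulty + treatment effect + noise.
# The treatment effect (directory vs. subdomain) is deliberately set to zero.
treatment_effect = 0.0

difficulty_site1 = rng.normal(20, 5, size=50)  # keyword set targeted by example.com/blog
difficulty_site2 = rng.normal(23, 5, size=50)  # slightly tougher set for blog.example.com

rank_site1 = difficulty_site1 + rng.normal(0, 2, size=50)
rank_site2 = difficulty_site2 + treatment_effect + rng.normal(0, 2, size=50)

# The averages differ by roughly three positions even though the treatment
# effect is zero; the keyword difficulty difference did all the work.
print(rank_site1.mean(), rank_site2.mean())
```

The gap in the printed averages comes entirely from the difficulty term, which is exactly what a poorly controlled test cannot distinguish from a real effect of the blog's location.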

The same issue arises if we test over time: site 1 affects the landscape into which site 2 is launched, making it difficult to control the experiment. This will also occur if we start with site 1 and then change it to site 2.

The problem in a nutshell: if our test sites are identical, Google will treat them differently; if our sites are not identical, then our results are not very reliable.

This is one of the reasons an ‘off the cuff’ experiment isn’t going to settle the subdomain vs. directory issue.

The difficulty of understanding and carrying out controlled experiments is probably also the reason — apart from straight-up fabrication — that the SEO world abounds with anecdotes and speculation. How many times have you heard the tall tale of the SEO who tweaked some minor aspect of his site and saw a surge in the site’s rankings? The SEO then goes on to be a fervent believer that this particular ‘optimization’ must always be carried out, and any SEO who disagrees is clearly not a Real SEO.

What he usually fails to mention, or remember, are the 18 other changes he made to the site at the same time, any of which, individually or in combination, could have been the reason for the improved ranking. He failed to control for other variables, making it impossible to establish a causal relationship between the change he credits and the SEO benefit.
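To put a number on why the causal claim collapses, here is a small, deliberately simplified illustration (in Python, with an invented 60-day window): if 19 changes all go live on the same day, the matrix recording which change was active on which day has rank 1 rather than 19, so no analysis, however clever, can separate their individual effects.

```python
import numpy as np

# 60 days of rank observations; all 19 changes go live on day 30.
days = np.arange(60)
went_live = (days >= 30).astype(float)

# One column per change. Because every change happened at the same moment,
# every column is identical, i.e. perfectly collinear.
design = np.column_stack([went_live] * 19)

# Prints 1, not 19: the data cannot tell the 19 changes apart, so no individual
# effect is identifiable, however large the ranking jump was.
print(np.linalg.matrix_rank(design))
```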

There are, of course, ways around this problem, one that science faces daily. In a future article we might take a look at statistical testing, or at how Bayesian inference can help SEOs. Until then, feel free to let us know what you think in the comments below.