Duplicate content is a problem for search engines, and that makes it a problem for SEOs. Google strongly dislikes including the same content more than once in its search results, and when content exists in more than one place on a site, its algorithms struggle to determine which version to include or exclude, and where to assign link juice and authority. That confusion can cost a site traffic and SERP rankings. The rel="canonical" link element is intended to help search engines by telling them which page is "the page" for a particular piece of content.
From a crawler's perspective, duplicate content can arise in two main ways: a site can publish the same content on two different pages, or several distinct URLs can point to the same content. Both cases look identical to the crawlers.
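In practice the annotation is a single link element in the head of each duplicate page, pointing at the preferred version. A minimal illustration, using a hypothetical example.com URL:

```html
<!-- Placed in the <head> of every duplicate or parameterized version
     of the page, pointing search engines at the one preferred URL -->
<link rel="canonical" href="https://www.example.com/widgets" />
```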
Keeping abreast of everything that’s happening in vibrant and dynamic industries like SEO and inbound marketing is no easy task. So, this week, we’re pleased to offer our visitors the first of our monthly roundups of what you need to be reading in SEO, social media, hosting and web design.
We hope you learn as much from reading it as we did when we compiled it.
SEO and Inbound Marketing
Anyone who hasn’t been living under a rock in recent days will be aware that GoDaddy recently suffered an outage that left millions of its clients’ websites inaccessible. While this sort of downtime obviously affects a site’s traffic, and therefore its revenue, many people are asking whether it will also affect their SEO.
The short answer to that question is no, probably not. Google is generally not happy when it comes knocking and finds that a site in its index has apparently disappeared, but it is also aware that problems occur, and so long as they don’t occur regularly, a site’s SERP ranking is unlikely to be degraded. Reliability is important, and a site that is regularly down will take a ranking hit, but a short period of unavailability is an anomaly, not a trend that is useful as a signal.
With that specific case dealt with, it’s worth considering the more general question of downtime and how it should be handled. It’s occasionally necessary to take a site down deliberately, for example to make large-scale changes to the software or the server. How should we handle this?
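Google’s webmaster guidance has long recommended keeping the server answering during planned maintenance, but with an HTTP 503 (Service Unavailable) status and a Retry-After header, so crawlers know the outage is deliberate and temporary rather than a dead site. A minimal sketch using Python’s standard library (the port and retry interval here are arbitrary illustrations, not recommendations):

```python
# Minimal maintenance-mode sketch: answer every request with HTTP 503
# plus a Retry-After header, so crawlers treat the downtime as
# temporary instead of de-indexing the pages.
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)                  # temporary, not 404/410
        self.send_header("Retry-After", "3600")  # hint: retry in an hour
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Down for maintenance, back soon.</h1>")

# To serve during the maintenance window (blocks until stopped):
# HTTPServer(("", 8000), MaintenanceHandler).serve_forever()
```

In a real deployment you would configure this at the web server or load balancer rather than running a separate process, but the essential point is the same: a 503 with Retry-After, not a 404 or a connection refusal.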
There’s been a fair bit of chatter in recent months about the value of nofollow links for SEO. It’s been claimed that when assessing a site’s backlink profile, Google takes into account nofollow links as a signal of naturalness. Whether that’s true or not, it’s useful for any SEO to understand the nature of nofollow links, and especially whether nofollow backlinks are really worth pursuing.
On occasion, sites would rather that Google did not follow particular links or allow them to affect the target’s PageRank. There are various reasons for this, but one of the main ones is to discourage spam. For example, a common SEO tactic for gaining backlinks was to put thousands of links into the comment sections of blogs and on forums. Many blogs now mark all links in comments “nofollow” as a way of discouraging such spam: because nofollow links are ignored for PageRank purposes, spammers have no incentive to put their links on these pages. Sites like Wikipedia mark all their outgoing links nofollow for exactly this reason; it discourages people from using Wikipedia’s authority to gain PageRank, as Matt Cutts discusses in the video below.
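The annotation itself is just a rel attribute on the anchor. A minimal illustration, with a hypothetical comment link:

```html
<!-- A blog comment link the site does not want to pass PageRank to -->
<a href="https://example.com/some-commenters-site" rel="nofollow">my site</a>
```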