
Duplicate Content, Redirects, and Canonical Tags

January 11th, 2010 | Information Architecture, SEO Copywriting

Search engine optimization rests on a handful of fundamentals, and one of the most important is organized, unique content. When your content is organized and unique, search engines can find it easily, and users can link to it if they find it interesting enough to mention. There are several ways to keep your content unique and your site organized, at least in the eyes of the search engines.

301 Permanent Redirects

Using permanent 301 redirects is the best practice for handling duplicate content. Google recommends using 301 redirects whenever possible, even if you have to call your web host to set up the redirects for you. Amber has also written a post on the benefits of 301 redirects. But when that option isn't available, what is the next best practice? And what if you know the content is a duplicate, but you'd like to keep the page(s) live?
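If your site runs on Apache and your host allows .htaccess files, a 301 redirect can often be set up in a few lines. The rules below are a hypothetical sketch; the paths and the example.com hostname are placeholders, not anything from this post.

```apache
# Hypothetical example: permanently redirect one duplicate URL
# to the preferred version of the page.
Redirect 301 /old-page.html http://www.example.com/new-page.html

# A common variant: consolidate the non-www hostname onto www
# so the same page isn't indexed under two addresses.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

Other servers (nginx, IIS) have their own equivalents, which is why your web host may need to set this up for you.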

Robots.txt

If you are unfamiliar with the robots.txt file, Pat wrote a great article that dissects each part of one. If you'd like to keep your duplicate content live, this is one option you have: you can essentially tell a search engine bot not to look at the page. This keeps the page live, but it also removes the page from a search engine's index. So what if you'd like the best of both worlds: a page that stays live and indexed by the search engines?
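As a quick illustration, here is a minimal robots.txt that blocks bots from a duplicate page; the file path is a made-up placeholder. The file lives at the root of your domain.

```
# Hypothetical robots.txt: ask all crawlers to skip a duplicate page.
User-agent: *
Disallow: /duplicate-page.html
```

Note that the Disallow rule applies to every bot because of the `User-agent: *` line; you could instead target a specific crawler by name.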

Canonical Link Tag

Your content may not be unique for any number of reasons, and there are just as many reasons why you may not be able to use 301 redirects or a robots.txt file. The important thing is that there is a third option.

Last February, Google and the other major search engines announced support for canonical tags and rules for how you can use them. Basically, if you can't use redirects to eliminate duplicate content, you can use the link tag rel="canonical" to mark duplicate content (e.g., <link rel="canonical" href="…../index.html"> in the <head> of the duplicate pages). There has since been additional clarification on this subject, stating that you can add the link tag to the <head> of the original page as well as the duplicate pages.
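In practice, the tag looks like this; the example.com URL is a placeholder for whatever your preferred, original version of the page is.

```html
<!-- Placed in the <head> of each duplicate page (and, per the later
     clarification, optionally the original page too). The href should
     point at the one version you want search engines to index. -->
<link rel="canonical" href="http://www.example.com/products/index.html">
```

Use straight quotes in the actual markup; curly quotes pasted from a word processor can keep the tag from being recognized.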

I should state that the first two options are always preferred. Don't get carried away creating duplicate content and canonical-tagging those pages; it's more or less a last resort, which is why it comes last in this article. Google has a nice summary of the cross-domain duplicate content options available to you. You have the tools to clean up your site's duplicate content, so go do it!
