Welcome to SEO Boy, the authority on search engine optimization -- how to articles, industry news, insider tips, and more! If you like what you see, you can receive free and daily updates via email or RSS.

Can Search Engines Crawl Dynamically Generated Content?

November 3rd, 2009 | Crawlability

I’ve been researching this topic for the past week and have found many articles out there that contradict each other on whether search engines can and do crawl dynamically generated content on a webpage.

The answer to the million-dollar question is yes. Technically, the search engines can crawl dynamic URLs and content that is dynamically generated. However, they rarely do. Did you think it was going to be that easy?

In a post from the Official Google Webmaster Central Blog back in 2008, they claim that yes, search engines can and will crawl dynamic URLs:

Myth: “Dynamic URLs cannot be crawled.”
Fact: We can crawl dynamic URLs and interpret the different parameters. We might have problems crawling and ranking your dynamic URLs if you try to make your urls look static and in the process hide parameters which offer the Googlebot valuable information. One recommendation is to avoid reformatting a dynamic URL to make it look static. It’s always advisable to use static content with static URLs as much as possible, but in cases where you decide to use dynamic content, you should give us the possibility to analyze your URL structure and not remove information by hiding parameters and making them look static.

I definitely agree with the thought behind using static pages with static URLs as much as possible if it's important to get those pages crawled and indexed. Another recommendation: if you do have dynamic URLs, include them (or as many as you can) in your XML and HTML sitemaps. This way the search engines won't have to stumble upon them; they can simply find them via your sitemap.
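As a minimal sketch of that sitemap recommendation, here is how you might generate a sitemaps.org-style XML sitemap that lists your dynamic URLs explicitly (the URLs below are hypothetical examples, and this is just one way to build the file):

```python
# Minimal sketch: build an XML sitemap that explicitly lists dynamic URLs,
# so search engines can find them without having to stumble upon them.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Return a sitemap XML string for the given list of page URLs."""
    # Root element uses the standard sitemaps.org namespace.
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        # ElementTree escapes characters like "&" in query strings for us.
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical dynamic URLs you would want crawled and indexed.
dynamic_urls = [
    "http://www.example.com/products?category=shoes&page=2",
    "http://www.example.com/search?q=blue+widgets",
]
print(build_sitemap(dynamic_urls))
```

You would save the output as `sitemap.xml` at the site root and submit it through your webmaster tools account, so the crawler gets a direct list of the dynamic pages rather than having to discover them through links.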

Another response I received from SEOmoz was this:

Unless the search engines have HTML links through which to get to a page, it is quite unlikely that they will spider that page. They have been known to execute some form functions in the past, but rarely do it, as it usually results in them creating loads of duplicate or garbage content for themselves. If you need search engines to find those pages, create a static version of the pages (or of as many of them as is sensible and does not result in duplicate content) and link to these pages without the necessity of a form.
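A quick hypothetical markup example of what SEOmoz is describing (the paths here are made up):

```html
<!-- Pages reachable only by submitting this form are unlikely to be
     crawled, because spiders rarely execute form functions. -->
<form action="/products" method="get">
  <select name="category">
    <option value="shoes">Shoes</option>
  </select>
  <input type="submit" value="Browse">
</form>

<!-- A plain HTML link to a static version of the same page gives
     spiders a direct path to the content. -->
<a href="/products/shoes">Browse shoes</a>
```

The form and the link can lead to the same content; the difference is that the plain `<a href>` link is something every crawler will follow.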

The point, I think, is that people should try not to "hide" important information from the search engines. You may not intend to hide the content that lives on these dynamic URLs, but to the search engines it is still harder to find and more difficult to crawl. Bottom line: if a page is that important, make it a static page with a static URL so the search engines can find that data more easily.
