The robots.txt file sits in the root of your website and gives search engines instructions about which pages should and should not be crawled. It is your opportunity to tell search engines which content to ignore, and most reputable search spiders follow this guidance. So, for pages or sections of your website that you don't want search engines to crawl, you can list them in your robots.txt file.
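For example, a minimal robots.txt might look like the sketch below. The paths are hypothetical placeholders, not recommendations for any particular site:

    User-agent: *
    Disallow: /admin/
    Disallow: /tmp/

    User-agent: Googlebot
    Disallow: /staging/

Each User-agent line names a crawler (or * for all crawlers), and the Disallow lines beneath it list the paths that crawler is asked to stay out of.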


Will my robots.txt file guarantee that no spiders can crawl those pages?

No. The robots.txt file is a suggestion, although most search engines respect it. Still, if certain pages or sections of your site must not be crawled for security reasons, blocking them in robots.txt is not enough: you need to put those sections behind a password-protected area, serve them over encrypted connections, and possibly take additional measures depending on how sensitive the content is.
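In fact, robots.txt can work against security: the file is publicly readable at the root of your domain (for example, https://www.example.com/robots.txt), so listing a sensitive path there advertises it to anyone who looks. A sketch of the problem, using a hypothetical path:

    User-agent: *
    Disallow: /secret-reports/

A misbehaving bot that ignores robots.txt now knows exactly where to look, which is why genuinely private content needs real access controls rather than a Disallow line.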

How does the robots.txt file affect SEO?

Robots.txt helps search engines focus on the pages of your site that are worth crawling and directs them away from sections that you do not want crawled. This makes it more likely that search engines will reach your important pages before leaving your site. However, blocking pages or sections of your site from being crawled will also stop the flow of PageRank through those pages, so blocked pages cannot pass link value on to the pages they link to.
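A common pattern, sketched below with hypothetical paths, is to block crawl-wasting sections such as internal search results or shopping carts while leaving the rest of the site open:

    User-agent: *
    Disallow: /search/
    Disallow: /cart/

Everything not matched by a Disallow rule remains crawlable, so a short, targeted list like this concentrates crawler attention on your real content.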

It is very important to understand how to use your robots.txt file properly. The best resource on the subject is Google's own documentation:

https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
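That documentation also covers Google's pattern-matching extensions, which go beyond simple path prefixes. As one illustration (the rule below assumes a crawler, like Googlebot, that supports the * and $ wildcards; not every bot does):

    User-agent: Googlebot
    Disallow: /*.pdf$

This asks Googlebot to skip any URL ending in .pdf: * matches any sequence of characters and $ anchors the match to the end of the URL.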



Now you know how to use your robots.txt file for

Precision Crawling Control!


Next

In the next lesson we will take a look at the proper use and structuring of your