Our Robots.txt checker finds the robots.txt file of a given website. The tool searches for the file in the root of the website and, if it is not found, notifies the user.
Enter the link to your website, preferably the domain link, and hit the "Analyse Robots" button. When the tool has finished searching, it returns the results and, if the robots.txt file was found, its content.
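The lookup the tool performs can be sketched in a few lines of Python. This is only an illustration of the general approach, not the checker's actual code, and the URLs below are placeholders:

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.request import urlopen
from urllib.error import URLError, HTTPError


def robots_url(site_url: str) -> str:
    """Build the robots.txt URL. The file must live at the root of the
    host, so any path or query in the submitted link is stripped."""
    parts = urlsplit(site_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))


def fetch_robots(site_url: str):
    """Return the robots.txt content, or None if the site has no such file
    (or the request fails)."""
    try:
        with urlopen(robots_url(site_url), timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (HTTPError, URLError):
        return None
```

For example, `robots_url("https://example.com/blog/post")` yields `https://example.com/robots.txt`, which is why submitting the plain domain link works best.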
Robots.txt is a basic text file that search engine crawlers read to learn which pages they should crawl and which to stay away from.
It is located in the root directory of the website and contains rules for search engine crawlers. These rules limit the crawlers' range of action to specific pages of the site, generally the pages that are indexed or submitted for indexing, which helps the site get crawled efficiently.
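For illustration, a typical robots.txt might look like this (the paths here are made up for the example):

```
User-agent: *
Disallow: /admin/
Disallow: /search
Allow: /
```

The `User-agent: *` line applies the rules to all crawlers, the `Disallow` lines keep them away from the listed paths, and `Allow: /` permits everything else.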
The file may also contain the path to sitemap.xml, a simple XML file that lists the pages the site owner wants crawled and considered for indexing.
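In robots.txt the sitemap is declared with a `Sitemap` directive. A minimal sketch, assuming the sitemap lives at the site root of a placeholder domain:

```
Sitemap: https://example.com/sitemap.xml
```

And the sitemap file itself is a plain list of URLs in the sitemaps.org format, for example:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/about</loc>
  </url>
</urlset>
```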