Why search engines do not visit certain pages
Do you know why certain pages of a website sometimes never show up in search results? This is often because the site uses a file called robots.txt. It lets you list the pages you do not want search engine robots to visit and tells them which areas of the site should not be processed. In short, it controls how search engines access your website and which pages are allowed or disallowed for crawling.
What Is Robots.txt?
Robots.txt is a plain text file you place on your site to tell search robots which pages you would like them not to visit. Well-behaved search engines usually follow what they are asked not to do. However, if you have really sensitive data, it is too naive to rely on robots.txt to protect it from being indexed and displayed in search results, because the file is only a request, not an enforcement mechanism.
This file resides in the root directory of your web space, which is either a domain or a subdomain; for example, "/web/user/htdocs/example.com/robots.txt" resolves to http://example.com/robots.txt.
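For example, each host needs its own file at its own root (blog.example.com here is just a hypothetical subdomain used for illustration):
http://example.com/robots.txt - applies to example.com
http://blog.example.com/robots.txt - applies to blog.example.com
A robots.txt placed in a subdirectory, such as http://example.com/folder/robots.txt, is ignored by crawlers.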
How is it used?
Currently, there are only three main statements you can use in robots.txt, and each group of rules is introduced by a User-agent line that names the crawler the rules apply to:
Disallow: /path
Allow: /path
Sitemap: http://sample.com/sitemap.xml
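A minimal robots.txt might combine these statements as follows; the /admin/ and /admin/public/ paths are only placeholders, and the sitemap URL matches the example above:
User-agent: *
Disallow: /admin/
Allow: /admin/public/
Sitemap: http://sample.com/sitemap.xml
Here the asterisk after User-agent means the rules apply to all crawlers, everything under /admin/ is blocked except the /admin/public/ section, and the Sitemap line points crawlers to the site's sitemap.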
The robots.txt file can help with SEO, although how much depends on the content you have on your website. If you do not use one, search engine crawlers can access everything, including your root files and backend pages. So, in short, robots.txt lets you control how a search engine looks at your data.
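If you want to check what your robots.txt actually allows, Python's standard urllib.robotparser module can read the file and answer per-URL questions. This is only a quick sketch; the domain and paths are placeholders standing in for your own site:
from urllib import robotparser

# Point the parser at the site's robots.txt (placeholder domain).
parser = robotparser.RobotFileParser()
parser.set_url("http://example.com/robots.txt")
parser.read()  # download and parse the file

# Ask whether a given crawler may fetch a given URL.
print(parser.can_fetch("*", "http://example.com/admin/"))      # False if /admin/ is disallowed
print(parser.can_fetch("*", "http://example.com/index.html"))  # True if the page is not disallowed
Running a check like this before launching a site is a quick way to confirm that you have not accidentally blocked pages you want indexed.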