Robots.txt is a text file that webmasters create to tell search engine bots how to crawl pages on their website. For example, if you want to keep search engines from crawling your “thank you” page, you can add a single Disallow line to your robots.txt file. Note that robots.txt controls crawling, not indexing: a disallowed page can still show up in search results if other sites link to it. Even so, keeping search engines from accessing certain pages on your site is useful both for the privacy of your site and for your SEO.
In practice, a robots.txt file indicates whether certain search engine bots (or all of them) may or may not crawl parts of a website. These crawl instructions are specified by “disallowing” or “allowing” the behavior of certain (or all) user agents.
User-agent: [user-agent name] (to give the same instructions to all search engine bots, put “*” in place of the user-agent name.)
Disallow: [URL string not to be crawled]
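Putting the two directives together, a minimal robots.txt that blocks every bot from a “thank you” page might look like this (the `/thank-you/` path is a hypothetical example, not a required name):

```
# Applies to all crawlers
User-agent: *
# Ask bots not to crawl the thank-you page
Disallow: /thank-you/
```

The file must be placed at the root of the site (e.g. example.com/robots.txt) for crawlers to find it.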
Please note that if you want to allow a particular page on a website, you don't need to list it in the robots.txt file. However, if you have blocked a whole folder of the website but want to allow a few pages inside that folder, then you have to specify those pages with an Allow directive. Also, keep in mind that robots.txt rules are case sensitive, so if you are facing case-sensitive URL issues you can address them with robots.txt rules or a 301 redirect.
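The blocked-folder-with-exceptions pattern described above can be sketched as follows (the `/private/` folder and page name are hypothetical, chosen only for illustration):

```
# Applies to all crawlers
User-agent: *
# Block the whole folder...
Disallow: /private/
# ...but allow one specific page inside it
Allow: /private/pricing.html
```

Here the Allow line carves out a single page from the otherwise disallowed folder; everything else under /private/ stays blocked.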