That is a valid robots.txt – but you've got a UTF-8 BOM (`\xef\xbb\xbf`) at the beginning of the text file. That's why there's a red dot next to 'User' in the first line. This mark tells browsers and text editors to interpret the file as UTF-8, whereas robots.txt is expected to contain only ASCII characters.
Convert your text file to ASCII and the errors will go away. Alternatively, copy everything after the red dot and paste it back in.
I tested this against the live version; here's the result in raw byte form:
You can clearly see the BOM at the beginning. Browsers and text editors will ignore it, but it may interfere with a crawler's ability to parse the robots.txt. You can test the live version yourself using this Python script:
import urllib.request

# Fetch the live robots.txt and print its raw bytes;
# a leading b'\xef\xbb\xbf' in the output is the UTF-8 BOM.
response = urllib.request.urlopen('http://www.best-iran-trip.com/robots.txt')
print(repr(response.read()))
If you're able to install Notepad++, it has an Encoding menu that lets you save the file in any encoding, including UTF-8 without BOM.
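If you'd rather fix it programmatically, here's a small sketch that strips a leading UTF-8 BOM from raw bytes (the `strip_bom` function name and the sample input are just illustrative):

```python
import codecs

def strip_bom(data: bytes) -> bytes:
    """Remove a leading UTF-8 BOM (b'\\xef\\xbb\\xbf') if present."""
    if data.startswith(codecs.BOM_UTF8):
        return data[len(codecs.BOM_UTF8):]
    return data

# Example: the BOM-prefixed first line of a robots.txt
raw = b'\xef\xbb\xbfUser-agent: *'
print(strip_bom(raw))  # b'User-agent: *'
```

Read your robots.txt in binary mode (`'rb'`), pass the bytes through this, and write the result back in binary mode to get an ASCII-clean file.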
2 Answers
You can use the webmaster panel tools:
https://www.google.com/webmasters/tools/robots-testing-tool
Test your robots.txt file there, then download the corrected version.
It works fine.