A robots.txt file tells crawlers which pages or files they can or can't request from your site.
Crawlers may ignore a domain's ads.txt file if the domain's robots.txt file disallows crawling of it. For example, suppose you own a website called example1.com and example1.com/robots.txt includes the following lines:
User-agent: *
Disallow: /ads
Under the robots.txt standard, Disallow: /ads is a prefix rule, so it blocks every URL path that begins with /ads, including /ads.txt.
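A quick way to check this behavior is Python's standard urllib.robotparser module (a sketch; example1.com is the placeholder domain from above):

```python
from urllib import robotparser

# Parse the robots.txt rules shown above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /ads",
])

# /ads is a prefix rule, so /ads.txt is blocked for all user agents.
print(rp.can_fetch("*", "https://example1.com/ads.txt"))  # False
```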
You can modify the robots.txt file as follows to allow crawling of the file (other approaches are possible):
Option 1: Modify the disallowed path. The trailing slash restricts the rule to the /ads/ directory, so /ads.txt is no longer blocked.
User-agent: *
Disallow: /ads/
Option 2: Explicitly allow ads.txt. This relies on the crawler supporting the Allow directive, which is widely but not universally honored.
User-agent: *
Allow: /ads.txt
Disallow: /ads
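Both options can be verified with the same urllib.robotparser check (again a sketch using the placeholder domain example1.com; note that Python's parser supports Allow, but some crawlers may not):

```python
from urllib import robotparser

def ads_txt_allowed(rules):
    """Return True if a crawler obeying these rules may fetch /ads.txt."""
    rp = robotparser.RobotFileParser()
    rp.parse(rules)
    return rp.can_fetch("*", "https://example1.com/ads.txt")

# Option 1: the trailing slash restricts the rule to the /ads/ directory.
print(ads_txt_allowed(["User-agent: *", "Disallow: /ads/"]))  # True

# Option 2: an explicit Allow for /ads.txt overrides the broader Disallow.
print(ads_txt_allowed(["User-agent: *", "Allow: /ads.txt", "Disallow: /ads"]))  # True
```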
For detailed robots.txt settings, please contact your service provider.
