Robots.txt Tester

Test whether a URL path is allowed or blocked by your robots.txt rules. Supports multiple user agents, including Googlebot and Bingbot.

💡 About robots.txt

How It Works

  • robots.txt tells search engine crawlers which pages they may access.
  • Place it at the root of your domain: example.com/robots.txt
  • More specific rules take precedence over general ones.
  • Allow and Disallow directives control crawler access.
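Put together, a typical file might look like this (a hypothetical example; the paths are placeholders):

```text
# example.com/robots.txt (hypothetical)
User-agent: *
Disallow: /admin/
Allow: /admin/public/

User-agent: Googlebot
Disallow: /drafts/
```

Here all crawlers are blocked from /admin/ except the /admin/public/ subtree, and Googlebot is additionally blocked from /drafts/.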
📖 Rule Priority

How Rules Are Matched

  • Agent-specific rules override wildcard (*) rules.
  • The most specific (longest) matching path wins.
  • Allow directives can override Disallow for specific paths.
  • If no matching rule is found, the URL is allowed by default.
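The longest-match-wins behavior above can be sketched in a few lines of Python. This is a minimal illustration, not a full parser: `rules` is an assumed list of (directive, path) pairs already selected for one user agent.

```python
def is_allowed(rules, url_path):
    """Return True if url_path is allowed under longest-match-wins rules.

    rules: list of ("Allow" | "Disallow", path_prefix) pairs for one agent.
    """
    best_len = -1
    allowed = True  # no matching rule => allowed by default
    for directive, path in rules:
        # The most specific (longest) matching prefix decides the outcome.
        if path and url_path.startswith(path) and len(path) > best_len:
            best_len = len(path)
            allowed = (directive == "Allow")
    return allowed

rules = [("Disallow", "/admin/"), ("Allow", "/admin/public/")]
print(is_allowed(rules, "/admin/secret"))       # False: Disallow wins
print(is_allowed(rules, "/admin/public/page"))  # True: longer Allow wins
print(is_allowed(rules, "/blog/post"))          # True: no rule matches
```

Note how `Allow: /admin/public/` overrides `Disallow: /admin/` only because it is the longer match.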
🎯 Common Patterns

Wildcards & Tips

  • Disallow: / — blocks everything.
  • Disallow: (empty) — allows everything.
  • Use trailing slash for directories: /admin/
  • Use * for wildcards: Disallow: /*.pdf$
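The `*` and `$` wildcards can be understood as a translation into a regular expression: `*` matches any run of characters, and a trailing `$` anchors the pattern to the end of the path. A rough Python sketch of that translation:

```python
import re

def matches(pattern, url_path):
    """Check a robots.txt path pattern against a URL path.

    '*' matches any sequence of characters; a trailing '$' anchors
    the match to the end of the path. A sketch, not a full spec.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape literal parts, join with '.*' where '*' appeared.
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    return re.match(regex, url_path) is not None

print(matches("/*.pdf$", "/files/report.pdf"))      # True
print(matches("/*.pdf$", "/files/report.pdf?x=1"))  # False: '$' anchors the end
```

So `Disallow: /*.pdf$` blocks URLs that end in .pdf, but not a .pdf URL with a query string appended.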

Frequently Asked Questions

Does robots.txt block pages from appearing in search?
Not exactly. robots.txt prevents crawling, but if other pages link to a blocked URL, it can still appear in search results (without a snippet). To fully remove a page from search, use a "noindex" meta tag or X-Robots-Tag header instead.
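For reference, the two deindexing mechanisms mentioned look like this (generic examples):

```text
<!-- In the page's <head>: -->
<meta name="robots" content="noindex">

# Or as an HTTP response header (works for non-HTML files like PDFs):
X-Robots-Tag: noindex
```

Note that for either to work, the page must remain crawlable: if robots.txt blocks the URL, crawlers never see the noindex directive.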
Can I test my live robots.txt?
This tool tests the robots.txt content you paste in. To test your live file, visit yourdomain.com/robots.txt, copy the contents, and paste them here. Google Search Console also has a built-in robots.txt tester.
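If you prefer to script the same check, Python's standard library includes a robots.txt parser. A small sketch that tests pasted content (the rules below are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Pasted robots.txt content (hypothetical rules).
content = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(content.splitlines())
print(rp.can_fetch("Googlebot", "/admin/panel"))  # False
print(rp.can_fetch("Googlebot", "/blog/post"))    # True
```

Calling `rp.read()` on a parser constructed with a URL fetches the live file instead, though the stdlib parser does not implement Google-style `*`/`$` wildcards.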