AltUtils

Robots.txt Builder

Easily instruct search engine bots on which pages to crawl or ignore.

Crawler Rules (separate multiple paths with commas)

Sitemap Index (Optional)

File Output: robots.txt

What is the Robots.txt Builder?

The Robots.txt Builder is a visual tool for crafting standards-compliant directives that tell web crawlers (like Googlebot) which parts of a site they may crawl.

A misconfigured robots.txt file can block Google from crawling a site entirely, causing pages to drop out of organic search results. This tool eliminates syntax errors and guards against conflicting rules that could accidentally block your whole site.
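For illustration, a typical file produced by a builder like this might look as follows (the paths and domain are hypothetical):

```txt
# Rules for every crawler
User-agent: *
Disallow: /admin/
Disallow: /tmp/

# Optional sitemap reference
Sitemap: https://site.com/sitemap.xml
```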

Frequently Asked Questions

What is a robots.txt file?

It is a plain-text file that must live at the root of your domain (e.g., site.com/robots.txt) and acts as the traffic cop for automated search engine bots.
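As a sketch, you can check how a given set of rules will be interpreted using Python's standard-library parser (the rules and URLs below are hypothetical):

```python
# Check robots.txt rules with Python's built-in parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "https://site.com/admin/login"))  # False
print(parser.can_fetch("*", "https://site.com/blog/post"))    # True
```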

Can robots.txt hide my private data?

No. While it tells legitimate bots like Google not to crawl a folder, malicious scrapers completely ignore robots.txt. Always use passwords (authentication) for private data.
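For example, a rule like the following (path hypothetical) is purely advisory; the blocked URL remains publicly fetchable by anyone who requests it directly:

```txt
User-agent: *
# Polite bots skip this folder; malicious scrapers can still request it
Disallow: /private/
```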

What does 'User-agent: *' mean?

The asterisk (*) is a wildcard. 'User-agent: *' means the subsequent 'Allow' and 'Disallow' rules apply to every bot that doesn't have a more specific group addressed to it by name.
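A sketch of how group matching works (bot name real, paths hypothetical): a crawler follows the most specific group that names it and ignores the wildcard group.

```txt
# Googlebot matches this named group and ignores the * group below
User-agent: Googlebot
Disallow: /experiments/

# Every bot without a named group of its own follows this one
User-agent: *
Disallow: /admin/
```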

Should I block my CSS and JS files?

Absolutely not. Modern Googlebot renders pages with a headless Chromium browser, much like a real user's browser would. If it cannot fetch your CSS or JS, it won't understand your page layout, which can seriously hurt your rankings.
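For instance, the following anti-pattern (asset paths hypothetical) would prevent Googlebot from rendering the page properly and should be avoided:

```txt
# Anti-pattern: blocking render-critical assets
User-agent: *
Disallow: /css/
Disallow: /js/
```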

What is Crawl-delay?

Crawl-delay asks a bot to wait a given number of seconds between requests so it doesn't overload your server. Note: Googlebot ignores this directive and manages its crawl rate automatically, though other crawlers such as Bingbot honor it.
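A minimal example for a crawler that honors the directive (the 10-second value is arbitrary):

```txt
# Ask Bingbot to wait 10 seconds between successive requests
User-agent: Bingbot
Crawl-delay: 10
```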