Generate a robots.txt file for your website with an intuitive visual editor. Control search engine crawling with allow/disallow rules, sitemap references, and crawl delays.
User-agent: *
Allow: /
Upload this file to your website root: yourdomain.com/robots.txt
Pattern syntax:
- Use * to target all search engine bots.
- Use / to block the entire site.
- Add $ to the end of a pattern to match exact file extensions.

The robots.txt file is a plain text file placed at the root of your website (e.g., yourdomain.com/robots.txt) that instructs search engine crawlers which pages or sections of your site they can or cannot access. It follows the Robots Exclusion Protocol, a standard used by all major search engines.
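For example, a minimal sketch combining these patterns (the .pdf extension is just an illustrative choice):

User-agent: *
# $ anchors the match to the end of the URL, so only URLs ending in .pdf are blocked
Disallow: /*.pdf$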
While robots.txt cannot enforce access restrictions (crawlers can choose to ignore it), all reputable search engines, including Google, Bing, Yahoo, Baidu, and Yandex, respect these directives. It is your primary tool for managing crawl budget and keeping crawlers away from sensitive or low-value pages; note that it controls crawling, not indexing (see the warning below).
Common use cases include blocking access to admin panels, staging environments, duplicate content (print pages, filtered views), private directories, and resource-heavy pages that waste crawl budget. You should also use it to point crawlers to your XML sitemap for faster discovery of important pages.
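A sketch covering several of these cases at once (every path and the domain are placeholders; adapt them to your own site structure):

User-agent: *
# Keep crawlers out of the admin panel and staging environment
Disallow: /admin/
Disallow: /staging/
# Avoid duplicate content from print versions and filtered views
Disallow: /print/
Disallow: /*?filter=

# Point crawlers to the XML sitemap for faster discovery
Sitemap: https://yourdomain.com/sitemap.xml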
The crawl-delay directive (supported by Bing and Yandex, but not Google) tells bots to wait a specified number of seconds between requests, which can help prevent server overload from aggressive crawling.
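For instance, to ask Bing's crawler to wait ten seconds between requests (ten is an arbitrary illustrative value; tune it to your server's capacity):

User-agent: Bingbot
Crawl-delay: 10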
Important: Never use robots.txt to hide sensitive information — blocked pages can still appear in search results if other sites link to them. For true access control, use authentication or the noindex meta tag instead.
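A minimal sketch of the noindex approach, placed in the <head> of the page you want kept out of results:

<meta name="robots" content="noindex">

For the tag to work, the page must remain crawlable: if robots.txt blocks it, crawlers never see the noindex instruction.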