robots.txt is a request, not an enforcement mechanism. Well-behaved bots (Googlebot, Bingbot, Anthropic's ClaudeBot, OpenAI's GPTBot) respect it; malicious bots ignore it. Never rely on robots.txt for security — it is a public file that effectively advertises what you do not want crawled.
Common patterns: `Disallow: /admin/` keeps internal pages out of search results, `Disallow: /api/` keeps API endpoints from being crawled, and `Allow: /` makes the default explicit (anything not matched by a `Disallow` rule is crawlable). The `Sitemap:` directive points crawlers to your sitemap.
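
Putting those directives together, a minimal robots.txt might look like the sketch below. The domain and sitemap URL are placeholders, not real endpoints:

```
User-agent: *
Disallow: /admin/
Disallow: /api/
Allow: /

Sitemap: https://example.com/sitemap.xml
```

The `Allow: /` line is optional since crawling is permitted by default; it mostly serves as documentation. For Google's crawler, the most specific matching path wins, so `Disallow: /admin/` still takes precedence over `Allow: /`.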
A frequent mistake is shipping `Disallow: /` to production. This blocks all crawling by compliant bots and, over time, silently drops the site from search results. Check robots.txt after every deploy and validate it in Google Search Console.
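
One way to catch this early is an automated post-deploy check. The sketch below is a minimal example using Python's standard-library `urllib.robotparser`; the domain is a hypothetical placeholder and the check simply asserts that a mainstream crawler is allowed to fetch the homepage:

```python
import sys
from urllib.robotparser import RobotFileParser

# Hypothetical production domain; substitute your own.
SITE = "https://example.com"

def main() -> int:
    rp = RobotFileParser()
    rp.set_url(f"{SITE}/robots.txt")
    rp.read()  # fetch and parse the live robots.txt

    # If a mainstream crawler cannot fetch the homepage,
    # something like a stray 'Disallow: /' is probably in place.
    if not rp.can_fetch("Googlebot", f"{SITE}/"):
        print("robots.txt blocks Googlebot from the homepage -- check for 'Disallow: /'")
        return 1
    print("robots.txt allows crawling of the homepage")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wired into CI or a deploy pipeline, a failing exit code here turns a silent de-indexing into an immediate, visible error.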

