Crawl Budget
The number of pages a search engine will crawl on your site within a given timeframe.
Understanding Crawl Budget
Crawl budget refers to the number of pages Googlebot (or another search engine crawler) will visit on your website within a specific period. It is determined by two factors: crawl rate limit (how fast the crawler can fetch pages without overloading your server) and crawl demand (how much Google wants to crawl your site, based on its popularity and how often its content changes).

For most small-to-medium sites, crawl budget is not a concern. For large sites with millions of pages, however, such as e-commerce stores, news publishers, and user-generated content platforms, managing it is critical.

Common techniques include blocking low-value pages via robots.txt, flattening redirect chains, improving server response times, and ensuring internal linking guides crawlers to your most important pages.
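To make the robots.txt technique concrete, here is a minimal sketch for a hypothetical large e-commerce site; the paths and URL parameters are illustrative placeholders, not recommendations for any particular platform. Keep in mind that robots.txt controls crawling, not indexing: a blocked URL can still appear in search results if other pages link to it.

```
# Hypothetical robots.txt: steer crawlers away from crawl-wasting URL
# spaces so budget is spent on pages that matter.
User-agent: *
# Internal search results (illustrative path)
Disallow: /search
# Faceted-navigation parameters that generate near-duplicate pages
Disallow: /*?sort=
Disallow: /*?filter=
# Cart and checkout flows carry no search value
Disallow: /cart
Disallow: /checkout

# Point crawlers at the canonical list of pages you do want crawled
Sitemap: https://www.example.com/sitemap.xml
```

Redirect chains are also easy to audit programmatically. The Python sketch below assumes the third-party requests library and a placeholder URL; every intermediate hop costs the crawler an extra fetch, so a chain like A -> B -> C is worth collapsing into a single direct redirect.

```python
# Minimal redirect-chain audit, assuming the `requests` library is installed.
import requests

def redirect_chain(url: str, timeout: float = 10.0) -> list[str]:
    """Return the full hop sequence for `url`, ending at the final page."""
    response = requests.get(url, timeout=timeout, allow_redirects=True)
    hops = [r.url for r in response.history]  # each intermediate redirect
    hops.append(response.url)                 # final destination
    return hops

if __name__ == "__main__":
    # Placeholder URL; in practice, feed this URLs from your sitemap or logs.
    chain = redirect_chain("http://example.com/old-page")
    if len(chain) > 2:  # more than one redirect before the final page
        print(f"Chain with {len(chain) - 1} hops:", " -> ".join(chain))
```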
Keep learning
Robots.txt
A text file that instructs search engine crawlers which pages they can or cannot access.
Sitemap
An XML file that lists all important pages on your website to help search engines discover and crawl them (see the minimal sketch after this list).
Indexing
The process by which search engines store and organize web pages in their database for retrieval in search results.
Internal Linking
Links from one page on your website to another page on the same website, helping distribute authority and guide navigation.
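Since the sitemap entry above is the most file-like of these terms, here is a minimal sitemap XML sketch; the URLs and dates are placeholders. Listing only canonical, index-worthy pages keeps the file useful as a signal for where crawl budget should go.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sitemap sketch with placeholder URLs and dates. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/products/widget</loc>
    <lastmod>2024-01-10</lastmod>
  </url>
</urlset>
```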