
Search Console Crawl Stats URL Parameters

One of the lovely parts of Google Search Console is the Settings > Crawl Stats report suite:

[Image: crawl requests breakdown]

And something I only noticed today is the unexpectedly friendly URL structure:


If you haven’t had enough coffee today, this probably means the following will do something:


and it does.
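As a sketch of what building one of those URLs might look like programmatically (the host, path, and `resource_id` value here are assumptions for illustration; only the `response` parameter name is something I've actually seen in the report URL):

```python
from urllib.parse import urlencode

# Hypothetical base URL for the Crawl Stats drill-down report.
# The host/path and resource_id are assumptions for illustration;
# only the `response` parameter name is confirmed by the report itself.
BASE = "https://search.google.com/search-console/settings/crawl-stats/drilldown"

def crawl_stats_url(resource_id: str, **params: int) -> str:
    """Build a Crawl Stats report URL with the given filter parameters."""
    query = urlencode({"resource_id": resource_id, **params})
    return f"{BASE}?{query}"

# e.g. filter the report to "Blocked by robots.txt" responses:
url = crawl_stats_url("https://example.com/", response=7)
```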

(I don’t usually see the parameters at the end of the URL because I have too many pinned extensions in Chrome).

This post is just me finding something interesting, rather than uncovering this THING they don’t want you to know.

And the documentation is really good here: https://support.google.com/webmasters/answer/9679690?hl=en

So it’s not like I’m discovering much here. Enjoy.


I am just going to run through and list the values for each parameter, starting with the response filter:

  1. OK (200)
  2. Not found (404)
  3. Unauthorised (401/407)
  4. Other client error (4xx)
  5. DNS error
  6. Fetch error
  7. Blocked by robots.txt
  8. This returns a 400 for some reason.
  9. Server error (5xx)
  10. DNS unresponsive
  11. robots.txt not available
  12. Page could not be reached
  13. Page timeout
  14. This returns a 400 for some reason.
  15. Redirect error
  16. Other fetch error
  17. Moved permanently (301)
  18. Moved temporarily (302)
  19. Moved (other)
  20. Not modified (304)


The file type filter:

  1. HTML
  2. Image
  3. Video
  4. JavaScript
  5. CSS
  6. PDF
  7. Other file type
  8. This returns a 400 for some reason.
  9. Unknown (failed requests)
  10. Other XML
  11. JSON
  12. Syndication
  13. Audio
  14. Geographic data


The fetch purpose filter:

  1. Discovery
  2. Refresh
  3. Other fetch purpose

( ͡° ͜ʖ ͡°)


The Googlebot (agent) type filter:

  1. Smartphone
  2. Desktop
  3. Image
  4. Video
  5. Page resource load
  6. Other agent type
  7. Adsbot
  8. Storebot
  9. AMP
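If you want to work with these enumerations programmatically, the lists above can be captured as plain lookup tables. A sketch in Python; the integer keys are the parameter values as listed above, and the two slots that returned a 400 for me are marked `None`:

```python
# Lookup tables for the parameter values enumerated above.
# Values 8 and 14 for the response filter, and 8 for the file type
# filter, returned a 400 at the time of writing, hence None.
RESPONSE = {
    1: "OK (200)", 2: "Not found (404)", 3: "Unauthorised (401/407)",
    4: "Other client error (4xx)", 5: "DNS error", 6: "Fetch error",
    7: "Blocked by robots.txt", 8: None, 9: "Server error (5xx)",
    10: "DNS unresponsive", 11: "robots.txt not available",
    12: "Page could not be reached", 13: "Page timeout",
    14: None, 15: "Redirect error", 16: "Other fetch error",
    17: "Moved permanently (301)", 18: "Moved temporarily (302)",
    19: "Moved (other)", 20: "Not modified (304)",
}
FILE_TYPE = {
    1: "HTML", 2: "Image", 3: "Video", 4: "JavaScript", 5: "CSS",
    6: "PDF", 7: "Other file type", 8: None,
    9: "Unknown (failed requests)", 10: "Other XML", 11: "JSON",
    12: "Syndication", 13: "Audio", 14: "Geographic data",
}
PURPOSE = {1: "Discovery", 2: "Refresh", 3: "Other fetch purpose"}
AGENT = {
    1: "Smartphone", 2: "Desktop", 3: "Image", 4: "Video",
    5: "Page resource load", 6: "Other agent type",
    7: "Adsbot", 8: "Storebot", 9: "AMP",
}
```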

Anything interesting?

Yes, please have a look yourself. I will dig in more once I have the will.

  • Why are a few of these options returning a 400?
  • I’ve never seen many of these in the wild – are they all functional or placeholders for planned/deprecated/currently broken reports?

For example, I don’t think I’ve ever seen Blocked by robots.txt (&response=7) show up in this report before. It does show up in Coverage with ‘last crawl’ dates (I had a look at this back in 2018 to see whether blocked crawl ‘attempts’ were recorded):

[Screenshot: Coverage report showing 'last crawl' dates for blocked URLs]

But this particular crawl stats report doesn’t seem to be populated for any sites I currently have access to:

[Screenshot: empty 'Blocked by robots.txt' crawl stats report]
Attentive readers may note the date discrepancy. It's not an issue; the Coverage report goes back to late April with examples.

And you would expect it to be populated, given that the Coverage report is showing recently 'crawled' URLs.

The documentation indicates it probably is intended to function:

[Screenshot of the 'possibly good' response codes section from the documentation linked above]
The documentation indicates it probably should function.

Maybe it works for you, let me know! It might be temporarily broken (and I’ve previously been inattentive).

I hope some tool providers can use these URL formats for more efficient scraping.
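For instance, a tool could pre-generate one deep link per response value and open each filtered view directly, rather than clicking through the UI. A sketch under the same assumptions as before (the base URL is my guess; the `response` values 1-20 are the ones enumerated above):

```python
from urllib.parse import urlencode

# Hypothetical drill-down base URL (an assumption for illustration).
BASE = "https://search.google.com/search-console/settings/crawl-stats/drilldown"

def response_urls(resource_id: str, values=range(1, 21)) -> list:
    """One deep link per response filter value (1-20, per the list above)."""
    return [
        f"{BASE}?{urlencode({'resource_id': resource_id, 'response': v})}"
        for v in values
    ]

urls = response_urls("https://example.com/")
```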

This post could be a tweet thread, but if I did that, I'd never find it again.

One thought on “Search Console Crawl Stats URL Parameters”

  1. No, the ones that don't work for you also don't work for me. I tried two domain-level properties and two URL-prefix properties.

    Thank you for the parameter value list / overview.
