Free Technical SEO Tool

Indexability Checker

Find out exactly why a page isn't (or might not be) indexable

We follow redirects, read response headers, and parse the HTML to give you the full picture.

How it works

No black box. Here's exactly what Indexability Checker checks.

  1. We fetch the URL

     Including the final URL after any redirects, with the real HTTP status.

  2. We read the response headers

     Including X-Robots-Tag — a common silent indexing killer most tools miss because it never appears in HTML.

  3. We parse the HTML

     Robots meta, googlebot meta, canonical link, title, lang attribute.

  4. You get a clear verdict

     Indexable, possibly blocked, or not indexable — with the exact reason and the exact fix.
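The steps above can be sketched as a single pass. This is a hypothetical simplification, not CrawlTide's actual code: the `verdict` function and `RobotsMetaParser` class are illustrative names, and the real tool inspects more signals than shown here.

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects robots/googlebot meta directives and the canonical link."""

    def __init__(self):
        super().__init__()
        self.directives = []   # e.g. ["noindex", "follow"]
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() in ("robots", "googlebot"):
            self.directives += [d.strip().lower() for d in (a.get("content") or "").split(",")]
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")


def verdict(status, headers, html, url):
    """Return (verdict, reason) from the signals the steps above describe."""
    # Step 1: the final HTTP status after redirects.
    if status >= 400:
        return ("not indexable", f"HTTP {status}")
    # Step 2: the X-Robots-Tag response header.
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return ("not indexable", "X-Robots-Tag: noindex header")
    # Step 3: robots meta and canonical in the HTML.
    p = RobotsMetaParser()
    p.feed(html)
    if "noindex" in p.directives:
        return ("not indexable", "robots meta noindex")
    if p.canonical and p.canonical != url:
        return ("possibly blocked", f"canonical points to {p.canonical}")
    # Step 4: no blocking signal found.
    return ("indexable", "no blocking signals found")
```

The ordering matters: an HTTP error or header-level noindex makes the HTML irrelevant, which is why header checks come before parsing.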

Why this matters

"Why isn't my page indexed?" is the most-Googled SEO question. The answer is almost always one of five things: HTTP error, robots.txt block, noindex meta, X-Robots-Tag header, or canonical pointing elsewhere. This tool checks all of them in one pass and tells you which one is the problem.

  • X-Robots-Tag is set at the HTTP level so it never shows up when you "view source." It's the most common silent indexing killer.
  • Canonical pointing to a different URL means crawlers will index *that* URL, not the one you submitted.
  • A 200 + noindex is a perfectly working page that's also perfectly invisible to search engines and AI crawlers.
  • Soft signals (title, meta description, lang) don't block indexing but determine whether you actually rank once indexed.
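On the canonical point, "a different URL" usually means different after light normalization, since trailing slashes and host casing typically resolve to the same page. A sketch of one reasonable comparison (the normalization rules here are assumptions, not necessarily the tool's exact logic):

```python
from urllib.parse import urlsplit


def same_page(canonical, requested):
    """Rough equality check: scheme and host compared case-insensitively,
    trailing slash ignored. Assumed normalization, not a spec-complete one."""
    def norm(u):
        s = urlsplit(u)
        return (s.scheme.lower(), s.netloc.lower(), s.path.rstrip("/") or "/", s.query)
    return norm(canonical) == norm(requested)
```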

Want the full story across every page?

The Indexability Checker checks one URL. CrawlTide audits your whole site, tracks issues over time, watches your AI Visibility weekly, and pushes meta-tag fixes straight to your CMS.

No credit card. Free tier covers a small site end-to-end.

Frequently asked questions

My page returns 200 but isn't in Google. Why?
In order of likelihood: noindex meta tag, X-Robots-Tag noindex header, canonical pointing to a different URL, blocked by robots.txt, or simply not yet crawled. This tool catches the first three — for the last two, also run our Robots & Sitemap Checker.
What's X-Robots-Tag and how do I find it?
It's an HTTP response header that can carry the same noindex/nofollow directives as the meta tag. It's set by your server, framework, CDN, or sometimes a Cloudflare worker. Open DevTools → Network → click the request → Response Headers. We surface it automatically here so you don't have to.
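If you'd rather script the DevTools check, the header lookup itself is small. A minimal sketch that takes the `(name, value)` pairs an `http.client` response's `getheaders()` returns (the fetch is left out; `x_robots_directives` is an illustrative helper, not part of any library):

```python
def x_robots_directives(headers):
    """Collect directives from X-Robots-Tag headers.

    Header names are case-insensitive, and the header may legally
    appear more than once, so we scan all pairs rather than doing
    a single dict lookup.
    """
    out = []
    for name, value in headers:
        if name.lower() == "x-robots-tag":
            out += [d.strip().lower() for d in value.split(",")]
    return out
```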
Is "noindex, follow" different from "noindex, nofollow"?
Yes. Both prevent indexing, but follow lets crawlers traverse links on the page (so equity flows through), while nofollow tells crawlers not to follow any of the page's links. Use noindex,follow on intentionally non-indexed hub pages (search results, paginated archives) so internal linking still works.
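The distinction comes down to which directives are present in the content string. A tiny parser makes it concrete (one assumption: the `none` shorthand behaves as noindex,nofollow, per Google's robots meta documentation):

```python
def indexing_policy(content):
    """Interpret a robots meta content string like "noindex, follow"."""
    directives = {d.strip().lower() for d in content.split(",")}
    return {
        # Absent directives default to permissive: index, follow.
        "index": "noindex" not in directives and "none" not in directives,
        "follow": "nofollow" not in directives and "none" not in directives,
    }
```

So `"noindex, follow"` blocks indexing but keeps link traversal on, while `"noindex, nofollow"` blocks both.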
My canonical points to a different URL on purpose. Will this tool flag that?
Yes — as a warning, not a failure. Canonical clusters (e.g. product variants pointing to a parent product) are intentional. The flag is there so you can confirm the target is the right one.
How is this different from Google Search Console's URL Inspection?
GSC tells you the truth about what Google sees and indexed. This tool tells you why a page might not be indexable in the first place — useful before you submit, and useful for diagnosing competitor pages or staging URLs that aren't in your GSC.
Does this check robots.txt?
Not directly here — that's our Robots & Sitemap Checker. We check noindex via meta and HTTP headers, which is what most failures actually come down to.