Free Site Audit Tool

Broken Link Checker

Find broken internal and external links on any single page

We extract every link on the page and check the first 50 for 4xx, 5xx, timeouts, and redirects. Per-link timeout: 5s.

How it works

No black box. Here's exactly what Broken Link Checker checks.

  1. We fetch the page

     Server-side HTML fetch.

  2. We extract every link

     Anchors in the body, navigation, footer, and around images: every <a href> on the page.

  3. We HEAD-check up to 50 links

     In parallel, with a 5-second per-link timeout. Falls back to GET when servers don't support HEAD.

  4. You get the broken ones

     4xx, 5xx, timeouts, and redirects with their target URLs. Sorted by severity.
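Steps 2 and 3 above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the function names (`extractLinks`, `linksToCheck`) and the regex-based extraction are assumptions for clarity only — a production checker would use a real HTML parser.

```typescript
// Illustrative sketch of steps 2-3: pull every <a href> out of the HTML,
// then cap the list at 50 links to check. Regex extraction is a toy stand-in
// for a proper HTML parser.
const MAX_LINKS = 50;

function extractLinks(html: string): string[] {
  const hrefs: string[] = [];
  const re = /<a\s[^>]*href=["']([^"']+)["']/gi;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    hrefs.push(m[1]);
  }
  return hrefs;
}

function linksToCheck(html: string): string[] {
  // Only the first 50 extracted links are checked (see the FAQ on the cap).
  return extractLinks(html).slice(0, MAX_LINKS);
}
```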

Why this matters

Broken links are the cheapest SEO loss possible — they're trivial to fix and they bleed credibility, crawl budget, and user trust every day they're live. Search engines treat link rot as a quality signal. AI engines that hit a broken link mid-extraction lose that page as a citation source.

  • Each broken outbound link costs you a small amount of trust with crawlers and a larger amount of trust with users.
  • Broken internal links bleed link equity into nowhere and confuse crawl prioritization.
  • Most broken-link issues are caused by link rot on external sites — find them before users do.
  • Redirects aren't broken, but each one is a small crawl-budget tax. Update direct links where you can.

Want the full story across every page?

The Broken Link Checker checks one URL. CrawlTide audits your whole site, tracks issues over time, watches your AI Visibility weekly, and pushes meta-tag fixes straight to your CMS.

No credit card. Free tier covers a small site end-to-end.

Frequently asked questions

Why do you cap at 50 links?
50 outbound HEAD requests is fast enough to be a usable free tool but not so many that it becomes a denial-of-service vector against linked sites. CrawlTide's paid product checks every link across every crawled page with proper queuing and rate limits.
What counts as a "broken" link?
Any link that returns a 4xx (e.g. 404 not found, 403 forbidden), 5xx (server error), fails to respond within 5 seconds, or fails outright. Redirects (3xx that resolve to 2xx) are flagged separately as warnings, not broken.
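The classification above boils down to a small decision function. This is a hedged sketch of those rules, not the tool's code; the name `classify` and its parameters are invented for illustration.

```typescript
type LinkVerdict = "ok" | "warning" | "broken";

// Hypothetical classifier mirroring the rules described: timeouts and
// 4xx/5xx are broken; a redirect that resolved to 2xx is a warning; a
// plain 2xx is ok. Anything else (e.g. an unresolved 3xx) is treated
// as broken.
function classify(
  finalStatus: number,
  wasRedirected: boolean,
  timedOut: boolean
): LinkVerdict {
  if (timedOut || finalStatus >= 400) return "broken";
  if (finalStatus >= 200 && finalStatus < 300) {
    return wasRedirected ? "warning" : "ok";
  }
  return "broken";
}
```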
Does this find broken images?
Not in v1 — we only check <a href> links. Broken images need a separate check (img src, srcset, picture). CrawlTide's full audit catches both.
Why are some sites flagged as broken when they work in my browser?
A few servers refuse HEAD requests, refuse our User-Agent, or serve 200 only after JavaScript runs. We try GET as a fallback for 405/501 responses, but if a site requires JS execution, we can't see the content. Treat those flags as "investigate," not "definitely broken."
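The HEAD-to-GET fallback described above can be sketched like this. It's an illustrative outline only: `shouldRetryWithGet` and `checkLink` are hypothetical names, and the fetch portion assumes a runtime with a global `fetch` and `AbortSignal.timeout` (Node 18+ or a modern browser).

```typescript
// Servers that reject HEAD typically answer 405 (Method Not Allowed)
// or 501 (Not Implemented); those are the cases worth retrying with GET.
function shouldRetryWithGet(headStatus: number): boolean {
  return headStatus === 405 || headStatus === 501;
}

// Sketch of a single link check: HEAD first, GET fallback, 5s timeout.
async function checkLink(url: string): Promise<number> {
  const opts = {
    redirect: "follow" as const,
    signal: AbortSignal.timeout(5000), // per-link 5-second timeout
  };
  let res = await fetch(url, { ...opts, method: "HEAD" });
  if (shouldRetryWithGet(res.status)) {
    res = await fetch(url, { ...opts, method: "GET" });
  }
  return res.status;
}
```

Note the caveat from the answer above still applies: even with the GET fallback, pages that only render content after JavaScript runs can't be verified this way.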
Does this check links on every page of my site?
No — only the page you submit. To check a whole site, run a full CrawlTide crawl. That's the kind of thing free tools don't and shouldn't do.
How do I fix the broken links it found?
For internal broken links: update or remove the anchor in your CMS. For external broken links: replace with a current equivalent, link to a web.archive.org snapshot, or remove the link. Document the fix so the same URL doesn't reappear from a stale CMS template.