It's Not Techy

Topical Authority: How We Map Clusters That Actually Rank

Shabir Malik · 4 min read · June 24, 2024 · Updated Apr 24, 2026

Topical authority is the single most important concept in modern SEO, and also the most misunderstood. Most teams read a blog post about pillar-and-cluster content, dump 40 keywords into a spreadsheet, assign each one to a writer, and publish. Six months later they wonder why the cluster ranks briefly and then collapses. The problem isn't effort; it's the mental model.

We've built ranking clusters for dozens of brands across SaaS, healthcare, fintech, and ecommerce. The ones that hold are structured like a syllabus, not a keyword list. This piece explains how we actually scope, sequence, and measure a cluster so it compounds instead of decays.

Why most topic clusters fail within a year

Traditional cluster strategy treats keywords as the unit of work: find 20 related keywords, write one page per keyword, link them all to a pillar page. This produces a network of pages that look topically related but read as disconnected drafts. Google's quality systems catch this quickly. Pages rank for three to six months while the site earns new-content credit, then drop as engagement signals reveal the cluster doesn't actually help readers.

The second failure mode is keyword cannibalization within the cluster itself. If pages 3, 7, and 12 of your cluster all target 'small business accounting software,' Google picks one and ignores the other two. You've spent writer time producing pages that compete with each other instead of stacking authority.
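Cannibalization of this kind is easy to catch before anything ships if you keep a map of which query each page primarily targets. A minimal sketch, assuming a hypothetical cluster map of page URL to primary target query (the URLs and queries below are illustrative, not from any real site):

```python
from collections import defaultdict

# Hypothetical cluster map: page URL -> the primary query that page targets.
cluster = {
    "/accounting-software-guide": "small business accounting software",
    "/best-bookkeeping-tools": "small business accounting software",
    "/cash-basis-vs-accrual": "cash basis vs accrual accounting",
}

# Invert the map: query -> all pages targeting it.
pages_by_query = defaultdict(list)
for page, query in cluster.items():
    pages_by_query[query].append(page)

# Any query targeted by two or more pages is a cannibalization risk.
conflicts = {q: pages for q, pages in pages_by_query.items() if len(pages) > 1}
for query, pages in conflicts.items():
    print(f"Cannibalization risk: {query!r} targeted by {pages}")
```

Running a check like this against the cluster plan, before briefs go to writers, is cheaper than discovering the conflict in rankings six months later.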

The third failure is depth asymmetry — three rich, well-researched pages surrounded by 20 shallow stubs. The shallow stubs drag down the site-wide helpful-content signal and the strong pages never reach their ceiling because the cluster's overall topical depth looks weak to Google.

The mental model we use instead

Think of a topic cluster as a university syllabus. The pillar page is the course description — broad, orientational, and deliberately high-level. Supporting pages are individual lectures: each one teaches a specific concept that a reader needs at a specific stage of their learning journey. The sequence matters. A lecture on advanced tax strategy doesn't belong in week one; it belongs after the reader has understood what a deduction is and how cash-basis accounting works.

In practice, this means we map every cluster against a reader journey before writing a word. Who's the reader? What do they know at the start? What do they need to know to make the decision the pillar ultimately sells? Every supporting page answers one specific question at one specific stage. Pages don't target keywords; they target reader questions. Keywords are how we validate demand, not how we structure the work.

The five-question sanity check before publishing

Before any cluster ships, we run it through five questions. If the cluster can't answer all five cleanly, it isn't ready.

  • One: can a reader land on any page in the cluster and, through internal links alone, answer 'what is this topic?' — without leaving the site?
  • Two: does the cluster clearly explain why the topic matters, with a concrete example the reader can relate to?
  • Three: is there a page for 'who needs this,' so a reader can self-qualify whether the topic applies to them?
  • Four: is there at least one page that goes deep on 'how does this work' at the level of detail an operator needs, not a glossary entry?
  • Five: is there a clear 'what do I do next' page with a real next action, whether that's a tool, a service, or a decision framework?

If the cluster passes all five, it's a complete learning environment. If it fails even one, there's a hole — and that hole is where readers will bounce to a competitor.
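The five-question check can be run as a simple audit over the cluster plan. A minimal sketch, assuming a hypothetical page list where each page is tagged with the reader questions it answers (the tags and URLs are illustrative; the five question labels come from the check above):

```python
# The five reader questions every complete cluster must cover.
FIVE_QUESTIONS = ["what", "why", "who", "how", "what next"]

def audit_cluster(pages):
    """pages: list of dicts like {"url": ..., "answers": ["what", ...]}.
    Returns whether the cluster covers all five questions, and any gaps."""
    covered = {q for page in pages for q in page["answers"]}
    gaps = [q for q in FIVE_QUESTIONS if q not in covered]
    return {"complete": not gaps, "gaps": gaps}

# Hypothetical cluster plan with one hole.
pages = [
    {"url": "/pillar", "answers": ["what", "why"]},
    {"url": "/who-needs-this", "answers": ["who"]},
    {"url": "/deep-dive", "answers": ["how"]},
]
result = audit_cluster(pages)
print(result)  # the cluster above is missing a 'what next' page
```

The point isn't the code; it's that the audit is mechanical once every page is assigned exactly one question at one stage, so any gap surfaces before publication rather than in bounce rates.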

How to measure whether your cluster is actually compounding

Rankings on individual queries are too noisy to measure cluster health. We track three signals instead, quarterly:

  • Share of voice on head terms within the cluster — the composite ranking position of the top five queries, weighted by search volume. This tells you whether the cluster's center of gravity is rising.
  • Branded queries that reference the cluster topic — people searching '[your brand] + [topic]' means the cluster is becoming a known resource, not just a set of individual pages.
  • Number of queries where two or more cluster pages rank in the top 20 — this is a healthy sign of topical breadth, not cannibalization, as long as the pages target different questions.

If all three are rising over a 12-month window, the cluster is compounding. If rankings rise but branded search doesn't, you're earning impressions without building brand memory — likely because the content is algorithmically optimized but not memorably useful. If branded search rises but ranking share doesn't, the cluster is building reputation but Google hasn't decided you're the authority yet — keep shipping supporting pages.
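The first and third signals reduce to simple arithmetic over rank-tracking data. A minimal sketch, assuming you already export (position, monthly volume) pairs for the cluster's head terms and per-query position lists for cluster pages — the function names and sample numbers are illustrative:

```python
def weighted_share_of_voice(head_terms):
    """head_terms: list of (position, monthly_volume) pairs for the
    cluster's top queries. Returns the volume-weighted average ranking
    position — lower is better, and a falling value quarter over
    quarter means the cluster's center of gravity is rising."""
    total_volume = sum(vol for _, vol in head_terms)
    return sum(pos * vol for pos, vol in head_terms) / total_volume

def multi_page_queries(rankings, top_n=20):
    """rankings: {query: [positions of cluster pages for that query]}.
    Counts queries where two or more cluster pages rank in the top_n —
    the topical-breadth signal."""
    return sum(
        1 for positions in rankings.values()
        if len([p for p in positions if p <= top_n]) >= 2
    )

# Illustrative quarter-end snapshot.
sov = weighted_share_of_voice([(3, 1000), (8, 500)])
breadth = multi_page_queries({"pricing": [3, 15], "setup": [2, 45]})
```

Tracked quarterly, these two numbers plus branded search volume for the topic give the three-signal dashboard described above without depending on any single query's day-to-day noise.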

Key takeaways

  • Most clusters fail because teams think in keywords, not reader journeys. Structure clusters like a syllabus instead.
  • Before publishing, run the five-question check: what / why / who / how / what next. Any gap is where readers bounce.
  • Measure cluster health quarterly with three signals: head-term share of voice, branded search for the topic, and number of queries where multiple pages rank.
  • A cluster that rises on rankings but not branded search is algorithmically optimized, not memorably useful. Fix the content depth.


Need this applied to your business?

Our team ships SEO programs every week. Book a free consult — we'll tell you what would move the needle for your brand.