PitchKitchen Research Report

The State of B2B Homepage Messaging 2026

150 B2B homepages. 18 criteria. One question: Can the machines read your story?

Author

Greg Rosner, PitchKitchen

Published

Sample

150 companies, 5 industries

01 — Executive Summary

Most B2B homepages aren’t broken. They’re invisible.

We hand-scored 150 B2B homepages across healthtech, B2B SaaS, fintech, cybersecurity, and AI/data against an 18-criterion rubric covering messaging foundation, trust and evidence, AI readiness, and conversion. The pattern was consistent across every industry and every size tier... and it wasn’t the pattern most marketing leaders are paying for.

Headline finding

82%

of B2B homepages fail at least one of the three AI-readiness criteria... LLM quotability, copyright freshness, or AI substance over AI-Parmesan.

Sample number. Final value drops when scoring closes.

Three findings that should keep B2B CEOs up at night

01

82% of pages fail at least one AI-readiness criterion.

AI-Parmesan, weak LLM quotability, or stale dates. The category that should be most fluent in AI is the worst at communicating it.

02

Mid-market ($50M-$500M) companies score the highest.

Enterprise pages average 17.4 / 36. Mid-market growth-stage pages average 21.2. Scale-ups trail at 16.7. Bigger isn’t better. Often it’s lazier.

03

Cost of Inaction is the single weakest criterion.

Average 0.4 / 2 across all 150 pages. Almost no homepage tells the buyer what staying stuck actually costs them. That single missing sentence is most companies’ biggest leak.

150

homepages scored

17.3

avg score / 36

0.4

lowest criterion avg / 2

1.6

highest criterion avg / 2

02 — Methodology

18 criteria. 4 categories. One rubric.

Every homepage was graded against the same 18-criterion rubric, by the same human, on the same 0/1/2 scale. No vibes. No algorithm. Just the buyer’s view of the page... and the machine’s.

Sample size

150

B2B homepages, hand-scored.

Industries

5

Healthtech, B2B SaaS, Fintech, Cybersecurity, AI / Data.

Size tiers

3

$500M+ / $50M-$500M / $5M-$50M revenue.

01. Messaging Foundation

10 criteria

#01

The 7-Second Test

In 7 seconds above the fold, can a stranger answer who this is for, what problem it solves, and what point of view it takes?

Sample avg / 150 pages

Weak

0.8 / 2.0

Scoring scale

  • 0: Stranger fails all three questions.
  • 1: Stranger gets one or two of the three.
  • 2: Stranger nails all three in under 7 seconds.
#02

Rebellion / Movement

Is there a named enemy, named status quo, or named industry pattern the company is pushing against?

Sample avg / 150 pages

Critical gap

0.6 / 2.0

Scoring scale

  • 0: No enemy named. Generic positioning.
  • 1: Implied tension, but no concrete villain.
  • 2: A specific, named status quo is called out.
#03

Unique Category Framing

Does the page coin or claim a category, sub-category, or named approach the buyer can recognize and repeat?

Sample avg / 150 pages

Weak

0.9 / 2.0

Scoring scale

  • 0: Reuses commodity category language only.
  • 1: Hints at a frame but doesn't name it.
  • 2: Owns a clearly named category or POV.
#04

ICP Clarity

Can a visitor instantly see whether they're the intended buyer? Role, company size, stage, vertical.

Sample avg / 150 pages

Weak

1.1 / 2.0

Scoring scale

  • 0: No identifiable buyer signal at all.
  • 1: Vague (e.g. 'modern teams', 'growing companies').
  • 2: Buyer, stage, and context are unmistakable.
#05

Problem Leadership

Does the page lead with the buyer's problem in their language, before the solution?

Sample avg / 150 pages

Weak

1.0 / 2.0

Scoring scale

  • 0: Solution-first or feature-first.
  • 1: Problem mentioned but generic.
  • 2: Problem is named in the buyer's own voice.
#06

Customer-Centric vs. Narcissistic

Whose story is the homepage telling... the customer's transformation, or the company's history and capabilities?

Sample avg / 150 pages

Weak

0.9 / 2.0

Scoring scale

  • 0: Almost entirely about the company.
  • 1: Mixed, but tilts toward the company.
  • 2: The customer is the protagonist throughout.
#07

Solution Clarity

Can a visitor explain what the company actually does in one sentence after reading the hero?

Sample avg / 150 pages

Holding

1.3 / 2.0

Scoring scale

  • 0: Buzzwords. No concrete solution.
  • 1: Roughly clear, but requires a second read.
  • 2: Crystal clear in one pass.
#08

Cost of Inaction

Does the page name the price the buyer pays for not changing? Lost deals, slow growth, internal pain.

Sample avg / 150 pages

Critical gap

0.4 / 2.0

Scoring scale

  • 0: No mention of consequences of inaction.
  • 1: Soft consequences, mostly aspirational.
  • 2: Concrete, named cost of staying stuck.
#09

Plan / Process with Outcome

Is there a visible 3-step (or numbered) plan that ties process to a specific outcome?

Sample avg / 150 pages

Weak

1.0 / 2.0

Scoring scale

  • 0: No process. Buyer can't see the path.
  • 1: Process listed but disconnected from outcome.
  • 2: Numbered plan + outcome, clearly linked.
#10

Promised Land

Does the page paint a clear, specific 'after' state the buyer will live in once they choose this company?

Sample avg / 150 pages

Weak

0.9 / 2.0

Scoring scale

  • 0: Absent or counter-productive.
  • 1: Present but generic, weak, or incomplete.
  • 2: Specific, concrete, and well executed.

02. Trust & Evidence

4 criteria

#11

Proof & Evidence

Are there concrete numbers, before/after deltas, or named outcomes (not just adjectives)?

Sample avg / 150 pages

Weak

1.2 / 2.0

Scoring scale

  • 0: No quantified results.
  • 1: Vague proof points (e.g. 'increased revenue').
  • 2: Specific, attributable, time-stamped numbers.
#12

Social Proof

Customer logos, testimonials with names + titles + companies, video stories, named case studies.

Sample avg / 150 pages

Holding

1.5 / 2.0

Scoring scale

  • 0: No social proof or anonymous quotes only.
  • 1: Logos or quotes, but thin or unnamed.
  • 2: Named, role-attributed proof, ideally on video.
#13

Authority & Credibility

Founder credentials, frameworks, original research, books, podcasts, awards. Why this team gets to talk about this.

Sample avg / 150 pages

Weak

1.0 / 2.0

Scoring scale

  • 0: No authority signals.
  • 1: Generic credentials (years, funding raised).
  • 2: Distinct authority assets (IP, frameworks, body of work).
#14

Alternatives Acknowledged

Does the page address the buyer's other options honestly... including 'do nothing' or competitors by name?

Sample avg / 150 pages

Weak

0.7 / 2.0

Scoring scale

  • 0: Pretends to be the only option.
  • 1: Vague 'unlike other tools' language.
  • 2: Names the alternatives and the trade-offs.

03. AI Readiness

3 criteria

#15

AI-Parmesan Index

How heavily does the page sprinkle 'AI-powered', 'AI-enabled', 'AI-first', 'agentic' without saying what the AI does?

Sample avg / 150 pages

Weak

0.7 / 2.0

Scoring scale

  • 0: Heavy AI-Parmesan with no substance.
  • 1: Some AI buzzwords, partially explained.
  • 2: AI claims are specific, mechanistic, and proven.
#16

LLM Quotability

When an LLM crawls this page, are there clean, declarative, citation-ready sentences it can lift into an answer?

Sample avg / 150 pages

Weak

0.8 / 2.0

Scoring scale

  • 0: Marketing fluff. Nothing extractable.
  • 1: Some quotable claims, mostly buried.
  • 2: Multiple short, declarative, citation-ready lines.
#17

Copyright Freshness

Visible 'last updated' date, recent dates on case studies, and a current copyright year. AI engines weight recency.

Sample avg / 150 pages

Weak

1.1 / 2.0

Scoring scale

  • 0: Stale or missing dates. Year is 2+ behind.
  • 1: Copyright current; content undated.
  • 2: Visible recent updates and current copyright.

04. Conversion

1 criterion

#18

CTA Hierarchy

Is there one obvious primary CTA, plus a soft secondary path for visitors not ready to buy? No CTA jungle.

Sample avg / 150 pages

Holding

1.4 / 2.0

Scoring scale

  • 0: No CTA, hidden CTA, or 6+ competing CTAs.
  • 1: Primary CTA present but buried or unclear.
  • 2: One clear primary, one obvious soft secondary.
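The scoring itself was done by hand (see the note on rigor below), but the rubric's arithmetic is simple enough to state precisely. A minimal, purely illustrative sketch in Python, with criterion names shortened from the cards above:

```python
# Illustrative only: scores in this report were assigned by a human.
# This just encodes the rubric's shape: 18 criteria, each scored 0/1/2,
# grouped into 4 categories, for a maximum of 36 points.

RUBRIC = {
    "Messaging Foundation": [  # 10 criteria, 20 points max
        "7-Second Test", "Rebellion / Movement", "Unique Category Framing",
        "ICP Clarity", "Problem Leadership", "Customer-Centric",
        "Solution Clarity", "Cost of Inaction", "Plan / Process with Outcome",
        "Promised Land",
    ],
    "Trust & Evidence": [  # 4 criteria, 8 points max
        "Proof & Evidence", "Social Proof", "Authority & Credibility",
        "Alternatives Acknowledged",
    ],
    "AI Readiness": [  # 3 criteria, 6 points max
        "AI-Parmesan Index", "LLM Quotability", "Copyright Freshness",
    ],
    "Conversion": ["CTA Hierarchy"],  # 1 criterion, 2 points max
}

def score_page(scores: dict[str, int]) -> dict[str, int]:
    """Roll per-criterion 0/1/2 scores up into category subtotals and a /36 total."""
    assert all(s in (0, 1, 2) for s in scores.values()), "scale is 0/1/2"
    subtotals = {
        category: sum(scores.get(criterion, 0) for criterion in criteria)
        for category, criteria in RUBRIC.items()
    }
    subtotals["total"] = sum(subtotals.values())  # out of 36
    return subtotals
```

Ten criteria at 2 points cap Messaging Foundation at 20; the four categories together cap at 36, the denominator used throughout this report.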

A note on rigor

Every score was given by Greg Rosner using the same rubric across the same time window. Where the call was close, the lower score won. The point isn’t to be generous. It’s to show you what your buyers... and the LLMs that increasingly speak for them... actually see.

03 — Overall Findings

Where the field stands.

Aggregate scores across all 150 companies. The category-level numbers tell you where the gap is widest. The criterion-level numbers tell you where the gap is fixable.

Average score by category

Placeholder data
  • Messaging Foundation: 9.3 / 20
  • Trust & Evidence: 4.4 / 8
  • AI Readiness: 2.6 / 6
  • Conversion: 1.4 / 2

At 43% of available points, AI Readiness trails the strongest category (Conversion, at 70%) by roughly 27 percentage points.

Top 5 weakest criteria across all 150 pages

Placeholder data
  • Cost of Inaction: 0.4 / 2
  • Rebellion / Movement: 0.6 / 2
  • AI-Parmesan Index: 0.7 / 2
  • Alternatives Acknowledged: 0.7 / 2
  • LLM Quotability: 0.8 / 2

Lower bar = lower average score. All five sit below a full point on the 2-point scale.

Scored companies — sample selection

Placeholder data — full set drops with the audit

Northbound Health

27/36

healthtech · $50M-$500M

“Strong on Authority + Promised Land. Light on Cost of Inaction.”


Cardinal RCM

19/36

healthtech · $5M-$50M

“Solution-clear, but the AI-Parmesan score dragged it down.”


Linear Quotient

31/36

b2b / saas · $50M-$500M

“Highest score in the data set so far. Names a category.”


Sumter Stack

22/36

b2b / saas · $5M-$50M

“Strong rebellion narrative. Weak on quantified proof.”


Warden Financial

16/36

fintech · $500M+

“Generic enterprise messaging. Very low LLM quotability.”


Atlas Payments

24/36

fintech · $50M-$500M

“Honest about alternatives. Weak Promised Land.”


Obsidian Cyber

20/36

cybersecurity · $50M-$500M

“Pure FUD-led messaging. Buyer barely visible.”


Perimeter Zero

14/36

cybersecurity · $5M-$50M

“AI-Parmesan score: 0. Worst on the rubric.”


Harmonic Data

18/36

ai / data · $50M-$500M

“Says ‘AI-powered’ 14 times. Says what it does once.”


Lumen Intelligence

23/36

ai / data · $500M+

“Beautiful design. Average story.”


FirstFrame SaaS

28/36

b2b / saas · $50M-$500M

“Names the buyer in the hero. Names the enemy in the second fold.”


Pacific Claims

17/36

healthtech · $500M+

“All capabilities, no story. Classic enterprise pattern.”

04 — By Industry

Five industries. Five different stories.

30 homepages per industry, scored against the same 18 criteria. Each section shows the average score, the criterion that hurts most, the criterion that hurts least, and a curated screenshot gallery.

Industry 01 of 5

Healthtech

30 companies scored

30 healthtech homepages, from RCM platforms to clinical AI vendors. The category is louder than ever about ‘AI’, quieter than ever about who specifically benefits and how.

Average score

18.6

/ 36

Strongest criterion

Social Proof

1.6 / 2

Weakest criterion

Cost of Inaction

0.3 / 2

Top scorer

Northbound Health (27 / 36)

Bottom scorer

Pacific Claims (17 / 36)

What this means

Healthtech buyers are drowning in ‘modernize your revenue cycle’ pages. The pages that win specify a buyer (CFO of a 200-bed hospital), a problem (denial rate over 8%), and a process (90-day implementation). Almost nobody does all three.

Screenshot gallery — sample


Industry 02 of 5

B2B SaaS

30 companies scored

30 B2B SaaS pages, weighted toward the $50M-$500M growth tier where most buyers actually evaluate alternatives. SaaS leads on category framing and lags on rebellion.

Average score

21.4

/ 36

Strongest criterion

Solution Clarity

1.6 / 2

Weakest criterion

Rebellion / Movement

0.5 / 2

Top scorer

Linear Quotient (31 / 36)

Lowest sampled scorer

Sumter Stack (22 / 36)

What this means

B2B SaaS pages know how to describe what the product does. They’ve forgotten how to fight someone. The top scorers all name a status quo... legacy CRMs, ticket sprawl, ‘the way Salesforce makes you do it.’

Screenshot gallery — sample


Industry 03 of 5

Fintech

30 companies scored

30 fintech homepages, half infrastructure providers, half embedded finance. The compliance language is consistent. The reason a buyer should care is not.

Average score

19.2

/ 36

Strongest criterion

Authority & Credibility

1.4 / 2

Weakest criterion

Promised Land

0.6 / 2

Top scorer

Atlas Payments (24 / 36)

Bottom scorer

Warden Financial (16 / 36)

What this means

Fintech homepages over-index on ‘trust signals’ (SOC 2 logos, partner crests) and under-index on what life actually looks like for the buyer after they buy. Trust without an after-state reads as table stakes.

Screenshot gallery — sample


Industry 04 of 5

Cybersecurity

30 companies scored

30 cybersecurity pages from EDR, SOAR, and identity vendors. The category is the most fear-led in the data set, and unsurprisingly the most narcissistic.

Average score

17.1

/ 36

Strongest criterion

Problem Leadership

1.3 / 2

Weakest criterion

Customer-Centric vs. Narcissistic

0.5 / 2

Top scorer

Obsidian Cyber (20 / 36)

Bottom scorer

Perimeter Zero (14 / 36)

What this means

Cybersecurity pages name the threat well. Then they spend the rest of the page bragging about themselves. The buyer’s transformation almost never appears... yet that is what every CISO is actually buying.

Screenshot gallery — sample


Industry 05 of 5

AI / Data

30 companies scored

30 AI and data infrastructure homepages, from LLM platforms to vector DBs. This is the worst-scoring category in the data set, nearly tied with cybersecurity (17.1) at the bottom.

Average score

16.8

/ 36

Strongest criterion

Solution Clarity

1.2 / 2

Weakest criterion

AI-Parmesan Index

0.4 / 2

Top scorer

Lumen Intelligence (23 / 36)

Lowest sampled scorer

Harmonic Data (18 / 36)

What this means

The category most likely to be quoted by AI engines... is the worst at giving AI engines anything to quote. The best AI / Data pages stripped ‘AI-powered’ from the headline and put a one-sentence outcome there instead.

Screenshot gallery — sample


05 — By Size Tier

Bigger isn’t better.

Scale-up, growth, and enterprise homepages compared head to head. The data shatters the assumption that more revenue equals better messaging. Mid-market growth-stage companies are running circles around their bigger and smaller cousins.

Enterprise

$500M+

50 companies in this tier.

Average score

17.4

/ 36

Growth

Highest scoring tier

$50M-$500M

50 companies in this tier.

Average score

21.2

/ 36

Scale-up

$5M-$50M

50 companies in this tier.

Average score

16.7

/ 36

Average score by size tier (out of 36)

Placeholder data
  • $500M+ enterprise: 17.4
  • $50M-$500M growth: 21.2
  • $5M-$50M scale-up: 16.7

The mid-market growth tier outscores enterprise by 3.8 points and scale-up by 4.5.

Why mid-market wins

Growth-stage companies have to convince a buyer to take a bet. Enterprise pages have stopped trying to convince anybody... they’re built for analyst summaries and RFPs. Scale-ups are still figuring out who they’re for. The middle tier sits in the only zone where the homepage is still a real revenue lever, and the scoring shows it.

06 — The AI Readiness Gap

The machines are already buying. Most pages can’t answer them.

When ChatGPT, Claude, and Perplexity crawl your homepage, three things determine whether they cite you... or someone else. LLM quotability. AI substance over AI-Parmesan. And the simple act of putting a date on the page.

AI-Parmesan Index

68%

of pages claim ‘AI-powered’, ‘AI-native’, or ‘agentic’ without specifying what the AI actually does or what outcome it drives.

LLM Quotability

41%

of pages have at least one short, declarative, citation-ready sentence an LLM can lift into an answer. The other 59% are too marketing-soft to extract.

Copyright Freshness

55%

of pages display a current copyright year or another visible recency signal. It’s the cheapest fix on the rubric, and almost half of pages still miss it.

Why this matters

AI engines pick the page that gives them the cleanest, freshest, most quotable answer. They don’t pick the one with the prettiest hero image. The pages that figure this out first will own the next decade of B2B discovery... and the data set says only about 1 in 5 are anywhere close.
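To make ‘quotable’ concrete: a hypothetical fluff line like ‘We empower modern teams to transform their workflows’ gives an engine nothing to lift, while a hypothetical line like ‘Acme cuts invoice reconciliation from 3 days to 4 hours for mid-market CFOs’ is short, declarative, and citation-ready. Same page real estate, completely different extractability.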

AI Readiness by industry (avg / 6)

Placeholder data
  • B2B SaaS3.5 / 6
  • Fintech2.8 / 6
  • Healthtech2.5 / 6
  • Cybersecurity2.3 / 6
  • AI / Data2.1 / 6

The category that talks most about AI is the worst at being read by AI.

07 — Recommendations

3 things every CEO should do Monday morning.

You don’t need a six-month rebrand to move every one of these scores. Three concrete actions, ordered by leverage. Start at the top.

01

Run the 7-Second Test on your own homepage.

Show your homepage to three strangers for seven seconds, then ask them who it’s for, what problem it solves, and what point of view it takes. If they can’t answer all three, your buyers can’t either. Average score on this criterion is 0.8 / 2... most pages fail at least one of the three questions.

02

De-Parmesan your AI claims.

Replace every “AI-powered” with the actual outcome the AI delivers. If you can’t say what the AI does in one concrete sentence, the LLMs crawling your page can’t either. 68% of pages in the data set fail this criterion outright.
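For a quick inventory before the rewrite, here is a toy sketch of that pass (this is not how pages were scored for this report, and both the buzzword list and the sample copy lines are hypothetical):

```python
import re

# Toy 'de-Parmesan' pass: count vague AI buzzwords so you can replace
# each one with the concrete outcome it is standing in for.
PARMESAN = re.compile(
    r"\bAI[- ](?:powered|enabled|native|first|driven)\b|\bagentic\b",
    re.IGNORECASE,
)

def sprinkle_count(copy: str) -> int:
    """How many AI buzzwords the copy sprinkles without explanation."""
    return len(PARMESAN.findall(copy))

# Hypothetical hero copy, before and after the rewrite:
before = "Our AI-powered, agentic platform transforms your workflows."
after = "Flags the 12% of claims likely to be denied before you submit them."
print(sprinkle_count(before), sprinkle_count(after))  # 2 0
```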

03

Date the page.

Put a visible “last updated” line on the homepage. Update it monthly. Recency is a citation signal, and right now 45% of pages in the data set show neither a current copyright year nor any other visible freshness signal. This is the cheapest fix on the entire rubric.
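One way to wire that in (a sketch, assuming a templated homepage; the helper name and CSS class are ours, not a standard): pair the visible line with schema.org dateModified markup, so the recency signal is machine-readable as well as human-visible.

```python
from datetime import date
import json

def freshness_block(updated: date) -> str:
    """Render a visible 'last updated' line plus machine-readable
    schema.org dateModified markup for a homepage template."""
    jsonld = {
        "@context": "https://schema.org",
        "@type": "WebPage",
        "dateModified": updated.isoformat(),
    }
    return (
        f'<p class="last-updated">Last updated: {updated:%B %Y}</p>\n'
        f'<script type="application/ld+json">{json.dumps(jsonld)}</script>'
    )

# Regenerate on each deploy so the date never goes stale:
print(freshness_block(date.today()))
```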

08 — About / Next Steps

Want this audit on your homepage?

We run the same 18-criteria scoring on your B2B homepage as part of every Open Kitchen engagement. If you want the fast version, the free NarcScore™ tool will run a slimmer version of the rubric on your page in five minutes.


About the author

Greg Rosner

Founder of PitchKitchen. Creator of the Magnetic Messaging Framework. Author of StoryCraft for Disruptors. Greg has rebuilt the homepage, story, and AI brand twin for dozens of B2B companies, from $5M scale-ups to enterprise rebellions.

Last updated: