AI-Parmesan Just Became a Securities Problem

By Greg Rosner
Founder of PitchKitchen · Author of StoryCraft for Disruptors
· 7 min read

TL;DR
- Fortune just ran a piece by the head of Baker McKenzie's securities litigation practice about AI-washing class actions: 51 of them filed in the last five years (per Secretariat).
- Global Predictions got SEC-charged for marketing itself as "the first regulated AI financial advisor." Innodata's stock dropped 30% after a short seller called out its AI claims.
- The first wave of cases asked "did the AI exist?" The new wave asks "does it meaningfully change the economics?" Most B2B homepages can't answer that.
- The fix isn't a legal review. It's a specific, opinionated, boring sentence about what you actually do that nobody else does.
- Boring sentences win court cases. They also convert better. And in 2026, they're the only sentences AI agents will quote when buyers ask the machine for a recommendation. Three audiences. One sentence.
Read that sentence again
Fortune just ran a piece from Baker McKenzie on AI-washing.
Read that sentence again.
Not a marketing trade pub. Not a SaaS newsletter. Fortune. With a global law firm explaining why the regulatory reckoning is coming.
The author is Perrie M. Weiner, head of Baker McKenzie's Securities Litigation Practice Group in North America. He wasn't writing for marketers. He was writing for boards and general counsels. The message: the era of "AI-powered" homepage copy without consequence is over. Plaintiffs' lawyers are reading your homepage now too.
If your homepage says "AI-powered platform for modern enterprises," you've already told three audiences you have no real story. Customers. Investors. And as of this week, plaintiffs' lawyers.
The data, by the numbers
51 AI-related securities class actions filed in the last five years. The source isn't Baker McKenzie. It's Secretariat, the consulting firm tracking the litigation. A significant majority of those 51 alleged that companies overstated or misrepresented their AI capabilities.
The cases that hit the SEC, not just the courthouse, are the ones to study.
In March 2024 the SEC charged Delphia (USA) Inc. for promoting "unsubstantiated AI-driven investing capabilities."
The same week, the SEC charged Global Predictions Inc. for marketing itself as "the first regulated AI financial advisor." That sentence, the one that probably looked like a clever positioning win in a Friday deck review, became the lead exhibit in an enforcement action.
In early 2024 a short seller publicly accused Innodata, Inc. of exaggerating AI's role in its business model. A class action followed. The stock dropped 30%.
Three real companies. Three sentences that didn't survive scrutiny. Three sets of investors who lost money, and three sets of plaintiffs' lawyers who are now cataloging which other companies say similar things.
The shift the lawyers are noticing
Here's the part of the Fortune piece that should change how you read your own homepage.
The first wave of AI-washing cases looked like traditional fraud. Critics argued the AI didn't exist at all. Prove the AI exists, and you survive.
The new wave is sharper. The plaintiffs' bar isn't asking "did the AI exist?" anymore. They're asking, in Weiner's framing, whether the AI "meaningfully change[s] the economics of the business."
Translation: it's not enough to use AI. The AI has to demonstrably move margin, drive revenue, change the unit economics, or create a defensible advantage. If you said it does, on your homepage, in your earnings call, in your investor deck, you have to be able to prove it.
Most B2B companies can't. They sprinkled AI on the homepage to keep up with the deck their competitor posted last month. There's no underlying claim about economics. There's just vibes.
Vibes don't survive depositions.
AI-Parmesan, two years later
I've been telling founders for two years that "AI-Parmesan," sprinkling AI on a weak story like cheese on a mediocre pasta, was a positioning problem. The phrase landed. The pattern didn't change.
Two years ago, AI-Parmesan was a positioning problem. Your homepage sounded like every competitor.
One year ago, it became an LLM Invisibility problem. ChatGPT, Claude, and Perplexity started recommending companies based on whose homepage said something specific enough to quote. Generic AI sentences became invisible to the machines that buyers were now asking for recommendations.
This year, AI-Parmesan is a securities problem. Same empty sentence. Same fix. Three different audiences yelling about it now instead of one.
The pattern is converging. Whatever sentence on your homepage doesn't survive an investor lawsuit also doesn't get cited by AI agents and doesn't convert buyers. Whatever sentence does all three is the same sentence. Boring. Specific. Defensible.
Three audiences. One sentence.
This is the part most founders haven't fully clocked.
The boring, specific, defensible sentence wins on three different fronts at once.
It survives a securities suit, because every claim in it is grounded in something you can actually demonstrate. The plaintiffs' lawyers don't have anything to hang a misrepresentation theory on, because you didn't claim more than you can prove.
It gets cited by AI agents, because ChatGPT and Claude and Perplexity are looking for sentences specific enough to quote when a buyer asks for a recommendation. Generic doesn't make the cut. Specific does.
It converts buyers, because real buyers can recognize themselves in specific language. A founder reading "AI-powered platform for modern enterprises" doesn't know if you mean them. A founder reading "98% of clinical notes signed within two hours, for FQHCs with under 50 providers" knows immediately whether they're in or out.
Same sentence. Three jobs. One fix.
What a defensible sentence actually looks like
Look at the contrast.
Sentences that fail all three tests:
- "AI-powered platform for modern enterprises."
- "AI-enabled healthcare for the next generation."
- "AI-infused workflows for legal teams."
And the one that triggered an actual SEC enforcement action: "The first regulated AI financial advisor." That's what Global Predictions said about itself. It's also the sentence the SEC built its case around.
What do the first three have in common? They claim nothing specific, so there's nothing for a plaintiff to test, nothing for an AI agent to quote, and nothing for a buyer to recognize themselves in. The fourth made a specific claim the company couldn't substantiate, and the SEC disproved it. Vague gets you ignored. Specific-but-unsupported gets you sued. Either way, the sentence fails.
Now the format that works.
[Specific buyer] [specific measurable outcome] by [specific mechanism].
Example: "Mid-market manufacturers with 50 to 500 suppliers cut $2.3M in annual visibility losses by replacing trust-based supplier check-ins with real-time freight telemetry."
That sentence names a buyer (mid-market manufacturers, 50 to 500 suppliers). It names a measurable outcome ($2.3M in losses prevented). It names a mechanism (trust-based check-ins replaced by real-time telemetry). Every claim in it can be supported with case studies, customer data, or product documentation.
That's the sentence the plaintiffs' lawyer can't build a case around. It's also the sentence ChatGPT will quote when asked about supply chain platforms. It's also the sentence the right buyer reads and says "that's me."
That sentence is boring. By design.
What this means for you
The fix isn't a legal review.
The fix is the truth.
What do you actually do? For whom? That nobody else does? Said specifically.
Run these three tests on your homepage this week.
1. The Deposition Test. Read each claim on your homepage as if a plaintiffs' lawyer is going to ask you to defend it under oath. For each AI-related claim, can you produce documented evidence? Customer data, product behavior, measurable outcome? If "no" or "not really" comes up more than once, you're carrying litigation exposure you didn't price into your roadmap.
2. The Recommendation Test. Open ChatGPT or Claude. Ask "what are the best [your category] companies for [your specific buyer]?" If you're not mentioned, the machines have already decided your sentence isn't specific enough to quote. Same diagnosis as the Deposition Test, different symptom.
3. The Recognition Test. Show your homepage to a real buyer in your ICP. Cover everything except the first three sentences. Ask: "is this for you?" If they hesitate, your sentence isn't doing its job for them either.
If your homepage fails all three tests, you don't have three problems. You have one. The same boring, specific, defensible sentence fixes all of them.

Boring sentences win court cases. They also convert better. And in 2026, they're the only sentences AI agents will quote when your buyers ask the machine for a recommendation. Three audiences. One sentence. That's the work.

If you're not sure your homepage can pass all three tests, that's the room I sit in. Open Kitchen
Questions People Ask
What is AI-washing?
AI-washing is the practice of marketing AI capabilities a company doesn't actually have, or overstating what its AI actually does. The term mirrors "greenwashing," which describes companies overstating environmental practices. As of 2026, AI-washing has triggered SEC enforcement actions against multiple companies and has been the basis for at least 51 securities class actions in the last five years (per consulting firm Secretariat).
Can my company really get sued for what's on my homepage?
Yes, if your investors or customers can argue they relied on it. The SEC has charged Delphia (USA) and Global Predictions over AI-related claims, including claims made in marketing materials. Innodata's stock dropped 30% after a short seller cited the company's AI claims, triggering a class action. The question in newer cases isn't whether the AI exists. It's whether the AI "meaningfully changes the economics of the business," in the framing of Baker McKenzie's head of securities litigation in North America. Generic AI claims ("AI-powered platform") are particularly exposed because they imply economic impact without any specific, defensible claim underneath.
What does a defensible AI sentence actually look like?
Defensible AI sentences name a specific buyer, a specific measurable outcome, and a specific mechanism. "AI-powered platform for modern enterprises" fails all three. "Mid-market manufacturers with 50 to 500 suppliers cut $2.3M in annual visibility losses by replacing trust-based supplier check-ins with real-time freight telemetry" passes all three. The same specificity that makes a sentence legally defensible also makes it quotable by AI agents (ChatGPT, Claude, Perplexity) when buyers ask the machine for recommendations, and recognizable to real buyers reading the homepage.
