1,102 manufacturer websites. Word by word
I wrote 144 commodity-phrase rubrics, 22 differentiation-signal patterns, and a scoring algorithm, then ran the whole thing across 1,102 real manufacturer homepages in 29 industries. The question: could a buyer tell one company from another based on what the site actually says?
I expected bad websites. I found good ones saying nothing
I designed the scoring rubric to read manufacturer homepages the way a buyer does, looking for anything that distinguishes one company from another: specific industries served, certifications, named capabilities, customer references, lead times, and whether the language is particular to one company or could describe any company in the same category.
The first surprise: 94% of these sites are mobile responsive. 80% have contact forms. The infrastructure is mostly there. But 83% contain nothing a buyer could use to distinguish this company from a competitor in the same industry. Words are on every page, but the right words aren’t.
One way to see this: how often these sites talk about themselves versus talking to the buyer. A you-to-we ratio of 1.0 means equal. B2B SaaS companies average 1.5 to 2.5, meaning they address the buyer more than they reference themselves. Across 1,102 manufacturer sites, the ratio is 0.58. These sites say “we” almost twice as often as they say “you.”
336 sites open with “quality” as their lead claim
The finding isn’t that “quality” appears on manufacturer websites. Of course it does. The finding is that 31% of sites use it as the primary message a visitor sees on arrival, the headline or opening statement, the thing that’s supposed to tell a buyer why this company and not the next one.
Add “precision” and 136 more sites (12%) stack both words as their primary descriptors. The average site in this dataset uses 4+ generic phrases. The opening isn’t empty. It’s full of words that do nothing.
The chart below separates where each phrase appears. The light bar is anywhere on the site. The dark bar is in the hero, the single element a visitor reads before deciding whether to stay. When 30% of manufacturers put “quality” in the hero, it’s not a differentiator. It’s wallpaper.
Where the dead phrases live
In some industries, the average company uses both
Some industries don’t just overuse one dead phrase. They stack them. When your industry’s combined “quality” + “precision” rate exceeds 100%, the average company in your category uses more than one of the two. In metal stamping, 74% say “quality” and 81% say “precision,” which means at least 55% say both. That’s not differentiation. That’s a shared vocabulary.
A buyer comparing three metal stamping shops sees three sites that say “quality” and “precision.” Same words. Same order. Sometimes the same template. The buyer does the only rational thing: picks the cheapest one.
Nearly everyone scores average, almost nobody breaks out
Each site gets a differentiation score from 0 to 100. The more generic the language, the lower the score. The more specific, provable claims, the higher. The distribution tells the story: 74% of all 1,102 sites cluster in the 45–59 band. Virtually none reach “well differentiated.”
Differentiation score distribution
[Chart: five score bands, from “highly commoditized” through “average” to “well differentiated”]
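The banding can be sketched as a simple cutoff function. Only the 45–59 “average” band and the above-75 anchor appear in the methodology; the other cutoffs below are assumptions for illustration.

```python
def score_band(score: float) -> str:
    """Map a 0-100 differentiation score to its band.

    The 45-59 "average" band and the 75+ threshold follow the write-up;
    the 30 and 45 cutoffs are assumed for illustration.
    """
    if score < 30:
        return "highly commoditized"   # assumed cutoff
    if score < 45:
        return "below average"         # assumed cutoff
    if score < 60:
        return "average"               # the 45-59 band where 74% of sites cluster
    if score < 75:
        return "above average"
    return "well differentiated"       # the "above 75" anchor from the methodology
```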
Two findings from this distribution are worth noting.
Features don’t predict differentiation. I tested every common website feature against the score: blogs, case studies, video, live chat, product photos. No feature moved the score by more than 1.3 points on a 100-point scale. Negligible. A company with a blog, video, and case studies can still score 48 if the language on every page says “quality products” and “innovative solutions.” The score is driven entirely by language.
No industry is statistically different from the mean. Z-tests across all 29 industries: not one reaches significance. The highest z-score is 1.6. The range of industry averages is 49.3 to 54.5. A thermoformer and a die caster sound more alike than two random die casters sound different. Commoditization is uniform.
The things buyers need that almost nobody provides
The other side of the dead-phrase data: almost none of these sites say the things buyers actually use to make decisions. I tracked the presence of several categories of differentiating content across all 1,102 sites.
% of sites missing each signal
9% of sites mention lead time. Nine percent. In a category where lead time is often the deciding factor, fewer than one in ten bother to mention it.
12% have a named customer or case study. The remaining 88% ask a buyer to evaluate their credibility without evidence. The phrase “trusted partner” appears on 6% of sites, none of which provide a verifiable basis for the claim.
58% of these sites don’t even have a call to action above the fold. The contact form exists on 79% of them, but it’s buried. The hero section, the first thing a visitor sees, doesn’t invite them to take a step.
Any company willing to say the specific thing, the true thing, is standing alone on a field where everyone else looks identical.
Five sites that at least said something
These aren’t examples of great messaging. They’re examples of specificity, the minimum viable break from the pattern. Out of 1,102 sites, these five stood out simply because they said something a competitor could not have said word-for-word. The bar is that low.
They’re just the rare sites that chose a specific word over a generic one. Imagine what happens when the language is actually designed.
What the data suggests
The data describes a self-reinforcing pattern. Everyone uses the same words because those are the words they’ve seen on competitor sites. Competitors use those words because that’s the industry vocabulary. The vocabulary stops meaning anything. Buyers default to price because they have nothing else to go on.
Companies that say specific things (named industries, certifications, lead times, actual customers) almost always stand out. Not because they’re necessarily better, but because everyone else refused to say anything particular.
There’s usually a reason a company is different. A specialty that took years to develop. A customer relationship that goes back decades. A capability that a competitor genuinely can’t match. The data suggests those things exist in most of these companies. They just don’t appear on the website.
The fix isn’t necessarily a new website. It’s new words. Specific words. Words that name what you actually make, who you actually serve, and what actually happens when someone works with you. The companies in this dataset that scored highest didn’t have bigger budgets or better designers. They had specific language.
How this was done
1. Study design
Cross-sectional observational study of manufacturer and trades-company homepages. The research question: to what extent does publicly visible website language differentiate individual firms within their stated industry category? No experimental intervention was applied. All measurements are based on publicly accessible page content at the time of collection (Q1 2026).
2. Sample
N = 1,102 sites across 29 manufacturing and trades sub-industries. Selection combined geographic sampling (top organic results for “[category] + [city]” queries across 40 U.S. metro areas), industry directory listings, and manual sourcing to reduce survivorship bias. Sample sizes per sub-industry range from n = 13 (powder coating, thermoforming) to n = 72 (CNC machining). No site appears more than once. Sites were excluded if the homepage was behind a login wall, under construction, or returned a non-200 HTTP status.
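The exclusion rules reduce to a simple filter. This is a sketch with hypothetical field names, not the collection pipeline itself; it assumes each candidate site has already been fetched and flagged.

```python
from dataclasses import dataclass

@dataclass
class Homepage:
    url: str
    status: int              # HTTP status returned at fetch time
    login_walled: bool       # homepage content requires authentication
    under_construction: bool # placeholder / "coming soon" page

def include_in_sample(page: Homepage) -> bool:
    """Keep only live, publicly readable homepages (non-200, login-walled,
    and under-construction sites are excluded, per the sampling rules)."""
    return (page.status == 200
            and not page.login_walled
            and not page.under_construction)
```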
3. Instrument
Each site was scored against a deterministic rubric comprising 144 commodity-phrase patterns (severity weight 2–7) and 22 differentiation-signal patterns (strength 3–9). The rubric is applied to extracted page text, not rendered visuals. 120 data points were recorded per site across four dimensions: content, design, technical infrastructure, and positioning signals.
The differentiation score (0–100) is computed as the ratio of differentiation-signal strength to total signal weight (commodity + differentiation). A site composed entirely of generic phrases scores near 45; a site with strong, specific claims and no commodity language scores above 75. The algorithm is deterministic and reproducible from text alone.
4. Weighting
Commodity phrases are weighted by ubiquity and semantic vacuity: “quality” (weight 5) is penalized more heavily than “family-owned” (weight 2) because it appears at higher frequency and carries less informational content. Differentiation signals are weighted by specificity: a named customer (strength 7) scores higher than a vague industry reference (strength 3). Common certifications (ISO 9001 at 27% prevalence, ITAR at 22%) are downweighted to strength 3 as table stakes. Rare certifications (FDA at 0.7%, UL at 0.3%) retain full strength 8.
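Putting the two weight tables and the score ratio together, a minimal sketch looks like the following. The pattern entries here are placeholders (the actual 144 + 22 patterns are not published), and the published anchor scores imply calibration beyond this raw ratio; only the weight ranges and the ratio itself come from the description above.

```python
import re

# Hypothetical excerpts from the two pattern tables. Weights follow the
# stated ranges: commodity severity 2-7, differentiation strength 3-9.
COMMODITY = {
    r"\bquality\b": 5,           # ubiquitous, near-zero information
    r"\bprecision\b": 5,
    r"\bfamily[- ]owned\b": 2,   # common but mildly informative
}
DIFFERENTIATION = {
    r"\bISO 9001\b": 3,          # common cert, downweighted as table stakes
    r"\bFDA\b": 8,               # rare cert, full strength
    r"\blead times? of \d+": 7,  # concrete operational claim
}

def differentiation_score(text: str) -> float:
    """Score = differentiation-signal strength as a share of total matched weight."""
    commodity = sum(w for pat, w in COMMODITY.items() if re.search(pat, text, re.I))
    diff = sum(w for pat, w in DIFFERENTIATION.items() if re.search(pat, text, re.I))
    total = commodity + diff
    return 100.0 * diff / total if total else 0.0
```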
5. Definitions
Dead phrase. A descriptor whose cross-dataset frequency is high enough that its presence on any individual site carries near-zero information value for a buyer comparing firms. Measured at two levels: site-wide frequency and hero-section frequency.
Zero positioning signals. A binary classification. A site is coded “zero” when it contains none of the following: specific industries or customers named; specific capabilities, tolerances, or materials; named certifications; lead time or capacity information; any claim a competitor in the same sub-industry could not make word-for-word.
You-to-we ratio. Count of second-person pronouns (you, your, you’re) divided by first-person plural pronouns (we, our, us) across all extracted homepage text. A ratio below 1.0 indicates the site references itself more often than it addresses the visitor.
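The ratio is a direct implementation of this definition. A minimal sketch:

```python
import re

def you_to_we_ratio(text: str) -> float:
    """Second-person pronoun count divided by first-person plural count,
    per the definition above. Returns inf when no first-person pronouns occur."""
    you = len(re.findall(r"\b(you|your|you're)\b", text, re.I))
    we = len(re.findall(r"\b(we|our|us)\b", text, re.I))
    return you / we if we else float("inf")
```

Applied to the dataset's extracted homepage text, a result like 0.58 means the site says “we/our/us” almost twice as often as “you/your.”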
6. Statistical tests
Z-tests for inter-industry differentiation-score differences using pooled standard deviation. No sub-industry reaches p < 0.05. The highest observed z-score is 1.6 (n.s.). The range of sub-industry mean scores is 49.3 to 54.5 on the 100-point scale. Pearson correlations between individual website features (blog presence, case studies, video, live chat) and differentiation score are all r < 0.08 (equivalent to <1.3 points on the scale). Design quality scores were calibrated via Claude Haiku vision model against a 10-point visual assessment rubric developed from 50 manually scored reference sites.
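One way to compute the per-industry z-statistic, using the overall standard deviation as a stand-in for the pooled estimate (the study's exact pooling is not specified):

```python
import math

def industry_z(industry_scores: list[float], all_scores: list[float]) -> float:
    """Z-statistic for one sub-industry mean against the overall mean,
    standardized by the overall sample SD over sqrt(n)."""
    n = len(industry_scores)
    overall_mean = sum(all_scores) / len(all_scores)
    var = sum((x - overall_mean) ** 2 for x in all_scores) / (len(all_scores) - 1)
    sd = math.sqrt(var)
    industry_mean = sum(industry_scores) / n
    return (industry_mean - overall_mean) / (sd / math.sqrt(n))
```

With 29 comparisons, a maximum |z| of 1.6 falls well short of significance even before any multiple-testing correction.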
7. Limitations
The sample is limited to U.S.-based firms with English-language homepages. Selection via organic search results introduces ranking bias. The rubric measures textual content only; visual design, interactive elements, and off-site brand presence are not captured in the differentiation score. The 29 sub-industry categories are researcher-defined and may not correspond to how firms self-categorize. Sites are not named individually in the published findings; the unit of analysis is the pattern, not the particular firm.