April 2026

The Credibility Δ

Outplacement providers report 92-98% satisfaction. Independent platforms show 1.7 to 2.8 out of 5. These numbers cannot both be true.


The outplacement industry has built its commercial reputation on a specific, measurable claim: that participants overwhelmingly value the service. LHH reports that 97% of participants “strongly recommend” their program. Right Management cites 98.5% satisfaction. Randstad (RiseSmart) claims 92%. These figures appear prominently in sales materials, RFP responses, and case studies. They are presented as evidence that the service works.

LHH · Self-reported · 97% "strongly recommend"

Right Management · Self-reported · 98.5% "satisfied"

Randstad (RiseSmart) · Self-reported · 92% "satisfied"

Those numbers are not wrong in a narrow, technical sense. They reflect what providers measured, using the instruments they designed, administered to the audiences they selected, at the times they chose. The problem is that an entirely separate body of evidence exists, generated by the people who actually received the service, and it tells a fundamentally different story.

The independent signal

On platforms where participants leave reviews voluntarily and anonymously, the picture inverts. LHH holds a 1.7 out of 5.0 on Trustpilot across 284 reviews; 83% of those reviews are one star. Right Management averages 2.1 on Yelp. Randstad (RiseSmart) averages 2.8 on Trustpilot. Korn Ferry sits at 2.3 on Glassdoor.

Gartner, which surveys corporate HR buyers rather than participants, rates LHH at 3.7 out of 5.0. That figure sits neatly between the provider's self-reported number and the participant experience. The pattern is consistent: the closer you get to the person actually receiving the service, the worse the rating becomes.

Provider-reported satisfaction vs. independent participant ratings

The gap between what providers claim and what participants report


Self-reported figures from provider marketing materials and case studies. Independent ratings from Trustpilot, Yelp, and Glassdoor as of Q1 2026. Both normalized to a 0-100 scale for comparison.

This is not a measurement problem

The industry's instinct will be to frame this as a measurement dispute: different methodologies producing different results. That framing is wrong. The divergence is not an artifact of survey design. It is the predictable result of a system that was built to produce one answer.

Provider satisfaction surveys are not administered to the people who received the service. They are administered to the HR buyer who purchased it. They are sent mid-engagement, before outcomes are known. They exclude the 35% of participants who never activate. And they are designed, administered, analyzed, and published by the provider itself, with no independent verification at any stage.

That is not a methodology difference. It is a system designed to be structurally incapable of producing a bad number. The 97% was never a finding. It was an inevitability.

The independent platforms tell the truth because they measure the right people, at the right time, with no incentive to flatter. When participants who actually used the service are free to say what they think, after they know whether it worked, the numbers collapse. That is not a platform bias problem. That is a product problem.

Distribution of lowest ratings

Percentage of reviews that are one star


Percentage of total reviews rated 1 star on each provider's primary independent review platform.

What the reviews actually say

Public reviews are imperfect data. They skew negative, attract outliers, and lack context. No serious reader should treat any single review as representative. But when patterns repeat across hundreds of reviews, across multiple providers, on multiple platforms, they stop being anecdotes and start being signal.

We analyzed 532 public reviews across Trustpilot, Yelp, and Glassdoor for the four largest outplacement providers. Four themes account for the majority of negative feedback.

Coach availability · Cited in 38% of negative reviews
Participants report being assigned a coach, then waiting weeks for a first session. Follow-up sessions are described as infrequent, brief, or repeatedly rescheduled.

Platform quality · Cited in 29% of negative reviews
Job boards are described as aggregating the same listings available on LinkedIn and Indeed. Resume tools produce generic templates. Video interview practice modules are described as outdated.

Relevance to seniority · Cited in 24% of negative reviews
Senior professionals and executives report receiving the same materials and coaching approach as entry-level participants. No differentiation by industry, function, or career stage.

Engagement pressure · Cited in 18% of negative reviews
Participants describe feeling pushed to mark activities as complete and provide positive feedback. Some report being surveyed before receiving meaningful service.

Analysis of 532 public reviews on Trustpilot, Yelp, and Glassdoor for LHH, Right Management, Randstad (RiseSmart), and Korn Ferry. Reviews categorized by primary complaint theme. Some reviews cite multiple themes.

These are not niche complaints. Coach availability, platform quality, seniority mismatch, and engagement pressure describe the core service itself. If the most common criticisms target the primary deliverables, then the satisfaction gap is not a measurement artifact. It reflects a product that is not working for a significant share of the people it is supposed to serve.

The standard that doesn't exist

In healthcare, hospitals are required to report outcomes through CMS Hospital Compare. In financial services, advisors must disclose performance through Form ADV and BrokerCheck. In higher education, institutions report graduation rates and employment data through IPEDS.

In outplacement, there is no equivalent. No industry body requires independent outcome verification. No standard methodology exists for measuring placement rates, time-to-placement, or participant satisfaction. No provider is obligated to publish results that were not self-generated. The category operates on a level of measurement opacity that would not be tolerated in any adjacent professional services market.

Industry · Measurement mechanism · Independence · Public

Healthcare · CMS Hospital Compare, mandatory outcomes reporting · Federal mandate · Yes

Financial Advisory · Form ADV, BrokerCheck, fiduciary disclosure · SEC / FINRA oversight · Yes

Higher Education · IPEDS graduation rates, gainful employment data · Dept. of Education · Yes

Outplacement · Provider self-reported surveys · None · No

The absence of a measurement standard has not been accidental. It has protected providers from the kind of scrutiny that would make the satisfaction gap visible to buyers. As long as providers control the survey, the sample, the timing, and the publication, the 97% figure holds. The moment an independent party measures the same thing, it collapses.

It doesn't have to be this way

The credibility delta is not inherent to outplacement. It is inherent to outplacement that does not work. When a provider delivers a service that participants actually value, the gap closes on its own. Independent reviews and self-reported satisfaction converge because there is nothing to hide.

We know this because we are that provider. FirstSourceTeam holds a 4.9 on Google across 174 verified participant reviews. Not because we designed a survey to produce that number. Because participants who completed our program chose to say, publicly and voluntarily, that it worked.

That is the difference between a satisfaction score and a reputation. A score is a number you produce. A reputation is a number that is produced about you, by the people you served, on platforms you do not control. Every provider in this analysis has access to the same independent platforms. The question is not whether they can be measured there. The question is what that measurement reveals.

For LHH, independent measurement reveals 1.7 out of 5. For us, it reveals 4.9 across 174 reviews. The methodology is identical. The platform is the same. The only variable is the service.

Providers with bad independent reviews do not have a credibility problem. They have a quality problem. The credibility delta is just how it shows up.

About this research

Self-reported satisfaction figures are sourced from provider marketing materials, published case studies, and RFP response templates as of Q1 2026. Independent ratings are sourced from Trustpilot, Yelp, and Glassdoor. Review counts and one-star percentages reflect platform totals at time of analysis. Gartner Peer Insights ratings reflect the corporate buyer audience. Review theme analysis is based on manual categorization of 532 public reviews across four providers and three platforms. Frequency percentages reflect the proportion of negative reviews (3 stars or fewer) citing each theme as a primary complaint.

Would you use your own outplacement program?