
How are the Benchmarks Created?

Benchmarks are created by combining large-scale employee survey data with public labour-market sources, standardising all inputs to a common scale, correcting for known group differences, and calculating sector and best-in-class reference points.

What are benchmarks?

In psychometrics, benchmarks are reference points or standards against which individuals' test scores or results are compared. They are typically derived from normative data collected from a representative sample of the population, and they help interpret test scores by providing context: indicating where an individual stands relative to others.
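As a minimal sketch of that interpretation step, the percentile-rank calculation below places an individual's score within a normative sample. The data and function name are illustrative assumptions, not drawn from a real norm set:

```python
from bisect import bisect_left

def percentile_rank(score, norm_sample):
    # Percentage of the normative sample scoring strictly below `score`.
    ranked = sorted(norm_sample)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

# Hypothetical normative scores for one survey item (1-5 scale).
norms = [2.8, 3.1, 3.4, 3.4, 3.7, 3.9, 4.0, 4.2, 4.5, 4.8]
print(percentile_rank(4.1, norms))  # 70.0: above 7 of the 10 norm scores
```

A score at the 70th percentile tells the reader not just the raw value but where it sits relative to everyone else in the reference group.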

How are the Benchmarks Created?
  1. We start with a solid core of employee survey data: Drawing on workplace research across diverse sectors and regions, we hold thousands of complete Employee Experience (EX) records covering age, gender and job details. This rich, empirical data lets us see real workplace patterns in depth.
  2. Widen the view with public sources: To avoid a one-company lens, we mix in public information from places like Glassdoor-style review sites, LinkedIn, job boards, HR news, and industry reports. These extra points help us spot trends that don’t always show up in our own surveys, allowing us to generate more accurate and nuanced insights whilst ensuring that our calculations reflect the true diversity within populations.
  3. Put everything on the same scale: Averages drawn from large mixed samples tend to form a bell-shaped curve, a consequence of the Central Limit Theorem. Using that idea, we convert raw answers to a one-to-five “stan5” scale and then map it to a familiar 0–100 range. This step means different question sets can be compared side by side.
  4. Correct for known group differences: Some regions or industries naturally score higher or lower on certain topics. We calculate statistical offsets to adjust for those predictable gaps, so they don’t distort the final picture.
  5. Build the benchmarks: From the cleaned and adjusted data, we then calculate two reference points for every region-by-industry slice:
    1. Industry benchmarks: the sector average, showing the middle of the pack, and
    2. Best-in-class benchmarks: an aspirational high point, showing the profile an idealised organisation would achieve by combining the highest scores observed across the industry.
  6. Why this matters: Traditional benchmarks drawn only from a company’s own clients can miss broader market shifts, and they risk being the performance of one firm in disguise. Our reference values are built from, and diluted across, thousands of anonymous employee responses plus public labour‑market data; because every record is stripped of company identifiers and blended with many others, no individual organisation’s highs or lows can dominate the final average. By combining internal depth with external breadth, and by standardising and offsetting the numbers, we provide a balanced reference that lets any organisation see where it stands and where it could aim next.
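The standardisation, offset and benchmark steps above can be sketched as follows. The exact scale construction, function names and sample figures are illustrative assumptions, not the actual implementation:

```python
from statistics import mean, pstdev

def to_stan5(raw_scores):
    # Standardise to a 1-5 "stan5" scale: z-scores recentred on 3 with
    # sd 1, clipped to [1, 5]. The exact construction is an assumption.
    mu, sd = mean(raw_scores), pstdev(raw_scores) or 1.0
    return [min(5.0, max(1.0, 3.0 + (x - mu) / sd)) for x in raw_scores]

def to_0_100(stan5_scores):
    # Map the 1-5 scale linearly onto the familiar 0-100 range.
    return [(s - 1.0) / 4.0 * 100.0 for s in stan5_scores]

def apply_offset(scores, offset):
    # Remove a known, predictable group difference (the statistical
    # offset for a region or industry that runs systematically high/low).
    return [s - offset for s in scores]

def benchmarks(scores_by_topic):
    # Industry benchmark: the sector average per topic.
    # Best-in-class: the highest observed score per topic, combined
    # into one aspirational profile.
    industry = {t: mean(v) for t, v in scores_by_topic.items()}
    best_in_class = {t: max(v) for t, v in scores_by_topic.items()}
    return industry, best_in_class

# Hypothetical region-by-industry slice: two topics, a handful of
# 0-100 scores already standardised and offset-corrected.
slice_scores = {"engagement": [62.0, 71.0, 68.0], "wellbeing": [44.0, 52.0]}
industry, best = benchmarks(slice_scores)
print(industry["engagement"], best["wellbeing"])  # prints: 67.0 52.0
```

Keeping the two reference points separate means a reader can see both the middle of the pack (the industry average) and the aspirational ceiling (best-in-class) for the same slice.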