Radford Compensation Survey: Cost, Methodology, and Fit

May 11, 2026


By James Harwood


When you’re competing for talent, guessing at pay ranges isn’t a strategy, it’s a liability. That’s where compensation surveys come in, and the Radford compensation survey (now part of Aon’s broader McLagan platform) is one of the most recognized names in the space. But recognition doesn’t automatically mean it’s the right fit for every organization, especially if you’re a growing company watching every dollar.

Before you commit budget to any benchmarking tool, you need to understand what you’re actually buying. How does Radford collect its data? What does participation cost? Who benefits most from the platform, and where might a mid-sized company find better value elsewhere? These are the questions worth answering before you sign anything.

At Soteria HR, we help small and mid-sized organizations build smart, competitive compensation strategies without the overhead of a full HR department. We’ve guided clients through the process of selecting and interpreting compensation data, so we know firsthand that the most expensive survey isn’t always the most useful one. This article breaks down Radford’s methodology, pricing structure, and ideal use cases so you can decide whether it belongs in your toolkit, or whether your benchmarking dollars are better spent somewhere else.

Why companies use the Radford survey

Compensation benchmarking used to be optional. Now, with pay transparency laws spreading across the country and candidates comparing offers in real time, it’s table stakes. Companies turn to the Radford compensation survey because it gives them a defensible, data-backed answer to a question that comes up constantly: are we paying people what the market actually pays? The answer isn’t just about attracting candidates. It shapes retention, internal equity, and whether your comp strategy holds up under legal scrutiny.

When compensation decisions aren’t grounded in real market data, you’re not just risking a bad hire, you’re risking losing the good people you already have.

Specialized data for specialized roles

Most general compensation surveys cover broad job families reasonably well. But if your workforce includes software engineers, data scientists, clinical researchers, or biotech professionals, general surveys often fall short. These are the roles where Radford built its name. The platform was originally designed to serve technology and life sciences companies, and that focus shows in the depth of its job-level data. You’ll find benchmarks that distinguish between a mid-level backend engineer and a principal engineer, not just a generic "software developer" category.

That kind of granularity matters when you’re trying to set distinct pay bands across multiple levels within a single function. Without it, you end up compressing ranges or accidentally creating internal equity problems that take years to unwind. Companies in tech-adjacent industries, like fintech, healthtech, and SaaS, have adopted Radford partly because their direct competitors use it, which keeps the dataset competitive and relevant to the roles they’re actually hiring for.

Equity compensation is hard to benchmark without it

Cash salary is only part of the picture in most industries today. In tech and life sciences especially, equity grants, long-term incentive plans, and equity refresh programs are significant components of total compensation. Most general HR surveys don’t capture this kind of data with any real precision. Radford does. The platform includes detailed data on equity grant practices by role, level, company stage, and industry, which gives compensation teams a real foundation for designing or auditing equity programs rather than building them on guesswork.

For growing companies, this is where imprecision gets expensive. If your equity grants are meaningfully below market, candidates won’t always tell you why they declined your offer. They’ll just decline it, and you’ll spend another month trying to fill the same seat.

Pay transparency laws are raising the benchmarking stakes

Several U.S. states now require employers to post salary ranges in job listings, and more states are following. That legal shift has changed how companies approach compensation in a fundamental way. You can no longer set ranges informally and adjust them case by case. Once you post a range, current employees see it, candidates compare it, and if your numbers aren’t grounded in data, you’ll face uncomfortable internal questions you can’t answer without embarrassment or legal exposure.

Companies use Radford precisely because it gives their comp decisions a traceable, credible source. When an employee asks why their salary lands where it does, "we benchmarked against Radford" carries more weight than "we looked around online." This isn’t just about compliance. It’s about building a consistent, repeatable compensation process that holds up over time and doesn’t require reinvention every time you open a new role or conduct a pay equity review.

What the Radford McLagan database includes

The Radford McLagan database is not a single survey. It’s a collection of data sets that cover different components of compensation, organized by industry, company size, and geography. When you access the platform through Aon, you’re pulling from a large, structured repository of pay, equity, and benefits data contributed by thousands of participating organizations. Understanding what’s actually in the database helps you figure out which modules are worth your budget and which ones you can skip.

Salary and total cash benchmarks

The foundation of the database is base salary and total cash compensation by job function, level, and location. Radford uses its own proprietary job leveling framework, which means you’ll need to map your internal titles to their job codes before any data becomes useful. This step takes time, but it’s worth doing carefully. A mismatched job code produces misleading benchmarks, and misleading benchmarks produce pay decisions you’ll regret. The granularity within technical job families is where Radford stands apart from most general surveys. You’ll find distinct benchmarks for individual contributors versus managers, and further cuts by seniority level within each track.

Equity and long-term incentive data

One of the most valuable parts of the Radford compensation survey is its equity data. Very few benchmarking sources capture grant values, vesting schedules, and equity refresh practices with the precision that Radford does. The database segments equity data by company stage (pre-IPO, public, etc.), role level, and industry vertical, which gives you a realistic picture of what competitors are actually awarding, not just what a national average looks like across all company types.

If you’re designing or auditing an equity program without industry-specific data, you’re working with a map that’s missing half the roads.

Benefits and supplemental compensation data

Beyond base pay and equity, the database includes benefits prevalence data and supplemental pay practices like signing bonuses, retention bonuses, and variable compensation structures. This layer of the database is especially useful when you’re preparing an offer package and want to know whether your signing bonus is competitive or whether your health plan contributions fall below what candidates expect in your industry. The benefits data is segmented by company size and sector, which makes it more relevant to your actual competitive set than broad national averages would be.

How Radford gathers and refreshes survey data

The Radford compensation survey runs on a participation model. Organizations contribute their own compensation data in exchange for access to the aggregated results. This creates a feedback loop where the more companies participate, the more robust and representative the database becomes. Understanding how this process works matters because data quality depends entirely on who’s in the survey pool and how rigorously Aon validates what participants submit.

Participation is the price of entry

To access Radford’s full dataset, most organizations are expected to submit their own compensation data as part of the agreement. Aon runs survey submission cycles throughout the year, typically with major data cuts in the spring and fall. During each cycle, participants upload data on job codes, salaries, equity grants, bonus targets, and other compensation elements. Aon then aggregates, anonymizes, and validates the submissions before incorporating them into the broader database.

This model has a real advantage: the dataset reflects actual pay practices from real companies, not self-reported estimates or publicly scraped figures. When you’re looking at a benchmark for a principal engineer in a Series B SaaS company, you’re seeing what organizations in that peer group are genuinely paying, not a rough approximation.

The quality of any compensation survey is only as good as the rigor applied to the data that goes into it.

How Aon validates and updates the data

Aon doesn’t just accept submitted data at face value. The platform applies statistical validation checks to flag submissions that fall outside expected ranges or don’t match the job code descriptions. If a submitted data point looks inconsistent, Aon may follow up with the participating organization to clarify or correct it before the data enters the live database. This process helps filter out the kind of errors that would otherwise skew benchmarks and produce misleading results.

The database receives formal updates on a regular cycle, with some modules refreshed more frequently than others depending on market volatility. Tech and life sciences compensation can shift quickly, especially at the senior and equity-heavy levels, so Aon makes adjustments more often in those segments. For your benchmarking purposes, this means checking when each module was last refreshed before pulling numbers for a high-stakes hiring decision or compensation review. Stale data in a fast-moving market can leave you meaningfully off from where competitors are actually landing.
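When a module's last refresh predates your decision by several months, a common compensation practice is to "age" the benchmark forward using an assumed annual market movement rate. A minimal sketch of the arithmetic; the 3.5% movement rate and the benchmark figure are illustrative assumptions, not Radford figures:

```python
def age_benchmark(value: float, months_old: int, annual_movement: float = 0.035) -> float:
    """Project a stale survey benchmark forward by compounding an
    assumed annual market movement rate over the data's age."""
    return value * (1 + annual_movement) ** (months_old / 12)

# A hypothetical $150,000 benchmark that is 9 months old, assuming
# 3.5% annual market movement, ages to roughly $153,900.
aged = age_benchmark(150_000, months_old=9)
```

Aging is a stopgap, not a substitute for refreshed data: it assumes the market moved uniformly, which is exactly what fast-moving segments violate.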

How much the Radford survey costs

The Radford compensation survey does not publish a standard price list. Aon structures its pricing through direct negotiations, which means your cost will depend on how many modules you need, your company size, and whether you’re participating in the survey or purchasing standalone access. That setup makes it difficult to budget upfront, especially if you’re approaching Aon for the first time and don’t know what to expect.

What you typically pay for access

Most organizations that purchase access to the full platform report annual costs starting in the range of several thousand dollars, with prices climbing significantly depending on the breadth of data you need. A company accessing just one industry-specific module for a single geography will pay far less than a compensation team pulling multi-country equity and salary data across multiple job families. Participants who contribute their own data to the survey typically receive discounted or subsidized access compared to organizations that want results without contributing, so participation status affects your final number directly.

For smaller organizations with limited benchmarking needs, the investment can feel hard to justify when you compare the cost against what you’ll actually use. Most mid-sized companies don’t need the full platform. They need reliable benchmarks for 20 to 50 job codes in their primary geography, and for that scope, Radford may price well above what you’ll realistically extract in value from the subscription.

If you’re not sure whether you’ll use more than a fraction of the database, you’re probably paying for more than you need.

What drives your final price

Several variables determine what Aon will quote you. The number of modules you license is the primary driver, since each industry vertical and geographic region is priced separately. Your company size and headcount factor in as well, with larger organizations often paying more on the assumption that they’re pulling higher volumes of data and applying it to more complex compensation programs.

Whether you participate in the survey data submission also changes your cost structure. Non-participants pay a premium for access-only arrangements, while companies that contribute data regularly tend to negotiate better terms over time. If you’re evaluating Radford for the first time, asking Aon specifically about the participation discount and what data submission requires will give you a clearer picture of your real total cost before you commit to anything.

How to access Radford and participate in surveys

Accessing the Radford compensation survey runs entirely through Aon, which means there’s no self-serve sign-up or trial period you can jump into on your own. Your first step is contacting Aon’s compensation solutions team directly to discuss your needs, get a quote, and determine which modules make sense for your organization. The process is more like a sales conversation than a software subscription, so come prepared with a clear picture of which industries you operate in, how many roles you need to benchmark, and whether you’re willing to contribute your own data.

Getting formal access through Aon

Once you’ve worked through the negotiation and signed an agreement, Aon gives you platform credentials and access to the specific modules you’ve licensed. Most users interact with the data through Aon’s online portal, where you can pull benchmarks by job code, geography, company size, and industry vertical. Aon typically assigns a client success contact who can walk you through the platform during onboarding, help you map your internal job titles to their coding framework, and answer questions about how the data is structured.

If you’re using Radford for the first time, that onboarding support matters more than it might seem. The job matching process requires careful attention, and a misstep there will produce benchmarks that don’t reflect your actual competitive set.

What survey participation involves

Participating in the survey means your compensation team submits data during Aon’s scheduled collection cycles, typically in spring and fall each year. You’ll upload information for each employee using Aon’s data templates, which capture job code, base salary, bonus, equity grants, and other compensation elements. The submission process is structured but requires internal coordination, especially if your HRIS doesn’t already organize data in a format that maps cleanly to Aon’s job coding system.
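The coordination work is mostly mapping: each HRIS record has to land in the submission layout under the right survey job code. A minimal sketch of that reshaping step, where the field names, the job code, and the template columns are all invented for illustration (Aon's actual templates and coding system differ):

```python
# Hypothetical HRIS export rows; field names are illustrative.
hris_records = [
    {"employee_id": "E100", "title": "Senior Backend Engineer",
     "base_salary": 165_000, "bonus_target_pct": 10, "equity_grant_value": 40_000},
]

# Internal-title-to-survey-code map, maintained by whoever owns the submission.
title_to_code = {"Senior Backend Engineer": "ENG.SW.P4"}

def to_submission_row(rec: dict) -> dict:
    """Map one HRIS record to a submission row, failing loudly on any
    title that has no agreed survey-code match."""
    code = title_to_code.get(rec["title"])
    if code is None:
        raise ValueError(f"No survey code mapped for title: {rec['title']}")
    return {
        "job_code": code,
        "base_salary": rec["base_salary"],
        "bonus_target_pct": rec["bonus_target_pct"],
        "equity_grant_value": rec["equity_grant_value"],
    }

rows = [to_submission_row(r) for r in hris_records]
```

Failing loudly on unmapped titles is deliberate: a silently dropped or guessed code is exactly the kind of error that degrades the pooled data for everyone.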

Participation takes real time to do right, but it’s what earns you discounted access and keeps the dataset strong for everyone using it.

Most companies designate one or two people to own the submission process, typically someone in HR or total rewards who understands both the compensation data and the platform requirements. If you don’t have that capacity internally, a partner like Soteria HR can coordinate the process for you so you stay compliant with submission timelines without pulling your leadership team off higher-priority work.

How to use Radford data to benchmark roles

Pulling numbers from the Radford compensation survey is the easy part. The harder work is applying those numbers correctly to your actual roles. Without a disciplined process, you’ll end up with benchmarks that look precise on paper but don’t reflect your competitive reality. Here’s how to work through the data in a way that produces reliable, defensible pay ranges you can actually use.

Match jobs by function and level, not by title

Your internal job titles probably don’t match Aon’s job codes, and that gap creates real risk if you skip the matching step. Start by reviewing Aon’s job code library and identifying the function and level descriptors that most closely align with what each role actually does, not what it’s called. A "Senior Associate" at your company might be doing work that Radford codes as a mid-level individual contributor in a completely different function. Title inflation is common enough that you can’t rely on seniority language alone to guide your matching decisions.

Map each role carefully before you pull a single benchmark. The more precise your job matching process, the more relevant your compensation data will be when you sit down to build or update your pay bands. Rushing this step is one of the most common mistakes compensation teams make, and it quietly corrupts every decision that follows.
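One way to enforce "match on function and level, not title" is to key the mapping on what the role does rather than what it's called. A minimal sketch; the code library and its identifiers are invented for illustration and are not Radford's actual framework:

```python
# Survey code library keyed by (function, track, level), not by title.
# Codes and descriptors below are hypothetical.
code_library = {
    ("software_eng", "ic", 3): "SWE.IC.3",
    ("software_eng", "ic", 4): "SWE.IC.4",
    ("software_eng", "mgr", 1): "SWE.MG.1",
}

def match_job(function: str, track: str, level: int) -> str:
    """Look up a survey code from role substance; force a manual
    review instead of guessing when no code exists."""
    key = (function, track, level)
    if key not in code_library:
        raise KeyError(f"No survey code for {key}; review this match manually")
    return code_library[key]

# An inflated internal title ("Senior Associate") still matches on what
# the role actually does here: a level-3 individual contributor.
code = match_job("software_eng", "ic", 3)  # "SWE.IC.3"
```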

Build ranges around a market anchor, not just the midpoint

Once you have clean job matches, the temptation is to take the 50th percentile figure and call it your salary midpoint. That works as a starting point, but it leaves out the strategic part. You need to decide where your organization wants to compete in the market, sometimes called your pay positioning or competitive stance. A company trying to attract top technical talent in a tight market might target the 75th percentile. A nonprofit with strong mission alignment and other non-cash benefits might stay closer to the 50th.

Your pay positioning decision should be deliberate, not a default.

Use the benchmark data to build a full range with a minimum, midpoint, and maximum for each role or job family. This structure gives your managers a consistent framework for making offers and adjusting pay at review time, without treating every compensation decision as a one-off negotiation. Build in a regular review cycle, at least annually, so your ranges don’t drift below market as the data refreshes. Stale ranges create pay compression problems that take years to unwind and quietly push your best people toward competitors who stayed current.
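The arithmetic behind that structure is simple once the anchor is chosen. A minimal sketch, where the 40% spread and the $140,000 anchor are illustrative assumptions rather than recommendations:

```python
def build_range(market_anchor: float, spread: float = 0.40) -> dict:
    """Build a min/mid/max pay band around a market anchor.
    With a 40% spread, min and max sit 20% below and above the midpoint."""
    return {
        "min": market_anchor * (1 - spread / 2),
        "mid": market_anchor,
        "max": market_anchor * (1 + spread / 2),
    }

# Targeting, say, the 75th percentile with a hypothetical $140,000
# anchor yields a $112,000 to $168,000 band around a $140,000 midpoint.
band = build_range(140_000)
```

Narrower spreads suit early-career bands where growth is fast; wider spreads suit senior roles where tenure and scope vary more.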

What can go wrong with survey benchmarking

Even with a well-regarded source like the Radford compensation survey, the data only performs as well as the process you build around it. Benchmarking errors tend to compound quietly. You make one flawed assumption early in the process, and it ripples through every pay band you build afterward. Understanding where things break down protects you from building a compensation structure that looks credible but doesn’t reflect your actual market.

Mismatched job codes corrupt your results

The most common failure point in compensation benchmarking isn’t the survey itself. It’s the job matching process that happens before you pull a single number. When you map your internal titles to survey job codes too loosely, you end up comparing your roles against a broader category that includes positions with fundamentally different responsibilities and pay expectations. The benchmark looks precise, but it’s measuring the wrong thing entirely.

A benchmark built on a bad job match is not a benchmark. It’s a confident-looking guess.

This problem gets worse when title inflation is present in your organization. If your internal leveling has drifted upward over time, a role coded as "senior" internally might match a mid-level code in the survey, and that mismatch produces a benchmark that overstates what the market actually pays for that scope of work. Fix your internal leveling structure before you do your job matching, not after.

Applying data without adjusting for your actual competitive set

Survey data aggregates responses from multiple geographies and company types, which means you need to apply location and size filters before the numbers become relevant to your hiring reality. A benchmark pulled from a national dataset won’t reflect what you’re competing against if you’re hiring in a high-cost metro or a market where a handful of large employers drive local pay rates above the national average. Ignoring geographic cut data is one of the fastest ways to underprice your offers without realizing it until candidates walk.

Company stage and size matter just as much as geography. If your peer group in the survey includes organizations that are significantly larger or better-funded than you are, your benchmarks will skew high in ways that distort your ranges and set expectations your compensation budget can’t consistently support. Always filter your data to the competitive set that reflects your actual organization, not the aspirational version of what you hope to become in three years.
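Mechanically, that filtering step is just narrowing the pooled rows before you compute anything. A minimal sketch with invented data and field names, showing how much a national large-company figure can overstate a local mid-size market:

```python
# Hypothetical benchmark rows pooled across geographies and size bands.
benchmarks = [
    {"geo": "national", "headcount_band": "1000+",   "p50": 180_000},
    {"geo": "midwest",  "headcount_band": "100-500", "p50": 142_000},
    {"geo": "midwest",  "headcount_band": "1000+",   "p50": 160_000},
]

def competitive_set(rows: list, geo: str, headcount_band: str) -> list:
    """Keep only rows matching your geography and company-size band."""
    return [r for r in rows
            if r["geo"] == geo and r["headcount_band"] == headcount_band]

# For a mid-size Midwest employer, the relevant median is $142,000;
# the national 1000+ figure would have overstated it by nearly 27%.
peers = competitive_set(benchmarks, geo="midwest", headcount_band="100-500")
```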

Radford fit for SMBs and practical alternatives

The Radford compensation survey is built for organizations with the budget, staff, and data complexity to use it fully. That describes a lot of large technology companies and publicly traded life sciences firms, but it doesn’t describe most small and mid-sized businesses. If you’re running a 50-person company without a dedicated total rewards team, the platform can feel like buying industrial equipment to do work that a well-chosen hand tool handles just as well. Understanding where Radford fits and where it doesn’t helps you spend your benchmarking budget on tools that actually match your needs.

Where Radford works well for smaller organizations

Radford can make sense for an SMB if your workforce is heavily concentrated in technical roles like software engineering, data science, or clinical research. In those cases, the depth of Radford’s job-level data is genuinely hard to replicate with general alternatives. If you’re hiring senior engineers and competing directly against companies that benchmark with Radford, having access to the same data source puts you on equal footing during offer negotiations and helps you explain your pay decisions with confidence.

If your roles are technical and your competitors use Radford, working from a different data source means you’re benchmarking against yourself.

The fit breaks down when your workforce is more generalist in nature or when your hiring volume doesn’t justify the cost. Paying several thousand dollars annually for a platform you reference twice a year to benchmark a handful of roles isn’t sound resource allocation, especially when solid alternatives exist at a fraction of the price.

Practical alternatives worth considering

Several benchmarking tools deliver reliable salary and benefits data without Radford’s price tag or participation requirements. These options work well for organizations that need broad coverage rather than deep technical granularity:

  • Mercer Benchmark Database: Strong across industries and useful for companies that need global data alongside domestic benchmarks.
  • Willis Towers Watson (WTW) Surveys: Widely used and respected, with coverage across compensation, benefits, and executive pay.
  • CompAnalyst by Salary.com: More accessible for SMBs, with self-serve access and a lower cost of entry than enterprise platforms.
  • SHRM Compensation Data Center: A practical option for HR professionals who need foundational benchmarks without a large annual commitment.

Matching your tool to your actual benchmarking scope is what determines whether you get real value from the investment. Bigger isn’t always better when the data you need covers 30 job codes, not 300.

Next Steps

The Radford compensation survey gives you one of the most detailed benchmarking datasets available, but it’s only as useful as the process built around it. You need clean job matching, a deliberate pay positioning strategy, and a regular refresh cycle to turn survey data into compensation decisions that hold up over time. If your roles are highly technical and your competitors use Radford, it may be worth the investment. If your workforce skews generalist or your benchmarking scope is narrow, a more accessible tool will likely serve you better.

Compensation strategy works best when someone is actively managing it, not just pulling numbers once a year and hoping for the best. Getting your pay structure right means fewer lost candidates, fewer retention surprises, and fewer uncomfortable conversations about internal equity. If you want a partner who can help you build and maintain a solid compensation framework, reach out to Soteria HR to start the conversation.
