A recent study by Grey Matter Research and Harmon Research revealed that online panels are highly susceptible to respondent quality issues – a finding that threatens to undermine trust in market research (MR) data.
The researchers fielded an online questionnaire – containing a range of tests and quality-control measures designed to assess the caliber of respondents – with a handful of the largest panel providers. Just under half (46 percent) of respondents failed to meet the researchers’ standards for inclusion due to multiple errors or outright fraud; overall, the report concluded that around 90 percent of researchers weren’t taking “sufficient steps to ensure online panel quality”. It’s a statistic that, if even remotely accurate, is deeply worrying.
The battle for better data
Among the quality issues identified were nonsensical responses to open-ended (OE) questions, failure to identify and remove straightliners, and a lack of fake or ‘red herring’ questions designed to weed out inauthentic responses. Crucially, the report showed stark differences between respondents whose identities the researchers had verified and those tagged as bogus: the bogus respondents significantly skewed results, rendering the data ineffective at best.
Research companies and panels face a constant battle to guard against the infiltration of fraudulent and disengaged respondents – including bots and click farms – into online surveys, but detecting them is becoming harder as their methods grow increasingly sophisticated.
Panel companies are under significant pressure to provide fast, affordable data – a demand that doesn’t always go hand in hand with the need for quality. It’s not a problem that can easily be solved after the data is collected, short of a lot of manual checking, so it’s best tackled before results are aggregated.
Some QA can be addressed through study design. Every questionnaire should include measures to determine respondents’ validity, and data reviews should be scheduled during and after fieldwork. Naturally, different types and lengths of study require different solutions – speeding is more obvious on longer questionnaires, for example, while straightlining won’t be a problem where there are no grids. Red herring questions can be readily identified by bogus respondents, so they aren’t always effective.
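To make the study-design checks above concrete, here is a minimal sketch of how a researcher might flag straightliners and speeders in raw survey data. The function names, column layout and thresholds are all hypothetical choices for illustration – they are not the study’s (or any vendor’s) actual methodology.

```python
# Illustrative sketch only: simple straightlining and speeding checks
# on raw survey exports. Field names and thresholds are hypothetical.

def is_straightliner(grid_answers, min_items=5):
    """Flag a respondent who gave the identical answer to every item
    in a grid of at least `min_items` questions."""
    return len(grid_answers) >= min_items and len(set(grid_answers)) == 1

def is_speeder(duration_seconds, median_seconds, cutoff=0.33):
    """Flag completes faster than a fraction of the median interview
    length (one-third here, chosen arbitrarily for the sketch)."""
    return duration_seconds < cutoff * median_seconds

# Toy respondent records: one grid block plus interview duration.
respondents = [
    {"id": "r1", "grid": [3, 3, 3, 3, 3], "secs": 110},
    {"id": "r2", "grid": [1, 4, 2, 5, 3], "secs": 600},
    {"id": "r3", "grid": [2, 3, 2, 4, 3], "secs": 540},
]

# Rough median of interview lengths (middle value of the sorted list).
median_secs = sorted(r["secs"] for r in respondents)[len(respondents) // 2]

flagged = [
    r["id"] for r in respondents
    if is_straightliner(r["grid"]) or is_speeder(r["secs"], median_secs)
]
print(flagged)  # only "r1" fails both checks here
```

In practice these rules would run alongside OE-response review and red-herring items, since no single check is reliable on its own.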
Cleaning the respondent pool
The majority of bad or fraudulent respondents can be removed from the pool before they’re given the opportunity to start a survey. Imperium’s data integrity solutions are fully automated and are designed to validate only those respondents who pass our stringent checks.
RegGuard® is a broad-spectrum, automated data-validation solution. It combines our flagship ID-validation API RelevantID® with Fraudience®, Real Answer® and Real Mail™ tools during registration to perform a customizable 360-degree check on each registrant – importantly, before they’re added to your panel.
It weeds out fraudsters and duplicates while also checking each registrant’s IP reputation. Registrants are scored and flagged, making it easy to identify them for processing or removal. Used in concert with Verity®, our self-reported-data authentication tool, and Address Correction™, our CASS™-certified postal record-checking tool, it provides a robust first line of defence for MR companies and panels.
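The score-and-flag approach can be sketched in a few lines. The check names, weights and removal threshold below are invented for illustration – they are not RegGuard®’s actual scoring model, just the general pattern of summing weighted check failures and flagging registrants who cross a cut-off.

```python
# Hypothetical illustration of score-and-flag registration checks.
# Check names, weights and the threshold are invented for this sketch.

CHECK_WEIGHTS = {
    "duplicate_identity": 40,
    "bad_ip_reputation": 30,
    "failed_email_check": 20,
    "mismatched_postal_address": 10,
}

REMOVAL_THRESHOLD = 50  # arbitrary cut-off for the example

def score_registrant(failed_checks):
    """Sum the weights of every failed check and decide whether the
    registrant should be flagged before joining the panel."""
    score = sum(CHECK_WEIGHTS[c] for c in failed_checks)
    return {"score": score, "flag_for_removal": score >= REMOVAL_THRESHOLD}

print(score_registrant(["duplicate_identity", "bad_ip_reputation"]))
print(score_registrant(["mismatched_postal_address"]))
```

The advantage of scoring over hard pass/fail rules is that borderline registrants can be routed to manual review rather than rejected outright.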
We’re adding a new tool to our collection in the New Year that will give MR companies and panels even more control over data quality using an entirely automated process that will totally transform survey results, saving time, money and resources. Stay tuned!
Register here for more information on this industry-first development.