Imperium Shortlisted for Prestigious Technology Award

Imperium announced as a finalist in Quirk’s 2021 Marketing Research and Insight Excellence Awards

(NEW YORK – Sep 16, 2021) – Data quality solutions specialist Imperium (www.imperium.com) has been shortlisted for Quirk’s 2021 Technology Impact Award – a category that recognizes outstanding innovations in the marketing research industry.

Imperium is renowned for its powerful data integrity, validation and hygiene tools, including flagship ID-verification product RelevantID® and new multi-point respondent-scoring tool QualityScore™.

The company’s sector-specific solutions are used by some of the world’s leading market research companies and panels to boost data quality. Imperium’s real-time automated tools assess both passive and behavioral data to swiftly and cost-effectively weed out fraudsters and dupes while consistently delivering high-quality respondents.

“We’re delighted that Imperium is a finalist for Quirk’s Technology Impact Award, especially as the judges are considering the real-world application and long-term benefits of shortlisted technologies,” commented Tim McCarthy, General Manager, Imperium.

“Over the years, our focus on innovation has enabled us to create a suite of top-level tools that assure data quality for marketing research clients, providing an informed response to increasingly complex fraud attempts. Deploying a sophisticated machine learning model enables us to continuously adapt to new behavior patterns, allowing us to isolate fraudulent and poor-quality respondents before they have the chance to subvert data.”

“The threat posed to the integrity of online panel surveys by disengaged or fraudulent responses is on the rise. Although meticulous study design is the cornerstone of effective data collection, Imperium’s tools give researchers greater control over the QA process without the need to commit additional time and resources.”

The Marketing Research and Insight Excellence Awards, powered by Quirk’s Media, recognize the researchers, suppliers and products and services that are adding value and impact to marketing research. Finalists are selected by a panel of judges made up of a combination of end-client researchers, supplier partners and Quirk’s editorial staff.

Award winners will be announced at The Marketing Research and Insight Excellence Awards Virtual Ceremony on November 9, 2021.

About Imperium

Founded in 1990, Imperium provides a comprehensive suite of technology services and customized solutions to verify personal information and restrict fraudulent online activities. The world’s most respected market research and e-commerce businesses rely on Imperium’s superior technology and solutions to validate their customers’ identities, verify data accuracy, automate review processes and uncover the intelligence that improves profitability. The company’s flagship product RelevantID® is widely recognized as the market research sector’s de facto data-quality and anti-fraud tool. In recent years, Imperium has invested heavily in machine learning, NLP and neural networks, capitalizing on its domain knowledge to expertly map fraudsters’ behavior. Last year, Imperium prevented 1 billion instances of fraud at source. https://www.imperium.com/

 

About Quirk’s

Quirk’s Marketing Research Media provides sector-focused resources devoted to professionals responsible for conducting, coordinating and purchasing marketing research products and services. www.quirks.com.

Press contact
connect@theflexc.com

Top 5 Tips for Ensuring Research Data Quality

By Tim McCarthy, Imperium General Manager

High-quality data – data that’s accurate, valid, reliable and relevant – is the ultimate goal for market researchers and brands alike.

While well-constructed engagements return critical insights on timely topics, their poorly planned or designed counterparts can produce flawed results: invalid, unreliable or irreproducible data that can lead to the wrong conclusions and inform bad decisions further down the line.

Taking the time to explore and improve data quality delivers tangible benefits for everyone, increasing reliability, reducing the costs and aggravation associated with refielding and saving on time and resources all round. Luckily, there are multiple ways of fine-tuning survey processes to minimize problems and optimize data quality.

Quality data begins with selecting the right participants – respondents who are well suited to the aims and objectives of the study. But researchers also need to be cognizant of dozens of individual survey elements that have the potential to impact data quality – and understand how to manage and mitigate these threats while maintaining process integrity.

Technology can help. Greater automation helps select the best participants at the outset. By reducing subjectivity and lowering bias, automation also achieves a more balanced and consistent view of what constitutes “good” and “bad” respondents. A smoothly automated system reduces friction, boosting project speed while scaling back the cost and duration of manual checks.

Here are my top 5 tips for ensuring data quality in research:

1. Plan properly

Great results start with careful planning. Everyone involved should thoroughly understand the aims and objectives of the research before it leaves the starting blocks. Early actions include engaging in a process to identify the ideal target audience for a study, followed by building out an accurate sample plan based on both audience make-up and research objectives. Once this stage is agreed, it’s time to create a thorough screener that only allows appropriate respondents to proceed to the main survey.

2. Set candidates up for success

A successful survey isn’t one that’s predicated on tripping up respondents. It’s in everyone’s interests to build an engaging survey that is relevant to the audience and – crucially – not over-long. Surveys should be mobile friendly and shouldn’t include numerous trap/trick questions that may confuse even the most genuine of participants. Trick questions can backfire if respondents get frustrated and abandon the survey or intentionally answer incorrectly just to “see what happens”. Likewise, if you think the use of “insider jargon” could be problematic, you may want to consider conducting research to identify how your audience speaks about a particular topic, so you can communicate with them in a way that makes sense and is more likely to return relevant responses.

3. Include an appropriate mouse trap

You will need to incorporate a range of question types designed to identify poor respondents, but make sure they’re targeted and fit for purpose. Employ a variety of open-end, grid, low-incidence, red-herring and conflicting-answer questions to weed out the weakest candidates. But don’t overdo it by adding multiple trap questions that are unrelated to the survey; instead, use actual survey questions and flag anomalous or inappropriate behaviors. Also, don’t throw out respondents at the first sign of concern; look for secondary data to confirm your suspicions of poor quality. Removing acceptable respondents because they’ve triggered a single flag depletes the potential respondent pool and risks biasing data at a time when we need more diverse voices.
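To make the confirm-before-removing idea concrete, here is a minimal sketch in Python. The flag names and the two-flag threshold are invented for illustration; they are not Imperium’s actual rules.

```python
# Hypothetical sketch: remove a respondent only when at least two independent
# quality checks agree, instead of discarding at the first triggered flag.
# Flag names and the threshold of 2 are illustrative assumptions.

def should_remove(flags: set[str]) -> bool:
    """Remove only when two or more independent checks have flagged the respondent."""
    return len(flags) >= 2

def review(respondents: dict[str, set[str]]) -> tuple[list[str], list[str]]:
    """Split respondents into (removed, kept) based on their accumulated flags."""
    removed, kept = [], []
    for rid, flags in respondents.items():
        (removed if should_remove(flags) else kept).append(rid)
    return removed, kept

removed, kept = review({
    "r1": {"speeding"},                  # one flag: keep, but monitor
    "r2": {"speeding", "gibberish_oe"},  # two independent flags: remove
    "r3": set(),                         # clean
})
```

A single flag keeps the respondent in the pool for secondary confirmation, which matches the advice above about not wasting acceptable respondents.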

4. Ditch the fraudsters and dupes!

Using the right tech at the right time will save you time and money: by utilizing survey data quality solutions like RelevantID® at the outset, you’ll be able to build up a detailed picture of respondents’ fraud potential. RelevantID maps a participant’s ID against dozens of data points (including geo-location, language and IP address) to weed out obvious fraud and dupes before they enter the survey. This will not only mean that fewer respondents will need to be manually reviewed/removed after survey data is collected but will also prevent large-scale attacks from bots and click farms.

5. Develop consistent, efficient and accurate data quality checks

Design an effective, efficient plan for removing bad data that is applied consistently to all respondents and run frequently, so you aren’t faced with a large number of removals at quota completion. Automating in-survey data reviews is one of the best ways to safely streamline the quality process. By utilizing solutions like QualityScore™, you can ensure your data checks are consistent and run in real time, while reducing the time and resources needed to conduct effective manual reviews. Our data shows that by using QualityScore, clients save about 85 percent of the time they would otherwise spend checking survey results to identify bad respondents.

 

Tim McCarthy is General Manager at Imperium. He has over 15 years of experience managing market research and data-collection services and is an expert in survey programming software, data analysis and data quality. Imperium is the foremost provider of technology services and customized solutions to panel and survey organizations, verifying personal information and restricting fraudulent online activities.

Automation Paving the Way to Standardize ‘Good Quality’ Data in Surveys

By Tim McCarthy, Imperium General Manager

As brands look to better understand the rapidly evolving requirements and motivations of their customers, the need for quality data has never been more urgent.

But, with demand for respondents at an all-time high, one of the central challenges facing the industry is that there’s no baseline of data quality on which all parties can agree – and with so much left to subjective reviews, it’s easy to see how disagreements on data quality continue to proliferate.

In principle, everyone involved – sample providers, market research agencies and brands alike – wants the same thing: good respondents. In practice, this means removing those who demonstrate some level of bad behavior during a survey without eliminating otherwise strong candidates who’ve provided one or two sub-optimal responses. Spotting bad respondents manually is trickier than it sounds and extremely time consuming. Moreover, without an agreed benchmark for quality, it’s not surprising that standards vary.

Respondents are often removed at the first sign of a poor response because it is very labor intensive to review the data more holistically and determine whether a poor response was an isolated incident (potentially due to survey setup or design) or part of a broader pattern of poor-quality behavior.

At Imperium, we believe that the answer lies in moving to a more automated respondent-scoring process. Reducing subjectivity and tailoring quality checks to the specifics of each survey is key to reaching a more balanced agreement on what constitutes “good” and “bad” respondents. A smoothly automated system also increases project speed while greatly decreasing the cost and duration of manual checks.

We’ve been reviewing data from our new QualityScore™ solution and have revealed some useful insights. We analyzed approximately 200K respondents across 125+ projects and found tangible links between the various types of behavior that produce an overall bad respondent score.

For example, our analysis revealed that those who scored in the bottom quartile for open-ends had a 40 percent likelihood of being in the bottom quartile for quality overall, while speeding (16 percent correlation) and straight-lining (10 percent correlation) were less reliable indicators of generally poor-quality respondents.

When it comes to banding respondents based on quality, QualityScore metrics have led us to some interesting observations. Our analysis shows that, on average, 8 percent of respondents fall into our poor-quality range, while 65 percent rate highly. This leaves roughly 27 percent whose results are less clear cut.

It’s an important group, comprising a number of respondents that may have triggered one or two flags, but have nevertheless scored within an acceptable range. Ditching all of these respondents at the first sign of concern will not only waste a significant percentage of the potential respondent pool, which could lead to difficulty in reaching quotas, but also risks biasing data at a time when listening to more diverse voices is critical.

Importantly, QualityScore uses machine learning to compare each respondent’s data against peers from that specific question/survey. For example, if any part of the survey is set up in a way that lends itself to straight-lining or poor Open-End responses, respondents will only be flagged for poor quality if there is other supporting evidence.
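To illustrate the peer-comparison principle described above, here is a simple Python sketch: a respondent’s open-end is flagged only when it is an outlier relative to other answers to the same question, so a question that naturally invites short answers doesn’t penalize everyone. The length metric, z-score statistic and cutoff are assumptions for illustration, not QualityScore’s actual (proprietary) model.

```python
# Illustrative peer-relative flagging: compare each answer's length against
# the distribution of answers to the SAME question, and flag only clear
# negative outliers. Threshold and statistic are invented for this sketch.
from statistics import mean, stdev

def flag_outliers(answer_lengths: dict[str, int], z_cutoff: float = -1.0) -> list[str]:
    """Flag respondents whose open-end length sits far below the peer mean."""
    values = list(answer_lengths.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # everyone answered alike: nothing stands out
        return []
    return [rid for rid, n in answer_lengths.items()
            if (n - mu) / sigma < z_cutoff]

# Three substantive answers and one near-empty one: only the outlier is flagged.
flagged = flag_outliers({"r1": 120, "r2": 110, "r3": 3, "r4": 115})
```

Because the baseline is recomputed per question, a survey section that lends itself to brief answers simply produces a lower mean, and no one is flagged without genuinely deviating from peers.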

Our data shows that clients using the fully automated QualityScore solution save about 85 percent of the time they would otherwise spend checking survey results to identify bad respondents. This not only provides time and cost savings for our clients, but, by reducing the potential for conflict between sample providers and market researchers, we hope it will provide a sound basis for driving data quality higher for the industry as a whole.

 

Tim McCarthy is General Manager at Imperium. He has over 15 years of experience managing market research and data-collection services and is an expert in survey programming software, data analysis and data quality. Imperium is the foremost provider of technology services and customized solutions to panel and survey organizations, verifying personal information and restricting fraudulent online activities.

New Imperium Solution Automates In-Survey Data Cleaning Process

QualityScore™ tool automatically assesses survey respondent quality to significantly improve data set accuracy and fielding efficiency

(NEW YORK – Feb 25, 2021) – Data quality solutions specialist Imperium (www.imperium.com) today announced the release of QualityScore, a fully automated, platform-agnostic tool that improves survey data quality, while reducing reliance on costly manual checks.

“We know this tool will save clients time and money, by providing valuable insights into how respondents are performing,” said Tim McCarthy, General Manager, Imperium. “We calculate that QualityScore will improve data accuracy by ~10%, resulting in savings of thousands of dollars per project. It will enable researchers and project managers to allocate more time and energy to their customers and to productive data analysis rather than spending hours reviewing data simply to check the quality.”

The new tool will not only ensure cleaner data but will also greatly reduce the frustration caused by having to return to field after closing quotas or ending a project, only to discover that poor respondents need to be removed or replaced.

“By implementing QualityScore, companies can have greater confidence in their completes,” explained McCarthy. “There’ll no longer be any need to either revisit surveys or to include sub-par respondents just so they can close fielding on time.”

QualityScore uses a sophisticated scoring algorithm to return a per-respondent quality rating that incorporates behavioral information, such as speeding, straightlining and poor OE responses, as well as passive data points such as mouse movement and browser activity, to paint a complete picture of the respondent’s attentiveness and data reliability. Because it’s completely automated – and customizable – it allows MR companies to focus on their business priorities without the distraction of bad data/respondents.
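As a rough illustration of how multiple signals can be folded into a single per-respondent rating, consider the sketch below. The signal names, weights and 0–100 scale are invented for illustration; QualityScore’s actual algorithm is proprietary and more sophisticated.

```python
# Hypothetical multi-signal scoring sketch: each detected red flag subtracts
# a weight from a perfect score of 100. All names and weights are assumptions.

WEIGHTS = {
    "speeding": 30,            # behavioral: finished implausibly fast
    "straightlining": 25,      # behavioral: identical grid answers
    "poor_open_ends": 25,      # behavioral: low-effort OE responses
    "no_mouse_movement": 10,   # passive: no pointer activity recorded
    "suspicious_browser": 10,  # passive: anomalous browser fingerprint
}

def quality_score(signals: dict[str, bool]) -> int:
    """Return 100 for a clean respondent, lower as flags accumulate."""
    penalty = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return max(0, 100 - penalty)

clean = quality_score({})
risky = quality_score({"speeding": True, "straightlining": True})
```

Weighting behavioral flags more heavily than passive ones reflects the ordering implied above, where active in-survey behavior is the stronger indicator.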

A recent review of online panels by industry specialists Grey Matter Research and Harmon Research reported that at least 90 percent of researchers are not taking adequate steps to ensure online panel quality – a statistic that has serious implications for the quality of MR survey data.

Researchers have long been aware of the threat that disengaged or fraudulent responses pose to the integrity of online panel surveys, yet tens of thousands of market researchers rely on those panels every day. While meticulous study design is the cornerstone of effective data collection, tools like QualityScore give researchers much greater control over the QA process without committing additional time and resources.

 

Targeting Better Quality Data

A recent study by Grey Matter Research and Harmon Research revealed that online panels are incredibly susceptible to respondent quality issues – a finding that threatens to undermine the trust in MR data.

The study’s researchers fielded an online questionnaire with a handful of the largest panel providers that included a range of tests and quality control measures designed to assess the caliber of respondents. Just under half (46 percent) of respondents failed to meet the researchers’ standards for inclusion due to multiple errors or outright fraud; overall, the report concluded that around 90 percent of researchers weren’t taking “sufficient steps to ensure online panel quality”. It’s a statistic which, if even remotely accurate, is deeply worrying.

The battle for better data

Among the quality issues identified were nonsensical responses to OE questions, failure to identify and remove straightliners, as well as a lack of fake or ‘red herring’ questions designed to weed out inauthentic responses. Crucially, the report showed stark differences between respondents whose identities the researchers had verified and those tagged as bogus: the bogus respondents significantly skewed results, rendering data ineffective, at best.

Research companies and panels face a constant battle to guard against the infiltration of fraudulent and disengaged respondents – including bots and click farms – into online surveys, but it’s becoming more difficult to detect them as their methodologies become increasingly sophisticated.

Obviously, panel companies are under significant pressure to provide fast, affordable data, a demand that doesn’t always go hand in hand with the need to provide quality. It’s not a problem that can be easily solved after the data is collected – without a lot of manual checking – so, it’s best tackled before results are aggregated.

Some QA can be addressed through study design. Every questionnaire should include measures to determine respondents’ validity. Data reviews should be scheduled during and after the field. Obviously, different types and lengths of study require different solutions – speeding issues are more obvious on longer questionnaires, for example, while straightlining won’t be a problem where there are no grids. Red herring questions can be readily identified by bogus respondents, so aren’t always effective.

Cleaning the respondent pool

The majority of bad or fraudulent respondents can be taken out of the pool before being given the opportunity to start a survey. Imperium’s data integrity solutions are fully automated and are designed to validate only those respondents who pass our stringent checks.

RegGuard® is a broad-spectrum, automated data-validation solution. It combines our flagship ID-validation API RelevantID® with Fraudience®, Real Answer® and Real Mail tools during registration to perform a customizable 360-degree check on each registrant – importantly, before they’re added to your panel.

It weeds out fraudsters and dupes while checking each registrant’s IP reputation. Items are scored and flagged, making it easy to identify registrants for processing or removal. When used in concert with Verity®, our self-reported data-authentication tool, and Address Correction, our CASS-certified postal record-checking tool, it provides a robust first line of defense for MR companies and panels.

Looking forward

We’re adding a new tool to our collection in the New Year that will give MR companies and panels even more control over data quality using an entirely automated process that will totally transform survey results, saving time, money and resources. Stay tuned!

Register here for more information on this industry-first development.

Prioritizing Quality: Tackling the Covid-19 Spike in Survey Fraud Head-on

By Tim McCarthy, Imperium General Manager

The Covid-19 pandemic has already taken a devastating toll on the health of tens of millions of people across the globe. But it’s also opened up fresh opportunities for fraud from unscrupulous operators. News agency Reuters estimates that losses from coronavirus-related fraud and identity theft in the U.S. alone have reached nearly $100 million since March this year, with complaints about scams at least doubling in most states.

As you might expect, much of this criminal activity is targeted at ordinary American citizens – the kind of shopping scams and phony ‘cures’ that make capital out of misery, while creating massive anxiety for people already suffering from the physical, mental and economic distress caused by the pandemic.

But fraudsters are also causing consternation for businesses that rely on collecting accurate data – like the survey companies and panels we work with every day at Imperium. In a sector where trust is at a premium, any increase in fraudulent activity can undermine core intelligence gathering – and carry the potential for ruining hard-won reputations.

We know from our own experience that survey fraud has been rising steadily since COVID-19’s first wave. In spring this year we identified a 25 percent increase in fraudulent survey respondents; more recently we have seen this spiking to around double the expected numbers. And, while we’re witnessing an inevitable rise in fraudsters trying to enter surveys, even verified respondents are providing less actionable insight in their open-ends (OEs), with the poor OE rate up by about 40-50 percent.

Dupe rates are particularly interesting: although they dipped at the outbreak of COVID-19, the numbers have not only bounced back but are now rocketing. Although it appears anomalous, there’s a logical reason for this pattern. As fears grew over the spread of the coronavirus earlier in the year, market research activities slowed correspondingly, reducing MR companies’ reliance on multi-sourcing for their projects and resulting in a greater-than-sufficient supply of respondents, with commensurately lower levels of overlap.

Over the past four or five months, however, with the market rapidly rebounding, the survey duplicate trend is swinging in the other direction: the number of projects per month is climbing steeply, accompanied by a sharp leveling-off in panelist supply. All of which means that MR agencies are now having to rely on multi-sourcing methods to meet their quotas. This supply-demand imbalance naturally fuels higher dupe rates, which are likely to persist until the shortfall is rectified.

Event-driven impacts – like those we’re seeing as a result of the current crisis – will often lead to short- and medium-term problems, but there’s no one-size-fits-all solution to improving data accuracy in the long term. Some believe that adopting a more human-centric approach to data collection could help address some of the problems created by the wholesale movement of market research methodologies to online platforms. While there’s little likelihood of a return to the resource-intensive days of in-person interviews, online panels can contribute valuable, high-quality feedback, as long as we take the steps needed to maintain a powerful connection between brands and their consumers.

Survey companies have an important part to play in this dynamic by creating robust systems capable of withstanding the closest scrutiny. We know that surveys are all-too-easily inundated with bots, survey farms, and fraudsters, and that quotas can be easily met with misinformation. Offering brands access to millions of consumers only holds currency if you have the response rates to back your claims up.

We recommend taking an agile approach to research. Adopting more flexible methodologies enables the creation of a highly iterative model that allows you to engage more consistently and dig deeper for more granular information when necessary. This only works if you prioritize the respondent experience – providing a better UX will always result in better outputs.

Whatever approach you favor, implementing robust multi-level security measures is essential. Current conditions are creating a perfect storm for the depletion of trust in MR data: (1) the proliferation of fraudsters attempting to enter surveys, just as (2) MR companies are being forced to multi-source their projects, at a time when (3) real respondents are returning less actionable OE information. It’s more important than ever for research companies to incorporate the necessary tools into their surveys to ensure these conditions are not allowed to negatively impact the overall quality of the insights they are providing.

Tim McCarthy is General Manager at Imperium. He has over 15 years of experience managing market research and data collection services and is an expert in survey programming software, data analysis and data quality. Imperium is the foremost provider of technology services and customized solutions to panel and survey organizations, verifying personal information and restricting fraudulent online activities.

Imperium Supercharges Industry-leading Anti-fraud Solution

Upgraded RelevantID® tool provides powerful 360-degree response to escalating problem of survey dupes and frauds

(NEW YORK – Oct 26, 2020) – Data quality solutions specialist Imperium (www.imperium.com) today announced the release of a significantly upgraded version of its flagship ID-validation tool RelevantID®.

This major release is designed to help market research and panel organizations combat the rise of highly sophisticated synthetic identity frauds that are becoming increasingly difficult to catch using conventional fraud-detection models.

From today, the company’s powerful fraud-blocking tool Fraudience® will be integrated into RelevantID® as standard. New and existing clients will now automatically enjoy access to Fraudience®, which monitors respondents’ behavior patterns against a proprietary research-based algorithm, quickly identifying potentially fraudulent respondents.

New RelevantID® additionally includes FraudProbabilityScore, a machine-learning model that assesses passive and behavioral data, returning an extremely precise fraud assessment that detects fraud, bots, and jumpers/ghost completes in surveys.

“We’re confident that RelevantID® offers a genuinely multi-level approach to fraud detection, combining passive machine data with behavioral data to provide a detailed picture of respondents’ fraud potential, while fully complying with stringent privacy laws like GDPR,” said Tim McCarthy, General Manager, Imperium.

McCarthy adds: “Businesses rely on us to consistently deliver top-level tools that assure data quality. Since supercharging RelevantID® with Fraudience® and FraudProbabilityScore, we’re seeing fraud capture scores skyrocket—on average by more than 5x—from 2.5% to 13.8%. Because we’re using a machine-learning model, our tools are recognizing new behavior patterns every day, making it easier to stay ahead of determined fraudsters.”

The announcement chimes with the findings of a report on synthetic identity fraud published this year by the Federal Reserve. The authors of the paper referred to an ID Analytics study estimating that traditional fraud-detection models fail to flag 85-95 percent of likely synthetic identity frauds.

The extensive link-analysis process that’s a core part of Imperium’s survey data quality solutions mirrors Federal Reserve recommendations that businesses adopt a ‘layered fraud mitigation approach’ incorporating ‘manual and technological data analysis’ to combat the most sophisticated fraud technologies.

About Imperium

Imperium provides a comprehensive suite of technology services and customized solutions to verify personal information and restrict fraudulent online activities. The world’s most respected market research and e-commerce businesses rely on Imperium’s superior technology and solutions to validate their customers’ identities, verify data accuracy, automate review processes and uncover the intelligence that improves profitability. The company’s flagship product RelevantID® is widely recognized as the market research sector’s de facto data-quality and anti-fraud tool. In recent years, Imperium has invested heavily in machine learning, NLP and neural networks, capitalizing on its domain knowledge to expertly map fraudsters’ behavior. Last year, Imperium prevented 1 billion instances of fraud at source. www.imperium.com

Press contact:
connect@theflexc.com

The Rise of the Deepfake: Is Synthetic ID Fraud Unstoppable?

By Tim McCarthy, Imperium General Manager

While exponential developments in Artificial Intelligence (AI) and Machine learning (ML) technology are creating fresh opportunities for advancement in almost every field of human endeavor, there are also dark forces at play.

A recent study by a prestigious British university rated so-called deepfakes (hyper-realistic audio and video trickery) as the ‘most worrying application of artificial intelligence’ because of their potential to further the aims of those involved in crime and terrorism. The report’s authors were concerned not only by the difficulties in detecting fake content but by the societal harm that could result from a universal distrust of media. Simply put, if everything can be faked, how do we know what’s real?

The hand-wringing over deepfake technology may feel like a million miles away from everyday life. Many of us will only ever encounter it as a kind of cinematic smoke and mirrors used to reverse the aging process on movie stars. You might even dabble with it yourself via one of the deepfake apps that allow people to insert themselves into pop videos or movie scenes.

But the underlying tech has serious implications for us all. At Imperium, we spend a lot of time anticipating the impact of increasingly sophisticated technology – like deepfakes – on ID fraud. We know that, globally, synthetic identity fraud is on the rise. According to McKinsey, it’s the fastest-growing type of financial crime in the U.S.

A synthetic ID can be created using a combination of real and fake information, some of which may be stolen or hacked from a database or even acquired on the dark web. If the activity only has a limited impact on the victim’s PII (personally identifying information), it could go unnoticed for months. Adding deepfake and synthetic AI tech into the mix just makes it easier for the perpetrators to build up more detailed synthetic IDs that lead to more serious levels of fraud.

It’s a long game. Small credit applications at the beginning attract little attention and allow fraudsters to establish what’s known as a ‘thin file’ that can be used to move up to more substantial targets. It can be an expensive business. For our panel and survey clients, fraudsters not only impose a significant financial cost on their organizations but also have the potential to chip away at the trusted brand they’ve worked so hard to build.

It’s not all doom and gloom. Accurately verifying applicants’ IDs during onboarding is critical – whether you’re vetting survey respondents, checking a loan application or verifying ballot requests. The good news is that the same AI that’s being used to create deepfakes can be employed to spot synthetic IDs.

It’s a process that requires continuous monitoring and development, however. To create a robust system, multiple layers of security – that go over and above standard two-factor and knowledge-based authentication – will be required. These systems should incorporate both manual and technological data analysis, and deploy tactics including screening for multiple users with the same SSN or with accounts that originate from the same IP address, performing link analysis to check for PII overlaps, and comparing applicants’ behavior against ML-enabled algorithms.
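One of the tactics named above, screening for multiple accounts sharing the same identifier, can be sketched in a few lines of Python. The field names and example data are invented for illustration; production link analysis would cover far more attributes.

```python
# Illustrative link-analysis step: group applications by a shared identifier
# (e.g. IP address or SSN) and surface any value used by 2+ applicants for
# review. Field names and data are hypothetical.
from collections import defaultdict

def shared_identifier_groups(apps: list[dict], field: str) -> dict[str, list[str]]:
    """Map each identifier value used by 2+ applicants to those applicant IDs."""
    by_value = defaultdict(list)
    for app in apps:
        by_value[app[field]].append(app["id"])
    return {value: ids for value, ids in by_value.items() if len(ids) > 1}

apps = [
    {"id": "a1", "ip": "203.0.113.7"},
    {"id": "a2", "ip": "198.51.100.4"},
    {"id": "a3", "ip": "203.0.113.7"},  # same IP as a1: review together
]
suspect = shared_identifier_groups(apps, "ip")
```

The same grouping function can be rerun per attribute (SSN, device fingerprint, mailing address), with the resulting overlaps feeding the PII link analysis described above.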

There is, potentially, another weak point to consider: Synthetic IDs are sometimes too perfect. Real people have real histories that form a patchwork of physical and digital evidence in dozens of different data systems. Natural data trails are hard to fake because their reach stretches back years and includes student loan details, old addresses and property records. Synthetic IDs either include some real details that don’t match other records, or are completely fake, with data that’s a little too neat to be true.
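One way to operationalize that observation is a simple ‘data trail’ heuristic: score each identity by how many independent systems its history touches and how many years that history spans. The source names, scoring formula and sample values below are invented for illustration – a real system would use far richer signals:

```python
def trail_depth(history):
    """Score an identity's data trail.

    history: list of (source_system, year) tuples, e.g. ('student_loans', 2009).
    A genuine history tends to touch many independent systems over many years;
    a synthetic ID tends to be shallow, recent, or suspiciously uniform.
    """
    if not history:
        return 0
    sources = {src for src, _ in history}
    years = [yr for _, yr in history]
    # Breadth (distinct systems) weighted by span (years of coverage).
    return len(sources) * (max(years) - min(years) + 1)

# A real person's patchwork of records vs. a freshly minted synthetic ID.
real = [("student_loans", 2009), ("address_file", 2012), ("property_records", 2018)]
synthetic = [("credit_app", 2023), ("credit_app", 2024)]
assert trail_depth(real) > trail_depth(synthetic)
```

Low-scoring identities wouldn’t be rejected automatically – young applicants and recent immigrants legitimately have thin files – but they would warrant additional verification steps.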

While tracking and trapping fraudsters will increasingly require the use of more sophisticated anti-fraud systems, it’s important to balance the requirement for greater transparency with the individual’s right to privacy. Which is why it’s crucial to partner with high-caliber ID verification providers who can implement world-class solutions while complying with the most stringent GDPR and CCPA regulations on information processing, storage and protection.

The importance of using AI ethically – particularly to support uses related to fairness, bias and non-discrimination – is a discussion-worthy topic on its own. But it’s my belief that we can – and should – push back against AI-based fraud with an equally powerful AI-enabled fraud-detection response.

With more and more companies and consumers interacting online, and where criminal transgressions can damage lives and destroy reputations, it’s our duty to mount a determined and effective defense of our data.

Tim McCarthy is General Manager at Imperium. He has over 15 years of experience managing market research and data collection services and is an expert in survey programming software, data analysis and data quality. Imperium is the foremost provider of technology services and customized solutions to panel and survey organizations, verifying personal information and restricting fraudulent online activities.

Imperium Announces Pioneering Data Quality Certification Program

Gives stamp of approval to market research and panel organizations that pass muster

NEW YORK, Sep 21, 2020 – Imperium, one of the foremost providers of data-quality solutions, today announced the launch of a certification program designed to offer market research and panel organizations the opportunity to gain accreditation from one of the industry leaders in fraud prevention.

Imperium, which is best known in the market research industry for its data quality and anti-fraud tools, is introducing this new certification program to set a benchmark for dedication to data quality. Believed to be the first of its kind in the sector, Imperium certification will be awarded to businesses that correctly and consistently deploy proven data-quality and anti-fraud services in addition to passing multiple quality and operational checks to validate best practice. Imperium will confirm what checks companies have in place, how often they are utilizing them, and what their settings are to determine whether they qualify.

Imperium certification is the next logical step for market research companies and panels looking to demonstrate a long-term commitment to delivering high-quality data to their customers. The certification process includes close analysis of anti-fraud tools to ensure the widest possible implementation across respondents and survey traffic. An annual review monitors continued compliance.

Cint and Dynata are two of the first companies to win Imperium’s Data Quality Certified seal.

“With digital fraud on the rise, it’s important that agencies are able to trust the mission-critical information they rely on to build close relationships with customers and to gain a competitive advantage in a fast-moving market,” said Tim McCarthy, General Manager, Imperium.

McCarthy added: “Certification will enable commissioning brands to confirm that their supplier is doing everything possible to deliver the best and most accurate data, time after time. We only reward the highest levels of compliance because our professional reputation rests on the rigor of our anti-fraud and data-quality services. Companies that qualify for Imperium certification can be trusted for data quality.”

Imperium’s initiative is timely. According to Imperium’s own research, as many as 38 percent of survey respondents are duplicates, fraudsters or bots. In a recent Experian study, businesses surveyed admitted that 28 percent of their customer and prospect data was likely to be inaccurate, with only half of organizations polled considering the current state of their CRM or ERP data as ‘clean’. This, despite 98 percent of organizations believing that high-quality data was either ‘extremely important’ or ‘important’ in achieving their business objectives.

Imperium’s data integrity solutions deliver validated datasets via a series of invisible, fully automated processes, returning trusted, verified insights. More details can be found here.

About Imperium

Imperium provides a comprehensive suite of technology services and customized solutions to verify personal information and restrict fraudulent online activities. The world’s most respected market research and e-commerce businesses rely on Imperium’s superior technology and solutions to validate their customers’ identities, verify data accuracy, automate review processes and uncover the intelligence that improves profitability. Founded in 1990, Imperium pioneered digital technologies to validate identity, verify data accuracy, and automate review processes and is now a leader in online fraud detection and identity verification. The company’s flagship product RelevantID® is widely recognized as the market research sector’s de facto data-quality and anti-fraud tool. In recent years, Imperium has invested heavily in machine learning, NLP and neural networks, capitalizing on its domain knowledge to expertly map fraudsters’ behavior. Last year, Imperium prevented 1 billion instances of fraud at source.

Federal Reserve Shines a Spotlight on Synthetic Identity Fraud

Back in June, the Federal Reserve released a white paper that takes a deep dive into the subject of synthetic identity fraud.

The report – the latest in a series – discusses the background to the growing incidence of this kind of fraud, while listing factors crucial to mitigating its prevalence, including technological advancements like artificial intelligence (AI) and machine learning (ML), as well as regulatory influences and information sharing.

Most examples of synthetic identity fraud are perpetrated by fraudsters who combine real information such as names and Social Security numbers (SSNs) with fictitious data to create plausible new identities. These fraudsters often incorporate the personally identifiable information (PII) they’ve acquired on the dark web to create realistic identities that are hard to detect using conventional fraud detection models.

‘Hard to detect’ may well be an understatement: the authors of the paper point to an ID Analytics study estimating that traditional fraud models fail to catch 85% to 95% of likely synthetic identities.

If that’s true, it amounts to a vast torrent of fraud that’s simply passing undetected – at the same time costing businesses a fortune.

A spokesperson for the Federal Reserve concluded that organizations had the best chance of flagging synthetics – and of minimizing losses – by using a ‘layered fraud mitigation approach’ incorporating both ‘manual and technological data analysis’.

A multi-level approach to fraud detection

It’s an approach that resonates with us here at Imperium. We’ve been developing anti-fraud and data integrity solutions for over fifteen years and have engineered a robust range of solutions that are designed to operate in concert to deliver the most comprehensive toolkit available to market research companies.

Recently, we’ve been refining – and redefining – our data quality solutions to ensure our customers have access to 360-degree protection, enabling them to maintain absolute data integrity at all times.

Our flagship ID-validation tool RelevantID® is designed to map survey respondents’ ID against dozens of data points to create a fraud profile, as well as to screen for duplicate users attempting to join the same survey. But, as fraudsters adopt increasingly sophisticated approaches against a backdrop of tighter privacy laws, we’ve recognized the need to further develop and diversify the way that RelevantID® operates. Which is why we’ve introduced a couple of key upgrades and additions to the tool.

The big news is that we’ve decided to incorporate our powerful fraud-blocking tool – Fraudience® – into RelevantID® at no additional cost. Companies currently utilizing RelevantID® will now automatically enjoy access to the benefits of Fraudience® – a solution that monitors respondents’ behavior patterns against our proprietary algorithm (based on 3 billion transactions) to identify potentially fraudulent respondents. This upgrade will boost RelevantID®’s ability to catch dupes and fraudsters by passively monitoring machine information, as well as respondent behavior, creating the most robust and accurate solution in the market.

We’ve also enhanced RelevantID® with a FraudProbabilityScore. This score utilizes a machine learning (ML) model that assesses both passive and behavioral data, returning an extremely precise fraud assessment which catches more fraud, bots, and jumpers/ghost completes.

We’re confident that RelevantID® now offers a genuinely multi-level approach to fraud detection that cleverly fuses passive machine data with behavioral data to provide a detailed picture of respondents’ fraud potential. When deployed with Imperium’s OE response-validation tool Real Answer®, we believe our clients wield a potent weapon in the fight against fraudsters.
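The internals of the FraudProbabilityScore model are proprietary, but the general idea of fusing passive machine signals with behavioral signals into a single probability can be sketched with a simple logistic combination. The feature names, weights and bias below are invented for illustration; in a real system the weights come from a trained model:

```python
import math

def fraud_probability(passive, behavioral, weights, bias=0.0):
    """Combine passive (device/machine) and behavioral feature scores into one
    probability via a logistic function. Weights are hand-set here purely for
    illustration; a production model learns them from labeled transactions."""
    features = passive + behavioral
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative: risky device fingerprint plus rushed, erratic answer behavior.
p = fraud_probability(passive=[0.9, 0.7], behavioral=[0.8],
                      weights=[1.5, 1.0, 2.0], bias=-2.5)
```

The appeal of combining the two signal families is that each covers the other’s blind spots: a fraudster can spoof a device fingerprint or mimic human answer patterns, but doing both consistently is much harder.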

Making the right connections

The extensive link-analysis process at the core of Imperium’s survey data quality solutions mirrors the Federal Reserve’s recommendation that multiple instruments be checked and cross-checked to identify relationships or common characteristics among synthetic identities, helping to combat even the most sophisticated fraud techniques.
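Link analysis of this kind can be pictured as clustering: applications become connected whenever they share a PII value, and any cluster containing more than one application warrants scrutiny. A minimal union-find sketch, with illustrative field names rather than any actual Imperium schema:

```python
from collections import defaultdict

def link_clusters(records, fields=("ssn", "phone", "address")):
    """Group records into clusters connected by any shared PII value (union-find)."""
    parent = {r["id"]: r["id"] for r in records}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    first_owner = {}  # (field, value) -> id of first record seen with that value
    for r in records:
        for f in fields:
            value = r.get(f)
            if not value:
                continue
            key = (f, value)
            if key in first_owner:
                union(r["id"], first_owner[key])  # shared PII links the records
            else:
                first_owner[key] = r["id"]

    clusters = defaultdict(set)
    for r in records:
        clusters[find(r["id"])].add(r["id"])
    # Only multi-record clusters suggest synthetic-identity activity.
    return [ids for ids in clusters.values() if len(ids) > 1]
```

Note the transitive effect: if record A shares an SSN with B, and B shares a phone number with C, all three land in one cluster even though A and C have no field in common – exactly the kind of relationship the Federal Reserve’s cross-checking recommendation is meant to surface.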

Imperium solutions are:

  • Cloud-based and platform-independent
  • Invisible to users with no impact on experience
  • Customizable with flexible deployment options
  • GDPR- and CCPA-compliant

If you’d like to learn more about our full suite of solutions or to request a free trial, contact us at sales@imperium.com to discuss your requirements with an expert member of our team.