Ensuring Authenticity in an Era of Qualitative Research Fraud

In market research, we're constantly barraged with articles, webinars, and content on quantitative data quality and combating survey fraud. Far less attention is paid to qualitative data quality, yet the field is facing its own sophisticated challenges.

Fraudsters have long exploited poorly designed screeners to “qualify” for studies. Human-to-human re-screening of respondents became a qualitative research best practice, in part, to suss out these people. 

But automated systems for re-screening qualitative research respondents have allowed a new generation of fraudsters to proliferate. They use techniques to mask their true location. They pass through video or text screenings that lack a human investigator to look for subtle “tells.”

That’s why, at Campos, we’ve not only stuck to human-to-human re-screening of respondents, but we’ve doubled down. We now spend more time vetting research participants. 

Read on to learn more about how we vet our research participants and why this level of engagement is so important.

Catching Qualitative Research Fraudsters

Most recruitment projects at Campos start with a screening survey. Our team designs a survey that will be used to qualify or disqualify people for a study. We are careful to obscure who precisely we’re looking for so people can’t fraudulently screen in, but we always do one-to-one follow-ups with anyone who qualifies via the screener to further vet who they are and if they qualify.

The “Man in the Baseball Cap”

In one recent example, we saw just how far some people will go to try to “cheat” their way into a study. We were conducting an internal study focused on prospective college students. When one of our recruitment team members reached out to a respondent we’ll call Mark, she expected to be speaking with a college-bound teenage boy. Based on his screening survey answers, he was seemingly a perfect match for what we were looking for.

But when the video re-screen began, she didn’t see a teen; she saw a middle-aged man. Despite his best efforts – a strange, fake-sounding voice, Gen Z slang, a “youthful” sweatshirt and hat – she caught on immediately and screened him out (and blocked him from participating in any further studies with Campos).

The Limitations of Automated Vetting Technologies 

Other fraudsters are harder to spot. In a separate study, Campos was recruiting 75 prospective college students for an asynchronous diary study. To reach our recruitment goals, we used several methods to source prospective respondents, including social media and a third-party sample provider (essentially another company that Campos hires to connect it with qualified respondents). 

Even when we’re using a third-party vendor like this, we apply our own layer of rigorous screening to ensure the “Campos Standard” is met. For this project, we found during our re-screening process that the outside sample partner’s automated vetting system had let multiple fraudulent would-be participants through. Our human-to-human re-screening again allowed us to ferret them out and replace them with thoroughly vetted participants.

This speaks to the importance of having an in-house recruitment team that is ultimately responsible not just for meeting our sample goals, but for the quality of the recruit. If the people we recruit aren’t good research participants, or even worse, aren’t who they say they are, the research is pointless. That’s why we put far more time and energy into this than other research houses do.

The Irreplaceable Value of Experience

While the industry moves toward automation, there is an irreplaceable value in veteran field management. At Campos, this is spearheaded by David Blaha, our Qualitative Operations Manager. David has more than 15 years of experience leading complex recruits; he’s seen it all. He leads a recruitment team that includes many others who have been with us for years and together they act as our human firewall.

The Human Firewall

Both our quantitative and qualitative research teams use a flagging system to identify potential fraudsters as well as low-quality responses/respondents. In the qualitative space, some of these flags are things an automated system should catch, while others rely on human intuition and experience. For example: 

  • Digital Discrepancies: Most qualitative research projects will have some level of geographic screening criteria. A standard first line of defense when it comes to re-screening participants is checking their location metadata. If a respondent claims to be in Florida but their IP address reveals they’re in India, they’re likely not who they say they are. Many automated vetting technologies weed out fraudsters using location data.
  • Pattern Recognition: Less easy to catch, though, are “professional respondents,” that is, people who make a living (or at least a side hustle) fraudulently qualifying for and participating in market research. They may cycle through multiple identities in order to figure out how to qualify and then, once they do, they role play. Sometimes these people are part of wider networks of fraudsters who share information about how to qualify for studies. Recognizing these people is a skill that requires human intuition and experience. Sometimes we can literally spot them (we’ve seen them try this before); other times it’s recognizing patterns in how they respond and act. And once you know how to recognize these patterns, it’s hard to miss them when you see them in the wild (cough…some of the political focus groups we see covered in national media…cough).
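For readers curious what the "Digital Discrepancies" check looks like in practice, here is a minimal sketch. It is purely illustrative, not Campos's actual tooling: the field names, the static IP-to-region table, and the sample respondents are all invented for the example, and a real system would query a geolocation service instead of a hardcoded lookup.

```python
# Hypothetical sketch of a location-mismatch flag.
# A real pipeline would replace ip_region_lookup with a
# geolocation service; this static table is for illustration only.

def flag_location_mismatch(respondent, ip_region_lookup):
    """Return True when the region inferred from the respondent's
    IP address conflicts with the region they claimed."""
    claimed = respondent["claimed_region"].strip().lower()
    observed = ip_region_lookup.get(respondent["ip"], "unknown")
    # Only flag when we have a confident, conflicting observation.
    return observed != "unknown" and observed.lower() != claimed

# Invented example data (IPs are from documentation-reserved ranges).
ip_region_lookup = {
    "203.0.113.5": "Florida",
    "198.51.100.7": "India",
}
respondents = [
    {"id": "r1", "claimed_region": "Florida", "ip": "203.0.113.5"},
    {"id": "r2", "claimed_region": "Florida", "ip": "198.51.100.7"},
]

flagged = [r["id"] for r in respondents
           if flag_location_mismatch(r, ip_region_lookup)]
print(flagged)  # ['r2'] -- the claimed-Florida, observed-India respondent
```

Note that an automated flag like this only surfaces candidates for review; as the examples above show, the judgment call still belongs to a human re-screener.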

Beyond Fraud

Our team also leverages its decades of experience to make sure the person we’re re-screening will be a good research participant. This may seem obvious to some, but if you’ve ever observed a focus group with painfully quiet, or worse, combative participants, you get how important this is.

So re-screening is about more than listening for the correct responses to our qualifying criteria. We listen for engagement, patience with our line of questioning, willingness to share a point of view, and authenticity. If a respondent seems annoyed by repetitive questions or provides one-word answers, we know they won’t have the expressiveness required for a deep qualitative dive.

Getting Research Participants to Show

Once we have screened and re-screened our participants, we’re ready for research to commence. But recruiting respondents is only half the battle. We need them to actually show up and participate in the research! Oftentimes this is easier said than done.

The Engagement “Sweet Spot”

Campos has exceptionally high “show rates” – the percentage of recruited participants who actually “show” for their research session or sessions. In one recent example, we recruited 48 people to participate across 6 virtual focus groups. All 48 showed: a 100% show rate.

Our approach to achieving high show rates is rooted in high-touch communication rather than automated pestering. This philosophy balances offering an incentive with a strategic cadence of touchpoints. While there are some best practices here (e.g., sending a reminder 24 hours before the session), our engagement is not a one-size-fits-all approach. Instead, we customize based on our interactions with each participant. Some people may require more touchpoints, or touchpoints via text rather than phone. It depends on the person. We want to ensure we’re reminding them of what they’ve committed to without annoying them. It can be a delicate dance.

For longitudinal studies where the risk of fatigue is much higher, this “meet them where they’re at” strategy is even more important. By keeping notes on individual schedules and providing consistent, human reminders, we ensure participants stay engaged and compliant without feeling overloaded. Our engaging-but-not-annoying model is currently working extremely well on a yearlong longitudinal study we’re conducting that just entered month 10.

Better Recruits = Better Insights

Ultimately all the work we put into recruitment and respondent engagement is about producing high-quality research.

At Campos, our in-house moderators work closely with David and his team throughout the entire process:

  • These moderators collaborate with the recruitment team to design our screening surveys, so everyone is aligned on who we want to include in the research and why.
  • There’s a lot of gray area in recruitment – for example, someone may not technically qualify based on all the screening questions, but actually seems like a great fit for the project based on what we’ve learned about them. Or, someone may technically qualify but our recruiters feel they shouldn’t be included for other reasons. Our recruiters and moderators discuss all of these borderline cases so we feel great about the final group of respondents.
  • As a result of our deep experience and collaborative process, our moderators have complete confidence in our team’s ability to recruit real, engaging respondents, whether it’s for interviews, focus groups, or asynchronous research. That confidence allows our moderators to focus on developing superior quality discussion guides and preparing for the research sessions themselves. We know we’ll have productive conversations because we know we’ll have quality participants.

We see this in-house synergy as one of our secret weapons that allows us to outperform other research firms.

Quality is a Choice, Not a Filter

Qualitative recruitment isn’t just about cleaning data; it’s about ensuring the foundation of the research is grounded in authentic human truth. In an industry saturated with automated “solutions,” we believe that true quality remains a deliberate choice—one rooted in experience, vigilance, and rigor. 

Want to learn more about how we design and execute qualitative research grounded in human truths? Reach out.