How Pollfish Provides Quality Data and Prevents Survey Fraud (2023 Update)

Pollfish implements a variety of procedures to ensure the highest quality of data. Using a combination of proprietary techniques and machine learning, our platform and our team of technical experts prevent survey fraud at every turn.

When combined with our rigorous quality checks, our participant pool of more than 800 million enables us to discard responses that don’t meet our quality standards without sacrificing sample size. No question is left open to survey fraud, as the Pollfish platform stamps out inaccurate or poor-quality responses.

As a result, we deliver higher quality data than our less-selective, panel-based counterparts.

What is Survey Fraud?

Survey fraud, or market research fraud, occurs when survey respondents submit fraudulent or bogus responses. This can happen accidentally, such as when respondents experience survey fatigue, or deliberately.

What would prompt a respondent to do so deliberately? Offenders may be your competitors (especially if the survey mentions your brand or makes it apparent that your brand is conducting it), bots, click farms, or respondents eager to finish the survey just to receive the incentive.

Thus, there are a number of ways respondents can contribute to survey fraud, such as:

  • Providing nonsensical (i.e., gibberish) answers
  • Breaking rules
  • Answering suspiciously quickly
  • Hiding their IP address via the use of a VPN
  • Leaving one-word responses in open-ended questions that ask for an in-depth answer
  • Flatlining

As for the last item, remember the old advice to “just choose B” all the way through if you didn’t study for a test? That would earn an “F” from us. Participants who choose the same answer repeatedly, try to take the same survey again, or attempt to submit multiple surveys in quick succession don’t pass our test.

Natural Language Processing for Open-Ended Answers (New Feature 2023)

In addition to the various checks explained in this article, we recently released an even more advanced quality check: one for open-ended questions. It evaluates the submitted answers to ensure they are of sufficient quality, applying artificial intelligence, namely Natural Language Processing (NLP).

NLP refers to the ability of computers to understand and interpret written or spoken language, much like a human would. Our research platform applies its new NLP methods to the open-ended answers a respondent gives, evaluates their quality, and then determines whether or not to disqualify that respondent.
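
Pollfish hasn’t published the model behind this check, so the following is only a minimal sketch of the idea: simple lexical heuristics (vowel coverage, consonant runs, answer length) stand in for the proprietary NLP scoring, and every function name, weight, and threshold here is an illustrative assumption.

```python
# Illustrative stand-in for an NLP quality gate on open-ended answers.
# The real Pollfish model is proprietary; this uses simple lexical heuristics.

import re

VOWELS = set("aeiou")

def looks_like_gibberish(token: str) -> bool:
    """Flag tokens with no vowels or implausibly long consonant runs."""
    letters = re.sub(r"[^a-z]", "", token.lower())
    if not letters:
        return False
    if not VOWELS & set(letters):
        return True
    # Four or more consecutive consonants is rare in English words.
    return bool(re.search(r"[^aeiou]{4,}", letters))

def answer_quality_score(answer: str) -> float:
    """Return a 0..1 score; higher means more plausible natural language."""
    tokens = answer.split()
    if not tokens:
        return 0.0
    gibberish = sum(looks_like_gibberish(t) for t in tokens)
    length_bonus = min(len(tokens) / 10.0, 1.0)  # reward in-depth answers
    return max(0.0, (1 - gibberish / len(tokens)) * 0.7 + length_bonus * 0.3)

def should_disqualify(answer: str, threshold: float = 0.5) -> bool:
    return answer_quality_score(answer) < threshold

print(should_disqualify("dsfjkn dfnksj ifjodf"))  # True: mostly gibberish
print(should_disqualify("I buy this brand because it is affordable and reliable."))  # False
```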

Respondent Verification

We further verify respondents by checking ahead for duplicate IDs via IP or MAC addresses, Google Advertising IDs, and mobile device identifiers, and we work with our manually vetted publishers, who send unique IDs as an added layer of protection. In-survey questions are designed to add another layer of security against survey fraud, such as requesting the answer to a simple math equation or including identical questions with the response options reordered to verify answer consistency.
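
As a rough illustration of this duplicate-ID screening, here is a minimal sketch. The field names (ip, mac, gaid, device_id) and the in-memory set are assumptions about one way such a check could be structured, not Pollfish’s actual schema or infrastructure.

```python
# Hypothetical sketch of duplicate-respondent screening across identifiers.

from dataclasses import dataclass

@dataclass(frozen=True)
class Respondent:
    ip: str
    mac: str
    gaid: str        # Google Advertising ID
    device_id: str   # mobile device identifier

class DuplicateScreen:
    def __init__(self):
        self._seen: set[tuple[str, str]] = set()

    def is_duplicate(self, survey_id: str, r: Respondent) -> bool:
        """True if any identifier has already completed this survey."""
        keys = [(survey_id, v) for v in (r.ip, r.mac, r.gaid, r.device_id) if v]
        if any(k in self._seen for k in keys):
            return True
        self._seen.update(keys)
        return False

screen = DuplicateScreen()
a = Respondent("203.0.113.7", "AA:BB:CC:DD:EE:FF", "gaid-123", "dev-1")
b = Respondent("198.51.100.9", "11:22:33:44:55:66", "gaid-123", "dev-2")  # same GAID
print(screen.is_duplicate("survey-42", a))  # False: first sighting
print(screen.is_duplicate("survey-42", b))  # True: GAID already seen
```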

Zero Tolerance For Bots

In addition, we have zero tolerance for bot-friendly VPNs, incomplete surveys, and other suspicious activities. We reject responses tied to any behavior we deem questionable, from answering open-ended questions with nonsense to attempting to sign in from multiple countries at once. We are also alerted if respondents spend an implausible amount of time on questions within the survey.
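
One of these signals, sign-ins attributed to multiple countries at once, can be sketched very simply. The one-hour window and the per-respondent event log below are illustrative assumptions, not Pollfish’s actual thresholds.

```python
# Hypothetical sketch of a multi-country sign-in alert.

from collections import defaultdict

WINDOW_SECONDS = 3600  # one hour; an assumption, not a Pollfish figure

class GeoAnomalyDetector:
    def __init__(self):
        self._events = defaultdict(list)  # respondent_id -> [(ts, country)]

    def record(self, respondent_id: str, ts: float, country: str) -> bool:
        """Return True (suspicious) if two countries appear within the window."""
        events = self._events[respondent_id]
        events.append((ts, country))
        recent = {c for t, c in events if ts - t <= WINDOW_SECONDS}
        return len(recent) > 1

det = GeoAnomalyDetector()
print(det.record("r1", 1000.0, "US"))  # False: one country so far
print(det.record("r1", 1300.0, "DE"))  # True: two countries within one hour
```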

Rigid Adherence to Targeting

Our data quality is second to none. As such, we’re skeptical of just about everyone, and that’s what makes us the best at what we do. Accuracy is our top priority, which is why we are so critical of our sources and their behavior. In fact, we only include respondents who match 100% of the targeting criteria, even if there is nothing fraudulent about their responses. This applies to surveys with multiple audiences as well as those with stringent respondent qualifications, such as specific answers to screening questions. Our combination of technology and expertise ensures the delivery of the highest-quality data to our customers, every survey, every time.
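
The 100% rule lends itself to a one-line filter. This sketch assumes a flat dictionary of respondent attributes; the criteria names are hypothetical.

```python
# Hypothetical sketch of strict targeting enforcement: keep a respondent
# only if every targeting criterion matches.

def matches_all_targeting(respondent: dict, criteria: dict) -> bool:
    """Return True only on a 100% match of the targeting criteria."""
    return all(respondent.get(key) in allowed for key, allowed in criteria.items())

criteria = {"country": {"US"}, "age_group": {"25-34", "35-44"}, "gender": {"female"}}
r1 = {"country": "US", "age_group": "25-34", "gender": "female"}
r2 = {"country": "US", "age_group": "18-24", "gender": "female"}  # fails one criterion
print(matches_all_targeting(r1, criteria))  # True
print(matches_all_targeting(r2, criteria))  # False: discarded despite valid answers
```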

Multiple Layers of Quality Checks

Pollfish supports multiple other layers of quality checks. This is an ongoing process: both the platform and our technical experts continuously work to prevent survey fraud while fetching the preset number of required survey completions with the correct targeting. The layers we use to improve data quality include:

The technical layer includes various checks to ensure first-rate data (a sketch of two of these checks follows this list):

  • Hasty Answers Check: Catches respondents who answer faster than the average time required to read the actual questions.
  • Reset ID Check: Flags a respondent who answered the same survey previously but from a different device, preventing the same respondent from partaking more than once.
  • Gibberish Check: Flags answers containing text that is considered gibberish, e.g., “dsfjkn dfnksj ifjodf”.
  • Same IP Participation: Checks whether the survey was completed within a certain time window from the same IP address as the respondent’s device.
  • Carrier Consistency: Ensures that the respondent’s carrier belongs to the targeted market.
  • VPN: VPN users are automatically disqualified from survey participation.
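
Here is a minimal sketch of the Hasty Answers Check and Same IP Participation. The assumed reading speed of 4 words per second and the 24-hour window are illustrative stand-ins, not published Pollfish values.

```python
# Hypothetical sketch of two technical-layer checks with assumed thresholds.

WORDS_PER_SECOND = 4.0       # assumed average reading speed
SAME_IP_WINDOW = 24 * 3600   # assumed window: one completion per IP per day

def is_hasty(question_text: str, seconds_spent: float) -> bool:
    """Flag answers given faster than the time needed to read the question."""
    min_read_time = len(question_text.split()) / WORDS_PER_SECOND
    return seconds_spent < min_read_time

_last_completion: dict[str, float] = {}  # ip -> timestamp of last completion

def same_ip_blocked(ip: str, now: float) -> bool:
    """Flag a completion if the same IP already finished within the window."""
    last = _last_completion.get(ip)
    _last_completion[ip] = now
    return last is not None and now - last < SAME_IP_WINDOW

question = "How likely are you to recommend this product to a friend or colleague?"
print(is_hasty(question, 1.2))               # True: under the minimum read time
print(same_ip_blocked("203.0.113.7", 0.0))   # False: first completion from this IP
print(same_ip_blocked("203.0.113.7", 60.0))  # True: repeat within the window
```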

Quality Questions & Responses

Our platform is designed so that your questionnaire is bound to receive quality responses. To that end, aside from post-answer quality checks, we provide quality answer triggers. Our quality questions include trap, red herring, and attention questions:

  1. Trap Questions: Check that respondents are paying attention to a command, usually one that asks them to select a negative response. Respondents with acquiescence bias, i.e., those who reflexively choose positive responses, will be caught.
    1. Ex: Please select “Somewhat Disagree” below
      1. Strongly Agree
      2. Somewhat Agree
      3. Somewhat Disagree
      4. Strongly Disagree
  2. Red Herring Questions: Check whether the respondent is engaged with the survey by tracking how logically they answer oddball questions.
    1. Ex: Which of the following is not a sport?
      1. Soccer
      2. Basketball
      3. Cookies
      4. Baseball
  3. Attention Questions: Check whether respondents are reading and comprehending the question, much like red herring questions.
    1. Ex: Which of the following does not have wheels?
      1. Bike
      2. Car
      3. Skateboard
      4. Elephant
      5. Cart

These quality questions are injected within various components of the survey flow, depending on each survey’s type, to check whether respondents are paying attention. They are also used in demographic surveys. If a respondent fails a quality question, a reverse counter is displayed.
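
Grading the trap question shown above reduces to comparing the selection against the instructed option. This sketch mirrors the example; the data model is illustrative.

```python
# Hypothetical sketch of grading the trap question from the example above.

TRAP_QUESTION = {
    "text": 'Please select "Somewhat Disagree" below',
    "options": ["Strongly Agree", "Somewhat Agree",
                "Somewhat Disagree", "Strongly Disagree"],
    "expected": "Somewhat Disagree",
}

def passes_trap(selected: str) -> bool:
    """A respondent passes only by following the embedded instruction."""
    return selected == TRAP_QUESTION["expected"]

print(passes_trap("Somewhat Agree"))     # False: acquiescence bias caught
print(passes_trap("Somewhat Disagree"))  # True
```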

Finally, we also check for response quality by banning respondents who provide insufficient-quality data from the network. Additionally, our system audits open-ended data to validate it, ensuring that respondents do not provide inadequate responses. We have also installed a function that wards off copy-pasted answers in open-ended questions and elsewhere.
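
As one way such a copy-paste guard could work (the normalization rule below is our assumption, not a description of Pollfish’s implementation), answers can be normalized and checked against previously seen submissions.

```python
# Hypothetical sketch of a copy-paste guard for open-ended answers:
# flag submissions that are identical after normalization.

import re

_seen_answers: set[str] = set()

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/extra whitespace so trivial edits match."""
    return re.sub(r"[^a-z0-9 ]+", "", re.sub(r"\s+", " ", text.lower())).strip()

def is_copy_paste(answer: str) -> bool:
    """True if an identical (normalized) answer was already submitted."""
    key = normalize(answer)
    if key in _seen_answers:
        return True
    _seen_answers.add(key)
    return False

print(is_copy_paste("Great product, would buy again!"))  # False: first submission
print(is_copy_paste("great product would buy again"))    # True: same text reused
```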