Section 1: Market Research Terminology

Section 2: Survey Design & Question Types

Section 3: Survey Bias

Section 4: Pollfish Features

Market Research Terms Glossary

You can’t become a market research expert without learning the lingo. That’s why we created the Pollfish Market Research Terms Glossary.

This alphabetical listing of key market research terms, concepts and ideas has been segmented into four sections: Market Research Terminology, Survey Design & Question Types, Survey Bias and Pollfish Features.

For basic research concepts, check out the Market Research Terminology section. Survey Design & Question Types gives simple definitions of the different types of surveys and questions you can build with Pollfish. The Survey Bias section takes you through the ways you can bias your results. And the Pollfish Features section gives you definitions of Pollfish-specific tools and features so you’ll always know what’s going on.

Check it out!

Section 1: Market Research Terminology

The more you read about market research, the more surveys you build and the deeper you sink into the world of consumer insights, the more fluent you become in the language surrounding it.

Here, we define some basic terms that you will need to have an understanding of to become a market research expert.

A   |   C   |   D   |   E   |   F   |   I   |   L   |   M   |   N   |   O   |   P   |   Q   |   R   |   S   |   T


Agile market research

An approach that values numerous small experiments over a few large bets, rapid iterations over big-bang campaigns and responding to change over following a plan.



Complete

A fully completed survey that has been screened for bias, insufficient responses and bots.

Completion Rate

The rate at which surveys are completed as compared to the number of surveys started by respondents. To calculate completion rate, divide the number of completes by the number of starts.
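As a quick sketch of that arithmetic (the numbers below are illustrative, not from Pollfish):

```python
def completion_rate(completes: int, starts: int) -> float:
    """Completion rate as a percentage: completes divided by starts."""
    if starts <= 0:
        raise ValueError("starts must be a positive number")
    return 100 * completes / starts

# 450 completed surveys out of 600 starts
print(completion_rate(450, 600))  # 75.0
```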

Consumer Insights

Information on the preferences, opinions, habits and emotions of your most valuable customers. Consumer insights usually encompass insights related to a product or service.

Control Group

Survey participants can be split into two groups: an experimental group, exposed to a product or service, and a control group that is not. A common example is ad effectiveness testing, where researchers can track a respondent’s exposure to an ad through cookies. They then split those who have viewed the ad and those who have not into separate groups, asking the same questions to see how responses differ.

Cost Per Complete

The price you pay per completed survey. This calculation is based on certain factors of your survey, including the number of screening questions, quotas, demographic/geographic filtering and more. Learn More >

Cross Tabulation (Crosstab)

A feature of your survey platform that presents data in a table with rows and columns, designed to help researchers observe two or more variables at the same time. Crosstabs are useful when you want to divide your respondents into subgroups to see how a dependent variable changes across them.
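For illustration only (the subgroups and answer options below are made up, and survey platforms build these tables for you), a tiny crosstab can be sketched in plain Python:

```python
from collections import Counter

# Hypothetical responses as (age group, preferred flavor) pairs
responses = [
    ("18-24", "mint"), ("18-24", "berry"), ("25-34", "mint"),
    ("25-34", "mint"), ("18-24", "mint"), ("25-34", "berry"),
]

# Count each (row, column) cell, then print one row per subgroup
cells = Counter(responses)
for age in ("18-24", "25-34"):
    print(age, {flavor: cells[(age, flavor)] for flavor in ("mint", "berry")})
```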



Data

Any information collected by your survey, along with any outside information collected, observed, generated or created in service of your research goals.

Data Cleaning

Removing unqualified, biased or incomplete responses from a survey. This process improves data quality and protects against survey bias. Learn More >

Device ID

An individually assigned ID given to a respondent’s device to differentiate them from other respondents. Many survey platforms that deliver surveys via mobile devices will collect or create Device IDs. At Pollfish, these IDs are passed to us from the mobile app publishers where we deliver your survey. This ID, along with demographic information respondents give on their first survey, is collected and used for targeting and filtering purposes in the future. Learn More >

DIY market research

Market research conducted using a self-service platform, as opposed to partnering with a market research agency or research consultant. Most DIY market research is conducted in-house to avoid the speed and cost limitations of working with outside entities. Learn More >


Drop-off

When a respondent begins a survey and doesn’t complete it. These are also called Starts. Drop-offs are not counted as completes and, therefore, you will not be charged for them.



Engagement

Engagement has many meanings. It can refer to a high degree of focus and interest in stimuli. In digital marketing and technology, engagement often refers to metrics surrounding use of platforms, content features or apps (think clicks, time on page, etc.). In market research, engagement refers to how users interact with your survey. Does their time spent on each question indicate they are confused or don’t understand how to choose an answer? If so, your completion rate could be impacted.


Feasibility Study

A feasibility study is designed to determine the likely success of a project, product or service. There are many factors that go into a feasibility study, including existing competitors, production limitations, timing, estimated pricing and more. Brands or researchers may conduct feasibility studies to determine the market interest in a new product or service, or even to help determine the feasibility of a future research project.


Fielding

Fielding refers to the distribution of the survey questionnaire. When using Pollfish, you can watch your results roll in in real time using our Results Dashboard.


Implicit data

Implicit data refers to information that is not provided by respondents directly, but is gathered from available data. For example, Pollfish collects location data and information on a respondent’s mobile carrier from the app publisher, not from the respondent directly.

Incidence Rate

Incidence rate measures the rate of occurrence: the percentage of people eligible to participate in a survey, based on the targeting criteria selected.


Longitudinal Research

Researchers performing a longitudinal study will run the same survey many times over short or long periods, in an effort to observe how the opinions, behaviors or habits of the same population change over time. The population can also be randomized to see how time impacts the questions being asked, regardless of population.


Margin of error

Margin of error, also called the confidence interval, is a statistical measurement of difference between survey results and the population value, expressed as a percentage. Within the survey ecosystem, the margin of error measures the difference between your survey results and how accurately they reflect the views of the overall population. Learn More >
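The entry above doesn’t specify a formula, but a common one for a proportion drawn from a simple random sample is z * sqrt(p(1-p)/n). A sketch, assuming that formula:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a proportion from a simple random sample.

    n: sample size; p: observed proportion (0.5 is the most conservative
    choice); z: z-score for the confidence level (1.96 is roughly 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 at 95% confidence is roughly +/- 3.1 percentage points
print(round(100 * margin_of_error(1000), 1))  # 3.1
```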

Market research

Market research refers to the gathering of consumers’ needs, preferences, habits, behaviors and more in an attempt to better understand a company’s potential customers, brand positioning and potential interest in a product or service.

Mobile ID

Also known as the Advertising ID, this unique ID number is how mobile advertisers are able to cookie users and keep track of engagement with mobile ads. At Pollfish, we use this ID in a similar way, delivering surveys to users in much the same way an app delivers ads. We use the advertising ID to ensure we don’t send the same survey to the same user more than once. Learn More >


Non-Probability Sampling

Non-probability sampling excludes some of the population from your sample, and that exact number cannot be calculated – meaning there are limits on how much you can determine about the population from the sample. These methods include convenience sampling, quota sampling, judgment sampling and snowball sampling. Learn More >


Online panel

Online panels collect responses either via a fully opt-in structure, including a signup page, or start with some form of digital outreach to potential respondents who have agreed to take surveys in advance. Panelists are then recruited to participate in specific surveys, for example via email invitation to the page of the panel provider. Pollfish avoids the pitfalls of traditional online panels by asking users to take surveys while they are using apps or games in real-time, increasing data quality and reducing biased responses. Learn More >



Panel

A panel is a collection of potential respondents who have agreed to take a survey in advance of the survey’s fielding process. These respondents are typically promised some type of incentive in exchange for joining the panel, which would effectively pay them for their time. Learn More >


Piping

Piping allows researchers to personalize surveys by ‘piping’ an answer from a previous question into a later question. For example, you can ask a respondent their name or occupation on the first question, and then add that name or occupation to future questions to make the questions more personalized.


Population

The population is the total group of people you attempted to survey. If they complete your survey, they become part of your sample.

Primary Data

Primary data refers to the data collected by researchers directly from respondents using surveys, interviews or direct observation.

Primary research

Primary research refers to the methodology of using only data collected directly from respondents, rather than relying on data collected during previous research or from some external source (government agencies, employment records, etc).

Probability Sampling

Probability sampling refers to a randomized method of respondent selection. In order to utilize probability sampling, researchers have to have a method that ensures every member of the population has an equal chance of being chosen to participate (like picking names out of a hat).


Psychographics

Unlike demographics, which explain who your respondents are, psychographics seek to explain why they do what they do. While any quantitative study, group of screening questions or even secondary location data can net you demographic data, psychographics are more often culled from qualitative studies. For example, is your respondent concerned with health and appearance? Do they enjoy socializing or are they more introverted? You can get answers to these questions from scale-based quantitative questions, but open-ended questioning or interviewing often provides more depth to these groupings. Depending on how you plan to use them, you should consider this before creating your survey. Learn More >

Public Opinion Research

Public opinion refers to the opinions of a majority of people in a certain population. Polling public opinion requires taking as broad a sample as possible and asking direct, quantifiable questions about specific issues.


Qualification rate

The qualification rate is the estimated percentage of people you expect to qualify for your survey based on your targeting criteria, screening questions and other filters.

Qualitative research

Qualitative survey questions aim to gather data that is not easily quantified, such as attitudes, habits, and challenges. They are often used in an interview-style setting to observe behavioral cues that may help direct the questions. Learn More >

Quantitative research

Quantitative research is about collecting information that can be expressed numerically. Quantitative research is usually conducted through surveys or web analytics, often including large volumes of people to ensure trends are statistically representative. Learn More >


Questionnaire

Your questionnaire is the list of questions you plan to ask your respondents. There are many different types of survey questions you can ask, depending on your survey goals. Learn More >


Quotas

Quotas are limits you can set for the number of responses your survey collects from a particular group. They can be set across the entire survey or on a given question or segment. Unlike weighting, which ties your quotas to existing data sets like a national or local census, quotas can be chosen by the researcher to match the goals of your survey. If, for example, you’d like to survey a population that is 75% female, you can do that.



Respondent

A respondent is a person who meets your targeting criteria and completes your survey in full.

Response Rate

The response rate is the percentage of the total targeted population who responded to your survey. Learn More >



Sample

Your sample refers to the respondents who matched your targeting criteria and completed your survey.

Sample size

Your sample size is the number of completes your survey receives. Learn More >

Secondary data

Secondary data refers to data that has been collected outside of the bounds of a researcher’s survey, but which the researcher can use to add context. For example, DMV records, national census records and other government information can be used for things like weighting and quotas, or just to show the disparity between perception and reality within a studied population.

Secondary research

Secondary research refers to the summary or synthesis of existing research towards a new research goal. In this practice, previous primary research projects are used as sources.


Segmentation

Segmentation studies seek to separate larger audiences into smaller segments based on similar tastes, interests, perceptions and other secondary factors like education, employment or lifestyle.

Statistical significance

A measurement to quantify whether a result is due to chance or to some significant factor. Results that are not statistically significant often stem from sampling error: your sample doesn’t accurately reflect the population of your survey and, therefore, may produce skewed results. There are two things to contend with when trying to remove sampling error: sample size and variation. The larger the sample size, the smaller the role of chance in the result, which reduces sampling error and increases significance. Controlling for the variability of your sample also impacts sampling error: the more variability in your sample, the more prone to error the study will be.


Survey

In its most basic form, a survey refers to the questionnaire–delivered either in person or online–that a researcher administers in service of a research study. When using Pollfish and other online survey platforms, a survey refers to the targeting you select, the questionnaire you create and the results files created once you run your survey. Each survey’s full info is grouped together in your account dashboard.



Targeting

Targeting refers to the criteria you select to screen potential survey respondents. Once targeting is selected, a population is created and your survey is delivered.

Tracking Study (Tracker)

Tracking studies use the same questionnaire, delivered over time, to track brand awareness, monitor customer satisfaction, study consumer interest in new products or services, analyze the effectiveness of advertising creative and more. Tracking studies may be delivered to the same populations (to gauge how perceptions of the same group change as time goes on) or different populations to view time as just one factor impacting shifting perceptions.

Section 2: Survey Design & Question Types

When creating your survey design, it is important to start with the type of survey you want to run. Understanding the small but significant differences between survey types is a great first step.

Here, we want to review the different types of studies you can run with Pollfish, as well as the ideal question types for each one.

A   |   B   |   C   |   D   |   I   |   L   |   M   |   N   |   O   |   P   |   Q   |   R   |   S   |   U   |   V


Ad Absorption

A study designed to show the effectiveness of certain advertising creative. These studies tend to use narrow audience targeting, screening questions and even control groups to determine if new ad creative is having a statistically significant impact on user experience or understanding of a product, brand or service. Learn More >

Attitude & Usage Testing

A study that aims to understand the available market for a product or service. This typically involves things like market sizing, general understanding of the product category, questions about brand decisions, gathering targeting information and more.

Audience Profiling

Profiling is a process of collecting demographic information to define an audience or population. Researchers may need to conduct surveys of a population to determine if there are enough available survey takers for the demographic or other targeting they want to run. At Pollfish, we use a rolling profiling model to keep our audience up to date. We also collect most of the data you will need up front, including demographic, gender and mobile usage data. Learn More >

Audio question

A question that requires the respondent to listen to an audio clip in full before answering.


Brand Awareness

Brand awareness is the extent to which consumers are familiar with a brand, the extent to which the brand’s intended perception matches reality and the extent to which a company’s brand is helping or hurting sales.


Closed-ended question

Closed-ended questions are those that offer a limited selection of answers to choose from, such as a single or multiple-selection question, matrix, or scaling question type. Learn More >

Competitive Analysis

Competitive analysis testing helps you better understand how your potential customers are discovering your competitor’s product, feature preferences, user behavior and more. Learn More >

Concept Testing

The process of testing a big idea, concept testing allows you to run ideas past a sample of your target market before producing them. These ideas can be new logos, new product ideas, new ad campaign ideas and more.

Content Creation Surveys

Conducting surveys can reveal gaps in news coverage that can help digital media brands develop a unique editorial voice. By surveying beyond existing readers to prospective readers and even competitors’ readers, digital media companies can gauge interest in, and existing knowledge of, certain topic areas. Learn More >

Creative Testing

A test allowing a creative team to gauge the effectiveness of their work at conveying the feelings and emotions they were aiming for. This includes logo testing, ad creative testing and more. Creative testing may ask questions about brand effectiveness, gauge tone and consumer enjoyment and gauge the likelihood of someone to purchase before and after viewing creative assets.


Description question

A question in your questionnaire that contains a basic description. This question type is usually used to provide directions to the respondent. Learn More >


Image question

A survey question requiring respondents to look at and respond to an image or a group of images.


Likert Scale

A Likert Scale is typically a 5- or 7-point scale that asks a respondent to express how much they agree or disagree with a statement. Learn More >

Logo Testing

A type of creative testing focused on changes, updates or just current opinions of a brand or product logo. This type of survey may contain questions gauging how appealing, authoritative or on-brand a logo is, or even reveal possible changes to a brand logo and ask potential customers to weigh in. Learn More >


Market Analysis

Studies that focus on the size, scope and potential of a market. Before launching a new product, companies may want to figure out the size of the potential market by broadly surveying interest for that product across key areas and demographic groups. This helps companies determine how large a product rollout may be needed.

Matrix multiple selection question

A matrix question asks respondents to make selections for multiple options on a scale. A matrix multiple selection question lets you select multiple responses for a single option. For example, a matrix may contain elements of a hotel, asking respondents to rate elements like the pool or the lobby bar using a variety of potential responses like “Clean” or “Fun.” A multiple selection matrix will allow you to check all descriptions that apply. Learn More >

Matrix single selection question

A matrix single selection question will ask you to select one answer per option on a scale. For example, a matrix may contain elements of a hotel, asking respondents to rate elements like the pool or the lobby bar using a variety of potential responses. Because responses are single-selection, researchers will often ask something more definitive, like “How likely were you to use these different elements of our hotel?” on a scale of 1-5. Learn More >

Multiple selection question

Multiple selection questions contain a list of options and ask respondents to select all that apply. For example, the question “What kind of music do you like?” will contain a list of music types so respondents can select all types they prefer. Learn More >


Naming Tests

Studies concerning the names of things: new product names, new website URLs, new brand names or even the name of a new film or TV show. Naming tests help reveal potential perception issues that can occur. For example, does one title effectively convey what the product or service is? Does the name defy expectations and surprise the user?

Net Promoter Score (NPS)

Net Promoter Score is a widely used customer/consumer experience test that asks a simple question: on a scale of 0-10, how likely are you to recommend this business, brand, product or service to a friend or colleague? Respondents who rate 0-6 are detractors, 7-8 are considered passive and 9-10 are considered promoters. Learn More >
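The score itself is conventionally computed as the percentage of promoters minus the percentage of detractors, giving a value from -100 to 100. A small sketch with made-up responses:

```python
def nps(scores: list) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 passives and 2 detractors out of 10 responses
print(nps([10, 9, 8, 7, 6, 10, 9, 3, 8, 9]))  # 30
```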

NPS Survey

An NPS Survey asks the Net Promoter Score question to a specified target population. This allows companies to gauge how well they are doing on the NPS scale with their specific target customer or potential customer. Learn More >

Numeric open-ended question

An open-ended question that requires a numeric answer. For example, researchers may ask how much money you’d potentially pay for a product or service. Learn More >


Open-ended question

Open-ended questions do not require the respondent to select from a specific list of responses, but instead ask them to type their response into a text box. Open-ended questions, therefore, are less definitive, seeking to gather more qualitative responses, getting at the feelings and broader opinions of the respondent in their own words. Learn More >


Package Testing

Studies that provide consumer feedback on product packaging. Would you be more or less likely to buy a product with new packaging? Does the packaging effectively convey what is inside? How do your potential customers feel when they look at your new packaging?

Pre/Post Studies

A pre-post study examines whether participants in an intervention improve or become worse off during the course of the intervention, and then attributes any such improvement or deterioration to the intervention.

Product / Market Fit

Product/Market fit is the degree to which a particular product meets the demands of the market it is in. Have you created the minimum viable product for your market, one that solves an existing problem or need? Product/Market fit studies aim to survey early adopters or potential customers to see if the product meets previously identified criteria for satisfaction within a market.

Product Testing

A final test of a product, product testing gives users access to a prototype and seeks to identify opinions–positive or negative–of a product before it goes to market.


Qualitative survey question

Qualitative research focuses on personalized behavior, such as respondents’ habits or the motivations behind their decisions. This can be gathered through contextual inquiries or interviews to learn more about feelings, attitudes, and habits that are harder to quantify but offer important additional context to support statistical data. Learn More >

Quantitative survey question

Quantitative survey questions are those that can be expressed numerically, meaning they require respondents to select from a pre-selected list of potential responses. Learn More >


Ranking question

Similar to a single-selection matrix, ranking questions ask users to rank different elements of a product or service on a numerical scale. Learn More >

Rating stars

A question where respondents answer using a scale of a number of stars. Learn More >


Screening question

Screening questions help researchers ensure they are getting the exact sample they want using filtering that falls outside of available targeting criteria. For example, if you only want to include people in your survey who have seen the movie Pulp Fiction, your screening question would ask potential respondents if they have seen the film, and screen them based on their answers. Learn More >

Single selection question

A question providing a list of potential responses, from which respondents may choose one answer. Learn More >

Slider question

A question where respondents answer by dragging a button across a slider to provide a rating. Learn More >


Usability Testing

Usability testing aims to test user experience changes on real users to determine if the experience has been positively or negatively impacted by any changes. Unlike more traditional product testing, usability testing usually focuses more on design changes and more commonly relates to app or software development.

UX Testing

Any testing related to the overall user experience of a product (usually app or software). UX testing tends to be more holistic than Usability testing, running qualitative and quantitative studies of each phase of the user journey.


Video question

A question where respondents must watch a video in its entirety before answering.

Section 3: Survey Bias

One of the most important factors in DIY survey creation is avoiding bias. Survey bias reduces data quality, threatening the integrity of your consumer insights.

We have gone in-depth on ways to root out survey bias here. But to fully understand the different types of bias you may have to contend with, here are some helpful definitions.

A   |   D   |   E   |   L   |   O   |   P   |   S


Acquiescence Bias

Also referred to as “yea-saying,” acquiescence bias occurs when respondents are overly agreeable, to the point of contradicting themselves. You can avoid this by limiting the number of questions asking respondents to simply agree or disagree, as these statements often result in over-agreement. Learn More >


Alternators

Alternators are survey takers who fill in responses in familiar patterns, simply alternating answers without engaging with the questions. You can avoid this by mixing in some open-ended, audio or video questions. If responses appear to be biased, Pollfish will remove them from your sample. Learn More >


Double-Barreled Questions

Double-barreled questions happen when researchers blend two questions into one, and then allow for only one answer. The respondent must agree in total with a statement they may only partially believe, creating incomplete or biased responses. For example, “Did you find the product interesting and helpful? Yes or no?” Learn More >


Extreme Responding

If given the option, some respondents will select only the most extreme option. Make sure to mix up your question types. If you have several scales or rating questions, make sure to mix in some others that require a bit more attention. Learn More >


Leading Question

Leading questions use biased language to influence respondents in a direction. Keep questions as simple and direct as possible, taking care to remove any editorializing adjectives that may betray a preference towards one response or another. Learn More >

Loaded Question

Loaded questions make assumptions. For example, asking “what kind of candy do you get when you go to the movies?” assumes the respondent goes to the movies, eats candy and gets candy at the movies. This creates a biased environment where the respondent may not be able to answer accurately. Learn More >


Order Bias

The order in which you list responses in a quantitative survey question can bias respondents towards choices higher on the list. If your survey platform supports it, use answer shuffling to remove the potential for this bias. Learn More >


Prestige Question

Inserting prestige into a question by citing an expert opinion one way or another biases the respondent towards the expert’s opinion. For example, “Doctors say smoking causes cancer. Do you agree?” or “Do you support the President’s policy on Zimbabwe?” Learn More >


Self-Selection Bias

This occurs when respondents are allowed, through whatever means, to opt out of taking a survey based on its content. This biases the results by oversampling people with a previous interest or overly polarized opinion on a given topic. Learn More >

Social Desirability Bias

This occurs when respondents attempt to discern what the researchers want them to say and respond accordingly. This often occurs when participants see the brand or topic ahead of time and try to give answers the company or researcher may want to hear. Avoid exposing this information, or asking any leading or loaded questions that may betray an interest. Learn More >


Speeders

Respondents who complete surveys far too fast to have actually read the questions or truly contemplated answers. Pollfish removes these respondents through AI and machine learning algorithms. Learn More >

Stereotype Question

When researchers infuse bias into survey questions, they open the potential for respondents to agree with or defy that bias. If, for example, a survey question reminds respondents that gender stereotypes exist around math, that question may have planted a seed in the mind of respondents that changes responses. Avoid this type of language at all costs. Learn More >


Straightliners

When respondents answer using the first available choice for every question. Respondents answering in familiar patterns in order to complete surveys faster will be removed from the Pollfish sample. Learn More >

Section 4: Pollfish Features

Sometimes, we get so excited about the features of Pollfish, we forget to define them all for you!

So we decided to explain them all in one place. Feel free to bookmark this section so you can return to it as you read through.

A   |   P   |   Q   |   R   |   S   |   W


Advanced branching

Also known as skip logic, advanced branching allows you to show respondents different questions based on responses to earlier questions. For example, if a respondent says they have tried your product, you can show them one set of questions and if they have not, you can show them different questions. Branching provides an alternative to screening questions. Learn More >

App Monetization

App monetization refers to the method or methods app publishers use to make money on their apps. There are a variety of app monetization strategies and platforms. Pollfish provides a way for app publishers to immediately monetize their creations by installing the Pollfish SDK and exposing their user base to Pollfish surveys. Learn More >


Programmatic Advertising

Programmatic advertising refers to the automated buying and selling of online advertising. Pollfish functions on a programmatic advertising model, delivering surveys in much the same way advertisers deliver ads in apps. Learn More >



Quotas

If a survey must meet a set percentage of participants from an age or gender group, you can set quotas.


Radius Targeting

When setting geographic targeting on your Pollfish survey, you can select a city, congressional district or town. You can also use our Radius Targeting tool to select a specific radius on a map to target your survey. Learn More >


Skip Logic

Also known as Branching, skip logic allows you to show respondents different questions based on responses to earlier questions. For example, if a respondent says they have tried your product, you can show them one set of questions and if they have not, you can show them different questions. Skip logic provides an alternative to screening questions. Learn More >

Survey Stratification

Stratified sampling involves dividing a population into smaller sub-populations called strata. There are many ways of accomplishing this, including balancing using predetermined quotas before you run your survey, or weighting your results after the fact. Learn More >



Weighting

A type of stratification where quotas–often pegged to pre-existing data like a local or national census–are used to weight respondent pools to more accurately reflect the population being surveyed. With Pollfish, you can weight your survey results to the local or national census (where available) with the push of a button. Learn More >
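As a rough sketch of the idea behind weighting (the group names and census shares below are invented for illustration), each group’s weight is its target share divided by its share of the sample:

```python
def census_weights(sample_counts: dict, census_shares: dict) -> dict:
    """Post-stratification weights: target share / observed sample share."""
    total = sum(sample_counts.values())
    return {g: census_shares[g] / (n / total) for g, n in sample_counts.items()}

# A sample that is 70% female, weighted toward a census that is 51% female:
# female responses are down-weighted, male responses are up-weighted
weights = census_weights(
    {"female": 700, "male": 300},
    {"female": 0.51, "male": 0.49},
)
print({g: round(w, 2) for g, w in weights.items()})  # {'female': 0.73, 'male': 1.63}
```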