Comparing Online vs. Telephone vs. Face-to-face Quantitative Research Survey Modes: Which is Top Dog?
Recent OFCOM research (1) highlighted that 71% of the UK receive nine nuisance calls a month, and that telephone is the #4 culprit. This raises the question of whether this quantitative research survey mode has had its day. But while online has grown in share, is it top dog? We've looked closely at the merits of online, telephone (random digit dialling) and face-to-face (FTF) research, and several insights emerged. So if you are about to brief in a quantitative research survey, this article summarises our findings and spotlights ideas to help you make the most of your research investment.
Cost of research
Costs vary by sample size, the ease of reaching an audience or 'incidence', survey length, mode, and the complexity of fieldwork and analysis. Some costs, such as coding, are similar across online research, computer-aided telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI). Compared with online (index = 100), fieldwork costs are typically far higher for telephone and face-to-face (both in the region of index 250-300) due to the greater human time involved.
Coverage or reach
Online presently reaches 87% of the UK (1), though many online surveys run via panels that cover just 5% of the population. So sample carefully to cover geographic gaps, and bear in mind that panel respondents are usually more 'Internet experienced'. Conversely, nearly all homes have access to at least one phone, though telephone databases cover just 60% of the UK (and we suspect even fewer are opted in to research). Within this, fixed-line telephone reaches 79% (with greater penetration among older respondents) and mobiles reach 96% (with greater penetration among younger respondents) (1). Face-to-face also reaches most places, though at a cost.
Online response rates depend on the nature of the panel and how responsive and interested respondents are; expect between 5% and 30%. Response from links on websites or in emails depends on the nature of the source. Telephone response rates have fallen over the last decade and are now around 10-15%. Face-to-face response, by contrast, is around 15-20%.
Avidity bias (sample bias)
The self-selecting nature of online means there is a greater risk of respondents opting in to surveys that interest them. This is called avidity bias. Typically, online respondents are younger, more familiar with the online world and spend more time in it. They are also more informed, more opinionated and more politically active (2). Panels also contain more early technology adopters, though it remains possible to discern other types on the diffusion-of-innovation spectrum.
Social desirability bias
Research (3, 4) observes that those responding by telephone give socially desirable responses more often than those responding face-to-face (FTF). This is particularly the case among those with lower cognitive ability or fewer years of formal education (i.e. C2DEs). Research also shows that respondents are more comfortable discussing sensitive subjects face-to-face, as they can see the interviewer and so have greater trust in them. Conversely, FTF interviews conducted in the respondent's home reduce anonymity, which can make socially desirable responses more pronounced. Overall, however, interpersonal trust between interviewer and respondent has the greater influence, resulting in more honest responses, and FTF shows similar results to online (where there is no interviewer effect). That said, some research (5) has observed directionally higher valuations in response to some 'willingness to pay' questions, for example when there is a perceived 'civic virtue' in being seen to contribute to a common good.
Satisficing
Satisficing (combining the words 'satisfy' and 'suffice') involves short-cutting the response process, settling on an answer that is 'good enough' rather than optimal.
Telephone poses an increased cognitive burden. The greater difficulty of fully comprehending questions reduces the effort respondents make to cooperate, search their memory and process information. Perceived time pressure also fatigues and demotivates. As a result, questions are less considered, giving rise to higher acquiescence (answering affirmatively regardless of the question), more 'no opinion' answers, choosing mid-points or only extremes on rating scales, easier-to-defend answers and reduced disclosure. Again, this is more evident among those with lower cognitive ability. Further research (3, 4, 5) suggests FTF interviewers are better able to judge confusion, waning motivation and distraction (watching TV, eating etc.), and are thus better able to motivate respondents, make questions easier to understand and improve cooperation on complex tasks. Conversely, online respondents go at their own pace.
(1) Great research starts with a great marketing research brief. Decide your target audience and what's most important. Beyond feasibility and answers to your questions, what's the relative importance of cost, speed, precision and so on?
(2) There are many pitfalls in conducting quantitative research, and even more if you would like to repeat a survey or set up a tracker. Use larger samples for greater reliability: at the 95% confidence level, a sample of 1,000 gives a margin of error of roughly +/- 3%, compared with roughly +/- 4.5% for a sample of 500. In other words, if you repeated the survey 100 times, in 95 instances responses would fall within that margin. Also make sure data is comparable from wave to wave, and design shorter surveys to cut the risk of satisficing.
(3) Take care to make sure samples are not biased and give reliable findings. Nationally representative samples are essential to measure awareness, usage and market share. Also make sure your sample eliminates any demographic, subject affinity, usage or other bias.
(4) Buyer beware. Remember the Whiskas advert that famously told us '8 out of 10 cats prefer Whiskas'? This was eventually changed to '8 out of 10 owners who expressed a preference said their cats preferred Whiskas'. What we still don't know is the sample size, how many said 'don't know', and how many expressed a preference. So whatever the survey mode, be clear about what is statistically significant and what is merely directional, and make the context clear. This will help you avoid being duped and make better decisions! Meoww, yum!
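For the statistically minded, the sample-size arithmetic in tip (2) is easy to check yourself. The sketch below uses the standard margin-of-error formula for a proportion, with the usual worst-case planning assumption of a 50/50 split in responses:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an observed proportion p with sample size n.

    z = 1.96 corresponds to the 95% confidence level. p = 0.5 is the
    worst case (widest margin), which is why it is the standard
    assumption when planning sample sizes.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# prints:
# n=500: +/- 4.4 percentage points
# n=1000: +/- 3.1 percentage points
```

Note that precision improves with the square root of the sample size, so doubling the sample from 500 to 1,000 narrows the margin by about a third, not a half.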
(1) OFCOM (2019).
(2) Duffy, B., Smith, K., Terhanian, G. and Bremer, J. Comparing Data from Online and Face-to-face Surveys. International Journal of Market Research, 47(6) (2005).
(3) Holbrook, A. L., Green, M. C. and Krosnick, J. A. Telephone versus Face-to-face Interviewing of National Probability Samples with Long Questionnaires. Public Opinion Quarterly, 67: 79-125 (2003).
(4) Szolnoki, G. and Hoffmann, D. Online, face-to-face and telephone surveys: Comparing different sampling methods in wine consumer research. Wine Economics and Policy, 2: 57-66 (2013).
(5) Lindhjem, H. and Navrud, S. Are Internet surveys an alternative to face-to-face interviews in contingent valuation? Ecological Economics, 70(9): 1628-1637 (2011).
The Marketing Directors and The Market Researchers have no vested interest in promoting one quantitative research survey mode over another. Instead, we work in partnership with the world's leading online, telephone and face-to-face fieldwork companies to deliver the best solution for you. So just get in touch for help.