Comparing Quantitative Research Survey Modes: Online vs. Telephone vs. Face-to-face. Which is Top Dog?
Recent Ofcom research (1), highlighting that 71% of UK adults receive 9 nuisance calls a month, with telephone research the #4 culprit, raises the question of whether this mode has had its day. But while online has grown in share, is it top dog? We have been looking closely at the merits of online, telephone (random digit dialling) and face-to-face (ftf) surveys, and several insights emerged. So if you are about to brief in a quantitative research survey, this article summarises some of our findings and spotlights some considerations to help you make the most of your research investment.
Cost of research
Costs vary with sample size, the ease of reaching an audience (its 'incidence'), survey length, mode, and the complexity of fieldwork and analysis. Some costs are similar across modes, such as coding for online research, computer-aided telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI). Compared with online (index = 100), fieldwork costs are typically much higher for telephone and face-to-face (index 250–300) due to the greater human time involved.
Coverage or reach
Online presently reaches 82% of the UK, though many online surveys run via panels which cover just 5% of the population. So sample carefully to cover geographic gaps, and bear in mind that panel respondents are usually more 'Internet experienced'. Conversely, nearly all homes have access to at least one phone, though telephone databases cover just 60% of the UK (and we suspect even fewer people are opted in to research). Within this, fixed-line telephone reaches 82% (with greater penetration among older respondents) and mobile reaches 81% (with greater penetration among younger respondents) (1). Face-to-face can reach most places, though at a cost.
Response rates
Online response depends on the nature of the panel and how responsive and interested respondents are: expect between 5% and 30%. Response from links on websites or in emails will similarly depend on the nature of the source. Telephone response has fallen over the last decade and is now around 10–15%. Face-to-face response is around 15–20%.
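These response rates feed straight into fieldwork planning: the number of invitations (or dial attempts) you need is simply the target number of completes divided by the expected response rate. A minimal sketch (the target of 1,000 completes and the rates used are illustrative, not a recommendation):

```python
import math

def invites_needed(target_completes: int, response_rate: float) -> int:
    """Invitations required to hit a target number of completed surveys,
    given an assumed response rate (e.g. 0.10 for 10%)."""
    return math.ceil(target_completes / response_rate)

# Illustrative: 1,000 completes at the response rates discussed above.
print(invites_needed(1000, 0.10))  # telephone at 10% -> 10000 dials
print(invites_needed(1000, 0.20))  # face-to-face at 20% -> 5000 approaches
```

The same arithmetic works in reverse when a fieldwork quote caps the number of contacts you can afford.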
Avidity bias (sample bias)
The self-selecting nature of online means there is a greater risk of respondents opting in to surveys that interest them. This is called avidity bias. Typically, online respondents are younger, more familiar with the online world and spend more time in it. They are also more informed, more opinionated and more politically active (2). Panels also contain more early technology adopters, though it remains possible to discern other types on the diffusion-of-innovation spectrum.
Social desirability bias
Research (3, 4) has observed that telephone respondents give socially desirable responses more often than face-to-face respondents. This is particularly the case among those with lower intellectual ability or fewer years of formal education (i.e. C2DEs). Research has also shown that respondents are more comfortable discussing sensitive subjects face-to-face, as they can see, and thus place greater trust in, the interviewer. Conversely, ftf interviews conducted in the respondent's home remove anonymity, making socially desirable responses more pronounced. Overall, however, interpersonal trust between interviewer and respondent has the greater influence, resulting in more honest responses. FTF shows similar results to online (where there is no interviewer effect), though some research (5) has observed directionally higher valuations in response to some 'willingness to pay' questions, for example when there is a perceived 'civic virtue' in being seen to contribute to a common good.
Satisficing
Satisficing (a blend of 'satisfy' and 'suffice') involves short-cutting the response process: settling on an answer that is 'good enough' rather than optimal.
Telephone poses an increased cognitive burden. Questions are harder to fully comprehend, which reduces the effort respondents make to cooperate, search their memory and process information. Perceived time pressure also fatigues and demotivates. As a result questions are less considered, giving rise to higher acquiescence (answering affirmatively regardless of the question), more 'no opinion' answers, choosing mid-points or only extremes on rating scales, easier-to-defend answers and reduced disclosure. Again, this is more pronounced among those with lower intellectual ability. Research (3, 4, 5) suggests face-to-face interviewers are better able to judge confusion, waning motivation and distraction (a TV on, eating etc.), to motivate respondents, to make the questionnaire easier to understand and to improve cooperation on complex tasks. With online, respondents go at their own pace.
Making the most of your research investment
(1) Great research starts with a great brief. Decide your target audience and what matters most. Beyond feasibility and the questions you need answered, weigh the relative importance of cost, speed, precision and so on.
(2) There are many pitfalls in conducting quantitative research, and even more if you would like to repeat a survey or set up a tracker. Larger samples give greater reliability: a sample of 1,000 gives a margin of error of roughly ±3 percentage points at the 95% confidence level, compared with roughly ±4.4 points for a sample of 500. In other words, if the survey were repeated 100 times, in 95 of them the result would fall within that margin. Make sure data is comparable from wave to wave, and prefer shorter surveys to cut the risk of satisficing.
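The relationship between sample size and reliability can be checked with the standard formula for the 95% margin of error of a proportion. A minimal sketch, assuming a simple random sample and using p = 0.5 as the worst case:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(500), 1))   # -> 4.4 points
print(round(margin_of_error(1000), 1))  # -> 3.1 points
```

Note that quadrupling the sample only halves the margin of error, which is why precision gains get expensive quickly.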
(3) Take care that samples are unbiased and give reliable findings. Nationally representative samples are essential to measure awareness, usage and market share; anything else has built-in bias and risks misleading. Ensure your sample eliminates any demographic, subject-affinity, usage or other bias.
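Where a sample cannot be perfectly representative (the online age skew discussed earlier, for instance), a common corrective is post-stratification weighting: each group is weighted by its population share divided by its sample share. A minimal sketch, with hypothetical age bands and shares that are not drawn from the research cited:

```python
# Hypothetical population and achieved-sample profiles by age band.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}  # online skews young

# Weight = population share / sample share for each band.
weights = {band: population_share[band] / sample_share[band]
           for band in population_share}

for band, w in weights.items():
    print(f"{band}: weight {w:.2f}")
# Under-represented 55+ respondents are weighted up (1.75),
# over-represented 18-34s are weighted down (0.67).
```

Weighting corrects the profile but not avidity bias itself, so it complements careful sampling rather than replacing it.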
(4) Buyer beware. Remember the Whiskas advert that famously told us '8 out of 10 cats prefer Whiskas'. This was eventually changed to '8 out of 10 owners who expressed a preference said their cats preferred Whiskas'. What we still don't know is the sample size, how many said 'don't know' and how many expressed a preference. Whatever the survey mode, be clear about what is statistically significant and what is merely directional, and make the context clear. This will help you avoid being duped and make better decisions! Meoww, yum!
References
(1) Ofcom (2014).
(2) Duffy, B., Smith, K., Terhanian, G. and Bremer, J. Comparing Data from Online and Face-to-face Surveys. International Journal of Market Research, Vol 47, Issue 6 (2005).
(3) Holbrook, A. L., Green, M. C. and Krosnick, J. A. Telephone versus Face-to-face Interviewing of National Probability Samples with Long Questionnaires. Public Opinion Quarterly, Vol 67, 79–125 (2003).
(4) Szolnoki, G. and Hoffman, D. Online, face-to-face and telephone surveys – Comparing different sampling methods in wine consumer research. Wine Economics and Policy, 2, 57–66 (2013).
(5) Lindhjem, H. and Navrud, S. Are Internet surveys an alternative to face-to-face interviews in contingent valuation? Ecological Economics, 70(9), 1628–1637 (2011).
The Marketing Directors and The Market Researchers have no vested interest in promoting one quantitative research mode over another. We work in partnership with the UK’s leading online, telephone and face-to-face fieldwork companies to deliver the best solution for you. So just get in touch for help.