Recent OFCOM research (1) highlighted that 71% of the UK receive nine nuisance calls a month, and that telephone research is the #4 culprit. This raises the question: has this quantitative survey mode had its day? And while online has grown in share, is it now top dog? We've looked closely at the merits of online, telephone (random digit dialling) and face-to-face (FTF) surveys, and several insights emerged. So if you are about to brief in a quantitative research survey, this article summarises our findings. It also spotlights ideas to help you make the most of your research investment.
Costs vary by sample size, ease of reaching an audience (or 'incidence'), survey length, mode, and the complexity of fieldwork and analysis. Some costs, such as coding for online research, computer-aided telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI), are similar. Compared with online (index = 100), fieldwork costs are typically higher for both telephone and face-to-face (index 250-300) due to the greater human time involved.
Online presently reaches 87% of the UK (1), though many online surveys run via panels that cover just 5% of the population. So sample carefully to cover geographic gaps, and bear in mind that panel respondents are usually more 'Internet experienced'. Conversely, nearly all homes have access to at least one phone, though telephone databases cover just 60% of the UK (and we suspect even fewer are opted in to research). Within this, fixed-line telephone reaches 79% (with greater penetration among older respondents) and mobiles reach 96% (with greater penetration among younger respondents) (1). Face-to-face also reaches most places (though at a cost).
Online response depends on the nature of the panel, and on how responsive and interested respondents are; expect between 5-30%. Response rates from links on websites or in emails depend on the nature of the source. Telephone response rates have fallen over the last decade and are now around 10-15%. Face-to-face response, by contrast, is around 15-20%.
The self-selection nature of online means there is a greater risk of respondents opting in to surveys that interest them. This is called avidity bias. Typically, online respondents are younger, more familiar with the online world and spend more time in it. They are also more informed, more opinionated and more politically active (2). Panels also contain more early technology adopters, though it remains possible to discern other types on the diffusion of innovation spectrum.
Research (3, 4) observes that telephone respondents give socially desirable responses more often than face-to-face (FTF) respondents. This is particularly the case for those with lower intellectual ability or fewer years of formal education (i.e. C2DEs). Research also shows that respondents are more comfortable discussing sensitive subjects face-to-face because they can see the interviewer and thus have greater trust in them. Conversely, FTF interviews conducted in the respondent's home eliminate anonymity, making socially desirable responses more pronounced. Overall, however, interpersonal trust between interviewer and respondent has the greater influence, resulting in more honest responses. FTF also shows similar results to online (where there is no interviewer effect). However, some research (5) has observed higher valuations in response to some 'willingness to pay' questions, for example when there is a perceived 'civic virtue' in being seen to contribute to a common good.
Satisficing (combining the words 'satisfy' and 'suffice') involves short-cutting the response process: settling on an answer that is 'good enough' rather than optimal.
Telephone interviews pose an increased cognitive burden. The greater difficulty of fully comprehending questions reduces the effort respondents make to cooperate, search their memory and process information. Perceived time pressure also fatigues and demotivates. As a result, questions are less considered, giving rise to higher acquiescence (answering affirmatively regardless of the question), more 'no opinion' answers, choosing mid-points or only extremes on rating scales, easier-to-defend answers and reduced disclosure. Again, this is more evident among those with lower intellectual ability. Further research (3, 4, 5) suggests FTF researchers are better able to judge confusion, waning motivation and distraction (watching TV, eating etc.), and are thus better able to motivate respondents, make questions easier to understand, and improve cooperation on complex tasks. Conversely, with online, respondents go at their own pace.
(1) Great research starts with a great market research brief. Decide your target audience and what's most important. Beyond feasibility and answers to your questions, what's the relative importance of cost, speed, precision etc.?
(2) There are many pitfalls in conducting quantitative research, and even more if you would like to repeat a survey or set up a tracker. Use larger samples for greater reliability: a sample in excess of 1,000 gives a margin of error of around +/- 3% at the 95% confidence level, versus around +/- 4.5% for a sample of 500. In other words, if you repeated the survey 100 times, in 95 instances responses would fall within that interval. Also make sure data is comparable from wave to wave, and design shorter surveys to cut the risk of satisficing.
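As a rough guide, the relationship between sample size and reliability can be sketched with the standard margin-of-error formula for a simple random sample at the 95% confidence level (a minimal illustration assuming the worst case, a 50/50 split; real panel samples are not simple random samples, so treat this as indicative only):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples shrink the confidence interval:
print(round(margin_of_error(500) * 100, 1))   # ~4.4 percentage points
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 percentage points
```

Note that halving the margin of error requires roughly four times the sample, which is why precision gains become expensive quickly.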
(3) Take care to make sure samples are not biased and give reliable findings. Nationally representative samples are essential to measure awareness, usage and market share. Also make sure your sample eliminates any demographic, subject affinity, usage or other bias.
(4) Buyer beware. Remember the Whiskas advert that famously told us that ‘8 out of 10 cats prefer Whiskas’. This was eventually changed to ‘8 out of 10 owners that expressed a preference said their cats preferred Whiskas’. However, what we still don’t know is the sample size, how many said ‘don’t know’, and how many expressed a preference. So whatever the survey mode, be clear what is statistically significant or merely directional, and make the context clear. This will help you avoid being duped and make better decisions! Meoww, yum!
(1) OFCOM (2019).
(2) Duffy, B., Smith, K., Terhanian, G. & Bremer, J. (2005), 'Comparing Data from Online and Face-to-face Surveys', International Journal of Market Research, 47(6).
(3) Holbrook, A. L., Green, M. C. & Krosnick, J. A. (2003), 'Telephone versus Face-to-face Interviewing of National Probability Samples with Long Questionnaires', Public Opinion Quarterly, 67, 79-125.
(4) Szolnoki, G. & Hoffman, D. (2013), 'Online, face-to-face and telephone surveys - Comparing different sampling methods in wine consumer research', Wine Economics and Policy, 2, 57-66.
(5) Lindhjem, H. & Navrud, S. (2011), 'Are Internet surveys an alternative to face-to-face interviews in contingent valuation?', Ecological Economics, 70(9), 1628-1637.
The Marketing Directors and The Market Researchers have no vested interest in promoting one quantitative research survey mode over another. Thus, we work in partnership with the world’s leading online, telephone and face-to-face fieldwork companies to deliver the best solution for you. Read about all of our market research services.
The labelling effect, recently discovered by behavioural economists, gives marketers another weapon in their arsenal of influencing techniques. Even small labelled promotions (vouchers to spend on certain items) shift spending patterns disproportionately. Tesco, Sainsbury’s and even the Government have used the labelling effect to alter behaviour. This short piece introduces the labelling effect and also explains how marketers can use this technique to influence customers’ choice of products.
Small promotions do not force customers to change their spending patterns because they can easily rearrange their budget. In reality, however, small promotions do affect which items are purchased. Thus marketers can target small promotions at highly profitable items to encourage customers to spend more on those items.
A promotion is ‘non-distortionary’ if the customer would have spent more than the value of the promotion on the targeted product anyway. The labelling effect occurs when consumers react to these promotions by spending more on the targeted item. For example:
A recent study gave customers at a restaurant an €8 voucher (1). Vouchers could be spent on beverages only (the 'labelled' voucher) or on food or beverages (the 'unlabelled' voucher). As customers usually spend at least €8 on beverages, the gift is non-distortionary: customers could simply have rearranged their spending and bought exactly what they would have bought anyway. However, customers who received the labelled voucher actually spent on average €3.90 more on beverages than those with the unlabelled voucher.
Interestingly, the most common behaviour was to spend the voucher on the targeted good. Additionally, those with lower non-verbal cognitive ability were more likely to respond to the label. Non-verbal cognitive ability involves problem solving skills and mathematical ability as opposed to language skills.
Supermarkets are also starting to use labelled vouchers to nudge customers towards more profitable goods.
Here Tesco is offering a 20p discount on top-range lettuces. Clearly, the label means customers are more likely to buy a top-range lettuce than if the voucher could be used on any item.
Here Sainsbury’s is using a small promotion to nudge consumers towards bakery items. The labelling effect means that this 40p discount will disproportionately increase spending on bakery items.
The Government also uses the labelling effect on benefits such as the ‘Winter Fuel Payment’ (WFP). Currently, pensioners spend on average 41% of the WFP on fuel. However, if named ‘The Annual Allowance’ they would only spend 3% of it on fuel (2).
There are three potential causes of the labelling effect: narrow bracketing, mental accounting and reciprocity.
This is the process whereby people split one decision into separate parts and then consider each part in turn (3). For example, people may decide to spend their budget on food, beverages or both. If their usual budget decision were totally unaffected by, say, an €8 voucher, then all of it would be spent on the targeted good.
There are four possible causes of narrow bracketing. Firstly, customers' cognitive limitations. Secondly, cognitive inertia (it simplifies decisions). Thirdly, the application of previous value judgements or 'rules of thumb' to spending behaviour, such as 'always spend at least £10 on a bottle of wine'; this could cause customers to see the cost of wine as separate from the rest of the cost of the meal. Fourthly, deliberate or conscious action to control or check expenditure, perhaps as a New Year health or budget resolution.
This is a form of narrow bracketing whereby people divide their expenditure, wealth and income into different 'mental accounts' (4). These mental accounts represent narrow brackets. For example, a restaurant patron may have two separate dining budgets in his or her mind: a food budget and a beverage budget. The unlabelled voucher could therefore be split between either account, while the labelled voucher may only be added to the beverage budget. Once allocated to an account, money is not easily shifted, and so the label given to the gift affects spending.
Furthermore, the tighter the customer’s budget, the more strictly mental accounts are enforced. So the labelling effect has greater impact on the less wealthy.
Reciprocity may cause customers to respond in a way they think is helpful to the gift-giver. Customers may see a promotion as a gift and reciprocate by spending more of it on the targeted good. Reciprocity does not appear to cause the labelling effect for vouchers, but it may do for Government payments.
In the context of retail vouchers, most people are aware of the labelling effect per se but fewer are aware of its real impact (5). Awareness of the labelling effect is driven by non-verbal cognitive ability and age. The use of labels is less obvious to younger people with lower cognitive ability. Conversely, those more likely to respond are less likely to know about it.
1. Labels change behaviour, so target promotions such as vouchers to increase spend on more profitable goods. Do not assume your customers will simply rearrange their money, even with small promotions. So if your customer spends £10 on books and £10 on (more profitable) DVDs, a £5 gift can significantly change the spending balance. Evidence suggests labelling the voucher for DVDs would cause customers to spend around £2.50 more on DVDs than they would with an unlabelled voucher.
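The £2.50 figure follows from scaling the restaurant study's effect size to the voucher value. A back-of-envelope sketch, assuming (purely for illustration, not as a finding of the study) that the extra spend scales linearly with the voucher value:

```python
# Restaurant study (1): an 8 EUR labelled voucher produced 3.90 EUR
# of extra spend on the targeted (beverage) category.
study_voucher = 8.00
study_extra_spend = 3.90
effect_rate = study_extra_spend / study_voucher  # ~0.49 of voucher value

# Hypothetical 5 GBP voucher labelled for DVDs (illustrative example):
dvd_voucher = 5.00
extra_dvd_spend = effect_rate * dvd_voucher
print(round(extra_dvd_spend, 2))  # ~2.44, i.e. roughly 2.50 GBP
```

In other words, roughly half of a labelled non-distortionary voucher shows up as genuinely incremental spend on the targeted category.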
2. Nudge your customers into narrow bracketing by creating new product divisions in categories and markets. If some DVDs are more profitable than others, then divide them into groups by age or genre.
3. Use the labelling effect to make the most of loyalty scheme promotions. Perhaps by making reward points worth more on certain products, or by allocating reward points to different ‘accounts’ to spend on different products.
4. Use behavioural economics to uncover new insights and optimise your promotions. Small promotions and labelling or small changes in copy can change spending patterns. Schedule research, such as quali-quant tests or hall tests to understand the causes and effects.
(1) Abeler, J. & F. Marklein (2013), ‘Fungibility, Labels, and Consumption’, Working Paper. First published May 2008 as IZA Discussion Paper No. 3500.
(2) Beatty, T., Blow, L., Crossley, T. & C. O’Dea (2011), ‘Cash by any other name? Evidence on labelling from the UK Winter Fuel Payment’, Institute for Fiscal Studies Discussion Paper.
(3) Read, D., Loewenstein, G. & M. Rabin, (1999) ‘Choice Bracketing’, Journal of Risk and Uncertainty, 19(1–3), p.171–97.
(4) Thaler, R. H. (1999), ‘Mental accounting matters’, Journal of Behavioral Decision Making, 12, p.183-206.
(5) Hogg, T. (2013) ‘Fungibility: Are People Aware of Non-Fungibility?’, MSc Dissertation at The University of Nottingham. Available on request.
The advent of digital media has inspired many new forms of customer research which businesses are embracing with a passion. We have also witnessed marketers foregoing more traditional approaches of gaining customer insights, primarily to generate cheaper and quicker results. However, there are lots of myths and misconceptions surrounding digital methods. As one mobile phone marketer commented, 'let's say we don't wholly buy into the claims being made about online'. So what are the facts and considerations when choosing between traditional vs. digital market research methods? New doesn't necessarily mean better… or does it?
More traditional forms of research involve either face-to-face contact or verbal conversations in real time, such as:
Traditional face-to-face or telephone approaches enable the moderator to go with the natural flow of the discussion, and thus better understand what's important to interviewees. They also allow the moderator to flex the discussion, intervene, probe or challenge at any point in the proceedings.
In addition, findings or interpretations are based on respondent comments and non-verbal indicators such as facial expressions, body language, behaviour and voice intonation. Albert Mehrabian famously found that body language accounts for 55% of received communication, tone of voice 38%, and words just 7% (1). This non-verbal communication therefore adds richness and texture to information and gives deeper insight. What is not said is often as revealing as what is said.
Conversely, traditional research approaches consume more time and cost, and sometimes need more time to set up. For example, recruiting a very specific sample, such as frequent rail and air travellers with experience of mobile applications, could easily take a couple of weeks.
The massive growth of the internet and social media enables marketers and researchers to communicate with their consumers digitally, and to better understand the changing digital world. New digital functionality, such as wikis, video filming and uploading, and messaging, also provides researchers with new means of customer communication and of capturing information. This helps researchers and customers collaborate and co-create ideas.
Some groups have particular affinity with the digital world and are thus easier to engage e.g. kids/youth market. The anonymity of the online world also encourages participation and openness. Early technology adopters are particularly useful to pressure-test new ideas and anticipate the future.
Some digital media offer an almost 'instant' sample, for example polls on Facebook, Twitter or blogs. However, a high number of engaged followers is needed to generate fast and cost-effective insights.
The growing range and extent of online communication, for example via smartphones, makes it easier to reach a wide geographic target, avoiding travel and sometimes communication costs. In-built cameras also make it easier to collect visual or audio insights.
More complex technology, such as that involved in online qualitative research, is a little more difficult to master. So allow time for set-up, for helping respondents, and for moderating and analysing the research. It can thus sometimes be more expensive than face-to-face discussions.
Online moderation is also more difficult. The process is often more linear and mechanical, limiting the ability to pursue all avenues of exploration. There are also visual limitations: zoomed-in head shots or screen-sized room views make it difficult to see the big picture, and thus non-verbal responses. Qualitative responses also vary between the superficial and the detailed. Initial superficial responses require more probing. Conversely, unduly verbose responses, especially if written, take time to follow and interpret.
Digital methods complement traditional methods and vice versa. Digital tools also help automate research activities, for example, making some activities, such as recruitment, and quantitative fieldwork, cheaper and quicker. In particular, online is a fast and cost-effective way to recruit respondents for traditional qualitative research. It ensures broader reach, and helps mitigate against serial groupies.
However, there will always be a need for a moderator to ease the journey of discovery and dig into the detail. Online moderation is just more difficult: witness any radio, let alone text, discussion.
Online pre-planning also needs to be more exacting to make sure respondents are capable of accessing and using systems. And this has a time-cost.
Technology can also fail. As a result, some online qualitative approaches advocate running research with two people. One to manage the IT systems, and another to moderate the discussion.
Whichever method is used, there is a need for human management and analysis, particularly for qualitative research, where online costs can be higher than face-to-face.
The nature of social media also means there is ever more data available for analysis. Analysis of social media big data has sometimes yielded more accurate insights than conventional polls, for example on election outcomes.
New hybrids that cross the lines of traditional and digital media offer the advantages of both worlds. For example, Skype and Zoom are a boon for conducting remote face-to-face interviews and thus see and hear respondents.
If you have or are aware of any new digital research methods that merit inclusion in our article please let us know.
(1) Mehrabian, A. (1981), Silent Messages: Implicit Communication of Emotions and Attitudes, 2nd edition.
(2) Carter Simon, Managing Director, Fujitsu ‘Back to basics’ Marketing Week & Research Live April 2011
(3) O’Reilly Lara ‘A blinkered digital vision makes marketers forget the customer’ Marketing Week 21 Oct 2011
The quantitative vs. qualitative research debate has been going on since the 1970s. Apparently it's all about epistemology, the branch of philosophy concerned with the theory of knowledge. Quantitative research is associated with positivism, i.e. scientific and objective. Conversely, qualitative research is associated with interpretivism, i.e. non-scientific and subjective.
But there is an academic argument that the two methods cannot and should not work together.
“The chief worry is that the capitulation to “what works” ignores the incompatibility of the competing positivistic and interpretivist epistemological paradigms that purportedly undergird quantitative and qualitative methods, respectively”(1). Blah, blah, blah…
The blurring of lines between quantitative and qualitative research has gone on for some time. How many times have you attended research groups and done a quick 'tally' of responses to gain some quantitative guidance? Or, within a quantitative omnibus, included a few open-ended questions to give a little more colour? Superficial instances admittedly, but evidence of 'blurring' nonetheless.
Perhaps the reason overlap has not been fully acknowledged is because many believe the disciplines still run separately? Or perhaps it is because as a ‘quali’ or a ‘quanti’ researcher you are defined or compartmentalised at birth?! So never the twain shall meet? There is some truth in this as many researchers tend to train under a single discipline. In addition, most large research organisations run separate quantitative and qualitative departments.
However, speaking as someone 'on the ground', a qualitative researcher (and perhaps somewhat fearful of quantitative research), it is possible to marry these two approaches and gain extra benefits. Thus there is room for a new model: a better hybrid of qualitative and quantitative research. Here are some examples:
Qualitative research discussions often include a few 'wishy-washy' answers to questions, so it can be difficult to discern differences in meaning: for example, in what one person means by 'like' versus another, as well as in the overall shades of 'like', 'love' etc. Using simple quantitative measures, such as a rating out of 10, provides much more clarity and decision-making substance.
For example, used within a new product development (NPD) process, it offers a more useful 'gate', enabling better short-listing and prioritisation. It also helps make sure you are not wasting thousands of hours and pounds barking up the wrong tree!
Quantitative surveys use open-ended questions to explain the numbers. But in many cases they explain nothing, because respondents fail to fill in the boxes or their responses are insufficiently detailed. The data can also be costly to collect and cumbersome to analyse.
However, combined qualitative-quantitative research can both assess and improve products, from food and drink to media and beyond. In a recent project, respondents tasted and critiqued a number of competitive food products. Research was undertaken in high-traffic places in order to recruit people off the street into a hall. Then, after gathering consumers' responses on a questionnaire, we explored their reasoning and revealed brand fit and new product development opportunities.
This work was hugely beneficial in providing clear guidance and recommendations for both brand and product development. It was also very cost-effective.
These techniques also apply to other categories and challenges, for example assessing packaging or merchandising. When refining packaging, a clear read on issues such as stand-out, and the reasoning behind it, is required. This can be achieved by co-opting a minimum of 100 consumers to review a mocked-up retail fixture, rotated with current and proposed new packs, and to complete a short questionnaire. By identifying the appealing packs and critiquing them within the visual noise of a fixture, a numerical assessment of stand-out is obtained. Subsequent qualitative discussion then allows deconstruction and analysis of the pack elements, and reconstruction of the ideal pack design.
To conclude the quantitative vs. qualitative research debate: there will always be a role for 'pure' quantitative and qualitative research approaches. However, research doesn't need to be pigeon-holed into one or the other.
It is possible to design quantitative-qualitative research to offer the benefits of both. In so doing you gain face-to-face consumer contact and understanding as well as meaningful numbers. Within this it is possible to set quotas for consumer types while also realising time and cost savings. Marketers just need to decide what they really need. So do you need understanding or numbers, or both? A creative research agency should guide and inspire you, even if it goes against what’s specified in the brief.
Get in touch for a bespoke qualitative-quantitative research proposal to meet your needs.
(1) Against the quantitative–qualitative incompatibility thesis (or dogmas die-hard) by Kenneth R. Howe, Ph.D – Professor of Philosophy at University of Colorado, Boulder Published in the Educational Researcher 17(8) 10-16 1988
‘Focus groups’ are often the default consumer research method yet in today’s highly competitive environment relying on groups alone is blinkered, if not blind. If everyone is just using the same technique how can anyone possibly gain an advantage over a competitor? So how can you gain new insights and an edge?
First, as insights can come from anywhere, it is vital to look in different places, and view and explore consumers in different ways. Use mixed methods to push the boundaries, dig deeper, and look into the future. We advocate using three consumer research strategies to gain an edge; we call them the three Cs: Context, Challenge and Collaboration.
Who consumers are, their needs, behaviour and influences are seldom what they seem. Sometimes consumer preferences and reasoning are beyond imagination. So we need to understand who consumers are, how they live their lives, what's important to them and why. By getting close up, and through observation, it is possible to truly understand the decision-making context.
Consumer thoughts and feelings come from their own frame of reference, i.e. experiences, prejudices and memory. By provoking consumers, it is possible to reveal what is unconsidered, hidden or perhaps forgotten. So take them out of their comfort zones and provide new experiences. For example, giving consumers a new or different product to try can reveal insights on current product deficiencies, on new or unmet needs, and on barriers to overcome. Conversely, combining loyal and lapsed consumers in a 'conflict group' can reveal drivers of and barriers to usage. It can also cast new light on the strength of attitudes, and on whether, and how, to change them.
Today we live in an increasingly connected and savvy society with a greater free-flow of information and collaboration. In IT, collaboration and 'open-source' software are commonplace. Consumers are also very familiar with advertising and brands. This is a boon for researchers and marketers: consumers are able to discourse in 'technical' terms and to create, as well as assess, communication and product ideas and solutions. The concept of collaboration applies to all aspects of research; the only limiting factor is our imagination! For example, including specific technical experts in research brings leading-edge insights and ideas, and potential glimpses into the future.
For market research that probes for new insights and gives you an edge, consider the three Cs: Context, Challenge and Collaboration.
There has been much in the press in recent years about market research losing its place in the boardroom. Most notably from Unilever who say that senior managers were unwilling to invest time attending a research debrief. Further, most CEOs consider market research less useful than finance, marketing, information services and human resources (1). However, market research professionals appear in denial about their relevance (2). Yet criticism is also made by major research suppliers (3)!
Some suggest researchers lack the ability to integrate information, fail to connect research results with business outcomes and are unable to turn complex data into clear narratives (3). Problems result from many causes: less-than-robust data collection, analysis and strategic interpretation. This article therefore shares insights and ideas to help restore the importance of research in the boardroom.
Triangulation is a mainstay market research method: when two or more methods are used in a study, confidence in the results increases. Denzin defines four basic types of triangulation (4). Methodological triangulation involves using multiple research methods to gather information, such as interviews, observations and documents. Data triangulation involves multiple time periods and respondents. Investigator triangulation involves multiple researchers. Finally, theory triangulation involves using multiple analytical methods or models.
As qualitative research data is usually unstructured, a key challenge is to manage, shape and make sense of it. The most common form of qualitative data analysis is observer impression. Computers and software offer a place to store information and tools to classify, sort and arrange it. However, computers and software do not do the thinking: identifying themes and patterns in data, i.e. uncovering insight, requires human skill.
And human skills and knowledge lie with the observer and analyst. While many analysts are graduates, most are career researchers and not business people. Further, for life stage and economic reasons, fieldwork and analysis tasks often fall to younger, less experienced staff.
The concept of triangulation provides a foundation on which to build. The more data collection points, the more ways a problem is looked at and the more analytical methods used, the more substantial both data collection and analysis become. Bricolage is a term used to describe such multiperspectival research methods. It is also a way to learn and solve problems by trying, testing and playing around. It avoids the reductionism of many single-method (monological) and mimetic research approaches (5, 6). Further, it enables more deductive reasoning (in which a conclusion is based on the concordance of multiple premises), and thus produces more comprehensive and specific insights.
Within qualitative research, employing simple numerical scoring (or semi-quantitative) techniques also enables more rigorous analysis. For example, asking respondents to independently select the most appealing communication idea from a gallery, or to rate a new product or service concept on a scale from 'will definitely buy' to 'will definitely not buy'. These techniques reduce reliance on subjectivity (interpretivism) (7) and add scientific method to qualitative research, i.e. objectivity (empiricism, positivism). This ensures that important differences in meaning and in relative customer appeal are discerned more readily, spotlighting the key issues and opportunities on which to focus, as well as 'outliers' (8) that demand more detailed investigation.
Probing and testing for clear cause-and-effect relationships also ensures more robust findings and analysis. The 'Manchester Map' is a useful technique learned in management consulting days. It involves systematically asking 'so what does this mean?' or 'why does this happen?', forcing all information to be reviewed, delineated and linked. This ensures clearer articulation and understanding of causes and effects, and in turn of findings, implications and conclusions.
Researchers should understand core marketing and business principles. Every marketer knows that customers have needs and seek products and services that meet their needs. So to design products and services to meet those needs, research must first clarify needs and wants, and also the drivers behind those needs. Only then can product benefits be matched or created to meet those needs.
Researchers should also have a good understanding of business aims and options. The broader and deeper the knowledge of a business's aims, possible strategies, and product, brand and marketing options, the broader and more penetrating the nature of enquiry, and thus the ability to uncover relevant, meaningful and actionable business insights.
1. Esomar Research World / ARF (2005)
2. Boston Consulting Group (2009)
3. Does Market Research Need inventing? www.InspectorInsight.com (2014)
4. Denzin, N. Sociological Methods: A Sourcebook. Aldine Transaction (2006)
5. Kincheloe, Joe. L. Berry, Kathleen, Rigour and Complexity in Educational Research (2005)
6. What is Mimetic Theory? www.woodybelangia.com
7. Interpretivism (or antipositivism) is a view that social research should not be subject to the same methods of investigation as the natural world. Gerber, John J. Macionis, Linda M. Sociology (7th Canadian ed.) page 32 (2010)
8. An ‘outlier’ or outlying observation is one that appears to deviate markedly from other members of the sample in which it occurs though this is partly a subjective exercise. Grubbs, F. E. “Procedures for detecting outlying observations in samples”, Technometrics 11 (1): 1–21 (February 1969)