- Meet Our Market
There has been much in the press in recent years about market research losing its place in the boardroom. Most notably, Unilever has said that its senior managers are unwilling to invest time attending research debriefs. An ESOMAR survey adds that most CEOs consider market research less useful than finance, marketing, information services and human resources (1). A further BCG survey suggests that market research users seem in denial about their lack of relevance (2). Criticism also comes from major research agencies (3).
Some say researchers lack the ability to integrate information, fail to connect research results with business outcomes and fail to turn complex data into clear narratives (3). These problems stem from less-than-robust data collection, analysis and strategic interpretation, all of which trace back to the research methods used and the skills of the people involved. Concise presentations and explanations help, but not if they raise more questions than answers, particularly 'so what?'.
Triangulation is a mainstay market research method. The idea is that using two or more methods in a study gives more confidence in the results. Denzin defines four basic types of triangulation. First, methodological triangulation, which uses multiple research methods to gather information, such as interviews, observations and documents. Second, data triangulation, which involves multiple time periods and respondents. Third, investigator triangulation, which involves multiple researchers. Finally, theory triangulation, which uses multiple analytical methods or models (4).
Bricolage is a term used to describe multiple or multi-perspectival research methods; it is also a way to learn and solve problems by trying, testing and playing around. It avoids the reductionism of any single-method (monological) or mimetic research approach (5, 6). It enables more deductive reasoning (in which a conclusion is based on the concordance of multiple premises) and produces more comprehensive and specific insights.
Qualitative research data is usually unstructured, so a key challenge is to manage, shape and make sense of it. The most common qualitative market research analysis method is observer impression. Computers and software offer a repository and tools to classify, sort and arrange information, but they cannot think: human skill must spot themes and patterns and uncover insight.
Skills and knowledge lie with the observer and analyst. For life-stage and economic reasons, fieldwork and analysis tasks often fall to younger, less experienced researchers. While many are graduates, experience is acquired mainly on the job. Understanding the research discipline, the business world, and how to apply learnings to business takes time.
Within qualitative research, employing simple numerical scoring (or semi-quantitative) techniques helps give weight to findings and sort the 'wheat from the chaff'. For example, ask respondents to independently select the most appealing communication idea from a gallery, or to rate a new product or service concept on a scale from 'will definitely buy' to 'will definitely not buy'. This reduces reliance on subjectivity (interpretivism) (7) and adds scientific rigour, i.e. objectivity (empiricism, positivism), to qualitative research. You will spot differences in meaning and in relative customer appeal more readily. In turn this spotlights key issues, opportunities and 'outliers' (8) that demand more detailed investigation.
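As a minimal sketch of the semi-quantitative scoring idea above (the concept names, scale wording and responses here are all invented for illustration), a purchase-intent tally might look like this:

```python
from collections import Counter

# Hypothetical 5-point purchase-intent scale, as described above
SCALE = ["will definitely not buy", "will probably not buy", "not sure",
         "will probably buy", "will definitely buy"]

# Invented responses from a small qualitative sample
responses = {
    "Concept A": ["will definitely buy", "will probably buy", "will probably buy",
                  "not sure", "will definitely buy", "will probably buy"],
    "Concept B": ["not sure", "will probably not buy", "will probably buy",
                  "will definitely not buy", "not sure", "will probably not buy"],
}

# 'Top two box': share answering 'will probably buy' or 'will definitely buy'
results = {}
for concept, answers in responses.items():
    counts = Counter(answers)
    results[concept] = sum(counts[s] for s in SCALE[3:]) / len(answers)
    print(f"{concept}: top-two-box = {results[concept]:.0%}")
# Concept A: top-two-box = 83%
# Concept B: top-two-box = 17%
```

Even on a qualitative base of six, a tally like this makes the gap in relative appeal easier to see and defend than impressionistic notes alone.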
Probing and testing cause-and-effect relationships also makes for more robust analysis. The 'Manchester Map' is a useful technique learned in management consulting days: systematically review findings and ask 'so what does this mean?' or 'why does this happen?'. It helps sort and delineate information, enabling clearer understanding and articulation of findings and conclusions.
Every marketer knows that customers have needs and seek products and services whose benefits match those needs. So to design products and services, researchers must first understand needs, and the drivers behind those needs; only then can product benefits be matched to them. This simple marketing logic helps challenge and analyse market research findings, which is why researchers must understand basic marketing principles. In addition, a broad and deep knowledge of a business's aims, its possible business and marketing strategy options, and its product, communication and brand elements allows broader and more penetrating enquiry, in turn inspiring more insightful, relevant, meaningful, beneficial and actionable findings and conclusions.
1. Esomar Research World / ARF (2005)
2. Boston Consulting Group (2009)
3. Does Market Research Need inventing? www.InspectorInsight.com (2014)
4. Denzin, N. Sociological Methods: A Sourcebook. Aldine Transaction (2006)
5. Kincheloe, Joe L. and Berry, Kathleen, Rigour and Complexity in Educational Research (2005)
6. What is Mimetic Theory? www.woodybelangia.com
7. Interpretivism (or antipositivism) is a view that social research should not be subject to the same methods of investigation as the natural world. Macionis, John J. and Gerber, Linda M., Sociology (7th Canadian ed.), p. 32 (2010)
8. An ‘outlier’ or outlying observation deviates markedly from other members of the sample in which it occurs. Grubbs, F. E. “Procedures for detecting outlying observations in samples”, Technometrics 11 (1): 1–21 (February 1969)
Focus groups are a tried and tested qualitative research staple, having risen to prominence in the 1950s (1). Yet in today's highly competitive environment, relying on focus groups alone is limiting. If everyone just uses focus groups, how can anyone possibly unearth new insights (2)?
The recipe for success is to use mixed methods. As insights can come from anywhere, look in different places, from different angles, and in different ways. When designing research we advocate four strategies to unearth new insights; we call these the four Cs: Context, Challenge, Collaboration and Calculation.
To understand the context in which consumers make choices requires getting up close, for example through observing and recording daily life, to understand who consumers are, their needs, the influences on their behaviour and the processes involved in choosing to buy or consume a product. They are seldom what you think. Exploring the customer journey (say in food) from discovery through choosing, buying, storing, preparing and eating to using the left-overs and packaging helps reveal what is important and what is nice-to-have at each stage, including success factors such as convenience, ease of use and sustainability. Observing meal preparation also helps reveal product misconceptions or packaging inadequacies, and observing eating occasions helps explore social drivers and barriers. All of this pinpoints previously unconsidered product, positioning and promotion issues and opportunities.
What consumers think and feel is based on their own frame of reference, i.e. experiences, prejudices and memory. Stimulating them with new experiences, by taking them out of their comfort zones, helps uncover new, hidden or forgotten thoughts. For example, giving consumers a new or different product to try helps reveal new or unmet needs, or barriers to overcome. Combining loyal and lapsed consumers in a 'conflict situation' to debate what is good, bad or plain OK about a product or service helps reveal barriers to usage. It can also shed light on the strength of views and whether, and how, these can be overcome.
No-one has a monopoly on good ideas, and we live in a society with an increasingly free flow of information and collaboration. Consumers' familiarity with advertising and brands means they are more 'savvy' and able to converse in 'technical' terms. This is a boon for researchers and marketers, as it allows consumers to evaluate and create new marketing solutions. The concept of collaboration applies to whom, how and what to research; the only limiting factor is our imagination! Involving technical experts or opinion leaders in research brings even more critical and creative thinking. It also brings the future closer.
While qualitative research is good at answering 'what' and 'why' questions, it is less effective at answering 'how many' or 'how much' questions. Simple scoring techniques overcome the problem by distinguishing the 'really good' from the 'OK' responses. They also allow people to think for themselves, mitigating the group 'herd' effect, spotlight winning ideas, and make sure research really helps marketing colleagues drive the business forward.
1. While focus groups are a qualitative research 'default', embrace the 4 Cs to view your challenge in a new light and unearth more insights.
2. While quantitative and qualitative research were once different disciplines, and worlds apart, they are now blurring and overlapping. This presents new creative opportunities for researchers.
3. It is a myth that mixed research methods cost more. When writing your next market research brief, be clear about your questions and guidelines – especially budgetary. This encourages agency 'creativity' ;-).
(1) Columbia University, History of Focus Group Research
(2) An insight is a 'consumer need, want or belief that points to a new opportunity', perhaps giving extra importance to something that has previously been ignored, forgotten or dismissed. Net, the insight should shed new light and aid the business or brand. Chapter 13, Managing Market Research, The Marketing Director's Handbook.
The qualitative vs. quantitative research debate started in the 1970s. It is all about epistemology (1), the branch of philosophy concerned with the theory of knowledge. Qualitative research is described as 'interpretivism', i.e. non-scientific and subjective; quantitative research as 'positivism', i.e. scientific and objective.
But there is an academic argument that the two methods cannot and should not work together.
“The chief worry is that the capitulation to “what works” ignores the incompatibility of the competing positivistic and interpretivist epistemological paradigms that purportedly undergird quantitative and qualitative methods, respectively”(2). Blah, blah, blah…
The blurring of lines between qualitative and quantitative research has gone on for some time. How many times have you attended focus groups and done a quick 'tally' of responses to gain some quantitative guidance? Or, within an omnibus, included a few open-ended questions to add a little more colour? Superficial instances perhaps, but evidence of 'blurring' nonetheless.
One possible reason the overlap is not fully acknowledged is that many believe the disciplines still run separately. Another is that qualitative and quantitative researchers are 'defined at birth', and never the twain shall meet. It is true that many researchers train in one discipline and that most large research organisations run separate quantitative and qualitative departments.
However, from hard-won experience it is possible to marry both approaches and gain extra benefits. There is room for a new model, a qualitative and quantitative research hybrid. Here are some examples:
Qualitative research discussions often include a few 'subjective' answers to questions where it is difficult to discern differences in meaning: for example, between 'like' and 'love', or 'great' and 'good'. When two people say they 'like' something, do they mean the same thing? Numeric measures, such as a simple Likert scale (3), provide more decision-making substance.
Rather than asking consumers who 'likes' what, asking them to state product purchase intent, or to select a limited number of 'would definitely try or buy' product ideas, helps focus time and energy on the ideas that matter. This is a more useful 'gate' in a typical NPD process, and it better helps assess marketing implications and potential. When developing new products this can save thousands of hours and pounds by not barking up the wrong tree!
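The 'gate' idea can be sketched in a few lines; the idea names, counts and pass threshold below are all invented for illustration:

```python
# Hypothetical NPD gate: keep only ideas where enough respondents said
# they 'would definitely try or buy'. Counts and threshold are invented.
definite_buy_counts = {"Idea 1": 14, "Idea 2": 3, "Idea 3": 9}
n_respondents = 20
GATE = 0.40  # pass if at least 40% gave a 'definite' response

shortlist = [idea for idea, count in definite_buy_counts.items()
             if count / n_respondents >= GATE]
print(shortlist)  # ['Idea 1', 'Idea 3'] -- only these go to the next stage
```

Forcing a 'definite' choice like this is what separates polite interest from ideas consumers would actually act on.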
Quantitative surveys use open-ended questions to explain the numbers. But in many cases they explain little, because respondents fail to fill in the boxes or write just two or three words. Such data is also costly to code and cumbersome to analyse.
However, combined qualitative and quantitative research can assess and improve products and more. In a recent study, respondents tasted and critiqued a number of competitive food products. The research was undertaken in a high-traffic location so people could be recruited off the street into a hall. With some support from a moderator, consumers completed a simple questionnaire to assess relative appeal, brand fit and opportunities for product improvement, as well as their reasoning.
The same techniques can assess service ideas, communications and packaging: for example, at the pack refinement stage, when a clear read on shelf stand-out and the reasoning behind it is required. Co-opting a minimum of 100 consumers to review a mocked-up retail fixture, identifying the appealing packs and critiquing them within the visual noise of the fixture, provides a numerical assessment of stand-out. Adding a qualitative discussion to deconstruct and reconstruct the pack elements provides understanding and guidance for improvement.
1. The debate does not have to be about qualitative vs. quantitative research as there are roles for both.
2. Combined qualitative-quantitative research offers the benefits of both methods. Dial either up or down to gain answers to 'why' questions as well as meaningful numbers. Within this it is also possible to establish quotas for consumer types, and to save time and money too. So do you need understanding or numbers? Or both? A creative research agency should help you get the most for your money.
1. What is Epistemology? https://en.wikipedia.org/wiki/Epistemology
2. Howe, Kenneth R. (Professor of Philosophy, University of Colorado, Boulder). Against the quantitative-qualitative incompatibility thesis (or dogmas die hard), Educational Researcher 17(8): 10–16 (1988)
3. What is a Likert scale? https://en.wikipedia.org/wiki/Likert_scale
Recent OFCOM research highlighted that 71% of the UK receive nine nuisance calls a month, and that telephone research is the #4 culprit (1). So has telephone research had its day? Meanwhile, online research grows apace. We have looked closely at the merits of telephone, online and face-to-face (ftf) research. If you wish to commission a quantitative survey, this article summarises some insights and ideas to help you make the most of your research investment.
Quantitative research costs are sensitive to sample size, the ease of reaching an audience or 'incidence', the length of the survey, and the mode and complexity of fieldwork and analysis. Compared with online (index = 100), fieldwork costs are typically higher for face-to-face (index 350-450) than telephone (index 250-300) due to the greater human time involved. Other costs, such as coding for online research, computer-aided telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI), are similar.
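The index values above translate into budgets straightforwardly. A quick sketch, where the £10,000 online base cost is purely an assumption for illustration:

```python
# Indicative fieldwork costs from the index values quoted above
# (online = 100). The online base budget is an invented example figure.
online_base_cost = 10_000  # GBP

index_ranges = {"online": (100, 100),
                "telephone": (250, 300),
                "face-to-face": (350, 450)}

costs = {}
for mode, (lo, hi) in index_ranges.items():
    costs[mode] = (online_base_cost * lo / 100, online_base_cost * hi / 100)
    print(f"{mode}: £{costs[mode][0]:,.0f}-£{costs[mode][1]:,.0f}")
# telephone comes out at £25,000-£30,000 and face-to-face at £35,000-£45,000
```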
82% of the UK are online, though many online surveys use panels which cover just 5% of the population. There are some geographic gaps, and respondents are more 'Internet experienced', so some sample bias is possible. Nearly all homes have access to at least one phone, though telephone databases cover just 60% of UK homes (and even fewer will have agreed to take part in research!). Fixed-line telephone reaches 82% of homes (and proportionately more of the elderly) while mobiles reach 81% (and proportionately more of the young) (1). Face-to-face can reach most places (though with extra travel costs).
Online response depends on the nature of the panel and how responsive and interested respondents are; expect between 5% and 30%. Response from links on websites or in emails will similarly depend on the nature of the source. Telephone response rates have fallen over the last decade and are now around 10-15%. Face-to-face response is around 15-20%.
The self-selecting nature of online panels means there is a greater risk of respondents only participating in surveys that interest them: so-called avidity bias. Typically online respondents are younger, more familiar with the online world and spend more time in it. They are also more informed, more opinionated and more politically activist (2). Panels also contain more early technology adopters, though it remains possible to discern other types on the 'diffusion of innovation' spectrum.
Telephone research respondents present socially desirable responses more often than face-to-face respondents (3,4). This is particularly the case among those with lower intellectual ability or fewer years of formal education (i.e. C2DEs). Research has also shown that respondents are more comfortable discussing sensitive subjects face-to-face, as they can see, and thus have greater trust in, the interviewer. Conversely, face-to-face interviews conducted in the respondent's home minimise anonymity, making socially desirable responses more pronounced. Overall, however, interpersonal trust between interviewer and respondent has the greater influence, resulting in more honest responses. Face-to-face shows similar results to online (where there is no interviewer effect), though some research (5) has observed higher valuations in response to some 'willingness to pay' questions (for example, when there is a perceived 'civic virtue' in being seen to add to a common good).
Satisficing (a blend of the words satisfy and suffice) involves short-cutting the response process, settling on a solution that is 'good enough' rather than optimal.
Telephone interviews pose an increased cognitive burden. Questions are harder to comprehend, which reduces the effort respondents make to cooperate, search their memory and process information. Perceived time pressure also fatigues and demotivates. The result is less considered answers: higher acquiescence (answering affirmatively regardless of the question), more 'no opinion' responses, choosing mid-points or only the extremes of rating scales, easier-to-defend answers and reduced disclosure. Again this is more pronounced among those with lower intellectual ability. Research (3,4,5) suggests face-to-face interviewers are better able to judge confusion, waning motivation and distraction (watching TV, eating, etc.), and to motivate respondents, make the questionnaire easier to understand and improve cooperation on complex tasks. With online, respondents go at their own pace.
1. Useful research starts with a clear market research brief. Decide your objectives, target market, what you need to know and any guidance. Beyond feasibility and answers to questions, what’s the relative importance of cost, speed, ‘reliability’ etc? Be as clear as you can about expected sample sizes.
2. Understand the pitfalls in conducting quantitative research. Larger samples give greater reliability: at the 95% confidence level, a sample of 1,000 gives a margin of error of around +/-3%, compared with around +/-4.4% for a sample of 500 (i.e. if the survey were repeated 100 times, in 95 instances the result would fall within that margin). Prefer shorter surveys to cut the risk of satisficing.
3. Make sure samples are unbiased and give reliable findings. Nationally representative samples are key to measuring awareness, usage and market share; anything else builds in bias and risks misleading. Ensure your sample eliminates any demographic, subject-affinity, usage or other bias.
4. There are even more pitfalls if you would like to repeat a quantitative research survey or set up a brand tracker. Take extra care to make sure the pool of respondents will deliver a sufficient and matched sample for each survey wave so findings are comparable.
5. Beware spurious analysis. Remember the Whiskas advert that famously told us '8 out of 10 cats prefer Whiskas'. This was eventually changed to '8 out of 10 owners who expressed a preference said their cats preferred Whiskas'. What we still don't know is how many said 'don't know', how many expressed a preference, and the sample size. Whatever the survey mode, be clear about the sample size and about what is statistically significant versus merely directional. Clear and fair analysis gives more useful insights and ensures better decision-making!
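The reliability points in (2) and (5) above can be sketched with the standard normal-approximation margin of error for a proportion (the function name is ours, and the sample sizes are illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for an observed proportion p on a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples narrow the interval, but with diminishing returns
for n in (500, 1000, 9604):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=500: +/-4.4%
# n=1000: +/-3.1%
# n=9604: +/-1.0%  (roughly what a +/-1% read actually requires)

# '8 out of 10' on a base of just 10 is only directional
print(f"n=10, p=0.8: +/-{margin_of_error(10, p=0.8):.1%}")
# n=10, p=0.8: +/-24.8% -- and the approximation itself is shaky on bases this small
```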
1. OFCOM, Telephone Nuisance Research (2014).
2. Duffy, Bobby; Smith, Kate; Terhanian, George; Bremer, John. Comparing Data from Online and Face-to-face Surveys. International Journal of Market Research 47(6) (2005).
3. Holbrook, Allyson L.; Green, Melanie C.; Krosnick, Jon A. Telephone versus Face-to-face Interviewing of National Probability Samples with Long Questionnaires. Public Opinion Quarterly 67: 79–125 (2003).
4. Szolnoki, G.; Hoffmann, D. Online, face-to-face and telephone surveys – Comparing different sampling methods in wine consumer research. Wine Economics and Policy 2: 57–66 (2013).
5. Lindhjem, Henrik; Navrud, Ståle. Are Internet surveys an alternative to face-to-face interviews in contingent valuation? Ecological Economics 70(9): 1628–1637 (2011).
Digital media continues to inspire new customer research methods, and many myths, perceptions and misconceptions surround digital vs. traditional approaches. So what's available, and what are the issues and opportunities when choosing between them? Booming options are a boon for market researchers everywhere.
More traditional research methods involve face-to-face or verbal conversations in real time, such as:
Traditional face-to-face or telephone research methods enable the moderator to follow the natural flow of the discussion and understand what's really important to interviewees, and to flex the discussion, intervene, probe and challenge at any point in the proceedings.
Face-to-face methods allow observation of non-verbal indicators such as facial expressions, body language, general behaviour and voice intonation; what's not said is sometimes as important as what's said. Albert Mehrabian found that, when feelings and attitudes are communicated, body language accounts for 55% of the message received, tone of voice for 38% and words for only 7% (1). Non-verbal communication adds richness and texture to information and gives deeper insight.
Costs include not only research moderation and analysis but also travel, respondent recruitment and research incentives. Incentives typically cover pre-tasks and travel as well as time spent attending the research.
The massive growth in general Internet use, at home and on the go, and in social media networking sites in particular, provides more direct consumer research options and new ways to understand the digital world. Digital functionality such as wikis, video filming and uploading, and messaging gives researchers new means of capturing information too. All of this helps researchers and customers work together to explore and develop ideas.
Some groups love the digital world and are easier to engage there, e.g. the kids/youth market. Groups such as early technology adopters help pressure-test new ideas and anticipate the future. Remaining anonymous also encourages participation and openness.
Some digital media offer an almost 'instant' sample, for example polls on Facebook, Twitter or blogs, though a very large following is needed to generate fast and cost-effective insights.
The growing range of communication devices, for example smartphones, makes it easier to reach a wide geographic target. Built-in cameras also make it easier to collect visual or audio insights, avoiding travel and some communication costs.
However, some technology, for example that used in online qualitative research, is a little more difficult to master. Allow time for set-up and for helping respondents, as well as for moderating and analysing the research. This adds to costs, making online discussions more expensive than face-to-face.
Online moderation is more difficult. The process is often more linear and mechanical, limiting the ability to pursue all avenues of exploration. There are also visual limitations: zoomed-in head shots or screen-sized room views make it difficult to see the big picture and to read non-verbal responses. Qualitative responses vary between the superficial and the detailed; superficial initial responses require more probing, while unduly verbose responses, especially written ones, are time-consuming to follow and interpret.
(1) Mehrabian, Albert. Silent Messages: Implicit Communication of Emotions and Attitudes, 2nd edition (1981).
If you have some research technology we’ve not covered let us know!