|Benefits of understanding the customer journey|Marketing benefits|
|---|---|
|Reveals the customer’s decision-making thoughts, feelings and needs at each stage|Reveals under- or over-performance, and thus where and when to optimise or smooth the customer journey|
|Clarifies decision-making influences, i.e. the relationship drivers or barriers (pain points) at each stage|Informs how to communicate better to engage, attract and retain customers and secure referrals, e.g. through which media and with what messages. Also how to optimise the customer or brand experience, e.g. through better processes|
In 1961, Lavidge and Steiner created ‘The Hierarchy of Effects’ model to explain customer buying behaviour, and to help advertisers make better advertisements (1). The concept of the customer journey is essentially the flip side of the same coin.
Lavidge and Steiner’s ‘Hierarchy of Effects’ model comprises three stages, each with one, two or three steps: six steps overall, as illustrated in Figure 1, and summarised from top to bottom below:
Lavidge and Steiner’s model is effectively a linear series of steps:
However, while the model remains most relevant in a world of mass advertising, in today’s online world customer journeys are much more complex.
Understanding the customer’s journey allows marketers to determine the sequence, nature and importance of the steps, as well as the triggers, drivers and barriers to sales of a particular service or product. Thus, where and how to promote and add value to an offer, and specifically how to improve trial, retention and endorsement.
It is akin to a powerful ‘sat nav’ to marketing success.
However, shopping no longer takes place just in the High Street but anywhere, anytime, with the result that nearly 20% of purchases were made online in 2019 (and significantly more during the lockdown). Further, almost all UK adults aged 16-44 years, and 95% of adults aged 16-74 years, use the Internet daily (2). 79% also own a smartphone, making it easy to go online on the go (3). Thus it has never been easier to buy food, clothes, music, films, sports equipment, holidays, cars etc. online.
With the growth of the online world, the customer journey has become a more complex, stepping-stone process (Figure 3): sometimes purposeful and fast, sometimes serendipitous, looping and seemingly never-ending. Recent research by Google Australia/New Zealand describes this as the ‘messy middle’ (4).
Different media present different environments, and thus attract different demographics and psychographics. Further, media entry points and occasions vary, and technology increasingly links one medium to another. Of course, markets, competitive contexts and customer needs also vary. Consequently, influences differ, resulting in different customer behaviour.
The threat is that marketers see only digital media and rely only on website analytics data. This captures the effects of customer behaviour but fails to explain why customers do what they do, or what their needs are.
The opportunity is to see and understand the big picture: to understand customers in the context of the entire media and competitive landscape; where, when, how and why they behave as they do; and then to plan marketing accordingly.
First, use qualitative research to understand the big picture: what customers do, and why. The benefit of a qualitative approach is that it fully explores what customers consider important, what engages, what fails to engage and, most importantly, why. Here are some start-point (though generic) questions to ask (Figure 4).
In our experience, there are also two tracks to any customer journey: first, the journey into the category; and second, the journey to discover and choose a particular brand. Original customer research is valuable to understand the twin tracks, and also the relationships between them.
It is almost impossible to predict a journey from the outside looking in, and thus research will inevitably unearth new insights.
Google’s own research suggests consumers cycle between exploration and evaluation modes, continuously selecting, short-listing and deselecting offers until a final purchasing decision is made. They also highlight the importance of brand awareness, and the power of benefits, recommendations and incentives to shift demand from a familiar to a lesser-known brand (4).
There are many ways to map customer journeys, and what to do should follow from your business objectives and needs. A grid matrix may best inform a new IT customer contact system, but risks the ‘wood being hidden by the trees’. A more visual representation may better help colleagues understand and act on research findings. Figure 5 (below) summarises, on one page, a customer journey to research family history or family trees. It highlights key events, drivers and barriers on the journey, and is useful in bringing to life rational and emotional factors, and thus opportunities to optimise the journey and better market brands.
Use quantitative research to rank ‘influencing’ factors, and thus prioritise marketing effort. But conduct qualitative research first to scope your quant survey and finesse the questions. You’ll need to devise a long list of answers to the following:
Figure 6 (below) summarises some of the results from a survey to assess key reasons (and barriers) to buying telecoms services (fixed line, mobile, broadband and TV). This shows how issues and attitudes differ across two countries. It also reveals a marketing opportunity simply to promote additional services!
There has been much in the press in recent years about market research losing its place in the boardroom. Most notably from Unilever, who say that their senior managers are unwilling to invest time in research debriefs. An ESOMAR survey also adds that most CEOs consider market research less useful than finance, marketing, information services and human resources (1). A further BCG survey suggests that even market research professionals seem in denial about their lack of relevance (2). Yet criticism is also made by major research agencies (3). The problem appears to result from less than robust data collection, and also from flimsy analysis and strategic interpretation.
Issues also trace to the research methods used and the skills of the people involved. Some say researchers lack the ability to integrate information, fail to connect research results with business outcomes, and also fail to turn complex data into clear narratives (3). Of course, concise presentations and explanations are important. But not if they result in more questions than answers. In particular ‘so what does this mean?’.
Triangulation is a mainstay market research method. The idea is that using two or more methods in a study gives more confidence in the results. Denzin defines four basic types of triangulation. Firstly, methodological triangulation. This involves using multiple research methods to gather information, such as interviews, observations, and documents. Secondly, data triangulation which involves multiple time periods and respondents. Thirdly, investigator triangulation which involves multiple researchers. And finally, theory triangulation which involves using multiple analytical methods or models (4).
Bricolage is a term used to describe multiple or multi-perspectival research methods; also a way to learn, and solve problems, by trying, testing and playing around. It avoids the reductionism in any single method (monological) and also mimetic research approaches (5 and 6). It also enables more deductive reasoning (in which a conclusion is based on the concordance of multiple premises). Lastly, it produces more comprehensive and specific insights.
Qualitative research data is usually unstructured, so the challenge is to manage, shape and make sense of it. The most common qualitative market research analysis method is observer impression. Computers and software can also classify, sort and arrange information. But software cannot think; it is left to human skill to spot themes and patterns, and thus uncover insights.
While skills and knowledge lie with the observer and analyst, for life-stage and economic reasons fieldwork and analysis tasks often fall to younger, less experienced researchers. Many are graduates, but experience is acquired mainly on the job, which explains why ‘business savvy’ may be lacking.
Every marketer knows that customers have needs and seek products and services whose benefits match those needs. So to design products and services, researchers must first understand needs, and the drivers behind those needs. Only then can product benefits be matched to them. This simple marketing logic helps challenge and analyse market research findings. It is therefore vital that researchers understand basic marketing principles in order to uncover, analyse and interpret findings. A broad and deep knowledge of a business’s aims, as well as of marketing and brand concepts, also allows broader and more penetrating enquiry, inspiring more insightful, relevant and actionable findings and conclusions.
Probing and testing cause and effect relationships also ensures more robust analysis. In particular, the ‘Manchester Map’ is a useful technique learned in our management consulting days. This involves systematically reviewing findings and then asking ‘so what does this mean?’ or ‘why does this happen?’. It also helps sort and delineate information, and thus helps understand and express findings and conclusions.
Within qualitative research, employing simple numerical scoring (or semi-quantitative) techniques helps give weight to findings. Thus sorting the ‘wheat from the chaff’. We call this quali-quant research. For example, we ask respondents to choose the most appealing comms idea from a gallery. Or to rate a product concept on a scale from ‘will definitely buy’ to ‘will definitely not buy’. This reduces reliance on subjectivity (interpretivism) (7). Equally it adds scientific rigour to qualitative research i.e. objectivity (empiricism, positivism). Thus helping spot differences in meaning and relative customer appeal. In turn, spotlighting key issues and thus opportunities and ‘outliers’ (8) that demand further investigation.
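As an illustration of how such quali-quant scoring can be tallied, here is a minimal sketch in Python (the scale labels and responses are hypothetical, not data from any real study):

```python
from collections import Counter

# Hypothetical purchase-intent ratings for one product concept,
# collected in a quali-quant session on a 5-point scale.
SCALE = ["will definitely not buy", "will probably not buy", "not sure",
         "will probably buy", "will definitely buy"]

responses = ["will definitely buy", "will probably buy", "not sure",
             "will definitely buy", "will probably not buy",
             "will probably buy", "will definitely buy"]

counts = Counter(responses)

# 'Top-box' score: the share choosing the strongest positive answer.
top_box = counts["will definitely buy"] / len(responses)

# Mean score on a 1-5 scale, treating the points as equally spaced.
mean_score = sum(SCALE.index(r) + 1 for r in responses) / len(responses)

print(f"top-box: {top_box:.0%}, mean score: {mean_score:.1f}/5")
```

With qualitative sample sizes such scores are directional rather than statistically significant, but they help compare relative appeal across concepts and flag outliers for further probing.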
1. ESOMAR Research World / ARF (2005)
2. Boston Consulting Group (2009)
3. Does Market Research Need inventing? www.InspectorInsight.com (2014)
4. Denzin, N. Sociological Methods: A Sourcebook. Aldine Transaction (2006)
5. Kincheloe, Joe. L. Berry, Kathleen, Rigour and Complexity in Educational Research (2005)
6. What is Mimetic Theory? www.woodybelangia.com
7. Interpretivism (or antipositivism) is the view that social research should not be subject to the same methods of investigation as the natural world. Gerber, John J., Macionis, Linda M. Sociology (7th Canadian ed.), p. 32 (2010)
8. An ‘outlier’ or outlying observation deviates markedly from other members of the sample in which it occurs. Grubbs, F. E. “Procedures for detecting outlying observations in samples”, Technometrics 11 (1): 1–21 (February 1969)
The qualitative vs quantitative research debate started in the 1970s. It’s all about epistemology (1), a branch of philosophy concerned with the theory of knowledge. Qualitative research is described as ‘interpretivism’ i.e. non-scientific and subjective. Whereas quantitative research is ‘positivism’ i.e. scientific and objective.
But there is an academic argument that the two methods cannot and should not work together.
“The chief worry is that the capitulation to ‘what works’ ignores the incompatibility of the competing positivistic and interpretivist epistemological paradigms that purportedly undergird quantitative and qualitative methods, respectively.” Blah, blah, blah… Prof. Kenneth R. Howe (2)
The blurring of lines between qualitative and quantitative research has gone on for some time. How many times have you attended focus groups and done a quick ‘tally’ of responses to gain some quantitative guidance? Or, within an omnibus, included a few open-ended questions to add a little more colour? Superficial instances perhaps, but evidence of ‘blurring’ nonetheless.
One possible reason the overlap is not fully acknowledged is that many believe the disciplines still run separately. Another is that qualitative and quantitative researchers are ‘defined at birth’, and never the twain shall meet. Indeed, many researchers train in one discipline, and most large research organisations run separate quantitative and qualitative departments.
Nevertheless, from hard-won experience, it is possible to marry both approaches and gain extra benefits. Thus there is room for a new model: a qualitative and quantitative research hybrid. Here are some examples:
Qualitative research discussions often solicit ‘subjective’ answers to questions where it is difficult to discern differences in meaning; for example, between ‘like’ and ‘love’, or ‘great’ and ‘good’. When two people say they ‘like’ something, they may not mean the same thing. Seeking numeric measures using a simple Likert scale (3) better distinguishes the ‘wheat’ from the ‘chaff’.
So rather than asking consumers who ‘likes’ what, asking them who ‘would definitely try or buy’ product ideas clarifies purchasing intent. This is a particularly useful ‘gate’ in a typical NPD process: it helps better assess market potential and marketing implications. When developing new products, this can stop you barking up the wrong tree, and save thousands of hours and pounds!
Quantitative surveys use open-ended questions to explain the numbers. However, in many cases these explain little, because respondents fail to fill in the boxes or write just two or three words. The data is also costly to code and cumbersome to analyse.
However, combined qualitative and quantitative research can assess and improve products and more. For example, in a recent study, respondents tasted and critiqued a number of competitive food products. Research was conducted in a high-traffic location so people could be recruited off the street into a hall. With some support from a moderator, consumers completed a simple survey to assess relative product appeal and brand fit, as well as opportunities for product improvement and the reasons why.
The same techniques can assess service ideas, communications and packaging; for example, at the pack refinement stage, to give a clear read on shelf stand-out and the reasoning behind it. First, co-opt a minimum of 100 consumers to review a mocked-up retail fixture. Then, having them identify and critique the most appealing packs within the visual noise of the fixture provides a numerical assessment of stand-out. Finally, adding a group discussion to deconstruct and reconstruct the pack elements adds understanding and guides improvement.
1. The debate does not have to be about qualitative vs quantitative research as there are also many other types of market research services. Yet each has a different role, application and benefits.
2. Combined qualitative-quantitative research offers the benefits of both qual and quant research methods. So dial either up or down to answer ‘why’ questions as well as gain meaningful numbers. Within this it is also possible to establish quotas for consumer types, and save time and money too. So do you need understanding or numbers? Or both? Choose a creative research agency to help you get the most for your money.
1. What is Epistemology? https://en.wikipedia.org/wiki/Epistemology
2. Howe Kenneth R. PhD – Professor of Philosophy at University of Colorado, Boulder. Against the qualitative-quantitative incompatibility thesis (or dogmas die-hard), Educational Researcher 17(8) 10-16 1988
3. What is a Likert scale? https://en.wikipedia.org/wiki/Likert_scale
Recent OFCOM Research highlighted that 71% of the UK receive 9 nuisance calls a month, and that telephone research is the #4 culprit (1). So has telephone research had its day? At the same time online grows apace. We’ve looked closely at the merits of telephone, online and face-to-face (ftf). So if you commission quantitative research surveys, this article summarises some insights and ideas to help you make the most of your research investment.
Quantitative research costs are sensitive to sample size, ease of reaching an audience or ‘incidence’, the length of survey, mode and complexity of fieldwork and analysis. Compared with online (index =100) fieldwork costs are typically higher for face-to-face (index 350-450) than telephone (index 250-300) due to the greater human time involved. Other costs such as coding for online research, computer aided telephone interviewing (CATI) and computer assisted personal interviewing (CAPI) are similar.
97% of the UK are online, though many online surveys use panels which cover just 5% of the population. There are some geographic gaps, and respondents are more ‘Internet experienced’, so some sample bias is possible. Nearly all homes have access to at least one phone, though telephone databases cover just 60% of UK homes (and even fewer will have agreed to take part in research!). Fixed-line telephone reaches 79% of homes (and proportionately more of the elderly) while mobiles reach 96% (and proportionately more of the young) (1). Face-to-face can reach most places (though with extra travel costs).
Online response depends on the nature of the panel, and how responsive and interested respondents are: expect between 5% and 30%. Response from links on websites or in emails will similarly depend on the nature of the source. Telephone response has fallen over the last decade and is now around 10-15%, while face-to-face response is around 15-20%.
The self-selection nature of online panels means there is a greater risk of respondents only participating in surveys that interest them: so-called ‘avidity’ bias. Typically, online respondents are younger, more familiar with the online world and spend more time in it. They are also more informed, more opinionated and more politically active (2). Panels also contain more early technology adopters, though it remains possible to discern other types on the ‘diffusion of innovation’ spectrum.
Telephone respondents give socially desirable responses more often than face-to-face respondents (3,4). This is particularly the case among those of lower intellectual ability or fewer years of formal education (i.e. C2DEs). Research has also shown that respondents are more comfortable discussing sensitive subjects face-to-face, as they can see, and thus have greater trust in, the interviewer. Conversely, face-to-face interviews conducted in the respondent’s home minimise anonymity, making socially desirable responses more pronounced. Overall, however, interpersonal trust between interviewer and respondent has the greater influence, resulting in more honest responses. Face-to-face shows similar results to online (where there is no interviewer effect). However, some research (5) has observed higher valuation responses to some ‘willingness to pay’ questions (e.g. when there is a perceived ‘civic virtue’ in being seen to add to a common good).
Satisficing (a combination of the words satisfy and suffice) involves short-cutting the response process: settling on an answer that is ‘good enough’ rather than optimal.
Telephone interviews pose an increased cognitive burden: questions are harder to comprehend, reducing the effort respondents make to cooperate, search their memory and process information. Perceived time pressure also fatigues and demotivates. The result is less considered answers: higher acquiescence (answering affirmatively regardless of the question), reduced disclosure, choosing mid-points or only extremes on rating scales, easier-to-defend answers, and ‘no opinion’ responses. This is most evident among those of lower intellectual ability. Research (3,4,5) suggests face-to-face interviewers are better able to judge confusion, waning motivation and distraction (watching TV, eating etc.), and can thus motivate respondents, make the questionnaire easier to understand and improve cooperation on complex tasks. Online respondents, meanwhile, go at their own pace.
1. Useful quantitative research surveys start with a clear market research brief. So decide your objectives, target market, what you need to know and any guidelines. Beyond feasibility and answers to questions, what is the relative importance of cost, speed, ‘reliability’ etc.? Be clear too about expected sample sizes. All this helps identify the best market research service for your needs.
2. Understand the pitfalls in conducting quantitative research. Larger samples give greater reliability: a sample of 1,000 gives a margin of error of roughly +/-3% at the 95% confidence level, against roughly +/-4.4% for a sample of 500 (i.e. if the survey were repeated 100 times, in 95 instances the result would fall within that margin of the true value). And prefer shorter surveys, to cut the risk of satisficing.
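These reliability figures can be sanity-checked with a short calculation. This is a sketch assuming simple random sampling and the worst case (a 50/50 split); panel samples with design effects will have wider intervals:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an observed proportion p from a simple
    random sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(f"n = {n}: +/-{margin_of_error(n) * 100:.1f} percentage points")
```

Doubling the sample from 500 to 1,000 narrows the interval from about +/-4.4 to +/-3.1 points; precision improves only with the square root of the sample size, so gains diminish as samples grow.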
3. Make sure samples are not biased. Nationally representative samples are key to measuring awareness, usage and market share; anything else builds in bias and risks misleading results. So ensure your sample removes any demographic, subject-affinity, usage or other bias.
4. There are even more pitfalls in repeating a quantitative research survey or a brand tracker. So take extra care to make sure the pool of respondents delivers a sufficient and matched sample for each survey wave. Then findings will be comparable.
5. Beware spurious analysis. Remember the Whiskas advert that told us ‘8 out of 10 cats prefer Whiskas’. This eventually changed to ‘8 out of 10 owners that expressed a preference said their cats preferred Whiskas’. However, we still don’t know how many said ‘don’t know’, how many expressed a preference, or the sample size. So in all quantitative research surveys, be clear about the sample size, and make clear what is statistically significant and what is merely directional. Clear and fair analysis gives more useful insights and thus leads to better decision-making!
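To see why the sample size matters for an ‘8 out of 10’ claim, here is a sketch using an exact one-sided binomial test (the numbers are illustrative, not from the actual campaign). It asks how likely such a result would be if owners had no real preference (p = 0.5):

```python
from math import comb

def binom_p_one_sided(successes, n, p=0.5):
    """One-sided exact binomial p-value: the probability of seeing at
    least `successes` out of `n` if the true preference rate were p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# 8 of 10 could arise by chance alone more than 5% of the time...
print(f"8/10:   p = {binom_p_one_sided(8, 10):.3f}")
# ...whereas the same proportion from 100 respondents is overwhelming.
print(f"80/100: p = {binom_p_one_sided(80, 100):.1e}")
```

The same headline proportion can thus be merely directional from a small base but highly significant from a larger one, which is exactly why the base must always be reported.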
1. OFCOM Telephone Nuisance Research (2014).
2. Duffy Bobby, Smith Kate, Terhanian George, Bremer John. Comparing Data from Online and Face-to-face Surveys. International Journal of Market Research Vol 47 Issue 6. (2005)
3. Holbrook Allyson L, Green Melanie C, Krosnick Jon A. Telephone versus Face-to-face interviewing of National Probability Samples with Long Questionnaires. Public Opinion Quarterly, Volume 67:79–125 (2003).
4. Szolnoki G, Hoffman D. Online, face-to-face and telephone surveys – Comparing different sampling methods in wine consumer research. Wine Economics and Policy 2 (2013) 57-66.
5. Lindhjem, Henrik, Navrud, Ståle. Are Internet surveys an alternative to face-to-face interviews in contingent valuation? Ecological Economics 70(9): 1628-1637 (2011).