Great Research Starts Where Traditional Research Stops

A lot of research reports read like a recitation of facts: “X% of respondents reported that they always use the same brand of Y,” “X% of our consumers say that free shipping is a big motivator when purchasing online,” etc. Often, this is about as deep as the report goes, aside from scattered follow-up questions (“And why is Brand Z your most preferred brand?”) and a few slides that try to tie everything in the report together into a coherent story. Although many findings from such reports might be interesting, they fall short of providing the evidence necessary to compel meaningful action.

Here’s a fairly typical example, slightly altered for anonymity, from a report a major vendor did for one of our clients. Respondents were asked whether they used their smartphones while shopping in-store to help decide which product to purchase, and the results were split by whether respondents reported buying our client’s product or another brand’s. The headline finding: purchasers of other brands were noticeably more likely to report using their phones in-store than purchasers of our client’s brand.

While this finding is interesting, it’s hard to know exactly what to do with the information because there are many possible reasons for this result to occur, including:

  1. Whenever a consumer is at the shelf and opens their phone to do research, they tend to be pushed towards competitors’ products, which could happen for any number of reasons.

  2. Something about consumers who purchase competitors’ products also tends to make them use their phone more while shopping. For instance, younger consumers may have different brand preferences and also use their phone more while shopping, even if their phone isn’t affecting their purchase decision.

  3. Reverse causation: something another brand is doing is causing people to use their phones. For instance, the brand may use an unfamiliar phrase that people look up. Note that whatever is prompting the phone use doesn’t have to be effective: even if a majority of people switch brands after using their phones, the ones who don’t will still have disproportionately used their phones during the decision-making process.

  4. Differences in recall, effort, and/or attentiveness between the two groups of respondents. Lower-quality responses tend to have different incidence rates across subgroups and response choices, so correlations can arise simply because two items attract a higher (or lower) share of inattentive, low-effort, or low-recall respondents than other items in the survey. (A small simulation after this list illustrates the effect.)
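
To see how explanation 4 can manufacture a correlation out of nothing, here is a minimal simulation sketch. All rates are hypothetical, chosen only for illustration: phone use and brand choice are generated independently for both attentive and inattentive respondents, but because inattentive respondents answer roughly at random, they over-report a rare behavior (phone use) and under-report a common one (buying the market leader).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Assume ~15% of respondents are inattentive/low-effort (hypothetical rate).
inattentive = rng.random(n) < 0.15

# Attentive respondents: 70% buy the market-leading client brand, 15%
# report in-store phone use, and the two are generated independently.
# Inattentive respondents answer ~50/50 at random on both questions.
buys_client = np.where(inattentive, rng.random(n) < 0.5, rng.random(n) < 0.7)
uses_phone = np.where(inattentive, rng.random(n) < 0.5, rng.random(n) < 0.15)

# Cross-tab: share buying the client brand among phone users vs. non-users.
for label, mask in [("phone users", uses_phone), ("non-users", ~uses_phone)]:
    print(f"{label}: {buys_client[mask].mean():.1%} bought the client brand")
```

Running this prints roughly 63% of reported phone users buying the client brand versus roughly 68% of non-users, even though there is zero true relationship between phone use and brand choice in either respondent group.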

This is a problem because these explanations have vastly different implications for the actions the client should take to increase their sales. However, in this case (and very typically in these types of reports), the report does nothing to try to determine which explanation is correct.


Of course, it would be nice to have a follow-up question: “Why did you use your smartphone before you purchased Brand X?” But it’s unrealistic to attach this type of follow-up to every question in the survey, and even then, it would be hard to come up with an exhaustive yet manageable list of answers for respondents to choose from ahead of time.

However, that doesn’t mean we can’t learn anything useful about why purchasers of other brands use their phones more than purchasers of our client’s brand from questions already included in the survey. Some reasonable questions to start with (sketched in code after the list below) might include:

  1. Are purchasers of the other brand younger? How does brand choice vary by mobile device usage within age groups?

  2. Do purchasers of other brands have more or less experience in the market? (Or do both the most experienced and least experienced tend to opt for other brands?)

  3. Are mobile device users doing more or less research in total (rather than just in-store)?
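
None of this requires new fieldwork; it’s ordinary cross-tabulation of data the survey already collected. As a rough sketch of what that analysis might look like, assuming the responses sit in a pandas DataFrame with hypothetical column names (age_group, used_phone_in_store, bought_client_brand, total_research_sources), questions 1 and 3 reduce to a few grouped summaries:

```python
import pandas as pd

# Hypothetical file and column names; substitute the survey's actual fields.
df = pd.read_csv("survey_responses.csv")

# Question 1: does the phone-use gap persist within age groups, or is age
# confounding the result? Compare brand choice by phone use inside each
# age band rather than overall.
within_age = (
    df.groupby(["age_group", "used_phone_in_store"])["bought_client_brand"]
      .mean()
      .unstack("used_phone_in_store")
)
print(within_age)

# Question 3: are in-store phone users doing more research in total?
# Assumes a column counting the research sources each respondent consulted.
print(df.groupby("used_phone_in_store")["total_research_sources"].mean())
```

If the phone-use gap shrinks or vanishes within age bands, that points towards explanation 2 (a confounder); if it persists everywhere, explanations 1, 3, and 4 remain on the table.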


The information needed to answer these questions is typically already included in surveys. We often don’t need to add clutter and extra questions to dig into interesting findings that pop out of a survey; we just need to make better use of the information we’re already capturing. Answers to the questions posed above would inform our assessment of which of the four possible explanations is most likely correct (or even alert us to an alternative explanation we hadn’t yet considered). But too often, research reports stop at merely “interesting” and skip the extra analysis that would provide enough clarity to act confidently.


PS

For those interested in what was driving the results here, our assessment was that differences in recall, effort, and/or attentiveness were driving most of the difference. The client’s brand is the market leader in the product space by a wide margin. Inattentive or low-quality respondents were more likely to report using a mobile device while shopping (probably exhibiting screen-in behavior) and less likely to report purchasing the client’s brand. This difference in purchasing showed up not just with use of a mobile device but with any relatively rare shopping behavior. So the result turned out not to be as interesting as it first appeared.

Cody Brown

Cody lives in Dallas, TX with his wife, two kids, and dog. He has spent his career keeping up to date with the academic literature on research and applying the best possible methods to the complexities organizations face. By constantly putting in the ‘shoe leather’ that great research requires, he’s delivered impactful results with the clarity that makes them stand as a foundation for future insights. Cody has helped mentor top corporate researchers and often gives his time to help others by offering advice, teaching research principles, or accepting invitations to speak at conferences. Cody can be reached at cody@goiag.com
