Illustration of a pie chart with 5 pieces; one of the pieces is a cursor. The illustration uses a pink and purple color palette.
Illustration by Alisha Giroux.

How to use mixed method research to drive product decisions

A research framework for using data and cross-validation to reduce bias and drive action

Raschin Fatemi
Published in Shopify UX · Jun 9, 2022 · 5 min read


As designers and researchers, understanding our users is core to building the right things and guiding our design direction. But knowing which research direction to take is a challenge in itself. Starting with user feedback is an obvious first step. But how do you gather enough feedback to paint an accurate picture? How can you avoid high-cost interviews, reduce feedback bias, and select the most appropriate research method?

This is where a mixed-method research approach can be impactful.

A few months ago, Shopify’s Customer team shipped customer segments to help merchants engage with their customers more effectively. This new product replaced the existing Customers page with more advanced, robust, and flexible capabilities. As a new design lead on the team, I was eager to understand how users felt about the product and what our design direction should be going forward.

Our team used mixed methods to design a research framework under these principles:

  • gathering more data ≠ adding cost
  • reduce feedback bias
  • drive clear action

What is mixed-method research?

Mixed-method research is about building a community around user research and cross-functional collaboration between UX, data science, support, and product to create a shared understanding of user behavior. This approach offers complementary insights by combining qualitative and quantitative feedback data. The foundations of mixed-method research have been around for decades, with two main components: data richness and cross-validation.

  1. Data richness: A rich understanding of user experiences comes from looking at user feedback from multiple data sources. Rather than running different research methods in silos, mixing methods creates a more accurate picture of the experience and mitigates blind spots.
  2. Cross-validation: Mixed methods show that attitudes and behaviors sometimes diverge, and that we can’t measure a product’s success by looking at attitudinal or behavioral data alone.

Comparing results from different methods gives us unique, complementary signals for understanding behavioral patterns and the perceived experience.

An illustration of the feedback pipeline outlined in this article
Gather diverse data from multiple data sources to find actionable insights.

You have more data than you think:

Our research framework offers diverse data gathering by employing existing feedback sources:

  1. Support calls: A natural place to gather direct feedback is the support team. For example, the Shopify support team documents all merchant frustrations and makes them accessible to everyone. We looked at support calls during the first month after the release to find the top merchant frustrations on the Customers page.
  2. Behavioral logs: Partnering with our data science team, we looked at quantitative data and experimentation results to understand the overarching impact of the customer segments release on merchant behavior.
  3. Internal feedback: We centralized internal stakeholder feedback alongside the rest of the feedback data for cross-validation.

At the same time, we conducted three new research initiatives with the same group of merchants.

  1. Impression survey: We gathered indirect, attitudinal data through a survey with two simple questions: 1) rate your experience, and 2) explain your rating in plain text. We gathered more than 100 responses, enough to be reliable given our survey population size.
  2. Session recordings: We observed 55 session recordings in FullStory to find actual navigational patterns and which UX pain points steered merchants away from optimal paths.
  3. 1:1 interviews: We conducted 5 remote interviews to get direct feedback and understand the merchant mental model.
Illustration of a four-quadrant chart. The x axis has why on the left side and what on the right side. The y axis has behavioral at the top and attitudinal at the bottom. The types of data are plotted on the graph.
Each method provides a different perspective in terms of the questions they answer.
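The survey analysis step above can be sketched as a small script. The responses and the keyword-to-theme mapping below are made up for illustration; in practice, themes emerge from manually coding the open-text answers:

```python
from collections import Counter

# Hypothetical survey rows: (rating 1-5, open-text comment).
# Real responses would come from the survey tool's export.
responses = [
    (2, "I can't find the search bar anymore"),
    (4, "Segments are powerful but filters are confusing"),
    (1, "Where did search go?"),
    (5, "Love the new flexibility"),
    (3, "Too much scrolling to find a customer"),
]

# Keyword -> theme mapping is an assumption for illustration.
themes = {"search": "missing search", "filter": "filter UX", "scroll": "navigation"}

theme_counts = Counter()
for rating, comment in responses:
    for keyword, theme in themes.items():
        if keyword in comment.lower():
            theme_counts[theme] += 1

avg_rating = sum(r for r, _ in responses) / len(responses)
print(f"average rating: {avg_rating:.1f}")
print(theme_counts.most_common())  # most frequent themes first
```

Pairing the numeric rating with a coded theme is what lets the attitudinal data be cross-validated against other channels later.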

Start guide: run your mixed-methods research

  1. Preparation:
  • Determine your research focus: what is prompting you to consider doing user research? Clearly define the research goal and scope.
  • Define your research questions and choose research methods: which method elicits information that will allow you to answer your research question best?
  • Map research questions to research methods: which method draws out different perspectives? Which will make effective use of time and available resources?

  2. Implementation:

  • Build the team: there is great value in being part of a diverse team and distributing responsibilities among team members. Having different skill sets and perspectives within the team reduces bias.
  • Conduct research: gather data through each research method.
  • Document in a structured way: create data tables or templates to simplify sorting and analyzing data.
  • Centralize feedback: organize all data in one place and make it easily accessible for everyone.
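The "centralize feedback" step could look like the following minimal sketch. The `FeedbackItem` fields and the `sources_for` helper are illustrative, not an actual Shopify schema:

```python
from dataclasses import dataclass

# One centralized feedback record; field names are assumptions
# chosen for illustration.
@dataclass
class FeedbackItem:
    source: str   # "support", "survey", "interview", "session", "internal"
    theme: str    # coded theme, e.g. "missing search"
    detail: str   # raw note or quote

feedback = [
    FeedbackItem("support", "missing search", "Merchant asked where search went"),
    FeedbackItem("survey", "missing search", "Can't find the search bar"),
    FeedbackItem("session", "navigation", "Paginated 12 pages without a click"),
]

# Cross-validation: a theme reported by multiple independent
# channels is a stronger signal than one channel alone.
def sources_for(theme: str) -> set[str]:
    return {item.source for item in feedback if item.theme == theme}

print(sources_for("missing search"))  # theme confirmed by two channels
```

Keeping every channel in one table with a shared `theme` field is what makes the comparison in the next step mechanical rather than ad hoc.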

  3. Processing and validation:

  • Analyze data: Compare, contrast, and combine data findings from various channels.
  • Define actions: Form hypotheses, gather stakeholders’ feedback and define improvements.
  • Work with product & engineering to prioritize improvements.
  • Measure the success of new improvements.

What did we learn?

Before the customer segments launch, we foresaw there would be a learning curve with the new, more advanced segment-creation experience. However, the complementary results showed us that the redesign had changed the page’s structure and removed some small elements that many loyal users relied on, which upset people about the overall change.

  • Our survey used open text to let users tell us their problems and dislikes. The text analysis yielded a breakdown of user-problem categories in which small UX changes caused considerable frustration, which we validated against support calls.
  • In session recordings, we observed how much time merchants spent scrolling and paginating without finding what they were looking for. We also discovered frustrating interactions with specific filters and saw how merchants validate their results.
  • These findings also correlated with the behavioral data, where we saw a considerable drop-off among merchants who started interacting with customer segments but didn’t end the session successfully.
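A drop-off like the one described above can be computed directly from funnel counts in the behavioral logs. The stage names and numbers below are hypothetical:

```python
# Hypothetical funnel counts from behavioral logs; the stages
# and volumes are made up for illustration.
funnel = {
    "opened Customers page": 1000,
    "started creating a segment": 420,
    "saved a segment": 150,
}

stages = list(funnel.items())
dropoffs = []
for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    # Fraction of users lost between consecutive stages.
    dropoffs.append((name, 1 - n / prev_n))

for name, drop in dropoffs:
    print(f"{name}: {drop:.0%} drop-off from previous step")
```

A large drop between "started" and "saved" is exactly the kind of quantitative signal that the session recordings and survey comments can then explain qualitatively.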

Looking at all the data together made us confident about what our focus and immediate improvements should be after the release. We took these findings directly to product and engineering to focus on specific things and quickly launch impactful changes. A follow-up study after the improvement release showed people were more satisfied.

A few examples of improvements we shipped:

  1. Small details matter: We learned that the lack of small components like a search bar caused the most frustration for merchants. We made this a top priority and added customer search to the Customers page.
  2. Increase affordance: We simplified the segment creation workflow based on the law of proximity. For example, we moved the save CTA closer to where the merchant creates the segment to increase affordance. The A/B test results showed the number of merchants saving segments increased by 180% compared to the previous experience.
  3. Self-serve filter creation: We learned merchants who adopted segmentation tools need more filters, so we leveraged customer metafields as segmentation filters, enabling merchants to create filters on their own and customize segments further.
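For clarity on the reported number: an increase *of* 180% means 2.8x the baseline, not 1.8x. A quick check with hypothetical counts:

```python
def relative_lift(before: float, after: float) -> float:
    """Relative increase: (after - before) / before."""
    return (after - before) / before

# Hypothetical counts chosen for illustration: if 100 merchants
# saved segments before the change, a 180% increase means 280
# merchants save afterwards, i.e. 2.8x the original volume.
assert relative_lift(100, 280) == 1.8
```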

Conclusion

This mixed-method approach helped us find a specific, actionable case, focusing on particular user reactions and being more prescriptive than behavioral data alone would allow. The method could diagnose users’ pain points, inform prioritization, and assess whether the redesign was more effective.

It also helps stakeholders understand the big picture of user research. These are not methods used in isolation: mixed-method user research is iterative, drives awareness of people’s needs, and forces us to readjust to improve their experience.
