
Demystifying the art of giving user research feedback

A comprehensive playbook on how to review user research plans, sessions, and reports (including your own)

--

Over the past few years, I’ve spent quite a bit of my time giving other researchers feedback on their work. In fact, if you were to ask my lead, he’d tell you I’ve spent entirely too much time reviewing other people’s studies. Every once in a while though, he’d nudge me to think about offloading some of that work onto others on my team. To pass the torch, in a way, by teaching someone else those skills and moving on to new challenges myself.

Though I heard (and agreed with) what he said, I didn’t act on his suggestion. True enough, the more time I spent down in the weeds of other people’s research, the less time I had to pull back, see the bigger picture, and tie all their stories together. And the more time I spent reviewing work, the more opportunities I was taking away from others on my team who might have wanted to learn how to do it themselves. But the problem was that I didn’t know how to teach others something I thought came innately to me. I’d been looking at other researchers’ work for what felt like a very long time, and therefore thought I had developed some sort of instinct for separating good studies from bad ones. “It just comes to me,” I’d respond, thinking it probably wasn’t possible to abstract my way of thinking into a teachable skill.

But, as the requests for feedback continued to metaphorically pile up on my desk, my approach seemed more and more unsustainable: I knew something needed to change. And so I began to pay more attention to what was actually going through my head as I reviewed other people’s work, attempting to break down my own process. Eventually, it dawned on me that the answer was far simpler than I’d imagined: questions. All my brain was doing whenever I looked at research documents or sat in on sessions was looping through a series of questions. In a way, it seemed predictable (and fitting) for a researcher to structure their thoughts entirely in questions, and once I began writing them down, I realized that this skill was easily transferable.

I know now that being able to give good feedback isn’t fully contingent on having years’ worth of exposure to different types of research projects. It’s not some sort of mystical or innate ability that can’t be abstracted or taught. And it’s not something that should fall solely on one person’s shoulders, but rather a shared responsibility that can help entire teams level up their skills together. And so, we put it all into this playbook.

Today, it’s being used at Shopify, not only by research managers looking to give their reports better feedback, but also by the reports themselves, as they look for opportunities to mentor one another. And, if you look closely enough, you’ll see it also serves as a checklist of sorts: questions researchers should be asking themselves about their own work before shipping it.

As we align those who produce the work and those who review it more closely on a common set of questions, we can tighten the feedback loop and collectively shorten the time to get quality research out the door.

How to use this playbook

It’s pretty simple, really. We’ve broken things down into giving feedback on research plans, sessions, and reports. As you review other people’s work, try to see how confident you are in answering questions from the appropriate list below.

An important thing to note is that, while thorough, each list is not meant to be exhaustive. Nor do we expect most artifacts that come your way to satisfy every question on a list. Rather, each set of questions is meant to give you a sense of what to think about and look for when reviewing others’ work, as well as your own.

  • Giving feedback on research plans
  • Giving feedback on research sessions
  • Giving feedback on research findings

Questions to consider when giving feedback on research plans

On the rationale behind the work:

  • Why is this research being done?
  • Did the researcher offer a strong rationale?
  • Have research or data explorations been done on this topic within your organization? What about outside your organization?
  • If there is already existing research, why is this study necessary? What will we learn that hasn’t already been uncovered with past research?
  • Which project(s)/initiatives benefit from this research being done?
  • Is this research actionable? What decisions are the team trying to make based on the research? How will the team’s decisions change depending on what’s learned?

On the research questions:

  • Are the research questions well-formulated? Do they attempt to understand or uncover a process/need/challenge (e.g. “What are the biggest challenges people experience when it comes to doing X?”)? Or are they questions the team should be asking itself after the research has been completed (e.g. “What can we do to improve X?”)?
  • Do the questions include leading/non-neutral language that might bias the research?
  • Are the research questions well-scoped? As written, can they realistically be answered through research within the available timeframe?
  • When are the insights needed? Is there enough time to do the study well?
  • Are the research questions prioritized so that less impactful or less urgent ones can be scoped out if needed?

On the research method:

  • Why and how was the research method chosen?
  • To what extent is the method suitable to the research question at hand?
  • What other methods could have been considered, and how well would they have answered the research questions?
  • What advantages does the chosen method offer? What disadvantages or biases might it present?

On selecting participants:

  • How were the participants chosen?
  • Have all types of users who will be impacted been considered?
  • What participant characteristics did the researcher specify, and why?
  • To what extent have baseline parameters that define participants been considered?
  • How relevant are the research questions to the group of chosen participants?
  • How diverse should the participants be?
  • Should a range of expertise be considered (e.g. novice vs. advanced users), and why?
  • Is a screener question needed to further qualify potential participants?
  • Does the screener complement what the researcher might learn about participants through data already available to them?
  • What biases might this group present?
  • Has the researcher decided on what “thank you” gift might be appropriate for their participants?
  • If there are any participant groups with unique recruitment considerations, is there a plan for how to handle the outreach?

On designing participant tasks:

  • To what extent do the tasks involved help address the research questions at hand?
  • What will be learned from watching participants perform those tasks?
  • To what extent are those tasks ecologically valid (i.e. mimic something the participant would actually do)?
  • Are there any potential biases associated with the tasks?

On the interview guide:

  • Are participants given an introduction that sets the right expectations for the session without biasing them?
  • Are all considerations when it comes to ethics, consent, and compensation discussed with the participant at the start of the session?
  • How well does the interview guide reflect the research questions?
  • Are all questions open, neutral, and non-leading? If a question is not, is there a good reason behind that?
  • Does the length of the interview guide seem reasonable? Can it be completed within the allotted time?

Questions to consider when giving feedback on research sessions

On introducing the session:

  • How clearly did the researcher go over consent/NDA considerations at the start of the session?
  • Did they address any concerns the participant had?
  • Did they go over the details of how the participant would receive their compensation?
  • How clearly did they introduce the format and purpose of the research session?
  • Were they careful not to introduce any information that might bias participant attitudes or responses?
  • To what extent did they build rapport with the participant?
  • Did they maintain a professional attitude while doing so?
  • Were they able to warm up the session without necessarily posing as a friend to the participant?

On the way they spoke to participants:

  • Did they ask open, neutral, non-leading questions whenever possible?
  • Did they make each question as clear and concise as possible?
  • Did they embrace silence when done posing the question, to give the participant room to answer?
  • Did they embrace silence when the participant was done answering, to give them room to add any further thoughts?
  • Did they keep a neutral tone and facial expression throughout?
  • Did they avoid nodding unnecessarily?
  • Did they avoid using filler words (e.g. ‘um’, ‘uh’, ‘like’, ‘you know’)?

On clarity of language:

  • How did the participant react to each question being posed?
  • Did they struggle to understand any particular questions? Did they ask for clarifications?
  • Did they react with surprise at any particular questions?
  • Did they seem to think the answer was obvious, or weren’t sure why the question was being asked?
  • Were they unsure how to answer any particular questions?
  • Did they answer in a manner that indicated they misunderstood the question?
  • Did the researcher probe sufficiently behind each answer?

On the flow of conversation:

  • Was there a natural progression to the conversation?
  • Did the flow/order of the questions make sense?
  • To what extent was the researcher able to follow their interview guide?
  • How did they handle participants going on tangents?

On completing tasks:

  • Did they let participants explore tasks or designs without leading them?
  • Did they refrain from helping them as much as possible?
  • How did they handle participants asking their own questions?

On wrapping up the session:

  • Did the researcher treat the participant in a courteous and respectful manner throughout?
  • Did they go over any relevant compensation details, final notes, or next steps with the participant at the end of the session?

Questions to consider when giving feedback on research findings

On providing necessary context:

  • Did the researcher include a clear rationale for why this work was carried out?
  • Is there a brief overview of the study (so that readers don’t have to go to another document just to understand what was done and why)?
  • Have links to relevant collateral been provided (e.g. research plan, notes and analysis, etc.)?

On addressing research questions:

  • Have answers to the original research questions been explicitly provided? Why/why not?
  • Have root problems been uncovered and described clearly? Why/why not?
  • How strong or weak are the themes and patterns in the sample? Is there a need to expand the sample size to develop clearer themes and patterns?

On each individual finding or insight:

  • Does this point actually make sense?
  • Does it surprise you? If so, why?
  • Is it an anecdote, or a fully synthesized insight?
  • If an anecdote, has it been carefully positioned as such? And why is it important to mention?
  • If an insight, is there sufficient evidence to back it up? Can you determine how the analysis process might have led to that point being made?
  • Are there any findings in the document that are in conflict with one another?
  • Are there any findings that are in conflict with findings from other research/data/support explorations?
  • If so, have those conflicts been sufficiently addressed or explored?
  • Are the findings framed in a way that minimizes misinterpretation or misuse of the information?
  • Have there been any attempts to triangulate qualitative findings with previously uncovered insights?
  • How well does the researcher make the connection between the “what” and the “so what”? Why does each finding matter, and what can be done as a result?

On sensitive information:

  • Has personally identifiable information (PII) for participants been removed from the findings?
  • If any PII has been featured in the findings (e.g. participant names, faces, videos, etc.), is there a clear rationale and consent for doing so?

On considering one’s audience:

  • Who is the audience for this report (e.g. the project team, other researchers, the entire organization)?
  • How well is the content suited for that audience? To what extent would they be able to understand it? To what extent would they care about it?

On the structure of an artifact:

  • Are the most important points made early enough in the document? Will a reader who drops out early still get something meaningful out of it?
  • Is the length of the document commensurate with the breadth of the study? Think: a usability study report vs. findings from a generative/exploratory study.
  • Does any of the content belong in a separate document or an appendix?

On next steps:

  • Are there clear actions or next steps to be taken as a result of this research?
  • Are those actions or next steps clearly written down? Can anyone referring to the document later easily find them?

As mentioned earlier, this list is by no means exhaustive. It’s better thought of as a living, breathing document — a starting point to expand on and tweak according to your team’s specific needs.

I hope this playbook can play even a small part in giving researchers clearer expectations of what a reviewer might look for in their work. More importantly, however, I hope it will give researchers more confidence in reviewing each other’s work, taking on mentorship opportunities, and feeling ownership over raising the bar of their collective output.

Big shoutouts to Jen Chow for her massive contribution to this playbook, and to Meghan Yip, Sharon Moorhouse, and Brook Jibb for supporting me in taking it to the finish line.
