
Inside UXR
Explore the practice of user experience research with Drew and Joe, one question at a time.
Send your questions and feedback to insideUXR@gmail.com
35. How do I pick the best research method for my study?
In this episode of Inside UXR, Drew and Joe tackle one of the biggest challenges in research: choosing the right method for the job. They break down key factors like research objectives, timelines, budget constraints, and level of certainty to help you make informed decisions. Through real-world scenarios, they walk through how to balance ideal methods with practical limitations. Tune in to sharpen your skills in selecting the best approach for any research question!
Send your questions to InsideUXR@gmail.com
Visit us on LinkedIn, or our website at www.insideUXR.com
Credits:
Art by Kamran Hanif
Theme music by Nearbysound
Voiceover by Anna V
35. How do I pick the best research method for my study?
Joe Marcantano: Drew, how are you today?
Drew Freeman: I am doing well. How are you, Joe?
Joe Marcantano: I'm doing well. After an aborted start, we are here recording a new set of episodes.
Drew Freeman: You know, one of these days we'll get on our game and actually put out, like, a little bloopers mini episode.
Joe Marcantano: There's part of me that would really enjoy that, and part of me that's thinking about the editing involved and saying, no, thanks.
Drew Freeman: It's much easier for me to say than it is for you.
Joe Marcantano: Exactly.
Drew Freeman: All right, so what are you bringing us today?
Joe Marcantano: So I had a listener shoot me a message on LinkedIn. They were thankful for finding the podcast and really enjoying it, and they had...
Drew Freeman: A question? Hit me with it.
Joe Marcantano: So this person said that they're a mid-level researcher, and one thing that they really struggle with is figuring out the optimal research method based on what they hear from stakeholders. So they'd love to hear maybe some scenarios, or some sort of framework, on how you pick the best method in any given situation.
Drew Freeman: This is a great question and I'm really glad that they're thinking about this. It shows that they understand what's really important about kicking a project off.
Joe Marcantano: Yeah. In my head, as I think about this, there are so many different things that come in and play into how you pick the method you're going to use. Like, there's part of me that's just like, oh, it's easy, you just pick the research questions and go. But there's so much more that goes into it.
Drew Freeman: There is no if A then B kind of decision making process that you can use here. Like you said, there are a lot more variables in play.
Joe Marcantano: Yeah, it's more like if A from this column, B from that column, and A from column three, then maybe this.
Drew Freeman: With that being said, I think we can both agree that the column that you need to weight most heavily is what are we trying to learn? What are our objectives?
Joe Marcantano: Definitely. At the end of the day, what are you trying to learn and what are you going to do with it is certainly the heaviest column, but the combination of the other columns could certainly outweigh it.
Drew Freeman: Yes, but even if the information from columns B, C, and D outweighs column A, you still need to make sure that you are answering the questions that you and your stakeholders have.
Joe Marcantano: Oh, we're going to have, like, a yes-but-off here.
Drew Freeman: Okay.
Joe Marcantano: Yes, but. Let's say they want to learn a specific piece of information, and the best method, the right method, is, you know, IDIs, whatever. But your stakeholders need answers in three days, five days, six days, a time frame too short to do IDIs. You need to find another way to get that answer, even if it's at a reduced level of certainty, because your stakeholders are not going to wait. They are going to go. And it's either going with no information or going with information that might be a little less certain.
Drew Freeman: I am in total agreement. I think what we have here is that we had slightly different definitions for what column A was.
Joe Marcantano: Gotcha.
Drew Freeman: My definition for column A was just, like, a yes or no: does it get over the bar of, can we use this data to help us answer the question?
Joe Marcantano: Okay, fair enough. Yes.
Drew Freeman: Whereas it sounds like your column A was more of a: what's the ideal way to answer this question?
Joe Marcantano: Yes. So let's start with this. Why don't we talk about the broad, we've been calling them columns, but the broad categories of information you need to consider when you're talking to your stakeholders, before you make a decision on a method.
Drew Freeman: Yeah. So for me, I will always start with, what are we trying to learn? What is our research objective? That will always be the place that I center around. So for me, that first category of information is: what are we trying to learn, and what methods can help me get that answer? There, you get a lot of things. You know, a very common question that we might be trying to learn is,
00:05:00
Drew Freeman: can our users complete tasks X, Y, and Z within the product? Great, that's a great time for usability testing. But there are other methods that you can use to try to help answer that question, too.
Joe Marcantano: Yeah, I frame it marginally differently. I frame it as: what are the decisions that we're trying to make, and what information helps us make those decisions? But it's essentially the same thing. It's basically, what's the point of the study? What are we trying to figure out? What are we trying to learn?
Drew Freeman: So in my head, when I'm thinking about this, and I've actually been part of writing this down and creating documents with this kind of thinking, it's basically a table. On the left side, the first piece of information is: what are we trying to learn? And you write down common questions, like, can users complete a task within our system? Where do users expect to find this button? How do users think about this feature? Those kinds of very common research questions. The next bucket is then two or three research methods that can help you answer those questions. Very common ones: obviously, usability testing can help you answer, can users complete this task? IDIs will help you answer things like, how do users think about whatever? But you usually have two or three per research question. And then the third thing that I think about is what amount of time or effort is needed to run that kind of study.
Joe Marcantano: The thing that I think about slightly differently, and I think this is just a different way to say what you said, especially when we're talking about what are we trying to learn, is: am I trying to learn a what, or a why?
Drew Freeman: Yeah.
Joe Marcantano: And that really helps me get directionally situated on what type of method to use. If I'm trying to learn a what, there are obviously exceptions, but at that point I'm defaulting to unmoderated, like quant, you know, something that isn't super time intensive on my end. If I'm trying to deeply understand why or how, again, exceptions, but I am defaulting to moderated. And that's one of the first steps I take: am I trying to learn a what or a why? And that directionally starts pointing me that way.
Drew Freeman: Yeah, I think for me that's just kind of baked into that first piece of information, which is what are we trying to learn? What is our research question? But you're right, that's a really important thing to think explicitly about.
Joe Marcantano: Yeah. And then the next thing that you brought up, and I'm glad you did, is timeline. That, to me, is probably the second most important thing when deciding on a method. If I don't have six weeks, I'm probably not running a diary study.
Drew Freeman: Right. That would make no sense. You absolutely have to think about how much time do I have? Will I actually be able to complete this method? Because if I don't have enough time to complete the method, it would be better if I didn't actually do it at all.
Joe Marcantano: Yeah. And at that point you need to decide: is it better for me to run a mini 10-day diary study, or should I just get four or five IDIs? You've got to make a judgment call there. Is a lower-quality run of the ideal method better than a higher-quality run of an imperfect method?
Drew Freeman: This makes me think that another category we haven't talked about yet, but should, is how risky is this area? How confident do we need to be in our decision? Is this something where, if we get it wrong, that might be a little bit of an annoyance, but we can easily change it later? Or is this a go/no-go decision, and we need to be as confident as we possibly can?
Joe Marcantano: For sure. I call it level of certainty. Like, what's the level of certainty you want? And actually, I'll share the framework I use, but I think that at whatever company you're at, you should build a framework. I've built mine off of my law enforcement background. I have proof beyond a reasonable doubt, which is, you know, we are 95% sure that this is the case.
00:10:00
Joe Marcantano: I have preponderance of the evidence, which is, I am 51% sure. And then I have, we're just not sure, which could be anything below 50%. That's how, when I'm writing up findings, I will put a level of certainty next to them. Generally, I use stoplights: red, yellow, green. And I go from there. By using a framework, as the stakeholders get to know me, they'll know that when I put a yellow there, this is what I mean, this is where I'm going with this.
Drew Freeman: That is a really good idea. That's not something that I do. I'm a little bit more loose and fluid in that conversation, but that's a really good idea.
Joe Marcantano: Yeah. And, like, everyone's welcome to steal my framework, but I would encourage you to create your own. You know, using the court terminology plays into my background a little, and so it makes it a little more sticky with stakeholders. Come up with one that works with your background that'll help make it a little sticky.
Drew Freeman: All right, Joe, are there any other big categories that you think about?
Joe Marcantano: There are. Or I should say, there is. This is one that, if you are working at a bigger company, if you're at an agency, if you work for one of the FAANG companies, you're probably not worried about. But it's budget. If you are working at a smaller company, paying incentives can be painful. Paying for a subscription to a user-testing tool, or whatever tool you're using, that can be painful, and that can really squeeze your budget. You know, I'm fairly active on the UXR subreddit, and every once in a while I'll see somebody post saying, I have no budget, I've been told I won't get any budget. How can I do this study?
Drew Freeman: That's a really good point, and I'm glad you brought it up. I guess I was just wrapping time and budget into that one category. But you're right, they are actually two separate things, so I'm glad that you brought that up specifically.
Joe Marcantano: Yeah, they're definitely both resources, and it is really easy to think of them together. I do it a lot too. But that's because I work at a bigger company where paying for the incentive, frankly, is a non-issue for us. On our end, it's more the time that is the issue.
Drew Freeman: Okay, so I think we've covered kind of what our listener was asking about and how we think about these decisions at a high level or an abstract level. But I think part of what our listener was looking for and asking for was more detailed, more specific information on how to do this kind of thing. Do you have any scenarios or examples that you can think of?
Joe Marcantano: Yeah, I do feel a little bad, because I got the same impression. That's what they wanted: kind of a framework, right? If this, then this. And we just spent the first ten minutes talking about how that's not really how it works.
Drew Freeman: Yeah, unfortunately, I don't think that exists. I think that's one of the places where researchers really make their money: using your critical thinking and your judgment to help your stakeholders get to a workable decision. Not even the best decision, but a workable decision.
Joe Marcantano: Yeah. And I hesitate to use an example that I think might be specific to this person's company, just because I don't know the resourcing, the timelines, the model that they're using. Are they a build-fast-and-break-it-then-do-the-research-later shop, or are they more methodical and do all the research first? I don't want to make that kind of assumption about this person's employer.
Drew Freeman: So instead of doing that, let's maybe do a little bit of a mini role play. I'll be a stakeholder coming to you, the researcher, with a problem, and we can have a really brief conversation about how we might get to a good place. Sound good?
Joe Marcantano: Yeah, yeah, that sounds good.
Drew Freeman: Okay, so for this, I'll be a product manager of a website that books travel.
Joe Marcantano: Okay.
Drew Freeman: So I come to you as a researcher and say, hey, Joe, in the last 30 days, the number of people who are visiting our website but then not booking anything has really shot up. So the number of people booking has gone down, but the number of people visiting the website is staying roughly the same. How do we fix that?
Joe Marcantano: So a couple of things that I'm thinking right off the bat, before I even start the conversation. The stakeholder already knows what is happening: the bookings are down.
00:15:00
Joe Marcantano: We've already got the analytics that show that. The real question here is they wanna fix it, and in order to fix it, we have to know why. So in my head, I'm already in a space of: this is likely going to be some sort of moderated thing. And the first thing I'll say to a stakeholder who brings me this kind of thing, right off the bat, is: what's our timeline for a fix? How much time are we thinking before you are going to run with this? When do you need to have this fixed by?
Drew Freeman: So obviously we want to get this fixed as soon as we can, but we want to make sure that we do it right the first time. How much time do you think you need?
Joe Marcantano: Well, it's gonna depend on a couple of things. The first is, do you have a hypothesis already? Do you already have a theory as to why things are going wrong?
Drew Freeman: Not really. We did do a little UI refresh last month, so it probably has something to do with that, but I don't know what in particular might have caused this.
Joe Marcantano: Let's talk about that refresh for a second. Was this a cosmetic facelift or was this more of a change in the actual architecture of the site?
Drew Freeman: No, this was meant to make our website look a little bit more modern, feel a little bit more clean. It wasn't meant to add or remove any functionality.
Joe Marcantano: Okay, so an aside here. He's told me that this is really a cosmetic facelift; this didn't affect the architecture. So I'm already ruling out card sorts and tree tests in my head. We're not testing the categories, because those didn't change; that wasn't the variable that changed. So now I'm leaning towards some sort of walkthrough, some sort of moderated, hey, try to do this and see what happens. Because there was something that changed on the site, I am leaning in my head towards: this is likely a usability issue, not a competitor came in and undercut us by 20% on price. If there had been no change, I might be thinking something like that. But because we've had a change, I'm now thinking that something about the refresh has made the site less usable.
Drew Freeman: Really good aside, walking through your thought process. I'm thinking very similar things. The only place I might think slightly differently than you is I might be thinking about running a really quick unmoderated usability test while I'm planning for a moderated test, to see if I can pin down what changed, what happened, why are people having this problem. Or, I should rephrase that: what problem are people having, before I actually get to my moderated sessions?
Joe Marcantano: For sure. If you are working someplace where resourcing isn't an issue, that might be a great first step, because especially if you're using one of the bigger sites with one of the bigger panels, you can get directional info with, you know, 10 participants in a couple of hours.
Drew Freeman: And I can do that while I'm planning a bigger, more intensive study. So I'm not necessarily elongating the timeline, and it might even help me better target my moderated sessions as well.
Joe Marcantano: Exactly. So now we're saying, in a perfect world with unlimited resourcing, that's what we do. If you work someplace that monitors that budget a little more tightly, or doesn't have that budget, or maybe you just know your stakeholders and know that that's not going to be valuable to them, jump right to the moderated. But now, based on the conversation we've had, you've told me that getting it right is important. I am already thinking what will likely be my first step in this: six to eight participants. I'm going to give them the journey to book a trip and see what happens. See if they're getting lost somewhere, if they're not understanding something, is there a piece of information missing, can they not find the submit button, whatever. So that's what I'm thinking now: I will recruit some participants who fit our target market, maybe they even bought something similar in the last three to five months from a competitor, and I will have them walk me through a journey and see where they get stuck.
Drew Freeman: I think that's exactly spot on. The only thing that I would add is that I often like to give my stakeholder multiple different options, or packages, that they could choose from. Sometimes you market it as, this is the best option, this is a better
00:20:00
Drew Freeman: option, this is a good option. Sometimes you just market it as, okay, I've got three different options: one, two, and three, whatever. But whichever direction you go, it's really important to make it as clear as possible to your stakeholder that this option comes with the highest degree of certainty, this option comes with a little bit less, and this option comes with the least amount of certainty, but will also be the fastest and cost the least. Whatever that mix of cost and certainty is.
Joe Marcantano: Yeah. If you are in-house, probably the two biggest concerns are going to be level of certainty and time. If you are working at an agency, it's probably going to be some mix of all three, level of certainty, time, and budget, that you're presenting your options on.
Drew Freeman: Is there anything else that you want to cover with this scenario? Otherwise, I've got one more very common scenario that I think we could go over.
Joe Marcantano: The only thing I would say is that for this scenario, the solution that we've talked about here is only to identify the problem. That is not research for testing the fix; that is a separate phase. And, you know, depending on the urgency of the fix and the level of certainty you get, you may need to test the fix before you launch it. If the fix turns out to be something simple, like they couldn't find the submit button and we think that moving it to this location will be much better, and it's a low-risk fix, then go for it. But if your research finds something else, then maybe you need to test the solution before that gets deployed as well.
Drew Freeman: Well, and best practice would be to always test the fix, but best practice isn't the world that we always live in. We're not always able to do that 100%.
Joe Marcantano: Yeah. So you said you've got one more.
Drew Freeman: Yeah. So I'm thinking this one comes from a designer. The designer comes to you and says, hey, my boss just assigned me this project. Our competitor has introduced this new feature, and we think that it could be really good for them and bad for us: it'll take users away from us, and they'll go to that competitor for this new feature. We want to create our own version of this feature, but I'm not sure where to start. Can you help me?
Joe Marcantano: You have inadvertently touched one of my pet peeves, and that is building something just because the competition built it. Now, it could be that the competition did their research and what they're building is the right thing to build. It could also be that the feature was the pet project of their PM, or they could have been copying somebody else. Just because your competitor builds something doesn't mean you should build it.
Drew Freeman: Very good point. But I will say that this is a very common scenario regardless.
Joe Marcantano: It's incredibly common, and it's one of those choose-your-battles kind of things. Because if they are dead set on building it and you're not going to convince them otherwise, your job is not to convince them not to build it. Your job is to convince them to build it in the most useful way possible for both the users and the business. Once the direction is set, it becomes disagree and commit.
Drew Freeman: Yes, but you are totally right that it is part of our job as researchers to try to answer the question: is this feature actually addressing and solving a problem that users have?
Joe Marcantano: Yep. So if a designer came to me with something like this, I wouldn't even start with primary research on this one. I would start with secondary research. I would look at the feature and make an assumption about what problem it is trying to solve. Keeping with our travel website theme, is it trying to solve that when people book flights they need to book a rental car too, and so they've built an add-on for that? Whatever it is, I'm trying to figure out the problem that this new feature is solving, and then I'm gonna do secondary research to see if that problem exists. Have we noticed it before in other research? Has it come through in support tickets? Are people already talking about this problem?
Drew Freeman: I think that's a great place to start.
Joe Marcantano: Now, presuming that people already are talking about this problem and it's come up a bunch, I might, depending on the timelines, go straight into: hey, throw me together a prototype.
00:25:00
Joe Marcantano: It does not have to be perfect. What I want to do here is just, you know, confirm that this is a problem we're seeing, and give me your best guess on a solution, and let's see how far off we are. And I'll do some IDIs where I ask them how they're solving the problem today, what they're doing, is that painful enough that a newer solution might be useful, and then we'll test the new solution and just see if it's intuitive and they can figure it out.
Drew Freeman: I might even break up those steps and just do the IDIs about, how are you encountering this problem? How painful is this problem? Without having a potential solution to test. I might just do that IDI section first, and alone.
Joe Marcantano: So my thought was that I would do that section first and alone if we are not already seeing this problem. Maybe this is a problem folks don't realize they have, or they have a solution already, so they haven't complained about it. At that point, I would separate them out. But if we already have a good inkling that this is the problem, in my head I'm gonna combine those two, because I can do them at the same time.
Drew Freeman: Yeah, I think that's totally fair. It really depends on how confident we are in this problem space from any existing research that we've already done, or, like you said, just secondary research. Maybe it's Reddit, or comments in articles, or articles from travel bloggers, whatever.
Joe Marcantano: That's such a great call-out, because I don't think everybody realizes this: secondary research is not just the stuff in your repository. It could be scraping of social media, it could be looking at travel blogs. It is more than just the research that you specifically, or your team specifically, has done in the past.
Drew Freeman: Yeah. Do not give yourself extra work if you don't have to.
Joe Marcantano: So, yeah, for me, presuming we already have an inkling that this problem exists, I would plan on that kind of combined IDI, where I'm talking to them about the problem, making sure we understand it, and then testing the proposed solution. And from there it just becomes an iterative process. Maybe we're close on the designer's first guess, and the first prototype's really close. At that point, we make whatever tweaks are necessary and we do another round. Maybe we're way off and we need to rethink it. That might be something where, at the end of my IDIs, I have five or ten minutes built in, and if we're way off, I might go into: talk to me about what would make this easier. You know, if you had a magic wand, how might you design this? Being really careful with that, because participants will ask for the world; that's just how they are. But it'll give you a good idea of how they're thinking and how they're framing the solution.
Drew Freeman: Yeah. And if you're way off, then you can go into a new conversation around, do we need to do some co-design sessions? Do we need to do card sorting to try to better understand the mental model that's going on? But the thing that I want to bring up with this scenario is, when it's creating a new product or a new feature, you generally need to think about more robust methods, methods that give you more certainty, and time and budget generally become slightly less of a factor.
Joe Marcantano: Yeah. The thing that I would also throw out there is, especially if you are playing catch-up against a competitor, you might feel a little timeline pressure. So let's say you get three or four sessions in and it is very clear, abundantly obvious, that whatever we are testing missed the mark. Do not be afraid at that point to go to your designer and say, hey, I'll go through the rest of the sessions if you want me to, but I think our time might be more valuably used by ending this early. We kind of know this doesn't work. Let's make some changes and restart.
Drew Freeman: One of my favorite examples of this, I forget what the exact name is, but there's a specifically designed fake teapot where, when you pour the hot water, it actually pours onto your hand. You wouldn't keep doing research on that after the first participant burned themselves. You would stop the research and say, this is clearly a problem we need to fix; I don't need to do any more research.
Joe Marcantano: Yeah. The example I was thinking of very similarly is
00:30:00
Joe Marcantano: that, you know, there's the picture of the cat coffee mug where when you drink out of it, the cat ears poke you in the eyes.
Drew Freeman: Yes. Same idea.
Joe Marcantano: Yeah. Like, once you get one or two people who fall into the very same, very obvious trap, you don't need to, like...
Drew Freeman: I don't need to watch five more people poke themselves in the eye.
Joe Marcantano: I don't need a statistically significant sample. I can exercise a little bit of common sense there.
Drew Freeman: Yeah. Okay. So, to our listener, hopefully those little exercises were helpful. We can't cover everything, and I know we weren't able to give you the decision tree that you might have been looking for, but hopefully this conversation has been helpful.
Joe Marcantano: Yeah, hopefully we've set up the way that we think, and that can help you create a framework in your mind for the way you should tackle these problems. Anything else you want to hit before we call this one?
Drew Freeman: No, I think we're good.
Joe Marcantano: Awesome. Well, thank you, everybody, for joining us today. Give us a like and a subscribe wherever it is you get your podcasts. I will throw out one more time that next month I'm going to be speaking at the UX Insight conference in Leiden in the Netherlands, so you should check out an online ticket for that. And if you want to hear us talk about your question on an episode, you can message either Drew or me on LinkedIn, or you can send us an email at insideUXR@gmail.com. If you'd like to support the show, there's a link in the show notes. Otherwise, I'm Joe Marcantano.
Drew Freeman: I'm Drew Freeman.
Joe Marcantano: And we'll see you next time.
00:31:39