AMAs
September 18, 2025

Carl Pearson on minimum viable rigor in research

Carl Pearson, Staff Quantitative Researcher at Reddit, joined Rally on September 18th for an AMA and shared his insights on how to maximize the impact of your UX research in an era of increasing UXR demands and decreasing UXR resources.

Missed the event? Or want to revisit the key takeaways? Read the full recap below or watch the live discussion here. 

Who is Carl?

My name is Carl, I am currently at Reddit but have done work at Meta, Uber, Zoom, and Red Hat. I'm the only quantitative researcher on the Reddit team, building that function out personally and helping other people up-level.

I'm a mixed methods researcher by training, so I have done Qual extensively, but now I focus on Quant primarily. I spend a lot of time writing about things like quantitative UX research and research impact on my blog.

Everyone talks about "rigor" in UX research, but what does that actually mean?

I pulled from a definition in academia, in a domain called action research. That's a jargony term that just means research that is applied to achieve a specific goal. It's not basic research where we're trying to foundationally understand the universe. We're doing it to move the needle on something we care about. 

Rigor is designing our research so that we understand the environment well enough that it helps us achieve our goals.

So we don't want to understand reality for its own sake; we have a goal, and we want to understand reality well enough that the actions we take will move us towards that goal. As a concept, rigor is a little bit abstract, but as far as definitions go, I think that one is more down-to-earth and a little bit more tangible.

How do you apply rigor to UX research?

I wrote a follow-up post about this, and I'm calling it "Minimum Viable Rigor." We're not trying to be academic. We're not trying to be maximally rigorous. We would do that if we could, but we live in a resource-constrained world and we work in resource-constrained environments. Whether it's time, money, or skills, it whittles down what we're able to do.

So, it's a matter of thinking through every point that matters.

You have to know, when you're making a decision, whether it's going to affect the rigor. One really obvious one is sampling. Are you sampling from a panel, or from users you have direct access to because they already use your product? That's gonna have a big impact on the way you structure your questions in an interview or survey, and on the things you look out for with a panel. For example, you're a lot more likely to get acquiescence bias on a panel survey, where people just say, "I really love this, this is a 5 out of 5." But someone that's your actual customer, that's probably paying you money, is much more likely to give you an honest response. They might even be more likely to give a very negative response, because they've interacted with your product and maybe they have a bone to pick.

So, there's all these little things, and the key is to just know what decision you're making will have an impact on your rigor, and then, given your resources, make the decision you think is best and just make a note of it. So if anyone asks you, "hey, why is this the way it is?" you can say, "I already thought about that; here's why we're doing it this way." And that's pretty much the job of choosing the right method and doing research the right way. It's easy to say that in a couple of sentences, but it's a lot harder to do in practice.

I think there's a lot of tension right now between "UX research is too rigorous" and "UX research is not rigorous enough," and I'm trying to meet it somewhere in the middle and ground it in reality. I think it builds up trust in an organization to know things are being done correctly.

I think rigorous research also makes your findings more valuable. For example, if you ask somebody, "hey, do you like this product?" that's a bad question that doesn't really give you useful answers. So not only is it not rigorous, your findings won't really be deep, and they won't really tell the product team anything new.

So, I think the piece that is left out about this rigor conversation is that rigorous insights, the way that they're designed, will actually give you novel information that helps the team think about things in a new way, and makes actual progress instead of just being a simple validation that didn't need to be there in the first place.

Why *shouldn't* UX researchers talk about rigor? 

I think there's a temptation for researchers to say, "look at all the work I did, I made all these decisions." The reality is that product managers often don't give a $#!+. You really should put it in the appendix, and then if they ask, bam, you have it. But don't lead with that. Your co-workers don't want to nerd out about the methods. It's important, and you need to have it in the deck, but it should probably be in the appendix.

I don't think it makes it less important to you as a researcher, but you gotta think about your audience when you're sharing your work.

How does rigor get balanced with "democratized" UX research?

I think this is a really hard one because, when teams want to move faster, they don't say, "hey, you're not really good at this, but just try your best and we'll see how it goes."

With research, you do have to have people that want to invest the time in it. Someone posted an article saying one of the biggest threats to democratized research is if people don't actually want to learn research. You do have to actually learn it, and to learn it, you often have to have a drive to do that.

One of my first professors in grad school said anyone can learn to do what we do. And I was very threatened by that at the time. Why am I getting this degree? Why does this matter?

But you actually have to then learn it. You have to take the time and you have to invest in it. So, I think the tip here is maybe finding people that actually do want to learn it, because it's really hard to make somebody learn something they're not actually invested in.

Try to identify people that are really passionate about it, because it will take some effort. It's certainly doable. Anyone can learn to do what we do, but you have to take the time to actually do it. So, you can't really skip that step and expect to get results that lead to a good impact for the business.

What is "research theater" and why does it happen?

"UX theater" or "research theater" is essentially doing it because it's the right thing to do, but not actually thinking about the value that we get from it, so doing it for its face value.

If you do research that is below that Minimum Viable Rigor line, your insights no longer relate to the reality that we want to understand in order to make our decisions. And if you get too far from that reality, you're making decisions without a clear view of what's going on. The reason that matters is that the most important impact to a business is its final metrics, whether that's user growth, revenue, retention, or decreased customer support tickets. That's the thing they care about.

If you're doing research that is not rigorous enough, you're essentially just relying on luck, because you don't have any bearing to reality. And I think it's actually a little bit worse than relying on luck. 

At least with luck, you're saying "well, we don't know, but we're just gonna send it and see what happens." But then you're waiting and you're a little more cautious about checking the results. 

But if you do work that is falsely confident, then you're gonna be less likely to interrogate things when they aren't lining up down the road. It's luck, but worse, because you're not as wary of what's happening in front of your eyes as you go into it.

I think it's been framed often in terms of risk. The more risk you have, the more you're relying on luck for success. So if you have luck, that means that there was a lot of risk and you were still successful anyway, which works sometimes, but probably not enough over the long term.

What's the hardest part of maintaining rigor in your UX research?

I think time pressure is one of the biggest risk sources these days. And I think it actually loops in very clearly with the next one, which is a lack of clear standards. 

This is where the team coming together can really help, because one way to deal with time pressure is to find a way to move more efficiently through a rigorous approach. If you understand the context you're working in and the way that you do studies, you can essentially pre-make some of those decisions we were talking about, like: is this decision gonna affect the rigor?

If you're doing research in similar ways, you can make those decisions ahead of time, so you have to think about them less and less. If you can draw on everyone's collective experience to get the best practices for your org across all of the methods, then you cut down on all of that communication time and you can do it all at once. Time pressure is definitely real, but the more you document those decisions and learn from the people around you, the more you can speed things up while maintaining that level of rigor.

Why do you consider UX research to be "mixed-method by default?"

Brian Utash, who's a UX researcher, wrote a great article on why he only hires mixed methods researchers, and one thing he said was so poignant: you would never describe yourself as a generative-only researcher or an evaluative-only researcher, which is another way to split methodology. So, why do that for Qual and Quant?

It's not that you need to be perfect and know absolutely everything, but when you say you're qual-only, or, more rarely, quant-only, you're saying, "I'm not willing to learn this. If people ask me to tackle a research question that necessitates it, I might not tackle it in the optimal way."

I think it's just a lack of curiosity, and this isn't to throw shade at people that don't know as much quant. There's room to learn it. All you need is the curiosity and the drive for it.

What are the risks of doing mixed-method UX research?

When the team asks you questions that you could answer with Qual or Quant, are you always defaulting to Qual? If you're trying to get by on just the qualitative work, and you're saying, "well, we'll just forego the quantitative extension of this mixed methods work we should do," then I think that's a red flag. Look at what the needs are and what the team is actually doing.

The challenge there is that you have to have somewhat of an understanding of where qualitative work is helpful to make that diagnosis. I think that's the importance of staffing at least some people with mixed methods capabilities.

The threat there is that it goes back to rigor. If you're trying to answer qualitative questions quantitatively or vice versa, you're probably gonna miss the mark in some way. So we're really just trying to best fit the method for the product question at hand. That's the core competency of a UX researcher. It's foundational to know what method to choose and why. So, even if you're not a quantitative expert, knowing when quant is the better solution for something can still help you say, "maybe we need to pull in this other person." Just having at least some level of fluency across the spectrum I think, is critical for researchers.

I had a great professor who taught qualitative methods, and one thing she mentioned is that "mixed methods" is a little bit of a misnomer, because there are very few methods where you're truly blending Qual and Quant at the same time. It's mostly staggered. "Multi-method" is probably a better name for it, because you get more value out of each if you do them one after another and build on the learnings from each of them.

Is there a book or a course that you can recommend for different quant methods?

I have a post about learning Quant UXR where I break down some different pillars, because we often just think of surveys, but there's a lot around that: statistics; research design, which is not design research, but how we effectively create our methods; and programming, because sometimes it's not a survey or anything involving talking to users directly, but digging into log data, like our friends in data science do.

So, there's a good breadth of methods described in that blog post.

Chris Chapman and Kerry Rodden's book, Quantitative User Experience Research, was published just a few years ago, and it's the most foundational text for Quant UXR methods. It covers a ton, everything from CSAT surveys to max-diff surveys to log data analysis. So, if you want an actual textbook, that's the one I would start with.

What does it look like when research is "real?" How do you define research impact?

The way that I break down impact is: first you have the Execution of research itself. The number of studies you're running, the number of reach-outs or requests you're getting, the number of insights generated, things like that.

Then you have Influence, which is where you hand over your power as a researcher to try and get someone else to take action. That can be how you influence the strategy. You have a citation in a strategy document, or a line on a redesign, or something like that, where you can say people are being influenced by the work you're doing.

And then the last, Outcome, is whatever the actual business cares about. So again, back to revenue growth, user growth, user retention, whatever that might be.

And all of these are essential steps, but it's interesting because, when we're talking about rigor, that has the biggest impact on the execution of research itself. Democratization also has a big impact there, because people want to generate more insights, so you increase that as well.

But the only one the business cares about, the one the CEO or your VPs care about, is the organizational outcome. They don't care how many insights you've generated. They don't even really care that much about how many people you've influenced, even though that's really important for us. They only care about whether the business is doing better or not.

And I think influence is really important for us as researchers because it's the last step in this process that we own. We don't own the KPI that a PM owns. They own that, and they're the one that's going to take the heat if they don't get the numbers up, not research directly.

But if we do insights and no one listens to them, we will get heat for that. That's the last thing that a researcher owns in this line of steps. So I think, as researchers, execution is important because you have to do the work itself. It needs to be rigorous, ultimately, because you want the final numbers to go up on the other side.

Influence is important because that's how we track the success of our work itself.

And if our work is rigorous and high quality, then the business numbers will go up in the end. So, there's a lot to unpack there, but that's my framework for thinking about impact and the different kinds of impact that exist in a company.

How do you quantify UX research's impact?

That is something that is hard to quantify. You can have individual data points, but it's more about the narrative. I keep a spreadsheet that has every project I've done, and then a rolling set of impacts that I track, and they build on each other over time.

So it starts out: this person put this in a doc, this person referenced that doc in a strategy doc, the team built this based on the findings in the strategy doc, and that might be Q2. Then in Q3, it's "oh, this thing we launched had positive user growth metrics." So, the impact is not quantified by me saying I did X number of studies; it's saying I did this one thing that led to this, that led to this, that led to this.

And candidly, I think that takes you away from the work that the company cares about the most, which is creating useful insights and influencing people. That's what you're hired to do.

But everyone has to defend the value of their work, so you need to make time for it, and it's worth spending a little bit of time and having a tracker to see how all this stuff is evolving over time. And personally, I love that stuff too. I think it's interesting. I don't want to be mired in it, but I do like to see the value of my work, and it's cool when something ships and it's useful to people. So, that's how I think about creating a narrative of impact for projects as a researcher.

It's also a challenge because you don't always have a business outcome or metric you can look at. Some of my most important work has been, "this is a terrible idea, let's not do this." How do you look at the metrics for that? You didn't ship it. It's still good that you did it. I trust it because I know I employed enough rigor that we were matching the reality of the situation. But in some of the most important UXR work, there is no metric to look at. You're lucky when you get the final business metric, because it doesn't even apply to every situation.

How do you effectively communicate research impact to non-research stakeholders or leadership?

I think a lot of times they are looking for the numbers. People outside of research especially are not familiar with qualitative assessment, so they're looking for hard numbers from what we call the positivist philosophical tradition, classic science thinking, which is predominant in the business world. So, when you can show numbers, it can be useful. Usability benchmarking when it makes sense, or sentiment tracking, all of these things can help lend credibility. But it can be a little tough, because sometimes you're spending a lot of effort just to prove that the work you do is valuable. You're doing extra projects to say your other projects are good, so you gotta walk a fine line there.

I will say, for ICs, a lot of times you are not defending this to non-research stakeholders. Your most important stakeholder in a corporation is probably your manager. At smaller companies, you're thinking less about your management chain; at a startup where things are on fire, you might be answering to the CEO more, so that's where some of those numbers might come into play. Or taking time to slow down and just demonstrate qualitative wins. But you have to hope that your stakeholders are either willing to listen to your explanation of the value of a qualitative win, or, if you're super lucky, they already know it.

How do you ensure quality with research democratization?

I think you have to identify the right people to bring into democratization, people that are willing to learn.

Then it's a matter of letting folks do research when you think their skills are up to par, and building in quality checks. Don't let it be a total free-for-all; have some quality review as things go out that you can check on.

I do that with my team right now, and we're all, I think, very talented researchers. We still check each other's work and find stuff. So that's not unique to democratization.

If you have designers doing democratized research, they're really open to crits, because they do design crits all the time. That's not as embedded in UXR culture. See if you can do a research crit where you review one of their interviews or something like that, once you have a good rapport. You don't want to just lay this on them if they're not ready for it, but oftentimes they are.

This is the challenge. You do democratization because time is tight, but it ends up taking a lot of time too, so you really gotta find that balance there.

How has AI changed your UX research methods?

I can't really talk about research right now without talking about AI. AI is a very big deal.

It can make some really big changes, but it's not as big of a deal as everyone says it is. I think there's a lot of economic pressures that are really putting it in the spotlight.

There are some things that AI does really well. It's good at certain kinds of summarization. It's getting really good at translation. That's really, really nice. It can translate things quite effectively.

It can be really useful as a sparring partner for things that you already know somewhat well. I like to ask it about statistics, and it's not always right, but sometimes it gets me thinking in new ways. So, I'm not just letting it choose my whole model for me, but I'll say, "I'm thinking about this rather than this," and we'll spar back and forth. 

But there's a lot that is not quite there. I think there's early work coming out now about summarization of open text responses and that can still be a challenge to know if it's getting the themes right. You have to double check for hallucinations and stuff like that too. So, some stuff is there, some stuff is not.

I just have to say, the biggest shift that is not happening with AI is that AI is not helping me choose better research projects. And one of the most important things a good researcher does is choose what to research.

That is, I think, sometimes overlooked, and something AI is still pretty bad at. I haven't gone into it earnestly to have it do that for me, but I've been curious, and it hasn't wowed me.

Do you need formal academic research training to do rigorous UX research?

You can unpack this in a couple different ways, because there is the reality of the hiring landscape, and I think a PhD with the right experience attached to it, internships or focus during your program, can be a big leg up, especially if you're looking at FAANG-type companies. They just have a bias towards hiring that profile.

That said, I've worked with people that have master's degrees, and between the training from their program and their experience in the field, they're super qualified. I have great respect for their research, their decision-making, and their ability to do the work. So, there's a little bit of a calculation around optics, but as far as core skills go, you certainly do not need a PhD.

A PhD sometimes gives you a little bit more time to steep in the complicated methods of quant, and that can be beneficial, but again, it's not a necessity. It all depends on how you learn, how quickly you learn, and how you get there.

What does UX research look like at Reddit?

Without veering too much into it, I'll just say as a quant, I really like it, because we have so many users. 

One of the challenges of working in a B2B space was that I wanted to do quant but often couldn't, because it was very hard to get to people, or if I did, I was working with small sample sizes, which has a different set of challenges. I'm really enjoying the challenges and ways of doing studies with really large sample sizes. It's just a very lucky place to be for a researcher.

We're a smaller team, but mighty. All the researchers cover a lot of ground and do a lot of excellent work at Reddit.

Personally, I use Reddit, and it's fun to work on something that I use a lot in my own time, so that's maybe one of the biggest benefits there.

Do you think the job market now or in the future supports the need for having purely research focused staff, or will we be pushed to more of a generalist role?

I've seen this question floating around on LinkedIn. I think it's on everyone's mind for good reason. I've been in this field for 7 or 8 years now, and it's mostly been a boom time until maybe a couple of years ago. It really started to shift.

The more that I talked to people that have been in this field for years and years and years, it's absolutely cyclical.

So, there might be more of a push for generalist roles as budgets are tighter right now. If the past 30, 40 years are any indication of the future, that pendulum will swing back, and then it will swing back again. I think that if that happens, it will be temporary, and it will always come back to having room for more specialization.

That said, even as we've had this hiring downturn, I still see teams hiring for research ops. It's super niche to have an ops function just for research, and I take it for granted because we have it; it's amazing to have a well-staffed research ops team.

So, even though there is this concern for more generalist roles, I still see people hiring for really, really specific roles, so I'm not totally clocking that in the job market right now, and there's more Quant UXR roles than ever, so I still see some room for specialization just based on my anecdotal experience.

What is one mindset shift that you think every researcher should make to stay relevant and impactful in the next 5 years?

I don't know if everyone needs to shift, but again, I think it's sort of that curiosity, and it goes back to that mixed methods idea.

I'm a quant researcher, and a lot of the quant that I use in my projects day to day, I didn't learn until after grad school; I didn't learn it until 3 years ago. So always be learning, because you never know what you're gonna need.

We don't control the questions that get asked of us a lot of the time, so you've got to be ready to roll with it and find the best way to answer. It's hard when we all have to be heads-down; we have jobs, we're super busy, and the market is tight. But if you can keep that curiosity and just keep learning, I think that's what keeps you strong as a researcher and keeps your skills up to date.

Connect with Carl

Follow him on LinkedIn and say hello!

Listen to Carl share his insights on building UX research rigor, so you don't have to rely on luck.

Thank you, Carl!

A heartfelt thank-you to Carl for sharing how he and Reddit are maintaining rigorous, data-driven UX research standards every company can aspire to. While we aren't all so fortunate to have Reddit's user base to survey, Carl's insights showed how we can adapt research best practices for our own projects, our own organizations, and our own personal up-skilling. We're grateful for the time and expertise he shared.