AMAs
April 24, 2025

Pedro Vargas on making research count – metrics that matter

Measuring the impact of research is more important than ever. But it’s also more complex than a simple ROI calculation. So, how can Research and Research Ops professionals define, track, and communicate their impact in meaningful ways? And how do we make sure the value of research is seen beyond the research team?

Pedro Vargas, Research Ops Lead, joined us for a Rally AMA to share his approach to tracking impact, aligning with stakeholders, and building systems that make research visible and valuable.

Missed the event? Or want to revisit the highlights? Read the full recap below or watch the live discussion here.

Who is Pedro?

I'm from Brazil, and my whole research experience is based here and in Latin America. I've done research for the Argentinian market, and also for Colombia and other countries in the region, but most of my experience is in Brazil.

I was a leader of UX Research, and now I’m leading a Research Ops team — which might seem similar, but in practice, they’re quite different. I also teach classes on UX Research and Research Ops, and I write articles on the topic. This year, I took on the challenge of translating most of those articles into English. They were originally written in Portuguese, and I’m working on the translations to share my perspective more broadly and learn from people outside of Brazil.

That's why I'm here. I really like this type of connection. I really like to talk about impact, metrics, and all the things connected to that.

When I joined my first research team as a solo researcher, I didn't have a clue about this topic. It was crazy. We didn't know anything about Research Ops or how to scale research in a company to gain more influence. I think that's part of my purpose now: to help other people like me back in 2018 when I joined that first research team.

Also to help leaders, because in the end, we're all in the same boat. It's crazy how the same problems I have here, people in Germany also have – of course at different levels and in different contexts, but still the same problems.

What's the difference between ROI and research impact?

That's important to say, but what's even more important is a disclaimer: my two cats are crazy. They might pass in front of the camera and make a little mess, so please don't mind. They're nice people.

Talking about ROI and impact, my view is that when we talk about impact, people tend to think about ROI, return on investment. I know business people talk about that very often. We can talk about ROI in research, but it's very different from talking about impact, metrics, or OKRs.

ROI and impact might bring us to the same place – to strengthen the research system and discipline in a company – but they live in different buckets. People usually misunderstand this. Research ROI and research impact are not the same.

In my own work, I created a research impact spreadsheet. I did this at my first company as a researcher and have since applied it in other companies, including the one I'm in now as a ReOps leader. But I never had the chance to talk about ROI, because ROI requires financial data: salaries, costs, time. Researchers usually don't have access to that, and without that data, you can't calculate ROI.

Cost savings is a big trend now, especially after layoffs, but you need access to financial data to talk about that. Most researchers don't have it.

So people might think, "I can't do anything, I can't track impact." But I disagree. We can track impact. We just have to use other methods. 

What does your impact tracker look like?

This is what I call the Impact Tracker, or Research Impact Spreadsheet. I created it after watching a short talk on UX research by Victoria Sosik, Director of UX Research at Verizon. It really inspired me, so I went back to my team and built a spreadsheet based on that talk.

I believe every research project can have impact – on stakeholders, on the company, on product strategy, on squad strategy. There are many types of impact. What's key is knowing the type of research: is it exploratory, evaluative? And then asking: what kind of impact do we want to track?

Some examples:

  • How many initiatives were changed after research?
  • How many changes were made to a product or screen?
  • How many stakeholders engaged with research interviews for the first time?
  • How much collaboration happened as a result?

We sit down with the team and brainstorm: What is a good research project? What is not? That leads us to defining the types of impact.

We also map the areas we want to impact: design, product, marketing. It's simple to track. The template is in Portuguese, but Portuguese and English are similar enough that you can easily adapt it.

Access the template and make your own copy here.

There are graphs that visualize the types of impact, scale, and timing. We note when the research was delivered and when the impact happened. Often, they don't align: you might deliver results today and the impact might come two months later. That's okay; it can still tell you whether you're on track. Maybe you're doing research that's not aligned with product or company OKRs. Or maybe you're ahead of the game. Either way, you need to know.

We also ask two key questions:

  • What was the situation before the research?
  • What changed afterward?

This lets us qualitatively analyze the most impactful projects.
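
Pedro's tracker lives in a spreadsheet rather than code, but if you wanted to model the same fields programmatically, a minimal sketch might look like this (the field and type names are illustrative, not taken from his template):

```typescript
// Illustrative sketch of one row in an impact tracker.
// Field names are hypothetical; adapt them to your own template.
type ImpactType =
  | "initiative-changed"      // an initiative changed direction after research
  | "product-change"          // a product or screen was changed
  | "stakeholder-engagement"  // stakeholders joined research interviews for the first time
  | "collaboration";          // new collaboration happened as a result

interface ImpactRecord {
  project: string;
  researchType: "exploratory" | "evaluative";
  impactTypes: ImpactType[];
  areasImpacted: ("design" | "product" | "marketing")[];
  deliveredOn: Date;          // when results were shared
  impactObservedOn?: Date;    // often later than delivery, sometimes by months
  situationBefore: string;    // "What was the situation before the research?"
  whatChanged: string;        // "What changed afterward?"
}

// The delivery-to-impact lag Pedro mentions is easy to compute per record.
function lagInDays(r: ImpactRecord): number | undefined {
  if (!r.impactObservedOn) return undefined;
  return Math.round(
    (r.impactObservedOn.getTime() - r.deliveredOn.getTime()) / 86_400_000
  );
}
```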

Finally, we use a feedback form (make a copy and make it your own!) for stakeholders. Questions include:

  • How much did the research team impact your deliveries this quarter? (1–5)
  • Did you work with the research team?
  • Did you make decisions based on the findings?

Sometimes people click "I don't remember." That hurts more than a no; it means we didn't make an impression. It's important to get this feedback, and I prefer anonymous forms because people are more honest.

We did this quarterly and every researcher tracked their own projects. As a leader, I shared feedback with them during performance reviews. But it's not just the leader's job. Every researcher should track the impact of their own projects. Schedule a call a month after delivering results and ask: what changed?
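
If the form responses land somewhere structured, a small helper can summarize the 1–5 scores and surface the "I don't remember" rate. This is an illustrative sketch under the assumption that responses are exported as structured data; none of the names come from Pedro's form:

```typescript
// Illustrative tally of stakeholder feedback responses.
// "dont-remember" is tracked separately because, as Pedro notes,
// it signals the research left no impression at all.
type ImpactScore = 1 | 2 | 3 | 4 | 5 | "dont-remember";

interface FeedbackResponse {
  impactOnDeliveries: ImpactScore;   // "How much did research impact your deliveries?"
  workedWithResearch: boolean;
  madeDecisionsFromFindings: boolean;
}

function summarize(responses: FeedbackResponse[]) {
  const scores = responses
    .map((r) => r.impactOnDeliveries)
    .filter((s): s is 1 | 2 | 3 | 4 | 5 => s !== "dont-remember");
  const dontRemember = responses.length - scores.length;
  return {
    averageImpact:
      scores.length > 0 ? scores.reduce((a, b) => a + b, 0) / scores.length : null,
    dontRememberRate:
      responses.length > 0 ? dontRemember / responses.length : 0,
    decisionRate:
      responses.filter((r) => r.madeDecisionsFromFindings).length /
      Math.max(responses.length, 1),
  };
}
```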

Do you collect and maintain all of this information manually?

Yes, it's manual because it's research about research. In my context, we don’t have automated systems for this. But it’s still doable.

If you have automated ways to collect feedback – like sending a survey when you deliver results – great. But for the more nuanced parts, like understanding what changed after a study, you need to follow up with stakeholders. That part is manual.

What's the largest scale you've used this tracker at?

The largest scale was with five researchers. If you have 30, maybe divide them into smaller groups and define impact tracking champions. You don't need to copy my spreadsheet exactly. Use the mindset, adapt it to your needs.

As for compliance, our spreadsheet is locked to the research team and directors. We don't log user names or sensitive content. It’s a tracker, not a repository.

What if stakeholders resist filling out your feedback form?

That happens. You need to sell the benefit.

Don’t say, "Fill this out because I need it for my manager."

Say, "By filling this out, you help us make the case for more researchers, better tools, and more support for your team."

Make the benefit clear. You can also:

  • Use Slack polls
  • Ask 1:1
  • Keep the form short and anonymous

If a shared channel isn’t generating enough responses, reach out to people individually.

Are researchers involved in tracking Research Ops metrics?

Yes. Every year we invite a sample of researchers to a FigJam workshop. We map the 8 pillars of Research Ops and ask questions about each. It helps us see what’s working and what’s not.

If we find a gap and we’re not tracking it, that becomes a new metric.

We update metrics monthly using survey data, repository usage, and internal tools. Without that routine, we forget what matters.

What metrics should every ReOps team track?

You should know how researchers perceive your support. Are they satisfied?

Ask:

  • Was it easy or hard to recruit participants?
  • Was it easy or hard to find past reports?
  • Was it easy or hard to find the GDPR guidelines?

You can embed these questions in Notion or a tool like Rally. Or you can trigger questions based on behavior. For example, if someone is on the repository page for 15 seconds, ask them if they found what they were looking for.

It’s harder in our context because our internal tools are custom-built. But the principle still applies.
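
Pedro's internal tools are custom-built, but in a web-based repository the 15-second trigger he describes could be approximated with a snippet like this. Everything here (the path check, the prompt copy, where the answer is sent) is a hypothetical sketch, not his implementation:

```typescript
// Illustrative sketch: after 15 seconds on the repository page,
// show a one-question prompt asking whether the visitor found
// what they were looking for.
const PROMPT_DELAY_MS = 15_000;

function showMicroSurvey(): void {
  const prompt = document.createElement("div");
  prompt.id = "repo-micro-survey";
  prompt.textContent = "Did you find what you were looking for?";

  for (const answer of ["Yes", "No"]) {
    const button = document.createElement("button");
    button.textContent = answer;
    button.addEventListener("click", () => {
      // Send the answer wherever you aggregate ReOps metrics.
      console.log(`repository-findability: ${answer}`);
      prompt.remove();
    });
    prompt.appendChild(button);
  }
  document.body.appendChild(prompt);
}

// Only trigger on the repository page.
if (window.location.pathname.startsWith("/repository")) {
  window.setTimeout(showMicroSurvey, PROMPT_DELAY_MS);
}
```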

How do you differentiate activity metrics from impact metrics?

I think about three boxes:

  • Research Ops metrics (perception, documentation access, compliance)
  • Research project metrics (what changed because of a study)
  • Strategic impact metrics (how research influences OKRs and business direction)

They're all connected but they should be kept distinct. That helps us understand what kind of value we’re providing and where.
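
One lightweight way to keep the three buckets distinct is to tag every metric with its category at the point where it's recorded. A hypothetical sketch, not something from Pedro's setup:

```typescript
// Illustrative sketch: tag each metric with exactly one of the three
// buckets so reporting never mixes activity with impact.
type MetricCategory =
  | "research-ops"       // perception, documentation access, compliance
  | "research-project"   // what changed because of a specific study
  | "strategic-impact";  // influence on OKRs and business direction

interface Metric {
  name: string;
  category: MetricCategory;
  value: number;
  recordedOn: Date;
}

// Group metrics by bucket for separate reporting.
function byCategory(metrics: Metric[]): Record<MetricCategory, Metric[]> {
  const groups: Record<MetricCategory, Metric[]> = {
    "research-ops": [],
    "research-project": [],
    "strategic-impact": [],
  };
  for (const m of metrics) groups[m.category].push(m);
  return groups;
}
```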

What's the biggest challenge you're currently facing in ReOps?

We have hundreds of people doing research, with different levels of maturity and seniority. Some follow the guidelines. Some don’t. Some are junior. Some are senior.

The biggest challenge is not knowing whether they’re doing research well.

We do trainings. We write guidelines. We host forums. But we can’t see or control everything.

So we have to ask: with the time and effort we have, where can we make things easier and more effective?

What's your biggest learning around advocating for research?

Don’t just give a presentation about why research is important. That won’t change anything.

Show impact. Show results. Research teams cost money. You have to show what the company gets in return.

Also, don’t let great research go unseen. Share success stories.

We started doing a Research Expo once a year. We select 3-4 research projects and present them to the whole company. It gives visibility to the team and shows the value of our work.

Final takeaways?

Three things:

  1. Learn negotiation. It’s a key skill for ReOps and research leaders.
  2. Learn strategy. We should talk about it as much as PMs do.
  3. Talk about leadership. I don’t see enough leadership conversations in research. We need to grow leaders too.

Thank you, Pedro!

We’ve admired Pedro’s work in the Research Ops space for a while and were thrilled to have him join our AMA series. We’re so grateful for the time, energy, and expertise he shared with our community. If you’d like to watch the full AMA, follow this link.