The Case for Adding More Steps in UX Research
Loose research processes create more work, not less. Here’s why adding structure early helps teams scale research without chaos or rework.
Every UX research program chasing democratization hits a point where things start to slip from coordinated to chaotic.
It starts with requests that come in half-baked or get skipped entirely. Then a designer recruits from Slack with no idea how often or how recently those participants have been surveyed. Next, a PM spins up a survey solo, with no regard for whether those questions have already been answered. Finally, someone approves a $200-per-user incentive with zero visibility and no mechanism to manage the payouts, which puts the whole mess on finance's radar, and it officially becomes "a problem."
Amidst it all, ReOps, the exact team meant to enable scale and ensure "research democratization" doesn't lead to UX chaos, has been working overtime cleaning up these chaotic projects and then gets blamed for "not doing its job."
The default reaction is predictable: if people find the request process so onerous that they'd rather go rogue, we've got to make it easier to submit requests. Require fewer steps and remove structure so we can just get people through the door.
But often the problem isn’t too much process; it’s where the process shows up.
We discussed this problem in detail during our latest AMA with Twilio's Kseniya Kreyman and Joe Marcantano: Inside Twilio's Democratized Research Engine.
You can (and should) watch the whole session for some amazing insights, but we'll break down the case for adding, not removing, research request steps below.
When Democratization Gets Loose, The Work Doesn’t Disappear
Getting more people to participate in research is a plus, but running unstructured research creates serious challenges.
Removing guardrails doesn't remove work; it just pushes the work downstream, where it’s slower and more expensive to fix.
Instead of a clean intake, you get vague requests. Instead of a solid method, you get something that kind of looks like research but doesn't deliver insights. Instead of a quick review, you get a Slack thread with five people trying to decode what’s going on.
Kseniya from Twilio nailed it:
“We would get a submission, and it would be empty or missing a lot of components... the burden kind of fell on ops to … come back and kind of educate on best practices … even sometimes [explain] what good research looks like.”
That’s the trade: Save five minutes upfront, lose hours later.
Everyone was happy except ReOps, because all the work that got skipped fell on ReOps to fix later. (At least until something slips through that ReOps can't fix.)
Friction Isn’t The Problem. Timing Is.
We treat friction like it’s always bad, but every car needs a set of brakes, and even high-speed rockets have pre-flight checklists.
The right friction forces clarity and makes people think before they act.
The problem isn't too much friction; it's friction that shows up too late.
Downstream friction shows up as rework, and it usually shows up under the tightest possible deadlines. All the time saved up front costs double when you have to do the work twice, in a desperate hurry.
Put the right friction at the start of your research process, and it delivers value.
Now the request asks the right questions:
- What are you trying to learn?
- Who are you talking to?
- Why this method?
- What’s the incentive?
Now there’s a review step that establishes clear ownership and provides feedback before anything expensive happens.
An ounce of prevention is worth a pound of cure.
What “Adding Steps” Looks Like
It’s all about guardrails: you replace a blank form with a structured intake questionnaire whose required fields push for a real study design. Provide prompts that slow people down just enough to think through their objectives, so they, in turn, give ReOps enough detail to launch an effective study.
Upon submission, make it clear what happens next: who will review the request, how long that will take, and how the submitter can follow up if reviewers need more details.
Yes, the review step slows things down in the moment, but it catches obvious problems early, when they are easiest, fastest, and cheapest to correct.
The benefit of this minor slowdown is clarity.
If I’m submitting a request, I shouldn’t have to guess what good looks like. I shouldn’t wonder who’s reviewing this or what happens next.
If the process doesn’t answer those questions, I’ll guess, and I’ll get some of it wrong.
That’s where the chaos starts.
Small Teams Don’t Get To Ignore This
Big teams can absorb inefficiency, but almost none of us have the luxury of working on big teams.
When demand goes up, and headcount doesn’t, every bad request costs you, and every "shotgun study" burns time and budget already stretched thin.
In that world, a review step is crucial, not a formality.
Review steps are there to keep quality high without pulling ops into everything. They cut rework and let you scale research without adding headcount or risking burnout.
One extra step upfront saves ten later.
Changing Behavior Takes Time
Even when the idea of "preventative structure" clicks, it doesn’t get accepted or adopted overnight.
You’re asking people to change how they work, to think before they hit submit (if they even fill out a request at all), and that’s a big shift.
Joe from Twilio put it well:
“Even if you're at a small company, you're trying to turn the Titanic. This is not gonna happen in a day. You gotta make tiny course corrections, and this will happen over time.”
At first, it will feel like a step backward because studies will start slower (and fewer will start at all). This doesn't land perfectly in every org, especially ones that hate process on principle. But most teams see the same pattern if they stick with it:
- Research requests improve
- Rework drops
- Studies produce actionable results faster
Best of all: ReOps stops playing defense all day.
The Real Takeaway
“Adding more steps” sounds like slowing down. Often it is, briefly.
But the work you're trying to skip happens either way. You can do it upfront, where it’s cheap and clear, or later, where it’s messy and expensive. Or there's a third option: studies that burn time and money but deliver no value at all.
One system scales, another doesn’t, and the third actively does damage. Build your request process accordingly.
If you want to put structure in place without turning your process into a bottleneck, request a personalized Rally demo today.
Rally’s Research Ops Platform enables you to do better research in less time. Find out how you can use Rally to empower your teams to talk to their users, without disjointed tooling and spreadsheets. Explore Rally now by setting up a demo.
