What AI Enablement in UXR Looks Like In Practice
Most AI efforts in UXR stall because teams start with the tool, not the problem. Here's a more grounded approach.
UX research orgs are under extra pressure right now. Leadership is asking about AI, and it's less a question than a demand. Vendors are shipping AI features weekly. Slack threads are full of prompts, plugins, and “have you tried this?” links to demo videos. Meanwhile, your team is still trying to recruit participants on time, synthesize three studies at once, and defend headcount.
So what happens? Your team gets bullied into piloting AI just to prove you're "doing something."
That backwards motion (tool first, problem second) is where most AI efforts stall.
AI Enablement Is Not About Getting People to Use AI
A more grounded definition of AI enablement sounds less exciting and more operational.
Brandi Amm, Staff Research Program Manager at LinkedIn, has spent the past two years building out AI enablement for LinkedIn’s research org. Her framing is disarmingly simple:
"I don't start with AI even though AI enablement is sort of my wheelhouse right now. I just start with the problem spaces."
That’s the shift.
AI enablement is not:
- Running prompt workshops.
- Mandating experimentation.
- Adding “AI-powered” to your tech stack slide.
It’s identifying friction in your research system and asking: Is AI the right tool for this specific problem?
Sometimes the answer is yes. Sometimes it’s absolutely not. The discipline is in being willing to say both.
Start With the Work That’s Already Slowing You Down
In practice, your first AI research wins tend to be unglamorous.
They’re not agentic research copilots or automated strategic roadmaps. They’re the parts of research that are high-volume, cognitively heavy, and structurally consistent. In other words, the gruntwork.
Examples include:
- Interview note-taking and transcript cleanup
- First-pass thematic clustering
- Pulling quotes across large data sets
- Summarizing research briefs or prior studies
- Building PowerPoint slides. Of any kind. Ever. (Ugh.)
These are areas where AI can meaningfully reduce time without redefining the researcher’s role. It supports judgment; it doesn’t replace it.
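As a concrete (and deliberately crude) illustration, here is what a first-pass thematic clustering step might look like as a script. The theme names, keywords, and quotes below are hypothetical; in a real workflow, the matching would be done by an embedding model or LLM operating under your governance guardrails, with a researcher reviewing and refining every cluster.

```python
from collections import defaultdict

# Hypothetical themes and keywords for illustration only. A real pipeline
# would derive themes from the data (e.g., via embeddings or an LLM),
# not a hand-written keyword map.
THEME_KEYWORDS = {
    "onboarding": {"signup", "tutorial", "setup"},
    "pricing": {"cost", "price", "plan", "billing"},
}

def cluster_quotes(quotes):
    """Assign each quote to every theme whose keywords it mentions."""
    clusters = defaultdict(list)
    for quote in quotes:
        words = set(quote.lower().split())
        for theme, keywords in THEME_KEYWORDS.items():
            if words & keywords:  # any keyword overlap counts
                clusters[theme].append(quote)
    return dict(clusters)

quotes = [
    "The signup flow and setup took me forever.",
    "I don't understand the billing plan tiers.",
]
print(cluster_quotes(quotes))
```

The point of the sketch is the division of labor: the tool does the mechanical first pass, and the researcher keeps the judgment call of what the themes actually mean.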
This is the “prove value” phase.
You start small, test in controlled environments, measure impact in hours saved, turnaround time reduced, and researcher satisfaction. You establish governance and compliance guardrails early, before scaling usage across the org.
Governance here isn’t red tape. It’s what makes adoption defensible. Clear guidelines around data handling, storage, access, and approved tools create psychological safety for researchers and operational safety for leadership. (In other words, your CISO gives their blessing.)
Without that foundation, even good tools feel risky.
Then Expand (Carefully)
Once AI is embedded in real workflows, not just experiments, the conversation broadens.
Two areas often emerge next:
1. Democratization (Done Right)
There’s a lot of fear baked into this word. If AI makes UX research faster and easier, does that mean non-researchers start doing research?
In healthy systems, democratization doesn’t replace expertise; it exposes it.
AI can help product managers or designers explore existing research repositories, generate summaries of past findings, or draft research questions. But the scaffolding, interpretation, and methodological rigor still sit with trained researchers.
In this model, AI makes research more visible and more accessible, while clarifying where expertise matters most.
2. Agentic Tooling (Only After the Basics Work)
Agentic workflows (AI that can complete complex actions or connect across tools) are compelling. But they only make sense once your foundational workflows are stable.
If your repository is messy, your metadata is inconsistent, and your processes are undefined, adding automation just makes the chaos spread faster. Don't be surprised if your early pilot exposes all these issues, and don't be afraid to slow down your progress long enough to fix them before expanding your AI footprint.
The process matters:
- Solve clear workflow friction
- Prove value
- Establish guardrails
- Correct pre-AI processes
- Expand outward
You can't skip steps; shortcuts just make these problems more expensive to fix later.
Skepticism Is a Feature, Not a Bug
There’s another tension worth naming: skepticism.
Many researchers are wary of AI for good reason: bias, hallucination, privacy risks, and the erosion of craft. That caution is not something to overcome. It’s something to preserve.
Brandi puts it this way:
"My skepticism has shifted away from 'is AI weird or bad?' to 'is this actually helping us, or is this just exciting and new?'"
That’s the right evolution.
The question isn’t whether AI is inherently good or bad; it’s whether it meaningfully improves a workflow you already care about.
Novelty is easy to demo; workflow improvement is harder to prove. One shows up in a lunch-and-learn. The other shows up in cycle time metrics, researcher productivity, and the quality of synthesis delivered to stakeholders.
If your team can’t clearly articulate what’s better (faster recruitment, clearer insights, more time for strategic thinking), you probably have a novelty problem, not an enablement strategy.
Realistic Timelines Over 'Instant' Transformation
Another pattern common among successful AI adopters: they give themselves time.
AI enablement at scale isn’t a quarter-long initiative. It’s iterative. It requires:
- Policy drafting and revision
- Tool vetting
- Security reviews
- Internal education
- Feedback loops
- Cultural adjustment
Rushing it to “keep up” usually leads to shallow adoption and quiet abandonment.
Treat it like any other systems change inside Research Ops: define scope, pilot intentionally, measure impact, adjust.
You’re not racing other teams; you’re building a durable capability. (And 'slow and steady' wins the race anyway.)
The Practical Question to Take Back to Your Team
If there’s one takeaway, it’s this:
Before you evaluate another AI tool, ask: What problem are we actually trying to solve?
Is it slow synthesis? Limited repository access? Research bottlenecks? Stakeholder self-service? Compliance anxiety?
Then: Is AI the right tool for that problem, or just the flashiest one?
AI enablement in UXR isn’t about proving you’re modern. It’s about improving the system that your researchers already operate inside.
Start with the problem space and build from there.
To dive more deeply into best practices for AI adoption in UX research, check out our full AMA with LinkedIn's Brandi Amm.
If you're ready to adopt the foundational UX research solution that makes effective AI adoption possible, schedule your personalized Rally UXR demo today!
Rally’s Research Ops Platform enables you to do better research in less time. Find out how you can use Rally to empower your teams to talk to their users, without disjointed tooling and spreadsheets. Explore Rally now by setting up a demo.