How to build and launch a UXR chatbot with Maria from Checkr
Lots of "experts" are predicting that AI can replace user researchers, designers, and product managers, but for a dose of practical reality, how about building a ResearchOps chatbot that helps out your UX teams, rather than futilely trying to replace them?
Maria de Caris, Senior UX Researcher and Product Strategist at Checkr, joined us for an AMA on Sept. 25th where she demo'd the UXR bot she built to assist and enhance her own research efforts.
Missed the event? Or want to revisit the key takeaways? Read the full recap below or watch the live discussion here.
Who is Maria?
I'm Maria. I'm a Senior Researcher at Checkr, a background-check product that serves HR professionals. Their hiring workflows get incredibly complex, so we research ways to make those processes simpler and more repeatable.
Before Checkr, I was a Lead UX Researcher at Loop and a Venture for America Fellow. I have a degree in Urban Planning from the Ohio State University, which prepares you for UX research more than you'd expect.
What is your philosophy toward AI in UX research, and how did it lead to building a ReOps chatbot?
So I'll pull up a quote that is very popular in the AI space and in AI debate. The quote basically says, "I want AI to do my laundry and dishes so that I can do art and writing, and not the other way around," which I think resonates with so many of us who debate the role of AI.
It's a valid sentiment. I think we all desire technology to serve us and not reduce our capacity for expression and creativity.
I really view this quote more as a metaphor for operators. I think that we have a choice in how and where we want to deploy AI and the freedom to experiment with this technology to better understand it, and I feel some responsibility on that level as well.
I've found AI really helps my creativity, especially in areas where I struggle: coding, writing very succinct summaries, and the tedious aspects of research. So, in general, I think it's important to know how and where to deploy AI to increase capacity rather than using it to fully replace cognitive ability. There are more studies coming out about how AI can impact our cognition if we're overusing it and getting into those cycles. But I think it's particularly useful, for me at least, for overcoming analysis paralysis, helping me get started, or simply offering a good second opinion on some of my findings.
So my approach today really involves finding high-value work that's being disrupted, or repetitive, disliked workflows that feel like doing laundry, and then experimenting with AI or other technologies to help alleviate that.
Before you built this chatbot, what problem were you facing initially, and what downstream impacts did it cause?
Our two-person research team consults across multiple product lines and teams. Checkr has actually completed a few acquisitions lately, and while we don't formally support some of those companies, we end up consulting with some of their designers and PMs. So, Allison, my counterpart, and I are very busy. We're also responsible for handling our own strategic research.
I'm sure at least half of the audience can resonate with what I'm talking about, given that half the audience is researchers. We handle a lot of questions regarding study setup, usability tools, external recruitment, managing and issuing budgets for incentives, things like that, and even our past research. So, we're frequently distracted, and that was really the problem space I wanted to focus on.
Before you built this chatbot, how did you try to solve this problem?
We had a standing research office hours meeting, which worked well for a bit, but ultimately it received very few signups, and we would still get a ton of questions over Slack and in quick one-on-one meetings. So we eventually canceled the research office hours, continued to help people all over the place, and then made centralized research support and research insight Slack channels. I would often miss questions posted there unless someone tagged me directly. So I wanted to create a solution that would let PMs, designers, and marketers get their questions answered more quickly, without disruption for myself. That's what we tried first.
How did you know that an AI chatbot could potentially solve this?
What really happened was that Allison and I were both out of office at the same time, and it really blocked a few individuals because of how much we hold in our heads and how many different things we manage on our own. Add in that we have a pretty complex tool stack, where we'll use components of tools for different things, and it can be hard for people to understand that without our guidance.
That's when I began thinking about addressing the problem with a more scalable solution. Then I learned more about AI agents. Checkr is very AI-forward, so we're deploying AI in a lot of different contexts internally, and Allison and I are both part of a generative AI task force that did a lot of testing of various AI tooling and scoped appropriate problems to deploy it against. So I had some background there, and I thought this would be a really great space to deploy a solution and experiment. Then, having observed AI support in other contexts, I figured that with sufficient training, this would be an effective solution.
Were you skeptical about chatbots before you started to learn more and more about AI?
I've been following the AI space and debate a lot, so I definitely have had my own reservations. I've been using it for a couple of years now, and as I've deployed it in other contexts, I've absolutely seen hallucination happen, and that made me nervous because that's obviously something we want to prevent. When we talk about hallucination, we mean the model coming up with things that aren't even from the window of context it's been provided. It's making its own inferences, or it's pulling something that's potentially totally false.
I felt this was a low-stakes context to apply it to. I think it's higher stakes when you're doing this with research analysis, and applying an AI solution in a lower-risk space is actually a really good approach, because then you can start to learn the technology and how the training works, and then have more confidence to apply it in higher-risk areas. I'll talk more about training in my demo and show that to an extent. But I was definitely skeptical.
Are you surprised at how comfortable (or uncomfortable) UX researchers are with AI?
Not really. I think this lines up with what I've heard as I've talked to experts in the AI space. A lot of the limitations people face are probably more around what their organization will allow. Checkr is very AI-forward. They're allowing us to expense various AI tools and deploy solutions. We just have to be really careful with PII: anything dealing with personally identifiable information really cannot be fed into these models. And we have to take appropriate measures to make sure we're using the right tooling that will prevent that from happening.
So I know that I'm in more of a startup-mindset organization, and I know that it can be very challenging to get started at other organizations, but that's part of why I wanted to show this example: it's a pretty low-risk application. It's a good way to build expertise and also show value to your team in a way that's a little more straightforward.
How long did it take you to build this chatbot?
It was half a day of work, and then the real time is spent training it, making sure it's evolving, assessing models and how to update that. For this particular use case, it did not take me very long at all because I used a no-code solution.
How should we start to think about chatbots for the UXR use case?
If I could drive one point home, I think it would be to avoid over-engineering AI agents. Really try to focus on simple and direct solutions for straightforward use cases. This works to prevent analysis paralysis and ensure very clear return on investment.
I think it was MIT that came out with some new research claiming that a lot of companies' AI investments are not showing ROI. And I think we all know, AI or otherwise, a lot of initiatives get spun up, and if you make the scope too broad, you don't really have a clear problem you're trying to solve. If you're thinking about product development best practices and applying those frameworks, that's really the way to go. You get into trouble when you're over-engineering and trying to make an agent do too many things.
Can you show us what the chatbot looks like from the researcher's perspective?
From the stakeholder point of view, what I want to show is a few screenshots of how it's been deployed in Slack with real questions and answers that it provided.
So, the first one that I'm showing was, I think, the first question ever submitted, from a product marketer named Sydney. I think this response is cool because it shows that it's trained on organizational policies and processes in addition to the Rally setup components. I'll show later how both were accomplished, but we have an app called Lumos where you need to go to get approval for tooling, so it recommended that, and then gave the steps to add a user in Rally, which is helpful.
That one was a cool one because this is a research support channel, and I hadn't even seen the question. I was actually just tagged in by my director, who then gave kudos. Sydney responded and was like, "oh, this answers my question."
That was great because it was an immediate answer to a fairly urgent question, right? And I would not have seen it until the end of the day because I was busy with other things.
Then there was a designer named Tabor on my team who had some trouble getting some prototypes into Lina, which is a tool that we use for usability testing. I think this actually ultimately ended up being a bug, but in the case that it wasn't, it gave pretty clear guidance on how he needed to modify his prototypes in order to test well and for the latency to be good in Lina. That was a pretty good step-by-step.
But I noticed too that it was being a little bit scattered in its response. It wasn't consistent with the formatting. So, at that point, I gave it some guidelines around how it needs to answer questions and make sure that it's always step-by-step. With every response, I went in and modified more to make sure that it was providing a very consistent framework for how it answers questions.
Here's another example from Shane. These are all in Slack because I deployed the agent to answer questions in our research support Slack channel. He was wondering if there was a way to have participants sign a form. Notice that he doesn't even mention Rally, but the agent is so well trained on our tools that it knew, from the context provided, the steps to do that in Rally. We only facilitate our consent signing process within Rally, so it knew that, and then it gave him the steps to manually facilitate it, which I thought was helpful. Then it shows its reference sources at the bottom with an opportunity to provide feedback, to help train the model. So, that's an overview of how it's deployed to my stakeholders.
What differences have you seen from this chatbot versus the office hours that you initially tried?
Really the ability to get that instantaneous response. It's responding within a minute usually. With the office hours, that's a preset time and people weren't attending because everybody wants their questions answered right away. So, what it has done is it cuts down on the number of DMs, not completely, and I'm still very happy to answer the DMs that I get.
Sometimes I actually even use the chat interface myself. The bot doesn't force you to ask questions in Slack. You can go straight to our vendor's tool, Credal, and ask questions there if you just select the agent. Sometimes I'll ask it because it's a quicker way for me to query the help docs if I'm not completely sure.
I actually didn't post one of my favorite examples because I didn't want to overuse Shane, but he's been the most active in the channel, and he even asked at one point what the cost of moderated versus unmoderated external recruitment would be if he were to use the Rally Respondent integration. Because he's new to Rally, we just onboarded him, and he wanted to set aside some budget for his study. The agent responded with the accurate costs and even a quantified amount based on the number of participants he was estimating he'd include.
So that was really cool because it pulled from the help docs. I didn't have that answer in my head. I don't have all of these things memorized, but it did that. I would have found myself in the help docs trying to find that to answer his question. Then on top of that, it did the math.
And I know that some AIs are not great at math. I'm using GPT-4o. I'm actually thinking about changing the foundation model soon, but it does fine when it's not a super complex question.
I thought that that was also a really cool example, and I think that it just prevents the cognitive load. If I get Slacked that in the middle of trying to do rigorous analysis or other things I need to do in my day, it just gets distracting, and I think that that's what it helps address.
What I will say is that I didn't realize, until this week when I was reviewing it again, that I could configure the agent to do follow-ups in the Slack thread. So it would answer, and then a thread would start. The stakeholder would read the answer and be like, "great, I also have these five questions now." And so then I would have to come into the thread and answer those questions, but at least one was off the list.
So now I've actually enabled the agent to make a decision on whether it should answer in the thread. If it has the capacity to answer based on the knowledge that it has, it will start to answer. I also have the agent configured so that, as you can tell, Credal doesn't need to be tagged. It's just looking for a question mark, and so far that's actually worked really well. I haven't had too many questions come up where people haven't included a question mark, so it's been pretty good.
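For anyone curious what that trigger behavior boils down to, here's a rough sketch in Python of the decision Maria describes. Credal handles this as a no-code setting, so the function names and inputs below are illustrative assumptions, not Credal's actual API.

```python
# Illustrative only: Credal configures this behavior without code, so the names
# and inputs here are assumptions, not its actual API.

def should_respond(text: str, is_thread_reply: bool, agent_thinks_it_can_answer: bool) -> bool:
    """Decide whether the research-ops agent should reply to a Slack message."""
    if not is_thread_reply:
        # Top-level messages: no @-mention needed, a question mark alone triggers a reply.
        return "?" in text
    # Thread follow-ups: only reply when the agent judges its knowledge sources
    # can actually cover the question; otherwise leave it for a human.
    return agent_thinks_it_can_answer

# Example: a top-level question triggers the agent, a plain statement does not.
print(should_respond("How do I get access to Rally?", False, False))  # True
print(should_respond("Thanks, that worked", False, False))            # False
```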
So I deployed it to Slack. The tool that I'm using and will show to you all is Credal. That is a tool that my organization adopted that has an enterprise layer over what you're doing with various AI models.
To my understanding, it most critically prevents PII being fed to any model that's been selected, because we cannot have these models training on PII or certain proprietary pieces of information. So, Credal is that layer, and then it also provides an interface to set up agents and actions.
Agents are what I'm going to demo today. It's really a place for you to select a model. You give it a very specific prompt and context that it pulls from to handle various tasks. Credal integrates with Slack, and then the way I had it configured was that it would answer questions in our research support channel.
Can you walk us through how you built this chatbot and what kinds of data sources feed into it?
Credal is a specific tool, or vendor, that we have access to at Checkr. To my understanding, there are other tools, like Glean and various competitors, that other companies may adopt. I'll get into some alternatives to Credal if Credal isn't deployed at your organization. Checkr and Credal are strong partners. We've really helped inform many aspects of the tool, and they've informed so much of our AI practices and standards. It's been a really great partnership.
So, within Credal, you can create agents. I have several that I've either been privy to or I'm working on, and the one that I'm gonna show today is my user research operations support agent. And that's what's powering what I've already shown in Slack.
Really the key thing to call out is the model. As I said, I started with GPT. I think it handles a lot of these generative questions well. I'm probably going to evolve this to a more updated GPT. GPT-5 is going to take a little more time with its responses; it's a real thinker, according to some. We have AI solutions engineers at Checkr now, which is great, so I have a lot more guidance on how I'm making these decisions, but it's been running on GPT-4o. I'm probably going to change that soon.
Within the agent, you set the creativity, too, leaning toward more creative, balanced, or precise. I chose the balanced approach because I wanted it to have the ability to make some of those inferences I just showed and to handle more difficult questions with a more creative output. But I also wanted it to stay high on accuracy, and I can talk about how we think about and look for accuracy.
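For readers who want to picture what a creativity setting typically controls, here's a minimal sketch. The preset names mirror Credal's options, but the temperature values and the direct OpenAI call are assumptions; Credal abstracts all of this away.

```python
# A minimal sketch, assuming the creativity presets roughly map to sampling
# temperature. The values and the direct OpenAI call are illustrative only;
# Credal handles this configuration without code.
from openai import OpenAI

CREATIVITY_PRESETS = {
    "creative": 0.9,  # more varied phrasing, more willing to infer
    "balanced": 0.5,  # Maria's choice: some inference, still grounded
    "precise": 0.1,   # near-deterministic, sticks closely to sources
}

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_agent(question: str, creativity: str = "balanced") -> str:
    """Send a single question to the model with the chosen creativity preset."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=CREATIVITY_PRESETS[creativity],
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```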
Within the prompt, this is the space to define the agent's role, the goal for the agent, and then context. I'm a little embarrassed showing this context blurb because it's very long, and I'm working on actually getting it out of here and into another doc that I've created, which has our best practices, policies, etc.
That doc is a little bit of a better framework because it's organized with headers, which makes it easier for the LLM to parse. I just haven't had a lot of time to get to that, because as you can see, I'm just running on about our process, which is a little insane. But I'm working on getting it out of here and into the doc. The model's context window is fairly big, and the blurb is not enormous and is very targeted, so it doesn't really matter functionally that it's here; it just makes it a nightmare for people who want to improve the agent.
So that's why it's been a little bit of a backlog item for me, but this part's important where I give it very clear instructions on what it needs to do before it's sharing an output.
As much guidance as you can give an agent is better. You can see that I have it set to scan these user research policies at Checkr, and our best practices for research analysis doc, to incorporate any relevant policies and guidelines in the response.
I've actually recently learned through a little bit of review that some docs can be pinned, so that the LLM can consistently scan those and leverage them in its output.
I'll talk about that more, and why it might matter, in a little bit, but I have it set to default to scanning linked Google Doc files. Again, we'll pin those, and then it uses the most recent updates from the Help Center articles and training guides to ensure current and accurate information.
This is what I was talking about earlier when I noticed that its formatting needed some improvement. So I told it to always guide with clear actionable steps, preferably in a bulleted list when detailing operational tasks.
I've worked with a lot of different AI experts at Checkr on other projects, so I was able to grab some instructions from those projects where I felt they were relevant. These two are examples: it's a good practice to tell the agent it needs to identify and communicate any gaps in the context that prevent a full answer, prompting the user for additional information if needed.
It needs to cite specific information from the context where possible and avoid introducing information not present, which is basically reiterated here. So, you have to bully the agent sometimes to not guess, infer, or make up facts. You need to respond with, "I don't have enough information to answer this" if there's no explicit context provided that answers that question.
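To make those guardrails concrete, here's a condensed sketch of what instructions like these can look like when written out as a system prompt. The wording below is paraphrased and abbreviated, not the actual prompt from Maria's Credal agent.

```python
# A condensed, paraphrased sketch of the kinds of guardrail instructions
# described above; the real Credal prompt is longer and organization-specific.
SYSTEM_PROMPT = """
You are a research operations support agent for the UX research team.

Before sharing an output:
- Scan the linked research policies, best-practices docs, and Help Center
  sources, and incorporate any relevant policies and guidelines.
- Answer operational questions with clear, actionable steps, preferably as a
  bulleted list.
- Cite the specific source each step came from, and avoid introducing
  information that is not present in the context.
- Identify and communicate any gaps in the context that prevent a full answer,
  and prompt the user for the missing details.
- Do not guess, infer beyond the sources, or make up facts. If the context does
  not answer the question, respond with:
  "I don't have enough information to answer this."
"""
```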
When you're in the Credal interface for chat, when you select an agent, you'll see suggested questions to ask, and I thought that this part was important because I want people to understand the scope of what this can answer. This agent is not set to query all of our research docs and to give people summaries of research we've conducted at Checkr.
There are other ways to get at that through Credal and agents that we're working on to potentially address that specific solution. But, going back to my guidance earlier, I think it's best to keep the scope really focused and to make sure that the context is also focused. I did not want this to be an agent that handles every single thing that is thrown at us. I really wanted it to focus on supporting and unblocking people in research operations. That's really the goal of this bot, and that's the suggestion that I give people when they're going into Credal and selecting this agent.
I haven't worked with actions yet, but this is where we get into having the agent do things on your behalf. There's a lot of new technology around MCP (Model Context Protocol), which started with Anthropic. It's a framework for integrations where these agents can actually begin to interact with other agents or facilitate actions on your behalf.
This part here, the data sources, is important. It's a little messy because I ran into a couple of issues with some websites preventing me from doing full web crawls. But in most scenarios, I was able to just fully link the Rally Help Center, which it then conducts a web crawl of. I can set it to go down a specific number of layers, so it'll click through and then crawl up to 50 different help center articles. I actually ended up having to do a lot of individual articles. It tells me when some of these help articles are unhealthy and need to be deleted, so I can make those edits and then update my deployed agent.
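Under the hood, that kind of help-center crawl is conceptually simple: start at an index page, follow links, and stop at the article cap. Here's a rough Python approximation; the URL and the 50-page budget are placeholders mirroring the numbers above, since Credal exposes this as a depth setting rather than code.

```python
# A rough approximation of a capped help-center crawl. The start URL is a
# placeholder, and the 50-page budget mirrors the article cap mentioned above;
# Credal configures this with a depth setting rather than code.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

START_URL = "https://help.example.com/"  # placeholder, not the real Help Center URL
MAX_PAGES = 50

def crawl_help_center(start_url: str = START_URL, max_pages: int = MAX_PAGES) -> dict:
    """Fetch up to max_pages help articles reachable from the start URL."""
    seen, queue, pages = set(), [start_url], {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        response = requests.get(url, timeout=10)
        if response.status_code != 200:
            continue  # an "unhealthy" source: skip it and flag it for cleanup
        soup = BeautifulSoup(response.text, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)
        for link in soup.find_all("a", href=True):
            next_url = urljoin(url, link["href"])
            if next_url.startswith(start_url) and next_url not in seen:
                queue.append(next_url)
    return pages
```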
You can see that I have some of these internal docs that I talked to you all about. I experimented with adding not just our best practices around operations, but a little bit of context around when to use certain unmoderated methodologies. I don't want this to be a methodology consultation interface, but it does need to have that context when it's making inferences on where to go with specific tooling and how to leverage those tools effectively.
So, as I was reviewing some responses, I realized that this would be a helpful thing to add.
This is basically how I have it configured as far as data sources go. I think it's really inefficient to have these individual links, because as you can tell, sometimes they change and go unhealthy. So, one thing I commended Rally on was the fact that I was able to just do a web crawl of your help center; I'm not blocked from doing it. I just have that linked, and then it updates based on that.
I revisit these as much as I can, and then I add more sources as needed, but again, you want to keep this pretty tight because too many sources can begin to overwhelm and redirect the agent. I did see that as I added more sources. I then had to add more guidance and context around how to deal with those sources. That combination of things with training has worked extremely well, and I rarely see issues with the output. I think the issues happen more when folks are asking questions that aren't very clear, or if it's asking the agent to do too many things.
So, I did have a PM come in and say, "Can you review my study setup in Rally, as well as this research plan, and tell me if I have everything set up correctly?" I haven't set this agent up to go into those tools and do those things. Maybe one day, and I do think that would be an effective, very specific use case: just reviewing things in Rally, being able to refer to how a study is configured. If anybody here has ideas on how to do that today, let me know. I'm not sure how to do that. I think Rally is building toward that future, and we've talked about it a lot.
It definitely has its limitations, but what it's really good at is scanning these sources in the order that I've asked it, and again we can pin these docs to bookmark sources, so that the LLM is always scanning that.
And then as I deploy changes, I'm going to want to run a bunch of tests and monitor it. Credal has a tab for monitoring, where I can review the responses, share feedback (which is really thumbs up, thumbs down), write in more context, and then debug the message. I've found that incredibly helpful for when I've noticed it's going off track and I want to redirect it.
Besides giving feedback, how else do you train the chatbot?
There's some guidance that I got from a solutions engineer in the AI space. He's helping with a lot of our internal AI projects at Checkr. It's a pretty good standard to take 10 sample Q&A tests across three to five models. And when I talk about models, I'm talking about the model that will process the natural language request.
Over here, you'll want to test against three metrics: accuracy, so you're getting the correct answer; hallucination, meaning am I getting partially correct answers with lots of made-up extra nonsense, which is a problem; and latency, so what the wait time looks like from query to response across the models you're comparing.
So, as you can see, I have the option to bring in models from OpenAI, Anthropic, Google. Gemini is really exciting. I love using Gemini models. I actually probably wouldn't select that for this task, though.
Having gone through testing and consultation with our folks at Checkr, I've found the GPTs from OpenAI are pretty good for this.
This guidance is from Alex at Checkr. His recommended order is to first compare model families, so the flagship level from each family against one another: GPT-5 versus Claude Opus 4.1, or GPT-4/4.1 versus Claude Sonnet 4, to compare those more directly. Then compare model generations within a family, so GPT-4o versus 4.1 versus 5 (I'm probably going to switch this one to 4.1 based on some testing that I've done). And then compare model versions within a generation, so GPT-4o versus GPT-4o mini, or Claude 3.5 Sonnet versus Claude 3.5 Haiku. These models are constantly coming out and changing, and it's really hard to keep up with what's good for what. So I think the best thing you can do is run your own tests with your own sample of your data. Ask consistent questions, really look at the output, think about how you would answer that question, and then grade it. Again, you'll want to keep those three metrics in mind: accuracy, hallucination, and latency.
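If you want to run that comparison yourself, the loop is small enough to sketch out. The model names and the ask_model() helper below are assumptions you would swap for whatever your platform exposes; accuracy and hallucination are left as human-graded fields, matching the "grade it yourself" advice above.

```python
# A bare-bones version of the evaluation described above: the same ~10 sample
# questions against a few candidate models, timing each response and leaving
# accuracy and hallucination to be graded by a human. ask_model() and the model
# names are placeholders for whatever your platform exposes.
import time

SAMPLE_QA = [
    {"question": "How do I get access to Rally?",
     "expected": "Request tool approval first, then follow the steps to be added in Rally."},
    # ...roughly ten question/expected-answer pairs drawn from real stakeholder asks
]

CANDIDATE_MODELS = ["gpt-4o", "gpt-4.1", "claude-sonnet-4"]  # three to five candidates

def ask_model(model: str, question: str) -> str:
    """Placeholder: call your provider or AI platform's test interface here."""
    raise NotImplementedError

def run_eval() -> list:
    results = []
    for model in CANDIDATE_MODELS:
        for item in SAMPLE_QA:
            start = time.perf_counter()
            answer = ask_model(model, item["question"])
            latency = time.perf_counter() - start
            results.append({
                "model": model,
                "question": item["question"],
                "answer": answer,
                "latency_s": round(latency, 2),
                "accuracy": None,       # human grade against item["expected"]
                "hallucination": None,  # flag content unsupported by the sources
            })
    return results
```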
How have you measured the impact of the chatbot?
I didn't show the analytics tab, but we are actually able to see those analytics in a more direct way.
So, in this particular month we've only had three requests, so not a ton. But you can see the number of requests, the counts of positive feedback, the counts of negative feedback, and the overall cost, which I think is really cool that Credal provides. That's a good reference point, as you continue to evolve the agent, to see whether those trends are changing and where people are actually interacting.
That's how I'm doing it. I don't know if that helps answer the question. Obviously not everyone here is gonna have Credal, and we can talk about some alternative options.
I really am checking this tab. I read everything in research support, and I'm making my own inferences about what it's answered: did it really answer that person's question? I'll follow up with them and ask for feedback. That's the way I'm handling it right now.
Users are able in Slack or in the Credal interface to thumbs up, thumbs down, and then provide comments and feedback. That actually helps train the overall model. I have the agent set so that any user can help train in that fashion.
You can set guardrails where you're really the only person reviewing and training, but I prefer it to be more open, with feedback from everybody and more training from my stakeholders as well. So, that's awesome.
I just wanna pull up this one screenshot because you can see at the bottom here the thumbs up, thumbs down, or the X, and the option to provide feedback. I think it's really great to let the people asking the question indicate whether or not they got the right answer.
What limitations are there with the chatbot?
Maybe I'll just try to pull up Sain's question, because I really like how it speaks to the limitations. I said this has a very narrow focus, and Slack has its limitations with the integration. If you're actually using the Credal interface, you can attach documents for review within an agent. But if you're in the normal chat, you can attach all kinds of things, and it can scan those docs and then give you feedback.
So I think it's interesting, because I'm working with a lot of other AI tools, such as Notebook LM. That has been a really cool use case that I would love to share more about sometime, to help get more of your stakeholders on board with interacting with your research. Because they can ask questions in their own time and you can link as many sources as you want. That one's harder to train and guide because they don't really have an agentic interface for that. But I was thinking that I really wish Credal could pull from those other AI use cases that I've deployed and actually be able to deep search within those and search across a lot of the things that I'm publishing.
Right now I have to manually link the data sources, which I think is great because you want to keep the context window contained, but I'm still in the process of figuring out how to set up agent-to-agent workflows where the bot can actually do things on your behalf. One thing we've talked about with the Rally team is that it'd be great to have a future where someone isn't just asking how to get added to Rally and create a study; once they ask, that's actually happening in the background on their behalf. The agent would spin it up for them, give them the direct link to the study, and say, hey, start here, go fill all this out, you're in the tool, you've been approved in Lumos, with that extra step of Allison and me reviewing and approving.
But having those things work in the background, I think would help a lot, because those are very manual and tedious tasks.
This can't accomplish that today, but I think we're pretty close to that reality. Having it already trained on all of our processes and on where we go for what is the most important step, because if we ever get to the point of it running background tasks, I want it to be very well trained on where we go for what and how to do those things, so that it's not making mistakes and doing unnecessary background tasks we didn't ask for.
So that's a limitation.
How does the chatbot impact what you do on a day-to-day basis?
At Checkr, I'm a senior researcher. We have a staff researcher. We don't have a formal Research Ops team. We would love that, but because of that, Allison and I handle so many operational tasks. Even thinking about the question of, well, we don't want it to replace Research Ops fully, whichever space you're in. What this enables is time spent on higher-value tasks, because no one's performance review at the end of the year is going to include "I set up that Rally study" or "I just generated a study for my coworker Tabor to fill out."
You're gonna be graded in terms of the value you provide to your organization. What strategic things were implemented because of your work, right? I'm reminded of that all the time by my manager because I'm always trying to help people in all of these areas, and she's like, try to really focus on how you're contributing to our strategy in two quarters. What's on the roadmap because of your research. It's very important that you support all these people, but at the end of the day, that's not what you're graded on in your role.
I don't want AI to replace Research Ops, but I certainly think we'd all be in a better place if we weren't spinning up links and doing really basic stuff for design and product stakeholders. Just because getting started with the tool is a daunting task for someone. We're all lazy in lots of different ways. (I don't wanna say that my counterparts are lazy because they're not.) I have things I'm very lazy about and do not want to do, and it would be nice to have those things taken care of in the background.
Let's cut down on all of these small things that we're having to fill our days with and really focus on the problem space that we're trying to understand, the strategic projects and things that we're trying to unblock for the organization, for our stakeholders, for our customers and our users, right?
So, that's how I think about it. I think that my guidance earlier was keep your agents boring. Let's deploy them where there's a really good scope, a really narrow scope and a really strong problem present, and then test and iterate on that and make sure that it's actually providing some return on investment.
I'm still evaluating that with my agent, right? But in general, I've found that it's pretty easy to get implemented, and it's doing a lot on my behalf, so I'll keep it around. But I haven't really evolved it in a huge capacity because I haven't really seen a strong need for that.
So, what are those administrative tasks that are time-consuming for Research Ops or senior researchers and that they don't necessarily need to be doing? Those kinds of tasks can be supported by AI, and that enables you to figure out which business objectives you can contribute to, so that when it comes time for your annual review, you can point to the projects you had the time and brain power to actually sit down and focus on, the ones bringing the business forward.
Where do you see the long term vision of AI agents at your organization?
It's an interesting question. I'm always really bad at long term vision, five year plan, that stuff, but I'll try to answer that.
What I've noticed, at Checkr at least, is that we're heavily investing. We've brought on an AI solutions team that is internal-facing, so they're helping deploy agents and AI where it makes sense and really helping scope out some of our problem areas.
I don't know if I can speak to Checkr as a whole. I just know that the company is committed and continues to be. We've had this generative AI task force that came very early in the conversations about AI. The company is investing a lot of resources toward it, and not in a capacity that's really meant to replace people. They really want people to be strong operators of the technology and know what contexts are appropriate to deploy it in.
I think that AI is the hot topic right now, but, when we look at the last 10 years of tech, there have been so many different technologies thrown at us. Things are evolving. Who knows where even AI is going in the next five, 10 years. So, I think what's important is to stay up to date with technology, and what solutions make sense. But I think it's really important to stay focused on proper operational and product development practices.
I saw a post recently where someone was calling out that there's been so much buzz about AI that we haven't even talked about the fact that product development processes need some support. We haven't really changed the status quo of product development and the standards there for a long time, right?
Don't let those conversations fall by the wayside. I think that Checkr is pretty good at making sure that we remain focused on the biggest problems for our customers. There's obviously a push to consider the technology, but not to push it onto users, whether internal or external, just for the sake of saying that we do AI, right? I'm gonna try to stay with organizations that have that philosophy. We want to enable use of the most evolved technology and support our employees in doing that, but we also want to be mindful that these solutions don't make sense in every single context, and you need to be really thoughtful about how you're deploying all of it.
We're in a very highly regulated space, so with the legislative stuff coming for hiring practices and background-check practices, we're going to have to pay attention and follow those guidelines. That will obviously impact how AI is used at my company.
Connect with Maria
Follow her on LinkedIn and say hello!
Thank you, Maria!
A huge, sincere thank you to Maria for this deep dive demo into her experience building custom chatbots for Research Ops support. As these tools get easier to use, the expectations for deployment will rise, and Maria's experience can serve as a guide for our own experiments with AI research assistants.