Scott Garrison on AI's impact on the future of Research
Scott Garrison, a Market Research Consultant who has been leading insights in both User Research and Market Research for more than 12 years, joined the Rally team on July 27 for an AMA to discuss AI’s impact on the future of research. Scott covered topics like scaling research with AI, AI’s effect on Researchers’ day-to-day lives, tips for evaluating AI tools, and accounting for the inherent risks of AI.
If you missed it, or want to revisit the highlights, read our recap below. If you’d like to watch a recording of the full AMA, follow this link.
🔑 AI will become most useful in automating manual tasks and opening the door for Researchers to add more value by focusing more on strategic projects, partnering with cross-functional counterparts and stakeholders, and thought leadership.
🔑 It’s vital to have a firm understanding of both research and AI to effectively utilize these new tools. Treat AI as you would any new research methodology or software - learn as much as you can before trying to utilize it for work.
🔑 Don’t let your use of AI erode your foundational research skills! These are still vital for our work, and can be used to set up proper guardrails and checks and balances for AI.
🔑 When first experimenting with AI, use dummy data or synthetic data to reduce the risk of exposing real, private data to unnecessary and unknown risks. Make sure to follow your company’s policies and that your stakeholders are comfortable.
🔑 AI tools will have bias – we just don’t always know what that bias is. That’s why you always need to review and add a human element to what you receive from an AI tool to ensure you are identifying potential risks and minimizing them as much as you can.
🔑 Ultimately, those who embrace AI and are willing to evolve with it are going to get ahead.
Who is Scott?
Scott is a seasoned New York-based Researcher and founder of the “very creatively named” Garrison Research. “That’s what happens when you form an LLC without really thinking long term about what it will be named,” Scott joked.
“My background has always been in Research,” he said. He began his career in Research at a company called SKIM, which is an international market research consultancy. After leading research teams in both NYC and London, Scott transitioned to leading a research team at Instagram and later Robinhood. Along with his time in Fintech, he also spent time leading research for a startup in the web3 space.
In January of this year Scott decided to branch out on his own with his consultancy. “That’s really when the fascination and kind of obsession with AI kicked in.”
Scott’s journey with AI
Scott, like many people, got into AI with the launch of ChatGPT. As he began to explore AI more, he realized “AI has been a part of our lives for years and years” in things like Apple’s Siri and Amazon’s Alexa. Now, with ChatGPT, Scott said we’ve “taken a quantum leap forward.”
Inspiration to further explore AI came from an unlikely source — a friend's journey into AI photography. Scott said he was intrigued by the transformative potential of AI. His hands-on approach extended beyond voice assistants and photography tools to a range of emerging technologies. This curiosity became a tool itself, enabling Scott to leverage AI in his professional life. "You can't really scale one person, but with a lot of AI you can,” said Scott.
How can you scale research using AI?
The focus, according to Scott, isn't just about data analysis. Instead, it's about harnessing AI in the generation of research tools. For example, "how can I simplify and make a research proposal? Or how can I design a survey, questionnaire, or usability test through AI?"
Scott explained that automation of manual tasks is a prime opportunity to implement AI in Research. A great starting point, he suggested, is to delve into basic tools like ChatGPT and play around with prompts. “Go into one of these tools, start putting in prompts, and see what happens.”
However, Scott acknowledged that AI isn't a magic button that instantly gives you exactly what you need. “It’s never something where you hit ‘Enter’ and you have a perfect solution.” Instead, he underscored the importance of human touch. "It's about mixing the two together and using AI as a starting point, and then adding a human touch on top of it."
What impact did AI have on your own work and productivity?
For Scott, AI has streamlined many of his processes and increased productivity. “It's not that productivity suddenly allows you to be lazy, but it does allow you to speed things up.”
Scott shared how he currently employs AI for more efficient work with his clients. He was recently provided with previous questionnaires from a new client, which he used to train AI models. “I used ChatGPT and trained it on the language and verbiage along with how the client normally structured questions and was able to produce something much faster than I would have done normally.”
How can you leverage and train ChatGPT to help build discussion guides and research plans?
The first thing Scott recommended doing is training ChatGPT to assimilate a specific language and style. This approach involves uploading several documents and instructing the AI, in this case ChatGPT, to interpret them. “You can tell ChatGPT, ‘I'm going to upload five to ten documents and all I want you to do is understand and interpret the language and style,’” Scott explained.
Once the AI understands the given material, it can then be directed to draft new questionnaires or discussion guides based on the acquired knowledge. “Now you can tell ChatGPT, ‘I’ve now finished uploading this material. You are now Scott (or a researcher). Based on what you've seen in the past, please write a new questionnaire or a new discussion guide about ___."
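As a rough illustration, the two-step flow Scott describes maps naturally onto a chat API's message list. The sketch below only builds the messages (it makes no API call), and the function names and example documents are invented for illustration, not Scott's actual workflow:

```python
# Sketch of the two-step "train on style, then draft" workflow using the
# common chat-message format ({"role": ..., "content": ...}).

def build_style_training_messages(documents):
    """Step 1: share past questionnaires and ask the model only to
    absorb their language and style."""
    messages = [{
        "role": "user",
        "content": (
            "I'm going to upload several documents. All I want you to do "
            "is understand and interpret the language and style."
        ),
    }]
    for doc in documents:
        messages.append({"role": "user", "content": doc})
    return messages

def build_draft_request(topic):
    """Step 2: once the material is uploaded, ask for a new draft in
    the learned style."""
    return {
        "role": "user",
        "content": (
            "I've now finished uploading this material. You are now a "
            "researcher. Based on what you've seen in the past, please "
            f"write a new discussion guide about {topic}."
        ),
    }

# Toy past documents standing in for a client's real questionnaires.
past_docs = ["Q1: How often do you ...?", "Q2: On a scale of 1-5 ..."]
messages = build_style_training_messages(past_docs)
messages.append(build_draft_request("onboarding friction"))
```

Keeping each client's documents in a separate conversation, as Scott suggests, is simply a matter of never mixing one client's `past_docs` into another client's message list.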
If you’re in an agency, Scott said you can use ChatGPT’s unique capabilities to cater to specific clients. By creating separate chats for different clients, ChatGPT can be trained uniquely for each, preventing unnecessary overlap and maintaining client-specific styles and languages. Scott admitted that overlap could sometimes be beneficial, but it can also push outputs to extremes, meaning a thoughtful approach to training AI is essential. However, this must still be done in compliance with company policies, and clients must be informed of it.
Accounting for risks when using AI for Research
When it comes to handling risks associated with AI in research, Scott likened it to understanding the pitfalls of recent technological advancements like social media or crypto. “There can be many negative consequences and risks when you don’t fully understand something.”
One of the central challenges highlighted by Scott is the black box problem – input goes in, something comes out, but the process that led to the result is unknown. He cited a well-known instance in which a lawyer used ChatGPT for legal research and it produced details about non-existent legal cases.
Data privacy and security is another concern Scott stressed, citing a recent Wired article on the potential risks posed by ChatGPT and browser plugins. "Even if ChatGPT and OpenAI are secure, the plugins aren't necessarily," Scott explained. It’s necessary, then, to understand what the risks of AI are before diving in headfirst.
Scott said he refrains from using AI tools for data analysis due to these concerns as well as to align with his clients’ policies and reduce the risk of exposing sensitive information. Scott recommends running different experiments with AI using dummy data or synthetic data. “Using real data can be a big risk in terms of what you’re exposing that data to. Tools like this have the potential to be really powerful, but we’re still at the infancy stages of AI,” said Scott. “You need to be extremely careful about what you’re inputting into these tools.”
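One low-risk way to follow that advice is to fabricate dummy survey responses rather than exporting real ones. Below is a minimal sketch using only Python's standard library; the field names, segments, and rates are made up for illustration:

```python
import random

def make_dummy_responses(n, seed=0):
    """Generate fake survey responses so experiments with AI tools
    never touch real participant data."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    segments = ["new user", "power user", "churned user"]
    responses = []
    for i in range(n):
        responses.append({
            "respondent_id": f"dummy-{i:04d}",      # clearly not a real ID
            "segment": rng.choice(segments),
            "satisfaction_1_to_5": rng.randint(1, 5),
            "would_recommend": rng.random() < 0.6,  # arbitrary 60% base rate
        })
    return responses

# 25 fake respondents you can safely paste into an AI tool to test a prompt.
sample = make_dummy_responses(25)
```

Because the generator is seeded, you can rerun the same experiment against different tools with identical inputs, which makes comparisons between tools fairer.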
Should only seasoned Researchers utilize AI?
As a newer researcher, you may not be as familiar with what you are inputting into an AI tool and therefore may not catch errors or things missing from the output. The more seasoned of a researcher you are, the more likely you are to be able to account for and fix AI errors. In this case, it’s essential to have a working knowledge and understanding of the subject matter you are introducing AI to.
Additionally, a newer researcher using AI may not actually save as much time as a seasoned researcher, because they have to spend extra time checking the outputs from an AI tool and making sure they are correct. Or they may not have experience training an AI tool and have to adjust the response to fit the language and context they are working in, ultimately creating extra work instead of reducing it – at least at first.
“That being said, there are still a lot of things you can learn from, and about, AI,” said Scott. “Start taking courses on AI so you can learn it and better understand it.” Scott recommends pairing your research training and education with AI training so that when you bring the two together, you’ll have more experience with both and be able to ensure a more accurate and effective output.
Why you should be transparent about your use of AI
“I’m a big proponent of being honest and sharing what you’re doing,” said Scott. By being fully transparent about using AI, you can avoid scenarios like these:
- Unknowingly exposing data or trade secrets, or doing anything that might go against company policy; being open gives others in the company the chance to flag issues early.
- Cross-functional teams recognizing changes in your work produced by AI (e.g. wording, formatting, repeated mistakes, etc.) and starting to question your work and, in turn, you.
- Stakeholders assuming your work must be too simple as you’re now doing it more quickly, and either adding more requests to your workload or expecting projects to be done sooner than you can manage.
- Running research with stakeholders who may have already developed negative feelings towards AI, and thus losing current or future buy-in that could have been vital to your research function.
All these scenarios could greatly impact your position, potentially even leading to loss of responsibilities, trust, work-life balance, or your role entirely. “For those reasons, my advice is to always openly communicate,” said Scott. “There may be issues or things that they are concerned about that you wouldn’t know if you didn’t communicate up front.”
By automating the more manual tasks, AI can help researchers spend more time on strategic initiatives and projects and in turn demand a seat at the table, said Scott. Communicating with stakeholders that you are using AI, how you’re using it, and the successful outcomes you experience from using it (e.g. projects done quicker, cost savings, etc.) enables you to show stakeholders you have the capacity and ability to be a strategic partner.
Advice for those interested in using synthetic data
If you choose to experiment with synthetic data / user tools, Scott recommended first introducing AI ‘participants’ to a study where you still conduct research with real human participants as well. This will allow you to properly compare the results of each. You will be able to:
- Identify strengths and weaknesses of using synthetic data
- Identify potential risks and develop ways to reduce them
- Reduce the chance of over-relying on synthetic data and insights
“I think there are some really cool and exciting elements to this, but I don’t think we are 100% there yet,” said Scott. “I would caution you to be careful (in the short-term).”
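As a hedged sketch of what that side-by-side comparison might look like in practice (the panels, ratings, and metric here are invented for illustration), you could compare the answer distributions from a human panel and a synthetic panel on the same survey question:

```python
from collections import Counter

def rating_distribution(ratings):
    """Share of responses at each 1-5 rating."""
    counts = Counter(ratings)
    total = len(ratings)
    return {r: counts.get(r, 0) / total for r in range(1, 6)}

def total_variation_distance(dist_a, dist_b):
    """Simple 0-1 measure of how far apart two rating distributions are:
    0 means identical, 1 means completely disjoint."""
    return 0.5 * sum(abs(dist_a[r] - dist_b[r]) for r in range(1, 6))

human = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]      # real participants (toy data)
synthetic = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]  # AI 'participants' (toy data)

gap = total_variation_distance(
    rating_distribution(human), rating_distribution(synthetic)
)
# A large gap is a signal to dig into where the synthetic panel diverges
# before relying on it alone.
```

In this toy example the synthetic panel skews noticeably more positive than the human one, which is exactly the kind of weakness a paired study is meant to surface.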
How do you avoid bias while using AI tools?
There are some who think a great benefit of AI is that it can help remove human bias. “I don’t think that’s true,” said Scott. “There’s a lot of risk to biased data coming out of AI tools.” AI models will reflect the bias of what they’ve been trained on, said Scott. And going back to the concept of AI tools being black boxes, we don’t really know what some of those added biases may be.
Ultimately, AI tools will have bias – we just don’t always know what that bias will be, said Scott. “That’s why you always still need to review and add a human element to what you receive from an AI tool to ensure you are identifying potential risks and minimizing them as much as you can.”
The future is bright though. “As we get better tools and better train new models, we’ll hopefully get to the stage where these tools will be able to automatically reduce the chance of biasing participants,” said Scott.
How to evaluate AI tools and find what works best for you
“We’re at a stage right now where every day there are seemingly 30 new tools coming out which is really cool, but also kind of scary. It means you’re never going to know all of them,” said Scott. “You don’t want to get to the point where you’re spending all your time experimenting and looking at new tools rather than actually doing something.”
Here’s what Scott recommends for evaluating AI tools:
- Primarily utilize the mainstream tools, at least to start. Major players like ChatGPT, Bard, or Claude are big for a reason. They are often more stable and have been subject to more experimentation and testing than newer tools. Additionally, more mainstream tools are less likely to fail or be bought out in the future. Despite this, Scott still recommends exploring other tools to find what works best for you, your working style, and your customers.
- Experiment yourself, but also use learnings from others' experimentation. For every tool, there are plenty of people who have already spent the time vetting and experimenting with it. Utilizing their research and recommendations can greatly reduce the time you spend navigating the overwhelming amount of new AI tools.
- Research! Many sites are touting new AI innovations as the latest and greatest, but it’s still important to carefully evaluate the pros and cons, opportunities and limitations, of each you hope to use.
“To ensure that what I am experimenting on and learning from is going to be there in the future, I do tend to default back to some of the bigger tools,” said Scott. But if you’re looking to try newer AI tools or branch away from the major players, Scott recommends using Top Apps as a starting point. A few other resources he recommends are @aitherevolution, the MIT Technology Review, Medium, and Google’s AI blog. Lastly, The Age of AI and Our Human Future is a fantastic book that Scott suggests reading.
What does the future of research look like when it comes to AI?
“I think you’ll see the number of people uninterested or unwilling to use AI in their work diminish over time,” said Scott. “Saying you’re not going to use AI or adopt it is like burying your head in the sand.” Alternatively, those who embrace AI and are willing to experiment with it and learn from it are going to get ahead, he continued.
Scott likened the growing use of AI to when companies started offshoring work. “I think we’re going to see increased automation of basic, manual tasks (to start),” said Scott. “This also means that the strategy, thought leadership, and partnerships with stakeholders will become increasingly important.”
In terms of loss of jobs – something that many in the User Research and Market Research world fear – Scott said he sees analyst-level roles or programming / data analysis roles most at risk of changing or being replaced in the near-term. An additional thing Scott said he foresees – and which he finds most frightening – is a future where people aren’t learning proper methodologies, procedures, checks and balances, and other foundational research principles because they have become over-reliant on AI. This can ultimately lead to poor, unethical, ineffective, and potentially harmful research.
What excites Scott about AI?
“The ability to automate things and make our lives easier is really exciting,” said Scott. By using AI to do things like, for example, running numbers, Scott said he can then focus on the things he enjoys more like strategy consulting.
Scott mentioned a meeting that occurred recently at the White House where leaders of seven of the top AI companies in the U.S. gathered to make voluntary commitments to new standards for safety, security, and trust. Scott said that it’s a step in the right direction but is still too early to see how effective it ends up being.
Regardless of the outcome of that meeting, Scott said that the increase in energy and money being poured into AI makes him hopeful that there will be some really impressive and exciting developments in the near future. “The fact that AI and how to use it safely are top of mind and that people are working toward a better future through AI excites me and gives me confidence that AI isn’t going to end the world or anything like that.”
Will AI replace User Research functions?
“I don’t think that’s the case at all,” said Scott. “I think AI will give us more tools to effectively conduct research.” Scott said AI provides a new playground that Researchers need to be exploring. One thing Scott recommended is to not put all your eggs into the AI basket or into the basket of a specific AI tool.
Scott’s advice? “Try AI out. Get into it. But make sure you are a strong researcher who can add necessary guardrails and make sure that everything you’re doing is not just to cut costs and improve your P&L, but that it’s actually creating a better product and experience for your customers.”
Work with Scott
As Scott began experimenting with AI, he not only started talking with clients, but also other researchers. He quickly realized that nearly everyone, from researchers to business executives to aspiring entrepreneurs, was interested in how AI can help understand users and markets. So, he decided to share his findings both through his consultancy, Garrison Research, and through his writing and educational courses.
His first course was recently published on Udemy and is aimed at both researchers and non-researchers looking to get the lay of the land of AI and a better understanding of its basic elements.
“My goal in all my work is to meet more people and learn from others,” said Scott. “I would love to hear others’ experiences and advice so please reach out to me if you have anything you’d like to discuss, or to teach me.”
Thank you, Scott!
We’re grateful to Scott for joining us and sharing his thoughts and experience with AI, specifically regarding User Research and Market Research. If you’d like to watch the full webinar, follow this link.
Additional AI content and resources
Here is some additional content and resources on AI that Scott recommends checking out. Sources that Scott cites during his AMA are marked by a 🎤.
- Hard Fork episode with the CEO of Anthropic 🎤
- Atlantic article on how the world’s major military powers are introducing AI into warfare 🎤
- 2022 Expert Survey on Progress in AI – includes P(doom) 🎤
- ChatGPT Has a Plug-In Problem 🎤
- State of User Research 2023 report, by User Interviews 🎤
- How AI will reinvent the market research industry, by Qualtrics 🎤
- Top Apps 🎤
- MIT Technology Review
- Medium - AI
- Google’s AI blog
- The Age of AI and Our Human Future