
Can Large Language Models (AI) Replace your Market Researcher?

 Short answer - not if you want reliable, meaningful insights

ChatGPT and its fellow LLM AI tools have swept across the landscape over the last couple of years with the force of a cyclone. The speed at which large language models entered the playing field was breathtaking. With barely a moment to validate outcomes, research companies were offering us synthetic consumers (I have a lot of teeth-grinding thoughts about these), tools with which to both create surveys and analyse the data, and cheap research services that automate data gathering and analysis. There is a tinge of gold rush in the air.

It's a worry.

ChatGPT understands its limitations. Does business?

In response to “Can AI replace a researcher?”, my friend ChatGPT gives an answer that is partly true, as is so often the case with AI outputs. I’d argue it is rubbish at insights, precisely because it lacks critical thinking, creativity and contextual understanding, the key capabilities for generating insight.


So, what are its limitations?

We’ve been playing around with some of the various versions on offer (ChatGPT, Perplexity, NotebookLM, Copilot and so on), giving them tasks to do and comparing their outputs to our “human version” ones.

 And our conclusion - AI is a tool that a good researcher can use effectively, but it isn’t a replacement for one (and if you're a rubbish researcher, it might make you worse, not better).  

Much like the days of “anyone with Survey Monkey can be a researcher”, using AI reminds me just how important it is to get expertise if you want reliable, valid data and data interpretation.

I’m not here to diss AI. It’s an exciting (albeit somewhat scary) tool that can and should bring efficiencies to the way we all work. And I find it useful in many ways. 


Things it does well

In terms of providing functional knowledge, it’s great. I especially love it for telling me exactly how to do things in Excel or PowerPoint. 

It is also excellent at providing starting points for exploring existing knowledge – PhD students must love it as a literature-review starting point. Key here is recognising these are starting points, not meta-studies. It would be easy, especially for newbies, I think, to treat it as a source of truth rather than as an additional source of data.

NotebookLM is great for finding that quote you were after from a series of transcripts. Even if you have to ask it twice. And that's the recurring theme – you have to check that what it says is right, because sometimes it isn't.


Research Assistant Tool?

Beyond asking it for instructions on how to calculate survey power, I have been playing around with it as a desktop research assistant and as a qualitative research analysis tool.
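(As an aside, the “survey power” question often boils down to the textbook sample-size calculation for a survey proportion. Here is a minimal Python sketch of that calculation – mine, not an AI output. The 95% confidence level, worst-case p = 0.5 and ±5% margin of error are illustrative defaults, not recommendations.)

```python
import math

def sample_size(margin_of_error=0.05, z=1.96, p=0.5, population=None):
    """Completed responses needed to estimate a proportion.

    z = 1.96 corresponds to a 95% confidence level, and p = 0.5 is the
    worst-case (most conservative) assumption about the true proportion.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population:
        # Finite population correction: smaller audiences need fewer responses.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                 # 385 for a very large population
print(sample_size(population=2000))  # 323 once the correction is applied
```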

We are currently looking into global consumer trends for a client, country by country. ChatGPT does a good job of giving us ideas about what is going on and where to turn our attention to discover more. But if you think you can put your question in and get a meaningful (and truthful) market or consumer summary, you’d be wrong. You’d get a summary, but even a little digging would reveal it to be as superficial, and possibly fake, as a MAFS contestant.

I also don’t trust the responses to be correct, or to be the full picture. The data it chooses to work from is limited in ways that are not transparent. It’s not that it doesn’t know – it just doesn’t always include things.

And that lack of transparency – what makes the list and what doesn’t – makes the data a little unreliable.


Still, it provides good first-look clues to follow down rabbit holes and gather the next level of information, which in turn gives you more ideas about what to ask it to look for. It is a good source of data to be further investigated, rather than a source of truth.


What about analysing transcripts?

As a qualitative research analysis assistant, it has similar problems. With no real understanding of context and no real-world experience, it draws conclusions that seem reasonable on the surface but are often downright wrong. Because I do the research myself, I know when it is drawing incorrect conclusions. And some of them are doozies!

It's pretty good at identifying themes (thematic analysis), and for academic research, which seems to be more about collating knowledge than generating insight, it is probably a godsend. But in the world of consumer behaviour and business, we don’t just want themes. We want to know what they mean for our business, and what opportunities there are. We want to know the why – the insights. And insights come from the (subconscious) blending of information from different data sources: the research itself, the context of the company and culture, and what we know about how humans behave. LLMs are not good enough at this.

An experienced researcher knows, for example, that what humans say they do is not necessarily what they do, and that the reasons they give for doing things cannot be taken at face value. AI doesn’t know what to discount or what to nuance. It doesn’t know to connect the what and the why, and to ask itself why they might have said this about that. Despite repeated instructions and vigorous tagging, it seems unable to discount the moderator’s questions in its analysis.
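One blunt workaround – a sketch of my own, assuming transcripts label speakers with tags like “MOD:” and “RES1:” – is to strip the moderator’s turns out before the text ever reaches the tool, so its questions can’t be swept up into the themes.

```python
# Minimal sketch: keep only respondent speech from a speaker-tagged transcript.
# The "MOD:" / "RES1:" tag format is an assumption; adjust to your transcripts.
MODERATOR_TAGS = ("MOD:", "MODERATOR:")

def respondent_turns(transcript: str) -> str:
    """Drop moderator questions, keeping only what participants said."""
    kept = []
    for line in transcript.splitlines():
        line = line.strip()
        if line and not line.upper().startswith(MODERATOR_TAGS):
            kept.append(line)
    return "\n".join(kept)

sample = """MOD: What do you usually buy for breakfast?
RES1: Mostly toast, honestly. Cereal feels like a weekend thing.
MOD: Why a weekend thing?
RES2: Time, I guess. Weekdays are chaos."""

print(respondent_turns(sample))  # prints only the two respondent lines
```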

Even so, as a way to interrogate data and bounce ideas around, it is pretty nifty. Again, provided you don’t treat the responses as “truth”. As in any good brainstorming session, the point is to generate ideas and thoughts for further analysis.


A tool, not a replacement 

AIs can process information and spit out results – and they are good at this – but they can’t think. They can’t check their own responses against reality (which is more than the internet), and they can’t make connections between datasets that aren’t included in their universe. That’s a human skill, at least for now.

I think of LLM AIs as a tool – like Excel or a calculator. A good tool, with limitations that need to be understood, but not a replacement for a human. Not as pretend (synthetic) consumers (talk about a self-fulfilling circle of banality!), and not as pretend researchers. There is a place for even these roles in our research world, but it isn’t in truth, insight or meaning. A knowledge starting point, yes. Meaningful, applied outcomes, no.

On the whole, to date, I find the outputs as generic and uninsightful as you might expect from a junior researcher with no experience under their belt. Some useful information, but nowhere near the whole picture. A place to start, not to finish.

Right now, I think it is best to treat these systems as another source of data that needs to be interrogated (and yay for having access to that). A cheap and easy source of data, but one that should be taken with a hefty slug of salt.


Sharlene Zeederberg

February 2025

---------------------------

 PS

I got Copilot to rewrite the above… my view: bland corporate jargon, good for attention-short readers (but, as a result, it misses a lot of the nuance). What do you think?

Chat GPT and similar AI tools have rapidly transformed the landscape with their swift integration into various industries. These tools offer synthetic consumer insights, survey creation, data analysis, and automated research services. However, there are concerns about their effectiveness.

AI cannot replace human researchers, as it lacks critical thinking, creativity, and contextual understanding. While useful for tasks like Excel instructions or initial literature reviews, AI-generated insights often lack depth and accuracy. Researchers tested several AI versions, concluding that though AI can aid efficiency, expertise is essential for reliable data interpretation.

AI can provide initial data for deeper investigation but should not be relied upon for definitive market or consumer summaries. Outputs tend to be generic, lacking the nuanced understanding a skilled researcher provides. Transparency issues also affect the reliability of AI-generated data.

As a qualitative research assistant, AI identifies themes but struggles with context and real-world experience, leading to incorrect conclusions. It can be useful for brainstorming and thematic analysis in academic research but falls short in generating business insights.

AI is best used as a tool, similar to Excel or a calculator, aiding but not replacing human researchers. It processes information well but cannot think critically or make nuanced connections. Current AI systems should be viewed as an additional data source requiring careful scrutiny.


Copilot

February 2025

 
 
 
