Michael O'Leary dialling into the Professional Planner Researcher Forum from Chicago, US.

Current AI models do not have the accuracy levels needed to replace human researchers, the Professional Planner Researcher Forum has heard.

Michael O’Leary, a Chicago-based distinguished quantitative analyst at researcher Morningstar, said that despite the advances AI has made, it is still too fallible to replace humans.

“Despite the hype, the gains really have been enormous,” O’Leary said, dialling in from the United States.

“It is no exaggeration to say that my personal efficiency has increased two to three times in the last two years just because of the rise of LLMs [large language models].”

LLMs are a type of AI that can understand, process, and generate human language; ChatGPT is a well-known example.

“I don’t think there’s any way I could be replaced, at least in the current versions of these models, and that’s primarily because LLMs cannot verify facts,” O’Leary said.

“While an AI agent might be trained to ask the right questions, someone still needs to verify that those questions are, in fact, the right questions, and that takes expertise a chatbot simply cannot replicate.”

O’Leary said Morningstar’s approach to AI adoption was to use it to lighten analysts’ workloads, for example by automating reports, which he said was “saving thousands of hours annually”.

“It’s about freeing up human analysts to do the work that requires human judgment, which is precisely the work that AI can’t do,” O’Leary said.

“It has the advantage of both allowing us to expand our coverage and improve our accuracy, basically to do better work.”

Improving efficiency is at the forefront of advisers’ minds, and many have found AI a useful tool in pursuing that goal.

O’Leary said Morningstar’s philosophy centred on strategically incorporating elements of both machine learning and generative AI.

“[Incorporating AI] starts with data ingestion, which is a massive undertaking, and AI really plays a crucial role in ensuring data accuracy and efficiency at that scale,” O’Leary said.

Morningstar is rolling out its AI tools globally with a launch expected soon in Australia.

The researcher has developed a tool called the Medalist Rating, which uses machine learning. O’Leary described the tool as an “intelligent integration of human analysts and AI tools”.

“We’re combining all of our analysts’ expertise over many years with the massive scale of machine learning to generate far more accurate and comprehensive ratings than we could achieve by humans alone,” O’Leary said.

“It allows us to cover 10 times the managed products that we were able to cover with just human analysts alone.”
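
The article does not detail the methodology, but the general pattern O’Leary describes, analyst judgements training a model that then scores funds analysts have not covered, can be sketched as follows. The features, data and scikit-learn model here are purely illustrative assumptions, not Morningstar’s actual inputs or approach.

```python
# Illustrative sketch only: analyst-assigned ratings train a model, which then
# scores funds that no analyst covers. Features and data are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Funds that already carry an analyst rating become the training set.
rated = pd.DataFrame({
    "fee_ratio":      [0.20, 0.85, 1.10, 0.45, 0.60, 1.30],
    "manager_tenure": [12, 3, 5, 9, 7, 2],
    "parent_score":   [4, 2, 3, 4, 3, 1],
    "analyst_rating": ["Gold", "Neutral", "Bronze", "Silver", "Silver", "Negative"],
})

features = ["fee_ratio", "manager_tenure", "parent_score"]
model = RandomForestClassifier(random_state=0).fit(rated[features], rated["analyst_rating"])

# Uncovered funds are then scored by the model, extending coverage at scale.
uncovered = pd.DataFrame({
    "fee_ratio":      [0.30, 1.40],
    "manager_tenure": [8, 2],
    "parent_score":   [4, 1],
})
print(model.predict(uncovered[features]))
```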

Morningstar also has its own chatbot, or “AI-powered research assistant”, called Mo, a form of generative AI. O’Leary described it as “a research assistant built on top of our existing infrastructure [that] allows for natural language queries”.

“The main thing I wanted to emphasise is the structure,” O’Leary said.

“It’s not just a standalone chatbot. It’s integrated into our existing systems. It’s the content and the integration, not the individual components, that’s critical here.”

However, O’Leary emphasised the importance of caution, as “AI models do not think in terms of truth, but in terms of probabilities”, so guaranteed accuracy does not exist with these tools.
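
His point about probabilities can be shown with a toy example: a language model scores candidate continuations and emits the most probable one, and nothing in that mechanism checks whether the output is factually true. The tokens and scores below are made up for illustration.

```python
# Toy illustration: a language model turns scores (logits) into probabilities
# and emits the most likely continuation, whether or not it is factually true.
import math

candidates = {"1992": 2.1, "1994": 1.8, "2001": 0.3}   # hypothetical logits for a next token
total = sum(math.exp(score) for score in candidates.values())
probs = {token: math.exp(score) / total for token, score in candidates.items()}

most_likely = max(probs, key=probs.get)
print(most_likely, probs)   # the "answer" is just the highest probability, not a verified fact
```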

“The sort of current approach that we’re using really prioritises transparency and verification: we provide a clear audit trail, allowing our data analysts to see the data sources and the steps taken by the model,” O’Leary said.
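
The audit-trail idea can be sketched in a few lines. The function and field names below are hypothetical, not Morningstar’s actual tooling, but they show the principle of returning the evidence and steps alongside the answer.

```python
# Hypothetical sketch of an auditable answer: the response carries the sources
# consulted and the processing steps, so an analyst can review the trail.
import datetime
import json

def answer_with_audit_trail(question: str, retrieved_docs: list, answer: str) -> dict:
    """Bundle a model's answer with the evidence and steps behind it."""
    return {
        "question": question,
        "answer": answer,
        "sources": [doc["source_id"] for doc in retrieved_docs],
        "steps": ["retrieve_documents", "rank_by_relevance", "generate_answer"],
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = answer_with_audit_trail(
    question="What is the fund's ongoing fee?",
    retrieved_docs=[{"source_id": "prospectus_2024", "text": "..."}],
    answer="0.45% per annum, per prospectus_2024.",
)
print(json.dumps(record, indent=2))   # analysts review the full trail, not just the answer
```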

He warned that “we should be sceptical of AI. It is powerful, but it’s also fallible.”

Looking towards the future, O’Leary said there is “a lot of potential in getting much higher accuracy in those models and incorporating that so then you’re sort of left with only looking at real anomalies”.

However, this still requires a human to manually check for inaccuracies and take responsibility for mistakes. At the Class Ignite conference in September this year, human accountability was raised as an important factor in the use of AI tools.
