Platform provider HUB24 has found that smaller generative AI models are preferable to the bigger foundation models.
At a roundtable on AI’s role in driving innovation in advice, hosted by Professional Planner in partnership with HUB24, the head of the platform provider’s Innovation Lab, Evan Morrison, said his team has observed smaller models growing and improving faster than their larger counterparts.
The insights from Morrison, who received his PhD in computer science from the University of Wollongong, will be included in a soon-to-be-released special report covering the key findings of the roundtable.
Smaller models popular with HUB24 include Meta’s Llama, France-born Mistral and Microsoft’s Phi-4.
“Programming against these big foundation models has been tricky because you can’t track versions. It’s not like traditional software development,” Morrison said.
“By running smaller models, we’ve got the transparency, we’ve got the clarity, and we’re able to keep track of versions and maintain some really interesting outcomes.”
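Morrison’s point about tracking versions can be illustrated with a minimal sketch: a registry that pins each deployment to an exact model version via a checksum of its weights, so every output can be traced to the weights that produced it. The model names, versions and structure here are illustrative assumptions, not HUB24’s actual tooling.

```python
import hashlib

# Illustrative sketch: pin each deployed model to an exact, verifiable version.
# With an opaque hosted foundation model, none of this is possible; with a
# self-hosted smaller model, the weights can be checksummed and tracked.

class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, name: str, version: str, weights: bytes) -> str:
        """Record a model version alongside a SHA-256 digest of its weights."""
        digest = hashlib.sha256(weights).hexdigest()
        self._models[(name, version)] = digest
        return digest

    def verify(self, name: str, version: str, weights: bytes) -> bool:
        """Check that the weights being served match the registered version."""
        expected = self._models.get((name, version))
        return expected == hashlib.sha256(weights).hexdigest()

registry = ModelRegistry()
digest = registry.register("phi-4", "2025-01", b"...weights bytes...")
```

A deployment pipeline could call `verify` before serving, refusing to run any weights whose digest has drifted from the registered version.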
Additionally, smaller models consume significantly less energy, which means lower electricity costs and a smaller carbon footprint.
Morrison said HUB24 keeps electricity costs down by using smaller models with lower memory requirements; running a smaller model, he said, can take roughly a quarter of the energy consumption of ChatGPT.
Partnering up
As part of its AI strategy, HUB24 has formed partnerships with the big cloud providers, including Google, Microsoft and AWS.
This provides access to new technology and AI models as they appear, allowing faster and better innovation and, consequently, greater reach as a business.
“By partnering and working together with various groups, we’re able to put together a product which then effectively started to bring that information into a single place where licensees and practice managers could then see it,” Morrison said.
HUB24 also maintains relationships with academic institutions, taking on a large number of interns and graduates; since the creation of the Innovation Lab, roughly 25 people have completed a cycle through the lab with Morrison.
Morrison spoke about the creation of the platform’s Innovation Lab in 2018, which he described as heavily machine learning (ML) focused in its earlier days.
“The goal [was] to effectively look at emerging technologies and try to see how that would change our business, disrupt ourselves effectively, or give ourselves benefits,” Morrison told the roundtable.
“We could actually then share those insights with the wider community…advisers, licensees and everyone else.”
High level of governance
The development of AI at HUB24 has required a high level of governance to prevent human error from having disastrous consequences.
Morrison said before the advent of generative AI, the goal was to push accuracy as high as possible.
“To measure [accuracy] and to monitor, we had to introduce things like human-in-the-loop processing,” Morrison said.
“Before we processed anything, a human would have to check and validate that we could push the button and set it upstream.”
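The human-in-the-loop gate Morrison describes can be sketched simply: nothing is pushed upstream until a reviewer has explicitly approved it. The class and field names below are illustrative assumptions, not HUB24’s system.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop gate: AI outputs queue up for review,
# and only explicitly approved items are released downstream.

@dataclass
class PendingItem:
    content: str
    approved: bool = False

class ReviewQueue:
    def __init__(self):
        self.items: list[PendingItem] = []

    def submit(self, content: str) -> PendingItem:
        """An AI-generated output enters the queue unapproved."""
        item = PendingItem(content)
        self.items.append(item)
        return item

    def approve(self, item: PendingItem) -> None:
        """A human reviewer validates the output."""
        item.approved = True

    def release(self) -> list[str]:
        """Push approved items upstream; unreviewed items stay queued."""
        released = [i.content for i in self.items if i.approved]
        self.items = [i for i in self.items if not i.approved]
        return released

queue = ReviewQueue()
draft = queue.submit("Client summary draft")
queue.submit("Unchecked output")
queue.approve(draft)
```

In this sketch, calling `release()` would push only the approved draft upstream, leaving the unchecked output waiting for a reviewer.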
He said they had been working on governance for a long time, using techniques such as monthly sampling and human-in-the-loop activities, which have been a necessity because the platform has grown alongside the technology.
“By the very nature of the [Innovation] lab, we’ve been on a journey of embedding governance and privacy from the very beginning,” Morrison said.
“Because we had to work with our own models, train them, create GPU machines and clusters, we learned a lot about the processes involved in setting up these environments.
“Building in governance from the beginning in the ML world, when we hit the generative AI timeframe, we were ready to start rolling forward.”
Good governance can also help tackle some of the elements of generative AI that are not working, such as the problem of hallucinations.
“The very first thing we noticed was hallucinations, and that’s been one of the biggest pain points across the board in the application of generative AI over the last year and a half,” Morrison said.
“We’ve then had to expand out that governance framework to incorporate additional checks and monitoring practices to try and cover and understand where those hallucinations are likely to occur and eradicate them.”
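One simple form such a hallucination check could take, purely as an illustration, is a grounding test: flag any figure in a generated answer that does not appear in the source documents it was meant to summarise. Real monitoring frameworks are far more involved; this only shows the idea, and the example text is invented.

```python
import re

# Illustrative grounding check: extract numbers from a generated answer
# and flag any that cannot be found in the source text.

def unsupported_numbers(answer: str, sources: list[str]) -> list[str]:
    source_text = " ".join(sources)
    found = re.findall(r"\d+(?:\.\d+)?%?", answer)
    return [n for n in found if n not in source_text]

flags = unsupported_numbers(
    "The fund returned 8.2% last year on assets of $120m.",
    ["Annual report: the fund returned 8.2% for the year."],
)
# "8.2%" is grounded in the source; "120" is not, so it gets flagged.
```

A monitoring pipeline could route any answer with flags back into the human-in-the-loop queue rather than releasing it.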
Morrison predicted that over the next 12 to 24 months there will be an uptick in the use of “thinking and reasoning models”, which are models that have been trained to think and reason about their answers.
“[These models] are less likely to hallucinate and so we’re getting better quality answers,” he said.
He argued it will be a driving factor in the shift towards more knowledge-based work being done in conjunction with AI.
“We’re looking at these models as a way of doing expert analysis over specific tasks and activities in a way that will aid and assist a human in their day-to-day workflows.”