Something significant has shifted in the way Australians seek financial guidance. Not gradually, and not quietly. While the industry has been working through how to close the advice gap through more advisers, greater efficiency and better technology for professionals, a large and growing portion of the Australian public has already moved on. They’ve opened ChatGPT and started asking their questions there.
This isn’t a problem created by the industry. It’s a reflection of how quickly consumer behaviour can outpace even the best intentions. And it raises a question that the entire sector now needs to sit with: given that Australians are already using AI for financial guidance at enormous scale, how do we make sure they’re protected when they do?
The numbers are too large to ignore, and this is not a trend on the horizon; it is already here.
According to Compare Club’s AI Index, 16.6 per cent of Australians have already used AI for financial decisions. A further 30.9 per cent plan to do so soon. That’s nearly half the country either using or actively intending to use unlicensed AI to make decisions that will shape their financial future.
Among younger Australians, the behaviour is deeply entrenched. A 2025 RACQ Bank survey found that 45 per cent of Australians aged 18–34 had followed AI financial advice without consulting a professional. Nearly 80 per cent of Australian investors aged 18–29 said they used AI to inform their investment decisions.
The international picture reinforces just how significant this shift is. Research by Intuit Credit Karma found that financial advice is now the second most common use of generative AI in the United States, behind only health queries. Some 66 per cent of Americans have used AI for financial guidance. Among millennials and Gen Z, that figure rises to 82 per cent.
Vanguard’s global chief economist Joe Davis put it bluntly: ChatGPT is now probably the largest provider of financial advice in the world.
This is a generation who will inherit Australia’s $4.3 trillion superannuation system, but for whom asking AI a financial question is as natural as asking it for a restaurant recommendation. That behaviour is not going to reverse.
The problem is, AI doesn’t always get it right
AI isn’t going away, yet its outputs don’t always keep the people using it safe. This is where the conversation becomes urgent.
AI is being used for financial guidance and advice at massive scale, by real people making real decisions. But the tools they are using were never designed, licensed, or built to be accountable for financial advice. And the consequences are already visible.
Independent researchers in the UK tested AI tools on 100 personal finance questions. The results were sobering: correct 56 per cent of the time, deceptive or misleading 27 per cent of the time, and outright wrong 17 per cent of the time, with errors across pension rules, tax thresholds, and inheritance guidelines.
In Australia, ChatGPT presented with a hypothetical retirement planning scenario produced outdated superannuation contribution limits, generic asset allocation recommendations with no regard for individual risk tolerance, and missed recent changes to pension eligibility entirely.
In the United States, more than half of people who acted on AI financial advice admitted they subsequently made a poor financial decision as a result.
These aren’t edge cases. They are the predictable outcome of millions of people seeking guidance from tools that carry no licence, no regulatory obligation, and no accountability. The tool doesn’t know what it doesn’t know and neither does the person asking.
Three in four people who use AI for financial queries say it lets them ask things they’d be too embarrassed to raise with another person. Financial anxiety is real, and the instinct to find a private, accessible source of guidance is entirely understandable. The challenge is making sure that when people act on that instinct, they are genuinely protected.
The answer is not to tell people to stop using AI for financial questions. That ship has sailed. The answer is to ensure that when they do, the AI they are using is built to actually serve their best interests.
At Otivo, we’re already addressing this challenge by building AI-driven advice that operates under an AFSL, within the same consumer protection framework that applies to personal financial advice.
In our opinion, safe AI-powered financial guidance should operate under an AFSL. It should know and be transparent about the boundaries of what it can and cannot advise on. It should be built around an individual’s actual circumstances, not generic assumptions. It should be accountable when it gets things wrong. And most of all, it should be affordable and accessible for every Australian.
ASIC chair Joe Longo framed the opportunity clearly: “As the race to maximise the benefits of AI intensifies, it is critical that safeguards match the sophistication of the technology and how it is deployed”.
That is precisely the work in front of the industry. Not resisting AI’s role in financial guidance, but ensuring it is done with the rigour, compliance, and genuine care for the individual that Australians deserve.
The superannuation sector has both the scale and the legislative obligation to lead. Funds managing $4.3 trillion in member assets have a duty to act in members’ best financial interests and right now, a growing proportion of those members are making financial decisions based on guidance from unlicensed tools. Licensed, AI-powered guidance and advice is not a threat to that obligation. It is a way of fulfilling it.
The same is true for Australia’s insurers. Between 60 per cent and 80 per cent of Australians are underinsured for life cover, and 3.4 million lack adequate income protection. Compliant, accessible AI guidance, built to reach people at scale, is one of the most practical tools available to begin closing that gap.
The industry is already moving
It would be wrong to suggest the profession is standing still. Many of the most forward-thinking practices and institutions in Australia are already embracing technology to extend their reach, deepen client relationships, and bring meaningful financial guidance to more Australians than ever before. That momentum is real, and it is genuinely encouraging.
The foundations are in place. The regulatory framework exists. The technology exists. What is emerging now is a clearer picture of how licensed, compliant, AI-powered advice fits alongside, and not instead of, the human expertise that remains at the heart of great financial advice.
The institutions that move now – that embed licensed, compliant AI advice into their member and client experience – will be the ones that earn the trust of the next generation of Australians. That trust, once established, is extraordinarily difficult for others to replicate.
The Australians seeking guidance and advice aren’t waiting for the industry to reach consensus. They’re already asking questions and acting on the answers. Encouragingly, some institutions are showing leadership and starting to act. The opportunity is to provide those answers safely, compliantly, and in their clients’ genuine best interests.
The question is no longer whether AI has a role in financial planning. It does, whether we designed it that way or not. The question is who steps up to make it work.
Paul Feeney is founder and CEO of Otivo, an Australian digital financial advice platform.