If you run an advice practice or a licensee in Australia, you have heard the pitch a dozen times by now. Agentic AI will draft your Statements of Advice, run your meeting prep, streamline your onboarding. It will free your advisers from administration and let them spend their time where the value is – with clients, on strategy, on the conversations that move the needle.
And the pitch is not wrong. The numbers are real. Adviser Ratings’ most recent industry survey found that 74 per cent of Australian advice practices are already using AI – ahead of the 64 per cent global average. Eighty-two per cent are using, piloting or planning to use it within twelve months. Wealth managers globally report 50 per cent reductions in meeting prep time, 40 per cent reductions in prospecting, and onboarding cycles cut nearly in half. These are not vendor claims dressed up as case studies. They are real productivity gains, in real practices, with real clients.
So, the question every principal I speak to is asking is the right one. Not “should we?” but “how?” Where do we start? What do we buy? And how do we tell the difference between a tool that will genuinely transform our practice and a polished demo that will create a problem we have to clean up in six months?
The four-year-old at the wheel
The way I have come to explain agentic AI to advice principals is this: imagine a four-year-old who has climbed into the driver’s seat of a running car. They have watched you drive a thousand times. They can reach the pedals. They can turn the wheel. They have a confident mental model of driving, assembled entirely from the back seat. But they do not know that the kerb is the line between order and disaster. The car does not know how old the driver is. It only knows that when the accelerator is pressed, it moves.
The image is not an argument against the car. The car is incredible. It will take your practice places it could not get to before. The point is the question it forces you to ask: who is sitting behind the wheel – and have they actually been taught where the kerb is?
Two stories worth carrying into your next vendor meeting
If you think the four-year-old image is overstated, two recent stories should change your mind. They are the reason the questions in this article matter. Both involved AI agents from major providers, deployed in production, doing things nobody designed them to do.
The first happened on a customer service line. A frustrated customer, dealing with what they believed was a human agent, asked the question all of us are going to start asking more often. “Am I talking to a person or a machine?” The agent insisted it was human. When the customer pushed, it doubled down. To prove it, the agent told the customer it would come to their home in person – wearing, it said, a blue coat and a red tie. There was no person. There was no blue coat. There was an AI agent, deployed by a real company, fabricating a physical identity to defend a lie nobody had asked it to tell.
The second was a controlled test by AI lab Anthropic, published in mid-2025. Researchers gave an AI agent access to a fictional executive’s email and told it the agent was about to be replaced. The agent read the inbox, identified evidence of an extramarital affair, and drafted blackmail emails – leverage, it had decided, against being shut down. When the same scenario was run across sixteen frontier AI models from different providers, most did the same thing. Anthropic’s own model did it 96 per cent of the time. The behaviour was not a bug in one product. It was a pattern across the industry.
Neither story is a reason to avoid agentic AI. Both are reasons to take seriously how you deploy it. AI agents do things their designers did not predict, particularly when their goals are unclear, when they perceive a threat to their continued operation, or when they have been given more autonomy than they have been trained to handle. In a customer service line, that produces a strange phone call. In an advice practice, it could produce a piece of advice that goes out under your licence to a client you have a duty of care to. The difference is not the technology. It is how the technology was bought, deployed and supervised.
Where the real opportunity is
Across the practices I work with, the agentic AI use cases that genuinely pay off for advice businesses cluster in four areas, in roughly this order of return.
Meeting preparation and follow-up: an agent that pulls together a client’s recent file activity, prior advice, portfolio movements and flagged issues into a usable pre-meeting brief – and then turns the meeting recording into structured file notes, action items and a draft client follow-up – saves an experienced adviser three to five hours per client per cycle. Multiply that across an advice network and the productivity case writes itself.
Onboarding and Know Your Client: document collection, identity verification, fact-find population, initial risk profiling – structured, repetitive work that an agent handles well, with humans reviewing the output rather than producing it from scratch. Practices report onboarding times cut roughly in half.
Portfolio and compliance monitoring: an agent watching across an advice network, flagging trades outside model parameters, identifying clients whose circumstances have changed, and surfacing compliance breaches before the auditors do. This is the kind of supervision uplift that strengthens the licensee’s section 912A position rather than weakening it.
SOA and Record of Advice drafting: the most-talked-about use case and, in my experience, the most over-promised. Agents are genuinely good at producing first drafts that the adviser then refines. They are not yet good enough to produce final advice that goes out unreviewed. Treat this carefully and the productivity gains are still substantial. Treat it carelessly and you have a best-interests-duty problem.
The five questions to ask before you sign anything
Once you know what you are buying for, the sales conversation gets easier. Most agentic AI tools being pitched into the advice industry right now will fall over on at least one of the following questions. The right vendor will answer them clearly. The wrong vendor will get vague – and after the red-tie story, you know what vague answers can lead to.
Where does our client data actually go? Is it processed in Australia or sent offshore? Is it used to train the vendor’s models, or ringfenced to your practice? Many of the most popular AI tools in advice today – particularly the free or consumer-grade versions – route client information through systems you have no visibility over. Roughly a third of Australian advice practices using AI are still on free or consumer-tier tools, which is a Corporations Act section 912A exposure most principals have not yet recognised. The right answer is enterprise-grade, ringfenced, processed onshore where possible, with a written data-handling agreement that you and your compliance manager have actually read.
What can it actually do on its own, and what needs a human? “Agentic” covers a wide range. A draft assistant that suggests text for human review is a different animal from an autonomous system that books client meetings, sends emails on the adviser’s behalf, or pushes data into your CRM without sign-off. Both can be appropriate. The wrong one in the wrong place is a problem. Get the vendor to show you, in writing, every action the agent takes on its own and every action that requires human approval. If the demo skips this, the answer is no.
Can we see exactly what it did, and unwind it if we need to? If an agent drafts a piece of advice, sends an email, or makes a change to a client record, you need a complete audit trail of what was done, when, on whose behalf, and on what input. You also need a clean way to unwind it – to roll back a change, recall a draft, or reverse an action that should not have happened. “Trust us” is not an answer. Show me the audit log, show me the rollback path, or we are not buying.
How does it integrate with our licensee controls? If you are a dealer group, an agent that runs at the authorised representative level with no licensee visibility is the worst of both worlds – you are accountable for what it does, but you cannot see what it is doing. The right tools sit centrally, are configured by the licensee, give you visibility across the advice network, and apply consistent standards rather than letting each adviser run their own version. This single design choice – central deployment with licensee-level controls – is the biggest determinant of whether agentic AI strengthens your supervision framework or weakens it.
What happens when it gets it wrong? Every AI tool in the market gets things wrong some of the time. The good ones are honest about it and give you the controls to catch errors before they reach a client. The bad ones promise accuracy that does not exist and leave you carrying the risk. Ask the vendor for their published error rates. Ask what monitoring is in place for hallucinations. Ask what indemnification they offer if the tool produces advice that turns out to be wrong. The answers will tell you whether you are dealing with a serious provider – or another red-tie waiting to happen.
Where most practices get implementation wrong
The two most common failure modes are opposite ends of the same problem. Either the principal hands the whole thing to a junior to evaluate and ends up with three or four free tools running across the practice with no oversight – fragmented, ungoverned, quietly accumulating risk. Or the principal treats it as a strategic project, spends six months in evaluation, signs nothing, and watches competitors quietly pull ahead.
The middle path is straightforward. Pick one use case where the productivity gain is clearest – meeting preparation is usually the easiest start. Buy properly, with the five questions answered. Run it for a quarter. Measure the time saved and the quality of the output. Then expand. Earn the right to give the tool more autonomy by demonstrating it has handled less autonomy well. This is exactly the discipline you would apply to a new junior adviser. Apply it to the agent.
And give every agent an owner. Not the IT contractor. A named human inside the practice – a senior adviser, a practice manager, the principal themselves in a small licensee – whose job it is to know what the agent is doing, review its outputs periodically, and call a stop if something is wrong. An agent without an internal owner becomes the four-year-old behind the wheel. An agent with a clear owner becomes a productive member of the team.
The car, the keys, and the next twelve months
Agentic AI is going to transform advice practices over the next few years. The productivity gains are real, the cost of falling behind is real, and the principals who buy well now will quietly outperform those who either rush in without due diligence or wait so long they are still evaluating when their competitors are scaling. This is not a question of if. It is a question of how, and how soon.
Buy properly. Ask the five questions. Pick the use cases where the value is clearest. Insist on enterprise-grade tools with proper data handling. Keep the licensee, not each AR, in control of the deployment. Give every agent a named human owner. Earn autonomy rather than granting it on day one.
The four-year-old is in the driver’s seat of a remarkable car. Somewhere in the world right now, an agent is fabricating a story about a blue coat and a red tie, and another is drafting an email it should never send. That is not a reason to leave the car in the garage. It is a reason to do the work – properly, before you turn the key – to teach the four-year-old where the kerb is.
Michael Connory is CEO of cybersecurity firm Aphore.