Gaining informed consent from clients when using AI in the advice process is essential, and advisers could risk a breach of the Code of Ethics if they don’t, a compliance expert has argued.
Standard 4 of the code says “you may act for a client only with the client’s free, prior and informed consent”. Integrity Compliance managing director Rhett Das says the application of AI is currently a legal grey area, and the case could be made that client consent is needed.
“What we’re saying to people is that you really need to get that client consent if you are going to be running AI in your business,” Das tells Professional Planner.
The compliance risks presented by the use of AI include recording client conversations without consent, client data being fed into third-party systems, and the security of data storage.
Das notes that an on-screen pop-up saying a meeting is being recorded is not sufficient consent.
“You do need someone to acknowledge and say they are happy to record it,” Das says.
“If you don’t have that consent, and you’re looking at it from a Code of Ethics perspective too… there would be a number of issues under the code by not acknowledging that you’ve got that consent there.”
Das says his firm has just finalised an AI policy that has been in the works for some time, but notes the difficulty of creating a framework for a fast-moving field of technology.
That includes using paid versions of AI services, like Copilot or ChatGPT, rather than relying on free ones, which put client data at risk by feeding it back into the wider AI architecture.
“But you really need to get consent if client information is going to be in it, and that consent needs to be acknowledged.”
ASIC announced in August that part of its corporate strategy will include reviewing how licensees utilise AI.
Line in the sand
However, Assured Support managing director Sean Graham takes a different, hardline stance, arguing for a general principle of not using client data in AI systems at all.
“At the end of the day you’re a trustee of that information and we know there is no absolutely safe repository of information,” Graham says.
“Whether it’s free or the paid version I would still not use personal information. The downside far outweighs the upside. I’m just naturally conservative about this because we’ve seen so many data breaches over the past couple of years.”
Instead, Graham argues for keeping the use of AI more generic.
“Don’t load up any proprietary information, don’t load up any personal information, don’t load up any commercial information,” Graham says.
“You don’t want to squash [the use of AI] completely, it’s going to be used, but you’ve got to have some understanding of the limitations of AI, how it’s going to work, where it falls down, and it’s a great way to supplement your understanding or to confirm your understanding.”
In addition to flagging AI for further regulatory attention, ASIC will also scrutinise offshoring as it relates to the security of client data.
Das says the ASIC announcement was a catalyst for Integrity Compliance to further review how it can help its clients manage offshore data storage with third parties.
“It’s something we’re flagging this quarter with the firms that we work with as part of a deep dive into what you’re outsourcing and what controls you have in place with overseas providers,” Das says.
Expanded intelligence
The use of AI in financial services has proliferated, ranging from the advice and investment process all the way to treating the technology as an investable asset class.
Das notes how the perspective of advice firms has shifted over the last year: some practices were early adopters, while others stood on the sidelines waiting for competitors to “get it wrong first”.
“We are seeing other firms where recording client meetings is no issue, [and they’re] using it to crunch data, using it to get a number of gains,” Das says.
But Das says not every practice is fully across the capabilities of AI, what large language models can do, and what happens to the data they are given.
“It’s an issue and it’s something that will come up more in the future,” Das says.
“For us, as we were putting our policy together and then reviewing it, we thought we really do need this. We can’t force people to do it, but over time it will become an issue.”
Despite his hardline stance on client data, Graham says AI offers plenty of upside, particularly for the paraplanner role.
“If it’s used effectively, the paraplanner role will change. In some licensees they’re treated as a glorified typist; in other places they’ve got to do everything,” Graham says.
“I think it will become more a paralegal role where they’ll be working alongside the advisers to formulate strategy and then re-shape the information that’s produced. That will not only speed up the production of advice, but it will elevate the role of the paraplanner.”