Licensees need to be careful not to let their AI adoption accelerate beyond the point where their existing safeguards can keep pace with the sophistication of the technology. 

Assured Support managing director Sean Graham urges licensees to stay vigilant and on top of their internal governance as AI develops rapidly. 

“From a licensee perspective, it’s about looking at the governance framework, so making sure you have a clear policy in place as to how AI is to be used,” Graham tells Professional Planner. 

ASIC’s report Beware the Gap: Governance arrangements in the face of AI innovation, which was released earlier this week, reviewed 23 licensees and found there is potential for a governance gap in the future. 

As AI usage accelerates, the potential for customer harm due to a lack of set policies and procedures is magnified. 

The ASIC report centred on a review, conducted by the regulator, of the AI use cases that 23 licensees were using or developing as of December 2023. 

The regulator found a swift acceleration in the volume of AI use cases as well as a shift towards more advanced and complex forms of AI such as generative AI. Although ASIC found most licensees are currently acting cautiously, 61 per cent planned to increase AI use in the following 12 months. 

The review raised concerns that not all licensees are well positioned to take on the challenge of rapidly expanding AI usage.  

Approximately half of licensees had specifically updated their risk management policies to handle AI risks such as consumer exploitation and data breaches. Other licensees relied on already existing policies without making specific changes. 

However, the review found that while licensees had documented policies covering risks relevant to AI, those policies were often not specifically targeted at AI-specific risks. 

A failure to put suitable policies and procedures in place to manage AI use risks widening the gap between licensees' governance and the pace of their AI expansion. 

Although the shift towards generative AI has so far been gradual, it will pose the greatest risk once adopted, and licensees must be prepared for its use. 

The regulator posed several questions to licensees regarding their safe and secure usage of AI. These questions revolved around a clear AI strategy, the importance of managing risk, accountability, and the policies and procedures in place being fit for purpose. 

This was in addition to existing regulatory obligations such as acting conscionably towards consumers, identifying and updating compliance measures, and ensuring financial services are provided in a way that meets the standard of "efficiently, honestly and fairly". 

While these obligations guard against misleading representations, bias and other risks to an extent, the fast-moving nature of AI threatens to outpace these existing requirements.


Graham reminds licensees that the commercial risk sits with them, so their compliance staff will be the ones who have to adapt quickly as AI tools keep advancing.  

“From a licensee perspective, there are concerns about producing material which could be misleading or deceptive or inaccurate because of hallucination. It’s about making sure you’ve got those controls in place to make sure that you know the content is reviewed and assessed, and not just published.” 

Guidance from the Office of the Australian Information Commissioner on privacy and the use of commercially available AI products set out recommendations on responsible AI use for organisations, particularly regarding their privacy obligations. 

The OAIC report recommended as a matter of best practice that organisations “do not enter personal information, and particularly sensitive information, into publicly available generative AI tools”. 

Graham suggests this recommendation should apply to financial services licensees due to the security risks. 

“Don’t put client personal information, so anything that can identify them or their specific circumstances, but use generic information, which is a very simple thing to do.”  

Graham emphasises that licensees need to shield their advisers from risk as far as possible and protect them accordingly. 

“Governance framework is so important, to make sure the licensee says here are the parameters, here’s the regulatory perimeter in which we need to operate when we’re using AI and making sure their representatives understand that.” 
