AI is the hot topic in financial planning right now. From automating routine tasks like file note-taking to analysing client data for more personalised investment strategies, AI has the power to revolutionise the way advisers engage with clients.

But while the technology is promising, the reality isn’t as straightforward. In a recent investigation, Aphore Security reviewed five prominent AI solutions used by financial planners – three based in the US and two from Australia.

The findings were eye-opening and highlighted a concerning gap between promise and practice when it comes to security, ethics, and compliance in AI usage for financial planning.

The promise of AI: A game-changer for efficiency and client engagement

Imagine this: an AI system that listens in on client meetings, captures every detail, and auto-generates a perfect, organised summary file note. No more scrambling to jot down notes or worrying about missing key details.

It’s the dream scenario for advisers who want to focus on building relationships rather than being bogged down by administrative tasks. And it’s not just about note-taking – AI can now scan massive datasets to suggest investment strategies tailored to a client’s unique risk appetite and goals, providing insights that would otherwise take hours or days of analysis.

However, the convenience and sophistication of AI come with strings attached. Advisers must ask themselves: Where is this data being stored? Who has access to it? And how secure are these AI systems? As it turns out, the answers aren’t as reassuring as many would hope.

A reality check: The cybersecurity gaps in AI solutions

Aphore Security’s recent review of five AI solutions used in financial planning revealed stark differences in the cybersecurity posture of these platforms. While the US-based providers demonstrated a higher level of compliance, the Australian providers fell short. These findings suggest that while the technology is advancing, the underlying security practices are lagging.

One provider’s terms stated they “take reasonable steps to protect personal information”, but also included a disclaimer: “We cannot guarantee the security of any personal information transmitted over the internet”.

This is a significant concern, especially under Australia’s Notifiable Data Breaches scheme. Advisers using such platforms could face serious compliance issues if a breach occurs.

The ethical tightrope: Is AI truly bias-free?

Bias in AI is another growing concern. As AI systems increasingly take on roles traditionally handled by humans – like assessing client profiles or suggesting financial products – the question of fairness comes into play.

In our review, we found that AI models can indeed replicate the biases of their training data. If the data predominantly represents male clients, the system might overlook or undervalue the needs of female clients or those from underrepresented communities. For advisers who pride themselves on offering unbiased, inclusive advice, this is a red flag.
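To make that mechanism concrete, here is a minimal sketch using synthetic data and scikit-learn – not any of the reviewed platforms – showing how a model trained on skewed historical decisions carries that skew into its predictions. Every name and number below is an illustrative assumption.

```python
# Minimal, illustrative sketch: a model trained on biased historical
# decisions reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic client history: 90% of past clients belong to group A.
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])   # 0 = group A, 1 = group B
income = rng.normal(80, 20, size=n)

# Historical approvals encode a biased rule: at the same income,
# group B clients were approved less often.
approved = (income - 10 * group + rng.normal(0, 5, size=n)) > 75

model = LogisticRegression().fit(np.column_stack([group, income]), approved)

# The trained model carries the bias forward: identical incomes,
# different predicted approval probabilities by group.
same_income_clients = np.array([[0, 85.0], [1, 85.0]])
print(model.predict_proba(same_income_clients)[:, 1])  # group B scores lower
```

The point is not the specific numbers but the pattern: nothing in the model "knows" it is discriminating; it simply learns the historical rule it was shown.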

Australian financial planners, bound by the Financial Planners and Advisers Code of Ethics, need to be especially vigilant. Failing to ensure that AI systems do not unfairly discriminate against clients could lead not only to reputational damage but also to breaches of the code – which is built on the value of fairness and, in Standard 4, requires a client's free, prior and informed consent.

Gaining informed consent: An overlooked necessity

The issue of informed consent is one that many advisers may not have considered in depth, but it's a crucial compliance aspect that should not be ignored. Integrity Compliance managing director Rhett Das has raised concerns about the grey area surrounding client consent in the context of AI.

“We’re advising advisers that if you’re using AI in your business, you need to explicitly get client consent. A pop-up saying the meeting is recorded simply isn’t enough,” he said.

Clients need to be fully aware of how their data will be used and stored, and they should affirmatively acknowledge and consent to this.

ASIC has noted the growing concern around AI’s role in financial services and has reviewed how licensees utilise AI. For advisers, this means ensuring that client data is handled transparently and securely, and that all AI systems comply with regulations, even if they are hosted overseas.

Regulation and compliance: The current landscape and what’s missing

It’s clear that the regulatory framework around AI in Australia is still evolving. The government’s AI Ethics Principles, developed by CSIRO’s Data61, outline a set of guidelines aimed at ensuring that AI systems are safe, fair, and reliable. These principles include:

  • Do no harm: AI systems must not be designed to deceive or harm individuals and should minimise negative outcomes.
  • Privacy protection: AI systems must secure personal data to prevent breaches that could cause financial, reputational, or psychological harm.
  • Transparency and accountability: People affected by AI should be informed about how it works and have a way to challenge its decisions.

Despite these guidelines, there is still a lack of enforceable regulations specific to AI in the financial services industry. Advisers need to proactively align with these principles, even in the absence of comprehensive legislation, to avoid future scrutiny.

Best practices: Navigating the complexities of AI compliance

For advisers, it’s crucial to be proactive. Here are some steps that can help:

Audit your AI providers: Ensure they meet recognised cybersecurity standards such as SOC 2, ISO 27001, and the NIST Cybersecurity Framework. Review their data storage policies and encryption protocols to verify their claims.

Implement robust governance: Develop policies that clearly define AI’s use, including data collection, usage, and security measures. Make sure to conduct regular audits and updates to stay compliant with evolving regulations.
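As one concrete illustration of what such governance could look like, here is a minimal sketch of an AI tool register in Python. The record structure, the field names, and the "MeetingNotesAI" product are assumptions for illustration, not a prescribed standard.

```python
# Minimal sketch of an AI tool register; field names are illustrative
# assumptions, not a regulatory template.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    tool_name: str                 # e.g. a note-taking platform
    vendor: str
    purpose: str                   # what the tool is used for
    data_collected: list[str]      # categories of client data it touches
    storage_location: str          # jurisdiction matters under the NDB scheme
    certifications: list[str]      # e.g. "SOC 2", "ISO 27001"
    last_security_review: date
    next_review_due: date

register = [
    AIToolRecord(
        tool_name="MeetingNotesAI",          # hypothetical product
        vendor="Example Vendor Pty Ltd",     # hypothetical vendor
        purpose="Client meeting transcription and file notes",
        data_collected=["audio recordings", "personal details"],
        storage_location="AU",
        certifications=["ISO 27001"],
        last_security_review=date(2024, 6, 1),
        next_review_due=date(2025, 6, 1),
    ),
]

# A regular audit can start by flagging tools overdue for review.
overdue = [r.tool_name for r in register if r.next_review_due < date.today()]
print(overdue)
```

Even a simple register like this gives an adviser something auditable: which tools touch client data, where that data sits, and when each tool was last reviewed.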

Communicate transparently with clients: Before using AI tools, ensure clients understand how their data will be processed. Use explicit, documented consent methods – simple pop-ups won’t suffice. Clients should know exactly what’s being recorded, how it will be used, and who will have access.
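Similarly, explicit consent can be captured as a durable record rather than a dismissed pop-up. The sketch below shows one possible structure; every field name and value is a hypothetical assumption, not a regulatory template.

```python
# Minimal sketch of an explicit, documented consent record; the structure
# is an assumption about what "documented consent" could capture.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIConsentRecord:
    client_id: str
    tool_name: str            # which AI system the consent covers
    data_described: str       # what the client was told is recorded
    uses_described: str       # how they were told it will be used
    storage_described: str    # where they were told it is stored
    consent_given: bool       # affirmative acknowledgment, not a pop-up
    obtained_at: datetime
    obtained_by: str          # adviser who recorded the consent

record = AIConsentRecord(
    client_id="C-1042",                       # hypothetical client
    tool_name="MeetingNotesAI",               # hypothetical product
    data_described="Audio of meetings and the generated file notes",
    uses_described="Producing file notes for the advice file only",
    storage_described="Encrypted storage hosted in Australia",
    consent_given=True,
    obtained_at=datetime.now(timezone.utc),
    obtained_by="adviser-07",
)
```

The value of a record like this is evidentiary: if a client or regulator later asks what was disclosed and agreed, the answer is on file rather than implied by a pop-up.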

The way forward for financial advisers using AI

AI’s promise for financial advisers is vast, but it comes with a need for caution.

Advisers must navigate these technologies with a strategic mindset, ensuring they secure explicit, informed consent from clients, protect data with rigorous security measures, and stay ahead of evolving regulations.

The bottom line is this: the potential of AI is immense, but without proper safeguards, the risks can be equally substantial.

Advisers who proactively align their practices with the highest ethical and compliance standards will not only protect themselves but also build a stronger, more trusted relationship with their clients – something AI, for all its power, cannot replicate.

Michael Connory is CEO of Security In Depth.
