Professor Jeannie Marie Paterson

Targeting law reform at new digital technology may be a fool’s errand, because legislation risks being out of date by the time it is enacted, according to new research.

A research paper from University of Melbourne Professor Jeannie Marie Paterson examined the legal implications of robo-advice and, coincidentally, supports the government’s technologically neutral approach to the Quality of Advice Review.

The paper, entitled ‘Making Robo-Advisers Careful? Duties of Care in Providing Automated Financial Advice to Consumers’, said law reform is slow, and regulation targeted at new digital technologies may prove to be out of date almost as soon as it is enacted.

“While it would be possible for regulators to draft prescriptive standards to be followed in providing robo-advice, an expectation of a proactive response to foreseeable risks by firms deploying robo-advisers may more effectively be expressed through the open-textured duty of reasonable care,” the paper stated.

Additionally, the paper argued that the automated nature of robo-advice carries risks beyond those posed by human advisers.

“Consumers are seeking personalised advice, with little experience in assessing the value of what is produced by automated processes, and potentially high expectations of the accuracy of what may be delivered,” the paper stated.

“Robo-advice may be tainted by biased or inaccurate data or ill-fitting algorithms to produce low quality, unsuitable or discriminatory outputs.”

Paterson argued in the paper that safeguards are still necessary to ensure consumers are not left worse off by their interactions with robo-advisers.

She reiterated this concern to Professional Planner, pointing again to consumers’ limited experience in assessing automated advice and their potentially high expectations of its accuracy.

Paterson suspects the industry will soon see robo-advisers incorporating large language models like ChatGPT.