A secure AI adoption strategy for businesses requires users to take accountability for potential misuse or inaccurate output, both to prevent mistakes and to accept responsibility when errors occur.
Speaking at Wednesday’s Class Ignite 2024 conference, held by the HUB24-owned SMSF administration service provider, CyberCX regional director Aparna Sundararajan said humans must remain responsible for their use of AI within their business in order to ensure cyber security.
As generative AI usage becomes more widespread, “every user will need to get comfortable, accountable and responsible”, she said.
“The financial services, certainly the large banks, are already investing a lot in artificial intelligence,” Sundararajan said.
As these tools evolve and become more advanced, users must remain responsible for their correct use, because the technology can easily produce false information.
Sundararajan pointed to an example from her own experience where she knew the AI had provided a false response.
“Thankfully I knew the answer was wrong. That is where the business risk is very high and then the accountability is so high.”
Sundararajan described AI hallucinations and the problems they can cause when the user fails to detect them. AI models sometimes produce outputs that are not grounded in their training data, or that are wrongly decoded or identified; these are termed hallucinated responses.
Because hallucinations can introduce major inaccuracies and false information, even without any ill intent on the user’s part, the user must remain responsible for information gathered through AI.
Sundararajan gave an example of the kind of error AI models can make, referring to a digital clinical trial in the US where the responses given by the AI lacked citations and were inaccurate.
“It just made up [responses], there was no actual factual resource backing it, and they decided this was not ready for us to use.”
AI tools can also produce biased output, as the training data used by businesses and media often contains inherent biases.
“If the data is wrong or inaccurate or had inherent bias based on the past, then your answers and outcomes will be biased,” Sundararajan explained, adding that this issue can result in false information.
The possibility of AI tools producing false information through hallucination or bias means users are responsible for fact-checking all data and ensuring mistakes do not slip through.
A secure AI adoption strategy for businesses requires absolute focus on cyber security, and Sundararajan emphasised that accountability is a core principle of cyber security.
“Human beings definitely need to be a lot more responsible.”
Other principles include ensuring authorised access, committing to doing no harm and introducing no further risk.
Despite the risks, many businesses are adopting AI tools successfully with a focus on user responsibility.
“The progressive organisations are working on creating [transparency, governance, safety and policies around AI] use and understanding how it works, building more responsibility across the organisation and creating more confidence as an individual,” Sundararajan said.
*Article updated on 19 September 2024 to clarify certain quotes.