AI to Transform Financial Reporting and Auditing, Report Claims

Three-quarters of Australian companies are using or piloting Artificial Intelligence (AI) in financial reporting – and in three years' time the number will be 100%, according to a KPMG International survey.

The biggest leap will be in the use of Generative AI (GenAI), “currently being used by only 9% of Australian companies, but which more than half (52%) say will be their top priority of all technologies in financial reporting by 2027.”

KPMG’s study of 1,800 leaders “across 10 countries including 100 here, finds Australia comparing positively with counterparts.”

The 75% of Australian companies currently “piloting or adopting AI is higher than the global average and stands third, behind only Canada and the UK.”

In Australia more than half (51%) of companies “are devoting 10-20% of their IT budgets to AI, compared with 44% globally, and spending is set to increase further over the next 12 months – 29% expect their AI spend to rise by up to a half, while a further 9% predict an increase of over 50% in that time.”

The study finds that currently ‘traditional’ AI is most “valued in financial reporting – with greatest benefit being in anomaly detection/pattern recognition (Aus 60%, global 64%), followed by robotic process automation, machine learning and deep learning – but over the next 3 years, companies will start prioritising GenAI more than any other technology.”
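As a purely illustrative sketch of the anomaly detection the survey ranks highest – not KPMG's actual tooling – a basic statistical check over ledger amounts might flag entries that deviate sharply from the rest (function and data below are hypothetical):

```python
# Minimal z-score anomaly check over ledger amounts.
# Illustrative only; production financial-reporting tools use far
# richer models than a simple standard-deviation test.

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` standard
    deviations from the mean (a simple z-score test)."""
    n = len(amounts)
    mean = sum(amounts) / n
    variance = sum((x - mean) ** 2 for x in amounts) / n
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical: nothing to flag
    return [i for i, x in enumerate(amounts)
            if abs(x - mean) / std > threshold]

# Hypothetical ledger: six routine entries and one outlier.
ledger = [120.0, 135.5, 128.0, 131.2, 9800.0, 127.4, 133.1]
print(flag_anomalies(ledger))  # → [4], the 9800.0 entry
```

Real systems would, of course, learn what "normal" looks like per account and period rather than apply a single global threshold.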

Shane O’Connor, KPMG Global Audit Head of AI and KPMG Australia Partner, said:

“Our research shows that globally, companies are making significant investments in AI, with Australia at the forefront. This is particularly noticeable in sectors such as Telecommunications, Technology, and Finance, where AI investments are being made to enhance productivity, cut costs, improve customer experiences, and develop new products and services. GenAI is now being prioritized for use in financial reporting, enabling companies to identify anomalies in financial data, generate customized financial reports, assess the efficacy of internal controls, pinpoint inefficiencies in core processes, and benchmark disclosures across their organizations.”

The report also finds that companies “are looking to auditors to lead the way and help transform financial reporting, with the large majority of respondents believing their auditors are either ahead of them, or at a similar level in the adoption of AI for financial analysis. Nearly 85% of business leaders thought their auditors understood their company’s use of AI for financial reporting well.”

Already, three-quarters of Australian and overseas respondents say “the use of AI in the external audit is moderately to very important, but they want to see auditors increasingly using AI to not only improve the efficiency and accuracy of audits, but to transform them into more pro-active, continuous and predictive processes. More than half of companies wanted auditors to prioritize predictive analysis, while 45% of them would like to see real-time auditing throughout the year.”

Shane O’Connor said:

“Audit firms have been using AI in the development of their audit solutions for some years now. Whether it’s through the use of computer vision, anomaly detection or in recent times, generative AI, KPMG has been at the leading edge of responsibly developing AI solutions impacting financial reporting and we are doing so in a highly regulated environment.”

The survey also covered the barriers and concerns “on the road to AI in financial reporting, which can increase with the use of GenAI.”

In Australia, 31% of survey respondents currently “have significant concerns over copyright in AI, but this rises to 40% with GenAI, and for data organization and management this increases from 33% to 43%. For data sovereignty, this concern nearly doubles from 21% to 40% with GenAI. Bias, hallucinations and cyber-security are also issues with GenAI. Australian and global figures were similar.”

Shane O’Connor said:

“The implementation of AI in financial reporting is accompanied by a set of novel risks that organisations must proactively address. When not effectively designed, AI systems may prove to be inconsistent or even infuse bias into the evaluative process. The phenomenon of AI hallucinations is a reality, necessitating continual supervision of algorithms to maintain their long-term trustworthiness, while risks related to data privacy and security amplify when external entities are integrated into AI systems. Organisations must keep abreast of, and adhere to, the ever-changing regulatory landscape.”

