Artificial intelligence has made an undeniable mark on our world, reshaping the workplace while unlocking new opportunities for how we live our lives.
DC Alliance – which recently launched GPU-as-a-Service (GPUaaS), a new AI service supported by SingTel – brought together industry leaders, AI experts and decision-makers at the inaugural AI Forum, ‘Embracing AI’, at Murdoch University on Thursday 21 November.
Igniting thought-provoking discussion on the role of AI applications in Western Australia, the forum explored cutting-edge solutions and emerging business opportunities in the evolving AI landscape while putting a spotlight on the most critical issues facing AI now and in the future. Panellists discussed a range of issues and challenges, from operational efficiency and enhanced customer experiences to data governance, ethics, and accountability in AI.
Arnold Wong, a Fellow of the Australian Computer Society and a leader in the information and communications technology sector with more than 30 years’ experience across technology, education and healthcare, opened the forum with a reminder of how quickly AI is reshaping the way we live, work and operate in the world. “AI has undeniably become a fundamental part of our lives, making it essential for us to embrace its impact,” he said.
The power of AI
ChatGPT is an example of how quickly the technology is changing, transforming into an indispensable tool that can enhance workplace productivity while adding layers of complexity in governance, ethics and risk.
Mr Wong reiterated that it’s more important than ever to embrace not only what AI has the power to do, but to consider what it should do, while shaping it into the technology that will benefit us the most.
An executive member of the global ethics committee of The Adecco Group, Joshua J Morley, Global Head of AI, Data & Analytics at Akkodis and Chair of Responsible AI, opened with an important reflection: “We are on the brink of one of the most significant periods of technological disruption we have ever experienced as a species,” he said.
Mr Morley, an award-winning innovator leading over 1000 experts around the world, discussed the power of AI and the rapid evolution of artificial general intelligence (AGI) and superintelligence, which have the potential to exceed human intelligence.
“AI is an existential threat to businesses. You need to transform along with your competitors,” Mr Morley warned. He referenced the dramatic decline in value of tech company Chegg, which lost 99 per cent of its worth (US$14.5 billion) after the rise of ChatGPT.
Data governance
Mr Morley emphasised the importance of responsible AI implementation, which includes governance and risk management, cultural adoption and change management, strategic alignment, leadership buy-in, and training and development.
"The most important factor in deriving AI success in this transition is the human one. Training people to use the technology responsibly, safely and efficiently will yield more returns than diverting those resources to a slightly more state of the art system. An adaptive, curious and growth mindset is the most valuable skillset for the future in the age of intelligence," he said.
Krista Bell, director of data governance at Curtin University, expanded on why data governance and risk management is essential for making AI trustworthy, ethical, and secure. “AI isn’t just another technology. It’s a nascent and new area that requires careful thought. It’s important to address the expansive nature of AI and call out now those nuanced AI risks and unique considerations.”
Ms Bell emphasised that data privacy, security, accountability, fairness, transparency, bias detection, and stakeholder engagement are all critical areas of focus when implementing AI solutions. “Data is the foundation of AI. It is important to ensure you have a reliable data pipeline to collect, clean and preprocess data,” she said.
Systems modelling
David Lucido, founder and CEO of Sentient Hubs, a systems simulation and impact modelling hub, discussed the importance of systems modelling in addressing the complex challenges AI presents.
“We are facing a whole series of real problems that are dynamic and interdependent,” he said. He referenced a “polycrisis”, a cluster of related global risks with compounding effects, where the overall impact exceeds the sum of each part. “It’s a complex and dynamic set of challenges, choices and impacts,” he said.
He added that AI alone cannot solve a company’s strategic issues. Instead, businesses must take a holistic approach, considering the interconnectedness of global risks and the dynamic nature of AI technologies, while considering the “butterfly effect” of decisions on future outcomes.
“Better outcomes require holistic modelling and simulation of many natural, social and industrial systems and their complex interactions and dynamics,” he said.
GPUaaS, for example, is transforming the way businesses approach AI infrastructure by giving them access to high-performance computing resources for machine learning, deep learning, and other data-intensive applications.
Perils and pleasures
SJ Price, a partner and AI practice lead at law firm Stirling & Rose, discussed the “perils and pleasures of generative AI”.
“The key thing with artificial intelligence is you’re trying to get the benefits out of artificial intelligence, but you also want to be able to protect your business,” said Ms Price, who sits on the WA Data Science Innovation Hub Advisory Board, the Law Council of Australia’s Digital Commerce Committee, and Murdoch University’s AI Competency Centre.
She discussed the significant opportunities AI offers, such as increased productivity, enhanced employee satisfaction, and improved customer engagement. However, she also pointed to the risks businesses face if they fail to implement AI responsibly.
These risks include intellectual property theft, data breaches, ethical dilemmas, and over-reliance on AI systems. Yet, she added, the biggest risk to business is not using AI at all.
“Governance is about protecting your people. We need to be thinking about AI not only as it is now, where it makes mistakes, but how it will be in the future. It’s so important because it’s here to stay,” she noted.
Ms Price emphasised that businesses have “a huge responsibility to comply with guardrails” while embedding legal safeguards into their AI strategies. These safeguards include having meaningful human oversight, establishing clear accountability and transparency, and proactively managing risks and compliance related to data privacy and cybersecurity.
“We need to have the law embedded into AI and we need to be thinking ahead,” she said. It also includes asking ethical questions around bias to ensure AI reflects a broad set of perspectives rather than narrow, idealised views. “The question to ask around bias is, does it represent the real world, or your idealised view of the world? These are very difficult and complex questions,” she added.
“The pleasure and perils of AI is to do so safely, carefully, and with the support of your people.”