Australia is no stranger to the misfortunes of automated technology blunders.
It wasn't too long ago that we felt the repercussions of the robodebt scandal, which used algorithms to calculate welfare overpayments and mistakenly issued false debts to thousands of welfare recipients. This caused widespread distress and financial hardship and prompted a royal commission into the robodebt scheme.
More recently, elsewhere in the Commonwealth, the UK's Post Office scandal was exposed, uncovering the wrongful criminal conviction of hundreds of postmasters over a 20-year period. This was primarily due to flaws in the postal service's digital system used for transaction management.
Another example involves a prominent international airline, which was recently ordered to pay damages after its AI-powered chatbot misled a customer about its bereavement policy.
Because of examples like these, trust in automated systems is under scrutiny - particularly as AI enters the agent economy.
The recent release of GPT-4o makes AI more capable than ever of holding agent-like, instantaneous, emotional conversations with its users. On Hugging Face - the AI community-building platform - there are over half a million models ready to offer hyper-personalised 'agent' services.
Although we've developed an understanding of what rigorous testing of AI itself requires, the hallucinations we've seen in the past raise big questions about the data environments we've developed, or are developing, to feed the burgeoning agent economy.
Because, when it comes to AI, the tech is only as good as the data at its disposal. And this is why most Australian companies are not ready to implement it.
While AI means different things to different companies, an ADAPT report found just nine per cent of Australian organisations feel prepared for it, while a further 30 per cent say they are 'fully unprepared'.
According to the Australian federal government's Export Finance Agency, security threats, poor data quality, and privacy concerns are slowing the adoption of AI.
While there is a place for regulation to evolve and mature this conversation, my conversations with local executives suggest the common symptoms inhibiting businesses from capitalising on AI responsibly and profitably are the data silos and tech redundancies found in their digital operations.
Tuning data
IDC reports that as much as 68 per cent of company data goes untapped or wasted, suggesting organisations understand only about a third of what is happening in their business. AI thrives on information, so this degree of darkness over the quality of, and access to, data is a major risk.
It's one thing for a young business, but companies in operation for years, let alone decades, hold a trove of operational and customer data, much of it hidden away in locked-up silos.
To overcome this challenge, businesses require a digitally connected core to create a layer of assurance that the data being leveraged and communicated by its systems is representative and trustworthy.
Here's where the concept of a context pipeline becomes relevant. This involves a data layer that is interconnected, clean, accurate, secure, and formatted for effective synchronisation, which supplies relevant enterprise data to AI systems. In the case of generative AI applications, this setup helps ensure the technology provides precise and pertinent responses to user queries.
Fail to establish such a data framework and be prepared for the consequences when users find the AI incomplete or misleading.
Technical debt-ache
Meanwhile, businesses are already sweating in a bath of technical debt, and this is expected to rise under a culture of AI experimentation, particularly in the agent economy.
In the corporate world, technical debt can be compared to makeshift plumbing fixes that postpone necessary renovations. It is the accumulation of bad design, bad code, and corner-cutting accrued over years of operation, often under pressure to rush projects out the door.
According to McKinsey, large IT systems can accumulate technical debt amounting to at least 20-40 per cent of the value of their existing codebase. This translates to millions in lost opportunities.
So, just as businesses manage financial capital, they need a strategic approach to assessing and mitigating technical debt - before they add to it with new AI 'shinies'. This involves transparent valuation at the application level, identifying redundancies, gaps, and potential weak points that could undermine AI deployment.
When AI is part of a clean and cohesive environment, enterprises have the right level of business intelligence to act responsibly and deliver profitably. Overlook the back-end systems and processes that AI feeds, and you render the tool relatively useless.
- David Irecki is director of solutions consulting for Asia Pacific and Japan at Boomi.