Responsible AI is now a governance risk, not an ethics debate

Artificial intelligence is already shaping real decisions that affect people. It influences what content is taken down online, how harmful behaviour is flagged, and how public agencies manage risk.

In many settings, AI is no longer something being tested quietly in the background. It’s already part of how platforms and institutions operate.

For years, responsible AI was discussed mainly as an ethics issue. We talked about fairness, bias, transparency and values. These conversations were important, and they still matter.

But many of the AI failures we see today are not caused by ethical disagreement or technical flaws alone. They happen because responsibility is unclear, oversight is weak and decision-making authority is spread across too many parties.

In other words, AI has become a governance issue.

When AI systems fail, governance usually fails first

In many countries today, AI is used to manage scale. Social media platforms rely on automated systems to process millions of posts every day. Public agencies use AI tools to prioritise cases, monitor online harm and support enforcement work.

When something goes wrong, the first question often asked is whether the model was accurate enough. That question misses the deeper problem. In many cases, the technology could have worked better, but the surrounding governance failed.

Common governance gaps include:

• no clear owner for an AI system

• limited oversight before deployment

• weak escalation when harm begins to appear

• responsibility split between those who build systems, those who deploy them and those expected to regulate them.

These gaps are well-recognised in international policy discussions on AI accountability, including work by the OECD and the WEF AI Governance Alliance.

Lessons from online harm and content moderation

Many of these challenges were discussed in a recent Future Conversations podcast on hate speech, deepfakes and online safety, where researchers and regulators spoke openly about the limits of AI and regulation in practice.

One message was clear. AI moderation tools already exist and are widely used. Machine learning is essential as a first filter for harmful content. The harder problem lies in how these tools are governed.

Content moderation usually works in layers, sketched in the code example after this list:

• Automated systems flag potential harm

• Human moderators review complex or disputed cases

• Regulators step in when platforms fail to act.
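To make the layering concrete, here is a minimal sketch in Python of how such an escalation flow might be wired together. The thresholds, the 24-hour response window and the function names (automated_layer, human_layer, regulator_layer) are illustrative assumptions for this article, not a description of any real platform's pipeline.

```python
# A minimal, hypothetical sketch of the layered moderation flow described above.
# Thresholds, class names and the 24-hour window are illustrative assumptions,
# not any platform's or regulator's actual rules.

from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    REMOVE = auto()          # automated layer acts directly
    HUMAN_REVIEW = auto()    # escalate to human moderators
    ALLOW = auto()           # no action


@dataclass
class Post:
    post_id: str
    text: str


def automated_layer(post: Post, harm_score: float) -> Decision:
    """First filter: a model score decides whether to act, escalate or allow."""
    if harm_score >= 0.95:   # high confidence: remove automatically
        return Decision.REMOVE
    if harm_score >= 0.60:   # uncertain: route to human moderators
        return Decision.HUMAN_REVIEW
    return Decision.ALLOW


def human_layer(post: Post, context_known: bool) -> Decision:
    """Second layer: moderators handle disputed or context-heavy cases."""
    # In practice this is a review queue; here it is a placeholder rule.
    return Decision.REMOVE if context_known else Decision.HUMAN_REVIEW


def regulator_layer(post: Post, hours_unresolved: float) -> str:
    """Third layer: the regulator steps in when the platform fails to act."""
    if hours_unresolved > 24:   # assumed statutory response window
        return f"regulator notified about {post.post_id}"
    return "still within the platform's response window"


if __name__ == "__main__":
    post = Post("p-123", "example flagged content")
    first = automated_layer(post, harm_score=0.72)
    print("automated layer:", first.name)
    if first is Decision.HUMAN_REVIEW:
        second = human_layer(post, context_known=False)
        print("human layer:", second.name)
        if second is Decision.HUMAN_REVIEW:
            print("escalation:", regulator_layer(post, hours_unresolved=30))
```

Even in this toy version, the governance questions sit between the layers: who sets the thresholds, who staffs the review queue, and who decides when the clock for regulator escalation starts.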

Breakdowns occur when one or more of these layers lacks accountability. Platforms may underinvest in local language and cultural context. Oversight may rely on complaints rather than prevention. Responsibility may be passed between companies that build systems, platforms that deploy them and authorities expected to oversee them.

In multilingual and culturally diverse societies, these weaknesses become more visible. Language mixing, slang and context change quickly. Without strong governance, even capable AI systems struggle to keep up.

Where responsibility breaks down

How responsibility is spread around AI systems used in online safety and public services: AI developers, platforms, public agencies and regulators all play a role in shaping how these systems are built and deployed, yet children and the general public experience the consequences with the least ability to influence decisions.

This network helps explain why AI harm is rarely caused by a single failure. AI systems are developed by one group, deployed by another, overseen at some distance and experienced most directly by the public.

When ownership, oversight and escalation are not clearly connected, harm falls into the gaps between institutions.

Child safety shows why governance matters most

The risks are especially clear when children are involved. AI-generated deepfakes and synthetic images have made online abuse easier to create and harder to detect. UNICEF has warned that AI introduces new risks for children that cannot be addressed by technology alone.

A recent example illustrates this clearly. In January 2026, Grok, a chatbot linked to X, faced global scrutiny after reports it could be misused to create non-consensual sexual images, including sexualised images involving minors. In Malaysia, a detailed account of the incident and wider risks was reported by Bernama.

This matters because it shows how quickly harm can move from niche tools into mainstream platforms. A capability that should have been blocked at the design stage can spread widely before safeguards catch up.

That is a failure of governance, not just a failure of detection.

In many countries, laws already prohibit such content. Australia’s online safety framework is set out in the Australian government’s legislation overview and enforced through powers described by the eSafety Commissioner.

In Malaysia, the Online Safety Act 2025 came into force on 1 January 2026, with supporting subsidiary legislation and FAQs released around its commencement. Yet enforcement remains difficult when platforms operate across borders and harmful material spreads faster than regulators can respond.

These examples show that child safety is not just a technology problem. It’s a governance problem.

Public-sector AI carries hidden risks

AI is also being adopted across public services, from education and welfare to digital enforcement and online safety. These systems influence real outcomes for real people.

When public sector AI fails, the impact goes beyond technical performance. It affects trust in institutions.

Yet governance often lags behind adoption. AI systems may be introduced without independent review, without clear accountability for outcomes and without transparent ways for citizens to question decisions. When something goes wrong, a simple question emerges:

Who is responsible?

If institutions cannot answer that clearly, public confidence erodes quickly.

What responsible AI looks like in practice

Responsible AI doesn’t mean avoiding AI. It means governing it properly.

In practice, this involves the following (a simple register-style sketch follows the list):

• clear ownership of each AI system

• defined roles for oversight and review

• documented decision-making and risk assessment

• ongoing monitoring of real-world impact

• the ability to pause or withdraw systems when harm emerges.
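As a thought experiment, the controls above could be captured in something as simple as a governance register entry, with deployment blocked while any control is missing. The sketch below is a hypothetical Python illustration; the field names and readiness rules are assumptions made for this article, not a mandated standard.

```python
# A hypothetical sketch of a governance register entry for an AI system.
# Field names and the readiness rules are illustrative assumptions, not a standard.

from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str
    owner: str                       # clear ownership: a named accountable role
    oversight_body: str              # who reviews the system and its outcomes
    risk_assessment_doc: str         # link to documented decisions and risks
    monitoring_metrics: list[str] = field(default_factory=list)
    paused: bool = False             # the ability to pause or withdraw

    def deployment_blockers(self) -> list[str]:
        """Return governance gaps that should block or pause deployment."""
        blockers = []
        if not self.owner:
            blockers.append("no clear owner")
        if not self.oversight_body:
            blockers.append("no defined oversight")
        if not self.risk_assessment_doc:
            blockers.append("no documented risk assessment")
        if not self.monitoring_metrics:
            blockers.append("no ongoing impact monitoring")
        return blockers


if __name__ == "__main__":
    record = AISystemRecord(
        name="content-triage-model",
        owner="Head of Trust and Safety",
        oversight_body="",                   # missing: oversight not yet assigned
        risk_assessment_doc="risk/triage-v1.md",
        monitoring_metrics=["false-removal rate", "appeal outcomes"],
    )
    print(record.deployment_blockers())      # ['no defined oversight']
```

The point of such a record is not the code itself but the discipline it encodes: a system without a named owner, a defined oversight body, a documented risk assessment and live monitoring of real-world impact should not reach users.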

It also means recognising that not all risks can be solved by better models. Decisions about acceptable use, escalation and enforcement are human decisions. They require leadership at senior and board level.

Across many jurisdictions, regulatory expectations are already shifting. Online safety laws, platform obligations and public-sector guidelines signal that responsible AI is moving from voluntary principles to enforceable governance.

From discussion to decision-making

Responsible AI has moved from discussion to decision-making.

The key questions are no longer abstract:

• Who owns the system?

• Who oversees it?

• Who acts when harm begins to appear?

Institutions that cannot answer these questions clearly will face regulatory, reputational and trust risks, regardless of how advanced their technology becomes.

As AI becomes more embedded in public life, responsible AI must be treated as a core governance responsibility. This is how trust is built, harm is reduced and innovation can continue in a way society is willing to accept.
