Is an AI winter coming? Navigating the cycles of AI hype and just regulation
Ridoan Karim
When OpenAI launched ChatGPT in November 2022, it provoked extreme reactions, ranging from “Oh my God! It’s incredible – expand it!” to “Oh no, that’s horrible – ban it.”
Since then, the hype surrounding AI has been immense. For instance, Elon Musk made some bold claims last year. He said Tesla would be fully self-driving within a year or two, that AI would surpass human intelligence next year, and that by 2040, an army of one billion AI-powered robots could replace human workers.
Such predictions suggest AI development is on an unstoppable exponential trajectory that we humans can’t control.
However, many experts argue this is far from the truth, pointing to concerns about AI stagnation due to diminishing returns from larger datasets and rising computational demands.
Modern AI systems depend on deep learning and neural networks, trained on vast amounts of data, to identify patterns and make predictions.
However, the benefits of increasing dataset sizes and computational power are diminishing. For example, improving an AI’s recognition accuracy from 60% to 67.5% required quadrupling the training data – a clear sign of diminishing returns.
Additionally, the computational cost of each further improvement grows far faster than the gain it delivers, making continued advances increasingly expensive and energy-intensive.
This trend is evident in the relatively modest improvements of newer AI models over their predecessors – the step from GPT-3.5 to GPT-4, for example – despite massive increases in training data and compute.
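To make the shape of those diminishing returns concrete, here is a minimal sketch assuming a hypothetical power-law learning curve, in which accuracy improves as a power of dataset size. The exponent and constant below are illustrative assumptions, not measurements from any real model; the point is simply that each extra slice of accuracy demands disproportionately more data.

```python
# Illustrative sketch of diminishing returns under an assumed power-law
# learning curve: accuracy(N) = 1 - C * N**(-ALPHA). The constants are
# hypothetical, chosen only to make the shape of the curve visible.

ALPHA = 0.3   # assumed scaling exponent (illustrative, not measured)
C = 0.9       # assumed error constant at N = 1 example

def accuracy(n_examples: float) -> float:
    """Accuracy predicted by the assumed power-law curve."""
    return 1.0 - C * n_examples ** (-ALPHA)

def data_needed(target_accuracy: float) -> float:
    """Invert the curve: examples required to reach a target accuracy."""
    return (C / (1.0 - target_accuracy)) ** (1.0 / ALPHA)

if __name__ == "__main__":
    previous = None
    for target in (0.60, 0.70, 0.80, 0.90):
        n = data_needed(target)
        growth = "" if previous is None else f" ({n / previous:.0f}x the previous step)"
        print(f"accuracy {target:.0%}: ~{n:,.0f} examples{growth}")
        previous = n
```

Under these assumed numbers, the dataset required roughly triples, then quadruples, then grows tenfold for successive ten-point accuracy gains – the same pattern as the 60% to 67.5% example above.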
Is ‘winter’ closing in?
Does this mean, then, that we may be about to witness an “AI winter”? To answer that, it helps to understand what the term means.
According to Professor Luciano Floridi, an AI winter refers to a period when the enthusiasm for AI cools significantly, often due to failed, overhyped projects and economic downturns.
The first AI winter occurred in the 1970s, followed by another in the late 1980s and early 1990s.
During these winters, many AI projects, especially those dependent on government funding and venture capital, experienced significant cutbacks. The trigger was typically overly ambitious AI projects failing to meet expectations, coupled with broader economic pressures.
This led to scepticism and a general reduction in the enthusiasm surrounding AI technologies.
Although some experts predict an upcoming “AI winter”, I don’t think the world will experience one anytime soon. On the contrary, investment in AI is booming at unprecedented levels, and AI technologies are being integrated broadly across sectors.
However, as with any seasonal cycle, I’m sure winter will return, and we’d better be ready. When it comes, it will bring significant financial and socio-political costs.
As AI companies’ revenues have skyrocketed over the past two years, so too have the massive computational costs of running increasingly complex AI models.
Startups might fail, larger companies may cut jobs, and overall financial instability could ensue if AI technologies fail to deliver on their promises.
Despite the hype, many AI companies, including OpenAI, are currently operating at a loss.
This unprofitability fuels fear of an AI winter, and among many challenges, a new concern arises – AI regulation.
The question of AI regulation
With the European Union and other governments and regulatory bodies racing to regulate AI, the question is whether a swift regulatory response – imposing stricter rules on AI development and deployment – might itself leave expectations of the technology unmet.
Many experts believe the forthcoming AI regulations could stifle innovation and lead to conflicts between governmental bodies and tech industries, creating a socio-political tug-of-war over the direction of future technologies.
Then, what should we do about regulating AI? Should we not regulate AI at all?
While we definitely need to regulate AI, any regulation should be grounded in principles of justice – distributional, procedural and recognition justice. By following these principles, we can take a balanced approach that promotes innovation while safeguarding societal interests.
For instance, the distributional justice principle focuses on the equitable distribution of AI’s benefits and burdens. Regulations should ensure that AI technologies do not exacerbate inequalities, but instead contribute to bridging gaps between different socio-economic groups.
For example, AI deployment in public services like healthcare or education should improve access and quality for all, not just a privileged few.
If AI technologies predominantly benefit only certain sectors or demographics, the broader society may grow sceptical and withdraw support, leading to reduced funding and interest.
The procedural justice principle, meanwhile, involves transparent, fair processes in the development, deployment and governance of AI.
Developers must be transparent
Regulations should enforce accountability by requiring AI developers to be transparent about their algorithms’ functions and the data they use. This includes open audits, ethical reviews, and the involvement of diverse stakeholders in the regulatory process.
Additionally, AI investments should be transparent and should deliver benefits for all, not only for a particular society.
Trust is crucial for continued investment and innovation in AI, as stakeholders are more likely to support and engage with technologies they believe are developed and used responsibly.
Lastly, the recognition justice principle involves acknowledging and addressing the potential negative impacts of AI on human lives.
This means regulations should require AI systems to respect and protect individual identities and cultural diversities. AI should not perpetuate stereotypes, nor infringe on privacy or personal dignity.
A risk-based approach to AI regulation is required to ensure AI systems respect and protect diverse human values and cultural norms. Adapting to different social contexts can prevent backlash and the potential stalling of AI advancement due to ethical concerns or public outcry over insensitivity or bias.
Grounding AI regulations in these principles of justice not only addresses immediate ethical concerns, but also strategically positions AI development for long-term viability and support.
This approach can mitigate the risk factors associated with AI winters, such as loss of public trust, backlash against unintended consequences, and uneven benefits leading to disillusionment.
By fostering an environment of trust, equity, and adaptability, such regulations can help maintain the momentum necessary for sustainable advancement in AI.
About the Author
Ridoan Karim
Lecturer, School of Business, Monash University Malaysia
Ridoan is a Lecturer at the Department of Business Law & Taxation, School of Business, Monash University Malaysia. He has taught and researched in the fields of business and international trade law. Prior to joining the Department of Business Law and Taxation at Monash University Malaysia, Ridoan was a full-time lecturer for more than three years at the School of Business Administration, East Delta University, Bangladesh. His research interests include environment and energy law, business law, law and technology, and public policy and governance.