Artificial Intelligence (AI), often built on machine learning, is a way of programming computers to perform tasks that normally require human intelligence, and it has recently become a topic of widespread public discussion.
This is mainly due to the rapid pace of technological advancement in recent years, which has made AI seem a more imminent reality than ever before.
AI will become even more important in the future as businesses look to automate more tasks. For example, industry estimates suggested that by 2023 AI would be managing around 30% of all customer service interactions.
As AI becomes increasingly prevalent in our society, an important question arises: can AI make mistakes?
Yes, it certainly can. In fact, AI may be more likely to make mistakes than humans, because it often relies on incomplete or inaccurate data.
To compensate, AI systems constantly learn and evolve as they interact with more data. The more data they have to work with, the more accurate their predictions and recommendations tend to become.
That’s why businesses are always looking for ways to collect more data. One way they do this is by using AI-powered chatbots to interact with customers. Chatbots can collect data about customer preferences and behaviour, and that data can improve the customer experience through better recommendations and more personalised service.
Better algorithms trained on more accurate data may reduce (at least to some extent) AI’s mistakes and inaccuracies.
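To make that point concrete, here is a minimal sketch in Python. It uses scikit-learn and a synthetic dataset, both chosen purely for illustration (no real chatbot or customer data is involved), to show how a simple model’s predictions typically become more accurate as it is trained on more examples.

```python
# Illustrative only: synthetic data standing in for "customer behaviour" records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate a synthetic classification problem (e.g. will a customer accept a recommendation?).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train the same model on progressively larger slices of the training data.
for n in (50, 500, 3500):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {acc:.2f}")
```

Running this typically prints a rising accuracy figure as the training size grows, which mirrors the claim above: more (and better) data tends to mean fewer mistakes, though it never eliminates them.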
Who’s accountable when it goes wrong?
The issue is: who will be held liable if an AI system makes a mistake? The user, the programmer, the owner, or the AI itself?
Sometimes, the AI system may be solely responsible. In other cases, the humans who created or are using the AI system may be partially or fully responsible.
Determining who’s responsible for an AI mistake can be difficult, and it may require legal experts to determine liability on a case-by-case basis.
Arguably, it may be difficult to hold individuals to account when there is no direct link between their actions and the AI’s mistakes. On that view, it seems rational and fair to hold the AI liable instead of individuals.
How can we hold AI liable? Can we file lawsuits against AI? We can, but only if AI is recognised as a legal person.
The law or legal system permits filing lawsuits against persons, either legal or natural. Is AI a legal person or entity?
It’s also a grey area whether AI should be treated as a legal entity like a company, or merely as an agent acting on someone else’s behalf. Legal personhood is the concept that grants certain rights and responsibilities to entities such as corporations and natural persons, and proponents argue it could be extended to AI.
But AI systems are currently treated as property, and don’t have the same legal rights and responsibilities as humans or legal entities.
Sceptics therefore believe AI shouldn’t be held liable for its mistakes: it isn’t a conscious being, and can’t be held responsible for its actions in the same way a human can.
Is AI a punishable entity?
On the other side of the argument, some believe AI should be held accountable for its actions, just like any other entity. After all, if AI is capable of making decisions, it should also be responsible for the consequences of those decisions.
But can AI make any decision without the help of the people working behind it? If not, why should AI alone bear responsibility for its mistakes?
Instead, we frequently see principals or employers held responsible (with some exceptions) for the actions of their agents or employees. This is called vicarious liability, a doctrine developed in the UK in the 1842 case of R v Birmingham & Gloucester Railway Co, the first case to hold a corporation liable for the actions of its employees.
Can we consider AI as a corporation or company?
We all know that most AI is built on machine learning, where scientists or programmers write the code and choose the data that shape the system’s behaviour. AI will never work without the systematic coding set by its programmers.
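As a purely hypothetical sketch (the class name, threshold, and risk scores below are invented for illustration and describe no real product), consider how even a trivial automated decision rule is entirely the product of its programmer’s choices:

```python
# Hypothetical toy example: an "AI" decision rule whose behaviour is fixed by human choices.
from dataclasses import dataclass

@dataclass
class ToyDiagnosisModel:
    # The threshold is a design choice made by a programmer, not by the "AI".
    risk_threshold: float = 0.7

    def predict(self, risk_score: float) -> str:
        # The decision rule itself was written by a person.
        return "flag for review" if risk_score >= self.risk_threshold else "no action"

model = ToyDiagnosisModel(risk_threshold=0.7)  # programmer-chosen parameter
print(model.predict(0.65))  # "no action" -- a potential missed case, traceable to that choice
print(model.predict(0.80))  # "flag for review"
```

Every decision this toy model makes, including the case it misses, traces back to a threshold a human chose, which is why AI’s mistakes are hard to separate from the choices of the people behind it.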
For example, ChatGPT is now widely known. It’s an AI system built by natural persons, a group of people working within a company.
Imagine ChatGPT offers its service to a hospital for a yearly fee of $1,000. If its algorithms misdiagnose diseases and patients die as a result, is it not rational to file a case against OpenAI, the company behind ChatGPT?
Ask yourself: how many people could name its individual founders or programmers? Far more know only the collective entity, OpenAI, and its product, ChatGPT.
So, if users suffer harm from ChatGPT, isn’t it rational to sue OpenAI? It is a separate entity, and a separate entity is considered a legal person, as held in the well-known case of Salomon v A Salomon & Co Ltd [1897] UKHL 1, [1897] AC 22.
In line with this reasoning, some jurisdictions are starting to explore the concept of granting legal personhood to AI systems in certain circumstances.
In addition to the legal entity’s liability, responsibility may be extended to natural persons where mistakes or errors are attributable to their explicit consent, connivance, or neglect.
Whether liability falls on AI or on individuals, AI remains a fantastic tool that can help us in various ways.
However, there are also some risks associated with its use. We need to be aware of these challenges, and take steps to mitigate them.