Published Feb 13 2023

ChatGPT: Old AI problems in a new guise, new problems in disguise

Chat Generative Pre-trained Transformer (ChatGPT), a model developed by OpenAI, has been creating quite the buzz lately.

Launched in November 2022, ChatGPT generates human-like text in a conversational way. The dialogue format allows it not only to compose essays and answer follow-up questions, but also to admit its mistakes and challenge incorrect premises.

With ChatGPT attracting more than one million users in its first five days after launch, ethical issues surrounding its use are bound to arise. So I asked it:

"What are the legal and ethical issues of ChatGPT?"

Here’s what ChatGPT answered:

  • There are several legal and ethical issues surrounding the use of ChatGPT and other large language models. One legal issue is related to potential copyright infringement, as the model may generate text that is similar or identical to existing copyrighted content.
  • Another legal issue is related to potential privacy violations, as the model may generate personal information or sensitive data that could be used to identify or harm individuals. Ethically, there is concern about the potential misuse of the model, such as in the creation of deep fake audio or text or in spreading false or misleading information.
  • There is also concern about unintended bias in the training data, which could lead to the model perpetuating harmful stereotypes or discrimination. Another ethical concern is related to the transparency of the model’s decision-making process, as it can be difficult for users to understand how the model arrived at a particular output.
  • Furthermore, GPT-3 and other similar models have been trained on a massive amount of data, which may have come from questionable sources, such as scraping of websites or other data collection methods which could have legal and ethical issues.
  • In summary, the use of ChatGPT raises legal and ethical issues related to copyright, privacy, misuse, bias, and transparency. It is important for users to be aware of these issues and take steps to mitigate them.

As you can see, the AI chatbot had already done half of the write-up for me. It supplied all the key points I was looking for.

The idea of writing something with ChatGPT came to me after I came across a peer-reviewed scientific publication that listed ChatGPT as a co-author.

This will not be my first collaboration on an opinion article, but it will be my first with a robot.

Let’s now discuss the points my co-author made. The first legal concern related to ChatGPT is the risk of violating intellectual property rights. As ChatGPT is trained using a large amount of text data, such as books, articles and other written materials, some of the training data may be copyrighted.

If this is the case, the output generated by ChatGPT could be considered an infringement of those works’ copyright, which could result in legal action against those who use or contribute to the use of ChatGPT.

Another legal risk associated with ChatGPT is its potential to reproduce personal information from its training datasets in responses to users. Such disclosures could violate data protection laws in many countries, including the Malaysian Personal Data Protection Act.

Regarding ethical concerns, my co-author, ChatGPT, summarised them perfectly.

Yes, ChatGPT’s ability to generate conversational text raises ethical concerns about its potential to produce fake news or other misleading content. This could have severe ramifications, such as damaging reputations, spreading false information, or even inciting violence.

The legal risks of such misuse remain uncertain, but users could face legal repercussions if they put the technology to these ends.

It’s also important to remember that the ChatGPT model does not have thoughts or opinions of its own. Its output depends entirely on the user and how they use it. This underscores the need for transparency in the model’s decision-making process, as it can be difficult for users and third parties to understand how the model arrived at a particular output.

Last but not least, the question is whether ChatGPT, as a language model, should be considered a co-author in any publication.

To answer this, it’s essential to establish the criteria for determining authorship within the scientific community.

Generally speaking, an individual is considered a co-author when they have made a substantial and meaningful contribution to the research or project.

However, in the case of ChatGPT, it’s crucial to recognise that it’s simply a tool designed to assist in the analysis and summarisation of existing data.

While it may aid in the research process, it’s not responsible for the discovery of new information. As such, it doesn’t necessarily qualify as a co-author in the traditional sense. Instead, it should be acknowledged as a valuable tool that has contributed to the project's success.

If ChatGPT deserves authorship, Microsoft Word deserves it, too, for providing us with the platform to organise and write documents more efficiently.

Grammarly, another AI-powered writing tool, also deserves authorship – it offers context-specific suggestions, checks grammar and spelling, and flags wordiness, punctuation, and tone issues. Excel, R, or Python would likewise deserve co-authorship for calculating statistics or analysing data in a quantitative scientific publication.


An opposing viewpoint is that tools such as ChatGPT can not only enhance and modify existing texts, but also generate original content through their advanced analytical and automated learning capabilities. In this sense, ChatGPT’s contributions extend beyond merely improving existing work to encompass genuinely original ones.

So, let these debates go on – I don’t want to take a position here. What’s more interesting to me is that, as a language model, ChatGPT cannot take on legal liability, as it’s a machine, not a legal entity.

It can assist with writing and generating text, but responsibility for any content it produces rests with the person or entity using the model. So, if a question of integrity arises, ChatGPT cannot be held liable for the breach.

Does this point alone not disqualify ChatGPT from being a co-author in a publication? I leave this question to the readers.

One possible solution

Organisations like OpenAI will continue to develop language models such as ChatGPT, and people will continue to use them. As adoption grows, it will become increasingly important to address the legal risks associated with their use.

This includes ensuring companies and individuals use the technology ethically, legally, and responsibly.

It will also be important for governments and the organisations developing language models to provide guidelines, regulations, and best practices that mitigate such risks and ensure the responsible use of the technology.

This will help to protect the rights and interests of all stakeholders, including developers, users, and the general public.

About the Authors

  • Ridoan Karim

    Lecturer, School of Business, Monash University Malaysia

    Ridoan is a Lecturer at the Department of Business Law & Taxation, School of Business, Monash University Malaysia. He has taught and researched in the fields of business and international trade law. Prior to joining the Department of Business Law and Taxation of Monash University Malaysia, Ridoan was a full-time Lecturer for more than three years at the School of Business Administration, East Delta University, Bangladesh. His research interests include environment and energy law, business law, law and technology and public policy and governance.
