AI is in trouble with the law

The relationship between artificial intelligence (AI) and the law is complex and uncertain. To start, it is a two-way relationship: there are, and more importantly will be, laws that govern AI, and AI is itself being used in the legal process.

AI used by the law

The days in which a major part of any lawyer’s job was to remember vast numbers of legal documents and decisions are effectively over: AI offers powerful natural language search, coupled with the writing ability to communicate what was found. However, generative AI suffers from hallucinations, that is, outputs that are not factual. For example, AI may “find” relevant historical cases by simply making them up. To prevent such errors from causing harm, AI outputs must be carefully checked by human experts before they are used in any way.

Used and checked properly, AI can provide a valuable 30-40% productivity gain for legal case officers. These time savings are so great that we can now do things that previously were not done at all because they would have been prohibitively expensive or time-consuming, such as searching through all previous cases in a country.

Laws about AI

Let us now turn to AI as a field that needs to be regulated. Clearly, AI raises many questions that lawmakers must answer.

Can AI models be trained on copyrighted material? If a model produces material that is very similar to copyrighted material, does it owe a license fee? How may creators opt out of having their property used for training purposes? If the use of an AI model results in some form of damage, who is liable and to what extent? Such damage can range from copyright infringement to a death caused by an autonomous vehicle.

If the data from country A, labelled in country B, is used by the AI training company in country C to make a model that is used in country D by a computer data center in country E, which jurisdiction’s laws apply?

The answers to these questions, and many others, are currently not known in most countries around the world. We do not know what is permitted or forbidden. In most cases, we do not know what the penalties are even if we know what is forbidden. For example, the executive order by the President of the USA did not spell out any penalties.

Most jurisdictions are new to these problems and international coordination is virtually non-existent. The EU AI Act effectively assumes that consumers must be protected from AI companies, whereas the US executive order implies that companies need only some nudging to act appropriately and emphasizes maintaining the country’s economic lead in the field. In short, the USA prefers incentives to act well, while the EU prefers punishments for bad actors.

Other legal codes exist, for example in China and Russia, but it is not clear whether those laws will be enforced in the spirit in which they were written. Legal systems there have traditionally favored domestic companies and may use AI laws more as an economic weapon than as a protection against misuse.

For companies making AI models and tools, for companies buying those models and tools, and for consumers exposed to it all, this legal uncertainty is risky. It is possible that in a few years, the models we are using today will be illegal under copyright law. Models trained on far less data would be less accurate and effective, and would thus provide lower business value to users. Furthermore, this uncertainty makes many people inclined to wait and see.

The lack of international coordination on laws is at fundamental odds with the international nature of computer software and the cloud. While this has troubled lawmakers since the advent of the internet, AI increases the challenge many-fold. Data and insights from one country are absorbed by a model and transferred to another without clear provenance, making them virtually impossible to police.

Loopholes in the international legal system will lead to a situation similar to that of corporate taxation, in which sufficiently powerful companies end up paying taxes to no one and obeying the laws of no country.

A way forward

To lawmakers, I suggest consulting closely with industry experts who can provide insights into the real capabilities of AI systems, so that laws reflect what these systems can actually do. Currently, the lack of technical understanding in the legal system renders some regulations irrelevant. I also urge international cooperation to arrive at standards that hold for everyone.

To corporations, I suggest creating internal processes to document how data was obtained and used, in preparation for disclosure and compliance requirements. Corporations should also create and empower ethical AI review boards that keep a close eye on this evolution.

Finally, it is my opinion that the next great evolutionary step in AI will not be technical, but legal.

Patrick Bangert

Patrick is the Senior Vice President for data, analytics, and AI at Searce, which provides professional services for cloud applications. He heads the profit center responsible for all data science projects globally. Patrick holds a PhD in AI and has over 20 years of business experience.
