Artificial Intelligence: A Legal Nightmare

Posted by Nicholas Cizin.

Artificial intelligence has the power to transform the business world like no other invention. It gives computers the ability to think and learn for themselves, often better than humans can. A prime example is IBM’s Watson. Watson is famous for its run on Jeopardy!, where it processed, generated, and spoke answers more quickly and accurately than its human opponents. More recently, Watson has been applied to healthcare, helping to diagnose cancer and develop new treatments (1). The positive potential of artificial intelligence for business is enormous. In the coming years, businesses will likely use AI to increase production efficiency, reduce expenses, and develop new products.

Despite such enormous business potential, AI also poses a serious threat. Companies could easily use AI to exploit customers, hack other companies, and steal from competitors. Recognizing both the benefits and dangers of artificial intelligence, lawmakers have a responsibility to pass laws that rein in its destructive power. As the saying goes, with great power comes great responsibility. Laws already regulate technology generally, but none are specific to AI. Courts hold developers liable for harm caused by software, such as robotics control systems, when the developer is negligent or could have foreseen the harm. But because AI can think for itself, there may be no human fault and no foreseeable injury, so traditional tort law would most likely find developers not liable (2). Companies and lawmakers alike agree that new laws must regulate AI; unfortunately, the timing of that regulation is still under debate.

Some believe laws should be implemented only after artificial intelligence affects the business world, with rules created on a case-by-case basis. For example, Thomas C. Henderson, a professor at the University of Utah’s School of Computing, uses a speed-limit analogy to explain his view on AI regulation: “you only impose speed limits once you build cars that can go faster than is safe” (3). Although Mr. Henderson’s position is logical, others believe AI has too much potential for harm to be regulated only after damage has occurred. This is the view of Elon Musk, who argues, “we need to be proactive in regulation [rather] than reactive. By the time we’re reactive in AI regulation, it’s too late” (4). Musk sees the harm AI could cause if it is not regulated from the start. The difficulty is that it is hard to regulate something that has not yet been created. The British Parliament’s House of Commons Science and Technology Committee, for instance, stated that “while it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now” (3). Governments around the globe recognize the challenge of regulating AI in its early stages of development.

In my opinion, AI regulation should combine the two approaches. Where the dangers of AI can be logically foreseen, laws should be put in place before such activity ever develops. This may inhibit some of AI’s potential, but preventing foreseeable harm is worth the cost. Because the technology is still in its infancy, not every harmful use of AI can be anticipated. That is why constant evaluation and case-by-case lawmaking must also actively regulate AI. When a harm cannot be foreseen, as is often true with developing technology, the law must be able to expand and adapt quickly so that the harm is not repeated. In the coming years, it will be interesting to see how laws come to regulate AI. One can only hope it is done successfully.

Nicholas is an accounting major at the Stillman School of Business, Seton Hall University Class of 2020.

Sources: