Marvin Minsky, credited as a founding father of Artificial Intelligence (“AI”), died at the age of 88 on January 24, 2016. The cause of death was a cerebral hemorrhage. Interestingly enough, Minsky compared the human brain to a machine whose functioning could be studied and digitally replicated. He believed that this replication would help humans better understand the brain and its higher-level functions, and that there would come a time when machines would rival human intelligence. He was the last of the founding fathers of AI, a group that included Alan Turing, Allen Newell, Herbert A. Simon, and John McCarthy.

Hailed as the technology of the future, AI is founded on the premise that once an AI system is initially programmed and developed, it can continue to run on its own, learn from its environment, and evolve. AI would be dynamic and versatile enough to solve complex problems through its own learning. It is important to note, however, that the initial programming of an AI system comes from humans, and is therefore bounded, at the outset, by the limits of human innovation. The ability of AI to learn from its environment (via data inputs) is called “machine learning.” AI, however, may come to act in ways not initially programmed, intended, or foreseen.
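To make that definition concrete, here is a minimal sketch, in Python and with purely hypothetical numbers, of what “learning from data inputs” means at its simplest: the value the program ultimately acts on is produced by the data it observes, not written in advance by the developer.

```python
# A minimal, hypothetical sketch of "machine learning": the program's
# eventual behavior is shaped by data inputs, not by fixed rules.

def update_estimate(estimate, observation, learning_rate=0.1):
    """Nudge the current estimate toward each new observation."""
    return estimate + learning_rate * (observation - estimate)

estimate = 100.0                                      # starting point written by a human
for observed_price in [120.0, 135.0, 110.0, 140.0]:   # data from the environment
    estimate = update_estimate(estimate, observed_price)

print(round(estimate, 2))  # 109.19 -- a value produced by the data, not by the code
```

Real machine learning systems use far richer models, but the legal significance is the same: the behavior the system settles on comes from its environment, not from any single line its developer wrote.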

Let’s begin with a hypothetical:

A furniture manufacturing company (“Company”) begins using an AI system that buys materials for production and schedules the deliveries of its finished products. The finished goods are sold and shipped in bulk to furniture stores or individually to online consumers. As time goes on, the AI continues to evolve through the machine learning process. Everything runs smoothly until a lumber shortage causes Company’s main supplier to raise its prices. Company has an agreement in place to purchase 50% of its lumber from the main supplier. The AI is coded to decrease expenses and maximize profit for Company, and it typically buys lumber at pre-calculated lower price points. Despite the agreement, the AI purchases less from the main supplier during the shortage and a larger amount from a competitor at a cheaper price. Even with these choices, Company experiences a shortage of materials and cannot fulfill every order it has for the month. The AI decides to ship the bulk orders first, to the highest-paying customers. Eventually it runs out of inventory and fails to fulfill the orders of smaller stores that have built strong relationships with Company over more than 10 years. Finally, malware finds its way into the system and releases the private information of companies and individuals who have made purchases from Company.
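To see how the breach in the hypothetical could happen, consider a minimal sketch of the purchasing logic, with all names, prices, and quantities hypothetical. The only objective actually coded is minimizing cost; the 50% supply agreement binds the AI only if a human encodes it as a constraint.

```python
# A minimal, hypothetical sketch of a purchasing routine whose only
# coded objective is minimizing cost.

def allocate_lumber_order(total_units, suppliers, contract_minimums=None):
    """Fill an order at the lowest cost, cheapest supplier first.

    suppliers: list of (name, price_per_unit, available_units) tuples.
    contract_minimums: optional {name: fraction_of_total} constraints,
        e.g. {"MainSupplier": 0.5} for the 50% supply agreement.
    """
    order = {}
    remaining = total_units

    # Contractual minimums are honored only if a human encoded them.
    if contract_minimums:
        for name, fraction in contract_minimums.items():
            units = int(total_units * fraction)
            order[name] = units
            remaining -= units

    # Fill whatever remains from the cheapest suppliers first.
    for name, price, available in sorted(suppliers, key=lambda s: s[1]):
        if remaining <= 0:
            break
        units = min(remaining, available)
        order[name] = order.get(name, 0) + units
        remaining -= units
    return order

suppliers = [("MainSupplier", 12.50, 1000), ("Competitor", 9.75, 1000)]

# As coded, the AI buys everything from the cheaper competitor; the 50%
# agreement is breached because it was never part of the objective.
print(allocate_lumber_order(800, suppliers))
# {'Competitor': 800}

# The breach disappears only once the contract is encoded as a constraint.
print(allocate_lumber_order(800, suppliers, {"MainSupplier": 0.5}))
# {'MainSupplier': 400, 'Competitor': 400}
```

The sketch is deliberately simplistic, but it captures the hypothetical’s core tension: the program carries out its coded objective flawlessly, and the “wrong” behavior emerges from what was left out.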

Within this short story, many legal issues are raised, such as: Who is at fault, and who is liable? Does product liability law apply, and does AI fall within its scope? Is the AI at fault, or is the person or entity who designed it? Can the AI itself be held liable? If the AI made its decision based upon its own self-produced code, can the developer be held liable? Who gets to decide how to initially code the AI for situations like this?

Keeping this in mind, this article will examine and highlight some of the key legal implications surrounding AI.


Regulation, Liability, and Data Privacy

Who’s at fault?

This is perhaps one of the most difficult questions the legal community faces when it comes to AI and allocating liability. If an AI system can continually learn and evolve through the information it receives and through its own experiences, can the developer be held liable when the AI system acts in ways the developer did not foresee or intend? Looking back at the story above, should the AI be held liable for the breach of contract with the main supplier, or should Company be held at fault? Is it a flaw that the AI did everything it could to reduce costs and maximize profits? Or is it the fault of the original programmer? The difficult task of determining who should be held liable will ultimately fall upon the regulatory bodies and systems in place to handle such issues.


Product or Service: Identifying Liability Standards for AI

Determining what exactly AI is for legal classification purposes can be challenging. If an AI is incorporated within a logistics system like the one in the story above, would it be subject to product liability standards? Or would the AI be classified as a software service? The distinction is important in determining which legal standards govern: strict liability applies when there is a flaw in the design, manufacture, or warnings of a product, while the negligence standard is used for services. Some courts distinguish the object that holds the software (a product) from the software itself. Some companies may accept liability for damages caused by their AI; absent such an agreement, however, fault will need to be allocated among the different parties involved in the manufacture-to-sale process. A further concern with AI products is whether the user is in control of a product merely assisted by AI, or whether the AI itself is solely in control of the product’s operation.

A recent case demonstrated how product liability claims involving AI may evolve as the technology continues to develop. In Nilsson v. General Motors, LLC, a motorcyclist filed suit for injuries sustained when an autonomous vehicle allegedly veered into his lane. The plaintiff claimed only general negligence: that the manufacturer had breached its duty of care because the vehicle had driven itself in a negligent manner. In its answer, General Motors admitted that the vehicle itself was required to use reasonable care when driving. The case settled before moving past the pleading stage, but it raised two issues: (1) whether AI can be considered an actor and, if so, what the applicable standard of care would be (a “reasonable machine” standard?); and (2) foreseeability with respect to an AI that is intended to act autonomously.


Regulations

Various forms of regulation have been proposed with respect to AI and its growing usage around the world. One possibility, set forth and enacted in Russia, is to apply a type of ownership similar to that of an animal, since both are largely autonomous: the owner would be held liable for any damage caused by the animal (the AI). This approach has limitations, however. It is not viable for criminal law, and because such laws were meant for household pets not expected to cause much harm, they would not fairly represent AI and its capabilities. There have also been suggestions to enact laws similar to the regulation of wild animals, but because of the stringent nature of such rules, many fear they would deter developers from introducing AI due to liability concerns. As a result, this may stunt technological innovation and growth.

Another popular idea is to regulate AI in a manner similar to legal entities. Because legal entities are merely constructs of the law, a similar status could be given to AI, which would make the laws surrounding AI regulation clearer. The main concern with this approach, however, is that when it comes to determining liability for entities, there are typically one or more people acting as representatives of the company. This brings us back to the dilemma of whether those representing the AI can be held liable for the AI acting in unforeseen ways.

Overall, the regulatory framework for handling AI is still being developed, as the unique nature of the technology does not allow for a simple construction of laws. Whatever regulations are adopted, the complexities associated with AI and its ability to learn apart from its original developers mean the legal field will need to pay close attention to any changes or further developments AI may bring.


Data Protection and Privacy

With the expanding use of data in today’s society, the protection and privacy of personal data has become an integral part of data regulation. Data protection laws exist worldwide and generally apply to the collection, use, processing, disclosure, and security of personal information.


Fairness Principle. Many of the existing data protection laws require the processing of personal information in a fair manner. This principle allows individuals to decide how their personal information is used. With respect to AI and the fairness principle, the main challenge is preventing machine learning algorithms from incorporating biases, whether it be from human error in the development stage, incomplete data inputs, or improper use of the data.


Purpose Specification Principle. Data protection regulations typically require that personal information be collected for a specific and legitimate purpose. AI challenges this principle in that it is impossible to predict how an AI might learn and change its own algorithm, and such development could lead the AI to use personal information for a new purpose. Developers will therefore need to constantly monitor their AI systems and their use of information in order to adhere to regulations.


Data Minimization Principle. This principle requires that no more personal information be used than is necessary to achieve the stated purpose, and that the data be kept for as short a time as possible. As with the purpose specification principle, because an AI can develop on its own, it may prove difficult to minimize data use should the AI determine that holding more data for longer periods is necessary.


As AI continues to integrate into more facets of business and everyday life, it will also become more involved in the handling and use of personal information. The General Data Protection Regulation (“GDPR”), which governs data protection and privacy for individuals in the European Union and the European Economic Area, is a good source to follow. The GDPR will likely continue to develop in response to the changing realm of data protection, particularly within the AI industry.


Intellectual Property

Patent

AI can be patentable: the United States Patent and Trademark Office (“USPTO”) recognizes AI through the designation of Class 706 (Data Processing: Artificial Intelligence), and it uses two examination units to review AI-related applications. There are drawbacks, however: the patent application process can take several years, and it requires public disclosure of the invention.


Copyright

Copyright protection is available for certain elements of AI, including the source code and visual elements of an AI program. Original source code is protectable as a computer program under 17 U.S.C. § 101, and both source code and object code are copyrightable as literary works if fixed in a tangible medium of expression. Drawbacks of copyright include: protection is limited to the source code and does not extend to other aspects of the system, such as hardware; fair use may prevent certain actions from being deemed copyright infringement; and owners must re-register every version of the software if continued registration is sought.


Trade Secret

AI can also be protected as a trade secret. Trade secrets are protected at the federal level under the Economic Espionage Act, as amended by the Defend Trade Secrets Act of 2016, and at the state level under state statutes. The main drawback to trade secret protection is that the owner must make significant, ongoing efforts to obtain and maintain secrecy.


Final Thoughts

The digital revolution is in full force and effect, and there are no signs of it slowing down. AI is expected to outperform humans in many activities in the next ten years, further supporting the need for regulatory frameworks. As illustrated above, the legal implications surrounding AI will need to be closely monitored, and as the technology becomes more sophisticated, the regulations overseeing it will need to follow suit.