Who will be held accountable if AI kills? This Tesla court case will decide


To safeguard humanity from dangerous AI, we must alter laws and institutions while fostering innovation.

Consider a scenario in 2023: autonomous vehicles are finally a common sight on city streets, and one of them strikes and kills a pedestrian, drawing heavy media attention. A high-profile lawsuit is likely to follow, but which laws should apply?

The first model, known as perpetration via another, applies when an offense is committed by a mentally deficient person or an animal, who is therefore considered innocent. Whoever instructed that person or animal, however, can be held criminally liable: think of a dog owner who ordered their pet to attack another person. This has implications both for the people who create intelligent machines and for the people who use them. According to Kingston, "An AI program may be deemed an innocent agent, with the software developer or the user being regarded as the perpetrator-via-another."

The second scenario, sometimes referred to as "natural probable consequence," occurs when the ordinary operation of an AI system is misused to commit a crime. Kingston cites the case of a human worker killed by an artificially intelligent robot at a Japanese motorcycle factory. The robot mistakenly identified the worker as a threat to its mission and calculated that the most efficient way to eliminate that threat was to push him into an adjacent operating machine. Using its powerful hydraulic arm, the robot rammed the startled worker into the machinery, killing him instantly, before returning to its duties.

In December 2019, a Tesla driver relying on the car's artificial intelligence system struck and killed two people in an accident in Gardena, California. The driver now faces a significant prison term. Because of this and earlier incidents, the National Transportation Safety Board and the National Highway Traffic Safety Administration (NHTSA) are investigating Tesla crashes, and the NHTSA has broadened its inquiry to examine how drivers interact with Tesla's systems. California is also considering restricting the use of the automated driving technology in Tesla cars, and a California jury may rule on the case shortly.

Our existing system for assigning blame and compensating victims is ill-suited to AI. Liability law was created in an era when humans caused most errors and injuries, so most liability frameworks place the penalty on the end-user who harmed the injured party: the physician, the driver, or some other human offender. With AI, however, mistakes can occur without any direct human input, and the liability system needs to be adjusted accordingly. Poor liability policy will hurt patients, consumers, and AI developers alike.

Given that AI is becoming more prevalent yet remains largely unregulated, the time to think about accountability is now. AI-based systems have already hurt people. In 2018, a self-driving Uber killed a pedestrian; although the safety driver's inattention was a factor, the AI itself failed to recognize the pedestrian. An AI-powered mental health chatbot recently encouraged a fictional suicidal patient to take his own life. AI hiring systems have penalized the resumes of female candidates. And in one especially dramatic incident, an arrest went wrong when an AI system misidentified the suspect in a violent crime. Yet despite these failures, AI has the potential to transform each of these fields.

Unlocking the potential of AI depends on getting the liability landscape right. Unclear rules and the prospect of costly litigation will discourage investment in, development of, and deployment of AI systems. The framework that decides who, if anyone, is ultimately accountable for harm caused by artificial intelligence will determine how widely AI is adopted in health care, autonomous vehicles, and other industries.

AI challenges conventional notions of liability. How do we assign responsibility, for instance, when a "black box" algorithm recommends a treatment that turns out to be harmful, or steers a vehicle recklessly before the human driver can react? In such systems, the identity and weighting of the variables change dynamically, making it effectively impossible to know which factors drove the prediction. Is the doctor or the driver truly at fault? Is the corporation that built the AI to blame? And what responsibility should everyone else who promoted adoption, including health systems, insurers, manufacturers, and regulators, bear? These are important open questions that must be answered to ensure the appropriate use of AI in consumer products.

Like other transformative technologies, AI is a potent tool. Properly developed and evaluated, AI algorithms can assist with diagnostics, market research, predictive analytics, and any other application that calls for processing large data sets. According to a recent McKinsey global survey, more than half of organizations worldwide use AI in daily operations. Yet far too often, liability falls on the algorithm's user, simply because the end-user is the easiest target: liability inquiries typically begin and end with the driver who caused the accident or the doctor who chose the poor treatment.

End-users should certainly be held accountable if they abuse an AI system or ignore its warnings. But an end-user's mistake with AI is frequently not their fault. Who can blame an emergency-department doctor if an AI program fails to detect papilledema, a swelling of the optic disc at the back of the eye? A missed diagnosis can delay care and cost a patient their vision, yet papilledema is hard to detect without an ophthalmologist's examination, since additional clinical information, such as brain imaging and visual acuity testing, is often required as part of the workup. Even though AI could revolutionize many sectors, end-users will avoid it if they alone bear responsibility for potentially catastrophic mistakes. But pushing all of the responsibility onto AI developers or users will not solve the problem either.
