As artificial intelligence (AI) continues to advance at a rapid pace, permeating ever more aspects of our daily lives, a pressing question emerges: can a machine be taught to make ethical decisions? This has become one of the most contentious topics among technologists, ethicists, and policymakers as the world confronts increasingly autonomous AI-driven systems.
Ethical AI is about developing machines capable of making morally sound decisions. The problem begins, however, with how to operationalize concepts of ethical behaviour so that they are meaningful to a machine.
Ethical AI is an approach to designing systems whose decisions are grounded in principles such as fairness, accountability, and transparency. In practically every field where it is deployed, from healthcare to self-driving cars, ethical AI must confront difficult moral problems. The challenge is that ethical norms differ from one culture to another, from one situation to another, and even across individual values and beliefs, which makes building AI systems that behave ethically everywhere extremely difficult.
For instance, consider an autonomous car that suddenly finds a pedestrian in its path. Should the car prioritize the safety of its passengers or of the pedestrian? Such choices expose the difficulty of programming machines to make morally sound decisions.
Quite possibly the biggest challenge in developing ethical AI systems is the ambiguity of ethical decision-making itself. Ethical dilemmas often arise in situations where even humans see no clear right or wrong answer, and culture and individual predispositions heavily influence what is judged correct or incorrect.
Further, AI systems rely on data to form decisions, so they inherit the prejudices and gaps in the data they were trained on and carry them into their decisions. Real-world examples have shown how AI can perpetuate discriminatory practices:
Amazon’s Hiring Algorithm: A Reuters report stated that Amazon scrapped its AI recruitment software in 2018 after uncovering gender-based discrimination in its algorithm.
COMPAS in Criminal Justice: COMPAS, an AI tool used in the US criminal justice system to predict a defendant’s probability of recidivism, has been accused of bias against Black defendants. A study by ProPublica found that COMPAS incorrectly flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants.
These examples underscore the need for strong monitoring and governance to keep AI systems from reinforcing discriminatory behaviour; a simple audit of error rates by group, as sketched below, is one concrete starting point.
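To make the idea of monitoring concrete, the following is a minimal Python sketch of the kind of disparity check a ProPublica-style audit performs, comparing false positive rates across groups. The data, group labels, and structure are hypothetical and purely illustrative, not drawn from COMPAS or any real system.

```python
# Minimal sketch of a group-wise false positive rate audit.
# Records and group names below are made up for illustration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
predictions = [
    ("group_a", True,  False),
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

def false_positive_rates(records):
    """False positive rate per group: P(predicted high risk | did not reoffend)."""
    fp = defaultdict(int)         # predicted high risk but did not reoffend
    negatives = defaultdict(int)  # everyone who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

print(false_positive_rates(predictions))
# {'group_a': 0.5, 'group_b': 0.0}  <- a gap like this would warrant investigation
```

In practice such an audit would run on held-out evaluation data and feed into a governance process, flagging disparities before a model is deployed or retrained.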
Researchers and engineers are working to build systems that encode particular ethical principles, but real-world deployments show the limits of this approach. Take self-driving cars: Tesla’s Autopilot and other autonomous driving technologies must navigate unpredictable real-world conditions. In March 2018, a self-driving Uber test vehicle struck and killed a pedestrian in Arizona, leaving many wondering where moral and legal responsibility lies.
In healthcare, AI is becoming central to diagnostic and therapeutic management. Decision-support systems such as IBM’s Watson for Oncology help doctors make treatment and care decisions by providing broad data analysis. However, Watson was reported to have produced treatment recommendations that were unsafe and out of line with accepted medical practice, showing that even though AI can support ethically sound decisions, it is not proficient at making them on its own.
One thing is clear: humans have to stay in the loop. Human supervision is vital in high-stakes spheres such as criminal justice, medicine, and autonomous driving. AI can analyse large datasets and deliver recommendations, but moral decision-making requires context and emotional intelligence that a machine does not have.
For instance, DeepMind’s systems have been used in healthcare, notably in the diagnosis of eye disease. Despite their high accuracy, these systems still require doctors to review the AI’s results and determine whether the proposed treatment is right for the particular patient; a minimal sketch of this review-and-escalate pattern follows. In each of these cases, AI is used as an instrument for the humans who perform those professions, not as a replacement for them.
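As an illustration only, here is a minimal Python sketch of such a human-in-the-loop pattern. The class, confidence threshold, and diagnosis labels are hypothetical assumptions, not part of any real system: the model only recommends, a clinician always makes the final call, and low-confidence cases are explicitly escalated rather than acted on automatically.

```python
# Hypothetical human-in-the-loop sketch: the model only recommends;
# a human reviewer makes the final call, and low-confidence cases
# are flagged for specialist review instead of being auto-accepted.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed value, tuned per application


@dataclass
class Recommendation:
    diagnosis: str
    confidence: float  # model's estimated probability, 0.0 to 1.0


def triage(rec: Recommendation) -> str:
    """Route a model recommendation to the appropriate review path."""
    if rec.confidence >= CONFIDENCE_THRESHOLD:
        return f"Suggest '{rec.diagnosis}' to clinician for confirmation"
    return f"Low confidence ({rec.confidence:.2f}): escalate to specialist review"


if __name__ == "__main__":
    print(triage(Recommendation("diabetic retinopathy", 0.97)))
    print(triage(Recommendation("diabetic retinopathy", 0.62)))
```

The design choice here is that no branch acts autonomously: both paths end with a human decision, which is the point the DeepMind and Watson examples illustrate.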
Keeping AI decision-making ethical requires constant engagement from technology experts, ethicists, policymakers, and society at large. AI systems have to be made transparent and fair. Additionally, regulations must evolve to keep pace with technological advancements, establishing clear guidelines and accountability for AI developers.
As AI continues to evolve, the question of whether machines can be programmed to do the right thing remains open. What is clear is that the development of ethical AI will require ongoing collaboration between technologists, ethicists, policymakers, and the public to navigate the complex moral landscape of the future. In the end, the question is not simply whether AI can be programmed to do the right thing, but how society, users, and integrators ensure that artificial intelligence is embedded and implemented responsibly.