The Importance of Ethics and Governance with AI


AI needs to be harnessed in intelligent societies with good ethics and governance.

The Promise, Possibilities, and Potential of AI

The harnessing of AI has brought, and will continue to bring, tremendous benefits to society: more efficient supply chains, intelligent automation and robotics that support our everyday living, and seamless clearances at borders. AI technology has advanced to the point where it is fast approaching, and in some instances exceeding, human intellect. Just as human intelligence needs to be harnessed and orchestrated appropriately in human societies with good ethics and governance, we need to do the same with AI.

The Dark Side of AI

There are some considerations to bear in mind. If you have heard of GPT-4chan, you will also have heard how problematic it was. Developed by a YouTuber in the AI community, the GPT-4chan model was trained on the /pol/ ("politically incorrect") board of 4chan, a controversial forum. The result? An AI that spewed hate speech.

As troubling as that may seem, the way AI is developed can inadvertently make the technology easy to use for nefarious purposes. That is starkly evident as the community increasingly embraces open-source development, which no longer restricts the development of AI applications to a small number of privileged companies but opens it up to all, including bad actors.

Should we be surprised? The likes of Bill Gates, Elon Musk and Jeff Bezos have all expressed concerns and issued warnings about the potential dangers of AI, especially its use in weapons systems and its role in job displacement. Yet much of what we experience today involves AI to some extent, from the ads targeted specifically at us on social media to our favourite streaming platforms recommending new content based on our previous choices and habits. If we truly want to turn AI's potential into reality, these concerns must first be addressed.

Growing Concerns

AI systems have grown exponentially in recent years. This growth has spawned numerous benefits, but it has also brought drawbacks that spark concern, especially around compliance, ethics and governance.

  • Bias – An AI system that has not been trained on varied data sets will deliver skewed insights and recommendations. Eliminating bias, so that AI reflects society with greater precision, requires identifying all the potential areas of bias and calibrating AI solutions to address them.
  • Loss of control – With the increasing use of AI, machines have become more capable of making important decisions. However, human involvement is still necessary in any decision-making that may affect humanity. AI still cannot properly account for emotion, apply empathy where it is called for, pass moral judgement or deliver creative outcomes.
  • Technology is not foolproof – Innovation is a constant work in progress, and there is always the risk of grave errors if decision-making were entrusted entirely to an AI system with little or no oversight or calibration. Technology, after all, is not perfect.
  • Privacy – Privacy has long been a major ethical concern associated with AI. For instance, smart devices constantly pick up cues from their environment, such as speech, which can then be mined for insights and recommendations. AI-based toys that can collect data on children are also a genuine concern.
  • Erosion of trust – The indefinite collection and storage of sensitive data such as biometrics raises questions around trust: could the people in charge be doing something else with our data "off the books"?

Compliance is not enough

Addressing these issues requires us to look beyond mere legal compliance to factors such as privacy, human rights and social acceptability. This kind of problem-solving should not be limited to firms that develop and market AI; AI-related issues must be dealt with across the entire supply chain, including by individuals and organisations that provide AI-based services.

Technologists are not the only stakeholders when it comes to addressing AI issues. Policymakers will also play an important part in helping us address the potential risks of AI applications. They will be responsible for:

  • Strengthening existing AI regulatory and governance frameworks
  • Championing trial and error, repeated testing and sandboxes to fine-tune and calibrate AI models and systems, reducing the risk of bias
  • Working on sector-specific regulation tailored to the varied applications of AI across industries

Lastly, investments must also be made to keep the ecosystem viable and to maintain a deep, robust talent pool for AI-related roles. Consulting firm Korn Ferry estimates that by 2030 Asia Pacific's TMT sector could face a talent shortfall of two million workers, including AI professionals, at an annual opportunity cost of more than $151.60 billion. Singapore, for instance, has committed $180 million to accelerate AI research and launched programmes to upskill workers in AI. While those efforts address "hard" skills such as AI engineering and development, "soft" skills are important too. Singapore's Nanyang Technological University and the Singapore Computer Society have also launched a course in AI ethics and governance that aims to recognise and certify professionals in those areas.

Great promise, but more progress is needed

Ultimately, there needs to be a deep and open discussion about what AI can and cannot do. Organisations and governments alike need to ensure it is used to benefit as many people as possible. Measures must be adopted to ensure AI does not exclude, by design or by accident, certain subsets of society, to assuage privacy and trust concerns, and to implement safeguards that give humans a degree of control.

The increasing use of AI in daily life will undoubtedly continue to raise questions around ethics, compliance and governance. Are humanity's AI goals ambitious or simply dangerous? That is a question that all stakeholders – governments, regulators, innovators, tech firms and consumers – need to work together to answer.

Author:

Walter Lee, Evangelist and Head of Public Safety Consulting, Public Safety Centre of Excellence, NEC Asia Pacific



