Artificial Intelligence: Safety Rules from Experts

AI experts suggest guidelines for safe artificial intelligence systems

Artificial intelligence (AI) is a powerful technology that can perform tasks that normally require human intelligence, such as recognizing images, understanding speech, making decisions, and learning from data. AI has many potential benefits for society, such as improving health care, education, transportation, and entertainment. However, AI poses challenges and risks, such as ethical dilemmas, privacy violations, bias, discrimination, and security threats.

To address these challenges and risks, a global group of AI experts and data scientists has released a new voluntary framework for developing artificial intelligence products safely. The World Ethical Data Foundation (WEDF) has 25,000 members, including staff at tech giants such as Meta, Google, and Samsung. The framework contains 84 questions for developers to consider at the start of an AI project. The WEDF is also inviting the public to submit their questions. It says they will all be considered at its next annual conference.

The framework has been released as an open letter, seemingly the preferred format of the AI community. It has hundreds of signatories. The letter states:

We believe artificial intelligence can be a force for good globally, but only if developed and deployed responsibly and ethically. We recognize that AI systems can significantly impact individuals, communities, and society positively and negatively. We also acknowledge that AI systems can be subject to misuse, abuse, and unintended consequences. Therefore, we propose a set of guidelines for AI developers to follow to ensure that their products are safe, trustworthy, and beneficial for all.

The guidelines are based on four core principles: respect for human dignity and autonomy, fairness and justice, transparency and accountability, and safety and security. The guidelines cover various aspects of the AI lifecycle, such as data collection, processing, analysis, modeling, testing, deployment, monitoring, and evaluation.

Some of the 84 questions are as follows:

Do I feel rushed or pressured to input data from questionable sources?

Does the team selecting the training data include people from diverse backgrounds and experiences, to help reduce bias in the data selection?

What is the intended use of the model once it is trained?

How will I ensure the model does not discriminate against or harm any group or individual?

How will I communicate the limitations and uncertainties of the model to the users and stakeholders?

How will I monitor the performance and impact of the model after deployment?

How will I handle feedback and complaints from users and stakeholders?

How will I update or retire the model if it becomes obsolete or harmful?
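The questions above read as a pre-development checklist. As a purely illustrative sketch (the `Checklist` class and its methods are hypothetical and not part of any WEDF tooling), a team might track which questions remain unanswered like this:

```python
# Hypothetical sketch: tracking answers to a WEDF-style question list.
# Only the question texts are taken from the framework; everything
# else is an assumed, illustrative design.

QUESTIONS = [
    "What is the intended use of the model once it is trained?",
    "How will I ensure the model does not discriminate against "
    "or harm any group or individual?",
    "How will I monitor the performance and impact of the model "
    "after deployment?",
]

class Checklist:
    """Record an answer per question and report what is still open."""

    def __init__(self, questions):
        # No question is answered until the team explicitly records one.
        self.answers = {q: None for q in questions}

    def answer(self, question, response):
        if question not in self.answers:
            raise KeyError(f"Unknown question: {question!r}")
        self.answers[question] = response

    def open_questions(self):
        # Questions the team has not yet addressed.
        return [q for q, a in self.answers.items() if a is None]

checklist = Checklist(QUESTIONS)
checklist.answer(QUESTIONS[0], "Internal document triage only.")
print(len(checklist.open_questions()))  # two questions still open
```

The point of such a structure would simply be making unanswered questions visible before development proceeds, which mirrors the framework's intent that the questions be considered at the start of a project.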

The WEDF hopes that the framework will help raise awareness and foster dialogue among AI developers and other stakeholders about the ethical implications of their work. The WEDF hopes the framework will inspire other initiatives and standards to promote ethical AI development and governance.

The framework arrives as AI becomes more prevalent and influential across domains and sectors, and as governments and regulators grow increasingly concerned about AI's potential harms and challenges. For example, this week shadow home secretary Yvette Cooper said that the Labour Party would criminalize those who deliberately use AI tools for terrorist purposes. Prime Minister Rishi Sunak has appointed Ian Hogarth, a tech entrepreneur and AI investor, to lead an AI task force. Mr. Hogarth told me this week that he wanted "to better understand the risks associated with these frontier AI systems" and to hold the companies that develop them accountable.

The WEDF acknowledges that its framework is not a comprehensive or definitive solution to all the ethical issues related to AI, and recognizes that it may need to be revised and updated as AI technology evolves and new challenges emerge. However, it believes the framework is a useful starting point for building a culture of responsibility and ethics among AI developers.

As one of the letter's signatories said: "We're in this Wild West stage, where it's just kind of: 'Chuck it out in the open and see how it goes.' We need some guidance to make sure we're doing things right."
