Are Researchers Intentionally Inventing Racist AI Systems for Their Profit?

The EU's new AI Act accuses researchers of inventing racist AI systems to gain profit

The EU's new AI Act claims that AI-based industries are using "innovation" to avoid legal liability, and it aims to prevent tech companies from releasing dangerous systems. In 2014, the European Court of Justice issued a landmark ruling that European citizens had the right to petition search engines to remove search results linking to material that had been posted lawfully on third-party websites. This was popularly, but misleadingly, described as the "right to be forgotten": it was in fact a right to have certain published material about the complainant delisted by search engines, of which Google was by far the most dominant. Or, to put it crudely, a right not to be found by Google.

What brings this to mind is the tech companies' reaction to a draft EU bill published last month which, when it becomes law in about two years, will make it possible for people who have been harmed by software to sue the companies that produce and deploy it. The new bill, called the AI Liability Directive, will complement the EU's AI Act, which is set to become EU law around the same time. These laws aim to prevent tech companies from releasing dangerous systems: algorithms that boost misinformation and target children with harmful content; facial recognition systems that are often discriminatory; AI systems used to approve or reject loans or to guide local policing strategies, which are less accurate for minorities; and so on. In other words, technologies that are currently almost entirely unregulated.

The AI Act mandates extra checks for "high-risk" uses of AI that have the most potential to harm people, particularly in areas such as policing, recruitment, and healthcare. The new liability bill, says MIT Technology Review, "would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions."

The tech industry's predictable objection is that such liability rules will stifle "innovation". But that would be the same innovation that led to the Cambridge Analytica scandal and Russian online meddling in the 2016 US presidential election and UK Brexit referendum, and that enabled the live-streaming of mass shootings. The same innovation behind the recommendation engines that radicalized extremists and directed "10 depression pins you might like" to a troubled teenager who subsequently ended her own life.

What is even more remarkable, though, is how the tech companies' claim to be the sole masters of "innovation" has been taken at face value for so long. But now two eminent competition lawyers, Ariel Ezrachi and Maurice Stucke, have called the companies' bluff. In a remarkable new book, How Big-Tech Barons Smash Innovation – And How to Strike Back, they explain that the only kinds of innovation tech companies tolerate are those that align with their own interests. They reveal how tech firms are ruthless in stifling disruptive or threatening innovations, whether by pre-emptive acquisition or naked copycatting, and how their dominance of search engines and social media platforms restricts the visibility of promising innovations that might be competitively or societally useful. As an antidote to tech puffery, the book will be hard to beat. It should be required reading for everyone at Ofcom, the Competition and Markets Authority, and the DCMS. And from now on, "innovation for whom?" should be the first question put to any tech booster lecturing you about innovation.
