Beware, AI Is Learning All Our Worst Biases

Microsoft’s chatbot Tay was taken down after it posted abusive content on Twitter

Artificial Intelligence (AI) is becoming more than a technology that exists to help humans: it is steadily working its way into people's lives and daily routines. Yet even though AI looks like a futuristic technology that could do only good, there are real concerns about its bias. Science fiction has given us a vague picture of the technology; film directors portray AI either as a humble creature that falls in love or as a vicious character that takes over humanity. In truth, AI is neither. It is a mechanism that ingests content and reacts to it much as humans do, because it is designed and developed by humans. So if you are wondering whether AI can pick up everything human, including bias toward an ideology, the answer is yes, it absolutely can.

Consider the world as it stands, where people in every corner are demanding equality. Protests over racism, sexism and discrimination against the LGBTQ+ community continue to break out across the globe. Even if the next generation grows up seeing everyone as equal, the societal change could take a hundred years or more. The problem is that human data is the essential raw material that makes AI function, and that is where AI bias lies. AI bias is the underlying prejudice in the data used to create AI algorithms, which can ultimately result in discrimination and other societal harms.

We already have a lengthy record of cases where AI behaved like a biased mechanism. In 2016, Microsoft launched its AI chatbot 'Tay' to interact with people on Twitter. Tay was given a basic grasp of language from a dataset of anonymised public data and some pre-written material, with the intention that it would then learn from interactions with users. Within sixteen hours of launch, Tay had tweeted over 95,000 times, much of it abusive, and Microsoft took the chatbot down. In May 2016, a stunning report claimed that a computer program used by US courts for risk assessment was biased against black defendants: the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool mistakenly labelled black defendants as likely to reoffend at nearly twice the rate of white defendants. These incidents are just a small slice of a much larger cake. Tackling AI bias means understanding the whole system and removing the poison at the root.
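To make the COMPAS finding concrete, here is a minimal Python sketch of the kind of check an auditor might run: comparing the false positive rate, that is, people flagged high risk who did not in fact reoffend, across racial groups. The dataframe and its column names are hypothetical stand-ins for illustration, not the actual COMPAS data.

```python
import pandas as pd

# Hypothetical audit records: one row per defendant, with the model's
# "high risk" flag and whether the person actually reoffended.
df = pd.DataFrame({
    "race":       ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 0, 1, 1, 1, 0, 0, 0],
    "reoffended": [0, 0, 1, 1, 0, 0, 0, 0],
})

# False positive rate per group: flagged high risk among those who
# did NOT reoffend.
for group, g in df.groupby("race"):
    negatives = g[g["reoffended"] == 0]
    fpr = (negatives["high_risk"] == 1).mean()
    print(f"{group}: false positive rate = {fpr:.2f}")
# black: 0.50, white: 0.25 -- the kind of disparity the investigation reported
```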

The reason behind AI bias

Every human holds biased views about something; even seasoned professionals with good intentions can be swayed by bias, undermining the effectiveness of diversity and inclusion decisions. Patterns of discrimination have long been baked into existing datasets, because the data fed into any AI system is collected from human actions, and we cannot expect humans to have a mechanical, neutral mindset. That same bias is then dragged into the AI system. It sounds abstract from a general perspective, but imagine your job application passing through an AI classifier that rejects you because of your race. The AI behaves like a prejudiced boss at the top.
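A toy sketch of that mechanism, assuming invented hiring data rather than any real system: a classifier trained on past, biased hiring decisions learns to penalise a feature that merely correlates with a protected attribute, here a made-up postcode flag, so two otherwise identical candidates receive very different scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: feature 0 is years of experience,
# feature 1 is a proxy correlated with race (e.g. a postcode flag).
X = np.column_stack([rng.uniform(0, 10, 1000), rng.integers(0, 2, 1000)])

# Biased past decisions as labels: candidates with proxy == 1 were
# never hired, regardless of experience.
y = (X[:, 0] > 5) & (X[:, 1] == 0)

model = LogisticRegression().fit(X, y)

# Two candidates identical in experience, differing only in the proxy:
scores = model.predict_proba([[8.0, 0.0], [8.0, 1.0]])[:, 1]
print(scores)  # the proxy-group candidate gets a far lower hiring score
```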

A way out

Addressing the problem of AI bias starts with knowing where and how it begins. The people who design AI systems and their mechanisms should themselves be scrutinised for bias. Auditing decisions about who is recruited and promoted is highly important; going a step further and examining who is offered promotion, assigned the hardest projects or given the chance to expand their internal networks paints an even clearer picture. The data fed into an AI system or robot should likewise be filtered and checked for bias. With these precautions in place, AI can move toward a more inclusive future.
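As a sketch of what such an audit could look like in code, assuming invented promotion records and group labels: the snippet computes each group's selection rate and applies the 'four-fifths' rule of thumb used in employment analysis, under which a ratio below 0.80 flags possible disparate impact.

```python
import pandas as pd

# Hypothetical promotion records for an internal audit.
records = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "promoted": [1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group, and the disparate-impact ratio between the
# least- and most-favoured groups.
rates = records.groupby("group")["promoted"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio = {ratio:.2f}")  # below 0.80 warrants review
```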