Professional roles associated with artificial intelligence have been gaining significant traction globally. LinkedIn has already listed data scientist and data analyst among the most sought-after roles of the coming decade. Reports suggest that, with the changing dynamics of work, recruiters across industries are increasingly inclined to hire professionals with technical proficiency in big data. Big data is fast becoming a game-changer across sectors, and AI roles are thriving as a result.
Big tech organizations like Google and Amazon have on many occasions tweeted about bolstering data science job profiles, and they already offer certification courses to streamline recruitment into big data and AI jobs. Yet while AI roles are gaining global recognition, one area remains deeply flawed: the gender balance of those recruited. Despite the many initiatives flagged off by AI regulatory bodies, organizations still have a wider gender gap than anticipated. Societal biases are so deeply ingrained that women hold fewer than 33% of senior positions in technology. A report by the AI Now Institute points out that women comprise only 15% and 10% of the AI research staff at Facebook and Google, respectively. This suggests that a root cause of the dearth of women in technology is pre-existing bias in the recruitment process.
A study titled "Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance", from the University of Melbourne, further substantiates the role of AI algorithms in proliferating gender bias across the industry. The researchers gave the real resumes of candidates to 40 recruiters at Unibank for the roles of data analyst, finance officer, and recruitment officer.
The surveyed panelists ranked female resumes lower than male ones, in male-dominated as well as gender-balanced roles.
Leah Ruppanner, co-author of the study and an associate professor at the University of Melbourne, states: "Unfortunately, for data and finance roles, women's resumés were ranked lower than men by our human panelists though they had the same qualifications and experience."
Using the available data, the researchers then developed a suite of automated and semi-automated algorithms to rate the candidates' suitability for each of the jobs. Such algorithms are meant to be impartial and neutral, operating without human intervention, and to sort large volumes of applications with an efficiency and accuracy that benefits the company.
They are also designed to take over repetitive screening tasks and to identify the strongest candidates. Instead, the study found that these algorithms can reinforce gender bias without taking the merit of the candidates into account.
Dr Marc Cheong, report co-author and digital ethics researcher at the Centre for AI and Digital Ethics, states: "Even when the names of the candidates were removed, AI assessed resumés based on historic hiring patterns where preferences leaned towards male candidates. For example, giving an advantage to candidates with years of continuous service would automatically disadvantage women who've taken time off work for caring responsibilities."
He further adds, "Also, in the case of more advanced AIs that operate within a 'black box' without transparency or human oversight, there is a danger that any amount of initial bias will be amplified."
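Dr Cheong's continuous-service example is straightforward to make concrete. The sketch below is a minimal, hypothetical illustration in Python, using entirely synthetic data and a simple logistic-regression scorer (none of it is the study's actual model or data): a "gender-blind" model trained on historic decisions that rewarded uninterrupted service ends up penalizing career gaps on its own.

# A minimal, hypothetical sketch -- synthetic data, not the study's model.
# A "blind" scorer never sees gender, yet learns to penalize career gaps
# because the historic hiring decisions it learns from rewarded
# uninterrupted service.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)               # 0 = male, 1 = female
skill = rng.normal(0, 1, n)                  # what *should* matter
# Career gaps are more common among women in this toy population
# (e.g. time taken off for caring responsibilities).
career_gap = (rng.random(n) < np.where(gender == 1, 0.4, 0.1)).astype(float)

# Historic labels: past recruiters rewarded continuous service, so the
# "ground truth" the model learns from is already biased.
hired = (skill - 1.5 * career_gap + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, career_gap])     # gender itself is excluded
model = LogisticRegression().fit(X, hired)

print("learned weight on career_gap:", model.coef_[0][1])  # strongly negative
# Two equally skilled candidates, differing only in a career gap:
print(model.predict_proba([[0.5, 0.0], [0.5, 1.0]])[:, 1])

Even though gender never appears among the features, the second candidate is scored markedly lower, mirroring the pattern Dr Cheong describes.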
However, this is not the first time that gender bias caused by flawed algorithms has affected hiring. In 2018, Reuters reported that Amazon had scrapped a secret AI recruiting tool after it displayed heavy bias against women.
AI algorithms are trained on large volumes of structured and unstructured data, and any skew in this data can propagate bias through the resulting system. The study indicates, for example, that limitations in the datasets used to train an AI model for ranking and assessing candidates can shift the model's benchmarks toward whatever that particular data contains. Likewise, if female candidates are under-represented in the data, the algorithm will produce results calibrated to male attributes.
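Under-representation alone can produce this effect. In the hypothetical sketch below (again synthetic data, not the study's), historic hires favoured men outright, so a job-irrelevant attribute that merely correlates with being male becomes correlated with "hired" in the training data, and the model learns to reward it.

# Hedged illustration with synthetic data: skewed historic hiring turns a
# job-irrelevant, gender-correlated attribute into an apparent merit signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
gender = rng.integers(0, 2, n)               # 0 = male, 1 = female
skill = rng.normal(0, 1, n)
# A job-irrelevant attribute (say, resume wording style) that happens to
# differ by gender in this toy population.
style = rng.normal(np.where(gender == 0, 1.0, -1.0), 0.7)

# Historic hires favoured men: at equal skill, women were hired at a
# third of the male rate.
p_hire = 1 / (1 + np.exp(-skill))
hired = rng.random(n) < np.where(gender == 0, p_hire, p_hire / 3)

X = np.column_stack([skill, style])          # gender itself is hidden
model = LogisticRegression().fit(X, hired)
print("weight on job-irrelevant style:", model.coef_[0][1])  # positive

The model has, in effect, rediscovered gender through a correlated attribute and baked the historic skew into its rankings.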
Pre-existing biases in a system can be just as detrimental during hiring: proxy attributes such as demographic location, neighbourhood, and race can implant gender bias across the system, ultimately disrupting the corporate culture.
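A common way to check for such proxy leakage is to probe whether the protected attribute can be predicted from the supposedly neutral features. In the hypothetical sketch below (synthetic data; "neighbourhood" is an invented stand-in for any demographic proxy), gender is recovered well above chance from a single proxy column.

# Hedged sketch of proxy leakage: gender is removed from the features, but
# a correlated attribute lets a probe model partially reconstruct it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
gender = rng.integers(0, 2, n)               # 0 = male, 1 = female
# The neighbourhood code correlates with gender in this toy population.
neighbourhood = (rng.random(n) < np.where(gender == 1, 0.7, 0.3)).astype(int)

probe = LogisticRegression().fit(neighbourhood.reshape(-1, 1), gender)
acc = probe.score(neighbourhood.reshape(-1, 1), gender)
print(f"gender recovered from the proxy alone: {acc:.2f} (0.50 = chance)")

If a probe like this scores well above chance, dropping the gender column has not actually removed gender from the model's reach.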
Undoubtedly, every technology is a double-edged sword. While we embrace the benefits of this evolving technology, it is imperative to address its negative impacts across the industry. With conversations about empowering women growing ever louder, it is high time we gave women equal opportunities to demonstrate their skills. Without rectifying the existing flaws and addressing the current barriers, moving forward with artificial intelligence will be detrimental.