Technology has made life drastically easier, from automation to improved security systems. Nearly every field has drawn heavily on it, especially artificial intelligence: machines equipped with intelligence can now drive cars, assist in medical procedures, control traffic, and take on almost anything you can think of.
This brings us to the main question: can AI, as a replica of human intelligence, make emotional decisions as well? The topic draws so much attention because humans can make decisions based on morals and emotions at any instant, while machines act only as they are programmed. Consider a car run by AI. You are sitting in it when a passer-by steps onto the road. What will the machine choose to do? Will it save your life or the passer-by's? Had you been driving, the situation would have played out very differently, because you possess emotional intelligence and would decide accordingly. This is why emotional intelligence in machines is critical.
On that note, a team of researchers at Stanford University took up this challenge in a study titled 'ArtEmis: Affective Language for Visual Art'. The team's aim is for computers not only to recognize the objects in an image but also to understand how that image makes people feel, all of which revolves around emotional intent. To achieve this, the researchers trained an algorithm to recognize emotional intent. With this, one of the researchers said, AI will be not only intelligent but more human-like.
The team first created a database, a phenomenal effort in itself: it covers about 81,000 WikiArt paintings. To capture emotional intent, the team collected over 400,000 written responses from 6,500 people indicating how each painting made them feel. Based on these responses, the team trained the AI to generate emotional responses and to justify those emotions in language. It doesn't stop there: the study includes emotion categories such as awe, amusement, sadness, and fear to align with the concept of emotional intelligence. The algorithm also explains in written text what it is in the image that justifies the emotion.
The team states that the model is designed to interpret any form of art, including portraits, abstraction, and still life. It also accounts for the fact that not every person feels the same way about a given subject; simply put, the model is built around subjectivity, since what one viewer feels may not be what another feels. The best application of the model, the team says, would be for artists, especially graphic designers, who can now evaluate both their work and how an audience might feel about it.
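To make the subjectivity idea concrete, the sketch below shows one simple way annotator responses of this kind could be aggregated into a per-painting emotion distribution rather than a single label. The record structure and field names are illustrative assumptions, not the actual ArtEmis dataset schema.

```python
from collections import Counter

# Hypothetical ArtEmis-style annotation records: each pairs a painting
# with one annotator's emotion label and a free-text explanation.
# Field names and values here are illustrative only.
annotations = [
    {"painting": "Starry Night", "emotion": "awe",
     "explanation": "The swirling sky feels vast and overwhelming."},
    {"painting": "Starry Night", "emotion": "contentment",
     "explanation": "The quiet village below feels peaceful."},
    {"painting": "Starry Night", "emotion": "awe",
     "explanation": "The stars seem alive and enormous."},
]

def emotion_distribution(records, painting):
    """Aggregate annotators' emotion labels for one painting into a
    normalized distribution, reflecting that different viewers may
    feel differently about the same artwork."""
    labels = [r["emotion"] for r in records if r["painting"] == painting]
    counts = Counter(labels)
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()}

print(emotion_distribution(annotations, "Starry Night"))
```

Keeping the full distribution instead of a majority vote preserves the disagreement between viewers, which is exactly the subjective signal the researchers say the model is built on.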