Latest News

Can You Work with Robots? Researchers Can Track Your Relationship

Zaveria

The future of humans and robots working closely together is here, and it is effective

It's important to make sure that humans and robots get along well as industries start to employ close human-robot collaboration. These working relationships depend on the robot's trustworthiness and on humans' willingness to trust robot behavior. Because trust is subjective, measuring human trust in robots is challenging, yet doing so is crucial.

This is a problem that researchers at Texas A&M University's Wm Michael Barnes '64 Department of Industrial and Systems Engineering are working to overcome. Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, explained that her lab's research on human-autonomy trust grew out of several initiatives on human-robot interactions in safety-critical job domains. Tracking human-robot relationships in work settings may now be possible. Working with robots hasn't always been easy for humans: poorly programmed robots have disrupted work, and close human-robot collaboration has long raised doubts about the reliability of the relationship.

As Mehta noted, "While our focus up until now was to understand how operator states of fatigue and stress affect how people engage with robots, trust became a crucial component to examine." Why that is the case then becomes a crucial question to answer. "We observed that as humans get fatigued, they let their guards down and become more trusting of technology than they should."

Mehta's most recent research, just published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior connections that explain why and how human and robot factors affect an operator's trusting behaviors.

Mehta has written another article examining these human and robot factors, published in the journal Applied Ergonomics.

Mehta's lab used functional near-infrared spectroscopy to record functional brain activity as humans and robots worked together on a manufacturing task. They discovered that faulty robot performance reduced the operator's trust in the robots.

Greater activity in the frontal, motor, and visual cortices was linked to distrust, suggesting increased workload and situational awareness. Interestingly, the same distrusting behavior was linked to a decoupling of these brain regions' ability to work together, a coupling that was otherwise strong when the robot behaved reliably. According to Mehta, this decoupling was more pronounced at higher robot autonomy levels, showing that the dynamics of human-autonomy teaming affect neural signatures of trust.

"What we found most intriguing was that the neural fingerprints altered when we compared operators' trust levels (as measured by surveys) in the robot to brain activation data across reliability conditions (manipulated using normal and faulty robot behavior)," Mehta added.

Since perceptions of trust alone are not indicative of how operators' trusting behaviors shape up, this underlined the need to understand and quantify the brain-behavior links of trust in human-robot collaboration.

Dr. Sarah Hopko, lead author of both papers and a recent doctoral graduate in industrial engineering, said that perceptions of trust and neural responses are both signs of trusting and distrusting behaviors, and that they convey different information about how trust develops, is violated, and is repaired under various robot behaviors. She stressed how multimodal trust measurements, such as eye tracking, behavioral analysis, and cerebral activity, can reveal fresh perspectives that subjective responses alone cannot.

The research will next be expanded to other work contexts, such as emergency response, to better understand how trust in multi-human robot teams affects teamwork and taskwork in safety-critical conditions. According to Mehta, the long-term objective is to create trust-aware autonomy agents that serve people rather than replace them with autonomous robots.


"The importance of this work drives us to make sure that the design, evaluation, and workplace integration of humans-in-the-loop robots are empowering and supportive of human skills," Mehta said.
