Artificial Intelligence

How does Artificial Intelligence Contribute to Robotic System Design?

Kamalika Some

Welcome to an age where AI meets the machine world.

Artificial intelligence is on course to change every industry, and robotics is no exception. Already, the innovative combination of AI and robotics has created a number of futuristic possibilities across industry domains. While many of us expect most robots to be humanoid ten years from now, in many environments robots will be designed to emulate a specific range of behaviors, and their physical form will reflect the best fit for those tasks. An exception will likely be robots that provide medical or other care or companionship for humans, and perhaps service robots that are meant to establish a more personal and 'humanized' relationship. Here is how different technologies, combined with AI, will make a difference to robotics:

Computer Vision

Though the terms are related, some would argue that machine vision or robot vision is more accurate than computer vision, because "robots seeing" involves more than just computer algorithms; engineers and roboticists also have to account for the camera hardware that allows robots to process physical data. Robot vision is very closely linked to machine vision, which can be given credit for the emergence of robot guidance and automatic inspection systems.
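
To make the camera-plus-algorithm pairing concrete, here is a minimal automatic-inspection sketch, assuming OpenCV is installed; the camera index, edge thresholds, and acceptable part sizes are illustrative assumptions, not values from any system described above.

```python
# Minimal robot-vision sketch: grab a frame from a camera and flag regions
# whose contour area falls outside an expected range (automatic inspection).
# Assumes OpenCV (cv2); camera index and thresholds are illustrative.
import cv2

MIN_AREA, MAX_AREA = 500, 50_000  # acceptable part size in pixels (assumed)

def inspect_frame(frame):
    """Return contours that look like out-of-spec parts."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours
            if not (MIN_AREA <= cv2.contourArea(c) <= MAX_AREA)]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # the physical camera hardware feeding the algorithm
    ok, frame = cap.read()
    if ok:
        rejects = inspect_frame(frame)
        print(f"{len(rejects)} out-of-spec regions detected")
    cap.release()
```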

Imitation Learning

Imitation learning is closely related to observational learning, a behavior exhibited by infants and toddlers. It is also closely tied to reinforcement learning, the challenge of getting an agent to act in the world so as to maximize its rewards. Bayesian or probabilistic models are a common feature of this machine learning approach. The question of whether imitation learning could be used for humanoid-like robots was posed as far back as 1999.
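
The simplest form of imitation learning is behavior cloning: the robot copies the action a demonstrator took in the most similar previously seen state. The sketch below is a toy illustration of that idea; the states, actions, and nearest-neighbour policy are invented for this example, not taken from any particular system.

```python
# Minimal imitation-learning (behavior-cloning) sketch in pure NumPy.
# The demonstration data below are made-up placeholders.
import numpy as np

# Demonstrations: each row is a sensor state, paired with the expert's action.
demo_states = np.array([[0.1, 0.9],    # obstacle far, goal left
                        [0.8, 0.2],    # obstacle near, goal right
                        [0.5, 0.5]])   # ambiguous
demo_actions = np.array(["turn_left", "turn_right", "go_straight"])

def cloned_policy(state):
    """Pick the demonstrated action whose state is closest to the current one."""
    distances = np.linalg.norm(demo_states - np.asarray(state), axis=1)
    return demo_actions[np.argmin(distances)]

print(cloned_policy([0.15, 0.85]))  # -> "turn_left"
```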

Self-Supervised Learning

Self-supervised learning approaches enable robots to generate their own training examples in order to improve performance; this includes using a priori training and data captured at close range to interpret "long-range ambiguous sensor data." The approach has been incorporated into robots and optical devices that can detect and reject objects (dust and snow, for example), identify vegetables and obstacles in rough terrain, and assist in 3D scene analysis and in modeling vehicle dynamics.
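
The sketch below illustrates the self-labelling idea under simple assumptions: close-range measurements (say, wheel slip actually experienced while driving) provide free labels that train a classifier, which is then applied to ambiguous long-range features. The features, thresholds, and centroid classifier are invented for illustration.

```python
# Self-supervised labelling sketch: close-range sensing gives free labels,
# which train a model used to interpret long-range, ambiguous data.
# Pure NumPy; all quantities here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: at close range the robot measures wheel slip directly -> free labels.
close_features = rng.normal(size=(200, 3))               # e.g. colour/texture stats
wheel_slip = close_features @ np.array([1.0, -0.5, 0.2]) + rng.normal(0.1, size=200)
labels = (wheel_slip > 0).astype(int)                     # 1 = rough, 0 = smooth

# Step 2: fit a trivial model (class centroids) on the self-generated labels.
centroids = np.stack([close_features[labels == c].mean(axis=0) for c in (0, 1)])

# Step 3: classify long-range, ambiguous patches with the learned model.
far_features = rng.normal(size=(5, 3))
pred = np.argmin(np.linalg.norm(far_features[:, None] - centroids, axis=2), axis=1)
print("predicted terrain (0=smooth, 1=rough):", pred)
```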

Watch-Bot, created by researchers from Cornell and Stanford, is a concrete example. It uses a 3D sensor (a Kinect), a camera, a laptop, and a laser pointer to detect 'normal human activity', i.e., patterns that it learns through probabilistic methods. Watch-Bot uses its laser pointer to target an out-of-place object as a reminder (for example, the milk that was left out of the fridge). In initial tests, the bot was able to successfully remind humans 60 percent of the time, and the researchers later expanded the trials by allowing the robot to learn from online videos (a project called RoboWatch).

Other examples of self-supervised learning methods applied in robotics include a road-detection algorithm for a front-view monocular camera with a road probabilistic distribution model (RPDM) and fuzzy support vector machines (FSVMs), designed at MIT for autonomous vehicles and other mobile on-road robots.
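
As a rough intuition for the road-probability idea (and only that; this is not the MIT RPDM/FSVM algorithm), a spatial prior over where road pixels tend to appear in a front-view image can be combined with a per-pixel appearance score in a Bayes-style update. Everything in the sketch is synthetic.

```python
# Illustrative road-probability sketch: combine a spatial prior (road tends to
# sit low and central in a front-view image) with a per-pixel appearance score.
import numpy as np

H, W = 60, 80                      # toy image size
rows = np.linspace(0, 1, H)[:, None]
cols = np.abs(np.linspace(-1, 1, W))[None, :]

# Spatial prior: probability a pixel is road, higher near the bottom centre.
prior = rows * (1 - 0.7 * cols)

# Appearance likelihood: pretend a classifier scored each pixel's road-likeness.
rng = np.random.default_rng(1)
likelihood = np.clip(prior + rng.normal(scale=0.15, size=(H, W)), 1e-3, 1 - 1e-3)

# Posterior combination, then a hard road mask.
posterior = prior * likelihood / (prior * likelihood + (1 - prior) * (1 - likelihood))
road_mask = posterior > 0.5
print(f"road pixels: {road_mask.mean():.0%} of the image")
```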

Assistive and Medical Technologies

An assistive robot (as defined by Stanford's David L. Jaffe) is a device that can sense, process sensory information, and perform actions that benefit people with disabilities and seniors (though smart assistive technologies also exist for the general population, such as driver-assistance tools). Movement-therapy robots, by contrast, provide a diagnostic or therapeutic benefit. Both are technologies that are largely (and unfortunately) still confined to the lab, as they remain cost-prohibitive for most hospitals in the U.S. and abroad.

Early examples of assistive technologies include DeVAR, the desktop vocational assistant robot, developed in the early 1990s by Stanford University and the Palo Alto Veterans Affairs Rehabilitation Research and Development center. More recent machine learning-based robotic assistive technologies combine assistive machines with greater autonomy, such as the MICO robotic arm (developed at Northwestern University) that observes the world through a Kinect sensor. The implication is more complex yet smarter assistive robots that adapt more readily to user needs but also require partial autonomy (i.e., a sharing of control between the robot and the human).
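
Shared control is often implemented as a blend of the human's command and the robot's own plan. The sketch below shows one simple linear-arbitration rule under assumed inputs; the blending scheme and confidence value are illustrative and are not the controller used on the MICO arm.

```python
# Shared-control sketch: blend the human's joystick command with the robot's
# plan, weighting the robot more heavily when it is confident about the goal.
import numpy as np

def blend(user_cmd, robot_cmd, confidence):
    """Linear arbitration between human and robot velocity commands."""
    alpha = np.clip(confidence, 0.0, 1.0)   # 0 = fully manual, 1 = fully autonomous
    return alpha * np.asarray(robot_cmd) + (1 - alpha) * np.asarray(user_cmd)

user = [0.2, 0.0, 0.1]        # human joystick: x, y, z velocity
robot = [0.25, -0.05, 0.12]   # robot's plan toward the inferred grasp target
print(blend(user, robot, confidence=0.7))
```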

Multi-Agent Learning

Coordination and negotiation are key components of multi-agent learning, which involves machine learning-based robots (or agents – this technique has been widely applied to games) that are able to adapt to a shifting landscape of other robots/agents and find "equilibrium strategies." Examples of multi-agent learning approaches include no-regret learning tools, which involve weighted algorithms that "boost" learning outcomes in multi-agent planning, and learning in market-based, distributed control systems.
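
A standard example of the weighted, no-regret algorithms mentioned above is the multiplicative-weights (Hedge) update, sketched below. The two-agent "game" is a made-up illustration with random losses, not a specific published system.

```python
# Minimal no-regret learning sketch: multiplicative-weights (Hedge) update.
import numpy as np

rng = np.random.default_rng(42)
n_actions, eta, rounds = 3, 0.1, 500
weights = np.ones(n_actions)
total_loss = 0.0

for t in range(rounds):
    probs = weights / weights.sum()            # mixed strategy for this round
    action = rng.choice(n_actions, p=probs)
    losses = rng.uniform(size=n_actions)       # losses set by other agents/environment
    total_loss += losses[action]
    weights *= np.exp(-eta * losses)           # penalise actions that would have done badly

print("average loss per round:", round(total_loss / rounds, 3))
print("final strategy:", np.round(weights / weights.sum(), 3))
```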

A more concrete example is an algorithm for distributed agents or robots created by researchers from MIT's Laboratory for Information and Decision Systems in late 2014. Robots exploring a building and its room layouts collaborated to build a knowledge base autonomously, producing a better and more comprehensive learning model than any single robot could on its own.
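
As a rough illustration of pooling what several exploring robots have learned (a simplification of the idea, not the MIT algorithm itself), each robot below keeps counts of the room types it has seen, and the team merges those counts into one joint estimate. The room labels and counts are invented.

```python
# Hedged sketch: combine per-robot observation counts into one shared model.
from collections import Counter

robot_a = Counter({"office": 12, "corridor": 30, "kitchen": 2})
robot_b = Counter({"office": 5, "corridor": 18, "lab": 7})
robot_c = Counter({"lab": 9, "kitchen": 4, "corridor": 11})

merged = robot_a + robot_b + robot_c            # pooled observations
total = sum(merged.values())
model = {room: count / total for room, count in merged.items()}
print(model)   # joint estimate of how common each room type is
```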

The Futuristic Outlook

The brief outline above of machine learning-based approaches in robotics, combined with contracts and challenges put out by powerful military sponsors (e.g., DARPA and ARL); innovations by major robotics organizations (e.g., Silicon Valley Robotics) and start-up manufacturers (e.g., Mayfield Robotics); and increased investment by a barrage of automakers (from Toyota to BMW) in a next generation of autonomous vehicles, points to machine learning as a long-term priority for the field.
