Hyperdimensional computing is built on hypervectors, vectors whose dimensionality runs into the tens of thousands. Aimed at cognitive tasks, it works by computing similarity between data represented as hypervectors, and it offers fast learning, high energy efficiency, and acceptable accuracy in learning and classification tasks. Its inherently robust nature also aids data transformation.
Large vectors have versatile properties that suit the workings of AI. Any number of vectors drawn at random from a 10,000-dimensional hyperspace will be nearly orthogonal to one another, which means a fresh vector, different from all previous ones, can be generated at any time. Adding two vectors produces a vector similar to both of them, while multiplying two vectors produces a vector dissimilar to both.
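As a rough illustration (the 10,000-dimensional bipolar encoding and the NumPy usage are assumptions made for demonstration, not prescribed by the article), the sketch below checks these three properties:

```python
# Illustrative sketch: basic hypervector properties with random bipolar
# (+1/-1) vectors of dimension 10,000.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with entries drawn from {+1, -1}."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    """Cosine similarity between two hypervectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a, b = random_hv(), random_hv()

# 1) Two random hypervectors are nearly orthogonal (cosine close to 0).
print(round(cosine(a, b), 3))   # e.g. ~0.01

# 2) Addition (bundling) yields a vector similar to both operands.
s = a + b
print(round(cosine(s, a), 3))   # ~0.7

# 3) Element-wise multiplication (binding) yields a dissimilar vector.
p = a * b
print(round(cosine(p, a), 3))   # ~0.0
```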
Suppose a group has a text of arbitrary size and content and wants to guess whether it is French or English. What can they do? They can compute vectors for each language and for the input, then compare the angles between them. The group starts by computing a single vector for each language, one for French and one for English. They then compute a single vector for the input text and compare it with both language vectors. The language vector closest to the input vector most likely indicates the input's language.
Step 1: Computing a 10k vector for a language
The encoding will go as follows:
Generate a random 10k vector for each letter, using +1 and -1 as the only available values. A letter vector will look something like this: (+1, -1, +1, +1, -1, -1, -1, +1, …)
Encode trigrams using rotate and multiply operations; this condenses a short sequence of three letters into a single 10k vector. The trigram vectors of the entire training text are then summed to produce the language vector.
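The article does not spell the encoding out in code, so the sketch below fills in assumed details: random bipolar letter vectors for a simple alphabet, trigrams bound by rotating (permuting) the first two letter vectors and multiplying all three element-wise, and a profile vector formed by summing every trigram vector. Names such as encode_trigram and encode_text are illustrative, not from the article.

```python
# Sketch of Step 1 under assumed conventions: letter vectors, rotate-and-multiply
# trigram encoding, and a profile vector that sums all trigram vectors.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

# One random +1/-1 vector per letter (the item memory).
letter_hv = {c: rng.choice([-1, 1], size=D) for c in ALPHABET}

def rotate(v, k=1):
    """Rotate (circularly permute) a hypervector by k positions."""
    return np.roll(v, k)

def encode_trigram(a, b, c):
    """Bind three letters into one trigram vector: rot(rot(A)) * rot(B) * C."""
    return rotate(letter_hv[a], 2) * rotate(letter_hv[b], 1) * letter_hv[c]

def encode_text(text):
    """Sum the vectors of all trigrams in a text into one 10k profile vector."""
    text = "".join(ch for ch in text.lower() if ch in letter_hv)
    profile = np.zeros(D)
    for i in range(len(text) - 2):
        profile += encode_trigram(text[i], text[i + 1], text[i + 2])
    return profile
```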
Step 2: Computing a 10k vector for the input text
This process is exactly the same as the one used for encoding a language vector.
Step 3: Comparing the input and language vectors
By using cosine similarity, one can compare the angles between the input vector and each language vector; the language vector forming the smallest angle with (i.e. most similar to) the input gives the prediction.
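Continuing the sketch above (and assuming the hypothetical encode_text helper defined there, with toy placeholder training text in place of real corpora), Step 3 might look like this:

```python
# Sketch of Step 3: classify the input by the highest cosine similarity.
import numpy as np

def cosine(a, b):
    """Cosine of the angle between two hypervectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy language profiles; real ones would be built from much larger texts.
english_hv = encode_text("the quick brown fox jumps over the lazy dog")
french_hv  = encode_text("le vif renard brun saute par dessus le chien paresseux")
input_hv   = encode_text("bonjour tout le monde")

scores = {"English": cosine(input_hv, english_hv),
          "French":  cosine(input_hv, french_hv)}
# With enough training text, the closest language vector is the right answer.
print(max(scores, key=scores.get))
```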
For robots to be as intelligent as humans across a variety of tasks, they need to coordinate sensory data with their motor capabilities. Scientists from the University of Maryland published a paper in the journal Science Robotics describing a potentially revolutionary approach to the way AI handles sensorimotor representation, based on hyperdimensional computing theory.
The researchers aimed to improve a robot's "active perception", its ability to integrate the way it senses and acts in the world around it. "We find that action and perception are often kept in separated spaces," they wrote in the paper, a split they attribute to traditional thinking.
Instead, they proposed "a method of encoding actions and perceptions together into a single space that is meaningful, semantically informed, and consistent by using hyperdimensional binary vectors (HBVs)."
Using these vectors, the researchers can keep all the sensory information the robot receives in one place, essentially creating its memories. As more information is stored, "history" vectors are created, increasing the machine's memory content. This should make robots better at making autonomous decisions, anticipating future situations, and completing tasks.
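The paper's exact HBV construction is not reproduced here, so the following is only a loose sketch under assumed conventions (binary vectors, XOR for binding, bitwise majority for bundling). It illustrates how a perception vector and an action vector could be combined into a single vector, and how successive combined vectors could be accumulated into a growing "history" vector.

```python
# Loose illustration of the idea, not the paper's actual method: bind each
# perception with the action taken, then bundle the bound pairs over time.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_bits():
    """Random binary hypervector."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(x, y):
    """Bind two binary hypervectors with XOR."""
    return np.bitwise_xor(x, y)

def bundle(vectors):
    """Bundle binary hypervectors with a bitwise majority vote."""
    return (np.sum(vectors, axis=0) > len(vectors) / 2).astype(np.uint8)

# Toy sensorimotor stream: each time step binds what was sensed with what was done.
perceptions = [random_bits() for _ in range(5)]
actions     = [random_bits() for _ in range(5)]
events      = [bind(p, a) for p, a in zip(perceptions, actions)]

# The "history" vector grows richer as more events are bundled into it.
history = bundle(events)
```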