Brace yourself: artificial intelligence is becoming more powerful by the minute.
According to a study by researchers from Japan, artificial intelligence is one step closer to the mind-reading machines depicted in popular science-fiction films.
Scientists have recently created a set of ‘deep learning’ algorithms, modeled on the human brain, which can eventually be used to decode activity in the human brain.
Japanese scientists have developed an algorithm that can read human thoughts with disturbing accuracy, according to details from a recently published study available on bioRxiv.
The scary part? Beyond the fact that we’ll soon have AI running around reading your mind (joking), it’s that this isn’t the first time it has been done. The difference is that earlier methods were much simpler, reconstructing pictures from two basic characteristics: their pixels and their overall shape.
“Our brain processes visual information by hierarchically extracting different levels of features or components of different complexities,” said Yukiyasu Kamitani, one of the scientists involved in the study. “These neural networks or AI models can be used as a proxy for the hierarchical structure of the human brain.”
Through various tests, the artificial intelligence was able to analyze brain activity recorded by fMRI and determine what images each subject was observing or imagining.
To achieve this, the system uses a set of artificial neural networks that learn to mimic the brain’s hierarchical processing of visual information.
The study lasted ten months, during which three people viewed images from three categories: natural phenomena, artificial geometric shapes, and letters of the alphabet.
By comparing 50 different photographs with the fMRI scans obtained from each observer’s brain, the neural network developed by the scientific team learned to interpret human thought.
By reconstructing people’s brain patterns, the algorithm was able to identify the images they had observed, ranging from owls and showcases to red mailboxes and airplanes. It also reproduced images that each person merely imagined, such as swans, leopards, bowling balls, and fish.
“Whereas it has long been thought that the externalization or visualization of states of the mind is a challenging goal in neuroscience, brain decoding using machine learning analysis of fMRI activity nowadays has enabled the visualization of perceptual content,” said the research paper.
“Although sophisticated decoding and encoding models have been developed to render human brain activity into images or movies, the methods were essentially limited to the image reconstruction with low-level image bases or to the matching to exemplar images or movies (Naselaris et al., 2009; Nishimoto et al., 2011), failing to combine visual features of multiple hierarchical levels.”
Featured image credit: Shutterstock.