After a decade of rapid progress in artificial intelligence, many of these systems have become remarkably capable by training on enormous collections of data. An artificial neural network, for example, can be trained to distinguish an image of a cat from an image of a leopard, and to tell a leopard apart from other, visually similar pictures. Yet although this strategy has been strikingly successful so far, it comes with problems and inefficiencies.
Such training almost always relies on human-labeled data, and it means artificial neural networks often take shortcuts while learning. A network might, for instance, use the presence of grass to recognize a photo of a cow, since cows are usually photographed in fields; a superficial cue like this supplies just enough information for the system to make its call. Alexei Efros, a computer scientist at the University of California, Berkeley, puts it this way: we are raising a generation of algorithms that behave like students who skipped class all semester and then cram a pile of information the night before the exam. Such a student does not truly learn the material, yet may still perform well on the test.
- Supervised learning versus self-supervised learning
- Is the biological brain related to self-supervised learning?
- Answering questions about the human brain by understanding learning in artificial intelligence
- Shortcomings of self-supervised learning in explaining how our brain works
Supervised learning versus self-supervised learning
The common ground between animal and machine intelligence interests many researchers, because learning under the supervision of another entity ("supervised learning") turns out to be of limited relevance when studying biological brains. Animals, humans included, do not learn from labeled datasets the way machines do. In most cases an animal explores its environment on its own, and in doing so acquires a rich and robust understanding of the world.
Some computational neuroscientists have therefore begun exploring neural networks trained with little or no human-labeled data, known as self-supervised learning algorithms. These algorithms have proved remarkably successful at modeling human language and image recognition. In recent studies, computational models of the mammalian visual and auditory systems built with self-supervised learning showed a closer and more significant correspondence to brain activity than models built with human-supervised learning. To some neuroscientists, the artificial networks are beginning to reveal some of the actual ways our brains learn, and that is no small thing; this process may be part of what has made our brains so successful.
Is the biological brain related to self-supervised learning?
Brain models inspired by artificial neural networks first appeared about a decade ago, alongside the emergence of a network called AlexNet, which revolutionized the task of classifying images. Like all neural networks, AlexNet is built from layers of artificial neurons: computing units connected to one another by connections that can vary in strength, or weight.
"Synaptic weight" refers to the strength of the connection between two nodes in a neural network.
If a neural network misclassifies an image, the learning algorithm updates the weights of the connections between neurons so that the same mistake becomes less likely in the next round of training. The algorithm repeats this process over and over across all the training images until the network's error falls to an acceptable level. Around the same time, neuroscientists developed the first computational models of the primate visual system using neural networks such as AlexNet. When monkeys and these artificial networks were shown the same images, the activity of the real neurons and the artificial neurons showed a striking correspondence, and similar results followed for artificial models of hearing and smell.
One successful line of work built computational models of the primate visual system from neural networks: when monkeys and the artificial networks were shown the same images, the activity of real neurons and artificial neurons corresponded closely.
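The error-driven weight updates described above can be sketched with a toy example. This is nothing like AlexNet, of course; it is a minimal logistic-regression classifier trained by gradient descent on an invented two-dimensional dataset, but the loop is the same in spirit: predict, measure the error against human-provided labels, and nudge the weights to make the mistake less likely next time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: 2-D points, label 1 if x + y > 0, else 0.
# (The labels play the role of human annotation in supervised learning.)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # connection weights
b = 0.0          # bias
lr = 0.5         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Repeated passes over the labeled data: each pass adjusts the
# weights to reduce classification error, as described above.
for epoch in range(200):
    p = sigmoid(X @ w + b)            # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

After training, `accuracy` is close to 1.0 on this linearly separable toy problem; real image classifiers repeat the same predict-and-correct cycle over millions of labeled photos.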
As the field progressed, researchers soon recognized the limitations of supervised learning. In 2017, for example, Leon Gatys, a computer scientist at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T and overlaid it with a leopard-skin texture, producing a strange but easily recognizable picture. A state-of-the-art artificial neural network correctly classified the original image as a Model T, but labeled the leopard-skin version a leopard. The supervised network had learned nothing about the shape of a car (or a leopard); it based its judgment on texture alone.
This experiment makes it easy to see why self-supervised strategies have been displacing supervised learning. In this approach, humans do not label the data; the labels emerge from the data itself. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill them in. In one such exercise, the learning algorithm shows the network the first few words of a sentence and asks it to predict the next word. When such a model is trained on an enormous corpus of text gathered from the internet, it appears to learn the syntactic rules of the language and displays impressive linguistic ability, all without external supervision or labels.
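The next-word exercise can be illustrated with a deliberately tiny stand-in, far simpler than the language models discussed here: a bigram table built from an invented toy corpus. The point is that the "labels" (each word's successor) come straight from the raw text, with no human annotation.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Self-supervised labels: for each word, the "label" is simply
# the word that follows it in the raw text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> "on" (it always followed "sat" here)
```

A large language model replaces this lookup table with a deep network and billions of training sentences, but the self-supervised objective, predict the next word from the preceding ones, is the same.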
Animals and humans explore the environment on their own, and in doing so gain a rich and robust understanding of the world; our brains do not depend on ready-made labels, which aligns them with self-supervised learning.
Similar efforts are under way in computer vision. In late 2021, for example, Kaiming He and his colleagues introduced a method called the masked autoencoder, which builds on a technique the Efros team developed in 2016. The self-supervised learning algorithm randomly hides roughly three-quarters of each image. An encoder then transforms the visible parts into a latent representation, a compact mathematical description that carries important information about the object; after this step, a decoder transforms that representation back into a complete image.
The algorithm trains the encoder-decoder combination to turn masked images into complete versions of the original, with any difference between the real and reconstructed images fed back into the system to help it learn. The process repeats over a set of training images until the system's error rate is acceptably low. When one trained masked-autoencoder system was shown a previously unseen image of a bus with roughly 80 percent of it hidden, it successfully reconstructed the structure of the bus, a remarkably important and valuable result.
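A heavily simplified sketch of the masking idea follows, under invented assumptions: the "images" are one-dimensional toy signals that lie on a low-dimensional subspace, and the encoder-decoder pair is collapsed into a single linear map fit by least squares. This is far simpler than He's masked autoencoder, but it shows the core move, reconstructing the hidden three-quarters of a signal from the visible quarter, with targets drawn from the data itself.

```python
import numpy as np

rng = np.random.default_rng(1)

D, K, N = 16, 3, 500  # signal length, latent dimension, training examples

# Toy "images": every signal is a mixture of K fixed basis patterns,
# so the visible quarter carries enough information to fill the gaps.
basis = rng.normal(size=(K, D))
codes = rng.normal(size=(N, K))
signals = codes @ basis

# Mask: hide ~75% of each signal. For simplicity the same positions
# stay visible everywhere (the real method masks random patches).
visible = np.arange(D) % 4 == 0  # keep every 4th sample (4 of 16)

# "Encoder-decoder" collapsed into one linear map, fit by least
# squares: visible part in, full signal out. The reconstruction
# targets are the training signals themselves -- no labels needed.
W, *_ = np.linalg.lstsq(signals[:, visible], signals, rcond=None)

# Reconstruct a previously unseen signal from its visible quarter.
test = rng.normal(size=K) @ basis
recon = test[visible] @ W
error = np.max(np.abs(recon - test))
```

Because the toy signals really do live on a 3-dimensional subspace, the reconstruction is essentially exact; real images are far messier, which is why the actual method needs a deep encoder and decoder rather than one linear map.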
Computational neuroscientist Blake Richards estimates that 90 percent of what our brains do comes from self-supervised learning.
The latent representations behind such reconstructions therefore seem to contain deeper information than previous strategies could capture: the system may come to understand not just textures but shapes (a car, a leopard, and so on). This is the fundamental idea of self-supervised learning: build your knowledge of the concepts from the ground up, like a student who studies and absorbs the material throughout the semester, rather than cramming the night before the final exam.
Answering questions about the human brain by understanding learning in artificial intelligence
In systems like these, some neuroscientists see glimpses of how we learn. Blake Richards, a computational neuroscientist at McGill University and Mila, estimates that 90 percent of what our brains do comes from self-supervised learning. Biological brains are generally thought to continuously predict, say, the future location of a moving object or the next word in a sentence, much as a self-supervised algorithm tries to predict the hidden part of an image or a piece of text. Brains, biological and artificial alike, learn from their own mistakes.
To better appreciate the similarities between our brains and artificial neural networks, consider the visual system of humans and other primates. Vision is the most highly developed sense in these animals, yet neuroscientists have long puzzled over why it is organized into two main, separate pathways: the ventral visual stream, responsible for recognizing objects and faces, and the dorsal visual stream, which processes movement. Richards and his team took a first step toward answering this question using a self-supervised model.
To do so, the team trained an AI that combined two different neural networks. The first, built on the ResNet architecture, was designed to process images; the second, a recurrent network, could track a sequence of previous inputs in order to predict the next expected input.
Why the primate visual system contains two main, separate pathways is an important open question in neuroscience, and one that can be partly addressed by building and testing hybrid artificial intelligence models.
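The recurrent network's job, predicting the next expected input from the previous ones, can be mimicked with a much simpler stand-in: a linear autoregressive model fit to a toy sensory stream (a sine wave). The stream and model are invented for illustration; the point is that the prediction targets are just future samples of the stream itself, which is what makes the setup self-supervised.

```python
import numpy as np

# A smooth toy sensory stream: samples of a sine wave.
t = np.arange(200)
x = np.sin(0.2 * t)

# Self-supervised setup: the "label" for each step is simply the
# next sample of the stream. A linear autoregressive model on the
# two previous samples stands in for the recurrent network.
past = np.stack([x[1:-1], x[:-2]], axis=1)  # the two preceding samples
future = x[2:]                              # the sample to predict

# Fit the predictor by least squares -- no human labels involved.
coef, *_ = np.linalg.lstsq(past, future, rcond=None)

pred = past @ coef
error = np.max(np.abs(pred - future))
```

A pure sine wave happens to be exactly predictable from its last two samples, so the error here is essentially zero; real sensory streams are not, which is why the brain (and the recurrent network) must keep correcting its predictions.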
Shortcomings of self-supervised learning in explaining how our brain works
Josh McDermott, a computational neuroscientist at the Massachusetts Institute of Technology, has worked on models of visual and auditory perception using both supervised and self-supervised learning, and he is among the researchers who remain skeptical. In his laboratory he has designed metamers: audio and visual signals that are noise-like and incomprehensible to humans, yet indistinguishable from real signals to an artificial neural network. This experiment shows that the representations formed in the deeper layers of a neural network, even one trained with self-supervision, do not match the representations in our brains. In McDermott's view, these self-supervised approaches are genuine progress, learning representations that can support many cognitive behaviors without an observer's labels, but they still face deep problems. The algorithms themselves also need more work: Meta AI's Wav2Vec 2.0, for example, can only predict latent parts of audio spanning a few tens of milliseconds, which is less than the time needed to perceive a distinguishable noise, let alone a word.
Truly understanding brain function will require more than self-supervised learning, because the brain is full of feedback connections, while current models have few such connections, if any at all. A natural next step is to use self-supervised learning to train feedback-rich networks and observe how their behavior compares with real brain activity. Another important step will be matching the activity of artificial neurons in self-supervised models against the activity of individual biological neurons. If the similarities observed between the brain and self-supervised models hold for other sensory systems as well, it will be strong evidence that whatever our brains are capable of requires self-supervised learning in some form.
Source: Quanta Magazine