After a decade of significant progress in artificial intelligence, many of these systems have become far more capable by training on enormous collections of data. An artificial neural network, for example, can be trained to distinguish an image of a leopard cat from an image of a leopard, and to tell a leopard apart from other similar-looking images. Yet although this strategy has been remarkably successful to date, it comes with problems and inefficiencies.

Such training depends on human-labeled data, and it means that artificial neural networks often learn shortcuts. For example, a network might use the presence of grass to recognize a photo of a cow, since cows are usually photographed in fields; the grass, not the cow, becomes the cue the system relies on. Alexei Efros, a computer scientist at the University of California, Berkeley, compares these algorithms to students who skip class all semester and then cram a pile of information the night before the exam: they perform well on the test without ever truly learning the material.

  • Supervised learning versus self-supervised learning
  • Is the biological brain related to self-supervised learning?
  • Answering questions about the human brain by understanding learning in artificial intelligence
  • Shortcomings of self-supervised learning in explaining our brain function
  • Conclusion

Supervised learning versus self-supervised learning

The overlap between animal and machine intelligence interests many researchers, but learning under the supervision of another entity ("supervised learning") goes only so far in explaining biological brains. Animals (including humans) generally do not learn from labeled datasets the way machines do. In most cases, an animal explores its environment on its own, and by doing so it gains a rich and robust understanding of the world.

Some computational neuroscientists are now exploring neural networks trained with little or no human-labeled data, known as self-supervised learning algorithms. These algorithms have proved very successful at modeling human language and, more recently, image recognition. In recent studies, computational models of the mammalian visual and auditory systems built with self-supervised learning showed a closer and more significant correspondence to brain activity than models trained with human supervision. Some neuroscientists therefore suggest that artificial networks are beginning to reveal some of the ways our brains actually learn, and that this kind of learning may be part of what has made our brains so successful.


Is the biological brain related to self-supervised learning?

Building brain models inspired by artificial neural networks began about 10 years ago, coinciding with the emergence of a neural network called AlexNet, which revolutionized the classification of unknown images. Like all neural networks, it is made of layers of artificial neurons, computing units connected to one another, whose connections can differ in strength, or weight.

"Synaptic weight" means the strength of the connection between two nodes in a neural network.

If a neural network misclassifies an image, the learning algorithm updates the weights of the connections between neurons to make that mistake less likely in the next round of training. The algorithm repeats this process over and over with all the training images until the network's error drops to an acceptable level. Around the same time, neuroscientists developed the first computational models of the primate visual system using neural networks such as AlexNet and its successors. The approach worked well: when monkeys and artificial neural networks were shown the same images, the activity of real neurons and artificial neurons showed a striking correspondence. Similar results followed for artificial models of hearing and smell.
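To make that loop concrete, here is a minimal sketch in PyTorch of supervised training on labeled images. The tiny linear classifier, the image size, and the number of classes are placeholder assumptions standing in for a deep network like AlexNet; the point is only the cycle of predict, measure the error against human labels, and update the weights.

```python
import torch
import torch.nn as nn

# Placeholder classifier standing in for a deep network such as AlexNet.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch: 8 random 32x32 RGB "images" with human-provided labels.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

for step in range(100):                # repeat over the training data
    logits = model(images)             # forward pass: the network's guesses
    loss = loss_fn(logits, labels)     # how wrong were those guesses?
    optimizer.zero_grad()
    loss.backward()                    # compute gradients of the error
    optimizer.step()                   # nudge the connection weights
```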

One successful line of experiments in understanding artificial intelligence builds computational models of the primate visual system using self-supervised learning. When monkeys and the artificial networks are shown the same images, the activity of real neurons and artificial neurons shows a striking correspondence.

As the field progressed, researchers soon recognized the limitations of supervised learning. For example, in 2017, Leon Gatys, a computer scientist at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T and overlaid it with a leopard-skin texture, creating a strange but still easily recognizable image. An advanced artificial neural network correctly classified the original image as a Ford Model T, but labeled the leopard-skin version a leopard. The supervised network had no understanding of the shape of the car (or of a leopard); it based its judgment on texture alone.

This experiment helps explain why self-supervised learning strategies are replacing supervised learning. In this approach, humans do not label the data; the labels come from the data itself. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill them in. In one common exercise, the learning algorithm shows the network the first few words of a sentence and asks it to predict the next word. When such a model is trained on a huge collection of text gathered from the Internet, it appears to learn the syntactic rules of language and shows impressive linguistic ability, all without supervision or external labels.
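As a toy illustration of that next-word exercise, here is a short plain-Python sketch (the sentence and vocabulary are invented for the example): every training target is simply the word that follows its context in the raw text, so the labels come from the data itself rather than from a human annotator.

```python
# Toy corpus; in practice this would be a huge collection of Internet text.
text = "the cat sat on the mat".split()
vocab = {word: i for i, word in enumerate(sorted(set(text)))}
ids = [vocab[word] for word in text]

# Each training pair is (context so far, next word), read straight off the data.
pairs = [(ids[:t], ids[t]) for t in range(1, len(ids))]
for context, target in pairs:
    print(f"context {context} -> predict {target}")
```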

Animals and humans explore the environment on their own and, by doing so, gain a rich and robust understanding of the world; brain function, then, does not depend on external labels and looks more like self-supervised learning.

Similar efforts are under way in computer vision. In late 2021, for example, Kaiming He and his colleagues introduced a method called the masked auto-encoder, which builds on a technique from the Efros team in 2016. The self-supervised learning algorithm randomly hides approximately three-quarters of each image. The masked auto-encoder then transforms the visible parts of the image into a latent representation, a compact mathematical description that captures important information about the object; after this step, a decoder converts that representation back into a complete image.

The self-supervised learning algorithm trains the encoder-decoder combination to turn images with hidden parts into complete versions of the original. Any differences between the real images and the reconstructions are fed back into the system to help it learn, and this process is repeated over a set of training images until the system's error rate drops to an acceptable level. For example, when a trained masked auto-encoder was shown an image of a bus it had never seen before, with roughly 80 percent of its area hidden, the system successfully reconstructed the structure of the bus. This is a remarkable and valuable result.
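A rough PyTorch sketch of that idea follows. This is not the published masked auto-encoder code: the single linear encoder and decoder, the patch sizes, and the toy image are assumptions chosen to keep the example short. It shows the core recipe, though: hide roughly three-quarters of the patches, encode only the visible ones into a compact latent code, decode a full image, and feed the difference on the hidden parts back as the learning signal.

```python
import torch
import torch.nn as nn

# Assumed toy sizes: a 32x32 RGB image cut into 64 patches of 4x4 pixels each.
n_patches, patch_dim, latent_dim = 64, 4 * 4 * 3, 32
encoder = nn.Linear(patch_dim, latent_dim)               # stand-in for a real encoder
decoder = nn.Linear(latent_dim, n_patches * patch_dim)   # stand-in for a real decoder
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

patches = torch.randn(n_patches, patch_dim)              # one dummy image, as patches
visible = torch.rand(n_patches) > 0.75                   # hide roughly three-quarters

latent = encoder(patches[visible]).mean(dim=0)           # compact latent representation
reconstruction = decoder(latent).view(n_patches, patch_dim)   # predict every patch
loss = nn.functional.mse_loss(reconstruction[~visible], patches[~visible])  # error on hidden parts
optimizer.zero_grad()
loss.backward()                                          # feed the difference back
optimizer.step()                                         # one update; repeat over many images
```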

Computational neuroscientist Blake Richards argues that 90 percent of what our brains do comes from self-supervised learning.

It seems, then, that reconstructions from latent representations carry deeper information than the earlier strategies did: such a system may understand not only textures but also shapes (a car, a leopard, and so on). This is the fundamental idea of self-supervised learning: you build your knowledge of the world from the ground up, like a student who studies and understands the material throughout the semester, rather than cramming the night before the final exam.


Answering questions about the human brain by understanding learning in artificial intelligence

In systems like these, some neuroscientists see glimpses of how we learn. Blake Richards, a computational neuroscientist at McGill University and Mila, argues that 90 percent of what our brains do is self-supervised learning. Biological brains are thought to continuously predict, say, the future location of a moving object or the next word in a sentence, much as a self-supervised learning algorithm tries to predict the hidden part of an image or a piece of text. Brains, both biological and artificial, learn from their own mistakes.

To better understand the similarities between our brains and artificial neural networks, consider the visual system of humans and other primates. Vision is the most highly developed sense in these animals, yet why the visual system is split into two main, separate pathways has long puzzled neuroscientists. One is the ventral visual stream, responsible for recognizing objects and faces; the other is the dorsal visual stream, which processes movement. Richards and his team set out to address this question using a self-supervised model.

To do this, the team built an AI that combined two different neural networks: one based on the ResNet architecture, designed to process images, and the other a recurrent network, which could follow a sequence of previous inputs in order to predict the next expected input.
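The sketch below, in PyTorch, shows the general shape of such a combination; it is not the Richards team's actual model. A small convolutional encoder (a stand-in for ResNet, with made-up sizes) turns each video frame into a feature vector, and a recurrent network reads the sequence of features and is trained to predict the features of the next frame, so the next moment of the video serves as its own label.

```python
import torch
import torch.nn as nn

feat_dim = 64
frame_encoder = nn.Sequential(              # stand-in for a ResNet image backbone
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
recurrent = nn.GRU(input_size=feat_dim, hidden_size=feat_dim, batch_first=True)

# One dummy clip: 8 frames of 64x64 RGB video.
frames = torch.randn(1, 8, 3, 64, 64)
features = frame_encoder(frames.flatten(0, 1)).view(1, 8, feat_dim)

predicted, _ = recurrent(features[:, :-1])        # from frames 1..7, guess what comes next
target = features[:, 1:]                          # the real features of frames 2..8
loss = nn.functional.mse_loss(predicted, target)  # self-supervised: the video labels itself
loss.backward()                                   # gradients would drive one weight update
```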
