
USA: $100 million for brain research


The aim of the project is to help the United States intelligence community reverse-engineer the brain and identify the algorithms that will enable computers to think like people.

Jordana Cepelewicz

Thirty years ago the U.S. government launched the Human Genome Project, a 13-year effort to sequence and map all of the genes of the human species. Although the initiative was at first met with distrust and even protests, it has profoundly transformed genetics as a science and is now considered one of the most successful scientific projects of all time.

Now the Intelligence Advanced Research Projects Activity (IARPA), which funds research for the U.S. intelligence community and was created as a counterpart to the Defense Department's famous Defense Advanced Research Projects Agency (DARPA), has allocated $100 million to a similarly ambitious project. The goal of the large research program Machine Intelligence from Cortical Networks (MICrONS) is to reverse-engineer one cubic millimeter of brain tissue, to study how the brain performs its computations, and to use what is learned to improve machine-learning and artificial-intelligence algorithms. For the project IARPA has enlisted three teams of scientists, led by David Cox, a biologist and computer scientist at Harvard University; Tai Sing Lee, a computer scientist at Carnegie Mellon University; and Andreas Tolias, a neuroscientist at Baylor College of Medicine (BCM). Each team has drawn up its own five-year research plan.

"This is a substantial investment because we think the problem is critically important, and [it will have] a transformative effect on the work of the intelligence community, and on the world as a whole," said IARPA's Jacob Vogelstein, who oversees the MICrONS program.

The goal of the MICrONS program, carried out under the national BRAIN Initiative (Brain Research Through Advancing Innovative Neurotechnologies) launched by President Obama, is to achieve a breakthrough in computing modeled on the human brain. Today's technology already relies on a family of algorithms called artificial neural networks which, as the name suggests, are based on the architecture of the brain (or at least on what we know about that architecture). Thanks to huge gains in computing power and the enormous amounts of data available on the Internet, Facebook can recognize faces, Siri can recognize voices, cars can drive themselves and computers can beat humans at games such as chess. Yet these algorithms remain imperfect, because they rest on a greatly simplified model of how information is analyzed for patterns. Artificial neural networks typically still rely on models dating from the 1980s and perform poorly in cluttered scenes, where the object the computer is trying to recognize is hidden among many other objects that partially overlap or are ambiguous. Nor can these algorithms generalize: shown one or two sample images of a dog, a computer does not learn to recognize all dogs.
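To make the term concrete, here is a minimal sketch of an artificial neural network of the kind described above: a tiny two-layer classifier written in Python with NumPy. The dimensions, the random weights and the "dog / not dog" framing are invented for illustration; the MICrONS teams work with far larger models trained on real data.

```python
# Minimal sketch of an artificial neural network: a two-layer feedforward
# classifier built with NumPy. Purely illustrative, not a MICrONS model.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    """One forward pass: input -> hidden layer -> class probabilities."""
    hidden = relu(x @ w1 + b1)
    scores = hidden @ w2 + b2
    # Softmax turns raw scores into a probability over the two classes.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Toy dimensions: a 64-pixel "image", 16 hidden units, 2 classes (dog / not dog).
w1 = rng.normal(scale=0.1, size=(64, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.1, size=(16, 2));  b2 = np.zeros(2)

x = rng.normal(size=64)            # a stand-in for pixel intensities
print(forward(x, w1, b1, w2, b2))  # roughly [0.5 0.5] before any training
```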

 
People, by contrast, seem to handle such tasks easily. We can pick out a face in a crowd, recognize a familiar voice in a noisy room and notice a sound or an image on the basis of only one or a few examples seen or heard before. We constantly learn to generalize without needing any prompts or instructions. So to figure out what these models are missing, the MICrONS participants turned to studying the brain. "It's the best guide we have," says Cox.

Although neural networks borrow elements of the architecture found in the brain, the computations they use are not direct copies of any algorithm neurons use to process information. In other words, the ways in which modern algorithms transform information and learn from it are largely the result of engineering by trial and error. They work, but scientists do not really know why, or at least not deeply enough to guide the design of artificial neural networks. It remains unclear whether this kind of processing corresponds to the operations that actually occur in the brain. "So if we can go a level deeper and extract from the brain not just its architecture but also its computations, we can modify these algorithms and bring them closer to the way the brain works," Vogelstein says.

All three groups will attempt to map the signaling between neurons in a cubic millimeter of the cerebral cortex of laboratory rats. One cubic millimeter, less than one millionth of the volume of the human brain, may seem far too small. But today's methods either record the activity of only a handful of neurons at a time or blur together the signals of millions of them, as in functional magnetic resonance imaging. The MICrONS participants now plan to record the activity of, and the connections among, roughly 100,000 neurons while a laboratory rat views images and performs learning tasks. That is extremely difficult, because it requires imaging with very high precision and tracing neural wiring only a few millimeters long. "It's like making a map of the United States by measuring every inch," Vogelstein says.

And yet Vogelstein is optimistic, given the funds recently allocated for full-scale research. "With the launch of the national BRAIN Initiative came a host of new tools and methods, in terms of both resolution and scale, that allow the detailed studies of the brain needed to map its circuits," he said. "So this is a unique historical moment: for the first time we have the tools, techniques and technical equipment to build these concepts while accounting for every neuron and every synapse."

Each group of scientists plans to build its "road map" of the brain in a different way. To measure brain activity in rats as they learn to recognize objects on a computer screen, the group led by Cox plans to use a method called two-photon microscopy. The scientists will introduce into the rats a modified fluorescent protein that is sensitive to calcium. When a neuron fires, calcium ions rush into the cell and the protein glows brighter, so with a laser scanning microscope the researchers can watch neurons light up. "It's a bit like listening in on the brain," Cox explains. "Just as eavesdropping on a phone call tells you what is going on, we will be able to listen in on the critical internal processes taking place in the brain of a living animal as it performs some action."
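The article does not describe how such recordings are analyzed, but a standard way to summarize a calcium-imaging trace is the relative change in fluorescence, ΔF/F. The sketch below, with a synthetic trace and an assumed baseline percentile, is only meant to show what "watching neurons light up" looks like once the data are in a computer; it is not a description of the MICrONS pipeline.

```python
# Sketch: turning a raw fluorescence trace into a ΔF/F signal.
# The synthetic trace and the baseline percentile are illustrative assumptions.
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """ΔF/F = (F - F0) / F0, with F0 estimated as a low percentile of the trace."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Synthetic trace: a slowly drifting baseline plus two calcium transients.
t = np.arange(1000)
trace = (100 + 0.01 * t
         + 40 * np.exp(-(t - 300) ** 2 / 200)
         + 60 * np.exp(-(t - 700) ** 2 / 200))

dff = delta_f_over_f(trace)
print("peak ΔF/F:", round(dff.max(), 2))  # the bright moments mark active neurons
```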

A one-cubic-millimeter sample of the rat's brain will then be sent to Jeffrey Lichtman, a biologist and neuroscientist at Harvard University. In his laboratory the sample will be cut into extraordinarily thin slices, and those slices will be examined with a microscope at a resolution high enough to see the elongated, wire-like processes of brain cells connecting to one another. The group led by Tolias will use a similar method, called three-photon microscopy, which will allow them to study not only the superficial layers of the rat brain examined by Cox and his colleagues but also the deeper layers.

The group led by Lee plans a more radical approach to mapping the connectome, the neural wiring diagram of the brain. Together with geneticist George Church of Harvard Medical School, they intend to use DNA barcoding: labeling each neuron with a unique nucleotide sequence (a barcode) and chemically joining the barcodes across each synapse so that the wiring can be reconstructed. Although this method does not yield the kind of spatial information that microscopy provides, Lee hopes it will be more accurate and deliver results faster, assuming it works at all; so far the method has never been applied successfully. "If DNA barcoding works, it will fundamentally change neuroscience and connectomics," Lee says.
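A toy sketch of the barcoding idea: if every neuron carries a unique nucleotide tag and each synapse yields a joined (presynaptic, postsynaptic) pair of tags, then reading out those pairs recovers the wiring diagram. The barcodes and synapse reads below are invented for illustration; the real chemistry and sequencing pipeline is far more involved.

```python
# Toy reconstruction of a wiring diagram from barcode pairs (invented data).
from collections import defaultdict

neuron_of = {            # barcode -> neuron label, assigned when the tags are introduced
    "ACGTTGCA": "n1",
    "TTAGCCGA": "n2",
    "GGCATACT": "n3",
}

# Sequenced barcode pairs, one per detected synapse (presynaptic, postsynaptic).
synapse_reads = [
    ("ACGTTGCA", "TTAGCCGA"),
    ("ACGTTGCA", "GGCATACT"),
    ("TTAGCCGA", "GGCATACT"),
    ("ACGTTGCA", "TTAGCCGA"),   # repeated reads simply strengthen the edge
]

connectome = defaultdict(int)
for pre, post in synapse_reads:
    connectome[(neuron_of[pre], neuron_of[post])] += 1

for (pre, post), count in sorted(connectome.items()):
    print(f"{pre} -> {post}: {count} read(s)")
```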

But that is only half of the larger MICrONS program. The scientists then have to figure out how all this information can be applied to machine-learning algorithms, and they have some ideas. For example, many scientists believe the brain is inherently Bayesian: neurons represent sensory information as probability distributions, computing the most likely interpretation of events on the basis of previous experience. The hypothesis rests largely on the brain's feedback circuitry. Information does not simply flow forward; an even greater number of connections carry information in the opposite direction. In other words, the researchers hypothesize that perception is not just a matter of passing information from an input to an output. Rather, it is a constructive process, an "analysis by synthesis," in which the brain maintains and generates an internal representation of the surrounding world, forming the expectations and predictions that allow it to interpret incoming data and plan how to use them. "We are firmly committed to that basic principle," Cox explains. "The hallmark of the synthesis process is that we imagine what might be happening in the world, compare it with what we actually see, and then use that to form our ideas."
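A minimal sketch of the Bayesian picture described above: a prior built from experience is combined with how well each hypothesis explains the noisy input, and the normalized product is the posterior over interpretations. The hypotheses and the numbers here are invented for illustration.

```python
# Bayes' rule over a handful of hypothetical interpretations of a sensory input.
prior = {"dog": 0.30, "cat": 0.30, "fox": 0.40}        # expectations from experience
likelihood = {"dog": 0.70, "cat": 0.20, "fox": 0.10}   # fit of each hypothesis to the data

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

best = max(posterior, key=posterior.get)
print(posterior)                      # {'dog': 0.677..., 'cat': 0.193..., 'fox': 0.129...}
print("most likely interpretation:", best)
```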

Consider the retina, which responds to light by generating electrical signals that travel along the optic nerve into the brain: it is essentially a two-dimensional structure. So when people see something, the brain probably uses a probabilistic model of this kind to infer the three-dimensionality of the surrounding world from the light falling on the retina's two-dimensional surface. If that is indeed what happens, the brain has found a far more effective way to approximate and generalize over variables than anything our current set of mathematical models can manage. After all, look at a picture containing 100 objects and consider only whether each object faces forward or backward, just two of its many possible orientations: that alone gives 2^100 possible combinations, far too many to evaluate by brute-force computation. Yet the brain handles this easily, and over an essentially infinite range of distances, angles and lighting conditions. "What the brain does is untangle this variability [in the underlying coordinates] so that the factors can easily be separated from one another," Tolias explains.
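The arithmetic behind that example is easy to check:

```python
# The combinatorics behind the example above: 100 objects, each of which can
# face forward or backward (just two of its many possible orientations),
# already give 2**100 configurations to check by brute force.
combinations = 2 ** 100
print(combinations)            # 1267650600228229401496703205376
print(f"{combinations:.2e}")   # ~1.27e+30, far beyond exhaustive enumeration
```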

Each of the three groups has enlisted computer scientists to translate these theories into mathematical models and then test them against the data obtained by reverse-engineering the brain. "For any given description of an algorithm, a probabilistic algorithm for example, there are millions of possible implementations you would have to sort through to turn that theory into a working program," Vogelstein says. "Some combinations of those parameters and features will give you a good algorithm, while others will give you algorithms that are ineffective or unusable. By reading the settings of those parameters out of the brain, rather than guessing at them in software as we have done until now, we hope to narrow the field to a small number of implementations that match what the brain does."

With models of that internal representation in hand, the MICrONS participants hope to build better automated systems, above all machines that learn to identify objects without having studied thousands of labeled examples. Vogelstein wants to help U.S. intelligence by applying methods of unsupervised learning. "We may have only one picture, or only one example of a cyberattack we need to prevent, or a single record of a financial crisis or of a weather event that could cause problems. We need to generalize from that information to the much wider range of situations in which the same event or signature might appear," he says. "That is what we hope to achieve: better integration, a better ability to pick out the essential features and more effective use of sparse data."
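One simple reading of "generalizing from a single example" is one-shot classification: given a single labeled exemplar per category, assign a new item to the category whose exemplar it most resembles. The sketch below uses random feature vectors as stand-ins; it illustrates the idea only and is not a description of any method the MICrONS teams plan to use.

```python
# One-shot classification by nearest exemplar, with invented feature vectors.
import numpy as np

rng = np.random.default_rng(1)
exemplars = {                      # one feature vector per known category
    "benign": rng.normal(size=8),
    "attack": rng.normal(size=8),
}

def classify(features):
    """Return the category whose single exemplar is closest in feature space."""
    return min(exemplars, key=lambda name: np.linalg.norm(features - exemplars[name]))

new_event = exemplars["attack"] + 0.1 * rng.normal(size=8)   # a noisy copy of "attack"
print(classify(new_event))        # -> "attack"
```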

Although the scientists agree that building such algorithms from the data gathered by studying the brain will be the hardest part of MICrONS (it will mean working out how to program the way the brain processes information and forms new connections), some problems are proving difficult even in the early stages of the work. The brain measurements, for example, will produce roughly two petabytes of data, equal to the storage capacity of about 250 laptops or 2.5 million CDs. Simply storing that much data will be hard, and to solve the problem IARPA has begun working with Amazon. Moreover, all the data come in the form of images. Searching such a dataset relies on a process called segmentation, in which each structural element of the neurons and their connections is assigned its own color so that a computer can pick out shared characteristics and features. "Even if every person on Earth worked on the coloring, it would take a lifetime to paint every cubic millimeter of images," Lichtman says. So instead of segmenting the image data by hand, the scientists will work on more sophisticated methods of computer image processing.
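The storage comparison is straightforward to verify, assuming, say, 8 TB per laptop and 800 MB per CD (the article does not state which capacities it used):

```python
# Checking the storage comparison under assumed capacities: 8 TB per laptop,
# 800 MB per CD. These capacities are not given in the article.
PB = 10 ** 15
TB = 10 ** 12
MB = 10 ** 6

dataset = 2 * PB
print(dataset / (8 * TB))     # 250.0 laptops
print(dataset / (800 * MB))   # 2500000.0 CDs, i.e. 2.5 million
```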

Lichtman has already had success processing 100 terabytes of data (a twentieth of the full dataset the MICrONS participants intend to gather), obtained from a small sample of the thalamus, a brain region that relays information from the senses. His group's results will be published this month in the molecular and cell biology journal Cell. "We found that different nerve cells sometimes connect to one and the same area, with the same axon jumping from cell to cell, which suggests the thalamus works differently than we expected," Lichtman says. Perhaps something similar will turn up in the one-cubic-millimeter sample of cerebral cortex they have just begun to study. "We know we can work with large volumes of data, but now we are starting to study what might be called enormous volumes," he says. "It is a significant step forward, and we think we are ready for it."

David Mumford, a mathematician and Fields Medal winner who advises the group led by Lee but is not among the program's participants, views the project with enthusiasm. "This is real progress," he says. "Once such data are available, scientists will face the most challenging and interesting task: figuring out what can be done with them to better understand how neurons interact. I have always wanted the chance someday to record such an impressive amount of information, and I believe these people can do it."

"But I am somewhat more skeptical about the possibility of using these data for artificial neural networks. That part is still quite difficult to understand, and further removed from real life."

Even so, the scientists in all three groups are confident their work will pay off. "Whatever happens, no matter what, it will already be a success," Lichtman says. "Maybe not the success you expected, but it is a chance. And I do not torment myself with doubts about whether our idea is mistaken or not. It is not about the ideas. The point is that the brain actually exists, it is very complicated, nobody has really seen it yet, and so we have to look. What do we risk?"

They hope to succeed where the Human Brain Project, with its budget of $2 billion, has run into trouble. As Cox explains, their approach differs fundamentally from the one chosen by the Human Brain Project's participants, both technically and in terms of logistics: rather than starting by trying to model the brain, they are essentially working in the opposite direction. And there is hope that running MICrONS with several groups of scientists will create the atmosphere of cooperation and healthy competition needed to achieve serious results. IARPA plans to publish the research findings so that other scientists can offer their own ideas and contributions. "And although it is like staring at grains of sand," he says, "as my university teacher used to say, in one grain of sand you can see God."
