Brain-computer interfaces are finally here.

https://img.techlifeguide.com/202303171422526704717800.jpeg

The 2020s of this twenty-first century, in which we find ourselves, will likely see some of the biggest technological advances in everyday life in a hundred years. Bigger than personal computers and cell phones.

Running water, flush toilets, indoor bathrooms, heating, elevators and telephones had become commonplace in developed Europe and the United States by the late nineteenth century. Television sets and television stations appeared in the 1920s, and it was in that decade that radio broadcasts, refrigerators, washing machines and family cars made their way into millions of homes. With all these things in place, what is missing from your modern life?

Sadly, there has been no comparably substantial modernization of daily life since about 1930. Yes, you can now work on a computer, use a cell phone for entertainment and hail a cab with an app, but people could already type, communicate, read the newspaper, chat at a distance, watch movies and TV, call a cab and order food, and perhaps they experienced those things better, because more of them involved contact with a real person. The later advances were mainly advances in how information is transmitted; they made those activities cheaper and more convenient, but if you lived in an affluent area and were not badly off, the substance of your life was not drastically affected.

It is different now that the twenties are here again. Given current trends, thanks to the growing maturity and horizontal convergence of basic technologies such as artificial intelligence, materials science, 3D printing and batteries, we will soon see new technologies that literally change the way we live our daily lives: robots, flying cars, artificial organs, and medicines that can cure cancer or even extend the human life span, to name just a few.

Sun Yu, a materials scientist and the lead lecturer of the frontier course “Brain-Computer Interface”, has published a book called “The Third Layer of the Brain”, and the brain-computer interface discussed there is technology of exactly this caliber.

https://img.techlifeguide.com/031721.jpeg

Scientists still do not dare to claim that they understand the brain, but brain science has advanced enormously in the last few decades. What we now know about the brain, and what we expect from brain-computer interface technology, can, as I see it, be summarized in a single sentence:

*The brain is caged, and it should come out.*

In its natural state, the human brain is not used to its full potential. A popular saying holds that “only 10% of the brain’s potential is realized” and that 90% of the brain goes unused; this is wrong. The truth is that every area of the brain takes part in our daily lives, scientists know quite well which areas do what, and there are no mysterious unused regions. But the brain does have a limitation, and that limitation lies not in the brain itself but in its inputs and outputs.

The brain in its natural state relies on information coming in through the eyes (vision), ears (hearing), nose (smell), mouth (taste) and skin (touch). It transmits information to the outside world by directing the body to perform actions, chiefly through the hands and the voice. The transmission speed of these “devices” is far too low. The brain is capable of accepting faster inputs and outputs if we give it some training.

For example, before civilization, when there was no writing, people could communicate only by voice and body language; the signals were so crude that they could convey only simple meanings. With writing, a person can grasp a large amount of information at once by reading a text quickly, so the speed and accuracy of information input improved greatly. Research has shown that learning to read and write leaves a permanent mark on the brain, with a portion of brain area becoming dedicated to reading and writing. The ability to read and write is not innate; it is trained, and training changes the brain.

The brain of a person with many years of education is far more capable than that of his illiterate parents: when the information input is upgraded, the brain is upgraded. By the same token, many people now play audio and video programs at double or even triple speed. They find that once they are used to it, it feels perfectly natural, with no lag at all in comprehension or memory, and they can no longer stand the pace of ordinary speech. If you want to lecture to them, perhaps the best approach is to record the lesson so they can play it back at variable speed. The new generation will process information at least twice as fast as the previous one. So the brain is an information-processing device particularly adept at upgrading itself, and its main bottleneck is input and output.

So what if we could provide the brain with information beyond sound, images and text, if we could read the electrical signals of the brain’s neurons directly, if we could stimulate the brain’s neural networks directly? How would the brain think if it could quickly access any information, anywhere, anytime? If the brain could become one with a computer, what new capabilities would explode?

We now know that many of the brain’s thoughts, feelings and intentions cannot be put into words. What does depression really feel like? What exactly does an aura feel like? Someone who has never given birth cannot precisely understand what kind of love motherhood is, just as men cannot experience what a female orgasm feels like. Our mouths and hands, these primitive input-output devices, limit our ability to convey such information. What if we could connect two brains directly through a device and let them truly “connect”?

That is what “brain-computer interface” technology does. Brain-computer interfaces let electronic devices talk directly to the brain, and they come in “non-invasive” and “invasive” types. A non-invasive interface has the user wear a helmet or stick something on the forehead, or uses remote monitoring to read brain waves and other signals; the risk is small, so it is easy to popularize. An invasive interface involves drilling holes in the skull, rather like a craniotomy, and inserting tiny electrodes directly into specific parts of the brain. That may sound a bit costly, but it is worth it, especially for some patients.

In the book, Sun Yu describes how brain-computer interface technology can change the life of our brains on four levels.

The first level is “repair”: letting machines talk to the brain so as to replace some of the body’s organ functions. Prosthetic limbs used to be purely mechanical, but scientists can now, to a certain extent, let the brain command a prosthetic limb the way it commands a real one. The hard part of this technology is being able to interpret the brain’s command signals quickly and precisely.

The next level is “improve”: doing something to the brain itself. Scientists have made it very clear that depression is not just psychological but physiological; it is the brain’s hardware that is at fault. Conventional drug therapies are often ineffective, but experiments have shown that the symptoms of depression can be “switched off” by intervening in the brain directly, for example by implanting a small electrical signaling board that stimulates the relevant brain areas when needed.

The third level is “augment”: using brain-computer interfaces to empower the brain and upgrade its functions. The U.S. Army is said to have developed a helmet device that dramatically improves a soldier’s concentration and accuracy and multiplies learning efficiency. We do not know the details of the Army’s device, but we do know that several companies are making civilian versions of this kind of thing.

The fourth level is “communicate”: a deep connection between brain and machine, or even brain and brain. This was demonstrated back in 2015, when three monkeys were made to cooperate on a gaming task purely through a brain-to-brain connection. Deeper connections, which require complex and precise signaling, will have to be invasive, and inserting a single electrode into the brain will not be enough; the vision is that we will eventually implant a mesh in the brain.

Things have now become serious: this is no longer science fiction, and people at every level are doing substantial R&D. It is quite possible that within this decade, the 2020s, many otherwise healthy people will voluntarily have a third arm installed, because they feel they can work better with three hands together, perhaps playing some musical instrument. By then we may realize that the whole “two hands” arrangement was never necessary.

Going further, we may come to find it unnatural to keep the brain in its “natural” state, alone in the skull. It is as if we have grown used to riding vehicles over long distances: leaving the brain to think on its own is like walking dozens of kilometers on two legs, not only unnecessary but inadvisable, unless you are doing it for exercise.

Imagine what the world would be like if the brains of everyone on Earth were wired together, so that one person’s thoughts could be immediately understood by another, or by a large group of people. That would be a far more profound change than the “metaverse” or “mind uploading”.

This is what Elon Musk is doing, and this is what Sun Yu is doing. The reason it could not be done before but can be done now is that it requires several basic technologies to come up to par. For example, you need materials science: electrodes implanted in the brain cannot be made from just any ordinary metal, and Sun Yu’s plan is to use graphene, which happens to be a newly popular material. Another example: how do you interpret the brain’s complex signals? You need artificial intelligence. And to monitor the brain comprehensively and make the various devices work in coordination, you need a high-level information-transfer mechanism; we have arrived just in time for the era of 5G and the Internet of Everything.

So to unite man and machine, you first have to unite the technologies, and you need many people who can synthesize these new technologies.

But in my opinion, the technology is not the most critical thing. The most critical thing is that we still do not understand the brain well enough. I am sure smart prosthetics are just around the corner, but whether something like “uploading memories” can be realized is not a question of whether the technology is ready; it is a question of whether the brain itself allows it, and whether there are side effects we have not thought of. As far as I know, neuroscientists and cognitive scientists are still working on this furiously, and no one can guarantee that the brain will let you fiddle with it like that…

And that is the beauty of technological exploration. We do not know whether it can ultimately be done, but let’s try it first; by the time everyone knows it can be done, we will already be behind. Why shouldn’t it be us who show the world whether this can be done?

Key points

The brain is caged, and it deserves to come out.
Brain-computer interface technology can change the life of our brains on four levels: repair, improve, augment, communicate.