Brain-computer interfaces are one of the most promising technologies in the field of neuroscience, with the capacity to connect the human brain directly to machines. Johan de Heer, Segment Manager of Brain Computer Interfaces at the Key Technology Domain Systems at Group level and Director of Research, Technology and Innovation at the Technical Directorate in the Netherlands, explains why extreme caution is required when developing relevant use cases.
Thales has made neuroscience one of the six technology areas it wants to focus on in the coming years. It is a vast field to investigate, encompassing both an understanding of the biology of the nervous system and the chemistry of the brain, as well as cognitive psychology. Which technologies do you see as a priority in terms of developing the first neuroscience use cases?
You’re right – neuroscience is a huge field of exploration, with so many prospects for development that it’s easy to get lost. That’s why it’s important to keep in mind what the sector can offer Thales in terms of concrete applications and markets. Our current strategy involves concentrating our efforts on BCIs (brain-computer interfaces - see box 1). We are still in the early stages, and for the moment, our main aim is to identify the most promising applications in this ecosystem by drawing on the expertise that the company has in the neuroscience field. This has led to some productive collaborations with a number of universities: Twente in the Netherlands, and Laval in Canada, as well as the Army Biomedical Research Institute in France.
Concrete, industry-ready applications of BCIs still seem to be a long way off for Thales…
If you look at the six major technological fields that Thales has identified as being strategically important, some are much more “mature” than BCIs currently are. This is because industry has not invested heavily in a technology it sees as too futuristic… It’s as if BCIs belong to a world of science fiction that prevents them from being grounded in the real world.
However, if you look closely at all the initiatives that have emerged over recent years in the field of brain-computer interfaces, it is clear that, as futuristic as they may seem, BCIs have enabled the development of a number of very real applications, notably in the medical field.
I’m thinking in particular of the advances made possible by deep brain stimulation (see box 2). Some patients suffer from neurological tremors so acute that they are unable to hold a cup of tea. Implanting electrodes that can be activated by a simple switch on a mobile phone can stop the tremors and considerably improve their quality of life.
Another example would be totally paralysed patients who have lost all means of communicating with the outside world.
BCIs that provide the opportunity to communicate directly with the patient’s brain are opening doors for these patients: for example, they can switch on the lights, select a TV channel, or even spell letters, words and sentences. This lifeline did not exist a few short years ago! So, yes, the world of BCIs is still in its infancy, but it has already shown that it has the potential to be a game-changer.
The paradigm shift that BCIs could create seems to apply first and foremost to the field of medicine. Is it possible to imagine repercussions in other fields?
If we go back to the example of deep brain stimulation, there is obviously a huge difference between using this type of technology for therapeutic purposes and potentially using it in the business world, with the aim of optimising human cognition, for example. There is still much that needs to be proven in this field, not to mention the ethical questions that such usage could raise.
However, there is nothing to stop us considering whether there is benefit to be had in using a BCI on an operator in a command and control environment, or even on an airline pilot. As a matter of fact, research already exists that attempts to ascertain whether such usage is compatible with complex environments.
These studies are not very far removed from the practical use cases we are working on at Thales. Within the framework of the EPIIC project (see box 3), in particular, we are looking to incorporate certain cutting-edge technologies from the field of neuroscience in order to measure brain activity and certain biological and physiological vital signs (blood oxygen saturation, heart rate, etc.). The objective is to reduce stress, tiredness and the risk of hypoxia, and thus optimise workload.
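To make the idea concrete, here is a minimal sketch of how vital signs such as heart rate and blood oxygen saturation might feed a crude risk indicator. The thresholds and the `workload_risk` function are purely hypothetical assumptions for illustration; they are not taken from the EPIIC project or any real monitoring system.

```python
# Illustrative only: hypothetical thresholds, not from the EPIIC project.

def workload_risk(heart_rate_bpm: float, spo2_percent: float) -> str:
    """Classify a crude physiological risk level from two vital signs.

    The cut-off values below are placeholder assumptions: in this toy
    model, sustained heart rate above ~100 bpm or oxygen saturation
    below ~90% would trigger closer monitoring of the operator.
    """
    if spo2_percent < 90 or heart_rate_bpm > 120:
        return "high"       # possible hypoxia or acute stress
    if spo2_percent < 94 or heart_rate_bpm > 100:
        return "elevated"   # worth adapting the operator's workload
    return "normal"

print(workload_risk(75, 98))    # a resting operator
print(workload_risk(110, 92))   # elevated stress indicators
```

A real system would of course fuse many more signals, including the brain-activity measurements mentioned above, and validate thresholds per individual.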
Several recent studies have shown that when some of these factors are not kept at optimum levels, performance declines, whilst the risk of error increases. This is not something we want, especially in critical environments where it is vital that the human element perform at its best so as to avoid any breakdown.
We are going to see this type of technology, which draws on brain data, introduced more and more in the next few years… provided that its usability improves. This is a crucial point. For example, we are working with the Paris-based company Conscious Lab, which develops brain sensors embedded in headsets. However, when you wear a headset for a number of hours, it can begin to feel very uncomfortable.
So before these types of objects can be made more widely available, we need to make them more comfortable. This is going to take time, but when you look at how smart watches – which also use many biological sensors – have become part of our day-to-day lives in just a few years, there’s good reason to believe that BCIs will be the next step on that road.
Thales has always stood out through its capacity to deploy technological innovations in critical environments. Do you think that BCIs will one day be sufficiently reliable to be compatible with the specific requirements of such environments?
I’d like to answer with an example, if I may. It takes a few seconds for a visual signal to reach your conscious brain, which then processes the information and brings it to your attention. Yet it takes only around 300 milliseconds for your brain to register something you are not yet consciously aware of.
We could well capture the brain activity that precedes conscious awareness with a BCI, in order to suggest an action as quickly as possible. This would open up some very interesting possibilities, in terms of helping and speeding up the decision-making process, particularly in critical systems, where this gain in time can be vital. Right now, these are just potential avenues of exploration; first we need to improve our knowledge of the capabilities of the human brain.
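The 300-millisecond figure mentioned above is the kind of signal BCI research typically extracts by averaging many noisy brain recordings. The sketch below is a toy illustration of that averaging principle using entirely synthetic data; the sampling rate, noise level and response shape are invented for the demonstration and do not describe any Thales system.

```python
# Toy illustration, not a real BCI pipeline: averaging many noisy
# simulated EEG trials reveals a small event-related response around
# 300 ms that is invisible in any single trial. All parameters are
# assumptions made up for this demonstration.

import math
import random

random.seed(0)

FS = 100                 # sampling rate in Hz (assumed)
N_SAMPLES = 60           # 600 ms per trial
PEAK_SAMPLE = 30         # simulated response centred at 300 ms

def make_trial() -> list[float]:
    """One simulated trial: Gaussian noise plus a small bump at ~300 ms."""
    trial = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
    for i in range(N_SAMPLES):
        trial[i] += 2.0 * math.exp(-((i - PEAK_SAMPLE) ** 2) / 20.0)
    return trial

# Average 200 trials: uncorrelated noise cancels, the response remains.
n_trials = 200
avg = [0.0] * N_SAMPLES
for _ in range(n_trials):
    for i, v in enumerate(make_trial()):
        avg[i] += v / n_trials

# The latency of the averaged peak approximates the response time.
peak = max(range(N_SAMPLES), key=lambda i: avg[i])
print(f"peak latency: ~{peak * 1000 // FS} ms")
```

In practice, detecting such a response in real time, on a single trial, for one specific person, is precisely the hard problem that makes operational BCIs a long-term undertaking.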
The way our brain works is, in many respects, a total mystery to us. That lack of understanding fuels our curiosity, but we do not yet know all the mechanisms that enable dynamic decision-making… There is still a lot of fundamental research to be done on our brain functionalities before we can develop applications that make the most of them. But there is extraordinary potential there!
Another obstacle to the democratisation of BCIs is the fear that there will be inadequate control of technologies that are directly connected to the most private thing we possess: our brain. Can we come up with an ethical framework that would be well regulated enough to one day enable widespread use of BCIs?
That’s the $64,000 question. Especially since it’s a fear that to some extent is well founded. What if, by controlling parameters such as attention span, perception and the decision-making process, we were able to influence someone’s brain and steer them in a particular direction? What we would have in our hands would be nothing short of a neuro-weapon…
But beyond these potentially radical uses of neuroscience, the question that BCIs raise on a wider scale is that of what are starting to be known as “neurorights” (see box 4). As soon as we start talking about deploying technologies that may collect data that comes from our brain, it raises a whole series of questions about one of the most sensitive subjects there is: the integrity of the brain. Who owns the data in your brain? Do you need to give your consent to allow that information to be accessed? Could a company access it, for example, in order to modify your workload according to your personal data (even if it was in your best interests)?
Clearly, this concerns fundamental aspects of human personality. With technology developing so quickly, it is vital that the legal framework keep pace. And within the context of TrUE BCIs, they are questions that need to be taken into account from the outset. That’s why I always say that respecting a code of ethics is a prerequisite for every single project involving BCIs. And this is exactly the kind of area in which the “ethical design” approach that we are committed to has real significance.
At a European level, the laws that will regulate artificial intelligence to ensure that it is responsible and can be trusted are being rolled out, and Thales has played a major part in the discussions. The chances are that, in the very near future, the same approach will be used to develop an ethical framework for BCIs.