AI shows its TrUE colours

Darryn (Daz) Rawlins, Managing Director, Thales Training and Simulation, UK, writes about how artificial intelligence is playing an ever-bigger role in national defence. But can we trust it in an increasingly complex environment?

Watching the latest instalment from the Terminator franchise (Dark Fate, if you’re interested), it would be easy to be alarmist about a future that sees the rise of SkyNet and the fall of mankind. But that ripping, if frightening, yarn aside, it did get me thinking about the rapid growth of artificial intelligence (AI) and the increasingly influential part it’s playing in all our lives – including in the defence sector.

Artificial intelligence evokes strong feelings. For some, it feels like an abdication of human control that can only lead to misery, death and destruction. For others, it promises huge advances in areas like scientific research, health, education and safety.

Watching the film also got me thinking about what we’re doing at Thales – a world-leading defence and security company – to stop something like SkyNet happening and AI running amok. After all, we’ve been using AI techniques for years in our Starforce computer generated forces (CGF). By CGF, I mean the models of aircraft, ground and maritime vehicles that populate our simulation systems, and which trainees can interact with.

What’s really exciting for me in this area has been the pace of change and the incredible advances in recent years. It really has been a digital revolution: not only ever-cheaper and faster computational performance, but also the new AI techniques this has enabled. Bringing machine learning and deep neural networks into the mainstream, along with associated technologies like big data analytics and Xtended Reality (XR) – virtual, augmented and mixed reality – has opened up many new applications.

We’ve seen these advances in everything from Google DeepMind’s AlphaZero defeating the world’s strongest chess engines, to AI company Psibernetix’s ALPHA aerial combat application, which has repeatedly beaten experienced fighter pilots in simulated air combat. (Side note: Thales recently acquired Psibernetix to strengthen our work in Certifiable AI.)

AIs in Training

These AI technologies have helped create more realistic, more human-like and adaptive training programmes – and they continue to equip defence forces with the right skills, expertise and experience to prepare for, and carry out, individual, team and platform-specific roles in the real world – in air, land and maritime domains.

To give this more context from the part of the industrial world I live in – supporting Combat Air capability – we can use AI in our training programmes to train pilots to identify aircraft accurately, be more fuel efficient, execute manoeuvres better and numerous other things. We can use AI to model the optimal course of action based on accumulated knowledge and historical data built up over countless hours of simulated training. We can use AI to crunch a huge amount of data to work out what the next best moves would be in any given situation, and how multiple entities could work together. And we can apply AI to data to better define future tactics for a range of activities across various platforms – air, land or maritime.
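To make that idea a little more concrete, here is a minimal, purely illustrative sketch – not Thales code, and with invented feature names, manoeuvre labels and data – of how a simple model could be trained on graded simulator sorties and then asked to suggest a next action for a new tactical picture:

```python
# Hypothetical sketch: learn a "next best manoeuvre" recommender from logged
# simulator sorties. All feature names, labels and data are invented for
# illustration; a real system would use far richer state and rigorous validation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [own_speed_kts, target_bearing_deg, target_range_nm, fuel_fraction]
X = np.array([
    [420,  10,  8.0, 0.80],
    [450, 170, 25.0, 0.35],
    [380,  95, 12.0, 0.60],
    [500,   5,  5.0, 0.70],
    [400, 150, 30.0, 0.30],
])
# Instructor-graded "best" action for each logged situation.
y = ["engage", "disengage", "flank", "engage", "return_to_base"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Query the model with a new tactical picture from the simulation.
print(model.predict([[430, 20, 9.0, 0.75]]))  # e.g. "engage"
```

The point is not the particular model, but that decisions learned from accumulated training data can then be queried instantly during a live simulation.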

Use AI in the right way and you can more quickly train your defence forces to be more effective, perform simple and complex tasks better, and make more informed and better decisions, faster. An AI ‘buddy’ could also provide faster decision support in Combat Air applications as we move into an age of hypersonic threats.

But allowing an intelligent algorithm to shape and, in some cases, take complete control over a human’s decision-making capabilities raises major questions. For example, how has a smart algorithm used data to make a decision or draw a conclusion? Will it do what it’s supposed to do, and can we guarantee it won’t do certain things (SkyNet again)? This AI verification and validation is a very hot topic right now, as AI techniques are historically seen as quite ‘black box’.
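To give one flavour of what opening that black box can mean in practice, here is a minimal, hypothetical sketch – not a Thales tool, and using entirely synthetic data – of permutation importance: a basic check that asks how much a trained model’s performance drops when each input is shuffled, revealing which data the algorithm is actually relying on to reach its conclusions:

```python
# Hypothetical sketch of one basic transparency check: permutation importance.
# Shuffle each input feature in turn and measure how much the model's score
# drops; the features whose shuffling hurts most are the ones the model relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # four synthetic input features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # label truly depends on features 0 and 2 only

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")  # features 0 and 2 should dominate
```

Checks like this don’t make a model fully explainable on their own, but they are a first step towards evidencing what an AI is basing its decisions on.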

Essentially, it all boils down to trust, which is why every AI needs good governance – a set of rules or principles that underpin an approach to using it (a bit like Asimov’s Three Laws of Robotics). With this in mind, Thales has created TrUE AI.

Tried and TrUE

It’s a nifty acronym that describes our approach to AI. It stands for ‘Transparent’, ‘Understandable’ and ‘Ethical’. So to break that down a bit more: transparent in that you can look closely at what the AI is doing and how it’s using data to arrive at a conclusion; understandable in that it’s possible to explain and justify the results; and ethical in that the AI won’t go haywire, run amok and start a war, but will follow objective standards, protocols, laws and human rights.

TrUE AI is our way of applying our knowledge to artificial intelligence to see how we can shed light on that information – in other words, how we can make it less black box and more open door. And these three elements complete a circle because, by being transparent, the AI is understandable, and therefore we can know it’s acting ethically, which maintains its transparency, and so on. If you’re doing it right, you keep the circle; if you’re doing it wrong, it can spiral out of control.

Tread carefully

Thankfully, I don’t have any specific examples of AI going awry. But according to my colleague and software engineer, Chris Cunningham, if you’re not careful with your AI’s reinforcement learning, it will simply learn whatever behaviour best maximises the reward you’ve given it – which is not always the behaviour you actually intended. If you’re just hacking different AIs together without that understanding or ethical approach, you could end up in deep water. Why? Because if you’re not testing or verifying them, they’ll do what they think is best.
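As a toy illustration of Chris’s point – a deliberately simplified sketch with an invented ‘world’, rewards and parameters, and nothing to do with any real Thales system – a basic Q-learning agent will happily exploit a badly specified reward rather than complete the mission you had in mind:

```python
# Toy sketch: a Q-learning agent optimises exactly the reward you give it,
# not the behaviour you intended. World, rewards and hyper-parameters are invented.
import numpy as np

N_STATES, GOAL = 5, 4          # a corridor of 5 states; reaching state 4 is the "mission"
ACTIONS = [-1, +1]             # move left or right
GAMMA, ALPHA, EPS = 0.95, 0.5, 0.2

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    reward += 0.2 if nxt == 2 else 0.0     # an unintended bonus for "loitering" near state 2
    return nxt, reward, nxt == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

for _ in range(2000):                      # training episodes
    s, steps = 0, 0
    while steps < 50:
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
        nxt, r, done = step(s, ACTIONS[a])
        Q[s, a] += ALPHA * (r + GAMMA * Q[nxt].max() * (not done) - Q[s, a])
        s, steps = nxt, steps + 1
        if done:
            break

# The learned greedy policy bounces around state 2 instead of completing the
# mission, because the accidental loitering bonus outweighs the one-off goal reward.
print([ACTIONS[int(Q[s].argmax())] for s in range(N_STATES)])
```

The agent isn’t malicious; it is doing exactly what it was rewarded to do. That is precisely why the learning process, and not just the end result, has to be designed, tested and verified.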

So if you’ve not properly designed the internal structure or managed your AI’s learning process, don’t be surprised if, amongst the occasional brilliance of its decision-making, it has a hidden darker side that neither you nor anyone else can predict or control.

My thanks to Chris Cunningham for helping me pull this article together.
