
“Edge computing can be useful if it’s part of a comprehensive, carefully managed cloud strategy”

While edge computing might sound like the latest buzzword, it offers new opportunities for managing and creating value from machine-generated data. For Thales, which has long made designing ever-smarter sensors a priority, edge computing could pave the way to promising new applications. But as Laurent Laudinet, Internet-based Technology Segment Manager at Thales Research & Technology, explains, this new cloud computing paradigm poses additional cybersecurity challenges that demand particularly close attention.

Edge computing is a distributed model in which data is processed locally, closer to where it’s generated, rather than centrally. Some analysts claim that this optimised approach will be a cornerstone of tomorrow’s enterprise computing environment, on account of the possibilities it seems to offer. What’s your view?

To understand why edge computing is important, you have to look at it as part of a wider trend in enterprise transformation towards the convergence of information technology (IT) and operational technology (OT) (see below). 

IT/OT: this underlying trend, which accelerated in the mid-2000s, involves a narrowing of the gap between two previously separate spheres: OT, the hardware and software used to monitor and control industrial systems, and IT, which plays a similar role for enterprise information systems (EIS).

The world of OT has to do with physical machines, sensors and industrial processes. In the past, these systems tended to exist in a separate silo from IT, which focuses on data management, processing and analytics. This two-track approach was attributable in part to specialisation, but also to industrial-process safety and security concerns. 

Edge computing promises to narrow this gap. In the past, some computing and data-processing operations could only be performed on-premises for latency or security reasons. Today, those same operations can be executed on machines at the edge of the network.

Edge computing promises to do for OT what cloud computing did for IT, including the option of bringing regular application updates closer to the user. What’s more, many enterprises have already migrated their information systems to the cloud, so edge computing has the added benefit of enabling connectivity between different industrial information systems, linking data from previously isolated sources. From a business and financial perspective, this new capability holds immense promise.

The game-changing potential of edge computing is now widely recognised. But at Thales, we believe that it can only be useful if it is part of a comprehensive, carefully managed cloud strategy, in its broadest sense. That’s because edge computing is merely a first step. In the longer term, it will give way to other models: “fog computing”, where compute resources are distributed as logically and efficiently as possible across network nodes and, even further down the line, “liquid computing”, where these resources are all allocated dynamically.


Will edge computing change how organisations process their data? 

Edge computing represents a paradigm shift, so it will clearly influence the way embedded applications are designed. 

Going forward, the context in which an application runs will be defined by the platform, and the burden of responsibility for efficient service delivery will rest with the platform itself. In that sense, edge computing will fundamentally alter how applications are built, requiring close collaboration between software and hardware design teams. And its effects will be felt at all levels, right through to front-line operations.

In the world of IT, the emergence of cloud computing led to a process of mass standardisation. The same is happening now in OT with edge computing. This transformation is neatly captured by the image of “pets” versus “cattle” – a common metaphor used by cloud engineers to describe the advent of the data centre era.

In the early days of IT, servers were individual machines with their own names. Each time an upgrade was needed, the work had to be done server by server – and by hand. But when the tech giants and other cloud providers brought their solutions to market, this makeshift approach became obsolete. Instead, the emphasis shifted to efficiently managing an ever-growing network of servers, using techniques such as auto-configuration and bulk maintenance.
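To make the “cattle” idea concrete, here is a minimal Python sketch of fleet-wide provisioning and bulk maintenance. The node names and configuration keys are hypothetical, and a real fleet would use dedicated configuration-management tooling rather than a bare loop.

```python
# "Pets vs cattle" in miniature: rather than configuring servers one by
# one, a single declarative template is stamped out across the fleet.
# All names and settings below are hypothetical.
BASE_CONFIG = {"ntp_server": "time.internal", "log_level": "info",
               "patch_channel": "stable"}

def provision(node_id: str) -> dict:
    """Every node gets the same template; only its identity differs."""
    return {"node_id": node_id, **BASE_CONFIG}

# Bulk maintenance: upgrading 500 nodes is one loop, not 500 manual jobs.
fleet = [provision(f"edge-{i:03d}") for i in range(500)]
for node in fleet:
    node["patch_channel"] = "2024.06"  # roll the whole fleet at once
print(len(fleet), "nodes provisioned and patched identically")
```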

Edge computing represents a similar kind of step change, only this time in the world of OT. The transition from isolated machines – or small clusters of machines – to entire suites of devices will transform the problem from one of performance to one of logistics and supply chain management. In short, we’ll need to rip up the rule book and start again. 

To take another analogy, consider the difference between a single unmanned combat air vehicle and a swarm of drones. In the first case, operational performance depends on painstakingly configuring a highly specialised system. But in the second case, mission success rests less on how each part of the whole is configured and more on how they collaborate within the swarm. 

What are the implications of this new paradigm in terms of safety and security, which is a key consideration for the critical systems developed by Thales? 

Edge computing bears many similarities to smart sensors – a flagship Thales success story – in the sense that it involves moving computing resources and functions closer to sources of data generation.

That said, safety and security are of course the number one priority when you’re developing critical systems like ours, especially when lives are at stake. Since all our technologies are fully qualified and certified, upgrading them will inevitably be a time-consuming process. 


Before we can embed edge computing functions into our existing systems, we’ll need to run extensive tests to make sure we’ve addressed the risks associated with mass standardisation, such as dynamic software updates. It will also take time to have these upgraded systems certified. And we’ll need to maintain an appropriate level of segregation between environments, so that we can modify and upgrade components without compromising the system as a whole.

On a more general note, the edge computing model raises its own set of unique risks and challenges. We’ll need to keep these in mind if we are to maintain high standards of operational performance, safety and security.

What are these risks and challenges, exactly? 

First and foremost, having geographically distributed data-processing machines – as is the case in an edge architecture – poses cybersecurity risks. Take, for example, a suite of devices installed along a train track: each device is embedded in a piece of physical infrastructure that isn’t necessarily secure at all times. So there’s a risk that one or more of the constituent compute nodes could be tampered with or fail. The answer, in this case, is to apply the “zero trust” principle: a cybersecurity approach that assumes no individual node can automatically be trusted.

At Thales, we see this principle as critical to maintaining functional chain resilience when a node goes offline. It’s equally critical in withstanding supply chain attacks, which are a common threat in cloud architectures. This need for resilience means designing execution environments that can support third-party applications without the behaviour of these applications compromising the security of the system as a whole. It’s important to bear these considerations in mind when, like Thales, you’re pursuing a cohesive open-innovation strategy.
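To illustrate the zero-trust principle in miniature, the sketch below has a node verify the authenticity of every message it receives, even from a known peer, before acting on it. The node identifiers and shared HMAC keys are hypothetical stand-ins; a production system would anchor identities in a PKI or a hardware root of trust.

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-node keys, provisioned out of band. In practice these
# would come from a hardware root of trust or a PKI, not a dict in memory.
NODE_KEYS = {"node-a": b"key-a-secret", "node-b": b"key-b-secret"}

def sign_message(node_id: str, payload: dict) -> dict:
    """Attach the sender's identity and an HMAC tag over the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(NODE_KEYS[node_id], body, hashlib.sha256).hexdigest()
    return {"sender": node_id, "payload": payload, "tag": tag}

def verify_message(message: dict) -> bool:
    """Zero trust: check every message, even from a 'known' peer."""
    key = NODE_KEYS.get(message["sender"])
    if key is None:
        return False  # unknown senders are rejected outright
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_message("node-a", {"sensor": "track-7", "temp_c": 41.2,
                              "ts": time.time()})
assert verify_message(msg)        # an authentic message is accepted
msg["payload"]["temp_c"] = 99.9   # a tampered payload...
assert not verify_message(msg)    # ...fails verification
```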

Assuming these risks and challenges can be overcome, what benefits could edge computing bring to the sectors and industries that Thales serves? 

To my mind, the clearest benefit is that it shores up the functional chains I mentioned earlier. These chains are the very backbone of an edge computing model, so it’s important that they’re properly managed and fully secure. The deployment of countless standardised compute nodes makes these chains more robust and autonomous than ever before. For example, if one machine goes offline, the computation can resume elsewhere, on another node. It’s easy to see what a difference this could make for any type of technology where a break in the information-processing chain could cause the entire system to grind to a halt.

One example would be self-protection for an armoured vehicle, where the option for one compute node to take over when another one fails can prove critical at decisive moments. Another example is an unmanned surface vehicle operating in rough seas that impede the transmission of radio waves, where edge-enabled compute capabilities would enhance resilience and autonomy. 
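The failover behaviour described above can be sketched in a few lines of Python. This is a toy model under assumed names (ComputeNode, run_with_failover), not a description of any Thales system: it simply shows a task migrating to the next available node when one drops offline.

```python
class ComputeNode:
    """A hypothetical edge node that may drop offline at any time."""
    def __init__(self, name: str):
        self.name = name
        self.online = True

    def process(self, task: str) -> str:
        if not self.online:
            raise ConnectionError(f"{self.name} is offline")
        return f"{task} processed on {self.name}"

def run_with_failover(task: str, nodes: list) -> str:
    """Try each node in turn; the chain survives individual failures."""
    for node in nodes:
        try:
            return node.process(task)
        except ConnectionError:
            continue  # node is down: hand the task to the next one
    raise RuntimeError("functional chain broken: no node available")

nodes = [ComputeNode(f"edge-{i}") for i in range(3)]
nodes[0].online = False  # simulate one node dropping off the network
print(run_with_failover("frame-42", nodes))  # -> frame-42 processed on edge-1
```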

The advent of edge computing could also be a watershed moment for predictive maintenance, a central plank of the Industry 4.0 model. For instance, we can easily imagine controllers adjusting the manufacturing process dynamically to account for the condition of machine tools, or a battery of sensors monitoring production-line anodes and cathodes and determining the optimal time to replace them. In all of these processes, edge computing raises the bar for efficiency.
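As a toy illustration of the predictive-maintenance idea, the sketch below fits a linear trend to hypothetical wear readings from a machine tool and extrapolates when the wear will cross an assumed replacement threshold. The figures, the units and the 0.60 mm limit are invented for the example.

```python
# Fit a linear trend to tool-wear readings and estimate when the wear
# will reach the replacement threshold. Requires Python 3.10+.
from statistics import linear_regression

hours   = [0, 50, 100, 150, 200]           # operating hours
wear_mm = [0.02, 0.11, 0.19, 0.31, 0.40]   # measured tool wear (mm)

slope, intercept = linear_regression(hours, wear_mm)

REPLACE_AT_MM = 0.60  # assumed wear limit for this tool
hours_to_limit = (REPLACE_AT_MM - intercept) / slope
print(f"Schedule replacement at roughly {hours_to_limit:.0f} operating hours")
```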

One of the most promising applications is so-called “space edge computing”, which involves the development of new onboard data-processing capabilities for satellite systems. What are Thales’s ambitions in this field?

Edge computing is of particular relevance to the space sector, where the ability to deploy compute resources in orbit, and to allocate functions to the right components on the fly, can substantially boost satellite efficiency.

In the future, a satellite will be able to call on significantly greater processing power to run a surveillance application as it flies over the target zone. Meanwhile, edge resource pooling will further optimise satellites’ on-orbit compute capabilities and enhance mission performance, putting an end to the “one block, one function” paradigm.

We have several space edge computing projects in the pipeline right now. Two of these projects, led by Thales Alenia Space, a joint venture between Thales and Italian company Leonardo, are at an advanced stage. The first aims to deploy a suite of technologies – a computing system, a software framework and high-performance Earth observation sensors – on board the International Space Station, with a view to unlocking new on-orbit data-processing applications.

The second project involves DeeperVision, an Earth observation data analytics application developed in conjunction with Microsoft, which can mass-process satellite imagery in the cloud. Both of these use cases are incredibly promising. But we’re only just starting to explore what edge computing can do!