Why the Future of AI Depends on Security and Education
AI is prominent on almost every business agenda right now, and nearly every industry is working to identify ways to harness its potential. The benefits can be significant – but they can only be realised with a clear sense of how AI will be used and the outcomes it should drive, together with adequate security for the AI itself.
When electricity first entered homes in the late 1800s, it reshaped society in ways no one could have fully anticipated. Today, Artificial Intelligence (AI) stands at a similar threshold. No longer an emerging technology, AI is becoming as transformative to our daily lives as electricity once was, powering everything from medical diagnoses to mission-critical defence systems.
Yet, while public discourse often centres on ethics, misinformation, and the future of work, one vital issue remains underexplored: the security of AI itself.
Earlier this month, at the World Knowledge Forum in Seoul, Thales had the opportunity to contribute to this critical conversation—highlighting not only the challenges but also the opportunities that lie ahead.
The Readiness Gap
The benefits of AI are undeniable. However, the pace of adoption has outstripped the global readiness to secure it. Consider the findings from Gartner’s 2024 Guardians of Trust Survey:
- 83% of banking cybersecurity executives admit they cannot keep up with AI-powered cybercriminals.
- 74% of leaders are aware of sensitive data being fed into public AI models.
- Yet only 20% of organisations feel very prepared to defend against AI-driven attacks.
This is the readiness gap we must urgently close. The question is no longer whether AI will shape the future – it already is. The real question is: can it be secured enough to trust?
Navigating a Fragmented Regulatory Landscape
Governments are moving quickly, but differently. Europe’s AI Act is value-driven and strict, with penalties of up to 7% of global turnover. The U.S. is pursuing a fragmented, sectoral approach. The UK has opted for pragmatism, though with less clarity.
This patchwork reflects geopolitical realities and nuances, but it also creates complexity for global enterprises. One constant remains: security and sovereignty will be non-negotiable.
Unlocking Opportunities Through Cybersecurity
Without robust protections, AI is a gamble. With them, it becomes a catalyst for innovation. For example:
- Enterprise AI Assistants can boost productivity by reading emails, analysing documents, and generating meeting minutes—provided they are built with strict data classification and encryption controls.
- Identity Protection is critical as AI-generated morphing attacks on passports become a reality. Detection algorithms must outpace attackers.
- Agentic AI (AI that acts autonomously) raises profound challenges around authentication, trust, and control.
Education: The Foundation of Trusted AI
Technology alone cannot secure the future of AI. Human error remains the Achilles’ heel of any cybersecurity system. That’s why education is indispensable.
Organisations must empower employees to use AI responsibly: recognising sensitive data, applying classification rules, and critically evaluating outputs. Continuous training is essential, as AI evolves too rapidly for static compliance checklists.
At a societal level, citizens must be equipped to distinguish between AI-generated content and genuine information. Without this literacy, the resilience of the digital ecosystem is at risk.
Governments and institutions also need to invest in cross-disciplinary expertise. Policies and regulations are only as strong as the understanding behind them.
Our Commitment to Trusted AI
At Thales, we call this vision Trusted AI: intelligence that is powerful yet explainable, innovative yet ethical, and above all, secure by design. Achieving this depends as much on people as it does on algorithms.
Everything Is In Everything
AI and cybersecurity are not separate disciplines; they are deeply interwoven. As the ancient Greek philosopher Anaxagoras said, “everything is in everything.” That is why the future of AI is not simply a technological challenge: it is a human one.
The readiness gap can be closed. The regulatory complexity can be navigated. The technical opportunities are immense. But without education, all progress remains fragile.
The real question is not whether AI will shape the future. It will. The question is: will we prepare people, as well as systems, to ensure AI remains a tool for resilience rather than a vector of risk?
We welcome continued dialogue and collaboration on this journey toward a secure, trusted AI future.