
Threat Analysis and Exposure Surfaces According to ANSSI
1. Scope and Context of the Analysis
In its report CERTFR-2026-CTI-001 published on February 4, 2026, ANSSI provides a structured threat assessment focused on the role of generative artificial intelligence in cyber attacks. The document specifically addresses generative AI systems, defined as systems capable of producing text, images, audio, video, or source code based on models trained on large datasets. This category notably includes Large Language Models (LLMs).
The analysis follows a cyber threat intelligence approach, examining both the malicious use of generative AI by threat actors and the threats directly targeting AI systems themselves. ANSSI states that, to date, no documented case demonstrates a cyber attack fully and autonomously conducted by a generative AI system.
2. Generative AI as an Enabler of Cyber Attacks
2.1 Progressive Integration into the Attack Chain
Generative AI services are increasingly incorporated into attackers’ toolsets as facilitators across multiple phases of the attack chain. Their adoption is driven by flexibility, accessibility, and the ability to automate tasks traditionally requiring human effort.
ANSSI observes recurring use of these technologies for reconnaissance, social engineering content generation, and attack scaling, particularly in poorly secured environments.
2.2 Social Engineering and Targeted Reconnaissance
Documented cases highlight extensive use of generative AI to produce social engineering content. Several attack campaigns attributed to state-linked or state-sponsored actors leveraged AI services to generate multilingual phishing content, fake professional profiles, and legitimate-looking websites.
ANSSI also reports observing websites likely generated by AI, used either to host malicious payloads or to perform technical profiling of visitors. In parallel, low-cost deepfake audio and video services are exploited for identity impersonation purposes.
2.3 Malware Development and Adaptation
The use of generative AI in malware development remains constrained by the need for advanced human expertise, but several cases demonstrate its role as an assistance tool. AI-generated or AI-adapted scripts have been observed in real-world attack campaigns.
The report notes the emergence of malware that embeds prompts and queries generative AI services at runtime, as well as polymorphic malware that regularly rewrites its own source code through generative AI APIs in order to evade detection mechanisms.
These developments suggest a gradual increase in sophistication, without yet demonstrating reliable large-scale Zero-Day vulnerability discovery or exploitation capabilities.
2.4 Post-Exfiltration Data Analysis
Generative AI is also leveraged to analyze large volumes of exfiltrated data in order to rapidly identify information of interest. This capability allows attackers to optimize post-compromise activities and accelerate monetization or extortion phases.
3. Differentiated Use Across Threat Actor Profiles
ANSSI highlights a strong correlation between threat actor maturity and how generative AI is employed.
Highly skilled actors use generative AI as a force multiplier, comparable to the historical adoption of generic offensive tools such as Cobalt Strike or Metasploit. AI enables them to accelerate established workflows, mass-produce content, and operate at greater scale.
Less experienced actors primarily use generative AI as a learning and technical assistance tool, without fundamentally altering their operational capabilities. Overall, ANSSI assesses that generative AI currently contributes more to temporal acceleration than to a structural transformation of cyber attacks.
4. Abuse and Circumvention of Generative AI Models
Commercial generative AI models integrate technical safeguards designed to prevent illicit usage. However, ANSSI notes the continuous evolution of prompt-engineering bypass techniques aimed at manipulating these moderation mechanisms.
Jailbreak-as-a-service offerings and intentionally unrestricted models are advertised on cybercriminal forums, sometimes trained on datasets explicitly tailored for malicious use, including malware development and phishing scenarios.
5. Generative AI Systems as Targets
5.1 Model Poisoning and Disinformation
The report identifies model poisoning as a credible threat. This attack involves manipulating training data to introduce bias, malicious behaviors, or disinformation capabilities into AI models.
Research cited by ANSSI indicates that a relatively small number of poisoned samples can compromise a model, regardless of the overall size of the training dataset. Additionally, the growing volume of AI-generated content available online may indirectly contaminate future training corpora.
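This observation has a direct consequence for defenses: if a near-constant absolute number of poisoned samples suffices, the poisoned fraction of the corpus shrinks as the corpus grows, so any filter calibrated on a fixed contamination rate becomes blind at scale. The following sketch makes the arithmetic explicit; the sample counts and corpus sizes are hypothetical illustrations, not figures from the ANSSI report.

```python
# Illustrative arithmetic only: all numbers below are hypothetical and are
# not taken from the ANSSI report or the research it cites.
POISONED_SAMPLES = 250  # assumed fixed attacker budget of poisoned documents

def poison_fraction(corpus_size: int, poisoned: int = POISONED_SAMPLES) -> float:
    """Fraction of the training corpus that is attacker-controlled."""
    return poisoned / corpus_size

# The same absolute budget becomes a vanishingly small fraction of the corpus,
# defeating detection thresholds expressed as a contamination *rate*.
for corpus in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{corpus:>14,} docs -> {poison_fraction(corpus):.8%} poisoned")
```

The takeaway is that corpus-integrity controls need to reason about absolute counts and data provenance, not only about contamination percentages.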
5.2 Supply Chain Compromise and Data Exfiltration
Open-source generative AI models introduce new software supply chain attack surfaces. Compromised models may execute arbitrary code upon integration into development environments.
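The execution risk stems largely from pickle-based model serialization formats, in which the file itself can instruct the loader to invoke arbitrary callables. The following benign, self-contained sketch (an assumed scenario, not an example from the report) shows the mechanism with a harmless `print` standing in for an attacker's payload; safer practice is to prefer non-executable formats such as safetensors, or loaders that restrict deserialization to plain weights.

```python
# Benign demonstration of why pickle-based "model files" can execute code on
# load. NotReallyAModel is a hypothetical stand-in for a distributed weights
# file; print() replaces what would be an attacker's payload.
import io
import pickle
from contextlib import redirect_stdout

class NotReallyAModel:
    # __reduce__ tells pickle how to rebuild the object: a (callable, args)
    # pair that pickle.loads() will invoke. An attacker controls both.
    def __reduce__(self):
        return (print, ("arbitrary code executed during model load",))

blob = pickle.dumps(NotReallyAModel())  # the "model file" an attacker ships

buf = io.StringIO()
with redirect_stdout(buf):
    pickle.loads(blob)  # merely loading the file runs the embedded callable

assert "arbitrary code executed" in buf.getvalue()
```

Nothing in this flow requires the victim to call the model: deserialization alone triggers execution, which is why model files from untrusted sources should be treated like untrusted executables.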
ANSSI also highlights risks associated with agent integration mechanisms such as the Model Context Protocol (MCP), which expand the attack surface when agents are connected to insufficiently secured external tools or data sources. The practice known as slopsquatting further illustrates how LLM hallucinations can be exploited to inject malicious components into dependency chains: attackers register package names that models plausibly invent, waiting for developers to install them.
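A simple mitigation against slopsquatting is to vet AI-suggested dependency lists against the package index before installing anything. The sketch below is a hypothetical guard of my own construction, not a control prescribed by the report: `vet_dependencies` and the offline `FAKE_INDEX` stand-in are illustrative names, and a real `index_lookup` might, for instance, query a package index's metadata API.

```python
# Hedged sketch of a pre-install guard against slopsquatting. All names here
# (vet_dependencies, FAKE_INDEX) are illustrative; a real index_lookup would
# query the actual package index instead of a local set.
from typing import Callable, Iterable

def vet_dependencies(names: Iterable[str],
                     index_lookup: Callable[[str], bool]) -> dict:
    """Split a dependency list into names known to the index and unknown
    names, the latter being candidates for LLM hallucination or squatting."""
    report = {"known": [], "unknown": []}
    for name in names:
        report["known" if index_lookup(name) else "unknown"].append(name)
    return report

# Offline stand-in for a real index query ("reqeusts-pro" is a made-up,
# hallucination-style name used purely for the demonstration):
FAKE_INDEX = {"requests", "numpy"}
result = vet_dependencies(["requests", "numpy", "reqeusts-pro"],
                          index_lookup=FAKE_INDEX.__contains__)
print(result)  # unknown names should be reviewed before any install step
```

Flagged unknown names warrant manual review; even names that do exist in the index may be freshly registered squats, so registration age and download history are useful secondary signals.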
6. Organizational Exposure and Professional Use Cases
Uncontrolled use of generative AI accounts in professional contexts is an additional exposure factor. Documented cases include account compromises via infostealers, as well as data leaks resulting from improper employee usage.
When AI systems are integrated into operational or critical information systems, their compromise can directly impact the confidentiality, integrity, and availability of connected assets.
7. Analytical Conclusion
The CERTFR-2026-CTI-001 report outlines an evolving yet currently bounded threat landscape. Generative AI primarily acts as an accelerator of existing attacker capabilities rather than a near-term disruptive technology. At the same time, it introduces new attack surfaces that increase cyber risk management complexity.
For CERT, CSIRT, SOC, and governance functions, this analysis underscores the necessity of including generative AI systems within monitoring, risk assessment, and incident response scopes, alongside traditional software components and digital services.
Source: https://www.cert.ssi.gouv.fr/cti/CERTFR-2026-CTI-001/
Enjoy!



