Militant Groups Embrace AI Tools

National security officials say Islamic State affiliates and other extremist groups are experimenting with generative AI for propaganda, cyberattacks and translation — a threat that could grow as powerful tools become cheap and widespread. Experts and lawmakers are pushing for faster information-sharing, better detection and international norms to blunt the downstream risks.

How extremist networks are making AI part of their toolkit

On Dec. 15, 2025, reporting from the Associated Press and PBS highlighted a new front in the proliferation of artificial intelligence: loose-knit militant and extremist networks that are testing generative models to amplify recruitment, produce deepfake imagery and audio, automate translation and sharpen cyber operations. The messages are blunt and practical — a pro-Islamic State user recently told supporters in English that "one of the best things about AI is how easy it is to use" and urged them to make the technology part of their operations. That online exhortation captures the basic logic driving concern among security officials: cheap, powerful AI lowers the bar for malicious impact.

How militants are using AI today

Researchers and monitoring firms report several recurring uses. Groups have repurposed public generative models to create realistic photos and videos that can be shared across social platforms; they have produced deepfake audio of leaders and used AI to translate messages into multiple languages within hours; and they have started to run targeted disinformation campaigns shaped to feed social algorithms. SITE Intelligence Group documented examples including manipulated images circulated around the Israel-Hamas war and AI-crafted propaganda following a deadly concert attack in Russia. In other cases, attackers have used synthetic audio to impersonate officials in fraud and phishing operations.

These tactics do not require cutting-edge technology: much of the work relies on off‑the‑shelf tools and human creativity. But combined with social‑media amplification, they can shift narratives, spread fear and recruit sympathizers far beyond the reach of a small organization. As one former NSA vulnerability researcher turned industry executive put it: "With AI, even a small group that doesn't have a lot of money is still able to make an impact."

Why the threat is growing

Three technical trends make the problem asymmetric and accelerating. First, generative models — text, image and voice — have become widely available and easy to operate without specialized training. Second, models can be chained together: a language model can draft propaganda that an image model then illustrates, while translation tools localize content for new audiences. Third, commodified compute and cloud services let actors automate repetitive tasks, from scraping target lists for phishing to synthesizing thousands of personalized messages.

That combination matters because it converts scale into influence. Social platforms designed to reward engagement will happily amplify vivid, shareable content; a convincing deepfake or an incendiary translated post can travel quickly, ratcheting up polarization or recruiting in places that would have been inaccessible in past decades.

From online propaganda to battlefield tools

Security analysts also worry about operational applications beyond propaganda. The Department of Homeland Security’s most recent threat assessment explicitly flagged the risk that AI could help nonstate actors and lone attackers compensate for technical shortfalls — including assistance with cyberattacks and, more alarmingly, with the engineering of biological or chemical threats. While those scenarios are harder and more resource‑intensive, DHS and other agencies say they cannot be ruled out as models and datasets grow in capability and as laboratories and tools become easier to access.

Meanwhile, conventional military uses of AI — such as automated analysis of satellite imagery, drone targeting aids and logistics optimization — provide models and affordances that militant groups can observe and imitate at lower fidelity. The war in Ukraine has been a proving ground for many of these techniques: militaries use AI to sift large volumes of imagery and video to find targets and manage supply chains, and that same pattern of rapid innovation can inspire or leak into irregular forces and proxy actors.

Concrete risks and recent examples

  • Recruitment at scale: AI tools help produce multilingual, emotionally tailored propaganda that recruiters can push to sympathetic audiences.
  • Deepfakes and deception: fabricated images and audio have already been used to inflame conflicts, erode trust and impersonate leaders for extortion or to trigger real‑world responses.
  • Cyber operations: attackers use AI to draft sophisticated phishing messages, write exploit code, and automate tasks within an intrusion campaign.

Analysts point to recent episodes where synthetic images circulated after high‑profile attacks, and to documented training sessions that some groups have run for supporters on how to use AI for content production. Testimony before Congress has described workshops held by both Islamic State and al‑Qaida affiliates to teach supporters to use generative tools.

What officials and experts recommend

The policy response is unfolding on multiple tracks. In Washington, lawmakers have urged faster information‑sharing between commercial AI developers and government agencies so companies can flag misuse and collaborate on detection. Sen. Mark Warner, the ranking Democrat on the Senate Intelligence Committee, said the public debut of user‑friendly models made clear that generative AI would attract a wide range of malign actors. The U.S. House has passed legislation requiring homeland security officials to assess AI risks from extremist groups annually. Members of both parties have told agencies to accelerate collaboration with industry on red‑teaming and abuse‑reporting pathways.

Technical measures are also being pursued. Companies and researchers are working on provenance and watermarking systems for generated media, classifiers that detect synthetic content, and platform enforcement approaches that rate‑limit or block suspicious automated accounts. At the same time, civil‑liberties advocates warn about overbroad surveillance and the risk of censoring legitimate speech if detection systems are poorly designed.
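One of those enforcement approaches, rate‑limiting suspicious automated accounts, can be made concrete with a short sketch. The token‑bucket limiter below is a generic illustration rather than any platform's actual system, and the capacity and refill values are placeholder assumptions.

```python
# Minimal token-bucket rate limiter for per-account posting activity.
# Capacity and refill rate are illustrative placeholders, not real policy.
import time
from dataclasses import dataclass, field


@dataclass
class TokenBucket:
    capacity: float = 10.0    # burst allowance: max posts before throttling (assumed)
    refill_rate: float = 0.2  # tokens regained per second (assumed)
    tokens: float = 10.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the action."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


buckets: dict[str, TokenBucket] = {}


def allow_post(account_id: str) -> bool:
    """Return True if this account may post now, False if it should be throttled."""
    return buckets.setdefault(account_id, TokenBucket()).allow()


# A burst of rapid posts from one account exhausts its budget quickly.
print([allow_post("acct_123") for _ in range(12)])
```

In practice, throttling decisions of this kind would be combined with behavioral and provenance signals rather than raw post counts alone, which is part of why poorly designed systems raise the civil‑liberties concerns noted above.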

Limits and the hard choices

Mitigating the threat requires hard tradeoffs. Curtailing the spread of generative tools could slow beneficial uses — in medicine, climate modeling and logistics — while allowing free access increases the misuse surface. Internationally, some countries and dozens of U.S. states have passed or proposed laws to limit certain kinds of deepfakes; the federal government has also taken steps, for example outlawing AI‑generated robocalls that impersonate public officials. But binding global agreements on autonomous weapons and the nonstate use of AI remain politically difficult.

Experts who study military AI caution that there is no silver bullet. Paul Scharre of the Center for a New American Security notes that wars accelerate innovation; the longer intense conflicts continue, the faster dangerous techniques spread. Cybersecurity practitioners emphasize that small, affordable improvements in detection and platform design — combined with better user literacy and resilient institutions — can blunt many attacks. Yet as one cybersecurity CEO told reporters, "For any adversary, AI really makes it much easier to do things."

What to watch next

Expect to see three measurable trends over the coming year: more frequent public examples of AI‑enabled propaganda and fraud; an uptick in lawmakers pressing for developer transparency and mandatory abuse reporting; and an expanding market of detection tools aimed at platforms and governments. Agencies will also increasingly flag the intersection of AI with biological risk, keeping that topic under review as modeling and synthesis tools evolve.

For practitioners and the public, the immediate priorities are practical: bolster monitoring of extremist channels, build robust mechanisms for developers to report abuse without violating user privacy, and invest in fast, explainable detection tools that platforms can deploy at scale. Without those steps, inexpensive generative AI will continue to act as a force multiplier for actors who have already found ways to weaponize information and technology.
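As one illustration of what "fast, explainable" can mean, the sketch below scores accounts with a plain linear model whose coefficients double as a per‑decision explanation. The feature names, data and labels are synthetic placeholders, not a validated detector.

```python
# Sketch of an "explainable" detector: a linear model over simple account
# features whose coefficients show which signals drove each score.
# Feature names, data and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["posts_per_hour", "share_of_synthetic_media", "account_age_days"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                             # placeholder feature matrix
y = (X[:, 0] + X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)   # placeholder labels

model = LogisticRegression().fit(X, y)


def explain(sample: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution (coefficient * value) to the decision's log-odds."""
    contributions = model.coef_[0] * sample
    return sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))


sample = X[0]
print(f"risk score: {model.predict_proba(sample.reshape(1, -1))[0, 1]:.2f}")
for name, contribution in explain(sample):
    print(f"  {name}: {contribution:+.2f}")
```

Linear contributions are a crude form of explanation; real deployments would layer richer models and human review on top, but the principle of exposing why an item was flagged is what makes such tools auditable at scale.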

Sources

  • Department of Homeland Security (Homeland Threat Assessment)
  • SITE Intelligence Group (extremist activity monitoring)
  • Center for a New American Security (analysis on AI and warfare)
  • National Security Agency (vulnerability research and public commentary)
  • U.S. Congressional hearings on extremist threats and AI
Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany