Sunday, February 23, 2025

Terrorism and artificial intelligence, a deadly tandem


Oscar Ruiz
Migration expert and international analyst

Terrorist groups are using artificial intelligence (AI) to increase the scale and effectiveness of their propaganda, recruitment and cyberattack operations, and are also exploring how to use AI-enabled drones to carry out attacks. What tools and strategies are available to make their job more difficult?

Technology in general is advancing and becoming far more accessible to everyone, including organizations such as the Islamic State (ISIS) and other violent actors, who have begun experimenting with AI tools to maximize their reach and minimize the risk of detection. This poses a security challenge for authorities, who can only watch as the technology develops and spreads while the means to control it lag far behind, forcing ever faster adaptation of AI-control strategies to terrorist groups' use of the technology.

How terrorists use AI

Extremist groups' uses of AI are several and evolving, ranging from the automation of propaganda content to chatbots for interactive recruitment and the manipulation of social networks. For example, AI-powered chatbots have been used to provide personalized information to potential recruits, tailoring messages to their beliefs and interests (much as modern militaries do to filter their search for new recruits). This makes the content more relevant and persuasive to its targets, fostering a stronger bond with the extremist group without requiring direct human intervention.

Creating videos with AI-generated avatars that spread propaganda messages, mimicking the aesthetics of conventional media to gain credibility with audiences, has been another way groups like ISIS use this technology. Generative AI has also been used to automatically translate propaganda into multiple languages, allowing terrorists to overcome language barriers and spread their messages worldwide.

The future of AI in the hands of terrorists

But these tools could be just the tip of the iceberg, as the potential of AI in the hands of terrorist organizations extends to more complex and dangerous areas. One concern for security experts is the use of autonomous drones, or ‘killer robots’. Terrorists have already started integrating AI into drones to improve autonomous navigation, target recognition and real-time mission planning. Such drones could be used to carry out large-scale attacks without human intervention, reducing the risk to operators and increasing the lethality of their actions. There is also the possibility of terrorist groups using autonomous vehicles as mobile bombs (much as is currently seen in the war in Ukraine), and while this method does not appear to have been used yet, there is evidence that ISIS/Daesh and other organizations have explored the technology, with all the danger that would entail.

On the cyber side, terrorists could use AI to launch more sophisticated cyberattacks that identify vulnerabilities and adapt their tactics in real time, for example by using LLMs (large language models) to simulate human interactions and fool security systems, making attacks harder to detect before significant damage is done.

How to combat it

Preventing terrorists from using such tools at all is little short of utopian, but measures and strategies can be taken to hinder the use of AI by the ‘bad guys’. A basic first step would be improved content moderation: moderators and technology platforms should update their algorithms to identify AI-generated content, using approaches such as analyzing inconsistencies in speech patterns, unusual shadows in videos, and abnormal facial expressions. In addition, hash techniques for detecting and blocking recycled or manipulated content must be adapted to the capabilities of generative AI; a minimal sketch of the idea follows.
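To illustrate the hash-matching approach in the simplest terms, the sketch below implements a perceptual ‘average hash’: a fingerprint that survives re-encoding and minor edits, unlike a cryptographic hash. It is a minimal illustration, not any platform's actual system; it assumes the Python Pillow library, and the known-hash value and distance threshold are invented placeholders.

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to an 8x8 grayscale thumbnail and threshold each pixel
    # against the mean, yielding a 64-bit perceptual fingerprint.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

# Hypothetical usage: flag an upload whose fingerprint sits close to a
# known propaganda hash in a shared database (values are placeholders).
KNOWN_HASHES = {0x1F2E3D4C5B6A7988}

def is_recycled(path: str, threshold: int = 10) -> bool:
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)

The point of using a perceptual rather than a cryptographic hash is precisely the adaptation called for above: re-encoded, cropped or lightly retouched copies land near the original fingerprint instead of hashing to something unrelated.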

Cooperation among the sectors involved would also be valuable: governments, technology companies and academic institutions must establish more robust collaborative frameworks to share knowledge and coordinate efforts against malicious uses of AI. Initiatives such as the European Union's Code of Practice on Disinformation show how collaboration can be fostered to mitigate the impact of generative AI on the spread of propaganda. Another important instrument would be the development of defensive AI.

AI technologies can also be used to build advanced defense systems, such as automated moderation tools and chatbots designed to intercept and redirect potential recruits before they are radicalized. Finally, public information and awareness should be encouraged: educating the public about the risks of generative AI and digital manipulation is critical to building social resilience against disinformation. Initiatives such as media-literacy campaigns should be a priority, enabling people to identify manipulated or artificially generated content.

The use of AI by terrorist groups is not only an unavoidable problem but an evolving threat that demands new strategies and tools from the security services. And while advances in AI can increase defensive capabilities, there is no doubt that they also increase offensive ones, allowing terrorists to operate with greater sophistication at lower cost and exposure. Policies that adapt to these changes, international cooperation and the deployment of advanced technological systems will be crucial to meeting these challenges and protecting global security in an increasingly digitalized world.

© This article was originally published in Escudo Digital, with whose permission we reproduce it.


