The development of widely available generative models based on artificial intelligence (AI) has given rise to the problem of “AI spam” in global politics. Deliberately fake, low-quality AI content (known as “AI slop”) has quickly found its niche in domestic political conflicts and international crises, creating pseudo-evidence that, despite its low quality, is in high demand. All significant military clashes in 2025 were accompanied by fake visual artifacts that quickly gained online reach. Such distortions have created a crisis of credibility in the digital realm, significantly damaging political dialogue.
The problem of AI spam has been further fuelled by consumer-facing generative models (diffusion and large language models). They enable the production of countless artificial images and videos, creating a stream of visually striking material that lacks substance. People spread AI spam for the sake of irony or to inflame a situation. Regardless of users’ intentions, however, states face a dilemma. On the one hand, the global AI race incentivises governments to encourage citizens to use and develop AI. On the other hand, real-world experience with AI is failing to meet these constructive expectations: user behaviour is becoming a source of inauthentic, distorted imagery online.
By 2025, synthetic materials had gradually taken over the visual environment of the internet, especially social media, becoming the visual accompaniment to any emergency, conflict, or other international event. Unlike well-known deepfakes (fake videos), generative spam requires no model fine-tuning or other extensive preparation. Technically, AI spam is produced by sending text prompts to large generative models (e.g., Midjourney, DALL-E), with the output judged only by its plausibility relative to the prompt. Furthermore, while deepfakes disguise themselves as “reality,” AI spam can openly demonstrate its artificiality while remaining in demand.
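To illustrate how low this barrier actually is, the sketch below requests a single image from a text prompt using the openai Python library (one of several comparable services). It assumes an API key in the environment, and the prompt is a deliberately innocuous placeholder rather than an example of political spam.

```python
# Minimal prompt-to-image request via the openai Python library (v1+).
# Assumes OPENAI_API_KEY is set in the environment; the prompt is an
# innocuous placeholder chosen purely for illustration.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A dramatic news-style photo of a city skyline at dusk",
    n=1,
    size="1024x1024",
)

# The service returns a URL to the generated image: one sentence in,
# one plausible-looking picture out, with no training or preparation.
print(response.data[0].url)
```

The point is not the specific service but the workflow: a single sentence yields a photorealistic image in seconds, which is precisely what separates AI spam from the laborious production of deepfakes.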
A vivid example of the influence of generative materials was the Iran-Israel conflict of 2025. In the first hours after the escalation, realistic images of destruction generated by neural networks began appearing online. Generative images of downed fighter jets and bombers, as well as videos of the aftermath of missile strikes, were widely shared. They garnered millions of views, creating a pseudo-witness effect. Users picked up such content and shared it not so much because they trusted it, but because of its “relevance” to the context of the events.
Similar scenarios have been observed in other conflict regions. The clashes between India and Pakistan in the spring of 2025 and the Cambodian-Thai border conflict revealed the same user behaviour: during each conflict, the visual environment was altered, and a stream of “simulacra” was circulating even before real footage appeared.
However, the reach of AI spam extends well beyond international conflicts. It can attach itself to any event that attracts public attention, occupying a portion of the information space. For example, in the fall of 2024, during Hurricane Helene in the United States, a significant portion of social media posts carried AI-generated imagery, hindering the dissemination of official information during the emergency. A similar trend was observed during the 2024–2025 election campaigns, particularly the US presidential election and the German parliamentary elections, when competing political forces flooded the visual media space with satirical or deliberately artificial images discrediting their opponents.
From a theoretical perspective, the artificial visualisation of international processes was conceptualised by 20th-century philosophers even before the advent of the digital age. But whereas during the Gulf War described by Jean Baudrillard, “hyperreality” was created by privileged media companies with access to the conflict region, today’s simulacra are the product of networked interactions with generative content. A significant factor feeding this networked response is the information gap created by restrictions on photographic and video recording of emergencies and military clashes.
AI spam doesn’t directly distort facts. Rather, it fills information gaps, replacing reporting when factual evidence is scarce or difficult to obtain. As a result, millions of users receive the illusion of presence, a picture credible enough to elicit an emotional response. Thus, global politics is moving toward a state of postmodern communication based on emotional stimuli rather than verifiable facts.
In practice, the algorithmic structure of digital platforms plays a key role in the spread of generative spam: visual and emotional content is prioritised, making spam an effectively built-in element of the digital ecosystem. Novelty and visual appeal automatically boost rankings, creating a vicious cycle of content generation, mass distribution, and algorithmic promotion.
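This feedback loop can be made concrete with a toy model. The sketch below is purely illustrative: no platform publishes its ranking formula, and the weights, field names, and exposure-doubling rule are all assumptions chosen to show how engagement-driven scoring compounds reach for striking content.

```python
# Toy model of the cycle described above: an engagement-driven ranker
# rewards novel, visually striking posts, and each round of promotion
# feeds back into the next round's exposure. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Post:
    name: str
    novelty: float        # 0..1, how "fresh" the content looks
    visual_appeal: float  # 0..1, proxy for striking imagery
    exposure: float = 1.0 # accumulated reach

def rank_score(post: Post) -> float:
    # Visual and emotional signals dominate; authenticity plays no role,
    # which is the asymmetry the article points to.
    return post.exposure * (0.6 * post.visual_appeal + 0.4 * post.novelty)

def promote(posts: list[Post], rounds: int = 3) -> None:
    # Each round, the top-ranked post gains exposure, which raises its
    # score next round: generation -> distribution -> promotion.
    for _ in range(rounds):
        top = max(posts, key=rank_score)
        top.exposure *= 2.0

feed = [
    Post("ai_slop", novelty=0.9, visual_appeal=0.95),
    Post("field_report", novelty=0.3, visual_appeal=0.4),
]
promote(feed)
for post in feed:
    print(f"{post.name}: exposure x{post.exposure:.0f}")
```

Under these assumed weights the synthetic post ends three rounds with eight times its initial exposure while the authentic report never moves, despite both starting from an identical position.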
Moreover, unlike in the previous period of social media development, accounts with initially low popularity scores become “super-spreaders” of generative spam. Visual appeal alone proves sufficient to trigger an avalanche effect.
The distribution channels for information are also changing. Despite the widespread presence of political AI spam on major social networks (e.g., X), the messaging apps Telegram and WhatsApp (owned by Meta Platforms Inc., recognised as an extremist organisation and banned in Russia) are becoming significant distribution channels. A key feature of their ecosystems is the closed, encrypted communication loop between users and groups, which complicates moderation and other intervention in content.
The shift in political visibility provoked by AI spam is shaping three key trends. First, users and scammers will continue to generate content imitating socially significant events in order to drive traffic to their accounts. This trend will persist either until users become “saturated” and fatigued by synthetic content, or until regulatory restrictions are introduced.
Second, the decline in public trust in authentic materials, a trend first generated by “fake news,” will persist, damaging collective consensus. Furthermore, unscrupulous political actors will be able to dismiss real evidence as AI spam without significant reputational risk, creating a “plausible deniability” effect.
Third, AI spam will become a new target for securitisation, on par with deepfakes and other synthetic materials. As a result, most countries will need to regulate and counter its negative consequences, though these efforts may stall at the public-debate stage.
Despite these trends, AI spam currently appears to be a secondary issue for global governance. This reflects the regulatory race for global rules, in which innovation and the advancement of AI are routinely prioritised over protecting society. Consequently, the response of international organisations to AI threats remains fragmented and centres on declarative documents on ethical issues, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence and the Bletchley Declaration on AI safety.
At the national level, the specific risks of synthetic content are being considered more actively because of their connection to information warfare and fraud. China has already passed a law banning the creation of deepfakes without the consent of the people depicted. Individual US states have their own laws prohibiting the use of deepfakes in politics, while similar initiatives at the federal level have stalled. Other countries are also developing their own measures or contextualising AI threats within existing legislation (as, for example, in India).
There are also supranational initiatives that could potentially cover AI spam. In the European Union, for example, legislation is being developed to monitor user chats within messaging apps (Chat Control), and it could be scaled up to address generative spam. This builds on the existing regulatory framework (the AI Act and the Digital Services Act), as well as systemic pressure on digital platforms.
Nevertheless, AI spam currently remains in a grey zone for regulators, as does user-generated content that makes no claim to authenticity. It therefore does not fall directly into the category of disinformation or other AI threats. This contradiction necessitates the search for alternative forms of regulation.
One option is cooperation between users, digital platforms, and technical communities, which have the capacity to develop flexible responses through labelling generative content and informing users. At the same time, political institutions must develop mechanisms to influence synthetic media and online platforms so that AI spam is deprioritised in emergency situations.
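As a sketch of what such deprioritisation could look like in practice, the fragment below demotes provenance-labelled content on emergency topics. Every name here is hypothetical (no platform exposes such an interface), and the demotion multiplier and topic taxonomy are illustrative assumptions, not a proposal calibrated to any real system.

```python
# Hypothetical sketch of label-based deprioritisation in an emergency.
# Assumes posts carry a provenance label (e.g., a C2PA-style
# "AI-generated" credential attached at upload or detection time).
from dataclasses import dataclass

@dataclass
class Post:
    score: float       # base ranking score from the feed algorithm
    ai_labelled: bool  # provenance label indicating generative origin
    topic: str         # coarse topic classification

EMERGENCY_TOPICS = {"armed_conflict", "natural_disaster"}  # assumed taxonomy

def adjusted_score(post: Post, emergency_active: bool) -> float:
    # Demote rather than remove: labelled synthetic posts stay available
    # but stop crowding out verified reporting during an emergency.
    if emergency_active and post.ai_labelled and post.topic in EMERGENCY_TOPICS:
        return post.score * 0.1
    return post.score

print(adjusted_score(Post(5.0, True, "armed_conflict"), emergency_active=True))   # demoted: 0.5
print(adjusted_score(Post(5.0, False, "armed_conflict"), emergency_active=True))  # unchanged: 5.0
```

The design choice worth noting is demotion rather than deletion: it sidesteps the censorship objections that blanket removal would raise, while still clearing feed space for verified material when it matters most.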
Regulatory action remains a viable option, but it must address more than just the legal risks of spam. Beyond laws, information literacy must be developed and adapted to the current situation. A purely technical approach to the issue fails to capture the true scope of the threat, as a fundamental cultural shift is underway in the public perception of “truth”. Truth is now defined not as what corresponds to reality, but as what is emotionally appealing.
If left unaddressed, the pseudo-reality produced by AI will increasingly pollute already “noisy” international communication. For this reason, in a world where the volume of generative content exceeds the volume of human-made fakes, regulating synthetic information and AI spam becomes, from a societal perspective, even more important than curbing ordinary information distortions.