Evolving Propaganda Warfare
By DRASInt® Risk Alliance
Introduction

Modern conflict now includes a contest over perception as well as terrain. Since 2018, U.S. Army doctrine, Defense Advanced Research Projects Agency (DARPA) research, NATO policy and major reporting on the Iran war have all shown the same trend: adversaries use bots, synthetic media and coordinated narrative campaigns to shape belief, delay verification and erode trust. The military problem is not only that false content exists, but that it can move faster than authoritative correction and can exploit the appearance of scale, authenticity and urgency. This article argues that information operations, deception and psychological operations remain central to contemporary conflict, but now operate in a digitally accelerated environment where generative AI lowers the cost of influence and raises the speed of deception.
Doctrine and policy
U.S. doctrine defines information operations as the integrated use of capabilities to influence, disrupt, corrupt or usurp adversary decision making while protecting friendly information and systems. The Army’s newer doctrine treats information as a manoeuvre space and emphasises information advantage, not merely messaging. NATO policy on psychological operations likewise states that PSYOPS are planned activities directed at approved audiences to influence perceptions, attitudes and behaviour in support of political and military objectives. Taken together, these texts show that modern military practice still sees information as an operational domain rather than a public relations function.
The same doctrine also makes a practical distinction: information activities are meant to support decision advantage, not just generate noise. That point matters because today’s adversary can use the open social media environment to inject deception directly into the public sphere, where it may be mistaken for organic reporting. DARPA’s recent work on deepfake defence confirms that false media is now treated as a security problem requiring forensic tools for detection, attribution and characterisation. This is a clear sign that the U.S. defence community no longer treats manipulated media as a marginal issue.
Bot networks
Bot networks remain a core instrument of modern influence operations. Academic work on computational propaganda shows how organised networks of automated and semi-automated accounts can manufacture consensus by repeating selected claims until they appear socially validated. State-backed information operations research likewise finds evidence of coordination across accounts and narratives, with timing and repetition used to amplify selected themes. The military value of bots lies not in persuasion alone, but in volume, speed and the illusion of scale.
This matters in war because perception is often formed before facts are settled. A flood of coordinated posts can push a narrative into the information space within minutes, making later correction harder and less visible. The effect is operational, not cosmetic: a false narrative can affect morale, public pressure, diplomatic framing and the credibility of official statements. In that sense, bot networks are not merely online nuisances; they are force multipliers for deception and psychological pressure.
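The coordination signature described above (many accounts, near-identical claims, compressed timing) is measurable, which is why researchers can document it. As a rough illustration, the Python sketch below flags near-duplicate texts posted by many distinct accounts inside a tight window; the `Post` schema, the normalisation and the thresholds are assumptions made for this example, not any platform's actual API or a validated detection model.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str    # account identifier (hypothetical schema)
    ts: datetime   # publication time
    text: str      # post body

def coordination_signals(posts: list[Post],
                         window_seconds: int = 60,
                         min_accounts: int = 5) -> list[tuple[str, int]]:
    """Flag near-duplicate texts posted by many accounts in a tight window.

    A crude proxy for the 'manufactured consensus' pattern: the same
    claim repeated by distinct accounts faster than organic sharing
    plausibly allows. Thresholds are illustrative only.
    """
    by_text: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        # Normalise whitespace and case so trivial edits still cluster.
        key = " ".join(p.text.lower().split())
        by_text[key].append(p)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p.ts)
        authors = {p.author for p in group}
        span = (group[-1].ts - group[0].ts).total_seconds()
        if len(authors) >= min_accounts and span <= window_seconds:
            flagged.append((text, len(authors)))
    return flagged
```

Operational systems add account-level features such as creation date, follower overlap and posting cadence, but even this toy version captures the point above: the tell is volume plus timing, not the quality of the content.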
Synthetic media in deception
Synthetic media has changed the cost curve of deception. DARPA’s Media Forensics and Semantic Forensics work exists because manipulated images, video, audio and text have become accessible, scalable and operationally useful to attackers. The agency’s own explanation is direct: false media and adversarial information attacks can deceive both humans and machine learning systems, and defence must therefore reverse engineer the deception chain rather than merely flag the final product. That is a doctrinally important shift.
In military terms, synthetic media acts as a deception enabler. A fabricated strike video, a fake satellite image, or a generated battlefield clip can create the appearance of an event that never happened but still shape assessment and response. The risk is highest when the content looks like eyewitness material, because audiences often treat visual evidence as more reliable than text. Once shared widely, such content can harden into perceived reality before verification can catch up.
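DARPA's public material names the goal (detection, attribution, characterisation) without prescribing an implementation, but the triage logic can be sketched. In the hypothetical Python fragment below, `detector_score` stands in for the output of whatever forensic model an organisation actually fields; the hash index and the score cutoffs are illustrative assumptions, not calibrated values.

```python
import hashlib
from pathlib import Path

# Hypothetical index of media already debunked by analysts. In practice
# this would be a shared store of perceptual hashes, not exact digests,
# so that re-encoded copies still match.
KNOWN_FABRICATIONS: set[str] = set()

def fingerprint(path: Path) -> str:
    """Exact-match fingerprint; catches only verbatim recirculation."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def triage(path: Path, detector_score: float) -> str:
    """Route one media item through a detection-then-attribution flow.

    detector_score is a stand-in for a real forensic model's output
    (0.0 = likely authentic, 1.0 = likely synthetic).
    """
    if fingerprint(path) in KNOWN_FABRICATIONS:
        return "known fabrication: reuse of already-debunked media"
    if detector_score >= 0.9:
        return "high-confidence synthetic: queue for attribution analysis"
    if detector_score >= 0.5:
        return "ambiguous: hold for human forensic review"
    return "no synthetic indicators: monitor spread before amplifying"
```

The design point is the first branch: recirculated fabrications are cheaper to catch by lookup than by re-running detectors, which is why shared verification databases matter as much as detection models.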
Perception management tactics
Perception management in contemporary conflict relies on timing, repetition, emotional framing and plausible visual evidence. NATO PSYOPS policy explicitly aims to influence approved audiences’ perceptions, attitudes and behaviour; adversaries apply the same logic with greater anonymity and less restraint. In open social media environments, the most effective tactic is often not deep persuasion but continuous uncertainty. If the audience cannot tell what is real, it becomes easier to direct fear, doubt and anger.
Three techniques recur across the evidence. First, rapid narrative seeding introduces a claim before official sources respond. Second, amplification by coordinated accounts makes the claim appear widely accepted. Third, synthetic visuals supply the emotional proof that text alone may not provide. This combination is potent because it exploits both cognitive bias and platform mechanics.
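Because each technique leaves a measurable trace, a circulating claim can in principle be scored on all three at once. The sketch below is a hypothetical composite indicator, not an operational model; the feature names, thresholds and weights are assumptions chosen only to mirror the three techniques just listed.

```python
from dataclasses import dataclass

@dataclass
class NarrativeCluster:
    """Aggregate features for one circulating claim (hypothetical schema)."""
    minutes_to_first_burst: float     # seeding speed after the triggering event
    coordinated_account_share: float  # 0..1 fraction of amplifiers flagged as coordinated
    synthetic_media_attached: bool    # any generated visuals in the cluster

def influence_risk(c: NarrativeCluster) -> float:
    """Sum one 0..1 contribution per technique into a 0..3 risk score."""
    # Rapid seeding: full weight inside 30 minutes, decaying to zero by 6 hours.
    if c.minutes_to_first_burst <= 30:
        seeding = 1.0
    else:
        seeding = max(0.0, 1.0 - c.minutes_to_first_burst / 360)
    # Coordinated amplification: saturates once half the amplifiers look coordinated.
    amplification = min(1.0, c.coordinated_account_share * 2)
    # Synthetic visuals: treated as a binary emotional-proof signal.
    visuals = 1.0 if c.synthetic_media_attached else 0.0
    return seeding + amplification + visuals
```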
Iran war case studies
The Iran war provides a contemporary example of how these methods converge. Euronews reported that social media was flooded with AI-fabricated combat scenes, missile strike clips and imagery of civilian damage from the start of the conflict, with misleading content spreading rapidly across feeds. The report also noted that some now-debunked videos claimed to show US troops crying or Gulf buildings being destroyed, demonstrating how easily synthetic visuals can mimic war reporting. The New York Times similarly reported a surge of AI-created images and videos tied to the conflict and identified more than 110 distinct deepfakes in a short period.
Other reporting on the same conflict described pro-Iran disinformation activity built around coordinated amplification and AI-generated content, including the use of large numbers of fake accounts and repeated themes of victory and adversary weakness. Even where exact numbers differ by source, the pattern is consistent: false or manipulated media was produced quickly, circulated broadly and used to influence how the war was perceived. The strategic lesson is simple: when the information environment is saturated, the first convincing narrative can matter more than the later correction.
Countermeasures
Countermeasures must be layered. DARPA’s deepfake work shows that technical defence now requires detection, attribution and characterisation, not just binary authenticity checks. That approach is necessary because generative tools evolve quickly and can be adjusted to evade simple filters. The defence community therefore needs forensic pipelines, provenance tools and rapid analytic triage.
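The layering can be made concrete: each layer narrows the question the next must answer. Below is a minimal sketch of that hand-off, with stub stages standing in for real detection, attribution and characterisation tools; the signatures, field names and cutoffs are assumptions for illustration, not DARPA's design.

```python
from typing import Callable, Optional

# A stage enriches the record for the next layer, or returns None to
# clear the item. Real stages would wrap forensic models, provenance
# validators and analyst tooling behind this signature.
Stage = Callable[[dict], Optional[dict]]

def run_pipeline(item: dict, stages: list[Stage]) -> dict:
    """Pass one media record through detection, attribution, characterisation.

    The ordering encodes the doctrine: establish *that* content is false
    before asking *who* produced it and *why*.
    """
    record = dict(item)
    for stage in stages:
        result = stage(record)
        if result is None:
            record["verdict"] = "cleared"
            return record
        record = result
    record["verdict"] = "escalate to analysts"
    return record

# Illustrative stage stubs (assumed fields and thresholds):
def detect(r: dict) -> Optional[dict]:
    return {**r, "synthetic": True} if r.get("score", 0.0) > 0.5 else None

def attribute(r: dict) -> Optional[dict]:
    return {**r, "actor": "unattributed"}   # attribution rarely clears items

def characterise(r: dict) -> Optional[dict]:
    return {**r, "intent": "deception"}

# Example: run_pipeline({"score": 0.8}, [detect, attribute, characterise])
# returns a record marked "escalate to analysts".
```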
Technical tools alone are insufficient. Army doctrine on information advantage and broader NATO PSYOPS doctrine both imply that the decisive response includes disciplined communication, clarity of mission narrative and protection of trusted channels. Public literacy, platform transparency and journalist verification remain essential because the real contest is over trust. In conflict, speed matters, but credibility matters more over time.
Conclusion
The post-2018 doctrine and reporting examined here support a clear conclusion: information operations, deception and psychological operations remain core military functions, but they now operate inside an AI-enabled, platform-driven environment. Bot networks manufacture apparent consensus, synthetic media supplies visual plausibility and perception management exploits the gap between event, verification and correction. The Iran war demonstrates how quickly these methods can be fused into a single influence campaign. The practical answer is not one tool but a system: doctrine, forensics, resilience and disciplined public communication.
References
1. U.S. Army, Field Manual 3-13: Information Operations. https://armypubs.army.mil/ProductMaps/PubForm/Details.aspx?PUB_ID=1007358
2. U.S. Army, Field Manual 3-0: Operations (information as a warfighting function). https://armypubs.army.mil/ProductMaps/PubForm/Details.aspx?PUB_ID=1007357
3. NATO, Allied Joint Publication AJP-3.10.1: Psychological Operations. https://www.coemed.org/files/stanags/01_AJP/AJP-3.10.1_EDA_V1_E_2474.pdf
4. NATO Strategic Communications Centre of Excellence. https://stratcomcoe.org/
5. DARPA, Media Forensics (MediFor) program. https://www.darpa.mil/program/media-forensics
6. DARPA, Semantic Forensics (SemaFor) program. https://www.darpa.mil/program/semantic-forensics
7. DARPA, Explainable AI (XAI) and AI security initiatives. https://www.darpa.mil/research/programs/explainable-artificial-intelligence
8. Oxford Internet Institute, Computational Propaganda Project. https://www.oii.ox.ac.uk/research/projects/computational-propaganda/
9. Bradshaw & Howard, The Global Disinformation Order (Oxford Internet Institute, 2020). https://www.oii.ox.ac.uk/research/projects/political-bots/
10. NATO StratCom COE, Social Media as a Tool of Hybrid Warfare. https://stratcomcoe.org/publications
11. Brookings Institution, How Deepfakes Threaten National Security. https://www.brookings.edu/articles/how-deepfakes-could-undermine-national-security/
12. RAND Corporation, The Russian “Firehose of Falsehood” Propaganda Model. https://www.rand.org/pubs/perspectives/PE198.html
13. MIT Sloan, The Spread of True and False News Online (Science, 2018). https://science.sciencemag.org/content/359/6380/1146
14. Euronews, reporting on AI-generated war misinformation. https://www.euronews.com/next/
15. The New York Times, coverage of AI-generated deepfakes in conflict. https://www.nytimes.com/
16. BBC News, disinformation and AI in modern conflicts. https://www.bbc.com/news
17. Reuters, fact-checking and war misinformation coverage. https://www.reuters.com/fact-check/
18. CISA, disinformation and resilience guidance. https://www.cisa.gov/topics/election-security/mis-disinformation
19. EU vs Disinfo, disinformation tracking database. https://euvsdisinfo.eu/
20. First Draft, verification and misinformation resources. https://firstdraftnews.org/