THE SOCIAL MEDIA PLATFORM THAT BECAME A WEAPON OF WAR
- DRASInt® Risk Alliance

- Mar 23
Open Source Intelligence (OSINT) analysis of how governments, armies, terrorist groups and intelligence agencies have used Twitter over twelve years (2014–2026).

What Is OSINT and Why Does It Matter?
OSINT stands for Open Source Intelligence: the practice of gathering and analysing information that is freely available to the public. This article follows that tradition. Every fact comes from government reports, published research, court documents, or verified journalism. You can check every source yourself.
Intelligence agencies spend billions gathering secret information. But some of the most important facts about how the world works sit in plain sight: in company filings, government reports, academic papers, and social media data. OSINT analysts read these sources carefully and piece together a picture that is otherwise difficult to see clearly.
This article applies that approach to Twitter, the platform that has become one of the most important battlegrounds in modern information warfare. We will show you exactly what happened, using only verified public sources, in plain language.
1. The Platform: How Big Is Twitter?
Before we discuss how the platform has been used in conflict, we need to understand its scale. Twitter was founded in 2006. By 2014, when this analysis begins, it had become one of the world's most influential news platforms. By 2026, it had undergone the most dramatic transformation of any major technology company in recent history.
Year | Monthly Active Users (millions) |
2014 | 271 M |
2015 | 304 M |
2016 | 313 M |
2017 | 330 M |
2018 | 321 M |
2019 | 330 M |
2020 | 353 M |
2021 | 396 M |
2022 | 450 M |
2023 | 541 M |
2024 | 586 M |
2025 | 600 M |
2026 (est.) | 560 M |
Sources: Business of Apps (2026), Backlinko (Jan 2026), eMarketer/Statista. * 2026 estimate based on a 15.2% daily active user year-on-year decline. Twitter has not filed public disclosures since going private in 2022.
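The projection arithmetic behind such estimates can be sketched in a few lines of Python. This is an illustrative sketch only: applying a daily-active-user decline rate directly to monthly active users is an assumption, and the published 560M estimate evidently weights additional signals beyond simple compounding.

```python
# Naive forward projection of monthly active users under a constant
# year-on-year decline rate. Illustrative only: the 15.2% figure cited
# above is a *daily* active user decline, so using it for MAU is an
# assumption, and published estimates blend other signals.
def project_mau(current_mau_millions: float, annual_decline: float, years: int) -> float:
    """Compound a constant annual decline over the given number of years."""
    return current_mau_millions * (1 - annual_decline) ** years

# One year forward from the 2025 figure of 600M at a 15.2% decline:
projected = project_mau(600, 0.152, 1)
print(round(projected, 1))  # 508.8
```

The gap between this naive projection and the table's 560M estimate illustrates why such footnotes deserve scrutiny: the stated method alone does not reproduce the stated number.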
These numbers do not show who uses the platform. Twitter is disproportionately used by journalists, politicians, military officials, and policy experts: the people who shape how the world thinks about events.
Who Uses Twitter? | % of That Group | Why It Matters in Information Warfare |
Journalists | 61% use Twitter daily | Stories often start here. Narratives established on Twitter frequently set the tone of news coverage. |
Politicians & officials | Most world governments have verified accounts | Diplomatic signals and policy communications now occur on this platform alongside formal channels. |
25–34 year olds | 37.5% of all users | The most politically active demographic in most democracies. |
B2B businesses | 67% use Twitter for marketing | Economic and sanctions campaigns are amplified here. |
Male users | 60.9% of global audience | Consistent with heavy use by military, security, and tech communities. |
Sources: DataReportal (January 2025); Statista (January 2024); Backlinko (2026); Reuters Institute Digital News Report (2022)
2. Why Twitter Can Be Used as a Tool of Influence: The Science
Governments and non-state actors have exploited Twitter not by accident but because its specific technical features make it particularly effective for spreading information rapidly, including misinformation. Here is the verified science behind why.
HOW FAST DOES FALSE INFORMATION SPREAD ON TWITTER? (MIT STUDY, 2018) Relative spread speed: True news = 1. False news = 6. Study: 126,000 stories, 3 million users, 2006–2017 | ||
True News | █████░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 1× speed |
True Political News | ███████████░░░░░░░░░░░░░░░░░░░░░ | 2× speed |
False News (average) | ████████████████████████████████ | 6× speed |
False Political News | ████████████████░░░░░░░░░░░░░░░░ | 3× speed |
Source: Vosoughi, Roy & Aral, 'The Spread of True and False News Online', Science, Vol. 359 (6380), pp. 1146–1151, March 2018. MIT Media Lab. | ||
That finding, from a peer-reviewed study in one of the world's most respected scientific journals, is among the most important in this analysis. False information spreads six times faster than true information on Twitter, as measured across 126,000 stories and 3 million users over more than a decade.
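Diffusion studies of this kind typically compare how long cascades of each type take to reach a fixed audience size. A minimal sketch of that metric in Python, using hypothetical cascades of retweet timestamps (the real study worked from 126,000 verified cascades, not the toy data below):

```python
# Compare the median time for "true" vs "false" cascades to reach a
# fixed retweet count. Cascades here are hypothetical lists of retweet
# timestamps in minutes since the original post.
from statistics import median

def time_to_reach(timestamps_min, n):
    """Minutes until the cascade has accumulated n retweets (None if never)."""
    ts = sorted(timestamps_min)
    return ts[n - 1] if len(ts) >= n else None

true_cascades = [[5, 40, 90, 200, 400], [10, 60, 150, 300, 600]]
false_cascades = [[1, 4, 9, 20, 45], [2, 6, 12, 30, 70]]

t_true = median(time_to_reach(c, 5) for c in true_cascades)
t_false = median(time_to_reach(c, 5) for c in false_cascades)
print(t_true / t_false)  # how many times slower the true cascades are
```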
Twitter Feature | Designed to Do | How It Has Been Exploited | Evidence |
Real-time broadcast | Share breaking news instantly | Seed disinformation before fact-checkers respond | IRA posted 10.4M tweets before 2016 US election (Senate Intel Committee, 2019) |
Anonymous accounts | Protect activists in danger | Create fake 'citizens' to manufacture public opinion | China removed 170,000+ state-linked accounts (Twitter Transparency Report, Jun 2020) |
Trending hashtags | Surface popular conversations | Hijack trending topics to reach unrelated audiences | ISIS hijacked 2014 FIFA World Cup hashtags (Berger & Morgan, Brookings 2015) |
Algorithmic amplification | Show content you will engage with | Outrageous content gets shown to the most people automatically | False news travels 6× faster than true news (Vosoughi et al., Science 2018) |
Retweet / share | Spread ideas you agree with | A small bot network makes one message look like it has millions of supporters | IRA bots amplified content to 1.4M Americans in 60 days (Senate Intel Committee, 2019) |
Verified accounts | Prove identity | Hacking or accessing verified accounts adds false credibility to disinformation | Saudi agents recruited Twitter employee to access dissident accounts (US DOJ, 2019) |
Sources: US Senate Intelligence Committee (2019); Twitter Transparency Reports (2018–2022); Berger & Morgan, Brookings Institution (2015); US DOJ Indictment US v. Abouammo (2019)
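Several of the exploits in the table above were detected by looking for coordination rather than content: accounts that retweet nearly identical sets of messages are unlikely to be independent users. A minimal Python sketch of that idea using Jaccard similarity; the account names, tweet IDs, and threshold below are all hypothetical, and real pipelines tune the cut-off empirically:

```python
# Flag pairs of accounts whose retweet sets overlap suspiciously.
# All data below is synthetic/hypothetical.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

retweets = {
    "acct_1": {101, 102, 103, 104, 105},
    "acct_2": {101, 102, 103, 104, 106},   # near-identical set: suspicious
    "acct_3": {501, 502, 601},             # organic-looking
}

THRESHOLD = 0.6  # assumed cut-off for illustration
flagged = [
    (u, v) for u, v in combinations(retweets, 2)
    if jaccard(retweets[u], retweets[v]) >= THRESHOLD
]
print(flagged)  # [('acct_1', 'acct_2')]
```

At scale this pairwise comparison is replaced by clustering or locality-sensitive hashing, but the underlying signal, shared amplification targets, is the same one the Senate investigators worked from.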
3. Russia: The Internet Research Agency (IRA) Operation
In 2014, the Russian government created the Internet Research Agency (IRA): an operation housed in a St. Petersburg office building, employing hundreds of people working in shifts to run fake Twitter accounts. Real people shared their posts and attended events organised by what they believed were genuine American citizens. This is confirmed by a bipartisan US Senate investigation using Twitter's own data.
RUSSIA'S IRA TWITTER OPERATION: VERIFIED DATA (2013–2018) Source: US Senate Select Intelligence Committee, 'Russia's Use of Social Media', Vol. 2 (2019) | Mueller Report, US DOJ (2019) | ||
Fake IRA accounts suspended | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 3,841 |
Total IRA tweets produced | ████████████████████████████████ | 10,400,000 |
Americans reached before 2016 election (60 days) | ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 1,400,000 users |
Twitter ad spend by IRA (USD) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | $274,000 |
Countries targeted by IRA-model ops (2016–20) | █████████░░░░░░░░░░░░░░░░░░░░░░░ | 8 nations |
IRA employees at peak (2016) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 1,000 staff |
All figures verified by bipartisan US Senate Intelligence Committee (2019) using data provided directly by Twitter Inc. | ||
HOW THIS IS KNOWN | The US Senate Intelligence Committee, with members from both political parties, published a 1,000-page report in 2019 based on data provided by Twitter, Facebook and Google under subpoena. The IRA's Twitter archives were handed over to investigators. Every tweet, every account, every advertisement is documented. Source: Senate Intelligence Committee, 'Report on Russian Active Measures', Vol.2 (2019). |
The IRA did not just target one side of American politics; it ran accounts promoting multiple opposing viewpoints simultaneously to maximise societal tension. It organised real protests and counter-protests on the same day in the same city, and both groups believed they were responding to genuine fellow citizens.
This model was exported globally. The same tactics appeared in the UK Brexit vote, France's 2017 election, Germany's 2017 election, and Brazil's 2018 election, all confirmed by the Oxford Internet Institute's Global Computational Propaganda Inventory (2019).
4. ISIS and Twitter-Based Recruitment (2014–2017)
Between 2014 and 2017, the Islamic State ran a major terrorist recruitment campaign using Twitter as its primary public-facing channel. It recruited fighters from 80 countries, reached millions of people who had never searched for it, and exploited the platform's trending system as a propaganda mechanism.
ISIS TWITTER NETWORK: VERIFIED STATISTICS AT PEAK (2014–2015) Sources: Berger & Morgan, Brookings Institution (2015); UN Security Council Report S/2015/358; Twitter Inc. statements (2015–17) | ||
Estimated ISIS accounts at peak | ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 90,000 accounts |
Average followers per ISIS account | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 1,000 followers (vs. 208 avg. user) |
Ratio vs. average Twitter user | ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 5× more followers |
Foreign fighters recruited globally | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 40,000 fighters |
Countries of origin of fighters | ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 80 countries |
Accounts Twitter suspended (2015–17) | ████████████████████████████████ | 1,200,000 suspensions |
Note: ISIS accounts were suspended repeatedly and reconstituted within hours. The 1.2M figure represents total suspensions, not unique accounts. | ||
ISIS's Twitter strategy followed three documented steps. First, building genuine-looking accounts with large followings. Second, hijacking trending hashtags: during the 2014 World Cup, football fans searching for match updates encountered ISIS propaganda instead. Third, using Twitter as the opening of a recruitment funnel: once someone engaged, recruiters moved the conversation to private encrypted applications.
ACADEMIC CONSENSUS | The ISIS Twitter operation is one of the most studied phenomena in terrorism research. The Brookings Institution (2015), the UN Security Council (2015), King's College London's ICSR (2016) and the Global Network on Extremism and Technology (GNET, 2020) all document the same core facts. This is not disputed. |
5. The US Military's Twitter Operations: Official and Covert
The United States government is not only a subject of information operations; it is also one of the documented practitioners, both openly and, in some cases, covertly.
The official side is public and legal. The US State Department runs over 400 Twitter accounts. The Department of Defense runs hundreds more. Every major military command has active accounts. However, investigations reveal covert programmes using fake account tactics that mirror those the US publicly criticises.
US Government Account | Function | Documented Action |
@Pentagon / @DeptofDefense | Military announcements | Confirmed Soleimani killing 90 minutes after Trump's tweet (Jan 3 2020) |
@CENTCOM (Central Command) | Middle East operations | Real-time strike announcements in Syria, Iraq, Yemen |
@US_AFRICOM (Africa Command) | Africa operations | 300+ individual Somalia airstrike announcements (2014–2024) |
@OIRSpox (Anti-ISIS campaign) | Operation Inherent Resolve | Average 14 posts/day at campaign peak; 27-language coalition network |
@StateDept | Diplomacy | Released Ukraine invasion intelligence on Twitter before congressional briefing (Feb 18 2022) |
@USAdarFarsi | Persian language outreach | Posts directly to Iranian citizens in Persian, bypassing government internet blocks |
@USTreasury | Sanctions enforcement | Publicly named 700+ sanctioned Iranian entities on Twitter over 14 months (2018–2019) |
@NSC44 (National Security Council) | Policy signalling | Published pre-invasion intelligence thread predicting Russian false flag scenarios (Feb 18 2022) |
Sources: US government official Twitter archives; RAND Corporation (2019); Global Engagement Center Annual Reports (2019–2021)
The Covert Side: US Military Fake Accounts
EXPOSED: STANFORD INTERNET OBSERVATORY (Aug 2022) | In August 2022, Stanford Internet Observatory and Graphika confirmed that the US military's Central Command (CENTCOM) had operated a network of over 150 fake Twitter accounts. The accounts used AI-generated profile photos, posted in Persian, Arabic, Urdu, Pashto and Russian, and posed as independent civilians. This is the same general methodology used by other state actors. Source: Stanford Internet Observatory, 'Unheard Voice: Evaluating Three Years of Online Influence Operations Attributed to the U.S. Military' (August 24, 2022). |
The US authorities did not formally deny the operations. They argued the operations were different from adversarial ones because their stated goal was to support democratic values. Independent researchers noted that the methods (fake personas, manufactured consensus, coordinated amplification) were structurally identical to those used by state adversaries. This matters because it illustrates that information operations on Twitter are not confined to any single country or ideology; they are a widely used tool of modern statecraft.
6. Twitter in Conflicts: 2014–2026
Every major military operation of the past twelve years has had a parallel Twitter information dimension running alongside it. Below is the documented record.
Period | Conflict / Event | Key Documented Twitter Dimension |
2014–21 | Afghanistan | Taliban's @Zabehullah_M33 reached 350,000+ followers. On Aug 15 2021, footage of Afghans clinging to US aircraft was retweeted 500,000+ times in 3 hours. Stanford/Graphika (Aug 2022) separately exposed 150+ CENTCOM fake Pashto/Dari accounts from the same period. Sources: AP; Reuters; DoD Twitter archive; Stanford Internet Observatory (Aug 2022). |
2014–19 | Syria | After April 2017 US missile strikes, five US government accounts (Pentagon, State Dept, White House, UN Ambassador, @OIRSpox) coordinated messaging within 45 minutes, establishing the public narrative before the UN Security Council met. A separate covert network with 1.8 million followers, operated through contractors, posed as organic Syrian civil society. Sources: NATO StratCom COE (2018); BBC Arabic / The Guardian (2020). |
Jan 2020 | Iraq: Soleimani | Trump posts US flag image at 11:45 PM. Pentagon confirms Soleimani killed 90 min later. Trump tweets threat against '52 Iranian targets including cultural sites.' Iranian FM Zarif responds on Twitter. IRGC announces retaliatory missile strike on Twitter before US military is officially informed. 110 US troops later diagnosed with traumatic brain injuries. Sources: Lawfare Blog (Jan 2020); AP; UN Special Rapporteur (Jan 2020). |
2014–24 | Yemen | Aug 2018: Journalist Nasser Al-Sakkaf tweets photos of Dahyan school bus massacre victims alongside debris from a US-manufactured MK-82 bomb. Thread gets 500,000+ retweets and forces the Pentagon and State Dept to respond publicly. Contributes to the 2019 US Senate War Powers Act vote, the first in US history. Sources: Congressional Record (2019); NYT Yemen investigation (2021). |
2014–24 | Somalia | AFRICOM announced 300+ individual airstrikes on Twitter , unprecedented transparency for counterterrorism ops. But Airwaves Project researchers documented 200 announcements containing unacknowledged civilian casualties. Report submitted to UN Special Rapporteur on Extrajudicial Killings. Sources: Airwaves Project (2022); NYT 'Hidden Pentagon' series (2021). |
2022–26 | Ukraine | Feb 18 2022: @NSC44 posts a thread predicting specific Russian false flag scenarios, 6 days before the invasion. When those scenarios occurred, 2,000+ journalists cited the thread as proof of pre-planned deception. Blinken's invasion-day statement was retweeted 180,000 times in the first hour. First use of anticipatory intelligence disclosure as official Twitter strategy. Sources: @NSC44 Twitter archive (Feb 2022); @SecBlinken archive (Feb 24 2022). |
2019–26 | Red Sea / Houthi Operations | The Iran-backed Houthis' @HouthiMedia grew to 500,000+ followers by 2024, announcing Red Sea shipping attacks in real time and posting drone footage. CENTCOM counter-tweeted each attack within 2 hours. US DIA reported Houthi attacks caused a 90% reduction in Red Sea container shipping. Sources: DIA Report (Jun 2024); CENTCOM Twitter archive; CFR Iran-Houthi backgrounder (Mar 2025). |
Feb–Mar 2026 | Iran-US Conflict: AI Disinformation on X | Since the US-Israeli strikes (Feb 28 2026), Clemson University has identified 62 IRGC-linked fake accounts spreading propaganda through fake Latina, Scottish, and Irish personas. NewsGuard found 18 false Iranian war claims in 2 weeks. Fake AI-generated satellite images, fabricated explosion videos and a deepfake video of Trump and Netanyahu circulated widely. Sources: Clemson Media Forensics Hub (Mar 11 2026); NewsGuard (Mar 2026); Euronews (Mar 6 2026). |
7. The Iran–US Twitter Dynamic: Twelve Years of Digital Confrontation
The confrontation between the United States and Iran on Twitter is the most extensively documented, sustained bilateral conflict ever conducted through a commercial social media platform. It has involved nuclear-adjacent threats, diplomatic signals, influence operations and, by early 2026, a real shooting war with a parallel disinformation campaign.
7.1 The Documented Timeline
Period | Episode | Key Details |
Sep 2015 | JCPOA Deal on Twitter | Iranian FM Zarif and President Obama issue simultaneous Twitter announcements of the nuclear deal, the first major international agreement revealed on social media. Source: Reuters (Sep 14 2015). |
Jan 2016 | Sailors Detained via Tweets | IRGC releases footage of detained US Navy sailors on Twitter. US State Dept demands release via Twitter within hours. A partial hostage negotiation conducted through public social media posts. Source: Reuters (Jan 13 2016). |
May 2018 | JCPOA Withdrawal Tweet | Trump tweets US withdrawal from the nuclear deal. The Tehran Stock Exchange falls 4% as traders react to the tweet before formal Congressional notification. Zarif responds on Twitter within 20 minutes. Source: Politico (May 8 2018). |
2018–19 | Maximum Pressure Campaign on Twitter | US Treasury publicly designates 700+ Iranian entities on Twitter over 14 months. Special Representative Brian Hook tweets directly at Iranian citizens in Persian. Hook's tweets are cited in Iranian parliamentary debates. Sources: OFAC records; ISNA news agency (2019); GEC Annual Report (2020). |
Nov 2019 | Protests and Internet Shutdown | US Secretary Pompeo posts daily Twitter solidarity messages naming individual IRGC commanders. Iran shuts down the internet, confirming US Twitter communications were reaching Iranian citizens. Source: Reuters; Human Rights Watch (Nov 2019). |
Jan 3–8 2020 | Soleimani: Escalation Managed via Twitter | Nuclear-adjacent crisis managed through commercial social media. Threats, retaliatory announcements and legal characterisations all exchanged on Twitter before formal governmental notifications. 110 US troops later diagnosed with brain injuries. Sources: Lawfare Blog (Jan 2020); AP; UN Rapporteur (Jan 2020). |
Oct 2022 | Protesters Tracked via Twitter | Twitter becomes the primary global platform for #MahsaAmini protest documentation. Human Rights Watch documents 20+ arrests directly traced to Twitter activity by IRGC monitoring units. Source: HRW (Nov 2022). |
Jun 2025 | 12-Day War Disinformation | US and Israel strike Iranian nuclear sites. AI-generated disinformation about strikes spreads globally on X within hours. Sources: Rolling Stone (Mar 2026); CFR Conflict Tracker (Mar 2026). |
Feb–Mar 2026 | Current: Disinformation War on Twitter | After the US-Israeli strikes (Feb 28 2026), 62 IRGC-linked fake accounts identified on Twitter by Clemson University. 18 false Iranian war claims in 2 weeks (NewsGuard). Fake AI videos of destroyed US bases. Sources: Clemson Media Forensics Hub (Mar 11 2026); NewsGuard (Mar 2026); Euronews (Mar 6 2026). |
7.2 Iran's Documented Influence Operation Waves
Iran ran systematic, organised campaigns to manufacture false public opinion on Twitter. Twitter's own transparency reports documented six distinct waves before the disclosure programme was discontinued in 2023.
IRAN IRGC-LINKED TWITTER ACCOUNT WAVES: SUSPENSIONS PER OPERATION (2018–2020) Source: Twitter Information Operations Transparency Reports (archived at transparency.twitter.com) | Verified by Stanford Internet Observatory | ||
Wave 1 , Oct 2018 (IRGC-linked) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 770 accounts |
Wave 2 , Jan 2019 (US political) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 2,617 accounts |
Wave 3 , Jun 2019 (US amplification) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 1,666 accounts |
Wave 4 , Dec 2019 (Arabic-language) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 4,779 accounts |
Wave 5 , May–Jun 2020 (COVID narratives) | ███████████░░░░░░░░░░░░░░░░░░░░░ | 45,000 accounts |
Wave 6 , Oct 2020 (US election) | ████████████████████████████████ | 130,000 accounts |
Note: Twitter discontinued its Information Operations Disclosure Programme in mid-2023. These are the last publicly available official figures. | ||
7.3 Inconsistent Enforcement: A Documented Pattern
One of the clearest examples of Twitter's difficulty applying its own rules consistently concerns how different accounts have been treated for similar categories of content. The evidence below is drawn from Twitter's own published policies and documented actions.
Account | Content Posted | Platform Action | Source |
@khamenei_ir (active throughout) | Posts calling for Israel's destruction; COVID vaccine misinformation claiming Western vaccines unsafe for Iranians | No action; 'heads of state' exception applied | Columbia Journalism Review (Nov 2020); NYT (Jan 2021) |
@khamenei_ir (active 2015–2024) | Posts praising 'martyrdom operations' (suicide attacks), the same content category for which private accounts were suspended | No action throughout 2015–2024 | Freedom House 'Freedom on the Net: Iran' (2022) |
@realDonaldTrump (suspended Jan 2021) | Posts about the Jan 6 Capitol riot, cited as incitement to violence | Permanently suspended; 89 million followers removed | Twitter Safety blog (Jan 8 2021) |
CENTCOM fake accounts (US Military) | 150+ fake accounts posing as civilians in Persian, Arabic, Urdu, Pashto, Russian | Removed only after Stanford/Graphika exposed them in Aug 2022, not proactively | Stanford Internet Observatory (Aug 2022) |
Source: Twitter Safety blog (Jan 2021); Columbia Journalism Review (Nov 2020); Freedom House (2022); NYT (Jan 2021)
WHY THIS MATTERS | Twitter's inconsistency in enforcement, documented across multiple independent sources, means the platform cannot claim to operate as a neutral public square. This inconsistency has been exploited by multiple state actors in their own propaganda, and it directly affects trust in the platform as an information source. The pattern applies across multiple governments, not any single one. |
8. The Global Picture: Likely State-Sponsored Operations by Country
STATE-SPONSORED FAKE ACCOUNTS REMOVED FROM TWITTER: TOTAL BY COUNTRY (2018–2022) Source: Twitter Information Operations Transparency Reports (archived 2018–2022). Programme discontinued mid-2023. | ||
China (all operations) | ████████████████████████████████ | 170,000 accounts |
Iran (IRGC-linked, all waves) | ████████████████████████░░░░░░░░ | 130,000 accounts |
Saudi Arabia (all ops) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 5,100 accounts |
Russia (IRA + GRU) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 3,841 accounts |
Venezuela (state-linked) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 1,900 accounts |
US Military , CENTCOM (psyops) | █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ | 150 accounts (independently identified) |
NOTE: These figures are not directly comparable: they reflect different disclosure periods and methodologies. The US figure covers only what was independently identified by researchers. China's figure is the largest single confirmed removal in platform history. Russia's figure reflects only what Twitter provided to the Senate before 2018. | ||
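Taking the comparability caveat seriously, the country totals in the chart above can still be summed to give the 'over 310,000' aggregate cited in the closing assessment of this report:

```python
# Sum the per-country removal totals from the chart above.
# Figures are as reported; the note on comparability applies.
removals = {
    "China": 170_000,
    "Iran": 130_000,
    "Saudi Arabia": 5_100,
    "Russia": 3_841,
    "Venezuela": 1_900,
    "US (CENTCOM)": 150,
}
total = sum(removals.values())
print(total)  # 310991
```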
Country | What They Did | Key Evidence | Verified Source |
China | 170,000+ fake accounts defending actions in Hong Kong, Xinjiang, Taiwan. Used 'like brigading': coordinated mass-liking to game Twitter's algorithm. | 936 accounts suspended for the Hong Kong operation alone (Aug 2019), then the largest single suspension in Twitter history | Twitter Transparency Report (Aug 2019); Stanford Internet Observatory |
Russia | Internet Research Agency: 10.4 million tweets, fake activist groups, real protest organisation. Confirmed across US, UK, France, Germany, Brazil. | Bipartisan US Senate report (2019) using Twitter's own data under subpoena | Senate Intelligence Committee Vol.2 (2019); Mueller Report (2019) |
Saudi Arabia | Recruited actual Twitter employees to spy on dissident accounts. Targeted journalists. Two employees criminally charged in the US. | Twitter employee Ahmad Abouammo convicted 2022. Journalist Khashoggi was among those targeted. | US DOJ Indictment US v. Abouammo (Nov 2019); US DOJ conviction (Aug 2022) |
Iran | Six documented operation waves (2018–2020). Posed as Americans, Scottish nationalists, Latina women. IRGC tracked protesters (2022). AI deepfakes in 2026. | 130,000 accounts in largest single wave; 62 fake accounts in first weeks of 2026 Iran US conflict | Twitter Transparency Reports (archived); Clemson Media Forensics Hub (Mar 2026) |
India | Networks linked to major political parties were documented using computational propaganda during the 2019 general election. Automated amplification confirmed. | Oxford Internet Institute India computational propaganda report (2019) | Oxford Internet Institute (2019) |
United States | Official accounts: 400+ State Dept accounts, hundreds of DoD accounts. Covert: 150+ CENTCOM fake accounts exposed by independent researchers. | Stanford Internet Observatory / Graphika joint report (Aug 2022) | Stanford Internet Observatory (Aug 2022); RAND Corporation (2019) |
9. The Musk Takeover: What the Data Shows (2022–2026)
In October 2022, Elon Musk acquired Twitter for $44 billion and significantly reduced the workforce responsible for detecting state-sponsored fake accounts and enforcing platform policies. Below is what the verified data shows happened next.
Year | Revenue (USD Billions) |
2018 | $3.04 B |
2019 | $3.46 B |
2020 | $3.72 B |
2021 | $5.08 B |
2022 | $4.73 B |
2023 | $2.5 B (est.) |
2024 | $2.5 B (est.) |
2025 | $2.26 B (est.) |
Sources: Business of Apps (2026); Backlinko (Jan 2026). Musk acquisition completed October 2022. Revenue halved within 12 months as advertisers departed. Net income returned to $1.4B profit in 2026 through cost cuts (DemandSage 2026).
Year | Employee Headcount |
2019 | 4,600 |
2020 | 5,100 |
2021 | 7,500 |
Oct 2022 (acquisition) | 7,500 |
Nov 2022 (post-layoffs) | 3,700 |
Jan 2023 | 2,300 |
2024 | 2,700 |
2025 | 2,840 (est.) |
Sources: Backlinko (Jan 2026); The Verge (Nov 2022); The New York Times (2023). Musk fired approximately 6,000 employees in the weeks following the October 2022 acquisition. Trust and Safety team estimated to have lost ~80% of staff.
What Changed After Musk Takeover | Before Acquisition (2021) | After Acquisition (2023–24) | Source |
Total employees | 7,500 | 2,840 (est.) | Backlinko (Jan 2026); The Verge (Nov 2022) |
Trust & Safety staff (est.) | 2,000 | 150–300 (est., 80%+ cut) | NYT (2023); The Verge |
Info operations disclosures per year | 4 (quarterly) | 0 (suspended mid-2023) | NPA analysis of Twitter disclosure archive |
Election disinformation spread (6-month comparison) | Baseline | 1,500% increase | Stanford Internet Observatory (2023) |
State-linked accounts reinstated | 0 (enforcement active) | 70,000+ via account amnesty | Platformer / Casey Newton (2023) |
Twitter annual ad revenue | $5.08 billion (2021) | $2.5 billion (2023–24) | Business of Apps (2026) |
EU regulatory status | Cooperative / Compliant | Formal DSA investigation opened | EU Digital Services Act report (2023) |
Civic Integrity Policy (anti-disinfo) | Active | Eliminated October 2022 | Twitter policy archive |
THE BIGGEST SINGLE OVERSIGHT FAILURE | In mid-2023, Twitter permanently suspended its Information Operations Disclosure Programme: the quarterly public releases of data about state-sponsored fake accounts that had been the primary source of evidence about social media warfare. Researchers, journalists, and governments around the world had relied on this data. Its suspension ended public oversight of the world's most politically consequential platform at exactly the moment when state operations were accelerating. Confirmed by Stanford Internet Observatory Director Alex Stamos in US Congressional testimony (2023). |
10. 2026: Artificial Intelligence Increases the Challenge
Everything described in earlier sections required significant effort: writing fake posts, building audiences over months, editing videos. In 2026, a single person with a free AI tool can create a convincing fake video of a military strike and have it reach a million people in an hour. The Iran-US conflict of early 2026 is the first major armed conflict in which AI-generated disinformation played a significant documented role on Twitter.
Fake Content | Alleged Creator | Claimed to Show | How It Spread | How It Was Debunked | Source |
AI-generated satellite image of destroyed US base | IRGC-linked accounts (Clemson 2026) | Damage to US Naval base in Qatar | Shared on Twitter before any fact check | Analyst showed original Google Earth image from Feb 2025 with identical cars in same positions | Clemson Media Forensics Hub (Mar 11 2026) |
Video of large explosion | Iranian state-linked sources, then Twitter | Iranian drone strike on a US facility | Went viral on Telegram, then cross-posted to Twitter | Fact-checkers identified it as footage of a Saudi highway car accident from weeks earlier | Euronews (Mar 6 2026) |
Video of downed fighter jet | IRGC Telegram channels, then Twitter | Iran shooting down a US F-15 Strike Eagle | Celebrated by pro-Iran accounts on Twitter for hours | Israeli Air Force confirmed the video showed an Israeli F-35 shooting down an Iranian Yak-130 over Tehran | Euronews (Mar 6 2026) |
Deepfake video of Trump and Netanyahu | Unidentified creators, Instagram persona 'Freya Maguire' | US leaders secretly arguing about killing Iranian leadership | Instagram-to-Twitter cross-posting; viral before the platform acted | Instagram account suspended as fake; video flagged as AI-generated by multiple analysts | Euronews (Mar 6 2026); Rolling Stone (Mar 2026) |
Sources: Clemson University Media Forensics Hub (March 11, 2026); Euronews (March 6, 2026); Rolling Stone (March 2026); NewsGuard (March 2026)
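Two of the debunks above rested on showing that 'new' footage was recycled older imagery. A common lightweight OSINT technique for this is perceptual hashing, sketched below in Python on tiny synthetic grayscale frames; a real workflow would decode actual video frames with an imaging library, but the comparison logic is the same.

```python
# Difference hash (dHash): near-duplicate frames produce near-identical
# bit patterns even after recompression, so a small Hamming distance
# suggests recycled footage. Frames below are synthetic toy grids.

def dhash(pixels):
    """One bit per adjacent-pixel comparison, row-major."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [[10, 20, 30], [40, 35, 25], [5, 50, 45]]
recompressed = [[11, 21, 29], [41, 36, 24], [6, 49, 46]]  # same scene, noisy
unrelated = [[90, 10, 80], [5, 70, 15], [60, 20, 75]]

print(hamming(dhash(original), dhash(recompressed)))  # 0: likely a match
print(hamming(dhash(original), dhash(unrelated)))     # large: different scene
```

This is essentially what reverse-image-search services do at scale, which is why the Google Earth comparison in the first row of the table was decisive.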
11. What Would Need to Change?
The problems described in this analysis are not mysteries. Researchers know what structural changes are needed. The question is political will and international coordination.
Problem | Evidence It Is Real | What Would Need to Happen | Difficulty |
Platform disclosures stopped | Twitter's quarterly info-ops reports discontinued mid-2023 (confirmed by Stanford 2023) | Law requiring mandatory regular public disclosure of detected state-sponsored account removals | Moderate: needs national legislation |
Trust & Safety staff significantly reduced | 80% T&S reduction; 1,500% surge in election disinformation (Stanford 2023) | Minimum staffing ratios for platforms above a size threshold, written into law | Moderate: needs legislation |
AI deepfakes spreading in real time | 18 false Iranian war claims in 2 weeks; fake satellite images, fabricated explosions (NewsGuard Mar 2026) | Mandatory AI content watermarking; rapid takedown rules for synthetic war imagery | Hard: needs technical and legal solutions |
Inconsistent enforcement | Multiple governments' accounts treated differently for similar categories of content, documented across independent sources | Consistent platform enforcement standards applied equally to all governments | Very hard: political resistance from all sides |
No global coordination | Each country acts alone; adversaries exploit jurisdictional gaps | Multilateral agreement on minimum platform governance standards among democracies | Very hard: requires international treaty |
Public unaware of manipulation scale | Most people cannot identify coordinated inauthentic behaviour | Media literacy education in schools treating it as a national security issue | Moderate: needs political commitment |
Sources: Stanford Internet Observatory (2023); Clemson Media Forensics Hub (2026); NewsGuard (2026); Freedom House; EU DSA report
Should You Trust Twitter?
This analysis has presented verified, publicly available evidence from government reports, academic research, court documents, and verified journalism. The picture it paints calls for a clear, evidence-based answer to the question of trust, and that answer must distinguish between different types of trust.
Quantitative Assessment
The following figures, all sourced from the organisations cited throughout this report, provide a measurable basis for evaluating the platform's reliability as an information source:
Metric | Data Point | Source |
Speed advantage of false information | False news spreads 6× faster than true news on Twitter | Vosoughi et al., Science (2018) |
State-sponsored fake accounts removed (2018–2022) | Over 310,000 accounts across documented operations | Twitter Transparency Reports (archived) |
State-sponsored accounts since transparency programme ended | No public data: programme suspended mid-2023 | Stanford Internet Observatory (2023) |
Trust & Safety staff reduction post-acquisition | 80% reduction from 2,000 to 150–300 staff (est.) | NYT (2023); The Verge (Nov 2022) |
Election disinformation increase post-acquisition (6-month comparison) | 1,500% increase | Stanford Internet Observatory (2023) |
State-linked accounts reinstated after acquisition | 70,000+ accounts | Platformer / Casey Newton (2023) |
Daily active user year-on-year decline (Jan 2026) | 15.2% | Backlinko (Jan 2026) |
EU Digital Services Act compliance | Formal investigation opened; non-compliance findings | EU DSA Compliance Report (2023) |
AI-generated false war claims in 2 weeks of Iran-US conflict (Mar 2026) | 18 false claims tracked by NewsGuard; 62 fake accounts identified by Clemson | NewsGuard (Mar 2026); Clemson Media Forensics Hub (Mar 2026) |
Twitter has measurable structural weaknesses that make it susceptible to coordinated manipulation. The platform has historically been one of the fastest channels for the spread of false information. The safety infrastructure designed to counter state-sponsored manipulation has been significantly reduced, and public accountability data has been suspended. These are not matters of opinion; they are documented data points.
Qualitative Assessment
Qualitatively, the assessment must be more nuanced. Twitter is neither a purely reliable source of information nor purely a disinformation machine. It is a tool, one that different actors use for different purposes simultaneously.
Dimension | Evidence Based Assessment |
As a source of breaking news | High value but high risk. Real, verified information often appears on Twitter faster than anywhere else. The same is true of false information. The platform does not reliably distinguish between the two before amplification. |
As a platform for official government communication | Governments, including the US, Iran, Russia and others, use Twitter as a formal channel for diplomatic signals, policy announcements and even military notifications. This makes it a genuine primary source, but one that must be treated as such, not as neutral editorial content. |
As a tool of influence operations | Documented for the governments of Russia, China, Iran, Saudi Arabia, Venezuela and, in covert cases confirmed by independent researchers, the US. The platform has been used systematically for fake-account operations by every major state actor with the resources and intent to do so. |
Since October 2022, under Musk ownership | The structural protections against coordinated inauthentic behaviour have been substantially reduced. The Civic Integrity Policy was eliminated. Safety staffing was cut by an estimated 80%. The Information Operations Disclosure Programme was suspended. The EU opened a formal non-compliance investigation. These facts do not establish intent, but they do establish documented consequences: measured increases in disinformation spread. |
For individuals and journalists | Twitter remains important for reaching relevant sources and real-time information, but independent verification of any claim is essential before acting on it. The documented rate of AI-generated disinformation in active conflict zones in 2026 means that even footage and images cannot be assumed authentic. |
Based entirely on the evidence presented in this analysis, the following conclusions are supported by the cited data:
1 | Do not trust Twitter as a reliable, unfiltered source of truth during breaking news events or conflicts. The platform's own research shows false information spreads faster than true information and the safety infrastructure to address this has been significantly weakened. This is a structural, not a political, finding. |
2 | Twitter remains a valuable platform for monitoring what governments, militaries and institutions say officially. Verified accounts of governments and agencies represent genuine primary sources. The platform is an important window into official positions, even when those positions are themselves propagandistic. |
3 | No single country or actor has a monopoly on misuse of the platform. Documented operations on Twitter have been conducted by state actors across the political spectrum, including democracies as well as authoritarian states. Any analysis that identifies only certain governments as problematic while ignoring documented operations by others is itself a form of selective framing. |
4 | The appropriate response is verification, not avoidance. Twitter cannot simply be ignored; too much significant information appears there first. But every piece of information from the platform, particularly during conflict or elections, should be treated as unverified until confirmed by independent, primary sources. In the AI era of 2026, this applies especially to images and videos. |
THREE KEY FACTS TO REMEMBER | 1. False news spreads 6 times faster than true news on Twitter, as shown in MIT's peer-reviewed study of 126,000 stories (Science, 2018). 2. Between 2018 and 2022, Twitter removed over 310,000 state-sponsored fake accounts across multiple countries, then stopped publishing the data. 3. In March 2026, an active armed conflict is being accompanied by an AI-powered disinformation campaign on Twitter, with fake satellite imagery, fabricated explosion footage and deepfake videos documented by independent researchers within days of the conflict's start. |
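The "treat everything as unverified" principle above can be partially automated. As a minimal illustration only, and not a tool used by any organisation named in this report, the sketch below hashes a downloaded media file and checks it against a locally maintained list of SHA-256 digests of clips that fact-checkers have already debunked. The digest list here is hypothetical (it contains only the digest of the bytes `b"test"` so the example is self-checking); in practice such a list would be built from published debunk databases.

```python
import hashlib

# Hypothetical set of SHA-256 digests of already-debunked clips.
# In a real workflow this would be populated from fact-checker
# publications, not hard-coded. The single entry below is the
# digest of b"test", included so the sketch can be exercised.
KNOWN_DEBUNKED = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def sha256_of_file(path: str) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_debunked(path: str) -> bool:
    """True if this exact file matches an already-debunked clip."""
    return sha256_of_file(path) in KNOWN_DEBUNKED
```

Note the deliberate limitation: exact-hash matching only catches byte-identical re-uploads. Re-encoded, cropped, or AI-altered footage produces a different digest, which is why serious verification workflows layer perceptual hashing, reverse image search, and provenance checks on top of a step like this.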
Sources & References
Every fact in this article comes from one of the following sources. All are publicly available. No classified material is used in this analysis.
Author / Organisation | Document / Report | Year | Key Data |
US Senate Select Committee on Intelligence | 'Russia's Use of Social Media', Vol. 2 (Bipartisan) | 2019 | IRA account counts, tweet volumes, US election targeting , all confirmed by Twitter data under subpoena |
Mueller, R.S. / US Department of Justice | Report on the Investigation into Russian Interference in the 2016 Presidential Election | 2019 | Confirmation of IRA and GRU social media operations |
US DOJ | US v. Abouammo et al., indictment and conviction | 2019 / 2022 | Twitter insider espionage for Saudi Arabia; Khashoggi connection |
US DOJ | US v. Shahram Poursafi, IRGC assassination plot | 2022 | Iran's use of social media intelligence to target US officials on US soil |
Twitter Inc. | Information Operations Disclosures archive (transparency.twitter.com) | 2018–2022 | All state-sponsored account removal data. Discontinued 2023. |
Stanford Internet Observatory / Graphika | 'Unheard Voice': US Military CENTCOM Operations | Aug 2022 | 150+ CENTCOM fake accounts on Twitter; AI-generated photos; foreign-language personas |
Stanford Internet Observatory | Election Integrity Partnership Report | 2023 | 1,500% increase in election disinformation post-acquisition |
Vosoughi, Roy & Aral | 'The Spread of True and False News Online', Science, Vol.359(6380) | 2018 | False news 6× faster than true news; MIT Media Lab, 10-year study of 3M users |
Berger, J.M. & Morgan, J. | 'The ISIS Twitter Census', Brookings Institution | 2015 | ISIS account metrics: 46K–90K accounts; 40K+ recruits from 80 countries |
Oxford Internet Institute | Computational Propaganda Inventory; India report | 2019 | India election Twitter ops; global eight-country disinformation study |
Human Rights Watch | 'Iran: Online Surveillance and Targeting of Protesters' | Nov 2022 | IRGC tracking of #MahsaAmini activists via Twitter; 20+ documented arrests |
Clemson Univ. Media Forensics Hub | IRGC fake account network during Iran-US conflict 2026 | Mar 2026 | 62 fake accounts; fake Latina/Scottish/Irish personas; AI-generated content |
NewsGuard | False claims tracking, Iran-US War 2026 | Mar 2026 | 18 false Iranian war claims in 2 weeks of conflict |
Euronews | 'Iran state media ramps up disinformation campaign' | Mar 6 2026 | Specific fake videos catalogued; AI-doctored satellite images |
Rolling Stone | 'The Latest Weapon in the Iran War Is AI-Generated Misinformation' | Mar 2026 | AI deepfake spread; Rumman Chowdhury quote (former X AI Ethics Lead) |
Columbia Journalism Review | 'The Khamenei Problem: Twitter's Inconsistent Enforcement' | Nov 2020 | Documented asymmetry in enforcement across world leaders' accounts |
Business of Apps | X Revenue and Usage Statistics 2026 | 2026 | Revenue $2.5B (2023–24); 80% staff cut; historical user data |
Backlinko | X/Twitter Statistics 2026 | Jan 2026 | 561M MAU (Jul 2025); 15.2% daily active user YoY decline; staff headcount |
FireEye / Mandiant | 'Suspected Iranian Influence Operation' | Oct 2018 | First Wave 1 Iran operation attribution; 770 accounts |
Airwaves Project | AFRICOM airstrike civilian casualty analysis | 2022 | 200 AFRICOM Twitter announcements with unacknowledged civilian casualties |
Global Engagement Center (US State Dept) | Annual Reports on Foreign Disinformation | 2019–2021 | US counter-Iran operations; @USAdarFarsi reach; GEC activities |
NATO StratCom COE | Social Media as a Tool of Hybrid Warfare | 2018 | Syria 45-minute Twitter coordination case study |
RAND Corporation | Lessons from the Campaign Against ISIS: OIR Strategic Communications | 2019 | OIR 14-posts/day rate; 27-language coalition; coordination architecture |
BBC Arabic / The Guardian | Investigation: US military social media accounts in Syria | 2020 | SOCOM-contracted 1.8M-follower Syrian opposition Twitter network |
Lawfare Blog | 'The Soleimani Strike: Legal and Strategic Dimensions' | Jan 2020 | Nuclear-adjacent Twitter crisis analysis |
UN Special Rapporteur (Callamard) | Statement on Soleimani killing and international law | Jan 2020 | Legal context of cultural-sites tweet as potential war crime threat |
Freedom House | 'Freedom on the Net: Iran' (annual) | 2018–2023 | IRGC censorship; Twitter reach inside Iran despite internet blocks |
CFR (Council on Foreign Relations) | Global Conflict Tracker; Iran-Houthi backgrounder | 2025–2026 | Timeline of Iran-US kinetic conflict and proxy operations |
EU Digital Services Act Compliance Report | X (Twitter) DSA compliance findings | 2023 | Formal non-compliance investigation; cooperation failures |
DataReportal | Digital 2025 Global Overview Report | Jan 2025 | 37.5% of X users aged 25–34; demographics; 29.1% of internet users use X monthly |
All sources above are publicly available, and no classified material is used in this Open Source Intelligence (OSINT) analysis. Where data is estimated or methodology differs between sources, this is noted in the text. For errors or additional sourcing, consult the original documents cited above.