In recent months, Pakistan’s social media space has seen a surge of AI-driven disinformation, with dozens of doctored videos and images pushed by accounts linked to the country’s security establishment. Investigations by journalists and analysts reveal that many of these viral posts are AI-generated “deepfakes” designed to inflame tensions and spread false narratives. For example, a fabricated AI-generated clip that appeared to show Air Chief Marshal V.S. Singh criticizing India’s Tejas fighter program was traced to an X account with ties to the Pakistani military.
Similarly, fact-checkers exposed a digitally altered report about former Indian Army Chief V.P. Malik, after Pakistani propaganda handles circulated a clip of Malik spouting fake communal rhetoric. These AI manipulations often mimic real news formats, such as TV reports or social media clips, but they exhibit uncanny audiovisual glitches, including repetitive eye movements, clipped speech, and misaligned lip-sync, that betray their synthetic origin. In each case, fact-checkers have debunked the posts and found no credible sources for the claims. Even so, Pakistan’s intelligence agencies have become adept at spreading disinformation on social media to generate widespread confusion, maliciously target their opponents, and disrupt peace.
Two of the most prominent victims of this campaign are international journalists Yalda Hakim and Palki Sharma Upadhyay. In early December 2025, a deepfake video of Sky News anchor Yalda Hakim interviewing Imran Khan’s sister Aleema Khanum went viral on Pakistan-linked feeds. The manipulated clip had Aleema purportedly calling Pakistan’s army chief a “radicalized Islamist” and blaming him for seeking war with India. Sky News and Hakim immediately denounced the post as a “terrifying” fake, noting that the real interview contained no discussion of India or its army. Hakim herself tweeted that “this clip is completely fake” and was never aired by Sky News, and the network clarified that Pakistani politicians had “twisted her words” to fabricate the content. This incident makes clear that the Pakistani military, through ISPR-run social media accounts, is willing to target anyone, including international figures, who is seen as sympathetic to Imran Khan.
Similarly, AI-manipulated videos of Firstpost’s Palki Sharma Upadhyay (and other Indian journalists) have been circulating in Pakistani networks. These fake clips show Palki promoting Indian government-backed financial investment platforms or questioning diplomatic protocols for the Indian Prime Minister’s visit to Jordan. The videos that went viral on Pakistani social feeds were entirely AI-generated; no actual broadcast or transcript exists. Such examples suggest that state-backed Pakistani social media profiles have intensified their disinformation against India, with the sole aim of stoking civil-military tensions and religious divisions in India through AI-generated fake clips.
These incidents fit a pattern identified by specialists: clusters of Pakistani X/Twitter accounts, often posing as Indian news consumers, synchronously amplify AI-faked stories to sow discord. One analysis found that the key accounts all listed “Pakistan” as their location and exhibited coordinated behavior, classic signs of a troll-farm operation. Nor do these disinformation networks confine themselves to the India-Pakistan rivalry. Inside Pakistan, journalists and civilians have also been targeted by AI hoaxes. Dawn columnist Sheraz Khan documented, for example, an account called “PakVocals” that posted a deepfake video of Pakistani reporter Benazir Shah supposedly caught partying in a nightclub, with the clear intent of discrediting her through a lurid, false story.