In recent years, the proliferation of deepfake technology has raised serious concerns across the globe, with Pakistan now grappling with its widespread misuse. Deepfakes—AI-generated media that convincingly manipulates images, videos, and audio—have become a tool for misinformation, defamation, and political manipulation. A particularly disturbing example surfaced on social media platform X (formerly Twitter) on November 28, 2024, where multiple users shared a deeply misleading video targeting Pakistani journalist Matiullah Jan. The video falsely depicted Jan as being dishonorably discharged from a military academy, a narrative completely fabricated through deepfake technology.
The video, which originally featured a fictional scenario from a television serial, was edited with such precision that it appeared to show Jan in an embarrassing situation, undermining his credibility and reputation. The malicious clip quickly went viral, generating more than 200,000 views and 169 reshares, with several posts circulating it as fact and accusing Jan of disgraceful conduct. This kind of digital manipulation can have a devastating impact, especially when it involves high-profile figures, as it erodes public trust and spreads dangerous falsehoods.
Matiullah Jan is a prominent journalist and an outspoken critic of various power structures in Pakistan, including the military. This video appears to be part of a targeted smear campaign designed to discredit him and tarnish his image. Deepfake technology, with its increasing sophistication, is proving to be a potent weapon in the hands of those who wish to manipulate public perception. The disturbing ease with which videos like these can be created and disseminated highlights the growing challenges posed by deepfakes in the digital age.
This is not an isolated incident. Over the past few years, deepfake videos and audio clips have been used against several high-profile politicians, journalists, and celebrities in Pakistan to malign their reputations. One example is a deepfake video targeting former Prime Minister Imran Khan that spread fabricated claims about his personal life and relationships; it was shared widely on social media and generated massive controversy before being debunked. Similarly, a deepfake audio clip falsely attributed to opposition leader Shehbaz Sharif was circulated to suggest his involvement in corrupt practices, once again damaging his public image and credibility.
Punjab Information Minister Azma Bokhari was likewise targeted by a deepfake video depicting a sexualized, fabricated scenario. The video was designed to discredit her as one of Pakistan’s few prominent female political leaders, aiming to undermine her authority and diminish her influence in the political landscape. It spread rapidly across social media platforms, shocking her and severely damaging her public image. “I was shattered when it came into my knowledge,” Bokhari revealed in a public statement.
Such videos not only harm the individuals involved but also contribute to a culture of distrust, where the authenticity of any media content is questioned, even when it is legitimate.
These incidents exemplify the destructive power of deepfakes in the digital world. The technology has put convincing fake content within reach of ordinary users armed with sophisticated software tools. The real concern, however, is that such videos are not being created by just anyone: the level of technical expertise required to produce high-quality deepfakes, especially ones as convincing as those circulating in Pakistan, suggests they are being orchestrated by organized groups or individuals with access to significant resources.
In the case of the Matiullah Jan video, it is highly likely that the creators behind such deepfakes have access to advanced AI tools and resources that ordinary people simply do not. Deepfake creation involves complex algorithms, training sets, and video manipulation skills that require a certain level of technical expertise. It is not something that can easily be achieved with off-the-shelf software. The rise of deepfakes in Pakistan, especially with their evident political and social motives, suggests that powerful groups with substantial backing—be it political entities, state actors, or organizations with vested interests—are behind these smear campaigns.
The technology itself is relatively new, and its implications are still unfolding. Experts warn that the increasing sophistication of deepfakes could pose an existential threat to the integrity of public discourse, particularly in countries like Pakistan, where media freedom is already under strain. When deepfakes are used to spread false information about prominent figures—especially journalists, politicians, and celebrities—they can significantly alter the course of political events, sow discord, and create confusion among the public. In many cases, the damage done by these fabricated videos is irreversible, even when they are later debunked.
The impact of these deepfake videos is further compounded by the speed at which they spread on social media platforms. Once a video goes viral, it can reach millions of people in a matter of hours, and many viewers may not take the time to verify its authenticity. By the time a deepfake is proven false, its damage has already been done—public figures find their reputations tarnished, and the credibility of the media and institutions is undermined.
Moreover, Pakistan’s relatively low media literacy rates and limited access to reliable fact-checking resources mean that a large portion of the population may fall prey to deepfake videos and audio clips. In a country with a diverse and often polarized political landscape, where public figures are frequently vilified by rival factions, the introduction of deepfake technology has further complicated efforts to foster a fair and transparent media environment.
The implications of this technology extend beyond just the realm of politics and celebrity. Deepfakes have the potential to be used in more nefarious ways, such as blackmail, fraud, and even incitement to violence. For instance, fake videos of military or law enforcement officials could incite public unrest, while fabricated videos of political leaders could spark violent protests or exacerbate ethnic or religious tensions. In a volatile political environment like Pakistan, where social unrest is not uncommon, the stakes are particularly high.
To address the growing menace of deepfakes, governments, tech companies, and civil society organizations must collaborate on solutions that can mitigate the risks posed by this technology. In Pakistan, authorities need to invest in advanced detection tools that can identify and debunk deepfake videos before they go viral. Public awareness campaigns about the dangers of deepfakes and the importance of media literacy should be launched, empowering citizens to critically evaluate the content they consume online.
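As a rough illustration of what such detection tooling involves, the sketch below outlines a frame-level screening pipeline: frames are sampled from a suspect video, each is scored by a classifier, and the clip is flagged for human review if the average score crosses a threshold. This is a minimal sketch, assuming OpenCV for frame extraction; the score_frame function and the filename are hypothetical placeholders standing in for a trained deepfake detector and a real input, which an actual system would supply.

```python
import cv2  # assumes the opencv-python package is installed


def score_frame(frame) -> float:
    # Hypothetical placeholder: a real tool would run a trained
    # deepfake classifier here and return a fake-likelihood in [0, 1].
    return 0.0


def screen_video(path: str, sample_every: int = 30, threshold: float = 0.7) -> bool:
    # Sample one frame every `sample_every` frames, score each sample,
    # and flag the video if the average score exceeds the threshold.
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    return bool(scores) and sum(scores) / len(scores) > threshold


if __name__ == "__main__":
    # "suspect_clip.mp4" is an illustrative filename, not a real artifact.
    if screen_video("suspect_clip.mp4"):
        print("Flagged for human fact-checking review")
    else:
        print("No automated flag raised")
```

Automated flagging of this kind is only a first filter; any clip it surfaces would still need verification by trained fact-checkers before being publicly labeled a deepfake.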
The rise of deepfake technology poses a serious threat to the integrity of public discourse in Pakistan. While the technology is still evolving, the ramifications of its misuse are already being felt across the country. The case of Matiullah Jan is just one example of how deepfakes can be weaponized to undermine individuals, tarnish reputations, and destabilize political landscapes. It is clear that more needs to be done to combat this growing threat, from both a technological and societal standpoint. The onus is on both the government and the public to ensure that Pakistan does not fall victim to the destructive potential of deepfake media.
The views expressed in this article are the author’s own and do not necessarily reflect Coverpage’s editorial stance.