Pakistani authorities have registered three separate cases under the Prevention of Electronic Crimes Act (PECA) 2025.
The Shahdara, Shahdara Town, and Kot Lakhpat police stations have launched investigations into these incidents, emphasizing the growing role of cyber laws in curbing digital misinformation. The registered cases cite the following sections:
- Section 500: Defamation
- Section 504: Intentional insult with intent to provoke breach of peace
- Section 505(1)(c): Statements conducive to public mischief
These legal provisions underscore how seriously defamation, propaganda, and misinformation in the digital space are treated under the law.
The complainant submitted a Facebook link as evidence, prompting authorities to investigate the alleged defamatory content.
The Rise of Deepfake Technology and Its Threat to Public Figures
The rapid evolution of artificial intelligence and deepfake technology has raised concerns globally.
Deepfakes, which use AI-powered algorithms to generate hyper-realistic fake videos, have been increasingly weaponized to manipulate public opinion, damage reputations, and spread misinformation.
The case of Maryam Nawaz highlights a growing trend where political figures become targets of AI-generated propaganda.
The Pakistan Telecommunication Authority (PTA) and Federal Investigation Agency (FIA) are working in collaboration to curb cyber-related offenses under PECA 2025.
Authorities have been increasing efforts to monitor social media platforms, detect deepfake content, and take legal action against perpetrators.
- Strengthening Cyber Surveillance – Law enforcement agencies are deploying advanced AI-driven tools to identify and track the spread of deepfake videos and malicious content.
- Collaboration with Social Media Platforms – Government agencies are working closely with Facebook, Twitter, and YouTube to remove harmful and defamatory content swiftly.
- Public Awareness Campaigns – Authorities are emphasizing digital literacy and responsible use of AI-powered tools to prevent misuse.
Challenges in Regulating AI-Generated Content
- Detecting Deepfakes: AI-generated videos are becoming more sophisticated, making it difficult for traditional detection tools to identify fake content (see the illustrative sketch after this list).
- Defining Misinformation vs. Satire: Not all AI-generated content is malicious. Distinguishing between satire, parody, and intentional defamation remains a gray area in cyber law.
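To illustrate why detection is technically hard, the sketch below shows one common research approach: sampling frames from a video and scoring each with a binary real-vs-fake image classifier, then averaging the scores. This is a minimal illustration under stated assumptions, not a description of any tool deployed by the PTA or FIA; the ResNet-18 architecture, the weights file name, and the frame-sampling interval are all hypothetical choices.

```python
# Illustrative sketch only: frame-level deepfake scoring with a fine-tuned
# binary CNN. Model choice, weights file, and threshold are assumptions.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard ImageNet preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path: str) -> nn.Module:
    """Load a ResNet-18 with a 2-class head (real vs. fake) from fine-tuned weights."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

@torch.no_grad()
def score_video(model: nn.Module, path: str, every_n: int = 15) -> float:
    """Return the mean 'fake' probability over every n-th frame of the video."""
    cap = cv2.VideoCapture(path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)           # shape (1, 3, 224, 224)
            p_fake = torch.softmax(model(x), dim=1)[0, 1].item()
            probs.append(p_fake)
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

if __name__ == "__main__":
    detector = load_detector("deepfake_resnet18.pt")   # hypothetical weights file
    print("mean fake probability:", score_video(detector, "clip.mp4"))
```

The limitation this sketch makes visible is the one the bullet describes: a frame-level classifier only flags artifacts it was trained to recognize, so as generation models improve, such detectors must be continually retrained and can still miss well-crafted fakes.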