Friday, July 25, 2025

AI Misuse in Bangladesh: Rising Disinformation and Election Threats

The misuse of Artificial Intelligence (AI) in Bangladesh has turned the technology into a powerful tool for spreading disinformation, threatening political, social, and personal spheres. AI-generated fake images and videos are being used to defame political leaders, women activists, and business figures, with women facing the worst of this abuse. With the 13th National Parliamentary Election on the horizon, concerns are growing that AI-driven disinformation could sow widespread confusion and potentially incite mob violence. Despite the escalating threat, the government has yet to establish robust measures or legal frameworks to tackle AI misuse effectively.

Government’s Position and Limitations:

Foyez Ahmad Taiyyab, Special Assistant to the Chief Adviser of the interim government on Information and Technology, stated that the Cyber Security Act includes provisions to address AI misuse. “Law enforcement agencies can act under this law, but holding everyone accountable is challenging,” he said. He noted that previous government requests to Meta and YouTube for content removal were only partially successful. “We’ve asked Meta to strictly enforce their community guidelines, but they are hesitant to invest adequately in this area,” Taiyyab added. He attributed the public’s vulnerability to AI-generated fakes to widespread digital illiteracy in Bangladesh.

Milestone Tragedy: A Disinformation Case Study:

On July 21, a Bangladesh Air Force FT-7 BGI fighter jet crashed into Milestone School and College in Dhaka’s Uttara, a tragedy that gripped the nation. Soon after, AI-generated videos claiming to depict the crash went viral on social media, amplified by their realistic and dramatic nature. Fact-checking group Rumor Scanner confirmed these videos were fabricated with Google’s Veo AI video-generation tool, with errors such as misspelled names and inconsistent building structures exposing their artificial origins. The incident highlights how AI can be used to manipulate sensitive events and mislead the public.

Election Concerns: 

As the national election approaches, AI-driven disinformation is surging at an alarming rate. According to Dismislab, over 65 AI-generated political videos were published in June and July, garnering more than 20 million views. These videos falsely portrayed women, workers, and ordinary citizens as supporters of political parties like Jamaat, BNP, or Awami League. Deepfake videos targeting female candidates have already sparked digital violence, with fears that this could intensify during the election.

Data from Cyber and Gender-Based Violence in Bangladesh shows that 75% of the victims of AI-driven digital violence this year were women, many of them politically active. A survey by ActionAid Bangladesh found that 60% of women had faced AI-generated harassing content, 60% of which was sexually explicit, hindering their political participation and causing psychological harm.

Expert Insights:  

Professor Md. Abdur Razzaq, Chairman of the Computer Science and Engineering Department at Dhaka University, said, “While AI offers immense potential, its misuse demands strict regulations and awareness. We need coordinated efforts from tech experts, monitoring teams, and policymakers.” 

Shameem Sarkar, Head of Technology at a London-based multinational, warned, “AI-driven disinformation could multiply exponentially before the election, especially targeting women candidates. Legal frameworks, digital literacy, and stronger fact-checking platforms are essential to counter this.” 

Impact of AI Misuse:

Accessible AI tools like HeyGen and DeepFaceLab enable anyone to create realistic fake content, distorting political narratives and undermining public trust. After the Milestone tragedy, over 45 AI-generated videos fueled rumors of a planned attack, twisting a national tragedy into fodder for conspiracy. Such disinformation thrives because negative content spreads faster than factual corrections.

Proposed Solutions:  

Experts recommend a multi-faceted approach to curb AI misuse. First, AI-generated fake content should be classified as a criminal offense. Second, advanced detection tools and blockchain-based verification systems could identify deepfakes. Third, social media platforms like Facebook, X, and YouTube must collaborate to control AI content. Professor Razzaq emphasized, “A strong monitoring team would deter perpetrators.” 

Sarkar urged the government to negotiate with social media platforms to enforce stricter AI content removal policies. “If platforms can restrict ad promotions elsewhere, Bangladesh should demand similar controls. Without action now, disinformation could spiral out of control before the election,” he cautioned.

Conclusion:

AI misuse poses a grave threat to Bangladesh’s information security and democratic processes. With the national election looming, urgent legal, technological, and awareness-driven measures are needed to combat AI-driven disinformation. Failure to act could destabilize the country’s political and social fabric, with far-reaching consequences.


