Rheizzielle Jhoy Badilla

When AI Deceives: The Deepfake Dilemma and the Growing Risks of Artificial Intelligence

For Nanay Esther, it was just another quiet morning. She picked up her phone and was casually scrolling through videos online when one clip caught her attention: a seemingly authentic street interview featuring two young Filipino students. Styled like the many viral "TikTok man-on-the-street" interviews, the video showed the boys defending a controversial political figure against allegations of corruption and dismissing the accusations as politically motivated. The students' speech was well articulated, their setting convincingly urban and local, and their message resonated with many.

The video quickly racked up more than 7 million views and drew hundreds of thousands of interactions across social media. But the twist? Neither the students nor the interview was real. It was a "deepfake," a hyper-realistic video generated using artificial intelligence and machine learning technologies, designed to mislead the average viewer.

 

The Rise of AI-Generated Misinformation

Unlike humorous deepfakes or exaggerated parodies, this video was disturbingly credible. The students' accents, uniforms, and surroundings gave the illusion of authenticity. The production quality and attention to detail revealed the creator's intent: not amusement, but deception.

The video was shared on Facebook by a senator who has been openly vocal in his support of the administration in question. While social media users quickly flagged the video as fake, the official stood by his post, claiming that whether or not the video was AI-generated, the "point" it made was still valid. His refusal to retract the post drew criticism from citizens and officials alike, who warned that spreading misinformation erodes public trust, especially when it comes from people in power.

 

Post-Truth Politics and the Power of AI

Critics were divided on whether the official was genuinely deceived or knowingly participated in spreading disinformation. Some believed he was misled by the video's realism, while others argued that he knew it was fake but shared it to push a political narrative. If the latter is true, it reflects a dangerous shift toward post-truth politics, where facts become secondary to emotional or ideological persuasion. It mirrors tactics seen globally, such as the former American president's earlier promotion of an AI-generated video falsely depicting Gaza as a lavish, peaceful metropolis.

In the Philippines, where social media is the primary battleground for political discourse, such incidents are increasingly common. The country consistently ranks among the highest in the world for time spent on social media, yet regulatory oversight remains weak. This makes it fertile ground for AI-driven propaganda and misinformation.

 

The Broader Dangers of AI

The incident serves as a stark example of how artificial intelligence can be weaponized in ways that affect not only individual lives but also the democratic integrity and legal systems of nations like the Philippines. As AI technology rapidly advances, its potential for misuse becomes more pronounced, raising pressing concerns about its intersection with rights, justice, and the rule of law.

One of the most immediate threats is the invasion of privacy. In the Philippine context, where data protection laws like the Data Privacy Act of 2012 aim to safeguard personal information, AI-driven surveillance and facial recognition technologies can still exploit citizens' data without proper consent or transparency, violating privacy rights and enabling state or corporate overreach. Algorithmic bias and discrimination, moreover, threaten to deepen existing social inequalities. If AI embedded in government services, such as welfare distribution, policing, or recruitment, is trained on biased data, it may replicate those patterns, undermining the constitutional guarantee of equal protection under the law and marginalizing vulnerable communities.

AI-generated misinformation, such as deepfakes and manipulated content, also poses a serious challenge in the Philippines, where disinformation has already influenced elections and public opinion. Such manipulation undermines democratic participation and violates citizens' right to truthful information, protected under freedom of expression. Economically, the rise of automation through AI threatens large-scale job displacement, particularly in industries crucial to the Filipino workforce, such as business process outsourcing (BPO), manufacturing, and administrative services, raising the need for updated labor laws and social safety nets.

Compounding these issues is the lack of clear accountability mechanisms for AI decisions. When individuals are denied services or targeted based on opaque algorithmic processes, existing legal frameworks offer limited avenues for redress or appeal, posing a challenge to due process and access to justice. On the international front, the development of AI-powered autonomous weapons raises ethical and legal questions for the Philippines under international humanitarian law, especially as it navigates regional security tensions. Even more speculative risks, such as superintelligent AI acting counter to human interests, demand foresight in policy and regulation to prevent existential threats.

Lastly, AI's psychological and social effects, especially on Filipino youth, warrant concern, as AI companions and algorithmically curated content may shape behavior, perception, and emotional health in ways not yet fully understood, potentially conflicting with laws protecting children's rights. In this complex landscape, Philippine lawmakers, regulators, and civil society must urgently address the evolving implications of AI to ensure that its deployment aligns with national laws, democratic principles, and the rights of every citizen.

 

The Need for Vigilance and Regulation

Senator Dela Rosa's promotion of an AI-generated political message highlights the urgent need for stronger digital literacy, responsible media practices, and robust government regulation to address the risks posed by emerging technologies. While artificial intelligence offers transformative benefits across sectors, from healthcare to industry, it also presents serious dangers when misused to mislead, manipulate, or malign.

In the Philippines, several legal frameworks already exist that can be applied to AI-related offenses. The Cybercrime Prevention Act of 2012 (RA 10175) penalizes cyber libel, computer-related forgery, and crimes committed through information and communication technologies, such as the creation or spread of deepfake videos. The Revised Penal Code addresses the unlawful dissemination of false information (Article 154) and the falsification of documents (Articles 171 and 172), both of which could apply to deceptive AI-generated content. The Data Privacy Act of 2012 (RA 10173) protects against the unauthorized processing and use of personal data, which is particularly relevant to AI tools that exploit facial recognition, profiling, or emotional analysis without consent. Under the Omnibus Election Code, the spread of false or misleading propaganda, including AI-generated political deepfakes, can constitute an election offense. The Anti-Photo and Video Voyeurism Act of 2009 (RA 9995) may apply to AI-generated sexual deepfakes, especially those created without consent using a person's likeness. The Consumer Act of the Philippines (RA 7394) prohibits deceptive sales practices, which could include the use of AI chatbots or synthetic media to mislead consumers. Lastly, the Intellectual Property Code (RA 8293) addresses content that infringes on copyrighted material or misuses a person's voice, image, or likeness.

These laws, though not originally designed for AI, provide a foundational legal basis for addressing its misuse. However, their effective enforcement depends on updated regulatory guidance, technical capacity, and public awareness. Ultimately, as AI becomes more embedded in society, the challenge lies not just in developing the technology, but in governing it ethically and legally to uphold democratic values, protect individual rights, and maintain public trust.

Legal Gaps and the Need for Reform

Currently, there is no dedicated AI law in the Philippines. However, the government, through the Department of Information and Communications Technology (DICT), has proposed AI governance frameworks and national strategies. Still, legal clarity and stronger regulation are urgently needed to cover AI-specific risks like deepfakes, algorithmic bias, and autonomous decision-making.

 

References

Artificial Intelligence and the Law in the Philippines. (2025, March 28). Asia Business Law Journal. Retrieved June 26, 2025, from https://law.asia/ai-law-philippines-government-strategies-privacy/

Bilyonaryo News Channel. (2025, June 16). Dela Rosa on AI-Generated Video: What Matters is the Message | Newsfeed@Noon [Video]. YouTube. https://youtu.be/3IyL-MK5XfQ

Domino, J. (2021, November 11). The Destabilization Experiment: Filipinos Are Left to Pick Between Repressive Social Media Laws — Or None at All. Rest of World. https://restofworld.org/2021/philippines-social-media-regulation/

Enriquez, J. M. (2025, April 16). Philippine AI Governance: Time to Shift Gears. Fulcrum. https://fulcrum.sg/philippine-ai-governance-time-to-shift-gears/

Flores, D. N. (2025, June 16). Dela Rosa Draws Flak Over AI Video of Students Opposing Sara Duterte's Impeachment. Philstar Global. https://www.philstar.com/headlines/2025/06/16/2450990/dela-rosa-draws-flak-over-ai-video-students-opposing-sara-dutertes-impeachment

Hall, R. (2025, March 6). 'Trump Gaza' AI Video Intended as Political Satire, Says Creator. The Guardian. https://www.theguardian.com/technology/2025/mar/06/trump-gaza-ai-video-intended-as-political-satire-says-creator

Karl, T. (2025, March 4). The Deepfake Dilemma: How Cybercriminals Are Using AI to Deceive, Defraud, and Destroy Trust. New Horizons. https://www.newhorizons.com/resources/blog/deepfake-scams

Lofranco, M. V. (2024, April 4). The Legal Implications of Artificial Intelligence in Business Operations. CBOS Business Solutions Inc. https://cbos.com.ph/the-legal-implications-of-artificial-intelligence-in-business-operations/

SAJ Leonen: Despite Risks, Legal System Should Keep Abreast with AI Developments. (2024, August 6). Supreme Court of the Philippines. Retrieved June 26, 2025, from https://sc.judiciary.gov.ph/saj-leonen-despite-risks-legal-system-should-keep-abreast-with-ai-developments/

Sturt, B. (2025, June 24). Philippine Senator's Deepfake Post Raises Fresh Disinformation Concerns. The Diplomat. https://thediplomat.com/2025/06/philippine-senators-deepfake-post-raises-fresh-disinformation-concerns/