By: Sonny Zulhuda

Right after speaking on “Neurotech, Surveillance and the Future of Sovereignty: Who Owns the Signals of the Mind?” at the Digital Rights in Asia Pacific 2025 (DRAPAC 2025) in Kuala Lumpur in August last year, I was approached by the organiser and asked to speak at the NetMission Academy 2026 online. This is an annual event on Internet governance and policy that gathers participants from across Asia.
The event website provides an introduction to the programme which I find very interesting:
“The NetMission Academy is a series of online sessions designed to equip youth with the knowledge and discussion skills to participate in Internet policymaking. The program includes 10 sessions and a closing ceremony. These 1.5-hour workshops will be interactive discussion sessions, with expert guest speakers selected for each topic. This event is not a one-sided learning program. Selected fellows will be assigned to thematic groups to research and prepare for the respective session as hosts. All fellows are expected to actively contribute to discussions. We are looking for young visionaries who want to make a change for the betterment of the Internet and our community. If you are a student currently enrolled in any tertiary institution within the Asia Pacific, aged between 18 and 30, and are interested in how the Internet impacts society, this is the program for you.”
I would absolutely recommend this programme to my cyberlaw students and to anyone who thinks that the Internet should work better and be supported by better policies!
This time, I was asked to speak about online trust, framed around the elements of cybersecurity, privacy and safety. The session, held online on Thursday, 29 January 2026, discussed in particular how AI is reshaping the security and trust of our digital lives. “From deep fakes and scams to data scraping and questions of digital consent, to industry-led standards for verifying content authenticity, new risks and governance challenges are emerging across the Asia-Pacific. This session explores how institutions, governments, and industry actors can respond to safeguard dignity, security, and trust online.”
My 20-minute speaking slot focused on several points pre-determined by the organiser:
- What are the social and security impacts of AI-driven scams and sexual deepfakes? What tools and policies can best protect individuals’ dignity and safety?
- If AI-driven attacks show how technology can be abused against people, what happens when personal data is taken without clear consent and used to build these same systems?
- What does “informed consent” mean, and what legal or ethical frameworks can protect individuals from exploitative data use?
- Alongside questions of data use, we’re also seeing challenges of trust. How do we even know what content is real or manipulated in an AI-driven online space?
- How can industry collaboration balance free expression with combating misinformation?
- What shared responsibilities should governments, industry, and civil society take to safeguard dignity, security, and trust online?
In a podcast-style conversation, I was joined by two other speakers, Mel Migriño from Gogolook (Philippines) and Raunaq Sharma from The Dialogue (India). Jenie Benedetta and Vinayak Bharadwaz moderated the session.
The dialogue session was very lively. I recorded some of the questions raised by this active group of Asian youth as follows:
Fatima Munir: “What does ‘informed consent’ mean, and what legal or ethical frameworks can protect individuals from exploitative data use?”
Taruna Kaur Bamrah: “Deepfake videos hide frame manipulations better than images due to motion and compression – unlike static pixel clues, temporal inconsistencies are tough to spot in pictures or images. What policy or tech gaps block scalable frame forensics for youth-led fact-checking, or be it senior end users, and how can this be fixed or what can be the approach towards this?”
Maulidya Alhidayah: “In the context of cross-border personal data transfer, how does individual sovereignty as the owner of the data actually work? Then, what democratic or governance channels are available for civil society to meaningfully participate in decisions regarding cross-border personal data flows?”
Fatima Munir: “What shared responsibilities should governments, industry, and civil society take to safeguard dignity, security, and trust online?”
Shweta: “Are current consent frameworks sufficient to protect vulnerable groups, such as minors, activists, or marginalised communities, from non-consensual AI training and deepfake misuse? Like the AI algorithm and infrastructure talked about by Mel”
Suhani: “Does online governance require international collaboration and global agreements, especially for the protection of children online? Big technology giants have international presence, and I can not imagine this can be properly tackled in isolation”
Evasana_Pradhan: “Cybercrime laws often emphasise surveillance, control, and punishment, while effective cybersecurity relies on transparency, trust, and vulnerability disclosure. Is it possible that countering cybercrime can unintentionally weaken cybersecurity by discouraging ethical researchers? And how can cybercrime laws be designed to protect privacy and improve online safety without undermining trust or security research?”
Shiang Yen Eow: “Given that most major platforms are designed outside the Asia-Pacific region, how can regional actors meaningfully influence platform architecture to reflect local cybersecurity risks, cultural norms, and privacy expectations?”