
By: Sonny Zulhuda
Recently, two journalists reached out to me for comments on the piece of news quoted below, to which this blog post responds:
All artificial intelligence-generated content may have to be labelled as “AI-generated”, says Communications Minister Fahmi Fadzil. Fahmi said the government was considering the move under the Online Safety Act 2024, which is expected to come into force by the end of this year, Bernama reported. He said the move was crucial to address the misuse of AI, especially on social media platforms for purposes such as scams, defamation, and identity impersonation.
A Regulatory Trend?
This idea reflects a recent trend in global AI regulation that can be traced, among others, to the EU, China and the US through their respective legislative initiatives.
In Europe, the EU Artificial Intelligence Act 2024 prescribes that AI systems posing risks of impersonation or deception are subject to transparency requirements: for example, users must be made aware when they are interacting with a chatbot. In addition, deployers of AI systems that generate or manipulate image, audio or video content must ensure that the content is identified as AI-generated, which could include visual labelling or digital watermarking.
In China, the Cyberspace Administration of China, together with China's Ministry of Industry and Information Technology and a few other agencies, published new rules in 2024 on labelling content created by artificial intelligence or other synthetic methods. The new regulations are scheduled to take effect towards the end of 2025.
According to these rules, any content generated by AI systems must be marked in two ways: implicit and explicit labelling.
The former involves embedding AI-generation information in the digital file's metadata so that computer systems can detect it. The latter requires clear, visible signs that ordinary users can easily see and understand, indicating that the content was made by AI, whether in written text, audio recordings, still images, video content, or virtual reality scenes and environments.
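The dual labelling scheme described above can be illustrated with a short sketch. This is purely for illustration and is not drawn from the Chinese rules or any actual technical standard: the function name and the metadata field names (`aigc`, `generator`) are assumptions, and real implementations would embed the implicit label in the file format itself (e.g. image metadata or a watermark) rather than in a plain dictionary.

```python
import json

def label_content(text: str, model_name: str) -> dict:
    """Attach both an explicit and an implicit AI-generated label.

    Illustrative only: field names are hypothetical, not taken from
    any regulation or standard.
    """
    return {
        # Explicit label: a visible marker that ordinary readers can see.
        "body": f"[AI-generated] {text}",
        # Implicit label: machine-readable metadata travelling with the file.
        "metadata": {
            "aigc": True,            # flag marking the content as AI-generated
            "generator": model_name, # which system produced it
        },
    }

labelled = label_content("Sample paragraph produced by a model.", "example-model-v1")
print(json.dumps(labelled, indent=2))
```

The point of the two layers is that the explicit label informs human viewers, while the implicit label lets platforms filter or flag unlabelled AI content automatically, which is precisely the compliance burden on licensees discussed later in this post.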
In the United States, lawmakers are working on several proposed laws that remain in the early stages of development. One of these is the Artificial Intelligence Research, Innovation and Accountability Act of 2024. If passed, it would require companies operating online platforms to publicly disclose and explain when they use generative artificial intelligence systems on their platforms.
All the above initiatives are designed to enforce a “risk-based approach,” which means that the rules and requirements would be stricter for AI systems that pose greater potential risks or have more significant impacts on society.
A Timely Move by the Government
So, the move by the Government is arguably both crucial and timely. It is a legal policy that will serve both the industry and the community: we get a more accountable and transparent digital industry while at the same time improving protection for the public and users of Internet services.
But we have to bear in mind that the Online Safety Act 2025 (which is currently not yet in force) regulates only specific segments of the industry, i.e. most of the licensees under the Communications and Multimedia Act 1998. A large segment of AI content creators and digital services therefore falls outside the targeted licensees under the Online Safety Act and would not be subject to the new regulation.
The Benefits and Risks of the Labelling Regulation
As mentioned above, the labelling requirement may benefit both the industry and the public at large. For the industry, it will enhance transparency and accountability; abiding by such a rule may soon become a distinguishing factor among AI service providers.
As for the licensees, compliance means taking part in creating a transparent digital environment and raising their level of ESG compliance. The risk lies in the added burden of filtering AI content that lacks the necessary labelling: licensees will need to equip both their workforce skills and their technical capacity to ensure that only adequately labelled AI content is transmitted or broadcast.
Enforcing it across Various Platforms, beyond Malaysia
This initiative does not stand on its own. It interacts with the existing mechanisms under the Communications and Multimedia Act 1998, and is to be further supported by the new Online Safety Act 2025. The latter has a specific objective, i.e. to impose duties on prescribed licence holders in respect of harmful content. Various parties now exploit AI to produce and disseminate such harmful content, so the labelling regulation is simply another response to this development. The 2025 Act is meant to apply with extra-territorial effect, which means there should be efforts to enforce it against content hosted outside Malaysia's borders.
It goes without saying that in addressing these problems, we have to consider multiple approaches: both preventive and punitive; both educational and legal; and both technical and administrative measures such as content moderation, industry standards and public-private collaboration. And last but not least, we must capitalise on diplomatic mechanisms to have the law enforced beyond Malaysia.
I always believe that all available measures have to be activated concurrently.
References:
More Than 120 AI Bills Currently Processing in Congress (Government Technology, 18 September 2024): https://www.govtech.com/policy/more-than-120-ai-bills-currently-processing-in-congress.
AI Watch: Global regulatory tracker – China (White & Case, 29 May 2025): https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china.
Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress (Congress.gov, 6 April 2025): https://www.congress.gov/crs-product/R48555.
European AI Act: Mandatory Labeling for AI-Generated Content (Imatag, 9 April 2024): https://www.imatag.com/blog/ai-act-legal-requirement-to-label-ai-generated-content.