Does Thailand Need to Regulate Artificial Intelligence?

FRIDAY, JANUARY 05, 2024

Artificial intelligence (AI) can be a powerful force for good in Thailand, improving healthcare, reducing carbon emissions, bridging educational divides, and more. But AI also brings new challenges around safety, privacy, and gaps in skills and competitiveness.

To help put AI on the right track, True Corporation recently organized the AI Gets Good seminar, which brought together representatives from government, academia, and the business sector to discuss how to ensure that AI creates positive impacts in Thai society.

Regulation: Boon or Bane?

Chaichana Mitrpant, Executive Director of the Electronic Transactions Development Agency (ETDA), said AI applications are now in widespread use across the world, bringing both benefits and adverse impacts. For example, AI has already helped turn social media into an "echo chamber": users are exposed mainly to one-sided information and opinions that resonate with their existing beliefs and insulate them from opposing views.
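To make the echo-chamber mechanic concrete, below is a minimal, purely illustrative sketch; it is not any platform's actual ranking system, and the topics and engagement counts are invented. It shows how engagement-based personalization can narrow what a user sees: content similar to past likes rises to the top, while dissenting content sinks.

```python
# Illustrative sketch of engagement-driven ranking (not a real platform's algorithm).
# Topics and engagement counts are made-up assumptions for demonstration only.

user_history = {"politics_side_a": 5, "sports": 1}  # past engagement counts by topic

candidates = ["politics_side_a", "politics_side_b", "sports", "science"]

def score(topic: str) -> int:
    # Rank purely by how often the user has engaged with the same topic before:
    # content the user already agrees with floats up, dissenting content sinks.
    return user_history.get(topic, 0)

feed = sorted(candidates, key=score, reverse=True)
print(feed)  # ['politics_side_a', 'sports', 'politics_side_b', 'science']
```

In a real system each new click would feed back into the engagement counts, making the ranking more one-sided over time; that self-reinforcing loop is what the "echo chamber" label refers to.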

AI may also disrupt the economy, create new societal divides or affect national security. Hence a growing chorus of calls to regulate this technology. Some AI developers have even called for a pause in AI development while studies on AI's adverse impacts are being conducted.

Work on AI regulation is the most advanced in Europe. The European Union is set to introduce measures to monitor and govern AI's emerging risks. The move has its supporters but has also drawn criticism from people who think AI governance will hinder rather than promote AI development.

In Thailand, AI governance remains "vague." Chaichana admitted that Thai authorities still lacked experience in AI application and governance because both are relatively new to Thai society.

"If we rush to govern a technology at an inappropriate time, we may abort the technology before it is born. So, we have decided to mainly monitor AI regulatory guidelines from developed countries," he explained.

Writing a New Playbook for AI

ETDA has also set up the AI Governance Clinic (AIGC) as a key mechanism for developing its AI governance framework. The clinic analyzes the AI risks of each applicable project and specifies indicators for both the positive and negative impacts of AI development. Where risks are high, regulators will roll out measures to monitor and control them in the project concerned.

"AIGC promises to foster ethical AI applications," Chaichana said. "In some cases, self-regulation is enough. But in some other cases, it may prove inadequate and thus we may need to draft a law to govern it. AI governance, as a result, will need both self-regulation and laws to create a space that enables technology development and fosters economic security while addressing social impacts."

Recently, the European Parliament and the Council of the EU reached a provisional agreement on the Artificial Intelligence Act to protect fundamental rights and democratic principles in Europe. At the same time, the agreement supports the business sector's efforts to develop AI further, in pursuit of the goal of establishing Europe as the world's AI leader.

This milestone act is set to become the world's first comprehensive legislation on AI. It details the rationale, principles, AI governance guidelines, cautions, and penalties for misuse of the new technology, going much further than the statements on the need for ethical AI seen in the US and the UK.


AI as a System

Asst. Prof. Jittat Fakcharoenphol, Vice-Chair of the Computer Engineering Department at Kasetsart University and translator of "The Ethical Algorithm" by Michael Kearns and Aaron Roth (Thai edition published by Salt), commented that it is particularly challenging to lay down ethical guidelines for AI applications. The task, he emphasized, requires clear operational procedures.

For example, the governance of an ethical algorithm must rest on clear definitions and components. If an algorithm is expected to be unbiased and non-discriminatory, drafters of the governance guidelines must define the term "bias" very clearly. Research has shown that there are many definitions of bias and that the word can be interpreted in various ways. In Jittat's view, governance guidelines should identify biases clearly and set the right frameworks to reduce bias-related risks in the data used for AI training.
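To illustrate why "bias" needs a precise definition before it can be governed, here is a minimal sketch, using invented numbers, of two standard fairness criteria from the research literature that can disagree with each other: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups). Neither is an official ETDA or seminar metric; they are common textbook examples.

```python
# Minimal sketch of two standard statistical fairness criteria.
# Predictions, labels, and group memberships below are invented for illustration.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        picked = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(picked) / len(picked)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(preds, labels, groups):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos) if pos else 0.0
    return abs(tpr(0) - tpr(1))

# Toy data: binary model outputs, ground-truth labels, and a protected attribute.
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print("Demographic parity gap:", demographic_parity_gap(preds, groups))        # 0.25
print("Equal opportunity gap:", equal_opportunity_gap(preds, labels, groups))  # 0.0
```

In this toy case the same model looks unfair under one definition (a demographic parity gap of 0.25) and fair under the other (an equal opportunity gap of 0), which is exactly why governance guidelines have to state which definition of bias they mean.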

"AI is a system," he said. "Relevant parties should understand this fact for them to properly learn about, use and govern AI. Because it is a system, AI-related problems should be addressed together, not separately."


Legitimacy of Data Usage

Montri Stapornkul, Head of Personal Data Protection at True Corporation Public Company Limited (True), described AI as modern infrastructure with a crucial role to play at both the enterprise and national levels. The utilization of AI can be divided into two parts: technology usage and data usage. Montri believes governance should be in place for data usage to protect the rights of data owners.

"True issued a charter for ethical AI to ensure that our data usage is transparent, legitimate, and in line with stated purposes. Our ethical AI application is reflected through our respect for data owners' rights,"  Montri said.

According to him, AI systems have three types of risks: 1) Risks related to Data Privacy Policy; 2) Risks related to Data Usage Conditions; and 3) Risks related to Public Understanding of AI.

To provide assurances that consumers will enjoy maximum benefits from AI applications, True has drawn up "True's Ethical AI Charter," which comprises four pillars: 1) Good Intent; 2) Fairness and Bias Mitigation; 3) Data Privacy; and 4) Transparency.

"Data is the foundation for AI applications. Although Thailand has not yet had a dedicated AI law, existing legislations had already laid down clear frameworks on data usage," Montri said, referring to the Personal Data Protection Act B.E. 2562 and the National Broadcasting and Telecommunications Commission's Announcement on the Protection of Telecom Service Users' Rights to Privacy and Communications Freedom. "Metaphorically, AI development is like raising a child. When we prepare ethical guidelines for AI, it will progress well and become a good member of our society."

His opinion was echoed by Raewat Tankittikorn, Head of Channel Excellence at True. Raewat said that whether AI turns out to be ethical depends on the kind of data used for its training, with the purpose of its use being another key factor.
