

Accountability: Cracking the code: Tackling the complexities of AI governance

This article first appeared in Digital Edge, The Edge Malaysia Weekly on January 22, 2024 – January 28, 2024

There have been renewed calls for improved artificial intelligence (AI) governance and legislative oversight to address the ethical application of the technology, including limiting bias. Countries such as China and the US, as well as the European Union, have drawn up AI regulatory frameworks. In the latest of a series of initiatives to shape the development of AI, the US, the UK and several other countries, including Singapore, Australia, Germany and Chile, signed on to a new framework in November 2023 to create AI systems that are “secure by design”.

In Malaysia, the Ministry of Science, Technology and Innovation is developing a code of ethics and governance — to be ready by this year — that would form the basis of AI regulation for the country. The government has outlined the principles of responsible AI in the Malaysia AI Roadmap 2021–2025. These include fairness; reliability; safety and control; privacy and security; inclusiveness; the pursuit of human benefits and happiness; accountability; and transparency.

Those familiar with AI governance say the overriding goal should be protecting the stakeholders, beginning with consumers and individuals. Asia School of Business CEO, president and dean Sanjay Sarma says issues such as customer privacy, liability, and safety and security need to be addressed.

“A second stakeholder category is creators. We are well and truly into the creator economy through platforms such as YouTube and TikTok. Perhaps we need ways for AI to acknowledge where its generative capacity is sourced from,” he says, acknowledging that, though this may prove difficult, at the very least creators should be able to mark their work with a “do not mine” request that reputable AI engines will have to respect.
