Opening Keynote by Minister Josephine Teo at Asia Tech x Artificial Intelligence (ATxAI) Conference
Her Excellency Ingrida Šimonytė, Prime Minister of the Republic of Lithuania,
Fellow Ministers, Colleagues and Friends,
Welcome and thank you for being here.
-
Thank you very much for making time to be here with us in Singapore. Last year, on this very stage, I shared with you Singapore’s hope to use AI for the Public Good. A few months later, we launched the refresh of our National AI Strategy.
-
This was followed by a commitment to invest more than 1 billion dollars to grow compute capacity and AI skills in our workforce. Many organisations in Singapore have expanded their AI Centres of Excellence or are setting up new ones.
-
Almost daily, our local media reports on some new AI application. These applications cut across multiple settings, from crime-fighting and healthcare delivery to the maintenance of public transport assets like buses and trains, and even construction safety. They show enthusiasm, imagination, and growing confidence in the use of AI in various forms.
-
With increasingly widespread adoption, it is no wonder that concerns have also sharpened. Concerned citizens want more protections against AI risks. Concerned businesses worry that more protection equals less innovation.
-
Here in Singapore, we hope to avoid such zero-sum thinking. To fulfil our vision of AI for the Public Good, we’ve always believed that AI governance is as important as AI innovation.
-
As in so many areas, good governance is not the enemy of innovation. On the contrary, good governance enables sustained innovation.
-
This is the reason why, back in 2019, even before we launched our first National AI Strategy, we had developed a Model AI Governance Framework. Well before we developed our own large language model, the Southeast Asian Languages in One Network or SEA-LION, we developed AI Verify, a testing framework and software toolkit for responsible AI use.
-
Many of you know AI Verify to be one of the first in the world to allow AI developers to test for a range of harms such as unauthorised data access or systemic bias. It is by no means a perfect tool, but it does fill a gap between being worried and actually doing something about it. This is perhaps why, since AI Verify was open-sourced, many developers have been motivated to help improve it.
-
To quote President Tharman Shanmugaratnam who addressed us on Wednesday night, “regulating AI must be the art of the possible, the attainable, and the next best”.
-
In the spirit of always seeking “the next best”, later in my presentation, I want to share with you another testing tool we’re launching today, to complement AI Verify. It is an MVP – a minimum viable product. And, as its name implies, a “moonshot” – an undertaking to challenge ourselves.
-
But please allow me to keep you in suspense for just a little longer, to explain our current thinking regarding AI Governance, which I hope can be of some use to you. One important set of tools for governance is laws and regulations that serve the public interest and help society meet governance objectives.
-
In governing the digital domain, we have introduced new laws to protect personal data, counter misinformation and disinformation spread online, better manage cyber risks and egregious content, and curb online criminal activity.
-
We have also signalled our intention to introduce new legislation to better safeguard the security and resilience of our digital infrastructure, help victims of online harms seek redress from their perpetrators, and address the problem of deepfakes.
-
But we have not introduced an overarching AI law and have no immediate plans to do so. Why?
-
One reason is that some of the harms associated with AI can already be addressed by existing laws and regulations. Take, for example, AI-generated fake news that is spread online. Regardless of how the fake news is produced, as long as it is in the public interest to debunk it, our laws already allow us to issue correction notices to alert people.
-
What about AI models used to support hiring? For starters, many employers here do not yet intend to use AI for recruitment, mostly because they worry about biased outcomes.
-
Regardless of how bias comes about, with or without AI, existing guidelines on fair employment practices, together with upcoming workplace fairness legislation, will hold employers accountable.
-
Another reason for not yet introducing an AI law is that, in some instances, an update of existing laws is the more efficient response. Take, for example, “sextortion”, where someone threatens to distribute intimate images of a victim.
-
We can all agree that even if an image was not real but rather a “deepfake”, the distress caused is enough for it to be outlawed. That was precisely what we did when we updated the Penal Code to introduce a specific offence of “sextortion”. We ensured that “sextortion” would be illegal, with or without AI.
-
The examples I shared suggest that we are not defenceless against AI-enabled harms. In AI governance, we are not starting at ground zero. However, we must also have an attitude of humility in recognising that it is one thing to deal with the harmful effects of AI, but quite another to prevent them from happening in the first place, through proper design and upstream measures.
-
To borrow from road safety, it is in our interest to implement the equivalent of traffic rules, lights and signs, speed limits, seatbelts and airbags, all of which work together to protect road users. But when cars were first sold to the masses, we didn’t understand all the risks. Nor did we know all the measures that could minimise them.
-
The successful identification, development and validation of risk-mitigating measures is essential. And there is no short-cut. However, we believe that if we persist, we will have a much stronger basis for new laws and regulations, one that is grounded in evidence and results in more meaningful and impactful AI governance.
-
This conviction underpins our efforts to develop a second set of tools for AI Governance. It is the proverbial task of figuring out the nature of the beast, through high-quality research into what can tame it and bring out its goodness.
-
Last year, we published a discussion paper on key areas of concern in Generative AI. In December, we gathered some of the world’s top minds at the Singapore Conference on AI.
-
They came up with a list of 12 thought-provoking questions that are well worth our research attention. Many of them concerned reliability, trustworthiness, fairness, and safety.
-
Most recently, at the AI Seoul Summit, Yoshua Bengio’s team released a report on the Safety of Advanced AI, highlighting the problems that arise when AI malfunctions.
-
Singapore is a part of all these conversations to keep up with the latest understanding and to be ready to enhance AI Governance here. In parallel, we are growing our governance capabilities.
-
For example, we are building up the Centre for Advanced Technologies in Online Safety. Much of its research focus is on AI-generated content, such as misinformation, online hate, and discrimination. At its launch, I saw prototypes that can detect toxic language, deepfake videos and even scam messages on WhatsApp.
-
We are strengthening our Digital Trust Centre (DTC), which carries out research on AI testing and evaluations, as well as data safety through Privacy Enhancing Technologies. The DTC has been designated as Singapore’s AI Safety Institute.
-
It will be part of an international network of AI Safety Institutes working to address gaps in the science behind AI. But it is not enough for these capabilities to reside solely within governments or developers of AI.
-
This brings me to the third set of tools that we’re developing for AI Governance. In a fast-evolving field like AI, regulations are a necessary but insufficient response.
-
Our objective is to provide a safer environment for businesses and citizens to use AI, and enable AI innovations. We commend model developers that have invested resources to prevent their models from generating harmful content.
-
For example, Anthropic is training Claude to be “helpful, honest, and harmless”. But for the entire ecosystem to be uplifted, we must reach beyond model developers.
-
This is because AI, being a general-purpose technology, can be used in an infinite number of ways, each with its own context. It serves the public interest when organisations and individuals using AI understand its advantages as well as its limitations.
-
It is therefore important that we equip them with the right attitude, capabilities and tools. This is why our Model AI Governance Framework provides practical guidance on what we expect as safeguards. It sets a baseline for companies developing or deploying AI systems, regardless of their size or resources.
-
Yesterday, Deputy Prime Minister Heng announced that we will expand the Model AI Governance Framework to include Generative AI. The expanded Framework continues our ecosystem approach to AI governance, and highlights 9 dimensions that policymakers should consider holistically.
-
To turn some recommendations into concrete actions, IMDA and Microsoft have announced their collaboration on Content Provenance and Responsible AI. This is an example of how government and industry can partner each other to develop practical measures.
-
We have also developed our testing toolkit, AI Verify, which I mentioned earlier. One Singapore start-up, XOPA.ai, offers an AI-driven HR solution that helps employers identify the job applicants with the best fit.
-
They were an early tester of AI Verify. Today, XOPA.ai uses AI Verify to conduct fairness checks, and to ensure that their AI systems are free from bias.
-
What happens when AI Verify users like XOPA.ai now want to use Generative AI as well? Today, I am very excited to share with you Project Moonshot – our latest effort, which extends the AI Verify toolkit from Traditional AI to Generative AI.
-
Project Moonshot is one of the world’s first open-sourced toolkits for Generative AI, bringing benchmarking, red-teaming, and recommended testing baselines together on one common platform. It can test both foundation models and AI applications built on top of these models. It helps organisations building AI systems to test and compare results more easily, so as to identify weaknesses that they can fix.
-
We have validated Project Moonshot with members of the AI Verify Foundation, including DataRobot, Resaro, and SingTel.
-
And today, the Foundation is releasing Project Moonshot into open beta. But that’s not all. We are going even further.
-
The AI Verify Foundation and MLCommons, two of the leading communities in AI safety and testing, will also come together to develop a common testing benchmark for Large Language Models. This benchmark can be used to test their basic safety and trustworthiness against key indicators such as hateful, toxic and violent content, regardless of the context or use cases.
-
More importantly, this partnership is a significant step towards global harmonisation of benchmarking standards, which is something we hope will become the norm rather than the exception.
-
Colleagues and friends, Project Moonshot is “another next best” in our pursuit of good AI Governance. As its name suggests, we believe it is ambitious, yet achievable.
-
To summarise Singapore’s approach to AI Governance, we believe in three sets of tools to support our vision of AI for the Public Good.
-
These are:
(i) Appropriate laws and regulations;
(ii) Understanding of risks and commitment to research in mitigation; and
(iii) Practical guidance and tools.
-
This leads me to the final point I will make for this session, and that is the importance of international cooperation. There have been many commendable efforts, and let me acknowledge some of them.
-
The AI Safety Summit, hosted by the UK and then South Korea. The UN High-Level Advisory Board on AI, whom we were very pleased to host in Singapore this week. ASEAN, which has published a Guide on AI Governance and Ethics. The US and China, who have met and are continuing to meet to discuss AI Governance.
-
May I say again how important it is to also hear the voices of smaller players who must equally deal with AI’s impact on our economies and our societies.
-
Singapore is particularly attuned to their concerns, because we are ourselves a very small state and have, for over 30 years, been the convenor of the Forum of Small States, a grouping of 108 UN Member States.
-
Some of you already know that together with Rwanda and other partners, we are developing an AI Governance playbook for small states.
-
This playbook aims to help members of the Forum adopt AI to meet our specific needs and circumstances. It is our hope that by doing so, we promote AI for the public good, not just for Singapore, but also for the world.
-
Thank you very much for listening.