Minister Josephine Teo Panel Discussion at AI Action Summit
MINISTER FOR DIGITAL DEVELOPMENT AND INFORMATION MRS JOSEPHINE TEO’S PANEL DISCUSSION “FUELLING TRUSTWORTHY AI INNOVATION THROUGH COLLABORATION BETWEEN INDUSTRY & GOVERNMENT” AT AI ACTION SUMMIT ON 10 FEB
Question: So, Minister Teo, let me start with you. What is the state of play in Singapore on the trust and safety conversation?
Minister: The way we try to approach innovation in Singapore is to acknowledge that within government, some things are well known and some things are not well understood. In particular, when it comes to AI and which use cases and applications will have the most receptivity in the market, that is something we think the private sector knows a lot more about. So, with that as a starting point, we wanted very much to get the private sector's viewpoints, and also to understand what kinds of signals they respond to best. On that basis, I would say that the private sector partnership has turned out to be very encouraging.
There are two areas in particular where I feel that we are seeing the highest level of returns. The first is that when we look at companies and industries, how they are thinking about using AI, the level of ambition has gone up a lot. The level of ambition did not come about because government said you should be ambitious. The level of ambition came about because the private sector was able to demonstrate the power of the tools and also the kind of expertise that could be made available.
So, if I give an example, many people in this room would have heard of Singapore Airlines. When they started thinking about using AI, I think they themselves thought that it was about making themselves more successful, which is something that most companies would want. But over time, we noticed that their ambition went a lot higher, and they started articulating a vision where they hoped to contribute to the reshaping of the entire aviation industry. Now, that is a very interesting way of thinking about the use of a particular technology, with themselves as a use case.
The second example I would like to share is how workforce capabilities are being developed, where the private sector's contributions have been instrumental. I have always believed that when we think about what signals employees respond to, it is one thing for the government to say that you should upskill, reskill, and try to figure out how AI can work for you. It is quite different when the employers say that it is something you need to do. So, this is where I think the biggest contributions have been made.
Now, let me segue to how this plays out in testing. As we see more AI applications being brought to the market, the risks are becoming more prominent. So, for example, in the finance sector, we see more and more use of AI for credit approvals. The question is, if you are a bank customer, how can you be assured that this is not tilted against you, and that there isn't some bias built in that makes it harder for you? That is when you need a proper process for robust testing for fairness.
So we are quite pleased that, through the AI Verify effort, we started to develop a software toolkit accompanied by a testing framework. That has gone well, and then with generative AI, we decided to launch a sister project that we call Project Moonshot. Today we see that private sector involvement has expanded a great deal, and now we are ready to take the next step. That is, you have testing providers, and they need to be able to find where the demand is, so we are launching a Global AI Assurance Pilot to match the testing providers with the demand. That is something we hope will demonstrate even further how collaboration between the private sector and the government can help to build trust in AI tools and in how AI is being implemented in the real world and in industries, and give people the assurance that it is something they can use.
Question: Minister, what is already an incredibly challenging technical frontier becomes even more interesting and challenging when you bring it into an international conversation and try to align different governments' approaches. How is Singapore thinking about its partnership with other countries, in particular the safety network? I know Singapore is also one of the leaders around the world in engaging smaller states, which might not have the kinds of capabilities of other governments.
Minister: I think this is one area where we believe we can make a contribution. We have lessons learned from our own experiences in developing public good use cases, as well as risk management measures, that we are happy to share with colleagues around the world. So, together with Rwanda, we developed an AI Playbook for Small States. This is one way in which we can contribute, but I think there is also value in countries collaborating on testing and evaluation.
I want to talk about the Network of AI Safety Institutes. What I am most encouraged by is that it isn't just a platform where the AI safety institutes are exchanging notes; I think they are taking joint action. One good example is our collaboration with Japan. We were looking at all of these Large Language Models, and we were saying that within our own countries, a lot of applications are being developed on top of the LLMs, and these applications could have safety implications. For example, you may be using an application to give medical advice, but giving that advice in Japanese, or in Korean, or, in the Singapore context, in the Malay language. How do we know that a model that performs well in English will also perform well in our vernacular languages? So, the testing ability is something that we are able to bring to bear. Together with Japan and several other countries, we tested some of the models against 10 languages. That was quite an exciting development. I think it is just an example of how much more scope there is for us to collaborate and to try to advance AI implementation across the board. There will be many more instances where we do not have the answer, and other countries have an equally challenging time dealing with the same issues. If we were to put our resources together, combined with the resources and expertise that the private sector is able to bring, I think we have a better chance of overcoming the issues that we face.
In response to Sasha Baker’s comment on the challenges of navigating AI regulations globally and the need for consistent international standards
Minister: May I just add to what Sasha (referring to Sasha Baker, Head of National Security, OpenAI) has said. I think it is incredibly important that we recognise that it is not always possible to harmonise the standards, but we should at least try to make them interoperable. Making them interoperable is key both to promoting trust in our respective jurisdictions and to enabling cross-border innovation. A lot of the teams that you are working with are not located in one place, and a lot of the data that you are trying to bring together in order to draw insights and design other kinds of innovation comes from different jurisdictions. So, I would echo that interoperable standards really are an important area for us to be thinking about.