MCI's response to PQ on Risk Assessment by Providers of Artificial Intelligence Technologies
Parliament Sitting on 8 May 2024
QUESTION FOR WRITTEN ANSWER
*26. Assoc Prof Razwana Begum Abdul Rahim asked the Minister for Communications and Information (a) whether Singapore's National AI Strategy 2.0 includes a requirement for providers of AI technologies to complete a risk assessment prior to making the technology publicly available; (b) if so, who undertakes the assessment; and (c) what risks are assessed.
Answer:
Singapore's National AI Strategy 2.0 identifies a trusted ecosystem as a key enabler of robust AI development.
In fact, Singapore was a first mover, launching our Model AI Governance Framework back in 2019 to recommend best practices for addressing governance issues in AI deployment. We continue to update it to address emerging risks, including by launching a Framework for Generative AI this year. Meanwhile, the Government also provides practical support for organisations seeking to manage risks in the development and deployment of AI, including open-source testing toolkits such as AI Verify, which help organisations validate their AI systems’ performance against internationally recognised governance principles like robustness and explainability.
These frameworks provide a useful baseline for the Government to partner industry in managing and assessing AI risks across the ecosystem. In the finance sector, financial institutions are guided by sector-specific AI governance guidelines, such as MAS’s Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT), which align closely with the earlier-mentioned AI governance frameworks. Many companies have supplemented these with additional internal guidelines to oversee AI development, examples of which can be found in the PDPC’s Compendium of Use Cases for the Model AI Governance Framework. For example, the Development Bank of Singapore (DBS) has implemented its own Responsible Data Use framework to ensure its AI models comply with legal, security and quality standards, and uses risk assessment tools such as the probability-severity matrix.
Besides enhancing our governance approach domestically, we collaborate with international partners to build a trusted environment for AI worldwide. For instance, we have conducted a joint mapping exercise between AI Verify and the US’s AI Risk Management Framework to harmonise approaches and streamline compliance burdens for organisations deploying AI across different jurisdictions. We will continue to seek out such opportunities and adapt our approach in tandem with technological developments.