Closing Speech by Minister Josephine Teo at the Second Reading of the ELIONA Bill
SECOND READING CLOSING SPEECH BY MINISTER JOSEPHINE TEO ON THE ELECTIONS (INTEGRITY OF ONLINE ADVERTISING) (AMENDMENT) BILL, 15 OCT 2024
1. Mr Speaker, I thank Members for their unanimous support of the Bill. In fact, Members had expressed concerns about deepfakes even before the debate on this Bill. We have heard questions in this House about how we can better tackle impersonation scams. Earlier this year, Members Dr Tan Wu Meng and Ms Mariam Jaafar shared concerns about the impact of deepfakes on democratic processes and elections.
2. Taken together with today’s debate, there is clear consensus on the pressing need to deal with the threat of digitally manipulated online content because of what is at stake: the integrity of our elections.
3. Members have also sought clarifications on several issues. I will try my best to address them.
4. Some Members have asked how the ELIONA Bill compares to other governments’ attempts to tackle deepfakes. Sir, we do take reference from other countries, but it is more important to be fit-for-purpose. We have therefore scoped the law to be appropriate for Singapore’s context.
5. Earlier, I mentioned how South Korea bans all political campaign videos that use AI-generated content 90 days prior to an election. Brazil has also banned synthetic electoral propaganda.
6. In response to Ms Joan Pereira’s question, we did consider a temporary ban on all deepfake content of a political nature during elections. After careful deliberations, we decided this was not necessary.
7. There is nothing inherently wrong if AI is used, for example, to enhance the background of political communications materials. The key problem is with digitally generated and manipulated content that misrepresents a candidate’s words or actions. The Bill therefore targets such content.
8. Members have also asked for the rationale behind the proposed duration of the ban.
9. Previously, Ms He Ting Ru asked about the recourse for political candidates affected by deepfakes on Cooling-Off Day or Polling Day in elections. The ELIONA Bill provides the recourse.
10. But we know that purveyors of deepfakes will not constrain themselves to just the Cooling-Off Day or Polling Day. If we are to effectively uphold the integrity of elections, the protections under the Bill must be available when election activities are the most intense, and mischief makers most active. This is usually the election period, which is also defined in section 61S of the Parliamentary Elections Act, and section 42R of the Presidential Elections Act. It starts from the issuance of the Writ of Election and ends after the close of polling.
11. Mr Yip Hon Weng and Ms He suggested that the proposed duration of the ban should be longer. I thank them for their suggestion and share their concerns. Practically speaking, however, even if we wanted ELIONA to take effect a set number of days before an election, we cannot do so until the Writ is issued and Polling Day is known. This is why we will introduce a Code of Practice to require specified social media services to implement safeguards beyond the election period specified in the Bill. This allows for calibration of the speed of response and the resource requirements outside of election periods.
12. Let me now deal with the types of content the Bill will and will not cover.
13. Mr Zhulkarnain Abdul Rahim and Mr Vikram Nair have asked why the ban was not scoped wider, to cover online election advertising that misrepresents persons other than candidates. For example, deepfakes that falsely show key influencers or artistes endorsing a candidate.
14. We considered this carefully. The question is, how influential must these other persons be for the prohibition to apply? Where do we draw the line and who decides? As a political contest develops, the dynamics may also change. How about persons who were previously not influential but suddenly gained prominence?
15. Similarly, Ms Pereira asked why the new measures only apply to content that explicitly depicts candidates, and not content that indirectly misrepresents them. One such example is an AI-generated podcast that discusses their past. This problem has existed even without AI or digital manipulation. For example, through coffeeshop talk of people who claim to know something about the candidate. However, the difference is that deepfake content is very realistic and hence persuasive. When such content directly depicts candidates doing or saying something, the audience is more likely to accept it as reality. In contrast, hearsay information or third-party accounts like coffeeshop chatter tend to be discounted, or at least viewed with some scepticism.
16. There are also practical difficulties in extending the coverage of the ELIONA Bill outside of content directly depicting candidates’ words and actions. For example, how do we ascertain the degree of misrepresentation? The better alternative is to encourage a culture of truthfulness, where persons of influence and candidates themselves step forward to clarify to the public if they have been misrepresented through deepfake content. Voters too must be vigilant and turn to trusted sources such as our mainstream media.
17. Sir, in our review of how manipulated content has affected elections globally, we have seen examples of content being edited using non-AI means to very realistically misrepresent electoral candidates.
18. Furthermore, traditional media editing software is now beginning to adopt AI technologies, such as Photoshop’s introduction of generative AI capabilities to add and remove content, with photorealistic results. This further blurs the line between content manipulated via AI technology and content edited through traditional means.
19. This is why ELIONA does not exempt from prohibitions content that has been partially edited by AI or other more traditional technology. For instance, if one manually writes a speech but uses AI generation to produce a video of a candidate reading it, the video will be considered AI-manipulated and will be prohibited. This addresses the point raised by Mr Nair.
20. Mr Nair also asked if the Bill covers content about a candidate that does not directly relate to the constituency which the candidate is contesting in. The answer is yes.
21. A piece of content does not have to reference the specific constituency which a candidate is contesting in for it to be considered online election advertising. We will make a holistic assessment of what constitutes online election advertising, and whether the online content that misrepresents a candidate has the potential to unduly influence the behaviour of voters in the election.
22. Members including Mr Louis Ng, Mr Yip, and Mr Zhulkarnain have asked how we will treat online election advertising designed to entertain, such as satire or memes.
23. As I mentioned in my opening speech, content that does not mislead and deceive people about a candidate’s actual speech or actions will not be banned. Besides the question of whether it is digitally generated or manipulated, the law requires us to consider these questions: If the public saw or heard the content, would they believe it is the candidate being depicted in the content? Would they also believe that the candidate did or said that thing in real life?
24. Memes and satire are already part of our online space. Most of such content will show caricatures of individuals, and a reasonable person will be able to distinguish fact from fiction. This also applies to other online content, like online political campaign posters.
25. Mr Zhulkarnain asked what our approach will be if the offending content was labelled. In other words, declared to have been digitally generated or manipulated.
26. Sir, labelling does not automatically exempt content from being prohibited by the ELIONA Bill. A label may not be noticed by everyone. There are also ways to remove labels from content before re-circulation.
27. What matters are the four criteria I have shared previously: the content constitutes online election advertising; it is digitally generated or manipulated; it is realistic; and it shows the candidate doing what he did not do, or saying what he did not say. If these criteria are met, the content will be prohibited even if labelled.
28. Some Members asked for confirmation of whether the proposed ban covers private or domestic communications, such as messages on WhatsApp and Telegram.
29. The elections rules are not intended to police private or domestic communications. When deciding whether a communication is of a private or domestic nature, the Returning Officer, or RO, will consider various factors, such as the number of individuals in Singapore who can access the content; whether the group is public or closed; and the relationships between the individuals.
30. As an example, chat groups on WhatsApp and Telegram with very large memberships that anyone can freely join should not be considered private or domestic communication. If prohibited content is circulated in these open groups, the RO will assess if action should be taken. If the same prohibited content is posted online, on websites or social media platforms, we can issue corrective directions for it under the ELIONA Bill.
31. Members such as Mr Ng and Dr Wan Rizal asked how we will assess if a piece of content is realistic enough to be believed. Clearly, this is not an exact science. But there are some factors that can be considered; I outlined them in my opening speech.
32. The aim of the ELIONA Bill is to uphold the integrity of our elections. We have seen how disinformation, even if believed by a small segment of society, can lead to drastic and violent consequences. Consider how allegations of election fraud in the US played a role in the deadly Capitol insurrection on 6 January 2021.
33. I hope Members will agree that we should not accept any segment, no matter how small, voting based on a false representation. We have no way of knowing in advance the extent to which it will alter the course of our elections. But why should we subject our elections to such risk at all, if we can prevent it or at least minimise it? This is why the ELIONA Bill has been drafted in this manner, to allow for the prohibition of deepfake content as long as some voters reasonably find it believable.
34. Ms He opposed the proposal to allow platforms to reproduce content prohibited under the ELIONA Bill when reporting on news and current affairs during the election period. I’m slightly puzzled because in her speech, Ms He also advocated educating the public through short-form videos, which will likely have to reproduce such content in some form to show how realistic it is.
35. Our belief is that news agencies can, and should, play a part in preserving the integrity of our elections. The prohibition does not apply to news published by authorised news agencies, because of their duty to report on news fairly and accurately to inform and educate the public.
36. In fact, this is not new. Today, media outlets report on online scams to educate the public about their dangers, and to let citizens know how to identify and avoid scams. This often includes re-publishing images of online content to alert citizens to the scam.
37. Mr Yip and other members sought assurances that the issuance of corrective directions will be impartial and that measures will apply equally, regardless of the party the depicted candidate belongs to.
38. Sir, the Bill itself is designed for impartiality. We apply the same criteria to determine who is considered a candidate. All candidates are provided with the choice of when to inform the public of their candidacy. The defined election period is also known to all candidates at the same time. The duration and thresholds for content prohibitions are the same for all candidates. We do not even assess whether the prohibited content is favourable or unfavourable to a candidate; this would be highly subjective and open to dispute. Deceptive content will not be allowed, whichever party the candidate belongs to.
39. To ensure transparency and accountability, the public will be notified about corrective directions that have been issued against offending content, so that they can vote in an informed manner. To Ms Hany Soh’s question on how this will be done, the Elections Department will make an assessment and provide an update in due course.
40. Members including Mr Zhulkarnain and Ms He have also asked: What recourse is there if a piece of content was deemed to be wrongfully taken down?
41. Recipients of a corrective direction who feel their content has been wrongfully taken down can contact the RO to provide supporting evidence of their claims. If the RO does not accept their appeal, they may apply to the Courts for judicial review of the RO’s decision.
42. If content was found to have been mistakenly taken down due to a false declaration by a candidate, there will be serious consequences for the candidate – including the loss of his or her seat if elected. If the content does not otherwise meet the criteria for prohibition, it can be reposted. To Ms He’s query, there are also current penalties in place for persons other than candidates who knowingly provide false information to Government agencies.
43. During this debate and on previous occasions, Members including Ms Sylvia Lim and Ms He have acknowledged the difficulties in determining the authenticity of online media content. This is why candidates will have to make a declaration in addition to their request to the RO to assess content under the ELIONA Bill.
44. Members will agree that we cannot just take a candidate’s word at face value. Ms Soh highlighted what has been described as “liar’s dividend”. This is why, in addition to the candidate’s declaration to the RO, there is an independent technical assessment made by the RO and his team of public officers, and we have instituted severe penalties for a false declaration.
45. To the question by Mr Louis Ng, candidates will be asked to submit their declarations via an online form during the election period. This form will be available on ELD’s candidate services portal. More details on the information requirements will be shared in due course.
46. Each of the requests and declarations made by candidates to the RO will be carefully assessed. The RO will only issue corrective directions for genuine cases that have met the requirements.
47. Mr Zhulkarnain asked if a candidate should instead affirm a statutory declaration for this purpose.
48. My colleagues and I have studied the options and weighed the trade-offs between the formal process of making a statutory declaration in front of a Commissioner of Oaths, and submitting a declaration online.
49. An online declaration is both efficient and effective. The consequence is direct and appropriate. The offence is an election offence and should be punished in accordance with elections legislation, which provides for the loss of seat for egregious offences. The general punishment of making false statutory declarations does not capture the seriousness and context of this offence.
50. The digital mode of the declaration is also meant to facilitate a speedy and efficient declaration process for candidates during the election period. In the spirit of promoting fair elections, we want to encourage candidates to report content that misrepresents them by removing as many administrative barriers as possible.
51. Members like Mr Ng have also asked if any candidate can request corrective directions to be issued, not just candidates who are depicted in the impugned content.
52. As I mentioned in my opening speech, we will place significant weight on a depicted candidate’s declaration to the RO, as he or she is in the best position to clarify if the content is an accurate representation of himself or herself. Therefore, in most cases, we will rely on candidates making requests and declarations when they are depicted in the impugned content. Further details of the prescribed modality will be shared in future.
53. In cases of positive campaigning, where the impugned content actually portrays a candidate favourably, other candidates, and even non-candidates, can make a request for review. However, we will still ask the depicted candidate for a declaration as the RO and his team are unlikely to have the full facts. If the depicted candidate does not make a declaration for whatever reason, the Government is still empowered to issue directions if we have other objective information that the content is in breach and should be prohibited.
54. Members, including Mr Yip and Ms Soh, have asked about the timely issuance of corrective directions and safeguards against foreign-based entities who attempt to influence our elections.
55. We recognise the need to quickly disable such false online content about candidates. But there is also the need to be rigorous and fair. The RO will have to strike a balance.
56. Once a corrective direction is issued, the expectation is for individuals, social media services and Internet Access Service Providers to respond within hours. This is to minimise the potential harm that such content could cause during our election period.
57. The proposed ban covers the publication in Singapore of all digitally generated and manipulated online election advertising depicting candidates, regardless of the nationality of the user who created or published the content. This addresses the question by Ms He.
58. In addition, we already have rules prohibiting foreigners or foreign entities from knowingly publishing or publicly displaying any election advertising. This is in line with the principle that Singapore’s politics are for Singaporeans alone to decide.
59. If we are aware of hostile information campaigns or foreign interference, we will address them under the Foreign Interference (Countermeasures) Act, or FICA.
60. Ms He asked if the penalties for non-compliance by the social media services are too low to have sufficient impact or deterrence. Dr Wan Rizal asked if penalties should be scaled according to the platforms’ reach and impact.
61. Sir, the financial penalty quantum is comparable with other local legislation that covers social media services, such as the Protection from Online Falsehoods and Manipulation Act and the Broadcasting Act. Non-compliance with the corrective directions is an offence punishable by a fine of up to S$1 million, and it will be for the courts to decide the appropriate penalty in each case.
62. Beyond the exact quantum involved, the imposition of financial penalties on the services for not doing enough to preserve free and fair elections would have reputational implications for their respective platforms, whether from the perspective of Singapore users or globally.
63. Some Members including Ms Pereira asked about tools that the Government will use to detect deepfakes. The Government will use a mix of in-house and commercially available tools, such as AlchemiX, a tool developed by the Home Team Science and Technology Agency, which can compare recordings of a suspected deepfake video with a recording of a speaker’s actual voice.
64. Deepfake technology is constantly improving, and our capabilities must evolve accordingly. I seek Members’ understanding that we will err on the side of caution and not reveal the full extent and capabilities of our detection tools. This is to guard against malicious actors who may seek to exploit this information, and use it to game or circumvent our systems.
65. Some Members, like Dr Wan Rizal and Ms He, have asked if the RO or election officials can proactively monitor the Internet to identify prohibited content, and how they will be supported in enforcing provisions under the Bill.
66. During the election, there are processes in place to monitor for and minimise the risk of election interference that can arise from the spread of prohibited online election advertising. There will be dedicated teams stood up during the election period for this purpose, and they will work closely with the social media services to act swiftly on prohibited content.
67. As candidates will be best placed to determine if there is false online election advertising being circulated, we will rely primarily on their requests to review problematic content. However, the RO may still assess and act on problematic content without a candidate’s request and declaration, if the content is surfaced and is deemed likely to threaten electoral integrity.
68. Mr Speaker, I have discussed how this Bill, along with other legislative levers, deals with various types of harmful deepfakes. The Bill focuses on a specific category of deepfakes during elections, while other legislation such as POFMA, OCHA and the BA may be used to tackle other forms of harmful deepfakes. However, beyond outright harmful consequences, the proliferation of deepfake content is also concerning. When users can no longer differentiate what is real and what is fake, there is a wider threat to trust in online media.
69. As I said in my opening speech, the IMDA will introduce a Code of Practice to deal with digitally manipulated content at all times, beyond the election periods. This will mean requiring social media companies to play a larger role in the complex issue of tackling deepfakes, given their extensive influence in shaping our online experiences.
70. MDDI and the IMDA are in the process of engaging the major social media services in Singapore. The companies have been receptive to our proposals, and recognise the need to do more against digitally manipulated content. We aim to introduce the Code in 2025.
71. Mr Yip, Ms He, and Ms Soh have asked about our public education efforts to alert our citizens to the dangers of AI-generated misinformation. We agree that a digitally aware public is the strongest defence we have against misleading and deceptive manipulated online content. Public education plays a critical role in empowering Singaporeans to safeguard themselves against risks in the digital space, and be resilient to such threats. To this end, the Government has put in place public education programmes to equip the public to be discerning producers and consumers of information, and protect themselves against online falsehoods.
72. For example, the National Library Board’s S.U.R.E programme, which stands for “Source. Understand. Research. Evaluate”, has developed resources and organised activities to educate Singaporeans about the dangers of misinformation. In fact, NLB is currently rolling out its community outreach initiative, Be S.U.R.E. Together: Gen AI and Deepfakes Edition, which provides opportunities for the public to learn about the uses and threats of generative AI.
73. Mr Speaker, the Bill before us today seeks to further protect Singapore’s future elections from misinformation caused by deceptive deepfakes. We introduced this Bill after careful study of global trends, and a realistic assessment of what could happen in Singapore’s elections if this threat were left unchecked.
74. I urge all Members to support this Bill. Together, we can ensure that deepfakes and other digitally generated and manipulated content do not prejudice the fair and free elections that Singaporeans should be able to experience.
75. Mr Speaker, I beg to move.