Opening Speech by Minister Josephine Teo at the Second Reading of the ELIONA Bill
SECOND READING OPENING SPEECH BY MINISTER JOSEPHINE TEO ON THE ELECTIONS (INTEGRITY OF ONLINE ADVERTISING) (AMENDMENT) BILL, 15 OCT 2024
Introduction
1. Madam Deputy Speaker, I beg to move, “That the Bill be now read a second time.”
Threat of deepfakes to elections
2. Madam, 2024 is a bumper year for elections around the world. Almost half of the world’s population have gone or will go to the polls this year.
3. Unfortunately, there has been a noticeable increase in deepfake incidents in countries where elections have taken place or are planned. Research conducted by London-based tech company Sumsub suggests that the numbers are alarming. In India, compared to a year ago, there are three times as many deepfake incidents. In Indonesia, more than 15 times; and in South Korea, more than 16 times.1
4. Earlier in January this year, a fake version of U.S. President Joe Biden’s voice was featured in robocalls that sought to discourage Democrats from participating in the New Hampshire Primary. The robocalls reached thousands of people. The US Federal Communications Commission has since declared AI-generated robocalls illegal, noting that they have the potential to confuse consumers with misinformation. The telecommunications company which transmitted the fake robocalls has been fined US$1 million, and the individual behind it faces a fine of US$6 million and criminal charges.
5. During the Slovakian parliamentary elections last year, a deepfake audio of a politician discussing electoral rigging was posted online. Unsurprisingly, the audio went viral. Its impact was amplified by its timing – right before Slovakia’s electoral “silence period”, which is like our cooling-off day. The candidate lost the elections, despite having earlier led in the polls. Did the deepfake audio contribute to his loss? No one can say with certainty, but surely we prefer not to have elections subject to such incidents.
Overseas jurisdictions and industry action
6. Why has deepfake content proliferated? The short answer is that it has become very easy and cheap to produce. With your permission, Mr Speaker, may I play a video on the LED screens?
Minister’s deepfake video presentation: “Hello, it’s been a busy day in the office, and I’ve just had a cup of coffee. (Pause) Did you think this was really me, Jo Teo, speaking in this video? Actually, this is a deepfake generated by artificial intelligence. It only took one person one hour to create this, using easily accessible software that anyone can use right now from the Internet. Imagine if someone produced realistic deepfakes, depicting Members of this House saying or doing something we did not actually say or do, and disseminated it. Such technology will only improve, and deepfakes may become even more realistic, convincing, and easy to make.”
7. Sir, Members will appreciate that AI technology is improving quickly. If the deepfake video you just watched did not convince you of its impersonation of me, more advanced versions soon will.
8. Around the world, countries have recognised the need to mitigate the harms of deepfakes to their elections.
a. For example, South Korea revised its Public Official Election Act to ban political campaign videos that use AI-generated content 90 days prior to an election. Violations of the revised law, which took effect in January this year, can lead to jail time of up to seven years, or a fine of up to 50 million won, which is almost S$50,000. To date, 388 deepfakes have been taken down by the National Election Commission of South Korea during its elections.
b. Another example is Brazil, which has banned synthetic electoral propaganda that will harm or favour any candidate during an election. The sanctions include the revocation of the candidate’s registration or their mandate, if they had been elected.
c. Last month, the state of California passed into law the “Defending Democracy from Deepfake Deception Act of 2024”, which requires social media platforms to block materially deceptive deepfakes of candidates from 120 days before the election to the day of the election.
d. The Australian government is also considering the advice of its Electoral Commission to regulate the use of AI in elections, given the Commission’s recent warning that it has limited scope to protect voters from deepfake videos and phone calls imitating politicians in Australia’s upcoming elections.
9. It is not just Governments which are concerned. The tech industry has also recognised the dangers of electoral deepfakes, and the importance of ensuring voters can exercise their choice, free from AI-based manipulation. Twenty leading tech companies, including Meta, Microsoft, OpenAI, and TikTok, signed the Tech Accord at the Munich Security Conference in February, committing to combat the deceptive use of AI in elections this year.
Upholding the integrity of elections in Singapore
10. In the face of these developments, Singaporeans are rightly concerned. One study2 shows that more than 6 in 10 Singaporeans are worried about the potential impact of deepfakes on the next election.
11. In a 2021 ruling on a case related to misinformation and online falsehoods, our apex court had said: “It is simply incompatible with the core principles of democracy to procure the outcome of an election to public office or a referendum by trading in disinformation and falsehoods.”
12. Mr Speaker, I hope Members will agree that AI-generated misinformation can seriously threaten our democratic foundations and demands an equally serious response. The Elections (Integrity of Online Advertising) (Amendment) Bill, or ELIONA, is our carefully calibrated response to augment our election laws under the Parliamentary Elections Act and the Presidential Elections Act, ensuring that the truthfulness of candidate representation and the integrity of our elections continue to be upheld.
Scope of the Bill
Key legal requirements
13. Sir, I will now bring Members through the key aspects of the Bill.
14. The ELIONA Bill will amend our election laws to prohibit the publication of content that:
a. is or includes online election advertising, or “OEA”;
b. is digitally generated or manipulated;
c. depicts a candidate saying or doing something that he or she did not in fact say or do; and
d. is realistic enough that some members of the public who see or hear the content would reasonably believe that the candidate did in fact say or do that thing.
15. I will go through each of these criteria in detail.
16. First, online election advertising under our existing election laws refers to any information or material published online that can reasonably be regarded as intended to promote or procure, or prejudice the electoral success or prospects of a candidate or political party. The existing OEA provisions guide the transparent and responsible use of the Internet during elections, including for campaigning, and ensure that the elections are contested fairly. The ELIONA Bill strengthens the OEA regime by targeting the substantive content of the OEA.
17. Second, the ELIONA Bill is scoped to address content that is digitally generated or manipulated. This includes content generated or manipulated using AI techniques such as generative AI. It also includes non-AI techniques such as Photoshop, dubbing, and splicing. These are now seen as more traditional editing methods. But they can still be used to manipulate content depicting candidates, making them as harmful and misleading as AI-generated deepfakes.
18. Third, the ELIONA Bill is scoped to address the most harmful types of content in the context of elections, which is content that misleads or deceives the public about a candidate, through a false representation of his speech or actions, that is realistic enough to be reasonably believed by some members of the public.
19. The condition of being realistic will be objectively assessed. There is no one-size-fits-all set of criteria but some general points can be made:
a. First, such content should closely match the candidates’ known features, expressions, and mannerisms. Technically, we would expect a degree of sophistication resulting in minimal inconsistencies in aspects like lighting, body movements, or audio distortions.
b. Second, content may make use of actual persons, events and places so that the false representation appears more believable. For example, a fake rally speech touching on current affairs looks more real when placed against the backdrop of an actual and familiar rally site.
c. We must also recognise that audiences perceive and process the same content through different lenses shaped by their individual experiences, beliefs, and cognitive biases. For example, many of us find it incredible that the Prime Minister would be giving investment advice on social media. But as Members of Parliament, we have all met residents who have fallen prey to such AI-enabled scams. In this regard, the law will apply so long as there are some members of the public who would reasonably believe that the candidate did say or do what was depicted.
20. Also, in assessing whether content matches a candidate’s speech or actions, we will rely primarily on declarations by the candidate on whether he or she had said or done that thing. I will elaborate on this later in my speech.
21. All four legal limbs have to be met for the content to be prohibited. That is, the content:
a. is or includes OEA;
b. is digitally generated or manipulated;
c. depicts a candidate saying or doing something that he or she did not in fact say or do; and
d. is realistic enough that some members of the public who see or hear the content would reasonably believe that the candidate did in fact say or do that thing.
22. Mr Speaker, with your permission, may I ask the Clerks to distribute a handout that will illustrate our thinking on what will be allowed or disallowed under the new provisions? Sir, Members can also access this handout through the MP@SGPARL App.
23. Examples of content that would be prohibited include:
a. Realistic audiofakes featuring a candidate saying things he did not say;
b. Realistic AI-generated images of a candidate participating in events that did not happen, or meeting people that he or she did not meet;
c. Realistic manipulated images or videos taken out of context and misrepresenting a candidate’s actions.
24. It does not matter if the content is favourable or unfavourable to any candidate. The publication of such prohibited content during the election period, including by boosting, sharing, and reposting existing content, will be an offence.
Assurances on content that is unlikely to be caught
25. As we propose this new measure to tackle realistic misrepresentations of candidates online, we are mindful not to disallow reasonable use of AI or technology in electoral campaigning. While each case will be assessed on its own facts, there are several scenarios that the prohibition will not extend to.
a. The first is AI-generated or animated characters and cartoons. Most of these animations are not photorealistic replicas of real persons; audiences will generally be able to tell that the speech or actions depicted are not real.
b. The second is benign cosmetic alterations, such as the use of beauty filters, or colour and lighting adjustments of images and videos. Such alterations typically involve modifications that do not materially affect truthfulness, and do not result in a misrepresentation of a candidate’s speech or actions.
c. The third is entertainment content, such as memes. We recognise that such content can arise as part of online discourse during the election period. Memes will not be caught under the law as long as they are assessed to be unrealistic and do not mislead audiences about a candidate’s speech or actions.
26. Some Members may be concerned – will candidates’ regular campaign posters, showing individuals against the backdrop of a GRC or an SMC, be prohibited if they are put online? Such posters are usually obvious composite images, such as candidates disproportionately superimposed in front of a landmark or backdrop. Members of the public would not reasonably believe such content to be realistic depictions of a candidate’s actions. Such campaign posters are unlikely to fall within the scope of prohibitions.
Ban will not apply to certain communications and publications
27. Mr Speaker, I would like to make clear that the ban will not apply to certain types of content.
28. First, the Bill does not extend to private or domestic communications. This refers to content shared between individuals or within a closed group, like group chats with family or a small group of friends, in view of user privacy.
a. That said, we know that false content can circulate rapidly on open WhatsApp or Telegram channels. If it is reported that prohibited content is being communicated in big group chats that involve many users who are strangers to one another, and are freely accessible by the public, such communications will be caught under the Bill and we will assess if action should be taken.
b. The factors that determine whether communications are private or domestic are set out in the respective election Acts.
29. Second, the prohibition does not apply to news published by authorised news agencies. This is to give space to fair reporting on prohibited content, such that the public can be alerted to the false content about candidates in a timely manner.
30. Third, we recognise that a layperson may carelessly reshare messages and links without realising that the content has been manipulated. The legislation provides a defence for a person who inadvertently commits the offence, not knowing and having no reason to believe that the candidate did not in fact say or do the thing depicted.
Duration of the ban, and who the ban applies to
31. Sir, the enhanced safeguards will apply during the election period, from the issuance of the Writ of Election to the close of polling on Polling Day.
32. From the issuance of the Writ to Nomination Day, we will introduce a two-part requirement for individuals who would like to identify themselves as prospective candidates and have the safeguards of the ELIONA Bill apply to them:
a. First, pay the election deposit; and
b. Second, consent for their names to be made public by the Elections Department (“ELD”). The consent form will be made available via the ELD website. The intention is for the website to be updated daily.
33. Paying the election deposit is a pre-requisite for standing in an election and an indicator of an individual’s seriousness of intent to be a candidate.
a. It is a step up from simply making a public declaration of one’s potential candidacy.
b. Publishing the names of those who placed the deposit will also make clear to the public which individuals are covered by the law.
c. It is the candidate’s choice entirely when to inform the public about his or her intention to stand for election before Nomination Day. Candidates can choose to put down the election deposit but withhold consent to make their names public until Nomination Day.
34. After nomination proceedings are completed, and up to the close of polling on Polling Day, the safeguards of the ELIONA Bill will apply to content depicting all successfully nominated candidates.
35. At the same time, individuals themselves should also readily come forward to clarify and debunk content that they believe misrepresents them.
Enforcement of the Bill
Technical assessment
36. The Government will use a range of detection tools to assess if the content has been generated or manipulated using digital means. We have at our disposal commercial tools, and also those developed in-house, or in partnership with researchers, such as those at the Centre of Advanced Technologies in Online Safety, or CATOS.
37. These tools are constantly updated to keep up with technology. Continuing strong investment in research is one way to ensure that we stay ahead. We have channelled $50 million in funding over five years to CATOS, which will develop new technological capabilities to detect online harms, including harmful digitally manipulated content.
Declaration by candidates
38. In reviewing reports of false content flagged under this new prohibition, the Government will prioritise candidates’ reports to the Returning Officer (“RO”). Besides the technical assessment on whether content has been manipulated, the RO will also rely on candidates’ declarations on whether the content falsely depicts their speech and actions.
39. We have placed significant weight on candidates’ declarations to the RO under this new prohibition, as a candidate is in the best position to speedily clarify if the content is a truthful and accurate representation of himself or herself. The Government is unlikely to have all the evidence of whether a candidate actually said or did something, especially if it was in a private setting.
40. The declaration will be made to the RO, for each candidate to attest to the veracity of his or her claim. This declaration form will be made available to candidates via the Candidate’s Portal on ELD’s website.
Penalties for false declaration
41. To deter abuse of the law, such as candidates requesting the take-down of unfavourable content that is in fact a factual representation of their speech or actions, it will be made an illegal practice for candidates to knowingly make a false or misleading declaration in a request.
42. The penalties for an illegal practice are set out in the election Acts. If convicted of an illegal practice, one may face a fine not exceeding $2,000, and become ineligible to be elected as a Member of Parliament or the President. Further, if already elected, the election of a candidate as a Member of Parliament or President may also be invalidated.
Powers of the Returning Officer
43. Currently, the RO has powers to issue corrective directions to any relevant person, including social media services, to remove or disable access in Singapore to prohibited online election advertising, or to stop or reduce its electronic communication activity.
44. The ELIONA Bill extends these powers for the RO to issue corrective directions against this new category of content, and corrective actions must be taken within the specified period of time.
45. As mentioned earlier, the Government will prioritise candidates’ reports and declarations to the RO. If assessed to be a genuine case, the RO will issue corrective directions. Only in exceptional cases will directions be considered against false representations without a candidate’s declaration. This may arise when the objective facts are widely known or if the Government has access to data that reliably confirms the candidate’s actual speech or action.
46. The public can also report potentially prohibited content of candidates to the authorities for review.
47. To better equip the public to make informed choices during the elections, the public will be notified about corrective directions that have been issued against offending content.
Non-compliance with corrective direction
48. Non-compliance with a corrective direction issued by the RO is an offence. Recognising the extensive reach and responsibility of social media services, we have raised the maximum fine to $1 million for a provider of a social media service that fails to comply with a corrective direction.
49. This is a reasonable adjustment – the revised penalty is on par with similar offences under other content regulation tools like the Protection from Online Falsehoods and Manipulation Act (“POFMA”), the Broadcasting Act (“BA”), and the Online Criminal Harms Act (“OCHA”).
50. For all others, including individuals, there is no change to the financial and custodial penalties for non-compliance with a corrective direction: a fine not exceeding $1,000, imprisonment for a term not exceeding 12 months, or both.
51. We have engaged major social media services on the requirements under the ELIONA Bill and shared our expectation that such directions are to be promptly complied with to uphold the integrity of our elections.
Adding ELIONA to our suite of measures
52. Mr Speaker, as noted by Dr Carol Soon of the Institute of Policy Studies, the ELIONA Bill is carefully calibrated in its scope of the “what”, “when”, and “whom”, and is a continuation of our principled approach towards the conduct of elections in Singapore.
53. First, the Bill addresses the most harmful digitally generated and manipulated content, including deepfakes, that can influence electoral outcomes, while recognising the value of novel content creation techniques and the desire of candidates to employ innovative methods to engage voters.
54. Second, it applies only during the election period, from the issuance of Writ to the end of polling on Polling Day, to safeguard the integrity of the electoral process and preserve space for fair and legitimate political discourse during the elections.
55. Third, the safeguards apply to all candidates regardless of political party and potential impact of the fake content. This recognises that fake content favourable to one candidate must be unfavourable to another, and vice versa. Our voters must be able to make informed choices based on factual and truthful representation of our prospective political leaders. Candidates too have a responsibility to conduct themselves with integrity during the elections.
56. The ELIONA Bill updates our suite of measures introduced over the years to address various forms of harmful online content. During and outside the election periods, existing content regulation tools will continue to apply to certain types of AI-generated misinformation and deepfakes.
a. For example, under POFMA, a Minister, or an appointed Alternate Authority during the election periods, may issue a direction for a recipient to communicate a correction notice. The Minister or Alternate Authority may also direct the removal or disabling of access to deepfakes on the grounds that they contain false statements of fact and it is in the public interest to issue the direction.
b. Under OCHA, directions may be given to deal with online activities that are criminal in nature, such as deepfake-related scams.
c. Under the Protection from Harassment Act, individuals may seek recourse for certain content that has caused them personal harassment, alarm or distress.
New POFMA Code of Practice
57. Beyond the expectations set out in the ELIONA Bill during the election period, social media services should also bear greater responsibility for digitally generated or manipulated content at all times. Most major social media services are also prescribed Internet intermediaries (“PIIs”) under POFMA, and will be required to prevent and counter the abuse of digitally generated or manipulated content at all times through an upcoming Code of Practice under POFMA (“Code”).
58. This includes an obligation for PIIs to put in place adequate systems and processes to enhance transparency around digitally generated or manipulated content, such as through labelling. Like the three other POFMA Codes in effect today, the new Code is being formulated in consultation with the prescribed Internet intermediaries, and we intend to finalise the Code for issuance in 2025.
Conclusion
59. Mr Speaker, may I continue in Mandarin, please.
60. I believe many of us have had this experience: relatives or friends who firmly believe deepfake content they have seen online. No matter how we explain, they continue to take the false information as true. Sometimes, even we ourselves are taken in and deceived. The spread of fake content is hard to guard against.
61. With the rapid development of artificial intelligence, deepfake content will only become more widespread and more realistic, allowing those with ill intent to sow seeds of doubt in our society more easily and quickly. If such fake content were to spread widely during an election, the consequences would be unthinkable.
62. In fact, this is no idle speculation. In recent elections in many countries, we have already seen the harm caused by fake content. This new law aims to prohibit the online publication and spread of fake content that distorts candidates’ words and actions, so that we can better safeguard our democratic process and the fairness of our elections.
63. However, legislation is no panacea. Strengthening the public’s ability to discern real content from fake is the most effective inoculation. Let us therefore work together, stay vigilant about online content, and verify against reliable sources when necessary. Starting with ourselves, each of us can be the first line of defence against fake content.
64. Mr Speaker, let me conclude.
65. In the age of deepfakes, seeing is no longer believing. We have to question not only the authenticity of sources and statements, but the very thing that usually counts as truth: what we see as images and videos, and what we hear through audio clips, even if it is a familiar voice.
66. The ELIONA Bill will add an additional layer of safeguards to our elections, but everyone – candidates, citizens, tech platforms – has a part to play in protecting our democracy.
67. We must keep our elections fair and honest, conducted on the basis of fact, not fiction. By and large, Members of this House regardless of party allegiance have supported these ideals, and I appeal to Members to stand behind this Bill so that we can continue to uphold the integrity of our elections.
68. With that Sir, I beg to move.
1 Sumsub. (2024) Deepfake cases surge in countries holding 2024 elections, Sumsub research shows. https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
2 Verian. (2024) Three quarters of Singaporeans concerned about the use of deepfakes in scams. https://www.veriangroup.com/press-release/three-quarters-of-singaporeans-concerned-about-the-use-of-deepfakes-in-scams