MDDI's response to PQ on Accuracy of Deepfake Detection Technologies
Parliament Sitting on 7 August 2024
QUESTION FOR WRITTEN ANSWER
29. Ms He Ting Ru asked the Minister for Digital Development and Information (a) what is the current accuracy rate of the Government’s deepfake detection technologies for AI-generated content; (b) how will the Government differentiate between harmful deepfakes and legitimate political satire or memes using similar technologies; and (c) what happens if videos are wrongly identified as deepfakes.
Answer:
The Government has a variety of tools and techniques available to detect, identify and assess manipulated content, including AI-generated content such as deepfakes. These may be sourced commercially, developed in-house, or developed in partnership with researchers such as those at the Centre for Advanced Technologies in Online Safety. We do not publish their accuracy levels as our tools are constantly being updated to keep up with technology. It is also not in the public interest to reveal the full extent of our capabilities, as malicious actors may exploit this information.
The Government can take action against online falsehoods when certain thresholds are met, including falsehoods generated with the help of AI. Action may be taken under the Protection from Online Falsehoods and Manipulation Act (POFMA) if such content is false and against the public interest. Satire and parody do not by themselves meet the criteria for POFMA action, unless they contain falsehoods that harm the public interest. Individuals who disagree with POFMA directions issued to them, including directions concerning deepfake content, can file an appeal in court.
Many countries have recognised the need to mitigate the harms and risks arising from the use of AI, including the malicious use of deepfakes. Some have already put in place safeguards, especially during elections, to protect the integrity of the electoral process. We are studying whether further safeguards are required and will provide an update when ready.