The European Union is leaning on signatories to its Code of Practice on Online Disinformation to label deepfakes and other AI-generated content.
In remarks yesterday following a meeting with the 40+ signatories to the Code, the EU’s values and transparency commissioner, Vera Jourova, said those signed up to combat disinformation should put in place technology to recognize AI content and clearly label it to users.
“The new AI technologies can be a force for good and offer new avenues for increased efficiency and creative expression. But, as always, we have to mention the dark side of this matter and they also present new risks and the potential for negative consequences for society,” she warned. “Also when it comes to the creation of and dissemination of disinformation.
“Advanced chatbots like ChatGPT are capable of creating complex, seemingly well substantiated content and visuals in a matter of seconds. Image generators can create authentic looking pictures of events that never occurred. Voice generating software can imitate the voice of a person based on a sample of a few seconds. The new technologies raise fresh challenges for the fight against disinformation as well. So today I asked the signatories to create a dedicated and separate track within the code to discuss it.”
The current version of the Code, which the EU beefed up last summer — when it also confirmed it intends the voluntary instrument to become a mitigation measure that counts towards compliance with the (legally binding) Digital Services Act (DSA) — does not commit signatories to identifying and labelling deepfakes. But the Commission is hoping to change that.
The EU commissioner said the Commission sees two main discussion angles for how to include mitigation measures for AI-generated content in the Code: One would focus on services that integrate generative AI, such as Microsoft’s New Bing or Google’s Bard AI-augmented search services — which should commit to building in “necessary safeguards that these services cannot be used by malicious actors to generate disinformation”.
A second would commit signatories who have services with potential to disseminate AI-generated disinformation to put in place “technology to recognise such content and clearly label this to users”.
Jourova said she had spoken with Google’s Sundar Pichai and been told Google has technology that can detect AI-generated text content, and that it is continuing to develop the tech to improve its capabilities.
In further remarks during a press Q&A, the commissioner said the EU wants labels for deepfakes and other AI-generated content to be clear and fast — so normal users will immediately be able to understand that a piece of content they’re being presented with has been created by a machine, not a person.
She also specified that the Commission wants to see platforms implementing labelling now — “immediately”.
The DSA does include some provisions requiring very large online platforms (VLOPs) to label manipulated audio and imagery, but Jourova said the idea behind adding labelling to the disinformation Code is that it can happen even sooner than the August 25 compliance deadline for VLOPs under the DSA.
“I said many times that we have the main task to protect freedom of speech. But when it comes to the AI production, I don’t see any right for the machines to have freedom of speech. And so this is also coming back to the old good pillars of our law. And that’s why we want to work further on that also under the Code of Practice on the basis of this very fundamental idea,” she added.
The Commission is also expecting to see action on reporting AI-generated disinformation risks next month — with Jourova saying relevant signatories should use the July reports to “inform the public about safeguards that they are putting in place to avoid the misuse of generative AI to spread disinformation”.
The disinformation Code now has 44 signatories in all — including tech giants like Google, Facebook and Microsoft, as well as smaller adtech entities and civil society organizations — a tally that’s up from the 34 that had signed up to the commitments as of June 2022.
However, late last month Twitter took the unusual step of withdrawing from the voluntary EU Code.
Other big issues Jourova noted she had raised with remaining signatories in yesterday’s meeting — urging them to take more action — included Russia’s war propaganda and pro-Kremlin disinformation; the need for “consistent” moderation and fact-checking; efforts on election security; and access to data for researchers.
“There is still far too much dangerous disinformation content circulating on the platforms and too little capacities,” she warned, highlighting a long-standing complaint by the Commission that fact-checking initiatives are not comprehensively applied across content in all the languages spoken in EU Member States, including those of smaller nations.
“Especially the center and eastern European countries are under permanent attack from especially Russian disinformation sources,” she added. “There is a lot to do. This is about capacities, this is about our knowledge, this is about our understanding of the language. And also understanding of the reasons why in some Member States there is the feeding ground or the soil prepared for absorption of big portion of disinformation.”
Access for researchers is still insufficient, she also emphasized — urging platforms to step up their efforts on data for research.
Jourova also added a few words of warning about the path chosen by Elon Musk — suggesting Twitter has put itself in the EU’s enforcement crosshairs, as a designated VLOP under the DSA.
The DSA puts a legal requirement on VLOPs to assess and mitigate societal risks like disinformation so Twitter is inviting censure and sanction by flipping the bird at the EU’s Code (fines under the DSA can scale up to 6% of global annual turnover).
“From August this year, our structures, which will play the role of the enforcers of the DSA will look into Twitter’s performance whether they are compliant, whether they are taking necessary measures to mitigate the risks and to take action against… especially illegal content,” she further warned.
“The European Union is not the place where we want to see the imported Californian law,” she added. “We said it many times and that’s why I also want to come back and appreciate the cooperation with the… former people working in Twitter, who collaborated with us [for] several years already on Code of Conduct against hate speech and Code of Practice [on disinformation] as well. So we are sorry about that. I think that Twitter had very knowledgeable and determined people who understood that there must be some responsibility, much increased responsibility on the site of the platforms like Twitter.”
Asked whether Twitter’s Community Notes approach — which crowdsources (so essentially outsources) fact-checking to Twitter users, if enough people weigh in to add a consensus of context to disputed tweets — might be sufficient on its own to comply with legal requirements to tackle disinformation under the DSA, Jourova said it will be up to the Commission’s enforcers to assess whether or not Twitter is compliant.
However, she pointed to Twitter’s withdrawal from the Code as a significant step in the wrong direction, adding: “The Code of Practice is going to be recognised as the very serious and trustworthy mitigating measure against the harmful content.”
Europe wants platforms to label AI-generated content to fight disinformation by Natasha Lomas originally published on TechCrunch