TL;DR: The EU Deploys the DSA; New Tools to Stop Sextortion; Congress Actually Does Something
The TL;DR round-up gives a quick analysis of consumer, industry, academic, and legal tech news every few weeks.
TL;DR from Europe Tech-Law: The EU Commission Starts to Flex Enforcement of the DSA Against Meta
On Tuesday, the European Commission opened formal proceedings to assess whether Meta has breached the DSA. Mathias Vermeulen (an EU lawyer, tech policy expert, and one of the people whose judgment I trust most in contextualizing EU politics and policy) called this “the most important European Commission enforcement action to date under the Digital Services Act.”
This isn’t just any boring enforcement action; there’s lots of backstory, and it mostly has to do with the upcoming European elections in June 2024. Here are the four issues under investigation, why they matter, and what they’re responding to.
TL;DR:
It’s about the elections, stupid. Why is this happening now? European Union elections are less than a month away, and the Commission is super nervous about disinformation; it’s also under enormous pressure from Central and Eastern European member states on this issue.
Investigation #1: Meta violating DSA on deceptive advertising & disinformation
According to Vermeulen, this is a response to what is known as the “Doppelganger” disinformation campaign: a pro-Putin Russian influence operation that purchases ads on Facebook to sow division on social media ahead of the EU elections in June.
The EU has already tried to deal with this through fines and other measures, but now it appears to be using the DSA to force Meta to act.
Investigation #2: Meta not complying with DSA visibility of political content
Facebook’s decision to demote political content in newsfeed recommendations is seen as potentially out of compliance with the DSA.
Investigation #3: Meta’s shut-down of CrowdTangle denies researchers an effective “real-time civic discourse and election-monitoring tool”
For those unfamiliar:
CrowdTangle was an imperfect but nonetheless very powerful data access tool for researchers and journalists to see Meta’s user data for transparency, research, and reporting.
Last month, Meta announced that it was shutting CrowdTangle down.
A lot of people flipped out at this news. (If you’re interested in more on it, I’d highly recommend the post “CrowdTangle Is Dead, Long Live CrowdTangle!”)
The Commission has come to the rescue, saying that the hastiness of the shut-down puts it in potential violation of the DSA, especially heading into the elections.
The Commission is maybe not wrong about this: as Vermeulen states, Meta itself touted the importance of CrowdTangle as a tool to mitigate risks in the 2020 U.S. elections, but then curiously left it out of its risk assessment on election interference.
Investigation #4: Meta’s insufficient procedures for handling both take-down requests for illegal content and appeals of content moderation decisions
Vermeulen argues that this approach is consistent with the trend of going after social media platforms for lack of procedure, as in the EU’s recent proceedings against TikTok for launching TikTok Lite.
TL;DR from Industry: New Tools to Stop Sextortion and Non-Consensual Intimate Image Sharing
Sometimes — not often, but sometimes — engineers are able to develop safety-enriching technologies that don’t have any particularly significant speech-restricting or privacy costs attached to them.
Meta recently announced that it was beginning to roll out a handful of new safety features targeted at preventing sextortion, and at least one of these new features might be one such development that falls into the straightforwardly ‘lawful good’ category.
The new feature is called ‘nudity protection mode.’ Similar to the ‘invisible ink’ functionality in iMessage, it detects and blurs nude images in both sent and received Instagram DMs. When nudity protection mode is on (the default for minors, opt-in for adults), people sending nude images will see a prompt reminding them to think twice before sending sensitive photos and telling them that they can unsend photos if they change their mind. On the recipient side, nudity protection prompts users to think twice before unblurring an image, asks them whether the image was unwanted, and links to safety tools. To address non-consensual intimate imagery (NCII), if you try to forward a nude, you will receive a prompt encouraging you to reconsider.
Of course, blurring out pictures and reminding users to be careful is a proactive measure; it doesn’t alter the reactive steps necessary to resolve an issue once it has unfolded. And obviously, NCII doesn’t always (or even typically) spread through the forwarding feature. But this development is a nice easy win on these issues: it uses existing machine learning technology and runs on the device itself, meaning the feature is fully compatible with end-to-end-encrypted chats, and photos never get sent to Meta unless someone chooses to report them.
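Meta hasn’t published implementation details, but the basic shape of the flow is easy to picture. Here’s a minimal, purely hypothetical sketch in Python; every name, the threshold, and the stub classifier below are my stand-ins, not Meta’s actual code. The design point worth noticing is that the classifier runs locally on the device, so nothing ever needs to be decrypted server-side:

```python
# Purely hypothetical sketch; none of these names are Meta's actual API.
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.8  # assumed confidence cutoff


@dataclass
class Photo:
    pixels: bytes
    blurred: bool = False


def classify_nudity(photo: Photo) -> float:
    """Stand-in for the on-device ML model that scores nudity likelihood.

    Because scoring happens locally, the photo never leaves the device
    and end-to-end encryption is preserved.
    """
    return 0.0  # placeholder score


def protection_enabled(is_minor: bool, opted_in: bool) -> bool:
    """Nudity protection is on by default for minors, opt-in for adults."""
    return is_minor or opted_in


def show_prompt(text: str) -> None:
    print(f"[prompt] {text}")


def on_send(photo: Photo, protection_on: bool) -> None:
    """Sender-side flow: warn before a flagged photo goes out."""
    if protection_on and classify_nudity(photo) >= NUDITY_THRESHOLD:
        show_prompt("Take care with sensitive photos. You can unsend if you change your mind.")


def on_receive(photo: Photo, protection_on: bool) -> Photo:
    """Recipient-side flow: blur flagged photos and surface safety tools."""
    if protection_on and classify_nudity(photo) >= NUDITY_THRESHOLD:
        photo.blurred = True
        show_prompt("This photo may contain nudity. Unblur?")
        show_prompt("Unwanted image? You can block, report, or view safety tools.")
    return photo


def on_forward(photo: Photo) -> None:
    """Forwarding flow: nudge the sender to reconsider before spreading NCII."""
    if classify_nudity(photo) >= NUDITY_THRESHOLD:
        show_prompt("Think twice before forwarding this photo.")
```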
Separate from its nudity protection mode feature, Meta is also developing technology that analyzes a “range of signals that could indicate sextortion behavior” to detect and categorize potential sextortion accounts. Once an account is classified as a potential sextortion account, it becomes harder for it to message or interact with other accounts.
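Again, Meta has said nothing about which signals it uses or how they’re weighted, but a heavily simplified, hypothetical version of such a pipeline might look like this (all signal names, weights, and thresholds below are invented purely for illustration):

```python
# Illustrative only: Meta hasn't disclosed its actual signals, weights, or model.
from dataclasses import dataclass, field

RISK_THRESHOLD = 0.7  # assumed cutoff for "potential sextortion account"


@dataclass
class Account:
    account_id: str
    signals: dict[str, float] = field(default_factory=dict)
    restricted: bool = False


def sextortion_risk(account: Account) -> float:
    """Stand-in scorer combining behavioral signals (weights are invented)."""
    weights = {
        "newly_created": 0.3,
        "mass_requests_to_minors": 0.5,
        "frequently_blocked": 0.4,
    }
    score = sum(weights.get(name, 0.0) * value
                for name, value in account.signals.items())
    return min(score, 1.0)


def apply_protections(account: Account) -> None:
    """Flagged accounts find it harder to message or interact with others."""
    account.restricted = sextortion_risk(account) >= RISK_THRESHOLD


def route_message_request(sender: Account, inbox: dict[str, list[str]]) -> None:
    """Requests from flagged accounts go straight to the hidden requests folder."""
    folder = "hidden_requests" if sender.restricted else "requests"
    inbox.setdefault(folder, []).append(sender.account_id)
```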
TL;DR:
The highlight of the new nudity protection feature is that it works entirely on-device: it’s fully compatible with end-to-end-encrypted chats, and photos never get sent to Meta unless someone chooses to report them.
Any message requests from these potential sextortion accounts will go straight to the recipient’s hidden requests folder.
If it turns out you’re already chatting with a potential scam or sextortion account (ick), Meta will notify you, remind you how to report any threats to share your private messages, and remind you that you can say no to anything that makes you feel uncomfortable.
The new technology isn’t going to end NCII forwarding or eliminate sextortion, but it’s progress.
TL;DR from U.S. Regulation: Congress Passes the REPORT Act, Which Implements Some of SIO’s Recommendations
Last week’s TL;DR covered the Stanford Internet Observatory’s April 22 report, “How to Fix the Online Child Exploitation Reporting System.” The report directly called on Congress to give NCMEC (the National Center for Missing and Exploited Children) more money so that they can more effectively combat online child exploitation.
Following that report, the House passed a bipartisan bill, the Revising Existing Procedures On Reporting via Technology Act (REPORT Act), which now heads to Biden’s desk to become law.
TL;DR:
The bill doesn’t address all the issues identified in Stanford’s report, but it does address many of them, including:
Allowing providers to preserve the contents of online exploitation reports for up to a year, rather than just 90 days.
Letting NCMEC legally store data using commercial cloud computing services.
Raising the fines faced by platform providers who don’t report suspected violations to NCMEC from $150,000 to $850,000.
Requiring platforms to report the enticement of children, in addition to the CSAM they’ve been required to report previously.
TL;DR from Research Assistants: Introducing Margo Williams!
If you’ve been reading closely, you’ve noticed my acknowledgments thanking Margo Williams for her research assistance, and today I’m thrilled to add her as a co-author on this TL;DR and introduce her to everyone.
Margo is a native of Chicago and a recent graduate of Colgate University, where she majored in international relations with a minor in economics and Chinese (she is fluent in Mandarin!). She has worked as my research assistant in Paris since September 2023 and has been an absolutely invaluable help in reading, processing, discussing, researching, and organizing my work here. It’s a true pleasure to give her a place to publish her own smart, thoughtful work on some of the issues we’ve been working on.
NEXT KLONICKLES PREVIEW:
The TL;DR on the TikTok Ban
It has been two weeks since the totally nutty TikTok divest-or-ban bill was signed into law by Biden. Margo and I break down the First Amendment, national security, and corporate law issues that are on the table (or likely to soon surface) in next week’s TL;DR.