TL;DR: Misinfo & Motivated Reasoning, How to Improve Child Exploitation Tip Lines, and Jonathan Haidt's Hype
The TL;DR round-up gives a quick analysis of consumer, industry, academic, and legal news every month.
TL;DR on Misinformation, Conspiracy Theories, and Motivated Reasoning
I’ve long argued — since 2015 or so — that the conversation around “fake news” and “misinformation” is so thin and under-theorized as to be completely useless to operationalize at any level of policy. But, of course, that certainly has not stopped the conversation from happening — and over and over again I’ve watched really bad discussions treat “misinformation” and “disinformation” as 1) the same thing — they’re not! — or 2) something we could just stick a label on and “fix,” in a way that dramatically misunderstands what these things actually are (and how the solutions for the two differ).
A hill I will die on is trying to get people to understand the differences between misinformation and disinformation — so frequently slurred into one word by policy-makers, the media, and even academics as “misdisinfo.” Here is why this distinction matters.
In online systems, disinformation is best defined by Prof. Claire Wardle as “coordinated inauthentic behavior.” It is fake or misleading information that is put into ecosystems by multiple actors simultaneously — usually ones with state resources — to deceive people or propagandize. Unlike with misinformation, it is possible to infer the intentionality of disinformation from how it behaves online. Often specific memes, videos, or posts will all have provenance from servers and accounts located in Russia, for example, and will all be posted to platforms at the same time. Because disinformation can be identified by origin, time-stamps, and the coordinated accounts sharing it, it is easier to spot and easier to build solutions to stop.
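To make that concrete, here is a minimal sketch (toy data, arbitrary thresholds of my own choosing, and not anything a platform actually runs) of the basic coordination signal: identical content posted by many distinct accounts within a narrow time window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy post records: (account, content, timestamp). The field layout and the
# thresholds below are illustrative assumptions, not any platform's real schema.
posts = [
    ("acct_a", "Candidate X secretly funded by Y", datetime(2024, 4, 1, 12, 0)),
    ("acct_b", "Candidate X secretly funded by Y", datetime(2024, 4, 1, 12, 2)),
    ("acct_c", "Candidate X secretly funded by Y", datetime(2024, 4, 1, 12, 3)),
    ("acct_d", "Lunch was great today",            datetime(2024, 4, 1, 12, 3)),
]

MIN_ACCOUNTS = 3                 # how many distinct accounts count as "coordinated"
WINDOW = timedelta(minutes=10)   # how tightly clustered in time the posts must be

def flag_coordinated(posts):
    """Group identical content and flag clusters posted by many distinct
    accounts within a narrow time window (the coordination signal)."""
    by_content = defaultdict(list)
    for account, content, ts in posts:
        by_content[content].append((account, ts))

    flagged = []
    for content, items in by_content.items():
        accounts = {account for account, _ in items}
        times = sorted(ts for _, ts in items)
        if len(accounts) >= MIN_ACCOUNTS and times[-1] - times[0] <= WINDOW:
            flagged.append(content)
    return flagged

print(flag_coordinated(posts))
# ['Candidate X secretly funded by Y']
```

Real investigations layer in provenance (server and IP origin), account-creation patterns, and near-duplicate matching, but the time-stamp-and-coordination logic is the reason this problem is tractable in a way misinformation is not.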
Misinformation, however, is a far more complex problem and much harder to define. So I was thrilled to see Manvir Singh’s thoughtful framing of it in The New Yorker this week, drawing on the work of a number of well-regarded cognitive scientists.
The main (and it’s significant) problem with Singh’s take — and indeed with the headline of the piece, “Don’t Believe What They’re Telling You About Misinformation” — is that Singh is not actually describing misinformation qua misinformation. Like everyone else before him, he lumps together a number of distinct things — misinformation and conspiracy theories, conspiracy theories and rumors — and flattens them all under the same terminology, in the same way it might be accurate but not helpful to call tomatoes, bananas, and acorns all “fruit.”
Nevertheless, it’s a thoughtful and welcome treatment of the complexity of these issues and in particular the science around belief, action, and how both are guided by motivated reasoning.
TL;DR:
To explain why people’s actions sometimes depart greatly from their deeply held beliefs, Singh draws on cognitive scientist Dan Sperber’s work, which holds that people operate with two different belief systems — factual (chairs exist) and symbolic (God is omnipotent). The former guides the actions we take in our day-to-day lives, while the latter serves “social ends.”
The separation of the two belief systems explains how we can reconcile the Christian tradition of eating a cracker and calling it “the body of Christ” without accusing one another of cannibalism. This nuance is part of what makes misinformation fundamentally difficult to identify or label, and it is also why it is so hard to track by behavior in the way that disinformation is.
There’s “thinking” and there’s “believing.” The former very clearly affects the day-to-day choices we make and how we behave. For the latter, this is less obviously the case. Both are things we want to protect in certain circumstances and not so much in others.
The conclusion? “From this perspective, railing against social media for manipulating our zombie minds is like cursing the wind for blowing down a house we’ve allowed to go to rack and ruin. It distracts us from our collective failures, from the conditions that degrade confidence and leave much of the citizenry feeling disempowered.”
TL;DR from Research: Stanford Internet Observatory’s latest report “How to Fix the Online Child Exploitation Reporting System”
The Stanford Internet Observatory just released an incredibly useful report of research and analysis on the U.S. online child abuse reporting system, “How to Fix the Online Child Exploitation Reporting System,” which gives practical takeaways about how the current system needs to improve to anticipate AI.
TL;DR:
The National Center for Missing and Exploited Children (NCMEC) runs the CyberTipline which fields a huge volume of reports of Child Sexual Abuse Material (CSAM) — many of which come from the platforms.
Law enforcement faces three hurdles to better managing the huge volume of tips:
Platforms’ reports to NCMEC are low quality because executives don’t pay engineers to improve the reporting process, and there is little consistency or adherence to best practices because of trust and safety turnover.
NCMEC doesn’t improve because it can’t afford top talent, hindering its ability to leverage machine learning tools that could aid in triaging cases — linking related reports, matching case management interfaces, integrating external data sources, etc.
NCMEC operates in a bit of a constitutional grey area when it comes to the Fourth Amendment (federal appeals courts are split), and the risk of overstepping legally constrains it from better communicating to platforms what to look for or report, as doing so would risk turning the platforms into government agents, too.
The Report makes recommendations for how to solve these issues, including:
Platforms should invest dedicated engineering resources in implementing the NCMEC reporting API and ensure there is an accurate and (where possible) automated process for completing all relevant fields.
To avoid state actor concerns, an NGO that is not NCMEC should publish the key CyberTipline form fields that platforms should complete to increase the likelihood that law enforcement will be able to investigate their reports.
Congress should give NCMEC more funding and extend the required preservation period (after material is handed over to NCMEC) from 90 days to at least 180 days, but preferably one year.
SCOTUS should resolve the split in authority and clarify whether the private search doctrine requires human review by platform personnel before law enforcement may open a reported file without a warrant, or whether the doctrine is satisfied where a reported file is a hash match for a previously viewed file.
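For readers unfamiliar with the term, a “hash match” just means the reported file’s digital fingerprint matches one computed from previously identified material, so no person has to open and view the file itself. A minimal, purely illustrative sketch (invented placeholder hashes; real pipelines typically use perceptual hashes such as PhotoDNA so that re-encoded or slightly altered images still match, rather than this simple exact-match check):

```python
import hashlib
from pathlib import Path

# Illustrative placeholders only: a set of hashes of previously identified files.
KNOWN_HASHES = {
    "9f2f-example-placeholder-hash-1",
    "c41a-example-placeholder-hash-2",
}

def sha256_of_file(path: Path) -> str:
    """Hash a file in chunks so large files never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_hash_match(path: Path) -> bool:
    """True if the reported file's hash matches previously identified material,
    without any person having to open and view the file itself."""
    return sha256_of_file(path) in KNOWN_HASHES
```

The legal question is whether that automated match alone satisfies the private search doctrine, or whether a human at the platform must have actually looked at the file first.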
TL;DR from Academia: The Jonathan Haidt Hype
A few weeks ago, NYU psychology professor cum public intellectual Jonathan Haidt published an article in The Atlantic with a provocative headline: “End the Phone-Based Childhood Now.” The piece draws from Haidt’s latest book, The Anxious Generation, and both argue in no uncertain terms that the smartphone-based “environment in which kids grow up today is hostile to human development.”
The book and its findings have been pretty heavily criticized. A review in Nature called his claims “not supported by science [and w]orse, the bold proposal that social media is to blame might distract us from effectively responding to the real causes of the current mental-health crisis in young people.” (Haidt had a decent, but still inadequate, response to that here.)
This week in The Daily Beast, long-time tech journalist Mike Masnick extends that critique. He pokes holes in Haidt’s data, exposes fundamental biases in his framing, and dismantles the policies Haidt calls for, showing they would not actually solve the problems he claims exist.
TL;DR:
Haidt’s findings are grounded in major confirmation bias. The (small percentage of) kids who have nobody in the real world to turn to, turn to people on social media, and these kids are generally (surprise, surprise) those with poor mental health. As such, Haidt’s research provides a scapegoat for shitty parents to feel better about themselves without changing their poor parenting.
The data and studies he relies on are cherry-picked. For example, Haidt makes no mention of two recent studies from the Oxford Internet Institute of nearly 1 million people across 72 countries “that showed no direct connection between screen time and mental health or social media and mental health.”
Haidt’s policy takes are lazy and bad. He is a proponent of bills such as the Kids Online Safety Act, which researchers like danah boyd — who has studied kids’ health online since before it was cool, two decades ago — have long argued would be damaging to the privacy of not just teens generally, but of the teens who likely need privacy the most, such as LGBTQ+ kids seeking community online.
Haidt’s hype seems self-serving, a way to sell books. The experts are right that the science simply doesn’t support the boldness of his claims, and where his claims aren’t bold, they’re self-evident. You don’t need to review 300 longitudinal studies to know 9-year-olds shouldn’t have a phone. Overclaiming causation and certainty in science is nothing new, but it is extremely dangerous if it leads to bad policy. That’s the real injury that Haidt creates.
NEXT KLONICKLES PREVIEW:
Why has Europe failed to build big tech?
It has been thirty years since the technology revolution began in Silicon Valley. And while California is still the epicenter of the tech industry, many other dominant technology players have started in different places around the United States in the intervening decades.
But what is remarkable is how innovation in the tech industry is still largely an American phenomenon. In particular, despite decades of effort, Europe has failed to build any large technology companies. But why?
A number of scholars and economists have debated this question since 2013 and offered very different explanations. I summarize the three best arguments in the next newsletter.
Thank you to Margo Williams for help reading, summarizing, and analyzing these links and articles.