The End of the Golden Age of Tech Accountability
It was voluntary, and it wasn't enough, but it's more than we'll have for the foreseeable future.
I did a panel Wednesday with Justin Hendrix, the CEO and Editor of Tech Policy Press, for All Tech Is Human’s Tech & Democracy: A Better Tech Future Summit. One thing we talked about was what to make of all the huge changes that have happened in the last six months in the economic landscape of Big Tech. This is an essay version of my response to Justin’s prompt: Was 2021, in retrospect, a heyday for trust and safety online?
We’ve all seen the headlines: Elon buying Twitter and taking it private; the huge swaths of tech worker layoffs; the declines in tech revenue projections. If you don’t like Big Tech, these might seem like good things. Maybe it even just feels good to see that these Goliaths are indeed vulnerable; it sure hasn’t felt that way for the last ten years.
But feelings are misleading, and unfortunately, as far as I can tell, this is not a good thing at all. While the economic downturn might have lowered the absolute economic power of Big Tech, it has not hurt the relational power of Big Tech. By relational power I mean that, while stock valuations and revenues might be lower overall, tech companies still have massive comparative power in markets, infrastructure, politics, and our individual lives.
What the tech downturn has hurt, however, is the soft power of technology civil society: the employees, journalists, researchers, academics, and individuals who work on technology and social welfare issues. Massive revenues and largesse can give companies the freedom to depart from strictly self-interested economic models and dabble in social-planner projects, like funding a giant data-sharing initiative with academics, a $300 million content moderation appeals court, or internal teams that research how to curb extremism and misinformation across products.
But when the money stops, the first projects to lose funding are these social-planner initiatives. This is what is happening now, as seen in the much smaller headlines from the last six months: Twitter shutting down its researcher data-sharing program and turning off its API, Twitter decimating its Trust & Safety team, Twitter disbanding the Trust & Safety Council, Jigsaw cutting its hate and extremism research team to a “skeleton crew,” and Meta eliminating its AI ethics team.
Of course, these teams, projects, and partnerships took more than just money to build. They took years and years of advocacy, soft politics, horse-trading, and well-positioned good actors to bring to fruition, and then even longer to substantively build out and make effective. So even if we suddenly entered a new tech boom, or magically passed regulation to mandate their existence, it would again take years and years to bring them back.
Which is why, for all of the complaining we’ve done about Big Tech’s lack of cooperation with accountability, transparency, and research efforts, I unfortunately think we’ll look back on the last five years as a Golden Age of tech-company access and cooperation.
If you’re reading this going, “What the heck are you talking about, KK? GOLDEN AGE?! Everything has been AWFUL online for the last five years and it is only getting worse for democracy!” then let me explain, and I’ll start with a joke.
Everything Is Amazing and Nobody’s Happy
There’s this old Louis CK bit from 2009 on Conan where he talks about various kinds of technology, how much has changed for the better, and how everyone nevertheless complains:
I was on an airplane and there was high-speed internet on the airplane, which was the newest thing I knew about . . . and they go “you can open your laptop, you can go on the internet.” And it’s fast and I’m watching YouTube clips and I’m in an AIRPLANE.
And then it breaks down, and the guy next to me goes, “PFF this is BULLSHIT.”
Like, how quickly the world owes him something that he knew existed only ten seconds ago.
This captures, in a very reductionist nutshell, some of my feelings on the last few years of how the public acts about online technology and industry reform efforts.
In my mind there are two major misconceptions in the public perception of how Big Tech has harmed democracy and society and of what has been done about it:
Misconception #1: Social Media Started Harming Democracy and Society in 2016
Before 2016, here are some terrible things that were happening on the internet and some really smart people who were talking about them, in pretty high profile places:
random top-level decision-makers at technology platforms were making global decisions on freedom of expression and censorship at scale1
censoring the worst parts of the internet was done not by AI but by humans working in off-shore call centers2
kids were bullied online, sometimes to the point of suicide3
people posted misinformation and nation-states ran disinformation campaigns4
stalking and hate speech proliferated5
child sexual abuse material and terrorism were huge issues for platforms6
Depending on how much you read, you might have been aware of all of these things or only some of them. But I think it is fair to say that these were relatively niche issues that were not commanding political agendas or headlines before 2016. Almost all of the links in the footnotes are to well-regarded sources, but they are magazine articles or academic books, not news coverage.
It really wasn’t until early 2017, following the 2016 US Presidential election, that public awareness of the power of technology companies over the public sphere and democracy exploded. In particular, public awareness of the harms that technology companies perpetrated in the public sphere exploded. (I use the term “public awareness” on purpose, because how conscious people are of harms is both more measurable than some absolute number of harms that exist and more likely to predict popular protest and agitation.) For instance, fake news and misinformation online were certainly happening at significant scale both before and after the 2016 Presidential election, but awareness of them as a social issue didn’t emerge until after the election. For example, here’s a graph of the Google News frequency of the term “Fake News” over the last 7 years (that sudden uptick is December 2016):
But it’s not just that the public is only now discovering tech harms and falling victim to a bit of an availability bias; it’s that they’re also not aware of what’s already being done and why.
Misconception #2: There’s Been No Regulation, So Tech Companies Have Done Nothing Since 2016 to Address These Harms
The public awareness around the bad effects of Big Tech was accompanied by calls to address these issues with regulation (to ban misinformation, election interference, disinformation, online bullying, data sharing, etc.). Nevertheless, in the intervening five years, no meaningful federal regulation around any of these issues passed in the United States.7
But what did happen in those five years is that some of the big technology companies voluntarily cooperated with efforts to create better governance policies and more partnerships with researchers and outside stakeholders.
These voluntary projects happened for a couple of reasons:
Bad press created reputational harm and put pressure on publicly traded companies to ameliorate brand damage with social-planner initiatives (even if they were just performative).
The continued Big Tech boom made such social-planner initiatives relatively small costs compared to the volume of revenue.
Tech companies voluntarily addressing harms through such initiatives staved off regulators.
Social-planner-minded people joined the tech companies to actually try to create change from the inside.
The result is that, contrary to the near-constant stream of bad press that technology companies have gotten in the last five years, they have actually done more in that period, for the right reasons or the wrong ones, to create transparency, engage stakeholders, and give access to academics and researchers than ever before.
If we drew a make-believe-science-graph it might look like this:
Conclusion: The Last Five Years Were Better for Big Tech Reform Than We Thought. . .
But that doesn’t mean it was enough or we don’t need regulation
I want to be clear what I’m NOT saying by calling the last five years a Golden Age:
I’m not saying that tech companies couldn’t have done more, or that all their initiatives were meaningful or great. Big Tech absolutely could have done more, and it still can. And of course many of the projects created during this time period were either flawed or were empty ESG brand-rehabilitation efforts.
I’m not saying self-regulation is enough. In fact, I’m trying to point out just the opposite. The loss of these social-planner programs the second there was a blip in revenue shows us that we can’t trust companies to be properly incentivized to run them on their own. We absolutely need regulation to mandate the types of programs that existed and developed over the last five years.
What I’m trying to point out is that some good things actually were happening while we were so angry, and that their disappearance now won’t be fixed by more anger. Please note that in those five years in which technology companies chose to voluntarily engage, public outrage was at its peak and government failed to pass any legislation that might have mandated such continued good efforts by tech companies.
My point here is somewhat uncomfortable: maybe some of the blame here is on us. Politically, we have missed a huge opportunity to formally mandate these types of voluntary initiatives by technology companies. Or, even barring regulation, socially we have failed to create an environment that acknowledged and encouraged the social-planner efforts so that they could continue long enough to improve and normalize.
My optimistic note is that while the last six months have been a truly massive blood-letting of these types of social-planner projects, it has shown us (at least a little!) what works and what doesn’t. And it’s not too late. Some of those lessons already exist in pending legislation (like the Platform Accountability and Transparency Act!), and passing it can make these kinds of governance and safety initiatives a permanent part of tech.
KKPS is a more personal and whimsical part of the newsletter with things I’m reading, watching, thinking about, and cute pictures of my blind dog. It’s for paid subscribers only!
Today’s KKPS contains lessons in growing your own oyster mushrooms; fun new readings from Heidi Tworek, Susan Benesch, Brett Frischmann, Rebecca Wexler, and Daphne Keller; an amazing tomato candle and more!
Please consider subscribing to read and support the free substantive content above.