The 2023 Evil List of Tech Companies
What does it mean to be an evil tech company, anyway? And who would be on it now?
Programming Note: This is the third installment of The Klonickles, a weekly newsletter focused on online speech and law & technology plus a crazy Kate post-script (see below) for paid subscribers. Thanks for reading and thanks double for pay subscribing. The more you do, the more I have the freedom to research and write.
“The Evil List” of 2020
Almost exactly three years ago — January 15, 2020 — Slate published an article that surveyed people’s feelings on which technology companies were doing the most harm.
They called it “The Evil List,” which is about as subtle as a sledgehammer. But despite its questionable and reductive framing, I have thought back to it often over the last few years. At the very least, it is a fascinating snapshot of the zeitgeist at a moment in TechLash history . . . or at least of what a particular group of academics, journalists, activists, and other experts in technology thought were the most harmful technology companies at the start of 2020.
I contributed to this list at the time, naming Baidu — the Chinese multinational corporation that runs the second-largest search engine in the world and suppresses information and surveils Chinese users for the state — as the worst. But few at the time agreed with me, and it ranked only 28th most “evil.”
Here’s how those surveyed for the piece ranked tech companies three years ago, starting with the most “evil”:
ByteDance (parent company of TikTok)
Huawei (world’s largest telecom equipment manufacturer/second largest smartphone manufacturer)
LiveRamp (data broker)
Tencent (Chinese telecom and electronics giant that owns WeChat)
23andMe (genetic testing company)
Anduril Industries (AI defense firm)
Megvii (AI facial recognition)
Vigilant Solutions (surveillance AI and analytics)
The Grid (the loosely connected public/private network of electricity providers)
Baidu (giant Chinese search engine and cloud service)
Cellebrite (forensics company that hacks devices)
mSpy (stalkerware manufacturer)
There is a lot to say about this list. But let’s start by addressing the obvious issues with definition.
What makes something an evil company? Hell, what makes something a technology company?
I have no idea what is meant by “evil.” Do they mean a company acting with evil intentions? Creating evil incentives? Generating evil outcomes? Evil for democracy? For workers? For individual harms? For consumers? For society generally?
It’s hard to know, but the article gives this loose description:
We mean ills that outweigh conveniences. We mean temptations and poison pills and unanticipated outcomes.
And just as it’s impossible to really define “evil,” it’s perhaps even more difficult to define a “technology” company. Which might explain why DISNEY is on the list, alongside Facebook, Exxon, and IBM.
This reminds me of something my friend Molly Brady, a professor at Harvard Law School who studies the history of property and land use, once told me when we were in grad school together and I was prattling on in a workshop about “disruptive technology.” I am paraphrasing, but essentially she said:
The term disruptive technology doesn’t mean anything. All technology is disruptive. Indoor plumbing was disruptive technology: existing homes were ripped up to install it; installing city sewers tore up streets; laws were passed to mandate it in new construction. But no one thinks of a faucet or a toilet as technology anymore. All technology is disruptive, and it is only technology until it normalizes.
I love this point and I think about it all the time. It’s an idea I’ll probably circle back to in a future Substack, because it also gets to a fundamental question of what the role of law is regarding technology and norms. But for now, I’ll just drop it here as part of the necessary “big picture” critique of the very idea of something like an Evil List of technology companies being definitionally possible. This is a stupid feelings thing, not a real empirical exercise. And sometimes there’s value in measuring stupid feelings, just for posterity and later contemplation.
Updating the Evil List for 2023
Ok, so the entire concept of The Evil List of Tech Companies is absurd on its face, but it does capture something phenomenological about that moment in time and makes it fun to ask: what would be on the list now?
I am not an entire news organization. I don’t have a staff of journalists, or any real time or funding to do this properly, so the methodology for my Evil List 2023 update is . . . spurious.
Basically, around noon today, I texted and emailed about five dozen people who were listed in the credits for the article or who are friends/colleagues and similar types of experts. I asked them to take a look at the original Evil List and let me know who they thought would still be on the list, who would be off, and who they would add now.
Here’s what they told me.
Who is off or much lower on the Evil List
“I wouldn't even put Facebook in the top 5 to be honest,” Electronic Frontier Foundation’s Jillian York told me.
“I don’t see hosting speech as creating the same material harm as displacing people, ruining the environment, and functionally assisting with murder,” York clarified (she thought Amazon and Palantir were the most evil).
Others gave more functional reasons that Facebook/Meta had ceased to be a threat: it had actually done a decent job cooperating with stakeholders and responding to concerns. But perhaps most saliently, it just didn’t seem to be the market force it used to be.
Alex Stamos, former Chief Security Officer at Facebook and head of Stanford Internet Observatory, laughed when I mentioned The Evil List (“Hah, I remember that”) and then gave this take:
My position is the media's focus on Cambridge Analytica and Russian trolls was never empirical or evidence-based and distorted their view of tech, and specifically FB, for years.
It’s worth reading this interview in Vox from 2019, where Stamos expands on this idea and the supply-and-demand problem of disinformation.
In fact, of the 50+ people I contacted, only one named Meta as the most evil: Matt Yglesias, who wrote:
Imagine a friend told you their New Year’s resolution was to spend less time socializing, less time sleeping, less time exercising, less time focusing on work or school or childcare, less time reading books and more time scrolling their Facebook or Instagram feeds. You’d think that was obviously bad news. And yet all the people plugging away at Meta to try to make their apps more “engaging” have to be crowding out something. And whatever that something is almost has to be something more worthwhile than scrolling Facebook and Instagram feeds. I think in some ways this company attracts a lot of unfair criticism from people who don’t have a realistic sense of how hard it is to do content moderation in a reasonable way. But it also doesn’t attract enough criticism for the fact that its core business is inducing people to compulsively waste their time.
Who isn’t on the Evil List that should be
When I first sent this question to people, the first five replies that came back all said almost exactly the same thing:
How is NSO Group not on this list?
NSO Group, for those who don’t know, is an Israeli cyber-intelligence firm that created and sells Pegasus, perhaps the most dangerous, invisible, and, yes, evil spyware in existence. NSO will sell Pegasus to literally any person or state, no matter how authoritarian its power or rampant its human rights abuses.
And here’s the bad news: after lawsuits, fines, and terrible press destroyed its business, NSO seems to be trying to save itself from financial ruin by selling more of its notorious spyware to the worst of the worst.
As one respondent summarized NSO's damage:
[The r]evelations about the use of its spyware to target activists, dissidents, and journalists have been terrifying. Worse yet is the fact that it's part of a thriving global spyware industry with few legal safeguards in place to protect users' privacy.
In January of 2020, Slate listed the Chinese company Megvii — one of the then-dominant facial recognition AI companies — at No. 25 on its Evil List.
Interestingly, just four days after the Evil List was published, the New York Times’s Kashmir Hill broke the story of Clearview AI, a small U.S. startup that was selling its insanely powerful facial recognition software to law enforcement. In the three years since, Hill and others have continued to report on just how scary — and ubiquitous — this software is. Take, for example, this recent report from NBC on how security at the Rockettes show in New York City used facial recognition software to identify and kick out a woman attending a performance with her daughter’s Girl Scout troop, because she happened to be an attorney at a personal injury firm that was in litigation with Madison Square Garden.
As Adam Conner, vice president of technology policy at the Center for American Progress (and an early paid subscriber to this Substack! Thanks, Adam!), recapped:
The unauthorized and behind the scenes data scraping, the selling of services to law enforcement, the unrepentant CEO . . . Clearview AI has it all.
But its evil success is really a reflection of American society’s choice not to regulate this kind of data collection, not to differentiate between kinds of scraping, and not to restrict its purchase or use by law enforcement or anyone else. It’s a failure because almost everyone who hears about Clearview AI thinks it should be banned or heavily regulated, and instead it just keeps growing.
Who on the Evil List continues to be nefarious
Ranked #1 in 2020, Amazon was still at the top of many people’s lists. It was by far the most-cited company, and for the most reasons. Seemingly, Amazon was terrible across every possible metric: the devastating environmental impact of its packaging and delivery, the damage to local businesses and neighborhoods, terrible labor practices, consumer surveillance through Ring technology, the deathblow to traditional book publishers, its continued sale of suicide kits to kids, and of course its anti-competitive tactics with third-party sellers.
And of course, there’s just the raw power of the company. As Robyn Caplan, a senior researcher at Data & Society, said:
Their surveillance of workers down to the second, the way that logic is being outsourced to customers as well (D&S report by Aiha Nguyen and Eve Zelickson), and their anti-unionization efforts. Add to this that they have fulfillment centers distributed all over the US, giving them diffuse power over local governments (in addition to centralized power federally), and it gives them a lot more influence than some of the other tech companies.
They tend to put their fulfillment centers in lower-income areas as well, not only creating environmental issues for those communities but also giving them a lot of power when threatening to leave.
“Obviously, Palantir” was the answer I got from over half-a-dozen people I queried, which was then quickly followed by “but that’s on background, please don’t quote me.”
Apple is a black hole of siloed workers that pushes out policies and products giving it near-total control over its customers — from the hardware, to storage, to the operating system, to applications and software, to the market for new applications and software.
“Apple,” a friend once joked to me, “is the North Korea of tech companies.”
And in my opinion, the very worst part of all of this is that the company has exactly zero accountability and no interest in cultivating it. Apple’s communications team must have the easiest job in the world, because its approach to any privacy scandal, products-liability claim, labor issue, or negative report is always the same: it simply doesn’t respond.
Or as one person told me via tell-tale green texts:
I can’t stand Apple. Part of it is the hypocrisy of doing business in China, but mostly it's convincing a generation of consumers that dumbed-down appliances and closed platforms are the way to go.
I’m interested to know readers’ reactions to both the old list and the informal survey I did today. Leave your comments below and I’ll mention the best in next week’s newsletter.
The Kate Klonick Post Script is more general musings, pictures of my dog, and other things I collect around the internet. It’s meant to be fun, personal, whimsical, but smart. Starting in February, it will be subscriber only! So please consider subscribing.
Weird Plants I Love
For some reason my begonia maculata is going totally nuts right now, and has absolutely exploded with new leaves. They’re also called polka dot begonia or angel wing begonia. But they’re really just a very fun looking plant.
My string of pearls succulent is also bubbling out over the edge of its container.
I also love my staghorn fern, which was the topic of a lovely Modern Love column recently. They are so gorgeous when mounted.
This Week’s Brain Worm Term
Do you ever have a term, phrase, idiom or word that just gets stuck in your head like a bad song? Sometimes it’s a word that you hear and then for some reason gets stuck in your conversational RAM and the cache won’t clear and so you just keep repeating it across conversations. Or just a weird turn of phrase or idea you learn from one domain that then seems relevant metaphorically to all these other areas.
I don’t know what you call it, but I’m going to call it Brain Worm Term, and the one I have in my head right now is Minimum Publishable Unit or MPU.
The term generally applies to publishing in science academia, which typically has more competitive standards for tenure, measured by something called an h-index (a stupid and complex algorithm that essentially measures an academic’s impact and productivity through the number of published papers and citations to those papers over time).
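For what it’s worth, the h-index itself is simple to compute: it’s the largest number h such that an author has h papers with at least h citations each. A minimal sketch in Python (the function name is my own):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    h = 0
    # Sort citation counts from most- to least-cited paper.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the author's top `rank` papers each have >= rank citations
        else:
            break
    return h
```

So an author whose papers have been cited [10, 8, 5, 4, 3] times has an h-index of 4: four papers with at least four citations each. The incentive problem the MPU describes falls out of this directly — slicing one experiment into several small papers raises the paper count, which is the only lever short of attracting more citations.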
Because of the pressure to generate as many papers as possible from any given amount of lab or empirical work, there is pressure to not put all of one’s observations from ONE set of data or ONE experiment into ONE paper, but instead to slice off the minimum publishable unit to generate as many unique papers as possible to drive up publication rate and citation counts.
Tenure and impact in legal academia doesn’t exactly work by peer review or H Index, but it does present itself in different ways:
First, there really are only so many ideas or arguments you can put into one law review article and still have it hang together. So trying to split off tangents and other big ideas from a law review article into more law review articles is common — but of course law review articles are long, typically 50-60 pages or 20,000 words. So “minimum publishable unit” takes on a new meaning.
Second, you can have a big idea that you argue in a law review paper and then split a smaller observation or summary of it also into an essay, and then an even smaller argument into an op ed — in order to get the most utility from one unique idea.
Third, it occurs to me that I am emphatically not achieving this with this newsletter. I probably am writing and reporting far too much for a 98% free Substack, but I don’t know what to do about it or whether I want to do anything about it. Maybe the MPU doesn’t apply in this context, especially if I think these ideas are worth just getting out there and slowly building a user base. But also maybe I’d be better off trying to write less and/or publish it in venues with a larger audience.
In any event, the term MPU keeps popping into my head the longer this newsletter gets. If you, dear reader, have thoughts or suggestions please feel free to chat with me about them here:
Nena had some major surgery last week and isn’t looking her best, so this week’s picture is a throwback to one of her sexier beach days.
And a huge shout out and thank you to my friend Sue Glueck and her wonderful cat, Tatters, who provided much needed good vibes and moral support for a hard 36 hours.
Who I’m Remembering: Aaron Swartz
Today is the 10th anniversary of the tragic passing of Aaron Swartz.