"You were saying something about... best intentions?"

We Built A Broken Internet. Now We Need To Burn It To The Ground.

Twitter and my design shop, Mule, used to be right across the hall from each other in a run-down shitbox of a building in San Francisco's SOMA district. We were friends with a lot of the original crew that built the platform. They wanted to build a tool that let people communicate with each other easily. They were a decent bunch of guys -- and that was the problem.

They were a bunch of guys. More accurately, they were a bunch of white guys. Those white guys, and I'll keep giving them the benefit of the doubt and say they did it with the best of intentions, designed the foundation of a platform that would later collapse under the weight of harassment, abuse, death threats, rape threats, doxxing, and the eventual takeover of the alt-right and their racist idiot pumpkin king. [...]

Twitter never built in a way to deal with harassment because none of the people designing it had ever been harassed, so it didn't come up. Twitter didn't build in a way to deal with threats because none of the people designing it had ever gotten a death threat. It didn't come up. Twitter didn't build in a way to deal with stalking because no one on the team had ever been stalked. It didn't come up. [...]

We designed and built platforms that undermined democracy across the world. We designed and built technology that is used to round up immigrants and refugees and put them in cages. We designed and built platforms that young, stupid, hateful men use to demean and shame women. We designed and built an entire industry that exploits the poor in order to make old rich men even richer.

If your reply is that we didn't design and build these things to be used this way, then all I can say is that you've done a shit job of designing them, because that is what they're being used for. These monsters are yours, regardless of what your intentions might have been. [...]

The machine we've built is odious. Not only can we not participate in its operation, nor passively participate, it's now on us to dismantle it. It was built on our watch and it needs to burn on our watch. When a platform we designed and built to connect people across the world is used to doxx the parents of murdered children, and the people who run it refuse to do anything about it; when they refuse to fix it because they don't see it as a problem; when they attempt to justify the profits with which they're lining their pockets as values, we need to burn it to the ground.



59 Responses:

  1. Mark Kraft says:

    It's... odd... that all these venture capital funded, stock-billionaire dotcoms can't do a fraction of what those of us on LiveJournal did to create systems to empower users to fight abuse, long before there was any money or real profit motive in what we did at all.

    Maybe all their execs should step down and let people from the community take over?!

    • James says:

      To be fair, Twitter reached a couple of orders of magnitude more active users over a similar time frame, and there's evidence they are doing better than YouTube and Reddit, who nominally put more effort into their anti-abuse measures. What isn't clear to me is what keeps email from suffering far more from these issues. On the other hand, I wouldn't be surprised if a lot of the worst abuse seen out in the open on social media is coordinated over email.

      • Aidan Gauland says:

        I've been thinking for years now that putting something like SpamAssassin on these services would be a great start. Right now they have only reactive defense measures in place, and nothing proactive (as far as I can see).
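        (For concreteness, here's a toy sketch of the kind of score-based filtering SpamAssassin does, applied to harassment instead of spam. The rules, weights, and threshold below are invented for illustration; a real system would need far better rules and human review.)

```python
# A toy SpamAssassin-style scorer: each rule a message trips adds a
# weight, and the message is flagged when the total crosses a
# threshold. Rule patterns and weights here are made up.
import re

RULES = [
    (re.compile(r"\b(kill|die)\b", re.I), 3.0),   # violent language
    (re.compile(r"[A-Z]{10,}"), 1.0),             # long all-caps runs
    (re.compile(r"(.)\1{5,}"), 0.5),              # character flooding
]
THRESHOLD = 3.0

def score(message: str) -> float:
    """Sum the weights of every rule the message trips."""
    return sum(w for pattern, w in RULES if pattern.search(message))

def is_abusive(message: str) -> bool:
    return score(message) >= THRESHOLD

print(is_abusive("have a nice day"))         # False
print(is_abusive("I hope you DIE DIE DIE"))  # True
```

        The point of the score-plus-threshold design, as with spam, is that no single rule has to be perfect; weak signals combine, and false positives on any one rule stay cheap.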

    • Vlad Poutine says:

      Which is great, except the Russians bought LiveJournal. And one can only assume they used its rich veins of pathos to build some kind of Machine Learning AI designed to prey upon The West.

      I mean, that's what I'd do if I were in their position.

      I agree, though. The Internet was a bad idea. At least, outside of Academia. Given to everyone it just serves as an immediate plague vector for the meme versions of the Hegemonizing Swarm's Vomit-Plague that converts human bodies into mindless piles of razor-sharp bone fragment and filamentous wings driven to only devour, mindlessly in moaning agony for all time.

  2. Christoph says:

    There's the thing about hindsight always being 20/20.

    Back in the day we really believed that the internet, ubiquitous communications, and universal access to information would make the world better. Yes, we were really that naive. Now... we're old, and we've failed. And there was no savepoint.

  3. MattyJ says:

    Now might be a good time to remind everyone that within the past year, Jack Dorsey vacationed in Myanmar. You're trusting your online identity/reputation to this asshole.

  4. jon says:

    Maybe I'm making assumptions, but are they saying a bunch of nerds never got harassed? Why do we assume being white and male makes you immune to harassment? I grew up in a time where nerds got picked on constantly, maybe that's not the case for the twitter guys, I don't know. But that's a heck of an assumption to make.

    • jer says:

      No, just that they had assumptions about the ways they would be harassed, and not about the ways other people would be harassed, likely based on assumptions of what freedom of speech would do for themselves, and perhaps not for others. Catch all your errors.

      • jon says:

        They specifically said the people designing it had never been harassed.

        >Twitter never built in a way to deal with harassment because none of the people designing it had ever been harassed, so it didn't come up.

        • jwz says:

          Dude. "A football player was mean to me in high school" doesn't count. You are literally demonstrating the point the author is trying to make.

          • jon says:

            So the argument is, as a white male, you couldn't possibly be harassed enough, but a woman or minority would have? As someone who had a friend commit suicide in high school over bullying, I guess we'll just have to agree to disagree.

          • Rene says:

            Hi Jamie. I think the issue is the use of absolutes to make a point. Everybody's experiences, and how those experiences affect them, are individual things; membership in any particular race or group doesn't exclude or include anybody in those experiences. I think the quest for more understanding and tolerance for some shouldn't lead to intolerance toward, and trivialization of, the experience of others.

            Victimization shouldn't be a competition about who is or isn't more affected. We should all be able to agree that this is a problem and that those in a position to address the problem should. Overly broad statements do little to further that aim because they only serve to polarize people against each other.

            You both have valid points and I can see how your views are molded by your individual experiences. If it helps in any way, harassment and bullying are things even in environments where there is racial and/or sexual homogeneity. There is always going to be some measurement of another group's lack of worth by those looking for an outlet for their aggression.

            Part of the solution is to universally encourage inclusion and tolerance of others. It pains me to see the trivialization of another person's suffering merely because their particular group is seen as less deserving than another. The act of deciding that somebody's experience isn't of the type worthy of such interventions is precisely the problem that in my perception the article is talking about. It shouldn't be necessary for the members of the Twitter administration to deem someone's suffering to have sufficient merit for those tools to exist. Harassment of anyone for any reason is wrong, and the tools should exist to allow everyone to participate in online experiences without being subject to unwanted aggression from others.

            We can all do better. Hope you are well.

          • mdhughes says:

            A gang of the hooligans/unevolved apes in my school tried several times to kill me, because to them anyone smaller and nerdier was obviously gay and needed to be killed, and I escaped by luck and speed. Not much happened from the administration or police, because those are made up of older hooligans. Some white boys do in fact get harassed. You know the "believe the victims" thing? That applies here, too, white boy.

            I don't make systems that let anyone harass anyone else; when I worked on an MMO the only chat allowed between strangers were friendly stock phrases, and even private chat was filtered and could be flagged. Zero tolerance for that.

            But Monteiro's not serious, he's the kind of troll who'd be first banned on any socially responsible system.

            • jwz says:

              "I have also had bad things happen to me, so aren't I the real victim here?" is never a good look.

              • Chris says:

                That's not the point being made and that's a fatuous response.

                • tfb says:

                  It kind of should be the point being made. No-one is claiming that bad things have not happened to members of set x, for almost any x: the claim is that for some sets, bad things are far more likely to happen to their members than for others. You can only know which those are by actually looking at the statistics; picking individual examples is easy but meaningless. And those statistics are available, and, oh look, these platforms were indeed substantially designed by members of sets whose members rather rarely have bad things happen to them, and who chose not to pay much attention to how the platforms might work for members of sets much more at risk than they were. (Not least, of course, because there are also statistics about the sets whose members are more likely to do the bad things, and the intersection of those sets with the set of people who built these platforms is nowhere near empty.)

                  Picking individual examples and drawing conclusions from them is like saying 'it's unusually cold today, so global warming must be a hoax'. You have to do the statistics.

                  (None of this is denying that bad things do happen to members of all sorts of groups and that they are, well, bad.)

          • Mark Kraft says:

            nods

            If you want to be shocked as a male, find an attractive female friend who is using a relatively innocuous dating service, such as OkCupid, and take a look at what they have to deal with just to use it.

            When it comes to the internet, there is literally no comparison between what women and men have to go through, as far as experiences are concerned.

      • lol wtf says:

        Nah they probably just weren't gigantic fags.

  5. Mozai says:

    Kinda like how memcached, Magento, MongoDB, one-touch wifi, anything setup.php, and a legion of other tcp-listening software don't need extra protection? I don't use them unsafely, so why should I believe anybody else will?

  6. Bob Preston says:

    ha ha u r gay

  7. Nick Lamb says:

    The promise of the internet was that it was going to give voice to the voiceless, visibility to the invisible, and power to the powerless.

    No. There was no such promise. The Network just moves bits. I'm sure if I dig I can find a similar rant for the printing press too. The fact that we suck is independent of our technologies, it was not caused by and cannot be remedied with them.

    An insistence that technologies are or should be the solution to social problems is at best gravely misguided and at worst actively pernicious. Occasionally, less often than engineers might wish, technological change is one component of a good solution to a social problem, but invariably you need to start with the much tougher parts that aren't technical at all.

    I'm not a fan of the "punch a Nazi" approach because violence doesn't have a good track record, but it has a lot more going for it than "design all your tools to try to prevent Nazis from using them" in terms of whether it might actually stop there being so many fucking Nazis.

    This reminds me of the initial reaction (and a continuing background noise) to Let's Encrypt. The Web PKI cares whether this is http://www.jwz.org or not. It isn't in the least bit interested in whether http://www.jwz.org is a "good" or "safe" web site, or whether the things written on this site (by JWZ or by commenters) are good things, child-friendly, Christian, compatible with the will of Allah, uphold the Queen's Peace, etcetera etcetera. But a surprising number of people, not all of them with even an obvious commercial motive, suddenly became convinced that it had in fact "promised" to verify all of these things and more and so Let's Encrypt fell short and must be stopped.

    If you want to "give a voice to the voiceless, visibility to the invisible, and power to the powerless" you aren't even in the right industry. Alas as an American your best bet to actually do anything about those things is in Washington, and if you shuddered at the thought too bad.

    • MrSpookTower says:

      Jeeze, that right there is a textbook demonstration of narrow mindedness. Congrats on your auspicious "feat."

    • Andrew Dalke says:

      John Perry Barlow certainly persuaded many that there was such a promise. Quoting his "A Declaration of the Independence of Cyberspace":

      "We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth."

      "We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity."

      There are 1617 citations to it on Google Scholar. He was "The Thomas Jefferson of Cyberspace," exclaimed Reason in 2004, though academic writings say things more like, quoting Aimée Hope Morrison from 2009: "Ten years after its original publication, the Declaration is both widely reprinted and increasingly mocked: its language has become commonplace and its idealism has come to seem absurd."

      • J Greely says:

        That would be the same John Perry Barlow who thought the best way to protest the Republican National Convention in 2004 was "periodically erupting into wild and inexplicable explosions of dancing"? That's some Deep Thinking, right there.

        -j

        • Andrew Dalke says:

          And cattle rancher, lyricist for the Grateful Dead, and Fellow Emeritus of the Berkman Center for Internet and Society at Harvard Law School? Yes, one and the same. Amazing how people can have many facets, isn't it?

          That doesn't mean that people didn't buy into his promise of the Internet. Nor was Barlow the only one making that sort of promise.

          • J Greely says:

            The only one of those that sounds even remotely impressive is "cattle rancher", assuming it's hands-on; there's no room for magical thinking at the wrong end of a cow.

            -j

            • Andrew Dalke says:

              Even white boys knew to allow users to block accounts with a practice of making inane egocentric comments. You didn't even bother to read his Wikipedia page before casting public doubt on Barlow's cattle-ranching experience. Nor do any of your ad hominem statements have any bearing on the historical record that many in the 1990s believed in the promise of the Internet as a way to "give voice to the voiceless," etc., in part because of Barlow.

          • Mark Kraft says:

            I find Barlow's enthusiasm for the promise of the internet more honest, human, and realistic than Timothy C. May's "Crypto Anarchist Manifesto" from 1988, where he says, quite accurately, that "The State will... try to slow or halt the spread of (cryptography), citing national security concerns, use of the technology by drug dealers and tax evaders, and fears of societal disintegration. Many of these concerns will be valid; crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded. An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion. Various criminal and foreign elements will be active users of CryptoNet. But this will not halt the spread of crypto anarchy," summarizing cryptography as "the wire clippers which dismantle the barbed wire around intellectual property."

            Only to say "Arise, you have nothing to lose but your barbed wire fences!"

            Sure, your data can be hacked, your information stolen and sold without your permission, your identity and credit cards can be shared and cloned, your bank account drained, your democratically elected leaders undermined or even killed by anonymous, conspiring criminal/foreign interests... but don't worry, embrace the chaos.

            Ultimately, we need more smart, tech-savvy, idealistic people engaging with power and holding it accountable, based on a mandate of the people, in a way that protects the public. Given the choice between an idealism based on good intent and good acts, and an outspoken racist and ideologue like May talking about how much better it would be to impose an ideologically unrepresentative, criminal, troll-fueled chaos dominated by the '1337, rich, and corrupt, give me the idealist, any day.

  8. NT says:

    Is there an actual suggestion here? It seems like the idea is to have some system to automatically police online content, with the large companies that mediate digital communication both obliged and empowered to stop objectionable content. Is that right?

    • nooj says:

      Yes. The suggestion, as explicitly stated in the article, is to burn Twitter to the ground, and to design a communication tool that treats harassment of users as an explicit design parameter, with ways of addressing it built in.

      For example, perhaps Twitter could avoid having everyone scream in the same room. Give them different rooms to scream in, like Google Groups. Or don't force comments to be vapidly short, which was a fairly needless restriction even at the time. It should be harder to create a troll account with a lot of followers and then sow political discord with it; it should be harder to create and use blocks of thousands of accounts at once.

      The point is that we need to design things in advance with the thought "How could people abuse this, and is there a way to make abusing it hard?"
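      (As one sketch of what "make abusing it hard" can mean in practice, here is a minimal sliding-window rate limiter of the sort that makes registering thousands of accounts at once expensive. The window size and limit are made-up numbers, and the source address is hypothetical.)

```python
# Sliding-window rate limiter: each source (say, an IP address) may
# create at most MAX_SIGNUPS accounts per WINDOW seconds. The numbers
# are illustrative, not recommendations.
from collections import defaultdict, deque

WINDOW = 3600.0     # seconds
MAX_SIGNUPS = 3     # signups allowed per source per window

signups = defaultdict(deque)  # source -> timestamps of recent signups

def allow_signup(source, now):
    q = signups[source]
    while q and now - q[0] > WINDOW:   # forget events outside the window
        q.popleft()
    if len(q) >= MAX_SIGNUPS:
        return False                   # over the limit: refuse
    q.append(now)
    return True

# The fourth signup from one address inside an hour is refused;
# once the window slides past, signups are allowed again.
print([allow_signup("198.51.100.7", t) for t in (0, 10, 20, 30)])
print(allow_signup("198.51.100.7", 4000))
```

      A limiter like this doesn't stop a determined attacker with many addresses, but it raises the cost of the laziest bulk abuse from "a for-loop" to "an infrastructure project," which is exactly the kind of friction designed in advance.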

      • NT says:

        You'll want some kind of identity system then, so that anything said online can be tracked back to a photo ID or credit card of some sort? Possibly a phone number, though you'd also want to crack down on burner phones in that case.

        • nooj says:

          No, I don't. What I meant by Google Groups was the thing Google Groups used to be, the old Usenet. There was no identity verification, and a way to handle abuse was to have the server not carry offending messages.

          It was distributed, not centralized like Twitter, so individual servers declining to carry offensive messages made an effective system for limiting abuse. Most servers had reasonably competent moderation, so users had a good experience. And people couldn't complain about having their platforms removed, as with the "de-platforming" problems on YouTube and Twitter: their words were still out there to be consumed by motivated parties, but it was a lot harder, and so almost nobody did.

          Look, the point is that if designers got together and tried to solve problems of abuse, they could. There are solutions out there that don't involve surveillance panopticons or hate speech and graffiti spraypainted on everything.

          Just because you can't imagine a better world doesn't mean we can't find one together.

          • NT says:

            I actually remember Canter and Siegel. There was very little moderation on USENET and tons of stuff that would be considered harassment by today's standards. The only reason it worked as well as it did was that you had to be admitted to a top 100 school to have access, and abuse would be handled by university staff. So yes, identity verification.

            > Just because you can't imagine a better world doesn't mean we can't find one together.

            I think the difference of opinion is that you can't imagine a worse world. So you recommend that we burn everything to the ground and you'll work out the details as we rebuild.
            Can we see some of your previous work first? Because I've worked with that kind of architect before.

            • nooj says:

              Usenet was available to anyone with broadband (including public libraries) long before it died.

              No, I don't recommend shit. I was explaining the premise of the article to you because you apparently can't figure it out on your own.

              And if you're alluding to the CADT thing by talking about 'that kind of architect', the crux of CADT is that existing problems aren't addressed in the new system. That's obviously not the position the authors take.

              • NT says:

                The fundamental position that you are defending (sorry, explaining to me) is that the internet turned out badly because of the Designers, not the Users. As support they have a list of ways that the internet has turned out badly, and a bunch of vague proposals, some of which were tried and failed. In real life, nobody limited anybody to 140 characters: people just prefer Twitter to stodgier publishing platforms. I don't like it either, but you have to be a hipster to see the guiding hand of Designers behind this. Or desperate for a scapegoat, I guess.

                For a more articulate presentation of a similar idea but blaming Coders instead of Designers, try this:
                https://www.nytimes.com/2019/04/01/books/review/clive-thompson-coders.html

      • Jonny says:

        So you want people in different rooms, long-form posts, and personal verification. You literally want Reddit with IDs.

        There are so many posts expressing anger at what we have, but I see extraordinarily little talk about what SHOULD be done, other than telling some corporation to "fix it", or else the government will "fix it" for them. With "fix it" being defined as "make it all go away, run the process with incorruptible angels, and never get it wrong".

        I mean, Facebook got shit for not pre-cogging that the New Zealand shooter was going to murder a bunch of people, and that was despite taking less than an hour to pull the video.

        Everyone is angry, but no one knows what they actually want.

        • jwz says:

          Did we read the same article?

          The thesis was very clear to me: "What we built is shit. Burn it to the ground."

          Maybe you can't imagine a world without Facebook and Twitter, which are (checks notes) fifteen years old.

          • BHN says:

            You can call for it but I don't think anyone is going to burn Facebook or Twitter to the ground. I don't use either one but until they are the next LiveJournal they're not going away.

            Perhaps someone just needs to build the better thing that will make them obsolete? I think building a system that will preemptively prevent anyone from being offended or harassed is a pretty tall order.

            I'd like to hear the thesis on what should be built after Facebook and Twitter are burnt to the ground, because it's not at all clear. Unless you know what the next step is, what you'll get after you burn Facebook and Twitter to the ground (somehow) is another Facebook and another Twitter.

            nooj seems confident that it is a solvable problem on a technical level, but I have doubts about that. I certainly don't think it is obviously a solvable technical problem. There are certainly questions of free speech versus the right of people to not be harassed or offended, and that is always a trade-off. There will always be some who are not happy with the degree of trading off freedom of speech for security. Where is the happy medium? Once you get past threats of physical violence things get murky quickly, and you will never satisfy everyone.

            If you can engineer something that is on the individual user's end, like the good old killfile for Usenet, you might get somewhere because people could make their own decisions on what they're exposed to and everyone's thresholds could be different. You get rid of the need for the Panopticon approach. Make it a nice fine-grained extensible tool and you might be on to something really worthwhile.
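            (A user-side killfile of the kind described above can be tiny. The sketch below is illustrative: the field names and patterns are invented, and a real tool would want per-thread rules, regexps, and scoring too.)

```python
# A minimal client-side killfile: the reader, not the platform,
# decides what gets dropped. "plonk" is the traditional Usenet verb
# for adding someone to your killfile.
import fnmatch

class Killfile:
    def __init__(self):
        self.rules = []  # list of (field, glob pattern) pairs

    def plonk(self, field, pattern):
        self.rules.append((field, pattern))

    def keep(self, message):
        """A message survives unless some rule matches it."""
        return not any(
            fnmatch.fnmatch(str(message.get(field, "")), pattern)
            for field, pattern in self.rules
        )

kf = Killfile()
kf.plonk("author", "troll@*")
kf.plonk("subject", "*MAKE MONEY FAST*")

inbox = [
    {"author": "friend@example.org", "subject": "hi"},
    {"author": "troll@example.net", "subject": "u r gay"},
]
visible = [m for m in inbox if kf.keep(m)]
print(len(visible))  # 1
```

            Because filtering happens on the reader's end, everyone's thresholds can differ and no central Panopticon is required, which is the design property being argued for here.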

            The real problem being presented here though is that human beings in their behavior toward one another obey Sturgeon's Law and fixing them directly is a non-option.

            • jwz says:

              What you're saying sounds to me like "If you get rid of measles, someone's just going to make another measles. So what is your proposal for a better measles? Because until I hear that, I guess I'm just going to stick with old-measles."

              • BHN says:

                My proposal for a better measles is one that lets me filter the content of a site since the site filtering out 'bad' stuff for everyone is probably not a problem with a solution - in large part because we as humans can't agree on much of anything, let alone what is 'offensive' or despicable enough to ban it. Giving users individual control, if it could be done, would be a much better solution.

                I also think there is the potential for thought-policing once you put effective machinery in place for all of the major discussion platforms to filter out things that people post and begin mandating that platforms prevent people from being offended. It's the Fahrenheit 451 problem. How do we find a happy medium? I'm not arguing for not trying but I think it's a hard problem to solve and I don't think destroying everything and starting over is the answer. I do agree with creators of new and existing platforms working to make them harder to abuse. But I'm tempering that with caution on how much platform providers become the arbiters of what can appear online.

                I did clearly state that I don't use measles and so if you find a means to destroy it I'm A-okay with that. Ensuring that another measles hydra head doesn't just take its place is, I think, harder. Harder in part because a lot of people really want that trainwreck that is Facebook and Twitter and Reddit, etc.

            • McDanno says:

              "There are certainly questions of free speech"

              There is no question of free speech. Twitter (and Facebook) is a private company. They can choose to ban the harassers and the racists. They consistently choose not to. The whole "anti-Nazi mode" that kicks in if you set your locale to Germany or France puts paid to the idea that they can't solve this problem.

              I don't think the actual problem is that they value the engagement and advertising dollars the shitheads provide, but that's the perception given because from the outside nobody can tell the difference.

              Once you speak to folks who work at Twitter, though, one begins to realize that too many people there believe in an absolutist version of the "free speech" straw man. I've spoken to folks there and more than once have heard the response "but if we ban all these people they'll just go somewhere else."

              And my response is always, yes, that's the point. Herd the rats into the sewer, then fill the sewer with gasoline and set it on fire. No more rats.

              (I can't speak to Facebook, though for all intents and purposes it's difficult to tell the difference between malice and stupidity with them. Flip a coin.)

          • nooj says:

            The first thesis of the article is "What we built is shit. Burn it to the ground." Apparently y'all hate it cause you're fine with old-measles, or you think no fire can be big enough to burn it all out, or you think no one's willing to light enough matches, or you would agree but something something it's really hard. I think y'all are crazy pants, but I understand.

            The second thesis is "Dear designers: When you build stuff, at least try to make it hard to abuse." I don't understand why y'all are resistant to this one.

            • Nick Lamb says:

              For the first, I live in Britain, so forgive me if it's just much more apparent from here, but "What we built is shit. Burn it to the ground" is very, very stupid. A bare plurality of our electorate voted for this superficially attractive idea, and now look where we are.

              This thesis amounts to the blithe presumption that anything else must be better than this, which betrays a terrible lack of imagination and, frankly, plain illiteracy because plenty of people who do have imaginations have written down just how very much worse it could be.

              I have no real argument with the second thesis. You should try to make your arbitrary thing hard to abuse. This comes under what Raymond Chen would call "taxes": you have to do this work in any project that isn't just a proof-of-concept doodle, even though it's not fun. Not doing this work, like not paying your taxes, means you're an asshole.

              But, I will offer not a caveat but a cautionary note: Even when you try very hard to make your arbitrary thing hard to abuse people will anyway and you'll get the blame. Every successful project gets to be Milkshake Duck. It may well be that the best you can hope for is either to be quietly forgotten or to be so ubiquitous that people just wearily sigh when it's pointed out that you're problematic.

  9. jancsika says:

    Before burning it all to the ground, how about hard-working SV veterans take an exciting trip to a country minimally resistant to the U.S. copyright lobby and relax in the company of an inviting Sci-hub mirror?

    Imagine your dream vacation-- waking up refreshed each morning to serve millions of users without fear of any of them experiencing "harassment, abuse, death threats, rape threats, doxxing."

    Enjoy state-of-the-art torrenting software, complete with bandwidth limiting so you can continue posting long, meandering op-eds from the comfort of your own personal rackspace.

    Bask in gorgeous sunny weather with no chance of alt-right racist idiot pumpkin kings. Just the pure, crystal clear oceans of scientific journal articles-- a perfect getaway for any SF veteran or aging digital utopianist.

    Book your Sci-hub mirror today. The apocalypse will wait.

    • NT says:

      On closer examination you'll find that Sci-hub reeks of unexamined privilege and military funding.

  10. dan mca says:

    I really agree with this post.

  11. Phil says:

    A fantastic book (Invisible Women by Caroline Criado Perez) was published last month which looks at the way the data that drives policy and design decisions is largely missing women and the effect that has. Some of the most shocking stuff is in medicine (it's starting to be understood that men and women can have literally opposite reactions to some drugs but women are still considered a 'complicating factor' in drug trials so are often excluded) but it also looks at how a dominant male perspective has a big impact on technology design. Twitter is by no means alone in this.

    https://www.amazon.com/Invisible-Women-Data-World-Designed/dp/1419729071

  12. It's not just Twitter.

    In retrospect, "rough consensus and running code" seems not to have been sufficient.

  13. Carlos says:

    Wow. This post proves that the best way to attract trolls, abuse, and general knuckle-dragging bullshit responses is to post something arguing against being shitty to your fellow man.

    I've never seen this terrible a level of discussion on a jwz post.

    C.

  14. Unholyguy says:

    Alas, you cannot put the mushroom cloud back into the little silver canister once you let it out
