The Bullshit Fountain

danmcquillan: We come to bury ChatGPT, not to praise it:

ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. [...]

Of course, the makers of GPT learned by experience that an untended LLM will tend to spew Islamophobia or other hate speech in addition to talking nonsense. The technical addition in ChatGPT is known as Reinforcement Learning from Human Feedback (RLHF). While the whole point of an LLM is that the training data set is too huge for human labelling, a small subset of curated data is used to build a monitoring system which attempts to constrain output against criteria for relevance and non-toxicity. It can't change the fact that the underlying language patterns were learned from the raw internet, including all the ravings and conspiracy theories. While RLHF makes for a better brand of bullshit, it doesn't take too much ingenuity in user prompting to reveal the bile that can lie beneath. The more plausible ChatGPT becomes, the more it recapitulates the pseudo-authoritative rationalisations of race science. [...]

ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things. Instead of expressing wonder, we should be asking whether it's justifiable to burn energy at "eye-watering" rates to power the world's largest bullshit machine. [...]

Commentary that claims 'ChatGPT is here to stay and we just need to learn to live with it' is embracing the hopelessness of what I call 'AI Realism'. The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated. Saying, as the OpenAI CEO does, that we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper, is a form of nihilism. Of course, the elites don't apply that to themselves, just to the rest of us. The structural injustices and supremacist perspectives layered into AI put it firmly on the path of eugenicist solutions to social problems.

If the CEO of OpenAI thinks that you are a stochastic parrot, then that means that he doesn't really recognize you as a person. We have a word for that kind of systemic lack of empathy and that word is "psychopath".

Blake C. Stacey:

I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be *right*, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is *not to swim in it*.

Previously, previously, previously, previously, previously, previously, previously, previously, previously.

Current Music: Yello -- Domingo ♬

31 Responses:

  1. Dave says:
    5

    People seem to think ChatGPT is going to be able to do our jobs. Suppose your job is to spellcheck, and you have a dictionary to look up words, but it's only correct 50% of the time. You'd throw it away because it's useless. Without knowing where it's wrong, it's 100% unreliable. Suppose they spent years getting it up to 80%: still useless. 98%, 99%? Still useless. I suppose there are jobs where they don't expect you to be correct or accurate; I'm not sure what those are, other than writing spam pages to clog up Google search.
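
    To put rough numbers on that point, here is a toy simulation in Python (the document size and accuracy figures are placeholders, not measurements) of a checker that is right with probability p but gives no signal about which of its answers are wrong:

        # Simulate a checker that is right with probability p but never says
        # which answers are wrong: every error slips through silently, so all
        # n_words still have to be verified by hand.
        import random

        def silent_errors(n_words=10_000, p=0.99):
            return sum(random.random() > p for _ in range(n_words))

        for p in (0.50, 0.80, 0.98, 0.99):
            print(f"accuracy {p:.0%}: ~{silent_errors(p=p)} undetected errors per 10,000 words")

    Even at 99%, roughly a hundred errors per 10,000 words go by unflagged, which is why "more accurate but still unverifiable" never becomes "useful".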

    • Elusis says:
      22

      Health insurers and megacorps will gladly replace me (a therapist) with a bot that's only helpful or accurate some degraded percentage of the time, because 1) it will save a lot of money, and 2) they don't give a shit about actual people and their actual well-being.  Just like high-fructose corn syrup was a good-enough shitty facsimile of real food to feed the unwashed masses, BetterHelpBot (tm) will be a good-enough shitty facsimile of real therapy.  Negative externalities? That's someone else's problem.

    • Joe Shelby says:
      14

      "don't expect you to be correct or accurate" - well, there are plenty. Like writing commentary for right-wing news outlets like Fox. Or coming up with justifications for changing the alcohol rules in San Francisco. Or excuses for why there's no reason to ban self-drive cars after half-a-decade of accidents. The quantity of sewage in any of these is enough to fill the Hetch-Hetchy, and humans wrote that crap.

      The concern is never correctness or accuracy. The concern is comprehension. Can it generate enough bullshit that someone actually thinks they're saying something, or at least that the statements that follow the first sentence support it (ninth-grade expository writing all over again)?

      And here we get into two things: 1) people don't actually read every word (hint: they don't), and 2) people can be very gullible, both in thinking an AI might be human (Eliza all over again) because they're not looking for the right patterns, and in thinking an AI and all of its crappy output should be given the 'benefit of the doubt' (case in point, the law school prof who gave an AI legal brief a 'B' in spite of blatant math and grammatical errors).

      • granville says:
        3

        At one point I would have strenuously disagreed with you, but now I think you've barely touched the tip of the iceberg. There are huge sectors of our economy that are bullshit jobs dedicated to generating bullshit: internal comms, policy papers, tens of thousands of words of essays designed not to communicate information to the reader but to lure them to a site via SEO. This will absolutely aid in the mass production of bullshit, first on behalf of the bullshit jobs and then, when the C-suite figures it out, in replacement of the bullshit jobs, likely via a new form of information-sector offshoring. (Nobody will trust the truck to drive itself, and that's not really the main objective; the point is to replace the Teamster behind the wheel with his minimum-wage equivalent, who merely needs to "keep his hands on the wheel and stay alert," which sounds like driving but is different because technology. The point here is to replace the information worker with someone who works with information, but in a place with palm trees.)

        Today, Bed Bath & Beyond careened off bankruptcy by announcing it was raising $1 billion. How? From who? Well there's a prospectus explaining it, but almost nobody's been able to decipher it. It was the opinion of a former editor & Goldman Sachs weenie that it was unreadable by design. It's a public document not intended for public consumption, and so written in a manner that makes it so almost no one can understand it, or why anyone would invest in a company that has shut 50% of their stores in the last year (including nearly 20% on the same day this was announced!) without sleazy and legally tenuous financial chicanery going on. Here we have bullshit as a method of obfuscating probable crime. A bullshit generator would be amazing for this.

        Marketers are going on about ChatGPT writing books, forgetting that nobody reads books anymore and nobody in the business world cares about them at all. It will craft amazing marketing materials about "synergies" and "bleeding-edge technologies." Nobody reads these either, but there is a lot of money in pretending that they do.

        • thielges says:

          I don’t doubt that a company would deliberately obfuscate a prospectus, but part of its unreadability is due to it being a legal document, leading to precise, careful CYA legal jargon and awkward structure. When US Steel’s stock price (ticker: X) suddenly shot up and fell last week, I was curious about the cause and found they had just issued a press release about debt obligations that I could not make heads or tails of. Part of my befuddlement is that I’m neither a lawyer nor a banker. And I’m too lazy to decode that press release sentence by sentence.

          I do see the parallels in patent law though.   Every invention I’ve patented could be explained to someone else in the same field in a one or two page informal document.   Or five minutes at a white board.   But the patent docs themselves are dozens of pages long and bloated with jargon and redundancy.  They’re meant to protect the invention, not explain it.  

    • tfb says:
      3

      I think these things are vastly over-hyped: this is just another AI hype cycle.

      But the argument that "it's not completely reliable and you don't know when it makes mistakes, therefore it can't replace humans" is wrong, because humans aren't completely reliable and you don't know when they make mistakes either. There are lots of published numbers for the rate at which programmers make mistakes (these seem to vary from a mistake every 20 lines to perhaps one every 10,000 lines), many of which are probably spurious, but whatever the rate is, it's well above zero. Yet we use this code.

    • CdrJameson says:
      5

      It's just got to keep you on the page long enough to read the ads.
      You think Google's bad now? It's only just beginning.

  2. Doctor Memory says:
    4

    Loudly for the kids in the back: your brain is not a computer, and never was.

    • Nick Lamb says:
      1

      I was sort-of at least half hoping this was a ChatGPT-generated version of the rant, but nope, it's Robert Epstein, an actual human wasting their time by insisting, as these articles [and books; several whole books have been written on this subject] by psychologists usually do, that 1) the brain isn't a computer, because 1) the brain isn't a computer. It's a little worrying that they find begging the question intellectually satisfying enough to publish.

      Last century I took a class taught by a man who has thought a lot harder about this stuff and its implications for us, Stevan Harnad. Stevan thinks embodiment is crucial, which is a much harder position to discredit - does embodiment matter only in the very minor technical sense that the AI can't change the world if it doesn't exist in the world, or is it something much more fundamental? Harnad's Total Turing Test focuses on a very pragmatic result which I will no doubt now do a disservice by over-simplifying: if the machine can actually pass as one of us (not in some sort of chat room but in real life), then who are we to insist it's not one of us?

      Anyway, in the particular case of these current AIs like ChatGPT, the first half of that 2016 article gives the game away. They don't actually remember the huge volume of material they were exposed to; there just categorically isn't room. ChatGPT doesn't contain megabytes of PCM data from a CD of Beethoven's Fifth, or even kilobytes of written reviews, but it has opinions about it anyway, just like I do. The AIs have been somehow changed by their exposure, just as human children are changed by exposure to our media. And so we must conclude that the computer is, by Robert Epstein's understanding, ALSO not a computer. When their theories are torn to pieces in this way, mathematicians often learn something interesting, but in my experience psychologists like Epstein will double down and insist they were right anyway.

      • Doctor Memory says:
        2

        but it does have opinions about it

        Citation needed.  The Fountains of Bellagio spurt water continuously and aggressively; that does not mean that they care.

      • Jim says:

        They don't actually remember the huge volume of material they were exposed to

        Large language models do contain much of the full text information of the documents on which they were trained.  See the first paragraph of the Background and Related Work section on page 2 here.

      • k3ninho says:

        I prefer to invert the question of whether a system passes into one about whether I treat people and things outside myself well. Or fairly.

        That aside, the next step of asking about 'impact on the world' is already pretty important -- because reputations and the consequences of outcomes you cause have to matter for the world to be fair. The use of tools to amplify a person's BS to gain access to VC billions has to have consequences, and so does the use of tools to take and wield political power. If a bunch of humans overseas pretend to be in one country and use that pretence to win elections for their favoured candidate, you'd want the same consequences for a computer system that did the same. The treatment has to be fair.

        K3n.

  3. Eric TF Bat says:
    19

    I'm seeing ChatGPT described as "Mansplaining As A Service", and there's no truer statement.

    I read an attempt by ChatGPT to write and explain a small program in Forth.  With hardly any coaxing, it quickly got to the point of insisting that ">" is the Forth word to create a temporary variable.  It isn't.  It's the comparison operator - greater than, just like in every other language.  But it doubled down and basically invented a whole theory of temporary variables like something out of a 1960s toy language just to justify its initial assumption.  The actual program it wrote was complete gibberish. And yes, I do know enough Forth that I can state confidently that sometimes a Forth program isn't gibberish.
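
    For readers who don't know Forth, a minimal sketch of a Forth-style data stack (modeled in Python here, purely as an illustration) shows what ">" actually does: pop two numbers, push a truth flag. No variables, temporary or otherwise, are involved.

        # Toy model of a Forth data stack. ">" pops two numbers and pushes a
        # truth flag (-1 = true, 0 = false, per Forth convention). It does not
        # create variables of any kind.
        def run(source):
            stack = []
            for token in source.split():
                if token == ">":
                    n2, n1 = stack.pop(), stack.pop()
                    stack.append(-1 if n1 > n2 else 0)
                else:
                    stack.append(int(token))
            return stack

        print(run("3 5 >"))  # [0]  -- 3 > 5 is false
        print(run("5 3 >"))  # [-1] -- 5 > 3 is true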

  4. Eric says:
    4

    The whole internet loves AI Seinfeld, an AI-scripted comedian who tells jokes! *five seconds later* We regret to inform you the AI is transphobic.

    • CSL3 says:
      6

      No one ever loved AI Seinfeld, and the real Jerry Seinfeld is already transphobic (and prone to "wokeness is killing comedy" rants before he defends Bill Cosby and Louis CK).

      • Birdy says:
        6

        I remain alarmed at how many sitcoms from the past (and probably the present, but I don't really watch sitcoms anymore) have at least one episode with transphobia. When I was younger I hadn't really noticed how common the trope was; looking back at the same shows with adult eyes makes it a lot more obvious.

        • Elusis says:
          1

          "Dude in a dress!"/"Lady with a dick!" is one of the world's laziest jokes [sic] and thus terribly easy to throw in when writers [sic] want to get to the bar early.

          "Fat people are fat/lazy/disgusting/clumsy/ugly!" is another one.

          The suck fairy comes for an awful lot of alleged comedy once you start spotting both tropes.

      • Elusis says:

        Know Your Meme.

        • CSL3 says:
          1

          That meme is about people being fickle and eventually turning on something/one popular.

          My comment was about how Jerry Seinfeld (as much as I still love re-runs of the show) has always been an unforgivable asshole (and a fuckin' creeper).

  5. Duality K. says:
    5

    When ChatGPT first started making the rounds, I played dumb and made someone explain to me what it was, exactly, other than some nebulous "AI thing".  I read their response, and then read some interactions people had with it, and said "Oh, they invented MegaHAL again."

    It's shinier, and better funded, and more thoroughly trained, but it's the same shit. One was given away as a diversion, the other sold as the next great labor-replacing product. And it's energy-intensive. Put it over there on the blockchain, with the rest of the fire.

  6. BHN says:
    4

    Speaking with the appearance of meaning is one of the tasks that turns out to be surprisingly easy. People's predisposition to find meaning is so strong that they tend to overshoot the mark. So if a speaker takes care to give his sentences a certain kind of superficial coherence, and his audience are sufficiently credulous, they will make sense of what he says. (Paul Graham)

    We need to learn to give far less benefit of the doubt to stuff posted online, and not just because of ChatGPT and its kin.  Our instinct to give the benefit of the doubt is really working against us now and it's time as a species for us to adapt and learn from it - if we can.  Trusting most communications at face value has served us well but its time seems to be up.

  7. sean says:
    1

    Altman's definitely not on Team Human.  I listened to an interview with him where he, apparently a father, referred to teaching a "kid or AI or whatever," which kind of told me all I needed to know.  That said, a lot of people working at Filipino content farms will need to find a new line of work.  Yay, progress?

  8. phuzz says:

    The one singular use I have seen for an AI chatbot is generating plausible bullshit to waste the time of scammers. I suppose if you thought you were under surveillance you could use it in a similar manner (i.e., wasting someone else's time). Unfortunately I can also think of lots of 'uses' which are only of use in saving money for the company deploying it, and harmful for everyone else.

    Oh, and has anyone used this for dating yet? Because someone right now is probably using a chatbot to generate dating responses for them. I'm predicting a Twitter thread where they crow about how clever they are, followed by the inevitable backlash, within a month or so. I'm sure we can all imagine the kind of dickhead who would think this was a great idea.

    • グレェ「grey」 says:

      I dunno about ChatGPT dating, but circa 2004 someone sexed up an ELIZA bot for lulz, and the ensuing "jenny18" chat logs were enticing enough that some professed it was passing the Turing test. Subsequently, some industrious scammers augmented jenny18 even further, taking its precambrian-brain-triggering capabilities and getting it to ask for credit card numbers, with presumably lucrative enough results for such unethical individuals.

      I think it was around 2013 when angryskul/zb pointed out to me that someone had taken a Markov chain script, added in a lot of incendiary trollbait language, and unleashed it onto tumblr. The fallout that ensued was astounding. It doesn't take a lot of code to incite your average interwebs user to fall into a pit of despair thinking they're being trolled by idiots, rather than by a crontabbed artificial "intelligence" or some series of scripts feigning that much malice.
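
      For scale, a minimal sketch of that kind of Markov-chain script, in Python (the corpus file and the posting glue are placeholders): learn which word follows each pair of words in a corpus, then walk that map at random to babble.

          # Word-level Markov chain: map each pair of words to the words seen
          # following it, then emit plausible-looking nonsense.
          import random
          from collections import defaultdict

          def build_chain(text, order=2):
              words = text.split()
              chain = defaultdict(list)
              for i in range(len(words) - order):
                  chain[tuple(words[i:i + order])].append(words[i + order])
              return chain

          def babble(chain, n_words=30):
              state = random.choice(list(chain))
              out = list(state)
              for _ in range(n_words):
                  followers = chain.get(state)
                  if not followers:
                      break
                  out.append(random.choice(followers))
                  state = tuple(out[-len(state):])
              return " ".join(out)

          # "trollbait.txt" is a stand-in for whatever incendiary corpus you feed it.
          print(babble(build_chain(open("trollbait.txt").read())))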

      RFC 439, "PARRY Encounters the DOCTOR", is prior art in the realm of turning chat bots against each other, preceded by more fictionalized fear-mongering such as 1970's Colossus: The Forbin Project. From my vantage, the next AI winter can't come quickly enough.

      This sort of stuff is already being widely abused, but the older I get, the more I am of the dismal opinion that the myth of Daedalus and Icarus was a cautionary tale. Similarly, the Judeo-Christian commandment against graven images, "Thou shalt not make unto thee any graven image" ("לֹא-תַעֲשֶׂה לְךָ פֶסֶל, וְכָל-תְּמוּנָה"), was probably onto something too. I seem to recall reading or hearing about some purported genius in the Greco-Roman era who was a master at creating automata, which the ruling class smashed, presumably having judged what a horrific nuisance such things more often than not become.

      I admit that, even having been cautioned by scripture as well as the likes of MegaZone 23 and Macross Plus, the experience of attending a Hatsune Miku concert at the Warfield circa 2016 was surreal enough that it seemed worth checking out at least once. At least in that instance, the CGI Vocaloids had a live backing band playing real instruments. I try to tell myself that somehow justifies the expense, when real humans are homeless and starving to death, but I wasn't too far off from that myself, then or now.

  9. Tom says:
    6

    As a bullshit fountain, ChatGPT was incredibly effective at writing just the sort of bullshit required to fill in my annual HR review objectives. What in past years was an exercise in torturous writing was this year done in about 3 minutes, with bullshit better than any bullshit I've ever personally written for this task. This left me with mental energy and time for more productive tasks. I call that progress.

    • Eric TF Bat says:
      3

      Oh my! If this can be applied to answering selection criteria for job applications, it will change the whole game!  The biggest waste of time and brainjuice in the whole job-search process is filling out stupid boilerplate selection criteria that do nothing but reiterate what's already in any halfway-well-written CV.  I wrote an app to let me glue paragraphs of bullshit together to streamline it, but it was still like dragging a dumpster full of burning shit up a slight incline by tying it to your nipples with fishing line and walking backwards.

      But if people start using Google Plagiarism Tool to answer the criteria, advertisers will quickly realise there's no point asking for them, and they'll remove them from applications entirely.  Win for everyone!

  10. qarl says:

    If the CEO of OpenAI thinks that you are a stochastic parrot, then that means that he doesn't really recognize you as a person.

    naw... i think you just need to come to terms with the fact that people are only stochastic parrots.  any perceived superiority is an illusion.

    or, said another way, intellect is just linguistic manipulations.
