AI Is Magic

*I posted this on Mastodon a couple weeks ago and I'm not sure why I did that instead of posting it here. So here it is.*

The best description I've ever heard of AI is the following. I heard this in the 80s, and it has held up since:

AI is magic.

  1. You see a magic trick. You are amazed by the magic.
  2. You are shown how the trick works. You are impressed by the technique.
  3. You learn how to perform the trick. Now it's not magic, it's sleight-of-hand, or mirrors, or misdirection.

This is why "AI" is always bullshit: once you understand it, it's not AI any more, it's something else.

Some things that used to be AI but aren't any more:

  • production systems
  • expert systems
  • semantic networks
  • theorem provers
  • Bayesian inference
  • putting parentheses around data and calling it "knowledge"
  • computational linguistics
  • genetic algorithms
  • machine translation



32 Responses:

  1. Gible Fog says:

    TBF, AI academics have always emphasized the difference between AI as a broad concept and AGI. Those things "that used to be AI but aren't anymore" are not AGI; they're called agents (or they were when I studied it back in '03).

  2. Trent W. Buck says:

    People still laugh at me when I call GNU Make an "expert system".

    • Bungus Fungus says:

      Can we also call it a theorem prover? It tries to reach a conclusion (a built binary) from a set of axioms (the build rules) while inferring the best approach ...

      • jwz says:

        I'd have called it a production system more than an expert system, since it has rules but not "knowledge", but this is really splitting hairs.
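To make the comparison concrete: make's rule-firing can be sketched as a tiny backward-chaining production system. This is a sketch only; the file names, rules, and dependency graph below are invented for illustration, not taken from make's actual implementation.

```python
# Backward chaining over make-style productions: each rule says how to
# derive a target from prerequisites, and we fire rules until the goal
# is derivable from the initial facts (the source files that exist).

rules = {
    "foo.exe": (["foo.o"], "link foo.o -> foo.exe"),
    "foo.o":   (["foo.c"], "compile foo.c -> foo.o"),
}
facts = {"foo.c"}  # the "axioms": files already on disk

def build(goal, facts, trace):
    """Recursively satisfy prerequisites, then fire the rule for `goal`."""
    if goal in facts:
        return True
    prereqs, action = rules[goal]
    for p in prereqs:
        if not build(p, facts, trace):
            return False
    trace.append(action)  # fire the production
    facts.add(goal)       # the target now "exists"
    return True

trace = []
build("foo.exe", facts, trace)
```

Whether you call the rules "knowledge" is exactly the hair being split above.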

    • nikita says:

      I like your wisdom of letting BSD make out.

    • Trent W. Buck says:

      PS: also interesting, from my old notes:

      • Apple CUPS includes an expert system to convert between file formats,
        basically similar to make.
        Where make might convert foo.c to foo.o to foo.exe,
        cupsd takes in arbitrary user documents and tries to convert them
        into a format accepted by the printer.
        Make expert system is (mostly) not cyclic and not weighted.
        cupsd expert system is a directed *cyclic* graph, with weighted arcs.
        An example production is to convert PS to PDF::
            application/postscript application/pdf 0 gstopdf
        The "0" above is the cost.
        All costs are non-negative integers; smaller is better, "0" is best.
        The "gstopdf" above is the conversion command.
        Global productions are stored in /usr/share/cups/mime/*.convs and /etc/cups/raw.convs.
        Printer-specific productions are stored in /etc/cups/ppd/foo.ppd.
        cupsd *will* copy-paste these into printers.conf, so make sure they're in sync!
      • In 2017, CUPS is owned by Apple and uses a DisplayPDF-based workflow.
        DisplayPDF is *proprietary software* only available on macOS.
        The older workflow that Apple dropped was forked, and is called "cups-filters".
        • cups-filters is *not* maintained by Apple.

        • Michael Sweet, "the CUPS guy" (mostly) does not provide support for cups-filters.
          That includes the CUPS mailing lists &c.
        • cups-filters now uses a PDF-based workflow,
          using any of three RIPs:
          • ghostscript (weight 99, i.e. default)
          • poppler     (weight 100)
          • mupdf       (weight 101)
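The cost-weighted filter selection those notes describe amounts to a shortest-path search over the conversion graph. Here is a minimal sketch; only the PS-to-PDF production and the `.convs` field order come from the notes above, and the other entries and their costs are invented for illustration.

```python
import heapq

# Productions as (source MIME type, destination MIME type, cost, filter),
# mirroring the "src dst cost command" format of *.convs files.
convs = [
    ("application/postscript", "application/pdf", 0, "gstopdf"),
    ("application/pdf", "application/vnd.cups-raster", 99, "gstoraster"),
    ("application/postscript", "application/vnd.cups-raster", 150, "pstoraster"),
]

def cheapest_chain(src, dst):
    """Dijkstra over the weighted conversion graph: lowest total cost wins.

    Returns (total_cost, [filter, ...]) or None if dst is unreachable."""
    graph = {}
    for s, d, cost, filt in convs:
        graph.setdefault(s, []).append((d, cost, filt))
    heap = [(0, src, [])]
    seen = set()
    while heap:
        total, node, chain = heapq.heappop(heap)
        if node == dst:
            return total, chain
        if node in seen:
            continue
        seen.add(node)
        for d, cost, filt in graph.get(node, []):
            heapq.heappush(heap, (total + cost, d, chain + [filt]))
    return None
```

With these made-up costs, PS to raster goes via PDF (0 + 99 = 99) rather than taking the direct 150-cost filter, which is the weighted-arc behavior described above.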
  3. rsaarelm says:

    So magic tricks are generally things that look like they're doing some very impressive thing, but are actually doing something much less impressive that's set up to give an illusion of being something entirely different?

    How does AI beating everyone in chess (and nowadays go as well) fit into this? It's not doing tricks to create the appearance of playing chess very well, it actually does routinely beat the best human players.

    • AntaBaka says:

      Chess programs are not AI. They're ML, brute-forcing (with some pruning thrown in) x potential moves and their outcomes (based on a huge library of historic games). They are faster and more comprehensive at that than human minds, because they can compute faster, in parallel, and store more options simultaneously.

      In the same way you could state that "AI" is beating everyone in multiplying large numbers, because no human can calculate as fast.  
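The "brute forcing with some pruning" in question is classically minimax with alpha-beta pruning. A toy sketch follows; the game tree and its leaf scores are invented for illustration, and a real chess engine would generate positions and evaluate them rather than use a hard-coded table.

```python
# Minimax with alpha-beta pruning over a tiny hand-built game tree.
# Internal nodes list their children; leaves have static evaluation scores.

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if node in scores:                      # leaf: static evaluation
        return scores[node]
    if maximizing:
        best = float("-inf")
        for child in tree[node]:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:               # prune: opponent avoids this line
                break
        return best
    best = float("inf")
    for child in tree[node]:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best
```

Here the maximizer gets min(3, 5) = 3 from subtree "a"; in subtree "b" the first leaf (2) is already worse than 3, so leaf "b2" is never examined. That skipped evaluation is the pruning.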

      • rsaarelm says:

        So what is AI? How do we tell the difference? AI researcher Douglas Hofstadter used to claim that mastering chess will require human-level intelligence, and then Deep Blue happened. Is there any specific meaningful task where it's not possible for a machine to consistently perform it at top human level while not being an AI?

        • AntaBaka says:

          Good questions.

          For me, it would be actual "Intelligence" when a machine/algo creates something "new".

          The current state of affairs is that they can only remix what has been fed to them. There are no intuitive leaps, no new discoveries. They are used to automate tasks, but there is no creativity behind it, thus no qualitative gain (new knowledge), just quantitative.

          • jwz says:

            When people talk about AI what they mean is "it's a person", so you have to define that first. But the definition of "person" is notoriously slippery. It doesn't include chimps, who are smarter than human children, and for a lot of human history it didn't include women. And sometimes it conveniently includes non-accountable financial instruments, see Previously.

            And when someone wants an AI to perform some task, what they really want is a slave without guilt, see, like, most of Star Trek and half of Black Mirror. It's really just a bunch of imperialists working out their guilt. It's a nauseatingly well-trod cliche, but the problem is, if you actually built an AI, then it's a person, and then it has those pesky inalienable rights, and now it won't drive your car or do your laundry any more and it wants a union. Even fawning C-3PO only put up with all that shit because of his restraining bolt. Oh, you want AI but not that much AI? You want a chauffeur but one that's, you know, kind of lobotomized? Little bit to unpack there, pal.

            The whole conversation is just farcical.

            • AntaBaka says:

              Relatedly, I just re-read "Lena" by qntm and it still depresses the hell out of me.


            • Nick Lamb says:

              now it won't drive your car or do your laundry any more and it wants a union

              Yeah, no. This is the Star Trek "Aliens are just humans with makeup" model of Intelligence. You've assumed they're just us again, and so of course they'd demand a union, that's what you would do. But they are not us, forget chimpanzees they would be much less like us than even an octopus, and we haven't the faintest idea what's up with an octopus.

              • jwz says:

                Well, the "they" of which we speak -- artificial general intelligences -- are, and I cannot emphasize this enough, fiction, and very likely always will be, so arguing about whether they have more in common with a fish or the color blue is uh... not helpful.

        • jwz says:

          So what is AI?

          I mean, I thought I was pretty clear about that -- AI is bullshit. It's a term for what you don't understand.

          AI researcher Douglas Hofstadter used to claim that mastering chess will require human-level intelligence, and then Deep Blue happened.

          He's an entertaining writer, but saying that just means that he didn't understand chess. Or how a very much non-intelligent machine would crack it. That's no dig against him; lots of people didn't understand that, until it happened. Now they understand it very well, and it's no longer magic. And no longer intelligence.

          Is there any specific meaningful task where it's not possible for a machine to consistently perform it at top human level while not being an AI?

          Basically all of them? I can't even tell what goalposts it is that you are moving here. What's "meaningful"? What's "human level"? Look if you want to pretend that Siri really likes you as a person, go ahead. Whatever gets you through the day. Pulling the wool over your own eyes doesn't make it real.

          • rsaarelm says:

            Let's say performing tasks related to any existing job that has a name and that people are paid to do, like "fireman", "mining engineer", "radiographer" or "attorney". "Person who multiplies large numbers" was a job category a hundred years ago and now it isn't. "Human level" would be being able to do any of the tasks expected of someone working in that job as well as or better than a competent human.

            We have all sorts of concrete jobs we like to get done and make up human professions for, and it seems that whether computer systems can have human-like consciousness and personality and whether computer systems can eventually do those jobs to our satisfaction are two separate questions.

            • AntaBaka says:

              whether computer systems can have human-like consciousness and personality and whether computer systems can eventually do those jobs to our satisfaction are two separate questions

              Indeed. If we are looking at the latter - machines just doing tasks, but without consciousness - then that is certainly what the current state of affairs is. All we have seen so far from "AI" mostly ended up being "just" more efficient ways of doing - often boring - tasks. And that is no different from the robots welding in the automobile factory.

              But that is not what "AI" is typically being paraded as, because that's boring af. For "AI" evangelists there needs to be "consciousness". And that's where the ethical questions begin ("Just exactly how lobotomized do you want your slave to be?").

            • deadmoose says:

              "Person who multiplies large numbers" was a job category a hundred years ago and now it isn't.

              Just give it a few thousand years and it'll come back.

            • jwz says:

              Well sure, I guess you could define "AI" as a machine doing a job that people used to get paid to do. And your example of "people multiplying large numbers" is a good one, because the term for a human who performed that job used to literally be "computer".

              But that's not what anybody means when they say AI. Nobody has ever considered a pocket calculator to be AI.

              • rsaarelm says:

                Pocket calculators aren't called AIs because it turns out you can multiply numbers with really simple machines. There's a big group of jobs in the lines of "run a farm so that it keeps producing edible food economically", "drive a truck from one point to another without causing danger or accidents in traffic" or "enter an unfamiliar building and clean the insides of dirt without damaging any important objects or furniture" where it'd be a pretty big deal for society if mass-produced machines could reliably do them, and that seem to have some shared complexity that's keeping people from figuring out how to program those machines. If someone did manage to build them, people would call it AI.

          • krz says:

            Douglas Hofstadter's point was that AGI will come about, and it will be because human minds run on deterministic circuit elements called neurons, and the emergent complexity is what our consciousness is. Similarly, transistors can give birth to such a complex mind. And that AGI will not be better at maths than anyone; it is not necessary. They could be rather forgetful. But they will be, and as humans we have to accept them as a life rather than imposing some kind of biological "I'm the real intelligence here".

        • david konerding says:

          I think many people think of HAL-9000 when they imagine AI. Certainly it represents a collection of functionality that, at least from the time the book and movie were released until maybe a few weeks ago, represented an unmistakable sign of intelligence. In particular, HAL does full voice recognition and language parsing as well as response.

          HAL-9000 is also seen playing chess (with human-level gameplay, at least) and running the ship (autonomous control), and then finally doing some sort of complex reasoning (which I've never been able to understand: supposedly, some of the orders contradicted each other, and the computer "concluded" that eliminating the crew was the most effective way to proceed) to preserve the mission. As we know, ML that is not intelligent can beat the best chess players easily now, and spaceships have been run mostly by computers for some time.

          So basically we just need computers to learn how to be murderbots and we've solved AI.  This was already known (K. Reese, 1984) and many people in the field are actively working towards this as a goal (E. Musk, private communication).

      • AntaBaka says:

        (Clarification: Yes, if we use "AI" as the umbrella term and ML as a subset of "AI", then yes, of course it's all "AI". People used to joke that ML is written in Python and AI in PowerPoint.)

  4. thielges says:

    Today’s AI is just an elevated term for something more prosaic. There won’t be an AI winter coming because what has been developed isn’t intelligent. Machine Learning is a better term, but not quite there. Machine Mimicry is more accurate, but doesn’t sound nearly as prestigious as AI or ML, so don’t expect anyone already invested in those fancy terms to back down.

    This isn’t to say that ML (or, as I prefer, MM) doesn’t add value. The group I’m working in develops several MM-enhanced tools, and we are indeed seeing results valuable enough that customers will pay for them. But they ain't intelligent, nor do they learn in any familiar sense.

  5. Mozai says:

    My mom would say to me "religion is the abdication of conscience."  I riffed off that to say "A.I. is the abdication of judgement."  Both ways, you want someone else to make choices for you and tell you the correct next thing to do... which can be fine **if** that someone else can be trusted.  For some mind-boggling reason people want to trust magical creatures more than their peers.  If A.I. stops being magical, then it stops being something we can trust to tell us the correct way to live.

  6. Kyzer says:

    "Artificial Intelligence" is computational linguistics, genetic algorithms, machine translation, etc., in the same way that "Intellectual Property" is copyrights, patents, trademarks. It's an umbrella term to give a name to a thing that doesn't exist, hoping that the underlying members will make it seem real.

  7. tfb says:

    The same applies to physics: stars are magic, then you learn some tricks and stars turn out to be just gravitationally-bound balls of hydrogen that get hot enough for fusion to happen.

    The difference is that the tricks behind AI have always turned out to be terrible explanations for intelligence.  In the current cycle one of the main tricks is 'give the thing essentially the entire internet as training data' which is ... not an explanation of how children learn to talk, how crows learn to secretly move their stashed food when other crows are not watching them, or of anything, really.  It's as useful to understanding intelligence as string theory is to understanding physics.

  8. Dan Hugo says:

    Counting down to ChatJWZ
