Insight is precisely what Musk's strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something almost every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight; we just call them corporations. Corporations don't operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn't reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what "good" means with "whatever the market decides." [...]
There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be "friendly," meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook's and Amazon's goals were aligned with the public good. But I shouldn't be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you'd do during the zombie apocalypse is more fun than thinking about how to mitigate global warming. [...]
There's a saying, popularized by Fredric Jameson, that it's easier to imagine the end of the world than to imagine the end of capitalism. It's no surprise that Silicon Valley capitalists don't want to think about capitalism ending. What's unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.
Companies are supposed to have a clear Purpose, and it is no huge surprise that the world's biggest companies actually do have one. The moment somebody decides the only important thing is "shareholder value" is usually a bad sign for the company's continued success, and even its existence. Large companies with very vague and broad purposes are the exception rather than the rule; Bunzl is the example that comes to mind. It's a big industrial conglomerate. It does a bunch of things that obviously somebody must do, none of them so exceptionally badly as to have come to your notice, none so well as to be bywords for anything much. It never stays in any particular industry long enough to become synonymous with it. Apparently right now it's a "distribution and outsourcing company", whatever that is; once upon a time it made cigarette filters.
In people we recognise this idea of Purpose as a Good thing. Most people think it's good that our host spent his Internet fortune trying to keep SF's night life alive, rather than on a vast mansion or on ludicrous "disruptive" ideas that focus on the "break things" even more than the "move fast", with no real concept of where they're going anyway. Some people despair more at Trump's apparent lack of Purpose than at all the things and people that get smashed after each tantrum.
But that takes us back to why AI doomsday scenarios arise. Having a Purpose doesn't make you Good. Converting everything into paperclips is a Purpose. Bringing about the Apocalypse is a Purpose. One of the ways in which Rabbit is the Good Guy in Rainbows End is lack of Purpose. Rabbit is a child playing, it even takes the attempt to destroy it in relatively good humour.
Nota Bene: Ted Chiang's short story 'Story of Your Life' became the movie Arrival.
Read Meditations on Moloch (Google it) for a much (much) longer exegesis of this mechanic. I don't think it's a disease peculiar to SV, VC, or even tech.
In so far as the market has a will and computes something, it's a lot like an AI already, and has always been. It relentlessly optimizes itself, and ultimately seeks to take humans out of the value chain - humans are too expensive and unpredictable. If it can figure out a way of removing human consumption as a driver of the marketplace, the economy will finally be able to eliminate humans altogether.
Meditations on Moloch covers some of the sources of the mechanic, but fundamentally it's a philosophy and shared belief system that keeps us locally self-interested in the system's self-perpetuation. Hardly a surprise, since any system that doesn't self-perpetuate dies sooner or later; what is surprising is how many people are wedded to ideologies fixated on means rather than ends, e.g. that markets are good, rather than that outcomes are good.
I don't know how to escape it. You can't even stop playing, realistically.
> In so far as the market has a will and computes something, it's a lot like an AI already, and has always been.
A common claim about the optimality of market prices is that the market is the best information processor, having at hand the information held by the party you're doing a deal with as well as your own. That's a comment about market making or arbitrage, but it often gets spun into a form that sounds like 'you always get the best price' -- which isn't true, because I'm often the chump in the deal who gets the worst price.
This novel first made explicit the idea that corporations are already artificial intelligences, complete with rights, privileges, and responsibilities.
Valentina: Soul in Sapphire (Mass Market Paperback, October 1, 1984), by Joseph H. Delaney and Marc Stiegler.
Valentina, an artificial intelligence program come to life, and her creator, Celeste Hackett, a shy college student and computer genius, are menaced by an unscrupulous lawyer and two computer wizards hired to destroy Valentina.
it's funny how many people read that Chiang piece and think: "yes, the corporation that issues my paycheck isn't just legal window-dressing on the desires of some phenomenally greedy people to enrich themselves and their friends but actually an amoral abstract intellect beyond all human concerns!"
I think the abstraction is useful, if it gets people to the right conclusion in the end.
It is true that at some level corporations are made up of people, and those people should be held accountable for their actions, but the hierarchy and the laws are constructed specifically to make it easy for them to abdicate responsibility in service of what they have been told are the "needs" of the company. And not just CEOs: everyone in the middle, too.
If you want to make the people who compose the bodies of corporations act like humans and not like organs, here are two good ways to start: 1) eliminate corporate personhood; 2) establish and readily deploy corporate death penalties: when corporations break the law, revoke their charters like someone giving out candy.
But hey, now we're really into science fiction territory. Let's dream about something more realistic, like posthuman superintelligences.