Elaaaan!!!
i spent a large part of my twenties (mid-1980s to mid-1990s) wandering/traveling/touristing. one of my favorite questions to ask random people i met: "if you could do it all over again, would you rather live 500 years ago, right now, or 500 years in the future?"
very broadly, though not inaccurately broadly: non-colonial-europeans from canada to tierra del fuego said 500 years ago. europeans were almost universally, "the past was shit the future is shit, right now is the best we can hope for." and usa americans were every one of them "500 years from now! omg the future is going to be sooooo special!!!1!"
i wonder how any of that has changed in the past 25 years.
Why, 500 years ago people were dying in vast numbers of diseases that are easily curable today OH WAIT
Maybe those people said the wrong prayers too, and back then ~~idiots~~ the faithful didn't have social media to use, reaching out to 'prayer warriors'.
What's a "non-colonial-european" and how do they differ from a European?
My hope is that Musk et al. take all of the billionaires up to Mars over the course of a year and then it's suddenly "oh no they all blew up oops"
because who needs 'em?
Also every time someone shows me (and i am someone who is deeply interested in such things) some novel or reference artificial-intelligence, machine-learning, or evolutionary-algorithm system, it's always a "toy". There's nothing doing, and that gives me hope. If someone showed me even the remote possibility that a machine could make actual decisions, i wouldn't even post this, because i would be in the middle of the ocean on a barge with a solar array and a ham radio. Preferably closer to the southern pole.
no offense.
I agree 100% with the broader sentiments in this thread (and other recent threads): Alpha-testing your fast-moving, fast-accelerating, fast-swerving, heavily-armoured autonomous robots on busy city streets is a sociopathic thing to do. And in general, Uber & Facebook are congenitally, intrinsically exploitative, sociopathic companies.
But -- no antagonism intended -- my money's on the Deep Neural Networks.
DNNs are a very effective framework for building software that learns from evidence (the "examples", or "training data"). Even before DNNs, there were a bunch of language-processing tasks in which early-2000s ML outperformed humans. And then DNNs came along and blew early-2000s ML out of the water.
To solve a problem with a DNN, you ("merely") need to work out which trainable filters suit a given problem; and then you collect lots of examples of that problem, along with the corresponding desirable outputs, in digital form. (Then you spend a bunch of GPU cycles on the training or transfer learning.)
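To make that recipe concrete, here's a minimal sketch in PyTorch (every name, shape, and hyperparameter below is an illustrative assumption of mine, not a description of any real system):

```python
# Minimal sketch of the "filters + examples + GPU cycles" recipe.
# All shapes and hyperparameters are invented for illustration.
import torch
import torch.nn as nn

# 1. Choose trainable filters suited to the problem (image-shaped here).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 16 learnable 3x3 filters
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # two outputs: "problem" / "no problem"
)

# 2. Collect digitized examples plus the desired outputs (random stand-ins here).
images = torch.randn(64, 1, 128, 128)            # 64 grayscale training images
labels = torch.randint(0, 2, (64,))              # the corresponding desired outputs

# 3. Spend the GPU (or, slowly, CPU) cycles fitting the filters to the examples.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                              # gradients flow into every filter
    optimizer.step()
```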
For an example of DNNs being more than toys: For at least a few years, DNNs have been matching (even outperforming?) humans at the task of squinting at X-rays/MRIs to detect medical problems like cancer. That's using image-shaped filters. And DNNs don't get tired, or sore eyes, after a long day of squinting.
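(The real radiology systems are of course far more elaborate; but as a toy illustration of the transfer-learning route mentioned above, one might start from filters pretrained on everyday photos and retrain only the final layer. The weights name and the two-class setup below are my own assumptions.)

```python
# Toy transfer-learning sketch: reuse pretrained image filters, retrain the head.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pretrained filters
for param in model.parameters():
    param.requires_grad = False                  # freeze the pretrained filters
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: "suspicious" vs "clear"

logits = model(torch.randn(1, 3, 224, 224))      # one fake RGB "scan", for shape
```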
So I don't think that DNN technology (or its descendants) will remain unable to make "decisions". It comes down to "filters" that capture the context, plus digitally-encoded training examples. I think the more challenging (and more important) problems are:
* Developing engineering methodologies & standards to ensure that deployed DNNs are sufficiently well tested on all "normal" situations and a large catalogue of plausible/known "hacks" (see the sketch after this list).
* (Relatedly, for facial recognition rather than autonomous vehicles, developing similar methodologies & standards for unbiased recognition of faces of different races.)
* (Relatedly relatedly, methodologies & standards for anonymization of collected data, whenever the training examples are derived from people.)
* Developing regulations that require, monitor, and enforce adherence to these testing standards. Legislating these regulations into enforceable law, with sufficient resources, access, and know-how for monitoring; and with sufficient teeth in case of violation (even before any damage, injuries or deaths occur).
* Generally, holding shitty companies like Uber properly to account for any damage, injuries or deaths caused by their deployed self-driving systems.
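To sketch what the first bullet could look like in machine-checkable form (the harness, the thresholds, and the choice of FGSM as the catalogued "hack" are all my own assumptions, not any existing standard):

```python
# Sketch of a pre-deployment regression suite: the model must keep a minimum
# accuracy on catalogued "normal" cases and under a known "hack" (FGSM here).
import torch
import torch.nn.functional as F

def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One well-known 'hack': nudge every pixel along the loss gradient."""
    images = images.clone().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    return (images + epsilon * images.grad.sign()).detach()

def certify(model, catalogue):
    """catalogue: list of (name, images, labels, min_accuracy) tuples."""
    for name, images, labels, min_acc in catalogue:
        clean = accuracy(model, images, labels)
        hacked = accuracy(model, fgsm_attack(model, images, labels), labels)
        assert clean >= min_acc, f"{name}: clean accuracy {clean:.2f} < {min_acc}"
        assert hacked >= min_acc, f"{name}: accuracy under FGSM {hacked:.2f} < {min_acc}"
```

The second bullet has the same shape: put per-demographic test sets into the catalogue and require the accuracy floor to hold for each group separately, not just on average.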
The future isn't what it used to be.