-
• #552
That's easy: he reads some passages of scripture, denies the existence of free will and then collapses into a super-dense wormhole created from freshly minted Dawkins-matter.
-
• #553
Didn't you cover that one with the small bomb solution?
-
• #554
simulacrum of the real thing.
What is the "real thing"? An activity specific only to Man? How about social beings such as apes, dogs, bats and mice? Communicate? Learn? Can something as "simple" as bacteria "play"?
-
• #555
Did you read the article? The computer was 'told' the basic rules of the game and then left to play. It learnt not only how to play but beat the best human players. That's the same as you or I being told the rules of a game, establishing its nuances, then mastering it. Impressive stuff.
-
• #556
I've not read the article, but I assume the game was thermonuclear war? Or possibly noughts and crosses.
-
• #557
don't see why a robo-car shouldn't simply be an extension of the algorithms that allow Stanley to navigate around the dining chairs without hitting them.
That's like comparing an electric Bobby Car to a Tesla Model X:
The robot vacuums use relatively simple logic, mainly a handful of IR sensors and a processor--- I don't imagine the products are using anything much more sophisticated than an Arduino.
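To make the comparison concrete, the whole navigation "brain" of such a vacuum can be caricatured in a dozen lines. A rough sketch, assuming two IR proximity sensors and a bump switch feeding a differential drive --- sensor names and speed values here are my invention, not any vendor's firmware:

```python
import random

# Minimal bump-and-turn logic of the sort a simple robot vacuum might
# run on its microcontroller. Sensors and speeds are hypothetical
# placeholders for illustration only.
def step(ir_left, ir_right, bumper):
    """Map three boolean sensors to (left_speed, right_speed)."""
    if bumper:                       # hit something: reverse, sometimes spinning
        return (-1.0, -1.0) if random.random() < 0.5 else (-1.0, 1.0)
    if ir_left and ir_right:         # boxed in ahead: back off turning
        return (-1.0, 1.0)
    if ir_left and not ir_right:     # obstacle sensed on the left: veer right
        return (1.0, 0.3)
    if ir_right and not ir_left:     # obstacle sensed on the right: veer left
        return (0.3, 1.0)
    return (1.0, 1.0)                # clear path: drive straight ahead
```

Dining chairs get avoided purely by reaction; there is no map, no planning, and certainly no learning.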
-
• #558
I've not read the article, but I assume the game was thermonuclear war? Or possibly noughts and crosses.
If we are talking about the DeepMind/Nature article... Atari games!
-
• #560
It learnt not only how to play but beat the best human players.
Being better than humans at a number of tasks is relatively easy. Image recognition software is already better at the ImageNet competition than people--- even PhD students who have trained themselves to be good at the task. We are at the threshold where speech recognition is about to be better than people--- even the cocktail party problem (discriminating between different speakers) is showing great progress. The pace of improvement over the past year or two is dramatic!
Back to DeepMind... Interestingly their "machine" was very good at a number of Atari 2600 games BUT not all. In contrast to children, it also needed many more iterations of reinforcement to "learn". While DeepQ did not do well at games like Ms. Pac-Man, I think that is not a failing of the paradigm, only of their current approach and implementation.
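For what it's worth, those "iterations of reinforcement" can be made concrete with DQN's tabular ancestor, Q-learning. The sketch below is a toy with an invented two-state environment and made-up rewards; DeepMind's contribution was replacing the table with a deep network fed raw pixels, but the update rule is the same idea:

```python
import random

# Tabular Q-learning on a toy two-state, two-action problem. The
# environment and rewards are invented for illustration.
random.seed(0)
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def env(state, action):
    """Hypothetical environment: only action 1 in state 1 pays off."""
    reward = 1.0 if (state, action) == (1, 1) else 0.0
    return reward, 1 - state             # the state simply flips each step

state = 0
for _ in range(5000):                    # many iterations of reinforcement
    if random.random() < eps:            # explore occasionally...
        action = random.choice((0, 1))
    else:                                # ...otherwise act greedily
        action = max((0, 1), key=lambda a: Q[(state, a)])
    reward, nxt = env(state, action)
    best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt
```

Even on something this trivial it takes thousands of trials before the table settles on preferring the rewarded action, which is why children compare so favourably.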
-
• #561
Did you read the article?
I never read the article, but I do read the comments.
-
• #562
If the behaviour is 'learned' via an RNN, with base truths that are not in themselves dangerous, harmful or evil.
It is not really quite like that... And not just here. Once upon a time I was a research economist. With that hat on, I showed how a number of nicely intended social transfers and tax incentives, when taken together with the other social transfers and tax laws, led to some pretty nasty situations, literally providing a tangible incentive to engage in behaviour contrary to that considered positive by society at large.
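Those interaction effects are easy to reproduce with invented numbers. A minimal sketch, assuming a flat tax, a means-tested housing benefit and a childcare subsidy with a hard cut-off --- all rates and thresholds below are made up for illustration, not any real tax code:

```python
# Hypothetical tax-and-transfer system: each rule looks harmless on its
# own, but stacked together they punish extra earnings. All numbers are
# invented for illustration.
def net_income(gross):
    tax = 0.30 * gross                        # flat 30% income tax
    housing = max(0.0, 4000 - 0.50 * gross)   # benefit withdrawn at 50%
    childcare = 3000 if gross < 10000 else 0  # subsidy with a hard cut-off
    return gross - tax + housing + childcare

# In the phase-out band the effective marginal rate is already 80%
# (30% tax + 50% benefit withdrawal); crossing the 10,000 cut-off is
# worse still: earning two more units forfeits the whole 3,000 subsidy.
below = net_income(9999)
above = net_income(10001)
```

Here earning slightly more leaves the household strictly poorer --- exactly the "ugly" scenarios a brute-force sweep over incomes turns up.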
With the tax system I was able to set up a nice simulation and run a bunch of scenarios through it to find the "ugly ones". Doing this with a self-driving car sounds easier than it is--- if it is possible at all. I think we should all be reminded of AA's "I, Robot" stories.
-
• #563
AA's "I, Robot" stories.
Alcoholics Anonymous do "I, Robot" stories? Is it some kind of fanfic club?
-
• #564
Alcoholics Anonymous do "I, Robot" stories? Is it some kind of fanfic club?
It's called keyboard autocrap!
-
• #565
This is you arguing round in a circle, then. If it isn't possible to predict pathological outcomes (such as R. Daneel Olivaw's inception of the Zeroth Law, inspired by R. Giskard's inadvertently bestowed telepathic power), then how should we be concerned with ascribing blame for those outcomes? Which is what your original concern was, wasn't it?
Yes, I am an Asimov geek.
-
• #567
If electrical cars emitted fake engine noise.
Will robot cars emit swear words out of their side windows?
-
• #568
FWIW, I don't think that cartoon encapsulates EdwardZ's posts at all. Thanks for playing.
-
• #569
FWIW, I don't think that cartoon encapsulates EdwardZ's posts at all. Thanks for playing.
This one from Google Deep Dream does quite a good job of it, IMO.
-
• #570
If it isn't possible to predict pathological outcomes then how should we be concerned with ascribing blame for those outcomes?
I don't attach blame. I'm just suggesting that the implicit ethics encoded in the technology might be in contradiction to those of the person who has elected to use it. Technology is not neutral. Not even mathematics is neutral--- see Brouwer, Hilbert, Abraham Robinson's work from the early 1960s on non-standard analysis, Alexander Yessenin-Volpin's work on ultrafinitism, Bishop's constructive analysis, ...
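A concrete example of that non-neutrality is the classic non-constructive proof that an irrational power of an irrational can be rational. A classical mathematician accepts it; a constructivist in Brouwer's or Bishop's tradition does not, because it never tells you which case actually holds:

```latex
% Claim: there exist irrationals a, b with a^b rational.
% Either \sqrt{2}^{\sqrt{2}} is rational (take a = b = \sqrt{2}),
% or it is irrational, and then
\[
  \Bigl(\sqrt{2}^{\sqrt{2}}\Bigr)^{\sqrt{2}} \;=\; \sqrt{2}^{\,2} \;=\; 2 ,
\]
% so take a = \sqrt{2}^{\sqrt{2}} and b = \sqrt{2}. The argument rests
% on the law of the excluded middle: it never decides which case is true.
```

Whether that even counts as a proof is exactly the kind of choice that makes mathematics less neutral than it looks.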
Which is what your original concern was, wasn't it?
I'm not concerned with blame. I'm concerned with the ethical issues at hand. There are loads of ethical issues right now in the technological sphere that are not being addressed. Everyone in Europe seems so keyed up on some projection of the capabilities of the NSA (many of which seem more Orwellian projection than technological reality), yet thinks nothing of the kind of corporate-run "Big Data" that is real, unencumbered and even voluntary through loyalty and bonus cards (the German Payback system comes to mind). We speak about all the shiny benefits and utopias of AI, pervasive internetworking (the "Internet of Things"), nanotech, genetics, prenatal diagnostics... and there are amazing benefits... but we also need to see their dystopic side.
When Elon Musk called AI an existential threat last year he was serious. Sure, these are not new issues. Unfortunately much of the critical turf is being squatted on by luddites rather than futurologists.
-
• #571
Oh. I see. It's a general concern about the ethics of technology.
Yawnsville.
Ethical and legal constructs have, necessarily, always lagged behind technology. Nothing new or modern about today.
-
• #572
As for the neutrality of mathematics, that smells like a pile. I'll go do some reading.
Thanks.
-
• #574
That sounds cool. Love to own one to play around with.
-
• #575
I looked at the man page for ultrafinitism. Seems more like mathematicians being non-neutral rather than mathematics being non-neutral. Obviously there are disagreements about truth in the body of maths. Zeno shows us that, and Gödel proves that the disagreements will persist.
But the maths statements themselves are deterministic, not subjective.
Even a seemingly subjective, probabilistic physics such as QED has firm mathematical underpinnings.
But what if it's absolutely unavoidable? What if he's barrelling down the hall at 100mph and one of the cats jumps out in front of him, trying to stare him down and intimidate him? Say his brakes have failed, how does he decide whether to swerve into the skirting or into your hairy hobbit feet?
These are important ethical concerns and it's a discussion that needs to be had.