-
• #502
A self-driving car would avoid putting itself in that scenario in the first place.
But what if a wizard appears by the side of the road, and waves his magic wand, and conjures up the scenario, using the power of DARK MAGICKS?
-
• #503
You're giving the car too much credit; it's not Terminator (yet). It's not making a decision over life and death. It's following your heuristic decision matrix merely to avoid a collision. Over time they will get better at avoiding collisions. Given more time and more tech communicating between cars and their surroundings, they will be even better at avoiding collisions. Iterative improvement. They aren't going to solve all our road-going woes in a fortnight.
-
• #504
You had me at wizard.
-
• #505
Cars, or indeed any machines, will never be able to 'make decisions' at all, no matter how good the software.
(Cue argy-bargy about 'artificial intelligence'. :) )
-
• #506
A self-driving car would avoid putting itself in that scenario in the first place.
Cars don't have selves. :)
-
• #507
EdwardZzzzz is totally right here, in that the machine may well perform an action that leads to a death.
As will an aeroplane on autopilot or a chainsaw with a faulty safety cowling and trigger.
The question is, I suppose, whether software is inherently more spooky and anthropomorphic when it does this, and therefore more blameworthy?
It's hard to imagine that the source code for the software will have a function called "kill someone". More likely it will have functions like "collision detector" and "emergency action implementor".
I think that the decision path would be more "I am going to crash, slow down as much as you can" rather than "I am going to crash, is there something preferable I could crash into?".
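Something like this toy sketch, say, where every name and number is invented for illustration:

    BRAKING_DECEL = 8.0  # m/s^2, an assumed maximum braking deceleration

    def stopping_distance(speed):
        # Distance (m) needed to stop from `speed` m/s under constant braking.
        return speed * speed / (2 * BRAKING_DECEL)

    def emergency_action(speed, obstacle_distance):
        # "Emergency action implementor": always brake; never ranks targets.
        if obstacle_distance is None:
            return "cruise"      # the collision detector saw nothing
        if stopping_distance(speed) <= obstacle_distance:
            return "brake"       # can stop in time
        return "brake_max"       # can't stop in time: brake anyway, as hard as possible

    print(emergency_action(20.0, 50.0))  # -> brake (25 m needed, 50 m available)
    print(emergency_action(30.0, 50.0))  # -> brake_max (56 m needed, 50 m available)

Note there is no "kill someone" branch anywhere; the worst case is just maximum braking.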
-
• #508
It's hard to imagine that the source code for the software will have a function called "kill someone". More likely it will have functions like "collision detector" and "emergency action implementor".
The emergency action implementor is based around statistics, just like the object detector. I expect most people are using an RNN. One probably trains it against a simulated and synthetic ground truth, with reinforcement in the field, on what the correct response is.
I think that the decision path would be more "I am going to crash, slow down as much as you can"
Slow down, of course, but if it is clear that one can't slow down sufficiently... People are not that great at this task. Machines can detect things better.
rather than "I am going to crash, is there something preferable I could crash into?".
Exactly. And one uses object detection here. School bus? Child? Old woman? Old man? There really is no way around this; it is unavoidable. That is why we need to talk about ethics in this context!
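For what it's worth, a minimal sketch of what training against a synthetic ground truth might look like. PyTorch is just my assumption here, and the sensor shapes, action set and data are all made up:

    import torch
    import torch.nn as nn

    # Hypothetical action set for the "emergency action implementor".
    ACTIONS = ["cruise", "brake", "swerve_left", "swerve_right"]

    class EmergencyPolicy(nn.Module):
        # An RNN reads a short window of sensor frames and scores each action.
        def __init__(self, n_sensors=16, hidden=32):
            super().__init__()
            self.rnn = nn.RNN(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, len(ACTIONS))

        def forward(self, frames):   # frames: (batch, time, n_sensors)
            _, h = self.rnn(frames)
            return self.head(h[-1])  # one score per action

    # Stand-in for the simulated/synthetic ground truth: sensor traces
    # paired with the action the simulator judged correct.
    frames = torch.randn(256, 10, 16)
    correct = torch.randint(len(ACTIONS), (256,))

    policy = EmergencyPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):           # fit the policy to the labelled responses
        opt.zero_grad()
        loss = loss_fn(policy(frames), correct)
        loss.backward()
        opt.step()

The point being: the "decision" comes out of the statistics of the training data, not out of a hand-written life-and-death rule.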
-
• #509
Cars, or indeed any machines, will never be able to 'make decisions' at all, no matter how good the software.
Of course they make decisions! When a program says "It is a cat", that is a decision. Computer neurons don't function the way our neurons do, and computers don't learn the same way we do, but that does not mean that they can't learn or make decisions.
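In the most literal sense, that decision is just the program picking its highest-scoring label, e.g. (scores invented):

    # A classifier's "decision", boiled down: pick the best-scoring label.
    scores = {"cat": 0.91, "dog": 0.07, "toaster": 0.02}  # invented outputs
    decision = max(scores, key=scores.get)
    print(f"It is a {decision}")                          # -> It is a cat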
-
• #510
They're not 'decisions' but outputs of algorithms.
-
• #511
Of course they make decisions!
You seem to think a computer is human.
-
• #512
You're an artificial intelligence!
-
• #513
But doesn't someone make a decision when building that algorithm?
-
• #514
They're not 'decisions' but outputs of algorithms.
What is the difference?
-
• #515
But doesn't someone make a decision when building that algorithm?
Yes and no. These days it is all Bayes predictive. Sure, there is a "ground truth"... but that is what learning is all about, for people as for machines.
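A toy illustration of what I mean by Bayes predictive, with every number invented: the machine carries a belief and updates it against the evidence, rather than looking up a hand-written rule.

    # Toy Bayesian update, all numbers invented.
    p_child = 0.01        # prior: fraction of obstacles that are children
    p_hit_if_child = 0.9  # likelihood of this sensor reading for a child
    p_hit_if_other = 0.1  # likelihood of the same reading for anything else

    p_hit = p_hit_if_child * p_child + p_hit_if_other * (1 - p_child)
    posterior = p_hit_if_child * p_child / p_hit
    print(posterior)      # ~0.083: the evidence shifts the belief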
-
• #516
Interestingly enough, the problem here lies with the nonsensical concept of 'intelligence'. :)
-
• #517
You seem to think a computer is human.
No. But that does not prevent computers from having memory, learning, making decisions, playing games etc. and even passing the Turing Test.
A better question is: What is human? Are apes human?
-
• #518
Are apes human?
Depends. How's their driving?
-
• #519
About par.
-
• #520
The Turing Test hasn't been passed properly yet.
-
• #521
What is the difference?
Categories.
An algorithm is a (usually iterative) rule or set of rules which generates an output from an input. When we make decisions, we do not necessarily follow rules. We can do that (after all, we invented algorithms :) ), but we're always free to decide otherwise. (We don't even necessarily need an input (a priori reasoning).)
Cutting to the chase, it all revolves around whether we have free will. Decisions are subject to free will, which machines categorically cannot have.
Needless to say, it's a rather complex philosophical topic with a long history.
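To pin down what I mean by an algorithm, here is Euclid's gcd, a fixed iterative rule from input to output:

    # Euclid's algorithm: a fixed, iterative rule taking inputs to an output.
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # -> 6, the same every time; it never "decides otherwise"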
-
• #522
No. But that does not prevent computers from having memory, learning, making decisions, playing games etc.
Computers can't do any of these things, and they never will. They can achieve simulacra of these, but no more.
-
• #523
So I may be getting a little out of my depth here but....
An algorithm is a (usually iterative) rule or set of rules which generates an output from an input.
Who decides what rules the computer should follow? I think this is us and that is the decision I refer to. At some point somewhere a human has programmed something into the computer that could, theoretically, result in the loss of life or the choice of one life over another. (?)
-
• #524
Well, that's part of my point. The computer doesn't make decisions; it just follows an algorithm that, yes, originated with (a) human being(s). And yes, that can have material consequences.
-
• #525
I think you're imagining a world where we employ humans to write algorithms to control a vehicle. That may be what ends up happening, but would it not be much easier to supply a learning machine with examples of good and bad outcomes, together with the circumstances and actions that led up to them, and let it build its own view of how to drive? In practice the output of that process would probably be an RNN, as described by EdwardZ.
Then the only human / moral input is in labelling the cases as good or bad.
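A bare-bones sketch of that division of labour, with everything invented for illustration: the human contribution is only the good/bad labels, and the machine fits whatever rule separates them.

    import numpy as np

    rng = np.random.default_rng(0)
    # Invented "circumstances and actions", one feature vector per episode.
    episodes = rng.normal(size=(200, 5))
    # The ONLY human / moral input: a good (1.0) or bad (0.0) label per episode.
    labels = (episodes @ np.array([1.0, -0.5, 0.3, 0.0, 2.0]) > 0).astype(float)

    # The machine builds its own view: a perceptron fitted to the labels.
    w = np.zeros(5)
    for _ in range(50):
        for x, y in zip(episodes, labels):
            pred = float(x @ w > 0)
            w += 0.1 * (y - pred) * x  # nudge the rule toward the labels

    agreement = np.mean((episodes @ w > 0) == labels)
    print(f"agreement with the human labels: {agreement:.0%}")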