-
• #527
How easy it is to unwittingly stumble into an argument with St Augustine. Few people have ever enjoyed this. :)
I'm afraid I don't have any inclination to get into an Internet debate on free will. If you're interested in scriptural philosophy, you may be interested in checking it out yourself. This volume looks good and usefully seems to assemble everything you need in one place:
(NB I haven't read this, just some of the sources collected in it.)
Mosque Thread Alert Level 3. :)
-
• #528
Then the only human / moral input is in labelling the cases as good or bad.
In a nutshell, that is the current state of the art. I call it "empirical programming". The "programming" is in creating the inputs/labels and designing the network (by empirical trial and error). A lot of work is even end-to-end, letting the network "develop" its own model, versus the "old way" of designing models and using an SVM. What is, I think, particularly interesting is that while the quality of the results, and the pace of improvement, is breathtaking, why it works is not quite understood, viz. what the final feature layer means is not clear. Andrea Vedaldi http://www.robots.ox.ac.uk/~vedaldi/ at Oxford did some interesting work to try to address this: http://www.robots.ox.ac.uk/~vedaldi//research/visualization/visualization.html . Another interesting approach has been to use the network to intentionally create new synthetic data that leads to "wrong" results. Example: http://www.evolvingai.org/fooling
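A minimal sketch of that "empirical programming" idea, at toy scale: the only human input is a set of labelled examples, and the decision rule itself is fitted rather than hand-written. (A perceptron stands in here for a real network; the data and labels are made up for illustration.)

```python
# "Empirical programming" in miniature: the human supplies labels,
# training supplies the rule. A toy perceptron learns to separate
# points labelled good (+1) from bad (-1).

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {+1, -1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:            # only misclassified points
                w[0] += lr * label * x1  # nudge the weights toward
                w[1] += lr * label * x2  # the correct side
                b += lr * label
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# The "programming" is entirely in these labels: points above the
# line y = x are labelled good, points below it bad.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), -1), ((2.0, 1.0), -1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [1, 1, -1, -1]
```

Note that nothing in `train_perceptron` encodes the rule "above the line is good"; swap the labels and the same code learns the opposite rule, which is exactly the point of the post above.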
-
• #529
How easy it is to unwittingly stumble into an argument with St Augustine.
Augustine of Hippo was forced to declare "free will" in order to define accountability, for in his frame punishment and purgatory could only exist if there was the free will to sin.
Kant correctly replaces this with the concept of reason ("Vernunft")!
It is, I think, in this sense that we must read http://www.myjewishlearning.com/article/the-denial-of-free-will-in-hasidic-thought/
-
• #530
-
• #531
This is actually all very fascinating. But to stay slightly prosaic and pertinent, AI think you may have argued yourself in a circle, Edwardz. If the behaviour is 'learned' via an RNN, with base truths that are not in themselves dangerous, harmful or evil, and it has passed some form of certification that the behaviour of the learned algorithm is appropriate and acceptable in a percentage of cases that is over a threshold, then where can blame be attributed at all?
I suppose the ethical question lies in either the base truths (don't hit people; hitting fewer people is better) or in the correctness assessment criteria (did you hit no one, did you hit the fewest people possible). These should all be quantised or discrete enough to be judged on their merits, though?
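That certification step can be sketched as plain arithmetic: judge the learned behaviour case by case against a discrete correctness criterion, and accept it only if the pass rate clears a threshold. All names and numbers below are illustrative, not any real certification standard.

```python
# Sketch of threshold certification: each test case pairs a scenario
# with the maximum acceptable number of people hit ("did you hit no
# one / the fewest possible"), and the behaviour passes or fails it.

def certify(behaviour, test_cases, threshold=0.99):
    """behaviour: function from scenario to number of people hit.
    test_cases: list of (scenario, max_acceptable_hits)."""
    passed = sum(
        1 for scenario, max_hits in test_cases
        if behaviour(scenario) <= max_hits
    )
    pass_rate = passed / len(test_cases)
    return pass_rate >= threshold, pass_rate

# Toy behaviour: hits no one unless the scenario makes some hits
# unavoidable, in which case it hits the minimum possible number.
def toy_behaviour(scenario):
    return scenario.get("unavoidable_hits", 0)

cases = [({"unavoidable_hits": 0}, 0)] * 99 + [({"unavoidable_hits": 1}, 1)]
ok, rate = certify(toy_behaviour, cases)
print(ok, rate)  # -> True 1.0
```

The interesting part, as the post says, is that everything morally loaded lives in `max_acceptable_hits` and `threshold`, both of which are discrete human choices.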
-
• #532
Augustine of Hippo was forced to declare "free will" in order to define accountability for in his frame punishment and purgatory could only exist if there was the free will to sin.
Erm ... not quite, no.
Kant correctly replaces this will the concept of reason ("Vernunft")!
If one were to look for just about the most nonsensical statement that anyone could make about Kant, one would probably pick something like that.
-
• #533
AI think
Wonderful typo. Must surely be intentional? :)
-
• #534
Not quite, although intentionally uncorrected.
-
• #535
It'll be a great day when it'll be possible to take a driverless car to AIKEA. :)
-
• #536
No. But that does not prevent computers from having memory, learning, making decisions, playing games etc.
Computers can't do any of these things, and they never will. They can achieve simulacra of these, but no more.
Not quite right... DeepMind has already created a computer whose algorithms can learn how to play and then win at computer games. Have a look at this article in Nature to see the details... fascinating stuff.
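For a sense of the underlying idea (not DeepMind's deep network, which learns from raw pixels, but the same reinforcement-learning principle at toy scale): tabular Q-learning on a one-dimensional "game" where the agent starts at position 0 and wins by reaching position 3.

```python
import random

# Tabular Q-learning on a tiny game: states 0..3, actions move left
# or right, reward 1 for reaching the goal. The agent is never told
# the winning strategy; it discovers it from play alone.
random.seed(0)
GOAL, ACTIONS = 3, (-1, +1)
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(500):                  # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS)    # explore at random
        nxt, reward = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (reward + 0.9 * best_next - Q[(s, a)])
        s = nxt

# The learned policy: always move right, i.e. it has "learned to win".
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # -> [1, 1, 1]
```

The Nature work replaced the lookup table `Q` with a deep network so the same update rule could scale to Atari screens, but the learning loop is recognisably this one.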
-
• #537
I'm sure that's very sophisticated software, but it still can't learn or play anything. :) It can without any doubt add code and then more code, but as I said above, that's no more than a simulacrum of the real thing.
-
• #538
It can without any doubt add code and then more code
Only that's not what it's doing; rather, it's configuring pathways through a network in the same manner as a brain.
it still can't learn or play anything
Depends on your definition of learning and playing, right? Is it, by definition, only humans that can do these things? Could I change the definition so that it only counts if it's me doing these things? Then, if I play a game, learn the rules etc. and then give you a turn - your experience and your actions are nothing more than a 'simulacrum' of mine. After all, your brain and body are just copying how the original (i.e. me) behaved.
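The "configuring pathways, not adding code" point can be made concrete: in the sketch below the program text never changes during learning; only the numeric weights (the "pathways") do. A deliberately tiny example, not how any real system is built.

```python
# Learning without adding code: the functions below are fixed; training
# only adjusts the numbers in `weights`.

weights = {"w": 0.0, "b": 0.0}

def forward(x):
    return weights["w"] * x + weights["b"]

def train_step(x, target, lr=0.1):
    error = forward(x) - target
    weights["w"] -= lr * error * x   # gradient step for squared error
    weights["b"] -= lr * error

# Learn y = 2x from examples. Before and after this loop, the source
# code of `forward` is byte-for-byte identical.
for _ in range(200):
    for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
        train_step(x, y)

print(forward(5.0))  # close to 10.0 after training
```

Whether adjusting `weights` counts as "learning" or a "simulacrum" of it is exactly the definitional question raised above.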
-
• #539
FWIW this is as close to a religious debate as you can get to without a deity, in my opinion.
Also IMO, human learning and decision making is no more magical or profound than computer versions of the same. I am firmly in the "free will is an evolutionarily advantageous illusion" camp.
I also think that the internet may be conscious.
-
• #540
This page is shit.
-
• #541
-
• #542
Motherfucking robocars
-
• #543
-
• #544
Bitching.
-
• #545
Believe! Yo!
-
• #546
-
• #547
^ this one does make decisions, those decisions result in death to all humans. I'll be behind the wheel, live tweeting the destruction of mankind.
-
• #548
"Come with me if you want to live"
-
• #549
-
• #550
I'm on my second autonomous robot Hoover (a Botvac 85) now, the first one burnt out a bearing due, I suspect, to a lax cleaning regime on the part of yours truly.
Stanley the hoover does bonk into things, but only at a very low speed - when he is in the hall, for example, he tears down it at a tremendous clip, as he maps his environment with a LIDAR system and can see that there are no obstacles.
I don't see why a robo-car shouldn't simply be an extension of the algorithms that allow Stanley to navigate around the dining chairs without hitting them.
When he is in close proximity to things he simply slows down to the point where there is no damage caused if he does bonk into something.
If bonking into something is going to happen he applies his brakes and ensures that the kinetic energy transfer is below a level which is "survivable".
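That "survivable kinetic energy" rule is just arithmetic: since E = ½mv², capping impact energy at E_max means capping speed at v = √(2·E_max/m). The mass and energy limit below are illustrative guesses, not Botvac specifications.

```python
import math

# Speed cap from a kinetic-energy budget: 0.5 * m * v^2 <= E_max.

def max_safe_speed(mass_kg, max_energy_j):
    """Largest v (m/s) with 0.5 * mass_kg * v**2 <= max_energy_j."""
    return math.sqrt(2.0 * max_energy_j / mass_kg)

def capped_speed(desired, mass_kg, max_energy_j):
    """Tear down the hall at `desired` speed, but never exceed the cap."""
    return min(desired, max_safe_speed(mass_kg, max_energy_j))

# A ~4 kg robot vacuum limited to 0.5 J of impact energy:
print(round(max_safe_speed(4.0, 0.5), 2))  # -> 0.5  (m/s)
print(capped_speed(2.0, 4.0, 0.5))         # -> 0.5  (desired 2.0 is capped)
```

A robo-car would need the same inequality with a vastly smaller permissible E_max per kilogram, which is presumably why it can't simply "bonk into things" the way Stanley can.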
He also poses something of an existential puzzle for the cats - should they be scared of him? Is he alive? Why is he pushing a slipper around the coffee table?
Personally, I think that neither from a Judaic philosophical view (I refer to the concept of Divine providence, or השגחה פרטית, and especially Leiner's treatise "Mei Hashiloach") nor from a neuroscientific one (I refer here to Libet's pioneering work) do humans have free will. What they have is the belief or illusion of free will.
While free will cannot exist without the capacity to make decisions, the ability to make decisions does not demand free will. Decisions are made all the time, inclusive of the belief or illusion of free will.
Exodus 6:2-9:35
" But I will harden Pharaoh's heart, that I may multiply My signs and marvels in the land of Egypt. 4 When Pharaoh does not heed you, I will lay My hand upon Egypt and deliver My ranks, My people the Israelites, from the land of Egypt with extraordinary chastisements. 5 And the Egyptians shall know that I am the Lord, when I stretch out My hand over Egypt and bring out the Israelites from their midst." 6 This Moses and Aaron did; as the Lord commanded them, so they did. 7 Moses was eighty years old and Aaron eighty-three, when they made their demand on Pharaoh."
http://www.sciencedirect.com/science/article/pii/S0010027714001462
http://www.scientificamerican.com/article/finding-free-will/