• I think you're imagining a world where we employ humans to write algorithms to control a vehicle. That may be what ends up happening, but would it not be much easier to supply a learning machine with examples of good and bad outcomes, together with the circumstances and actions that led up to them, and let it build its own view of how to drive? In practice the output of that process would probably be an RNN, as described by EdwardZ. (A minimal sketch of this setup follows below.)

    Then the only human / moral input is in labelling the cases as good or bad.
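
    A purely illustrative sketch of that setup, under stated assumptions: the feature names, synthetic data, and labelling rule below are all made up (the rule stands in for the human/moral judgement), and a toy two-layer net stands in for a real RNN driving policy:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical examples: each row is [speed, gap_ahead, braking],
        # labelled 1 (good outcome) or 0 (bad outcome). The rule below is
        # a stand-in for the human/moral labelling described above.
        X = rng.normal(size=(200, 3))
        y = (X[:, 1] - X[:, 0] + 2.0 * X[:, 2] > 0).astype(float)

        W1 = rng.normal(scale=0.5, size=(3, 8))   # input -> hidden
        W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(2000):
            h = np.tanh(X @ W1)                   # hidden activations
            p = sigmoid(h @ W2).ravel()           # predicted P(good)
            d_out = (p - y)[:, None] / len(y)     # cross-entropy gradient
            d_h = (d_out @ W2.T) * (1.0 - h**2)   # backprop through tanh
            W2 -= 0.5 * h.T @ d_out
            W1 -= 0.5 * X.T @ d_h

        print("train accuracy:", float(((p > 0.5) == y).mean()))

    The human/moral input really is just the labels: change the labelling rule and the same training loop learns a different notion of "good driving".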

  • Then the only human / moral input is in labelling the cases as good or bad.

    In a nutshell, that is the current state of the art. I call it "empirical programming": the "programming" is in creating the inputs/labels and designing the network (by empirical trial and error). A lot of work is even end-to-end, letting the network "develop" its own model, versus the "old way" of hand-designing features and feeding them to an SVM. What is particularly interesting is that while the quality of the results, and the pace of improvement, is breathtaking, why it works is not well understood; in particular, what the final feature layer means is not clear. Andrea Vedaldi (http://www.robots.ox.ac.uk/~vedaldi/) at Oxford did some interesting work to address this: http://www.robots.ox.ac.uk/~vedaldi//research/visualization/visualization.html. Another interesting approach has been to use the network itself to intentionally create synthetic data that leads to "wrong" results, e.g. http://www.evolvingai.org/fooling (sketched below).
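
    To make the "fooling" idea concrete, here is a rough sketch under stated assumptions: the cited work evolved synthetic images against deep image classifiers, whereas this toy uses gradient ascent on the class score of a small logistic-regression model, so it only illustrates the principle that inputs far from the training data can still be classified with near-total confidence:

        import numpy as np

        rng = np.random.default_rng(1)

        # Train a tiny logistic regression on two synthetic clusters.
        X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)),
                       rng.normal(2.0, 1.0, (100, 2))])
        y = np.array([0.0] * 100 + [1.0] * 100)
        w, b = np.zeros(2), 0.0
        for _ in range(500):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            w -= 0.1 * X.T @ (p - y) / len(y)
            b -= 0.1 * float((p - y).mean())

        # Search input space directly: start far from both clusters and
        # ascend the gradient of the class-1 logit (for a linear model,
        # that gradient is simply w) until the model is very confident.
        x = np.array([-40.0, 40.0])
        for _ in range(200):
            x += 0.5 * w
        print("confidence x is class 1:",
              float(1.0 / (1.0 + np.exp(-(x @ w + b)))))
        print("synthetic 'fooling' input:", x)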
