Autonomous systems rely upon action prediction: the idea is not just to determine what is in an image but how the scene might unfold. The failure modes are endemic to the paradigm, but they're fairly trivial in practice and tend to wash out as more data gets fed into the net (typically R-CNNs). Since the network was never trained to "identify" track stands, it drew some wrong conclusions. Reinforcement and unsupervised learning are in the cards for the future, but right now it's all tagged data (ground truth)... Given the current approach, the race these days is in data collection (which is what, for example, Alphabet/Google is keeping tight-lipped about).. But even with "data".. the unexpected leads to the unexpected, whether people or machines are making the judgments.. Where people differ from machines is in their psychology and cognition.. Where is it going? Josh Tenenbaum and the CogSci group at MIT http://web.mit.edu/cocosci/josh.html are, I think, good trend markers..
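To make the track-stand point concrete: a toy sketch (not any production system, and the label set below is invented for illustration) of why a classifier trained only on tagged ground truth is forced to shove an unfamiliar input into one of its known categories instead of saying "I don't know".

```python
import numpy as np

# Labels the hypothetical net was trained on -- note there is no
# category for a cyclist balanced motionless in a track stand.
LABELS = ["pedestrian", "moving cyclist", "stopped car"]

def softmax(logits):
    """Convert raw scores to probabilities that sum to 1."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical logits for a track-standing cyclist: nothing matches
# well, yet softmax still spreads all probability mass across the
# known labels and yields a confident-looking answer.
logits = np.array([0.2, 1.1, 0.9])
probs = softmax(logits)
best = LABELS[int(np.argmax(probs))]
print(best, round(float(probs.max()), 2))
```

The wrong conclusion here isn't a bug in the net; it's baked into the closed-world label set, which is why more (and more varied) tagged data is the current arms race.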