-
• #727
-
• #728
Coding in expectations is the interesting bit, leading to: "We coded the car to expect cyclists to get the fuck out of the way...", etc.
-
• #729
The solution seems to be that they will add code to suggest that larger vehicles won't necessarily give way
You don't "add code". Procedural code is, in general, a nasty approach to address these issues. These cars are driven by, to (overly) simply things, multistage probability matrices that encode "learned features" and "learned responses". Instead of code like "see big object then.." one now has some new data on possible scenarios. These one can use to synthesize a class of new data as ground truth to train the system. The "big vehicle" model is integrated here as "example". In the network a number of characteristics might represent those "larger vehicle" cases but it might not but something else that within the corpus of data leads to the "right decisions". In fact imagine with enough data one can see a trend that shows, for example, the chance that a public buses during rush hour will cut off a driver is.... That is why the race is to collect data.
Normally-- I don't know of any end-to-end network for this application-- one uses multiple networks of different designs, for example: CNNs for image recognition, some variant of an RNN/LSTM for action detection and an RNN for control. The future for control might be reinforcement learning-- see the DeepMind paper in last year's Nature-- but not just yet.. controlling cars is indeed simple enough...
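Roughly what such a multi-network stack looks like, sketched in PyTorch with arbitrary layer sizes (purely illustrative, not any vendor's actual architecture): a per-frame CNN feeds an LSTM, which feeds a small control head.
```python
import torch
import torch.nn as nn

class PerceiveAndControl(nn.Module):
    def __init__(self, hidden=128, controls=2):          # controls: e.g. steering, throttle
        super().__init__()
        self.cnn = nn.Sequential(                          # per-frame feature extractor
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),         # -> (batch, 32)
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)   # temporal model over frames
        self.head = nn.Linear(hidden, controls)            # control outputs

    def forward(self, frames):                             # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                       # control for the latest frame

net = PerceiveAndControl()
print(net(torch.randn(4, 8, 3, 64, 64)).shape)             # torch.Size([4, 2])
```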
Sometimes what is learned is different from what one expects. The classic example of this is an old story about one of the first networks used by the military to spot tanks. The network was trained with all kinds of pictures. Out in the field it failed. Back to the drawing board.. It turned out that all the photos with tanks had been taken on cloudy days and all the photos without tanks on sunny days. The network had not learned "tank" at all, but cloudy versus sunny day.
These days the networks have many more variables and the data volume is significantly larger.. Still.. One of the networks used by a major player, for example, trained to recognize school buses seemed to have learned little more than black and yellow stripes.
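A toy numpy/scikit-learn illustration of that failure mode (all numbers invented): a nuisance feature standing in for "cloudy vs. sunny" tracks the label perfectly in training, so the model leans on it and falls apart when that correlation disappears in the field.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

def make_data(confound_tracks_label):
    y = rng.integers(0, 2, n)                      # 1 = "tank present"
    signal = y + rng.normal(0, 2.0, n)             # the weak, genuine cue
    if confound_tracks_label:
        brightness = y + rng.normal(0, 0.1, n)     # cloudy-vs-sunny confound in the training set
    else:
        brightness = rng.normal(0.5, 1.0, n)       # confound absent out in the field
    return np.column_stack([signal, brightness]), y

X_train, y_train = make_data(confound_tracks_label=True)
X_field, y_field = make_data(confound_tracks_label=False)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))   # looks excellent
print("field accuracy:   ", model.score(X_field, y_field))   # drops back towards chance
```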
-
• #730
I think someone added code to me that prevents me ever reading past about a third of a paragraph you've written.
-
• #731
Ironically, you have just perfectly illustrated the point that EdwardZzzzzz was trying to make about learning algorithms adapting to situations.
But you knew that.
-
• #732
learning algorithms adapting to situations.
These learning algorithms don't adapt and they don't really "learn"-- they just try to fit probability weights and their distribution to minimize a loss (misses). See what is called SGD ("stochastic gradient descent"):
http://ufldl.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/ and also https://en.wikipedia.org/wiki/Stochastic_gradient_descent
Machine learning and human learning are very different. We might have a model of a neuron, but it's apples and whales.
-- yes, most of what people are doing is "supervised" (results compared against truth or known results, aka "learning by example") rather than "unsupervised".
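A bare-bones sketch of what that fitting actually is-- SGD on a toy linear model with a squared-error loss, nothing beyond numpy:
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)     # the "ground truth" the model is fit to

w = np.zeros(3)                                       # the weights being "learned"
lr = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):                 # one example at a time: the "stochastic" part
        err = X[i] @ w - y[i]                         # prediction error on this example
        w -= lr * err * X[i]                          # step down the gradient of 0.5 * err**2

print(w)          # ends up close to [2.0, -1.0, 0.5] -- a fit, not "understanding"
```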
-
• #733
I know. Jesus Christ you are condescending.
I happen to have a pretty darn good CS degree, but talking like that ^ won't mean anything to anyone.
-
• #734
"We coded the car to expect cyclists to get the fuck out of the way...", etc.
To simplify things (again by a large margin) and turn this into a kind of thought experiment: if 9 out of 10 cyclists one encounters get out of the way, one can expect cyclists to get out of the way 90% of the time-- most of the time, but not all of the time. For the 10% that don't get out of the way we also have statistical experience of how many end up in crashes etc.. Throw in a cost function and we can weigh the options. If we only have data about crashes (labels) etc., the networks "learn" a set of actions.. If avoiding the cyclist makes the chance of hitting another car significantly higher than the chance of hitting the cyclist, and the cost of hitting another car is higher-- perhaps causing even multiple crashes with a given probability-- the action might be to "ignore" and risk the crash with the cyclist. In this example we see that the more cyclists get out of the way, the more likely a car is to "ignore" them and risk a crash. If this were a collective strategic game, the best strategy for cyclists would be to collectively not get out of the way, as this would force cars to avoid them-- since we have increased the cost of ignoring cyclists. Unfortunately the cost to the individual cyclist is quite high, so at the individual level it is the worst strategy. What this ultimately means is that cyclists will need to increase their avoidance and cars will increasingly ignore them in traffic.
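The same thought experiment as back-of-the-envelope arithmetic (every number below is invented purely for illustration):
```python
# Expected-cost comparison with made-up probabilities and costs.
p_cyclist_yields   = 0.90                           # learned from experience: 9 out of 10 get out of the way
p_crash_if_ignore  = (1 - p_cyclist_yields) * 0.5   # not every non-yielding cyclist ends in a crash
p_crash_if_swerve  = 0.08                           # swerving risks hitting another car instead
cost_hit_cyclist   = 100.0                          # arbitrary cost units
cost_hit_car       = 150.0                          # e.g. multi-vehicle pile-up risk

expected_cost_ignore = p_crash_if_ignore * cost_hit_cyclist
expected_cost_swerve = p_crash_if_swerve * cost_hit_car

print(expected_cost_ignore, expected_cost_swerve)   # 5.0 vs 12.0 -> "ignore" looks cheaper
# The perverse feedback: the more cyclists yield, the lower p_crash_if_ignore gets,
# and the more often "ignore" wins the comparison.
```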
-
• #735
I know. Jesus Christ you are condescending.
Sorry. Not my intent. I just wanted to clear up some general misconceptions about the kind of "learning" in these systems. Just before ANNs made their "comeback"-- they more or less died in the early 1990s-- the fashionable model was the "random forest" (mid-1990s on). It works quite differently, as in its underbelly are decision trees. At the time RNNs (recurrent neural networks) were plagued by their need for large amounts of training data (no longer an issue) and massive computational demands for training (addressed now with GPUs).
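For contrast with the networks discussed above, a couple of lines of scikit-learn (toy data only) showing that a random forest really is just a bag of decision trees-- explicit if/else splits-- under the hood:
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

print(type(forest.estimators_[0]).__name__)   # DecisionTreeClassifier -- trees under the hood
print(forest.estimators_[0].get_depth())      # each tree is an explicit chain of if/else splits
print(forest.score(X, y))
```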
-
• #736
More on cyclists and probability..
From http://www.verkeerskunde.nl/trends-2016/2016/marjan-hagenzieker-beter-begrip-ongevalsrisico-s.4.42650.lynkx
"'Robots zijn niet goed in het omgaan met inconsequent gedrag, en het gedrag van een fietser kun je niet altijd voorspellen. "
(Robots are not good at dealing with inconsistent behaviour and predicting the action of cyclists)Carlos Ghosn, Renault's CEO, openly
http://www.cnbc.com/2016/01/08/driverless-cars-confused-by-cyclists.html"One of the biggest problems is people with bicycles," he said.
"The car is confused by them because from time-to-time they behave like pedestrians and from time-to-time they behave like cars."
"Cyclists don't respect any rules usually"
"They don't respect any rules usually," Mr Ghosn said.
-
• #738
It is called connecting the dots. The point I was trying to make is that their control is not the typical decision tree many might expect and that they are not just programmed with scenarios. AI is no longer logic and symbols but statistics, control and black-box tools-- I call them black box since we, at this time, don't quite understand what (features) the systems are really learning, only that they seem to be doing a very good job.
Although symbolic AI-- explicitly building things with facts and rules (anyone remember the big wave of expert systems back in the 1980s?)-- is also making a bit of a renaissance at the edges of cognitive science, it is doing so in quite a different way, with the addition of dynamics. The big revolution right now is simply the connectionist approach kicked off in the late 1980s and early 1990s, now with the data, hardware (GPUs) and tools (especially to use those GPUs) to make it happen-- remember Thinking Machines' CM-1 had a max of 65K compute cores, their last machine maxed out at 1024 SuperSPARC processors, and as a friend commented, "Getting a program to run on a CM was sufficient for a PhD at Princeton".
Here is an old paper by Minsky that is still a pretty relevant backgrounder:
http://web.media.mit.edu/~minsky/papers/SymbolicVs.Connectionist.html
When Google got into the game they used a massive number of computers and CPUs. That was back in 2011/12. The big "bang" came the next year when they went from using CPUs to GPUs: using 3 machines, each with consumer quad-core processors and cheap GPU cards, they were able to duplicate their earlier work. See http://jmlr.org/proceedings/papers/v28/coates13.pdf
Instead of borrowed graphics cards we now have GPGPUs-- GPUs designed for use as general-purpose computational processors. They have better bandwidth, more memory and more cores. The Tesla K80, for example, has 24 GB of RAM, 4992 cores and 480 GB/s of bandwidth.
The current state of the art is now hybrid systems: CPUs and GPUs. The first major hybrid was Oak Ridge's Titan: 18K GPUs (Tesla K20s) and 18K 16-core CPUs (Opterons).
Even the little Tegra X1 that one can find in a phone/tablet (right now Google's reference Pixel C) offers 256 CUDA cores and 8 CPU cores.
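For anyone curious what their own GPU offers, here is one (assumed) way to pull up the equivalent numbers, if PyTorch with CUDA support happens to be installed:
```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)            # first visible CUDA device
    print(props.name)
    print(props.total_memory / 1024**3, "GiB of device memory")
    print(props.multi_processor_count, "streaming multiprocessors")
else:
    print("No CUDA device visible -- running on CPU only")
```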
-
• #740
Do you work professionally in this field ?
Kind of.... well, concerned with pattern recognition, computer vision, blah blah, just not in automotive. I'm focused on what could be called knowledge discovery rather than control or robotics. The only sensors I might care about are image capture-- these days mainly documents, though I've also pottered about in remote sensing-- and maybe GPS or other localization technology as relevant to what I'm trying to do-- or to what I'm working with.
There are loads of applications that aren't about control.. Some pretty wild.. I know a lot of us are also interested in photography..
https://devblogs.nvidia.com/parallelforall/understanding-aesthetics-deep-learning/
-
• #741
You work for a government agency then. It's OK to just say it.
-
• #742
You work for a government agency then. It's OK to just say it.
Not everything to do with recognition, knowledge discovery, etc. is security or defense related-- even if the agencies do indeed have their interests and fund research (most R&D in ML these days, however, is being bankrolled by companies like Alibaba, Google, Facebook, Baidu, Microsoft etc., albeit sometimes with transfers from defense)-- and not everything those agencies do with these technologies is related to surveillance. Said companies are indeed keyed to such activities, but with different goals (ultimately, beyond just "improving" their products, economic hegemony, either through "controlling the wires" or through market perception leading to a higher valuation and more cash to go shopping with to thwart rival threats).
It is a "very hot" area right now and a lot of the interest is quite mainstream. Go to any university and all the relevant lectures will be filled to overflowing, and all the conferences are filled with companies looking to hire talent. Everyone from pharmaceuticals to retailing, banking, insurance and manufacturing feels the need to get into the show, fearing the potential of being left behind.. The return right now, in some areas, is quite good. Speech recognition is a prime example: end-to-end speech recognition networks deliver amazingly good results without the need to design and build difficult models (acoustic, phoneme, language etc.). It has really levelled the field-- some really good results are even coming from groups that don't have the kind of data that Google or Microsoft have, as techniques are emerging for data synthesis.
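A very crude sketch of the data-synthesis point (not any production pipeline): one labelled clip becomes several training examples via small perturbations; real systems go much further, with speed/pitch perturbation, room simulation and full TTS.
```python
import numpy as np

rng = np.random.default_rng(0)

def augment(clip, n_copies=3, noise_level=0.02, max_shift=160):
    """clip: 1-D float array of audio samples. Returns n_copies perturbed variants."""
    variants = []
    for _ in range(n_copies):
        x = np.roll(clip, rng.integers(-max_shift, max_shift + 1))   # small time shift
        x = x * rng.uniform(0.8, 1.2)                                # random gain
        x = x + rng.normal(0, noise_level, size=x.shape)             # additive noise
        variants.append(x)
    return variants

clip = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)            # stand-in 1 s "recording"
print(len(augment(clip)), "synthesized variants from one labelled clip")
```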
-
• #743
I looked into it when I worked at Tesco, for automating the fruit and veg scales. Recognising bananas and so forth.
The business case was compelling, since every second off each till transaction is worth a million a year on the bottom line.
-
• #744
Stops people picking 'onions' for everything too.
-
• #745
I looked into it when I worked at Tesco, for automating the fruit and veg scales. Recognising bananas and so forth.
That is really really easy-- what people call "low hanging fruit".
The business case was compelling, since every second off each till transaction is worth a million a year on the bottom line.
-
• #746
Boom Tish.
Yes, I am amazed they haven't done it.
-
• #747
Stops people picking 'onions' for everything too.
I've not given it much thought.. But I think one would want not just to identify the fruit but also to check the consistency of the contents of the bag. That too should be pretty straightforward.
To distinguish between grades or qualities of the same fruit would be harder, but I think for now one would just price them in lumps and avoid trying to embark on fine-grained recognition-- which would be, I think, a whole lot more difficult than detecting basic produce types.
-
• #748
Yes, I am amazed they haven't done it.
Putting together a convincing proof of concept would be pretty easy.... I would, for now, just steer clear of fine-grained image identification-- which may here be a whole lot easier than I fear (since I've never tried). One solution to that problem, should lump pricing be inappropriate, would be tags. These would be needed anyway-- and are being used-- to distinguish, for example, organic or fairtrade produce from non-organic, premium from non-premium, etc. Spotting and identifying tags is easy!
P.S.: I actually attended https://sites.google.com/site/thirdworkshoponfgvc/ last year...
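One way such a proof of concept might be thrown together-- an assumption on my part, not anything Tesco actually built: take a pretrained torchvision network, freeze it, and train only a small head over a handful of illustrative produce classes.
```python
import torch
import torch.nn as nn
from torchvision import models

classes = ["banana", "apple", "onion", "potato", "tomato"]      # illustrative class list

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                                     # keep the pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, len(classes))  # new, trainable classifier head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for 224x224 scale-camera images:
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(classes), (8,))
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```
With real labelled images in place of the dummy batch, a few hundred examples per class would likely be enough to see whether the approach holds up.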
-
• #749
Just genetically modify bananas so they have built-in barcodes. Duh. Simples.
-
• #750
Just genetically modify bananas so they have built-in barcodes. Duh. Simples.
http://botany.si.edu/projects/DNAbarcode/
Plant DNA Barcode Project
A taxonomic impediment for many systematists, field ecologists, and evolutionary biologists is determining the correct identification of a plant or animal sample in a rapid, repeatable, and reliable fashion. This problem was a major reason for the development of a new method for the quick identification of any species based on extracting a DNA sequence from a tiny tissue sample of any organism. A DNA barcode consists of a standardized short sequence of DNA between 400 and 800 base pairs long that in theory can be easily isolated and characterized for all species on the planet. By harnessing advances in molecular genetics, sequencing technology, and bioinformatics, DNA barcoding is allowing users to quickly and accurately recognize known species and retrieve information about them. It also has the potential to speed the discovery of the thousands of species yet to be named. DNA barcoding has become a vital new tool for taxonomists who are charged with the inventory and management of the Earth's immense and changing biodiversity.
The concept of a universally recoverable segment of DNA that can be applied as an identification marker across species was initially applied to animals with the Cytochrome C Oxidase 1 or CO1 gene region. After several broad screenings of gene regions in the plant genome, three plastid (rbcL, matK, and trnH-psbA) and one nuclear (ITS) gene regions have become the standard barcode of choice in most investigations for plants.
The NMNH Plant DNA Barcode Project is exploring the development of new tools and applications for DNA barcoding. This website gives background and information on standard plant barcoding protocols used in these studies.
See also: http://www.barcodeoflife.org/
Software: http://www.geneious.com/workflows/agriculture-crop-plant
Conclusion: If the bus wasn't driven by a human, no accident would've happened. Kill the humans!