  • Brilliant!
    I read somewhere that Elon Musk (and FWIW I'm far from agreeing with much of his outlook) is of the view that the thing likely to cause us humans the most problems in the immediate future isn't running out of fossil fuels, isn't global warming, isn't crop failure or water shortage - it's AI, which is currently advancing exponentially, pretty much without any agreed controls. Without getting all T-1000 about it.

  • I have to say, as someone dabbling in the field myself, I very much disagree with him there. Remember he's also the guy who's consistently over-promised on what his cars can do in terms of autonomous driving. The whole thing is coming along nicely, but it's only when you try to implement stuff yourself that you realise how incredibly 'dumb' and limited AI systems are, and will continue to be for quite a long time still. I wouldn't worry about the Skynet scenario too much.

    What I would worry about is idiot humans using AI for applications it isn't ready or suited for - such as the bail decisions some US states are now using AI to make, or the hiring thing mentioned earlier.

  • you realise how incredibly 'dumb' and limited AI systems are

    Yeahbut, everyone* thinks they are smart, and so they implement it to do important stuff, and it ends up doing important stuff shoddily. Like driving cars. Shoddily.

    * people who don't realise a computer is just a calculator on crack

  • Not sure what Musk's (probably mental) views are, but isn't the generally accepted doomsday scenario less Skynet and more "kill everyone to make paperclips"? It's precisely shoddy programming that most people are scared of - basically some system doing a HAL on a global scale.
