You are reading a single comment by @yoshy and its replies.
  • Basically... we don't know how the fuck to make one because we don't even understand how ours works, and even if we did, we wouldn't know how to power it.

    This. I was getting really excited about AI in the pub once and a mate just said that we don't understand our own minds, so how can we create another?
    Sometimes I think of it as a simple input/output/computation problem though. When we create a system that can process as much information (sights, sounds, touch, etc.) as quickly as a human, will it just 'be' intelligent? Will it need an aim? The Atari example gives the algorithm rewards via the in-game points. Scary to imagine more sophisticated bots could have rewards for much more sinister things. Without an aim, will it just try things at random (like a baby)? A robot with the power of a human but the whims of an infant is scary too.

    If all it took was 3D printing my face, I'd have accomplished world domination by now.

    Have done, can confirm I haven't achieved world domination.

  • When we create a system that can process as much information (sights, sounds, touch etc) as quickly as a human, will it just 'be' intelligent? Will it need an aim?

    Will it be self-aware, conscious?
    Are we really conscious? Are animals, or even plants?
    Or is the fact that we feel self-aware just an instinct that perhaps humans have elaborated more than other beings?

  • Is artificial intelligence just the copying of our own minds though? Seems a waste, given most of us are fucking idiots.

    We seem to be building working quantum computers at the moment and last time I checked we were pretty vague on how that shit worked too.

    It's not just about processing as much (or more) information as a human though; it's about learning how to do something with the information and changing how it uses the information as it is fed more. Rather than Deep Blue just being really fast at computing ALL the moves on a chess board, we have AlphaGo, which learns Go strategies the more it plays.
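    The "rewards" idea from the Atari example is simpler than it sounds, and can be sketched in a few lines. This is a toy illustration only: the 5-cell corridor "world", the reward, and every number in it are invented for the example (it's tabular Q-learning, one of the textbook reward-driven learning methods, nothing like what DeepMind actually ran).

    ```python
    import random

    # Toy sketch of reward-driven learning: a 5-cell corridor where the
    # agent starts at cell 0 and earns a reward of +1 only when it reaches
    # cell 4. Everything here (environment, hyperparameters) is made up.

    N_STATES = 5          # cells 0..4; cell 4 is the goal
    ACTIONS = (-1, +1)    # step left or step right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

    random.seed(0)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Move along the corridor; reward 1.0 only for reaching the goal."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

    for episode in range(300):
        state = 0
        while state != N_STATES - 1:
            # Mostly exploit the best-known action, occasionally explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            # Nudge the estimate toward reward + discounted future value.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt

    # The learned policy: which way to step from each non-goal cell.
    policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
    print(policy)
    ```

    Nobody tells the agent "go right"; it stumbles around at random until the reward shapes its behaviour. Here the reward is just "reach the end of the corridor", which is exactly the worry upthread: the mechanism doesn't care what the reward is for.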

    Scary, yes, which is where all the ethical discussions come into play: what controls should be in place when testing this kind of thing? What safeguards should be built in? Self-driving cars that adapt and learn road maneuvers seem to be the hot topic on this at the moment. See all the ethical dilemmas raised in that thread.

    The new Red Dwarf series had them printing out a crew. Add in a "paper jam" for hilarious consequences...

  • we don't understand our own minds

    Anyone who thinks that could do worse than read:

    Plato's Phaedo
    Plato's Republic
    Plato's Parmenides
    Plato's Theaetetus
    Plato's Sophist
    Plotinus' The Enneads
    Kant's Critique of Pure Reason

    Just for starters. :)

    Of course, the fundamental uncertainty at the root of it all persists in the face of the best philosophy, and 'artificial intelligence' is still baloney. Can we create very sophisticated computer systems mocking up how we think? Yes, probably. Can we create something that is alive in the same way we are and hence has 'intelligence'? Probably not.
