Is artificial intelligence just a copy of our own minds, though? Seems a waste, given most of us are fucking idiots.
We seem to be building working quantum computers at the moment, and last time I checked we were pretty vague on how that shit worked too.
It's not just about processing as much information as a human (or more), though; it's about learning to do something with the information and changing how it uses it as more comes in. Rather than Deep Blue just brute-force searching chess positions really fast, we have AlphaGo, which learns Go strategies the more it plays.
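That's the core distinction: a searcher scores positions with a fixed rule, while a learner rewrites its own scores from experience. A minimal sketch of the learning side using tabular Q-learning (the states, actions and rewards here are illustrative placeholders; AlphaGo itself uses deep networks plus tree search, not a table):

```python
import random
from collections import defaultdict

# Q[state][action] starts at zero and is reshaped purely by experience.
Q = defaultdict(lambda: defaultdict(float))

ALPHA = 0.1   # learning rate: how far each new result shifts the estimate
GAMMA = 0.9   # discount: how much future rewards count

def update(state, action, reward, next_state, next_actions):
    """One learning step: nudge Q[state][action] toward the observed outcome."""
    best_next = max((Q[next_state][a] for a in next_actions), default=0.0)
    target = reward + GAMMA * best_next
    Q[state][action] += ALPHA * (target - Q[state][action])

# The more games it plays, the more strategy the table encodes --
# no move list is ever exhaustively enumerated.
```

The point is that nothing in there is a chess (or Go) rule; the behaviour comes entirely from feeding it more games.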
Scary, yes, which is where all the ethical discussions come into play: what controls should be in place when testing this kind of thing? What safeguards should be built in? Self-driving cars that adapt and learn road maneuvers seem to be the hot topic at the moment. See all the ethical dilemmas raised in that thread.
The new Red Dwarf series had them printing a crew out. Add in a "paper jam" for hilarious consequences...
This. I was getting really excited about AI in the pub once and a mate just said that we don't understand our own minds, so how can we create another?
Sometimes I think of it as a simple input/output/computation problem, though. When we create a system that can process as much information (sights, sounds, touch, etc.) as quickly as a human, will it just 'be' intelligent? Will it need an aim? The Atari example gives the algorithm rewards via the in-game score. It's scary to imagine more sophisticated bots being rewarded for much more sinister things. Without an aim, will it just try things at random (like a baby)? A robot with the power of a human but the whims of an infant is scary too.
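That aim-versus-whim trade-off maps pretty neatly onto epsilon-greedy exploration in reinforcement learning. A minimal sketch, assuming a table of learned action values (the names are made up for illustration, not from any particular Atari agent):

```python
import random

EPSILON = 0.1  # fraction of the time the agent acts on a whim

def choose_action(q_values, actions):
    """Mostly chase the reward signal, occasionally flail like a baby."""
    if random.random() < EPSILON:
        return random.choice(actions)  # no aim: try something at random
    # aim: pick whatever has scored best so far
    return max(actions, key=lambda a: q_values.get(a, 0.0))

# With EPSILON = 1.0 the agent is all whim and no aim;
# with EPSILON = 0.0 it only ever exploits what it already knows.
```

So "try things at random like a baby" isn't a failure mode, it's literally built in; the worry is just what the reward is pointing at.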
Have done, can confirm I haven't achieved world domination.