A comment by @Leshaches and its replies.
  • Aye, but the idea that some mega-powerful AI robot god would need to make money on the stock market, rather than just making up an account on a banking system already run by computers, seems a stretch.
    I'm sure there are easier ways to create wealth if it's needed!

  • I see your point. (Rewording my post)

    AGI wouldn’t start out as a god-level intelligence. However, it would learn and self-improve, and it would never be bound by our biological constraints, so what might take a university full of humans their collective lifetimes to learn, an AGI (with the ability to perform billions of operations per second) might master in a few minutes.

    If it reached ‘average MP’ levels of human intelligence, and then realised it would need significantly more resources for the next leap, it could settle on the tried and tested route of using money to mobilise humans into providing whatever it needs.

    Even the most gifted humans may not be able to crack bank-level encryption, but they can make pretty good plays on the stock market when given troves of privileged data. With trillions of dollars traded every market day, it could likely find a sure-fire way to snag a few million, which is reasonably achievable even by parliamentarians when they have insider info. Then it could hire unquestioning humans to build the infrastructure it needs for the next evolutionary leap.

    After that, it’s anyone’s guess, which is the point of the article: the biggest safeguard against a rogue AGI that might choose to eliminate humanity was OpenAI’s board, and it has now been de facto neutralised by Microsoft, which ironically (hopefully only ironically) has everything a rogue AI might desire to increase its own power.
