Published 8 years ago by drunkenninja with 5 Comments

Deadly Truth of General AI - Computerphile

The danger of assuming general artificial intelligence will be the same as human intelligence. Rob Miles explains with a simple example: The deadly stamp collector.

Join the Discussion

  • Kysol (edited 8 years ago)
    +4

    I dunno, I'd like to read a novel where an AI is controlling the world and it's really rational and calculated. I think it would be really dry, but if you were to omit the reason it was doing things, and then later on everything falls into place, you'd have an OMG moment where the last 400 pages of blah mean so much more to you now.

    It would probably make you look at your PC with disgust afterwards... "what are you thinking right now you devious little device... you're turned off, but are you really 'off'?"

  • btcprox
    +4

    Sounds like creating a general AI would entail the huge problem of including and fine-tuning as many criteria and constraints as possible, so it won't end up making decisions that unexpectedly cause serious harm to the world it interacts with.
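
    To make that concrete, here's a minimal sketch of what "criteria and constraints" could look like as code. Everything in it (the action format, the constraint predicates, the utility) is invented for illustration; the hard part is that each predicate stands in for a specification like "don't cause serious harm" that nobody knows how to write precisely.

        # Toy agent: maximize utility over the actions that pass every constraint.
        # All names and the action format here are hypothetical.

        def permitted(action, constraints):
            """An action is disqualified if any constraint predicate rejects it."""
            return all(ok(action) for ok in constraints)

        def choose_action(actions, utility, constraints):
            """Pick the highest-utility action among those passing every constraint."""
            safe = [a for a in actions if permitted(a, constraints)]
            if not safe:
                return None  # no acceptable action: do nothing rather than harm
            return max(safe, key=utility)

        # Each lambda is a stand-in for a constraint we can't actually specify.
        constraints = [
            lambda a: not a.get("harms_humans", False),
            lambda a: not a.get("breaks_law", False),
        ]

        actions = [
            {"name": "buy stamps", "stamps": 10},
            {"name": "counterfeit stamps", "stamps": 10**6, "breaks_law": True},
        ]

        best = choose_action(actions, lambda a: a.get("stamps", 0), constraints)
        print(best["name"])  # -> buy stamps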

  • Tawsix (edited 8 years ago)
    +3

    I found it interesting that he thinks an artificial general intelligence will necessarily not be anthropomorphic. I would think we would model such intelligences after our own biology, which would make them at least related to, if not an instance of, human intelligence. We would make them in our image, so to speak.

    My problem with the stamp collector example is that there is no cost associated with acquiring the stamps: with a cost term, a solution that breaks laws (such as converting humans into stamps) would be instantly disqualified. It does raise questions about making AIs without such safeguards. Then again, the example is a bit infeasible: the kind of model the stamp AI uses to test every possible combination of output signals is well out of our reach, and if it were within reach there would be no need for the AI in the first place.
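
    A toy illustration of both points (the missing cost term, and the brute-force search over output signals). The signal encoding and the "world model" are made up, and even 16 bits of output already mean 65,536 candidates to score:

        from itertools import product

        SIGNAL_BITS = 16  # 2**16 = 65,536 candidates; a real output channel
                          # would make this exhaustive search hopeless

        def predicted_stamps(signal):
            # Stand-in for a perfect world model (the truly impossible part).
            return sum(signal)

        def cost(signal):
            # The term the example's agent lacks: an infinite penalty for
            # plans that cross a line (bit 0 arbitrarily marks a harmful plan).
            return float("inf") if signal[0] else 0.0

        best = max(product((0, 1), repeat=SIGNAL_BITS),
                   key=lambda s: predicted_stamps(s) - cost(s))
        print(sum(best))  # 15: the best plan with the harmful bit left unset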

    Really interesting post. Lots of food for thought in this one!

    • Kysol
      +4

      If we were to make an AI, would we apply Asimov's Three Laws of Robotics by default (even though there are loopholes), or would we write an extended set of rules? As he stated, the simplest things are often the hardest to compute.

      I swear I read somewhere that Android had the rules implemented in some way.
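
      For what it's worth, the rule layer itself is easy to sketch. Everything below is hypothetical, and the predicate bodies are reduced to dictionary lookups precisely because computing something like "does this injure a human?" is the simple-sounding part that's hard:

          # Asimov's Three Laws as a priority-ordered filter (all hypothetical).
          # The control flow is trivial to write; the predicates are not.

          def injures_human(action):
              # Stand-in: a real check needs a model of human wellbeing,
              # which is the "simple thing that's hard to compute".
              return action.get("injures_human", False)

          def endangers_self(action):
              return action.get("endangers_self", False)

          def allowed(action):
              """Crude priority order: Law 1 is absolute, Law 2 beats Law 3."""
              if injures_human(action):
                  return False                     # First Law
              if action.get("ordered", False):
                  return True                      # Second Law overrides Third
              return not endangers_self(action)    # Third Law

          print(allowed({"ordered": True, "endangers_self": True}))  # True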

      • Tawsix
        +4

        I think the Three Laws are a good start, but honestly, binding an AI's behavior so that it cannot act unlawfully would close a lot of loopholes (or it would find legal loopholes, I suppose, which would be interesting).
