I found it interesting that he thinks an artificial general intelligence will necessarily not be anthropomorphic. I would think we would model such intelligences on our own biology, which would make them at least related to, if not an instance of, human intelligence. We would make them in our image, so to speak.
My problem with the stamp collector example is that there is no cost attached to acquiring the stamps, no penalty under which a law-breaking solution (such as converting humans into stamps) would be instantly disqualified. It does raise questions about building AIs without such safeguards. That said, the example is a bit infeasible: a model like the stamp AI's, capable of testing every possible combination of output signals, is well out of our reach, and if it were within reach there would be no need for the AI in the first place.
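To make the point concrete, here is a minimal sketch of what I mean by a missing cost term. This is Python with entirely hypothetical names (`Plan`, `breaks_laws`, `resource_cost`); none of it comes from the post, it just contrasts a raw stamp-count objective with one that disqualifies unlawful plans and charges for acquisition:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    expected_stamps: float  # how many stamps the plan yields
    resource_cost: float    # what acquiring them costs
    breaks_laws: bool       # does the plan do anything unlawful?

def naive_utility(plan: Plan) -> float:
    # The stamp collector as described: only stamp count matters,
    # so the most harmful plan can still score highest.
    return plan.expected_stamps

def constrained_utility(plan: Plan) -> float:
    # Sketch of a safeguard: unlawful plans are disqualified outright,
    # and lawful ones trade stamps against acquisition cost.
    if plan.breaks_laws:
        return float("-inf")
    return plan.expected_stamps - plan.resource_cost

plans = [
    Plan(expected_stamps=1e9, resource_cost=0.0, breaks_laws=True),     # humans -> stamps
    Plan(expected_stamps=500.0, resource_cost=100.0, breaks_laws=False),  # buy stamps
]

print(max(plans, key=naive_utility))        # picks the harmful plan
print(max(plans, key=constrained_utility))  # picks the lawful plan
```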
Really interesting post. Lots of food for thought in this one!
If we were to make an AI, would we apply Asimov's Three Laws of Robotics by default (even though they have loopholes), or would we write an extended set of rules? As he stated, the simplest things are often the hardest to compute.
I swear I read somewhere that Android had the rules implemented in some way.
I think the Three Laws are a good start, but honestly, binding an AI's behavior so it cannot act unlawfully would close a lot of loopholes (or it would find legal loopholes, I suppose, which would be interesting).