If we were to make an AI, would we apply Asimov's Three Laws of Robotics by default (even though they have loopholes), or would we write an extended set of rules? As Asimov himself noted, the simplest things are often the hardest to compute.
I swear I read somewhere that Android had the rules implemented in some form.
I think the Three Laws are a good start, but honestly, binding an AI's behavior so it can't act unlawfully would close a lot of loopholes (or it would find legal loopholes instead, I suppose, which would be interesting).