Scientists want to teach robots to know when to trust humans
If advanced robots become ubiquitous in society, we need to know that we can trust them. At the same time, we need to make sure robots trust us mere humans in matters they're not equipped to handle, researchers argue in a paper published last month in the academic journal ACM Transactions on Interactive Intelligent Systems. The work, a collaboration among scientists at Penn State, MIT, and the Georgia Institute of Technology, is an attempt to develop a definition and model of trust that could easily translate into software code. After all, robots can't rely on a "gut feeling" about whether to trust someone the way humans do.