AI Has a Hallucination Problem That's Proving Tough to Fix
Machine learning systems, like those used in self-driving cars, can be tricked into seeing objects that don't exist. Defenses proposed by Google, Amazon, and others are vulnerable too.
https://www.wired.com
If you have a regression tool with a ton of features and a ton of parameters, there’s always going to be some degree of overfitting, no matter how hard you try to avoid it. If you find ranges of features where the overfitting is worse, you’ve found your “hallucinations”. I wish people talked about ML without implying that they’re actually creating “intelligent” systems.
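The overfitting point above can be sketched numerically. The snippet below is a toy illustration of my own (the data, polynomial degree, and penalty strength are all made up, not from the article): a plain linear trend is fit with far too many polynomial features, so the unregularized solution chases the noise, while an L2 (ridge) penalty shrinks the coefficients back toward something sensible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 noisy samples of a plain linear trend y = 2x.
x = rng.uniform(-1, 1, 20)
y = 2 * x + rng.normal(0, 0.3, 20)

# Far too many features: a degree-10 polynomial basis for 20 points.
X = np.vander(x, 11)

def ridge_fit(X, y, alpha):
    """Least squares with an L2 penalty; alpha=0 is plain least squares.

    Implemented by augmenting the design matrix, so a single lstsq call
    solves min ||Xw - y||^2 + alpha * ||w||^2.
    """
    d = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(alpha) * np.eye(d)])
    y_aug = np.concatenate([y, np.zeros(d)])
    w, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return w

w_ols = ridge_fit(X, y, 0.0)    # unregularized: free to memorize the noise
w_ridge = ridge_fit(X, y, 1.0)  # regularized: coefficients are shrunk

print("coef norm, OLS:  ", np.linalg.norm(w_ols))
print("coef norm, ridge:", np.linalg.norm(w_ridge))
print("train MSE, OLS:  ", np.mean((X @ w_ols - y) ** 2))
print("train MSE, ridge:", np.mean((X @ w_ridge - y) ** 2))
```

The `alpha` knob here is the regularization the next reply jokes about: raising it trades a worse fit on the training points for coefficients that stay closer to the underlying trend.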
One time Gathering Flowers and I dropped, like, half a sheet of glassy Black Laughing Clown acid and all I remember was I 'overfitted' pink elephant gods swimming in the aether with us... Good times.
You need to be regularized :)
Some things you just have to work out for yourself. Found out later the acid was triple-dipped, which explains so much. You know how they say never to trust a skinny chef? Well, even if they seem well-intentioned, never trust walls with labored breathing. Especially if they're trying to convince you all optima are local.