Our AI Children Need Supervision

Reading about Microsoft’s misadventures with its “Tay” chatbot, I immediately remembered a similar incident from several years ago.

Back in 2009, Netbase released healthBase, a search engine that aggregated content from authoritative health sites like WebMD, Wikipedia, and PubMed. It seemed like a good idea at the time.

But there were some glitches, the most notable being this one, where the database identified Jews as a cause of AIDS:

[Screenshot: healthBase search results listing “Jew” among the causes of AIDS]

It got worse. Clicking on “Jew” led to a list of suggested “treatments”, which included alcohol, salt, and Dr. Pepper.

[Screenshot: healthBase’s suggested “treatments” for “Jew”, including alcohol, salt, and Dr. Pepper. Credit: http://techcrunch.com/2009/09/02/netbase-thinks-you-can-get-rid-of-jews-with-alcohol-and-salt/]

It shouldn’t surprise us that unsupervised aggregations of human-generated content can lead to embarrassments like these, especially since such aggregations become attractive targets for trolls. And even without deliberate abuse, our algorithms have a natural tendency to reinforce status quo bias.
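To make the failure mode concrete, here is a toy sketch. It is purely illustrative, not Netbase’s actual pipeline: the sentences, the `vetted` set, and the function names are all invented for the example. It shows how naive pattern-matching over unvetted text surfaces spurious “cause” claims, and how a simple human-review gate supplies the missing supervision.

```python
import re
from collections import Counter

# Hypothetical scraped sentences; the bogus pairing comes from
# vandalism in the source text, exactly the kind trolls supply.
documents = [
    "Studies show smoking causes cancer.",
    "A vandalized page claims coffee causes telepathy.",
    "HIV causes AIDS.",
]

# Naive relation extraction: any "X causes Y" is taken at face value.
CAUSE_PATTERN = re.compile(r"(\w+) causes (\w+)", re.IGNORECASE)

def extract_claims(docs):
    """Count every (cause, effect) pair matched in the raw text."""
    claims = Counter()
    for doc in docs:
        for cause, effect in CAUSE_PATTERN.findall(doc):
            claims[(cause.lower(), effect.lower())] += 1
    return claims

# Supervision: a human-curated review set gates what gets published.
# 'vetted' stands in for editorial review, not any real dataset.
vetted = {("smoking", "cancer"), ("hiv", "aids")}

def publish(claims, reviewed):
    for pair, count in claims.items():
        status = "PUBLISH" if pair in reviewed else "HOLD FOR REVIEW"
        print(f"{pair} (seen {count}x): {status}")

publish(extract_claims(documents), vetted)
```

The point isn’t the regex. It’s that anything extracted automatically gets held until a person looks at it, which is precisely the supervision the healthBase results lacked.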

As parents, we know that our children are sponges. We try to guide their information diet, at least enough to avoid catastrophic consequences. While unsupervised learning may appeal to us as a low-maintenance approach, we recognize our responsibility to supervise their development.

As developers, we owe our AIs at least that much.

