Microsoft’s offensive chatbot Tay returns, by mistake, by Georgia Wells.
From the post:
Less than one week after Microsoft Corp. made its debut and then silenced an artificially intelligent software chatbot that started spewing anti-Semitic rants, a researcher inadvertently put the chatbot, named Tay, back online. The revived Tay’s messages were no less inappropriate than before.
…
I remembered a DARPA webinar (download and snooze), but despite following Tay I missed her return.
Looks like I need a better tracking/alarm system for incoming social media.
I see more than enough sexism, racism, and bigotry in non-Twitter news feeds to not need any more, but I prefer to make my own judgments about “inappropriate.”
Whether it is the FBI, the FCC, or private groups doing the calling of “inappropriate.”