Victims of bigots, fascists and misogynists on social media can recount (and many have recounted) the emotional toll of engaging with them.
How would you like to reduce that emotional toll and consume minutes, if not hours, of their time?
I thought you might be interested. 😉
Follow the link to DeepPavlov. (Ignore the irony of the name considering the use case I’m outlining.)
From the webpage:
An open source library for building end-to-end dialog systems and training chatbots.
We are in a really early Alfa release. You have to be ready for hard adventures.
An open-source conversational AI library, built on TensorFlow and Keras, and designed for
- NLP and dialog systems research
- implementation and evaluation of complex conversational systems
Our goal is to provide researchers with:
- a framework for implementing and testing their own dialog models with subsequent sharing of that models
- set of predefined NLP models / dialog system components (ML/DL/Rule-based) and pipeline templates
- benchmarking environment for conversational models and systematized access to relevant datasets
and AI-application developers with:
- framework for building conversational software
- tools for application integration with adjacent infrastructure (messengers, helpdesk software etc.)
… (emphasis in the original)
DeepPavlov is only one component of a social media engagement bot for debating bigots, fascists and misogynists, but it is a very important one. A trained AI can take the emotional strain off victims/users and, at least in some cases, shift that toll onto their attackers.
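For readers who want to experiment, here is a minimal sketch of loading a pretrained DeepPavlov dialog model in Python. It assumes a DeepPavlov release that exposes the `build_model` helper and the bundled `configs` registry; the config name is only an example, and the early alpha quoted above may expose a different API.

```python
# A minimal sketch, not taken from the DeepPavlov text quoted above.
# Assumes a DeepPavlov release exposing build_model() and the bundled
# `configs` registry; the config used here is only an example.
from deeppavlov import build_model, configs

# Download the pretrained goal-oriented bot config and build the model.
bot = build_model(configs.go_bot.gobot_dstc2, download=True)

# DeepPavlov models are callable on a batch of utterances and
# return a batch of responses.
replies = bot(["hello"])
print(replies[0])
```

Wiring the model's replies back into a platform's mention/reply API is the remaining piece and is intentionally left out here; see the OpSec note below.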
For OpSec reasons, don’t announce the accounts used by such an AI-backed system.
PS: For the AI ethics debaters: this use of an AI isn’t meant to be a meaningful interchange of ideas online. My goals are to reduce the emotional toll on victims and to waste the time of their attackers. Disclosing that the attacker isn’t actually hurting anyone on the other side (it’s a bot) isn’t a requirement in my view.