Our New Robo-Reader Overlords by Alan Jacobs.
After you read this post by Jacobs, be sure to spend time with Flunk the robo-graders by Les Perelman (quoted by Jacobs).
Both raise the question: what sort of writing can be taught by algorithms that have no understanding of writing?
In a very real sense, the outcome can only be writing that meets, but never exceeds, whatever has been programmed into the algorithm.
That is frightening enough for education, but if you are relying on AI or machine learning for intelligence analysis, the stakes are far higher.
To be sure, software can recognize “send the atomic bomb triggers by Federal Express to this address…,” or at least I hope that is within the range of current software. But what if the message is: “The destroyer of worlds will arrive next week.” Alert? Yes/No? The phrase is an allusion to the Bhagavad Gita (“Now I am become Death, the destroyer of worlds,” famously quoted by Oppenheimer), and a keyword match has no way to connect that allusion to a nuclear threat. And what if it were written in Sanskrit?
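To make the gap concrete, here is a minimal sketch of the kind of naive keyword matching such software relies on. The keyword list, messages, and function names are invented for illustration, not taken from any real system: the matcher flags the explicit message and sails right past the allusion.

```python
# Minimal sketch of a naive keyword-based alerting rule.
# The keyword list and messages are invented for illustration only;
# no real monitoring system or API is implied.

ALERT_KEYWORDS = {"atomic", "bomb", "trigger", "triggers", "detonator"}

def should_alert(message: str) -> bool:
    """Flag a message if it contains any alert keyword (hypothetical rule)."""
    words = {w.strip(".,!?…\u201c\u201d").lower() for w in message.split()}
    return not ALERT_KEYWORDS.isdisjoint(words)

explicit = "Send the atomic bomb triggers by Federal Express to this address."
allusive = "The destroyer of worlds will arrive next week."

print(should_alert(explicit))  # True  -- literal keywords match
print(should_alert(allusive))  # False -- the Bhagavad Gita allusion goes unflagged
```

The matcher only ever sees the surface tokens it was given; anything the programmer did not anticipate, from allusion to another language entirely, is invisible to it.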
I think computers, along with AI and machine learning, can be valuable tools, but not if they are setting the standard for review. At least not if you don’t want to dumb down writing and national security intelligence to the level of an algorithm.
I first saw this in a tweet by James Schirmer.