I was attracted to this post by a tweet from Diana Zeaiter Joumblat which read:
How parallel computing, big data & deep learning algos have put an end to the #AI winter
It has been almost a decade now, but while we were riding to lunch, a doctoral student in computer science related that their department was known as “human-centered computing” because AI had gotten such a bad name. In their view, the AI winter was about to end.
I was quite surprised, as I remembered the AI winter of the 1970s.
The purely factual observations by Kevin in this article are all true, but I would not fret too much about:
As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
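The feedback loop Kelly describes is the familiar "rich get richer" dynamic. A toy simulation (entirely my own construction, not from his article; the numbers and the superlinear attractiveness assumption are invented for illustration) shows how it concentrates users on the early leader:

```python
# Toy sketch of Kelly's virtuous cycle: each round, newcomers split among
# services in proportion to attractiveness, and attractiveness grows
# superlinearly with the existing user base (an assumed model, not data).

def simulate(users, rounds=20, newcomers=100):
    """users: starting user counts per service; returns final counts."""
    users = list(users)
    for _ in range(rounds):
        quality = [u * u for u in users]      # superlinear attractiveness
        total = sum(quality)
        users = [u + newcomers * q / total    # newcomers join in proportion
                 for u, q in zip(users, quality)]
    return users

final = simulate([1000, 900, 100])
print(final)  # the early leader's share of all users keeps growing
```

Even a modest head start compounds: the largest service captures more than its proportional share of every wave of newcomers, which is the mechanism behind the predicted oligarchy.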
I am very doubtful of: “The more people who use an AI, the smarter it gets.”
As we have seen from the Michael Brown case, the more people who comment on a subject, the less is known about it. Or at least, what is known gets lost in a tide of information that is non-factual but stated as factual.
The assumption the current AI boom will crash upon is that accurate knowledge can be obtained in all areas. In some, like chess, sure, that can happen. But do we know all the factors at play between the police and the communities they serve?
AIs can help with medicine, but considering what we don’t know about the human body and medicine, taking a statistical guess at the best treatment isn’t reasoning; it’s a better betting window.
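The "betting window" point can be made concrete. A minimal sketch (treatment names and outcome data are entirely invented): choosing the treatment with the best observed success rate is just an argmax over frequencies, a wager on the odds rather than any causal reasoning about the patient:

```python
# Invented toy data: 1 = patient recovered, 0 = did not.
outcomes = {
    "treatment_a": [1, 1, 0, 1, 0, 1],
    "treatment_b": [1, 0, 0, 1, 0, 0],
}

def best_bet(history):
    """Pick the treatment with the highest observed success rate.

    This is the whole 'intelligence': an argmax over frequencies.
    It knows nothing about why either treatment works.
    """
    rates = {t: sum(obs) / len(obs) for t, obs in history.items()}
    return max(rates, key=rates.get), rates

choice, rates = best_bet(outcomes)
print(choice, rates)  # picks the better-odds bet, nothing more
```

The chosen treatment may well be the right bet, but nothing in the procedure resembles understanding the underlying biology.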
I am all for pushing AIs where they are useful, while being ever mindful that an AI performs its operations with no more understanding than my father’s mechanical pocket calculator, which I remember from childhood. Impressive, but that’s not the equivalent of intelligence.