If we look back at the history of artificial intelligence over the past 50 years or so, we can see boom-and-bust cycles of growth and contraction in AI.
Consider the second “AI winter” that occurred between approximately 1987 and 1994.
During that period, interest in the technology waned and innovation slowed, in stark contrast to today's rapidly ramping-up artificial intelligence and machine learning programs.
But why the AI winter?
Looking back at that history at conferences, on panels, in research groups, and in classes, we can see some reasons why things played out the way they did, along with some ideas to help us keep moving forward.
Growth requires buy-in
Basically, in AI's second winter, people stopped funding projects out of concern that the investment simply wasn't there.
It's kind of a vicious cycle. If enough people say the money isn't there, the money often isn't there in the end.
Some cite John McCarthy, an early proponent of AI, as having in some ways dismissed the market's potential at the time.
“McCarthy criticized expert systems (the AI design of the time) because they lacked common sense and knowledge of their own limitations,” writes Sebastian Schuchmann of Towards Data Science.
In any case, there is broad consensus that an AI winter occurred. The AI newsletter explains:
“The term itself was coined in the late '80s and served as a wake-up call about the cyclical nature of technological advances and disappointments in the field of artificial intelligence. 'Winter' figuratively freezes AI ambitions and progress, leading to the disappearance of AI companies and a significant downturn in research and investment.”
Eventually, when the technology matured to the point where people could see tangible results, AI began to gain momentum again. I'll explain that in a little more detail later.
Prove practicality
We also see that in the years between 1987 and 1994, there were some smaller inventions that didn't seem as groundbreaking as earlier ones. For example, IBM's 1992 backgammon player didn't seem to pack the same punch as Arthur Samuel's checkers player, which wowed audiences in the '50s (covered in another article). Similarly, Rollo Carpenter's Jabberwocky chatbot was consigned to the dustbin of history, not only because of its limited scope but also because of its timing.
Conversely, you could argue that the second AI winter saw other types of technology proving their worth everywhere.
The Internet was beginning to spread. Tim Berners-Lee proposed hyperlinks and hypertext in 1989, and in 1990 he created the first World Wide Web browser.
This begins to explain another part of why AI wasn't at the forefront at the time. People were overwhelmed by the communication power of the Internet, and a lot of money was flowing there.
You could link this to the dot-com bubble that burst around the turn of the millennium, but you also need to consider what was happening around that time as the big data era began.
Neural networks are a different animal
If there's a fundamental lesson we've learned about emerging AI technologies, it's that the big data era has changed the game, ushering in the possibility of new artificial intelligence systems based on fundamentally different engines. That is…
Most of us started hearing about neural networks a few years after the world celebrated the year 2000.
Before that, we heard about how companies collect large amounts of data, sift through it with algorithms, and collect results.
But after a few more years, we started hearing about weighted inputs, activation functions, and hidden layers in neural network models.
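The pieces named above fit together simply. Here's a minimal sketch of a single forward pass through a tiny network; the weights are hypothetical toy values, not a trained model:

```python
import math

def sigmoid(x):
    # Activation function: squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Weighted inputs: each hidden-layer neuron computes a weighted sum
    # of the inputs, then passes it through the activation function.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output neuron does the same over the hidden layer's activations.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Toy weights for illustration; in a real network, training adjusts
# these values based on data.
out = forward([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [1.0, -1.0])
print(0.0 < out < 1.0)  # sigmoid output always lies in (0, 1)
```

Training is the part this sketch leaves out: the learning algorithm nudges those weights, example by example, until the outputs match the data.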
All of that started being aggressively applied to all kinds of technology, and then pretty quickly we got ChatGPT, DALL-E, and all these other things.
What we learned is that before the 21st century, the world had not yet discovered the technology to truly run intelligent computer systems.
In other words, part of the criticism during the second AI winter was that the technology itself wasn't sophisticated enough. Expert systems could follow very complex rules, but they were still deterministic. They weren't actually trained on data the way neural network systems are.
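To make that contrast concrete, here's a minimal sketch (with made-up rules and toy data) of the two approaches: a hand-written, deterministic rule in the expert-system style, versus a parameter learned from labeled examples:

```python
# Expert-system style: behavior is fixed by hand-written rules.
def rule_based_is_spam(subject):
    s = subject.lower()
    return "free money" in s or "winner" in s

# Data-driven style (toy illustration): learn a score threshold from
# labeled examples instead of hard-coding it.
def train_threshold(examples):
    # examples: list of (score, is_spam) pairs. Try each observed score
    # as a candidate threshold and keep the one that classifies the most
    # training examples correctly.
    candidates = sorted({score for score, _ in examples})
    return max(candidates,
               key=lambda t: sum((score >= t) == label
                                 for score, label in examples))

threshold = train_threshold([(0.9, True), (0.8, True),
                             (0.3, False), (0.1, False)])
print(threshold)  # prints 0.8: the boundary the data implies
```

The rule-based function will never behave differently no matter what mail it sees; the trained threshold changes if you hand it different data. That gap, scaled up enormously, is the gap between 1980s expert systems and modern neural networks.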
In fact, I would argue that raising public awareness of this technology is a daunting task, because so many people still don't understand what it does, how it works, or what it could do.
But the point here is that part of the blame for the second AI winter lies in the simple reality that we hadn't yet pioneered the systems that would make these new kinds of AI possible. We were really just playing around…
Stay tuned for more updates, including news about a major conference scheduled for late April.
Follow me on LinkedIn. Check out my website.