Forget Summer and Winter. AI progress is more like a boat trip
Guest post by Tom Bewley
As we edge our way into the new decade, some researchers are asking whether we are on the cusp of an ‘AI winter’. Tom Bewley explores whether this fear is justified.
The term artificial intelligence was coined by John McCarthy for the 1956 Dartmouth workshop, which effectively established the AI discipline as distinct from its conceptual parents in formal logic, statistics and cybernetics. Ever since, commentators have presented the field as progressing over a series of boom-and-bust cycles: summers and winters. Few would argue against the suggestion that recent years have been an example of the former, with truly remarkable results brought by a combination of massive datasets, large-scale computation, and deep neural network models. But as we edge our way into the new decade, many are worrying that winter might be coming once more. Is this fear justified?
Since around 2012, data-driven machine learning has annihilated previous benchmarks in many challenging problems of computer vision, natural language processing and game playing. For many, this revolution reached its zenith in late 2017, when DeepMind's Alpha-series board game agents culminated in AlphaZero achieving superhuman performance in Go, chess and shogi after just 24 hours of self-play reinforcement learning. Others have been equally excited by more recent developments in large pretrained language models (see: ELMo, GPT-2, BERT, Meena), whose ability to distill the salient points from freeform text and generate humanlike responses to a wide variety of queries could easily lead us to believe that true understanding is right around the corner.
Point of diminishing returns
But these models, deeply impressive as they are, exemplify many of the insecurities that now pervade the AI field. In some respects their performance is brittle, and they show frustrating lapses of common sense. As vocal critic Gary Marcus has found in his experiments with GPT-2, it is relatively easy to elicit basic errors of mathematics or logical entailment ("I put two trophies on a table, and then add another, the total number is…" "…five trophies"), indicating that these systems lack some fundamental aspects of practical reasoning. Such issues persist despite an explosion in model complexity. Microsoft promote their new Turing-NLG language model by boasting about its 17 billion parameters, double that of any previous system, and Google's Meena chatbot reportedly required $1.4 million worth of computing time to train. These trends not only signify an unhealthy concentration of research power in the hands of a few US-based tech giants with sufficient data and computing resources, but also indicate that we might be reaching the point of diminishing returns from the deep learning paradigm.
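For those who want to see this for themselves, the probing is easy to reproduce against the publicly released GPT-2 weights. The snippet below is my own minimal sketch, not Marcus's exact setup: it assumes the Hugging Face transformers package is installed, and because the continuations are sampled, the results will vary from run to run.

```python
# Minimal sketch (illustrative, not Marcus's original experiment): prompting
# GPT-2 with a simple counting question via the Hugging Face pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("I put two trophies on a table, and then add another, "
          "so the total number of trophies is")

# Sample a few continuations; GPT-2 frequently completes with an arbitrary
# number rather than "three", illustrating its shaky grip on arithmetic.
for output in generator(prompt, max_new_tokens=5, num_return_sequences=3,
                        do_sample=True):
    print(output["generated_text"])
```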
And causes for concern present themselves from yet more directions. Led by the likes of Judea Pearl, the recent resurgence of interest in causal inference has promoted causal reasoning as the bedrock of scientific induction and common-sense reasoning. A notion of causality is glaringly absent from deep learning models, whose sole currency is statistical correlation. This limitation threatens to create serious problems in complex domains such as autonomous driving. In my own area of research, a growing crowd of researchers are worrying about the lack of transparency, accountability and demonstrable fairness in black-box machine learning models. As Wachter and Mittelstadt point out, these properties have a strong claim to being legally mandated in consequential real-world situations, but such a requirement would rule out all but the simplest of today's AI models.
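To make the correlation-versus-causation point concrete, here is a toy sketch of my own (not drawn from Pearl's work): a hidden confounder makes two variables look strongly related to any purely statistical learner, yet intervening on one leaves the other completely untouched.

```python
# Toy confounding example: x and y are both driven by a hidden variable z,
# so they are correlated even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)             # hidden confounder
x = 2.0 * z + rng.normal(size=n)   # observed variable (a candidate "cause")
y = 3.0 * z + rng.normal(size=n)   # outcome; note it never depends on x

# A correlational model happily predicts y from x...
slope = np.polyfit(x, y, 1)[0]
print(f"observational slope of y on x: {slope:.2f}")   # ~1.2, far from zero

# ...but intervening on x (setting it by hand) changes nothing about y,
# because the association was carried entirely by z.
y_do_low = 3.0 * z + rng.normal(size=n)    # y when we force x = -5
y_do_high = 3.0 * z + rng.normal(size=n)   # y when we force x = +5
print(f"mean(y | do(x=-5)) = {y_do_low.mean():.2f}, "
      f"mean(y | do(x=+5)) = {y_do_high.mean():.2f}")  # both ~0
```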
Contemporary learning machines are tools for pattern recognition, and it is becoming increasingly clear that pattern recognition is not the totality of intelligence. So once again, with the shaky ground laid bare for all but the most steadfast neural network aficionados to see, could we be entering an AI winter?
Seasonal analogy for AI progress is misleading
Let's try to answer this question by thinking carefully about its premises. As Daniel Dennett highlights with his notion of intuition pumps, our thinking is driven to an uncomfortable extent by the analogies we choose to use, and I fear that this particular seasonal analogy for AI progress is misleading. Summer paints the picture of our brightest minds engaged in a glorious sun-baked flow state, sharing ideas freely and innovating at a ludicrous rate as they frolic in burgeoning fruit orchards. Winter suggests our scientists grimly hunched over ice-encrusted laptops, joints seized up and eyes staring blankly at empty document templates and bug-ridden code. But this is not how research and technological development work. To a large extent, genuine theoretical and empirical progress is independent of glamour or external excitement, and it has none of the regularity of the seasons. Gartner's hype cycle captures more of the relationship between perceived and actual innovation, but I think about AI progress with the help of another intuition pump: one that takes us on a fog-filled waterborne journey.
Imagine we find ourselves on a small boat, paddling our way downstream to a destination we know little about but are told is really rather wonderful. Our surroundings are hazy and visibility is limited, but we notice how the waterways we traverse vary from narrow, gushing channels to vast expanses whose limits lie far beyond our sight. Things are great in the narrow parts. Pulled along by the currents, and perhaps a little by our own paddling prowess, we feel real progress being made as the grassy banks whizz by on either side. We never need to think about where to go (it's obvious: forward), and the splashes we make as we ricochet off the rocks are actually kind of fun. Wider stretches, on the other hand, force us to do some soul-searching. With no visible landmarks, our forward speed is immeasurable, and with so much space on both sides we must make hard choices about where to paddle in time for the next branching into narrowness. At these times, the immediate future is dull and the long-term unknown. But really, our true rate of progress is difficult to estimate, and it may be that the water flows fastest in some of the broadest stretches. Wide parts are also where we choose our path, even if that is from a position of hazy uncertainty. Ultimately, whether we currently find ourselves in a narrow or wide channel has little bearing on when we will arrive at our destination, if that is indeed our fate…
Map this story onto the history of AI
Map this story onto the history of AI. Back in the 1950s, the attendees of that foundational Dartmouth workshop pondered intelligence in its broadest sense. With so many more questions than answers, they found themselves in an intellectual space of almost unparalleled breadth, with a great deal of paddling room to explore one possible direction or another. A partial narrowing was reached in the 60s, when the prevailing paradigm of good old-fashioned AI pursued a particularly tidy vision of intelligence as the logical manipulation of symbols.
During the mid-70s, it became clear that the proponents of this vision had over-promised and under-delivered, as their ambition butted up against limits of computing power. In Britain, the Lighthill report induced a radical scaling back of research funding. But important work did continue regardless, and influential new theories of perception and knowledge representation emerged in this period, alongside the first demonstrations of the practical utility of expert systems for aiding decision making in realistic medical, business, and engineering situations. These systems, which process large databases of human-generated facts and rules through inference engines to solve novel problems, were pursued energetically by the corporate world during the 80s. For the first time, AI research had widespread real-world impact (it was "ricocheting off the rocks"), but this work embodied an almost comically narrow perspective on how to design and build intelligent machines.
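For readers who have never met one, the core loop of such a system is remarkably small. The sketch below is a deliberately minimal forward-chaining inference engine with invented, medical-flavoured facts and rules; real expert systems of the era, such as MYCIN, added certainty factors, explanation facilities and hundreds of hand-crafted rules on top of this basic idea.

```python
# A toy forward-chaining inference engine: a knowledge base of facts plus
# if-then rules, applied repeatedly until no new conclusions follow.
# Facts and rules here are invented purely for illustration.
facts = {"fever", "cough"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ({"flu_suspected"}, "recommend_rest"),
]

changed = True
while changed:            # fire rules until we reach a fixed point
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'fever', 'cough', 'flu_suspected', 'recommend_rest'}
```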
The 90s and 00s brought a recognition that expert systems of any significant power are extremely expensive to create and maintain, and remain fundamentally limited by their inability to learn. Yet despite a relatively quiet period of public interest, renewed breadth enabled an abundance of superb original work, bringing additional rigour and flair to the sub-fields of planning, language modelling, computer vision and reasoning under uncertainty. In the same period, IBM’s Deep Blue recorded its historic victory against world chess champion Garry Kasparov, and numerous teams unveiled impressive autonomous vehicle prototypes.
The last decade saw us eagerly plunge once more into a channel – deep learning – opened up by minor algorithmic developments and major advances in computing hardware. Filled with prodigious milestones, this passage has been the most exhilarating yet, but inevitably it is reaching its end. Over the past two years, the various question marks hanging over the field have begun to widen its scope again to a degree unseen since its mid-century infancy. Bobbing around in the Amazonian expanse ahead are issues such as data-efficiency, generalisation, causality, common-sense reasoning and interpretability.
Vital progress and transcendent insights
Our emergence from the latest cosy channel in our AI journey is no cause for regret. For those genuinely driven by the promise of scientific progress rather than the venture capital billions, what we might now call a wide period is no less rewarding than a narrow one. In many ways, there is greater space for thought and movement, and it is in these times that our agency to steer the future truly lies. Unlike a winter, a wide period gives us no reason to think that researchers are thinking less hard, or doing less remarkable work. As our boating analogy shows, our ability to see the space of possibilities around us is limited, and we can never actually know what our rate of progress is, especially without the comfort of firm intellectual boundaries to keep us on track.
The era of deep learning’s dominance probably is drawing to a close, and this should give us profound excitement. Whatever happens in the years ahead, vital progress and transcendent insights will continue to be made, whether or not we find ourselves splashed with the turbulent froth of media attention in the process.
Tom is a PhD student in explainable artificial intelligence at the University of Bristol. His current work explores the use of simplified, transparent models of intelligent autonomous agents to bring trust, understanding and safety. @tom_bewley
Photo credits: City in winter by Logan Armstrong on Unsplash; kayak by Bit Cloud on Unsplash