J.C.R. Licklider’s proposals for “man-machine symbiosis” led to the invention of the internet
The history of AI is often told as the story of machines getting smarter over time. What's often lost in that narrative is the human element: how intelligent machines are designed, trained, and powered by human minds and bodies.
In this six-part series, we explore that human history of AI—how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are.
Part 4: Licklider’s Cyborg Intelligence
At 10:30pm on 29 October 1969, a graduate student at UCLA sent a two-letter message from an SDS Sigma 7 computer to another machine a few hundred miles away at the Stanford Research Institute in Menlo Park.
It read: “LO.”
The student had meant to send "LOGIN," but the system crashed before the full word could be transmitted over the ARPANET, the packet-switching network carrying the message.
In histories of the internet, this moment is celebrated as ushering in a new age of online communication. What is often forgotten, however, is that underlying the technical infrastructure of the ARPANET was a radical vision for a future of human-machine symbiosis developed by a man named J.C.R. Licklider.
Licklider, who had a background in psychology, became interested in computers in the late 1950s while working at a small consulting firm. He was interested in how these new machines could amplify humanity's collective intelligence, and began to conduct research into the burgeoning field of AI. When he reviewed the existing literature, he found that programmers aimed to "teach" these machines to perform pre-existing human activities, such as playing chess or translating languages, with greater aptitude and efficiency than humans.
This conception of machine intelligence didn’t sit well with Licklider. The problem, for him, was that the existing paradigm saw humans and machines as being intellectually equivalent beings. Licklider believed that, in fact, humans and machines were fundamentally different in their cognitive capacities and strengths. Humans were good at certain intellectual activities—like being creative and exercising judgment—while computers were good at others, like remembering data and processing it quickly.
Instead of having computers imitate human intellectual activities, Licklider proposed an approach in which humans and machines would collaborate, each making use of their particular advantage. He suggested that this strategy would shift the focus from competition (like computer-versus-human chess matches), and facilitate previously unimaginable forms of intelligent activity.
In a 1960 paper entitled "Man-Computer Symbiosis," Licklider spelled out his idea. "The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today." For Licklider, a promising existing example of this symbiosis was a system of computers, networking equipment, and human operators known as the Semi-Automatic Ground Environment (SAGE) that had opened two years earlier to track U.S. airspace.
In 1963, Licklider became director of the Information Processing Techniques Office at the U.S. Department of Defense's Advanced Research Projects Agency (then called ARPA, now DARPA), where he had the opportunity to put some of his ideas into practice. In particular, he was interested in designing and implementing what he first called an "Intergalactic Computer Network."
The idea came from Licklider’s realization that at ARPA, he would need an efficient way to keep large, dispersed teams made up of both humans and machines up to date with changes in programming languages and technical protocols. A communication network connecting these actors across distances was his answer. The challenges in building such a network were akin to a problem contemplated by science fiction writers, he wrote in a memo explaining his concept: “How do you get communications started among totally uncorrelated ‘sapient’ beings?”
Licklider left ARPA before a fully funded program for developing this network began. But over the next five years his initial lofty vision was integral to the development of the ARPANET. And as the ARPANET developed into what we now know as the internet, some began to see how this new networked communication method represented a cooperative interaction between human and technological actors, a symbiont that seemed at times to behave, as the Belgian cyberneticist Francis Heylighen put it, like a “global brain.”
Today, many great leaps forward in machine learning applications are underpinned by collaborative networks of humans and machines. The trucking industry, for example, is increasingly looking for ways to let human drivers and computational systems use their relative strengths to deliver freight more efficiently. Also in the transportation realm, Uber has developed a system in which humans handle high-skill driving tasks, like entering and exiting highways in traffic, while machines manage the hours of routine highway driving.
While there are many other instances of human-machine symbiosis, there is still a cultural tendency to envision machine intelligence as a quality belonging to a single supercomputer with human-level cognitive abilities. But in fact, the cyborg future that Licklider envisioned has come to pass: We live in a world of human-machine symbiosis, or what he described as the "living together in intimate association, or even close union, of two dissimilar organisms." Rather than stoking fears of being replaced by machines, Licklider's legacy points toward the possibilities of collaboration.
This is the fourth installment of a six-part series on the untold history of AI. Part 3 explained why Alan Turing thought AI agents should make mistakes. Come back next Monday for Part 5, which describes a shocking case of algorithmic bias—in the 1980s.