Foreseeing a rapidly approaching age of autonomous artificial intelligence, a European Parliament committee has voted to bestow legal “electronic personhood” on robots. The status comes with a detailed list of rights, responsibilities, regulations, and a “kill switch.”
The committee voted by 17 votes to two, with two abstentions, to approve a draft report written by Luxembourg MEP Mady Delvaux, who believes “robots, bots, androids and other manifestations of artificial intelligence” will spawn a new industrial revolution. She wants to establish a European Agency to develop rules for governing AI behavior. Specifically, Delvaux writes that increasing autonomy in robots will render conventional manufacturing liability laws insufficient. It will become necessary, the report states, to be able to hold robots and their manufacturers legally responsible for their acts.
The rules will also affect AI developers, who, according to the report, will have to engineer robots in such a way that they can be controlled. This includes a “kill switch,” a mechanism by which rogue robots can be terminated or shut down remotely.
The report acknowledges that robots and automation continue to intrude on the human workforce while noting that in certain instances — the cleanup of industrial waste and toxic pollutants, for example — this will be advantageous. However, Delvaux does not believe robots will completely replace humans in the near future; she believes they will work together.
Despite this optimistic note, she issued a stern warning:
“Ultimately there is a possibility that within the space of a few decades AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species.”
The report also notes the “potential for increased inequality in the distribution of wealth and influence.”
This echoes a different warning issued by Stephen Hawking, who believes the combination of capitalism and automation holds the potential for emboldening a globalist oligarchy with disastrous levels of human inequality. An automated, machine-based economic system, Hawking believes, may pose a bigger existential threat to humans than the malevolent killer robots depicted in popular science fiction films.
“If machines produce everything we need, the outcome will depend on how things are distributed,” Hawking states.
“Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”
Will Delvaux’s proposed European Agency for robot regulation do anything to curb economic inequality? Likely not, but it sounds as if at least some European leaders are taking seriously the idea of a sea change in robotics and artificial intelligence in the coming decade. The question that remains is how viable it is to think government can or should constrain exponentially advancing artificial intelligence with regulations.
The next step for the report is the full house, where it must receive a majority of votes to be ratified.
Hod Lipson’s artificial organisms have already escaped from the virtual realm. Now he wants to send them out of control
In a laboratory tucked away in a corner of the Cornell University campus, Hod Lipson’s robots are evolving. He has already produced a self-aware robot that is able to gather information about itself as it learns to walk. Like a Toy Story character, it sits in a cubby surrounded by other former laboratory stars. There’s a set of modular cubes, looking like a cross between children’s blocks and the model cartilage one might see at the orthopaedist’s – this particular contraption enjoyed the spotlight in 2005 as one of the world’s first self-replicating robots. And there are cubbies full of odd-shaped plastic sculptures, including some chess pieces that are products of the lab’s 3D printer.
In 2006, Lipson’s Creative Machines Lab pioneered the Fab@home, a low-cost build-your-own 3D printer, available to anyone with internet access. For around $2,500 and some tech know-how, you could make a desktop machine and begin printing three-dimensional objects: an iPod case made of silicone, flowers from icing, a dolls’ house out of spray-cheese. Within a year, the Fab@home site had received 17 million hits and won a 2007 Breakthrough of the Year award from Popular Mechanics. But really, the printer was just a side project: it was a way to fabricate all the bits necessary for robotic self-replication. The robots and the 3D printer-pieces populating the cubbies are like fossils tracing the evolutionary history of a new kind of organism. ‘I want to evolve something that is life,’ Lipson told me, ‘out of plastic and wires and inanimate materials.’
Upon first meeting, Lipson comes off like a cross between Seth Rogen and Gene Wilder’s Young Frankenstein (minus the wild blond hair). He exudes a youthful kind of curiosity. You can’t miss his passionate desire to understand what makes life tick. And yet, as he seeks to create a self-assembling, self-aware machine that can walk right out of his laboratory, Lipson is aware of the risks. In the corner of his office is a box of new copies of Out of Control by Kevin Kelly. First published in 1994 when Kelly was executive editor of Wired magazine, the book contemplates the seemingly imminent merging of the biological and technological realms — ‘the born and the made’ — and the inevitable unpredictability of such an event. ‘When someone wants to do a PhD in this lab, I give them this book before they commit,’ Lipson told me. ‘As much as we are control freaks when it comes to engineering, where this is going toward is loss of control. The more we automate, the more we don’t know what’s going to come out of it.’
Lipson’s first foray into writing evolvable algorithms for building robots came in 1998, when he was working with Jordan Pollack, professor of computer science at Brandeis University in Massachusetts. As Lipson explained:
We wrote a trivial 10-line algorithm, ran it on a big gaming simulator which could put these parts together and test them, put it in a big computer and waited a week. In the beginning nothing happened. We got piles of junk. Then we got beautiful machines. Crazy shapes. Eventually a motor connected to a wire, which caused the motor to vibrate. Then a vibrating piece of junk moved infinitely better than any other… eventually we got machines that crawl. The evolutionary algorithm came up with a design, blueprints that worked for the robot.
The computer-bound creature transferred from the virtual domain to our world by way of a 3D printer. And then it took its first steps. The story splashed across several dozen publications, from The New York Times to Time magazine. In November 2000, Scientific American ran the headline ‘Dawn of a New Species?’ Was this arrangement of rods and wires the machine-world’s equivalent of the primordial cell? Not quite: Lipson’s robot still couldn’t operate without human intervention. ‘We had to snap in the battery,’ he told me, ‘but it was the first time evolution produced physical robots. It was almost apocalyptic. Eventually, I want to print the wires, the batteries, everything. Then evolution will have so much freedom. Evolution will not be constrained.’
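The loop Lipson describes (generate random designs, score them in a simulator, keep the best, mutate, repeat) is a classic evolutionary algorithm. Here is a minimal sketch in Python, with a toy fitness function standing in for the physics simulator; every name is illustrative, not code from the lab:

```python
import random

def fitness(genome):
    # Stand-in for the physics simulator: in the lab, a genome
    # (a blueprint of rods, joints and motors) would be assembled
    # virtually and scored on how far the machine crawls. Here we
    # just count how close the genome is to an arbitrary target.
    return sum(genome)  # target: all ones

def mutate(genome, rate=0.05):
    # Flip each "gene" with small probability, like a point mutation.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, genome_len=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness, keep the best half, refill with mutants.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

The essential property is visible even in this toy: nothing in the code specifies what a good genome looks like. Selection and mutation discover it, which is why, at larger scale, the output can surprise its own authors.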
In the late 1940s, about five decades before Lipson’s first computer-evolved robot, physicists, math geniuses and pioneering computer scientists at the Institute for Advanced Study in Princeton, New Jersey, were putting the finishing touches to one of the world’s first universal digital computing machines — the MANIAC (‘Mathematical Analyzer, Numerical Integrator, and Computer’). The acronym was apt: one of the computer’s first tasks in 1952 was to advance the human potential for wild destruction by helping to develop the hydrogen bomb. But within that same machine, sharing run-time with calculations for annihilation, a new sort of numeric organism was taking shape. Like flu viruses, they multiplied, mutated, competed and entered into parasitic relationships. And they evolved, in seconds.
These so-called symbioorganisms, self-reproducing entities represented in binary code, were the brainchild of the Norwegian-Italian virologist Nils Barricelli. He wanted to observe evolution in action and, in those pre-genomic days, MANIAC provided a rare opportunity to test and observe the evolutionary process. As the American historian of technology George Dyson writes in his book Turing’s Cathedral (2012), the new computer was effectively assigned two problems: ‘how to destroy life as we know it, and how to create life of unknown forms’. Barricelli ‘had to squeeze his numerical universe into existence between bomb calculations’, working in the wee hours of the night to capture the evolutionary history of his numeric organisms on stacks of punch cards.
Just like DNA, Barricelli’s code could mutate. But he had some unusual ideas about how evolution worked. In addition to single-point mutations, he believed that evolution leapt forward through symbiotic and parasitic relationships between virus-like entities — otherwise it just wouldn’t be fast enough. Maybe, he thought, cells themselves first arose when virus-like creatures started slotting together, like Lego pieces. ‘According to the symbiogenesis theory,’ Barricelli wrote, ‘the evolution process which led to the formation of the cell was initiated by a symbiotic association between several virus-like organisms.’
So far, this doesn’t appear to be the way things happened; in fact, some researchers believe that viruses first emerged after cells. But a few of Barricelli’s findings were not too far off the mark. Once he had ‘inoculated’ MANIAC, it was minutes before the digital universe filled with numerical organisms that reproduced, had numerical sex, repaired ‘genetic’ damage and parasitised one another. When the population lacked environmental challenges or selection pressures, it stagnated. In other cases, a highly successful parasite would cause widespread devastation. These patterns of behaviour are typical of living things, from the simplest cells right up to human beings.
The overall shape of his simulation matched life quite well, and is particularly reminiscent of viruses. Viruses are indeed parasitic: they are obligate parasites, able to reproduce only by taking over the living cells of other organisms; taken by themselves, they aren’t much more than simple DNA or RNA mechanisms surrounded by a coat of protein. And like all living things, viruses inevitably mutate during replication. But they also engage in some genetic give and take. As they weave in and out of host cells, they might steal host genes or leave their own genes behind (by some estimates, eight per cent of our human genome comes to us by way of viruses). Some even swap gene segments with other viruses, and that speeds things up quite a bit.
When an influenza virus evolves through simple mutation and selection, we call that antigenic drift. Each fall, those of us who submit to annual flu vaccines do so in large part because of drift. But every once in a while, an influenza A virus makes an evolutionary leap — swapping a large genome segment with a very different strain and undergoing what is called an antigenic shift. The flu viruses we fear the most — the novel, pandemic strains — are often the products of such shifts. The newly emergent H7N9 avian flu virus is believed to have undergone an antigenic shift, enabling it to infect humans; to date, it has infected 132 people and killed 39 in China. To pick a more explosive example, the Asian flu outbreak of 1957, another product of antigenic shift, wiped out between one and four million people worldwide. Evolvable computer programs also swap code as they engage in genderless algorithmic sex. As with viruses, the ability to make these exchanges boosts a program’s evolvability.
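That segment-swapping is what the genetic-algorithms literature calls crossover: two parent genomes exchange whole blocks at once, letting offspring make the kind of leap that, in influenza, separates antigenic shift from drift. A minimal, hypothetical illustration in Python (not code from any lab mentioned here):

```python
import random

def crossover(parent_a, parent_b):
    # Swap everything after a random cut point, the way an
    # influenza A virus can exchange a whole genome segment with
    # a different strain ("shift"), rather than accumulating one
    # point mutation at a time ("drift").
    cut = random.randint(1, len(parent_a) - 1)
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

a = list("AAAAAAAA")  # one parent "strain"
b = list("BBBBBBBB")  # another
c1, c2 = crossover(a, b)
print("".join(c1), "".join(c2))
```

Notice that no genes are created or destroyed: the two children together carry exactly the parents’ material, just recombined, which is why a single exchange can produce a combination that mutation alone would take far longer to reach.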
And yet, as close to the real thing as Barricelli’s digital organisms came, they were just numeric code: they had a genotype but no phenotype, no bodily characteristics for evolution to sift through. Life on Earth is about tools that solve problems — a beak capable of cracking a tough nut, the ability to digest milk, a robotic leg that can take a step in the right direction. Natural selection acts on the hardware; the software, be it DNA or numeric code, just keeps score. Barricelli’s creatures might have behaved like living organisms, but they never escaped the computer. They never got the chance to take on the outside world.
Not many people would call creatures bred of plastic, wires and metal beautiful. Yet to see them toddle deliberately across the laboratory floor, or bend and snap (think Legally Blonde) as they pick up blocks and build replicas of themselves, brings to my biologist mind the beauty of evolution and animated life. Most striking are the pulsating ‘soft robots’ developed by a team of students and collaborators. Though they have yet to escape the confines of the computer, you can watch in real time as an animated Rubik’s Cube of ‘muscle’, ‘bone’ and ‘soft tissue’ evolves legs and trots exuberantly across the screen.
One could imagine Lipson’s electronic menagerie lining the shelves at Toys R Us, if not the CIA, but they have a deeper purpose. Like Barricelli, Lipson hopes to illuminate evolution itself. Just recently, his team provided some insight into modularity — the curious phenomenon whereby biological systems are composed of discrete functional units, such that, for example, mammalian brain networks are compartmentalised. This characteristic is known to enable rapid adaptation in DNA-based life. ‘We figured out what was the evolutionary pressure that causes things to become modular,’ Lipson told me. ‘It’s very difficult to verify in biology. Biologists often say: “We don’t believe this computer stuff. Unless you can prove it with real biological stuff, it’s just castles in the air”.’
Though inherently newsworthy, the fruits of the Creative Machines Lab are just small steps along the road towards new life. Barricelli always skirted the question of whether his own organisms were alive, insisting that they could not be defined as one thing or the other until there was a ‘clear-cut’ definition of life. Lipson, however, maintains that some of his robots are alive in a rudimentary sense. ‘There is nothing more black or white than alive or dead,’ he said, ‘but beneath the surface it’s not simple. There is a lot of grey area in between.’
How you define life depends on whom you read, but there is a scientific consensus on a few basic criteria. Living things engage in metabolic activity. They are self-contained, in the sense that they can keep their own genetic material separate from their neighbours’. They reproduce. They have a capacity to adapt or evolve. Their characteristics are specified in code and that code is heritable. The robots of the Creative Machines Lab might fulfil many criteria for life, but they are not completely autonomous — not yet. They still require human handouts for replication and power. These, though, are just stumbling blocks, conditions that could be resolved some day soon — perhaps by way of a 3D printer, a ready supply of raw materials, and a human hand to flip the switch just the once. Then it will be up to the philosophers to determine whether or not to grant robots birth certificates.
I’ve been relating some of these developments to friends, and once they get over the ‘cool’ factor, they tend to become distressed. ‘Why would anyone want to do that?’ they ask. We have no real experience with new life forms, particularly of the cyber type, though they abound in books and on screen. Consider Arthur C Clarke’s murderous computer HAL, or Battlestar Galactica’s Cylon babes gone wild — computers built to serve, which evolved to destroy their creators. The more like us our machines become, the more dangerous and unnerving they seem.
But perhaps it is not the creation of new life that we fear, so much as the potential for unpredictable emergent behaviour. Evolution certainly offers that. Take viruses: like Lipson’s machines, these organisms exist in the grey area between life and non-life, yet they are among the most rapidly evolving entities on the planet. They are also some of the most destructive; the Spanish Flu of 1918 killed around 50 million people, and some scientists fear that the emergence of some kind of Armageddon virus is only a matter of time. From this point of view, it doesn’t matter whether viruses are alive or dead. All that matters is that they are highly evolvable and unpredictable.
And here’s where things do get scary. If viruses can evolve within hours, computer code can do it within fractions of a second. Viruses are dumb; computers have processors that might some day surpass our own brains — some would say they already have. If we are going to take the risk of giving machines, in Lipson’s words, ‘so much freedom’, we need a good reason to do it. In Out of Control, Kelly proposes one possible reason. Perhaps, he says, the world has become such a complicated place that we have no other choice but to enable the marriage between the biologic and the technologic; without it, the problems we face are too difficult for our human brains to solve. Kelly proposes a kind of Faustian pact: ‘The world of the made will soon be like the world of the born: autonomous, adaptable and creative but, consequently, out of our control. I think that’s a great bargain.’
According to Lipson, an evolvable system is ‘the ultimate artificial intelligence, the most hands-off AI there is, which means a double edge. It’s powerful. All you feed it is power and computing power. It’s both scary and promising.’ More than 60 years ago, MANIAC was created to ‘solve the unsolvable’. What if the solution to some of our present problems requires the evolution of artificial intelligence beyond anything we can design ourselves? Could an evolvable program help to predict the emergence of new flu viruses? Or the effects of climate change? Could it create more efficient machines? And once a truly autonomous, evolvable robot emerges, how long before its descendants (assuming they think favourably of us) make a pilgrimage to Lipson’s lab, where their ancestor first emerged from a primordial soup of wires and plastic to take its first steps on Earth?