People have long dreamed of machines with the intelligence and capabilities of humans. From the early Greek myths of Hephaestus and his automatons, to the Golem of Eastern European Jewish tradition, to well over a hundred years of science fiction stories, novels, and films, our imaginations have envisioned what it would be like to have sentient, intelligent, human-like machines coexist with us. Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots) coined the word “robot,” giving a name to these creations of our imagination. In many ways, the quest for the intelligent machine led to the development of the modern computer. Alan Turing’s ideas not only formed the basis of programmable machines but also the core concepts of artificial intelligence, with his namesake Turing Test providing a means of evaluating intelligent machines.
After a semi-painless injection between the thumb and index finger, a microchip is implanted in another employee. A cyborg is now created, and this human/machine mashup runs off to buy a smoothie using his or her new subdermal implant.
If that sounds futuristic, it’s because we’re conditioned to see this as a science fiction trope: a human gets implanted, and the implant’s overlords take control. For one Swedish company, however, the practice of implanting microchips into its employees has become routine, even popular.
A YouTube collection of grainy video clips highlights the progress Gravity founder Richard Browning has made toward his outlandish dream over the past year. Each clip seems more terrifying than the last, with multiple jet engines strapped to his limbs in various configurations as he hovers a few feet off the ground.
The press material attached to the announcement heralds the oil trader turned entrepreneur as a real-life Iron Man, but it’s hard to shake the feeling that you’re watching some sort of backyard mad scientist, a few moments away from the world’s most dangerous Jackass stunt. Browning acknowledges how downright alarming the footage of the Daedalus rig appears, but shakes off any notion that he’s actually in danger at any point during the three-and-a-half-minute package.
When “little green men” invaded Crimea in early 2014, they left a data trail that went largely unnoticed by the U.S. Intelligence Community (IC). Distracted by a large Russian exercise to the west, the IC did not connect the digital dots that indicated the impending invasion. In the Information Age, the “dots” are more plentiful and glaring as everyone now leaves a data trail. Given that, how can intelligence analysts better gather, share, organize, and view data to reveal intent, more accurately predict behavior, and make better decisions with limited resources?
In 2012, futurist Thomas Frey predicted that 2 billion jobs would disappear by 2030, roughly half of all jobs that exist today. Oxford University researchers reinforced this with their estimate that 47 percent of U.S. jobs could be automated within the next two decades. But which ones will robots take first?
First, we should define “robots” as technologies, such as machine learning algorithms running on purpose-built computer platforms, that have been trained to perform tasks currently done by humans.