Microsoft's head of research, Eric Horvitz, says that, in contrast to warnings from the likes of Elon Musk and Stephen Hawking, machines will likely achieve a human-like consciousness but do not pose a threat to the survival of mankind.
“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences,” Horvitz said in an interview after being awarded the prestigious AAAI Feigenbaum Prize for his contribution to artificial intelligence (AI) research, “[but] I fundamentally don’t think that’s going to happen”.
“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”
In a later blog post, Horvitz acknowledged that the progression of AI towards super-intelligence would present challenges in the realms of privacy, law and ethics, but pointed to an essay he had co-authored which concludes that “AI doomsday scenarios belong more in the realm of science fiction than science fact”.
Meanwhile, technologist Elon Musk, co-founder of PayPal and founder of SpaceX and Tesla Motors, has repeatedly expressed concerns about the development of AI, first likening it to the production of nuclear weapons and then claiming mankind was “summoning the demon” by pursuing the technology carelessly.
Famed physicist Stephen Hawking recently said AI “could spell the end of the human race”. He added that the technology would eventually become self-aware and “supersede” humanity as it developed faster than biological evolution.
Even Microsoft founder Bill Gates, who has stepped back from the company to pursue philanthropic work with his Bill & Melinda Gates Foundation, is concerned about AI getting away from us.
Speaking during a Reddit AMA on Thursday morning, Gates was asked how much of an “existential threat” the development of super-intelligent machines posed to mankind.
“I am in the camp that is concerned about super intelligence”, said Gates.
“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned”.
Horvitz’s confidence may be partly explained by his revelation that “over a quarter” of all resources at his research unit are now focused on developing the technology.
Microsoft has already introduced its digital assistant Cortana (named, ambitiously and a little worryingly, after the advanced AI from the Halo games, a fictional universe in which all AI eventually becomes rampant) for its Windows phone and desktop operating systems, and Horvitz believes the only battle on the horizon will be between companies developing competing AI standards.
“We have Cortana and Siri and Google Now setting up a competitive tournament for where’s the best intelligent assistant going to come from”, Horvitz said, “and that kind of competition is going to heat up the research and investment, and bring it more into the spotlight.”
Beyond personal digital assistants, the development of AI and automation continues to progress rapidly in all directions: in the last few weeks alone we’ve seen both an AAAI competition entry that allows Super Mario to become self-aware and a move from Foxconn to replace a large chunk of its workforce with a million robots.
In outlining a long-term plan for his company after posting an unexpectedly large profit last quarter, Facebook CEO Mark Zuckerberg revealed that he, too, was planning to get into the AI business.
Zuckerberg, along with Musk, has already invested in “Vicarious”, a program that hopes to produce a “unified algorithmic architecture” to imbue machines with human-like consciousness and control over their faculties.