Thursday, July 2, 2015

Why Elon Musk is Donating Millions to Make Artificial Intelligence Safer

As reported by Fortune: What do Stephen Hawking, Bill Gates, and Tesla Motors founder Elon Musk have in common?

They all fear that advances in artificial intelligence, an area of computer science in which machines mimic human behavior and make decisions, could eventually lead to unforeseen disasters for humanity.

This is why Musk, the non-profit Open Philanthropy Project, and The Future of Life Institute, a research organization that aims to mitigate possible catastrophes from emerging technology, are teaming up to reward researchers working to prevent calamities caused by artificial intelligence.

So far, 37 research groups have received a share of the $7 million in grants made by the initiative.

Keep in mind, the types of disasters imagined by Musk and others aren’t the typical Hollywood fare involving terminators or robots gaining some sort of consciousness and turning on their human overlords. Rather, they are more practical in nature.

The grants aim to explore issues like the legal ramifications that could arise from machines and robots operating independently in society. Automated personal shopping assistants that pick up your groceries and self-driving cars are just two possible examples. One of the research groups receiving a grant is looking into ways to “manage the liability for the harms they might cause to individuals and property.” Essentially, if a robot runs a red light, the researchers want to know who should get the ticket.

Another research group is looking to develop guidelines on how a computer embedded with artificial intelligence could rationalize and explain its actions to humans. The idea is that people would be able to ask a machine why it is making a specific decision to troubleshoot any potential problems.

[Image caption: Responses from a Google AI chatbot in development]

For example, consider a computer using artificial intelligence that is programmed to make trades on the stock market to achieve the best possible financial returns for a company. If part of the reason the machine is doing so well involves some form of illegal trading, a human could potentially stop the activity by asking the computer why it is making the trades it does.
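To make the idea concrete, here is a minimal sketch (not from the article, and far simpler than any real trading system) of what a machine that can explain its decisions might look like: a rule-based "trader" that records a plain-language rationale for every action, so a human can later ask why a given trade was made. All names and the trading rule are hypothetical.

```python
# Toy illustration of an explainable decision-maker: every action is
# logged together with the reason it was taken, so a human can query it.

class ExplainableTrader:
    def __init__(self):
        self.log = []  # list of (action, symbol, rationale) tuples

    def decide(self, symbol, price, moving_average):
        """Buy below the moving average, sell above it, and record why."""
        if price < moving_average:
            action = "BUY"
            why = f"price {price} is below the moving average {moving_average}"
        elif price > moving_average:
            action = "SELL"
            why = f"price {price} is above the moving average {moving_average}"
        else:
            action = "HOLD"
            why = "price equals the moving average; no edge"
        self.log.append((action, symbol, why))
        return action

    def explain(self, index):
        """Answer 'why did you make trade N?' in plain language."""
        action, symbol, why = self.log[index]
        return f"{action} {symbol} because {why}"

trader = ExplainableTrader()
trader.decide("ACME", price=95.0, moving_average=100.0)
print(trader.explain(0))
# BUY ACME because price 95.0 is below the moving average 100.0
```

A human supervisor reviewing the log could spot a rationale that points to prohibited behavior and shut the activity down, which is exactly the kind of troubleshooting the research described above envisions.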

The goal for all of this research is to lay the groundwork for scientists and engineers to create intelligent systems that can “work with humans in ways that wouldn’t have been possible before,” said Daniel Dewey, a Future of Life program officer who oversees the grants.
The grants are just another example of the kind of far-out projects that Musk, the CEO of the electric-car maker Tesla, likes to be involved with. Besides Tesla, Musk is also the CEO of the commercial aerospace maker SpaceX, and he dreamed up the Hyperloop, a transportation system that would shoot train-like capsules through tubes at high speed between San Francisco and Los Angeles.

Musk first outlined his plans to invest in artificial intelligence research in January. The $7 million in grants announced Wednesday are part of the $10 million Musk donated to the Future of Life Institute at that time.

[Image caption: DeepFace uses a 3-D model to rotate faces virtually so that they face the camera; image (a) shows the original image, and (g) shows the final, corrected version.]

While there’s a lot of research and funding taking place in artificial intelligence, especially from big tech companies like Google and the Chinese search company Baidu, Dewey explained that much of it is geared toward boosting the performance of artificial intelligence technology. Facebook, for example, has developed technology and algorithms that let the social network identify people in photos even when their faces are covered up.

Dewey likens the different kinds of artificial intelligence research to the staff at a nuclear power plant, each of whom has a different role. While one power plant engineer might be working on ways to improve the efficiency of the reactor, a safety engineer is responsible for making sure the reactor doesn’t blow up.