Impact Lab


May 19th, 2015 at 4:59 pm

What technologies will be required for the robots of the future?


It might sound like science fiction, but someday, thanks to creative scientists and engineers, our world may contain autonomous or semi-autonomous robots working with people, helping us do tasks that are better suited for machines.  

What technology will it take to get us there? Engineers believe it comes down to mastery of the four Ps: Perception, Processing, Power and Planning.


 

Perception

Some jobs aren’t easy for humans. They may be dangerous, unpleasant or require expensive equipment and safety protections. While today’s robots can perform some of these tasks for us, simply putting a camera on a robot and tele-operating it isn’t always an option. Some places, such as deep space or miles under the sea, may not allow for immediate and constant communication. In instances where a robot doesn’t have access to its operator, it must independently infer how to interact with and impact its surroundings.

“If both robots and humans understand how to operate within their environments, robots can help humans have action at a distance. This is easier said than done,” said Lockheed Martin’s Dr. Todd Danko. “Humans are very good at looking at their environment and determining what is important and what is irrelevant. We do this without even thinking about it. For robots to do that, it requires a complex combination of sensors and algorithms.”


“A robot will enter its environment with a mission—and it will determine what within the environment is important based on that mission,” added Dr. Danko. “At the same time, the mission is often modified by what is observed in the environment. Robots must perceive what’s going on around them and adapt their actions, including perception approaches in real-time.”

This is why researchers like Danko are working on adaptive perception algorithms to improve autonomous navigation and manipulation.
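
To give a feel for what such an algorithm involves, here is a minimal sketch. Everything in it (the mission labels, relevance weights, and update rule) is an invented illustration, not the actual algorithms Danko's team is developing:

```python
# A minimal sketch of mission-driven perception: raw detections are ranked by
# relevance to the current mission, and the relevance weights themselves shift
# as observations come in. All labels, weights, and the update rule are
# hypothetical illustrations.

MISSION_RELEVANCE = {
    "navigate": {"obstacle": 1.0, "terrain": 0.8, "tool": 0.1},
    "repair":   {"tool": 1.0, "obstacle": 0.5, "terrain": 0.2},
}

def prioritize(detections, mission):
    """Rank detections by how much each one matters to the mission."""
    weights = MISSION_RELEVANCE[mission]
    return sorted(detections,
                  key=lambda d: weights.get(d["label"], 0.0) * d["confidence"],
                  reverse=True)

def observe(mission, detection):
    """Crude real-time adaptation: repeatedly seeing obstacles during a
    repair mission shifts attention back toward obstacle perception."""
    if detection["label"] == "obstacle":
        w = MISSION_RELEVANCE[mission]
        w["obstacle"] = min(1.0, w["obstacle"] + 0.1)

detections = [{"label": "terrain",  "confidence": 0.9},
              {"label": "obstacle", "confidence": 0.7},
              {"label": "tool",     "confidence": 0.6}]

ranked = prioritize(detections, "repair")
print([d["label"] for d in ranked])              # ['tool', 'obstacle', 'terrain']

observe("repair", {"label": "obstacle", "confidence": 0.7})
print(MISSION_RELEVANCE["repair"]["obstacle"])   # 0.6 -- attention has shifted
```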

 

Processing

If you compare the human brain to a computer, the brain certainly has its advantages.

“The human brain has been designed to learn. It can deal with uncertainty, and humans tend not to have single or brittle points of failure,” said Lockheed Martin cognitive scientist Dr. William Casebeer. “Some scientists and philosophers insist that this means your brain can do things computers will never be able to do. But the bottom line is there are lots of tasks that come easy for our ‘biological computer’ that are difficult to get robots or more traditional computers to do.”

What differentiates a human brain from conventional computing is that the human brain can constantly adapt its knowledge representation. This allows it to improve its ability to process vast amounts of sensory data at every moment. Our brains don’t need to recreate a perfect model of the environment in order to make good choices. Based on our experiences, we learn sufficient models that enable us to make good decisions. However, robots today have brittle intelligence limited by their inability to precisely model the world.

Engineers and scientists are developing robots that learn and work collaboratively by turning to cloud computing, which increases the available processing power and lets robots share experiences with one another. Like supercomputers, robots need enormous processing capability to manage their movement within, and manipulation of, the environment, up to and including collaboration with humans and other robots.
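
As a rough illustration of what experience sharing might look like, the sketch below uses a simple in-memory store standing in for a cloud database; the class, record format, and methods are assumptions made for the example:

```python
# Sketch of fleet-level experience sharing. The in-memory store stands in for
# a cloud-hosted database, and the record format is invented for the example;
# the point is that one robot can draw on trials it never performed itself.

class SharedExperienceStore:
    def __init__(self):
        self._records = []       # in a real system: a cloud database

    def publish(self, robot_id, task, outcome):
        """A robot uploads the outcome of a task attempt."""
        self._records.append({"robot": robot_id, "task": task,
                              "outcome": outcome})

    def success_rate(self, task):
        """Fleet-wide success rate for a task, or None with no experience."""
        relevant = [r for r in self._records if r["task"] == task]
        if not relevant:
            return None
        return sum(r["outcome"] == "success" for r in relevant) / len(relevant)

store = SharedExperienceStore()
store.publish("robot_a", "grasp_cup", "success")
store.publish("robot_b", "grasp_cup", "failure")
store.publish("robot_b", "grasp_cup", "success")

# robot_c has never grasped a cup but can consult the fleet's experience:
print(store.success_rate("grasp_cup"))   # 0.666...
```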

 

Power

To work autonomously or semi-autonomously, robots need a reliable source of energy. Even today’s best batteries still lack the energy storage capabilities needed for many mobile robotics applications. A human can store 50 to 100 times more energy per unit weight in fat than a robot can in rechargeable batteries.
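
That ratio is easy to sanity-check with rough, well-known figures (the numbers below are approximations, not from the article): body fat stores about 9 kcal per gram, while lithium-ion cells of the era stored roughly 100 to 250 watt-hours per kilogram.

```python
# Back-of-envelope check of the 50-to-100x claim, using approximate textbook
# values: body fat stores about 9 kcal/g; lithium-ion cells of this era
# stored roughly 100-250 Wh/kg.

FAT_MJ_PER_KG = 9 * 4.184                   # 9 kcal/g  -> ~37.7 MJ/kg
LI_ION_LOW_MJ_PER_KG = 100 * 3600 / 1e6     # 100 Wh/kg -> 0.36 MJ/kg
LI_ION_HIGH_MJ_PER_KG = 250 * 3600 / 1e6    # 250 Wh/kg -> 0.90 MJ/kg

print(round(FAT_MJ_PER_KG / LI_ION_HIGH_MJ_PER_KG))   # ~42x vs. the best cells
print(round(FAT_MJ_PER_KG / LI_ION_LOW_MJ_PER_KG))    # ~105x vs. the worst
```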

In addition to storing energy, robots must also convert that energy into mechanical form. Most animals have natural elasticity in their muscles that allows them to store energy. They also develop efficient locomotion gaits that exploit this elasticity, letting them move through complex environments at high speed with low power consumption. We need to develop similarly sophisticated actuation methods and intelligent motions that reduce the power requirements of our robots.

“Most robots rely on motors for actuation. The problem is that motors have relatively poor power to weight ratio. This means that motors are relatively heavy compared to the power they produce. Human muscles, on the other hand, have comparatively very high power to weight,” said Lockheed Martin’s Dr. Richard Primerano, who researches robotics and mechanical design.

How can engineers improve a robot’s power density? The secret may be the development of new actuators. In studies, series elastic actuators provide compliance, making robots safer for humans to work alongside. Piezoelectric actuators produce very high forces and find application in micro-scale robots. And nanotube materials are being investigated as extremely high strength-to-weight actuators that operate much like human muscle.
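
The appeal of a series elastic actuator can be shown in a few lines: with a spring between motor and load, measuring the spring’s deflection yields the transmitted force directly via Hooke’s law, which enables compliant force control. The stiffness and controller gain below are hypothetical values chosen for the sketch:

```python
# Sketch of the idea behind a series elastic actuator: a spring sits between
# the motor and the load, so measuring the spring's deflection gives a direct
# force reading via Hooke's law, enabling compliant, human-safe force control.
# The stiffness and controller gain are hypothetical.

SPRING_K = 300.0    # torsional stiffness in N*m/rad (illustrative value)

def spring_torque(motor_angle, load_angle):
    """Torque transmitted to the load, read from spring deflection."""
    return SPRING_K * (motor_angle - load_angle)

def force_control_step(desired_torque, motor_angle, load_angle, gain=0.002):
    """One step of a proportional force controller: nudge the motor so the
    measured spring torque moves toward the desired torque."""
    error = desired_torque - spring_torque(motor_angle, load_angle)
    return motor_angle + gain * error          # next motor angle command

motor, load = 0.0, 0.0                         # load held still for simplicity
for _ in range(200):
    motor = force_control_step(5.0, motor, load)
print(round(spring_torque(motor, load), 2))    # converges to 5.0 N*m
```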

 

Planning

Robots need planning capabilities that let them understand their task and decide what actions to take when the unexpected happens.

“Most people don’t realize how difficult it is for a human to command a robot to complete a simple task,” explained Dave Kotfis, an engineer who is researching autonomous system software. “In order to be tasked like a human, the robot needs to understand language sufficiently to translate language into its goals and constraints.”

For example, imagine telling a robot to pick up a cup. This may require that the robot know how to recognize cups, determine whether a cup is within reach or whether it needs to move closer first, decide where on the cup to grasp, and lift the cup with a motion that does not spill any liquid inside. When can the robot assume it should do all of these things? When does a human need to clarify some of these constraints? What if there are two cups nearby?

In a perfect world, a person would have programmed a template for every scenario a robot could face. At some point, however, the robot will encounter a situation it has no template for (like two cups instead of one) and will not know what to do. If human-guided autonomy is part of the plan, the robot communicates with a human operator whenever it faces an unknown situation. Instead of abandoning the plan, the robot receives instructions, continues on task, and works autonomously until it faces another unknown situation.
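
A toy version of that human-guided loop might look like the sketch below; the scene format, “templates,” and operator channel are all invented for illustration:

```python
# A toy version of human-guided autonomy for the cup task. The scene format,
# "templates," and operator channel are invented for illustration; a real
# system would use perception, grasp planning, and an operator interface.

def ask_operator(question, options):
    """Stand-in for the human-in-the-loop communication channel."""
    print(f"ROBOT -> OPERATOR: {question} ({len(options)} options)")
    return options[0]            # pretend the operator chose the first option

def pick_up_cup(scene):
    cups = [obj for obj in scene if obj["type"] == "cup"]
    if not cups:
        return "no cup found; reporting back to operator"
    if len(cups) == 1:
        target = cups[0]         # the pre-programmed template covers this case
    else:
        # No template for multiple cups: ask the operator, then resume
        # autonomous operation instead of abandoning the plan.
        target = ask_operator("Which cup should I pick up?", cups)
    if target["distance_m"] > 0.8:
        return f"moving closer to the {target['color']} cup first"
    return f"grasping the {target['color']} cup and lifting it level"

scene = [{"type": "cup", "color": "red",  "distance_m": 0.5},
         {"type": "cup", "color": "blue", "distance_m": 1.2}]
print(pick_up_cup(scene))        # asks the operator, then grasps the red cup
```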

Images and article via Lockheed Martin
