Interactive Perception in Robotics: Road to Endless Possibilities
Robotics engineers are striving to deploy autonomous robots capable of interactive perception and mobility in dynamic and unstructured environments. Achieving autonomy and competence in these environments would open up a host of significant applications for robotics in fields ranging from planetary exploration to elder care, and from the disposal of improvised explosive devices to flexible manufacturing and construction in collaboration with human experts. Perception answers the basic question of what surrounds a robot, and situational awareness is crucial for ensuring dynamic mobility in the everyday world.
Interactive perception employs forceful interactions with the robot’s environment to uncover sufficient perceptual information for the swift execution of specific tasks. Restricting perception to a specific task simplifies the problem, as only task-relevant information has to be extracted from the sensor stream. Including forceful interactions in the perceptual process makes it possible to extract information from the environment that would otherwise be unobtainable, or obtainable only with significant domain knowledge.
For these applications, it is not possible to provide detailed models of the environment in advance. The ability to efficiently acquire and iteratively improve such models from perception is thus an essential prerequisite for autonomous operation in unstructured environments. Perceptual techniques, in particular in the domain of computer vision, have recently made significant progress.
Alongside the progressive development of perception capabilities on robots, from ultrasonic sensors, radar, and low-cost cameras to the more powerful LiDAR technologies of the likely near future, digital connection to and control over a robot’s various functionalities have advanced to the point that the automation of certain key operations, even in complex environments, can be effectively addressed.
Recent advances in robotic technologies provide novel tools for extracting distinctive, invariant features from images that can be used to reliably match different views of an object. An increasingly powerful set of tools is being developed to address complex vision problems. Researchers have also made significant progress in specific applications, such as semantic image retrieval and semantic video search.
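As a concrete illustration, the minimal sketch below shows how such invariant features could be detected and matched across two views of an object using the open-source OpenCV library. The image file names, the choice of the ORB detector, and the matcher settings are illustrative assumptions, not the specific tools used by the researchers discussed here.

# Sketch: detect invariant features in two views of an object and match them.
# "view_a.png" and "view_b.png" are placeholder file names.
import cv2

img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# ORB provides rotation-invariant keypoints with binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Brute-force Hamming matching with cross-checking for reliable correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two views")

The resulting correspondences are what allow a robot to recognize the same object from different viewpoints, the capability the paragraph above refers to.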
Advanced robotic systems that combine mobility, manipulation/sampling, machine perception, path planning, controls, and a command interface could be capable of meeting the challenges of in situ planetary exploration. Extending our manipulation and sampling capabilities beyond typical instrument placement and sample acquisition, such as those demonstrated with the Mars rovers, could make ever more ambitious robotics missions possible.
Objects that possess inherent degrees of freedom cannot be fully understood from visual information alone; their degrees of freedom have to be discovered through physical interaction. At the same time, advances in the realm of perception have so far had little impact in robotics. The scientists believe that there are two closely related reasons why advances in computer vision have not made a significant impact in robotics.
First and foremost, after initial and foundational work at the intersection of computer vision and robotics, both fields have progressed mostly independently. As a result, roboticists currently do not exploit the full potential of state-of-the-art computer vision techniques. But there is a second important reason for the lack of impact made by recent progress in computer vision. The scientists believe that adequate perceptual capabilities have to be developed in the context of a specific robotic task. The perceptual information extracted from the sensor stream can then be tailored to the task to provide the appropriate feedback and ensure robust task execution, particularly in the presence of significant uncertainty. In contrast, the majority of computer vision research is concerned with general perception skills.
The lack of impact such skills have had in robotics is a result of the difficulty in developing general, task-relevant perception skills. The importance of considering the perception problem in the context of a specific task has been demonstrated in a highly visible manner by Stanford’s robot Stanley during the 2005 DARPA Grand Challenge race. The vision techniques that helped Stanley win the race were effective because they were tailored to a specific problem.
To illustrate the promise of interactive perception as a perceptual paradigm for autonomous robotics, the robot engineers present early efforts towards the development of perceptual skills that extract kinematic models of the environment. Many objects in everyday environments possess inherent degrees of freedom that have to be actuated to perform their function. Such objects include door handles, doors, drawers, and a large number of tools, such as scissors and pliers. Knowledge of their kinematic models is necessary for the successful execution of various tasks.
Since it is impossible to provide an autonomous robot with a kinematic model for every object in the environment, the robot must be able to extract such models from its surroundings. The robot engineers have demonstrated preliminary work towards interactive perception primitives that extract kinematic models from the environment. In experiments, a robot interacts with a set of tools; the resulting sensor stream provides sufficient information to extract a model of their kinematics.
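To make the idea concrete, the following minimal sketch (an illustrative assumption, not the engineers' actual pipeline) classifies an articulated joint as prismatic or revolute from the image-plane trajectory of a feature point tracked while the robot pushes or pulls on an object: straight trajectories suggest a prismatic joint, such as a drawer, while circular-arc trajectories suggest a revolute joint, such as a door or the hinge of a pair of pliers.

# Sketch: infer a joint type from the tracked trajectory of one feature point.
# The tolerance value and the synthetic "drawer" example are assumptions.
import numpy as np

def line_residual(pts):
    # Fit a straight line (prismatic motion) via PCA; return RMS distance to it.
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    residuals = centered - np.outer(centered @ direction, direction)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

def circle_residual(pts):
    # Algebraic (Kasa) circle fit for revolute motion; return RMS radial error.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    dists = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
    return np.sqrt(((dists - radius) ** 2).mean())

def classify_joint(trajectory, tol=0.01):
    # Prefer the simpler (prismatic) model when it already explains the data.
    if line_residual(trajectory) < tol:
        return "prismatic"
    if circle_residual(trajectory) < tol:
        return "revolute"
    return "unknown"

# Example: a drawer-like (straight) trajectory with a little tracking noise.
t = np.linspace(0.0, 1.0, 30)
drawer = np.column_stack([t, 0.5 * t]) + np.random.normal(0, 0.002, (30, 2))
print(classify_joint(drawer))  # expected: "prismatic"

In a full system, such a classifier would run on many features grouped into rigid bodies, and the fitted line direction or circle center would supply the axis parameters of the recovered kinematic model.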
Space exploration, for instance, was entirely transformed when NASA carried humans to the Moon. NASA is now ready for its next transformation: the robot revolution. In real-life situations, robots are performing increasingly complex tasks in ever more challenging settings; medical surgery, automated driving, and bomb disposal are just a few examples of the important work robots do. In space, robots deployed on other planets could construct and maintain extraterrestrial assets while autonomously exploring difficult terrain, and even clear out space debris. The future possibilities of a technologically advanced life with autonomous robots are endless and limited only by our imagination.