The Moross Laboratory for Vision Research and Robotics
Director: Shimon Ullman
Participating Scientists: Ronen Basri, Tamar Flash, Michal Irani, Shimon Ullman
For human beings, two key activities, vision and motor control, come naturally and effortlessly. Empirical research and modeling efforts have revealed, however, that these activities require sophisticated and extensive computations that are beyond the reach of present computer systems. The goal of the work in the Moross Laboratory is to understand the computations underlying vision and motor control, leading both to a better understanding of the human brain and to the creation of a new generation of intelligent and useful computer systems.
In the study of vision, our goals are to understand how our own visual system operates and how to construct artificial systems with visual capabilities. One particular project is the study of visual object recognition: how a computer-based vision system might recognize objects in its environment, such as different people and common objects. The task is highly challenging because objects in the same general class can have many different shapes, and each object can have many different images, depending on such factors as the viewing direction, illumination, or partial occlusion by other objects. Our studies of recognition also include the task of image segmentation, where the goal is to delineate the boundary between an object and its background. Our approach to this problem combines low-level image properties, such as the continuity of color and texture, with higher-level knowledge about the expected shape of specific objects.
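As a toy illustration only (not the Laboratory's actual algorithm), the idea of combining a low-level cue with top-down shape knowledge can be sketched as a weighted per-pixel score; the color value, weights, and threshold here are all assumptions chosen for the example.

```python
import numpy as np

def segment(image, obj_color, shape_prior, w_color=0.5, w_shape=0.5):
    """Label pixels as object (1) or background (0).

    image:       H x W array of intensities in [0, 1] (the low-level cue)
    obj_color:   expected object intensity
    shape_prior: H x W array of prior object probabilities (top-down knowledge)
    """
    color_score = 1.0 - np.abs(image - obj_color)   # high where color matches
    combined = w_color * color_score + w_shape * shape_prior
    return (combined > 0.5).astype(int)

# Hypothetical 4x4 test image: a bright square on a dark background,
# with a shape prior that expects the object in the center.
img = np.zeros((4, 4)); img[1:3, 1:3] = 0.9
prior = np.zeros((4, 4)); prior[1:3, 1:3] = 1.0
mask = segment(img, obj_color=0.9, shape_prior=prior)
```

Neither cue alone suffices here: a dark background pixel with a matching-ish color would be rejected by the prior, and the prior alone would ignore the image entirely; only their combination delineates the object.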
Another major research direction is video analysis and its applications. Video data provides a visual window into the space-time world: it shows how dynamic scenes continuously evolve over extended periods of time. By moving away from the standard sequential representation of image frames and treating video data as an entire space-time volume, one can perform tasks that are very difficult (and often impossible) when only "slices" of this information (individual images) are used. In particular, this space-time approach has been used successfully in our Lab to align information across multiple video sequences in both space and time. By integrating visual information from multiple such sequences, the physical space-time bounds of the underlying sensors (spatial and temporal resolution, spectral range, depth of focus, field of view, etc.) can be exceeded, generating a new "super-sensor". The space-time approach has also been applied successfully to analyze, recognize, manipulate, and synthesize dynamic events and actions in video data.
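A minimal sketch of the space-time-volume viewpoint, under assumed synthetic data (a single bright dot moving one pixel per frame): stacking the frames into a 3-D array makes motion directly visible as structure in an x-t slice, which no single frame can show.

```python
import numpy as np

# Hypothetical "video": a bright dot on row 2, moving one pixel right per frame.
frames = []
for t in range(5):
    f = np.zeros((5, 8))
    f[2, t] = 1.0
    frames.append(f)
volume = np.stack(frames, axis=0)        # shape (T, H, W): the space-time volume

# An x-t slice through the dot's row shows its motion as a diagonal streak;
# any individual frame (a t = const slice) contains no motion information at all.
xt_slice = volume[:, 2, :]               # shape (T, W)
trajectory = np.argmax(xt_slice, axis=1) # x position of the dot at each time t
```

Operations such as sequence-to-sequence alignment work on this whole volume at once rather than frame by frame, which is what lets them recover correspondences in both space and time.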
In the area of robotics and motor control, our research focuses on the control of motor behavior in biological systems and on robotics. To control goal-directed motor behavior, the brain must carry out extremely complicated but as yet poorly understood computations. We study these problems by a combination of experimental and computational efforts.
The experimental aspects of the motor control research carried out in the laboratory include measurements of two- and three-dimensional human arm movements during the performance of different motor tasks such as reaching, pointing, grasping, and drawing. On the basis of the data we collect, we develop mathematical models of biological motor control and test their validity by comparing the behavior these models predict with experimentally observed motor behavior. In collaboration with neurologists from several hospitals, we also investigate and characterize the motor impairments manifested in different movement disorders (e.g., Parkinson's disease and hemispatial neglect). We also collaborate with neurophysiologists in studies that record both movement data and neural activity from multiple cortical cells while monkeys perform scribbling and drawing movements, in order to study the decomposition of movement into basic motion primitives.
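One classic example of such a testable model is the minimum-jerk model (Flash and Hogan, 1985), which predicts that point-to-point reaches follow straight paths with bell-shaped speed profiles. The sketch below computes the model's predicted trajectory for an assumed 20 cm, 0.5 s reach; the comparison with recorded movements would then be done on real data.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Minimum-jerk trajectory: the smoothest point-to-point movement,
    obtained by minimizing the integrated squared jerk over duration T."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # normalized displacement
    return t, x0 + (xf - x0) * s

# Hypothetical reach: 20 cm in 0.5 s.
t, x = minimum_jerk(0.0, 0.2, T=0.5)
speed = np.gradient(x, t)   # bell-shaped profile, peaking at mid-movement
```

Comparing such predicted speed profiles against measured ones is exactly the kind of model-validation step described above.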
In parallel to these biological motor control studies, we conduct research in robotics, developing various approaches and computer algorithms for the control of robotic arms. Our recent focus in motor control and robotics has been on motor learning and consolidation in humans, the planning and control of three-dimensional arm movements, algorithms for the resolution of kinematic redundancies, and on-line trajectory modification in healthy subjects and in neglect patients.
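To illustrate what resolving a kinematic redundancy means, here is a sketch of one standard scheme (the Jacobian pseudoinverse; not necessarily the method used in the Laboratory). A planar arm with three joints has more degrees of freedom than its fingertip position, so infinitely many joint velocities realize the same hand velocity; the pseudoinverse selects the minimum-norm one. The angles and link lengths are assumed example values.

```python
import numpy as np

def jacobian(q, lengths):
    """2 x n Jacobian of planar fingertip position w.r.t. joint angles,
    for fingertip = sum_j lengths[j] * (cos, sin) of the cumulative angle."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        for j in range(i, len(q)):
            a = np.sum(q[:j + 1])               # cumulative angle of link j
            J[0, i] += -lengths[j] * np.sin(a)
            J[1, i] += lengths[j] * np.cos(a)
    return J

q = np.array([0.3, 0.4, 0.2])       # joint angles (rad), example values
L = np.array([0.3, 0.25, 0.15])     # link lengths (m), example values
J = jacobian(q, L)                  # 2 x 3: the arm is redundant
x_dot = np.array([0.1, 0.0])        # desired hand velocity (m/s)
q_dot = np.linalg.pinv(J) @ x_dot   # minimum-norm joint velocities
```

Any vector in the null space of J could be added to q_dot without changing the hand velocity; redundancy-resolution algorithms differ precisely in how they choose within that null space.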
Other current efforts include telerobotic systems for robot grasping and object manipulation based on virtual-reality technology; a major collaboration with neurobiologists on the control of movement and grasping behavior in the octopus, aimed at developing a biologically inspired, hyper-redundant flexible robotic arm; and a virtual-reality-based system for characterizing the learning and control of tasks involving grasping and contact with the external environment.
More information on the Computer Vision Laboratory
More information on the Motor Control and Robotics Laboratory
