All registered team members of RoboCup competitions are automatically admitted to the Symposium and do not need to register for it separately. Registration as a Symposium Only participant is also possible. Invitation letters for Symposium Only participants are issued only to those with accepted Symposium publications and to invited participants.
The RoboCup Symposium is a primary venue for the presentation and discussion of scientific contributions to the research areas related to all RoboCup divisions (RoboCup Soccer, RoboCup Rescue, RoboCup@Home, RoboCup@Work, and RoboCupJunior). Its scope includes, but is not restricted to, research and educational activities in robotics and artificial intelligence. Owing to its interdisciplinary nature, the Symposium offers a unique venue for exploring both theory and practice across a wide spectrum of research fields. The experimental, interactive, and benchmark character of the RoboCup initiative creates an opportunity to disseminate novel ideas and promising technologies, which are rapidly adopted and field-tested by a large, and still growing, community.
The Symposium is co-located with the worldwide competition each year, and its proceedings are published annually in Springer-Verlag’s Lecture Notes in Artificial Intelligence series.
Yoshua Bengio (PhD in computer science, McGill University, 1991; post-docs at MIT and Bell Labs; professor of computer science at Université de Montréal since 1993) has authored three books and over 300 publications (h-index over 100, over 100,000 citations), mostly in deep learning. He holds the Canada Research Chair in Statistical Learning Algorithms, is an Officer of the Order of Canada, and received the Marie-Victorin Quebec Prize in 2017. He is a CIFAR Senior Fellow, co-directs CIFAR’s Learning in Machines and Brains program, and is scientific director of the Montreal Institute for Learning Algorithms (MILA), currently the largest academic research group on deep learning. He serves on the NIPS foundation board (previously program chair and general chair) and co-created the ICLR conference, which specializes in deep learning. A pioneer of deep learning, he aims to uncover the principles giving rise to intelligence through learning, and to contribute to the development of AI for the benefit of all.
Jeannette Bohg is Assistant Professor for Robotics in the Department of Computer Science at Stanford University. She leads the Interactive Perception and Robot Learning lab, which seeks to understand the underlying principles of robust sensorimotor coordination by implementing them on robots. Her research is at the intersection of Robotics, Computer Vision, and Machine Learning, applied to the problem of autonomous manipulation and grasping. Jeannette is also a guest researcher at the Autonomous Motion Department of the Max Planck Institute for Intelligent Systems in Tübingen, Germany, where she was a research group leader until fall 2017. In 2012, she received her PhD from the Royal Institute of Technology (KTH) in Stockholm, Sweden. She holds a Diploma in Computer Science from the Technical University of Dresden, Germany, and an M.Sc. in Art and Technology from Chalmers in Gothenburg, Sweden.
Abstract: Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment. First, interaction creates a rich sensory signal that would otherwise not be present. Second, knowledge of the sensory dynamics upon interaction allows prediction and decision-making over a longer time horizon. To exploit these benefits of Interactive Perception for capable robotic manipulation, a robot requires both methods for processing rich sensory feedback and feedforward predictors of the effect of physical interaction.
In the first part of this talk, I will present a method for motion-based segmentation of an unknown number of simultaneously moving objects. The underlying model estimates dense, per-pixel scene flow, which is then followed by clustering in motion trajectory space. We show how this outperforms the state of the art in scene flow estimation and multi-object segmentation. In the second part, I will present a method for predicting the effect of physical interaction with objects in the environment. The underlying model combines an analytical physics model with a learned perception component. In extensive experiments, we show how this hybrid model outperforms purely learned models in terms of generalisation.
In both projects, we found that introducing structure greatly reduces the amount of training data required, eases learning, and enables extrapolation. Based on these findings, I will discuss the role of structure in learning for robot manipulation.
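The hybrid analytical-plus-learned idea from the second part of the abstract can be illustrated with a toy sketch (my own construction, not the speaker's actual model; all quantities and names here are invented for illustration): an analytical physics predictor with a mismatched friction parameter, corrected by a least-squares model fit to its residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict the stopping distance of a block sliding with
# initial speed v. The analytical model uses a nominal friction
# coefficient that deviates from the (unknown) true one.
MU_NOMINAL, MU_TRUE, G = 0.30, 0.38, 9.81

def physics_model(v):
    # Analytical prediction: d = v^2 / (2 * mu * g), with nominal mu.
    return v**2 / (2.0 * MU_NOMINAL * G)

# Noisy "observations" from the true system.
v_train = rng.uniform(0.5, 3.0, size=200)
d_train = v_train**2 / (2.0 * MU_TRUE * G) + rng.normal(0.0, 0.002, size=200)

# Learn a linear correction on top of the physics prediction
# (least squares on the residual, using the physics output as feature).
phi = physics_model(v_train)
A = np.stack([phi, np.ones_like(phi)], axis=1)
w, _, _, _ = np.linalg.lstsq(A, d_train - phi, rcond=None)

def hybrid_model(v):
    # Physics prediction plus learned residual correction.
    p = physics_model(v)
    return p + (w[0] * p + w[1])

# The hybrid prediction lands much closer to the true stopping
# distance than the raw, mis-parameterised physics model alone.
v_test = 2.0
d_true = v_test**2 / (2.0 * MU_TRUE * G)
```

Because the learned part only has to model the physics model's error rather than the full dynamics, it needs far less data, echoing the abstract's point that structure reduces training data and aids generalisation.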
Torsten is a Full Professor of Computer Science, Director of the Intelligent Process Control and Robotics Laboratory (IPR), and Co-Chair of the Institute for Anthropomatics and Robotics (IAR) at Karlsruhe Institute of Technology (KIT). He is also a Visiting Scientist at Stanford University and the founder of Reflexxes. From 2014 to 2017, he was a Staff Roboticist and the Head of the Robotics Software Division at Google. His research interests are real-time motion planning, transfer learning, and deterministic distributed real-time systems enabling safe human-robot collaboration. He received the Early Career Award and the Distinguished Service Award of the IEEE Robotics and Automation Society.
Abstract: Embedding multiple sensors - force/torque, vision, and distance - in the feedback loops of motion controllers has enabled new robot applications, for instance safe human-robot interaction and many assembly tasks that could not be automated before. Just as important as these real-time control features is the ability to plan robot motions deterministically and in real time. To enable spontaneous changes from sensor-guided motion control (e.g., force/torque or visual servo control) to trajectory-following motion control, an algorithmic framework is explained that lets us compute robot motions deterministically in less than one millisecond. The resulting class of online trajectory generation algorithms serves as an intermediate layer between low-level motion control and high-level sensor-based motion planning. Online motion generation from arbitrary states is an essential feature for autonomous hybrid switched motion control systems. Building upon this framework, and with the goal of significantly reducing the resources needed to program industrial and service robots, reinforcement learning offers an as-yet-untapped potential that will be introduced as well. Samples and use cases - including manipulation and human-robot interaction tasks - will accompany the talk to provide comprehensible insight into these interesting and relevant fields of robotics.
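The core idea of online trajectory generation from arbitrary states can be sketched in a few lines. The following is a deliberately simplified, single-degree-of-freedom, velocity- and acceleration-limited example of my own (not the Reflexxes algorithm itself): each control cycle, it computes the next state toward a target from whatever state the robot happens to be in, which is what allows instantaneous switching from sensor-guided to trajectory-following control.

```python
import math

def otg_step(pos, vel, target, v_max, a_max, dt):
    """One control cycle of a toy 1-DOF online trajectory generator.

    From an arbitrary current state (pos, vel), compute the next state
    moving toward `target` while respecting the velocity limit v_max
    and acceleration limit a_max.
    """
    d = target - pos
    direction = 1.0 if d >= 0 else -1.0
    # Highest speed from which we can still brake to rest within |d|.
    v_stop = math.sqrt(2.0 * a_max * abs(d))
    v_des = direction * min(v_max, v_stop)
    # Move the commanded velocity toward v_des, limited by a_max.
    dv = max(-a_max * dt, min(a_max * dt, v_des - vel))
    vel += dv
    pos += vel * dt
    return pos, vel

# Run the generator as it would run inside a 1 kHz control loop:
# accelerate, cruise at v_max, then brake and settle on the target.
pos, vel = 0.0, 0.0
for _ in range(5000):  # 5 seconds at dt = 1 ms
    pos, vel = otg_step(pos, vel, target=1.0, v_max=0.5, a_max=2.0, dt=0.001)
```

Because `otg_step` takes the current state as input every cycle, the target (or the active controller feeding it) can change at any millisecond boundary without replanning offline, which is the essential property the abstract attributes to this class of algorithms.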