RoboCup 2006 Publications

From RoboCup Wiki
Author Title Year Journal/Proceedings Reftype DOI/URL
Billington, D., Estivill-Castro, V., Hexel, R. & Rock, A.

Using Temporal Consistency to Improve Robot Localisation

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 232-244 inproceedings


Abstract: Symbolic reasoning has rarely been applied to filter sensor information; and for data fusion, probabilistic models are favoured over reasoning with logic models. However, we show that in the fast dynamic environment of robotic soccer, Plausible Logic can be used effectively to deploy non-monotonic reasoning. We show this is also possible within the frame rate of vision in the (not so powerful) hardware of the AIBO ERS-7 used in the legged league. The non-monotonic reasoning with Plausible Logic not only has algorithmic completion guarantees but we show that it effectively filters the visual input for improved robot localisation. Moreover, we show that reasoning using Plausible Logic is not restricted to the traditional value domain of discerning about objects in one frame. We present a model to draw conclusions over consecutive frames and illustrate that adding temporal rules can further enhance the reliability of localisation.
Bruce, J. & Veloso, M.

Real-Time Randomized Motion Planning for Multiple Domains

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 532-539 inproceedings


Abstract: Motion planning is a critical component for autonomous mobile robots, requiring a solution which is fast enough to serve as a building block, yet easy enough to extend that it can be adapted to new platforms without starting from scratch. This paper presents an algorithm based on randomized planning approaches, which uses a minimal interface between the platform and planner to aid in implementation reuse. Two domains to which the planner has been applied are described. The first is a 2D domain for small-size robot navigation, where the planner has been used successfully in various versions for five years. The second is a true 3D planner for autonomous fixed-wing aircraft with kinematic constraints. Despite large differences between these two platforms, the core planning code is shared across domains, and this flexibility comes with only a small efficiency penalty.
This work was supported by United States Department of the Interior under Grant No. NBCH-1040007, and by Rockwell Scientific Co., LLC under subcontract No. B4U528968 and prime contract No. W911W6-04-C-0058 with the US Army. The views and conclusions contained herein are those of the authors, and do not necessarily reflect the position or policy of the sponsoring institutions, and no official endorsement should be inferred.
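The randomized-planning approach the abstract describes belongs to the RRT family. As a rough illustration of that idea only (a minimal, obstacle-free 2D sketch; the function name, workspace bounds, and all parameters are our own assumptions, not the paper's interface):

```python
import math
import random

def rrt_plan(start, goal, step=0.5, goal_bias=0.2, max_iters=2000, seed=0):
    """Minimal 2D RRT sketch: grow a tree from start toward goal.

    Returns a path (list of points) once a node lands within one step
    of the goal, or None if the iteration budget runs out.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # With some probability sample the goal itself (goal bias),
        # otherwise sample uniformly from the workspace.
        target = goal if rng.random() < goal_bias else (
            rng.uniform(-10, 10), rng.uniform(-10, 10))
        # Find the nearest existing tree node ...
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], target))
        near = nodes[i_near]
        # ... and extend one fixed step toward the sample.
        d = math.dist(near, target)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (near[0] + t * (target[0] - near[0]),
               near[1] + t * (target[1] - near[1]))
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) <= step:
            # Walk back up the tree to reconstruct the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

The minimal planner/platform interface the paper advocates would correspond to keeping only the sampling, distance, and extension steps platform-specific.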
Brunhorn, J., Tenchio, O. & Rojas, R.

A Novel Omnidirectional Wheel Based on Reuleaux-Triangles

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 516-522 inproceedings


Abstract: This paper discusses the mechanical design and simulation of a novel omnidirectional wheel based on Reuleaux-triangles. The main feature of our omniwheel is that the point of contact of the wheel with the floor is always kept at the same distance from the center of rotation by mechanical means. This produces smooth translational movement on a flat surface, even when the profile of the complete wheel assembly has gaps between the passive rollers. The grip of the wheel with the floor is also improved. The design described in this paper is ideal for hard surfaces, and can be scaled to fit small or large vehicles. This is the first design for an omnidirectional wheel without a circular profile, yet capable of rolling smoothly on a hard surface.
Bustamante, C., Garrido, L. & Soto, R.

Fuzzy Naive Bayesian Classification in RoboSoccer 3D: A Hybrid Approach to Decision Making

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 507-515 inproceedings


Abstract: We propose the use of a Fuzzy Naive Bayes classifier with a MAP rule as a decision making module for the RoboCup Soccer Simulation 3D domain. The Naive Bayes classifier has proven to be effective in a wide range of applications, in spite of the fact that the conditional independence assumption is not met in most cases. In the Naive Bayes classifier, each variable has a finite number of values, but in the RoboCup domain, we must deal with continuous variables. To overcome this issue, we use a fuzzy extension known as the Fuzzy Naive Bayes classifier that generalizes the meaning of an attribute so it does not have exactly one value, but a set of values to a certain degree of truth. We implemented this classifier in a 3D team so an agent could obtain the probabilities of success of the possible courses of action given a situation in the field and decide the best action to execute. Specifically, we use the pass evaluation skill as a test bed. The classifier is trained in a scenario where there is one passer, one teammate and one opponent that tries to intercept the ball. We show the performance of the classifier in a test scenario with four opponents and three teammates. After a brief introduction, we present the specific characteristics of our training and test scenarios. Finally, results of our experiments are shown.
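As a sketch of the classifier family the abstract names — a Naive Bayes classifier in which each crisp likelihood term is replaced by a membership-weighted sum over fuzzy sets, decided by a MAP rule — the following is our own illustration with invented class, attribute, and fuzzy-set names, not the authors' implementation:

```python
def fuzzy_naive_bayes(priors, likelihoods, memberships):
    """Fuzzy Naive Bayes with a MAP decision rule (illustrative sketch).

    priors:       {class: P(class)}
    likelihoods:  {class: {attr: {fuzzy_set: P(fuzzy_set | class)}}}
    memberships:  {attr: {fuzzy_set: degree}} for the current observation;
                  a continuous attribute belongs to several fuzzy sets to
                  some degree instead of taking exactly one crisp value.
    Returns the MAP class and the normalised posterior.
    """
    scores = {}
    for c, prior in priors.items():
        score = prior
        for attr, mu in memberships.items():
            # Membership-weighted sum replaces the crisp P(value | class).
            score *= sum(mu[s] * likelihoods[c][attr][s] for s in mu)
        scores[c] = score
    z = sum(scores.values())
    posterior = {c: s / z for c, s in scores.items()}
    return max(posterior, key=posterior.get), posterior
```

For pass evaluation, the classes might be "success"/"fail" and an attribute such as opponent distance might carry fuzzy sets like "near"/"far" (all hypothetical names).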
Carpin, S., Lewis, M., Wang, J., Balakirsky, S. & Scrapper, C.

Bridging the Gap Between Simulation and Reality in Urban Search and Rescue

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 1-12 inproceedings


Abstract: Research efforts in urban search and rescue have grown tremendously in recent years. In this paper we present a simulation system that aims to be the meeting point between the communities of researchers involved in robotics and multi-agent systems. The proposed system allows the realistic modeling of robots, sensors and actuators, as well as complex unstructured dynamic environments. Multiple heterogeneous agents can be concurrently spawned inside the environment. We explain how different sensors and actuators have been added to the system and show how a seamless migration of code between real and simulated robots is possible. Quantitative results supporting the validation of simulation accuracy are also presented.
Chonnaparamutt, W. & Birk, A.

A New Mechatronic Component for Adjusting the Footprint of Tracked Rescue Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 450-457 inproceedings


Abstract: There is no ideal footprint for a rescue robot. In some situations, for example when climbing up a rubble pile or stairs, the footprint has to be large to maximize traction and to prevent tilting over. In other situations, for example when negotiating narrow passages or doorways, the footprint has to be small to prevent getting stuck. The common approach is to use flippers, i.e., additional support tracks that can change their posture relative to the main locomotion tracks. Here a novel mechatronic design for flippers is presented that overcomes a significant drawback of state-of-the-art approaches, namely the large forces in the joint between the main locomotion tracks and the flippers. Instead of directly driving this joint to change the posture, a link mechanism driven by a ballscrew is used. In this paper, a formal analysis of the new mechanism is presented, including a comparison to the state of the art. Furthermore, a concrete implementation and results from practical experiments that support the formal analysis are presented.
Colombo, A., Matteucci, M. & Sorrenti, D.G.

On the Calibration of Non Single Viewpoint Catadioptric Sensors

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 194-205 inproceedings


Abstract: A method is proposed for calibrating Catadioptric Omni-directional Vision Systems. This method, similarly to classic camera calibration, makes use of a set of fiducial points to find the parameters of the geometric image formation model. The method makes no particular assumption regarding the shape of the mirror or its position with respect to the camera. Given the camera intrinsic parameters and the mirror profile, the mirror pose is computed using the projection of the mirror border and, eventually, the extrinsic parameters are computed by minimizing the distance between fiducial points and their back-projected images.
Dawei, J. & Shiyuan, W.

Using the Simulated Annealing Algorithm for Multiagent Decision Making

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 110-121 inproceedings


Abstract: Coordination, as a key issue in fully cooperative multiagent systems, raises a number of challenges. A crucial one among them is to efficiently find the optimal joint action in an exponential joint action space. Variable elimination offers a viable solution to this problem. Using this algorithm, each agent can choose an optimal individual action resulting in optimal behavior for the team as a whole. However, the worst-case time complexity of this algorithm grows exponentially with the number of agents. Moreover, variable elimination can only report an answer when the whole algorithm terminates. Therefore, it is unsuitable in real-time systems. In this paper, we propose an anytime algorithm, called the simulated annealing algorithm, as an approximation alternative to variable elimination. We empirically show that our algorithm can compute nearly optimal results with a small fraction of the time that variable elimination takes to find the solution to the same coordination problem.
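The paper's algorithm is not reproduced here; the sketch below merely illustrates anytime simulated annealing over a joint action space, with an arbitrary payoff, a linear cooling schedule, and parameter values of our own choosing:

```python
import math
import random

def anneal_joint_action(payoff, n_agents, n_actions, iters=3000, t0=2.0, seed=0):
    """Anytime simulated-annealing search over the joint action space.

    Instead of examining all n_actions ** n_agents joint actions,
    repeatedly mutate one agent's action and accept worse joint actions
    with a probability that shrinks as the temperature cools.  The best
    joint action found so far can be reported at any time.
    """
    rng = random.Random(seed)
    joint = [rng.randrange(n_actions) for _ in range(n_agents)]
    val = payoff(joint)
    best, best_val = list(joint), val
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9   # linear cooling schedule
        cand = list(joint)
        cand[rng.randrange(n_agents)] = rng.randrange(n_actions)  # mutate one agent
        cand_val = payoff(cand)
        # Metropolis acceptance: always take improvements,
        # sometimes accept losses while the temperature is high.
        if cand_val >= val or rng.random() < math.exp((cand_val - val) / temp):
            joint, val = cand, cand_val
            if val > best_val:
                best, best_val = list(joint), val
    return best, best_val
```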
Delchev, I. & Birk, A.

Vectorization of Grid Maps by an Evolutionary Algorithm

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 458-465 inproceedings


Abstract: Mapping is a fundamental topic for robotics in general and in particular for rescue robotics, where the provision of information about the location of victims is a core task. Occupancy grids are the standard way of generating and representing maps, i.e., in the form of raster data. But vector representations are highly desirable for many applications, especially due to their compactness and the possibility of using very efficient computational-geometry algorithms. Here a novel method for vectorization is presented that is intended to work particularly well with maps. It is based on an evolutionary algorithm that generates vector code for what is, in effect, a drawing program. The output of the evolving vector code is compared to the input grid map via a special similarity function serving as fitness. Experiments are presented that indicate that the approach is indeed a successful method for extracting vector data from grid maps.
Enderle, S.

The Robotics and Mechatronics Kit “qfix”

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 134-145 inproceedings


Abstract: Robot building projects are increasingly used in schools and universities to raise the interest of students in technical subjects. They can especially be used to teach the three mechatronics areas at the same time: mechanics, electronics, and software. However, it is hard to find reusable, robust, modular and cost-effective robot development kits on the market. Here, we present qfix, a modular construction kit for edutainment robotics and mechatronics experiments which fulfills all of the above requirements and receives strong interest from schools and universities. The outstanding advantages of this kit family are the solid aluminium elements, the modular controller boards, and the programming tools, which range from an easy-to-use graphical programming environment to a powerful C++ library for the GNU compiler collection.
Estivill-Castro, V. & Seymon, S.

Mobile Robots for an E-Mail Interface for People Who Are Blind

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 338-346 inproceedings


Abstract: The availability of inexpensive robotic hardware has brought to realization the dream of having autonomous mobile robots around us. As such, the research community has recently manifested more interest in assistive robotic technology (see the proceedings of the last two IEEE RO-MAN conferences, the emergence of the RoboCup@Home challenge at RoboCup, and the first annual Human-Robot Interaction Conference jointly sponsored by IEEE and ACM). Robots can restore to the blind what was lost when textual interfaces were replaced by GUIs. This paper describes the design, implementation and testing of a first prototype of a multi-modal Human-Robot Interface for people with vision impairment. The robot used is the commercially available four-legged Sony AIBO.
Fidelman, P., Coffman, T. & Miikkulainen, R.

Detecting Motion in the Environment with a Moving Quadruped Robot

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 219-231 inproceedings


Abstract: For a robot in a dynamic environment, the ability to detect motion is crucial. Motion often indicates areas of the robot’s surroundings that are changing, contain another agent, or are otherwise worthy of attention. Although legs are arguably the most versatile means of locomotion for a robot, and thus the best suited to an unknown or changing domain, existing methods for motion detection either require that the robot have wheels or that its walking be extremely slow and tightly constrained. This paper presents a method for detecting motion from a quadruped robot walking at its top speed. The method is based on a neural network that learns to predict optic flow caused by its walk, thus allowing environment motion to be detected as anomalies in the flow. The system is demonstrated to be capable of detecting motion in the robot’s surroundings, forming a foundation for intelligently directed behavior in complex, changing environments.
Fidelman, P. & Stone, P.

The Chin Pinch: A Case Study in Skill Learning on a Legged Robot

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 59-71 inproceedings


Abstract: When developing skills on a physical robot, it is appealing to turn to modern machine learning methods in order to automate the process. However, when no accurate simulator exists for the type of motion in question, all learning must occur on the physical robot itself. In such a case, there is a high premium on quick, efficient learning (specifically, learning with low sample complexity). Recent results in learning locomotion have demonstrated the feasibility of learning fast walks directly on quadrupedal robots. This paper demonstrates that it is also possible to learn a higher-level skill requiring more fine motor coordination, again with all learning occurring directly on the robot. In particular, the paper presents a learned ball-grasping skill on a commercially available Sony Aibo robot, with no human intervention other than battery changes. The learned skill significantly outperforms our best hand-tuned solution. As the learned grasping skill relies on a learned walk, we characterize our learning implementation within the layered learning formalism. To our knowledge, the two learned layers represent the first use of layered learning on a physical robot.
Geipel, M. & Beetz, M.

Learning to Shoot Goals: Analysing the Learning Process and the Resulting Policies

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 371-378 inproceedings


Abstract: Reinforcement learning is a very general unsupervised learning mechanism. Due to its generality, reinforcement learning does not scale very well for tasks that involve inferring subtasks, in particular when the subtasks are dynamically changing and the environment is adversarial. One of the most challenging reinforcement learning tasks so far has been the 3 to 2 keepaway task in the RoboCup simulation league. In this paper we apply reinforcement learning to an even more challenging task: attacking the opponent's goal. The main contribution of this paper is the empirical analysis of a portfolio of mechanisms for scaling reinforcement learning towards learning attack policies in simulated robot soccer.
Goldman, R., Azhar, M.Q. & Sklar, E.

From RoboLab to Aibo: A Behavior-Based Interface for Educational Robotics

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 122-133 inproceedings


Abstract: This paper describes a framework designed to broaden the entry-level for the use of sophisticated robots as educational platforms. The goal is to create a low-entry, high-ceiling programming environment that, through a graphical behavior-based interface, allows inexperienced users to author control programs for the Sony Aibo four-legged robot. To accomplish this end, we have extended the popular RoboLab application, which is a simple, icon-based programming environment originally designed to interface with the LEGO Mindstorms robot. Our extension is in the form of a set of “behavior icons” that users select within RoboLab, which are then converted to low-level commands that can be executed directly on the Aibo. Here, we present the underlying technical aspects of our system and demonstrate its feasibility for use in a classroom.
Gottfried, B. & Witte, J.

Representing Spatial Activities by Spatially Contextualised Motion Patterns

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 330-337 inproceedings


Abstract: The interpretation of spatial activities plays a fundamental role in several areas, ranging from the analysis of animal behaviour to location-based assistance applications. One important aspect when interpreting spatial activities consists in relating them to their environment. A problem arises insofar as propositional representations lack an appropriate attention mechanism for comprehending the spatiotemporal development of spatial activities. Therefore, we propose a diagrammatic formalism which allows spatial activities to be classified according to their spatial context and provides a link to propositional formalisms. We show that RoboCup soccer is particularly suitable for investigating these issues; in fact, the spatial activity of the ball alone teaches us a considerable amount about a game.
Göhring, D. & Hoffmann, J.

Sensor Modeling Using Visual Object Relation in Multi Robot Object Tracking

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 279-286 inproceedings


Abstract: In this paper we present a novel approach to estimating the position of objects tracked by a team of mobile robots. Modeling of moving objects is commonly done in a robo-centric coordinate frame because this information is sufficient for most low level robot control and it is independent of the quality of the current robot localization. For multiple robots to cooperate and share information, though, they need to agree on a global, allocentric frame of reference. When transforming the egocentric object model into a global one, it inherits the localization error of the robot in addition to the error associated with the egocentric model.
We propose using the relation of objects detected in camera images to other objects in the same camera image as a basis for estimating the position of the object in a global coordinate system. The spatial relation of objects with respect to stationary objects (e.g., landmarks) offers several advantages: a) Errors in feature detection are correlated and not assumed independent. Furthermore, the error of relative positions of objects within a single camera frame is comparatively small. b) The information is independent of robot localization and odometry. c) As a consequence of the above, it provides a highly efficient method for communicating information about a tracked object, and communication can be asynchronous.
We present experimental evidence that shows how two robots are able to infer the position of an object within a global frame of reference, even though they are not localized themselves.
Hebbel, M., Nistico, W. & Fisseler, D.

Learning in a High Dimensional Space: Fast Omnidirectional Quadrupedal Locomotion

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 314-321 inproceedings


Abstract: This paper presents an efficient way to learn fast omnidirectional quadrupedal walking gaits. We show that the common approaches to control the legs can be further improved by allowing more degrees of freedom in the trajectory generation for the legs. To achieve good omnidirectional movements, we suggest using different parameters for different walk requests and interpolating between them. The approach has been implemented for the Sony Aibo and used by the GermanTeam in the Four-Legged-League in 2005. A standard learning strategy has been adopted, so that the optimization process of a parameter set can be done within one hour, without human intervention. The resulting walk achieved remarkable speeds, both in pure forward walking and in omnidirectional movements.
Heinemann, P., Haase, J. & Zell, A.

A Novel Approach to Efficient Monte-Carlo Localization in RoboCup

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 322-329 inproceedings


Abstract: Recently, efficient self-localization methods have been developed, among which probabilistic Monte-Carlo localization (MCL) is one of the most popular. However, standard MCL algorithms need at least 100 samples to compute an acceptable position estimation. This paper presents a novel approach to MCL that uses an adaptive number of samples that drops down to a single sample if the pose estimation is sufficiently accurate. Experiments show that the method remains in this efficient single sample tracking mode for more than 90% of the cycles.
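The adaptive-sample idea can be illustrated with a 1D toy filter (our own sketch, not the paper's implementation; the motion/measurement models, noise levels, and the spread threshold are all assumptions):

```python
import math
import random

def mcl_cycle(particles, motion, measurement, noise=0.2,
              spread_thresh=0.5, n_max=100, seed=0):
    """One 1D Monte-Carlo-localization cycle with an adaptive sample count.

    Particles are predicted with the odometry `motion`, weighted by a
    Gaussian likelihood of the position `measurement`, and resampled.
    When the weighted estimate is sufficiently certain, the filter
    collapses to a single sample (the cheap tracking mode the abstract
    describes); otherwise it keeps the full n_max sample set.
    """
    rng = random.Random(seed)
    # Prediction: apply odometry plus motion noise.
    moved = [p + motion + rng.gauss(0.0, noise) for p in particles]
    # Correction: weight each particle by the measurement likelihood.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * noise ** 2)) for p in moved]
    total = sum(weights) or 1e-12   # guard against total weight underflow
    weights = [w / total for w in weights]
    mean = sum(w * p for w, p in zip(weights, moved))
    spread = math.sqrt(sum(w * (p - mean) ** 2 for w, p in zip(weights, moved)))
    if spread < spread_thresh:
        return [mean]                                        # confident: one sample
    return rng.choices(moved, weights=weights, k=n_max)      # uncertain: full set
```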
Heinemann, P., Sehnke, F., Streichert, F. & Zell, A.

Towards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 363-370 inproceedings


Abstract: Many approaches for object detection based on color coding were published in the RoboCup domain. They are tuned to the typical RoboCup scenario of constant lighting using a static subdivision of the color space. However, such algorithms will soon be of limited use, when playing under changing and finally natural lighting. This paper presents an algorithm for automatic color training, which is able to robustly adapt to different lighting situations online. Using the ACT algorithm a robot is able to play a RoboCup match while the illumination of the field varies.
Herrero-Pérez, D. & Martínez-Barberá, H.

Robust and Efficient Field Features Detection for Localization

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 347-354 inproceedings


Abstract: In some RoboCup leagues, especially in the Four-Legged League, robots make use of coloured landmarks for localisation. Because these landmarks have no correlation with real soccer, it seems a natural approach to remove them. But for this to become a reality, some difficulties need to be solved, mainly an efficient and robust field features detection and an efficient localisation technique to manage this type of information. In this paper we present an approach for field features detection based on finding intersections between field lines, which runs at frame rate on the AIBO robots. We also present some experimental results of the vision system and a comparison of the traditional coloured-landmark localisation and the field-features-only localisation, both using a fuzzy-Markov localisation technique.
Hoffmann, J.

Proprioceptive Motion Modeling for Monte Carlo Localization

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 258-269 inproceedings


Abstract: This paper explores how robot localization can be improved and made more reactive by using an adaptive motion model based on proprioception. The motion model of mobile robots is commonly assumed to be constant or a function of the robot speed. We extend this model by explicitly modeling possible states of locomotion caused by interactions of the robot with its environment, such as collisions. The motion model thus behaves according to which state the robot is in. State transitions are based on proprioception, which in our case describes how well the robot’s limbs are able to follow their respective motor commands. The extended, adaptive motion model yields a better, more reactive model of the current robot belief, which is shown in experiments. The improvement is due to the fact that the motion noise no longer has to subsume any possible outcome of actions including failure. In contrast, a clear distinction between failure and normal, desired operation is possible, which is reflected in the motion model.
Indiveri, G., Paulus, J. & Plöger, P.G.

Motion Control of Swedish Wheeled Mobile Robots in the Presence of Actuator Saturation

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 35-46 inproceedings


Abstract: Swedish wheeled mobile robots have remarkable mobility properties allowing them to rotate and translate at the same time. Being holonomic systems, their kinematics model results in the possibility of designing separate and independent position and heading trajectory tracking control laws. Nevertheless, if these control laws should be implemented in the presence of unaccounted actuator saturation, the resulting saturated linear and angular velocity commands could interfere with each other thus dramatically affecting the overall expected performance. Based on Lyapunov’s direct method, a position and heading trajectory tracking control law for Swedish wheeled robots is developed. It explicitly accounts for actuator saturation by using ideas from a prioritized task based control framework.
Iocchi, L.

Robust Color Segmentation Through Adaptive Color Distribution Transformation

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 287-295 inproceedings


Abstract: Color segmentation is typically the first step of vision processing for a robot operating in a color-coded environment, such as RoboCup soccer, and many object recognition modules rely on that.
Although many approaches to color segmentation have been proposed, in the official games of the RoboCup Four Legged League manual calibration is still preferred by most of the teams. In this paper we present a method for color segmentation that is based on an adaptive transformation of the color distribution of the image: the transformation is dynamically computed depending on the current image (i.e., it adapts to condition changes) and then it is used for color segmentation with static thresholds. The method requires the setting of only a few parameters and has been proved to be very robust to noise and light variations, allowing for setting parameters only once when arriving at a competition site.
The approach has been implemented on AIBO robots, extensively tested in our laboratory, and successfully tested in some of the games of the Four-Legged League at RoboCup 2005.
Isik, M., Stulp, F., Mayer, G. & Utz, H.

Coordination Without Negotiation in Teams of Heterogeneous Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 355-362 inproceedings


Abstract: A key feature of human cooperation is that we can coordinate well without communication or negotiation. We achieve this by anticipating the intentions and actions of others, and adapting our own actions to them accordingly. In contrast, most multi-robot systems rely on extensive communication to exchange their intentions.
This paper describes the joint approach of our two research groups to enable a heterogeneous team of robots to coordinate implicitly, without negotiation. We apply implicit coordination to a typical coordination task from robotic soccer: regaining ball possession. We discuss the benefits and drawbacks of implicit coordination, and evaluate it by conducting several experiments with our robotic soccer teams.
Kalyanakrishnan, S., Liu, Y. & Stone, P.

Half Field Offense in RoboCup Soccer: A Multiagent Reinforcement Learning Case Study

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 72-85 inproceedings


Abstract: We present half field offense, a novel subtask of RoboCup simulated soccer, and pose it as a problem for reinforcement learning. In this task, an offense team attempts to outplay a defense team in order to shoot goals. Half field offense extends keepaway [11], a simpler subtask of RoboCup soccer in which one team must try to keep possession of the ball within a small rectangular region, and away from the opposing team. Both keepaway and half field offense have to cope with the usual problems of RoboCup soccer, such as a continuous state space, noisy actions, and multiple agents, but the latter is a significantly harder multiagent reinforcement learning problem because of sparse rewards, a larger state space, a richer action set, and the sheer complexity of the policy to be learned. We demonstrate that the algorithm that has been successful for keepaway is inadequate to scale to the more complex half field offense task, and present a new algorithm to address the aforementioned problems in multiagent reinforcement learning. The main feature of our algorithm is the use of inter-agent communication, which allows for more frequent and reliable learning updates. We show empirical results verifying that our algorithm registers significantly higher performance and faster learning than the earlier approach. We also assess the contribution of inter-agent communication by considering several variations of the basic learning method. This work is a step further in the ongoing challenge to learn complete team behavior for the RoboCup simulated soccer task.
Khojasteh, M.R. & Meybodi, M.R.

Evaluating Learning Automata as a Model for Cooperation in Complex Multi-Agent Domains

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 410-417 inproceedings


Abstract: Learning automata act in a stochastic environment and are able to update their action probabilities considering the inputs from their environment, so optimizing their functionality as a result. In this paper, the goal is to investigate and evaluate the application of learning automata to cooperation in multi-agent systems, using the soccer simulation server as a test bed. We have also evaluated our learning method in hard situations, such as the malfunctioning of some of the agents in the team and situations where the agents' sensing and acting abilities are highly noisy. Our experimental results show that learning automata adapt well to these situations.
Kobayashi, H., Osaki, T., Williams, E., Ishino, A. & Shinohara, A.

Autonomous Learning of Ball Trapping in the Four-Legged Robot League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 86-97 inproceedings


Abstract: This paper describes an autonomous learning method used with real robots in order to acquire ball trapping skills in the four-legged robot league. These skills involve stopping and controlling an oncoming ball and are essential to passing a ball to each other. We first prepare some training equipment and then experiment with only one robot. The robot can use our method to acquire these necessary skills on its own, much in the same way that a human practicing against a wall can learn the proper movements and actions of soccer on his/her own. We also experiment with two robots, and our findings suggest that robots communicating between each other can learn more rapidly than those without any communication.
Kyrylov, V.

Balancing Gains, Risks, Costs, and Real-Time Constraints in the Ball Passing Algorithm for the Robotic Soccer

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 304-313 inproceedings


Abstract: We are looking for a generic solution to the optimized ball-passing problem in robotic soccer that is applicable to many simulated digital sports games with a ball. In doing so, we show that previously published ball passing methods do not properly address the necessary balance between the anticipated rewards, costs, and risks. The multi-criteria nature of this optimization problem requires using the Pareto optimality approach. We propose a scalable and robust solution for decision making, whose quality degrades gracefully once real-time constraints kick in.
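The Pareto-optimality idea the abstract invokes can be illustrated with a generic non-dominated filter (our own sketch, not the paper's algorithm; criteria are encoded so that larger is better, e.g. gain, negated risk, negated cost):

```python
def pareto_front(options):
    """Return the non-dominated options among tuples of criteria.

    An option is dominated if some other option is at least as good on
    every criterion and strictly better on at least one; only the
    non-dominated options need be considered for the final choice.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [o for o in options
            if not any(dominates(p, o) for p in options if p != o)]
```

For pass selection, each candidate pass might be scored as (expected gain, -interception risk, -execution cost), leaving a short list of trade-off candidates for a final tie-breaking rule.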
Lange, S. & Riedmiller, M.

Appearance-Based Robot Discrimination Using Eigenimages

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 499-506 inproceedings


Abstract: Transformation of high-dimensional images to a low-dimensional feature space using Eigenimages is a well-known technique in the field of face recognition. In this paper, we investigate the applicability of this method to the task of discriminating several types of robots by their appearance only. After calculating suitable Eigenimages for Middle Size robots and selecting the most useful ones, a Support Vector Machine is trained on the feature vectors to reliably recognize several types of robots. The computational demands and the integration into a real-time vision system play an important role throughout the discussion.
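The Eigenimage transformation the abstract relies on is principal component analysis over flattened images. Below is a pure-Python sketch of the first component via power iteration (real systems would use an SVD, and the SVM classification step is omitted; all names are our own):

```python
def top_eigenimage(images, iters=200):
    """First Eigenimage (principal component) of equally sized,
    flattened images, by power iteration on the covariance.
    """
    n, d = len(images), len(images[0])
    mean = [sum(img[j] for img in images) / n for j in range(d)]
    centred = [[img[j] - mean[j] for j in range(d)] for img in images]
    v = [1.0] * d
    for _ in range(iters):
        # Compute C v as X^T (X v) without forming the covariance C.
        proj = [sum(row[j] * v[j] for j in range(d)) for row in centred]
        w = [sum(proj[i] * centred[i][j] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return mean, v

def project(image, mean, v):
    """Feature = scalar projection of the centred image on the Eigenimage."""
    return sum((x - m) * e for x, m, e in zip(image, mean, v))
```

A classifier would then be trained on vectors of such projections onto the few most useful Eigenimages, rather than on raw pixels.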
Latzke, T., Behnke, S. & Bennewitz, M.

Imitative Reinforcement Learning for Soccer Playing Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 47-58 inproceedings


Abstract: In this paper, we apply Reinforcement Learning (RL) to a real-world task. While complex problems have been solved by RL in simulated worlds, the costs of obtaining enough training examples often prohibits the use of plain RL in real-world scenarios. We propose three approaches to reduce training expenses for real-world RL. Firstly, we replace the random exploration of the huge search space, which plain RL uses, by guided exploration that imitates a teacher. Secondly, we use experiences not only once but store and reuse them later on when their value is easier to assess. Finally, we utilize function approximators in order to represent the experience in a way that balances between generalization and discrimination. We evaluate the performance of the combined extensions of plain RL using a humanoid robot in the RoboCup soccer domain. As we show in simulation and real-world experiments, our approach enables the robot to quickly learn fundamental soccer skills.
Laue, T. & Röfer, T.

Integrating Simple Unreliable Perceptions for Accurate Robot Modeling in the Four-Legged League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 474-482 inproceedings


Abstract: The perception and modeling of other robots has been a topic of minor regard in the Four-Legged League, because of the limited processing and sensing capabilities of the AIBO platform. Even the current world champion, the GermanTeam, abandoned the use of robot recognition. Nevertheless, accurate position estimates of other players will be needed in the future to accomplish tasks such as passing or applying adaptive tactics. This paper describes an approach for localizing other players in a robot's local environment by integrating different unreliable perceptions of robots and obstacles, which may be computed in a reasonable way. The approach is based on Gaussian distributions describing the models of the robots as well as the perceptions. The integration of information is realized by Kalman filtering.
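In one dimension, the Kalman-filter integration of Gaussian perceptions mentioned in the abstract reduces to the product of two Gaussians; a minimal sketch with hypothetical values:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Product of two 1-D Gaussian estimates (the scalar Kalman update).

    Returns the fused mean and variance; the fused variance is never
    larger than either input variance.
    """
    k = var1 / (var1 + var2)          # Kalman gain
    mu = mu1 + k * (mu2 - mu1)
    var = (1.0 - k) * var1
    return mu, var

# Hypothetical example: an obstacle percept and a robot percept of the
# same opponent's position along one axis (metres).
mu, var = fuse_gaussians(1.0, 0.20, 1.4, 0.10)
print(mu, var)   # ≈ 1.267 0.067 — pulled toward the more certain percept
```

The full approach works with 2-D Gaussians over positions, but the same gain-weighted update applies component-wise.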
The Deutsche Forschungsgemeinschaft supports this work through the priority program “Cooperating teams of mobile robots in dynamic environments”.
Lauer, M.

Ego-Motion Estimation and Collision Detection for Omnidirectional Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 466-473 inproceedings


Abstract: We propose an algorithm to estimate the ego-motion of an omnidirectional robot based on a sequence of position estimates. To this end, we derive a motion model for omnidirectional robots and an estimation procedure to fit the model to observed positions. Additionally, we show how to benefit from the velocity estimates by deriving an algorithm that recognizes situations in which a robot is blocked by an obstacle.
Li, X. & Zell, A.

H∞ Filtering for a Mobile Robot Tracking a Free Rolling Ball

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 296-303 inproceedings


Abstract: This paper focuses on the problem of tracking and predicting the location and velocity of a rolling ball in the RoboCup environment, when the ball is pushed consecutively by a middle-size omnidirectional robot to follow a given path around obstacles. A robust algorithm based on the H∞ filter is presented to accurately estimate the ball's real-time location and velocity. The performance of this tracking strategy was also evaluated by real-world experiments and comparisons with the Kalman filter.
Marchetti, L., Grisetti, G. & Iocchi, L.

A Comparative Analysis of Particle Filter Based Localization Methods

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 442-449 inproceedings


Abstract: Self-localization is a deeply investigated field in mobile robotics, and many effective solutions have been proposed. In this context, Monte Carlo Localization (MCL) is one of the most popular approaches, and represents a good tradeoff between robustness and accuracy. The basic underlying principle of this family of approaches is to use a Particle Filter for tracking a probability distribution over the possible robot poses.
While the general particle filter framework specifies the sequence of operations to perform, it leaves open several choices, including the observation and motion models, and it does not directly address the kidnapped-robot problem.
The goal of this paper is to provide a systematic analysis of Particle Filter Localization methods, considering the different observation models which can be used in RoboCup soccer environments. Moreover, we investigate the use of two different particle filtering strategies: the well-known Sampling Importance Resampling (SIR) filter and the Auxiliary Variable Particle Filter (APF).
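The SIR strategy compared in the paper can be illustrated with a minimal resampling step (low-variance systematic resampling is one common implementation choice; the pose particles and weights below are hypothetical):

```python
import random

def sir_resample(particles, weights):
    """Sampling Importance Resampling: draw a new particle set with
    probability proportional to the importance weights, using
    low-variance systematic resampling."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = random.uniform(0.0, step)   # single random offset
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:              # advance to the particle whose
            i += 1                  # cumulative weight covers u
            cum += weights[i]
        out.append(particles[i])
        u += step                   # evenly spaced samples thereafter
    return out

# Hypothetical pose particles (x, y, theta); the middle one dominates.
particles = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (3.0, 1.0, 1.2)]
print(sir_resample(particles, [0.1, 0.8, 0.1]))
```

After resampling, all particles carry equal weight and low-weight poses tend to disappear, which is exactly the loss-of-diversity effect that motivates alternatives such as the APF.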
Marcinkiewicz, M., Kunin, M., Parsons, S., Sklar, E. & Raphan, T.

Towards a Methodology for Stabilizing the Gaze of a Quadrupedal Robot

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 540-547 inproceedings


Abstract: When a quadrupedal robot moves, its body and head pitch, yaw, and roll because of its stepping. This natural effect of body and head motion adversely affects the use of visual sensors embedded in the robot's head. Any object in the visual frame of the robot will, from the perspective of the robot, be subject to considerable unmodeled motion or slip. This problem does not affect mammals, which have vestibulo-collic and vestibulo-ocular reflexes that stabilize their gaze in space and keep objects of interest approximately fixed on the retina. Our work is aimed at constructing an artificial vestibular system for quadrupedal robots to maintain accurate gaze. This paper describes the first part of this work, in which we mounted an artificial vestibular system on a Sony AIBO robot.
Martínez, I.C., Ojeda, D. & Zamora, E.A.

Ambulance Decision Support Using Evolutionary Reinforcement Learning in RoboCup Rescue Simulation League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 556-563 inproceedings


Abstract: We present a complete design of agents for the RoboCup Rescue Simulation problem that uses an evolutionary reinforcement learning mechanism called XCS, a version of Holland's Genetic Classifier Systems, to decide the number of ambulances required to rescue a buried civilian. We also analyze the problems implied by the rescue simulation and present solutions for every identified sub-problem using multi-agent cooperation and coordination built over a subsumption architecture. Our agents' classifier systems were trained in different disaster situations. Trained agents outperformed untrained agents and most participants of the 2004 RoboCup Rescue Simulation League competition. The system managed to extract general rules that could be applied to new disaster situations, at the computational cost of a reactive rule system.
Mayer, N.M., Boedecker, J., da Silva Guerra, R., Obst, O. & Asada, M.

3D2Real: Simulation League Finals in Real Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 25-34 inproceedings


Abstract: We present a road map for a joint project of the Simulation League and the Humanoid League that we call 3D2Real. This project is concerned with the integration of these two leagues, which is becoming increasingly important as the research fields converge. Currently, a lot of work is duplicated across the leagues, collaboration is sparse, and know-how is not transferred effectively. This binds resources to solving the same problems over and over again. To address this, we discuss the current situation of both leagues with respect to these points and focus on open issues that have to be resolved. In addition, we describe existing open standards and contributions from the RoboCup community that we plan to use for the project. As a milestone, we propose to conduct the finals of the 3D simulation tournament on real robots by the year 2008. Finally, we propose a database of simulated parts and algorithms to which each league can contribute its expertise and from which each can benefit. These contributions create synergies across individual leagues for the benefit of the RoboCup project and the year-2050 goal.
McMillen, C. & Veloso, M.

Distributed, Play-Based Coordination for Robot Teams in Dynamic Environments

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 483-490 inproceedings


Abstract: We present refinements to our previous work on team communication and multi-robot world modeling in the RoboCup legged league. These refinements put high priority on the communication of task-relevant data. We also build upon past results within the simulation and the small-size leagues and contribute a distributed, play-based role assignment algorithm. This algorithm allows the robots to autonomously adapt their strategy based on the current state of the environment, the game, and the behavior of opponents. The improvements discussed in this paper were used by CMDash in the RoboCup 2005 international competition.
Nakanishi, R., Bruce, J., Murakami, K., Naruse, T. & Veloso, M.

Cooperative 3-Robot Passing and Shooting in the RoboCup Small Size League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 418-425 inproceedings


Abstract: This paper describes a method for cooperative play among three robots in order to score a goal in the RoboCup Small Size League. In RoboCup 2005 Osaka, our team introduced a new attacking play, where one robot kicks a ball and another receives and immediately shoots the ball on goal. However, due to the relatively slow kicking speed of the robot, top opponent teams could prevent successful passing between robots. This motivates the need for more complex play, such as passing among several robots to avoid the opponents' passing defense. In this paper we propose a method to realize such a play, i.e., a combination play among three robots. We discuss the technical issues in achieving this combination play, especially for a pass-and-shoot combination play. Experimental results on real robots are provided. They indicate that the success rate of the play depends strongly on the arrangement of the robots, and ranges from 20% to 90% in tests with an opponent goalkeeper that stands still.
Nicklin, S.P., Fisher, R.D. & Middleton, R.H.

Rolling Shutter Image Compensation

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 402-409 inproceedings


Abstract: This paper describes corrections to image distortion found on the Sony AIBO ERS-7 robots. When obtaining an image, the camera captures each pixel in series; that is, there is effectively a 'rolling shutter'. This results in a delay between the capture of the first and last pixels. When combined with movement of the camera, the image produced will be distorted. The sensor values from the robot, coupled with knowledge of the camera's timing, are used to calculate the effect of the robot's movement on the image. This information can then be used to remove much of the distortion from the image. The correction improves the effectiveness of shape recognition and bearing-to-object accuracy.
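A simplified sketch of this kind of correction, for camera pan only (the timing and scale constants below are hypothetical; a real implementation would also compensate pitch and roll from the joint sensors):

```python
def compensate_rolling_shutter(u, v, yaw_rate, row_time, px_per_rad):
    """Shift a pixel's horizontal coordinate to undo the apparent motion
    accumulated between the capture of row 0 and row v.

    yaw_rate   : camera pan speed in rad/s (from the robot's sensors)
    row_time   : readout time per image row in seconds
    px_per_rad : horizontal pixels per radian of camera rotation
    """
    dt = v * row_time                 # this row is captured dt after row 0
    du = yaw_rate * dt * px_per_rad   # apparent horizontal shift so far
    return u - du, v

# A pixel in row 100 while the camera pans at 2 rad/s:
print(compensate_rolling_shutter(160.0, 100, 2.0, 1e-4, 300.0))
# ≈ (154.0, 100) — the later the row, the larger the correction
```

Applying this per detected feature (rather than per pixel) keeps the correction cheap enough for the AIBO's onboard processor.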
Nisticò, W., Hebbel, M., Kerkhof, T. & Zarges, C.

Cooperative Visual Tracking in a Team of Autonomous Mobile Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 146-157 inproceedings


Abstract: Robot soccer is a challenging domain for sensor fusion and object tracking techniques, due to its team oriented, fast-paced, dynamic and competitive nature. Since each robot has a limited view about the world surrounding it, the sharing of information with its teammates is often crucial in order to be ready to react to situations which might involve it in the near future. In this paper we propose a Particle Filter based approach that addresses the problem of cooperative global sensor fusion by explicitly modeling the uncertainty concerning the robots’ positions, the data association about the tracked object, and the loss of information over the network.
Olufs, S., Adolf, F., Hartanto, R. & Plöger, P.

Towards Probabilistic Shape Vision in RoboCup: A Practical Approach

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 171-182 inproceedings


Abstract: This paper presents a robust object tracking method using a sparse shape-based object model. Our approach consists of three ingredients, namely shapes, a motion model, and a sparse (non-binary) subsampling of colours in background and foreground parts based on the shape assumption. The tracking itself is inspired by the idea of having a short-term and a long-term memory. A lost object is "missed" by the long-term memory when it is no longer recognized by the short-term memory. Moreover, the long-term memory allows re-detecting vanished objects and using their new positions as initial positions for further tracking. The short-term memory is implemented with a new Monte Carlo variant which provides a heuristic to cope with the loss-of-diversity problem. It enables simultaneous tracking of multiple (visually) identical objects. The long-term memory is implemented with a Bayesian Multiple Hypothesis filter. We demonstrate the robustness of our approach with respect to object occlusions and non-Gaussian/non-linear movements of the tracked object. We also show that tracking can be significantly improved by compensating for ego-motion. Our approach is very scalable, since its parameters can be tuned to trade off precision against computational time.
Otsuka, F., Fujii, H. & Yoshida, K.

Development of Three Dimensional Dynamics Simulator with Omnidirectional Vision Model

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 523-531 inproceedings


Abstract: In the field of robotics, simulators are important tools for verifying algorithms. They are required to generate movements and physical interactions of objects based on dynamics and to simulate sensor outputs. However, few robot simulators simulate both the dynamics of objects and the outputs of sensors, in particular omnidirectional cameras, which are effective sensors because they acquire an omnidirectional field of view at once. In this study, omnidirectional vision simulators are developed based on the ray casting method. The proposed simulators render accurate omnidirectional images, and an intersection test is developed to speed up rendering. They can be used in "Gazebo", an open source 3D dynamics simulator. The proposed methods are verified by comparing the rendered images with images obtained by a real omnidirectional vision sensor.
Ozgelen, A.T., Sklar, E. & Parsons, S.

Automatic Acquisition of Robot Motion and Sensor Models

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 548-555 inproceedings


Abstract: For accurate self-localization using probabilistic techniques, robots require robust models of motion and sensor characteristics. Such models are sensitive to variations in lighting conditions, terrain and other factors like robot battery strength. Each of these factors can introduce variations in the level of noise considered by probabilistic techniques. Manually constructing models of noise is time-consuming, tedious and error-prone. We have been developing techniques for automatically acquiring such models, using the AIBO robot and a modified RoboCup Four-Legged League field with an overhead camera. This paper describes our techniques and presents preliminary results.
Planthaber, S. & Visser, U.

Logfile Player and Analyzer for RoboCup 3D Simulation

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 426-433 inproceedings


Abstract: In multi-agent environments or systems equipped with artificial intelligence, it is often difficult to determine the function or method that led to a particular behavior observable from outside. However, this information is crucial, if not necessary, for optimizing the agents' behavior. In the RoboCup 3D Simulation League this dilemma becomes obvious when replaying logfiles of a previously simulated game. The 3D soccer simulation league monitor (rcssmonitor-lite) is limited with regard to replaying logfiles.
This paper describes the concept and implementation of improvements to the monitor's log-playing and analyzing abilities. The idea is to provide a tool that assists developers in detecting problems of their agents both individually and in cooperation.
Polverari, G., Calisi, D., Farinelli, A. & Nardi, D.

Development of an Autonomous Rescue Robot Within the USARSim 3D Virtual Environment

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 491-498 inproceedings


Abstract: The increasing interest towards rescue robotics and the complexity of typical rescue environments make it necessary to use high fidelity 3D simulators during the application development phase. USARSim is an open source high fidelity simulator for rescue environments, based on a commercial game engine. In this paper, we describe the development of an autonomous rescue robot within the USARSim simulation environment. We describe our rescue robotic system and present the extensions we made to USARSim in order to have a satisfying simulation of our robot. Moreover, as a case study, we present an algorithm to avoid obstacles invisible to our laser scanner based mapping process.
Prüter, S., Salomon, R. & Golatowski, F.

Local Movement Control with Neural Networks in the Small Size League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 434-441 inproceedings


Abstract: In the RoboCup Small-Size League, most teams calculate the robots' positions by means of a camera that is mounted above the field as well as different kinds of artificial intelligence methods that run on an additional PC. This processing loop induces various time delays, which require forecasting routines if more accurate behaviors are desired. This paper shows that by utilizing a combination of a neural network and local sensors, the robot is able to estimate its actual position quite accurately. This paper furthermore shows that the learning procedure is also able to compensate for slip and friction effects that cannot be observed by the local sensors.
Rojas, R., Simon, M. & Tenchio, O.

Parabolic Flight Reconstruction from Multiple Images from a Single Camera in General Position

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 183-193 inproceedings


Abstract: This paper shows that it is possible to retrieve all parameters of the parabolic flight trajectory of an object from a time-stamped sequence of images captured by a single camera looking at the scene. Surprisingly, it is not necessary to use two cameras (stereo vision) in order to determine the coordinates of the moving object with respect to the floor. The technique described in this paper can thus be used to determine the three-dimensional trajectory of a ball kicked by a robot. The whole calculation can be done with as few as three measurements of the ball position captured in three consecutive frames. Therefore, this technique can be used to forecast the future motion of the ball a few milliseconds after the kick has taken place. The computation is fast and allows a robot goalie to move to the correct blocking position. Interestingly, this technique can also be used to self-calibrate stereo cameras.
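A simplified one-dimensional sketch of the underlying idea: given time-stamped height measurements, three points determine the parabola, and the recovered quadratic coefficient should equal -g/2 for free flight (the sample values below are synthetic; the paper works with 2-D image measurements and a general camera pose):

```python
import numpy as np

def fit_parabola(ts, zs):
    """Recover (a, b, c) of z(t) = a*t^2 + b*t + c from timestamped
    height measurements. Three samples suffice; more are handled in
    the least-squares sense."""
    A = np.vstack([np.array(ts) ** 2, ts, np.ones(len(ts))]).T
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coeffs

# Synthetic ball heights from three consecutive frames of a kicked ball:
g = 9.81
ts = [0.00, 0.033, 0.066]
zs = [-0.5 * g * t ** 2 + 3.0 * t + 0.1 for t in ts]
a, b, c = fit_parabola(ts, zs)
print(a, b, c)   # a ≈ -4.905 = -g/2, confirming free flight
```

With the parabola known, the predicted landing point follows from the positive root of z(t) = 0, which is what lets the goalie move before the ball arrives.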
Saggar, M., D’Silva, T., Kohl, N. & Stone, P.

Autonomous Learning of Stable Quadruped Locomotion

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 98-109 inproceedings


Abstract: A fast gait is an essential component of any successful team in the RoboCup 4-Legged League. However, quickly moving quadruped robots, including those with learned gaits, often move in such a way as to cause unsteady camera motions which degrade the robot's visual capabilities. This paper presents an implementation of the policy gradient machine learning algorithm that searches for a parameterized walk while optimizing for both speed and stability. To the best of our knowledge, previous learned walks have all focused exclusively on speed. Our method is fully implemented and tested on the Sony AIBO ERS-7 robot platform. The resulting gait is reasonably fast and considerably more stable than our previous fast gaits. We demonstrate that this stability can significantly improve the robot's visual object recognition.
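A toy sketch of the general finite-difference policy-gradient scheme behind such learned walks (the two gait parameters and the "speed + stability" objective below are purely illustrative, not the paper's actual parameterization):

```python
def policy_gradient_step(params, objective, eps=0.05, lr=0.1):
    """One finite-difference policy-gradient update: perturb each gait
    parameter, estimate the partial derivative of the objective, and
    step uphill."""
    grad = []
    base = objective(params)
    for i in range(len(params)):
        p = list(params)
        p[i] += eps
        grad.append((objective(p) - base) / eps)
    return [x + lr * g for x, g in zip(params, grad)]

# Toy stand-in for "speed + stability" as measured on the robot;
# the optimum sits near params = [1.2, 0.4].
def objective(p):
    speed = -(p[0] - 1.2) ** 2
    stability = -(p[1] - 0.4) ** 2
    return speed + 0.5 * stability

params = [0.8, 0.9]
for _ in range(50):
    params = policy_gradient_step(params, objective)
print(params)   # moves toward the optimum near [1.2, 0.4]
```

On a real robot each `objective` call is a timed walking trial, so the paper's contribution lies in making the combined speed-and-stability score measurable and in keeping the number of trials small.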
del Solar, J.R., Loncomilla, P. & Vallejos, P.

An Automated Refereeing and Analysis Tool for the Four-Legged League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 206-218 inproceedings


Abstract: The aim of this paper is to propose an automated refereeing and analysis tool for robot soccer. This computer vision based tool can be applied to diverse tasks such as: (i) automated game refereeing, (ii) computer-based analysis of the game and derivation of game statistics, (iii) automated annotations and semantic descriptions of the game, which could be used for the automatic generation of training data for learning complex high-level behaviors, and (iv) automatic acquisition of real game data to be used in robot soccer simulators. The most attractive application of the tool is automated refereeing. In this case, the refereeing system is built using a processing unit (standard PC) and some static and/or moving video cameras. The system can interact with the robot players and with the game controller using wireless data communication, and with the human spectators and human second referees by speech synthesis mechanisms or visual displays. We present a refereeing and analysis system for the RoboCup Four-Legged League. This system is composed of three modules: object perception, tracking, and action analysis. The camera placement issue is handled by human-controlled placement and movement of the cameras. Some preliminary experimental results of this system are presented.
This research was partially supported by FONDECYT (Chile) under Project Number 1061158.
Sridharan, M. & Stone, P.

Autonomous Planned Color Learning on a Legged Robot

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 270-278 inproceedings


Abstract: Our research focuses on automating the color-learning process on-board a legged robot with limited computational and memory resources. A key defining feature of our approach is that instead of using explicitly labeled training data it trains autonomously and incrementally, thereby making it robust to re-colorings in the environment. Prior results demonstrated the ability of the robot to learn a color map when given an executable motion sequence designed to present it with good color-learning opportunities based on the known structure of its environment. This paper extends these results by demonstrating that the robot can plan its own such motion sequence and perform just as well at color-learning. The knowledge acquired at each stage of the learning process is used as a bootstrap mechanism to aid the robot in planning its motion during subsequent stages.
Strasdat, H., Bennewitz, M. & Behnke, S.

Multi-Cue Localization for Soccer Playing Humanoid Robots

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 245-257 inproceedings


Abstract: An essential capability of a soccer playing robot is to robustly and accurately estimate its pose on the field. Tracking the pose of a humanoid robot is, however, a complex problem. The main difficulties are that the robot has only a constrained field of view, which is additionally often affected by occlusions, that the roll angle of the camera changes continuously and can only be roughly estimated, and that dead reckoning provides only noisy estimates. In this paper, we present a technique that uses field lines, the center circle, corner poles, and goals extracted from the images of a low-cost wide-angle camera, as well as motion commands and a compass, to localize a humanoid robot on the soccer field. We present a new approach to robustly extract lines using detectors for oriented line points and the Hough transform. Since we first estimate the orientation, the individual line points are localized well in the Hough domain. In addition, while matching observed lines and model lines, we do not only consider their Hough parameters. Our similarity measure also takes into account the positions and lengths of the lines. In this way, we obtain a much more reliable estimate of how well two lines fit. We apply Monte-Carlo localization to estimate the pose of the robot. The observation model used to evaluate the individual particles considers the differences between expected and measured distances and angles of the other landmarks. As we demonstrate in real-world experiments, our technique is able to robustly and accurately track the position of a humanoid robot on a soccer field. We also present experiments to evaluate the utility of the different cues for pose estimation.
Stronger, D. & Stone, P.

Selective Visual Attention for Object Detection on a Legged Robot

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 158-170 inproceedings


Abstract: Autonomous robots can use a variety of sensors, such as sonar, laser range finders, and bump sensors, to sense their environments. Visual information from an onboard camera can provide particularly rich sensor data. However, processing all the pixels in every image, even with simple operations, can be computationally taxing for robots equipped with cameras of reasonable resolution and frame rate. This paper presents a novel method for a legged robot equipped with a camera to use selective visual attention to efficiently recognize objects in its environment. The resulting attention-based approach is fully implemented and validated on an Aibo ERS-7. It effectively processes incoming images 50 times faster than a baseline approach, with no significant difference in the efficacy of its object detection.
Sturm, J., van Rossum, P. & Visser, A.

Panoramic Localization in the 4-Legged League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 387-394 inproceedings


Abstract: The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made in the 4-Legged League in the past years, the performance of many of those approaches depends completely on the artificial environment conditions established on a 4-Legged soccer field. In this article, an algorithm is introduced that can provide localization information based on the natural appearance of the surroundings of the field. The algorithm starts by making a scan of the surroundings, turning the robot's head and body on a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance can be used to determine the rotation (including a confidence value) relative to the learned spot for other points on the field. The applicability of this kind of localization to more natural environments is demonstrated in two environments other than the official 4-Legged league field.
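A minimal sketch of matching an observed panoramic signature against a stored one by circular correlation (the 8-sector signatures below are hypothetical; the paper's actual index stores color transitions at many angles):

```python
def estimate_rotation(stored, observed):
    """Find the rotation (in sector bins) that best aligns an observed
    panoramic signature with the stored one, plus a simple confidence.

    Both signatures are per-angle feature counts (e.g. color transitions
    per sector); the best shift maximizes their circular correlation."""
    n = len(stored)
    scores = [sum(stored[(i + shift) % n] * observed[i] for i in range(n))
              for shift in range(n)]
    best = max(range(n), key=lambda s: scores[s])
    confidence = scores[best] / (sum(scores) / n)  # peak vs. average
    return best, confidence

# Hypothetical 8-sector signature, observed after a 2-sector rotation:
stored   = [5, 1, 0, 2, 7, 3, 1, 0]
observed = [0, 2, 7, 3, 1, 0, 5, 1]
shift, conf = estimate_rotation(stored, observed)
print(shift, conf)   # shift is 2; confidence well above 1
```

A confidence near 1 means the correlation peak barely exceeds the average score, signalling that the surroundings look too uniform for a reliable bearing.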
Umemura, S., Murakami, K. & Naruse, T.

Orientation Extraction and Identification of the Opponent Robots in RoboCup Small-Size League

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 395-401 inproceedings


Abstract: In the RoboCup Small-Size League, it is necessary to analyze the opponent robots' behavior in order to devise the team's own strategy. However, it is difficult to prepare image processing methods in advance to detect the opponent robots' sub-markers, which are used for orientation detection and identification, because the rules place no limitation on their shape, color, arrangement, or number. This paper proposes a new method to select the most distinctive sub-marker attached on top of each robot, based on features such as size, area, and color values, using discriminant analysis, and also explains how to extract the opponent robots' orientations, with some experimental results.
Weitzenfeld, A. & Dominey, P.F.

Cognitive Robotics: Command, Interrogation and Teaching in Robot Coaching

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 379-386 inproceedings


Abstract: The objective of the current research is to develop a generalized approach for human-robot interaction via spoken language that exploits recent developments in cognitive science, particularly notions of grammatical constructions as form-meaning mappings in language, and notions of shared intentions as distributed plans for interaction and collaboration. We demonstrate this approach by distinguishing among three levels of human-robot interaction. The first level is that of commanding or directing the behavior of the robot. The second level is that of interrogating or requesting an explanation from the robot. The third and most advanced level is that of teaching the robot a new form of behavior. Within this context, we exploit social interaction by structuring communication around shared intentions that guide the interactions between human and robot. We explore these aspects of communication on distinct robotic platforms, the Event Perceiver and the Sony AIBO robot, in the context of the RoboCup Four-Legged League. We conclude with a discussion of the current state of this work.
Zaratti, M., Fratarcangeli, M. & Iocchi, L.

A 3D Simulator of Multiple Legged Robots Based on USARSim

2007 RoboCup 2006: Robot Soccer World Cup X, pp. 13-24 inproceedings


Abstract: This paper presents a flexible 3D simulator able to reproduce the appearance and the dynamics of generic legged robots and objects in the environment at full frame rate (30 frames per second). The simulator extends and improves USARSim (Urban Search and Rescue Simulator), a robot simulator in turn based on the Unreal Engine game platform. The latter provides facilities for good-quality rendering, physics simulation, networking, a highly versatile scripting language, and a powerful visual editor. Our simulator extends USARSim by allowing for the simulation and control of legged robots, and it introduces a multi-view functionality for multi-robot support. We successfully tested the simulator's capabilities by simulating a virtual environment with up to five network-controlled legged robots, such as the AIBO ERS-7 and QRIO.