Abstract
After winning the RoboCup Middle-Size League (MSL) in even years only (2012, 2014, 2016, 2018), Tech United Eindhoven achieved its first win in an odd year at RoboCup 2019. This paper presents an evaluation of the tournament and describes the most notable scientific improvements made in preparation for it. These developments consist of our solution to (unforeseen) localisation problems and the improvements in the control architecture of our eight-wheeled robot. The progress on the shooting lever is elaborated, as well as the advancements in arbitrary ball detection made to improve our scoring during the Technical Challenge. Additionally, research towards the application of artificial intelligence in predicting the actions of opponents and recognizing the appearance of opponent robots is presented.
Keywords
- RoboCup soccer
- Middle-size league
- Multi-agent
- Localisation
- Over-actuated control
- Online calibration
- Arbitrary ball detection
- Opponent prediction
1 Introduction
Tech United Eindhoven represents the Eindhoven University of Technology in the RoboCup competitions. The team started participating in the Middle-Size League (MSL) in 2006 and has now played in the final of the world championship for 12 years, achieving first place five times: 2012, 2014, 2016, 2018 and 2019. At the moment of writing, the MSL team consists of 6 PhD students, 8 MSc students, 4 BSc students, 6 former TU/e students, 3 TU/e staff members and 2 members not related to TU/e. This paper describes the major scientific improvements to our soccer robots over the past year and elaborates on some of the main developments in preparation for the RoboCup 2019 tournament. First, in Sect. 2, an introduction to our fifth-generation soccer robot is given. This is followed by statistics of the latest tournament and by our solution for robustifying the localisation method, in Sects. 3 and 4 respectively. Section 5 focuses on our efforts to make better use of our sixth-generation soccer robot, the eight-wheeled platform. Section 6 continues with our progress in hardware and software towards a more reproducible shot. Section 7 describes two of our efforts to integrate methods from the domain of artificial intelligence into our robots. The paper is concluded in Sect. 8.
2 Robot Platform
Our robots, shown in Fig. 1, are named TURTLEs (an acronym for Tech United RoboCup Team: Limited Edition). Development of the TURTLEs started in 2005. Through tournaments and numerous demonstrations, these platforms have evolved into the fifth-generation TURTLE, a very robust platform. For an outline of our robot design, the reader is referred to the schematic representation in [1]. A detailed list of hardware specifications, along with CAD files of the base, upper body, ball handling and shooting mechanism, has been published on the ROP wiki. The software controlling the robots is divided into four main processes: Vision, Worldmodel, Strategy and Motion. These processes communicate with each other through a real-time database (RTDB) designed by the CAMBADA team [2]. More detailed information on the software can be found in [3].
3 RoboCup 2019 Statistics
Eight teams participated in the Middle-Size League tournament of RoboCup 2019, of which team IRIS from Indonesia and Robot Club Toulon from France participated for the first time. The other teams were from Japan (Hibikino-Musashi), China (NuBot and Water), Portugal (CAMBADA) and the Netherlands (Falcons and Tech United Eindhoven). Six of these eight teams participated in the main soccer competition, leading to a total of 34 matches, of which Tech United played 14. Compared to the 2018 tournament, the average number of goals per match increased from 6.3 to 7.0, while the number of opponent goals remained approximately the same: 0.9 goals per match on average this year versus 1.1 last year. Via 188 in-game passes and 146 passes during (re)starts of play, we produced 121 shots on goal. Based on odometry data, our robots drove more than a marathon in total, namely 46.9[km]. TURTLE 4 contributed most with 10.3[km], while the goalkeeper covered a modest 0.7[km].
Noteworthy is our improvement in localisation. As the omnivision system typically gives the most accurate estimate of the on-board sensors, the average fitting error of the line detection of the omnivision pose is used to validate the robot location. For a minimum of 50 and a maximum of 100 linepoints, the average fitting error is allowed to be 0.09[m]; the maximum number of linepoints is set to limit the computation time. Last year, the average localisation percentage dropped due to the increased field size: the TURTLEs managed to localise almost \(90\%\) of the time [4], with TURTLE 4 outperforming the other robots by localising \(98\%\) of the total time. This year, several improvements to the line detection were implemented, especially around the corners of the field. Together with the improved fusion of the gyroscope and accelerometer signals with the omnivision pose and odometry for the robot-pose estimate [5], significantly better results were achieved, with an average localisation of \(99.3\%\) of the time. The localisation performance of the individual robots varied between 98.0 and \(99.9\%\). Even though the TURTLEs are similar in hardware and software, individual differences can be due to role, calibration accuracy or total playing time. Figure 2 confirms these numbers: for both last year and this year, the correct (green) and missed locations (red) are indicated as a function of the position on the field. For both years, the data from the latest Round Robin up to and including the final were taken into consideration. For the 2019 tournament, the data from the first match in the latest Round Robin were excluded, compensating for the overtime in the final match. The missed locations are estimated based on the pose estimate from a Kalman filter using gyro data and wheel odometry. For visualisation purposes, the data of a single robot, in this case TURTLE 4, are shown.
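The validation rule described above can be sketched as follows. This is a minimal illustration with hypothetical names, not our actual implementation; the thresholds are the values stated in the text.

```python
# Sketch of the omnivision pose-validation rule (hypothetical names).
# A pose estimate is accepted when enough linepoints were found and
# their average fitting error is small enough.

MIN_LINEPOINTS = 50       # below this, too little evidence to validate
MAX_LINEPOINTS = 100      # cap, set to limit the computation time
MAX_AVG_FIT_ERROR = 0.09  # [m], maximum allowed average fitting error

def pose_is_valid(linepoint_errors):
    """linepoint_errors: per-linepoint distances [m] to the fitted field lines."""
    if len(linepoint_errors) < MIN_LINEPOINTS:
        return False
    # Only up to MAX_LINEPOINTS points are considered.
    used = linepoint_errors[:MAX_LINEPOINTS]
    return sum(used) / len(used) <= MAX_AVG_FIT_ERROR
```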
In both datasets, approximately 6.5[km] was covered. Due to the improvements, significantly fewer misses are observed, confirming the percentages mentioned previously. Interestingly, faster recovery behaviour is also observed, as the sequences of consecutive missed positions are shorter in Fig. 2(b) than in Fig. 2(a). This is due to fewer false positives and more true positives among the detected linepoints.
4 Robustified Localisation
At the start of RoboCup 2019, major localisation problems were observed, leading to unpredictable behaviour of our robots. This section elaborates on the issues that were encountered, followed by the solution that was implemented during the tournament.
4.1 Challenge
Our main localisation algorithm converts an omnivision camera image into a TURTLE pose through an optimisation over this pose, given a set of line points of the field. Due to the inherent symmetry of the lines of the soccer field, this algorithm needs an external input indicating the heading of the TURTLE with respect to the center of the opponent's goal in order to uniquely determine the TURTLE's pose on the field. As the two possible orientations for a set of field points are 180[degrees] apart, a relatively large error in the orientation estimate is allowed: empirical tests show that an error in the provided heading of up to approximately 30[degrees] does not hamper the optimisation. To this end, the integrated magnetometer of the Xsens MTi-3 was employed until this year to obtain such an estimate based on the magnetic field in the vicinity of the TURTLE. However, during RoboCup 2019, initial calibration of this sensor indicated that the magnetic field around the official field was heavily distorted, resulting in headings that were off by up to 70[degrees] and thus rendering the magnetometer measurements useless. This resulted in extended periods in which localisation was unavailable, combined with false positive TURTLE poses, i.e. wrong positions on the field about which the algorithm was confident. Efforts to mitigate the dependence on the magnetic field by fusing the gyroscope and magnetometer were unfruitful, as the distorted field not only had high variance, but was also heavily biased, varying with the position of the TURTLE.
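The role of the external heading input can be illustrated with a small sketch: given the two symmetric solutions of the line-point optimisation, the heading estimate selects the one it is closer to. The function below is an illustrative stand-in, not our actual code.

```python
import math

def angular_distance(a, b):
    """Smallest absolute angle [rad] between two headings."""
    return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

def disambiguate_heading(optimized_heading, external_heading):
    """Pick between the two symmetric pose solutions (180 deg apart) using an
    external heading estimate. Any estimate within +/-90 deg of the truth
    selects the correct branch, which is why a heading error of ~30 deg
    still leaves ample margin."""
    flipped = optimized_heading + math.pi
    if angular_distance(optimized_heading, external_heading) <= \
       angular_distance(flipped, external_heading):
        return optimized_heading
    # Wrap the flipped solution back into (-pi, pi].
    return math.atan2(math.sin(flipped), math.cos(flipped))
```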
4.2 Proposed Solution
The implemented solution consists of a stationary Kalman filter (i.e. a complementary filter) that fuses the gyroscope, which is unaffected by the distortion of the magnetic field, with the last confident omnivision orientation, as an adaptation of [5]. To process the next sample(s) of the optimisation algorithm on the omnivision-camera signal, the orientation estimate of the Kalman filter replaces the heading input previously provided by the magnetometer measurements. This strategy has the disadvantage that a single confident outlier in the omnivision orientation could, through the heading estimate of the Kalman filter, cause the estimated omnivision orientation to flip to the mirrored position on the field, i.e. the other orientation that also fits the line points. This would result in a false estimate of the TURTLE position, without an explicit possibility to recover. By facing the robot towards a specific goal when it is put on the field, and by assuming that the robot is stationary when initialised, the filter can be initialised with a low covariance on both its heading and its angular-velocity estimate. Afterwards, the filter determines its heading by integrating the gyroscope and fusing in the omnivision orientation. The gyroscope is the primary source of information, corresponding to a low covariance, while a high covariance is set on the omnivision orientation, as its main purpose is to limit the drift of the heading induced by integrating the gyroscope. As this drift is bounded by 0.01[degree/s], the covariance matrices are set such that the resulting time constant of the filter is between 100 and 1000[s]. The covariance matrices were tuned within this band to accommodate the rise time of the complementary filter, as the TURTLE will not be oriented towards its reference perfectly and the Kalman filter thus needs to converge to the sequence of found omnivision orientations after initialisation.
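The fusion described above can be sketched as a first-order complementary filter. This is a simplified illustration under the stated assumptions (gyro as primary source, low-gain omnivision correction with a time constant in the 100–1000[s] band); class and method names are hypothetical.

```python
import math

class HeadingFilter:
    """Stationary (complementary) heading filter: a simplified sketch of the
    fusion described in the text. The gyroscope is integrated at high rate;
    confident omnivision orientations pull the estimate back with time
    constant tau, bounding the ~0.01 deg/s gyro drift."""

    def __init__(self, initial_heading, tau=300.0):
        self.heading = initial_heading  # [rad], known at initialisation
        self.tau = tau                  # [s], chosen in the 100-1000 s band

    def predict(self, gyro_rate, dt):
        # Integrate the gyroscope: the primary (low-covariance) source.
        self.heading += gyro_rate * dt

    def correct(self, vision_heading, dt):
        # Low-gain correction towards the confident omnivision orientation,
        # equivalent to a high covariance on this measurement.
        alpha = dt / (self.tau + dt)
        err = math.atan2(math.sin(vision_heading - self.heading),
                         math.cos(vision_heading - self.heading))
        self.heading += alpha * err
```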
In combination with better line detection around corner points, a better reduction of outliers and improved tracking of the field line, the implemented solution ensured that our localisation by omnivision was successful \(99.3\%\) of the time, a significant improvement over the \(90\%\) achieved last year.
5 Eight-Wheeled Platform
Last year, the mechanical design, presented in Fig. 3(a), and the first version of the low-level control architecture of the eight-wheeled platform were presented [4]. The main advantages compared to the three-wheeled platform are the ability to apply the torque delivered by the motors in the desired direction of movement and the expectation that all wheels stay on the ground while accelerating. In this section, first the improvements in the low-level control are discussed. As the eight-wheeled platform is non-holonomic, while the current high-level software assumes the platform to be holonomic, some adaptations have to be made in order to play soccer. Ideas on how to make this connection are elaborated in Sect. 5.2.
5.1 Improved Low-Level Control Architecture
The platform consists of four units, each having two hub-drive wheels, and is thus five times over-actuated. As shown in Fig. 3(b), each pair of wheels can pivot around its suspension by actuating the corresponding wheels in opposite directions [4]. To actuate the platform, a kinematic control scheme was presented last year. The resulting performance was limited due to interaction between the wheels. As a result, the control architecture shown in Fig. 4 was designed.
For this controller, the aim is to manipulate the position x, y and orientation \(\phi \) of the center C of the platform. First, the lower-left part of Fig. 4 is considered, marked “Reference Generation of Pivots”. In this part of the controller, the reference angle \(\delta _{sp}\) of each wheel unit \(i = 1, 2, 3, 4\) is based on the desired platform velocity \(v_{sp} = [\dot{x} ~ \dot{y} ~ \dot{\phi }]^T\), using the inverse kinematics G of the platform. At this stage, the platform is still assumed to be holonomic. As this is not the case, and to prevent step responses, each pivot setpoint is smoothed with a second-order setpoint generator. Next, in the lower-right part of Fig. 4, the “Pivot Controllers” are indicated. Here, each reference angle of the pivots is controlled using a separate but identical position-to-force PD controller \(C_{\delta }\), where the torque \(\tau _w\) has an opposite direction for each wheel in a wheel unit. By designing a feed-forward \(FF_{\delta }\) for each pivot, the performance of these controllers is significantly increased. Using an estimate of the velocity \(v_w\) of each wheel and the rotation \(\delta _i\) of each wheel unit, the platform velocity v is determined using the forward kinematics J of the platform. This is required to control the motion of the platform, shown in the upper-left part of the figure. As the platform is non-holonomic, two velocity-to-torque PID controllers are used to control three degrees of freedom. The first controller \(C_t\) controls the translation by projecting the velocity errors \(\dot{e}_x\) and \(\dot{e}_y\) of the x and y directions onto the direction of the setpoint. This is considered the controllable direction, as the wheels are kinematically oriented according to this reference. The orientation is controlled using a separate controller \(C_r\). Finally, the wrench \(w_p\) is distributed among the eight wheels.
How to improve this distribution is ongoing research.
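The reference-generation step for the pivots can be illustrated with a simplified planar sketch of the inverse kinematics G: each wheel unit's pivot setpoint is the direction of that unit's velocity, which is the platform velocity plus the rotational contribution. The function name and geometry values are illustrative; the actual kinematics of the platform is more involved.

```python
import math

def pivot_setpoints(vx, vy, omega, unit_positions):
    """Simplified inverse-kinematics sketch: for each wheel unit at (px, py)
    relative to the platform center C, the unit velocity is the translational
    platform velocity (vx, vy) plus the rotational term omega x r; the pivot
    setpoint delta_sp is the direction of that velocity."""
    setpoints = []
    for px, py in unit_positions:
        ux = vx - omega * py  # 2D cross product omega x r, x-component
        uy = vy + omega * px  # 2D cross product omega x r, y-component
        setpoints.append(math.atan2(uy, ux))
    return setpoints
```

For a pure x-translation all pivots align with the x-axis, while for a pure rotation each pivot points tangentially around C, which matches the intuition behind the scheme in Fig. 4.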
5.2 Integration into High-Level Software
The software used for the current three-wheeled platforms is designed under the assumption that the platform is holonomic. As a result, the motion software attempts to control the position \(x, y, \phi \) of the platform. For the eight-wheeled platform, this is not possible due to the non-holonomic constraints set by the pivots: to position at a specific location on the field, reducing small positioning errors requires the pivots to change orientation continuously, which consumes a large amount of energy. However, in order to play soccer, it is not always necessary to control all three directions equally. If, for example, the task of a robot is to intercept a pass, it is important that it positions itself on the line of movement of the ball; it is less important whether it is positioned a bit closer to or further away from the ball. Therefore, as shown in Fig. 4, depending on the action set by the high-level strategy, the relevant directions are prioritized. This priority mode is communicated to the motion controller, where it is used to lower the velocity references in the non-relevant directions.
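One way to picture the priority mode is as a decomposition of the velocity reference into a prioritized direction and its perpendicular, with the latter scaled down. The sketch below is a hypothetical illustration of this idea (the function, the gain and its value are our own, not the TURTLE software); it assumes a nonzero priority direction.

```python
import math

def prioritized_velocity(v_ref, priority_dir, off_axis_gain=0.2):
    """Split the velocity reference v_ref = (vx, vy) into the component along
    the prioritized direction (e.g. the ball's line of movement for an
    intercept) and the perpendicular component, and scale the latter down so
    the pivots do not keep reorienting to chase small off-axis errors.
    priority_dir must be a nonzero vector."""
    dx, dy = priority_dir
    n = math.hypot(dx, dy)
    dx, dy = dx / n, dy / n
    along = v_ref[0] * dx + v_ref[1] * dy   # component to keep
    perp_x = v_ref[0] - along * dx          # component to de-prioritize
    perp_y = v_ref[1] - along * dy
    return (along * dx + off_axis_gain * perp_x,
            along * dy + off_axis_gain * perp_y)
```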
6 Accurate Shooting
During matches, it has been observed that the variation in the lob shots of the robots is too large, causing many shots on goal to miss [6]. This section elaborates on improvements to the shooting performance of the robots, making lob shots more reproducible. To this end, two solutions are proposed. The first solution aims at short-term improvement by adding a passive mechanism to the shooting mechanism. The second solution aims to improve the shooting performance over a longer period of time by using the RGBD camera as a feedback mechanism to calibrate the shooting mapping. This section is structured accordingly.
Thorough analysis of the causes of variation between lob shots showed that a significant amount of variation was caused by the shooting lever not returning to its initial position after each shot. This results in the lever having a different velocity when it impacts the ball: approximately 0.6[J] is lost per millimeter of lost stroke [7]. To eliminate this cause of variation, a passive mechanism was designed to retract the lever to its initial position, and to hold it there while the robot is driving.
Figure 5(a) shows schematically how a set of repelling magnets pushes the lever backwards when it has moved fully forward after a shot. A second set of magnets at the front of the solenoid, shown in Fig. 5(b), pulls the lever into the initial starting position and holds it there. Experiments and theoretical analysis showed no observable decrease in shooting power from this mechanism: the force exerted by the solenoid over the stroke of the actuator was compared to the force exerted by the added permanent magnets. The work done by the magnetic force of the permanent magnets is 0.11[J], which is 0.3% of the 32[J] of work generated by the solenoid [7] over the first 60[mm] of the stroke, where the solenoid is driving the ball. By implementing the permanent magnets, the reproducibility of the shot increased, as the standard deviation decreased from 0.32 to 0.08[m/s] for the ball's starting velocity and from 1.06 to 0.49[degrees] for its starting angle.
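The energy comparison above can be checked directly with the numbers from the text:

```python
# Work done by the added permanent magnets versus the solenoid over the
# driven part of the stroke (values from the text).
W_magnets = 0.11    # [J], work by the permanent magnets
W_solenoid = 32.0   # [J], work by the solenoid over the first 60 mm
ratio = W_magnets / W_solenoid
print(f"{ratio:.1%}")  # ~0.3%, i.e. a negligible loss in shooting power
```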
Over longer periods of time (weeks, months), small changes in the shooting system (lubrication, bearing performance) occur, which need to be compensated. To shoot at a specific target, the inverse dynamics of the shooting mechanism are modeled in order to calculate the required robot inputs from a desired initial ball velocity \(v_0\) and angle \(\alpha _0\). For this mapping, a second-order polynomial describing the robot shooting dynamics was chosen, since previous research has proven this to be sufficient [8]. Using an Extended Kalman Filter (EKF), the polynomial coefficients can be recursively updated after each shot, while taking the variation in the measured initial ball state into account. The state vector of the filter is \(x=[a_0\ a_1\ a_2\ a_3\ a_4\ a_5 ]^T\) with input \(u=[K\ L]^T\), where \(a_0, \ldots, a_5\) are the polynomial coefficients, and K and L are the shooting duty cycle and lever-height settings, respectively; the inputs of the shooting mechanism. Using the RGBD camera mounted on the robot, the realised initial ball velocity \(v_0\) and angle \(\alpha _0\) can be estimated in order to update the mapping.
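The recursive coefficient update can be sketched as follows. Since a polynomial model is linear in its coefficients, the EKF update reduces to a standard Kalman update here; the full second-order basis in K and L (six terms, matching the six-element state) and the measurement noise value are our assumptions for illustration.

```python
import numpy as np

def regressor(K, L):
    # Assumed full second-order polynomial basis in duty cycle K and
    # lever height L, giving six coefficients a0..a5.
    return np.array([1.0, K, L, K * K, K * L, L * L])

def ekf_update(a, P, K_in, L_in, v_measured, R=0.1 ** 2):
    """One recursive update of the shooting-map coefficients a (state) and
    covariance P after a shot. R models the variance of the RGBD-based
    initial-ball-velocity measurement (illustrative value)."""
    H = regressor(K_in, L_in)       # measurement Jacobian (row vector)
    y = v_measured - H @ a          # innovation
    S = H @ P @ H + R               # innovation variance (scalar)
    Kg = P @ H / S                  # Kalman gain
    a_new = a + Kg * y
    P_new = P - np.outer(Kg, H) @ P
    return a_new, P_new
```

An identical filter can be run for the ball-angle map; only the measurement and its noise model change.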
Figure 6 shows the estimated initial ball velocity \(v_0\) as a function of the robot inputs. A corresponding mapping is created for the initial ball angle \(\alpha _0\). These maps need to be inverted to obtain the inverse dynamics, from which the desired robot inputs for a specific shot are calculated.
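Since the two forward maps share the same inputs, one simple way to invert them is numerically: evaluate both maps on a grid of (K, L) inputs and pick the grid point whose predicted shot is closest to the desired one. This is a sketch under our own assumptions (grid lookup, unweighted squared error over quantities with different units), not necessarily the inversion method used on the robots.

```python
import numpy as np

def invert_maps(v_map, a_map, K_grid, L_grid, v_des, a_des):
    """Given forward maps of initial ball velocity v_map and angle a_map
    evaluated on a grid of inputs (duty cycle K, lever height L), return the
    grid point whose predicted (v0, alpha0) is closest to the desired shot.
    In practice the two error terms should be weighted, as they have
    different units."""
    err = (v_map - v_des) ** 2 + (a_map - a_des) ** 2
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return K_grid[i], L_grid[j]
```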
7 Artificial Intelligence
This section describes our efforts on arbitrary ball detection in Subsect. 7.1 and towards learning opponent behavior in Subsect. 7.2. The latter is a continuation of our work on recognizing robots in camera images [9] and predicting robot behavior [10], as described in [3, 4]. In the final paragraph of this section, current research on how to learn the response to these game situations is elaborated.
7.1 Arbitrary Ball Detection
Ball detection is currently based on upper and lower bounds on the YUV values of the ball's color. This detection method works smoothly in most cases, but it comes with limitations. For example, changes in brightness make the calibration of the upper and lower bounds less accurate, the algorithm is more likely to make false detections if the color of the ball is similar to its environment, and the algorithm is only capable of detecting single-colored balls. To solve these inaccuracies and make the detection more robust, an approach using a machine-learning algorithm is proposed. The new ball-detection algorithm uses multiple convolutional layers to create feature maps of the camera frame. These feature maps are then used to localise potential ball locations. Finally, the potential ball locations are classified by a neural network, providing the probability of a ball being present at the corresponding location. Figure 7 visually represents this algorithm.
A speed of 13 frames per second, using only one CPU core, was achieved with an accuracy of \(80\%\) on images containing different balls with variable brightness and colors. Further research will focus on implementation: the potential of a Jetson board will be explored and, if necessary, other ways of integration will be analysed.
7.2 Opponent Behavior
The arbitrary ball recognition may be combined with previous work on recognizing robots [9] and forms the basis for a system to learn the behavior of competing teams and use this in the simulator of Tech United. The logfiles collected by the TURTLEs during a game are recorded at a frequency of about 10–30[Hz]. In these data, Game Turnover Points (GTPs) are identified: points where a switch is made between attacking and defending. Such situations typically occur when the ball is lost or regained, or when a shot at the goal is attempted. Using these GTPs, all successful or failing episodes during a game are found. Episodes describe a game moment as a series of successive steps; the number of steps of an episode varies from just a few to sometimes several hundred.
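The episode extraction can be sketched as follows: scanning the time-ordered log and closing the running episode at every turnover. The frame representation (a boolean `attacking` flag per sample) is a hypothetical simplification of the actual logfile contents.

```python
def split_into_episodes(frames):
    """Split a time-ordered list of log frames into episodes at Game Turnover
    Points (GTPs). Each frame is a dict with a boolean 'attacking' flag
    (hypothetical field name); a GTP is any sample where possession
    switches, and episodes are the spans between GTPs."""
    episodes = []
    current = []
    for frame in frames:
        if current and frame['attacking'] != current[-1]['attacking']:
            episodes.append(current)  # turnover: close the running episode
            current = []
        current.append(frame)
    if current:
        episodes.append(current)
    return episodes
```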
Having collected these episodes, two functions are performed. First, a Game Situation Image (GSI) is created for each step of an episode, analogous to the work of Van 't Klooster [10]. Then all episodes are classified according to their status: attack or defend, own half or opponent half, our ball or opponent ball, and the starting situation, such as kickoff or throw-in. For each step in an episode, the movements of the four field players of a team are calculated as single steps in the X and Y directions, forming a trail for each agent. The GSI and the agent trails are then fed into a neural network, which learns each agent's role during an episode. This way, the network learns how a team responds to the various game situations.
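The trail construction per agent can be sketched as below: per step, each agent's movement is reduced to a single (dx, dy) step. The episode representation (a `positions` dict per step, mapping agent id to an (x, y) tuple) is a hypothetical simplification for illustration.

```python
def agent_trails(episode):
    """Build one trail of (dx, dy) steps per agent from an episode, i.e. a
    time-ordered list of steps. Each step is a dict with a 'positions' field
    (hypothetical name) mapping agent id to an (x, y) field position."""
    trails = {aid: [] for aid in episode[0]['positions']}
    for prev, cur in zip(episode, episode[1:]):
        for aid, (x, y) in cur['positions'].items():
            px, py = prev['positions'][aid]
            trails[aid].append((x - px, y - py))  # single step in X and Y
    return trails
```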
Once the network has learned a team's behavior for every game situation, we can use it to pit two teams against each other, or to pit newly developed TURTLE behaviors against several opponents and thereby find weaknesses in our strategy. Currently, the project is in its early stages and the Game Situation Analysis is being developed and tested.
8 Conclusions
In this paper, we have described the major scientific improvements to our soccer robots in preparation for the RoboCup 2019 tournament and the results achieved during the tournament. Not all of the developments actively contributed to the result, but the methods developed will be integrated in preparation for future tournaments. By replacing the dependency on the magnetometer in the localisation algorithm with an initialization procedure when putting the robots on the field, the localisation performance was made more robust. Our developments in the control of the eight-wheeled platform extend to the platform and strategy level, leading to its introduction during the Portuguese Open 2019. The standard deviation of our shots was reduced by integrating magnets into the solenoid assembly, leading to increased reproducibility. For arbitrary ball detection, we have verified the feasibility of a neural network on the TURTLE's computational units. The research on modeling opponents continues; currently, the classification process is being revisited. Altogether, we hope our progress contributes to an even higher level of dynamic and scientifically challenging robot soccer, while maintaining the attractiveness of our competition for a general audience. In this way, we hope to stay at the top of the Middle-Size League for some more years and contribute to our goal of beating the human World Champions in soccer by 2050!
References
Martinez, C.L., et al.: Tech United Eindhoven, Winner RoboCup 2014 MSL. In: Bianchi, R.A.C., Akin, H.L., Ramamoorthy, S., Sugiura, K. (eds.) RoboCup 2014. LNCS (LNAI), vol. 8992, pp. 60–69. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18615-3_5
Almeida, L., Santos, F., Facchinetti, T., Pedreiras, P., Silva, V., Lopes, L.S.: Coordinating distributed autonomous agents with a real-time database: the CAMBADA project. In: Aykanat, C., Dayar, T., Körpeoğlu, İ. (eds.) ISCIS 2004. LNCS, vol. 3280, pp. 876–886. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30182-0_88
Schoenmakers, F., et al.: Tech United Eindhoven Team Description 2017 (2017). https://www.techunited.nl/media/images/Publications/TDP_2017.pdf
Douven, Y., et al.: Tech United Eindhoven middle size league winner 2018. In: Holz, D., Genter, K., Saad, M., von Stryk, O. (eds.) RoboCup 2018. LNCS (LNAI), vol. 11374, pp. 413–424. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-27544-0_34
Kon, J., Houtman, W., Kuijpers, W., van de Molengraft, R.: Pose and velocity estimation for soccer robots. Student Undergraduate Research E-journal! 4 (2018). https://doi.org/10.25609/sure.v4.2840
Kengen, C.M., Douven, Y.G.M., Van De Molengraft, M.J.G.: Towards a more reproducible shot with the tech united soccer robots. Master’s thesis, Eindhoven (2018). Reportnumber: CST 2018.097
Meessen, K.J., Paulides, J.J.H., Lomonova, E.A.: A football kicking high speed actuator for a mobile robotic application. In: IECON 2010–36th Annual Conference on IEEE Industrial Electronics Society, pp. 1659–1664. IEEE, November 2010. https://doi.org/10.1109/IECON.2010.5675433, http://alexandria.tue.nl/openaccess/Metis245159.pdf
Senden, J., Douven, Y., van de Molengraft, R.: A model-based approach to reach a 3D target with a soccer ball, kicked by a soccer robot. Master’s thesis, Eindhoven University of Technology (2016). Reportnumber: CST 2016.078. https://www.techunited.nl/media/images/Publications/StudentReports/July2016/0716549-Jordy.pdf
Van Lith, P., van de Molengraft, M., Dubbelman, G., Plantinga, M.: A minimalistic approach to identify and localize robots in RoboCup MSL soccer competitions in real-time. Technical report. http://www.techunited.nl/uploads/Minimalist%20MSL%20Robot%20Location%205.0.pdf
Van ’t Klooster, M., Nijmeijer, H., Dubbelman, G.: Deep learning for opponent action prediction in robot soccer middle size league. Master’s thesis, Eindhoven University of Technology (2018). Reportnumber: DC 2018.050
© 2019 Springer Nature Switzerland AG
Houtman, W. et al. (2019). Tech United Eindhoven Middle-Size League Winner 2019. In: Chalup, S., Niemueller, T., Suthakorn, J., Williams, MA. (eds) RoboCup 2019: Robot World Cup XXIII. RoboCup 2019. Lecture Notes in Computer Science(), vol 11531. Springer, Cham. https://doi.org/10.1007/978-3-030-35699-6_42