Publications
2018: Abstract: Almost weekly, news broadcasts report on earthquakes, hurricanes, tsunamis, or forest fires. While such news is hard to watch, it is even harder for rescue teams to enter the affected areas. They need to gain a quick overview of the devastated area and find victims, and time is critical, since the chance of survival shrinks the longer it takes until help arrives. To coordinate the teams efficiently, all information needs to be collected at the command center. The teams therefore search the destroyed houses and hollow spaces for victims, and they can never be sure that a building will not collapse completely while they are inside. Here, rescue robots are welcome helpers, as they are replaceable and make the work safer. Unfortunately, rescue robots are not yet usable off the shelf. There is no doubt that such a robot has to fulfil essential requirements to accomplish a rescue mission successfully. Apart from the mechanical requirements, it has to be able to build a 3D map of the environment, which is essential for navigating through rough terrain and fulfilling manipulation tasks (e.g. opening doors). To build a map and gather environmental information, robots are equipped with multiple sensors. Since laser scanners produce precise measurements over a wide scanning range, they are the visual sensors most commonly utilized for mapping. Unfortunately, they produce erroneous measurements when scanning transparent objects (e.g. glass, transparent plastic) or specular reflective objects (e.g. mirrors, shiny metal). Such objects can be anywhere, and manipulating the environment beforehand to prevent their influence is impossible; using additional sensors also bears risks. The problem is that these objects are only occasionally visible, depending on the incident angle of the laser beam, the surface, and the type of object. For transparent objects, measurements may result from the object surface or from objects behind it. For specular reflective objects, measurements may result from the object surface or from a mirrored object; the mirrored objects appear behind the surface, which is wrong. To obtain a precise map, the surfaces need to be recognised and mapped reliably; otherwise, the robot navigates into them and crashes. Further, points behind the surface should be identified and treated according to the object type: points behind a transparent surface should remain, as they represent real objects, while points behind a specular reflective surface should be erased. To do so, the object type needs to be classified. Unfortunately, none of the current approaches is capable of fulfilling these requirements. This thesis therefore addresses the problem of detecting transparent and specular reflective objects and identifying their influences. To give the reader a starting point, the first chapters describe the theoretical background concerning the propagation of light, the sensor systems applied for range measurements, the mapping approaches used in this work, and the state of the art in the detection and identification of transparent and specular reflective objects. Afterwards, the Reflection-Identification-Approach, the core of this thesis, is presented; it comprises a 2D and a 3D implementation to detect and classify such objects, both available as ROS nodes. Various experiments then demonstrate the applicability and reliability of these nodes and prove that transparent and specular reflective objects can be detected and classified. In 2D, a Pre- and a Post-Filter module are required for this, whereas in 3D classification is possible with the Pre-Filter alone, owing to the higher number of measurements. An example shows that an updatable mapping module allows the robot navigation to rely on refined maps; otherwise, two individual maps are built, which then require fusion. Finally, the last chapter summarizes the results and proposes suggestions for future work.
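The treatment rule at the end of this abstract can be made concrete. Below is a minimal sketch, assuming the detector has already fitted a surface plane (unit normal pointing towards the scanner, plus offset) and classified its type; the function and constants are illustrative names, not the thesis code:

    import numpy as np

    TRANSPARENT, REFLECTIVE = 0, 1  # hypothetical type labels

    def filter_behind_surface(points, normal, d, surface_type):
        """points: (N, 3) array; plane: normal . x + d = 0, |normal| = 1."""
        signed_dist = points @ normal + d   # < 0 means behind the surface
        if surface_type == TRANSPARENT:
            # Points behind glass are real objects seen through it: keep all.
            return points
        # Points behind a mirror are phantoms of mirrored objects: erase them.
        return points[signed_dist >= 0.0]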
2017: Abstract: A favoured sensor for mapping is the 3D laser scanner, since it offers a wide scanning range, delivers precise measurements, and is usable indoors and outdoors. A mapping module based on it delivers detailed, high-resolution maps which make safe navigation possible. Difficulties arise from transparent and specular reflective objects, which cause erroneous and dubious measurements: at such objects, depending on the incident angle, measurements result from the object surface, from an object behind the transparent surface, or from an object mirrored with respect to the reflective surface. This paper describes an enhanced Pre-Filter-Module to distinguish between these cases. Two experiments demonstrate its usability and show that the identification of the mentioned objects in 3D is possible from single scans. The first experiment was made in an empty room with a mirror, the second in a stairway containing a glass door. Further, the results show that a discrimination between a specular reflective and a transparent object is possible. Especially for transparent objects, the detected size is restricted by the incident angle. That is why future work concentrates on implementing a Post-Filter module: experience gained shows that collecting the data of multiple scans and post-processing them as soon as the object has been passed will improve the map.

Abstract: Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Stereo cameras or Time-of-Flight (ToF) cameras are one way to achieve this; unfortunately, they suffer from drawbacks which make proper mapping difficult. Therefore, costly 3D laser scanners are used. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, and the pose of each line scan needs to be determined to generate the point cloud. The pose consists of the encoder feedback as well as parameters resulting from a calibration. Using external sensor systems is a common method to determine these calibration parameters, but it is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to determine the required parameters; this setup is light, small, and easy to transport, so an out-of-lab calibration is possible. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted on its inside. The algorithm needs to be provided with a dataset of a single rotation of the laser scanner, and to achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas, the algorithm determines the individual deviations of the placed laser scanner, solving the formulas in an iterative process in order to minimize errors. To verify the algorithm, the laser scanner was mounted in different ways: the scanner position and the rotation axis were modified, and every introduced deviation was compared with the algorithm's result. Several measurement settings were tested repeatedly. Additionally, the length deviation of the laser scanner was determined, as it has an increased influence on the deviations during the measurement.
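The point cloud assembly described above is essentially one rigid transform per line scan. A minimal sketch under simplified assumptions (servo rotation about the scanner's x-axis, calibration reduced to a single translational offset of the scan plane; names and geometry are placeholders, not the paper's model):

    import numpy as np

    def line_scan_to_3d(ranges, bearings, servo_angle, offset=np.zeros(3)):
        # 2D scan points in the scanner frame (scan plane = x-y plane),
        # shifted by the calibrated offset of the plane from the rotation axis.
        pts = np.stack([ranges * np.cos(bearings),
                        ranges * np.sin(bearings),
                        np.zeros_like(ranges)], axis=1) + offset
        c, s = np.cos(servo_angle), np.sin(servo_angle)
        R = np.array([[1.0, 0.0, 0.0],   # rotation about the servo (x) axis
                      [0.0,   c,  -s],
                      [0.0,   s,   c]])
        return pts @ R.T

    # cloud = np.vstack([line_scan_to_3d(r, b, a) for r, b, a in scans])

The calibration then amounts to estimating the offset (and any angular misalignments) such that the assembled cloud of the hemisphere matches its known shape.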
Abstract: 3D laser scanners are favoured sensors for mapping in mobile service robotics, for indoor and outdoor applications, since they deliver precise measurements at a wide scanning range. The resulting maps are detailed, owing to their high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g. mirrors, windows, or shiny metals, the laser measurements are corrupted. Depending on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, or a measurement of a reflected object. It is important to detect such situations in order to handle the corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in the point clouds of a multi-echo laser scanner. Furthermore, it filters the point clouds from the influences of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative-Closest-Point algorithm; object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the corners of the object surface and the type of the surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first was made in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that for single scans the detection of specular reflective and transparent objects in 3D is possible, and that it is more reliable in 3D than in 2D. Nevertheless, collecting the data of multiple scans and post-filtering them as soon as the object has been passed should be pursued, which is why future work concentrates on implementing a Post-Filter module. Besides, the aim is to improve the discrimination between specular reflective and transparent objects.

Abstract: Mapping with laser scanners is the state-of-the-art method applied in service, industrial, medical, and rescue robotics. Although a lot of research has been done, maps still suffer from interferences caused by transparent and specular reflective objects: glass, mirrors, and shiny or translucent surfaces cause erroneous measurements, depending on the incident angle of the laser beam. In past experiments, the Mirror Detector approach was implemented to determine such measurements with a multi-echo laser scanner; it recognises them from differences in the recorded distances of the echoes. This paper describes the research into distinguishing between reflective and transparent objects. The Mirror Detector was specifically modified for the recognition of said objects, and four experiments were conducted: one to show the map of the original Mirror Detector; two to investigate intensity characteristics based on angle, distance, and material; and one to show an applied discrimination with the extended version of the Mirror Detector, the Reflection Classifier approach. To verify the results, a comparison with existing models was performed. This study showed that shiny metals, like aluminium, provide significant characteristics, while mirrors are best characterized by a mixed model of glass and shiny metal. Transparent objects turned out to be challenging, because their appearance in the sensor data strongly depends on the background. Nevertheless, these experiments show that a discrimination of transparent and reflective materials based on the reflected intensity is possible and feasible.
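As a rough illustration of the intensity-based discrimination studied here, the sketch below compares a measured echo intensity against per-material reference curves over the incident angle; the curves and constants are invented placeholders, not the models fitted in the paper:

    import numpy as np

    # Hypothetical reference models: intensity as a function of incident angle.
    REFERENCE = {
        "shiny_metal": lambda angle: 2000.0 * np.cos(angle),
        "glass":       lambda angle:  400.0 * np.cos(angle) ** 2,
    }

    def classify_material(intensity, incident_angle):
        # Pick the material whose predicted intensity is closest to the reading.
        return min(REFERENCE,
                   key=lambda m: abs(REFERENCE[m](incident_angle) - intensity))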
2016: Abstract: This publication describes a 2D Simultaneous Localization and Mapping approach applicable to multiple mobile robots. The presented strategy uses the data of 2D LIDAR sensors to build a dynamic representation based on Signed Distance Functions. Novelties of the approach are a joint map built in parallel, instead of an occasional merging of smaller maps, and a limited-drift localization which requires no loop closure detection. A multi-threaded software architecture performs registration and data integration in parallel, allowing for drift-reduced pose estimation of multiple robots. Experiments demonstrate the application to single- and multi-robot mapping using simulated data, publicly accessible recorded data, two actual robots operating in a comparably large area, and a deployment of these units in the RoboCup Rescue league.

2015: Abstract: Laser scanners are state-of-the-art devices used for mapping in service, industrial, medical, and rescue robotics. Although a lot of work has been done in laser-based SLAM, maps still suffer from interferences caused by objects like glass, mirrors, and shiny or translucent surfaces. Depending on the surface's reflectivity, a laser beam is deflected such that the returned measurements provide wrong distance data, and phantom-like objects appear at certain positions. This paper describes a specular reflectance detection approach applicable to the emerging technology of multi-echo laser scanners in order to identify and filter reflective objects. Two filter stages are implemented: the first reduces errors in the current scan on the fly; the second evaluates a set of laser scans and is triggered as soon as a reflective surface has been passed. This makes the reflective surface detection more robust and is used to refine the registered map. Experiments demonstrate the detection and elimination of reflection errors and show improved localization and mapping in environments containing mirrors and large glass fronts.

Abstract: Simultaneous Localization and Mapping (SLAM) is essential for a mobile robot: localizing itself and obtaining information about the environment enables the robot to interact with its surroundings. For this reason, different SLAM approaches are used in the robotics community. In the RoboCup Rescue challenge, most teams use the Hector SLAM or GMapping approach. Since it is essential to obtain accurate estimates of the robot's position and of the surrounding environment, the aim of this paper is to compare those approaches with the tsd slam approach, which was developed in recent years at the Nuremberg Institute of Technology (NIT). Finally, this evaluates the quality of our SLAM approach in comparison with other state-of-the-art approaches.
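The Signed Distance Function representation used in these approaches can be illustrated with a textbook-style map update. The following minimal sketch integrates a single range measurement into a truncated signed distance grid along one beam; the truncation radius and weighting scheme are assumed values, not the tsd slam implementation:

    import numpy as np

    TRUNCATION = 0.3   # metres; assumed truncation radius

    def integrate_beam(sdf, weights, cell_dists, measured_range):
        """cell_dists: distance of each grid cell from the sensor along the beam."""
        signed = measured_range - cell_dists   # > 0 in front of, < 0 behind surface
        mask = signed > -TRUNCATION            # ignore cells far behind the hit
        sd = np.clip(signed[mask], -TRUNCATION, TRUNCATION)
        # A running weighted average fuses repeated observations of the same
        # cells, which is what lets several robots write into one joint map.
        w = weights[mask]
        sdf[mask] = (sdf[mask] * w + sd) / (w + 1.0)
        weights[mask] = w + 1.0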
Abstract: This publication describes a 2D Simultaneous Localization and Mapping approach applicable to multiple mobile robots. The presented strategy uses the data of 2D LIDAR sensors to build a dynamic representation based on Signed Distance Functions. A multi-threaded software architecture performs registration and data integration in parallel, allowing for drift-reduced pose estimation of multiple robots. Experiments demonstrate the application to single- and multi-robot mapping using simulated data, publicly accessible recorded data, as well as two actual robots operating in a comparably large area.

2014: Abstract: This paper describes a data fusion approach for 3D sensors that exploits the assets of the signed distance function. The object-oriented model is described as well as the algorithm design. We developed a framework respecting different modalities for multi-sensor fusion, 3D mapping, and object localization. The approach is suitable for industrial applications that require contact-less object localization, such as bin picking. In experiments, we demonstrate 3D mapping as well as the fusion of a structured-light sensor with a Time-of-Flight (ToF) camera.

2012: Abstract: This paper focuses on range image registration for robot localization and environment mapping. It extends the well-known Iterative Closest Point (ICP) algorithm in order to deal with erroneous measurements. The handling of measurement errors originating from external lighting, occlusions, or limitations in the measurement range is addressed only rudimentarily in the literature. In this context, we present a non-parametric extension to the ICP algorithm that is derived directly from the measurement modalities of sensors in projective space. We show how aspects of reverse calibration can be embedded in search-tree-based approaches. Experiments demonstrate the applicability to range sensors like the Kinect device, Time-of-Flight cameras, and 3D laser range finders. As a result, the image registration becomes faster and more robust.
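The reverse-calibration idea can be sketched as projective data association: instead of a search-tree lookup, each model point is projected through the sensor model into the depth image, and the measurement stored at that pixel becomes its correspondence. A generic sketch with an assumed pinhole model (fx, fy, cx, cy), not the paper's implementation:

    import numpy as np

    def projective_match(model_pts, depth, fx, fy, cx, cy):
        """Pair each 3D model point with the depth pixel it projects onto."""
        matches = []
        h, w = depth.shape
        for p in model_pts:                        # p = (x, y, z) with z > 0
            u = int(round(fx * p[0] / p[2] + cx))
            v = int(round(fy * p[1] / p[2] + cy))
            if 0 <= u < w and 0 <= v < h and depth[v, u] > 0.0:
                z = depth[v, u]                    # back-project the pixel
                q = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
                matches.append((p, q))
        return matches

Compared with a kd-tree search, this association is constant-time per point, which is one reason such projective variants make registration faster for dense range sensors.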