Link to original content: https://pubmed.ncbi.nlm.nih.gov/33803889
Review

Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review

De Jong Yeong et al. Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.

Abstract

With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors, and it must be performed correctly before sensor fusion and obstacle detection processes can be implemented. This paper evaluates the capabilities and technical performance of sensors commonly employed in autonomous vehicles, focusing primarily on a large selection of vision cameras, LiDAR sensors, and radar sensors, and on the various conditions in which such sensors operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The paper therefore provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and proposing possible future research directions for automated driving systems.

Keywords: autonomous vehicles; calibration; camera; lidar; obstacle detection; perception; radar; self-driving cars; sensor fusion.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
An overview of the six distinct levels of driving automation described in the Society of Automotive Engineers (SAE) J3016 standard. Readers interested in comprehensive descriptions of each level are advised to refer to SAE International. Figure redrawn and modified based on depictions in [7].
Figure 2
Architecture of an autonomous driving (AD) system from (a) a technical perspective that describes the primary hardware and software components and their implementations; (b) a functional perspective that describes the four main functional blocks and the flow of information, based on [15].
Figure 3
An example of the type and positioning of sensors in an automated vehicle to enable the vehicle's perception of its surroundings. Red areas indicate the LiDAR coverage, grey areas show the camera coverage around the vehicle, blue areas display the coverage of short-range and medium-range radars, and green areas indicate the coverage of long-range radar, along with the applications the sensors enable, as depicted in [32] (redrawn).
Figure 4
Visualization (before correction for several degrees of sensor misalignment) of false-positive detections in current exploratory research. The colored points in the point cloud visualization represent LiDAR point cloud data, and the white points represent radar point cloud data. Several false-positive radar detections are highlighted by the grey rectangle, located approximately 5–7 m from the radar sensor. The radar sensor in the present setup is in short-range mode (maximum detection range of 19 m); hence, the traffic cone located at 20 m is not detectable.
Figure 5
The structure of the Multi-Sensor Data Fusion (MSDF) framework for n given sensors. It consists of a sensor alignment process (estimation of the calibration parameters: the rotation matrix and translation vector) and an object detection process containing n processing chains, each of which provides a list of the detected obstacles. Figure redrawn based on depictions in [118], but with the inclusion of an intrinsic calibration process.
Figure 6
A graphical representation of the pinhole camera. The pinhole (aperture) restricts the light rays from the target that enter the camera, thereby affecting the brightness of the captured image during image formation. A large pinhole (a wide opening) results in a brighter image that is less sharp, due to blurriness in both the background and foreground. Figure redrawn based on depictions in [132,133].
Figure 7
The pinhole camera model from a mathematical perspective. The optical axis (also referred to as the principal axis) aligns with the Z-axis of the camera coordinate system (ZC), and the intersection between the image plane and the optical axis is referred to as the principal point (cx, cy). The pinhole opening serves as the origin (O) of the camera coordinate system (XC, YC, ZC), and the distance between the pinhole and the image plane is referred to as the focal length (f). Computer vision convention uses a right-handed system with the z-axis pointing toward the target from the direction of the pinhole opening, the y-axis pointing downward, and the x-axis pointing rightward. Conventionally, from a viewer's perspective, the origin (o) of the 2D image coordinate system (x, y) is at the top-left corner of the image plane, with the x-axis pointing rightward and the y-axis downward. The (u, v) coordinates on the image plane refer to the projection of points in pixels. Figure redrawn based on depictions in [125,134,135].
Figure 8
The most commonly employed patterns for camera calibration. (a) A 7 rows × 10 columns checkerboard pattern. The calibration uses the interior vertex points of the checkerboard pattern; thus, the checkerboard in (a) provides 6 × 9 interior vertex points (some of which are circled in red) for calibration. (b) A 4 rows × 11 columns asymmetrical circular grid pattern. The calibration uses information from circle (or "blob", in image processing terms) detection to calibrate the camera. Other planar patterns include the symmetrical circular grid and ChArUco patterns (a combination of a checkerboard pattern and an ArUco pattern) [128,137,141]. Figures sourced from OpenCV and modified.
Figure 9
The proposed calibration target design for jointly extrinsically calibrating multiple sensors (radar, camera, LiDAR). It consists of four circular, tapered holes centrally located within a large rectangular board at the (a) front of the board, and a metallic trihedral corner reflector (circled in orange) located between the four circles at the (b) rear of the board. Figure sourced from [146,147] and modified.
Figure 10
A graphical representation of the vertical laser points of the (a) Velodyne HDL-64E and the (b) Velodyne VLP-32C. Reference [145] utilizes the Velodyne HDL-64E, which consists of 64 channels (layers) whose vertical laser beams are distributed uniformly across the vertical FoV between −24.9° and 2°. The initial sensor configuration employed by the current authors [22] uses the Velodyne VLP-32C, which consists of 32 channels (layers) whose vertical laser beams are concentrated around the optical center across the vertical FoV between −25° and 15°. Based on the sensor user manuals [68].
Figure 11
The proposed triangular calibration target design for the spatial-temporal calibration of the sensors (camera, radar, LiDAR). (a) The front view of the calibration board, which consists of a printed AprilTag marker approximately 17 cm in length. (b) The trihedral corner reflector attached to the rear of the triangular board, the inner sides of which are overlaid with aluminum foil. The calibration target in the figure is constructed based on references [169,170,171] and through personal communication [172].

References

    1. World Health Organization. Global Status Report on Road Safety. WHO; Geneva, Switzerland: 2018.
    2. Road | Mobility and Transport. [(accessed on 20 November 2020)]; Available online: https://ec.europa.eu/transport/themes/its/road_it.
    3. Autonomous Vehicle Market to Garner Growth 63.5%. [(accessed on 19 November 2020)]; Available online: https://www.precedenceresearch.com/autonomous-vehicle-market.
    4. Glon R., Edelstein S. The History of Self-Driving Cars. 2020. [(accessed on 18 November 2020)]; Available online: https://www.digitaltrends.com/cars/history-of-self-driving-cars-milestones/
    5. Wiggers K. Waymo’s Autonomous Cars Have Driven 20 Million Miles on Public Roads. 2020. [(accessed on 18 November 2020)]; Available online: https://venturebeat.com/2020/01/06/waymos-autonomous-cars-have-driven-20...