Efficient Traffic Video Dehazing Using Adaptive Dark Channel Prior and Spatial–Temporal Correlations

College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(7), 1593; https://doi.org/10.3390/s19071593
Submission received: 6 March 2019 / Revised: 26 March 2019 / Accepted: 26 March 2019 / Published: 2 April 2019
(This article belongs to the Special Issue Intelligent Transportation Related Complex Systems and Sensors)

Abstract

In order to restore traffic videos with different degrees of haziness in a real-time and adaptive manner, this paper presents an efficient traffic video dehazing method using adaptive dark channel prior and spatial-temporal correlations. The method uses a haziness flag based on the dark channel prior to measure the degree of haziness in an image, and then obtains an adaptive initial transmission value by establishing the relationship between the image contrast and the haziness flag. In addition, it exploits the spatial and temporal correlations in traffic videos to speed up the dehazing process and to optimize the block structure of the restored videos. Extensive experimental results show that the proposed method has superior haze removal and color balancing capabilities for images with different degrees of haze, and it can restore degraded videos in real time: a video with a resolution of 720 × 592 is restored at about 57 frames per second, nearly four times faster than the dark-channel-prior-based method and about twice as fast as the image-contrast-enhanced method.

1. Introduction

Today, traffic video analysis plays a very important role in intelligent transportation systems. It has become a common way to help people track a vehicle, as well as locate and judge an accident. Because the images captured by outdoor cameras are often affected by weather conditions, they suffer from poor visibility and lack of contrast. In the literature, many enhancement and dehazing algorithms have been proposed to improve different kinds of images, such as traffic videos, underwater images, and satellite imagery [1,2,3]. Hazy weather, which occurs frequently all over the world, severely hinders video analysis: the haze captured in a video degrades contrast and color information and reduces visibility. Therefore, the problem of how to efficiently and effectively remove haze from traffic videos has attracted broad attention from both academia and industry. When dealing with haze removal in traffic videos, existing dehazing algorithms often exhibit poor real-time performance and overstretched contrast, and they may even fail to remove dense haze. The key issue behind these problems is how to deal with images of different scenes with different degrees of haze; an adaptive algorithm that removes haze based on the image characteristics is therefore needed. Moreover, existing video-dehazing methods are mostly generic for all videos and do not consider the characteristics of videos in particular scenarios. For traffic videos, the time continuity, lane space structure, and camera spatial locations can be effectively exploited to decrease the computational cost.
In order to restore traffic videos with different degrees of haziness in a real-time and adaptive manner, this paper presents an efficient traffic video dehazing method using adaptive dark channel prior and spatial-temporal correlations. This method can avoid overstretched contrast after haze removal and obtain satisfactory restored results for dense hazy videos by using a novel approach involving adaptive transmission estimation. This method also takes full advantage of the temporal and spatial correlations in traffic videos to meet the requirements of real-time dehazing, such as using time continuity to set the time slice, refining transmission by characteristics of block structure, decreasing restored area according to the lane space, and simplifying the calculation of parameters by using multi-camera distribution.

2. Related Works

Essentially, videos are composed of frames, so haze removal methods for images can also be used for videos. Physical-model-based image dehazing is the most common way to restore hazy images: it treats restoration as the inverse of image degradation and describes the degradation process in detail through an established physical model. The most critical step of such methods is to obtain the parameters of the degradation model. Oakley et al. [4] improved image quality by using the physical model and estimated the degradation model parameters with a statistical model; this method is not widely used because it only works for gray-scale images, and acquiring the parameters requires calibrated radar to obtain depth information. Narasimhan et al. [5] proposed estimating the depth information by comparing two images of the same scene under different weather conditions. Chen et al. [6] used a sunny image and a foggy image as reference images to calculate the parameters. Both of these methods need suitable reference images in advance, which increases the difficulty of image acquisition.
To obtain the parameters of the degradation model effectively, dehazing methods based on prior knowledge or assumptions have been proposed; they need neither reference images in advance nor additional hardware, and therefore adapt better than the previous methods. Based on the assumption that a haze-free image has higher contrast than a hazy one, Tan [7] proposed a haze removal approach that maximizes the contrast of the recovered scene radiance. This approach can produce satisfactory results for single-image haze removal, but it tends to overcompensate for the reduced contrast and produces halo effects. Fattal [8] decomposed the scene radiance of an image into albedo and shading and then estimated the scene radiance via independent component analysis, assuming that transmission shading and surface shading are locally uncorrelated; however, this method cannot generate impressive results when the captured image is heavily obscured by fog. He et al. [9] presented a single-image haze removal method using the dark channel prior, which estimates the transmission map directly. However, when a large white area without shading exists in an image, or the illumination is uneven, this method takes a long time to restore the hazy image; in addition, its soft matting step is computationally complex. Lai et al. [10] presented a haze removal method based on the difference-structure-preservation prior, in which a difference-structure-preservation dictionary is learned so that the local consistency features of the transmission map are well preserved after coefficient shrinkage. Zhu et al. [11] presented a simple but effective Color Attenuation Prior (CAP) algorithm, similar to the Dark Channel Prior (DCP), which uses the difference between brightness and saturation to estimate the haze concentration and build a depth model for dehazing. Since then, other researchers have improved dehazing algorithms based on the dark channel prior. Yeh et al. [12] introduced a haze removal algorithm based on region decomposition and feature fusion, which is especially suitable for hazy images with large sky regions. Li et al. [13] proposed a haze removal method based on sky segmentation and the dark channel prior, in which the average image intensity of the sky region is chosen as the atmospheric light value. Wang et al. [14] designed a new way of selecting atmospheric light values to weaken the influence of areas where the dark channel prior does not work effectively. A visibility restoration method was introduced by Huang et al. [15], consisting of three modules: (i) a depth estimation module based on the dark channel prior, (ii) a color analysis module that repairs depth estimation distortion, and (iii) a visibility restoration module that generates the repaired results. Riaz et al. [16] proposed an efficient transmission estimation method with bright-object handling, which uses a local average haziness value to compute the transmission of bright surfaces based on the observation that the transmission of a surface is loosely connected to that of its neighbors.
Usually, traffic video dehazing algorithms are built on single-image dehazing algorithms. However, the computational complexity makes it difficult to apply single-image dehazing algorithms directly to videos, so most existing research on video dehazing focuses on speeding up the process. Sun et al. [17] proposed a real-time haze removal method based on bilateral filtering that processes 320 × 240 images at 20 frames per second; however, it cannot satisfy the requirements of high-definition videos. Wang et al. [18] proposed a method based on Retinex theory that enhances image contrast in the YUV color space and can process a 704 × 576 image in 0.055 s. Kumari et al. [19] proposed an approach for dehazing images and videos based on filtering; the use of a gray-scale morphological operation made the approach faster, taking only 80% of the execution time of a fast bilateral filter. Berman et al. [20,21] proposed a dehazing method based on a non-local prior that estimates the air-light; their algorithm relies on the assumption that the colors of a haze-free image are well approximated by a few hundred distinct colors forming tight clusters in RGB space, and it performs well on a wide variety of images. However, all of these methods treat every frame of a video as a single image and are entirely based on image dehazing techniques.
The characteristics of videos can be exploited in dedicated video dehazing algorithms. Tarel et al. [22] proposed a video dehazing method for onboard video systems that separates moving objects from driveway regions and only updates the depth information of moving objects. Zhang et al. [23] proposed a method based on spatial and temporal correlation that uses the similarity between frames to optimize the estimation of the scene depth map. Shin et al. [24] proposed an effective video dehazing technique that reduces flicker artifacts by using an adaptive temporal average. However, these methods cannot remove haze from videos in real time. Kim et al. [25] therefore proposed an image dehazing method based on the image degradation model that balances image contrast enhancement against image information loss; to improve the speed of video dehazing, they exploited temporal correlation and reached 30 frames per second for videos with a resolution of 640 × 480. However, this method adopts a fixed initial transmission value that cannot adapt to images with different degrees of haze, and it cannot efficiently remove dense haze from videos. Our method uses an adaptive initial transmission value based on image characteristics to handle different degrees of haze; meanwhile, it reduces the processing time through lane space separation.

3. Single-Image Dehazing Using Adaptive Dark Channel Prior

3.1. Framework of Single-Image Dehazing Method

The most common dehazing model is based on atmospheric optics [26], which can describe the degradation process of a hazy image. In [27], the modeling function is simplified, and it is represented by Equation (1).
$$I(p) = J(p)\,t(p) + A\,(1 - t(p)) \tag{1}$$
where $p$ is a pixel in the image, $I(p)$ and $J(p)$ are the observed and haze-free images, respectively, $A$ is the global atmospheric light, and $t(p) \in [0, 1]$ is the transmission map for each pixel, describing the proportion of the light that arrives at the camera without being scattered.
The process of haze removal for every frame of a traffic video can be divided into three steps: calculating atmospheric light, estimating the transmission map, and restoring the image. In this paper, we present a novel adaptive method for transmission map estimation, thus the dehazing algorithm can be applied to images with different degrees of haze. The framework of the single-image dehazing algorithm is shown in Figure 1.
We use a hierarchical searching method based on quad-tree subdivision [25] to find the areas least affected by haze and to get the brightest pixel in this area. The detailed steps are as follows:
Step 1:
Divide an input image into four rectangular regions.
Step 2:
Define the score of each region as the average pixel value minus the standard deviation of the pixel values within the region.
Step 3:
Select the region with the highest score and divide it further into four smaller regions.
Step 4:
Repeat Steps 1 through 3 until the size of the selected region is smaller than a prespecified threshold. The prespecified threshold in this paper is 200; that is, the iteration stops when the height × width of the selected region is smaller than 200 pixels.
Finally, we choose as the atmospheric light the color vector that minimizes the distance $\|(I_r(p), I_g(p), I_b(p)) - (255, 255, 255)\|$, where $I(p)$ is the value of pixel $p$ in the selected region. A code sketch of this hierarchical search is given below.
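The following is a minimal sketch of the quad-tree search with NumPy; the function name, the floating-point arithmetic, and the early-exit guard are our assumptions, so treat it as an illustration rather than the authors' implementation.

```python
import numpy as np

def estimate_atmospheric_light(img, area_threshold=200):
    """Quad-tree search for the atmospheric light A (Steps 1-4 above).

    img: H x W x 3 uint8 image. Each region is scored as
    mean(pixels) - std(pixels); the highest-scoring quadrant is kept
    until its area (height * width) drops below area_threshold.
    """
    region = img.astype(np.float64)
    while region.shape[0] * region.shape[1] > area_threshold:
        h, w = region.shape[:2]
        if h < 2 or w < 2:            # guard against degenerate splits
            break
        quadrants = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                     region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        scores = [q.mean() - q.std() for q in quadrants]
        region = quadrants[int(np.argmax(scores))]
    # Pick the pixel closest to pure white (255, 255, 255) as A.
    pixels = region.reshape(-1, 3)
    distances = np.linalg.norm(pixels - 255.0, axis=1)
    return pixels[int(np.argmin(distances))]
```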

3.2. Transmission Estimation for Enhancing the Contrast of Blocks

In general, a hazy block yields low contrast, and the contrast of a restored block increases as the value of the estimated transmission decreases. We adopt the image-contrast-enhanced method [25] to maximize the contrast of the restored blocks and obtain the best estimated transmission value.
The mean squared error contrast ($C_{MSE}$) [28] can define the contrast of a restored block, as represented by Equation (2):
$$C_{MSE} = \sum_{p=1}^{N} \frac{\left(J_c(p) - \bar{J}_c\right)^2}{N} \tag{2}$$
where $J_c(p)$ represents an RGB color channel $c \in \{r, g, b\}$ of pixel $p$ in a block of the input image, $\bar{J}_c$ is the average value of $J_c(p)$, and $N$ is the number of pixels in the block.
According to the assumption that scene depths are locally similar [8,12,16], the dehazing algorithm in this paper determines a single transmission value for each block of size 32 × 32 and then obtains a fixed optimal transmission value $t$ for each block. For a pixel $p$ in a block, $t(p)$ in Equation (1) can be replaced with the fixed estimated transmission $t$ of its block. Hence, $J_c(p)$ is represented by Equation (3):
$$J_c(p) = \frac{I_c(p) - A}{t} + A \tag{3}$$
Substituting Equation (3) into Equation (2), $C_{MSE}$ can be represented by Equation (4):
$$C_{MSE} = \sum_{p=1}^{N} \frac{\left(I_c(p) - \bar{I}_c\right)^2}{t^2 N} \tag{4}$$
where $\bar{I}_c$ is the average value of $I_c(p)$ in the input block. According to Equation (4), the mean squared error contrast is a decreasing function of $t$, so we can select a small value of $t$ to increase the contrast of a restored block. However, by Equation (3), a small $t$ also stretches restored pixel values beyond the displayable range $[0, 255]$, and the values that underflow or overflow must be truncated, causing information loss.
When a block contains dense haze, it has a relatively narrow value range for input pixels; thus, even if it is assigned a small $t$ value, most of the restored values are not truncated, and the block can be correctly restored. In contrast, a block without haze usually has a broad range of input pixel values and should be assigned a larger $t$ value to reduce the information loss due to truncation. Thus, we should not only enhance the contrast but also reduce the information loss.
Therefore, we need quantitative measures of contrast and information integrity. The contrast cost $E_{contrast}$ and the information loss cost $E_{loss}$ were proposed by Kim et al. [25] to evaluate the contrast and the information integrity, respectively:
$$E_{contrast} = -\sum_{c \in \{r,g,b\}} \sum_{p \in B} \frac{\left(J_c(p) - \bar{J}_c\right)^2}{N_B} = -\sum_{c \in \{r,g,b\}} \sum_{p \in B} \frac{\left(I_c(p) - \bar{I}_c\right)^2}{t^2 N_B} \tag{5}$$
where $\bar{J}_c$ and $\bar{I}_c$ are the average values of $J_c(p)$ and $I_c(p)$ in block $B$, respectively, and $N_B$ is the number of pixels in $B$. The negative sign means that we maximize the mean squared error contrast by minimizing the value of $E_{contrast}$.
$$E_{loss} = \sum_{c \in \{r,g,b\}} \sum_{p \in B} \left[ \left(\min\{0, J_c(p)\}\right)^2 + \left(\max\{0, J_c(p) - 255\}\right)^2 \right] \tag{6}$$
where $\min\{0, J_c(p)\}$ and $\max\{0, J_c(p) - 255\}$ denote the truncated values of output pixels due to underflow and overflow, respectively.
If we want to get a better restored image, the image contrast should be smoother, and the color information should be maintained as much as possible. Thus, these two factors should be taken into consideration synthetically, and the overall cost function is described as Equation (7).
$$E = E_{contrast} + \lambda_L E_{loss} \tag{7}$$
where $\lambda_L$ is a weight coefficient that controls the relative importance of the contrast cost and the information loss cost [25]. The minimum value of $E$ corresponds to the most suitable contrast for the restored image while keeping the color loss as small as possible. Finally, for each block in a hazy image, we obtain an optimal transmission $t$ by minimizing the value of $E$; this is the transmission used for dehazing. A sketch of this search is given below.
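The following sketch illustrates this per-block search with NumPy; the function name, the upward scan from the adaptive initial value, the 0.01 step, and the weight `lambda_L = 5.0` are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def optimal_block_transmission(block, A, t_init, lambda_L=5.0):
    """Brute-force search for the block transmission t minimizing
    E = E_contrast + lambda_L * E_loss (Equations (5)-(7)).

    block: N x 3 float array of input pixels I_c(p); A: 3-vector of
    atmospheric light. t is scanned upward from the adaptive initial
    value t_init (an assumption about the search strategy).
    """
    best_t, best_E = t_init, np.inf
    for t in np.arange(t_init, 1.0 + 1e-9, 0.01):
        J = (block - A) / t + A                       # Equation (3)
        # Contrast cost: negative MSE contrast summed over r, g, b.
        E_contrast = -np.sum((J - J.mean(axis=0)) ** 2) / block.shape[0]
        # Information loss: squared underflow/overflow beyond [0, 255].
        E_loss = np.sum(np.minimum(J, 0.0) ** 2 +
                        np.maximum(J - 255.0, 0.0) ** 2)
        E = E_contrast + lambda_L * E_loss
        if E < best_E:
            best_E, best_t = E, t
    return best_t
```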

3.3. Adaptive Estimation of Initial Transmission

3.3.1. Calculating Image Haziness Flag

We present a haziness flag $T$ to measure the degree of haze in an image. The dark channel prior [9] can estimate the transmission of a block, which reflects the luminosity of objects and is closely related to the degree of haze. Therefore, we adopt the average transmission over the image as its haziness flag $T$, which captures the effect of the degree of haze on the image.
The dark channel prior is based on the observation that most local blocks in haze-free outdoor images contain some pixels with very low intensities in at least one color channel. In other words, the dark channel value of a haze-free image is close to zero [9]. For any input image $J$, the dark channel $J^{dark}$ can be expressed as Equation (8):
$$J^{dark}(p) = \min_{y \in \Omega(p)} \left( \min_{c \in \{r,g,b\}} J_c(y) \right) \tag{8}$$
where $c \in \{r, g, b\}$, $\Omega(p)$ represents a local block centered at $p$, and $y$ is a pixel in the local block $\Omega(p)$. The dark channel is the outcome of two minimum operators: $\min_c J_c(y)$ is performed on each pixel, and $\min_{y \in \Omega(p)}$ is a minimum filter [9].
Assuming that the atmospheric light $A_c$ is given, we can normalize the haze imaging Equation (1) by $A_c$ [9]:
$$\frac{I_c(p)}{A_c} = t(p)\,\frac{J_c(p)}{A_c} + 1 - t(p) \tag{9}$$
Since the transmission is assumed to be a constant $\tilde{t}(p)$ within a local block, and the value of $A_c$ is given, applying the dark channel operation to Equation (9) gives [9]:
$$\min_{y \in \Omega(p)} \left( \min_{c} \frac{I_c(y)}{A_c} \right) = \tilde{t}(p) \min_{y \in \Omega(p)} \left( \min_{c} \frac{J_c(y)}{A_c} \right) + 1 - \tilde{t}(p) \tag{10}$$
Using the concept of the dark channel [9], if $J_c$ is an outdoor haze-free image, then, except in the sky region, the intensity of its dark channel is low and tends to zero, which leads to:
$$\min_{y \in \Omega(p)} \left( \min_{c} \frac{J_c(y)}{A_c} \right) = 0 \tag{11}$$
Putting Equation (11) into Equation (10), we can eliminate the multiplicative term and estimate the transmission $\tilde{t}(p)$ simply by
$$\tilde{t}(p) = 1 - \min_{y \in \Omega(p)} \left( \min_{c} \frac{I_c(y)}{A_c} \right) \tag{12}$$
where $\tilde{t}(p)$ is the predicted transmission of a block [9]. We then average the transmission over all blocks to obtain the mean transmission $T$ of the whole image, which is the value of the image haziness flag. A code sketch of this computation follows.
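Below is a minimal sketch of the haziness flag computation using OpenCV and NumPy; the function name and the 15-pixel patch size are our assumptions, and the minimum filter over $\Omega(p)$ is realized with morphological erosion.

```python
import cv2
import numpy as np

def haziness_flag(img, A, patch=15):
    """Haziness flag T: the image average of the dark-channel-prior
    transmission estimate of Equation (12).

    img: H x W x 3 uint8 image; A: 3-vector atmospheric light;
    patch: side length of the local block Omega(p).
    """
    normalized = img.astype(np.float32) / np.asarray(A, dtype=np.float32)
    per_pixel_min = normalized.min(axis=2)          # min over color channels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark_channel = cv2.erode(per_pixel_min, kernel) # min filter over Omega(p)
    t_tilde = 1.0 - dark_channel                    # Equation (12)
    return float(t_tilde.mean())                    # haziness flag T
```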

3.3.2. Correction of Initial Transmission

According to our experimental results, the value of $T$ for a hazy image generally falls between 0.4 and 0.6. Although the image haziness flag $T$ characterizes the nature of the image, taking $T$ directly as the initial transmission value leads to an excessive optimal transmission $t$. Thus, we set a correction value $X$ and take $T \cdot X$ as the initial transmission value to decrease this initial value.
The structural similarity (SSIM) index is a method for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos. To guarantee that the restored images are close to the ground truth, we adopt the SSIM index [29] to measure the similarity between ground-truth and restored images. Because a traffic video is captured by a fixed camera, we can obtain a haze-free image of the same scene in advance as a reference image and compare the restored image against it. The value of $T$ can be obtained directly because it depends only on the image, whereas the unknown value $X$ is calibrated using the SSIM. In our experiments, we let $X$ range from 0.3 to 1.2 with an interval of 0.02, take each $T \cdot X$ as the initial transmission value, and obtain the corresponding restored image. Finally, we select the restored image closest to the haze-free reference, i.e., the one with the maximum SSIM value. The corresponding transmission is the optimal initial value, and the corresponding $X$ is the optimal correction value of the initial transmission.
However, this procedure needs a haze-free image to obtain the optimal correction value $X$, which limits its practical application; it is therefore necessary to derive the correction value from the image characteristics alone. After analyzing the image contrast and the haze in images, we found a relationship between the correction value of the initial transmission and the image characteristics, so a relatively reasonable initial transmission correction value can be obtained directly from hazy images.
Let $X'$ denote this relatively reasonable correction value of the initial transmission; we take $T \cdot X'$ as the initial transmission value. Because the dehazing algorithm is based on enhancing the image contrast to the greatest possible degree, the contrast is an important indicator. The haziness flag $T$ represents the degree of haze that degrades the image contrast, so the image contrast and the haziness flag should be considered simultaneously. We set $C$ as the image contrast and use the product $T \cdot C$ as a quantitative descriptor of the image characteristics; the value of $X'$ depends on the range into which $T \cdot C$ falls.
Table 1 shows the values of $X'$ for different ranges of $T \cdot C$. In Table 1, $X$ is the optimal correction value obtained by the method with reference images, and $X'$ is the relatively reasonable correction value obtained from the ranges of $T \cdot C$. In the dehazing algorithm, the initial transmission value is the key factor affecting the dehazing result. Table 1 also lists $T \cdot X$ and $T \cdot X'$, the initial transmission values derived from the optimal correction value $X$ and the relatively reasonable correction value $X'$, respectively. Figure 2 shows the histogram of $T \cdot X$ and $T \cdot X'$: the two values in each group are close, and differences of this size do not significantly affect the dehazing results. Therefore, our method can determine a suitable initial transmission value using only the nature of the image and thereby obtain a more adaptive transmission value. The mapping from $T \cdot C$ to $X'$ can be implemented as a simple lookup, sketched below.
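A minimal lookup sketch of this mapping, with the bucket boundaries and $X'$ values taken from Table 1; the function name is ours, and $C$ is assumed to be the mean squared error contrast of Equation (2).

```python
def correction_value(T, C):
    """Map the product T * C to the correction value X' (Table 1)."""
    tc = T * C
    if tc < 10:
        return 0.5
    if tc < 15:
        return 0.6
    if tc < 20:
        return 0.7
    if tc < 25:
        return 0.8
    if tc < 30:
        return 0.9
    return 1.0

# Example from Table 1: T = 0.4584, C = 22.1363 gives T * C = 10.148,
# so X' = 0.6 and the initial transmission is T * X' = 0.2750.
```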

4. Adaptive Traffic Video Dehazing Method Using Spatial–Temporal Correlations

Compared with static traffic images, traffic videos have some unique characteristics. First, a traffic video is a collection of images with time continuity. Second, the cameras are fixed on the road and capture videos of the same scene over a long time, thus the videos are consistent in space. Therefore, we can use the correlations of spatial-temporal information to speed up traffic video dehazing.

4.1. Time Continuity of Traffic Videos

Because the cameras are fixed, the scenes in traffic videos barely change over long periods, and the influence of haze is stable. In our experiments, we use traffic videos from the ZhongHe elevated freeway in Hangzhou City, set a cycle of five minutes, and regard the frames within one cycle as a collection of images with the same characteristics. Figure 3 shows images sampled at 1 min intervals within a 5 min cycle; the difference in $T$ is very small, usually less than 0.04. Figure 4 presents images restored with different $T$ values: when the difference in $T$ is less than 0.04, there is no obvious effect on visibility. Therefore, for videos captured at the same scene, the values of $T$ within a 5 min cycle are at the same level, and a 5 min cycle is reasonable in practical applications.
After setting the 5 min cycle, we take the first frame of a video segment as a reference frame. We determine the image haziness flag $T$ and the relatively reasonable initial transmission correction value $X'$ from the reference frame and then calculate the optimal transmission $t$. In this way, we speed up the dehazing of the traffic video. This scheme also avoids incorrect transmission estimation caused by changes in atmospheric light and eliminates discontinuities in the dehazed video. A caching sketch is shown below.
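A minimal caching sketch of this per-cycle parameter reuse; the class, the callable `estimate_params`, and the use of wall-clock time are our assumptions about how the 5 min time slice could be realized.

```python
import time

class DehazeParamCache:
    """Reuse the parameters estimated from the reference (first) frame
    of a time slice for all frames in the same 5 min cycle.

    estimate_params: callable taking a frame and returning the cached
    quantities (e.g., A, T, X', and the optimal transmission map).
    """
    def __init__(self, estimate_params, cycle_seconds=300):
        self.estimate_params = estimate_params
        self.cycle_seconds = cycle_seconds
        self.stamp = None
        self.params = None

    def get(self, frame):
        now = time.monotonic()
        if self.stamp is None or now - self.stamp >= self.cycle_seconds:
            self.params = self.estimate_params(frame)  # new reference frame
            self.stamp = now
        return self.params
```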

4.2. Transmission Refinement Based on Spatial Structure

We estimate the optimal transmission based on the assumption that all pixels in a block have the same transmission. However, scene depths may vary spatially within a block, and the block-based transmission map usually has a blocking-artifact problem. Therefore, an edge-preserving filter is adopted to refine the block-based transmission map.
The single-image dehazing method using the dark channel prior [9] employs the soft matting technique [30] to refine the block structure of the transmission map, which imposes an enormous computational burden. In this paper, the guided filter [31] is adopted to refine the transmission map at a much lower computational cost. The filtered transmission $\hat{t}(p)$ is an affine combination of the guidance image $I(p)$, as shown in Equation (13):
$$\hat{t}(p) = \mathbf{s}^{T} I(p) + \psi \tag{13}$$
where $\mathbf{s} = (s_r, s_g, s_b)^{T}$ is a scaling vector and $\psi$ is an offset determined within each block. For a block in an image, the optimal parameters $\mathbf{s}$ and $\psi$ are obtained by minimizing the difference between the transmission $t(p)$ and the filtered transmission $\hat{t}(p)$ using the least squares method, as in Equation (14):
$$(\mathbf{s}^{*}, \psi^{*}) = \underset{(\mathbf{s},\, \psi)}{\arg\min} \sum_{p \in \Omega} \big(t(p) - \hat{t}(p)\big)^2 \tag{14}$$
If the transmission is too small, the noise will be enhanced in the restored image [9]. Thus, the lower limit of the transmission is set to 0.1. If a window slides pixel by pixel over the entire image, there will be multiple windows that overlap at each pixel position. Therefore, we adopt the centered window scheme, which sets the final transmission values as the average of all associated refined transmission values at each pixel position. However, the average transmission value in this scheme will cause blurring in the final transmission map, especially around object boundaries, where the depths change abruptly. To overcome this problem, the shiftable window scheme [32] is employed instead of the centered window scheme. The centered window scheme overlays a window on each pixel so that the window contains multiple objects with different depths, which leads to unreliable depth estimation. In the shiftable window scheme, the window is shifted within a block of 40 × 40. The optimal shift position is selected depending on the smallest change of pixel values within the window. Even though a shiftable window is selected for a specific pixel, the number of overlapping windows usually varies at different positions. The windows in smooth regions are selected more frequently than those in rough boundary regions. Thus, the shiftable window scheme can reduce the effects of unreliable transmission values derived from rough boundary regions, thereby alleviating the blurring artifacts.
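A minimal sketch of the guided-filter refinement described above, assuming the `cv2.ximgproc` module from opencv-contrib-python; the radius and `eps` values are illustrative, and the shiftable window scheme is not reproduced here.

```python
import cv2
import numpy as np

def refine_transmission(gray_guide, t_block, radius=20, eps=1e-3, t_min=0.1):
    """Refine the block-based transmission map with a guided filter
    (Equations (13) and (14)); needs opencv-contrib-python (cv2.ximgproc).

    gray_guide: H x W float32 guidance image in [0, 1] (e.g., luminance);
    t_block:    H x W float32 block-based transmission map.
    The lower bound t_min = 0.1 prevents noise amplification.
    """
    refined = cv2.ximgproc.guidedFilter(gray_guide, t_block, radius, eps)
    return np.maximum(refined, t_min)
```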

4.3. Lane Separation for Traffic Videos

After analyzing the spatial characteristics of traffic video, we found that the traffic lane is an obvious structure. In a traffic video detection system, the detected objects are mostly concentrated in the driveway regions. The areas outside lanes are not the regions of interest in traffic video processing. Therefore, we can process haze removal only in the driveway region of traffic video to reduce computing time.
However, the estimations of atmospheric light and transmission are based on the whole image. If these values were obtained only from the driveway region, deviations could arise, especially when the sky occupies a large area of the image, as in the cases shown in Table 2. The larger the sky region is, the greater the deviation in the value of $T \cdot X$. Therefore, lane separation is used only in the last step, to restore the pixels of the driveway region alone.
We adopt a straight-line extraction algorithm based on the Hough transform to detect the lanes and separate the driveway region from the global image. The process of haze removal combined with driveway region separation is described as follows (a code sketch of the lane detection follows the list):
  • Calculate the global atmospheric light $A$, the haziness flag value $T$, and the image contrast $C$; then estimate the optimal transmission map for each block in the image.
  • Get the driveway region, as shown in Figure 5.
    • Step 1: Obtain the edge information in the video through edge detection.
    • Step 2: Remove obviously wrong-angle lines by Hough linear fitting, and obtain lane candidates, as shown in Figure 5b.
    • Step 3: Find the far left lane and the far right lane, and set them as the driveway boundaries, then find the intersection of these two lines, as shown in Figure 5c.
    • Step 4: Identify a rectangular area as the driveway region, which is composed of the boundary of the image and a horizontal line across the intersection, as shown in Figure 5c. If the intersection is outside the image, take the whole image area as the driveway region.
  • Use the original pixel values and the optimal transmission of the driveway region in the dehazing model to restore the image within the driveway region.
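A minimal sketch of the lane-based driveway separation (Steps 1–4), using the Canny detector and the probabilistic Hough transform from OpenCV; all thresholds and the function name are illustrative assumptions, not the paper's calibrated values.

```python
import cv2
import numpy as np

def driveway_region(frame, min_angle_deg=20):
    """Approximate the driveway region of a fixed traffic camera
    (Steps 1-4 above). Returns the top row y_top of the rectangular
    driveway band; the band spans from y_top to the image bottom.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                        # Step 1
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=10)
    if segments is None:
        return 0                                            # whole image
    lanes = []
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle > 90:
            angle = 180 - angle                             # fold to [0, 90]
        if min_angle_deg < angle < 90 - min_angle_deg:      # Step 2: keep
            lanes.append((x1, y1, x2, y2))                  # diagonal lines
    if len(lanes) < 2:
        return 0
    left = min(lanes, key=lambda s: min(s[0], s[2]))        # Step 3
    right = max(lanes, key=lambda s: max(s[0], s[2]))
    # Intersection of the two boundary lines in homogeneous coordinates.
    def to_line(s):
        return np.cross([s[0], s[1], 1.0], [s[2], s[3], 1.0])
    ix = np.cross(to_line(left), to_line(right))
    if abs(ix[2]) < 1e-9:
        return 0                                            # parallel lanes
    y_top = int(ix[1] / ix[2])
    # Step 4: if the intersection lies outside the image, use the whole image.
    return y_top if 0 <= y_top < frame.shape[0] else 0
```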
In a traffic video detection system, each camera is located at a fixed position and captures the same traffic scene for a long time. Based on this time continuity, the result of lane space separation for the initial frame of a traffic video can be reused over a long period. Lane space separation decreases the area to be dehazed and improves the efficiency of the dehazing algorithm. Figure 6 shows haze removal results with and without lane separation. In this scene, dehazing 2000 frames takes 35.301 s without lane separation and 32.74 s with lane separation (the lane space separation itself takes 0.182 s). Although lane separation requires some time, the operation occurs only in the first frame, so its cost is amortized over all frames of a traffic video. As the number of frames increases, the efficiency gain of lane separation becomes more significant. Moreover, the smaller the portion of the whole image occupied by the driveway region, the more the processing time decreases. When real-time processing is required, even a small reduction in processing time is of practical significance.

4.4. Optimization Based on Spatial Distribution of Cameras

With an increasingly complex layout of transportation networks, the number of traffic monitoring cameras also increases gradually, and sometimes there are multiple cameras in the same section of road. These cameras located in close physical proximity usually have the same hardware indicators. In a traffic video detection system, multiple cameras are connected to one system. These cameras have similar characteristics according to their spatial distribution. The weather is also an index with spatial characteristics, that is, the degrees of haze are similar in nearby regions. Thus, we can use the spatial distribution information of cameras to speed up dehazing and optimize the performance of the traffic video detection system.
Figure 7 shows the images captured at the same time by four surveillance cameras on the DE-elevated freeways in Hangzhou City. The locations of these cameras are shown in Figure 8; the distance between adjacent cameras is about 500 to 600 m. Table 3 shows the initial transmission values of these four videos. The haziness flag values $T$ calculated from each video are shown in the first column of Table 3. We obtain the relatively reasonable initial transmission correction value $X'$ by the method proposed in Section 3 and then determine the initial transmission value $T \cdot X'$. According to the results, these initial transmission values are numerically very similar, so they have no obvious influence on the restored images.
In traffic video dehazing, the cameras are divided into different regions according to their locations, and one camera in each region is set as the calibration camera. The images from the calibration camera are used to calculate the initial transmission value, which is then applied to the other cameras in the same region. Therefore, we avoid repeatedly calculating the values of $T$, $C$, and $X'$ for the other cameras, improving the efficiency of haze removal. The results of haze removal using the initial transmission value obtained from the calibration camera are shown in Figure 9b, and the results using the initial transmission value obtained from the image itself are shown in Figure 9c. The two results are visually very similar. Calculating the initial transmission value takes 0.033 s, which can be saved by reusing that of the calibration camera.

5. Results

In the efficient traffic video dehazing method using adaptive dark channel prior and spatial-temporal correlations, a video sequence is converted into the YUV color space, where Y represents the luminance and U/V represent the chrominance. Human eyes are more sensitive to high-frequency signals than to low-frequency signals, and more sensitive to changes in visibility than to changes in color; the U and V components are also less affected by haze than the Y component. Thus, we apply dehazing only to the luminance (Y) component to reduce computational complexity, as sketched below. In our experiments, we implemented each method with OpenCV and C/C++. The source code was compiled with Microsoft Visual Studio 2010 and run on an Intel Core i5-2400 processor with 4 GB of main memory under Windows 7.
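A minimal sketch of this luminance-only pipeline; `dehaze_y` is a placeholder for any single-channel dehazing routine (for example, the per-block contrast-optimized restoration above) and is an assumption of this sketch.

```python
import cv2
import numpy as np

def dehaze_luminance_only(frame, dehaze_y):
    """Convert BGR -> YUV, restore only the luminance channel with any
    single-channel dehazing routine, and convert back to BGR.

    dehaze_y: placeholder callable mapping a uint8 Y channel to its
    dehazed float/uint8 version.
    """
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    y = np.clip(dehaze_y(y), 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([y, u, v]), cv2.COLOR_YUV2BGR)
```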

5.1. Results for Single Image Dehazing

Our adaptive method determines the initial transmission according to the image characteristics, so it produces more satisfactory dehazing results than methods with a fixed initial transmission. Figure 10 shows images restored with our adaptive method under four different initial transmission values: 0.1, 0.2, 0.3, and 0.4. The experimental results show that smaller initial transmission values may cause some blocks to exhibit overstretched contrast; the optimal initial transmission is between 0.2 and 0.3 for the first image, between 0.3 and 0.4 for the second image, and above 0.4 for the third and fourth images. The $T \cdot X'$ values obtained by our method all fall within these optimal ranges. Therefore, our method adapts to images with different degrees of haze.
Figure 11 shows four images from the Foggy Road Image Database (FRIDA) [33] restored using the dark-channel-prior-based method [9,31], the visibility enhancement algorithm [34], the image-contrast-enhanced method [25], the non-local image dehazing method [20,21], and our method. The SSIM values in Figure 11 are the averages over the three RGB channels. In FRIDA [33], each fog-free image is associated with several hazy images to which different kinds of fog are added: uniform fog, heterogeneous fog, cloudy fog, and cloudy heterogeneous fog. According to the experimental results, the dark-channel-prior-based method does not produce satisfactory results for heterogeneous fog and cloudy heterogeneous fog, while the image-contrast-enhanced method and our method achieve more satisfactory results in these two cases. In addition, our method obtains the highest SSIM among the first three methods, so its restored images are more similar to the ground truth. For the non-local image dehazing method [20,21], the SSIM of some restored images may be higher than ours, but it requires a longer processing time, as shown in Table 4. Table 4 lists the overall processing times of these methods. Our method is faster than the dark-channel-prior-based method [9,31] and the visibility enhancement algorithm [34]. It takes more time than the image-contrast-enhanced method [25] because it spends additional time calculating the image haziness flag and the initial transmission correction value; however, its haze removal results are better. Although the non-local image dehazing method can produce more satisfactory restored images, it is too slow for real-time scenarios and usually requires manually tuned parameters for different scenes, which is not suitable for real-time traffic video processing. Furthermore, this extra computation can be amortized over all frames in video dehazing, yielding a faster dehazing speed through the fusion of spatial and temporal information.

5.2. Results for Traffic Video Dehazing

To get better restored images, we restore the whole image for the first frame of a time slice and use the area outside the lane space of that restored frame to replace the same areas in the following frames. Moreover, we adopt the parallel programming tools SIMD [35] and OpenMP [36] for rapid computation. Figure 12 presents a comparison of approaches for traffic video dehazing: Figure 12a shows the original videos; Figure 12b shows the results of the dark-channel-prior-based method with guided filtering [9,31], which uses the transmission map obtained from the first frame to filter the following frames; Figure 12c shows the results of the image-contrast-enhanced method [25], whose initial transmission is a constant value of 0.3; Figure 12d shows the results of the non-local image dehazing method [20,21]; and Figure 12e shows the results produced by our method. The experimental results demonstrate that the image-contrast-enhanced method leads to some blocks with overstretched contrast, as in groups (1), (3), and (4). For some urban scenes, the color of the driveway is not obviously different from the background, as in group (1) with medium haze and group (2) with dense haze; our method restores these videos in a manner more similar to the haze-free scenes, and the driveway and vehicles can be seen more clearly, whereas the dark-channel-prior-based method cannot handle these videos. For suburban scenes where the trees and road surface differ obviously in color, such as the images in group (3) captured in daytime and the images in group (4) captured in dense haze with vehicle headlights on, our method achieves better restored results than the other methods. In the images restored by our method in group (3), the driveway color is more uniform; in group (4), there are no blocks with overstretched contrast, and the color of the hierarchically structured trees is more realistic. Therefore, our method maintains image details and restores images that are closer to the real scene with proper contrast.
As the experimental results show, our method produces better haze removal results by determining the parameters according to the image characteristics. It is also applicable to dense fog and to a variety of fog densities. Moreover, it makes the restored images more similar to the real scene and avoids overstretched contrast in the restored images. Therefore, it addresses the general problems of existing dehazing algorithms: contrast distortion after video dehazing and failure to remove dense haze.
In addition, our method exploits spatial correlation, time continuity, lane separation, and the spatial distribution of cameras to improve computational efficiency. Besides the processing times, Table 5 reports the frames per second (fps) and SSIM of the different methods for the video dehazing experiments in Figure 12. To match actual traffic scenarios, we process the videos frame by frame, and the data show the total processing time for 1000 frames. Our method uses the initial frame of a time slice to calculate the transmission map and atmospheric light and adopts lane separation to decrease the dehazed area. Compared with the other methods, the dehazing time of our method decreases as the time slice increases. According to the experimental results, our method clearly speeds up video dehazing, especially when the video has high resolution or the driveway occupies only a small part of the image. Our method can restore a video with a resolution of 720 × 592 at about 57 fps, nearly four times faster than the dark-channel-prior-based method and about twice as fast as the image-contrast-enhanced method. Furthermore, our method obtains the highest SSIM for the restored videos among the compared methods, so the restored videos are more similar to the ground truth. Therefore, the proposed method not only has superior haze removal and color balancing capabilities but also restores and enhances degraded videos in real time.

6. Conclusions

Traditional haze removal methods fail to restore images with different degrees of haziness in a real-time and adaptive manner under most circumstances. To solve this problem, we propose an efficient traffic video dehazing method using adaptive dark channel prior and spatial-temporal correlations. The dark channel prior is based on the statistics of outdoor haze-free images, but on its own it cannot adaptively estimate the initial transmission value according to the degree of haze and the contrast of images. Therefore, we adopt the image-contrast-enhanced method to obtain the best estimated transmission value, starting from the adaptive initial transmission value derived from the dark channel prior. The image dehazing method using the adaptive dark channel prior overcomes the shortcomings of existing dehazing algorithms: it does not overstretch contrast after haze removal, and it handles images with dense haze satisfactorily. Additionally, we introduce the temporal-spatial correlations of traffic videos to speed up traffic video dehazing: time continuity sets the time slice, the characteristics of the block structure refine the transmission, the lane space structure decreases the restored area, and the multi-camera distribution simplifies the calculation of parameters. The experimental results show that our method restores satisfactory image appearance, removing dense haze effectively without producing overstretched contrast. The temporal and spatial characteristics reduce the computation time, especially when dehazing multiple videos.
However, the dark channel prior is a statistical prior, and it may not work for some particular traffic videos. When the haze in a video changes rapidly, the dark channel of the scene radiance differs greatly at different times. In addition, if the scene objects are inherently similar to the atmospheric light and no shadow is cast on them, the adaptive dark channel prior is invalid: the dark channel of the scene radiance has bright values near such objects. As a result, our method may underestimate the transmission of these objects and overestimate the haze layer.

Author Contributions

Formal analysis, G.Z. and J.W.; methodology, T.D., Y.Y. and Y.S.; project administration, T.D.; validation, Y.Y.; literature search, J.W. and G.Z.; writing-original draft, T.D. and J.W.; writing-review and editing, G.Z. and Y.S.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 61672414 and 61572437).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pyka, K. Wavelet-Based Local Contrast Enhancement for Satellite, Aerial and Close Range Images. Remote Sens. 2017, 9, 25.
  2. Li, R.; Pan, J.; Li, Z.; Tang, J. Single Image Dehazing via Conditional Generative Adversarial Network. In Proceedings of the CVPR Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 July 2018; pp. 8202–8211.
  3. Mangeruga, M.; Bruno, F.; Cozza, M.; Agrafiotis, P.; Skarlatos, D. Guidelines for Underwater Image Enhancement Based on Benchmarking of Different Methods. Remote Sens. 2018, 10, 1652.
  4. Oakley, J.P.; Satherley, B.L. Improving image quality in poor visibility conditions using a physical model for contrast degradation. IEEE Trans. Image Process. 1998, 7, 167–179.
  5. Narasimhan, S.G.; Nayar, S.K. Removing weather effects from monochrome images. In Proceedings of the CVPR Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 186–193.
  6. Chen, G.; Wang, T.; Zhou, H. A Novel Physics-based Method for Restoration of Foggy Day Images. J. Image Graph. 2008, 13, 888–893.
  7. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the CVPR Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  8. Fattal, R. Single image dehazing. In Proceedings of the ACM Siggraph, Los Angeles, CA, USA, 11–15 August 2008; pp. 1–9.
  9. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the CVPR Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963.
  10. Lai, Y.; Chen, Y.; Chiou, C.; Hsu, C. Single-Image Dehazing via Optimal Transmission Map Under Scene Priors. Circuits Syst. Video Technol. 2015, 25, 1–14.
  11. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
  12. Yeh, C.; Kang, L.; Lee, M.; Lin, C. Haze effect removal from image via haze density estimation in optical model. Opt. Express 2013, 21, 27127–27141.
  13. Li, B.; Wang, S.; Zheng, J.; Zheng, L. Single image haze removal using content-adaptive dark channel and post enhancement. IET Comput. Vis. 2014, 8, 131–140.
  14. Wang, J.; He, N.; Zhang, L.; Lu, K. Single image dehazing with a physical model and dark channel prior. Neurocomputing 2015, 149, 718–728.
  15. Huang, S.; Chen, B.; Wang, W. Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1814–1824.
  16. Riaz, I.; Fan, X.; Shin, H. Single image dehazing with bright object handling. IET Comput. Vis. 2016, 10, 817–827.
  17. Sun, K.; Wang, B.; Zhou, Z. Real time image haze removal using bilateral filter. Trans. Beijing Inst. Technol. 2011, 31, 810–814.
  18. Wang, D.; Fan, J.; Liu, Y. A foggy video images enhancement algorithm of monitoring system. J. Xian Univ. Posts Telecommun. 2012, 5, TP391.41.
  19. Kumari, A.; Sahdev, S.; Sahoo, S.K. Improved single image and video dehazing using morphological operation. In Proceedings of the IEEE International Conference on VLSI Systems, Architecture, Technology and Applications, Bangalore, India, 8–10 January 2015; pp. 1–5.
  20. Berman, D.; Treibitz, T.; Avidan, S. Non-Local Image Dehazing. In Proceedings of the CVPR Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682.
  21. Berman, D.; Treibitz, T.; Avidan, S. Air-light Estimation using Haze-Lines. In Proceedings of the IEEE 13th International Conference on Intelligent Computer Communication and Processing, Stanford, CA, USA, 12–14 May 2017; pp. 5178–5191.
  22. Tarel, J.; Hautière, N.; Cord, A.; Gruyer, D.; Halmaoui, H. Improved visibility of road scene images under heterogeneous fog. In Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 478–485.
  23. Zhang, J.; Li, L.; Zhang, Y.; Yang, G.; Cao, X.; Sun, J. Video dehazing with spatial and temporal coherence. Vis. Comput. 2011, 27, 749–757.
  24. Shin, D.K.; Kim, Y.M.; Park, K.T.; Lee, D.; Choi, W.; Moon, Y.S. Video dehazing without flicker artifacts using adaptive temporal average. In Proceedings of the IEEE International Symposium on Consumer Electronics, JeJu Island, Korea, 22–25 June 2014; pp. 1–2.
  25. Kim, J.; Jang, W.; Sim, J.Y.; Kim, C.S. Optimized contrast enhancement for real-time image and video dehazing. J. Vis. Commun. Image Represent. 2013, 24, 410–425.
  26. Narasimhan, S.G.; Nayar, S.K. Vision and the Atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254.
  27. Pan, X.; Xie, F.; Jiang, Z.; Yin, J. Haze Removal for a Single Remote Sensing Image Based on Deformed Haze Imaging Model. IEEE Signal Process. Lett. 2015, 22, 1806–1810.
  28. Peli, E. Contrast in complex images. J. Opt. Soc. Am. A 1990, 7, 2032–2040.
  29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  30. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 228–242.
  31. He, K.; Sun, J.; Tang, X. Guided image filtering. In Proceedings of the Springer ECCV European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 1–14.
  32. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: New York, NY, USA, 2010.
  33. Foggy Road Image Database (FRIDA). Available online: http://www.lcpc.fr/english/products/image-databases/article/frida-foggy-road-image-database (accessed on 8 June 2012).
  34. Huang, S.; Chen, B.; Cheng, Y. An Efficient Visibility Enhancement Algorithm for Road Scenes Captured by Intelligent Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2321–2332.
  35. Patterson, D.A.; Hennessy, J.L. Computer Organization and Design: The Hardware/Software Interface; Morgan Kaufmann Publishers: Burlington, MA, USA, 1998.
  36. Chapman, B.; Jost, G.; van der Pas, R. Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation); MIT Press: Cambridge, MA, USA, 2008.
Figure 1. Framework of single-image dehazing method.
Figure 2. The histogram of T * X and T * X′.
Figure 3. The difference of T for the images in a 5 min cycle. The images come from different scenes (a,b).
Figure 4. The images with different T values.
Figure 5. Lane space separation: (a) original image; (b) lane candidates; (c) driveway boundary; (d) result for lane separation.
Figure 6. Results for video dehazing with lane separation: (a) before haze removal; (b) haze removal without lane separation; (c) haze removal with lane separation.
Figure 7. Example images of the nearby regions.
Figure 8. The locations of cameras.
Figure 9. Results of haze removal with and without calibration camera: (a) original image; (b) initial transmission value from the calibration camera is 0.596; (c) initial transmission value from the image itself is 0.578.
Figure 10. Results for different initial transmission values using our adaptive method.
Figure 11. Comparison of the restored images using different methods; SSIM = structural similarity.
Figure 12. Comparison of restored videos: (a) original videos; (b) dark-channel-prior-based method; (c) image-contrast-enhanced method; (d) non-local image dehazing; (e) our method.
Table 1. The values of X′ for different ranges of T * C.

| Image No. | T | C | T * C | X | Range of T * C | X′ | T * X′ | T * X |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.4032 | 3.8224 | 1.5414 | 0.50 | T * C < 10 | 0.5 | 0.2016 | 0.2016 |
| 2 | 0.4006 | 6.3436 | 2.5410 | 0.52 | T * C < 10 | 0.5 | 0.2003 | 0.2083 |
| 3 | 0.4177 | 8.4845 | 3.5437 | 0.50 | T * C < 10 | 0.5 | 0.2088 | 0.2088 |
| 4 | 0.4113 | 13.4080 | 5.5151 | 0.50 | T * C < 10 | 0.5 | 0.2056 | 0.2057 |
| 5 | 0.4329 | 13.2774 | 5.7476 | 0.46 | T * C < 10 | 0.5 | 0.2164 | 0.1991 |
| 6 | 0.4444 | 17.6432 | 7.8400 | 0.46 | T * C < 10 | 0.5 | 0.2222 | 0.2044 |
| 7 | 0.4211 | 19.7160 | 8.3039 | 0.52 | T * C < 10 | 0.5 | 0.2160 | 0.2190 |
| 8 | 0.4584 | 22.1363 | 10.1480 | 0.54 | 10 ≤ T * C < 15 | 0.6 | 0.2750 | 0.2476 |
| 9 | 0.4275 | 25.5289 | 10.9141 | 0.64 | 10 ≤ T * C < 15 | 0.6 | 0.2565 | 0.2736 |
| 10 | 0.4732 | 26.9131 | 12.7346 | 0.62 | 10 ≤ T * C < 15 | 0.6 | 0.2839 | 0.2934 |
| 11 | 0.4370 | 31.9037 | 13.9419 | 0.76 | 10 ≤ T * C < 15 | 0.6 | 0.2622 | 0.3321 |
| 12 | 0.4862 | 31.3389 | 15.2359 | 0.66 | 15 ≤ T * C < 20 | 0.7 | 0.3403 | 0.3209 |
| 13 | 0.4469 | 38.3871 | 17.1555 | 0.84 | 15 ≤ T * C < 20 | 0.7 | 0.3128 | 0.3754 |
| 14 | 0.4987 | 35.6754 | 17.7904 | 0.62 | 15 ≤ T * C < 20 | 0.7 | 0.3491 | 0.3092 |
| 15 | 0.4555 | 44.9152 | 20.4609 | 0.80 | 20 ≤ T * C < 25 | 0.8 | 0.3644 | 0.3644 |
| 16 | 0.4625 | 50.9075 | 23.5422 | 0.86 | 20 ≤ T * C < 25 | 0.8 | 0.3700 | 0.3977 |
| 17 | 0.4724 | 57.3643 | 27.1012 | 0.94 | 25 ≤ T * C < 30 | 0.9 | 0.4252 | 0.4441 |
| 18 | 0.4812 | 63.6731 | 30.6395 | 1.00 | T * C ≥ 30 | 1.0 | 0.4812 | 0.4812 |
| 19 | 0.4909 | 70.3751 | 34.5454 | 1.06 | T * C ≥ 30 | 1.0 | 0.4909 | 0.5203 |
Table 2. Global image and driveway.

| Regions | Parameters | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|---|
| Driveway Region | T | 0.590856 | 0.704105 | 0.839763 | 0.83898 |
| | contrast | 47.7547 | 49.0273 | 54.0312 | 62.208 |
| | X | 0.90000 | 1.0000 | 1.0000 | 1.0000 |
| | T * X | 0.53200 | 0.70400 | 0.8400 | 0.8390 |
| Global Image | T | 0.563265 | 0.632323 | 0.773405 | 0.563549 |
| | contrast | 48.8811 | 49.2619 | 57.5056 | 127.7800 |
| | X | 0.9000 | 1.0000 | 1.0000 | 1.0000 |
| | T * X | 0.5070 | 0.6320 | 0.7730 | 0.5660 |
Table 3. Initial transmission values for videos in nearby regions.

| Case | Haze Flag Value T | Initial Transmission Correction Value X | Initial Transmission Value T * X |
|---|---|---|---|
| a | 0.524188 | 1 | 0.524 |
| c | 0.580732 | 1 | 0.581 |
| b | 0.569918 | 1 | 0.570 |
| d | 0.517431 | 1 | 0.517 |
Table 4. Processing times for single-image dehazing.

| Image Resolution | Dark-Channel-Prior Method [9,31] | Visibility Enhancement Algorithm [34] | Image-Contrast-Enhanced Method [25] | Dehazing Only Using Adaptive Dark Channel Prior | Non-Local Image Dehazing [20,21] | Our Method |
|---|---|---|---|---|---|---|
| 640 × 480 | 0.897 s | 1.014 s | 0.396 s | 0.506 s | 2.546 s | 0.433 s |
| 480 × 400 | 0.516 s | 0.895 s | 0.165 s | 0.301 s | 2.387 s | 0.252 s |
| 320 × 240 | 0.173 s | 0.348 s | 0.057 s | 0.262 s | 2.024 s | 0.211 s |
Table 5. Comparing the performance parameters.

| Case | Image Resolution | He et al. [9,31] Time | fps | SSIM | Kim et al. [25] Time | fps | SSIM | Our Method Time | fps | SSIM |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) | 640 × 480 | 66.787 s | 15.0 | 0.6870 | 35.359 s | 28.3 | 0.6990 | 17.507 s | 57.1 | 0.7012 |
| (2) | 640 × 480 | 64.576 s | 15.4 | 0.7002 | 34.471 s | 29.0 | 0.7079 | 18.005 s | 55.5 | 0.7232 |
| (3) | 720 × 592 | 95.638 s | 10.5 | 0.6155 | 37.858 s | 26.4 | 0.6322 | 17.604 s | 56.8 | 0.6488 |
| (4) | 720 × 592 | 90.911 s | 11.0 | 0.5932 | 39.855 s | 25.1 | 0.6011 | 16.925 s | 59.0 | 0.6155 |
