
1 Introduction

With the rapid development of artificial intelligence, the demand for efficient and effective intelligent vision systems keeps increasing. To tackle high-level semantic tasks in computer vision, such as object recognition, behaviour analysis and motion analysis, researchers have developed numerous fundamental detection and tracking algorithms over the past decades.

To evaluate these algorithms fairly, the community has developed plenty of datasets, including detection datasets (e.g., Caltech [14] and DETRAC [46]) and tracking datasets (e.g., KITTI-T [19] and VOT2016 [15]). The common shortcoming of these datasets is that their videos are captured by fixed or car-mounted cameras, which limits the viewing angles available in surveillance scenes.

Benefiting from the flourishing global drone industry, Unmanned Aerial Vehicles (UAVs) have been applied in many areas such as security and surveillance, search and rescue, and sports analysis. Different from traditional surveillance cameras, a UAV with a moving camera has several inherent advantages, such as easy deployment, high mobility, a large view scope, and relatively uniform object scale. At the same time, it brings new challenges to existing detection and tracking technologies, such as:

  • High Density. UAV cameras can flexibly capture videos at wider view angles than fixed cameras, leading to a large number of objects per frame.

  • Small Object. Objects are usually small or tiny due to the high altitude of UAV views, making them difficult to detect and track.

  • Camera Motion. Objects move very fast or rotate drastically due to the high-speed flight or camera rotation of UAVs.

  • Realtime Issues. For practical application, algorithms must address realtime constraints while maintaining high accuracy on embedded UAV platforms.

To study these problems, a limited number of UAV datasets have been collected, such as Campus [39] and CARPK [22]. However, each of them focuses on a single task, such as visual tracking or detection, in constrained scenes, for instance, a campus or parking lots. The community needs a more comprehensive UAV benchmark in unconstrained scenarios to further boost research on related tasks.

To this end, we construct a large-scale, challenging UAV Detection and Tracking (UAVDT) benchmark (about 80,000 representative frames selected from 10 hours of raw video) for 3 important fundamental tasks, i.e., object DETection (DET), Single Object Tracking (SOT) and Multiple Object Tracking (MOT). Our dataset is captured by UAVs in various complex scenarios. Since the majority of existing datasets focus on pedestrians, the objects of interest in our benchmark are vehicles, as a supplement. Moreover, these frames are manually annotated with bounding boxes and several useful attributes, e.g., vehicle category and occlusion. This paper makes the following contributions: (1) We collect a fully annotated dataset for 3 fundamental tasks in UAV surveillance. (2) We provide an extensive evaluation of the most recent state-of-the-art algorithms under various attributes for each task.

2 UAVDT Benchmark

The UAVDT benchmark consists of 100 video sequences, which are selected from over 10 hours of videos taken with a UAV platform at a number of locations in urban areas, representing various common scenes including squares, arterial streets, toll stations, highways, crossings and T-junctions. The average, minimum and maximum lengths of a sequence are 778.69, 83 and 2,970 frames, respectively. The videos are recorded at 30 frames per second (fps) with a resolution of \(1080\times 540\) pixels.

Table 1. Summary of existing datasets (\(1k=10^{3}\)). D=DET, M=MOT, S=SOT.

2.1 Data Annotation

For annotation, we asked over 10 domain experts to label our dataset using the vatic tool over two months. With several rounds of double-checking, annotation errors were reduced as much as possible. Specifically, about 80,000 frames in the UAVDT benchmark are annotated with over 2,700 vehicles and 0.84 million bounding boxes. Following PASCAL VOC [16], regions covering vehicles that are too small are ignored in each frame because of their low resolution. Figure 1 shows sample frames with annotated attributes from the dataset.

Fig. 1. Examples of annotated frames in the UAVDT benchmark. The three rows correspond to the DET, MOT and SOT tasks, respectively. The shooting conditions of the UAVs are presented in the lower right corner. The pink areas are ignored regions in the dataset. Different bounding box colors denote different classes of vehicles. For clarity, we only display some attributes. (Color figure online)

Based on the different shooting conditions of UAVs, we first define 3 attributes for the MOT task:

  • Weather Condition indicates the illumination when capturing videos, which affects the appearance representation of objects. It includes daylight, night and fog. Specifically, videos shot in daylight introduce interference from shadows. Night scenes, lit only by dim street lamps, offer scarcely any texture information. Meanwhile, frames captured in fog lack sharp details, so object contours vanish into the background.

  • Flying Altitude is the flying height of the UAV, which affects the scale variation of objects. Three levels are annotated, i.e., low-alt, medium-alt and high-alt. When shooting at low altitude (\(10m\sim 30m\)), more object details are captured, and an object may occupy a larger area, e.g., \(22.6\%\) of the pixels of a frame in an extreme case. When videos are collected at medium altitude (\(30m\sim 70m\)), more view angles are presented. At higher altitude (\(>70m\)), many vehicles appear with less clarity; for example, most tiny objects contain only \(0.005\%\) of the pixels of a frame, yet the number of objects can exceed one hundred (see the short calculation after this list).

  • Camera View consists of 3 object views. Specifically, front-view, side-view and bird-view mean the camera shoots along the road, from the side of objects, and from directly above them, respectively. Note that the first two views may coexist in one sequence.
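To put the altitude-related percentages above in perspective, the following back-of-the-envelope calculation uses the \(1080\times 540\) resolution reported in Sect. 2; the variable names are ours and serve only as illustration.

```python
frame_pixels = 1080 * 540             # frame resolution reported in Sect. 2 -> 583,200 px
large_object = 0.226 * frame_pixels   # ~131,800 px for the extreme low-altitude case
tiny_object = 0.00005 * frame_pixels  # ~29 px at high altitude, i.e., roughly a 5x6 box
```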

To evaluate DET algorithms thoroughly, we also label another 3 attributes, namely vehicle category, vehicle occlusion and out-of-view. Vehicle category consists of car, truck and bus. Vehicle occlusion is the fraction of the bounding box that is occluded, i.e., no-occ (\(0\%\)), small-occ (\(1\%\sim 30\%\)), medium-occ (\(30\%\sim 70\%\)) and large-occ (\(70\%\sim 100\%\)). Out-of-view indicates the fraction of the vehicle outside the frame, divided into no-out (\(0\%\)), small-out (\(1\%\sim 30\%\)) and medium-out (\(30\%\sim 50\%\)). Objects are discarded when the out-of-view ratio is larger than \(50\%\). The distribution of the above attributes is shown in Fig. 2. Within an image, objects are defined as "occluded" when covered by other objects or by obstacles in the scene, e.g., under a bridge, while objects are regarded as "out-of-view" when they are outside the image or in the ignored regions.
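To make the attribute definitions above concrete, the sketch below maps an annotated occlusion or out-of-view fraction to the corresponding label. The function names are hypothetical, and the handling of the exact threshold boundaries is our assumption.

```python
def occlusion_label(occ_ratio):
    """Map an occlusion fraction in [0, 1] to the UAVDT occlusion attribute."""
    if occ_ratio == 0.0:
        return "no-occ"
    elif occ_ratio <= 0.30:
        return "small-occ"
    elif occ_ratio <= 0.70:
        return "medium-occ"
    return "large-occ"

def out_of_view_label(out_ratio):
    """Map an out-of-view fraction to its attribute, or None if the box is discarded."""
    if out_ratio > 0.50:          # objects more than 50% outside the frame are dropped
        return None
    if out_ratio == 0.0:
        return "no-out"
    elif out_ratio <= 0.30:
        return "small-out"
    return "medium-out"
```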

Fig. 2. The distribution of attributes of both DET and MOT tasks in UAVDT.

For the SOT task, 8 attributes are annotated for each sequence, i.e., Background Clutter (BC), Camera Rotation (CR), Object Rotation (OR), Small Object (SO), Illumination Variation (IV), Object Blur (OB), Scale Variation (SV) and Large Occlusion (LO). The distribution of SOT attributes is presented in Table 2. Specifically, \(74\%\) of the videos contain at least 4 visual challenges, and among them \(51\%\) have 5 challenges. Meanwhile, \(27\%\) of the frames belong to long-term tracking videos. As a consequence, a candidate SOT method is evaluated under various harsh conditions, often within the same frame, which guarantees the objectivity and discriminative power of the proposed dataset.

Table 2. Distribution of SOT attributes, showing the number of coincident attributes across all videos. The diagonal entries denote the number of sequences with only one attribute.

Notably, our benchmark is divided into training and testing sets, with 30 and 70 sequences, respectively. The testing set consists of 20 sequences for both the DET and MOT tasks, and 50 for the SOT task. Besides, training videos are taken at locations different from the testing videos, but share similar scenes and attributes. This setting reduces the probability of overfitting to a particular scenario.

2.2 Comparison with Existing UAV Datasets

Although UAVs bring new challenges to computer vision, only limited datasets [22, 31, 39] have been published to accelerate the improvement and evaluation of various vision tasks. By exploiting the flexibility of UAV maneuvers in both altitude and the horizontal plane, Matthias et al. [31] propose a low-altitude UAV tracking dataset to evaluate the ability of SOT methods to handle relatively fierce camera movement, scale change and illumination variation; yet it still lacks variety in weather conditions and camera motions, and its scenes are much less cluttered than real circumstances. In [39], several video fragments are collected with fixed UAV cameras to analyze the behaviors of pedestrians in top-view campus scenes for the MOT task. Although the ideal viewing angle helps trackers obtain stable trajectories by narrowing down the challenges they have to meet, it also limits diversity when evaluating MOT methods. Hsieh et al. [22] present a dataset aimed at counting vehicles in parking lots. In contrast, our dataset captures videos in unconstrained areas, resulting in better generalization.

Detailed comparisons of the proposed dataset with other works are summarized in Table 1. Although our dataset is not the largest among existing datasets, it represents the characteristics of UAV videos more effectively:

  • Our dataset provides a higher object density of 10.52, compared to related works (e.g., UAV123 [31] with 1.00, Campus [39] with 0.02, DETRAC [46] with 8.64 and KITTI [19] with 5.35); see the short sanity check after this list. CARPK [22] is an image-based dataset for detecting parked vehicles, which is not suitable for visual tracking.

  • Compared to related works [22, 31, 39] that focus only on specific scenes, our dataset is collected from various scenarios under different weather conditions, flying altitudes, and camera views.
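As a rough check of the density figure above (assuming, as the comparison suggests, that object density is the average number of annotated bounding boxes per frame), the counts reported in Sect. 2.1 give approximately the same value:

```python
total_boxes = 0.84e6    # annotated bounding boxes reported in Sect. 2.1
total_frames = 80_000   # approximate number of annotated frames

density = total_boxes / total_frames
print(f"objects per frame: {density:.2f}")  # ~10.5, consistent with the reported 10.52
```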

3 Evaluation and Analysis

We run a representative set of state-of-the-art algorithms for each task. Code for these methods is either available online or obtained from the authors. All algorithms are trained on the training set and evaluated on the testing set. Interestingly, some algorithms that rank highly on other datasets may fail in our complex scenarios.

Fig. 3. Precision-Recall plot on the testing set of the UAVDT-DET dataset. The legend presents the AP score and the GPU/CPU speed of each DET method respectively.

3.1 Object Detection

Current top-performing deep object detection frameworks fall into two main categories: region-based (e.g., Faster-RCNN [37] and R-FCN [8]) and region-free (e.g., SSD [27] and RON [25]). Therefore, we evaluate these 4 detectors on the UAVDT dataset.

Metrics. We follow the protocol of the PASCAL VOC challenge [16] and compute the Average Precision (AP) score from the Precision-Recall plot to rank the DET methods. As in KITTI-D [19], the hit/miss threshold on the overlap between a detected and a groundtruth bounding box is set to 0.7.
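As a minimal illustration of the matching criterion (a sketch, not the benchmark's evaluation code), the snippet below computes the intersection-over-union of two axis-aligned boxes given in (x1, y1, x2, y2) format and applies the 0.7 hit threshold; the helper names are ours.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_hit(det_box, gt_box, thresh=0.7):
    """A detection counts as a hit when its overlap with the groundtruth reaches 0.7."""
    return iou(det_box, gt_box) >= thresh
```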

Implementation Details. We train all DET methods on a machine with an i9 7900x CPU and 64G memory, as well as an Nvidia GTX 1080 Ti GPU. Faster-RCNN and R-FCN are fine-tuned from the VGG-16 and ResNet-50 networks, respectively. We use a learning rate of 0.001 for the first 60k iterations and 0.0001 for the next 20k iterations. For the region-free methods, the batch size is 5 for the \(512\times 512\) model according to GPU capacity. For SSD, we use a learning rate of 0.005 for 120k iterations. For RON, we use a learning rate of 0.001 for the first 90k iterations, then decay it to 0.0001 and continue training for another 30k iterations. For all algorithms, we use a momentum of 0.9 and a weight decay of 0.0005.
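For reference, the training schedules above can be summarized as simple step decays. The sketch below is a hypothetical helper (not the authors' training scripts) that returns the learning rate at a given iteration under the reported settings.

```python
# Step learning-rate schedules as reported above; actual framework configs will differ.
SCHEDULES = {
    "faster_rcnn": [(60_000, 1e-3), (80_000, 1e-4)],   # 60k @ 1e-3, then 20k @ 1e-4
    "r_fcn":       [(60_000, 1e-3), (80_000, 1e-4)],
    "ssd":         [(120_000, 5e-3)],                  # 120k @ 5e-3
    "ron":         [(90_000, 1e-3), (120_000, 1e-4)],  # 90k @ 1e-3, then 30k @ 1e-4
}

def learning_rate(method, iteration):
    """Return the step-decayed learning rate at a given training iteration."""
    for end_iter, lr in SCHEDULES[method]:
        if iteration < end_iter:
            return lr
    return SCHEDULES[method][-1][1]  # keep the final rate afterwards

# Example: learning_rate("ron", 100_000) -> 1e-4
```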

Fig. 4. Quantitative comparison results of DET methods in each attribute.

Overall Evaluation. Figure 3 shows the quantitative comparison of DET methods, none of which achieves promising accuracy. For example, R-FCN obtains a \(70.06\%\) AP score even on the hard set of KITTI-D, but only \(34.35\%\) on our dataset. One likely reason is that our dataset contains a large number of small objects due to the shooting perspective, which is a difficult challenge in object detection. Another reason is that the higher altitude brings a more cluttered background.

To tackle this problem, SSD combines multi-scale feature maps to handle objects of various sizes. Yet its feature maps are mostly extracted from earlier layers, which lack sufficient semantic meaning for small objects. Improving on SSD, RON fuses more semantic information from later layers using a reverse connection, and performs well on other datasets such as PASCAL VOC [16]. Nevertheless, RON is inferior to SSD on our dataset. A possible explanation is that the later layers are so abstract that, due to their low resolution, they do not represent the appearance of small objects effectively; the reverse connection fusing the later layers may then interfere with the features in earlier layers, resulting in inferior performance. On the other hand, region-based methods offer more accurate initial locations for robust results by generating region proposals with region proposal networks. It is worth mentioning that R-FCN achieves the best result by making the unshared per-RoI computation of Faster-RCNN sharable [25].

Attribute-Based Evaluation. To further explore the effectiveness of DET methods in different situations, we also evaluate them on individual attributes in Fig. 4. For the first 3 attributes, DET methods perform better on sequences where objects have more details, e.g., low-alt and side-view. Meanwhile, the number of objects is larger and the background is more cluttered in daylight than at night, leading to worse performance in daylight. For the remaining attributes, performance drops dramatically when detecting large vehicles, as well as when handling occlusion and out-of-view. The results can be attributed to two factors. Firstly, the very limited number of training samples of large vehicles makes it hard to train a detector to recognize them; as shown in Fig. 2, trucks and buses account for less than \(10\%\) of the whole dataset. Secondly, it is even harder to detect small objects under additional interference. Much work remains to be done on small object detection under occlusion or out-of-view.

Run-time Performance. Although region-based methods obtain relatively good performance, their running speeds (i.e., \(<5\) fps) are too slow for practical applications, especially with constrained computing resources. In contrast, region-free methods save the time of region proposal generation and proceed at almost realtime speed.

Fig. 5. Quantitative comparison results of MOT methods in each attribute.

Table 3. Quantitative comparison results of MOT methods on the testing set of the UAVDT dataset. The last column shows the GPU/CPU speed. The best performer and realtime methods (\(>30\) fps) are highlighted in bold font. "−" indicates that the data is not available.

3.2 Multiple Object Tracking

MOT methods are generally grouped into online and batch-based approaches. We therefore evaluate 8 recent algorithms, including online methods (CMOT [2], MDP [50], SORT [6] and DSORT [48]) and batch-based methods (GOG [35], CEM [30], SMOT [13] and IOUT [7]).

Metrics. We use multiple metrics to evaluate MOT performance. These include identification precision (IDP) [38], identification recall (IDR), and the corresponding F1 score IDF1 (the ratio of correctly identified detections over the average number of groundtruth and computed detections), Multiple Object Tracking Accuracy (MOTA) [4], Multiple Object Tracking Precision (MOTP) [4], Mostly Tracked targets (MT, the percentage of groundtruth trajectories covered by a track hypothesis for at least \(80\%\) of their length), Mostly Lost targets (ML, the percentage of groundtruth objects whose trajectories are covered by the tracking output for less than \(20\%\)), the total number of False Positives (FP), the total number of False Negatives (FN), the total number of ID Switches (IDS), and the total number of times a trajectory is Fragmented (FM).
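To illustrate how the two summary scores combine the accumulated counts, the following simplified sketch follows the standard definitions (it is not the benchmark's evaluation code); `idtp` denotes the number of correctly identified detections under the optimal identity assignment.

```python
def mota(fp, fn, ids, num_gt):
    """Multiple Object Tracking Accuracy from total FP, FN, ID switches,
    and the total number of groundtruth boxes."""
    return 1.0 - (fp + fn + ids) / num_gt

def idf1(idtp, num_gt, num_pred):
    """IDF1: ratio of correctly identified detections (IDTP) over the
    average number of groundtruth and predicted detections."""
    return 2.0 * idtp / (num_gt + num_pred)
```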

Implementation Details. Since the above MOT algorithms are based on the tracking-by-detection framework, all 4 detection results are provided as input for the MOT task. We run the trackers on the testing set of the UAVDT dataset on a machine with an i7 6700 CPU and 32G memory, as well as an NVIDIA Titan X GPU.

Overall Evaluation. As shown in Table 3, MDP with Faster-RCNN achieves the best MOTA score of 43.0 and IDF1 score of 61.5 among all the combinations. Besides, the MOTA score of SORT with Faster-RCNN on our dataset is much lower than on other datasets, e.g., \(59.8\pm 10.3\) on MOT16 [29]. As the object density is high in UAV videos, the FP and FN values on our dataset are also much larger than on other datasets for the same algorithm. Meanwhile, IDS and FM occur more frequently. This indicates that the proposed dataset is more challenging than existing ones.

Moreover, the algorithms using only position information (e.g., IOUT and SORT) keep fewer tracklets and produce higher IDS and FM because of the absence of appearance information. GOG has the worst IDF1 despite a good MOTA because of its excessive IDS and FM. DSORT performs well on IDS among these methods, which indicates that deep features have an advantage in representing the appearance of the same target. MDP mostly achieves the best IDS and FM values because of its per-object tracker model, so its trajectories are more complete than those of the other methods, yielding a higher IDF1. Meanwhile, its FP value increases from associating more objects in complex scenes.

Attribute-Based Evaluation. Figure 5 shows the performance of MOT methods on different attributes. Most methods perform better in daylight than at night or in fog (see Fig. 5(a)), which is reasonable since objects in daylight provide clearer appearance cues for tracking. Under the other illumination conditions, object appearance is confusing, so algorithms that rely more on motion cues achieve better performance, e.g., SORT, SMOT and GOG. Notably, on the night sequences, performance is much worse even though the provided detections at night have good AP scores; objects are simply hard to track in the confusing night-time environment. In Fig. 5(b), the performance of most MOT methods increases as the flying altitude decreases. When UAVs capture videos at lower altitude, fewer objects appear in the view, which facilitates object association. In terms of Camera View, as shown in Fig. 5(c), vehicles in front-view and side-view offer more details to distinguish different targets than in bird-view, leading to better accuracy.

Besides, different detection inputs guide MOT methods to focus on different scenes. Specifically, performance with Faster-RCNN detections is better on sequences where object details are clearer (e.g., daylight, low-alt and side-view), while R-FCN detections offer more stable input for each method when sequences have other challenging attributes, such as fog and high-alt. SSD and RON offer more accurate detection candidates for tracking, so the performance of MOT methods with these detections is balanced across attributes.

Run-time Performance. Given different detection inputs, the speed of each method varies with the number of detection candidates. IOUT and SORT, which use only position information, generally proceed at faster than realtime speed, while DSORT and CMOT, which use appearance information, proceed much more slowly. As the number of objects in our dataset is huge, the speed of methods that process each object individually (e.g., MDP) declines dramatically.

Fig. 6. The precision and success plots on the UAVDT-SOT benchmark using One-pass Evaluation [49].

Table 4. Quantitative comparison results (i.e., overlap score / precision score) of SOT methods on each attribute. The last column shows the GPU/CPU speed. The best performer and realtime methods (\(>30\) fps) are highlighted in bold font. "−" indicates that the data is not available.

3.3 Single Object Tracking

The SOT field is dominated by correlation filter and deep learning based approaches [15]. We evaluate 18 recent trackers of this kind on our dataset. These trackers can be generally categorized into 3 classes based on their learning strategies and the features they use: I) correlation filter (CF) trackers with hand-crafted features (KCF [21], Staple-CA [32], and SRDCFdecon [11]); II) CF trackers with deep features (ECO [9], C-COT [12], HDT [36], CF2 [28], CFNet [43], and PTAV [17]); III) deep trackers (MDNet [33], SiamFC [5], FCNT [44], SINT [42], MCPF [53], GOTURN [20], ADNet [52], CREST [41], and STCT [45]).

Metrics. Following the popular visual tracking benchmark [49], we adopt the success plot and the precision plot to evaluate tracking performance. The success plot shows the percentage of bounding boxes whose intersection over union with their corresponding groundtruth bounding boxes is larger than a given threshold. The trackers in the success plot are ranked according to their success score, which is defined as the area under the curve (AUC). The precision plot presents the percentage of bounding boxes whose center points are within a given distance (\(0\sim 50\) pixels) of the groundtruth. Trackers in the precision plot are ranked according to their precision score, which is the percentage of bounding boxes within a distance threshold of 20 pixels.
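The two summary scores can be sketched as follows under the OTB-style definitions [49] (a simplified sketch, not the benchmark toolkit). Here `ious` and `center_errors` are assumed to be precomputed per-frame values against the groundtruth, e.g., reusing an IoU routine such as the one sketched in Sect. 3.1.

```python
import numpy as np

def success_score(ious):
    """Approximate area under the success curve: the fraction of frames whose IoU
    exceeds each threshold in [0, 1], averaged over the thresholds."""
    thresholds = np.linspace(0.0, 1.0, 21)
    curve = [(np.asarray(ious) > t).mean() for t in thresholds]
    return float(np.mean(curve))

def precision_score(center_errors, dist_thresh=20):
    """Fraction of frames whose center location error is within 20 pixels."""
    return float((np.asarray(center_errors) <= dist_thresh).mean())
```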

Implementation Details. All trackers are run on a machine with an i7 4790k CPU and 16G memory, as well as an NVIDIA Titan X GPU.

Overall Evaluation. The performance of each tracker is reported in Fig. 6. The figure shows that: (I) All the evaluated trackers perform poorly on our dataset. Specifically, even state-of-the-art methods such as MDNet achieve only a 46.4 success score and a 72.5 precision score. Compared to the best results on OTB100 [49] (i.e., a 69.4 success score and a 92.8 precision score), a significantly large performance gap emerges. Such a gap is also observed when compared to the results on UAV-123. For example, KCF achieves a success score of 33.1 on UAV-123 but only 29.0 on our dataset. These results indicate that our dataset poses new challenges for the visual tracking community, and more effort can be devoted to the real-world UAV tracking task. (II) Generally, deep trackers achieve more accurate results than CF trackers with deep features, which in turn outperform CF trackers with hand-crafted features. Among the top 10 trackers, there are 6 deep trackers (MDNet, GOTURN, SiamFC, ADNet, MCPF and CREST), 3 CF trackers with deep features (ECO, CFNet, and C-COT), and one CF tracker with hand-crafted features (SRDCFdecon).

Attribute-Based Evaluation. As presented in Table 4, the deep tracker MDNet achieves the best results on 7 out of 8 tracking attributes, which can be attributed to its multi-domain training and hard sample mining. CF trackers with deep features such as CF2 and HDT fall behind due to the lack of scale adaptation. SINT [42] does not update its model during tracking, which limits its performance. Staple-CA performs well on the SO and IV attributes, as its improved model update strategy reduces over-fitting to recent samples. Most of the evaluated methods perform poorly on the BC and LO attributes, which may be caused by the reduced discriminative ability of appearance features extracted from cluttered or low-resolution image regions.

Run-time Performance. From the last column of Table 4, we note that: (I) The top 10 most accurate trackers run far from real time even on a high-end CPU. For example, the fastest of these runs at only 11.7 fps, and the most accurate, MDNet, runs at 0.28 fps. On the other hand, the trackers that are realtime on a CPU (e.g., Staple-CA and KCF) achieve success scores of only 39.5 and 29.0, which are unacceptable for practical applications. (II) When a high-end GPU is used, only 3 out of 18 trackers (GOTURN, SiamFC and SINT) perform in real time, but again their best success score is just 45.1, which is not accurate enough for real applications. Overall, more work needs to be done to develop faster and more precise trackers.

4 Discussion

Our benchmark, derived from real-life demands, vividly samples real circumstances. Since algorithms generally perform poorly on it compared with their plausible performance on other datasets, we believe this benchmark can reveal promising research trends and benefit the community. Based on the above analysis, several research directions are worth exploring:

Realtime Issues. Running speed is a crucial measure in practical applications. Although deep learning methods surpass other methods by a large margin (especially in the SOT task), their computational requirements are very harsh for embedded UAV platforms. To achieve high efficiency, some recent methods [47, 54] develop approximate networks via pruning, compression, or low-bit representations. We expect future work to take realtime constraints into account, not just accuracy.

Scene Priors. Different methods perform best in different scenarios. When scene priors are considered in detection and tracking approaches, more robust performance can be expected. For example, MDNet [33] trains a specific object-background classifier for each sequence to handle varying scenarios, which makes it rank first on most datasets. We believe that, together with our dataset, this design may inspire more methods that deal with changeable scenes.

Motion Clues. Since appearance information is not always reliable, tracking methods gain robustness by considering motion clues. Many recently proposed algorithms pursue this direction with the help of LSTMs [24, 51], but have not yet met expectations. Given the fierce motion of both objects and background in our benchmark, it may help this research direction bear fruit in the future.

Small Objects. In our dataset, \(27.5\%\) of objects consist of fewer than 400 pixels, i.e., about \(0.07\%\) of a frame. This provides limited texture and contour information for feature extraction, which causes accuracy loss for algorithms that rely heavily on appearance. Meanwhile, methods generally reduce their time consumption by down-sampling images, which further exacerbates the situation. For instance, the DET methods mentioned above generally enjoy a \(10\%\) accuracy rise after we adjusted the parameters of the authors' provided code and settings, mainly the anchor sizes; however, their performance still does not meet expectations. We believe researchers can gain further improvements if they pay more attention to handling small objects.

5 Conclusion

In this paper, we construct a new and challenging UAV benchmark for 3 foundational visual tasks, i.e., DET, MOT and SOT. The dataset consists of 100 videos (80k frames) captured with a UAV platform in complex scenarios. All frames are annotated with manually labelled bounding boxes and 3 circumstance attributes, i.e., weather condition, flying altitude, and camera view. The SOT subset has 8 additional attributes, e.g., background clutter, camera rotation and small object. Moreover, an extensive evaluation of the most recent state-of-the-art methods is provided. We hope the proposed benchmark will contribute to the community by establishing a unified platform for evaluating detection and tracking methods in real scenarios. In the future, we expect to extend the current dataset with more sequences for other high-level computer vision tasks, richer annotations, and more baselines for evaluation.