Article

No-Reference Quality Assessment of Extended Target Adaptive Optics Images Using Deep Neural Network

1 Key Laboratory of Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
2 School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
3 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
4 University of Chinese Academy of Sciences, Beijing 101408, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(1), 1; https://doi.org/10.3390/s24010001
Submission received: 20 November 2023 / Revised: 8 December 2023 / Accepted: 13 December 2023 / Published: 19 December 2023
(This article belongs to the Special Issue Sensing Technologies in Optical Image Stabilization)

Abstract

This paper proposes a supervised deep neural network model for highly efficient image quality assessment (IQA) of adaptive optics (AO) images. AO imaging systems on ground-based telescopes suffer from residual atmospheric turbulence, tracking error, and photoelectric noise, which lead to varying degrees of image degradation and make image processing challenging. Currently, assessing the quality and selecting frames of AO images depends either on traditional IQA methods or on manual evaluation by experienced researchers, neither of which is entirely reliable. The proposed network is trained using the similarity between the point spread function (PSF) of the degraded image and the Airy spot as the supervision signal, instead of relying on features of the degraded image itself as the quality label. This approach reflects the relationship between the degradation factors of the AO imaging process and the image quality and does not require analysis of specific image features or a degradation model. On simulated test data, the method achieves a Spearman's rank correlation coefficient (SRCC) of 0.97, and it was also validated on actually acquired AO images. The experimental results indicate that our method evaluates AO image quality more accurately than traditional IQA methods.

1. Introduction

Image quality assessment (IQA) algorithms aim to reproduce the human perception of image quality. On the one hand, IQA removes poor-quality images and thereby establishes a solid foundation for image post-processing and information mining (e.g., image enhancement, alignment, fusion, and recognition). On the other hand, it serves as an important metric for evaluating both system performance and image post-processing algorithms.
IQA can be divided into subjective and objective assessment in terms of methods [1]. Subjective assessment judges image quality from human perception, while objective assessment uses mathematical models to provide quantitative values [2]. Based on the accessibility of a reference, IQA models can be classified into three categories: full-reference (FR), reduced-reference (RR), and no-reference/blind (NR/B) models [3]. This paper focuses on NR-IQA methods, which can be divided into two categories: hand-crafted feature-based IQA and learning feature-based IQA [4]. Hand-crafted feature-based approaches generally use natural scene statistics (NSS) features to predict image quality scores [5], such as statistical multivariate Gaussian (MVG) models [6], generalized Gaussian distribution (GGD) models [7], asymmetric generalized Gaussian distribution (AGGD) models [8], multi-channel fused image features [9], and k-nearest neighbor (KNN) based models [10]. NSS-based NR-IQA methods are still being developed; for example, Wang et al. extracted NSS features in the logarithmic domain [11], and Dendi et al. used NSS features to assess video quality [12]. However, the capability of hand-crafted feature-based approaches is limited, and they cannot express the high-level semantic information needed for NR-IQA [1].
With the great success of deep learning in computer vision tasks, learning-based approaches have also been proposed for no-reference/blind image quality assessment. Kang et al. first proposed a convolutional neural network-based NR-IQA method [13]. Hou et al. constructed an image quality assessment model using a fully connected neural network, with NSS features used as the input to train the network [14]. Ma et al. proposed the MEON model, a multitask end-to-end deep convolutional network, to simultaneously learn the distortion type and the image quality [15]. Zhu et al. introduced meta-learning into the image quality assessment domain to address generalization problems and designed the MetaIQA model for no-reference/blind image quality assessment [16]. Su et al. separated the IQA procedure into three stages: understanding the content, learning the perception rules, and predicting the quality, and they proposed a self-adaptive hyper network architecture to blindly assess image quality in the wild [17]. You et al. explored the application of the transformer to IQA [18]. Korhonen et al. proposed an RNN-based IQA model for assessing high-resolution image quality [19].
Image enhancement, generation, and recovery models have become increasingly popular, prompting the development of various methods to measure their performance. However, most of these methods cater to general image quality assessment and are less reliable when adapted to specific domains, such as adaptive optics (AO) imaging [20]. AO is extensively used in ground-based telescopes for high-resolution imaging, in laser systems, and in ophthalmology, and AO systems can correct a significant portion of wavefront aberrations [21]. Nevertheless, the imaging quality of AO systems is often adversely affected by various factors, including:
  • The wind load, mounting dithering of a telescope, and other factors lead to the overall tilt of the optical axis. The telescope tracking system cannot fully overcome the overall tilt of the optical axis, resulting in motion blur in the AO image;
  • The AO system is only capable of partially correcting for wavefront aberrations caused by atmospheric turbulence. As a result, residual higher-order aberrations may lead to diffuse blurring of the image [22].
Mainstream algorithms face the challenge of accurately capturing all of the above-mentioned distortions to ensure proper quality prediction. Furthermore, due to the limited availability of datasets and the complexity of the degradation model, no IQA method has been developed specifically for AO images. Tian et al. proposed an entropy measurement to assess AO images [23]. Guo et al. used the normalized LOG domain [24], and Wen et al. applied the Sobel approach to assess the quality of images restored after the same number of blind deconvolution iterations [25]. However, AO images from ground-based telescopes not only undergo more complex degradation but also differ significantly from natural images in terms of content:
  • Since AO images are grayscale, color features cannot be used to assess them.
  • AO images are affected by blurred halos resulting from motion blur and atmospheric turbulence.
  • The large black or gray backgrounds present in AO images contain little semantic information.
Furthermore, due to the complex and specific nature of the AO system, numerous factors degrade AO images, so a single image characteristic or distortion model cannot accurately assess AO image quality. The AO image assessment studies mentioned above rely on image features alone, without considering the special characteristics of AO images or exploiting physical information about the AO imaging process.
This study proposes an efficient IQA method for extended target AO images, which utilizes a deep neural network to learn the relationship between the degradation factors in the AO imaging process and the image quality. Unlike traditional methods, it assesses image quality through features of the PSF, which characterizes the degradation process of AO images, rather than relying on image features. The method is more objective than manual assessment and better suited to the particularities of AO images than assessment methods based on image features. Additionally, the AO extended target image dataset created in this paper takes the imaging process of AO images into account.

2. Methods

In this study, diverse 2D rendered images were produced by adjusting the lighting, pose, and distance of various 3D models, which is illustrated in Figure 1. The 2D rendered images were then degraded by accounting for the influence of atmospheric turbulence and motion blur due to system-induced mechanical jitter, resulting in the creation of 400,000 AO simulated images. Next, we generated labels for each of the degraded images based on their PSF. Lastly, the simulated images and their respective quality labels were combined to develop the AO extended target image quality assessment dataset used to train the network in this study.

2.1. Degraded AO Images for Extended Target

Firstly, the 3D models of extended objects were constructed according to the collected data from the Internet. Then, we developed a Blender script to obtain different 2D projection images by changing the lighting direction and intensity of the 3D model, the relative angle between the 3D model and the virtual camera, and the distance between the 3D model and the virtual camera.
In this paper, the effects of the residual atmospheric turbulence and the motion blur caused by the system mechanical jitter are considered in the AO degradation procedure [26]:
$$ g(x,y) = f(x,y) \ast h(x,y) \ast t(x,y) \tag{1} $$
In Equation (1), $f(x,y)$ represents a 2D rendered image, such as those shown in Figure 2; $g(x,y)$ is the degraded image, examples of which are shown in Figure 3; $h(x,y)$ is the PSF of atmospheric turbulence; $t(x,y)$ is the motion blur function; and $\ast$ denotes convolution. The CCD and stray light in the optical path introduce background noise, which further degrades the images, but since this noise can be removed by filtering, it is not considered in this paper [27]. A large number of AO degradation simulation images can be generated by varying the parameters of Equation (1). Under isoplanatic incoherent imaging conditions, the atmospherically degraded wavefront is completely determined by the wavefront phase, so simulating the PSF of atmospheric degradation is equivalent to numerically simulating the phase screen of atmospheric turbulence. Therefore, we simulated different types of atmospheric turbulence by randomly generating 5 sets of wavefront phases according to the Kolmogorov spectrum [28] at each RMS value. In general, the larger the RMS of the wavefront, the more severe the turbulence. Because real AO images are partially corrected by the AO system, the residual turbulence and motion blur are not too severe. To better fit the real captured images, we set the RMS in the range [0.1, 0.5] after analyzing images captured by the actual AO system, and the motion blur scale was set to between [3, 9] pixels. Accordingly, for each rendered image, we randomly selected 5 RMS values of the wavefront in [0.1, 0.5] and 5 motion blur lengths in [3, 9] pixels; for each motion blur length, five directions were randomly selected in [0°, 360°]. Combining the random wavefronts, wavefront RMS values, and motion blur directions and lengths yields 625 degraded images for each rendered image. Figure 3 shows some of the degraded images.
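To make the degradation model concrete, the short Python sketch below follows Equation (1): a rendered image f is convolved with an atmospheric-turbulence PSF h and a linear motion blur kernel t. This is only a minimal illustration; the turbulence PSF is assumed to come from a separate Kolmogorov phase-screen simulation (not shown), and the kernel size, sampling density, and function names are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_blur_kernel(length, angle_deg, size=15):
    """Line-shaped kernel approximating the motion blur function t(x, y)."""
    kernel = np.zeros((size, size))
    center = size // 2
    theta = np.deg2rad(angle_deg)
    # Sample points densely along a line segment of the given length and angle.
    for r in np.linspace(-length / 2.0, length / 2.0, num=10 * size):
        col = int(round(center + r * np.cos(theta)))
        row = int(round(center + r * np.sin(theta)))
        if 0 <= row < size and 0 <= col < size:
            kernel[row, col] = 1.0
    return kernel / kernel.sum()

def degrade(rendered, turbulence_psf, blur_length, blur_angle_deg):
    """Equation (1): g = f * h * t (background noise neglected)."""
    t = motion_blur_kernel(blur_length, blur_angle_deg)
    total_psf = fftconvolve(turbulence_psf, t, mode="same")   # h * t
    degraded = fftconvolve(rendered, total_psf, mode="same")  # f * (h * t)
    return np.clip(degraded, 0.0, None), total_psf
```

The returned total_psf corresponds to the quantity $h(x,y) \ast t(x,y)$ used in Section 2.2, where it is compared against the Airy spot to produce the quality label.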

2.2. Labels for Simulation AO Images

The PSF, defined as the response of an imaging system to a point light source, is an important indicator to measure the quality of an imaging system. The PSF is a comprehensive representation of the residual aberrations in optical systems.
If the distortion effects of atmospheric turbulence and the imaging system are not considered, an ideal point source of light produces a Fraunhofer diffraction pattern after being imaged by an optical system. Since the apertures of most optical systems are circular, the image appears as a bright central spot surrounded by concentric rings of decreasing brightness, known as an Airy spot, as shown in the first row of Figure 4. However, because light is distorted during transmission, the image actually formed deviates from the shape and energy distribution of the Airy spot; this actual response is the PSF. The point spread function is related to both the imaging aberrations and the diffraction effects of the optical system, making it an objective and reliable metric for evaluating the imaging quality of an optical system [27]. Based on the aforementioned optical imaging theory, the image formed by an optical system is the convolution of each point in the object with the corresponding PSF. Therefore, the PSF is an important parameter for assessing the degradation of an image: a PSF closer to an Airy disk indicates a lower degree of image degradation and better image quality. Figure 4 presents the differences in the obtained images under different PSFs. As depicted in Equation (1), we consider $h(x,y) \ast t(x,y)$ as the PSF of the degraded image. The objective quality score of the image is determined using the normalized correlation coefficient [29] between this PSF and the Airy spot.
$$ s = \frac{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(PSF(i,j)-\overline{PSF}\right)\left(Airy(i,j)-\overline{Airy}\right)}{\sqrt{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(PSF(i,j)-\overline{PSF}\right)^{2}}\,\sqrt{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(Airy(i,j)-\overline{Airy}\right)^{2}}} \tag{2} $$
In Equation (2), $m$ and $n$ represent the height and width of the PSF and the Airy pattern, $\overline{PSF}$ denotes the mean value of the PSF, and $\overline{Airy}$ denotes the mean value of the Airy pattern.
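A minimal sketch of the label computation in Equation (2), assuming the degradation PSF and an Airy spot sampled on the same m × n grid are already available as NumPy arrays (the function name is an illustrative choice):

```python
import numpy as np

def quality_label(psf, airy):
    """Normalized correlation coefficient between the degradation PSF and the
    Airy spot, used as the quality label s in Equation (2)."""
    psf_c = psf - psf.mean()
    airy_c = airy - airy.mean()
    numerator = np.mean(psf_c * airy_c)
    denominator = np.sqrt(np.mean(psf_c ** 2)) * np.sqrt(np.mean(airy_c ** 2))
    return numerator / denominator
```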
To further verify the effectiveness of PSF-based label generation for AO images, the same ideal image was degraded with various degrees of atmospheric turbulence and motion blur.
Figure 5 depicts 16 images, numbered from left to right and from top to bottom, and Equation (2) provides the scores presented in Figure 6. The label values decrease from left to right as the motion blur increases, and the quality of the images in Figure 5 declines accordingly. Similarly, the label values decrease from top to bottom as the atmospheric turbulence increases, and the image clarity in Figure 5 decreases accordingly. As the PSF becomes more dispersed and distorted, both the image quality and the label value decline more significantly. The proposed PSF-based labeling method therefore accurately reflects the degree of AO image degradation and is useful for generating quality labels.
The distribution of the quality labels for the degraded AO images is shown in Figure 7 and approximately conforms to a normal distribution. The x-axis of Figure 7 represents the PSF-based quality label values of the simulated images, and the y-axis represents the counts corresponding to those scores. The dataset is randomly divided into training, validation, and testing sets in a 4:1:1 ratio, and the three sets are independently and identically distributed.
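For concreteness, such a random 4:1:1 split can be produced as in the sketch below; the seed and function name are arbitrary choices made here for illustration.

```python
import numpy as np

def split_indices(n_images, seed=0):
    """Randomly split sample indices into training/validation/testing sets in a 4:1:1 ratio."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_images)
    n_train = (4 * n_images) // 6
    n_val = n_images // 6
    return indices[:n_train], indices[n_train:n_train + n_val], indices[n_train + n_val:]
```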

2.3. Network Model

Given that the PSF remains unknown during actual image acquisition and may be challenging to compute [30], reliance on the deep network’s feature representation ability is necessary for analyzing the relationship between the PSF-based image quality score and the input image. The architecture of the proposed AO extended target IQA network is illustrated in Figure 8. It comprises an input adaptive module, a multi-scale feature extraction module, and a quality prediction network.

2.3.1. Input Adaptive Module

To improve the accuracy of the network and reduce the loss caused by downsampling, we pass the image through an input adaptive module before feeding it into the multi-scale feature extraction module. In this paper, the image size is 512 × 512; directly downsampling it to 224 × 224 results not only in information loss but also in alteration of the image quality itself. Additionally, AO images are single-channel (grayscale). We therefore apply the SpaceToDepth [31] and DICEUnit [30] operations to the image to reduce the quality loss caused by direct downsampling and to fuse image features with channel-dimension convolutions. The SpaceToDepth operation moves data from the spatial dimensions (width and height) into the depth dimension (channels). The DICEUnit extracts image features along three dimensions (channel, width, and height), and the features from the three branches are fused as the output of this structure. In this paper, the number of channels is increased through the SpaceToDepth operation, and the features from different channels are fused using the DICEUnit to reduce the loss caused by image sampling. The image data then pass through a residual block and two 3 × 3 convolutional layers before being sent to the multi-scale feature extraction module.
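The PyTorch sketch below conveys the flavor of this module under stated simplifications: SpaceToDepth is realized with pixel_unshuffle, the DICEUnit is replaced by a simplified three-branch, dimension-wise stand-in rather than the original DiCENet block, and the residual block plus two 3 × 3 convolutions are reduced to a plain convolutional tail. Channel counts and names are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputAdaptiveModule(nn.Module):
    """Sketch of the input adaptive module for single-channel 512 x 512 AO images."""

    def __init__(self, block_size=2, out_channels=64):
        super().__init__()
        in_ch = block_size ** 2  # grayscale input: 1 channel -> block_size^2 channels
        self.block_size = block_size
        # Simplified DICE-style fusion: channel-, height-, and width-oriented branches.
        self.branch_c = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.branch_h = nn.Conv2d(in_ch, in_ch, (3, 1), padding=(1, 0))
        self.branch_w = nn.Conv2d(in_ch, in_ch, (1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(3 * in_ch, out_channels, 1)
        self.tail = nn.Sequential(  # stand-in for the residual block + two 3 x 3 convs
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
        )

    def forward(self, x):  # x: (N, 1, 512, 512)
        x = F.pixel_unshuffle(x, self.block_size)  # SpaceToDepth
        x = torch.cat([self.branch_c(x), self.branch_h(x), self.branch_w(x)], dim=1)
        x = self.fuse(x)
        return self.tail(x)
```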

2.3.2. Multi-Scale Feature Extraction Module

To characterize various types of degradation, we employ convolutions to extract features at multiple scales, ranging from local to global [32], which has achieved excellent results in [17,33]. We concatenate feature maps at various scales as the input to the quality prediction network, as outlined in Equation (3) [33]. Specifically, we use ResNet50 as the backbone for this task. ResNet50 consists of four residual stages, and we gather feature maps from these four stages. We adopt ResNet50 as the backbone for two reasons: first, it offers strong feature representation and has achieved remarkable results in image processing; second, it maintains a proper balance between accuracy and speed, achieving high prediction performance while consuming limited computing resources.
$$ h_s = \mathrm{concat}(s_1, \ldots, s_j, \ldots, s_n) \tag{3} $$
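One plausible reading of this design is sketched below: the four ResNet50 stages produce feature maps $s_1$ to $s_4$, each of which is globally average-pooled and concatenated into $h_s$ as in Equation (3). The stem that adapts the input adaptive module's output to the backbone, the per-stage pooling before concatenation, and the handling of pretrained weights are assumptions not fully specified in the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiScaleFeatures(nn.Module):
    """Sketch: concatenate pooled feature maps from the four ResNet50 stages."""

    def __init__(self, in_channels=64):
        super().__init__()
        backbone = resnet50(weights=None)  # pretrained weights could be loaded here
        # Replace the stem so the backbone accepts the input adaptive module's output.
        self.stem = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        self.stages = nn.ModuleList(
            [backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.stem(x)
        pooled = []
        for stage in self.stages:
            x = stage(x)                              # s_j feature map
            pooled.append(self.pool(x).flatten(1))    # pooled descriptor of s_j
        return torch.cat(pooled, dim=1)               # h_s = concat(s_1, ..., s_4)
```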

2.3.3. Quality Prediction Network

To map learned multi-scale image features to a quality score, we use a small and simple quality prediction network, which consists of a fully connected layer. We deploy a sigmoid function as the activation function and use the mean square error as the loss function. Specifically, after the data output from the multi-scale feature extraction module is subjected to average pooling, it goes through FC (2048) and obtains the final predicted quality score.
We implemented the proposed algorithm using the PyTorch deep learning framework, and training was performed on a single NVIDIA GeForce RTX 3090 GPU. To facilitate training, the network was assigned a small initial learning rate of 1 × 10−3, which was then decayed over the course of training according to Equation (4), where $e$ is the training epoch index, $T$ was set to 2, and $\alpha$ was set to 0.8. We used the Adam optimizer with a weight decay of 5 × 10−4 to train our model for 500 epochs with a batch size of 32.
$$ \eta = \eta_{0}\,\alpha^{e/T} \tag{4} $$
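The sketch below wires these choices together: a single fully connected layer followed by a sigmoid as the quality prediction head, MSE loss, Adam with the stated weight decay, and a learning rate recomputed each epoch. The FC input dimension, the reading of Equation (4) as $\eta = \eta_0\,\alpha^{e/T}$ with $e$ the epoch index, and the placeholder data loop are assumptions rather than confirmed details of the authors' code.

```python
import torch
import torch.nn as nn

class QualityHead(nn.Module):
    """Quality prediction network: one fully connected layer with a sigmoid activation."""

    def __init__(self, in_features=2048):  # FC(2048) as described in the paper
        super().__init__()
        self.fc = nn.Linear(in_features, 1)

    def forward(self, pooled_features):
        return torch.sigmoid(self.fc(pooled_features)).squeeze(1)

model = QualityHead()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)

eta0, alpha, T = 1e-3, 0.8, 2
for epoch in range(500):
    lr = eta0 * alpha ** (epoch / T)  # Equation (4), assuming e is the epoch index
    for group in optimizer.param_groups:
        group["lr"] = lr
    # ... iterate over the training loader (batch size 32), compute the MSE loss, and step ...
```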

2.4. Metrics

The prediction performances were evaluated based on the Spearman rank order correlation coefficient (SRCC), the Pearson correlation coefficient (PLCC), and the root-mean-square error (RMSE) between the predicted and ground truth image quality scores. We chose these evaluation measures as they are commonly used to evaluate image quality assessment of natural images [34].
The PLCC describes the linear correlation between two sets of data, with values ranging from −1 to 1. A PLCC of zero indicates that the two sets of data are uncorrelated, while a PLCC of 1 or −1 indicates a complete positive or negative correlation, respectively.
$$ \mathrm{PLCC} = \frac{\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i-\bar{x})^{2}}\,\sqrt{\sum_{i=1}^{N}(y_i-\bar{y})^{2}}} \tag{5} $$
The SRCC measures the monotonic correlation between two sets of variables by comparing their ranks, without making assumptions about the distributions of the original variables. The SRCC ranges from −1 to 1, with values of larger magnitude indicating a stronger correlation between the two sets of data.
$$ \mathrm{SRCC} = 1 - \frac{6\sum_{i=1}^{N} d_i^{2}}{N\,(N^{2}-1)} \tag{6} $$
The RMSE indicates how dissimilar the predicted and label values are from each other.
$$ \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^{2}} \tag{7} $$
In Equations (5)–(7) [34], $N$ is the number of images, $x_i$ is the score predicted by the IQA algorithm for the $i$-th image, $y_i$ is the label value of the $i$-th image, and $d_i$ is the difference between the rank of $x_i$ and the rank of $y_i$.
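These three metrics can be computed directly with NumPy and SciPy, as in the sketch below (the function name is an illustrative choice):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(predicted, labels):
    """Compute PLCC, SRCC, and RMSE between predicted scores and quality labels."""
    predicted = np.asarray(predicted, dtype=float)
    labels = np.asarray(labels, dtype=float)
    plcc, _ = pearsonr(predicted, labels)                 # Equation (5)
    srcc, _ = spearmanr(predicted, labels)                # Equation (6)
    rmse = np.sqrt(np.mean((predicted - labels) ** 2))    # Equation (7)
    return {"PLCC": plcc, "SRCC": srcc, "RMSE": rmse}
```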

3. Results

To assess the viability of the proposed method, three representative NR-IQA methods, i.e., Tenengrad [25], LOG [24], and HyperIQA [17], were used for comparison on both simulated and real data. The Tenengrad and LOG methods are based on NSS image features and have been used for AO image assessment: Tenengrad extracts gradient values in the horizontal and vertical directions with the Sobel operator and uses their sum of squares as the assessment function, while LOG normalizes the input image into the LOG domain. HyperIQA is a learning-based IQA method that has shown superior results on real distorted images. We used the default configurations provided by the authors when comparing the methods.
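For reference, a common formulation of the Tenengrad measure used as a baseline here is sketched below; published implementations vary (some add a gradient-magnitude threshold), so this is only one plausible variant.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(image):
    """Tenengrad sharpness measure: sum of squared Sobel gradient magnitudes."""
    img = image.astype(float)
    gx = sobel(img, axis=1)  # horizontal gradient
    gy = sobel(img, axis=0)  # vertical gradient
    return float(np.sum(gx ** 2 + gy ** 2))
```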
We conducted tests separately on simulated images and real captured images. The degradation parameters used for the simulated images were consistent with those of the real AO system, whose parameters are as follows:
  • Focal length: 840 mm (for a small field of view);
  • Wavelength: 1.0–1.3 μm filter, with a center wavelength of 1.15 μm;
  • Spectral range: 300 nm;
  • CCD pixel size: 15 μm;
  • Aperture diameter: 36 mm;
  • System diffraction limit half-width: 1.8 pixels.
These parameters are consistent with those of the algorithm used to generate the degraded simulation images.

3.1. Simulation Images

In Figure 9, the label value is plotted on the ordinate of every sub-plot, and a higher label value corresponds to higher image quality. The normalized score of each method is plotted on the abscissa, where a higher score indicates better image quality according to that method. Although the LOG and Tenengrad results in Figure 9 generally follow the trend of image quality changes, they produce numerous discrete outlying points and therefore inaccurate results; in particular, they assign high scores to images with low label values, which impairs fine-grained discrimination. HyperIQA, as shown in Figure 9, is not well suited to AO images because of an even larger number of discrete points. In contrast, our method, as depicted in Figure 9, exhibits a near-proportional relationship between the predicted score and the label value with far fewer discrete points, confirming its effectiveness on the simulation images.
It can be observed from Table 1 that the IQA for the AO images presented in this study outperforms other methods on the testing dataset.

3.2. Real Images

To test the accuracy and stability of the methods, we built an AO platform in the laboratory; its specific parameters are described in Section 3. We used this platform to collect a set of images. Specifically, the real target shown in Figure 10 was placed at the focal position of the AO platform and illuminated by an external light source. We used an atmospheric turbulence simulator, which generates turbulence by heating the air, and obtained AO images of varying quality.
To compare the simulated degradation presented in this study with images acquired by the AO system, we degraded the target image in Figure 10 according to Equation (1) and compared the results with the images obtained by the AO system. As shown in Figure 11, the simulated images closely resemble the actual images, both visually and physically.
We controlled the degree of correction for the AO system by changing the correction voltage applied by the corrector. The recorded RMS value of the wavefront served as the metric for evaluating the system correction effectiveness. Finally, we randomly selected 16 images from the acquired images and sorted them according to the degree of correction from the weakest to the strongest. In Figure 12, from left to right and then down, the RMS value increases.
We compared the proposed method and the IQA methods of Tenengrad, LOG, and HyperIQA, respectively, against the degree of correction in the AO system.
The method proposed in this paper is designed for AO images, and its results match the degree of correction of the AO system, whereas the Tenengrad, LOG, and HyperIQA results shown in Figure 13 match the degree of correction only in local regions.

4. Discussion

From the tests on the simulated and real images, we can see that although the Tenengrad and LOG methods, which are based on NSS image features, have been applied to AO image quality assessment, their performance is poor. These two methods extract only one type of image feature as the evaluation criterion, whereas the imaging process of AO images is complicated, with multiple factors degrading the image quality, so a single image feature cannot adequately reflect the image quality or the system's correction state. HyperIQA, which performs well on naturally distorted images, also cannot be adapted well to AO images: although it exploits the powerful learning ability of neural networks, the quality evaluation of natural images is sensitive not only to the degradation model but also to the image content, while AO images have simple, uniform content and the target shape is irrelevant to the imaging quality and the system's correction capability. The method proposed in this paper maps the multi-scale features of the image onto the PSF, which reflects the imaging process, and can therefore better reflect the imaging quality of the AO system. We implemented the proposed algorithm on a computer with an Intel Core i9-10900X CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 3090 GPU. The computation time of our method for 10,000 AO images is 165 s, corresponding to an IQA speed of approximately 61 FPS, which meets the online IQA requirements for AO extended target images.

5. Conclusions

This paper introduces an IQA network designed to establish the relationship between degradation factors in the AO imaging process and the image quality of AO extended target images. The AO extended target image dataset, specifically created for this research, takes into account the critical factors affecting image quality, such as turbulence and jitter. The results of this study indicate that the PSF of the degraded image serves as a superior quality metric for AO images in comparison to image features. The dataset construction process uncovers the association between the PSF, the AO system’s correction capability, and the image quality. Nonetheless, given that the PSF remains unknown during actual image acquisition and may be challenging to compute, reliance on the deep network’s feature representation ability is necessary for analyzing the relationship between the PSF-based image quality score and the input image. The SRCC on the test data was 0.97, and our method was validated on actual acquired AO images as well. Furthermore, our approach addresses the limitations of AO IQA and image selection, offering an efficient solution for preserving system resources. Additionally, the method supplies a solid foundation for image post-processing and evaluation criteria. The AO image assessment results can aid in optimizing system parameters, thereby enhancing system performance. In future work, we plan to improve the network structure and expand the dataset, allowing the application of the method proposed in this study to be extended to assessing image quality affected by atmospheric turbulence.

Author Contributions

Conceptualization, G.G. and H.B.; methodology, G.G., L.L. and H.C.; software, G.G. and N.J.; validation, G.G., S.L., Q.B. and H.B.; formal analysis, G.G. and C.R.; investigation, G.G., L.L., C.R. and H.B.; resources, H.C.; data curation, G.G., L.L. and N.J.; writing—original draft preparation, G.G. and H.C.; writing—review and editing, G.G., L.L., C.R., Q.B., S.L. and H.B.; visualization, G.G. and L.L.; supervision, C.R. and H.B.; project administration, N.J.; funding acquisition, C.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Laboratory Innovation Foundation of the Chinese Academy of Science (No. YJ22K002).

Data Availability Statement

The data presented in this study are available upon request from the first author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, B.; Wang, Z.; Liao, Y.; Lin, X. HVS-based structural similarity for image quality assessment. In Proceedings of the 2008 9th International Conference on Signal Processing, Beijing, China, 26–29 October 2008; pp. 1194–1197. [Google Scholar]
  2. Wasson, V.; Kaur, B. Full Reference Image Quality Assessment from IQA Datasets: A Review. In Proceedings of the 2019 6th International Conference on Computing for Sustainable Global Development, New Delhi, India, 3–15 March 2019; pp. 735–738. [Google Scholar]
  3. Zhang, X.; Lin, W.; Huang, Q. Fine-Grained Image Quality Assessment: A Revisit and Further Thinking. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2746–2759. [Google Scholar] [CrossRef]
  4. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352. [Google Scholar] [CrossRef]
  5. Yan, J.B.; Fang, Y.M.; Liu, X.L. The review of distortion-related image quality assessment. J. Image Graph. 2022, 27, 1430–1466. [Google Scholar]
  6. Bao, K.D.; Meng, X.C.; Shao, F.; Ye, M.M.; Jin, K.J.; Peng, Z.Y. A MVG-based non-reference quality evaluation method for Pan/MS Fusion. Natl. Remote Sens. Bull. 2022, 26, 568–578. [Google Scholar] [CrossRef]
  7. Novey, M.; Adali, T.; Roy, A. A Complex Generalized Gaussian Distribution—Characterization, Generation, and Estimation. IEEE Trans. Signal Process. 2010, 58, 1427–1433. [Google Scholar] [CrossRef]
  8. Nacereddine, N.; Goumeidane, A.B. Asymmetric Generalized Gaussian Distribution Parameters Estimation based on Maximum Likelihood, Moments and Entropy. In Proceedings of the 2019 IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 5–7 September 2019; pp. 343–350. [Google Scholar]
  9. Qu, X.; Yan, J. Image Fusion Algorithm Based on Features Motivated Multi-Channel Pulse Coupled Neural Networks. In Proceedings of the 2008 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, China, 16–18 May 2008; pp. 2103–2106. [Google Scholar]
  10. Taunk, K.; De, S.; Verma, S.; Swetapadma, A. A Brief Review of Nearest Neighbor Algorithm for Learning and Classification. In Proceedings of the 2019 International Conference on Intelligent Computing and Control Systems (ICCS), Madurai, India, 15–17 May 2019; pp. 1255–1260. [Google Scholar]
  11. Wang, T.; Zhang, L.; Jia, H. An effective general-purpose NR-IQA model using natural scene statistics (NSS) of the luminance relative order. Signal Process. Image Commun. 2019, 71, 100–109. [Google Scholar] [CrossRef]
  12. Dendi, S.V.R.; Channappayya, S.S. No-reference video quality assessment using natural spatiotemporal scene statistics. IEEE Trans. Image Process. 2020, 29, 5612–5624. [Google Scholar] [CrossRef] [PubMed]
  13. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; IEEE: Piscataway, NJ, USA; pp. 1733–1740. [Google Scholar]
  14. Hou, W.; Gao, X.; Li, X. Blind image quality assessment via deep learning. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1275–1286. [Google Scholar] [PubMed]
  15. Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-end blind image quality assessment using deep neural networks. IEEE Trans. Image Process. 2018, 27, 1202–1213. [Google Scholar] [CrossRef]
  16. Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. MetaIQA: Deep Meta-Learning for No-Reference Image Quality Assessment. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 14131–14140. [Google Scholar]
  17. Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; Zhang, Y. Blindly Assess Image Quality in the Wild Guided by a Self-Adaptive Hyper Network. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3664–3673. [Google Scholar]
  18. You, J.; Korhonen, J. Transformer For Image Quality Assessment. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 1389–1393. [Google Scholar]
  19. Korhonen, J.; Su, Y.; You, J. Consumer Image Quality Prediction using Recurrent Neural Networks for Spatial Pooling. arXiv 2021, arXiv:2106.00918. [Google Scholar]
  20. Sreedher, G.; Ho, M.-L.; Smith, M.; Udayasankar, U.K.; Risacher, S.; Rapalino, O.; Greer, M.-L.C.; Doria, A.S.; Gee, M.S. Magnetic resonance imaging quality control, quality assurance and quality improvement. Pediatr. Radiol. 2021, 51, 698–708. [Google Scholar] [CrossRef] [PubMed]
  21. Jiang, W. Overview of adaptive optics development. Opto-Electron. Eng. 2018, 45, 170489. [Google Scholar]
  22. Hua, B.; Changhui, R.; Yu, T.; Libo, Z.; Hao, C.; Xiao, L. Research progress on adaptive optical image post reconstruction. Opto-Electron. Eng. 2018, 45, 170730. [Google Scholar]
  23. Tian, Y.; Rao, C.; Wei, K. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution. Chin. Astron. Astrophys. 2009, 33, 223–230. [Google Scholar]
  24. Guo, S.; Zhang, R.; Li, J.; Zou, J.; Liu, C.; Gao, W. Perceptual image quality in normalized LOG domain for Adaptive Optics image post-processing. In Proceedings of the 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, Beijing, China, 17–19 May 2015; SPIE OIT: Bellingham, WA, USA, 2015; Volume 9622. [Google Scholar]
  25. Changli, W.; Rong, X.; Tao, M.; Changhai, L. Atmosphere Turbulence Blurred Image Restoration Based on Atmosphere Coherent Length. Acta Opt. Sin. 2014, 34, 0301002. [Google Scholar] [CrossRef]
  26. Long, X.; Hua, B.; Rao, C.H.; Gao, G.; Zhou, L. Improved fast phase unwrapping algorithm based on parallel acceleration. Opto-Electron. Eng. 2020, 47, 200111. [Google Scholar]
  27. Yu, D.; Tan, H. Engineering Optics; China Machine Press: Beijing, China, 2016. [Google Scholar]
  28. Wu, J.; Yang, C.; Liu, J. Optical Transmission Theory in the Atmosphere; Beijing University of Posts and Telecommunications Press: Beijing, China, 2005; pp. 130–131. [Google Scholar]
  29. Haralick, R.M.; Shapiro, L.G. Computer and Robot Vision, Volume II; Addison-Wesley: Boston, MA, USA, 1992; pp. 316–317. [Google Scholar]
  30. Mehta, S.; Hajishirzi, H.; Rastegari, M. DiCENet: Dimension-Wise Convolutions for Efficient Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2416–2425. [Google Scholar] [CrossRef] [PubMed]
  31. Ridnik, T.; Lawen, H.; Noy, A.; Ben, E.; Sharir, B.G.; Friedman, I. TResNet: High Performance GPU-Dedicated Architecture. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2021; pp. 1399–1408. [Google Scholar]
  32. Kelishadrokhi, M.K.; Ghattaei, M.; Fekri-Ershad, S. Innovative local texture descriptor in joint of human-based color features for content-based image retrieval. SIViP 2023, 17, 4009–4017. [Google Scholar] [CrossRef]
  33. Pan, Q.; Guo, N.; Qingge, L.; Zhang, J.; Yang, P. PMT-IQA: Progressive Multi-task Learning for Blind Image Quality Assessment. arXiv 2023, arXiv:2301.01182. [Google Scholar]
  34. Badal, N.; Soundararajan, R.; Garg, A.; Patil, A. No Reference Pansharpened Image Quality Assessment Through Deep Feature Similarity. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7235–7247. [Google Scholar] [CrossRef]
Figure 1. Routine diagram of the proposed method. First, based on the residual aberration of the AO system, the projected 2D images of the expanded target’s 3D model under different conditions are degraded into simulated images. Quality scores are assigned to each frame of the simulated images according to the degradation process’s PSF to construct a dataset for training the IQA network. Finally, real images are inputted to predict their scores.
Figure 2. Generated 2D rendered images by changing the lighting direction and intensity of the 3D model, the relative angle between the 3D model and the virtual camera, and the distance between the 3D model and the virtual camera.
Figure 3. Examples of degraded images by changing the wavefront, RMS of the wavefront, and direction and length of the motion blur.
Figure 4. Rendered images are shown in the left column, PSFs in the middle column, and degraded images in the right column. The middle image in the first row is an Airy spot, and the PSFs become increasingly diffuse from top to bottom. Consequently, the quality of the corresponding degraded images becomes progressively worse.
Figure 5. AO simulation image with different degrees of degradation. The motion blur gradually increases from left to right, and the turbulence disturbance gradually increases from top to bottom.
Figure 6. Relationship between labels and degree of degradation. Figure 5 depicts 16 images numbered from left to right and from top to bottom. Equation (2) provides the score presented here.
Figure 7. The distribution of quality labels for the degraded AO images approximately conforms to normal distribution.
Figure 8. Proposed deep neural network architecture. It comprises an input adaptive module, a multi-scale feature extraction module, and a quality prediction network.
Figure 9. Different IQA methods against labels: (a) Our method (Upper left), (b) LOG (Upper right), (c) HyperIQA (Lower left), and (d) Tenengrad (Lower right). The normalized score value of the abovementioned method is plotted on the abscissa axis, where higher score values indicate better image quality based on the method.
Figure 10. Image of real target.
Figure 11. Degraded images (up) against real images (down). The images generated by this simulation are very similar to the actual images, both visually and physically.
Figure 12. Real AO images from weak to strong. We controlled the degree of correction for the AO system by changing the correction voltage applied by the corrector. Different levels of correction resulted in varying image quality.
Figure 13. Different methods against the AO correction: (a) Our method (Upper left), (b) LOG (Upper right), (c) HyperIQA (Lower left), and (d) Tenengrad (Lower right).
Table 1. Performance comparison of different IQA methods on AO images.
Methods      SRCC     PLCC     RMSE
Tenengrad    0.673    0.656    0.075
LOG          0.769    0.729    0.149
HyperIQA     0.656    0.638    0.081
Ours         0.971    0.961    0.008

Share and Cite

MDPI and ACS Style

Gao, G.; Li, L.; Chen, H.; Jiang, N.; Li, S.; Bian, Q.; Bao, H.; Rao, C. No-Reference Quality Assessment of Extended Target Adaptive Optics Images Using Deep Neural Network. Sensors 2024, 24, 1. https://doi.org/10.3390/s24010001


