Abstract
Accurate detection of heart-related diseases in echocardiography (echo) often requires determining the performance of cardiac valves or contractile events such as strain at a high temporal resolution. In high-end cart-based imaging systems, this is achieved by increasing the frame rate using specialized beamforming and imaging hardware, or by limiting the imaging field of view (FOV). In point-of-care imaging, such high frame rate imaging technology is currently unavailable. In this paper, we propose a new frame rate up-conversion technique, applied as a post-processing step during or after echo acquisition. The proposed technique takes advantage of both variational autoencoders (VAE) and generative adversarial networks (GAN), and produces realistic frames at a high frame rate that can be used to augment conventional imaging. The proposed technique is robust to variations in heart rate since its latent space is conditioned not only on the immediately preceding frames but also on the appearance of the end-diastolic and end-systolic frames. Our results show that the proposed technique can increase the frame rate by at least 5 times without any requirement for limiting the imaging FOV.
F. T. Dezaki and H. Girgis—Joint first authors.
P. Abolmaesumi and T. Tsang—Joint senior authors.
1 Introduction
Echo imaging is based on the acoustic pulse-echo measurement: an ultrasound pulse is transmitted, and echo signals are subsequently received. Temporal resolution (TR) is the ability to accurately locate moving structures at any point in time, and it is determined by the imaging frame rate (FR): more images per second improve TR. In high-end cart-based imaging systems, the frame rate is increased by using specialized beamforming and imaging hardware, or by limiting the imaging field-of-view. In mobile point-of-care imaging, given limitations in cost, memory storage, computational power, and data transmission, full field-of-view high frame rate imaging technology is currently unavailable.
Traditional 2-dimensional (2D) echo imaging typically operates at a TR of less than 100 Hz. Although these frame rates are adequate to assess cardiac morphology and certain functional aspects, they do not allow all mechanical events to be resolved, as some of them are very short-lived [1]. High frame rates make it possible to visualize rapidly moving structures (such as valves) without motion artifacts and to perform velocity and deformation analysis (e.g., tissue Doppler).
There are two sets of approaches to increase the frame rate in echocardiography. The first one is based on acquisition schemes, while the second one is based on post-processing techniques.
In the first set of approaches, several technical advances in cardiac ultrasound allow data to be acquired at a very high frame rate. The main drawback of such high frame rate data acquisition is that it typically results in image quality degradation [1] and increased hardware complexity [2]. Retrospective gating [3], plane-wave/diverging-wave imaging [4], and multi-line transmit systems [5] are among the methods used in ultrafast imaging. In point-of-care imaging, such high frame rate imaging technology is currently unavailable. In the second set of approaches, various post-processing methods have been developed to avoid the computational costs and complex hardware requirements associated with the acquisition schemes. The imaging process remains the same as traditional echo imaging with standard clinical echocardiography equipment, and frame rate up-conversion (FRUC) is performed at processing time. FRUC is a technique that increases the frame rate of a video by inserting newly generated frames into the original sequence.
Several FRUC algorithms have been proposed that use motion estimation and dictionary learning [2, 6, 7]. Deep learning has also been used for future frame prediction in computer vision [8, 9]. The most recent methods use variational autoencoders to reduce image reconstruction artifacts [10] and adversarial losses [11] to obtain more realistic results.
In this paper, we propose the first deep-learning-based solution for frame rate up-conversion in echocardiography that can be used to augment conventional imaging without the need for specialized beamforming and imaging hardware or for limiting the imaging field of view. Notably, our design is robust to variations in heart rate. The proposed technique takes advantage of both variational autoencoders (VAE) and generative adversarial networks (GAN), and conditions the latent space of the VAE by taking into account not only the immediately preceding frames but also the appearance of the end-diastolic and end-systolic frames. Using data from 3,112 patient studies, we demonstrate that the proposed technique can increase the frame rate by 5 times without compromising the imaging FOV, and can generate realistic images that are visually indistinguishable from clinically acquired echo data.
2 Methods
We start by explaining how our model generates new echo cine frames, before detailing the training procedure. The future frame \(\hat{\mathbf{x}}_{t}\) is synthesized based on a latent variable \(\mathbf{z}_{t-1}\) and the previous frame \(\hat{\mathbf{x}}_{t-1}\). This process is shown in the red box in Fig. 1. The latent variable \(\mathbf{z}_{t-1}\) is sampled from a prior distribution \(p(\mathbf{z}_{t-1})\) that is learned during the training procedure. The previous frame \(\hat{\mathbf{x}}_{t-1}\) can be either a ground-truth frame (for the initial frames) or the last predicted frame. The recurrent generator network G predicts a sequence of future frames \(\hat{\mathbf{x}}_{1:T}\) using convolutional Long Short-Term Memory (LSTM) units [12]. As shown in Fig. 2, the predicted pixel-space transformations between the current frame and the next frame are convolved with the input frame to generate the next frame. The training procedure is illustrated in the black box in Fig. 1 and discussed in detail in the following sections.
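As a concrete illustration, the inference-time generation loop can be sketched as follows (a minimal PyTorch-style sketch; the generator interface, latent dimensionality, and state handling are illustrative assumptions, not the authors' implementation):

```python
import torch

def upsample_cine(generator, last_frame, num_future, z_dim=8):
    """Generate num_future frames starting from the last acquired frame.

    generator : callable taking (frame, z, state) and returning (next_frame, new_state)
    last_frame: tensor of shape (B, 1, H, W)
    """
    frames = []
    prev, state = last_frame, None               # recurrent (ConvLSTM) state across steps
    for _ in range(num_future):
        # z_{t-1} drawn from the prior; a unit Gaussian is used here for illustration
        z = torch.randn(prev.size(0), z_dim)
        prev, state = generator(prev, z, state)  # predict x_hat_t and update the state
        frames.append(prev)
    return frames
```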
2.1 Variational Autoencoders
To address the challenge of mapping from a high-dimensional input to a high-dimensional output distribution, it is helpful to learn a low-dimensional latent code that represents aspects of the possible outputs not contained in the input image. Intuitively, the latent codes encapsulate any ambiguous or stochastic events that might affect the future. The predictions are conditioned on a set of c context frames, \(\mathbf{x}_{t-c}, \ldots, \mathbf{x}_{t-1}\) (\(c=1\) for conditioning on a single frame). Our goal is to sample from \(p(\mathbf{x}_{t} \mid \mathbf{x}_{t-c:t-1}, \mathbf{z}_{t-c:t-1})\), which is intractable because it involves marginalizing over the latent variables. We instead maximize the variational lower bound, as in the variational autoencoder [13]. To encode the transitional information between consecutive frames, the encoder E is conditioned on \(\mathbf{x}_{t-1}\) and \(\mathbf{x}_{t}\). Moreover, to encode the volume changes of the cardiac chambers during a cycle, the encoder is also conditioned on the end-diastolic (ED) and end-systolic (ES) frames. This is a conditional variant of the variational autoencoder, which embeds ground-truth frames in the latent code \(\mathbf{z}_{t-1}\). During training, the latent code is sampled from a Gaussian distribution \(\mathcal{N}(\mu_{\mathbf{z}_{t-1}}, \sigma^2_{\mathbf{z}_{t-1}})\) using the reparameterization trick [13]. The generator is then trained with a reconstruction loss between the predicted and ground-truth frames.
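In the \(\ell_1\) form commonly used for stochastic adversarial video prediction [11] (the exact norm is assumed here rather than taken from the original derivation), the reconstruction loss can be written as

$$\mathcal{L}_{R}(G, E) = \mathbb{E}_{\mathbf{x}_{1:T},\, \mathbf{z}_{t-1} \sim q}\left[\sum_{t=1}^{T}\left\Vert \mathbf{x}_{t} - \hat{\mathbf{x}}_{t}\right\Vert_{1}\right].$$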
A regularization term encourages the approximate posterior to remain close to the prior distribution.
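With the conditioning described above, a standard form of this term is

$$\mathcal{L}_{KL} = \mathbb{E}_{\mathbf{x}_{1:T}}\left[\sum_{t=1}^{T} D_{KL}\Big(q\big(\mathbf{z}_{t-1} \mid \mathbf{x}_{t-1}, \mathbf{x}_{t}, \mathbf{x}_{ED}, \mathbf{x}_{ES}\big) \,\Big\Vert\, p(\mathbf{z}_{t-1})\Big)\right].$$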
2.2 Generative Adversarial Networks
We can encourage our model to generate sharper and more realistic frames with the help of GANs. Given a discriminator network D that is trained to distinguish generated videos \(\hat{\mathbf{x}}_{1:T}\) from real videos \(\mathbf{x}_{1:T}\), the generator can be trained to match the distribution of real echo cines using the binary cross-entropy loss.
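In its standard form (with \(\hat{\mathbf{x}}_{1:T}\) generated from latent codes sampled from the prior), this adversarial loss reads

$$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{\mathbf{x}_{1:T}}\big[\log D(\mathbf{x}_{1:T})\big] + \mathbb{E}_{\hat{\mathbf{x}}_{1:T}}\big[\log\big(1 - D(\hat{\mathbf{x}}_{1:T})\big)\big].$$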
2.3 Complementary Effect of VAE and GAN
GAN models are capable of generating natural-looking videos under the guidance of learned discriminator networks. However, GANs suffer from mode collapse [14], which can lead the generator to produce a limited variety of samples by finding the images that are most realistic from the discriminator's perspective; in other words, \(\hat{\mathbf{x}}\) becomes independent of \(\mathbf{z}\). On the other hand, VAEs encourage the latent variables to be meaningful so that they can make accurate predictions at training time. However, the latent variables used in VAEs are encodings of the ground-truth images, unlike GANs, which are trained with completely random variables. Moreover, the discriminator D does not see results sampled from the prior during training. To combine both approaches (shown in Fig. 1), another discriminator network \(D_{VAE}\) can be introduced to improve the performance of the generator [11]. Note that the same generator network with shared weights is used at every time step. The latent variables in this approach are sampled from the VAE's latent distribution \(q(\mathbf{z}_{t-1} \mid \mathbf{x}_{t-1}, \mathbf{x}_{t})\).
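Analogously to \(\mathcal{L}_{GAN}\), the corresponding adversarial term can be written as

$$\mathcal{L}_{GAN}^{VAE}(G, E, D_{VAE}) = \mathbb{E}_{\mathbf{x}_{1:T}}\big[\log D_{VAE}(\mathbf{x}_{1:T})\big] + \mathbb{E}_{\hat{\mathbf{x}}_{1:T},\, \mathbf{z}_{t-1} \sim q}\big[\log\big(1 - D_{VAE}(\hat{\mathbf{x}}_{1:T})\big)\big].$$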
Therefore, the final objective of the echo cine series prediction combines the reconstruction, regularization, and adversarial terms.
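Following the conditional VAE-GAN formulation of [11], it can be written as

$$G^{*}, E^{*} = \arg\min_{G, E}\,\max_{D,\, D_{VAE}} \; \lambda_{R}\,\mathcal{L}_{R} + \lambda_{KL}\,\mathcal{L}_{KL} + \mathcal{L}_{GAN} + \mathcal{L}_{GAN}^{VAE},$$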
where \(\lambda _{R}\) and \(\lambda _{KL}\) control the relative importance of each term.
2.4 Network Architecture
Figure 2 depicts our generator network G. The network is inspired by the convolutional dynamic neural advection (CDNA) architecture proposed in [8]. The sequence of future frames is predicted by feeding in the latent variable \(\mathbf{z}_{t-1}\) and the previous frame \(\hat{\mathbf{x}}_{t-1}\) (either the ground-truth frame or the previously predicted one). The latent code is concatenated along the channel dimension at all convolutional layers of the network. Each convolutional layer is followed by instance normalization [15] and rectified linear unit (ReLU) activations [16]. Convolutional LSTMs are used to model motion. At each time step, the network predicts four convolutional kernels that produce a set of transformed frames. The network also predicts a synthesized frame and a compositing mask by passing the final-layer output through two convolutional layers with sigmoid and softmax activation functions, respectively. Finally, the transformed frames, together with the synthesized and previous frames, are merged by the mask.
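To make the compositing step concrete, a minimal sketch is given below (PyTorch; tensor shapes, the per-sample convolution loop, and the function name are illustrative assumptions rather than the exact implementation):

```python
import torch
import torch.nn.functional as F

def composite_next_frame(prev_frame, kernels, synthesized, mask):
    """Merge CDNA-style transformed frames into the next predicted frame.

    prev_frame : (B, 1, H, W) previous echo frame
    kernels    : (B, K, k, k) predicted convolution kernels (K = 4 above, k odd)
    synthesized: (B, 1, H, W) frame synthesized directly by the network
    mask       : (B, K + 2, H, W) compositing mask (softmax over the channel dim)
    """
    B, K, k, _ = kernels.shape
    pad = k // 2
    transformed = []
    for i in range(K):
        # each sample in the batch is convolved with its own predicted kernel
        out = torch.cat([
            F.conv2d(prev_frame[b:b + 1], kernels[b, i].view(1, 1, k, k), padding=pad)
            for b in range(B)
        ], dim=0)                                                           # (B, 1, H, W)
        transformed.append(out)
    candidates = torch.cat(transformed + [synthesized, prev_frame], dim=1)  # (B, K+2, H, W)
    return (mask * candidates).sum(dim=1, keepdim=True)                     # (B, 1, H, W)
```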
The encoder E is a standard convolutional network, except that the two input frames and the ED and ES frames are concatenated along the channel dimension. The architecture is the same as the one used in [14]. For the discriminator, we use a 3D convolutional neural network that operates on all T frames. Both discriminators, D and \(D_{VAE}\), have the same architecture with separate weights; the architecture is inspired by the one used in [14], except that the 2D convolution filters are inflated to 3D.
3 Experiments and Results
We carried out experiments on a set of 2D apical four-chamber (AP4) cine series collected from the Picture Archiving and Communication System at Vancouver General Hospital, with ethics approval of the Clinical Medical Research Ethics Board, in consultation with the Information Privacy Office. The dataset consists of 3,112 individual patient studies. Experiments were run by randomly dividing these cases into mutually exclusive patient subsets, such that 75% of the cases were used for training and validation and 25% for testing. These clinical echo cine series cover a range of heart rates (from 47 to 104 beats per minute). The locations of the ED and ES frames in each cardiac cycle were annotated by an expert sonographer. Each cine is temporally down-sampled by a factor of 5, and the model is trained to reconstruct the original cine series.
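A minimal sketch of how such training examples can be formed is shown below (NumPy; the array layout and function name are assumptions for illustration):

```python
import numpy as np

def make_training_example(cine, factor=5):
    """cine: array of shape (T, H, W) holding one echo cine series."""
    low_rate = cine[::factor]                                # simulated low-frame-rate input
    kept = np.arange(0, len(cine), factor)
    held_out = np.setdiff1d(np.arange(len(cine)), kept)      # frames the model must synthesize
    return low_rate, cine, held_out
```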
Evaluating the performance of video prediction is a common challenge. The standard quantitative metrics are mean-squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). Although these standard metrics provide a way to benchmark the proposed method against its counterparts, they often do not correlate well with human preference [17]. Therefore, we also use the Learned Perceptual Image Patch Similarity (LPIPS) metric [17] to evaluate our method. LPIPS is computed as the \(\ell_2\) distance between deep features of the images, where the features are extracted with a pre-trained AlexNet.
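For reference, a minimal sketch of computing LPIPS with the publicly available lpips package is shown below (the channel replication and intensity scaling are assumptions about preprocessing, not details from the paper):

```python
import torch
import lpips  # pip install lpips; perceptual metric of Zhang et al. [17]

metric = lpips.LPIPS(net='alex')  # deep features from a pre-trained AlexNet

def lpips_distance(pred, target):
    """pred, target: (B, 1, H, W) tensors scaled to [-1, 1]."""
    # grayscale echo frames are replicated to three channels for AlexNet
    return metric(pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)).mean()
```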
Table 1 benchmarks the performance of the proposed method against the VAE-only and VAE+GAN techniques. First, we compared the three methods, including the proposed one, using the MSE and PSNR metrics. As reported in Table 1, the VAE-only technique achieves the lowest MSE and highest PSNR. Although this result may suggest that the VAE-only technique performs better than the others, it produces blurry and unrealistic images; a sample result is shown in Fig. 3. This indicates that MSE and PSNR are not suitable metrics for this application. The reason the proposed and VAE+GAN techniques have higher MSE and lower PSNR than the VAE-only technique is that the GAN prioritizes matching the joint distribution of pixels rather than per-pixel similarity. Our experiments show that LPIPS corresponds better to human preference, consistent with the discussion in [17]. Therefore, to fairly compare the proposed technique against its counterparts, the LPIPS and SSIM metrics must be taken into consideration. As shown in the table, the proposed technique provides the lowest LPIPS and the highest SSIM, meaning that it outperforms both the VAE-only and VAE+GAN techniques.
Figure 4 shows a more detailed comparison between the proposed and VAE+GAN techniques. In this figure, the average LPIPS, MSE, PSNR, and SSIM metrics are plotted against the prediction time step. As illustrated, all four metrics improve when the proposed technique is employed. This is because the proposed technique conditions the latent space of the VAE on the appearance of the ED and ES frames. Regardless of which technique is used to predict the future frames, we expect performance to degrade as the time step increases. This applies to the proposed technique as well; however, its rate of performance degradation is slower than that of its counterpart.
4 Conclusion and Future Work
In this paper, we proposed a new frame rate up-conversion technique for echocardiography. The proposed technique takes advantage of both VAE and GAN, and produces realistic frames at a high frame rate that can be used to augment conventional imaging. The technique is robust to variations in heart rate since its latent space is conditioned not only on the immediately preceding frames but also on the appearance of the end-diastolic and end-systolic frames. Our results show that the proposed technique can increase the frame rate by at least 5 times without any requirement for limiting the imaging field of view. Our comparison with the state of the art on a large patient dataset shows that the proposed approach can reconstruct rapid events in echo, such as the motion of valves, at high temporal resolution.
References
Cikes, M., Tong, L., et al.: Ultrafast cardiac ultrasound imaging: technical principles, applications, and clinical benefits. JACC Cardiovasc. Imaging 7(8), 812–823 (2014)
Gifani, P., Behnam, H., et al.: Temporal super resolution enhancement of echocardiographic images based on sparse representation. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 63(1), 6–19 (2016)
Provost, J., Lee, W.N., et al.: Electromechanical wave imaging of normal and ischemic hearts in vivo. IEEE Trans. Med. Imaging 29(3), 625–635 (2010)
Papadacci, C., Pernot, M., et al.: High-contrast ultrafast imaging of the heart. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 61(2), 288–301 (2014)
Tong, L., Ramalli, A., et al.: Multi-transmit beam forming for fast cardiac imaging–experimental validation and in vivo application. IEEE Trans. Med. Imaging 33(6), 1205–1219 (2014)
Contijoch, F., Fernandez-de Manuel, L., et al.: Increasing temporal resolution of 3D transesophageal ultrasound by rigid body registration of sequential, temporally offset sequences. In: 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 328–331. IEEE (2010)
Perrin, D.P., Vasilyev, N.V., et al.: Temporal enhancement of 3D echocardiography by frame reordering. JACC Cardiovasc. Imaging 5(3), 300–304 (2012)
Srivastava, N., Mansimov, E., Salakhudinov, R.: Unsupervised learning of video representations using LSTMs. In: International Conference on Machine Learning, pp. 843–852 (2015)
Finn, C., Goodfellow, I., Levine, S.: Unsupervised learning for physical interaction through video prediction. In: NIPS, pp. 64–72 (2016)
Babaeizadeh, M., Finn, C., et al.: Stochastic variational video prediction. arXiv preprint. arXiv:1710.11252 (2017)
Lee, A.X., Zhang, R., Ebert, F., Abbeel, P., Finn, C., Levine, S.: Stochastic adversarial video prediction. arXiv preprint. arXiv:1804.01523 (2018)
Xingjian, S., Chen, Z., et al.: Convolutional LSTM network: a machine learning approach for precipitation nowcasting. In: NIPS, pp. 802–810 (2015)
Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint. arXiv:1312.6114 (2013)
Zhu, J.Y., Zhang, R., et al.: Toward multimodal image-to-image translation. In: NIPS, pp. 465–476 (2017)
Ulyanov, D., Vedaldi, A., Lempitsky, V.: Instance normalization: the missing ingredient for fast stylization. arXiv preprint. arXiv:1607.08022 (2016)
Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: ICML, pp. 807–814 (2010)
Zhang, R., Isola, P., et al.: The unreasonable effectiveness of deep features as a perceptual metric. In: CVPR, pp. 586–595 (2018)
Acknowledgements
This work is supported in part by the Canadian Institutes of Health Research (CIHR) and in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to acknowledge the support provided by Dale Hawley and Vancouver Coastal Health in providing us with the anonymized, deidentified data.