Abstract
A reliable Ultrasound (US)-to-US registration method to compensate for brain shift would substantially improve Image-Guided Neurological Surgery. Developing such a method is very challenging, due to factors such as tumor resection, the complexity of brain pathology and the demand for fast computation. We propose a novel feature-driven active registration framework. Here, landmarks and their displacements are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, the user can actively add new landmarks, based on the image context and a visualization of the uncertainty measure provided by the GP, to further improve the result. We retrospectively demonstrate on clinical data that our registration framework is a robust and accurate brain shift compensation solution.
1 Introduction
During neurosurgery, Image-Guided Neurosurgical Systems (IGNSs) provide a patient-to-image mapping that relates the preoperative image data to an intraoperative patient coordinate system, allowing surgeons to infer the locations of their surgical instruments relative to preoperative image data and helping them to achieve a radical tumor resection while avoiding damage to surrounding functioning brain tissue.
Commercial IGNSs assume a rigid registration between preoperative imaging and patient coordinates. However, intraoperative deformation of the brain, also known as brain shift, invalidates this assumption. Since brain shift progresses during surgery, the rigid patient-to-image mapping of IGNS becomes less and less accurate. Consequently, most surgeons only use IGNS to make a surgical plan but justifiably do not trust it throughout the entire operation [1, 2].
Related Work. As one of the most important error sources in IGNS, intraoperative brain shift must be compensated for in order to increase the accuracy of neurosurgery. Registration between the intraoperative MRI (iMRI) image and the preoperative MRI (preMRI) image (preop-to-intraop registration) has been a successful strategy for brain shift compensation [3,4,5,6]. However, iMRI acquisition is disruptive, expensive and time consuming, making this technology unavailable to most clinical centers worldwide. More recently, 3D intraoperative Ultrasound (iUS) has appeared to be a promising replacement for iMRI. Although some progress has been made by previous work on preMRI-to-iUS registration [7,8,9,10,11,12,13], there are still no clinically accepted solutions and no commercial neuro-navigation systems that provide brain shift compensation. This is due to three reasons: (1) most non-rigid registration methods cannot handle artifacts and missing structures in iUS; (2) the multi-modality of preMRI-to-iUS registration makes the already difficult problem even more challenging; (3) a few methods [14] can achieve a reasonable alignment, yet they take around 50 min for a US pair and are therefore too slow to be clinically applicable. Another shortcoming of existing brain shift compensation approaches is the lack of an uncertainty measure. Brain shift is a complex spatio-temporal phenomenon and, given the state of registration technology and the importance of the result, it seems reasonable to expect an indication (e.g. error bars) of the confidence level in the estimated deformation.
In this paper, we propose a novel feature-driven active framework for brain shift compensation. Here, landmarks and their displacement are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model [15] is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, for areas that are difficult to align, the user can actively add new landmarks based on the image context and visualization of the uncertainty measure provided by the GP to further improve the registration accuracy. We retrospectively demonstrate the efficacy of our method on clinical data.
Contributions and novelties of our work can be summarized as follows:
1. The proposed feature-based registration is fast and robust for aligning iUS image pairs with missing correspondence.
2. We explore applying the GP model and variograms to image registration.
3. Registration uncertainty in the transformation parameters can be naturally obtained from the GP model.
4. To the best of our knowledge, the proposed active registration strategy is the first method that actively incorporates user expertise in brain shift compensation.
2 Method
2.1 The Role of US-to-US Registration
In order to alleviate the difficulty of preop-to-intraop registration, instead of directly aligning iMRI and iUS images, we choose an iterative compensation approach which is similar to the work in [16].
As shown in Fig. 1, the acquisition processes for pre-dura US (preUS) and post-resection US (postUS) take place before opening the dura and after (partial) tumor resection, respectively. Since most brain shift occurs after taking the preUS, a standard multi-modal registration may suffice to achieve a good alignment \(T_{\mathrm {multi}}\) between preMRI and preUS [12]. Next, we register the preUS to the postUS using the proposed feature-driven active framework to acquire a deformable mapping \(T_{\mathrm {mono}}\). After propagating \(T_{\mathrm {multi}}\) and \(T_{\mathrm {mono}}\) to the preMRI, surgeons may use the warped preMRI as an updated view of the anatomy to compensate for brain shift during surgery.
2.2 Feature-Based Registration Strategy
Because of tumor resection, compensating for brain shift requires non-rigid registration algorithms capable of aligning structures in one image that have no correspondences in the other image. In this situation, many image registration methods that take into account the intensity pattern of the entire image will become trapped in incorrect local minima.
We therefore pursue a Feature-Based Registration (FBR) strategy due to its robustness in registering images with missing correspondence [17]. FBR mainly consists of 3 steps: feature-extraction, feature-matching and dense deformation field estimation. An optional “active registration” step can be added depending on the quality of FBR.
Feature Extraction and Matching. As illustrated in Fig. 2(a) and (b), distinctive local image features are automatically extracted and identified as key-points on preUS and postUS images. An automatic matching algorithm searches for a corresponding postUS key-point for each key-point on the preUS image [17].
For a matched key-point pair, let \(\mathbf {x}_i\) be the coordinates of the preUS key-point and \(\mathbf {x}^{\mathrm {post}}_i\) be the coordinates of its postUS counterpart. We first use all matched preUS key-points as landmarks and perform a landmark-based preUS-to-postUS affine registration to obtain a rough alignment; \(\mathbf {x}^{\mathrm {post}}_i\) becomes \(\mathbf {x}^{\mathrm {affine}}_i\) after the affine registration. The displacement vector, which indicates the movement of landmark \(\mathbf {x}_i\) due to the brain shift process, can be calculated as \(\mathbf {d}(\mathbf {x}_i)=\mathbf {x}^{\mathrm {affine}}_i-\mathbf {x}_i\), where \(\mathbf {d}=[d_x,d_y,d_z]\).
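As an illustrative sketch (our own NumPy code, not the authors' implementation), the landmark-based affine alignment and the displacement computation \(\mathbf {d}(\mathbf {x}_i)=\mathbf {x}^{\mathrm {affine}}_i-\mathbf {x}_i\) could look like this:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map: find P (4x3) such that [src, 1] @ P ~= dst."""
    design = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return P

def displacements(pre_pts, post_pts):
    """d(x_i) = x_affine_i - x_i: residual landmark motion after affine alignment."""
    # Roughly align the postUS key-points to preUS space
    P = fit_affine(post_pts, pre_pts)
    post_affine = np.hstack([post_pts, np.ones((len(post_pts), 1))]) @ P
    return post_affine - pre_pts
```

If the two point sets differ only by an affine transform, the residual displacements vanish; whatever remains is attributed to non-rigid brain shift.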
Dense Deformation Field. The goal of this step is to obtain a dense deformation field from a set of N sparse landmarks and their displacements \(\mathcal {D}=\{ (\mathbf {x}_i,\mathbf {d}_i),i=1:N \}\), where \(\mathbf {d}_i=\mathbf {d}(\mathbf {x}_i)\) is modeled as an observation of the displacement at \(\mathbf {x}_i\).
In the GP model, let \(\mathbf {d}(\mathbf {x})\) be the displacement vector for the voxel at location \(\mathbf {x}\) and define a prior distribution \(\mathbf {d}(\mathbf {x})\sim \mathrm {GP}(\mathrm {m}(\mathbf {x}),\mathrm {k}(\mathbf {x},\mathbf {x}'))\), where \(\mathrm {m}(\mathbf {x})\) is the mean function, which is usually set to 0, and the GP kernel \(\mathrm {k}(\mathbf {x},\mathbf {x}')\) represents the spatial correlation of displacement vectors.
By the modeling assumption, all displacement vectors follow a joint Gaussian distribution \(p(\mathbf {d}\mid \mathbf {X})=\mathcal {N} (\mathbf {d}\mid \mathbf {\mu },\mathbf {K}) \), where \(K_{ij}=\mathrm {k}(\mathbf {x}_i,\mathbf {x}_j)\) and \(\mathbf {\mu } = (\mathrm {m}(\mathbf {x}_1) ,...,\mathrm {m}(\mathbf {x}_N)) \). As a result, the displacement vectors \(\mathbf {d}\) for known landmarks and the \(N_*\) unknown displacement vectors \(\mathbf {d}_*\) at locations \(\mathbf {X}_*\), which we want to predict, have the following relationship:
\[ \begin{bmatrix} \mathbf {d} \\ \mathbf {d}_* \end{bmatrix} \sim \mathcal {N}\left( \mathbf {0},\ \begin{bmatrix} \mathbf {K} & \mathbf {K}_* \\ \mathbf {K}_*^{T} & \mathbf {K}_{**} \end{bmatrix} \right) . \quad (1) \]
In Eq. 1, \(\mathbf {K}=\mathrm {k}(\mathbf {X},\mathbf {X})\) is an \(N\times N\) matrix, \(\mathbf {K}_*=\mathrm {k}(\mathbf {X},\mathbf {X_*})\) is an \(N \times N_*\) matrix, and \(\mathbf {K_{**}}=\mathrm {k}(\mathbf {X_*},\mathbf {X_*})\) is an \(N_* \times N_*\) matrix. The mean \(\mu _*=[\mu _{*x},\mu _{*y},\mu _{*z}]\) represents the values of the voxel-wise displacement vectors and can be estimated from the posterior Gaussian distribution \(p(\mathbf {d}_*\mid \mathbf {X_*},\mathbf {X},\mathbf {d})= \mathcal {N}(\mathbf {d}_*\mid \mu _*,\Sigma _*)\) as
\[ \mu _* = \mathbf {K}_*^{T}\mathbf {K}^{-1}\mathbf {d}. \quad (2) \]
Given \(\mu (\mathbf {X})= \mu (\mathbf {X_*})=0\), we can obtain the dense deformation field for the preUS image by assigning \(\mu _{*x}\),\(\mu _{*y}\),\(\mu _{*z}\) to \(d_x\), \(d_y\) and \(d_z\), respectively.
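The posterior-mean interpolation described above can be sketched in a few lines of NumPy (a minimal illustration assuming a zero mean and an isotropic Gaussian kernel \(\mathrm {k}(\mathbf {x},\mathbf {x}')=e^{-\Vert \mathbf {x}-\mathbf {x}'\Vert ^2/a^2}\); the jitter term is a standard numerical-stability trick, not part of the paper):

```python
import numpy as np

def gaussian_kernel(X1, X2, a):
    # k(x, x') = exp(-||x - x'||^2 / a^2), evaluated pairwise
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / a ** 2)

def gp_dense_field(X, d, X_star, a, jitter=1e-8):
    """Posterior mean mu_* = K_*^T K^{-1} d at the query points X_star."""
    K = gaussian_kernel(X, X, a) + jitter * np.eye(len(X))
    K_star = gaussian_kernel(X, X_star, a)      # shape (N, N_*)
    return K_star.T @ np.linalg.solve(K, d)     # shape (N_*, 3)
```

Each column of `d` (i.e. \(d_x\), \(d_y\), \(d_z\)) is interpolated independently, and querying every voxel center as `X_star` yields the dense deformation field.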
Active Registration. Automatic approaches may have difficulty with preop-to-intraop image registration, especially in areas near the tumor resection site. Another advantage of the GP framework is the possibility of incorporating user expertise to further improve the registration result.
From Eq. 1, we can also compute the covariance matrix of the posterior Gaussian \(p(\mathbf {d}_*\mid \mathbf {X_*},\mathbf {X},\mathbf {d})\) as
\[ \Sigma _* = \mathbf {K}_{**} - \mathbf {K}_*^{T}\mathbf {K}^{-1}\mathbf {K}_*. \quad (3) \]
Entries on the diagonal of \(\Sigma _*\) are the marginal variances of the predicted values. They can be used as an uncertainty measure to indicate the confidence in the estimated transformation parameters.
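A matching sketch of the uncertainty computation, under the same kind of illustrative assumptions (our own code; zero-mean GP with an isotropic Gaussian kernel):

```python
import numpy as np

def gp_posterior_variance(X, X_star, a, jitter=1e-8):
    """Diagonal of Sigma_* = K_** - K_*^T K^{-1} K_* (per-point marginal variance)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / a ** 2)
    K = k(X, X) + jitter * np.eye(len(X))
    K_star = k(X, X_star)
    Sigma = k(X_star, X_star) - K_star.T @ np.linalg.solve(K, K_star)
    return np.diag(Sigma)  # e.g. rendered as a colour map for the user
```

The variance is near zero at observed landmarks and approaches the prior variance far away from all landmarks, which is exactly the behaviour that makes the colour map useful for spotting poorly constrained regions.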
If the user is not satisfied with the FBR alignment result, they can manually add new corresponding pairs of key-points, guided by the image context and the visualization of registration uncertainty, to drive the GP towards a better result.
2.3 GP Kernel Estimation
The performance of GP registration depends heavily on the suitability of the chosen kernel and its parameters. In this study, we explore two schemes for kernel estimation: variograms and discrete grid search.
Variograms. The variogram is a powerful geostatistical tool for characterizing the spatial dependence of a stochastic process [18]. Although briefly mentioned in [19], it has not yet received much attention in the medical imaging field.
In the GP registration context, where \(\mathbf {d}(\mathbf {x})\) is modelled as a random quantity, variograms can measure the extent of pairwise spatial correlation between displacement vectors with respect to their distance, and give insight into choosing a suitable GP kernel.
In practice, we estimate the empirical variogram of the landmarks' displacement vector field using
\[ \hat{\gamma }(h) = \frac{1}{2|N(h)|} \sum _{(i,j) \in N(h)} \Vert \mathbf {d}(\mathbf {x}_i)-\mathbf {d}(\mathbf {x}_j)\Vert ^2, \quad (4) \]
where \(N(h)\) is the set of landmark pairs whose separation distance falls in the bin around \(h\).
For the norm term \(\Vert {\mathbf {d}(\mathbf {x}_i)-\mathbf {d}(\mathbf {x}_j)}\Vert \), we separate its 3 components \(d_x\), \(d_y\), \(d_z\) and construct 3 variograms, respectively. As shown in Fig. 3(a), for displacement vectors \(\mathbf {d}(\mathbf {x}_1)\) and \(\mathbf {d}(\mathbf {x}_2)\), \(\Vert {d_x(\mathbf {x}_2)-d_x(\mathbf {x}_1)}\Vert \) is the vector difference with respect to the x axis, etc. \(h\) represents the distance between two key-points.
To construct an empirical variogram, the first step is to make a variogram cloud by plotting \(\Vert {d(\mathbf {x}_2)-d(\mathbf {x}_1)}\Vert ^2\) against \(h_{ij}\) for all landmark pairs. Next, we divide the variogram cloud into bins with a bin width of \(2\delta \). Lastly, the mean of each bin is calculated and plotted against the mean distance of that bin to form the empirical variogram. Figure 4(a) shows an empirical variogram of a real US image pair that has 71 corresponding landmarks.
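The cloud-then-bin construction can be sketched as follows (our own code; it includes the conventional factor 1/2 of the semivariogram estimator and operates on one displacement component at a time, as described above):

```python
import numpy as np

def empirical_variogram(X, d, bin_width):
    """X: (N, 3) landmark coords; d: (N,) one displacement component (d_x, d_y or d_z)."""
    i, j = np.triu_indices(len(X), k=1)
    h = np.linalg.norm(X[i] - X[j], axis=1)   # pairwise distances (variogram cloud x-axis)
    sq = 0.5 * (d[i] - d[j]) ** 2             # semivariances (variogram cloud y-axis)
    bins = (h / bin_width).astype(int)        # assign each pair to a distance bin
    centers, gammas = [], []
    for b in np.unique(bins):
        m = bins == b
        centers.append(h[m].mean())           # mean distance of the bin
        gammas.append(sq[m].mean())           # mean semivariance of the bin
    return np.array(centers), np.array(gammas)
```

A spatially constant displacement field produces a flat variogram at zero, while spatially correlated deformation produces the rising-then-plateauing shape described next.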
In order to obtain the data-driven GP kernel function, we further fit a smooth curve, generated by pre-defined kernel functions, to the empirical variogram. As shown in Fig. 4(b), a fitted curve is commonly described by the following characteristics:
- Nugget: the non-zero value of the curve at \(h=0\).
- Sill: the value at which the curve reaches its maximum.
- Range: the distance \(h\) at which the sill is reached.
Fitting a curve to an empirical variogram is implemented in most geostatistics software. A popular approach is to choose several models that appear to have the right shape and use the one with the smallest weighted squared error [18]. In this study, we only test Gaussian curves of the form
\[ \gamma (h) = c_0 + c\left( 1 - e^{-h^2/a^2}\right) . \quad (5) \]
Here, \(c_0\) is the nugget, \(c=\mathrm {Sill}-c_0\) and \(a\) is the model parameter. Once the fitted curve is found, we can obtain \(a\) from Eq. (5) and use it as the Gaussian kernel scale in the GP interpolation.
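This fitting step can be sketched with `scipy.optimize.curve_fit` (our own code; the initial guesses below are an assumption, and the sill of the fitted model is \(c_0+c\)):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_variogram(h, c0, c, a):
    # gamma(h) = c0 + c * (1 - exp(-h^2 / a^2)); nugget c0, partial sill c, scale a
    return c0 + c * (1.0 - np.exp(-(h ** 2) / a ** 2))

def fit_variogram(h, gamma):
    """Fit the Gaussian variogram model to an empirical variogram (h, gamma)."""
    # Heuristic starting point: nugget ~ smallest value, partial sill ~ spread, scale ~ mean distance
    p0 = [gamma.min(), gamma.max() - gamma.min(), h.mean()]
    (c0, c, a), _ = curve_fit(gaussian_variogram, h, gamma, p0=p0, maxfev=10000)
    return c0, c, a  # a is then reused as the GP Gaussian kernel scale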
Discrete Grid Search. The variogram scheme often requires many landmarks to work well [18]. For US pairs that have fewer landmarks, we choose predefined Gaussian kernels, and use cross validation to determine the scale parameter in a discrete grid search fashion [15].
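A minimal sketch of this scheme (our own code): score each candidate kernel scale by leave-one-out landmark error and keep the best; the candidate grid below is an assumption for illustration.

```python
import numpy as np

def loo_error(X, d, a, jitter=1e-8):
    """Mean leave-one-out GP prediction error for Gaussian kernel scale a."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xt, dt = X[mask], d[mask]
        # GP posterior mean at the held-out landmark, trained on the rest
        d2 = ((Xt[:, None] - Xt[None, :]) ** 2).sum(-1)
        K = np.exp(-d2 / a ** 2) + jitter * np.eye(len(Xt))
        ks = np.exp(-((Xt - X[i]) ** 2).sum(-1) / a ** 2)
        pred = ks @ np.linalg.solve(K, dt)
        errs.append(np.linalg.norm(pred - d[i]))
    return float(np.mean(errs))

def grid_search_scale(X, d, scales=(0.5, 1.0, 2.0, 4.0, 8.0)):
    """Discrete grid search: the scale with the lowest cross-validation error."""
    return min(scales, key=lambda a: loo_error(X, d, a))
```

The same routine generalises to 5-fold cross validation for US pairs with many landmarks; only the split changes.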
3 Experiments
The experimental dataset consists of 6 pairs of 3D preUS and postUS images. The US signals were acquired on a BK Ultrasound 3000 system directly connected to the Brainlab VectorVision Sky neuronavigation system during surgery. Signals were further reconstructed into 3D volumes using the PLUS [20] library in 3D Slicer [21] (Table 1).
We used the mean Euclidean distance between the predicted and ground-truth coordinates of key-points, measured in mm, for the registration evaluation. During the evaluation, we compared: affine registration, thin-plate-kernel FBR, variogram-kernel FBR and Gaussian-kernel FBR. For US pairs with fewer than 50 landmarks, we used leave-one-out cross validation; otherwise we used 5-fold cross validation. All of the compared methods were computed in less than 10 min.
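For clarity, the evaluation metric is simply the mean Euclidean distance over key-points (our own helper; the result is in mm when the inputs are in mm):

```python
import numpy as np

def mean_target_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth key-point coordinates."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())
```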
The pre-defined Gaussian kernel with discrete grid search generally yields better results than the variogram scheme. This is reasonable, as the machine learning approach stresses prediction performance, while the geostatistical variogram favours the interpretability of the model. Notice that the cross-validation strategy is not an ideal evaluation; this could be improved by using the manual landmarks in public datasets such as RESECT [22] and BITE [23].
In addition, we have performed preliminary tests on active registration, as shown in Fig. 5, which illustrates the use of a colour map of registration uncertainty to guide the manual placement of 3 additional landmarks to improve the registration. By visual inspection, we can see that the alignment of the tumor boundary is substantially improved.
4 Discussion
One key point of our framework is the “active registration” idea, which aims to overcome the limitations of automatic image registration. Humans and machines have complementary abilities; we believe that an element of simple user interaction should be added to the pipeline for some challenging medical imaging applications. Although the proposed method is designed for brain shift compensation, it is also applicable to other navigation systems that require tracking of tissue deformation. The performance of FBR is highly correlated with the quality of feature matching. In future work, we plan to test different matching algorithms [24] and perform more validation on public datasets.
References
Gerard, I.J., et al.: Brain shift in neuronavigation of brain tumors: a review. Med. Image Anal. 35, 403–420 (2017)
Bayer, S., et al.: Intraoperative imaging modalities and compensation for brain shift in tumor resection surgery. Int. J. Biomed. Imaging 2017 (2017). Article ID. 6028645
Hata, N., Nabavi, A., Warfield, S., Wells, W., Kikinis, R., Jolesz, F.A.: A volumetric optical flow method for measurement of brain deformation from intraoperative magnetic resonance images. In: Taylor, C., Colchester, A. (eds.) MICCAI 1999. LNCS, vol. 1679, pp. 928–935. Springer, Heidelberg (1999). https://doi.org/10.1007/10704282_101
Clatz, O., et al.: Robust nonrigid registration to capture brain shift from intraoperative MRI. IEEE TMI 24(11), 1417–1427 (2005)
Vigneron, L.M., et al.: Serial FEM/XFEM-based update of preoperative brain images using intraoperative MRI. Int. J. Biomed. Imaging 2012 (2012). Article ID. 872783
Drakopoulos, F., et al.: Toward a real time multi-tissue adaptive physics-based non- rigid registration framework for brain tumor resection. Front. Neuroinf. 8, 11 (2014)
Gobbi, D.G., Comeau, R.M., Peters, T.M.: Ultrasound/MRI overlay with image warping for neurosurgery. In: Delp, S.L., DiGoia, A.M., Jaramaz, B. (eds.) MICCAI 2000. LNCS, vol. 1935, pp. 106–114. Springer, Heidelberg (2000). https://doi.org/10.1007/978-3-540-40899-4_11
Arbel, T., et al.: Automatic non-linear MRI-ultrasound registration for the correction of intra-operative brain deformations. Comput. Aided Surg. 9, 123–136 (2004)
Pennec, X., et al.: Tracking brain deformations in time sequences of 3D US images. Pattern Recogn. Lett. 24, 801–813 (2003)
Letteboer, M.M.J., Willems, P.W.A., Viergever, M.A., Niessen, W.J.: Non-rigid Registration of 3D ultrasound images of brain tumours acquired during neurosurgery. In: Ellis, R.E., Peters, T.M. (eds.) MICCAI 2003. LNCS, vol. 2879, pp. 408–415. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39903-2_50
Reinertsen, I., Descoteaux, M., Drouin, S., Siddiqi, K., Collins, D.L.: Vessel driven correction of brain shift. In: Barillot, C., Haynor, D.R., Hellier, P. (eds.) MICCAI 2004. LNCS, vol. 3217, pp. 208–216. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30136-3_27
Fuerst, B., et al.: Automatic ultrasound-MRI registration for neurosurgery using 2D and 3D \(LC^2\) metric. Med. Image Anal. 18(8), 1312–1319 (2014)
Rivaz, H., Collins, D.L.: Deformable registration of preoperative MR, pre-resection ultrasound, and post-resection ultrasound images of neurosurgery. IJCARS 10, 1017–1028 (2015)
Ou, Y., et al.: DRAMMS: deformable registration via attribute matching and mutual-saliency weighting. Med. Image Anal. 15, 622–639 (2011)
Rasmussen, C.E., Williams, C.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
Riva, M., et al.: 3D intra-op US and MR image guidance: pursuing an ultrasound-based management of brainshift to enhance neuronavigation. IJCARS 12(10), 1711–1725 (2017)
Toews, M., Wells, W.M.: Efficient and robust model-to-image alignment using 3D scale-invariant features. Med. Image Anal. 17, 271–282 (2013)
Cressie, N.A.C.: Statistics for Spatial Data, p. 900. Wiley, Hoboken (1991)
Ruiz-Alzola, J., Suarez, E., Alberola-Lopez, C., Warfield, S.K., Westin, C.-F.: Geostatistical medical image registration. In: Ellis, R.E., Peters, T.M. (eds.) MICCAI 2003. LNCS, vol. 2879, pp. 894–901. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39903-2_109
Lasso, A., et al.: PLUS: open-source toolkit for ultrasound-guided intervention systems. IEEE Trans. Biomed. Eng. 61(10), 2527–2537 (2014)
Kikinis, R., et al.: 3D Slicer: a platform for subject-specific image analysis, visualization, and clinical support. In: Intraoperative Imaging and Image-Guided Therapy, pp. 277–289. Springer, New York (2014)
Xiao, Y., et al.: RESECT: a clinical database of pre-operative MRI and intra-operative ultrasound in low-grade glioma surgeries. Med. Phys. 44(7), 3875–3882 (2017)
Mercier, L., et al.: Online database of clinical MR and ultrasound images of brain tumors (BITE). Med. Phys. 39(6), 3253–3261 (2012)
Jian, B., Vemuri, B.C.: Robust point set registration using Gaussian mixture models. IEEE TPAMI 33(8), 1633–1645 (2011)
Acknowledgement
MS was supported by the International Research Center for Neurointelligence (WPI-IRCN) at The University of Tokyo Institutes for Advanced Study. This work was also supported by NIH grants P41EB015898 and P41EB015902.
© 2018 Springer Nature Switzerland AG
Luo, J. et al. (2018). A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation. In: Frangi, A., Schnabel, J., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science(), vol 11073. Springer, Cham. https://doi.org/10.1007/978-3-030-00937-3_4