
Optimal estimation of sensor biases for asynchronous multi-sensor data fusion

  • Full Length Paper
  • Series B
  • Published in Mathematical Programming

Abstract

An important step in a multi-sensor surveillance system is to estimate sensor biases from their noisy asynchronous measurements. This estimation problem is computationally challenging due to the highly nonlinear transformation between the global and local coordinate systems as well as the measurement asynchrony from different sensors. In this paper, we propose a novel nonlinear least squares formulation for the problem by assuming the existence of a reference target moving with an (unknown) constant velocity. We also propose an efficient block coordinate descent (BCD) optimization algorithm, with a judicious initialization, to solve the problem. The proposed BCD algorithm alternately updates the range and azimuth bias estimates by solving linear least squares problems and semidefinite programs. In the absence of measurement noise, the proposed algorithm is guaranteed to find the global solution of the problem and the true biases. Simulation results show that the proposed algorithm significantly outperforms the existing approaches in terms of the root mean square error.
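The alternating structure described in the abstract can be illustrated on a toy problem. The sketch below is a generic two-block coordinate descent on a linear least-squares objective with both block updates in closed form; it is not the paper's algorithm (whose azimuth-bias update requires a semidefinite program), but it shows why exact block updates monotonically decrease the objective. All matrices and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: min_{x,z} ||A x + B z - y||^2, split into two blocks.
A = rng.standard_normal((20, 3))
B = rng.standard_normal((20, 4))
y = rng.standard_normal(20)

x = np.zeros(3)
z = np.zeros(4)
obj = []
for _ in range(500):
    # Exact minimization over block x with z held fixed (a linear LS problem).
    x, *_ = np.linalg.lstsq(A, y - B @ z, rcond=None)
    # Exact minimization over block z with x held fixed.
    z, *_ = np.linalg.lstsq(B, y - A @ x, rcond=None)
    obj.append(float(np.sum((A @ x + B @ z - y) ** 2)))

# Exact block updates can never increase the objective.
assert all(b <= a + 1e-12 for a, b in zip(obj, obj[1:]))

# For this smooth two-block LS problem, BCD reaches the joint optimum.
w, *_ = np.linalg.lstsq(np.hstack([A, B]), y, rcond=None)
assert abs(obj[-1] - np.sum((np.hstack([A, B]) @ w - y) ** 2)) < 1e-8
```

The monotone-decrease property is what underlies the convergence analysis of BCD-type methods (see [11]).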

(Figures 1–11 appear in the full article.)


Notes

  1. There exist degenerate scenarios where \(\text {rank}({\mathbf {H}})< 3\) even when \(K\ge 3\), e.g., the target trajectory and the sensor are on the same straight line. In this paper, we will focus on generic scenarios where \(\text {rank}({\mathbf {H}})=3\) when \(K\ge 3\).

References

  1. Alizadeh, F., Haeberly, J.P.A., Overton, M.L.: Complementarity and nondegeneracy in semidefinite programming. Math. Program. 77(1), 111–128 (1997)

  2. Bar-Shalom, Y.: Multitarget-Multisensor Tracking: Advanced Applications. Artech House, Norwood (1990)

  3. Border, K.: Notes on the implicit function theorem. California Institute of Technology, Division of the Humanities and Social Sciences (2013). http://people.hss.caltech.edu/~kcb/Notes/IFT.pdf. Accessed 15 Jan 2016

  4. Boyd, S.P., Vandenberghe, L.: Convex Optimization. Cambridge University Press, New York (2004)

  5. Cowley, D.C., Shafai, B.: Registration in multi-sensor data fusion and tracking. In: Proceedings of the 1993 American Control Conference, pp. 875–879 (1993)

  6. Dana, M.P.: Registration: a prerequisite for multiple sensor tracking. In: Bar-Shalom, Y. (ed.) Multitarget-Multisensor Tracking: Advanced Applications. Artech House, Norwood (1990)

  7. Fischer, W.L., Muehe, C.E., Cameron, A.G.: Registration errors in a netted air surveillance system. Technical Report, DTIC Document (1980)

  8. Fortunati, S., Farina, A., Gini, F., Graziano, A., Greco, M.S., Giompapa, S.: Least squares estimation and Cramér–Rao type lower bounds for relative sensor registration process. IEEE Trans. Signal Process. 59(3), 1075–1087 (2011)

  9. Fortunati, S., Gini, F., Farina, A., Graziano, A.: On the application of the expectation-maximisation algorithm to the relative sensor registration problem. IET Radar Sonar Navig. 7(2), 191–203 (2013)

  10. Grant, M., Boyd, S.P., Ye, Y.: CVX: Matlab software for disciplined convex programming (2008)

  11. Grippo, L., Sciandrone, M.: On the convergence of the block nonlinear Gauss–Seidel method under convex constraints. Oper. Res. Lett. 26(3), 127–136 (2000)

  12. Helmberg, C., Rendl, F., Vanderbei, R.J., Wolkowicz, H.: An interior-point method for semidefinite programming. SIAM J. Optim. 6(2), 342–361 (1996)

  13. Helmick, R.E., Rice, T.R.: Removal of alignment errors in an integrated system of two 3-D sensors. IEEE Trans. Aerosp. Electron. Syst. 29(4), 1333–1343 (1993)

  14. Horn, R.A., Johnson, C.R.: Matrix Analysis, pp. 12–14. Cambridge University Press, Cambridge (1985)

  15. Hsieh, C.S., Chen, F.C.: Optimal solution of the two-stage Kalman estimator. IEEE Trans. Autom. Control 44(1), 194–199 (1999)

  16. Leung, H., Blanchette, M., Harrison, C.: A least squares fusion of multiple radar data. In: Proceedings of RADAR 94, pp. 364–369 (1994)

  17. Li, X.R., Jilkov, V.P.: Survey of maneuvering target tracking: III. Measurement models (2001)

  18. Li, X.R., Jilkov, V.P.: Survey of maneuvering target tracking. I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1333–1364 (2003)

  19. Li, Z., Chen, S., Leung, H., Bosse, E.: Joint data association, registration, and fusion using EM-KF. IEEE Trans. Aerosp. Electron. Syst. 46(2), 496–507 (2010)

  20. Lin, X., Bar-Shalom, Y.: Multisensor target tracking performance with bias compensation. IEEE Trans. Aerosp. Electron. Syst. 42(3), 1139–1149 (2006)

  21. Lin, X., Bar-Shalom, Y., Kirubarajan, T.: Multisensor multitarget bias estimation for general asynchronous sensors. IEEE Trans. Aerosp. Electron. Syst. 41(3), 899–921 (2005)

  22. Liu, H., Yue, M.C., So, A.M.C., Ma, W.K.: A discrete first-order method for large-scale MIMO detection with provable guarantees. In: Proceedings of the 18th IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2017), pp. 669–673 (2017)

  23. Lu, C., Liu, Y.F., Zhang, W.Q., Zhang, S.: Tightness of a new and enhanced semidefinite relaxation for MIMO detection (2017). arXiv:1710.02048

  24. Luo, Z.Q., Chang, T.H.: SDP relaxation of homogeneous quadratic optimization: approximation bounds and applications. In: Palomar, D.P., Eldar, Y.C. (eds.) Convex Optimization in Signal Processing and Communications, pp. 117–165. Cambridge University Press, Cambridge (2010)

  25. Luo, Z.Q., Ma, W.K., So, A.M.C., Ye, Y., Zhang, S.: Semidefinite relaxation of quadratic optimization problems. IEEE Signal Process. Mag. 27(3), 20–34 (2010)

  26. Mahler, R., El-Fallah, A.: Bayesian unified registration and tracking. In: SPIE Defense, Security, and Sensing. International Society for Optics and Photonics (2011)

  27. Mahler, R.P.S.: Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1152–1178 (2003)

  28. Mo, L., Song, X., Zhou, Y., Sun, Z.K., Bar-Shalom, Y.: Unbiased converted measurements for tracking. IEEE Trans. Aerosp. Electron. Syst. 34(3), 1023–1027 (1998)

  29. Nabaa, N., Bishop, R.H.: Solution to a multisensor tracking problem with sensor registration errors. IEEE Trans. Aerosp. Electron. Syst. 35(1), 354–363 (1999)

  30. Okello, N.N., Challa, S.: Joint sensor registration and track-to-track fusion for distributed trackers. IEEE Trans. Aerosp. Electron. Syst. 40(3), 808–823 (2004)

  31. Pu, W., Liu, Y.F., Yan, J., Zhou, S., Liu, H., Luo, Z.Q.: A two-stage optimization approach to the asynchronous multi-sensor registration problem. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3271–3275 (2017)

  32. Ristic, B., Clark, D.E., Gordon, N.: Calibration of multi-target tracking algorithms using non-cooperative targets. IEEE J. Sel. Top. Signal Process. 7(3), 390–398 (2013)

  33. Ristic, B., Okello, N.: Sensor registration in ECEF coordinates using the MLR algorithm. In: Proceedings of the Sixth International Conference on Information Fusion, pp. 135–142 (2003)

  34. Taghavi, E., Tharmarasa, R., Kirubarajan, T., Bar-Shalom, Y., Mcdonald, M.: A practical bias estimation algorithm for multisensor-multitarget tracking. IEEE Trans. Aerosp. Electron. Syst. 52(1), 2–19 (2016)

  35. Zhou, Y.: A Kalman filter based registration approach for asynchronous sensors in multiple sensor fusion applications. In: Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2 (2004)

  36. Zhou, Y., Leung, H., Yip, P.C.: An exact maximum likelihood registration algorithm for data fusion. IEEE Trans. Signal Process. 45(6), 1560–1573 (1997)

  37. Zia, A., Kirubarajan, T., Reilly, J.P., Yee, D., Punithakumar, K., Shirani, S.: An EM algorithm for nonlinear state estimation with model uncertainties. IEEE Trans. Signal Process. 56(3), 921–936 (2008)


Acknowledgements

We would like to thank Professor Stephen Boyd of Stanford University for several useful discussions.

Author information


Corresponding author

Correspondence to Zhi-Quan Luo.

Additional information

This work was performed at the Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen. It was supported in part by National Natural Science Foundation of China (NSFC) Key Project Grant 61731018, NSFC Grants 11331012, 11631013, and 61601340, and the China National Funds for Distinguished Young Scientists Grant 61525105.

Appendices

Appendix A: Proof of Theorem 1

We first reformulate problem (4.2) as

$$\begin{aligned} \min _{\varDelta \rho }\min _{{\mathbf {v}}}~~\left\| {\mathbf {H}}_{0}\varDelta \rho +{\mathbf {H}}_{1}{\mathbf {v}}-{\mathbf {y}} \right\| ^2. \end{aligned}$$
(7.1)

The closed-form solution of the above problem with respect to \({\mathbf {v}}\) is

$$\begin{aligned} {\mathbf {v}}^*=-\left( {\mathbf {H}}_1^T{\mathbf {H}}_1\right) ^{-1}{\mathbf {H}}_1^T\left( {\mathbf {H}}_0\varDelta \rho -{\mathbf {y}}\right) . \end{aligned}$$
(7.2)

Substituting (7.2) into (7.1), we obtain the following linear LS problem with respect to \(\varDelta \rho \):

$$\begin{aligned} \min _{\varDelta \rho }~~\left\| \left( {\mathbf {I}}_{2(K-1)}-\bar{\mathbf {H}}_1\right) \left( {\mathbf {H}}_0\varDelta \rho -{\mathbf {y}}\right) \right\| ^2, \end{aligned}$$
(7.3)

where \(\bar{\mathbf {H}}_1={\mathbf {H}}_1\left( {\mathbf {H}}_1^T{\mathbf {H}}_1\right) ^{-1}{\mathbf {H}}_1^T\). By the definitions of \({\mathbf {H}}_0,~{\mathbf {H}}_1,\) and \({\mathbf {y}}\) in (4.3), problem (7.3) can be further rewritten as

$$\begin{aligned} \min _{\varDelta \rho }~~\sum _{k=1}^{K-1}\displaystyle \left\| {\mathbf {a}}_k\varDelta \rho - {\mathbf {b}}_k \right\| ^2, \end{aligned}$$

where

$$\begin{aligned} {\mathbf {a}}_k=\left[ \displaystyle \sum _{\ell =1}^{K-1}\bar{T}_\ell ^kc_\ell ,~\displaystyle \sum _{\ell =1}^{K-1}\bar{T}_\ell ^ks_\ell \right] ^T,\quad {\mathbf {b}}_k=\left[ \displaystyle \sum _{\ell =1}^{K-1}\bar{T}_\ell ^ky_\ell ^c,~\displaystyle \sum _{\ell =1}^{K-1}\bar{T}_\ell ^ky_\ell ^s\right] ^T, \end{aligned}$$

and

$$\begin{aligned} \bar{T}_\ell ^k=\left\{ \begin{aligned}&1-\frac{T_k^2}{\sum _{i=1}^{K-1}T_i^2},&\ell =k,\\&-\frac{T_kT_\ell }{\sum _{i=1}^{K-1}T_i^2},&\ell \ne k.\end{aligned}\right. \end{aligned}$$

To prove Theorem 1, it suffices to show that \({\mathbf {a}}_k^T{\mathbf {a}}_k\) and \({\mathbf {a}}_k^T{\mathbf {b}}_k\) for all \(k=1,2,\ldots ,K-1\) are independent of \(\varDelta \phi .\) Next, we show that this is indeed true.

It is straightforward to compute, for \(k=1,2,\ldots ,K-1,\)

$$\begin{aligned} {\mathbf {a}}_k^T{\mathbf {a}}_k =\sum _{\ell =1}^{K-1}\sum _{j=1}^{K-1}\bar{T}_\ell ^k\bar{T}_j^k\left( c_\ell c_j+s_\ell s_j\right) ,\quad {\mathbf {a}}_k^T{\mathbf {b}}_k =\sum _{\ell =1}^{K-1}\sum _{j=1}^{K-1}\bar{T}_\ell ^k\bar{T}_j^k\left( c_\ell y^c_j+s_\ell y^s_j\right) . \end{aligned}$$

Moreover, by the definitions of \(c_k, s_k, y_k^c,\) and \(y_k^s\) in (4.4), (4.5), (4.6), and (4.7) and the fact

$$\begin{aligned} \cos (\alpha _1+\varDelta \phi )\cos (\alpha _2+\varDelta \phi )+\sin (\alpha _1+\varDelta \phi )\sin (\alpha _2+\varDelta \phi ) =\cos (\alpha _1-\alpha _2), \end{aligned}$$

we can obtain, for \(\ell , j=1,2,\ldots ,K-1,\)

$$\begin{aligned} c_\ell c_j+s_\ell s_j=&\lambda ^{-2}\left[ \cos \left( \phi _{\ell +1}-\phi _{j+1}\right) +\cos \left( \phi _{\ell }-\phi _{j}\right) \right. \\&\left. -\,\cos \left( \phi _{\ell +1}-\phi _{j}\right) -\cos \left( \phi _{j+1}-\phi _{\ell }\right) \right] ,\\ c_\ell y_j^c+s_\ell y_j^s=&\lambda ^{-2}\left[ \rho _{j+1}\cos \left( \phi _{\ell +1}-\phi _{j+1}\right) +\rho _{j}\cos \left( \phi _{\ell }-\phi _{j}\right) \right. \\&\left. -\,\rho _{j}\cos \left( \phi _{\ell +1}-\phi _{j}\right) -{{\rho _{j+1}}}\cos \left( \phi _{j+1}-\phi _{\ell }\right) \right] . \end{aligned}$$

This immediately shows that \({\mathbf {a}}_k^T{\mathbf {a}}_k\) and \({\mathbf {a}}_k^T{\mathbf {b}}_k\) for all \(k=1,2,\ldots ,K-1\) are independent of \(\varDelta \phi .\) This completes the proof of Theorem 1.
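The variable elimination in (7.1)–(7.3) is a standard variable-projection step and can be checked numerically. The sketch below uses randomly generated matrices of illustrative dimensions in place of \({\mathbf {H}}_0\), \({\mathbf {H}}_1\), and \({\mathbf {y}}\) (it does not reproduce the specific structure in (4.3)); it verifies that \(\bar{\mathbf {H}}_1\) is an orthogonal projector and that solving the reduced problem (7.3) recovers the \(\varDelta \rho \) component of the joint least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 6
H0 = rng.standard_normal((2 * (K - 1), 1))   # stands in for H_0 (illustrative shape)
H1 = rng.standard_normal((2 * (K - 1), 2))   # stands in for H_1 (illustrative shape)
y = rng.standard_normal(2 * (K - 1))

# Orthogonal projector onto range(H1): Hbar = H1 (H1^T H1)^{-1} H1^T.
Hbar = H1 @ np.linalg.solve(H1.T @ H1, H1.T)
assert np.allclose(Hbar @ Hbar, Hbar)        # idempotent, as a projector must be

# Joint LS over (drho, v) ...
M = np.hstack([H0, H1])
sol, *_ = np.linalg.lstsq(M, y, rcond=None)
drho_joint = sol[:1]

# ... versus the reduced problem (7.3): min_drho ||(I - Hbar)(H0 drho - y)||^2.
P = np.eye(2 * (K - 1)) - Hbar
drho_red, *_ = np.linalg.lstsq(P @ H0, P @ y, rcond=None)
assert np.allclose(drho_joint, drho_red)
```

Eliminating \({\mathbf {v}}\) first and then minimizing over \(\varDelta \rho \) therefore gives exactly the joint minimizer, which is what licenses the reduction used in the proof.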

Appendix B: Proof of Theorem 2

To show that SDP (5.10) admits a unique solution of rank one, it is sufficient to show that, if \(\mathbf {\varDelta H}\) is sufficiently small, there always exists \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\) such that \({\mathbf {1}}{\mathbf {1}}^T\in {\mathbb {H}}^{M+1}\) is the unique solution of the following SDP

$$\begin{aligned} \begin{aligned} \min _{{\mathbf {X}}\in {\mathbb {H}}^{M+1}}\quad&\text {Tr}(\hat{\mathbf {C}}{\mathbf {X}})\\ \text {s.t.}\quad \quad&\text {diag}({\mathbf {X}})={\mathbf {1}}, \\&{\mathbf {X}}\succeq {\mathbf {0}}, \end{aligned} \end{aligned}$$
(7.4)

where

$$\begin{aligned} \hat{\mathbf {C}}=\begin{bmatrix}{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}&\quad {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}} \\ {\mathbf {c}}^\dagger \mathbf {PHW}&\quad 0\end{bmatrix}\in {\mathbb {H}}^{M+1}. \end{aligned}$$

By Lemma 1, we only need to show that, if \(\mathbf {\varDelta H}\) is sufficiently small, there always exists \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\) such that \({\mathbf {1}}{\mathbf {1}}^T\in {\mathbb {H}}^{M+1}\) and some \({\mathbf {y}}\in {\mathbb {R}}^{M+1}\) jointly satisfy

$$\begin{aligned}&\begin{aligned}&\hat{\mathbf {C}}+\text {diag}({\mathbf {y}})\\&\quad =\begin{bmatrix}{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M})&\quad {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}} \\ {\mathbf {c}}^\dagger \mathbf {PHW}&\quad {\mathbf {y}}_{M+1}\end{bmatrix}\succeq {\mathbf {0}}, \end{aligned} \end{aligned}$$
(7.5)
$$\begin{aligned}&\begin{bmatrix}{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M})&\quad {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}} \\ {\mathbf {c}}^\dagger \mathbf {PHW}&\quad {\mathbf {y}}_{M+1}\end{bmatrix}{\mathbf {1}}={\mathbf {0}}\in {\mathbb {R}}^{M+1}, \end{aligned}$$
(7.6)

and

$$\begin{aligned} {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M}) \succ {\mathbf {0}}. \end{aligned}$$
(7.7)

Notice that the above (7.5), (7.6), and (7.7) correspond to conditions 2, 3, and 4 in Lemma 1, respectively, and \({\mathbf {1}}{\mathbf {1}}^T\in {\mathbb {H}}^{M+1}\) obviously satisfies condition 1 in Lemma 1. Moreover, we rewrite (7.6) as follows:

$$\begin{aligned}&\left[ {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M}) \right] {\mathbf {1}}+{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}}={\mathbf {0}}\in {\mathbb {R}}^{M}, \end{aligned}$$
(7.8a)
$$\begin{aligned}&\left[ {\mathbf {c}}^\dagger \mathbf {PHW}\right] {\mathbf {1}}+{\mathbf {y}}_{M+1}=0. \end{aligned}$$
(7.8b)

Next, we prove that, if \(\mathbf {\varDelta H}\) is sufficiently small, there always exist \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\) and \({\mathbf {y}}\in {\mathbb {R}}^{M+1}\) that satisfy (7.5), (7.8a), (7.8b), and (7.7). We divide the proof into two parts. More specifically, we first show that, if there exist \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\) and \({\mathbf {y}}\in {\mathbb {R}}^{M+1}\) satisfying (7.8a) and (7.7), then we can construct \({\mathbf {y}}_{M+1}\) that simultaneously satisfies (7.5) and (7.8b). Then, we show that, if \(\mathbf {\varDelta H}\) is sufficiently small, there indeed exist \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\) and \({\mathbf {y}}_{1:M}\) that satisfy (7.8a) and (7.7).

Let us first assume that \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) (with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\)) and \({\mathbf {y}}_{1:M}\) satisfy (7.7) and (7.8a). We show that there exists \({\mathbf {y}}_{M+1}\) that simultaneously satisfies (7.5) and (7.8b). Let

$$\begin{aligned} {\mathbf {y}}_{M+1}={\mathbf {c}}^\dagger \mathbf {PHW}\left[ {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M}) \right] ^{-1}{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}}. \end{aligned}$$
(7.9)

This, together with (7.7), immediately implies (7.5). Moreover, from (7.8a), we have

$$\begin{aligned} {\mathbf {1}}=-\left[ {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M}) \right] ^{-1}{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}}\in {\mathbb {R}}^{M}. \end{aligned}$$

Combining the above and (7.9) immediately yields (7.8b).
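The heart of this first part is a dual certificate: a diagonal perturbation \(\text {Diag}({\mathbf {y}})\) such that \(\hat{\mathbf {C}}+\text {Diag}({\mathbf {y}})\succeq {\mathbf {0}}\) and \((\hat{\mathbf {C}}+\text {Diag}({\mathbf {y}})){\mathbf {1}}={\mathbf {0}}\) certifies that \({\mathbf {1}}{\mathbf {1}}^T\) solves (7.4). The real-symmetric sketch below builds such a certificate pair directly (an assumed toy construction, not the paper's \(\hat{\mathbf {C}}\)) and checks that the certified point lower-bounds the objective over random feasible matrices with unit diagonal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
ones = np.ones(n)

# Build a certificate pair (C, y): choose S >= 0 with S @ 1 = 0, set C = S - Diag(y).
Q = np.eye(n) - np.outer(ones, ones) / n       # projector orthogonal to the ones vector
B = rng.standard_normal((n, n))
S = Q @ (B @ B.T) @ Q                          # PSD and S @ 1 = 0 by construction
yv = rng.standard_normal(n)
C = S - np.diag(yv)

# For any feasible X (diag(X) = 1, X >= 0):
#   Tr(C X) = Tr(S X) - sum(y),  and Tr(S X) >= 0 with equality at X = 1 1^T,
# so X* = 1 1^T is optimal.  Check against random feasible unit-diagonal PSD matrices.
Xstar = np.outer(ones, ones)
val_star = np.trace(C @ Xstar)
for _ in range(100):
    A = rng.standard_normal((n, n))
    M = A @ A.T
    d = np.sqrt(np.diag(M))
    X = M / np.outer(d, d)                     # random PSD matrix with unit diagonal
    assert np.trace(C @ X) >= val_star - 1e-9
```

The same weak-duality argument, carried out for the Hermitian matrix \(\hat{\mathbf {C}}\) together with the strict condition (7.7), is what forces the solution of (7.4) to be unique and of rank one.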

Now, let us argue that, if \(\mathbf {\varDelta H}\) is sufficiently small, then there always exist \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) (with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\)) and \({\mathbf {y}}_{1:M}\) that satisfy (7.8a) and (7.7). In particular, the following Claim 1 and Claim 2 guarantee the existence of \({\mathbf {W}}=\text {Diag}({\mathbf {w}})\) (with \(|{\mathbf {w}}_m|=1\) for all \(m=1,2,\ldots ,M\)) and \({\mathbf {y}}_{1:M}\) that satisfy (7.8a) and (7.7), respectively.

Claim 1. There always exist a neighborhood \({{\mathcal {H}}}\subseteq {\mathbb {C}}^{(K-1)\times M}\) containing \(\tilde{\mathbf {H}}\) and two unique continuously differentiable functions \(d_{\varvec{\psi }}:{\mathbb {C}}^{(K-1)\times M}\mapsto {\mathbb {R}}^{M}\) and \(d_{{\mathbf {y}}}:{\mathbb {C}}^{(K-1)\times M}\mapsto {\mathbb {R}}^{M}\) such that (7.8a) holds true for all \({\mathbf {H}}\in {\mathcal {H}}\) with \({\mathbf {W}}=\text {Diag}\left( \left[ e^{j\psi _1}, e^{j\psi _2},\ldots , e^{j\psi _M}\right] ^T\right) \), \(\varvec{\psi }=d_{\varvec{\psi }}({\mathbf {H}}),\) and \({\mathbf {y}}=d_{{\mathbf {y}}}({\mathbf {H}})\).

Proof

To prove Claim 1, we first reformulate complex equation (7.8a) as an equivalent real form; then we apply the implicit function theorem [3] based on the equivalent form to show the existence of the neighborhood \({\mathcal {H}}\) and two unique continuously differentiable functions \(d_{\varvec{\psi }}\) and \(d_{{\mathbf {y}}}.\)

The equivalent form of (7.8a) is

$$\begin{aligned} f\left( \varvec{\psi },{\mathbf {y}},{\mathbf {H}}\right) =\varvec{0}, \end{aligned}$$
(7.10)

where \(f\left( \varvec{\psi },{\mathbf {y}},{\mathbf {H}} \right) :{\mathbb {R}}^M\times {\mathbb {R}}^M\times {\mathbb {C}}^{(K-1)\times M}\mapsto {\mathbb {R}}^{2M}\) is

$$\begin{aligned} \begin{aligned} f\left( \varvec{\psi },{\mathbf {y}},{\mathbf {H}}\right) =&\begin{bmatrix}\text {Re}\left\{ \left[ {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {diag}({\mathbf {y}}_{1:M}) \right] {\mathbf {1}}+{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}}\right\} \\ \text {Im}\left\{ \left[ {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {diag}({\mathbf {y}}_{1:M}) \right] {\mathbf {1}}+{\mathbf {W}}^\dagger {\mathbf {H}}^\dagger {\mathbf {P}}{\mathbf {c}}\right\} \end{bmatrix}\\ =&\begin{bmatrix} \mathbf {C_{\varvec{\psi }}\text {Re}\{ A_H \}}\mathbf {C_{\varvec{\psi }}}+\mathbf {S_{\varvec{\psi }}\text {Im}\{ A_H \}}\mathbf {C_{\varvec{\psi }}}\\ \mathbf {C_{\varvec{\psi }}\text {Re}\{ A_H \}}\mathbf {S_{\varvec{\psi }}}+\mathbf {S_{\varvec{\psi }}\text {Im}\{ A_H \}}\mathbf {S_{\varvec{\psi }}} \end{bmatrix}{\mathbf {1}}\\&+ \begin{bmatrix} -\mathbf {C_{\varvec{\psi }}\text {Im}\{ A_H \}}\mathbf {S_{\varvec{\psi }}}+\mathbf {S_{\varvec{\psi }}\text {Re}\{ A_H \}}\mathbf {S_{\varvec{\psi }}}\\ \mathbf {C_{\varvec{\psi }}\text {Im}\{ A_H \}}\mathbf {C_{\varvec{\psi }}}-\mathbf {S_{\varvec{\psi }}\text {Re}\{ A_H \}}\mathbf {C_{\varvec{\psi }}} \end{bmatrix}{\mathbf {1}}\\&+ \begin{bmatrix} \mathbf {C_{\varvec{\psi }}}\mathbf {\text {Re}\{ b_H \}}+\mathbf {S_{\varvec{\psi }}}\mathbf {\text {Im}\{ b_H \}}\\ \mathbf {C_{\varvec{\psi }}}\mathbf {\text {Im}\{ b_H \}}-\mathbf {S_{\varvec{\psi }}}\mathbf {\text {Re}\{ b_H \}} \end{bmatrix} +\begin{bmatrix}{\mathbf {y}}\\\varvec{0}\end{bmatrix}. \end{aligned} \end{aligned}$$
(7.11)

In the above, \(\mathbf {C_{\varvec{\psi }}}\) and \(\mathbf {S_{\varvec{\psi }}}\) are diagonal matrices related to \(\varvec{\psi }\) as follows:

$$\begin{aligned} \begin{aligned} \mathbf {C_{\varvec{\psi }}}=\text {Diag}\left( \left[ \cos \varvec{\psi }_1,\ldots ,\cos \varvec{\psi }_M\right] ^T\right) ,\\ \mathbf {S_{\varvec{\psi }}}=\text {Diag}\left( \left[ \sin \varvec{\psi }_1,\ldots ,\sin \varvec{\psi }_M\right] ^T\right) ; \end{aligned} \end{aligned}$$
(7.12)

and \(\mathbf {A_H}\in {\mathbb {H}}^{M}\) and \(\mathbf { b_H}\in {\mathbb {C}}^M\) are determined by \({\mathbf {H}}\) as follows:

$$\begin{aligned} \mathbf {A_H} = \mathbf {H^\dagger PH},~\mathbf { b_H} = \mathbf {H^\dagger Pc}. \end{aligned}$$

Now, we apply the implicit function theorem [3, Theorem 5] to Eq. (7.10). Notice that defining \(f\left( \varvec{\psi },{\mathbf {y}},{\mathbf {H}}\right) \) in (7.11) over \(\left( \varvec{\psi },{\mathbf {y}},{\mathbf {H}}\right) \) is equivalent to defining \(f(\varvec{\psi },{\mathbf {y}},\text {Re}\{\mathbf {{H}}\},\text {Im}\{\mathbf {{H}}\}): {\mathbb {R}}^M\times {\mathbb {R}}^M\times {\mathbb {R}}^{(K-1)\times M}\times {\mathbb {R}}^{(K-1)\times M}\mapsto {\mathbb {R}}^{2M}\) over \((\varvec{\psi },{\mathbf {y}},\text {Re}\{\mathbf {{H}}\},\text {Im}\{\mathbf {{H}}\}).\) For notational simplicity, we will keep using (7.11) in our proof, although it should be understood as \(f(\varvec{\psi },{\mathbf {y}},\text {Re}\{\mathbf {{H}}\},\text {Im}\{\mathbf {{H}}\})\). First of all, \(f(\varvec{\psi },{\mathbf {y}},\mathbf {{H}})\) is continuously differentiable (i.e., \(f(\varvec{\psi },{\mathbf {y}},\text {Re}\{\mathbf {{H}}\},\text {Im}\{\mathbf {{H}}\})\) is continuously differentiable).

Moreover, it follows from (5.17) that

$$\begin{aligned} f\left( \varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}}\right) ={\mathbf {0}}, \end{aligned}$$
(7.13)

where \(\varvec{\tilde{\psi }}=\varvec{\varDelta \bar{\phi }},~\tilde{\mathbf {y}}={\mathbf {0}},\) and \(\tilde{\mathbf {H}}\) is defined in (5.14). If the Jacobian matrix \(D_{(\varvec{\psi },{\mathbf {y}})}f\) is invertible at point \((\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}})\), then it follows from the implicit function theorem [3, Theorem 5] that the desired \(\mathcal {{H}}\), \({d}_{\varvec{\psi }},\) and \({d}_{{\mathbf {y}}}\) in Claim 1 must exist.

It remains to prove that the Jacobian matrix \(D_{(\varvec{\psi },{\mathbf {y}})}f\) at point \((\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}} )\) is invertible.

By some calculations, we obtain

$$\begin{aligned} D_{(\varvec{\psi },{\mathbf {y}})}f=\begin{bmatrix} \frac{\partial f}{\partial \varvec{\psi }}&\frac{\partial f}{\partial {\mathbf {y}}} \end{bmatrix}=\begin{bmatrix} *&{\mathbf {I}}_M\\{\mathbf {G}}&{\mathbf {0}} \end{bmatrix}\in {\mathbb {C}}^{2M\times 2M}, \end{aligned}$$
(7.14)

where

$$\begin{aligned} \begin{aligned} {\mathbf {G}}=&~{\mathbf {C}}_{\varvec{\psi }}\mathbf {\text {Re}\{ A_H \}}{\mathbf {C}}_{\varvec{\psi }}- {\mathbf {S}}_{\varvec{\psi }}\text {Diag}(\mathbf {\text {Re}\{ A_H \}}{\mathbf {s}}_{\varvec{\psi }})\\ {}&+ {\mathbf {S}}_{\varvec{\psi }}\mathbf {\text {Im}\{ A_H \}}{\mathbf {C}}_{\varvec{\psi }}+ {\mathbf {C}}_{\varvec{\psi }}\text {Diag}(\mathbf {\text {Im}\{ A_H \}}{\mathbf {s}}_{\varvec{\psi }})\\ {}&- {\mathbf {C}}_{\varvec{\psi }}\mathbf {\text {Im}\{ A_H \}}{\mathbf {S}}_{\varvec{\psi }}- {\mathbf {S}}_{\varvec{\psi }}\text {Diag}(\mathbf {\text {Im}\{ A_H \}}{\mathbf {c}}_{\varvec{\psi }})\\ {}&+ {\mathbf {S}}_{\varvec{\psi }}\mathbf {\text {Re}\{ A_H \}}{\mathbf {S}}_{\varvec{\psi }}- {\mathbf {C}}_{\varvec{\psi }}\text {Diag}(\mathbf {\text {Re}\{ A_H \}}{\mathbf {c}}_{\varvec{\psi }})\\ {}&+ {\mathbf {S}}_{\varvec{\psi }}\text {Diag}(\mathbf {\text {Im}\{ b_H \}})+{\mathbf {C}}_{\varvec{\psi }}\text {Diag}(\mathbf {\text {Re}\{ b_H \}}). \end{aligned} \end{aligned}$$
(7.15)

By (7.13), we have

$$\begin{aligned} \begin{aligned} {\text {Re}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}{\mathbf {c}}_{\varvec{\tilde{\psi }}}-{\text {Im}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}{\mathbf {s}}_{\varvec{\tilde{\psi }}}={\text {Re}\{ \mathbf {b}_{\tilde{\mathbf{H}}}\}},\\ {\text {Im}\{\mathbf {A}_{\tilde{\mathbf{H}}}\}}{\mathbf {c}}_{\varvec{\tilde{\psi }}}+{\text {Re}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}{\mathbf {s}}_{\varvec{\tilde{\psi }}}={\text {Im}\{ \mathbf {b}_{\tilde{\mathbf{H}}}\}}. \end{aligned} \end{aligned}$$
(7.16)

Substituting (7.16) into (7.15), we obtain

$$\begin{aligned} {\mathbf {G}}|_{(\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}})}=\begin{bmatrix}{\mathbf {C}}_{\varvec{\tilde{\psi }}}&{\mathbf {S}}_{\varvec{\tilde{\psi }}}\end{bmatrix} \begin{bmatrix}{\text {Re}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}&-{\text {Im}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}\\{\text {Im}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}&{\text {Re}\{ \mathbf {A}_{\tilde{\mathbf{H}}}\}}\end{bmatrix} \begin{bmatrix}{\mathbf {C}}_{\varvec{\tilde{\psi }}}\\{\mathbf {S}}_{\varvec{\tilde{\psi }}}\end{bmatrix}. \end{aligned}$$

Since \({\mathbf {A}_{\tilde{\mathbf{H}}}}=\tilde{\mathbf{H}}^\dagger {\mathbf {P}}\tilde{\mathbf{H}}\) is positive definite and the matrix \(\left[ {\mathbf {C}}_{\varvec{\tilde{\psi }}}, {\mathbf {S}}_{\varvec{\tilde{\psi }}}\right] \) is of full row rank, we immediately have

$$\begin{aligned} {\mathbf {x}}^T{\mathbf {G}}|_{(\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}})}{\mathbf {x}}>0,~\forall ~{\mathbf {x}}\in {\mathbb {R}}^{M}\setminus \{{\mathbf {0}}\}, \end{aligned}$$

which further implies that \({\mathbf {G}}|_{(\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}})}\) is invertible. Consequently, the Jacobian matrix \(D_{(\varvec{\psi },{\mathbf {y}})}f\) in (7.14) at point \((\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}})\) is invertible. The proof of Claim 1 is completed. \(\square \)

Let \({\mathbf {F}}\) and \({\mathbf {D}}\) be the Jacobian matrices of function f (defined in (7.11)) with respect to \(\text {vec}\left( {\mathbf {H}}\right) \) and \((\varvec{\psi },{\mathbf {y}})\) at point \((\varvec{\tilde{\psi }},\tilde{\mathbf {y}},\tilde{\mathbf {H}})\), respectively. From the proof of Claim 1, we know that both \({\mathbf {F}}\) and \({\mathbf {D}}\) are well defined.

Claim 2. Consider \({\mathbf {H}}\in {{\mathcal {H}}}\subseteq {\mathbb {C}}^{(K-1)\times M},\) where \({{\mathcal {H}}}\) is the one in Claim 1. Suppose

$$\begin{aligned} \lambda _{\text {min}}({\mathbf {H}}^\dagger {{\mathbf {P}}}{{\mathbf {H}}})>\max _{1\le m\le M}\left\{ \text {Re}\left\{ \left[ {\mathbf {D}}^{-1}{\mathbf {F}}\text {vec}\left( {\mathbf {H}} - {{\tilde{\mathbf{H }}}}\right) \right] _{M+m} \right\} \right\} .\end{aligned}$$
(7.17)

Then, \({\mathbf {W}}\) and \({\mathbf {y}}\) defined in Claim 1 satisfy (7.7).

Proof

It follows from Claim 1 that

$$\begin{aligned} {\mathbf {y}}_{m}{{({\mathbf {H}})}}:=\left[ d_{{\mathbf {y}}}({\mathbf {H}})\right] _m,~m=1,2,\ldots ,M \end{aligned}$$

are continuously differentiable functions with respect to \({\mathbf {H}}\in {{\mathcal {H}}}\subseteq {\mathbb {C}}^{(K-1)\times M}.\)

Therefore, for \(m=1,2,\ldots ,M,\) we get

$$\begin{aligned} {\mathbf {y}}_m{{({\mathbf {H}})}}&= {\mathbf {y}}_m({{\tilde{\mathbf{H }}}}) + \text {Re}\left\{ \text {Tr}\left( \nabla {\mathbf {y}}_m({{\tilde{\mathbf{H }}}}) {{ \left( {\mathbf {H}}-{{\tilde{\mathbf{H }}}}\right) }}\right) \right\} + {{o}}(\Vert { {{\mathbf {H}}-{{\tilde{\mathbf{H }}}}}}\Vert )\nonumber \\&= -\text {Re}\left\{ \left[ {\mathbf {D}}^{-1}{\mathbf {F}}\text {vec} {{\left( {\mathbf {H}}-{{\tilde{\mathbf{H }}}}\right) }} \right] _{M+m} \right\} + {{o}}(\Vert {{{\mathbf {H}}-{{\tilde{\mathbf{H }}}}}}\Vert ), \end{aligned}$$
(7.18)

where (7.18) is due to the differentiation rule of the implicit function and \({\mathbf {y}}_m({{\tilde{\mathbf{H }}}})=0.\) Combining (7.18) and (7.17), we immediately have

$$\begin{aligned} {\mathbf {W}}^\dagger {\mathbf {H}}^\dagger \mathbf {PHW}+\text {Diag}({\mathbf {y}}_{1:M}) ={\mathbf {W}}^\dagger \left[ {\mathbf {H}}^\dagger {{\mathbf {P}}}{{\mathbf {H}}}+\text {Diag}({\mathbf {y}}_{1:M})\right] {\mathbf {W}}\succ {\mathbf {0}}, \end{aligned}$$
(7.19)

which shows that \({\mathbf {W}}\) and \({\mathbf {y}}\) satisfy (7.7). The proof of Claim 2 is completed. \(\square \)
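The expansion (7.18) is the standard first-order implicit-function estimate: the implicit solution is tracked to first order by \(-{\mathbf {D}}^{-1}{\mathbf {F}}\,\text {vec}({\mathbf {H}}-\tilde{\mathbf {H}})\). The scalar sketch below illustrates the same mechanism on a hypothetical equation \(f(x,h)=x^3+x-h=0\) (chosen only because its real root is unique and easy to compute): the derivative \(dx/dh=-(\partial f/\partial x)^{-1}\partial f/\partial h\) predicts the root's movement up to an \(o(\delta )\) error.

```python
import numpy as np

def x_of_h(h):
    # The unique real root of f(x, h) = x^3 + x - h = 0 (f is strictly increasing in x).
    roots = np.roots([1.0, 0.0, 1.0, -h])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[0]

h0 = 2.0
x0 = x_of_h(h0)                       # here x0 = 1, since 1 + 1 - 2 = 0
# Implicit function theorem: dx/dh = -(df/dx)^{-1} df/dh = 1 / (3 x0^2 + 1).
slope = 1.0 / (3.0 * x0 ** 2 + 1.0)

for delta in (1e-2, 1e-3, 1e-4):
    err = abs(x_of_h(h0 + delta) - (x0 + slope * delta))
    assert err < 10.0 * delta ** 2    # the first-order prediction is accurate to o(delta)
```

In the proof above, the same computation applied componentwise to \(d_{{\mathbf {y}}}\) yields (7.18), and the smallness condition (7.17) then keeps the perturbed diagonal from destroying positive definiteness in (7.19).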


About this article


Cite this article

Pu, W., Liu, YF., Yan, J. et al. Optimal estimation of sensor biases for asynchronous multi-sensor data fusion. Math. Program. 170, 357–386 (2018). https://doi.org/10.1007/s10107-018-1304-2

