Link to original content: https://doi.org/10.1007/s11063-022-11136-6
Multi-view Subspace Clustering Based on Unified Measure Standard

Published in: Neural Processing Letters

Abstract

In recent years, multi-view subspace clustering has attracted extensive attention. To improve clustering performance, previous work explores the consistency and specificity between different views by making the common representation matrix as close as possible to the representation matrix learned in each view. However, elements encoding a similar degree of relationship strength often lie at different magnitude levels across the representation matrices learned from different views. In this situation, the above strategy causes the information from some views to be ignored or magnified. To overcome this limitation, we propose a novel multi-view subspace clustering method in this paper. By scaling the representation matrix learned in each view, our method normalizes the relationship strengths of every view to a unified measure standard, so the consistency and specificity between different views can be mined more effectively. In addition, we provide a theoretical analysis of the convergence and computational complexity of our numerical algorithm. Experimental results on several benchmark data sets indicate that our proposed method is both effective and efficient for the problem of multi-view subspace clustering.
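The unified-measure idea in the abstract can be illustrated with a small sketch: rescale each view's learned representation matrix so that entry magnitudes are comparable before forming a common representation. The specific normalization chosen here (dividing by the largest absolute entry) and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def unify_scale(Z_views):
    """Scale each view's representation matrix so that entry magnitudes
    become comparable across views (hypothetical normalization: divide
    each matrix by its largest absolute entry)."""
    return [Z / np.max(np.abs(Z)) for Z in Z_views]

def consensus(Z_views):
    """Average the rescaled matrices into a common representation."""
    Z_scaled = unify_scale(Z_views)
    return sum(Z_scaled) / len(Z_scaled)

# Two toy views whose affinity patterns agree but whose magnitudes
# differ by two orders; naive averaging would drown out the second view.
Z1 = np.array([[0.0, 0.9], [0.9, 0.0]])
Z2 = np.array([[0.0, 0.009], [0.009, 0.0]])

C = consensus([Z1, Z2])
```

After rescaling, both views contribute equally to the consensus matrix, whereas averaging the raw matrices would be dominated by the view with the larger magnitudes.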



Data Availability

The data sets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html.

  2. http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 62076115, in part by the LiaoNing Revitalization Talents Program under Grant XLYC1907169, and in part by the Program of Star of Dalian Youth Science and Technology under Grants 2019RQ033 and 2020RQ053.

Author information

Corresponding author

Correspondence to Kewei Tang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Proof of Theorem 1

Suppose we have obtained \(Z^{(v)}_{k}\), \(\mu _{k}\) and \(C_{k}\) by our algorithm. In the next round of iteration, when updating \(Z^{(v)}\), \(\mu _{k}\) and \(C_{k}\) are fixed. Since Eq. (6) gives a closed-form solution, \(Z^{(v)}_{k+1}\) is the optimal solution. Therefore, it minimizes the value of the objective function with \(\mu _{k}\) and \(C_{k}\) fixed, i.e.

$$\begin{aligned} f(Z^{(v)}_{k+1},\mu _{k},C_{k})\le f(Z^{(v)}_{k},\mu _{k},C_{k}). \end{aligned}$$
(A1)

When updating \(\mu \), \(Z^{(v)}_{k+1}\) and \(C_{k}\) are fixed. Since Eqs. (8) and (10) both give closed-form solutions, \(\mu _{k+1}\) is the optimal solution. Therefore, it minimizes the value of the objective function with \(Z^{(v)}_{k+1}\) and \(C_{k}\) fixed, i.e.

$$\begin{aligned} f(Z^{(v)}_{k+1},\mu _{k+1},C_{k})\le f(Z^{(v)}_{k+1},\mu _{k},C_{k}). \end{aligned}$$
(A2)

When updating \(C\), \(Z^{(v)}_{k+1}\) and \(\mu _{k+1}\) are fixed. Since Eq. (12) also gives a closed-form solution, \(C_{k+1}\) is the optimal solution. Therefore, it minimizes the value of the objective function with \(Z^{(v)}_{k+1}\) and \(\mu _{k+1}\) fixed, i.e.

$$\begin{aligned} f(Z^{(v)}_{k+1},\mu _{k+1},C_{k+1})\le f(Z^{(v)}_{k+1},\mu _{k+1},C_{k}). \end{aligned}$$
(A3)

Hence, after a round of iteration, the value of the objective function does not increase, i.e.

$$\begin{aligned} f(Z^{(v)}_{k+1},\mu _{k+1},C_{k+1})\le f(Z^{(v)}_{k},\mu _{k},C_{k}). \end{aligned}$$
(A4)

Hence, Theorem 1 is proved.
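The block-coordinate argument above can be checked numerically with a toy alternating minimization: each block update is an exact minimizer with the other block fixed, so the chain of inequalities (A1)–(A3) forces the objective to be non-increasing. The quadratic surrogate objective and variable names below are assumptions for illustration only, not the paper's objective \(f\).

```python
# Toy alternating minimization on a simple surrogate objective
# f(z, mu) = (z - mu)^2 + (mu - 1)^2. Each block update is the exact
# minimizer with the other variable fixed, mirroring the structure of
# the proof: the objective value can never increase between rounds.

def f(z, mu):
    return (z - mu) ** 2 + (mu - 1) ** 2

z, mu = 5.0, -3.0
values = [f(z, mu)]
for _ in range(10):
    z = mu               # argmin over z with mu fixed
    mu = (z + 1) / 2     # argmin over mu with z fixed (from setting the derivative to zero)
    values.append(f(z, mu))

# Every round's objective value is <= the previous round's.
monotone = all(b <= a for a, b in zip(values, values[1:]))
```

Because the sequence of objective values is non-increasing and bounded below (by zero here, and by the infimum of \(f\) in general), it converges, which is the content of Theorem 1.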

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Tang, K., Wang, X. & Li, J. Multi-view Subspace Clustering Based on Unified Measure Standard. Neural Process Lett 55, 6231–6246 (2023). https://doi.org/10.1007/s11063-022-11136-6


  • DOI: https://doi.org/10.1007/s11063-022-11136-6
