
Explicit Diffusion of Gaussian Mixture Model Based Image Priors

  • Conference paper
  • Scale Space and Variational Methods in Computer Vision (SSVM 2023)

Abstract

In this work we tackle the problem of estimating the density \( f_X \) of a random variable \( X \) by successive smoothing, such that the smoothed random variable \( Y \) fulfills the diffusion equation \( (\partial _t - \varDelta _1)f_Y(\,\cdot \,, t) = 0 \) with initial condition \( f_Y(\,\cdot \,, 0) = f_X \). With a focus on image processing, we propose a product/fields-of-experts model with Gaussian mixture experts that admits an analytic expression for \( f_Y(\,\cdot \,, t) \) under an orthogonality constraint on the filters. This construction naturally allows the model to be trained simultaneously over the entire diffusion horizon using empirical Bayes. We show preliminary results on image denoising, where our model is competitive while being tractable and interpretable, with only a small number of learnable parameters. As a byproduct, our model can be used for reliable noise estimation, enabling blind denoising of images corrupted by heteroscedastic noise.
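
To make this concrete, here is a worked one-dimensional sketch (an illustration consistent with the abstract, not an excerpt from the paper). The heat equation acts on a Gaussian mixture simply by inflating each component's variance, which is the closed-form behavior that makes Gaussian mixture experts attractive here:

\[ f_X(x) = \sum_{j=1}^{J} w_j \, \mathcal{N}(x; \mu_j, \sigma_j^2) \quad \Longrightarrow \quad f_Y(x, t) = (G_{2t} * f_X)(x) = \sum_{j=1}^{J} w_j \, \mathcal{N}(x; \mu_j, \sigma_j^2 + 2t), \]

where \( G_{2t} \) is the heat kernel, a Gaussian of variance \( 2t \). Since \( f_Y(\,\cdot \,, t) \) is then exactly the density of \( X \) corrupted by Gaussian noise of variance \( 2t \), the classical Tweedie/Miyasawa identity

\[ \mathbb{E}[X \mid Y = y] = y + 2t \, \nabla _y \log f_Y(y, t) \]

ties the diffused density to minimum mean-squared-error denoising; this is the empirical Bayes connection that allows the model to be trained, and used for denoising, across the whole diffusion horizon.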


Notes

  1. For notational convenience, throughout this article we do not make a distinction between the distribution and the density of a random variable.

  2. Without any reference to samples \( x_i \sim f_X \), an equivalent statement may be that \( f_X \) is (close to) zero almost everywhere (in the layman's, not the measure-theoretic, sense).

  3. For simplicity, we discard the normalization constant \( Z \), which is independent of \( t \).

  4. For visualization purposes, we normalized the negative log-density to have a minimum of zero over \( t \): \( l_\theta (x, t) = -\log \tilde{f}_\theta (x, t) + \max _t \log \tilde{f}_\theta (x, t) \). A small numerical sketch of this normalization follows these notes.
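
The following minimal numerical sketch ties notes 3 and 4 to the closed-form diffusion of a one-dimensional Gaussian mixture. All names and parameter values are hypothetical illustrations, not the paper's implementation:

    import numpy as np

    def neg_log_diffused_gmm(x, t, weights, means, variances):
        # Heat diffusion for time t convolves the density with a Gaussian
        # of variance 2t, so each mixture component's variance grows by 2t.
        var_t = variances + 2.0 * t
        comps = weights * np.exp(-0.5 * (x - means) ** 2 / var_t) \
                / np.sqrt(2.0 * np.pi * var_t)
        # Note 3: a t-independent normalization constant Z would only shift
        # the negative log-density, so it is discarded here.
        return -np.log(comps.sum())

    # Hypothetical two-component mixture.
    w = np.array([0.7, 0.3])
    mu = np.array([-1.0, 2.0])
    s2 = np.array([0.25, 1.0])

    xs = np.linspace(-4.0, 4.0, 65)
    ts = np.linspace(0.0, 2.0, 5)
    l = np.array([[neg_log_diffused_gmm(x, t, w, mu, s2) for t in ts]
                  for x in xs])

    # Note 4: shift so that the minimum over t is zero for each x.
    l_normalized = l - l.min(axis=1, keepdims=True)

The shift by the minimum over \( t \) reproduces \( l_\theta (x, t) = -\log \tilde{f}_\theta (x, t) + \max _t \log \tilde{f}_\theta (x, t) \), since the minimum of the negative log-density over \( t \) equals \( -\max _t \log \tilde{f}_\theta (x, t) \).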


Author information

Correspondence to Martin Zach.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1347 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zach, M., Pock, T., Kobler, E., Chambolle, A. (2023). Explicit Diffusion of Gaussian Mixture Model Based Image Priors. In: Calatroni, L., Donatelli, M., Morigi, S., Prato, M., Santacesaria, M. (eds) Scale Space and Variational Methods in Computer Vision. SSVM 2023. Lecture Notes in Computer Science, vol 14009. Springer, Cham. https://doi.org/10.1007/978-3-031-31975-4_1

  • DOI: https://doi.org/10.1007/978-3-031-31975-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31974-7

  • Online ISBN: 978-3-031-31975-4

  • eBook Packages: Computer Science, Computer Science (R0)
