Abstract
We study a sparse coding learning algorithm that allows for simultaneous learning of data sparseness and basis functions. The algorithm is derived from a generative model with binary latent variables instead of the continuous-valued latents used in classical sparse coding. We apply a novel approach to maximum likelihood parameter estimation that allows for efficient estimation of all model parameters. The approach is a new form of variational EM that uses truncated sums instead of factored approximations to the intractable posterior distributions. In contrast to almost all previous versions of sparse coding, the resulting learning algorithm allows the optimal degree of sparseness to be estimated along with the optimal basis functions. We can thus monitor the time course of the data sparseness during the learning of basis functions. In numerical experiments on artificial data, we show that the algorithm reliably extracts the true underlying basis functions along with the noise level and data sparseness. In applications to natural images we obtain Gabor-like basis functions along with a sparseness estimate. If large numbers of latent variables are used, the obtained basis functions take on properties of simple-cell receptive fields that classical sparse coding or ICA approaches do not reproduce.
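The generative model described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it only assumes the standard binary sparse coding setup the abstract suggests: each latent is an independent Bernoulli variable with sparseness parameter `pi`, and observations are a linear combination of basis functions plus Gaussian noise. All dimensions and parameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model dimensions (not from the paper):
H, D, N = 16, 64, 1000   # latent variables, observed dimensions, data points
pi = 0.2                 # Bernoulli sparseness parameter (prob. a latent is active)
sigma = 0.5              # observation noise standard deviation
W = rng.normal(size=(D, H))  # basis functions, one per column

# Binary latents: s_h ~ Bernoulli(pi), one H-dim. vector per data point
S = (rng.random((N, H)) < pi).astype(float)

# Observations: y ~ N(W s, sigma^2 I)
Y = S @ W.T + sigma * rng.normal(size=(N, D))

# The expected number of active latents per data point is pi * H;
# learning pi from data is what lets the algorithm estimate sparseness.
print(S.sum(axis=1).mean())
```

In this formulation the "degree of sparseness" the abstract refers to is the scalar `pi`, which truncated variational EM can update alongside `W` and `sigma` rather than fixing it by hand.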
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Henniges, M., Puertas, G., Bornschein, J., Eggert, J., Lücke, J. (2010). Binary Sparse Coding. In: Vigneron, V., Zarzoso, V., Moreau, E., Gribonval, R., Vincent, E. (eds) Latent Variable Analysis and Signal Separation. LVA/ICA 2010. Lecture Notes in Computer Science, vol 6365. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15995-4_56
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15994-7
Online ISBN: 978-3-642-15995-4