Computer Science > Machine Learning
[Submitted on 2 May 2022 (v1), last revised 6 Oct 2022 (this version, v8)]
Title: VICE: Variational Interpretable Concept Embeddings
Abstract: A central goal in the cognitive sciences is the development of numerical models for mental representations of object concepts. This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task. VICE uses variational inference to obtain sparse, non-negative representations of object concepts with uncertainty estimates for the embedding values. These estimates are used to automatically select the dimensions that best explain the data. We derive a PAC learning bound for VICE that can be used to estimate generalization performance or determine a sufficient sample size for experimental design. VICE rivals or outperforms its predecessor, SPoSE, at predicting human behavior in the triplet odd-one-out task. Furthermore, VICE's object representations are more reproducible and consistent across random initializations, highlighting the unique advantage of using VICE for deriving interpretable embeddings from human behavior.
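To make the triplet odd-one-out setup concrete, the sketch below shows one common way such behavior is modeled: given non-negative object embeddings, the pair with the highest dot-product similarity is judged most alike, and the remaining object is the odd one out, with choice probabilities given by a softmax over the three pairwise similarities. The embedding matrix here is a random placeholder, not a VICE-trained representation, and the exact likelihood used by VICE may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-negative embedding matrix: 5 objects, 3 latent dimensions.
# VICE learns sparse, non-negative embeddings like this from behavior;
# these values are random placeholders for illustration only.
X = rng.random((5, 3))

def odd_one_out_probs(X, i, j, k):
    """Softmax choice model over pairwise dot-product similarities.

    Returns the probabilities that object i, j, or k (in that order)
    is chosen as the odd one out: an object is the odd one out when
    the *other two* objects form the most similar pair.
    """
    s_ij = X[i] @ X[j]
    s_ik = X[i] @ X[k]
    s_jk = X[j] @ X[k]
    # i is odd-one-out iff (j, k) is the most similar pair, etc.
    logits = np.array([s_jk, s_ik, s_ij])
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

p = odd_one_out_probs(X, 0, 1, 2)
print(p)
```

Summing the log of the chosen-response probability over all recorded triplets gives the data log-likelihood that a model like VICE or SPoSE optimizes (with additional sparsity and variational terms in VICE's case).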
Submission history
From: Lukas Muttenthaler
[v1] Mon, 2 May 2022 09:03:55 UTC (46,693 KB)
[v2] Tue, 3 May 2022 13:53:00 UTC (46,692 KB)
[v3] Wed, 4 May 2022 11:26:47 UTC (46,693 KB)
[v4] Mon, 9 May 2022 15:42:08 UTC (46,693 KB)
[v5] Tue, 10 May 2022 12:35:09 UTC (46,694 KB)
[v6] Fri, 13 May 2022 15:33:21 UTC (46,692 KB)
[v7] Mon, 30 May 2022 08:24:42 UTC (46,690 KB)
[v8] Thu, 6 Oct 2022 09:10:10 UTC (45,895 KB)