Cite
Hänni, Kaarel, et al. Mathematical Models of Computation in Superposition. arXiv:2408.05451, arXiv, 10 Aug. 2024. arXiv.org, https://doi.org/10.48550/arXiv.2408.05451.
Metadata
Title: Mathematical Models of Computation in Superposition
Authors: Kaarel Hänni, Jake Mendel, Dmitry Vaintrob, Lawrence Chan
Cite key: hanni2024
Links
Abstract
Superposition -- when a neural network represents more "features" than it has dimensions -- seems to pose a serious challenge to mechanistically interpreting current AI systems. Existing theory work studies *representational* superposition, where superposition is only used when passing information through bottlenecks. In this work, we present mathematical models of *computation* in superposition, where superposition is actively helpful for efficiently accomplishing the task. We first construct a task of efficiently emulating a circuit that takes the AND of the $\binom{m}{2}$ pairs of each of $m$ features. We construct a 1-layer MLP that uses superposition to perform this task up to $\varepsilon$-error, where the network only requires $\tilde{O}(m^{\frac{2}{3}})$ neurons, even when the input features are *themselves in superposition*. We generalize this construction to arbitrary sparse boolean circuits of low depth, and then construct "error correction" layers that allow deep fully-connected networks of width $d$ to emulate circuits of width $\tilde{O}(d^{1.5})$ and *any* polynomial depth. We conclude by providing some potential applications of our work for interpreting neural networks that implement computation in superposition.
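A quick worked reminder (my own note, not text from the paper) of why the neuron counts in the abstract are interesting. A single ReLU neuron can compute a pairwise AND of boolean inputs via the standard identity below, so a naive, non-superposed 1-layer MLP would spend one neuron per pair:

$$
x_i \wedge x_j = \operatorname{ReLU}(x_i + x_j - 1) \quad \text{for } x_i, x_j \in \{0,1\},
$$

$$
\text{naive width} = \binom{m}{2} = \frac{m(m-1)}{2} = \Theta(m^2)
\quad \text{vs.} \quad
\text{superposed width} = \tilde{O}\!\left(m^{\frac{2}{3}}\right) \text{ (up to } \varepsilon\text{-error)}.
$$

The $\tilde{O}(m^{2/3})$ figure is the paper's claim; the ReLU-AND identity and the $\binom{m}{2}$ count are standard and included here only for context.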
Notes
From Obsidian
(As notes and annotations from Zotero are one-way synced, this section includes a link to another note within Obsidian that hosts further notes.)
Mathematical-Models-of-Computation-in-Superposition
From Zotero
(one-way sync from Zotero)
Imported: 2025-06-25
Comment: 28 pages, 5 figures. Published at the ICML 2024 Mechanistic Interpretability (MI) Workshop
Annotations
Highlighting colour codes
- Note: highlights for quicker reading, or comments prompted by reading the paper that may not be closely related to it
- External Insight: insights from other works that are mentioned in the paper
- Question/Critique: questions or critiques of the paper's content
- Claim: what the paper claims to have found/achieved
- Finding: new knowledge presented by the paper
- Important: anything interesting enough (findings, insights, ideas, etc.) that’s worth remembering