Cite

Adler, Micah, and Nir Shavit. On the Complexity of Neural Computation in Superposition. arXiv:2409.15318, arXiv, 5 Sept. 2024. arXiv.org, https://doi.org/10.48550/arXiv.2409.15318.

Metadata

Title: On the Complexity of Neural Computation in Superposition
Authors: Micah Adler, Nir Shavit
Cite key: adler2024a

Links

Abstract

Recent advances in the understanding of neural networks suggest that superposition, the ability of a single neuron to represent multiple features simultaneously, is a key mechanism underlying the computational efficiency of large-scale networks. This paper explores the theoretical foundations of computing in superposition, focusing on explicit, provably correct algorithms and their efficiency. We present the first lower bounds showing that for a broad class of problems, including permutations and pairwise logical operations, a neural network computing in superposition requires at least Ω(m′ log m′) parameters and Ω(√(m′ log m′)) neurons, where m′ is the number of output features being computed. This implies that any “lottery ticket” sparse sub-network must have at least Ω(m′ log m′) parameters no matter what the initial dense network size. Conversely, we show a nearly tight upper bound: logical operations like pairwise AND can be computed using O(√(m′) log m′) neurons and O(m′ log² m′) parameters. There is thus an exponential gap between computing in superposition, the subject of this work, and representing features in superposition, which can require as little as O(log m′) neurons based on the Johnson-Lindenstrauss Lemma. Our hope is that our results open a path for using complexity theoretic techniques in neural network interpretability research.
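The Johnson-Lindenstrauss-style representation mentioned at the end of the abstract can be illustrated with a short sketch (not the paper's construction; the dimensions, sparsity level, and decoding threshold below are illustrative choices): m′ features are assigned random near-orthogonal directions in a much lower-dimensional space, a sparse set of active features is summed into one vector, and each feature is read back by projection, with interference that shrinks as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 1024   # number of features (m' in the paper's notation)
d = 128    # embedding dimension; JL-style, on the order of log m, far below m

# Random unit vectors act as nearly orthogonal feature directions.
E = rng.normal(size=(m, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)

# Superpose a sparse set of active features into a single d-dim vector.
active = {3, 77, 512}
x = E[sorted(active)].sum(axis=0)

# Read out each feature by projection: active features score near 1,
# inactive ones near 0 plus small random interference.
scores = E @ x

# Decode by taking the top-|active| scores (robust to interference noise).
decoded = {int(i) for i in np.argsort(scores)[-len(active):]}
```

This only shows *representing* features in superposition; the paper's point is that *computing* on such representations (e.g. pairwise AND of features) provably needs far more neurons and parameters than this kind of storage alone suggests.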

Notes

From Obsidian

(As notes and annotations from Zotero are one-way synced, this section includes a link to another note within Obsidian that hosts further notes)

On-the-Complexity-of-Neural-Computation-in-Superposition

From Zotero

(one-way sync from Zotero)
Imported: 2025-06-25
Comment: 43 pages, 8 figures
View in local Zotero

Annotations

Highlighting colour codes

  • Note: highlights for quicker reading, or comments arising while reading that may not relate directly to the paper
  • External Insight: insights from other works that are mentioned in the paper
  • Question/Critique: questions or comments on the content of the paper
  • Claim: what the paper claims to have found or achieved
  • Finding: new knowledge presented by the paper
  • Important: anything interesting enough (findings, insights, ideas, etc.) to be worth remembering
Link to original