Cite
Jha, Rishi, et al. Harnessing the Universal Geometry of Embeddings. arXiv:2505.12540, arXiv, 20 May 2025. arXiv.org, https://doi.org/10.48550/arXiv.2505.12540.
Metadata
Title: Harnessing the Universal Geometry of Embeddings
Authors: Rishi Jha, Collin Zhang, Vitaly Shmatikov, John X. Morris
Cite key: jha2025Harnessing
Links
Abstract
We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the Platonic Representation Hypothesis). Our translations achieve high cosine similarity across model pairs with different architectures, parameter counts, and training datasets. The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of vector databases. An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.
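A minimal sketch of the translation pattern the abstract describes: map a source embedding into a shared latent space and decode it into the target space, then score with cosine similarity (the paper's main metric). This is not the paper's actual vec2vec architecture or training objective (which is more involved, e.g. unsupervised losses over unpaired embeddings); dimensions and module names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions for two different embedding models and the latent space.
DIM_A, DIM_B, DIM_LATENT = 768, 1536, 1024

def mlp(d_in: int, d_out: int) -> nn.Sequential:
    """Tiny two-layer adapter; the paper's adapters are richer than this."""
    return nn.Sequential(nn.Linear(d_in, d_out), nn.SiLU(), nn.Linear(d_out, d_out))

class Translator(nn.Module):
    """Space A -> shared latent -> space B (one direction only, for illustration)."""
    def __init__(self):
        super().__init__()
        self.into_latent = mlp(DIM_A, DIM_LATENT)    # input adapter for model A
        self.out_of_latent = mlp(DIM_LATENT, DIM_B)  # output adapter for model B

    def forward(self, emb_a: torch.Tensor) -> torch.Tensor:
        return self.out_of_latent(self.into_latent(emb_a))

# Toy usage: translate a batch of A-embeddings and compare against B-embeddings
# of the same texts with cosine similarity.
translator = Translator()
emb_a = torch.randn(4, DIM_A)  # stand-in for real model-A embeddings
emb_b = torch.randn(4, DIM_B)  # stand-in for real model-B embeddings
translated = translator(emb_a)
print(F.cosine_similarity(translated, emb_b, dim=-1))  # near zero untrained; the paper reports high values after training
```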
Notes
From Obsidian
(As notes and annotations from Zotero are one-way synced, this section includes a link to another note within Obsidian that hosts further notes)
Harnessing-the-Universal-Geometry-of-Embeddings
From Zotero
(one-way sync from Zotero)
Annotations
Highlighting colour codes
- Note: highlights for quicker reading, or comments prompted by reading the paper that may not relate directly to it
- External Insight: insights from other works that are mentioned in the paper
- Question/Critique: questions or comments on the content of the paper
- Claim: what the paper claims to have found/achieved
- Finding: new knowledge presented by the paper
- Important: anything (findings, insights, ideas, etc.) interesting enough to be worth remembering
Imported: 2025-06-25
Claim | View in local Zotero: page 1
“translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches.”
Claim | View in local Zotero: page 1
“translates any embedding to and from a universal latent representation”
Important | View in local Zotero: page 5
“each vec2vec is trained on two sets of embeddings generated from disjoint sets of 1 million 64-token sequences sampled from NQ”
The two sets of embeddings come from completely disjoint inputs, i.e., training is fully unpaired: the two encoders never embed the same text.
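A sketch of what that data setup looks like in practice: two disjoint pools of sequences, each embedded by a different model, so no paired (same-text) embeddings exist. The helper and encoder calls below are hypothetical; the paper uses 1 million 64-token NQ sequences per side.

```python
import random

def split_disjoint(sequences, n_per_side, seed=0):
    """Sample two disjoint pools from one corpus; no sequence appears in both."""
    rng = random.Random(seed)
    shuffled = sequences[:]
    rng.shuffle(shuffled)
    side_a = shuffled[:n_per_side]
    side_b = shuffled[n_per_side:2 * n_per_side]
    assert not set(side_a) & set(side_b)  # fully unpaired by construction
    return side_a, side_b

# Usage with stand-in data:
corpus = [f"nq-sequence-{i}" for i in range(100)]
texts_a, texts_b = split_disjoint(corpus, n_per_side=50)
# embeddings_a = model_a.encode(texts_a)  # hypothetical encoder calls
# embeddings_b = model_b.encode(texts_b)
```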