Cite

Turner, Alexander Matt, et al. Steering Language Models With Activation Engineering. arXiv:2308.10248, arXiv, 10 Oct. 2024. arXiv.org, http://arxiv.org/abs/2308.10248.

Metadata

Title: Steering Language Models With Activation Engineering
Authors: Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J. Vazquez, Ulisse Mini, Monte MacDiarmid
Cite key: turner2024

Links

Abstract

Prompt engineering and finetuning aim to maximize language model performance on a given metric (like toxicity reduction). However, these methods do not fully elicit a model’s capabilities. To reduce this gap, we introduce activation engineering: the inference-time modification of activations in order to control (or steer) model outputs. Specifically, we introduce the Activation Addition (ActAdd) technique, which contrasts the intermediate activations on prompt pairs (such as “Love” versus “Hate”) to compute a steering vector (Subramani et al. 2022). By tactically adding in e.g. the “Love” - “Hate” steering vector during the forward pass, we achieve SOTA on negative-to-positive sentiment shift and detoxification using models including LLaMA-3 and OPT. ActAdd yields inference-time control over high-level output properties (like topic and sentiment) while preserving performance on off-target tasks. ActAdd is lightweight: it does not require any machine optimization and works with a single pair of data points, which enables rapid iteration over steering. ActAdd demonstrates the power of activation engineering.
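
The abstract describes a simple recipe: record residual-stream activations for a contrasting prompt pair, take their difference as a steering vector, and add a scaled copy of that vector back into the residual stream during generation. The sketch below illustrates that idea with GPT-2 and Hugging Face transformers as a personal working note; the model, injection layer, coefficient, and the use of a last-token difference are my own illustrative assumptions, a simplified stand-in rather than the paper's exact procedure or code.

```python
# Minimal sketch of the ActAdd idea from the abstract (not the authors' code).
# Model, layer index, coefficient, and last-token differencing are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6    # block whose output we steer (illustrative choice)
COEFF = 5.0  # steering strength (illustrative choice)

def block_output(prompt: str) -> torch.Tensor:
    """Residual-stream activations after block LAYER for a prompt."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output; index LAYER + 1 is block LAYER's output.
    return out.hidden_states[LAYER + 1]

# Contrast a single prompt pair to get a steering vector (difference at the last token).
steer = block_output("Love")[:, -1, :] - block_output("Hate")[:, -1, :]

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    # Simplification: the vector is added at every position and generation step.
    hidden = output[0] + COEFF * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("I went to the store and", return_tensors="pt").input_ids
    gen = model.generate(ids, max_new_tokens=30, do_sample=True, top_p=0.9,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # restore the unmodified model
```

Swapping the order of the prompt pair flips the steering direction; the injection layer and coefficient are the main knobs, with larger coefficients steering harder at some cost to fluency.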

Notes

From Obsidian

(As notes and annotations from Zotero are one-way synced, this section includes a link to another note within Obsidian that hosts further notes.)

Steering-Language-Models-With-Activation-Engineering

From Zotero

(one-way sync from Zotero)

Annotations

Highlighting colour codes

  • Note: highlights for quicker reading, or comments prompted by reading the paper that may not relate closely to its content
  • External Insight: insights from other works that are mentioned in the paper
  • Question/Critique: questions about, or criticisms of, the paper's content
  • Claim: what the paper claims to have found or achieved
  • Finding: new knowledge presented by the paper
  • Important: anything interesting enough (findings, insights, ideas, etc.) to be worth remembering
