What is “alignment”? ML systems can exhibit goal-directed behavior, but it is difficult to understand or control what they are “trying” to do. Powerful models could cause harm if they were trying to manipulate and deceive humans. The goal of intent alignment is to instead train these models to be helpful and honest.

Motivation: We expect that modern ML techniques would lead to severe misalignment if scaled up with enough compute and data. Practitioners may be able to adapt before these failures have catastrophic consequences, but we could reduce the risk by developing scalable alignment methods further in advance.

Team: The Theory team consists of Mark Xu, Jacob Hilton, Eric Neyman, and Dávid Matolcsi as researchers and Kyle Scott on operations.

What we’re working on: We’re currently working on formalizing explanations of neural network behaviors, as explained in this talk and this preprint. At a high level, we are trying to understand how to formalize mechanistic explanations of neural network behavior, so that we can identify when a novel input may lead to anomalous behavior. We see this as the most promising approach to our broader research agenda, which is explained along with our research methodology in our report on Eliciting Latent Knowledge.
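To make the anomaly-flagging idea concrete, here is a minimal, purely illustrative sketch (not the formalism from the talk or preprint). It stands in for a "mechanistic explanation" with a simple surrogate fit on trusted inputs, and flags a novel input as potentially anomalous when the model's actual behavior diverges from what the surrogate accounts for. All of the names (`model`, `explained`, `is_anomalous`) and the choice of surrogate are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed nonlinear function standing in for a trained network.
def model(x):
    return np.tanh(x @ np.array([1.5, -0.7])) + 0.1 * np.sin(3 * x[:, 0])

# Trusted inputs on which the explanation is fit (the "typical" distribution).
X_trusted = rng.normal(0.0, 1.0, size=(500, 2))
y_trusted = model(X_trusted)

# Stand-in "explanation": a linear least-squares fit that approximately
# accounts for the model's behavior on the trusted distribution.
A = np.column_stack([X_trusted, np.ones(len(X_trusted))])
coef, *_ = np.linalg.lstsq(A, y_trusted, rcond=None)

def explained(x):
    # Behavior predicted by the explanation.
    return np.column_stack([x, np.ones(len(x))]) @ coef

# Calibrate a threshold from how badly the explanation misses on trusted data.
threshold = np.quantile(np.abs(y_trusted - explained(X_trusted)), 0.99)

def is_anomalous(x):
    # Flag inputs whose actual behavior the explanation fails to account for.
    return np.abs(model(x) - explained(x)) > threshold

X_novel = rng.normal(0.0, 4.0, size=(10, 2))  # far-from-trusted inputs
print(is_anomalous(X_novel))
```

In the actual research agenda the "explanation" would be a formal object covering the model's internal mechanisms rather than an input-output fit; this toy only shows the general pattern of flagging inputs whose behavior the explanation does not cover.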

Methodology: We’re unsatisfied with an algorithm if we can see any plausible story about how it eventually breaks down, which means that we can rule out most algorithms on paper without ever implementing them. The cost of this approach is that it may completely miss strategies that exploit important structure in realistic ML models; the benefit is that we can consider lots of ideas quickly. (More)

Future plans: We expect to focus on similar theoretical problems in alignment until we either become more pessimistic about tractability or ARC grows enough to branch out into other areas. Over the long term we are likely to work on a combination of theoretical and empirical alignment research, collaborations with industry labs, alignment forecasting, and ML deployment policy.