Large Mode

Preview

Amp has a new agent mode: large. It uses Claude Sonnet 4.5 with 1M tokens of context.

To use it, run amp --mode large, or set "amp.internal.visibleModes": ["large"] to make it selectable in the UI.
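For the UI route, the setting is a JSON fragment in your Amp settings (where exactly that settings file lives depends on your editor or client):

```json
{
  "amp.internal.visibleModes": ["large"]
}
```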

Use large mode sparingly. Never using it at all is totally fine, and we made this news post undiscoverable since we don't want most people stumbling across it and using it. 200k tokens is plenty!

Don't use large mode for meandering conversations. Models provide better results with less context, so keep your conversations short and focused.

Don't use large mode to save money. Even though Sonnet is cheaper per token than Opus, Opus often ends up cheaper overall because it needs fewer tokens and makes fewer mistakes. Also, a conversation's cumulative inference cost grows quadratically with thread length, since each turn resends the entire context so far, and on top of that, Anthropic charges ~2x for long-context tokens.
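To see why thread length dominates cost, here's a minimal sketch. The per-token price and turn size are made up for illustration (real pricing differs, and the ~2x long-context surcharge would make the curve steeper still):

```python
# Sketch: cumulative input-token cost grows quadratically with thread
# length, because turn i resends all context from turns 1..i.
# Both constants below are hypothetical, not real Anthropic pricing.
TOKENS_PER_TURN = 2_000     # assumed context added per turn
PRICE_PER_MTOK = 3.00       # assumed $ per million input tokens

def thread_cost(turns: int) -> float:
    """Total input cost in dollars for a thread of `turns` turns."""
    total_tokens = sum(i * TOKENS_PER_TURN for i in range(1, turns + 1))
    return total_tokens * PRICE_PER_MTOK / 1_000_000

print(f"{thread_cost(10):.2f}")   # 10 turns
print(f"{thread_cost(100):.2f}")  # 100 turns: ~92x the 10-turn cost
```

Ten times the turns costs roughly a hundred times as much, which is why short, focused threads beat one giant one.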

So, why are we even shipping large mode? We've seen it work well for some large-scale refactors, and we want to learn where else long context might be useful. Also, it lets us artificially constrain smart mode to use only the high-quality portion of the context window, even if, say, Opus 4.5 eventually gains a 1M-token context window.