Opus 4.7

Technology

Anthropic's latest AI release, which reportedly faced compute rationing and user complaints.


First Mentioned

5/10/2026, 5:09:25 AM

Last Updated

5/10/2026, 5:10:32 AM

Research Retrieved

5/10/2026, 5:10:32 AM

Summary

Opus 4.7 is a state-of-the-art large language model in the Claude series developed by Anthropic. A direct upgrade over Opus 4.6, it is specifically optimized for agentic coding, long-horizon reasoning, and multimodal tasks, and features a 1-million-token context window with a knowledge cutoff of January 2026. It offers improved instruction following and vision capabilities, but its new tokenizer increases token usage for English text by approximately 12–18%. Separately, Anthropic's commitment to "constitutional AI" led to a conflict with US federal agencies over surveillance and autonomous-weapons use; the Department of Defense designated the company a "supply chain risk," a designation that a federal court temporarily enjoined in March 2026.

Referenced in 1 Document
Research Data
Extracted Attributes
  • Type

    Large Language Model (LLM)

  • Series

    Claude

  • Developer

    Anthropic

  • Key Feature

    Constitutional AI

  • Input Pricing

    $5 per million tokens

  • Context Window

    1,000,000 tokens

  • Output Pricing

    $25 per million tokens

  • Knowledge Cutoff

    2026-01-01

  • Max Output Tokens

    128,000 tokens

  • Vision Capability

    Up to 2,576 pixels on the long edge (~3.75 megapixels)
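The price card and token limits above imply a simple per-request cost ceiling. A minimal sketch, assuming the listed $5/$25 per-million rates, the 1M-token context window, and the 128K output cap (the helper name is illustrative, not an Anthropic API):

```python
# Cost estimate from the extracted attributes above. The constants mirror
# the listed Opus 4.7 price card; request_cost is a hypothetical helper.
INPUT_PRICE_PER_M = 5.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 25.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API call at standard Opus pricing."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Worst case: a request that fills the 1M context window and the 128K
# output cap costs $5.00 + $3.20:
print(f"${request_cost(1_000_000, 128_000):.2f}")  # → $8.20
```

Note that the sources state there is no long-context pricing premium, so a single flat rate per direction is sufficient for the estimate.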

Timeline
  • Anthropic first releases the Claude series of large language models. (Source: Wikipedia)

    2023-01-01

  • Knowledge cutoff for the Opus 4.7 model. (Source: https://caylent.com/blog/claude-opus-4-7-deep-dive-capabilities-migration-and-the-new-economics-of-long-running-agents)

    2026-01-01

  • A federal judge issues a temporary injunction against the Department of Defense's designation of Anthropic as a supply chain risk. (Source: Wikipedia)

    2026-03-26

  • Claude Opus 4.7 becomes generally available on GitHub Copilot for Pro+, Business, and Enterprise users. (Source: https://github.blog/changelog/2026-04-16-claude-opus-4-7-is-generally-available/)

    2026-04-16

  • Promotional pricing for Opus 4.7 on GitHub Copilot ends, and the premium request multiplier is updated to 15x. (Source: https://github.blog/changelog/2026-04-16-claude-opus-4-7-is-generally-available/)

    2026-04-30

Claude (language model)

Claude is a series of large language models developed by Anthropic and first released in 2023. Since Claude 3, each generation has typically been released in three sizes, from least to most capable: Haiku, Sonnet, and Opus. An additional model named Claude Mythos was released to a handful of companies in 2026 but not to the public. Claude is used for software development via Claude Code. Claude is trained using "constitutional AI", a technique developed by Anthropic to improve ethical and legal compliance (AI alignment). The name Claude has been described both as a tribute to Claude Shannon, who pioneered information theory, and as a friendly, male-gendered counterpart to virtual assistants like Alexa and Siri. US federal agencies started phasing out the use of Claude after Anthropic refused to remove contractual prohibitions on the use of Claude for mass domestic surveillance and fully-autonomous weapons. Following the refusal, the Department of Defense designated the company a "supply chain risk" and barred all U.S. military private contractors, suppliers, and partners from doing business with the firm. On March 26, 2026, a federal judge issued a temporary injunction against the DoD's designation.

Web Search Results
  • Introducing Claude Opus 4.7

    _Instruction following_. Opus 4.7 is substantially better at following instructions. Interestingly, this means that prompts written for earlier models can sometimes now produce unexpected results: where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly. [...] ## Migrating from Opus 4.6 to Opus 4.7 Opus 4.7 is a direct upgrade to Opus 4.6, but two changes are worth planning for because they affect token usage. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens. [...] _Improved multimodal support_. Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many as prior Claude models. This opens up a wealth of multimodal uses that depend on fine visual detail: computer-use agents reading dense screenshots, data extractions from complex diagrams, and work that needs pixel-perfect references.

  • Claude Opus 4.7 Deep Dive: Capabilities, Migration, and the New ...

    At a spec level, Opus 4.7 is positioned as Anthropic’s most capable generally available model for coding, enterprise workflows, multimodal reasoning, financial analysis, life sciences, cybersecurity, and long-running agentic work. It supports a 1M context window with no long-context pricing premium, up to 128K output tokens, and standard Opus pricing at $5 per million input tokens and $25 per million output tokens. The model's reliable knowledge cutoff is January 2026. [...] That’s the real story behind Claude Opus 4.7. Pricing stays where Opus 4.6 pricing was, but the model is positioned as meaningfully better at agentic coding, long-horizon autonomy, multimodal reasoning, memory, and enterprise knowledge work. In other words, the headline is not a cheaper frontier model. It’s that the same price card is now supposed to buy more sustained autonomy and better execution on the kinds of workflows that matter in production. [...] That said, the benchmark table is strong rather than perfect. Opus 4.7 looks especially compelling on coding, tool use, computer use, financial analysis, and visual reasoning, but the materials do not support a clean “best at everything” story. That’s actually the more credible takeaway. The value proposition here is not universal dominance. It’s that Anthropic appears to have moved the premium tier forward on the kinds of long-running, multimodal, agentic workloads enterprises actually pay for.

  • Claude Opus 4.7 Review: What Actually Changed and What Got ...

    ## How This Affects Remy Remy uses Claude Opus as its core agent for complex reasoning and spec compilation, with Sonnet handling specialist subtasks. The Opus 4.7 improvements to agentic persistence are directly relevant here — the same failure modes that affected 4.6 in long coding sessions also affected spec-to-code compilation for larger applications. With 4.7, Remy handles longer, more complex specs with fewer mid-process failures. The model is better at tracking which parts of a spec have been compiled, which haven’t, and where it needs to revisit earlier decisions based on later constraints. [...] ## The Short Version Claude Opus 4.7 is a meaningful step forward for agentic coding workflows. It fixes a persistent issue with multi-step task completion, posts real gains on SWE-Bench and HumanEval, and introduces a redesigned tokenizer that improves multilingual handling. But the same tokenizer bumps token counts by roughly 12–18% on typical workloads, which means you’re paying more per task even if the model performs better. And web research quality — one area where Opus 4.6 was genuinely strong — has taken a measurable hit. This is not a clean upgrade for every use case. Whether it’s worth switching depends entirely on what you’re using it for. [...] For English, the math goes the other way. The new tokenizer is slightly less efficient with English text. Typical English prompts and completions run 12–18% longer in token count compared to the Opus 4.6 tokenizer. On a single request, this is noise. Across thousands of API calls, it adds up fast. If you’re running Opus 4.7 at scale on English-language tasks, expect your token costs to be meaningfully higher than with 4.6 — even if the per-token price stays the same. This is not always obvious in initial testing because small-volume tests don’t surface the cumulative effect.
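The migration math in the review above is easy to make concrete. A minimal sketch, assuming the quoted 12–18% English-text overhead (`migrated_token_range` is a hypothetical helper, not part of any Anthropic tooling):

```python
# Bracket the expected Opus 4.7 token count for an English workload,
# using the 12-18% tokenizer overhead reported in the review above.
def migrated_token_range(opus_46_tokens: int,
                         low: float = 1.12,
                         high: float = 1.18) -> tuple[int, int]:
    """Return (low, high) estimates of the Opus 4.7 token count."""
    return round(opus_46_tokens * low), round(opus_46_tokens * high)

# A workload of 10M tokens/month under 4.6 lands at roughly
# 11.2M-11.8M tokens under 4.7 at the same per-token price:
lo, hi = migrated_token_range(10_000_000)
print(lo, hi)  # → 11200000 11800000
```

This is why the review stresses that small-volume tests hide the effect: the overhead is invisible on one request and material across thousands.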

  • Claude Opus 4.7 is generally available - GitHub Changelog

    _Note: Promotional pricing ended on April 30th, and the premium request multiplier was updated to 15x._ Claude Opus 4.7, Anthropic's latest Opus model, is now rolling out on GitHub Copilot. In our early testing, Opus 4.7 delivers stronger multi-step task performance and more reliable agentic execution, building on the coding strategy strengths of its predecessor. It also shows meaningful improvement in long-horizon reasoning and complex, tool-dependent workflows. [...] As part of our efforts to improve service reliability, we are streamlining our model offerings. Over the coming weeks, Opus 4.7 will replace Opus 4.5 and Opus 4.6 in the model picker for Copilot Pro+. We've seen strong improvements across our benchmarks, and we're committed to providing individual users with state-of-the-art models while ensuring a fast, reliable Copilot experience. This model is launching with a 7.5× premium request multiplier as part of promotional pricing until April 30th. [...] Claude Opus 4.7 will be available to Copilot Pro+, Business, and Enterprise users. You'll be able to select the model in the model picker in: Visual Studio Code, Visual Studio, Copilot CLI, GitHub Copilot Cloud Agent, github.com, GitHub Mobile (iOS and Android), JetBrains, Xcode, and Eclipse. Rollout will be gradual; check back soon if you don't see it yet. Copilot Enterprise and Copilot Business plan administrators must enable the Claude Opus 4.7 policy in Copilot settings.

  • What's new in Claude Opus 4.7

    More literal instruction following, particularly at lower effort levels. The model will not silently generalize an instruction from one item to another, and will not infer requests you didn't make. Response length calibrates to perceived task complexity rather than defaulting to a fixed verbosity. Fewer tool calls by default, using reasoning more. Raising effort increases tool usage. More direct, opinionated tone with less validation-forward phrasing and fewer emoji than Claude Opus 4.6's warmer style. More regular progress updates to the user throughout long agentic traces. If you've added scaffolding to force interim status messages, try removing it. Fewer subagents spawned by default. Steerable through prompting. [...] Real-time cybersecurity safeguards: requests that involve prohibited or high-risk topics may lead to refusals. For legitimate security work, apply to the Cyber Verification Program. [...] Additionally, operations like mapping coordinates to images are now simpler — the model's coordinates are 1:1 with actual pixels, so there's no scale-factor math required. High-res images use more tokens. If the additional image fidelity is unnecessary, downsample images before sending to Claude to avoid token-usage increases. Beyond resolution, Claude Opus 4.7 also improves on: Low-level perception — pointing, measuring, counting, and similar tasks. Image localization — natural-image bounding-box localization and detection are improved. See Images and vision for details. ### New `xhigh` effort level
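The downsampling advice above can be sketched as a small resizing calculation. A minimal sketch, assuming the 2,576-pixel long-edge limit quoted in the release notes; `fit_to_long_edge` is a hypothetical helper, and actual resizing would be done with an image library of your choice:

```python
# Target dimensions for downsampling an image before sending it to the
# model, per the advice above: cap the long edge at 2,576 pixels while
# preserving aspect ratio, to avoid unnecessary token usage.
MAX_LONG_EDGE = 2576  # documented long-edge limit (~3.75 megapixels)

def fit_to_long_edge(width: int, height: int) -> tuple[int, int]:
    """Scale (width, height) so the longer side is at most MAX_LONG_EDGE."""
    long_edge = max(width, height)
    if long_edge <= MAX_LONG_EDGE:
        return width, height  # already within limits; send as-is
    scale = MAX_LONG_EDGE / long_edge
    return round(width * scale), round(height * scale)

# A 4K screenshot (3840x2160) shrinks to fit the long-edge cap:
print(fit_to_long_edge(3840, 2160))  # → (2576, 1449)
```

Since the release notes say the model's coordinates are now 1:1 with actual pixels, any such downscale factor must also be applied in reverse if you map returned coordinates back onto the original image.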
