This week in AI dev tools: Claude Sonnet 4’s larger context window, ChatGPT updates, and more (August 14, 2025)
Anthropic expands Claude Sonnet 4’s context window to 1M tokens
With this larger context window, Claude can process codebases with 75,000+ lines of code in a single request. This allows it to better understand project architecture and cross-file dependencies, and to make suggestions that fit the complete system design.
The longer context window is now in beta on the Anthropic API and Amazon Bedrock, and will soon be available in Google Cloud’s Vertex AI.
For prompts over 200K tokens, pricing increases to $6 per million tokens (MTok) for input and $22.50/MTok for output; requests under 200K tokens remain at $3/MTok for input and $15/MTok for output.
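For developers who want to try the expanded window, the beta is enabled per request. The sketch below uses the Anthropic Python SDK; the model ID and the beta flag name are assumptions based on Anthropic’s usual conventions, so check the current documentation before relying on them.

```python
# Minimal sketch: opting a single request into the long-context beta with the
# Anthropic Python SDK. The beta flag ("context-1m-2025-08-07") and model ID
# are assumptions and may differ from the current docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",       # assumed Sonnet 4 model ID
    max_tokens=4096,
    betas=["context-1m-2025-08-07"],        # assumed long-context beta flag
    messages=[
        {"role": "user", "content": "Summarize the architecture of this codebase: ..."}
    ],
)
print(response.usage)  # input/output token counts, useful for cost tracking

# Rough cost at the long-context rate for a 500K-token prompt with a
# 2K-token reply: 0.5 MTok * $6 + 0.002 MTok * $22.50 ≈ $3.05.
```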
The company also extended its learning mode, originally designed for students, to Claude.ai and Claude Code. Learning mode asks users questions to guide them through concepts rather than providing immediate answers, in order to promote critical thinking about problems.
OpenAI adds GPT-4o as a legacy model in ChatGPT
With this update, paid users will now be able to select GPT-4o when using ChatGPT, along with other models like o3, GPT-4.1, and GPT-5 Thinking mini.
The model picker for GPT-5 also now includes Auto, Fast, and Thinking modes. Fast prioritizes the quickest responses, Thinking prioritizes deeper answers that take longer to reason through, and Auto chooses between the two.
The company also increased the message limit for Plus and Team users to 3,000 per week on GPT-5 Thinking.
Google releases Gemma 3 270M
This new model is “designed from the ground up for task-specific fine-tuning with strong instruction-following and text structuring capabilities already trained in,” according to Google.
It is ideal for high-volume, well-defined tasks; situations where speed and cost matter; cases where user privacy must be protected; or teams that want a fleet of specialized task models.
Both pretrained and instruction-tuned versions of the model are available for download from Hugging Face, Ollama, Kaggle, LM Studio, and Docker. Alternatively, the models can be tried out in Vertex AI.
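As a quick illustration of the task-specific workflow Google describes, the sketch below loads the instruction-tuned checkpoint with Hugging Face transformers. The repository ID is an assumption based on the naming convention of earlier Gemma releases; verify the exact name on Hugging Face before downloading.

```python
# Minimal sketch: running the instruction-tuned Gemma 3 270M locally with
# Hugging Face transformers. "google/gemma-3-270m-it" is an assumed repo ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # assumed instruction-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Gemma models expect a chat template, so format the prompt through the tokenizer.
messages = [{"role": "user", "content": "Extract the product names from: 'We compared Pixel 9 and iPhone 16.'"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```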
NVIDIA releases latest models in Llama Nemotron family
Llama Nemotron is a family of reasoning models, and the latest updates include a new hybrid model architecture, compact quantized models, and a configurable thinking budget that gives developers more control over token generation.
“This combination lets the models reason more deeply and respond faster, without needing more time or computing power. This means better results at a lower cost,” the company wrote in an announcement.
Google’s coding agent Jules gets critique functionality
Google is enhancing its AI coding agent, Jules, with new functionality that reviews and critiques code while Jules is still working on it.
“In a world of rapid iteration, the critic moves the review to earlier in the process and into the act of generation itself. This means the code you review has already been interrogated, refined, and stress-tested … Great developers don’t just write code, they question it. And now, so does Jules,” Google wrote in a blog post.
According to the company, the coding critic is like a peer reviewer who is familiar with code quality principles and is “unafraid to point out when you’ve reinvented a risky wheel.”
GitHub to be folded into Microsoft’s CoreAI org
GitHub’s CEO Thomas Dohmke has announced his plans to leave the company at the end of the year.
In a memo to employees, he said that Microsoft doesn’t plan to replace him; rather, GitHub and its leadership team will now operate under Microsoft’s CoreAI organization, a group within the company focused on developing AI-powered tools, including GitHub Copilot.
“Today, GitHub Copilot is the leader of the most successful and thriving market in the age of AI, with over 20 million users and counting,” he wrote. “We did this by innovating ahead of the curve and showing grit and determination when challenged by the disruptors in our space. In just the last year, GitHub Copilot became the first multi-model solution at Microsoft, in partnership with Anthropic, Google, and OpenAI. We enabled Copilot Free for millions and introduced the synchronous agent mode in VS Code as well as the asynchronous coding agent native to GitHub.”
Sentry launches MCP monitoring tool
Application monitoring company Sentry is making it easier to gain visibility into MCP servers with the launch of a new monitoring tool.
With MCP monitoring, developers can see which clients are experiencing errors, which tools are used most, and which tools are running slowly. They can also correlate errors with events like traffic spikes or new release deployments, or determine whether errors are only happening on one type of transport.
According to Cody De Arkland, head of developer experience at Sentry, when Sentry launched its own MCP server, it was getting over 30 million requests per month. He said that at that scale, it’s inevitable that errors will occur, and existing monitoring tools were struggling with MCP servers.
bitHuman launches SDK for creating AI avatars
AI company bitHuman has announced a visual SDK for creating avatars for use as chat agents, instructors, virtual coaches, companions, and experts in different fields.
According to the company, the SDK allows avatars to be created on Arm-based and x86 systems without a GPU. The avatars have a small footprint and can be run online or offline on devices like Chromebooks, Mac Minis, and Raspberry Pis.
Because of their small footprint, these characters can be brought to a wide range of environments, including classrooms, kiosks, mobile apps, or edge devices.
Read last week’s updates here: This week in AI dev tools: GPT-5, Claude Opus 4.1, and more (August 8, 2025)