Google Built the Foundation. Someone Else Built the House.
Google has fallen to fifth place in the AI coding tools race. That is not a typo.
According to JetBrains' January 2026 developer survey, just 8% of developers use Gemini Code Assist for coding at work. Google's newer entry, Antigravity, launched in November 2025 and has reached only 6% adoption. Meanwhile, GitHub Copilot sits at 29%, and both Cursor and Claude Code have climbed to 18% each.
The company that invented the Transformer architecture, the very foundation every AI coding model is built on, is now trailing four competitors in the tools developers actually reach for when writing code.
The numbers tell a clear story
The AI coding assistant market has consolidated fast. The top three players, GitHub Copilot, Claude Code, and Cursor (built by Anysphere), now control over 70% of the market. All three have crossed the $1 billion ARR threshold. Cursor recently passed $2 billion in annualized revenue and is reportedly seeking a $50 billion valuation.
What makes this more striking is the trajectory. GitHub Copilot's growth has stalled since last year. Claude Code's adoption jumped 1.5x between September 2025 and January 2026. Google's tools, despite the company's massive resources and developer ecosystem, have not broken through.
Why developers are choosing the competition
Three factors are driving developer preferences, and Google is losing on all of them.
Reliability matters more than benchmarks
Developers on Google's own forums have reported quota errors affecting paying subscribers, geographic access limitations, and wildly inconsistent performance. One thread titled "What on earth is going on with Gemini Code Assist?" captures the frustration. Users describe days where the tool "couldn't do even simple stuff" followed by periods where it works "almost back to normal."
Code quality is another sticking point. Developers report that Gemini often creates duplicate variables or functions, struggles with context awareness across larger codebases, and produces output that needs significant validation before you can trust it.
Compare that to Claude Code's satisfaction numbers: 91% CSAT score, an NPS of 54, and 46% of developers naming it their "most loved" tool. Cursor comes in at 19% "most loved." GitHub Copilot, despite its dominant market share, manages just 9%.
When we tested Gemini Code Assist against Claude Code and Cursor on real projects last quarter, the gap in reliability matched what the survey data shows. Gemini would produce solid results on straightforward tasks, then fall apart on anything requiring multi-file context or complex refactoring.
The agentic gap
The biggest strategic issue for Google is an architectural one. The AI coding market is moving toward agentic workflows, where the AI does not just suggest code completions but autonomously handles entire tasks like debugging, refactoring, and writing tests across a full codebase.
Claude Code and Cursor have leaned hard into this direction. Claude Code operates as a terminal-native agent that can navigate repositories, run commands, and execute multi-step engineering tasks. Cursor combines an IDE with deep agentic capabilities. GitHub Copilot has been racing to catch up with its own agent mode.
Google's approach has been different. Gemini Code Assist focuses on IDE integration and enterprise code indexing, an assistive model rather than an agentic one. That strategy has its merits, but developers in 2026 increasingly want tools that can do the work, not just help them do it.
Enterprise lock-in is not enough
Google has tried to leverage its cloud ecosystem to drive adoption. Gemini Code Assist integrates deeply with Google Cloud, Firebase, and Android development workflows. The pitch is compelling on paper: if your stack is already Google, your coding assistant should be too.
But developers are not buying it. GitHub Copilot has locked up enterprise deployment at 90% of Fortune 100 companies. Cursor and Claude Code are winning individual developers and small teams, who then push for adoption at their companies. Google's enterprise-first approach means it is fighting for the last slice of a pie that has already been divided.
Where Google actually has an edge
We would be doing a disservice to pretend Google has nothing going for it. A few areas genuinely stand out.
Context window size. Gemini 3.1 Pro offers a 1 million token context window, the largest in the industry. For developers working with massive monorepos or needing to process entire codebases at once, this is a real advantage that no competitor matches today.
Pricing. At $2 per million input tokens and $12 per million output tokens, Gemini 3.1 Pro undercuts both Claude Opus 4.6 and GPT-5.2. For high-volume API usage, the cost savings add up.
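To put those rates in concrete terms, here is a rough sketch of what a monthly API bill looks like at Gemini 3.1 Pro's listed prices. The usage figures are hypothetical, chosen only for illustration:

```python
# Rough monthly cost estimate at Gemini 3.1 Pro's listed API rates
# ($2 per million input tokens, $12 per million output tokens).
INPUT_RATE = 2.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 12.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated monthly spend in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical example: a team sending 500M input tokens and
# generating 50M output tokens per month.
print(f"${monthly_cost(500_000_000, 50_000_000):,.2f}")  # $1,600.00
```

Because coding assistants typically feed in far more context than they emit, the low input-token rate is where the savings concentrate for high-volume use.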
Benchmark performance. On SWE-bench Verified, Gemini 3.1 Pro scores 80.6%, which is very close to Claude Opus 4.6 at 80.8% and Claude Sonnet 4.6 at 82.1%. The raw model capability is competitive. The gap is in the tooling and developer experience built around it, not the underlying model.
Google-stack development. If you are building Android apps, working in Firebase, or deploying on Google Cloud, Gemini Code Assist's deep integration with those services provides context that generic tools cannot match. Google's own internal data shows developers using Gemini Code Assist spend 31% less time context-switching and resolve dependency issues 40% faster.
What this means for Google's AI strategy
Google's position in AI coding tools is a microcosm of a broader pattern. The company builds world-class models, then struggles to build the products and developer experiences that turn those models into tools people actually want to use.
Gemini 3.1 Pro is genuinely competitive on benchmarks. But benchmarks do not ship products. Developer experience does. And right now, Claude Code's terminal agent, Cursor's IDE, and even Copilot's deep GitHub integration all deliver a smoother, more reliable experience than what Google offers.
The real risk for Google is not losing the AI coding market specifically. It is what that loss signals about the company's ability to compete in the applied AI layer. If Google cannot win over developers, a community it has historically owned through Android, Chrome, and Go, then its AI product strategy everywhere else comes into question.
Our take
Google has the models, the money, and the infrastructure. What it does not have is a product that developers love. An 8% adoption rate is not a base to build from when competitors are consolidating at 18-29% and pulling away.
The path forward is not mysterious. Google needs to ship a reliable, agentic coding tool that works outside the Google Cloud ecosystem. Antigravity could become that product, but at 6% adoption five months after launch, the window is closing.
The irony is hard to miss. Google published "Attention Is All You Need" in 2017 and gave the world the architecture behind every AI coding tool on the market. Nine years later, it is watching from fifth place as other companies build the products developers actually want to use.
