The JetBrains AI Pulse Survey from January 2026 is the most comprehensive developer AI usage dataset published this year, covering 24,534 developers across 183 countries. Its headline finding: 90% of developers now regularly use at least one AI tool at work. That figure marks a decisive shift from experimentation to infrastructure. AI coding tools are no longer a productivity experiment championed by early adopters — they are the default working environment for software development professionals worldwide, embedded in IDEs, code review pipelines, and CI workflows at scale.

AI Coding Tools Adoption 2026: The JetBrains Survey Findings

The JetBrains AI Pulse Survey, conducted in January 2026 across 24,534 developers, delivers the most granular dataset on AI tool adoption published this year. Ninety percent of respondents report regularly using at least one AI tool at work — a figure that JetBrains defines as weekly or more frequent usage, not one-time trials. That same survey found 74% have adopted specialized AI tools, meaning tools designed specifically for coding workflows rather than general-purpose large language models accessed via chat interfaces. The survey methodology matters here: JetBrains weights its respondent pool for regional and role diversity, tracks the same categories year over year, and distinguishes clearly between awareness, trial, and regular adoption — distinctions that most industry surveys collapse into a single “have used” metric. The result is a dataset where the 90% figure is directly comparable across years and not inflated by single-session experiments. For context, the 2024 equivalent survey reported that 76% of developers used or planned to use AI tools; the comparable use-or-plan figure for 2026 is 84%, while regular weekly-or-more usage now stands at 90% — one of the fastest adoption curves observed in developer tooling history.

Overall Adoption: 90% of Developers Now Use AI Tools at Work

The 90% regular adoption rate from JetBrains represents the crossing of a meaningful threshold: AI tool usage is now the norm, and non-usage is the statistical outlier. Eighty-four percent of developers surveyed in the broader 2026 dataset either already use or plan to use AI tools, up from 76% in 2024. Separately, 85% of developers report regularly using AI for core development tasks including coding, debugging, and code review — not just for ancillary tasks like writing documentation or generating boilerplate. What the aggregate numbers obscure is the depth of integration. Regular usage at the 85–90% level still encompasses a wide spectrum: some developers use AI tools opportunistically for one-off questions, while others have restructured entire workflows around AI-assisted development cycles. The 62% who rely on at least one AI coding assistant and the 51% who use one daily are the more telling figures for understanding how deeply the tools have penetrated actual development workflows. Daily usage at 51% means AI tools have achieved the same habitual embedding as version control or testing frameworks — they are no longer consciously invoked tools but automatic steps in how software gets written. The remaining 10% who do not yet regularly use AI tools skew toward specific language communities (systems programming, embedded development), specific industries with strict data governance requirements, and senior developers with established practices who have evaluated tools and actively declined them.

Market Share: GitHub Copilot vs Cursor vs Everyone Else

At work, the AI coding tools market has consolidated around two clear leaders: GitHub Copilot holds 29% market share and Cursor holds 18%, according to the JetBrains survey’s workplace usage breakdown. Combined, those two tools account for 47% of workplace AI coding tool usage — a significant concentration in a category that a year ago was more fragmented. The remaining 53% is split across a long tail of tools including JetBrains AI Assistant, Amazon Q Developer, Tabnine, Codeium, Claude Code, Sourcegraph Cody, and general-purpose LLM interfaces accessed via browser or API. GitHub Copilot’s workplace dominance reflects the Microsoft distribution advantage: organizations that run on Azure, use GitHub for source control, and have existing Microsoft Enterprise Agreements can add Copilot to their stack with minimal procurement friction. Cursor’s 18% workplace share is more impressive given that it operates without an enterprise sales motion comparable to Microsoft’s — it has achieved nearly two-thirds of Copilot’s market share through product-led growth and developer preference, not license bundling. The market share numbers also reveal a significant divergence between workplace and personal usage: outside of work, the rankings shift substantially, with ChatGPT-based workflows and local model setups gaining share relative to their workplace numbers. Developers are choosing different tools for different contexts, and the workplace market share figures should be read as reflecting procurement dynamics alongside technical preference.
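The concentration math here is simple enough to verify directly. A minimal sketch using only the workplace share figures quoted above (variable names are ours, not the survey's):

```python
# Workplace market share (percent) quoted from the JetBrains survey breakdown.
copilot = 29
cursor = 18

combined = copilot + cursor   # share held by the two leaders → 47
long_tail = 100 - combined    # everything else in the category → 53
ratio = cursor / copilot      # Cursor relative to Copilot → ~0.62, nearly two-thirds

print(f"Leaders combined: {combined}%")
print(f"Long tail: {long_tail}%")
print(f"Cursor vs Copilot: {ratio:.2f}")
```

The two-thirds comparison in the text is this last ratio: 18/29 ≈ 0.62.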

GitHub Copilot’s 4.7 Million Paid Subscribers: The Enterprise Standard

GitHub Copilot crossed 4.7 million paid subscribers in January 2026, representing 75% year-over-year growth from its January 2025 baseline and firmly establishing it as the market category leader in paid AI coding tools. The 42% market share among paid tools — distinct from the broader 29% workplace share figure that includes free tiers and trial usage — confirms that when organizations or individual developers open their wallets for an AI coding tool, Copilot captures the largest portion of that spending. The productivity data attached to that subscriber base is the most-cited in the industry: GitHub’s own telemetry shows Copilot-assisted developers complete individual tasks 55% faster on average, and teams with Copilot access ship 126% more projects per week compared to their baseline. These figures are from GitHub’s internal measurement of Copilot-enabled repositories versus control groups and have been referenced across multiple independent analyses. The 55% faster task completion rate is consistent with other studies measuring AI coding tool impact on time-to-completion for well-defined coding tasks, though the effect varies significantly by task type: code generation for known patterns shows the largest speedup, while debugging novel failures and architectural decision-making show smaller gains. The 126% more projects per week is a throughput metric that captures both the speed increase and the cognitive offloading effect — developers who spend less time on routine coding have capacity to initiate and complete more discrete work items in the same calendar period.
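The 75% year-over-year growth figure also implies a January 2025 baseline that can be back-calculated from the numbers quoted above; note the baseline itself is derived here, not a reported figure:

```python
# Reported figures from this section.
subs_jan_2026 = 4_700_000
yoy_growth = 0.75  # 75% year-over-year growth

# 4.7M = baseline * (1 + 0.75), so the implied baseline is 4.7M / 1.75.
implied_baseline_jan_2025 = subs_jan_2026 / (1 + yoy_growth)

print(f"Implied Jan 2025 subscribers: ~{implied_baseline_jan_2025 / 1e6:.2f}M")
# → ~2.69M
```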

The Trust Crisis: Why Only 29% Trust AI Tool Output in 2026

The most striking finding in the 2026 AI tool adoption data is not about adoption rates at all — it is about trust. Only 29% of developers report trusting AI tool output, a figure that represents a dramatic collapse from 2023 levels when trust exceeded 70%. That reversal has happened during a period when adoption has increased, which means developers are now in the counterintuitive position of regularly using tools they do not particularly trust. Understanding this dynamic is essential for interpreting the adoption statistics correctly. The trust decline does not mean developers have concluded AI tools are useless — it means the field has matured past the novelty phase where outputs were evaluated with insufficient skepticism. Early adopters in 2023 were often amazed that the tools worked at all; by 2026, developers have accumulated enough experience with confident-but-wrong outputs, hallucinated API signatures, subtly buggy generated code, and security vulnerabilities in AI suggestions to have recalibrated their prior probability that any given output is correct. The practical result is a verification overhead: most experienced developers now treat AI-generated code as a starting point requiring code review, not a final output. This behavioral adaptation is healthy from a code quality perspective — it prevents the worst failure modes of uncritical AI output acceptance — but it also means the productivity gains from AI tools are partially offset by the verification cost. Teams that have not yet built explicit review processes for AI-generated code are absorbing this verification cost inconsistently, and the 29% trust figure suggests that most developers have concluded, from experience, that trust without verification is not warranted.

Productivity Data: 3.6 Hours/Week Saved and 22% AI-Authored Code

Developers using AI coding tools save an average of 3.6 hours per week, and 22% of merged code in AI-tool-enabled projects is now AI-authored — two data points that together define the scale of AI’s impact on software production in 2026. The 3.6 hours per week figure translates to roughly 9% of a standard 40-hour workweek recovered from tasks that previously required direct developer attention: boilerplate generation, test scaffolding, documentation drafting, repetitive refactoring, and routine bug patterns. Across a team of ten developers, that aggregate saving represents the equivalent of one additional developer’s weekly output being unlocked from existing headcount. The 22% AI-authored code figure demands careful interpretation. It refers to merged code where AI tools were the primary generation source — code that was suggested by an AI tool, reviewed by a developer, and committed with minimal modification. It does not capture the larger proportion of code that was AI-assisted but significantly modified by the developer before commit. Both the 3.6 hours and the 22% figures are directional aggregates that mask meaningful variation by role, language, and task type. Senior engineers working on novel algorithmic problems report smaller productivity gains and lower AI code merge rates than mid-level engineers working in mature codebases with established patterns. Frontend engineers in JavaScript and TypeScript report higher AI adoption and merge rates than systems engineers in Rust or C, reflecting both the tools’ training data distributions and the nature of the work. The honest read on the 22% figure is that it marks a genuine shift in where code comes from — not a majority, but a fraction large enough to require active governance, review standards, and accountability frameworks for AI-sourced code.
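The back-of-envelope arithmetic above checks out in a few lines; the inputs are the survey figures quoted in this section, and the ten-person team is the illustrative case from the text:

```python
# Reported figures from this section.
hours_saved_per_week = 3.6
workweek_hours = 40
team_size = 10  # illustrative team from the text

# Fraction of a standard workweek recovered per developer.
fraction_recovered = hours_saved_per_week / workweek_hours  # → ~9%

# Aggregate hours recovered across the team each week.
team_hours = hours_saved_per_week * team_size  # → 36 hours

# Expressed as a fraction of one developer's week: roughly one extra developer.
fte_equivalent = team_hours / workweek_hours  # → 0.9

print(f"Per developer: {fraction_recovered:.0%} of the week")
print(f"Team of {team_size}: {team_hours:.0f} hours, ~{fte_equivalent:.1f} FTE")
```

Strictly, 36 hours is 0.9 of a 40-hour week, which is why the text hedges the team-level saving as the "equivalent" of one additional developer rather than a full extra headcount.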

Enterprise vs Individual Developer Adoption Patterns

The enterprise AI coding tool adoption pattern diverges sharply from individual developer behavior, and the 97% of enterprise developers who use generative AI coding tools daily represents a fundamentally different dynamic than the 51% daily usage figure for the broader developer population. Enterprise adoption is not primarily driven by individual developer preference — it is driven by organizational licensing decisions, tool standardization mandates, and the embedding of AI features into approved IDE configurations. A developer working at a company with a GitHub Enterprise license and a mandate to use Copilot is counted in the 97% regardless of how enthusiastically they have personally embraced AI tools. Individual developer adoption, by contrast, requires a deliberate decision to evaluate, pay for, and integrate a tool. The enterprise-individual gap also reflects different tool selection criteria. Individual developers prioritize raw capability, context window, code quality, and responsiveness; they switch tools based on personal productivity experience. Enterprise teams prioritize security posture, data residency, administrative controls, audit logging, SSO integration, and vendor support commitments. These different selection criteria explain why GitHub Copilot dominates enterprise market share — it excels on the procurement and compliance dimensions even where competitors may match or exceed it on raw capability — and why Cursor and Claude Code have grown fastest among individual developers and small teams operating outside formal procurement cycles. The enterprise mandate dynamic also creates a discrepancy between adoption metrics and satisfaction metrics. Tools that are mandated achieve high adoption without necessarily achieving high satisfaction, which is one explanation for the trust data: some portion of the 71% who do not trust AI tool output are using tools they would not have personally selected.

Where AI Coding Tool Adoption Is Headed Next

The trajectory established by the JetBrains AI Pulse data points toward consolidation, deeper integration, and the emergence of governance as the central enterprise challenge in 2026 and beyond. With adoption already at 90% regular usage, the remaining growth opportunity is not in converting non-users — it is in deepening integration for the 90% who are already using tools, and in resolving the trust and verification gap that the 29% trust figure exposes. The next phase of adoption is not about whether developers use AI tools but about what proportion of the software development workflow AI tools handle autonomously. The current 22% AI-authored code baseline will rise as agentic tools capable of multi-file edits, test generation, and CI pipeline integration become the standard offering rather than a premium feature. Tools like Cursor’s background agents, GitHub Copilot’s workspace agent, and Claude Code’s subagent mode are early implementations of a pattern that will define the category in 2027: AI tools that initiate and complete multi-step coding tasks with minimal human intervention per step. For enterprise teams, the most consequential development in the next twelve months will not be the adoption curve itself but the governance infrastructure built around it. As AI-authored code crosses from 22% toward higher fractions of total merged code, questions about code review standards, attribution, license compliance, security vulnerability liability, and developer accountability for AI-generated code will move from theoretical to operationally urgent. The 2026 data has established where adoption is; the 2027 conversation will be about how organizations manage what that adoption has produced.


Frequently Asked Questions

What percentage of developers regularly use AI coding tools in 2026? According to the JetBrains AI Pulse Survey conducted in January 2026 across 24,534 developers, 90% of developers regularly use at least one AI tool at work. A broader industry figure of 84% covers developers who either already use or plan to use AI tools, up from 76% in 2024.

What is GitHub Copilot’s market share among paid AI coding tools? GitHub Copilot holds 42% market share among paid AI coding tools as of January 2026, with 4.7 million paid subscribers. In workplace usage across all tools (including free tiers), Copilot holds 29% and Cursor holds 18%.

Why do only 29% of developers trust AI tool output if adoption is at 90%? The trust decline from over 70% in 2023 to 29% in 2026 reflects accumulated developer experience with AI-generated code that is confidently wrong. Developers have learned through practice that AI tools require systematic verification, not automatic trust. High adoption and low trust coexist because developers find the productivity benefits worth the verification overhead.

How much time do developers save using AI coding tools weekly? Developers save an average of 3.6 hours per week using AI coding tools. GitHub Copilot’s internal data shows 55% faster task completion and 126% more projects completed per week for Copilot-enabled teams versus baseline comparison groups.

What percentage of merged code is now AI-authored? In AI tool-enabled development environments, 22% of merged code is AI-authored — meaning it was primarily generated by an AI tool, reviewed by a developer, and committed with minimal modification. This figure is expected to grow as agentic AI coding tools with multi-step autonomous capabilities become standard in 2026 and 2027.