Before the rise of elastic computing, there was a very real tradeoff between delivery velocity and stability. Shipping fast meant monitoring new deployments closely, especially when a feature hadn’t gotten thorough testing and review in staging.
In 2026, we’re seeing new delivery paradigms that promote trust in deployments. One of the most impactful ones has been the “environment pipeline”: teams are provisioning on-demand, lightweight environments to use throughout multiple testing and review stages. This means that every single feature gets more eyes on it, and the elastic nature of these environments means they don’t slow things down. Today, these environments serve both human developers and coding agents like Claude Code and Codex.
DORA metrics improve with scalable infrastructure
When you’re working towards tightening your engineering processes, your main focus is continuous improvement. Arbitrary performance goals don’t make sense for something as team-specific as software delivery; progress is relative.
Many teams use DORA metrics to benchmark their software throughput and stability. These metrics track five critical (yet simple) facets of delivery over time, so teams can measure their rate of improvement relative to their own past performance.
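As a rough illustration, four of the DORA metrics can be computed directly from deployment and incident records (the fifth facet, reliability, is usually tracked separately via SLOs). The record shapes below are hypothetical; real teams would pull this data from their CI/CD and incident tooling.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Hypothetical record shapes; in practice these come from CI/CD and
# incident-management tooling.
@dataclass
class Deployment:
    committed_at: datetime
    deployed_at: datetime
    caused_failure: bool

@dataclass
class Incident:
    started_at: datetime
    restored_at: datetime

def dora_snapshot(deploys: list[Deployment], incidents: list[Incident], days: int) -> dict:
    """Compute four DORA metrics over a trailing window of `days` days."""
    return {
        "deploy_frequency_per_day": len(deploys) / days,
        "median_lead_time_hours": median(
            (d.deployed_at - d.committed_at).total_seconds() / 3600 for d in deploys
        ),
        "change_failure_rate": sum(d.caused_failure for d in deploys) / len(deploys),
        "median_time_to_restore_hours": median(
            (i.restored_at - i.started_at).total_seconds() / 3600 for i in incidents
        ) if incidents else 0.0,
    }
```

The point isn’t the arithmetic; it’s that each metric is a simple aggregate, which is what makes tracking them over time cheap enough to do continuously.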
The beauty of DORA’s system is that it isn’t just descriptive, but also prescriptive for process improvements. DORA mentions flexible infrastructure as one of the most effective delivery “unblockers”. In 2026, flexible infrastructure is no longer a differentiator among high-performing teams; it’s an expectation. The focus has shifted to how well your pipeline handles the scale and variety of workloads running through it, especially automated and agentic ones.
The US National Institute of Standards and Technology (NIST) lists “on-demand self-service” as one of the essential characteristics of cloud computing. Implementing infrastructure self-service brings teams closer to using the cloud at its fullest capacity, and further from antiquated “data center” architecture patterns.
The modern environment stack
With the advantages of elastic computing, environment management today looks different: environments are expendable, leaning more into the “cattle, not pets” pattern. Teams have leveraged the cloud’s capabilities to make pre-production environments as adaptable and scalable as their production-grade counterparts.
Five or ten years ago, you’d see teams with dedicated, always-running staging, test, and QA environments (and maybe only a couple of each). These shared environments inevitably became bottlenecks, a major weak spot for delivery, and teams needed a different approach.
Fast forward to 2026, and we have a new environment “stack” that developers and agents move with, not around.
Staging environment
This is the final check before your code gets shipped. This environment is nearly identical to production, so your end-to-end testing and stakeholder review should catch any last-minute surprises.
Staging serves as a funnel: all individually-tested features get merged and deployed to staging. From there, teams can verify that the features don’t clash and that no new bugs have been introduced.
Ephemeral environments
Ephemeral environments account for most of the pre-production environments in your pipeline. As an architecture pattern (and best practice), they’re useful during multiple stages of the delivery pipeline. In essence, they’re lightweight and can be spun up quickly, on demand. Their use cases vary widely: ephemeral environments are implemented as test environments, preview environments, agentic environments, and QA environments.
Generally, ephemeral environments serve best as post-commit environments. They should react to your development workflow, which is usually done by configuring them to respond to GitOps events. Attaching ephemeral environments to their corresponding PRs helps keep them up-to-date with code changes and accessible for team members. In 2026, this includes PRs opened by coding agents. An agent-opened PR should trigger the same environment scaffolding as one from a human developer: isolated, fully capable, and ready for automated and/or human review.
Ephemeral environments should be nearly as capable as staging, and their elasticity makes this realistic, cost-wise. They should be able to support production-like data to accommodate end-to-end tests. Ideally, ephemeral environments help teams shift testing and review left. They also need to support agent workloads: coding agents may spin up, use, and discard many ephemeral environments in the course of completing a single task, running tests and validating changes autonomously before a human ever reviews the output.
Development environments
Much of your inner loop development still happens in development environments, but in 2026, that inner loop is increasingly shared between humans and coding agents. Agents handling first-draft implementation, refactoring, or test generation need the same reliable, production-like environment that developers do. Many teams are achieving better dev/prod parity with cloud development environments (CDEs). Local environments can still be configured to mirror cloud services, but the shift toward CDEs makes it easier to give both humans and agents a consistent, trustworthy foundation to work from.
Development environments are most effective when developers (and the agents working alongside them) can completely and confidently use them to develop a feature. That way, the development pipeline flows as expected, and test environments are kept for tests and verifications only, as best practice.
Environments are tied to developer experience
It’s no secret that better processes lead to happier developers. And in turn, happier developers write high-quality, stable code. Tech leaders’ recent focus on developer experience (DevEx) has cracked the code to a more sustainable development loop.
A large part of DevEx focuses on maximizing the time developers spend in flow or deep focus. This means keeping meetings to a minimum, reducing manual blockers, and providing better environment management.
On-demand environment management
Any manual step in your pipeline introduces delay, and many of them don’t need to exist. Anything from requesting a review to requesting a test environment can create a bottleneck. Developers remain in deep focus when they get instant feedback on the features they’re building; a delay of hours or days will thwart that focus and send them back to square one.
It’s best practice to allow developers to provision the infrastructure they need, when they need it. Any waiting, however minimal, cuts into productivity. The same principle applies to coding agents: an agent blocked on environment provisioning is wasted time. Instant, automated environment provisioning is what lets agent-assisted workflows maintain the speed they’re intrinsically capable of.
More environments, more often
When you have easily provisioned environments at your disposal, your features are subject to more “checkpoints”. This means your developers can test at the PR level, instead of guessing whether a feature will survive in staging. With elastic compute, cost and energy are no longer inhibitors, meaning your team can provision as many environments as they realistically need, guilt-free. This matters even more as coding agents enter the picture. A single agent task might require several environment iterations (spinning up, testing, failing, and retrying) before producing a PR worth reviewing. Elastic infrastructure absorbs that demand, making the human-agent development loop as fast and reliable as your pipeline allows.
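That provision-test-retry loop can be sketched as a few lines of orchestration. Everything here is hypothetical: `provision`, `run_tests`, and `teardown` stand in for whatever your platform and agent tooling actually expose.

```python
from typing import Callable, Optional

def agent_task_loop(
    provision: Callable[[], str],       # returns a fresh environment id
    run_tests: Callable[[str], bool],   # runs the suite in that environment
    teardown: Callable[[str], None],    # always discards the environment
    max_attempts: int = 3,
) -> Optional[str]:
    """Return the id of the environment whose attempt passed, or None.

    Each attempt gets a fresh, isolated ephemeral environment; only a
    passing attempt should result in a PR for human review.
    """
    for _ in range(max_attempts):
        env = provision()
        try:
            if run_tests(env):
                return env  # ready to open (or update) the PR
        finally:
            teardown(env)  # ephemeral: discard whether it passed or failed
    return None
```

The key design choice is that teardown is unconditional: because environments are cheap and disposable, the agent never reuses a possibly dirty one between attempts.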