
Shifting Testing Left: The Request Isolation Solution


Originally posted on The New Stack. This is part of an ongoing series.

In previous parts of this series we’ve discussed how QA has a role even as testing shifts left, and how there are multiple models for putting more accurate testing back in the hands of developers and software development engineers in test (SDETs). Now let’s consider a model that’s effective as teams scale beyond a single two-pizza team. Conceptually, testing in a complex microservice environment should include a highly accurate shared environment where developers’ tests and experiments won’t interfere with others’ updates.

In the paragraph above I say “back” in the hands of developers because, before microservice architecture and huge clusters of containers, developers had the power to run tests on their code that very closely resembled how their code would run in production.

Let’s discuss how such an approach works on an architectural level.

How Can a Shared Cluster Help Shift Testing Left?

In a small team, a shared testing environment can put real testing in the hands of developers. With test versions of third-party dependencies and copies of the production versions of all internal services, it’s a great way for developers to do pre-release testing. In larger teams, the essential problem is that too many developers want to test at once: when test code is pushed to service A, the team that works on service B can’t test at the same time without their tests failing or their changes interfering with service A’s tests.

The solution is to establish a highly reliable cluster for testing, and then let teams deploy test versions of services that don’t affect the cluster as a whole.

Source: Signadot

A few concepts that we’ll use throughout this explanation:

  • Baseline — The baseline version of the cluster should include services and resources that are extremely close to the deployed production environment. Changes need to be merged to baseline before deployment, ensuring that there aren’t large gaps between this cluster and prod. This baseline is typically kept up to date with the main/trunk branch using CI/CD pipelines.
  • Sandbox — The service or group of services on which developers are testing. In general this will involve a new version of an existing service, but a sandbox could also contain new services.

The core idea is that, with each request between services, we need to intelligently decide if the request should go to the baseline service or to the sandbox. This solution is generally referred to as request-level isolation of testing/development services.
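As a rough sketch of that decision (not Signadot’s actual implementation), the Go snippet below routes a request to a sandboxed service only when its routing key matches an entry in a registry of sandboxes. The routing-key value, the service addresses, and the in-memory registry are all assumptions made for illustration.

```go
package main

import "fmt"

// sandboxRoutes maps a routing key to the services that have a sandboxed
// ("forked") version registered under that key. In a real system this
// registry would live in a control plane, not in process memory.
var sandboxRoutes = map[string]map[string]string{
	"alice-pr-42": {
		"checkout": "checkout.sandbox-alice-pr-42.svc:8080",
	},
}

// resolveTarget decides, per request, whether to send traffic to a sandbox
// or fall through to the baseline version of the service.
func resolveTarget(service, routingKey, baselineAddr string) string {
	if routes, ok := sandboxRoutes[routingKey]; ok {
		if addr, ok := routes[service]; ok {
			return addr // a sandboxed version exists for this routing key
		}
	}
	return baselineAddr // default: the shared baseline service
}

func main() {
	// A request tagged with the sandbox's routing key reaches the fork...
	fmt.Println(resolveTarget("checkout", "alice-pr-42", "checkout.baseline.svc:8080"))
	// ...while services without a fork (or untagged requests) stay on baseline.
	fmt.Println(resolveTarget("payments", "alice-pr-42", "payments.baseline.svc:8080"))
}
```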

What We Need to Make Request-Level Isolation Work

The technical lift for request isolation isn’t zero, and it’s important to identify which components need to be in place to make the system work:

  • Request routing — We need a system to intelligently route requests. Every service-to-service request could be routed to a different target service based on the value of certain request headers.

At Signadot, request routing for sandboxes is handled either by a service mesh such as Istio or, where no mesh is present, by Signadot’s DevMesh. When a service mesh is used, Signadot configures it to perform request routing on its behalf, without needing additional in-cluster components specific to routing. The DevMesh sidecar is a lightweight Envoy-based proxy that can be added to any workload through a Kubernetes pod annotation.
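Building on the decision sketch above, here is a minimal illustration in Go (not DevMesh itself) of the kind of header-based routing such a sidecar proxy performs: requests carrying a matching routing header are forwarded to the sandboxed workload, and everything else falls through to the baseline. The header name x-routing-key, the port, and the addresses are invented for the example.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Baseline and sandboxed destinations for the same logical service.
	baseline, _ := url.Parse("http://checkout.baseline.svc:8080")
	sandbox, _ := url.Parse("http://checkout.sandbox-alice-pr-42.svc:8080")

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			target := baseline
			// Only requests carrying the matching routing key reach the sandbox.
			if req.Header.Get("x-routing-key") == "alice-pr-42" {
				target = sandbox
			}
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// The proxy listens where the sidecar would, in front of the workload.
	log.Fatal(http.ListenAndServe(":15001", proxy))
}
```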

  • Context propagation — We also need to keep track of whether each request is a “test” request from a developer working with a sandbox. As such, a system for context propagation with consistent headers is required.

At Signadot, we harness the power of OpenTelemetry to do this context propagation. OpenTelemetry’s “baggage” component is perfect for this need.
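As an illustration of the idea, the sketch below shows how a routing key can ride along in OpenTelemetry baggage between Go services, using the go.opentelemetry.io/otel baggage and propagation packages. The baggage key name sd-routing-key is a placeholder for this example, not a documented Signadot header.

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"go.opentelemetry.io/otel/baggage"
	"go.opentelemetry.io/otel/propagation"
)

// Propagate both W3C trace context and W3C baggage headers.
var propagator = propagation.NewCompositeTextMapPropagator(
	propagation.TraceContext{},
	propagation.Baggage{},
)

// attachRoutingKey stores the sandbox routing key in OpenTelemetry baggage.
func attachRoutingKey(ctx context.Context, key string) context.Context {
	member, _ := baggage.NewMember("sd-routing-key", key)
	bag, _ := baggage.New(member)
	return baggage.ContextWithBaggage(ctx, bag)
}

// injectHeaders copies the baggage into outgoing request headers so the next
// service (and its routing sidecar) can see which sandbox the request belongs to.
func injectHeaders(ctx context.Context, req *http.Request) {
	propagator.Inject(ctx, propagation.HeaderCarrier(req.Header))
}

// extractHeaders recovers the baggage from an incoming request on the next hop.
func extractHeaders(req *http.Request) context.Context {
	return propagator.Extract(context.Background(), propagation.HeaderCarrier(req.Header))
}

func main() {
	ctx := attachRoutingKey(context.Background(), "alice-pr-42")

	out, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://payments.baseline.svc/charge", nil)
	injectHeaders(ctx, out)
	fmt.Println("outgoing baggage header:", out.Header.Get("baggage"))

	// On the receiving service, the routing key can be read back from baggage.
	next := extractHeaders(out)
	fmt.Println("routing key on next hop:", baggage.FromContext(next).Member("sd-routing-key").Value())
}
```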

Success Stories With Request Isolation

Many teams, from mid-sized companies to large enterprises, use request isolation in some form to put the power of highly accurate testing in the hands of developers. Here are a couple of examples.

Uber’s Short-Lived Test Environments: Sandboxes in a Shared Cluster

At Uber, the frustrations with per-developer environments mentioned in my previous articles (out-of-date requirements, slow updates) led to the development of Short-Lived Application Test Environments (SLATE), which create sandboxes within a shared cluster alongside up-to-date versions of dependent services.

Source: Uber

Uber goes one step further than the model I describe above: Rather than having everything work within an up-to-date staging cluster, SLATEs can also call production services as needed. The team describes the benefits in their blog post:

“SLATE significantly improved the experience and velocity of E2E [end-to-end] testing for developers. It allowed them to test their changes spread across multiple services and against production dependencies. Multiple clients like mobile, test suites and scripts can be used for testing services deployed in SLATE. A SLATE environment can be created on demand and can be reclaimed when not in use, resulting in efficient use of infrastructure. While providing all this, it enforces data isolation and compliance requirements.”

DoorDash Gets Feedback 10 Times Faster

Before implementing request isolation on a shared cluster, developers at DoorDash did their final testing in a shared staging environment. They were forced to use mocks and contract tests to simulate how things would work on staging, and this imperfect replication meant staging often broke when new features were tested.

With request isolation, developers still use staging to test their work, but those tests are isolated, meaning they won’t affect staging’s stability for anyone else.

Using Signadot’s local sandbox feature, the services a developer is experimenting with can run on their local workstation, while requests to everything else are served by the shared cluster. This allows much faster testing.

Developer experience specialists at DoorDash estimate that getting feedback is now 10 times faster than when a shared staging environment was used for final testing.

Conclusions: Back to Developer Testing

When discussing part 1 of this series on Hacker News, one user wrote back with a basic observation:

“Any time the code you’re developing can’t be run without some special magic computer (that you don’t have right now), special magic database (that isn’t available right now), special magic workload (production load, yeah…), I predict the outcome is going to be bad one way or another.”

Ironically, this is the state of modern software development on Kubernetes clusters. The cluster can’t be fully replicated locally, so we’re doomed to imperfect replication, with mocks standing in for big chunks of our stack. Yet ever since the birth of agile methodologies, we’ve expected developers to be able to try out their code almost instantly.

This “shift left” then, is truly a return to form. With request isolation we can let developers do what they’ve always done: experiment, try things out, and discover what works.

Join the Signadot Community

We’d love to show you how request isolation has worked for our users and explore how Signadot can help you. Check out signadot.com for more user stories, tutorials and best practices.

Lightweight Kubernetes Developer Environments for Fast Integration Testing

In modern cloud-native development, the velocity of innovation is directly tied to the efficiency of the development and testing lifecycle. Traditional, static staging environments often become bottlenecks, burdened by high costs, resource contention, and slow feedback loops. To address these challenges, engineering teams are increasingly adopting lightweight and ephemeral Kubernetes environments. These solutions provide on-demand, isolated testing grounds that significantly reduce infrastructure costs and accelerate development cycles.

This article examines the role of lightweight Kubernetes environments, explores different approaches to their implementation, and highlights tools that enable teams to build faster and more cost-effectively.

The Limitations of Traditional Staging Environments

For years, a shared staging environment was the standard for pre-production testing. However, in a microservices architecture, this model presents several critical drawbacks:

  • High Cost: Maintaining a 1:1 replica of a production environment is resource-intensive and expensive, especially when it sits idle for long periods.
  • Resource Contention: Multiple developers or teams attempting to deploy and test changes in a single shared environment often leads to conflicts, overwritten changes, and scheduling delays.
  • Slow Feedback: The process of deploying to a staging environment, running tests, and gathering feedback is often slow, hindering developer productivity and extending release cycles.
  • Configuration Drift: Shared environments can easily diverge from production, leading to tests that pass in staging but fail in production.

Kubernetes provides the foundational capabilities to overcome these issues by enabling the programmatic creation and destruction of environments. This has given rise to the practice of using ephemeral environments for testing and validation [1].

Types of Lightweight Kubernetes Environments

Lightweight Kubernetes environments can be categorized based on their use case, from local development to integrated CI/CD testing.

Local Kubernetes Clusters

For initial development and unit testing, developers can run a complete Kubernetes cluster on their local machine. These environments are designed for fast startup and ease of use, allowing developers to work with the Kubernetes API without needing access to a remote cluster. Leading tools for local Kubernetes development include:

  • Kind (Kubernetes IN Docker): Creates multi-node Kubernetes clusters using Docker containers as nodes. It is known for its fast startup times and seamless integration with CI/CD pipelines [2].
  • MicroK8s: A lightweight, zero-configuration Kubernetes distribution that is simple to install and comes with built-in add-ons for common services like DNS, storage, and monitoring [2].

These tools are excellent for isolated component development but do not solve the challenge of testing a service's integration with a complex web of remote dependencies.

Ephemeral Preview Environments

Ephemeral environments, also known as preview environments, are on-demand, short-lived deployments created automatically for a specific code change, such as a pull request (PR). They provide an isolated, production-like environment to test new features, bug fixes, and other changes with all their dependencies before merging to the main branch [3].

By integrating these environments into a CI/CD pipeline, teams can automate the process of spinning up an environment for every commit, running automated tests, and tearing it down upon completion. This practice dramatically accelerates feedback loops and reduces infrastructure costs by ensuring resources are only consumed when needed [1].
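As one concrete (and simplified) example of that automation, a CI job could create a dedicated namespace for each pull request before deploying the stack into it with Helm or Kustomize. The Go sketch below uses client-go for the namespace step; the pr-&lt;number&gt; naming convention, the ephemeral label, and the kubeconfig lookup are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load cluster credentials from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	prNumber := 42 // normally taken from the CI system's environment
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   fmt.Sprintf("pr-%d", prNumber),
			Labels: map[string]string{"ephemeral": "true"}, // makes cleanup jobs easy to target
		},
	}

	created, err := clientset.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created ephemeral namespace:", created.Name)

	// A teardown job would delete the namespace once the PR is merged or closed:
	// clientset.CoreV1().Namespaces().Delete(ctx, created.Name, metav1.DeleteOptions{})
}
```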
Strategies for Implementing Ephemeral Environments

There are two primary approaches to creating ephemeral environments in Kubernetes, each with different implications for cost and complexity.

1. Full Environment Duplication

The most straightforward approach is to duplicate the entire application stack for every pull request. This involves creating a new Kubernetes namespace and deploying all microservices and their dependencies into it. Tools like Helm and Kustomize are often used to codify and automate these deployments, ensuring consistency and reducing manual error [1].

While this method provides maximum isolation, it is often prohibitively expensive and slow. Replicating dozens or hundreds of microservices for every PR consumes significant compute and memory resources, leading to high cloud bills and long wait times for developers.

2. Resource-Smart Service Forking

A more advanced and cost-efficient strategy is to create lightweight environments by forking only the services that have changed. Instead of duplicating the entire stack, this approach deploys only the new version of a service and intelligently routes requests to it within an isolated context, while leveraging the existing, shared baseline environment for all other upstream and downstream dependencies.

Signadot is a platform that specializes in this resource-smart approach. It creates lightweight preview environments, or "Sandboxes," that are not full replicas of a namespace. When a developer wants to test a change to a specific microservice, Signadot deploys only that new version. The Sandbox intercepts requests intended for the new service and routes them accordingly, while all other requests flow through the stable, shared services [4].

This model offers several distinct advantages:

  • Massive Cost Reduction: By avoiding the duplication of the entire stack, infrastructure costs can be reduced by over 90% compared to the full duplication model.
  • Unmatched Speed: Sandboxes can be spun up in seconds, as only the modified components need to be deployed. This provides developers with nearly instant feedback [5].
  • High-Fidelity Testing: Developers can test their changes against real dependencies running in a shared cluster, ensuring that tests accurately reflect real-world behavior [4].
  • Collaborative Workflows: Multiple PRs from different teams can be combined into a single Sandbox, enabling comprehensive, cross-team integration testing before a single line of code is merged [5].

Some technology companies have adopted this approach with Signadot to improve developer velocity, catch bugs earlier, and significantly lower their infrastructure spend.

Selecting the Right Kubernetes Preview Environment Tool

The market for Kubernetes preview environment tools is growing, with various platforms offering different levels of automation and control [3]. When selecting a tool, teams should consider factors like cost, speed, integration complexity, and the ability to handle complex microservice dependencies. For organizations with complex, interdependent microservices, a solution that minimizes resource duplication is essential for scaling development practices effectively.

Conclusion

Lightweight Kubernetes environments are a transformative technology for modern software development teams. By moving away from slow, expensive, and contentious staging environments to on-demand ephemeral previews, organizations can empower their developers to test changes faster, more frequently, and with greater confidence.

While several tools can facilitate the creation of these environments, approaches that prioritize resource efficiency, such as the service-forking model, offer the most significant benefits in terms of cost savings and development speed. By adopting these advanced testing methodologies, platform engineering teams can provide a superior developer experience and accelerate the delivery of high-quality software.
Arjun Iyer
July 23, 2025
