Testing Microservices with RabbitMQ

Prerequisites
  • Signadot CLI installed - Follow the installation guide
  • Docker Desktop running locally
  • kubectl configured to access your Kubernetes cluster
  • Python 3.8+ and pip
  • RabbitMQ knowledge - Basic understanding of exchanges, queues, and routing

Introduction

Asynchronous microservices are hard to test: if two versions of a consumer read from the same queue, they compete for messages. Spinning up a separate broker per branch is slow and pricey.

Signadot Sandboxes solve this with request-level isolation. Keep a single RabbitMQ, but route messages only to the intended version (sandbox) of your consumer using a sandbox routing key. Each sandboxed consumer has its own queue binding; baseline traffic remains untouched while you test safely in parallel.

What you will accomplish:

  • Set up a RabbitMQ-based microservices application
  • Use routing keys + selective consumption to isolate sandbox traffic
  • Deploy services to Kubernetes
  • Create Signadot sandboxes for isolated testing
  • Test message routing between baseline and sandbox environments

Time required: 45-60 minutes

Architecture Overview

  • A topic exchange fans out each message to both baseline and sandbox queues.
  • OTel propagates the sandbox routing key automatically as W3C baggage.
  • Selective consumption in consumers:
    • Baseline consumer processes messages with no routing key, and skips messages whose key belongs to an active sandbox of this same service. It still processes messages carrying keys for other services’ sandboxes or inactive/unknown keys.
    • Sandbox consumer processes only messages with its own sandbox key.

This gives isolation without multiple brokers: everyone receives the message, but only the right consumer acts on it.
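The selective-consumption rules above boil down to one small decision per message. Here is a minimal sketch in Python; the header name `sd-routing-key` follows the demo's convention, while the function and parameter names are hypothetical:

```python
def should_process(headers, my_sandbox_key=None, active_sandbox_keys=frozenset()):
    """Decide whether this consumer should act on a message.

    headers: AMQP message headers (dict); the sandbox routing key, if
    present, is carried under "sd-routing-key".
    my_sandbox_key: set for a sandboxed consumer, None for baseline.
    active_sandbox_keys: routing keys of currently active sandboxes of
    *this* service (the baseline skips those).
    """
    key = (headers or {}).get("sd-routing-key")
    if my_sandbox_key is not None:
        # Sandbox consumer: act only on messages carrying its own key.
        return key == my_sandbox_key
    # Baseline consumer: act on messages with no key, and on keys that
    # do not belong to an active sandbox of this same service.
    return key is None or key not in active_sandbox_keys
```

Note that an inactive or unknown key falls through to the baseline, which is what guarantees no message is orphaned when a sandbox is torn down.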

The idea in one picture

Baseline and sandbox consumers each have their own queue bound to the same exchange. Routing keys in headers determine who should act.
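The per-consumer queue setup can be sketched as follows. This assumes a `pika`-style channel object and a `"#"` wildcard binding so every queue sees every message; the exchange name `orders` and the queue-naming scheme are illustrative, not the demo's exact names:

```python
def queue_name(service, sandbox_key=None):
    """orders.<service> for baseline, orders.<service>.<key> for a sandbox."""
    base = f"orders.{service}"
    return f"{base}.{sandbox_key}" if sandbox_key else base

def bind_consumer_queue(channel, exchange, service, sandbox_key=None):
    """Declare this consumer's own queue and bind it to the shared topic
    exchange. Baseline and each sandbox get separate queues, so the
    exchange fans every message out to all of them.

    channel: an open AMQP channel (e.g. from pika.BlockingConnection).
    """
    queue = queue_name(service, sandbox_key)
    channel.exchange_declare(exchange=exchange, exchange_type="topic", durable=True)
    channel.queue_declare(queue=queue, durable=True)
    # "#" matches every routing key: each queue receives every message,
    # and selective consumption decides who actually acts on it.
    channel.queue_bind(queue=queue, exchange=exchange, routing_key="#")
    return queue
```

Because each sandbox declares its own queue on startup and deletes it on teardown, no baseline configuration changes are needed to add or remove a sandbox.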

Project structure

rabbitmq-signadot-demo/
├── publisher/
│   ├── app.py
│   ├── requirements.txt
│   └── Dockerfile
├── consumer/
│   ├── app.py
│   ├── requirements.txt
│   └── Dockerfile
├── k8s/
│   ├── namespace.yaml
│   ├── rabbitmq.yaml
│   ├── redis.yaml
│   ├── publisher.yaml
│   └── consumer.yaml
└── signadot/
    ├── sandboxes/
    │   ├── publisher.yaml
    │   └── consumer.yaml
    └── routegroups/
        └── demo.yaml

You can scaffold this from your own repo or adapt from Signadot examples.

Step 1 — Deploy the baseline stack

The full application code, Dockerfiles, and Kubernetes manifests live in the Signadot examples repository. The repo README covers building images, deploying RabbitMQ, Redis, and the baseline publisher/consumer services step by step.

Clone the repo and follow the README to get the baseline stack running:

git clone https://github.com/signadot/examples.git
cd examples/rabbitmq-signadot-demo

Once deployed, verify all pods are healthy:

Verify baseline pods
$ kubectl get pods -n rabbitmq-demo
NAME                         READY   STATUS    RESTARTS   AGE
rabbitmq-0                   1/1     Running   0          2m
redis-0                      1/1     Running   0          2m
publisher-6b8f9d4c5f-x2k7p   1/1     Running   0          90s
consumer-7c9a8b3d2e-m4n6q    1/1     Running   0          90s

Connect locally

Use Signadot Local Connect to reach cluster services from your machine:

export CLUSTER_NAME=<your_signadot_cluster_name>
signadot local connect --cluster=$CLUSTER_NAME

Send a baseline message to confirm the stack works:

curl -X POST http://localhost:8080/publish \
-H "Content-Type: application/json" \
-d '{"order_id": "001", "amount": 100, "message": "Baseline order"}'

Check the consumer logs:

Baseline consumer output
$ kubectl logs -l app=consumer -n rabbitmq-demo --tail=50 | grep PROCESSING
Defaulted container "consumer" out of: consumer, sd-sidecar, sd-init-networking (init)
2025-08-22 15:13:28,935 - INFO - [Baseline] --> PROCESSING ORDER 001 amount=100

Step 2 — Create sandboxes and a RouteGroup

Spin up forked workloads for publisher-v2 and consumer-v2:

export CLUSTER_NAME=<your_registered_cluster>

signadot sandbox apply -f signadot/sandboxes/publisher.yaml --set cluster=$CLUSTER_NAME
signadot sandbox apply -f signadot/sandboxes/consumer.yaml --set cluster=$CLUSTER_NAME
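Each sandbox file forks a baseline Deployment and swaps in the v2 image. A minimal sketch of what `signadot/sandboxes/consumer.yaml` might look like; the image reference is a placeholder for your own registry:

```yaml
name: consumer-v2
spec:
  cluster: "@{cluster}"
  description: Fork of the baseline consumer running the v2 image
  forks:
    - forkOf:
        kind: Deployment
        namespace: rabbitmq-demo
        name: consumer
      customizations:
        images:
          - image: your-registry/consumer:v2
```

The `@{cluster}` template variable is filled in by the `--set cluster=$CLUSTER_NAME` flag on `signadot sandbox apply`.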

Create a RouteGroup that ties them together under one routing key / URL (optional):

signadot routegroup apply -f signadot/routegroups/demo.yaml --set cluster=$CLUSTER_NAME
signadot routegroup list

Step 3 — Prove isolation

Baseline message (no header → baseline handles it):

Baseline message published
$ curl -X POST http://publisher.rabbitmq-demo.svc:8080/publish \
    -H "Content-Type: application/json" \
    -d '{"order_id": "002", "amount": 200, "message": "Baseline order"}'
{"message":{"amount": 200, "message": "Baseline order", "order_id": "002", "routing_key": "baseline"},"routing_key":"baseline","status":"published"}

Sandbox message (explicit header → sandbox handles it):

Sandbox message published
$ ROUTING_KEY=$(signadot sandbox get consumer-v2 -o json | jq -r '.routingKey')
$ echo "Sandbox routing key: $ROUTING_KEY"
Sandbox routing key: th44pmjmhc7sq
$ curl -X POST http://publisher.rabbitmq-demo.svc:8080/publish \
    -H "Content-Type: application/json" \
    -H "baggage: sd-routing-key=$ROUTING_KEY" \
    -d '{"order_id":"003","amount":300,"message":"Sandbox order"}'
{"message": {"amount": 300, "message": "Sandbox order", "order_id": "003", "routing_key": "th44pmjmhc7sq"}, "routing_key": "th44pmjmhc7sq","status":"published"}
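The same two requests can be scripted, which is handy in automated tests. A minimal sketch using only the standard library; the base URL and payload shape mirror the curl calls above, while the helper names are hypothetical:

```python
import json
import urllib.request

def sandbox_headers(routing_key=None):
    """Headers for the /publish endpoint. When routing_key is set, the
    W3C baggage entry steers the message to that sandbox's consumer."""
    headers = {"Content-Type": "application/json"}
    if routing_key:
        headers["baggage"] = f"sd-routing-key={routing_key}"
    return headers

def publish_order(base_url, order, routing_key=None):
    """POST an order to the publisher service and return its JSON reply.
    Requires Local Connect (or in-cluster access) to resolve the URL."""
    req = urllib.request.Request(
        f"{base_url}/publish",
        data=json.dumps(order).encode(),
        headers=sandbox_headers(routing_key),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage:
# publish_order("http://publisher.rabbitmq-demo.svc:8080",
#               {"order_id": "004", "amount": 400, "message": "Sandbox order"},
#               routing_key="th44pmjmhc7sq")
```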

Watch logs side-by-side:

  • Baseline:
Baseline consumer logs
$ kubectl logs -f -n rabbitmq-demo -l app=consumer | grep PROCESSING
Defaulted container "consumer" out of: consumer, sd-sidecar, sd-init-networking (init)
2025-08-22 15:10:08,492 - INFO - [Baseline] --> PROCESSING ORDER 002 amount=200
2025-08-22 15:10:40,469 - INFO - [Baseline] --> PROCESSING ORDER 002 amount=200
2025-08-22 15:10:43,445 - INFO - [Baseline] --> PROCESSING ORDER 002 amount=200
2025-08-22 15:13:28,935 - INFO - [Baseline] --> PROCESSING ORDER 001 amount=100
  • Sandbox (replace name with your sandbox consumer deployment)
Sandbox consumer logs
$ kubectl logs -f -n rabbitmq-demo deploy/consumer-v2-dep-consumer-77efbe45 | grep PROCESSING
Defaulted container "consumer" out of: consumer, sd-sidecar, sd-init-networking (init)
2025-08-22 15:08:37,367 - INFO - [consumer-v2] --> PROCESSING ORDER 003 amount=300
2025-08-22 15:10:12,440 - INFO - [consumer-v2] --> PROCESSING ORDER 003 amount=300
2025-08-22 15:10:32,658 - INFO - [consumer-v2] --> PROCESSING ORDER 003 amount=300

Expected outcome:

  • Baseline processes only messages that carry no sandbox key, or whose key belongs to a sandbox of a different service
  • Sandbox processes only messages with its own key

What’s Happening Under the Hood

  • Context propagation: When you hit a sandbox URL or pass sd-routing-key, the publisher attaches that key to message headers.
  • Fan-out delivery: RabbitMQ’s topic exchange routes messages to both baseline and sandbox queues (via their bindings).
  • Selective consumption:
    • Baseline consumer accepts messages without a sandbox key; it skips messages with a key that belongs to an active sandbox.
    • Sandbox consumer processes only messages whose sd-routing-key equals its sandbox ID.
  • No message loss: Each sandbox has its own queue bound to the exchange. Messages always have a target consumer; baseline only “avoids” messages while a matching sandbox is active.
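On the publisher side, context propagation amounts to parsing the incoming `baggage` header and copying the key into the outgoing AMQP message headers. A minimal sketch of that hand-off (function names are illustrative, not the demo's exact code):

```python
def routing_key_from_baggage(baggage):
    """Extract the sd-routing-key entry from a W3C baggage header value,
    e.g. "other=1,sd-routing-key=th44pmjmhc7sq" -> "th44pmjmhc7sq"."""
    for entry in (baggage or "").split(","):
        name, _, value = entry.strip().partition("=")
        if name == "sd-routing-key":
            return value or None
    return None

def amqp_headers_for(http_headers):
    """Carry the sandbox key from the incoming HTTP request into the
    outgoing AMQP message headers, so downstream consumers can apply
    selective consumption."""
    key = routing_key_from_baggage(http_headers.get("baggage"))
    return {"sd-routing-key": key} if key else {}
```

In the real services this extraction is typically handled by OTel baggage propagation rather than hand-rolled parsing, but the effect is the same: the key travels with the message.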

Conclusion

You’ve built and deployed a minimal RabbitMQ publisher/consumer stack, forked services into Signadot sandboxes, and confirmed that sandbox routing keys isolate messages in a shared RabbitMQ. This approach scales beyond this demo:

  • Works with Kafka, Pub/Sub, or SQS (using analogous header/attribute + selective consumption patterns)
  • Supports multiple sandboxes concurrently (e.g., per PR)
  • Integrates naturally with CI/CD to spin up ephemeral test envs for each change

For deeper dives, see the Signadot documentation and the examples repository.

Happy testing!