
Local Development with Sandboxes

Prerequisites
  • Signadot account (No account yet? Sign up here).
  • A Kubernetes cluster with the Signadot Operator installed.
    • Option 1: Set it up on your own cluster: This can be a local Kubernetes cluster spun up using minikube, k3s, etc. Once the Operator is installed, deploy the HotROD demo application:
      kubectl create ns hotrod
      kubectl -n hotrod apply -f https://raw.githubusercontent.com/signadot/hotrod/main/k8s/all-in-one/demo.yaml
    • Option 2: Use a Playground Cluster: If you don't have a Kubernetes cluster for the above steps, you can provision a Playground Cluster from the Dashboard. It comes with the Signadot Operator and the HotROD application pre-installed in the hotrod namespace.
  • Signadot CLI v0.5 or higher.
  • A working installation of Golang.

Overview

In this guide, you will learn how to use Sandboxes for local iterative development and testing. We will modify a microservice locally on our workstation and use the Signadot CLI to connect the workstation to a remote Kubernetes cluster, so that we can test the local change against the rest of the services in the cluster and get fast feedback. Let's get started!

Understanding the demo application

We'll be using the HotROD demo application that consists of 4 services: frontend, route, customer, and driver, as well as some stateful components. These components make up a simple application for ride-sharing where the end user can request rides to one of 4 locations and have a nearby driver assigned along with an ETA.

These four microservices running on the remote cluster will serve as our "baseline" - the stable, pre-production version of the application. Typically this is an environment that is updated continuously by a CI/CD process.
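To confirm what the baseline looks like in your cluster, you can list the workloads in the hotrod namespace (assuming the demo application was installed as described above):

kubectl get deployments -n hotrod
# Expect to see Deployments for frontend, route, customer, and driver (plus any stateful components from the manifest).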

Local Development with Sandboxes

Connect to Kubernetes Cluster

First, let's switch our kube-context to the remote Kubernetes cluster where the Signadot Operator is installed.

kubectl config get-contexts # Find available kube-contexts
kubectl config use-context <kube-context-name> # Target the cluster that we're going to use for this quickstart.

# Using kubectl, you can run the following commands to verify that all the prerequisites are set up correctly.
kubectl get pods -n signadot # Signadot Operator components run here.
kubectl get pods -n hotrod # demo app runs here.

Now, let's define the following values in the Signadot CLI config located at $HOME/.signadot/config.yaml:

org: <your-org-name> # Find it on https://app.signadot.com/settings/global 
api_key: <your-api-key> # Create API key from https://app.signadot.com/settings/apikeys

local:
  connections:
    - cluster: <cluster name> # Find it on the clusters page: https://app.signadot.com/settings/clusters
      kubeContext: <kube-context-name> # kube-context obtained above
      type: PortForward

A correctly configured file will look something like this:

org: my-company
api_key: TJvQdbEs2dVNotRealKeycVJukaMZQAeIYrOKRvQ

local:
  connections:
    - cluster: company-dev
      kubeContext: company-k8s-context-name
      type: PortForward

Read more about CLI configuration here.

You are now ready to use the CLI to connect to the Kubernetes cluster, as well as start testing local changes using Sandboxes.

$ signadot local connect
signadot local connect needs root privileges for:
- updating /etc/hosts with cluster service names
- configuring networking to direct local traffic to the cluster
Password:

signadot local connect has been started ✓
you can check its status with: signadot local status

Let's check its status:

$ signadot local status
* runtime config: cluster my-cluster, running with root-daemon
✓ Local connection healthy!
* port-forward listening at ":64343"
* localnet has been configured
* 13 hosts accessible via /etc/hosts
* Connected Sandboxes:
- No active sandbox

This establishes a bidirectional connection between your workstation and the cluster. Running cat /etc/hosts will show the hosts added for the HotROD services, among others.

$ cat /etc/hosts | grep -i hotrod
<SERVICE IP> customer.hotrod.svc # DO NOT EDIT -- added by signadot
<SERVICE IP> driver.hotrod.svc # DO NOT EDIT -- added by signadot
<SERVICE IP> frontend.hotrod.svc # DO NOT EDIT -- added by signadot
<SERVICE IP> route.hotrod.svc # DO NOT EDIT -- added by signadot
...

Here are some endpoints for the HotROD services running on the cluster that you can now access from your browser:

http://frontend.hotrod.svc:8080
http://route.hotrod.svc:8083/route?pickup=123&dropoff=456
http://customer.hotrod.svc:8081/customers
http://customer.hotrod.svc:8081/customer?customer=392
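
For instance, you can also hit one of these endpoints from a terminal on your workstation (response body omitted here):

curl "http://customer.hotrod.svc:8081/customers"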

Open the HotROD frontend UI (at http://frontend.hotrod.svc:8080) and request a few rides. Clicking one of the four locations orders a ride to that location and displays an entry below along with the ETA.

You'll notice that the ride requests return ETAs of hundreds of minutes, which doesn't look quite right! Let's check the route service by hitting its address directly:

http://route.hotrod.svc:8083/route?pickup=123&dropoff=456

{
  "Pickup": "123",
  "Dropoff": "456",
  "ETA": 7200000000000
}

Here, it returned 7200000000000 nanoseconds, which is 120 minutes. Let's investigate and make a change to fix this behavior.
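
As an aside, the ETA field is a Go time.Duration, which marshals to JSON as an integer count of nanoseconds; that is why the raw number looks so large. A minimal standalone sketch (not part of HotROD) of the arithmetic:

package main

import (
    "fmt"
    "time"
)

func main() {
    eta := time.Duration(2) * time.Hour
    fmt.Println(int64(eta))             // 7200000000000 nanoseconds
    fmt.Println(eta.Minutes())          // 120
    fmt.Println(int64(2 * time.Minute)) // 120000000000 -- what the corrected code returns for the same input
}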

Modify route service locally

Clone the HotROD repository so that you can run these microservices locally. In the route microservice source under /services/route/server.go, we find the root cause: a bug in the computeRoute function that returns the ETA in hours instead of minutes.

func computeRoute(ctx context.Context, pickup, dropoff string) *Route {
    ...
    return &Route{
        Pickup:  pickup,
        Dropoff: dropoff,
        ETA:     time.Duration(eta) * time.Minute, // updated from time.Hour
    }
}

Now let's run just the Route Service locally using the command below:

go run cmd/hotrod/main.go route

Then check the Route Service's behavior locally to verify the fix. For example, you may see a response like the following:

http://localhost:8083/route?pickup=123&dropoff=456
{
  "Pickup": "123",
  "Dropoff": "456",
  "ETA": 120000000000
}

This time, the ETA has a value of 120000000000 nanoseconds, which equals 2 minutes. That seems to have fixed it. We will leave this microservice running and set up a sandbox to test this modified version, running on our workstation, together with the dependencies running within Kubernetes.

info

If the Route microservice made any calls to other microservices, these calls would be automatically sent to the cluster because we ran signadot local connect above to establish connectivity between the workstation and the cluster.

Create Sandbox to test with Kubernetes dependencies

We will be creating a sandbox to test our local change in the context of dependencies running inside the Kubernetes cluster. For this, you will need to use the following sandbox specification:

name: local-route-sandbox
spec:
  cluster: "@{cluster}"
  description: "Sandbox with Local Workload running Route Service"
  local:
    - name: "local-route"
      from:
        kind: Deployment
        namespace: hotrod
        name: route
      mappings:
        - port: 8083
          toLocal: "localhost:8083"
  defaultRouteGroup:
    endpoints:
      - name: frontend-endpoint
        target: http://frontend.hotrod.svc:8080

Looking closely, there are 2 main sections to pay attention to:

  1. local describes the local workload that we'll be running and its relationship to the overall application. In this case, we're mapping port 8083 from the route Deployment in the hotrod namespace to localhost:8083 which is where our local version of the route service is running.
  2. endpoints defines an optional preview endpoint which points to the frontend. This preview endpoint is implicitly associated with this sandbox and can be used for collaboration as we'll see below.

Now let's apply the sandbox using the Signadot CLI. Save the specification above as local-route-sandbox.yaml, then run the command below, supplying the cluster name from the Clusters page.

$ signadot sandbox apply -f local-route-sandbox.yaml --set cluster=<cluster name>
Created sandbox "local-route-sandbox" (routing key: 6xrvzgjnt8zll) in cluster "my-cluster".

Waiting (up to --wait-timeout=3m0s) for sandbox to be ready...
✓ Sandbox status: Ready: All desired workloads are available.

Dashboard page: https://app.signadot.com/sandbox/id/6xrvzgjnt8zll

SANDBOX ENDPOINT    TYPE   URL
frontend-endpoint   host   https://frontend-endpoint--local-route-sandbox.preview.signadot.com

The sandbox "local-route-sandbox" was applied and is ready.
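
You can also confirm that the sandbox exists from the CLI (output omitted here):

signadot sandbox list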

Let's check the local connection status once again.

$ signadot local status
* runtime config: cluster my-cluster, running with root-daemon
✓ Local connection healthy!
* port-forward listening at ":64343"
* localnet has been configured
* 13 hosts accessible via /etc/hosts
* Connected Sandboxes:
- local-route-sandbox
* Routing Key: 6xrvzgjnt8zll
- local-route: routing from Deployment/route in namespace "hotrod"
- remote port 8083 -> localhost:8083
✓ connection ready

This tells us that requests carrying the routing key 6xrvzgjnt8zll that reach the route Deployment will be forwarded to port 8083 on your workstation, which is where the local version of the route microservice from the previous step is running.

Time to test the flow end to end. One way to do this is by setting the above routing key as a header on requests to the frontend. For example, using the ModHeader browser extension, set the header name to uberctx-sd-routing-key and the value to the routing key obtained from the sandbox we created above.
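
Equivalently, you can attach the header from a terminal. The sketch below assumes the HotROD frontend's /dispatch path for ordering a ride (the path and customer ID are illustrative); substitute the routing key from your own sandbox:

curl -H "uberctx-sd-routing-key: 6xrvzgjnt8zll" \
  "http://frontend.hotrod.svc:8080/dispatch?customer=392"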

If you want to share this test version within your organization, you can optionally use the preview endpoint that was generated for the sandbox. The hosted preview URL, https://frontend-endpoint--local-route-sandbox.preview.signadot.com, can be accessed by any authenticated user in your organization; it automatically sets the uberctx-sd-routing-key request header, so anyone else can access and exercise the modified version of the application.
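
For programmatic (non-browser) access, preview URLs can also be authenticated with an API key header; the header name below is an assumption based on Signadot's API-key authentication, so check the Preview URL documentation for your setup:

curl -H "signadot-api-key: <your-api-key>" \
  "https://frontend-endpoint--local-route-sandbox.preview.signadot.com"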

There! We have fixed it to report the ETA correctly, and we tested it in a high-fidelity Kubernetes environment without creating a branch, PR, or even building a single Docker image! To learn more about how sandboxes work under the hood, check out header propagation and sandbox resources.

Video Walkthrough

To see this quickstart in action, check out the following video.