Deploying a MariaDB Resource Plugin

Prerequisites
  • Your Kubernetes cluster is connected to Signadot. You can set this up using the Signadot Quickstart Guide.
  • The HotRod application is running in your Kubernetes cluster. Deployment instructions are available in the HotRod repository.

Introduction

Testing database schema changes in a shared environment can be tough. When multiple teams rely on the same database, modifying schemas might disrupt others. This challenge becomes critical when adding new columns or altering existing tables.

With Signadot Sandboxes and Resource Plugins, you can overcome this problem: Signadot lets you spin up temporary, isolated databases dedicated to your schema changes, so you can test modifications without touching the shared database or impacting other teams.

In this tutorial, we'll show you how to set up database isolation using Signadot. We'll use the HotRod demo application, focusing on the location service, which interacts with a MariaDB database. You can clone the repo to follow along.

You'll learn how to add a new description column to the locations table and test it within an isolated database instance.

By the end, you'll be able to:

  • Create a resource plugin to provision a temporary MariaDB database.
  • Register and manage the plugin using Signadot.
  • Implement and test schema changes in isolation.
  • Clean up resources after testing is complete.

Let's get started and see how Signadot can make testing database schema changes easier in a collaborative environment.

Roles and Responsibilities

This tutorial covers two main areas of responsibility:

  1. Setting up the resource plugin — configuring the infrastructure that provisions temporary MariaDB instances. This involves creating the plugin, setting up Kubernetes RBAC, and registering the plugin with Signadot.

  2. Testing schema changes — introducing the schema change, updating the application code, creating a sandbox with the plugin, and verifying the changes in isolation.

Depending on your team structure, these tasks may be handled by different people (for example, a platform engineer and an application developer) or by the same person. The tutorial walks through both areas end to end.

Setting Up the MariaDB Resource Plugin

To enable safe testing of schema changes, you need to set up a resource plugin that provisions temporary MariaDB instances. This plugin allows sandboxes to create and use their own databases without affecting the shared environment.

Creating the Resource Plugin

First, create a resource plugin that provisions a temporary MariaDB server for use within a sandbox. The full plugin source, including the plugin configuration, provisioning script, deprovisioning script, and Kubernetes RBAC manifests, is available in the Signadot plugins repository.

Clone the repository to get started:

git clone https://github.com/signadot/plugins.git
cd plugins/mariadb

The plugin consists of several components:

  • Plugin Configuration (plugin.yaml): Defines the plugin's behavior, inputs, outputs, and the scripts to run during provisioning and deprovisioning. The key elements are:

    • name: The name of the plugin (mariadb).
    • runner: Specifies the Docker image (dtzar/helm-kubectl) and Kubernetes namespace (signadot) for running the plugin scripts. It also references the sd-mariadb service account.
    • create: Defines the provisioning step. It takes an input from the sandbox (dbname) and provides outputs (host, port, root-password) that the sandbox's forked workloads will consume as environment variables.
    • delete: Specifies the deprovisioning script to run when the sandbox is deleted.
  • Provisioning Script (plugin/provision.sh): Contains the logic to deploy a MariaDB instance using Helm. It adds the Bitnami Helm repository, installs a MariaDB instance customized for the sandbox, retrieves the root password from the generated Kubernetes Secret, and writes the database host, port, and password to temporary files for Signadot to read as outputs.

  • Deprovisioning Script (plugin/deprovision.sh): Handles cleanup of the MariaDB instance when the sandbox is deleted. It uninstalls the MariaDB Helm release associated with the sandbox, removing all related Kubernetes resources.

You can review each of these files in detail in the repository.

Configuring Kubernetes Permissions

For the plugin to work correctly, it needs appropriate permissions in the Kubernetes cluster. The repository includes a manifest defining a ServiceAccount, Role, and RoleBinding that grant the necessary access.

Apply the configuration to your cluster:

kubectl -n signadot apply -f k8s/mariadb-init.yaml

This sets up the sd-mariadb service account with permissions to read pods and create/delete ConfigMaps, Secrets, Services, and StatefulSets within the signadot namespace. You can review the full RBAC manifest in the repository.
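For orientation, the manifest follows the standard Kubernetes RBAC pattern. The sketch below is abridged and illustrative only — the exact resource names, API groups, and verbs are assumptions here, and the authoritative version is k8s/mariadb-init.yaml in the plugins repository:

```yaml
# Abridged, illustrative sketch -- see k8s/mariadb-init.yaml for the real manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sd-mariadb
  namespace: signadot
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sd-mariadb
  namespace: signadot
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "services"]
    verbs: ["create", "get", "delete"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sd-mariadb
  namespace: signadot
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sd-mariadb
subjects:
  - kind: ServiceAccount
    name: sd-mariadb
    namespace: signadot
```

The RoleBinding is what ties the permissions to the sd-mariadb service account that the plugin's runner references.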

Registering the Plugin with Signadot

Now that the plugin is configured, register it with Signadot so that sandboxes can use it.

Use the Signadot CLI to apply the plugin configuration:

signadot resourceplugin apply -f plugin.yaml

This command registers the mariadb plugin with the Signadot control plane. The plugin is now available for use in sandbox specifications.

Testing Schema Changes with an Isolated Database

Now that the MariaDB resource plugin is set up, you can safely test your schema changes. You'll add a new description field to the locations table, update the application code, create a sandbox using the plugin, and verify your changes—all without affecting the shared database.

Introducing the Schema Change

Let's assume you need to add a description column to the locations table in the HotRod application. This field will store extra information about each location. Testing this change on the shared database could disrupt other teams. Therefore, you'll use an isolated database for testing.

You can clone the HotRod repository to follow along.

  • Making Code Changes

    First, update the Location struct in the interface.go file of the location service to include the new Description field:

    // hotrod/services/location/interface.go

    type Location struct {
        ID          int64  `json:"id"`
        Name        string `json:"name"`
        Coordinates string `json:"coordinates"`
        Description string `json:"description"` // New field
    }

    Next, modify the database.go file to handle the description field throughout the database interactions.

  • Updating the Table Schema

    In database.go, update the tableSchema constant to add the description column:

    // hotrod/services/location/database.go

    const tableSchema = `
    CREATE TABLE IF NOT EXISTS locations
    (
        id          bigint unsigned NOT NULL AUTO_INCREMENT,
        name        varchar(255)    NOT NULL,
        coordinates varchar(255)    NOT NULL,
        description varchar(255), -- New column

        PRIMARY KEY (id),
        UNIQUE KEY name (name)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;
    `
  • Updating Seed Data

    Update the seed variable to include descriptions for each location. Here is one entry as an example:

    // hotrod/services/location/database.go

    {
        ID:          1,
        Name:        "My Home",
        Coordinates: "231,773",
        Description: "Cozy two-bedroom apartment", // New field
    },

    Apply the same pattern to all other entries in the seed slice (Rachel's Floral Designs, Amazing Coffee Roasters, Trom Chocolatier, and Japanese Desserts).

  • Modifying Insert and Update Queries

    Update the Create method to include description in the INSERT statement:

    query := "INSERT INTO locations SET name = ?, coordinates = ?, description = ?"
    res, err := d.db.Exec(query, location.Name, location.Coordinates, location.Description)

    Update the Update method similarly:

    query := "UPDATE locations SET name = ?, coordinates = ?, description = ? WHERE id = ?"
    res, err := d.db.Exec(query, location.Name, location.Coordinates, location.Description, location.ID)

    The rest of each method's logic (retry handling, error checking) stays the same. You only need to add location.Description to the Exec call arguments.

  • Adjusting Select Queries

    Update the List and Get methods to include description in the SELECT statement and scan it into the struct.

    In the List method, change the query and the Scan call:

    query := "SELECT id, name, coordinates, description FROM locations"
    // ...
    if err := rows.Scan(&c.ID, &c.Name, &c.Coordinates, &c.Description); err != nil {

    In the Get method, apply the same change:

    query := "SELECT id, name, coordinates, description FROM locations WHERE id = ?"
    // ...
    if err := row.Scan(&c.ID, &c.Name, &c.Coordinates, &c.Description); err != nil {

    The rest of the method logic (tracing, retry handling, delay simulation) remains unchanged. You only need to add description to the SQL query and &c.Description to the Scan call.

Creating the Sandbox Specification

Now, create a sandbox specification that uses the MariaDB plugin to provision an isolated database for testing your schema changes. Use the following sandbox.yaml file:

# sandbox.yaml

name: hotrod-schema-change
spec:
  cluster: "@{cluster}"
  resources:
    - name: mydb
      plugin: mariadb
      params:
        dbname: hotrod

  defaultRouteGroup:
    endpoints:
      - name: location
        target: "http://location.hotrod.svc:8081"

  forks:
    - forkOf:
        namespace: hotrod
        name: location
        kind: Deployment
      customizations:
        images:
          - image: <your-registry>/hotrod:test
            container: hotrod
        env:
          - name: DB_HOST
            valueFrom:
              resource:
                name: mydb
                outputKey: provision.host
          - name: DB_PORT
            valueFrom:
              resource:
                name: mydb
                outputKey: provision.port
          - name: DB_ROOT_PASSWORD
            valueFrom:
              resource:
                name: mydb
                outputKey: provision.root-password
          - name: DB_DATABASE
            value: "hotrod"

Applying the Sandbox

Before applying the sandbox, build and push your updated location service image to a registry accessible by your cluster:

docker build -t <your-registry>/hotrod:test .
docker push <your-registry>/hotrod:test

Update the image field in sandbox.yaml to match your registry. Then create the sandbox with the Signadot CLI, replacing <your-cluster-name> with the name of your Kubernetes cluster:

signadot sandbox apply -f sandbox.yaml --set cluster=<your-cluster-name>

Testing the End-to-End Flow

With your sandbox created, it's time to test your schema changes in the HotRod application. Since the location service is not exposed externally, you'll use signadot local connect to route traffic to it from your local machine. You must also include the sandbox routing key in your request headers.

Step 1: Start Signadot Local Connect

Start the local connection to your cluster:

signadot local connect

Step 2: Obtain the Sandbox Routing Key

Retrieve the sandbox routing key, which routes requests to your sandboxed services. You can find this key on the Signadot Dashboard at app.signadot.com, or retrieve it via the CLI:

signadot sandbox get hotrod-schema-change -o json | jq -r .routingKey

Step 3: Send a Request to the location Service

Use curl to send a POST request to the location service within the sandbox. Use the service's cluster-local DNS name and include the routing key in the request headers:

curl -X POST http://location.hotrod.svc:8081/locations \
  -H "Content-Type: application/json" \
  -H "Baggage: sd-routing-key=<routing-key>" \
  -d '{
    "name": "Central Park",
    "coordinates": "400,500",
    "description": "A large public park in New York City"
  }'

Replace <routing-key> with the actual routing key value. You should get output similar to the following:

{
  "id": 732,
  "name": "Central Park",
  "coordinates": "400,500",
  "description": "A large public park in New York City"
}

Step 4: Retrieve All Locations

Send a GET request to list all locations:

curl http://location.hotrod.svc:8081/locations \
-H "Baggage: sd-routing-key=<routing-key>"

Your output should look similar to the following:

[
  {
    "id": 1,
    "name": "My Home",
    "coordinates": "231,773",
    "description": "Cozy two-bedroom apartment"
  },
  {
    "id": 123,
    "name": "Rachel's Floral Designs",
    "coordinates": "115,277",
    "description": "Flower shop specializing in arrangements"
  },
  {
    "id": 392,
    "name": "Trom Chocolatier",
    "coordinates": "577,322",
    "description": "Artisan chocolate and confectionery"
  },
  {
    "id": 567,
    "name": "Amazing Coffee Roasters",
    "coordinates": "211,653",
    "description": "Local coffee roaster with a variety of blends"
  },
  {
    "id": 731,
    "name": "Japanese Desserts",
    "coordinates": "728,326",
    "description": "Traditional Japanese sweets and pastries"
  }
]

Verifying Database Schema Changes

After testing the end-to-end flow, you may want to confirm that the database schema has been updated correctly. You can connect to the temporary MariaDB instance provisioned by the plugin.

Step 1: Retrieve the Database Host and Password

First, retrieve the resource outputs from your sandbox. Use the Signadot CLI to get the sandbox details:

signadot sandbox get hotrod-schema-change -o yaml

Look for the resources section in the output—it contains the provisioned outputs including provision.host, provision.port, and provision.root-password.

You can also extract specific values directly using jq:

# Get the database host
signadot sandbox get hotrod-schema-change -o json | jq -r '.resources[] | select(.name=="mydb") | .outputs["provision.host"]'

# Get the root password
signadot sandbox get hotrod-schema-change -o json | jq -r '.resources[] | select(.name=="mydb") | .outputs["provision.root-password"]'

The host will look something like signadot-mydb-<sandbox-id>-mariadb.signadot.svc.

Step 2: Connect to the Database

With signadot local connect still running, connect using the MySQL client:

mysql -h <db-host> -P 3306 -u root -p

Replace <db-host> with the provision.host value you retrieved above. Enter the provision.root-password value when prompted.

Once connected, select the hotrod database and describe the locations table:

USE hotrod;
DESCRIBE locations;

You should see that the locations table includes the new description column:

+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| name        | varchar(255) | NO   | UNI | NULL    |                |
| coordinates | varchar(255) | NO   |     | NULL    |                |
| description | varchar(255) | YES  |     | NULL    |                |
+-------------+--------------+------+-----+---------+----------------+

From the above, you can see that:

  • The description column has been successfully added to the locations table schema.
  • The column allows NULL values (YES in the Null column), which aligns with your schema definition.
  • The table now supports storing additional information for each location through the description field.

Deleting the Sandbox

After testing, clean up the resources by deleting the sandbox:

signadot sandbox delete hotrod-schema-change

This command removes the sandbox and triggers the deprovision.sh script to delete the temporary database.

Conclusion

In this tutorial, you deployed a MariaDB resource plugin to provision temporary isolated databases, and used it to safely test a schema change in the HotRod location service. The same pattern applies to any external resource your services depend on—Signadot's plugins repository includes examples for PostgreSQL with Vault, Terraform, and Amazon SQS, and you can build your own to fit your needs.