Command-line Interface (CLI)
Installation
Installation and configuration instructions are available here.
Usage
Use the help command to see a list of all available "root" commands and global flags:
signadot help
Each command may have command-specific flags and nested sub-commands, which you can see by running:
signadot help <command>
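For example, to see the sub-commands and flags available for working with sandboxes:
signadot help sandbox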
Examples
The examples here work with v0.2+ of the Signadot CLI. If you're upgrading from v0.1, please note that the sandbox file format and the names of some commands have changed.
Clusters
You can use the cluster add command to begin the process of connecting a Kubernetes cluster to Signadot:
signadot cluster add --name my-cluster
The --name that you specify is only used within Signadot. It's the value you'll pass back in other commands to tell Signadot which Kubernetes cluster you want to work with.
The cluster add command will generate the first auth token for that cluster and provide an example kubectl command to install the cluster token as a Secret.
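For illustration only, that installation step looks roughly like the following; the secret name and namespace shown here are assumptions, so prefer the exact command printed by cluster add:
# hypothetical example; use the exact command printed by cluster add
kubectl -n signadot create secret generic cluster-agent --from-literal=token=<token generated by cluster add>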
You can use signadot cluster list to see the names of clusters already registered with Signadot.
You can also create a new auth token for an existing cluster with:
signadot cluster token create --cluster my-cluster
Sandboxes
To create a sandbox, first write a YAML or JSON file containing the name and spec for the sandbox. The available fields within spec are documented in the Sandbox spec reference.
For example:
name: my-sandbox
spec:
  cluster: my-cluster
  description: Testing sandboxes
  forks:
  - forkOf:
      kind: Deployment
      namespace: example
      name: my-app
    customizations:
      images:
      - image: example.com/my-app:dev-abcdef
      env:
      - name: EXTRA_ENV
        value: foo
  defaultRouteGroup: # CLI v0.3.7+ required (see sandbox specification for details)
    endpoints:
    - name: my-endpoint
      target: http://my-app.example.svc:8080
Then submit this sandbox by passing the filename to the sandbox apply command:
signadot sandbox apply -f my-sandbox.yaml
You can use signadot sandbox list to see all existing sandboxes, and signadot sandbox get to see details about a single sandbox.
# List all sandboxes
signadot sandbox list
# Get one sandbox by name
signadot sandbox get my-sandbox
Each of the above commands can also produce machine-readable output (JSON or YAML). For example:
# List all sandboxes in machine-readable format
signadot sandbox list -o json
# Get one sandbox in machine-readable format
signadot sandbox get my-sandbox -o yaml
You can delete a sandbox either by name, or by pointing at the same file that was used to create it:
# Delete sandbox by name
signadot sandbox delete my-sandbox
# Delete sandbox specified in a file
signadot sandbox delete -f my-sandbox.yaml
The sandbox spec also supports automatic deletion via a time to live (TTL), sketched below.
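As a rough illustration only (the field names and accepted values here are assumptions; the sandbox spec reference is authoritative), a TTL might be expressed like this:
name: my-sandbox
spec:
  cluster: my-cluster
  ttl:
    duration: 2h            # assumed format; see the sandbox spec reference
    offsetFrom: createdAt   # assumed field; see the sandbox spec reference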
Local Workloads
signadot local {connect,disconnect}
The examples here work with v0.5.0+ of the Signadot CLI.
To run Local Workloads within sandboxes, one first connects to the cluster:
% signadot local connect
signadot local connect needs root privileges for:
- updating /etc/hosts with cluster service names
- configuring networking to direct local traffic to the cluster
Password:
signadot local connect has been started ✓
* runtime config: cluster demo, running with root-daemon
✓ Local connection healthy!
* operator version 0.16.0
* port-forward listening at ":59933"
* localnet has been configured
* 45 hosts accessible via /etc/hosts
* sandboxes watcher is running
* Connected Sandboxes:
- No active sandbox
If more than one cluster connection is configured, you will be asked to specify the Signadot cluster name of the cluster to which you want to connect:
% signadot local connect
Error: must specify --cluster=... (one of [air signadot-staging])
% signadot local connect --cluster=air
signadot local connect needs root privileges for:
- updating /etc/hosts with cluster service names
- configuring networking to direct local traffic to the cluster
Password:
signadot local connect has been started ✓
...
Once connected, you can run workloads locally that receive requests from the cluster; if a workload communicates with other services in the cluster, all of its outbound requests are directed to the cluster as well. This is accomplished by running signadot sandbox apply using sandbox specs with a local section, as sketched below.
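A minimal sketch of such a spec follows; the field names under local are best-effort assumptions, so consult the Sandbox spec reference for the authoritative shape:
name: my-local-sandbox
spec:
  cluster: my-cluster
  local:
  - name: local-my-app
    from:                  # the in-cluster workload to substitute (assumed fields)
      kind: Deployment
      namespace: example
      name: my-app
    mappings:              # assumed: route the workload's port to a local address
    - port: 8080
      toLocal: "localhost:8080"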
Starting with Signadot CLI v0.6.0, all applied sandboxes that have a local section are registered along with a machine ID, a hashed unique identifier of the workstation from which the CLI is run. Once connected, the CLI continuously runs a sandbox discovery service (the sandboxes watcher), automatically establishing all the required tunnels for the configured local workloads. Note that this feature requires Signadot Operator >= v0.14.1 to be functional; otherwise you will find the following message in the status:
% signadot local status
* runtime config: cluster test, running with root-daemon
✓ Local connection healthy!
* port-forward listening at ":36337"
* localnet has been configured
* 25 hosts accessible via /etc/hosts
* sandboxes watcher is not running ("this feature requires operator >= 0.14.1")
* Connected Sandboxes:
- No active sandbox
With older operator versions, you can still apply sandboxes with local references, but they won't be automatically re-established after disconnecting and connecting back.
To disconnect, run:
% signadot local disconnect
If you also want to remove all connected sandboxes when disconnecting, run:
% signadot local disconnect --clean-local-sandboxes
signadot local proxy
The examples here work with v0.7.0+ of the Signadot CLI.
signadot local proxy provides the ability to run arbitrary cluster services locally via a proxy, similar to Kubernetes port forwarding. To run it, one must specify one of:
--sandbox <sandbox-name>
--routegroup <route-group-name>
--cluster <cluster-name>
If a Sandbox or RouteGroup is specified, the proxy targets the associated cluster and injects the associated routing keys (unless already present in a given request). If a cluster is specified, no headers are injected.
With this in hand, signadot local proxy will proxy remote services to local servers, each specified as
--map <scheme>://<host>:<port>@<host>:<port>
On the right side of the @ is the local bind address; on the left is a URL which is resolved in the remote cluster. The scheme may be one of http, grpc, or tcp. However, no header injection is performed with tcp.
As an example, consider the case where one would like to run a test against a sandbox called feature-x by hitting the endpoint http://backend.staging.svc:8000. Suppose also that the test accepts an environment variable $BACKEND_SERVICE_ADDR.
export BACKEND_SERVICE_ADDR=localhost:8001
signadot local proxy --sandbox feature-x --map http://backend.staging.svc:8000@$BACKEND_SERVICE_ADDR &
pid=$!
# run the test (the collection file name here is just a placeholder)
newman run my-collection.json --env-var backend=$BACKEND_SERVICE_ADDR
kill $pid
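For a raw TCP service (where no header injection is performed), a mapping might look like the following; the in-cluster address used here is a hypothetical placeholder:
# proxy a hypothetical in-cluster database to a local port, without header injection
signadot local proxy --cluster my-cluster --map tcp://db.staging.svc:5432@localhost:5432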
RouteGroups
The examples here work with v0.3.7+ of the Signadot CLI.
To create a RouteGroup, first write a YAML or JSON file containing the name and spec. The available fields within spec are documented in the RouteGroup spec reference.
For example:
name: my-routegroup
spec:
  cluster: my-cluster
  description: "route group for testing multiple sandboxes together"
  match:
    any:
    - label:
        key: feature
        value: new-feature-x-*
  endpoints:
  - name: frontend-endpoint
    target: http://frontend.hotrod.svc:8080
Then submit this routegroup by passing the filename to the routegroup apply command:
signadot routegroup apply -f my-routegroup.yaml
You can use signadot routegroup list to see all existing routegroups, and signadot routegroup get to see details about a single routegroup.
# List all routegroups
signadot routegroup list
# Get one routegroup by name
signadot routegroup get my-routegroup
You can delete a routegroup either by name, or by pointing at the same file that was used to create it:
# Delete routegroup by name
signadot routegroup delete my-routegroup
# Delete routegroup specified in a file
signadot routegroup delete -f my-routegroup.yaml
ResourcePlugins
The examples here work with v0.4.0+ of the Signadot CLI.
To create a ResourcePlugin, first write a YAML or JSON file containing the name and spec. The available fields within spec are documented in the ResourcePlugin spec reference.
For example:
name: my-plugin
spec:
  runner:
    image: ubuntu
  create:
  - name: say-hello
    script: |
      #!/usr/bin/env bash
      echo hello
  delete:
  - name: say-goodbye
    script: |
      #!/usr/bin/env bash
      echo good-bye
Then submit this plugin by passing the filename to the resourceplugin apply command:
signadot resourceplugin apply -f my-resourceplugin.yaml
You can use signadot resourceplugin list to see all existing resourceplugins, and signadot resourceplugin get to see details about a single resourceplugin.
# List all resourceplugins
signadot resourceplugin list
# Get one resource plugin by name
signadot resourceplugin get my-plugin
You can delete a resourceplugin either by name, or by pointing at the same file that was used to create it:
# Delete resourceplugin by name
signadot resourceplugin delete my-plugin
# Delete resourceplugin specified in a file
signadot resourceplugin delete -f my-resourceplugin.yaml
Job Runner Groups
The examples here work with v0.8.0+ of the Signadot CLI.
To create a Job Runner Group, first write a YAML or JSON file containing the name and spec. The available fields within spec are documented in the JobRunnerGroup spec reference.
For example:
name: my-jrg
spec:
  cluster: your-cluster-name
  labels:
    my-custom-label: foo
    env: bar
  namespace: signadot
  jobTimeout: 1h
  image: ubuntu:latest
  podTemplate:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - image: ubuntu:latest
        name: main
        resources: {}
  scaling:
    manual:
      desiredPods: 1
Then submit this job runner group by passing the filename to the jrg apply or jobrunnergroup apply command:
signadot jobrunnergroup apply -f my-jobrunnergroup.yaml
You can use signadot jobrunnergroup list to see all existing jobrunnergroups, and signadot jobrunnergroup get to see details about a single jobrunnergroup.
# List all job runner groups
signadot jobrunnergroup list
# Get one job runner group by name
signadot jobrunnergroup get my-jrg
You can delete a jobrunnergroup either by name, or by pointing at the same file that was used to create it:
# Delete job runner group by name
signadot jobrunnergroup delete my-jrg
# Delete job runner group specified in a file
signadot jobrunnergroup delete -f my-jobrunnergroup.yaml
Jobs
The examples here work with v0.8.0+ of the Signadot CLI.
To submit a Job, first write a YAML or JSON file containing the spec. The available fields within spec are documented in the Job spec reference.
For example:
spec:
  namePrefix: my-job
  runnerGroup: my-jrg
  script: |
    #!/bin/bash
    x=1
    while [ $x -le 30 ]
    do
      echo "Welcome $x times (env TEST=$TEST)"
      x=$(( $x + 1 ))
      sleep 1
    done
    echo "This is an artifact" > /tmp/my-artifact.txt
    echo "We are done!"
  uploadArtifact:
  - path: /tmp/my-artifact.txt
    meta:
      format: text
      kind: demo
Notice that runnerGroup is set to the name of the JRG in the previous example. A Job runs in the context of its associated JRG.
Then submit this job by passing the filename to the job submit command:
signadot job submit -f my-job.yaml
You can use signadot job list to see all jobs, and signadot job get to see details about a single job.
# List all jobs with only running or queued state
signadot job list
# List all jobs including completed, cancelled states
signadot job list --all
# Get one job by name
signadot job get my-job-[generated_id]
You can cancel a job by its generated name:
# Delete job by name
signadot job delete my-job-[generated_id]
Artifacts
The examples here work with v0.8.0+ of the Signadot CLI.
To download an artifact linked to a job, first list the available artifacts with job get [NAME]. Once you know which artifact you would like to download, run one of the following commands:
# Download the stdout of job logs
signadot artifact download --job my-job-[generated_id] @stdout
# Download the stderr of job logs with a custom output name (missing parent directories will not be created)
signadot artifact download --job my-job-[generated_id] @stderr --output job_errors
# Download user/custom artifacts. The output will be the basename only (my-artifact.txt)
signadot artifact download --job my-job-[generated_id] /tmp/my-artifact.txt
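The --output flag shown above should also apply to user/custom artifacts; for example (the destination path here is illustrative):
# Download a user artifact to a chosen local path
signadot artifact download --job my-job-[generated_id] /tmp/my-artifact.txt --output results/my-artifact.txt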
Logs
The examples here work with v0.8.0+ of the Signadot CLI.
To see job logs, use the log command, for example:
# See stdout logs of job
signadot log --job my-job-[generated_id]
# By default the stream is stdout, but we can see stderr too
signadot log --job my-job-[generated_id] --stream stderr
YAML/JSON Templates
The examples here work with v0.4.0+ of the Signadot CLI.
Sandboxes, RouteGroups, and ResourcePlugins support YAML and JSON templating, which allows substituting values in the spec files with values specified on the command line or in other files.
Templating helps with CI automation, where some values come from the environment. It also helps with spec organization, such as allowing a script to be edited in a file with extension .sh while having it automatically embedded into the spec.
A template file is a yaml file with template directives in strings. A template directive takes the form @{<value-spec>}.
A value spec specifies something to put in place of the template directive, which may be a variable reference or an embedding:
<value-spec> ::= <variable-ref> | <embedding>
Variables
A variable in a template directive takes the form @{<variable>} and can occur in any string literal. For example:
name: "@{dev}-@{team}-feature"
city: "@{city}"
A variable must match the regular expression ^[a-zA-Z][a-zA-Z0-9._-]*$.
Variable values are then supplied on the command line in the form --set <variable>=<value>. For example:
signadot sandbox apply -f SANDBOX_FILE --set dev=jane --set team=plumbers
If a variable reference occurs in a sandbox file and no value for that variable is provided, then an error is flagged and the operation is aborted.
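For instance, with the template above and the command just shown, the name line would expand as follows (and because no value was provided for city, applying the full example file would be rejected with an error):
name: "jane-plumbers-feature"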
Embeddings
An embedding takes the form @{embed: <file>}, where <file> is a path to another file. Relative paths are interpreted relative to the directory in which the template file lives, or relative to the current working directory if the template file is stdin.
The contents of <file> are then placed in the string in the resulting document. For example, with the template
name: "@{embed: name.txt}"
and if name.txt contains
Jane
Plumb
Then the expanded file would be
name: "Jane\nPlumb"
Perhaps rendered to yaml as
name: |
  Jane
  Plumb
Expansion Encodings
Both variable references and embeddings can be expanded in a few different ways.
By default, the value of a variable or the contents of a file are simply placed in the string in which the directive occurs. This replacement occurs by means of operations on the yaml/json string in which the directive occurs, rather than by means of operations on the containing document, so there is no problem with quoting or indentation. This expansion encoding is called a raw expansion encoding and it is the default.
Alternative expansion encodings can be specified for variables or embeddings by appending [<encoding>] to the variable name or the operation. The raw, yaml, and binary expansion encodings are supported.
Expansion encodings other than raw
cannot occur properly within a string, nor can
one have multiple template directives in one string when one of them is a non-raw
expansion encoding.
Below are examples of the input and output of the raw, yaml, and binary expansion encodings.
For example:
# illegal template
name: " @{embed[yaml]: file.yaml}"
# ^ (contains a space outside the directive)
---
# ok embedding, nothing outside the directive
name: "@{embed[yaml]: file.yaml}"
raw expansion encoding
# both dev and team are expanded with the raw expansion encoding,
# which is just string interpolation.
name: "@{dev[raw]}-@{team}"
---
# output
name: "jane-plumbers"
yaml expansion encoding
# the value of the field 'podTemplate' will be the full yaml (not a string)
# defined by the contents of pod-template.yaml
podTemplate: "@{embed[yaml]: pod-template.yaml}"
---
# output
podTemplate:
  spec:
    containers:
raw expansion encoding of yaml
# the value of the field 'podTemplateString' will be a string containing
# yaml defined by the contents of pod-template.yaml
podTemplateString: "@{embed[raw]: pod-template.yaml}"
---
# output
podTemplateString: |
  spec:
    containers:
yaml expansion encoding of a port number
# with --set port=123, port will be a yaml number
port: "@{port[yaml]}"
---
# output
port: 123
binary expansion encoding
# with --set data="$(dd if=/dev/urandom count=1)"
data: "@{data[binary]}"
---
# output
data: "mj3cLyX8B5yiL0S9jOm2WKapDD8a56WyJW0nyg7ufiL8w8KZ4mEM0z7eCnBAlqBuDpJPjIwy7Am3X5yQSv7MHZK0Ui+QRCL0D53blHAcFS8WV9bZSRkTBtBAyaeF0tH0HLRKPyXuJPDoZC1NmQIPdsW/el5gmf9nXjtYegjd8oNiN4bayDULXEOjeYkKNyVGGCvCn6TAQ3UeCmfckIcad7Ek1Vvm4EETDF4OFAL6hsYBujeZXdjn/vIquWtMWCq8iVPlLJQ0sMyzCCsNJvx170RSNF6/xh7zeI+RsfFBD/b71Kg0GdFz8zSOSSxsRZqAZBZGdUmYmJvQOyhYOq5Z9sumwSnqgBHjP3gnUaVS4jyaQuhcJJoLlTwHTd6X0WLPknctDGZWCLMd10XVLvKJoPEL99GDhWKyDpXwB2PNNWtZBa8UGrr6YYRsrttlgFkKd/tcSc/zYc8rjsdWARBPSvntSAzKPy459niet+OtpW8SpBuykmT60OZdRUYkntdEvmn7YP88e4aQxT3hZJx+qQGaJUT00w/MdtPL9JUZ63LVKvyDzCf3FGEtDipZZemveNQmyzTg/3mAIBHljS7QZdzeZtZ4ITh+Nxjm30I/Hedkd1qrPBZtoTqMgyVtXYCU+HVLMbiDLUj/9Jc0XL8VHwQoaCJ9M/V+ORZYSOdlDYY="
CLI Config
The CLI reads configuration from a file stored at $HOME/.signadot/config.yaml. This location can be overridden with the --config flag. This file contains information for using the CLI as an interface to the Signadot API and also for running Local Sandboxes. Some parameters in this file can also be set via environment variables. See the config file reference.
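A minimal sketch of such a config file is shown here, assuming the commonly used org and api_key parameters; verify the exact keys against the config file reference:
# $HOME/.signadot/config.yaml (illustrative only)
org: my-org                # your Signadot org name (assumed key)
api_key: <your-api-key>    # API key used to call the Signadot API (assumed key)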