
Using Jsonnet

The most powerful piece of Tanka is the Jsonnet data templating language. Jsonnet is a superset of JSON that adds variables, functions, patching (deep merging), arithmetic, conditionals and more.

It has more in common with real programming languages such as JavaScript than with markup languages, yet it is tailored specifically to representing data and configuration. Unlike JSON (and YAML), it is a language meant for humans, not just for computers.
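To get a feeling for the language before diving into Tanka, here is a small, self-contained sketch of those features. This is plain Jsonnet, nothing Tanka-specific, and all names are made up for illustration:

example.jsonnet
// Plain Jsonnet: variables, functions, arithmetic, conditionals, patching.
local replicas = 3;                    // a variable
local port(n) = { containerPort: n };  // a function

{
  app: {
    name: 'demo',
    // a conditional
    replicas: if replicas > 1 then replicas else 1,
    // arithmetic inside an array comprehension
    ports: [port(8080 + i) for i in [0, 1]],
  },
}
// patching (deep merging): `+` merges objects, `app+:` reaches inside
+ {
  app+: { labels: { team: 'example' } },
}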

Creating a new project

To get started with Tanka and Jsonnet, let’s initialize a new project in which we will install both Prometheus and Grafana into our Kubernetes cluster:

Terminal window
mkdir prom-grafana && cd prom-grafana # create a new folder for the project and change to it
tk init # initialize a new project

This gives us the following directory structure:

  • environments/
    • default/ (the default environment)
      • main.jsonnet (main file, important!)
      • spec.json (the environment’s config)
  • jsonnetfile.json
  • lib/ (libraries)
  • vendor/ (external libraries)

For the moment, we only really care about the environments/default folder. The purpose of the other directories will be explained later in this guide (mostly related to libraries).

Environments

When using Tanka, you apply configuration for an Environment to a Kubernetes cluster. An Environment is some logical group of pieces that form an application stack.

Grafana Labs, for example, runs Loki, Cortex and of course Grafana for our Grafana Cloud hosted offering, and each of these has its own environment. Furthermore, we like to verify changes in separate dev setups before they reach production, so we have dev and prod environments for each app as well, as prod environments usually require different configuration (secrets, scale, etc.) than dev. This roughly leaves us with the following:

Environment  Loki                             Cortex                             Grafana
prod         Name: /environments/loki/prod    Name: /environments/cortex/prod    Name: /environments/grafana/prod
             Namespace: loki-prod             Namespace: cortex-prod             Namespace: grafana-prod
dev          Name: /environments/loki/dev     Name: /environments/cortex/dev     Name: /environments/grafana/dev
             Namespace: loki-dev              Namespace: cortex-dev              Namespace: grafana-dev

There is no limit to Environment complexity; create as many as you need to model your own requirements. Grafana Labs, for example, also multiplies all of these per high-availability region.
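In practice, such environments rarely duplicate code: each environment’s main.jsonnet typically imports a shared library and overrides only what differs. As a rough, hypothetical sketch (the loki library and its _config field are made up for illustration):

environments/loki/prod/main.jsonnet
// Hypothetical: assumes a shared library at lib/loki/main.libsonnet
// exposing a _config object. Only the prod-specific bits live here.
(import 'loki/main.libsonnet') + {
  _config+: {
    replicas: 3,               // prod runs at scale
    cluster: 'prod-us-east-0', // made-up region name
  },
}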

To get started, a single environment is enough. Let’s use the automatically created environments/default for that.

Defining Resources

While kubectl loads all .yaml files in a given folder, Tanka has a single file that serves as the canonical source for all contents of an environment, called main.jsonnet, just as Go has main.go and C++ has main.cpp.

Similar to JSON, each .jsonnet file holds a single object. The one returned by main.jsonnet will hold all of your Kubernetes resources:

main.jsonnet
{
  "some_deployment": { /* ... */ },
  "some_service": { /* ... */ }
}

These may be deeply nested; Tanka automatically extracts everything that looks like a Kubernetes resource (any object with both an apiVersion and a kind).

So let’s rewrite the previous .yaml into very basic .jsonnet:

/environments/default/main.jsonnet
{
  // Grafana
  grafana: {
    deployment: {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: {
        name: 'grafana',
      },
      spec: {
        selector: {
          matchLabels: {
            name: 'grafana',
          },
        },
        template: {
          metadata: {
            labels: {
              name: 'grafana',
            },
          },
          spec: {
            containers: [
              {
                image: 'grafana/grafana',
                name: 'grafana',
                ports: [{
                  containerPort: 3000,
                  name: 'ui',
                }],
              },
            ],
          },
        },
      },
    },
    service: {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: {
        labels: {
          name: 'grafana',
        },
        name: 'grafana',
      },
      spec: {
        ports: [{
          name: 'grafana-ui',
          port: 3000,
          targetPort: 3000,
        }],
        selector: {
          name: 'grafana',
        },
        type: 'NodePort',
      },
    },
  },
  // Prometheus
  prometheus: {
    deployment: {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: {
        name: 'prometheus',
      },
      spec: {
        minReadySeconds: 10,
        replicas: 1,
        revisionHistoryLimit: 10,
        selector: {
          matchLabels: {
            name: 'prometheus',
          },
        },
        template: {
          metadata: {
            labels: {
              name: 'prometheus',
            },
          },
          spec: {
            containers: [
              {
                image: 'prom/prometheus',
                imagePullPolicy: 'IfNotPresent',
                name: 'prometheus',
                ports: [
                  {
                    containerPort: 9090,
                    name: 'api',
                  },
                ],
              },
            ],
          },
        },
      },
    },
    service: {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: {
        labels: {
          name: 'prometheus',
        },
        name: 'prometheus',
      },
      spec: {
        ports: [
          {
            name: 'prometheus-api',
            port: 9090,
            targetPort: 9090,
          },
        ],
        selector: {
          name: 'prometheus',
        },
      },
    },
  },
}

At the moment, this is even more verbose because we have effectively converted YAML to JSON, which requires more characters by design.

But Jsonnet opens up enough possibilities to improve this a lot, which will be covered in the next sections.
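For a first taste, a local function could already eliminate most of the duplication between the two Deployments above. This is only a sketch of the idea, not the library approach used later in this guide:

main.jsonnet (sketch)
// A local helper that stamps out a Deployment from three parameters.
local deployment(name, image, containerPort) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name },
  spec: {
    selector: { matchLabels: { name: name } },
    template: {
      metadata: { labels: { name: name } },
      spec: {
        containers: [{
          name: name,
          image: image,
          ports: [{ containerPort: containerPort }],
        }],
      },
    },
  },
};

{
  grafana: { deployment: deployment('grafana', 'grafana/grafana', 3000) },
  prometheus: { deployment: deployment('prometheus', 'prom/prometheus', 9090) },
}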

Taking a look at the generated resources

So far so good, but can we make sure Tanka correctly finds our resources? We can! By running tk show, you can see the good old YAML, just as kubectl will receive it:

/prom-grafana $
# run from the project root:
tk show environments/default
Output
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    # ...

Spend some time here and try to match the resources in the output to their definitions in the .jsonnet source.

Connecting to the cluster

The YAML looks as expected? Let’s apply it to the cluster. To do so, Tanka needs some additional configuration.

While kubectl uses a $KUBECONFIG environment variable and a file in the home directory to store the currently selected cluster, Tanka takes a more explicit approach:

Each environment has a file called spec.json, which includes the information to select a cluster:

spec.json
{
  "apiVersion": "tanka.dev/v1alpha1",
  "kind": "Environment",
  "metadata": {
    "name": "default"
  },
  "spec": {
    "apiServer": "https://127.0.0.1:6443", // cluster to use
    "namespace": "monitoring" // default namespace for all created resources
  }
}

You still have to set up a cluster in your $KUBECONFIG that matches this address – Tanka will automatically find and use it. This also means that all of your existing kubectl clusters just work.

This explicit approach ensures you never accidentally apply to the wrong cluster.

Verifying the changes

Before applying to the cluster, Tanka gives you a chance to check that your changes actually behave as expected: tk diff works just like git diff – you see what will be changed.

/prom-grafana $
tk diff environments/default
Output
--- /tmp/LIVE-610130621/apps.v1.Deployment.monitoring.grafana 2019-12-17 20:14:45.213363586 +0100
+++ /tmp/MERGED-517481208/apps.v1.Deployment.monitoring.grafana 2019-12-17 20:14:45.213363586 +0100
@@ -0,0 +1,45 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: grafana
+  namespace: monitoring
+  # ...
+spec:
+  selector:
+    matchLabels:
+      name: grafana
+  strategy:
+    rollingUpdate:
+      maxSurge: 25%
+      maxUnavailable: 25%
+    type: RollingUpdate
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        name: grafana
+    spec:
+      containers:
+      - image: grafana/grafana
+        imagePullPolicy: IfNotPresent
+        # ...

As you can see, it shows everything as to be created, just as we’d expect, since the namespace is still empty.

Applying to the cluster

Once it’s all looking good, tk apply serves the exact same purpose as kubectl apply:

/prom-grafana $
tk apply environments/default
Applying to namespace 'monitoring' of cluster 'default' at 'https://127.0.0.1:6443' using context 'default'.
Please type 'yes' to confirm: yes
deployment.apps/grafana created
deployment.apps/prometheus created
service/grafana created
service/prometheus created

It shows the diff and the chosen cluster once more, and requires interactive approval (typing yes).

After that, kubectl is used to apply to the cluster. By piping to kubectl, Tanka makes sure applying behaves exactly as you would expect; no edge cases from differing Kubernetes client implementations should ever occur.

Checking it worked

Again, let’s connect to Grafana:

Terminal window
kubectl port-forward --namespace=monitoring deployments/grafana 8080:3000

And go to http://localhost:8080 for Grafana’s UI.