Deploying an application to the Cloud Platform with Helm

Introduction

This is a guide to deploying an application into the Cloud Platform using the Helm tool. Helm is a package manager for Kubernetes YAML config.

At the end of this document, you will have deployed a multi-container demo app to your namespace and will be able to scale the app using the Helm tool. If you have any questions or suggestions to improve this document, please feel free to post a message in the #ask-cloud-platform Slack channel.

Disclaimer: You’ll see fairly quickly that the application is not fit for production. A perfect example of this is the plaintext secrets file. For the demo application we’ve left this file in plaintext but it must be encrypted when writing your own manifests for production/non-production work in the MoJ.

Helm overview

Your app’s Kubernetes config needs to be in YAML format. The simplest approach is to store the YAML files in your application repository and have your CI/CD pipeline deploy them with kubectl apply -f deployment.yaml.

However, when you deploy this config to multiple environments, it needs to vary slightly between them. This is where Helm helps: it lets you write YAML templates containing variables. Helm packages these templates into a ‘chart’. You store the chart directory in your repo, along with a values file that declares the variables and their defaults, and your CI/CD deploys it to the cluster with helm install --set name=prod my_app_chart, supplying values on the command line or in a file.
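
As an illustration only (the file layout and replicaCount variable below are generic Helm conventions, not necessarily the demo app's chart), a chart template references values like this:

# my_app_chart/templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}   # taken from values.yaml, or overridden with --set

# my_app_chart/values.yaml (excerpt)
replicaCount: 1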

Requirements

It is assumed you have the following:

  • An environment (namespace) you have created on the Cloud Platform, as covered in the Creating an Environment guide
  • kubectl installed and configured so that you can authenticate to the cluster

Deploy the app

The multi-container demo application we’re going to use is a very simple Ruby application with an on-cluster PostgreSQL database.

Note: The default installation includes all components (API server, postgres, worker, rails app) of this application. Although the PostgreSQL database is installed within the Kubernetes cluster, when deploying to the Cloud Platform you can disable it and set up an RDS instance using the Terraform module instead, which is recommended for production.
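
How the in-cluster database is switched off depends on the chart; as a hypothetical sketch of the pattern (the postgresql.enabled key is an assumption, so check the chart's values.yaml and README for the real switch):

# values.yaml (hypothetical excerpt)
postgresql:
  enabled: false   # use an external RDS instance instead of the in-cluster database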

The Helm deployment manifests have been pre-written for this exercise, but if you wish to know more about these files and what they do, have a quick browse of the README.

Set up

First we need to clone our Multi-container demo application and change directory:

$ git clone https://github.com/ministryofjustice/cloud-platform-multi-container-demo-app.git
$ cd cloud-platform-multi-container-demo-app

You now have a functioning git repository that contains a multi-container ruby application. Have a browse around and get familiar with the directory structure.

Browse the cluster

Let’s make use of the command line tool kubectl to browse around the cluster to see what it looks like before we deploy our application:

$ kubectl get pods --namespace <namespace-name>

The <namespace-name> here is the environment you created, listed in the requirements section at the beginning of the document.

If you receive the error message below, then you’ve either not typed in your namespace correctly or you don’t have permission to perform a get pods command. Either way, you’ll need to go back and review the Creating an Environment document previously mentioned.

$ Error from server (Forbidden): pods is forbidden: User "test-user" cannot list pods in the namespace "demo"
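
If you want to check your permissions directly, kubectl can tell you whether you are allowed to list pods in a namespace (an optional diagnostic, not a required step):

$ kubectl auth can-i list pods --namespace <namespace-name>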

Using Helm

Helm allows you to manage application deployment to Kubernetes using Charts. You can read about some of the many features of Helm Charts. We’ve chosen Helm as the default way to deploy applications to the Cloud Platform because it provides useful tooling as an interface to the YAML files that Kubernetes uses to run your application.

Installing and configuring Helm

Install the client via Homebrew or by other means:

$ brew install helm

When successful, you’ll be greeted with the message:

Happy Helming

This is an indication we’re ready to deploy our application.

Application install

To deploy the application with Helm, first change directory so we can focus on the chart we need:

$ cd helm_deploy/multi-container-app/

Values for our application are stored in values.yaml at the root of this directory. Configuration such as the number of pods running and which image repository to use is stored in this file. Open it and get familiar with the application layout.
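
The exact keys depend on the chart, so treat the following only as a rough sketch of the kind of thing you will find (the names below are illustrative; the real values.yaml is the source of truth):

# values.yaml (illustrative excerpt)
replicaCount: 1               # how many pods to run for each component
image:
  repository: <image-repository>
  tag: <image-tag>
ingress:
  enabled: true
  host: <DNS-PREFIX>.<cluster-domain>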

If you want to use the postgres database deployed in the cluster:

  1. apply the secrets required for the postgres database by modifying and applying this file
  2. create another secret file secrets.yaml to store the database url with a key called url (an example manifest is shown after this list). The database url should be in the format:

    postgres://<USERNAME>:<ADD-PASSWORD-HERE>@<DB-SERVICE-NAME>:5432/<DATABASE_NAME>
    

    For this helm deployment, the url will look like this:

    postgres://postgres:<ADD-PASSWORD-HERE>@multi-container-app-postgresql:5432/multi_container_demo_app
    

    Replace <ADD-PASSWORD-HERE> with the password you set in the secret file above

  3. Base64-encode the database url:

    echo -n 'postgres://postgres:<ADD-PASSWORD-HERE>@multi-container-app-postgresql:5432/multi_container_demo_app' | base64 -b0
    
  4. Apply the secret file to your environment: kubectl apply -f secrets.yaml -n <namespace-name>
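
As a sketch, the secrets.yaml manifest from step 2 could look like the following; whatever name you give the secret is the name you later put into values.yaml as <DATABASE-URL-SECRET-NAME>:

apiVersion: v1
kind: Secret
metadata:
  name: <DATABASE-URL-SECRET-NAME>
type: Opaque
data:
  url: <base64-encoded-database-url>   # the output of the base64 command in step 3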

Update the values.yaml file for:

  • <DATABASE-URL-SECRET-NAME> with the name of the secret where you stored the database url
  • <DNS-PREFIX> with your own name.

Run the following (replacing <namespace-name> with your environment name):

helm install multi-container-app . \
--values values.yaml \
--namespace <namespace-name>

Note: We’re naming it like this as app names and host names have to be unique on the cluster.

You’ll see quite a lot of output as the various components are created.
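
As an optional check (assuming Helm v3), you can also ask Helm what it has recorded for the release:

$ helm list --namespace <namespace-name>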

Viewing your application

Congratulations on getting this far. If all went well, your pods are now deployed and the app is being served at your specified URL.

Let’s check:

$ kubectl get pods --namespace <namespace-name>

If the deploy was successful, you should be greeted with something similar to the output below:

NAME                               READY   STATUS      RESTARTS   AGE
content-api-7cf4588945-vk46w       1/1     Running     0          2m34s
multi-container-app-postgresql-0   1/1     Running     0          2m34s
rails-app-59bd9f9b64-tsn8w         1/1     Running     0          2m34s
rails-migrations-kwldv             0/1     Completed   1          2m34s
worker-874774c95-q4frp             1/1     Running     0          2m34s

You should have a postgres pod, three app pods with the status Running, and one migration job with the status Completed.

(There’s also a line for a pod which ran the initial database migrations, but that’s completed and no longer running so we’ll ignore it.)

Let’s check your host has a URL by running:

$ kubectl get ingress --namespace <namespace-name>

This will return the URL of your given app. Open it using your favourite browser.

You can also secure the application with http basic authentication. For more information on how to setup, see this topic.

You should see a demo app titled ‘Hello from Rails’ with a cat image. If you refresh the page, you should see a different cat picture (the rails-app fetches the image URL from the content-api component).

View the logs

Each pod will generate logs that can be viewed via the API. Let’s have a browse of our application logs.

First grab the pod name:

$ kubectl get pods --namespace <namespace-name>

Then copy the name of a pod that isn’t postgresql.

   NAME                           READY   STATUS    RESTARTS   AGE
   content-api-7cf4588945-vk46w   1/1     Running   0          2m34s
*  rails-app-59bd9f9b64-tsn8w     1/1     Running   0          2m34s
   worker-874774c95-q4frp         1/1     Running   0          2m34s

We’re going to follow the log, so we’ll run:

$ kubectl logs rails-app-59bd9f9b64-tsn8w --namespace <namespace-name> -f

As you can see, this tails the log and you should see our health checks returning an HTTP 200.

Read more about Kubernetes logging here.

Scale the application

You now have the application up and running, but suppose one pod per component isn’t enough and you want to run three. This is simply a case of changing the replicaCount value in values.yaml and then running the helm upgrade command.

Edit values.yaml and change replicaCount from 1 to 3. Save the file, then run:
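
The change itself is a one-line edit along these lines:

# values.yaml
replicaCount: 3   # previously 1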

$ helm upgrade multi-container-app . -f values.yaml --namespace <namespace-name>

This command spins up additional pods to bring the total for each component to three.

If we run the familiar command we’ve been using:

$ kubectl get pods --namespace <namespace-name>

You’ll see the pod replication in progress.

Tear it all down

Finally, we have built our app and deployed it to the cluster. There is only one thing left to do: destroy it.

To delete the deployment you simply run:

$ helm del multi-container-app --namespace <namespace-name>

And then confirm the pods are terminating as expected:

$ kubectl get pods --namespace <namespace-name>

Next steps

The next step will be to create your own Helm chart. You can try this with an application of your own, or run through Bitnami’s excellent guide on how to build one using a simple quickstart.

Migrate from Helm v2 to Helm v3

One of the most important parts of upgrading to a new major release of Helm is the migration of data. This is especially true of Helm v2 to v3 considering the architectural changes between the releases. This is where the helm-2to3 plugin comes in.

It helps with this migration by supporting:

  • Migration of Helm v2 configuration.
  • Migration of Helm v2 releases.
  • Clean up Helm v2 configuration, release data and Tiller deployment.

Helm v3 CLI Binary

As we do not want to overwrite the Helm v2 CLI binary, we need to perform an additional step to ensure that both CLI versions can co-exist until we are ready to remove the Helm v2 CLI and all its related data:

Download the latest Helm v3 release from here, rename the binary to helm3 and store it in your path.

mv helm /usr/local/bin/helm3
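
For example, on a macOS amd64 machine the download might look like this (the version number is only an example; pick the latest release and the archive matching your platform):

curl -LO https://get.helm.sh/helm-v3.10.2-darwin-amd64.tar.gz
tar -xzf helm-v3.10.2-darwin-amd64.tar.gz
mv darwin-amd64/helm /usr/local/bin/helm3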

You should now be able to check the version of helm v3 by running:

helm3 version --short

Helm v3 does not come with any repositories configured, including the stable repo. You can confirm this by running:

helm3 repo list

You should see the message Error: no repositories to show

Helm-2to3 Plugin

The helm-2to3 plugin allows us to migrate and clean up Helm v2 configuration and releases to Helm v3 in place.

Installed Kubernetes objects will not be modified or removed.

To install the plugin:

helm3 plugin install https://github.com/helm/helm-2to3 -n <namespace-name>
Downloading and installing helm-2to3 v0.3.0 ...
https://github.com/helm/helm-2to3/releases/download/v0.3.0/helm-2to3_0.3.0_darwin_amd64.tar.gz
Installed plugin: 2to3
helm3 plugin list
NAME    VERSION DESCRIPTION
2to3    0.3.0   migrate and cleanup Helm v2 configuration and releases in-place to Helm v3

Migrate helm v2 Configuration

First we need to migrate Helm v2 config and data folders.

It will migrate:

  • Chart starters
  • Repositories
  • Plugins

The safest way is to start with the --dry-run flag:

helm3 2to3 move config --dry-run

This will show you the configuration and data that will be moved to helm3. To run the actual migration, run without the --dry-run flag:

helm3 2to3 move config

If you now run helm3 repo list, you should get back the stable repo, and any others you have used in Helm v2:

helm3 repo list
NAME        URL
stable      https://kubernetes-charts.storage.googleapis.com

Migrate helm v2 releases

The migration is conducted per release. You can get a list of all releases by running:

helm list --tiller-namespace <namespace-name>

The safest way to run the conversion is, again, to use the --dry-run flag:

helm3 2to3 convert <release-name> --tiller-ns <namespace-name> --dry-run

To run the actual migration, run without the --dry-run flag:

helm3 2to3 convert <release-name> --tiller-ns <namespace-name>

To check that the release was migrated successfully and is now part of helm3, run helm3 list:

helm3 list -n <namespace-name>

You have now migrated your release from Helm v2 to Helm v3. (If required, migrate any other releases.) However, the release is still listed in Helm v2 and needs to be cleaned up.

Clean up of helm v2 data

The last step is cleaning up the old data.

It will clean:

  • Configuration (Helm home directory)
  • v2 release data
  • Tiller deployment

Again, the safest way is to start with the --dry-run flag:

helm3 2to3 cleanup --tiller-ns <namespace-name> --dry-run

This will show which releases are going to be deleted, that the Tiller service will be removed from your namespace, and that the Helm v2 home folder will be deleted.

NOTE: The cleanup command will remove the Helm v2 Configuration, Release Data and Tiller Deployment. It cleans up all releases managed by Helm v2. It will not be possible to restore them if you haven’t made a backup of the releases. Helm v2 will not be usable afterwards.
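
If you want a backup before cleaning up, one option (assuming Tiller used its default ConfigMap storage backend in your namespace) is to export the Tiller release records first:

kubectl get configmaps --namespace <namespace-name> -l OWNER=TILLER -o yaml > helm-v2-releases-backup.yaml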

To run the actual cleanup, run without the --dry-run flag:

helm3 2to3 cleanup --tiller-ns <namespace-name>

Tiller RBAC Configuration Removal

The Tiller Service Account resource created using your 01-rbac.yaml file in the cloud-platform-environments repo is no longer required.

Please create a PR to remove the Tiller Service Account and RoleBinding from the 01-rbac.yaml file.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: myapp-dev ### Your namespace
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller
  namespace: myapp-dev ### Your namespace
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myapp-dev ### Your namespace
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io

A member of the Cloud Platform team will manually delete the RoleBinding before this is applied, as the Concourse pipeline is not currently able to delete RoleBindings.

kubectl -n <namespace-name> delete rolebindings tiller