
Deploying a multi-container application to the Cloud Platform


This section goes through the process of deploying a demo application consisting of several components, each running in its own container.

Please see the application README for a description of the different components, and how they connect. You can also run the application locally via docker-compose to confirm that it works as it should.

Running in the Kubernetes Cluster

In the Cloud Platform kubernetes cluster, the application will be set up like this:

Multi-container architecture diagram

Each container needs a Deployment which will contain a Pod. Services make pods available on the cluster’s internal network, and an Ingress exposes one or more services to the outside world.

Create an RDS instance

The application database will be an Amazon RDS instance. To create this, refer to the cloud platform RDS repository, and create a terraform file in your sub-directory of the cloud platform environments repository (you will need to raise a PR for this, and get the cloud platform team to approve it).

For more information see Adding AWS resources to your environment.

The demo application, and this guide, assume a DATABASE_URL environment variable, exported by the terraform RDS module as follows:

data = {
  url = "postgres://${module.module_name.database_username}:${module.module_name.database_password}@${module.module_name.rds_instance_endpoint}/${module.module_name.database_name}"
}

Please ensure that your file exports a database url value in this way (changing module_name to match the name you use in your file).
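
To make the shape of that URL concrete, here is a sketch that assembles it from hypothetical stand-in values (these are placeholders for the terraform module outputs, not real credentials):

```shell
# Hypothetical stand-ins for the terraform module outputs
db_user="app_user"
db_pass="examplepass"
db_endpoint="mydb.abc123.eu-west-2.rds.amazonaws.com:5432"
db_name="multi_container_demo"

# Assemble the URL in the same shape the module exports
DATABASE_URL="postgres://${db_user}:${db_pass}@${db_endpoint}/${db_name}"
echo "$DATABASE_URL"
```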

Connecting to your RDS instance from your local machine

This is not required for this tutorial.

If you need to access an RDS instance from your local machine, you can find instructions for doing so here.

Building docker images and pushing to ECR

As before, we need to build docker images which we will push to our Amazon ECR.

Please carry out the following steps on your own working copy of the demo application.

For team_name and repo_name, use the values from the terraform file you created when you set up your ECR.

cd rails-app
docker build -t [team_name]/[repo_name]:rails-app .
docker tag [team_name]/[repo_name]:rails-app [team_name]/[repo_name]:rails-app-1.0
docker push [team_name]/[repo_name]:rails-app-1.0

Note that we are overloading the tag value to push multiple different containers to a single Amazon ECR. This is because of a quirk in the way Amazon ECR refers to image repositories and images.

Repeat the steps above for the content-api and worker sub-directories (changing rails-app as appropriate, in the commands).

The makefile in the demo application contains commands to make this process easier. Don’t forget to edit the values for TEAM_NAME, REPO_NAME and VERSION appropriately.
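
The repeated steps can also be sketched as a shell loop. This is a dry run that only prints the commands rather than executing them (remove the echo to run them for real); TEAM_NAME, REPO_NAME and VERSION are placeholders you must replace with your own values:

```shell
TEAM_NAME="team_name"
REPO_NAME="repo_name"
VERSION="1.0"

# Dry run: print the build/tag/push commands for each component
for app in rails-app content-api worker; do
  echo docker build -t "${TEAM_NAME}/${REPO_NAME}:${app}" "./${app}"
  echo docker tag "${TEAM_NAME}/${REPO_NAME}:${app}" "${TEAM_NAME}/${REPO_NAME}:${app}-${VERSION}"
  echo docker push "${TEAM_NAME}/${REPO_NAME}:${app}-${VERSION}"
done
```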

Kubernetes configuration

As per the diagram, we need to configure six objects in kubernetes: three deployments, two services and one ingress.

You can see these YAML config files in the kubernetes_deploy directory of the demo application.

Note: The yaml files in the github repository refer to docker images from docker hub, e.g. ministryofjustice/cloud-platform-multi-container-demo-app:worker-1.6. These will work, but in order to deploy the docker images you built in the earlier step, please change these references to the full references to your own docker images.

You also need to change the host entry in the ingress.yaml file, so that this instance of the application has a unique hostname (e.g. by including [yourname] in it).

Additionally in the ingress.yaml file, you will need to amend the following line:

  - <ingress-name>-<namespace-name>-green

<ingress-name> should be set to the name identifier of your ingress, in this case multi-container-demo.

<namespace-name> should be set to the namespace which you are working within.

The green suffix is the correct value for an ingress in the EKS live cluster, so leave it as it is.

The cluster will not allow you to deploy an ingress with the same hostname as an existing ingress, so it’s important to ensure your hostname is unique.
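
One way to check for a clash before deploying is to list the ingresses you can see and search for your chosen hostname (depending on your permissions you may only be able to list ingresses in your own namespaces):

```shell
# If this prints a matching row, the hostname is already taken
kubectl get ingress --all-namespaces | grep "[yourname]"
```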

In rails-app-deployment.yaml and worker-deployment.yaml you can see the configuration for two environment variables:

  • DATABASE_URL is retrieved from the kubernetes secret which was created when the RDS instance was set up
  • CONTENT_API_URL uses the name and port defined in content-api-service.yaml
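
Once the pods are running, you can confirm both variables are set inside the container. The deployment name rails-app below is taken from the demo's yaml files; adjust it if yours differs:

```shell
# Print the two environment variables from a running rails-app pod
kubectl exec deployment/rails-app --namespace [your namespace] -- printenv DATABASE_URL CONTENT_API_URL
```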

In the kubernetes_deploy directory of the demo application, you will also see a migration job yaml config file. This will run rails database migrations.
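
To run the job on its own, you could apply just that file and watch its progress. The filename below is an assumption; check the kubernetes_deploy directory for the actual name. Note that applying the whole kubernetes_deploy directory, as in the next section, will also create this job:

```shell
# Create the migration job and check that it completes
kubectl apply --filename kubernetes_deploy/migration-job.yaml --namespace [your namespace]
kubectl get jobs --namespace [your namespace]
```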

Set up HTTP basic authentication using the guidance here.

Deploying to the cluster

After you have built and pushed your docker images, and made the corresponding changes to the kubernetes_deploy/*.yaml files, you can apply the configuration to your namespace in the kubernetes cluster:

  kubectl apply --filename kubernetes_deploy --namespace [your namespace]
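
After applying, you can watch the objects come up. Per the diagram, you should eventually see three deployments, two services and one ingress:

```shell
# Pods should reach STATUS "Running" (re-run until they do)
kubectl get pods --namespace [your namespace]

# You should see 3 deployments, 2 services and 1 ingress
kubectl get deployments,services,ingress --namespace [your namespace]
```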

Interacting with the application

You should be able to view the application in your browser at:

https://<unique hostname you chose>

It should behave in the same way as when you were running it locally via docker-compose.
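
If you prefer the command line, a quick smoke test is to request the page with curl, using the basic auth credentials you configured earlier (the username and password below are placeholders):

```shell
# A 200 response (or a redirect) indicates the ingress is routing correctly
curl -I --user username:password https://[unique hostname you chose]
```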

Further Development

Once you have deployed a working application, you can create a monitoring dashboard and custom alerts using these examples:

  • Grafana dashboard - Follow the guide on how to create a Grafana dashboard, make the corresponding changes to the monitoring-grafana-dashboard.yaml file, and apply the ConfigMap to your namespace in the kubernetes cluster. Your Grafana dashboard will then be shown here.

      kubectl apply --filename monitoring-grafana-dashboard.yaml --namespace [your namespace]
  • Custom Alerts - Follow the guide on how to create custom alerts, make the corresponding changes to the prometheus-app-alert.yaml file, and apply the PrometheusRule to your namespace in the kubernetes cluster. This will create custom alerts for your application.

      kubectl apply --filename prometheus-app-alert.yaml --namespace [your namespace]
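
To confirm both monitoring objects were created in your namespace:

```shell
# The dashboard is stored as a ConfigMap, the alerts as a PrometheusRule
kubectl get configmaps --namespace [your namespace]
kubectl get prometheusrules --namespace [your namespace]
```
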
This page was last reviewed on 17 January 2023. It needs to be reviewed again on 17 April 2023.