
Creating a Cloud Platform Environment

Introduction

This is a guide to creating an environment (i.e. a namespace) on our Kubernetes cluster.

We define an environment as a Kubernetes namespace with some key resources deployed in it. Each Kubernetes namespace creates a logical separation within our cluster that provides isolation from any other namespace. You can think of a namespace as a separate portion of the cloud platform set aside for your service.

Once you have created an environment you will be able to perform actions using the kubectl tool in the namespace you have created.

Objective

By the end of this guide you’ll have created a Kubernetes namespace ready for you to deploy an application into.

Create an environment

Pre-requisites

  • Install the Cloud Platform CLI tool
  • Git
  • You must be a member of at least one team in the ministryofjustice organisation on GitHub

You create an environment by adding the definition of the environment as YAML files in the following GitHub repository, using the cloud-platform CLI tool:

cloud-platform-environments

To create a namespace, clone the repository:

$ git clone git@github.com:ministryofjustice/cloud-platform-environments.git
$ cd cloud-platform-environments

Then, run this command to create your namespace:

$ cloud-platform environment create

You will be prompted to provide information about the new namespace, and the CLI tool will create the necessary YAML and Terraform files in the correct location in the repository.

Please create a new branch, add the new files and raise a pull request. The cloud platform team need to approve your pull request before you can merge it. Once you have done this, a build pipeline will create your namespace on the cluster.
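
For example, a typical workflow might look like this (the branch name is illustrative, and the path to add is wherever the CLI created your files):

$ git checkout -b add-myapp-dev-namespace
$ git add namespaces/   # stage the files the CLI created
$ git commit -m "Add myapp-dev namespace"
$ git push origin add-myapp-dev-namespace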

Please create one PR per namespace. For example, if you need namespaces myapp-dev, myapp-staging and myapp-prod, raise a separate PR for each one. This makes it much easier for the cloud platform team to review your PRs.

Naming

When creating your environment, please ensure that you comply with our guidance on naming things.

This is particularly important for domain names, but since these often closely match the namespace name, it’s worth choosing a good name at this stage.

This is all you need to do in order to create your namespace.

Accessing your environments

Once the pipeline has completed you will be able to check that your environment is available by running:

$ kubectl get namespaces

This will return a list of the namespaces within the cluster, and you should see yours in the list.

You can now run commands in your namespace by passing the -n or --namespace flag to kubectl. For instance, to see the pods running in our myenv-dev namespace, we would run:

$ kubectl get pods -n myenv-dev

or

$ kubectl get pods --namespace myenv-dev
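
If you expect to work mostly in one namespace, you can set it as the default for your current kubectl context, so that you no longer need to pass the flag on every command:

$ kubectl config set-context --current --namespace=myenv-dev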

Next steps

Create an ECR repository to push your application docker image to.

Then you can try deploying an app to Kubernetes manually by writing some custom YAML files or deploying an app with Helm, a Kubernetes package manager.

In the remainder of this section we describe in detail the YAML files which define your namespace, in case you need to alter any values.

Environment definition files

To set up an environment we create 5 Kubernetes YAML files in the directory for your namespace:

  • 00-namespace.yaml
  • 01-rbac.yaml
  • 02-limitrange.yaml
  • 03-resourcequota.yaml
  • 04-networkpolicy.yaml

These files define key elements of the namespace and restrictions we want to place on it so that we have security and resource allocation properties. We describe each of these files in more detail below in case you want to make future changes.

In addition to the Kubernetes configuration files, we create three terraform files:

  • resources/main.tf
  • resources/versions.tf
  • resources/variables.tf

These files define the standard terraform backend and providers which you will need when you add terraform modules to create the AWS resources your service will use (e.g. an ECR for your docker images, RDS databases, and S3 buckets). The resources/variables.tf file contains attributes of your namespace which those later terraform files will refer to.

Namespace YAML files

00-namespace.yaml

The 00-namespace.yaml file defines the namespace so that the Kubernetes cluster knows to create the namespace and what to call it.


apiVersion: v1
kind: Namespace
metadata:
  name: myapp-dev
  labels:
    cloud-platform.justice.gov.uk/is-production: "false"
    cloud-platform.justice.gov.uk/environment-name: "development"
  annotations:
    cloud-platform.justice.gov.uk/application: "My Service"
    ...

The namespace is your (team’s) ‘private’ part of the cluster. The name of your namespace must be unique across the whole of the cluster. If you try to create a new namespace using the name of one which already exists, you will get an error when you try to apply the generated kubernetes config files (or when our build pipeline applies them on your behalf).

For ‘real’ services, this is very unlikely to be a problem - most services have distinct names, so namespace name conflicts are unlikely. But, if you are creating a test/dummy namespace in order to learn how the platform works, it’s better to avoid generic names like ‘dummy’, ‘test’ or ‘example’. Add something unique (e.g. your name) to minimise the risk of trying to re-use a name by mistake.
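
If you already have kubectl access to the cluster, you can check whether a name is taken before raising your PR:

$ kubectl get namespace myapp-dev

If the namespace already exists this returns its details; if the name is free you will get a NotFound error.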

01-rbac.yaml

We will also create a RoleBinding resource by adding the 01-rbac.yaml file. This will provide us with access policies on the namespace we have created in the cluster.

A role binding resource grants the permissions defined in a role to a user or set of users. A role could be a separate resource we define ourselves, but in this instance we reference one of the Kubernetes default ClusterRoles, admin.

This RoleBinding resource references the admin ClusterRole to grant admin permissions on the namespace to the set of users defined under subjects. In this case, the <yourTeam> GitHub team will have admin access to any resources within the namespace myapp-dev.

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myapp-dev-admins  ### Your namespace with `-admins` e.g. `<servicename-env>-admins`
  namespace: myapp-dev ### Your namespace `<servicename-env>`
subjects:
  - kind: Group
    name: "github:<yourTeam>" ### Make this the name of the GitHub team you want to give access to
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
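
Once the namespace has been created, members of the team can confirm that the role binding is working by asking kubectl whether they are allowed to act in the namespace, for example:

$ kubectl auth can-i create deployments --namespace myapp-dev

This should print yes.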

02-limitrange.yaml

As we are working on a shared Kubernetes cluster it is useful to put in place limits on the resources that each namespace, pod and container can use. This helps to stop us accidentally entering a situation where one service impacts the performance of another through using more resources than are available.

The first Kubernetes limit we can use is a LimitRange which we define in 02-limitrange.yaml.

The LimitRange object specifies two key resource limits on containers, defaultRequest and default, which are applied to any container that does not declare its own. defaultRequest is the memory and CPU a container will request on startup. This is what the Kubernetes scheduler uses to determine whether there is enough space on the cluster to run your application, and what your application will start up with when it is created. default is the limit: a container that exceeds its memory limit will be killed, and one that exceeds its CPU limit will be throttled.

apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange
  namespace: myapp-dev ### Your namespace `<servicename-env>`
spec:
  limits:
  - default:
      cpu: 1000m
      memory: 1000Mi
    defaultRequest:
      cpu: 10m
      memory: 100Mi
    type: Container
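
Once applied, you can inspect the limits in force with:

$ kubectl describe limitrange limitrange --namespace myapp-dev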

03-resourcequota.yaml

The ResourceQuota object allows us to set a total limit on the resources which are made available to a namespace. On the Cloud Platform, we take a simple approach, and merely limit the maximum number of pods which a namespace is allowed to launch.


apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: myapp-dev ### Your namespace `<servicename-env>`
spec:
  hard:
    pods: "50"
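
You can see the quota, and how much of it is currently in use, with:

$ kubectl describe resourcequota namespace-quota --namespace myapp-dev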

04-networkpolicy.yaml

The NetworkPolicy object defines how groups of pods are allowed to communicate with each other and with other network endpoints. By default pods are non-isolated: they accept traffic from any source. We apply a network policy to restrict where traffic can come from, allowing traffic only from the ingress controller and from other pods in your namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default
  namespace: myapp-dev ### Your namespace `<servicename-env>`
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingress-controllers
  namespace: myapp-dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          component: ingress-controllers
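
You can list the policies in force in your namespace, and check their details, with:

$ kubectl get networkpolicy --namespace myapp-dev
$ kubectl describe networkpolicy default --namespace myapp-dev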