PHP Application Deployment with Kubernetes on Ubuntu 16.04


Introduction

Kubernetes (K8s) is a portable, extensible, open-source container orchestration platform for managing containerized workloads and services that facilitates declarative configuration, deployment, scaling, and automation. K8s lets you create, update, and scale containers without worrying about downtime.

Nginx acts as a proxy to PHP-FPM when running a PHP application. Containerizing this setup in a single container can be a cumbersome process, but Kubernetes helps you manage both services in separate containers. K8s allows you to keep your containers swappable and reusable, and you will not have to rebuild your container image every time there is a new version of PHP or Nginx.

In this KB article, you will learn how to deploy a PHP 7 application on a Kubernetes cluster with PHP-FPM and Nginx running in separate containers. You will also learn how to keep your application code and configuration files separate from the container image by using DigitalOcean's Block Storage system. With this approach, you can reuse the Nginx image for any application that needs a web or proxy server by passing a configuration volume, instead of rebuilding the image.

Prerequisites

  • A basic understanding of Kubernetes objects.

  • A Kubernetes cluster running on Ubuntu 16.04.

  • A DigitalOcean account and an API access token with read and write permissions to create storage volumes.

  • Your application code hosted on a publicly accessible URL, such as GitHub.

STEP 1 – Creating the PHP-FPM and Nginx Services

In this step, you will create the PHP-FPM and Nginx services. A service provides access to a set of pods from within the cluster. Services within a cluster can communicate directly through their names, without the need for IP addresses. The PHP-FPM service will give access to the PHP-FPM pods, while the Nginx service will give access to the Nginx pods.

Since the Nginx pods will proxy the PHP-FPM pods, you have to tell the service how to find them. Instead of using IP addresses, you will take advantage of Kubernetes' automatic service discovery to use human-readable names to route requests to the appropriate service.

To create the service, you have to create an object definition file. Every Kubernetes object definition is a YAML file that includes at least the following items:

apiVersion: The Kubernetes API version that the definition belongs to.

kind: The Kubernetes object this file represents. For example, a pod or service.

metadata: This includes the object name with any labels that you may want to apply to it.

spec: This contains the configuration specific to the kind of object you are creating, such as the container image or the ports on which the container will be exposed.
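Putting these together, a minimal skeleton looks like the following (for illustration only; the names and values here are placeholders that the definitions you write below will fill in):

apiVersion: v1              # API version the object belongs to
kind: Service               # object type, for example Service or Pod
metadata:
  name: example-object      # placeholder object name
  labels:
    tier: backend           # optional labels
spec:                       # object-specific configuration
  ports:
    - protocol: TCP
      port: 9000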

First, create a directory to hold your Kubernetes object definitions.

SSH to your master node, then make the definitions directory that will hold your Kubernetes object definitions:

mkdir definitions

Navigate to the newly created definitions directory:

cd definitions

Build your PHP-FPM service by creating a php_service.yaml file:

nano php_service.yaml

Set kind as Service to define that this object is a service:

php_service.yaml


apiVersion: v1
kind: Service

Name the service php as it will give access to PHP-FPM:

php_service.yaml


metadata:
  name: php

You can group objects logically with labels. In this knowledge base, you will use labels to group the objects into "tiers" such as frontend or backend. The PHP pods will run behind this service, so you will label it as tier: backend.

php_service.yaml


  labels:
    tier: backend

A service determines which pods to access by using selector labels. A pod that matches these labels will be served, regardless of whether the pod was created before or after the service. You will add labels to your pods later in the tutorial.

Use the tier: backend label to assign the pod to the backend tier, and the app: php label to specify that the pod runs PHP. Add these two selector labels after the metadata section.

php_service.yaml


spec:
  selector:
    app: php
    tier: backend

Next, define the port used to access this service. You will use port 9000 in this tutorial. Add this port to the php_service.yaml file under spec:

php_service.yaml


  ports:
    - protocol: TCP
      port: 9000

Done! Your completed php_service.yaml file will look like this:

php_service.yaml

apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php
    tier: backend
  ports:
    - protocol: TCP
      port: 9000

Hit CTRL + o to save the file, and then CTRL + x to exit nano.

Now that you have created the object definition for your service, run it with the kubectl apply command and the -f argument, specifying your php_service.yaml file.

Create your service:

kubectl apply -f php_service.yaml

The following output confirms the service creation:

Output
service/php created

Check that your service is running:

kubectl get svc

You will notice your PHP-FPM service running:

Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
php          ClusterIP   10.100.59.238   <none>        9000/TCP   5m

There are several service types that Kubernetes supports. Your php service uses the default ClusterIP service type, which assigns an internal IP and makes the service reachable only from within the cluster.
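In other words, omitting the type is the same as declaring it explicitly. Here is a sketch of the php spec with the default spelled out (not required for this tutorial):

spec:
  type: ClusterIP
  selector:
    app: php
    tier: backend
  ports:
    - protocol: TCP
      port: 9000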

Now that your PHP-FPM service is ready, create the Nginx service. Create and open a new file named nginx_service.yaml in your editor:

nano nginx_service.yaml

This service will point to the Nginx pods, so name it nginx. Also add the tier: backend label, since the pods belong to the backend tier:

nginx_service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend

Target the pods with the selector labels app: nginx and tier: backend, similar to the php service. Make this service accessible on the default HTTP port 80.

nginx_service.yaml


spec:
  selector:
    app: nginx
    tier: backend
  ports:
    - protocol: TCP
      port: 80

The Nginx service will be publicly accessible from the internet via your Droplet's public IP address. You can find your_public_ip in your DigitalOcean Cloud Panel. Under spec.externalIPs, add:

nginx_service.yaml


spec:
  externalIPs:
    - your_public_ip

Your nginx_service.yaml file will look as follows:

nginx_service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  selector:
    app: nginx
    tier: backend
  ports:
    - protocol: TCP
      port: 80
  externalIPs:
    - your_public_ip

Save and close the file.

Execute the following command to create the Nginx service:

kubectl apply -f nginx_service.yaml

You will see the following output when the service is created:

Output
service/nginx created

Run the following command to view all running services:

kubectl get svc

You can see both the Nginx and PHP-FPM services listed in the output:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    10m
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     50s
php          ClusterIP   10.100.59.238   <none>           9000/TCP   5m

Please note, you can run the following command to delete a service:

kubectl delete svc/service_name

Now that you have created your PHP-FPM and Nginx services, you need to define where to store your application code and configuration files.


STEP 2 – Installing the DigitalOcean Storage Plug-In

Kubernetes provides various storage plug-ins that can create storage space for your environment. In this step, you will install the DigitalOcean storage plug-in to create block storage on DigitalOcean. Once the installation completes, it adds a storage class named do-block-storage that you will use to create your block storage.

First, you will configure a Kubernetes Secret object to store your DigitalOcean API token. Secret objects are used to share sensitive information, such as SSH keys and passwords, with other Kubernetes objects within the same namespace. Namespaces provide a way to logically separate your Kubernetes objects.

Open a file called secret.yaml with the editor:

nano secret.yaml

Name your Secret object digitalocean and add it to the kube-system namespace. kube-system is the default namespace for Kubernetes' internal services and is also used by the DigitalOcean storage plug-in to launch its components.

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system

Secrets use a data or stringData key instead of a spec key to hold the required information. The data parameter holds base64-encoded data that is automatically decoded when retrieved. The stringData parameter holds non-encoded data that is automatically encoded on creation or update, and it does not output the data when Secrets are retrieved. You will use stringData in this tutorial.
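For comparison, here is a sketch of the same Secret using the data key instead; the value is simply the base64 encoding of the placeholder your-api-token, which you could generate with echo -n 'your-api-token' | base64:

data:
  access-token: eW91ci1hcGktdG9rZW4=   # base64 of the placeholder "your-api-token"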

Add the access-token as stringData:

secret.yaml


stringData:
  access-token: your-api-token

Save and exit the file.

Your secret.yaml file will look like this:

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: your-api-token

Create the secret:

kubectl apply -f secret.yaml

Here is the output upon Secret creation:

Output
secret/digitalocean created

Execute the following command to view the secret:

kubectl -n kube-system get secret digitalocean

The output will look like this:

Output

NAME           TYPE     DATA   AGE
digitalocean   Opaque   1      41s

The Opaque type indicates that this Secret holds arbitrary, unstructured key-value data; it is the default type for Secrets created this way. You can learn more about Secret types in the Kubernetes Secret documentation. The DATA field displays the number of items stored in this Secret. Here, it shows 1 because you stored a single key.
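If you ever need to confirm what was stored, one option (a quick sketch using kubectl's jsonpath output) is to read the key back and decode it:

kubectl -n kube-system get secret digitalocean -o jsonpath='{.data.access-token}' | base64 --decode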

Now that your Secret is in place, install the DigitalOcean block storage plug-in:

kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.3.0.yaml

The output will look as follows:

Output

storageclass.storage.k8s.io/do-block-storage created
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
service/csi-attacher-doplug-in created
statefulset.apps/csi-attacher-doplug-in created
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
service/csi-provisioner-doplug-in created
statefulset.apps/csi-provisioner-doplug-in created
serviceaccount/csi-doplug-in created
clusterrole.rbac.authorization.k8s.io/csi-doplug-in created
clusterrolebinding.rbac.authorization.k8s.io/csi-doplug-in created
daemonset.apps/csi-doplug-in created

Now that you have installed the DigitalOcean storage plug-in, you can create block storage to keep your application code and configuration files.
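As an optional sanity check, you can confirm that the new storage class exists before moving on; do-block-storage should appear in the list:

kubectl get storageclass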

 

STEP 3 – Creating the Persistent Volume

With your Secret in place and the block storage plug-in installed, you are now ready to create your Persistent Volume. A Persistent Volume (PV) is block storage of a specified size that exists independently of a pod's life cycle. Using a Persistent Volume lets you manage or update your pods without worrying about losing your application code. A Persistent Volume is accessed through a PersistentVolumeClaim (PVC), which mounts the PV at the required path.

Open a file named code_volume.yaml through your editor:

nano code_volume.yaml

Define the PVC, named code, by adding the following parameters and values to your file:

code_volume.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code

The spec for a PVC includes the following items:

  • accessModes, which vary by use case. These are:
    • ReadWriteOnce: mounts the volume as read-write on a single node
    • ReadOnlyMany: mounts the volume as read-only on many nodes
    • ReadWriteMany: mounts the volume as read-write on many nodes
  • resources: the storage space that you need

DigitalOcean block storage can only be mounted to a single node, so set accessModes to ReadWriteOnce. This tutorial adds only a small amount of application code, so 1GB is sufficient for this use case. If you plan to store a larger amount of data or code on the volume, adjust the storage parameter to fit your requirements. You can increase the amount of storage after volume creation, but shrinking the disk is not supported.

code_volume.yaml


spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Next, define the storage class that Kubernetes will use to provision the volumes. You can use the do-block-storage class created by the DigitalOcean block storage plug-in.

code_volume.yaml


  storageClassName: do-block-storage

Your code_volume.yaml file will look similar to this:

code_volume.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

Save and exit the file.

Create the code PersistentVolumeClaim using kubectl:

kubectl apply -f code_volume.yaml

The following output shows that the object was created successfully and that you are ready to mount your 1GB PVC as a volume:

Output
persistentvolumeclaim/code created

To view available Persistent Volumes (PV):

kubectl get pv

You can see your PV listed:

Output

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS       REASON   AGE
pvc-ca4df10f-ab8c-11e8-b89d-12331aa95b13   1Gi        RWO            Delete           Bound    default/code   do-block-storage            2m

This output reflects your configuration, with the addition of the Reclaim Policy and Status fields. The Reclaim Policy defines what happens to the PV after the PVC accessing it is deleted; Delete removes the PV from Kubernetes as well as from the DigitalOcean infrastructure. You can learn more about the Reclaim Policy and Status in the Kubernetes PV documentation.
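You can also inspect the claim side at any time (the exact volume name and age will differ in your cluster); the STATUS column should show Bound, and the VOLUME column should match the PV name above:

kubectl get pvc code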

You have successfully created a Persistent Volume using the DigitalOcean block storage plug-in. Now that your Persistent Volume is ready, you can create your pods using a Deployment.

 

STEP 4 – Creating a PHP-FPM Deployment

In this step, you will learn how to use a Deployment to create your PHP-FPM pod. Deployments provide a uniform way to create, update, and manage pods by using ReplicaSets. If an update does not work as expected, a Deployment will automatically roll back its pods to a previous image.
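For reference, once the php Deployment created below exists, you could watch or reverse a rollout manually with kubectl's rollout commands (shown here only as a sketch):

kubectl rollout status deployment/php
kubectl rollout undo deployment/php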

The Deployment spec.selector key will list the labels of the pods it will manage. It will also use the template key to create the needed pods.

This step also introduces Init Containers. Init Containers run one or more commands before the regular containers defined under the pod's template key. In this tutorial, your Init Container will fetch a sample index.php file from GitHub using wget. These are the contents of the sample file:

index.php

<?php
echo phpinfo();

To create your Deployment, open a new file named php_deployment.yaml in your editor:

nano php_deployment.yaml

This Deployment will manage your PHP-FPM pods, so name the Deployment object php. The pods belong to the backend tier, so group the Deployment accordingly with the tier: backend label:

php_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend

In the Deployment spec, define how many copies of this pod to create by using the replicas parameter. The number of replicas will vary depending on your needs and available resources. You will create one replica in this KB:

php_deployment.yaml


spec:
  replicas: 1

Under the selector key, the deployment will manage pods that match the app: php and tier: backend labels.

php_deployment.yaml


  selector:
    matchLabels:
      app: php
      tier: backend

Next, the Deployment spec needs a template for your pod's object definition. This template defines the specifications to create the pod from. First, add the labels that were specified for the php service selectors and the Deployment's matchLabels. Add app: php and tier: backend under template.metadata.labels:

php_deployment.yaml


  template:
    metadata:
      labels:
        app: php
        tier: backend

A pod can have multiple containers and volumes, but each needs a name. You can selectively mount volumes to a container by specifying a mount path for each volume.

First, define the volumes that your containers will access. You created a PVC named code to hold your application code, so name this volume code as well. Add the following under spec.template.spec.volumes:

php_deployment.yaml


    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code

Next, define the container you want to run in this pod. You can find various images on the Docker Store, but in this KB you will use the php:7-fpm image. Add the following under spec.template.spec.containers:

php_deployment.yaml


      containers:
        - name: php
          image: php:7-fpm

Next, mount the volumes that the container needs access to. This container will run your PHP code, so it needs access to the code volume. Use mountPath to specify /code as the mount point.

Under spec.template.spec.containers.volumeMounts, add:

php_deployment.yaml


          volumeMounts:
            - name: code
              mountPath: /code

With the volume mounted, you now need to get your application code onto the volume. You may have previously used FTP/SFTP or cloned the code over an SSH connection to accomplish this, but this step will show you how to copy the code using an Init Container.

Depending on the complexity of your setup process, you can either use a single initContainer to run a script that builds your application, or use one initContainer per command. Make sure that the volumes are mounted to the initContainer.

In this KB, you will use a single Init Container with busybox to download the code. busybox is a small image that includes the wget utility that you will use to accomplish this.

Under spec.template.spec, add your initContainer and define the busybox image:

php_deployment.yaml


      initContainers:
        - name: install
          image: busybox

Your Init Container needs access to the code volume so that it can download the code to that location. Under spec.template.spec.initContainers, mount the code volume at the /code path:

php_deployment.yaml


          volumeMounts:
            - name: code
              mountPath: /code

Every Init Container needs to run a command. Your Init Container will use wget to download the code from GitHub into the /code working directory. The -O option gives the downloaded file a name; you will call this file index.php.

Note: Make sure to trust the code you’re pulling. Before pulling it to your server, review the source code to ensure you are comfortable with what the code does.

Add the following lines under the install container in spec.template.spec.initContainers:

php_deployment.yaml


          command:
            - wget
            - "-O"
            - "/code/index.php"
            - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

Your completed php_deployment.yaml file will look similar to this:

php_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
      tier: backend
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code
      containers:
        - name: php
          image: php:7-fpm
          volumeMounts:
            - name: code
              mountPath: /code
      initContainers:
        - name: install
          image: busybox
          volumeMounts:
            - name: code
              mountPath: /code
          command:
            - wget
            - "-O"
            - "/code/index.php"
            - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

Save the file and exit the editor.

Create the PHP-FPM Deployment with kubectl:

kubectl apply -f php_deployment.yaml

You will see the following output on Deployment creation:

Output
deployment.apps/php created

To summarize, this Deployment starts by downloading the specified images. It then requests the PersistentVolume from your PersistentVolumeClaim and serially runs your initContainers. Once they finish, the containers run and mount the volumes at the specified mount points. When all of these steps are complete, your pod will be up and running.

You can see your Deployment by running:

kubectl get deployments

You can view the output:

Output

NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
php    1         1         1            0           19s

This output helps you understand the current state of your Deployment. A Deployment is one of the controllers that maintain a desired state. The template you created specifies that the DESIRED state is 1 replica of the pod named php. The CURRENT field shows how many replicas are running, so this should match the DESIRED state. You can read about the remaining fields in the Kubernetes Deployments documentation.
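If you later need more capacity, you can change the desired replica count without editing the file, for example (adjust the count to your available resources; note that the change is not reflected in php_deployment.yaml unless you also update the file):

kubectl scale deployment php --replicas=2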

You can see the pods that this Deployment started by executing the following command:

kubectl get pods

The output of this command varies depending on how much time has passed since creating the Deployment. If you run it shortly after creation, the output will look like this:

NAME                   READY   STATUS     RESTARTS   AGE
php-86d59fd666-bf8zd   0/1     Init:0/1   0          9s

The columns describe the following information:

  • Ready: The number of containers in this pod that are ready.
  • Status: The status of the pod. Init indicates that the Init Containers are running. In this output, 0 out of 1 Init Containers have finished running.
  • Restarts: How many times the pod has restarted. This number will increase if any of your Init Containers fail; the Deployment will keep restarting the pod until it reaches the desired state.

Depending on the complexity of your startup scripts, it can take a few minutes for the status to change to podInitializing:

NAME                   READY   STATUS            RESTARTS   AGE
php-86d59fd666-lkwgn   0/1     podInitializing   0          39s

This means the Init Containers have finished and the containers are initializing. If you run the command once all of the containers are running, you will see the pod status change to Running:

NAME                   READY   STATUS    RESTARTS   AGE
php-86d59fd666-lkwgn   1/1     Running   0          1m

Your pod is now running successfully. If your pod does not start, you can debug it with the commands below:

  • Show detailed information about a pod:

kubectl describe pods pod-name

  • Shows logs generated by a pod:

kubectl logs pod-name

  • Shows logs for a specific container in a pod:

kubectl logs pod-name container-name

Your application code is mounted, and the PHP-FPM service is ready to handle connections. Now you can create your Nginx Deployment.
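If you want to double-check that the Init Container actually placed the file, one option (substitute your actual pod name from kubectl get pods) is to list the mounted directory inside the pod; you should see index.php in the listing:

kubectl exec pod-name -- ls /code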

STEP 5 – Creating the Nginx Deployment

In this final step, you will use a ConfigMap to configure Nginx. A ConfigMap holds your configuration in a key-value format that you can reference in other Kubernetes object definitions. This approach gives you the flexibility to reuse or swap the image with a different Nginx version if needed. Updating the ConfigMap will automatically replicate the changes to any pod mounting it.
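For example, after creating the ConfigMap below, you could adjust the configuration in place with kubectl edit; keep in mind that the mounted file refreshes after a short delay, and Nginx itself only picks up the change after a reload or pod restart:

kubectl edit configmap nginx-config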

Create a nginx_configMap.yaml file for your ConfigMap with your editor:

nano nginx_configMap.yaml

Name the ConfigMap nginx-config and group it into the backend tier with the tier: backend label:

nginx_configMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend

Next, add the data for the ConfigMap. Name the key config and add the contents of your Nginx configuration file as the value. You can use the example Nginx configuration from this tutorial.

Because Kubernetes can route requests to the appropriate host for a service, you can enter the name of your PHP-FPM service in the fastcgi_pass parameter instead of its IP address. Add the following lines to your nginx_configMap.yaml file:

nginx_configMap.yaml


data:
  config: |
    server {
      index index.php index.html;
      error_log /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
        try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

Your nginx_configMap.yaml file will look similar to this:

nginx_configMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend
data:
  config: |
    server {
      index index.php index.html;
      error_log /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
        try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

Save the file and exit the editor.

Create the ConfigMap:

kubectl apply -f nginx_configMap.yaml

You will see the following output:

Output
configmap/nginx-config created

You have finished creating your ConfigMap and you can now build your Nginx Deployment.

Begin by opening a new nginx_deployment.yaml file in the editor:

nano nginx_deployment.yaml

Name the Deployment nginx and add the label tier: backend:

nginx_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend

Specify one replica in the Deployment spec. This Deployment will manage pods with the labels app: nginx and tier: backend. Add the following parameters and values:

nginx_deployment.yaml


spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend

Next, add the pod template. Use the same labels that you added for the Deployment's selector.matchLabels. Add the following:

nginx_deployment.yaml


  template:
    metadata:
      labels:
        app: nginx
        tier: backend

Allow Nginx access to the code PVC that you created previously. Under spec.template.spec.volumes, add:

nginx_deployment.yaml


    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code

Pods can mount a ConfigMap as a volume. Specifying a file name and a key will create a file whose content is the key's value. To use the ConfigMap, set path to the name of the file that will hold the contents of the key. Here, you will create the file site.conf from the key config. Under spec.template.spec.volumes, add the following lines:

nginx_deployment.yaml


        - name: config
          configMap:
            name: nginx-config
            items:
              - key: config
                path: site.conf

Warning: If a file is not specified, the contents of the key will replace the mountPath of the volume. This means that if a path is not explicitly specified, you will lose all existing content in the target folder.
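As an aside, if you ever need to add just one file without hiding the rest of an existing directory, Kubernetes also supports subPath mounts; a minimal sketch (not used in this tutorial, where replacing the directory's contents is intended):

volumeMounts:
  - name: config
    mountPath: /etc/nginx/conf.d/site.conf
    subPath: site.conf

Note that files mounted with subPath do not receive automatic ConfigMap updates.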

Next, define the image to create your pod from. This tutorial uses the nginx:1.7.9 image for stability, but you can find other Nginx images on the Docker Store. Also, make Nginx accessible on port 80. Under spec.template.spec, add:

nginx_deployment.yaml


      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80

Nginx and PHP-FPM need to access the file at the same path, so mount the code volume at /code:

nginx_deployment.yaml


          volumeMounts:
            - name: code
              mountPath: /code

The nginx:1.7.9 image automatically loads any configuration files in the /etc/nginx/conf.d directory. Mounting the config volume in this directory will create the file /etc/nginx/conf.d/site.conf. Add the following lines under volumeMounts:

nginx_deployment.yaml


            - name: config
              mountPath: /etc/nginx/conf.d

Your nginx_deployment.yaml file will look similar to this:

nginx_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend
  template:
    metadata:
      labels:
        app: nginx
        tier: backend
    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code
        - name: config
          configMap:
            name: nginx-config
            items:
              - key: config
                path: site.conf
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: code
              mountPath: /code
            - name: config
              mountPath: /etc/nginx/conf.d

Save the file and exit the editor.

Create the Nginx Deployment:

kubectl apply -f nginx_deployment.yaml

Now your Deployment is created. Here is the output:

Output
deployment.apps/nginx created

List your Deployments by running the following command:

kubectl get deployments

The following output shows both the Nginx and PHP-FPM Deployments:

Output

NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           16s
php     1         1         1            1           7m

List the pods managed by both of the Deployments:

kubectl get pods

You can see the pods that are running in the following output:

Output

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7bf5476b6f-zppml   1/1     Running   0          32s
php-86d59fd666-lkwgn     1/1     Running   0          7m

Now that all of the Kubernetes objects are active, you can visit the Nginx service in your browser.

List the running services:

kubectl get services -o wide

Get the External IP for your Nginx service:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    39m   <none>
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     27m   app=nginx,tier=backend
php          ClusterIP   10.100.59.238   <none>           9000/TCP   34m   app=php,tier=backend

In your browser, visit your server by entering http://your_public_ip. You will see the output of phpinfo(), confirming that your Kubernetes services are up and running.
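If you prefer the command line, a quick check from any machine that can reach your Droplet works as well (assuming nothing is blocking port 80):

curl http://your_public_ip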

Conclusion

In this step-by-step guide, you containerized the PHP-FPM and Nginx services so that you can manage them independently. This approach improves the scalability of your project as it grows and allows you to use resources more efficiently. You also stored your application code on a volume so that you can easily update your services in the future.
