<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>@mronx</title><generator>teletype.in</generator><description><![CDATA[@mronx]]></description><link>https://teletype.in/@mronx?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mronx</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/mronx?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/mronx?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Wed, 13 May 2026 23:29:58 GMT</pubDate><lastBuildDate>Wed, 13 May 2026 23:29:58 GMT</lastBuildDate><item><guid isPermaLink="true">https://teletype.in/@mronx/pLyEGJJ4nic</guid><link>https://teletype.in/@mronx/pLyEGJJ4nic?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mronx</link><comments>https://teletype.in/@mronx/pLyEGJJ4nic?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mronx#comments</comments><dc:creator>mronx</dc:creator><title>Kubernetes: deploy Laravel the easy way </title><pubDate>Sun, 26 Feb 2023 13:55:04 GMT</pubDate><description><![CDATA[<img src="https://learnk8s.io/a/52d7048ad9ab46d9a2182847fe02b20f.svg"></img>TL;DR: In this article, you will learn the basics of how to deploy a Laravel application in Kubernetes.]]></description><content:encoded><![CDATA[
  <figure id="HDmf" class="m_original">
    <img src="https://learnk8s.io/a/52d7048ad9ab46d9a2182847fe02b20f.svg" width="225" />
  </figure>
  <hr />
  <p id="Xhth"><strong>TL;DR:</strong> In this article, you will learn the basics of how to deploy a Laravel application in Kubernetes.</p>
  <p id="XLPf">Laravel is an excellent framework for developing PHP applications.</p>
  <p id="cCU5">Whether you need to prototype a new idea, develop an MVP (Minimum Viable Product) or release a full-fledged enterprise system, Laravel facilitates all of the development tasks and workflows.</p>
  <p id="Lo2y"><em>How you deal with deploying the application is a different story.</em></p>
  <p id="ErMB"><a href="https://laravel.com/docs/7.x/homestead" target="_blank">Vagrant is an excellent choice to set up a development environment</a> that mirrors your production environment.</p>
  <p id="OWZA">But it&#x27;s still limited to a single machine.</p>
  <p id="85Sg"><strong>In production, you will most likely require more than just one web server and database.</strong></p>
  <p id="lpjB">And you probably don&#x27;t have a single app, but multiple apps with different concerns such as an API, a front-end, workers to process batch jobs, etc.</p>
  <p id="FzX0"><em>How do you deploy your apps and make sure that they can scale efficiently with your users?</em></p>
  <p id="A65l">In this article, you will learn how to set up a Laravel application in Kubernetes.</p>
  <h2 id="kubernetes-why-and-what-">Kubernetes, why and what?</h2>
  <p id="NCF6"><em>Who has lots of applications deployed in production?</em></p>
  <p id="R7T7"><strong>Google, of course.</strong></p>
  <p id="Q5u8"><a href="https://kubernetes.io/" target="_blank">Kubernetes is an open-source tool</a> that was initially born from Google to facilitate a large number of deployments across their infrastructure.</p>
  <p id="Qmy6">It is good at three things:</p>
  <ol id="4TuU">
    <li id="FwMB">Running any type of app (not just PHP).</li>
    <li id="W5yU">Scheduling deployments across several servers.</li>
    <li id="bk5s">Being programmable.</li>
  </ol>
  <p id="Eey5">Let&#x27;s have a look at how you can leverage Kubernetes to deploy a Laravel app.</p>
  <h2 id="deploying-a-laravel-application-to-minikube">Deploying a Laravel Application to Minikube</h2>
  <p id="UJKU">You can run Kubernetes on several cloud hosting providers, such as <a href="https://cloud.google.com/kubernetes-engine" target="_blank">Google Kubernetes Engine (GKE)</a>, <a href="https://aws.amazon.com/eks/" target="_blank">Amazon Elastic Kubernetes Service (EKS)</a> and <a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" target="_blank">Azure Kubernetes Service (AKS)</a>.</p>
  <p id="xqRK">In this tutorial, you will run the application on <a href="https://minikube.sigs.k8s.io/docs/" target="_blank">Minikube</a> — a tool that makes it easier to run Kubernetes locally.</p>
  <p id="XPwc">Similar to Vagrant, Minikube is merely a Virtual Machine that contains a Kubernetes cluster.</p>
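  <p>If you don't have a local cluster running yet, Minikube starts one with a single command. A minimal sketch; the resource flags are optional tuning, not requirements:</p>

```shell
# Start a local single-node Kubernetes cluster
# (--memory/--cpus are optional; defaults work on most machines)
minikube start --memory=4096 --cpus=2

# Verify that kubectl can reach the new cluster
kubectl cluster-info
```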
  <h2 id="the-application">The application</h2>
  <p id="2uqv">I have prepared a simple Laravel application which you can clone from <a href="https://github.com/learnk8s/laravel-kubernetes-demo" target="_blank">the repository on GitHub</a>.</p>
  <p id="J7zy">It is nothing more than a fresh Laravel installation.</p>
  <p id="8Ped">You can therefore follow this tutorial either with the demo application or with a fresh Laravel application of your own.</p>
  <p id="YB3V">Let&#x27;s get started by cloning the project with:</p>
  <p id="pM0T">bash</p>
  <pre id="t1Pn">git clone https://github.com/learnk8s/laravel-kubernetes-demo.git
cd laravel-kubernetes-demo</pre>
  <h2 id="before-you-start">Before you start</h2>
  <p id="IoS2">To follow along with this demonstration, you will need the following tools installed on your computer:</p>
  <ol id="5YPB">
    <li id="DiZw"><a href="https://docs.docker.com/install/" target="_blank">Docker</a></li>
    <li id="mtDd"><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" target="_blank">kubectl</a></li>
    <li id="xf2x"><a href="https://github.com/kubernetes/minikube/releases" target="_blank">minikube</a></li>
  </ol>
  <blockquote id="u38o">Are you having problems installing and running these applications on Windows? Check out the article <a href="https://learnk8s.io/installing-docker-kubernetes-windows" target="_blank">Getting started with Docker and Kubernetes on Windows 10</a> for a step-by-step guide.</blockquote>
  <h2 id="packaging-laravel-in-a-container">Packaging Laravel in a container</h2>
  <p id="pTCu">Kubernetes doesn&#x27;t know how to deploy Laravel apps.</p>
  <p id="FjQu"><em>Or Java.</em></p>
  <p id="RsZo"><em>Or Node.js.</em></p>
  <p id="rX3D"><em>Or any other programming language.</em></p>
  <p id="JUT8"><strong>Kubernetes only knows how to deploy containers.</strong></p>
  <p id="XlXL">Containers are a Linux feature that is used to limit what a process can do.</p>
  <p id="TDRm">When you start a process such as PHP as a container, you can define how much memory and CPU it can use.</p>
  <p id="2gdr">Also, you can define what network and filesystem it is allowed to see (and a few more things).</p>
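  <p>For example, Docker exposes those limits as flags on <code>docker run</code>. A generic illustration (not part of the deployment that follows):</p>

```shell
# Cap the container at 256 MiB of RAM and half a CPU core
docker run --memory=256m --cpus=0.5 php:7.1.8-apache
```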
  <p id="ClTt">You could use containers to isolate and launch several PHP instances on your server.</p>
  <p id="60BQ"><em>Just as you use virtual machines to isolate your development environment.</em></p>
  <p id="WW0j">Docker is the most popular tool to create and run containers.</p>
  <p id="stsi">But there are several other options such as <a href="https://en.wikipedia.org/wiki/LXC" target="_blank">LXC</a>, <a href="https://podman.io/" target="_blank">Podman</a>, <a href="https://containerd.io/" target="_blank">containerd</a>, etc.</p>
  <p id="d472"><strong>In this tutorial, you will use Docker.</strong></p>
  <p id="B7fN">So, as a first step, you should build a Docker image of your application.</p>
  <p id="awiL">An image contains all the files needed to launch the container.</p>
  <p id="8XU7">Go ahead and create a <code>Dockerfile</code> <em>(capital &quot;D&quot;)</em> in the root of your project:</p>
  <p id="Yxfz">Dockerfile</p>
  <pre id="ZuAa">FROM composer:1.6.5 as build
WORKDIR /app
COPY . /app
RUN composer install

FROM php:7.1.8-apache
EXPOSE 80
COPY --from=build /app /app
COPY vhost.conf /etc/apache2/sites-available/000-default.conf
RUN chown -R www-data:www-data /app &amp;&amp; a2enmod rewrite</pre>
  <p id="sboO">This <code>Dockerfile</code> has two parts:</p>
  <ul id="wbXy">
    <li id="Or1e">In the first part, you install all the application&#x27;s dependencies.</li>
    <li id="7ItD">The second part prepares the Apache web server with mod_php.</li>
  </ul>
  <blockquote id="qLK5">The <code>Dockerfile</code> above uses a <a href="https://docs.docker.com/develop/develop-images/multistage-build/" target="_blank">multi-stage build</a>.</blockquote>
  <p id="XynM">The Dockerfile is just a description of what files should be bundled in the container.</p>
  <p id="EaTX">You can execute the instructions and create the Docker image with:</p>
  <p id="15hl">bash</p>
  <pre id="Kqeu">docker build -t laravel-kubernetes-demo .</pre>
  <p id="KC15">Note the following about this command:</p>
  <ul id="0n5c">
    <li id="3mVj"><code>-t laravel-kubernetes-demo</code> defines the name (&quot;tag&quot;) of your image — in this case, the image is just called <code>laravel-kubernetes-demo</code></li>
    <li id="OMxR"><code>.</code> is the location of the <code>Dockerfile</code> and application code — in this case, it&#x27;s the current directory</li>
  </ul>
  <p id="Palw"><strong>The output is a Docker image.</strong></p>
  <p id="OPq1"><em>What is a Docker image?</em></p>
  <p id="kHX4">A Docker image is an archive containing all the files that belong to a container.</p>
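  <p>You can see this for yourself: <code>docker save</code> exports an image as a tar archive whose entries are the image&#x27;s layers and metadata. A quick sketch, assuming the image built above exists locally:</p>

```shell
# Export the image and list the first entries of the archive
docker save laravel-kubernetes-demo | tar -tf - | head
```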
  <p id="Nvf9">If you want to test it, you should run the container (and the process inside it).</p>
  <p id="7h9j">You can run the container with:</p>
  <p id="Kflk">bash</p>
  <pre id="BFcJ">docker run -ti \
  -p 8080:80 \
  -e APP_KEY=base64:cUPmwHx4LXa4Z25HhzFiWCf7TlQmSqnt98pnuiHmzgY= \
  laravel-kubernetes-demo</pre>
  <p id="991D">And the application should be available on <a href="http://localhost:8080/" target="_blank">http://localhost:8080</a>.</p>
  <blockquote id="EIVX">Please note that, with this setup, the container is generic and the <code>APP_KEY</code> is not hardcoded or shared.</blockquote>
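  <p>The <code>APP_KEY</code> above is only a demo value. Laravel can print a fresh one with <code>php artisan key:generate --show</code>; if PHP isn&#x27;t handy, you can produce an equivalent value (a base64-encoded 32-byte key) with standard tools:</p>

```shell
# Build a Laravel-style application key: "base64:" + 32 random bytes, base64-encoded
APP_KEY="base64:$(head -c 32 /dev/urandom | base64)"
echo "$APP_KEY"
```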
  <h2 id="sharing-docker-image-with-a-registry">Sharing Docker image with a registry</h2>
  <p id="LjoW"><em>You built and ran the container locally, but how do you make it available to your Kubernetes cluster?</em></p>
  <p id="0YbL">Usually, to share images, you can use a container registry such as <a href="https://hub.docker.com/" target="_blank">Docker Hub</a> or <a href="https://quay.io/" target="_blank">Quay.io</a>.</p>
  <p id="qPkb">Container registries are web apps that store container images — like the <code>laravel-kubernetes-demo</code> image that you built earlier.</p>
  <p id="Z7LB">In this tutorial you will use Docker Hub to upload your containers.</p>
  <p id="w4rt"><strong>To use Docker Hub, you first have to <a href="https://hub.docker.com/signup" target="_blank">create a Docker ID</a>.</strong></p>
  <p id="YTuj">A Docker ID is your Docker Hub username.</p>
  <p id="si0x">Once you have your Docker ID, you have to authorise Docker to connect to your Docker Hub account:</p>
  <p id="oRW6">bash</p>
  <pre id="UsCS">docker login</pre>
  <p id="i0lJ">Before you can upload your image, there is one last thing to do.</p>
  <p id="kIzp"><strong>Images uploaded to Docker Hub must have a name of the form <code>username/image</code>:</strong></p>
  <ul id="53Kw">
    <li id="NoK5"><code>username</code> is your Docker ID</li>
    <li id="xJrV"><code>image</code> is the name of the image</li>
  </ul>
  <p id="8BAl">If you wish to rename your image according to this format, run the following command:</p>
  <p id="y9W2">bash</p>
  <pre id="a1Kk">docker tag laravel-kubernetes-demo &lt;my-username&gt;/laravel-kubernetes-demo</pre>
  <blockquote id="RG3o">Please replace <code>&lt;my-username&gt;</code> with your Docker ID.</blockquote>
  <p id="gNDd"><strong>Now you can upload your image to Docker Hub:</strong></p>
  <p id="j3Jp">bash</p>
  <pre id="4l54">docker push &lt;my-username&gt;/laravel-kubernetes-demo</pre>
  <p id="ooMl">Your image is now publicly available as <code>&lt;my-username&gt;/laravel-kubernetes-demo</code> on Docker Hub and everybody can download and run it.</p>
  <p id="s0Hj">To verify this, you can re-run your app, but this time using the new image name.</p>
  <p id="axzZ">bash</p>
  <pre id="owsC">docker run -ti \
  -p 8080:80 \
  -e APP_KEY=base64:cUPmwHx4LXa4Z25HhzFiWCf7TlQmSqnt98pnuiHmzgY= \
  &lt;my-username&gt;/laravel-kubernetes-demo</pre>
  <p id="Gy65">Everything should work exactly as before.</p>
  <p id="qohi">The image is now available in the registry.</p>
  <p id="538R">Anybody who has access to the registry can use it.</p>
  <h2 id="deploying-laravel-in-kubernetes">Deploying Laravel in Kubernetes</h2>
  <p id="GaXO">Now that the application&#x27;s image is built and available, you can go ahead and deploy it.</p>
  <p id="VXfH">You can deploy the container image with:</p>
  <p id="EYWZ">bash</p>
  <pre id="f4RH">kubectl run laravel-kubernetes-demo \
  --restart=Never \
  --image=&lt;my-username&gt;/laravel-kubernetes-demo \
  --port=80 \
  --env=APP_KEY=base64:cUPmwHx4LXa4Z25HhzFiWCf7TlQmSqnt98pnuiHmzgY=</pre>
  <p id="QrtG">Let&#x27;s review the command:</p>
  <ul id="2YnQ">
    <li id="3YV1"><code>kubectl run laravel-kubernetes-demo</code> deploys an app in the cluster and gives it the name <code>laravel-kubernetes-demo</code>.</li>
    <li id="j69M"><code>--restart=Never</code> tells Kubernetes to create a bare Pod and not to restart the app when it crashes.</li>
    <li id="42Hd"><code>--image=&lt;my-username&gt;/laravel-kubernetes-demo</code> and <code>--port=80</code> are the name of the image and the port exposed on the container.</li>
  </ul>
  <blockquote id="3RNJ">Please note that 80 is the port exposed by the container. If you make a mistake, you don&#x27;t need to pick a different port; you can execute <code>kubectl delete pod laravel-kubernetes-demo</code> and start again, still using port 80.</blockquote>
  <p id="wxCg">In Kubernetes, an app deployed in the cluster is called a Pod.</p>
  <p id="Kcpw">You can check that a Pod is successfully created with:</p>
  <p id="iu9w">bash</p>
  <pre id="PNFx">kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
laravel-kubernetes-demo     1/1       Running   0          18m</pre>
  <p id="uJRp">You can also use the Minikube dashboard to monitor the pods and cluster.</p>
  <p id="Wzr8">The GUI also helps with visualising most of the discussed concepts.</p>
  <p id="PLBx">To view the dashboard, just run the following:</p>
  <p id="Tk2x">bash</p>
  <pre id="p9Lw">minikube dashboard</pre>
  <p id="KWGe">or to acquire the dashboard&#x27;s URL address:</p>
  <p id="Et9W">bash</p>
  <pre id="umGa">minikube dashboard --url=true</pre>
  <h2 id="exposing-the-application">Exposing the application</h2>
  <p id="oqz4">So far, you have only deployed an application.</p>
  <p id="MvQY"><em>But how do you access it?</em></p>
  <p id="Ryca">The deployed application has a dynamic IP address assigned.</p>
  <p id="a0Fz">That means that every time you deploy or scale an app, a different IP address is assigned to it.</p>
  <p id="KRS2">You might find it difficult to route the traffic directly to the app.</p>
  <p id="e9Wq">To avoid updating IP addresses manually when visiting the app, you can use a load balancer.</p>
  <p id="FF4R">In Kubernetes, a Service is a load balancer for a collection of Pods.</p>
  <p id="0Z15">So even if the IP address of a Pod changes, the IP address of the Service is always fixed.</p>
  <p id="s6f9">The Service is designed to keep track of the Pods&#x27; IP addresses, so you don&#x27;t have to update IP address manually.</p>
  <p id="JexA"><em>[Diagram: incoming traffic reaches the Service at its fixed IP (e.g. 10.0.1.0), which routes it to the Pods at their current IPs (e.g. 10.0.0.1, 10.0.0.3), even as Pods restart.]</em></p>
  <p id="K1JO">You can create a service with:</p>
  <p id="qysz">bash</p>
  <pre id="Y0Y9">kubectl expose pods laravel-kubernetes-demo --type=NodePort --port=80
service &quot;laravel-kubernetes-demo&quot; exposed</pre>
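  <p>The <code>kubectl expose</code> command above can also be written declaratively. A minimal sketch of the equivalent Service manifest, assuming the Pod still carries the <code>run=laravel-kubernetes-demo</code> label that <code>kubectl run</code> applied:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: laravel-kubernetes-demo
spec:
  type: NodePort                       # expose on a node port, reachable from outside the cluster
  selector:
    run: laravel-kubernetes-demo       # matches the label on the Pod
  ports:
    - port: 80                         # Service port
      targetPort: 80                   # container port
```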
  <p id="TLCB">You can verify that the Service was created successfully with:</p>
  <p id="Ce5r">bash</p>
  <pre id="S0X8">kubectl get services</pre>
  <p id="GyK1">You can also view the running service under the &quot;Services&quot; navigation menu within the dashboard.</p>
  <p id="SkAV">A more exciting way to verify this deployment and the service is to see it in the browser.</p>
  <p id="zJtZ">To obtain the URL of the application (service), you can use the following command:</p>
  <p id="qjwZ">bash</p>
  <pre id="pGC6">minikube service --url=true laravel-kubernetes-demo
http://192.168.99.101:31399</pre>
  <p id="7DcA">or, launch the application directly in the browser:</p>
  <p id="zWUv">bash</p>
  <pre id="WoZp">minikube service laravel-kubernetes-demo</pre>
  <h2 id="breaking-the-app">Breaking the app</h2>
  <p id="eJV2">At this point you should have a local Kubernetes cluster with:</p>
  <ul id="LYwU">
    <li id="yAsG">A single Pod running</li>
    <li id="muYn">A Service that routes traffic to a Pod</li>
  </ul>
  <p id="Ccgf">Having a single Pod is usually not enough.</p>
  <p id="9Ik4"><em>For instance, what happens when the Pod is accidentally deleted?</em></p>
  <p id="Wktp">Let&#x27;s find out.</p>
  <p id="O3M3">You can delete the Pod with:</p>
  <p id="AdYP">bash</p>
  <pre id="X7J4">kubectl delete pod laravel-kubernetes-demo</pre>
  <p id="QzqE">If you visit the app with <code>minikube service laravel-kubernetes-demo</code>, does it still work?</p>
  <p id="05YN"><em>It doesn&#x27;t.</em></p>
  <p id="fEee"><strong>But why?</strong></p>
  <p id="FjDp">You deployed a single Pod in isolation.</p>
  <p id="oJZI">There&#x27;s no process looking after and respawning it when it&#x27;s deleted.</p>
  <p id="cvih">As you can imagine, this deployment is of limited use.</p>
  <p id="wAjL">It&#x27;d be better to have a mechanism that watches Pods and restarts them when they are deleted or crash.</p>
  <p id="H3H4">Kubernetes has an abstraction designed to solve that specific challenge: the Deployment object.</p>
  <p id="sln6">Here&#x27;s an example for a Deployment definition:</p>
  <p id="WG10">deployment.yaml</p>
  <pre id="8nK7">apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-kubernetes-demo
spec:
  selector:
    matchLabels:
      run: laravel-kubernetes-demo
  template:
    metadata:
      labels:
        run: laravel-kubernetes-demo
    spec:
      containers:
        - name: demo
          image: &lt;my-username&gt;/laravel-kubernetes-demo
          ports:
            - containerPort: 80
          env:
            - name: APP_KEY
              value: base64:cUPmwHx4LXa4Z25HhzFiWCf7TlQmSqnt98pnuiHmzgY=</pre>
  <p id="pKgv">You can save the file above as <code>deployment.yaml</code>.</p>
  <p id="1fsV">You can submit the Deployment to the cluster with:</p>
  <p id="vLju">bash</p>
  <pre id="DaNv">kubectl apply -f deployment.yaml</pre>
  <p id="9hbP">If you try to visit the application with <code>minikube service laravel-kubernetes-demo</code>, <em>do you see the app?</em></p>
  <p id="o0sy">Yes, it worked.</p>
  <p id="113u"><em>Did the Deployment create a Pod?</em></p>
  <p id="gZw6">Let&#x27;s find out:</p>
  <p id="RJ21">bash</p>
  <pre id="0DYu">kubectl get pods</pre>
  <p id="smZU">The Deployment created a single Pod.</p>
  <p id="BLsh"><em>What happens when you delete it again?</em></p>
  <p id="x3Tc">bash</p>
  <pre id="28rv">kubectl delete pod &lt;replace with pod id&gt;</pre>
  <p id="Q7Sz">The Deployment immediately respawned another Pod.</p>
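  <p>To watch the respawn as it happens, you can keep a watch open in a second terminal while you delete the Pod:</p>

```shell
# Stream Pod changes: you should see the old Pod terminate
# and the Deployment create a replacement
kubectl get pods --watch
```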
  <p id="laQc"><strong>Great!</strong></p>
  <h2 id="scaling-the-application">Scaling the application</h2>
  <p id="yHoP">You have successfully deployed a resilient application in Kubernetes.</p>
  <p id="96pa">But you still have one deployment with a single Pod running.</p>
  <p id="qevq"><em>What if your application becomes more popular?</em></p>
  <p id="nnNG">Let&#x27;s scale this deployment to three instances.</p>
  <p id="nDpG"><em>[Diagram: the Service distributes incoming traffic across Pod 1, Pod 2 and Pod 3, and keeps routing correctly as Pods restart.]</em></p>
  <p id="FWuq">You can use the following command to scale the Deployment:</p>
  <p id="AmS1">bash</p>
  <pre id="wyHt">kubectl scale --replicas=3 deployment/laravel-kubernetes-demo
deployment &quot;laravel-kubernetes-demo&quot; scaled</pre>
  <p id="NEFt">You have three replicas.</p>
  <p id="W6b9">You can verify it with:</p>
  <p id="9OsG">bash</p>
  <pre id="p8fb">kubectl get deployment,pods
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
laravel-kubernetes-demo   3         3         3            3           59m</pre>
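  <p>Scaling imperatively is handy for experiments, but the same result can be captured declaratively so it survives a re-apply. A sketch: set <code>replicas</code> in the Deployment from earlier and run <code>kubectl apply -f deployment.yaml</code> again:</p>

```yaml
# deployment.yaml (fragment) — add replicas under the top-level spec
spec:
  replicas: 3                          # desired Pod count; Kubernetes reconciles to this
  selector:
    matchLabels:
      run: laravel-kubernetes-demo
```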
  <p id="YTbO">You can also see this in the Dashboard under Pods or in the Service detail screen.</p>
  <p id="ZwLk">Now you&#x27;re running three instances of the applications using three Pods.</p>
  <p id="HqyN">Imagine your application becoming even more popular.</p>
  <p id="9pm7">Thousands of visitors are using your website or software.</p>
  <p id="9mNK">In the past, you may have been busy writing more scripts to create more instances of your application.</p>
  <p id="YFJc">In Kubernetes you can scale to multiple instances in a snap:</p>
  <p id="OTup">bash</p>
  <pre id="zxPO">kubectl scale --replicas=10 deployment/laravel-kubernetes-demo
deployment &quot;laravel-kubernetes-demo&quot; scaled</pre>
  <p id="8cLb">You can see how convenient it is to use Kubernetes to scale your website.</p>
  <h2 id="using-nginx-ingress-to-expose-the-app">Using Nginx Ingress to expose the app</h2>
  <p id="ptUB">You&#x27;ve already achieved great things; you deployed the application and scaled the deployment.</p>
  <p id="W78G">You have already seen the running application in the browser when pointed to the cluster&#x27;s (Minikube) IP address and node&#x27;s port number.</p>
  <p id="ilBA">Now, you will see how to access the application through an assigned URL as you would do when deploying to the cloud.</p>
  <p id="wFLD">To use a URL in Kubernetes, you need an Ingress.</p>
  <p id="LqJS">An Ingress is a set of rules to allow inbound connections to reach a Kubernetes cluster.</p>
  <p id="OVsT">In the past, you might have used Nginx or Apache as a reverse proxy.</p>
  <p id="MMzF">The Ingress is the equivalent of a reverse proxy in Kubernetes.</p>
  <p id="47kv"><em>[Diagram: incoming traffic flows through the Ingress to the Service, which routes it to Pod 1, Pod 2 and Pod 3.]</em></p>
  <p id="0u3y">I have included an <code>ingress.yaml</code> file with the source code of this demo application with the following contents:</p>
  <p id="iDdJ">ingress.yaml</p>
  <pre id="0G8M">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laravel-kubernetes-demo-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: laravel-kubernetes-demo
                port:
                  number: 80</pre>
  <p id="2gv0">Among the basic content you would expect from a Kubernetes resource file, this file defines a set of rules to follow when routing inbound traffic.</p>
  <p id="nZ1o">The Ingress resource is useless without an Ingress controller, so you will need to create a new controller or use an existing one.</p>
  <p id="h8fM">Minikube ships with an Nginx-based Ingress controller, which you can enable with:</p>
  <p id="O3Xg">bash</p>
  <pre id="3bQr">minikube addons enable ingress</pre>
  <blockquote id="CxyE">Please note that it may take a few minutes for Minikube to download and install Nginx as an Ingress controller.</blockquote>
  <p id="7bWr">Once you have enabled the Ingress addon, you can create the Ingress in this way:</p>
  <p id="xyeR">bash</p>
  <pre id="PO4e">kubectl create -f ingress.yaml</pre>
  <p id="epdx">You can verify and obtain the Ingress&#x27; information by running the following command:</p>
  <p id="IiOC">bash</p>
  <pre id="mP40">kubectl describe ing laravel-kubernetes-demo-ingress
Name:             laravel-kubernetes-demo-ingress
Namespace:        default
Address:          192.168.99.101
Default backend:  default-http-server:80 (&lt;none&gt;)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /   laravel-kubernetes-demo:80 (172.17.0.6:80)
Annotations:  ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  39s   nginx-ingress-controller  Ingress default/laravel-kubernetes-demo-ingress
  Normal  UPDATE  20s   nginx-ingress-controller  Ingress default/laravel-kubernetes-demo-ingress</pre>
  <p id="52y2"><em>But where do you access the app?</em></p>
  <p id="rPM9">You should visit the IP address of the cluster.</p>
  <p id="DpJ6">You can use minikube&#x27;s IP address and visit <a href="http://minikube_ip/" target="_blank">http://minikube_ip</a>.</p>
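  <p>Minikube can tell you that IP address. A quick sketch for checking the Ingress from the command line (the response body depends on your app):</p>

```shell
# Print the cluster's IP address
minikube ip

# Fetch the app through the Ingress
curl http://$(minikube ip)/
```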
  <h2 id="this-is-just-the-beginning">This is just the beginning</h2>
  <p id="c5G0">Hopefully, this article has helped you in getting acquainted with Kubernetes.</p>
  <p id="JiqT">From my own experience, once you have performed similar deployments a few times, they start to feel routine and make a lot more sense.</p>
  <p id="Os1f">But our Kubernetes journey has only just begun.</p>
  <h2 id="that-s-all-folks-">That&#x27;s all folks!</h2>
  <p id="lEde">If you enjoyed this article, you might find the following articles interesting:</p>
  <ul id="4khJ">
    <li id="ngZ0"><a href="https://learnk8s.io/blog/kubernetes-on-solar-plants" target="_blank">Kubernetes to control IoT</a> devices such as Raspberry Pis and build your Internet of Things automated fleet.</li>
    <li id="Nxjq">Learn how you can use virtual machines that can disappear at any time to <a href="https://learnk8s.io/blog/kubernetes-spot-instances" target="_blank">lower your infrastructure costs</a>.</li>
  </ul>
  <p id="weTO">Don&#x27;t miss the next article!</p>
  <p id="v7ZM">Be the first to be notified when a new article or Kubernetes experiment is published.</p>
  <p id="Xq0X">source: <a href="https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way" target="_blank">https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way</a></p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@mronx/L5L-in-K8S</guid><link>https://teletype.in/@mronx/L5L-in-K8S?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mronx</link><comments>https://teletype.in/@mronx/L5L-in-K8S?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=mronx#comments</comments><dc:creator>mronx</dc:creator><title>Deploying Laravel in Kubernetes </title><pubDate>Sun, 26 Feb 2023 13:30:19 GMT</pubDate><media:content medium="image" url="https://img3.teletype.in/files/2b/5a/2b5a401c-01df-4c93-a09e-f7a0ae799f47.png"></media:content><description><![CDATA[<img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers.png"></img>Deploying Laravel in Kubernetes simplifies running, scaling and monitoring Kubernetes in an easily reproducible way.]]></description><content:encoded><![CDATA[
  <p id="zgHl">Deploying Laravel in Kubernetes simplifies running, scaling and monitoring your application in an easily reproducible way.</p>
  <p id="VvlC">There are plenty of aspects to take into account when running Laravel.</p>
  <p id="xvGu">FPM, Nginx, Certificates, Static Assets, Queue Workers, Caches, The Scheduler, Monitoring, Distributed Logging and a bunch more stuff.</p>
  <p id="vjWL">Tools like Laravel Forge and Laravel Vapor manage many of these things for you, but what would the tech world look like without choices?</p>
  <p id="oRjU">Laravel already ships with a Docker setup via Laravel Sail, but in this series we will build our own images in a production-like fashion, specialising the containers and images for each of the different parts of our application.</p>
  <p id="SdMF">We will also create a reproducible setup for our application, which can easily be used to deploy other Laravel applications as well.</p>
  <p id="GZ3O">This series will cover everything from local development, CI/CD, Codified Infrastructure including databases, Declarative configurations for deployment in Kubernetes for each independent component of the application, Monitoring the deployed application and infrastructure, Distributed Logging infrastructure, and Alerting for application and infrastructure metrics.</p>
  <p id="0ztr">There is a lot covered in this series, and the best way to approach this would be to read 2-3 posts, and implement them as you go through, and then do a bit of digging to better understand why and how they work.</p>
  <p id="opNg">Below, all the episodes in the series are listed under the relevant part of the deployment.</p>
  <p id="dH6B"></p>
  <h2 id="s3ou"><u>PART ONE: Installing Laravel</u></h2>
  <p id="D2u1">This series will show you how to go from <code>laravel new</code> to Laravel running in Kubernetes, including monitoring, logging, exposing and bunch more.</p>
  <p id="P3Pn">Part 1 of this series covers creating a new Laravel installation which we can deploy in Kubernetes.</p>
  <p id="0b1r">TLDR;</p>
  <p id="n3t3"></p>
  <h1 id="prerequisites">Prerequisites</h1>
  <ul id="5lX1">
    <li id="pYfW">Docker running locally.</li>
  </ul>
  <p id="J0Sf">We will be using Laravel Sail to run our application locally as a start, but will build our own Docker images as we go through.</p>
  <p id="QutF">Why?</p>
  <ol id="o6oQ">
    <li id="BLGI">Productionising our Docker images for a smaller size</li>
    <li id="2tgo">We need multiple images for things like fpm and nginx when we move toward running in Kubernetes</li>
    <li id="pbTL">For existing applications on Laravel versions older than 8.0, which do not ship with Sail</li>
    <li id="DOmL">Learning</li>
  </ol>
  <h1 id="install-a-new-laravel-application">Install a new Laravel application</h1>
  <p id="U2lS">Change directory to where you want the new application installed.</p>
  <p id="C7bh">Install a new Laravel application. For full documentation, see <a href="https://laravel.com/docs/8.x/installation#your-first-laravel-project" target="_blank">https://laravel.com/docs/8.x/installation#your-first-laravel-project</a>.</p>
  <p id="bV2F">We will be installing only our app, Redis, and MySQL as part of this post, as we will not be using the rest just yet, and can add them later if necessary.</p>
  <pre id="pJmR"># Mac OS
curl -s &quot;https://laravel.build/laravel-in-kubernetes?with=mysql,redis&quot; | bash
cd laravel-in-kubernetes
./vendor/bin/sail up

# Linux
curl -s https://laravel.build/laravel-in-kubernetes?with=mysql,redis | bash
cd laravel-in-kubernetes
./vendor/bin/sail up
</pre>
  <p id="htM2">It might take a while for your application to come up the first time. This is due to new Docker images being downloaded, built, and started up for most services.</p>
  <p id="MQl0">You should be able to reach your application at http://localhost.</p>
  <h2 id="port-mappings">Port mappings</h2>
  <p id="m2hc">Your service might fail to start due to port binding, with an error similar to:</p>
  <pre id="SmPO">ERROR: for laravel.test  Cannot start service laravel.test: Ports are not available: listen tcp 0.0.0.0:80: bind: address already in use</pre>
  <p id="wovA">To solve this, you can set the <code>APP_PORT</code> environment variable when running <code>sail up</code>:</p>
  <pre id="hnX5">APP_PORT=8080 ./vendor/bin/sail up</pre>
  <p id="ZRzx">You should now be able to reach the application at http://localhost:8080, or whichever port you chose in APP_PORT.</p>
  <h1 id="understanding-the-docker-compose-file">Understanding the docker-compose file</h1>
  <p id="ezST">With sail, your application has a <code>docker-compose.yml</code> file in the root directory.</p>
  <p id="kpUM">This docker-compose file controls what runs when you run <code>sail up</code>.</p>
  <p id="DwY4">Sail is essentially an abstraction on top of Docker that makes it easier to run Laravel.</p>
  <p id="v102">You can see the underlying details by looking at the <code>docker-compose.yml</code> file, used for running your Laravel application locally, and the <code>./vendor/laravel/sail/runtimes/8.0/Dockerfile</code> file, building the container which runs Laravel.</p>
  <h1 id="commit-changes">Commit changes</h1>
  <p id="mje4">Let&#x27;s commit our changes at this point, so we can revert anything in the future.</p>
  <pre id="DcuR">git init
git add .
git commit -m &quot;Initial Laravel Install&quot;</pre>
  <h1 id="adding-authentication">Adding authentication</h1>
  <p id="PfTr">For our application, we want at least a little bit of functionality, so we&#x27;ll use Laravel Breeze to add login and register pages.</p>
  <pre id="gefh">./vendor/bin/sail composer require laravel/breeze --dev
./vendor/bin/sail php artisan breeze:install
./vendor/bin/sail npm install
./vendor/bin/sail npm run dev
./vendor/bin/sail php artisan migrate
</pre>
  <p id="5d6b">Now you can head over to <a href="http://localhost:8080/register" target="_blank">http://localhost:8080/register</a> to see your new register page.</p>
  <p id="uLfG">Fill out the form, submit, and if everything works correctly, you should see a logged-in dashboard.</p>
  <h1 id="commit-again">Commit again</h1>
  <pre id="IHya">git add .
git commit -m &quot;Add breeze authentication&quot;</pre>
  <h1 id="running-tests">Running tests</h1>
  <p id="Ugrf">You can also run the test suite using</p>
  <pre id="DJ6r">./vendor/bin/sail artisan test</pre>
  <p id="EJ5d">Next, we want to start moving our Laravel application closer to Kubernetes. We will build a bunch of Docker images and update our docker-compose to reflect a more production-ready installation.</p>
  <h1 id="onto-the-next">Onto the next</h1>
  <p id="wyyX">Next we&#x27;ll look at Dockerizing our Laravel application for production use.</p>
  <h2 id="4CWu"><u>PART TWO: Dockerizing Laravel</u></h2>
  <p id="hudv">In this part of the series, we are going to Dockerize our Laravel application, with a separate layer for each of the technical pieces of our application (FPM, web server, queues, cron, etc.)</p>
  <p id="BBBJ">We will do this by building a layer for each process, copying in the codebase, and building separate containers for them.</p>
  <h1 id="prerequisites">Prerequisites</h1>
  <ul id="tVAZ">
    <li id="AgcM">A Laravel application. You can see <a href="https://chris-vermeulen.com/laravel-in-kubernetes-part-1" target="_blank">Part 1</a> if you haven&#x27;t got an application yet</li>
    <li id="oc1I">Docker running locally</li>
  </ul>
  <h1 id="getting-started">Getting started</h1>
  <p id="B9Yu">Laravel 8.0 ships with Sail, which already runs Laravel applications in Docker, but it is not entirely production-ready, and might need to be updated for your sizing, custom configs, and other needs. It only has a single PHP container, but we need a few more containers for production.</p>
  <p id="aeBN">We need an FPM container to process requests, a PHP CLI container to handle artisan commands and run things like queues, and an Nginx container to serve static content.</p>
  <p id="ymwN">As you can already see, simply running one container would not serve our needs, and doesn&#x27;t allow us to scale or manage different pieces of our application differently from the others.</p>
  <p id="Fn10">In this post we&#x27;ll cover all of the required containers, and what each of them is specialised for.</p>
  <h2 id="why-wouldn-t-we-use-the-default-sail-container">Why wouldn&#x27;t we use the default sail container?</h2>
  <p id="C7Kw">The default sail container contains everything we need to run the application, to the point where it has too much for a production deployment.</p>
  <p id="eSIt">For local development it works well out of the box, but for production deployment using Kubernetes, it&#x27;s a bit big, and has too many components installed in a single container.</p>
  <p id="n0ey">The more &quot;stuff&quot; installed in a container, the more places there are to attack, and the more for us to manage. For our Kubernetes deployment we are going to split out the different parts (FPM, Nginx, queue workers, crons, etc.).</p>
  <h1 id="kubernetes-filesystem">Kubernetes filesystem</h1>
  <p id="66OX">One thing we need to look into first, is the Kubernetes filesystem.</p>
  <p id="nSne">By default, Laravel writes things like logs and sessions to files on the local filesystem.</p>
  <p id="YCnX">When moving toward Kubernetes, we start playing in the field of distributed applications, and a local filesystem no longer suffices.</p>
  <p id="qZhe">Take sessions, for example: if we have two Kubernetes pods, recurring requests from the same user need to find the same session, otherwise the session might not exist.</p>
  <p id="QFvR">With that in mind, we need to make a couple of updates to our application in preparation for Dockerizing the system.</p>
  <p id="PHtO">We will also eventually secure our application with a read-only filesystem, which prevents any logic that writes to the local disk.</p>
  <h1 id="logging-update">Logging Update</h1>
  <p id="mPka">One thing we need to do before we start setting up our Docker containers, is to update the logging driver to output to stdout, instead of to a file.</p>
  <p id="wzXa">Being able to run <code>kubectl logs</code> and get application logs is the primary reason for switching to stdout. If we logged to a file, we would need to cat the log files, which is a lot more cumbersome.</p>
  <p id="iKmk">So let&#x27;s update the logging to point at stdout.</p>
  <p id="BrcH">In the application configuration <code>config/logging.php</code> , add a new log channel for stdout</p>
  <pre id="mYHr">use Monolog\Handler\StreamHandler;

return [
    // [...]
    &#x27;channels&#x27; =&gt; [
        // [...]
        &#x27;stdout&#x27; =&gt; [
            &#x27;driver&#x27; =&gt; &#x27;monolog&#x27;,
            &#x27;level&#x27; =&gt; env(&#x27;LOG_LEVEL&#x27;, &#x27;debug&#x27;),
            &#x27;handler&#x27; =&gt; StreamHandler::class,
            &#x27;formatter&#x27; =&gt; env(&#x27;LOG_STDOUT_FORMATTER&#x27;),
            &#x27;with&#x27; =&gt; [
                &#x27;stream&#x27; =&gt; &#x27;php://stdout&#x27;,
            ],
        ],
    ],
];</pre>
  <p id="yHMx">Next, update your .env file to use this Logger</p>
  <pre id="dLeS">LOG_CHANNEL=stdout</pre>
  <p id="M7RD">The application will now write any logs to stdout, so we can read them directly.</p>
  <h1 id="session-update">Session update</h1>
  <p id="Hz8V">Sessions also use the local filesystem by default, and we want to update this to use Redis instead, so all pods reach the same session store, along with our cache.</p>
  <p id="ckmJ">In order to do this for sessions, <a href="https://laravel.com/docs/8.x/session#redis" target="_blank">we need to install</a> the <a href="https://github.com/predis/predis" target="_blank">predis/predis</a> package.</p>
  <p id="2d9X">We can install it with Composer locally, or simply add it to the composer.json file and let Docker take care of installing it.</p>
  <pre id="YBB2">$ composer require predis/predis</pre>
  <p id="kXFd">Or if you prefer, simply add it to the require list in <code>composer.json</code></p>
  <pre id="FJ91">{
    &quot;require&quot;: {
        [...]
        &quot;predis/predis&quot;: &quot;^1.1&quot;
    }
}</pre>
  <p id="rBOG">Also, update the <code>.env</code> to use Redis for sessions</p>
  <pre id="Ixsc">SESSION_DRIVER=redis</pre>
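  <p>Laravel reads the Redis connection details from the environment as well. As a sketch, the relevant <code>.env</code> entries might look like the following — the <code>redis</code> hostname assumes your Redis service is named <code>redis</code>, as it is in Sail&#x27;s default docker-compose file, so adjust it for your own setup.</p>

```
SESSION_DRIVER=redis
CACHE_DRIVER=redis
REDIS_HOST=redis
REDIS_PORT=6379
```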
  <h1 id="https-for-production">HTTPS for production</h1>
  <p id="CeTu">Because we are going to expose our application and add Let&#x27;s Encrypt certificates, we also need to force HTTPS for production.</p>
  <p id="izry">When the request actually reaches our application, it will be a plain HTTP request, as TLS terminates at the Ingress.</p>
  <p id="Hg4o">We therefore need to force HTTPS URLs in our application.</p>
  <p id="Vvgh">When our application serves HTML pages, for example, it will generate the URLs to CSS files using http if the request came in over http. We need to force https, so all the URLs in our HTML are https.</p>
  <p id="hyXf">In the <code>app/Providers/AppServiceProvider.php</code> file, in the boot method, force https for production.</p>
  <pre id="gd6I">&lt;?php

namespace App\Providers;

# Add the Facade
use Illuminate\Support\Facades\URL;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /** All the rest */

    public function boot()
    {
        if($this-&gt;app-&gt;environment(&#x27;production&#x27;)) {
            URL::forceScheme(&#x27;https&#x27;);
        }
    }
}
</pre>
  <p id="iLhr">This will force any assets served in production to be requested from an https domain, which our application will have.</p>
  <h1 id="docker-containers">Docker Containers</h1>
  <p id="Dipe">We want to create multiple containers for our application, reusing the same base layers across containers that each specialise in a specific task.</p>
  <p id="r6Qx">Our container structure looks like the diagram below.</p>
  <figure id="2Bqc" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers.png" width="1202" />
  </figure>
  <p id="tG6k">We will use <a href="https://docs.docker.com/develop/develop-images/multistage-build/" target="_blank">Docker Multi Stage Builds</a> to achieve each of the different pieces of the diagram</p>
  <p id="9dIK">We will start with the 2 base images (NPM, Composer), and then build out each of the custom pieces.</p>
  <h2 id="the-dockerignore-file">The <code>.dockerignore</code> file</h2>
  <p id="T8Mu">We will start by adding a <code>.dockerignore</code> file, so we can prevent Docker from copying in the <code>node_modules</code> and <code>vendor</code> directories, as we want to build any binaries for the specific architecture inside the image.</p>
  <p id="rARf">In the root of your project, create a file called <code>.dockerignore</code> with the following contents</p>
  <pre id="Lkjb">/vendor
/node_modules</pre>
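  <p>Depending on your project, there may be more local-only files worth keeping out of the build context. As a sketch — these extra entries are suggestions, not part of the setup above — a slightly fuller <code>.dockerignore</code> could look like:</p>

```
/vendor
/node_modules
# The git history is not needed inside the image
/.git
# Environment values should come from the runtime (Compose, Kubernetes), not the image
/.env
```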
  <h2 id="the-dockerfile">The Dockerfile</h2>
  <p id="jiSJ">We need to create a Dockerfile in the root of our project, and set up some reusable pieces.</p>
  <p id="nv48">In the root of your project, create a file called <code>Dockerfile</code>.</p>
  <pre id="s4mo">$ touch Dockerfile</pre>
  <p id="xXZK">Next, create 2 variables inside the Dockerfile to contain the PHP packages we require.</p>
  <p id="QKmB">We&#x27;ll use two variables: one for built-in extensions, and one for extensions we need to install using <code>pecl</code>.</p>
  <pre id="WesS"># Create args for PHP extensions and PECL packages we need to install.
# This makes it easier if we want to install packages,
# as we have to install them in multiple places.
# This helps keep our Dockerfiles DRY -&gt; https://bit.ly/dry-code
# You can see a list of required extensions for Laravel here: https://laravel.com/docs/8.x/deployment#server-requirements
ARG PHP_EXTS=&quot;bcmath ctype fileinfo mbstring pdo pdo_mysql tokenizer dom pcntl&quot;
ARG PHP_PECL_EXTS=&quot;redis&quot;</pre>
  <p id="xoUX">If your application needs additional extensions installed, feel free to add them to the list before building.</p>
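  <p>For example, if your application also needed the <code>gd</code> and <code>intl</code> extensions (hypothetical additions — check your own requirements), you would extend the first ARG as below. Note that some extensions also need extra Alpine build packages added to the <code>apk add</code> lines later in the Dockerfile.</p>

```dockerfile
ARG PHP_EXTS="bcmath ctype fileinfo mbstring pdo pdo_mysql tokenizer dom pcntl gd intl"
ARG PHP_PECL_EXTS="redis"
```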
  <h2 id="composer-stage">Composer Stage</h2>
  <p id="akS7">We need to build a Composer base, which contains all our code, and installed Composer dependencies.</p>
  <figure id="CwN5" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers-Composer.png" width="1202" />
  </figure>
  <p id="rJr8">This will set us up for all the following stages to reuse the Composer packages.</p>
  <p id="MIfZ">Once we have built the Composer base, we can build the other layers from it, using only the specific parts we need.</p>
  <p id="kRGx">We start with a Composer image, which is based on PHP 8 on an Alpine distro image.</p>
  <p id="o0fz">This will help us install dependencies of our application.</p>
  <p id="ItPZ">In our Dockerfile, we can add the Composer stage (This goes directly after the previous piece)</p>
  <pre id="T7N3"># We need to build the Composer base to reuse packages we&#x27;ve installed
FROM composer:2.1 as composer_base

# We need to declare that we want to use the args in this build step
ARG PHP_EXTS
ARG PHP_PECL_EXTS

# First, create the application directory, and some auxiliary directories for scripts and such
RUN mkdir -p /opt/apps/laravel-in-kubernetes /opt/apps/laravel-in-kubernetes/bin

# Next, set our working directory
WORKDIR /opt/apps/laravel-in-kubernetes

# We need to create a composer group and user, and create a home directory for it, so we keep the rest of our image safe,
# and don&#x27;t accidentally run malicious scripts
RUN addgroup -S composer \
    &amp;&amp; adduser -S composer -G composer \
    &amp;&amp; chown -R composer /opt/apps/laravel-in-kubernetes \
    &amp;&amp; apk add --virtual build-dependencies --no-cache ${PHPIZE_DEPS} openssl ca-certificates libxml2-dev oniguruma-dev \
    &amp;&amp; docker-php-ext-install -j$(nproc) ${PHP_EXTS} \
    &amp;&amp; pecl install ${PHP_PECL_EXTS} \
    &amp;&amp; docker-php-ext-enable ${PHP_PECL_EXTS} \
    &amp;&amp; apk del build-dependencies

# Next we want to switch over to the composer user before running installs.
# This is very important, so any extra scripts that composer wants to run,
# don&#x27;t have access to the root filesystem.
# This is especially important when installing packages from unverified sources.
USER composer

# Copy in our dependency files.
# We want to leave the rest of the code base out for now,
# so Docker can build a cache of this layer,
# and only rebuild when the dependencies of our application changes.
COPY --chown=composer composer.json composer.lock ./

# Install all the dependencies without running any installation scripts.
# We skip scripts as the code base hasn&#x27;t been copied in yet, and the scripts would likely fail,
# as &#x60;php artisan&#x60; is not available yet.
# This also helps us to cache previous runs and layers.
# As long as composer.json and composer.lock don&#x27;t change, the install will be cached.
RUN composer install --no-dev --no-scripts --no-autoloader --prefer-dist

# Copy in our actual source code so we can run the installation scripts we need
# At this point all the PHP packages have been installed, 
# and all that is left to do, is to run any installation scripts which depends on the code base
COPY --chown=composer . .

# Now that the code base and packages are all available,
# we can run the install again, and let it run any install scripts.
RUN composer install --no-dev --prefer-dist</pre>
  <h3 id="testing-the-composer-stage">Testing the Composer Stage</h3>
  <p id="CVyK">We can now build the Docker image and make sure it builds correctly, and installs all our dependencies</p>
  <pre id="oqN0">docker build . --target composer_base</pre>
  <h2 id="frontend-stage">Frontend Stage</h2>
  <p id="Fz4B">We need to install the NPM packages as well, so we can run the Laravel Mix compilation.</p>
  <figure id="s4X7" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers-NPM.png" width="1202" />
  </figure>
  <p id="7DpG">Laravel Mix is an NPM package, so we also need a container which we can use to compile the frontend assets into the <code>public</code> directory.</p>
  <p id="Acdd">Usually you run this with <code>npm run prod</code>; we need to convert this into a Docker stage.</p>
  <p id="HmQ6">In the Dockerfile, we can add the next stage for NPM</p>
  <pre id="bkVt"># For the frontend, we want to get all the Laravel files,
# and run a production compile
FROM node:14 as frontend

# We need to copy in the Laravel files to make sure everything is available for our frontend compilation
COPY --from=composer_base /opt/apps/laravel-in-kubernetes /opt/apps/laravel-in-kubernetes

WORKDIR /opt/apps/laravel-in-kubernetes

# We want to install all the NPM packages,
# and compile the MIX bundle for production
RUN npm install &amp;&amp; \
    npm run prod</pre>
  <h3 id="testing-the-frontend-stage">Testing the frontend stage</h3>
  <p id="ZhSq">Let&#x27;s build the frontend image to make sure it builds correctly, and doesn&#x27;t fail along the way</p>
  <pre id="8ivu">$ docker build . --target frontend 
</pre>
  <h2 id="cli-container">CLI Container</h2>
  <p id="M8Im">We are going to need a CLI container to run queue jobs, crons (the scheduler), migrations, and artisan commands when running in Docker / Kubernetes.</p>
  <figure id="Vx5Q" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers-CLI.png" width="1202" />
  </figure>
  <p id="B357">In the Dockerfile add a new piece for CLI usage.</p>
  <pre id="9QwL"># For running things like migrations, and queue jobs,
# we need a CLI container.
# It contains all the Composer packages,
# and just the basic CLI &quot;stuff&quot; in order for us to run commands,
# be that queues, migrations, tinker etc.
FROM php:8.0-alpine as cli

# We need to declare that we want to use the args in this build step
ARG PHP_EXTS
ARG PHP_PECL_EXTS

WORKDIR /opt/apps/laravel-in-kubernetes

# We need to install some requirements into our image,
# used to compile our PHP extensions, as well as install all the extensions themselves.
# You can see a list of required extensions for Laravel here: https://laravel.com/docs/8.x/deployment#server-requirements
RUN apk add --virtual build-dependencies --no-cache ${PHPIZE_DEPS} openssl ca-certificates libxml2-dev oniguruma-dev &amp;&amp; \
    docker-php-ext-install -j$(nproc) ${PHP_EXTS} &amp;&amp; \
    pecl install ${PHP_PECL_EXTS} &amp;&amp; \
    docker-php-ext-enable ${PHP_PECL_EXTS} &amp;&amp; \
    apk del build-dependencies

# Next we have to copy in our code base from our initial build which we installed in the previous stage
COPY --from=composer_base /opt/apps/laravel-in-kubernetes /opt/apps/laravel-in-kubernetes
COPY --from=frontend /opt/apps/laravel-in-kubernetes/public /opt/apps/laravel-in-kubernetes/public</pre>
  <h3 id="testing-the-cli-image-build">Testing the CLI image build</h3>
  <p id="LTW0">We can build this layer to make sure everything works correctly</p>
  <pre id="vbJy">$ docker build . --target cli
[...]
 =&gt; =&gt; writing image sha256:b6a7b602a4fed2d2b51316c1ad90fd12bb212e9a9c963382d776f7eaf2eebbd5 </pre>
  <p id="tF4Q">The CLI layer has successfully built, and we can move onto the next layer</p>
  <h2 id="fpm-container">FPM Container</h2>
  <p id="foPV">We can now also build out the specific parts of the application, the first of which is the container which runs fpm for us.</p>
  <figure id="1sxB" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers-FPM.png" width="1202" />
  </figure>
  <p id="R1wh">In the same Dockerfile, we will create another stage to our docker build called <code>fpm_server</code> with the following contents</p>
  <pre id="kdMA"># We need a stage which contains FPM to actually run and process requests to our PHP application.
FROM php:8.0-fpm-alpine as fpm_server

# We need to declare that we want to use the args in this build step
ARG PHP_EXTS
ARG PHP_PECL_EXTS

WORKDIR /opt/apps/laravel-in-kubernetes

RUN apk add --virtual build-dependencies --no-cache ${PHPIZE_DEPS} openssl ca-certificates libxml2-dev oniguruma-dev &amp;&amp; \
    docker-php-ext-install -j$(nproc) ${PHP_EXTS} &amp;&amp; \
    pecl install ${PHP_PECL_EXTS} &amp;&amp; \
    docker-php-ext-enable ${PHP_PECL_EXTS} &amp;&amp; \
    apk del build-dependencies
    
# As FPM uses the www-data user when running our application,
# we need to make sure that we also use that user when starting up,
# so our user &quot;owns&quot; the application when running
USER  www-data

# We have to copy in our code base from our initial build which we installed in the previous stage
COPY --from=composer_base --chown=www-data /opt/apps/laravel-in-kubernetes /opt/apps/laravel-in-kubernetes
COPY --from=frontend --chown=www-data /opt/apps/laravel-in-kubernetes/public /opt/apps/laravel-in-kubernetes/public

# We want to cache the event, routes, and views so we don&#x27;t try to write them when we are in Kubernetes.
# Docker builds should be as immutable as possible, and this removes a lot of the writing of the live application.
RUN php artisan event:cache &amp;&amp; \
    php artisan route:cache &amp;&amp; \
    php artisan view:cache</pre>
  <h3 id="testing-the-fpm-build">Testing the FPM build</h3>
  <p id="sfWW">We want to build this stage to make sure everything works correctly.</p>
  <pre id="8QXQ">$ docker build . --target fpm_server
[...]
=&gt; =&gt; writing image sha256:ead93b67e57f0cdf4ec9c1ca197cf8ca1dacb0bb030f9f57dc0fccf5b3eb9904</pre>
  <h2 id="web-server-container">Web Server container</h2>
  <p id="lutK">We need to build a web server image which is used to serve static content, and pass any PHP requests on to our FPM container.</p>
  <figure id="auPQ" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers-Nginx.png" width="1202" />
  </figure>
  <p id="zxjk">This is quite important: we could serve static content through our PHP app, but Nginx is a lot better at it, and can serve static content far more efficiently.</p>
  <p id="IY95">The first thing we need is an Nginx configuration for our web server.</p>
  <p id="BmfB">We&#x27;ll also use an Nginx template, so we can inject the FPM URL into the configuration when the container starts up.</p>
  <p id="bhRT">Create a directory called <code>docker</code> in the root of your project</p>
  <pre id="PNAQ">mkdir -p docker</pre>
  <p id="uKn9">Inside of that folder, you can create a file called <code>nginx.conf.template</code> with the following content</p>
  <pre id="wVQM">server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # We need to set the root for our server,
    # so any static file requests gets loaded from the correct path
    root /opt/apps/laravel-in-kubernetes/public;

    index index.php index.html index.htm index.nginx-debian.html;

    # _ makes sure that nginx does not try to map requests to a specific hostname
    # This allows us to specify the urls to our application as infrastructure changes,
    # without needing to change the application
    server_name _;

    # At the root location,
    # we first check if there are any static files at the location, and serve those,
    # If not, we check whether there is an indexable folder which can be served,
    # Otherwise we forward the request to the PHP server
    location / {
        # Using try_files here is quite important as a security consideration
        # to prevent injecting PHP code as static assets,
        # and then executing them via a URL.
        # See https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/#passing-uncontrolled-requests-to-php
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Some static assets are loaded on every page load,
    # and logging these turns into a lot of useless logs.
    # If you would prefer to see these requests, for catching 404&#x27;s etc.,
    # feel free to remove these lines
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    # When a 404 is returned, we want to display our applications 404 page,
    # so we redirect it to index.php to load the correct page
    error_page 404 /index.php;

    # Whenever we receive a PHP url, or our root location block gets to serving through fpm,
    # we want to pass the request to FPM for processing
    location ~ \.php$ {
        #NOTE: You should have &quot;cgi.fix_pathinfo = 0;&quot; in php.ini
        include fastcgi_params;
        fastcgi_intercept_errors on;
        fastcgi_pass ${FPM_HOST};
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    }

    location ~ /\.ht {
        deny all;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}</pre>
  <p id="92N6">Once we have that completed, we can create the new Docker image stage which contains the Nginx layer</p>
  <pre id="GF4l"># We need an nginx container which can pass requests to our FPM container,
# as well as serve any static content.
FROM nginx:1.20-alpine as web_server

WORKDIR /opt/apps/laravel-in-kubernetes

# We need to add our NGINX template to the container for startup,
# and configuration.
COPY docker/nginx.conf.template /etc/nginx/templates/default.conf.template

# Copy in ONLY the public directory of our project.
# This is where all the static assets will live, which nginx will serve for us.
COPY --from=frontend /opt/apps/laravel-in-kubernetes/public /opt/apps/laravel-in-kubernetes/public</pre>
  <h3 id="testing-the-web-server-build">Testing the Web Server build</h3>
  <p id="LRzp">We can now build up to this stage to make sure it builds successfully.</p>
  <pre id="ACiM">$ docker build . --target web_server
[...]
=&gt; =&gt; writing image sha256:1ea6b28fcd99d173e1de6a5c0211c0ba770f6acef5a3231460739200a93feef2 </pre>
  <h2 id="cron-container">Cron container</h2>
  <p id="4Myo">We also want to create a Cron layer, which we can use to run the Laravel scheduler.</p>
  <figure id="m9Db" class="m_custom">
    <img src="https://chris-vermeulen.com/content/images/2021/10/Laravel-Containers-CRON.png" width="1202" />
  </figure>
  <p id="gmwS">We want crond to run in the foreground, and make it the primary command when the container starts up.</p>
  <pre id="kUxK"># We need a CRON container to run the Laravel Scheduler.
# We&#x27;ll start with the CLI container as our base,
# as we only need to override the CMD which the container starts with to point at cron
FROM cli as cron

WORKDIR /opt/apps/laravel-in-kubernetes

# We want to create a laravel.cron file with Laravel cron settings, which we can import into crontab,
# and run crond as the primary command in the foreground
RUN touch laravel.cron &amp;&amp; \
    echo &quot;* * * * * cd /opt/apps/laravel-in-kubernetes &amp;&amp; php artisan schedule:run&quot; &gt;&gt; laravel.cron &amp;&amp; \
    crontab laravel.cron

CMD [&quot;crond&quot;, &quot;-l&quot;, &quot;2&quot;, &quot;-f&quot;]</pre>
  <h3 id="testing-the-cron-build">Testing the Cron build</h3>
  <p id="tfzc">We can build the container to make sure everything works correctly.</p>
  <pre id="CABl">$ docker build . --target cron
 =&gt; =&gt; writing image sha256:b6fb826820e0669563a8746f83fb168fe39393ef6162d65c64439aa26b4d713b  </pre>
  <h2 id="the-complete-build">The Complete Build</h2>
  <p id="6E0f">In our Dockerfile, we now have five stages, <em><code>composer_base, frontend, fpm_server, cli, and cron</code></em>, but we need a sensible default to build from.</p>
  <p id="WXjj">Whenever we build or run the image without specifying a target, it will use our default stage, and we get sensible and predictable results.</p>
  <p id="33ZL">We can specify this right at the end of our Dockerfile, by specifying a last <code>FROM</code> statement with the default stage.</p>
  <pre id="xujq"># [...]

FROM cli</pre>
  <h1 id="hardcoded-values">Hardcoded values</h1>
  <p id="vTAM">You&#x27;ll notice we&#x27;ve used variable interpolation in the <code>nginx.conf.template</code> file for the FPM host.</p>
  <pre id="fhLb"># [...]
fastcgi_pass ${FPM_HOST};
# [...]</pre>
  <p id="Cl0f">The reason we&#x27;ve done this is so we can replace the FPM host at runtime, as it will change depending on where we are running.</p>
  <p id="4kgc">For Docker Compose, it will be the name of the fellow fpm container, but for Kubernetes it will be the name of the service created when running the FPM container.</p>
  <p id="ClTE">Nginx Docker images from 1.19 onward support templates for nginx configurations, in which we can use environment variables.</p>
  <p id="uEvA">It uses <em><strong>envsubst</strong></em> under the hood to replace any variables with ENV variables we pass in.</p>
  <p id="1hNp">It does this when the container is started up.</p>
  <h1 id="docker-compose">Docker Compose</h1>
  <p id="jGui">Next, we can test our Docker images locally by building a docker-compose file which runs each stage of our image together, so we can use it that way locally, and reproduce it when we get to Kubernetes.</p>
  <p id="owOA">The first step is to create a <code>docker-compose.yml</code> file.</p>
  <p id="BwUU">Laravel Sail already comes with one prefilled, but we are going to change it up a bit to have all our separate containers running, so we can validate what will run in Kubernetes early in our cycle.</p>
  <blockquote id="2vt8">If you are not using Laravel Sail, and don&#x27;t have a <code>docker-compose.yml</code> file in the root of your project, you can skip the part where we move it to a backup file.</blockquote>
  <p id="SPMB">The first thing we want to do is move the Sail docker-compose file to a backup file called <code>docker-compose.yml.backup</code>.</p>
  <p id="TUIQ">Next, we want to create a base <code>docker-compose.yml</code> for our new image stages</p>
  <pre id="Sa03">version: &#x27;3&#x27;
services:
    # We need to run the FPM container for our application
    laravel.fpm:
        build:
            context: .
            target: fpm_server
        image: laravel-in-kubernetes/fpm_server
        # We can override any env values here.
        # By default the .env in the project root will be loaded as the environment for all containers
        environment:
            APP_DEBUG: &quot;true&quot;
        # Mount the codebase, so any code changes we make will be propagated to the running application
        volumes:
            # Here we mount in our codebase so any changes are immediately reflected into the container
            - &#x27;.:/opt/apps/laravel-in-kubernetes&#x27;
        networks:
            - laravel-in-kubernetes

    # Run the web server container for static content, and proxying to our FPM container
    laravel.web:
        build:
            context: .
            target: web_server
        image: laravel-in-kubernetes/web_server
        # Expose our application port (80) through a port on our local machine (8080)
        ports:
            - &#x27;8080:80&#x27;
        environment:
            # We need to pass in the new FPM host as the name of the fpm container on port 9000
            FPM_HOST: &quot;laravel.fpm:9000&quot;
        # Mount the public directory into the container so we can serve any static files directly when they change
        volumes:
            # Here we mount in our codebase so any changes are immediately reflected into the container
            - &#x27;./public:/opt/apps/laravel-in-kubernetes/public&#x27;
        networks:
            - laravel-in-kubernetes
    # Run the Laravel Scheduler
    laravel.cron:
        build:
            context: .
            target: cron
        image: laravel-in-kubernetes/cron
        # Here we mount in our codebase so any changes are immediately reflected into the container
        volumes:
            # Here we mount in our codebase so any changes are immediately reflected into the container
            - &#x27;.:/opt/apps/laravel-in-kubernetes&#x27;
        networks:
            - laravel-in-kubernetes
    # Run the frontend, and file watcher in a container, so any changes are immediately compiled and servable
    laravel.frontend:
        build:
            context: .
            target: frontend
        # Override the default CMD, so we can watch changes to frontend files, and re-transpile them.
        command: [&quot;npm&quot;, &quot;run&quot;, &quot;watch&quot;]
        image: laravel-in-kubernetes/frontend
        volumes:
            # Here we mount in our codebase so any changes are immediately reflected into the container
            - &#x27;.:/opt/apps/laravel-in-kubernetes&#x27;
            # Add node_modules as a singular volume.
            # This prevents our local node_modules from being propagated into the container,
            # So the node_modules can be compiled for each of the different architectures (Local, Image)
            - &#x27;/opt/apps/laravel-in-kubernetes/node_modules/&#x27;
        networks:
            - laravel-in-kubernetes

networks:
    laravel-in-kubernetes:</pre>
  <p id="3ZZG">If we run these containers, we should be able to access the home page at <code>localhost:8080</code>.</p>
  <pre id="NSsH">$ docker-compose up -d</pre>
  <p id="5GYi">If you now open <a href="http://localhost:8080/" target="_blank">http://localhost:8080</a>, you should see your application running.</p>
  <figure id="bq3A" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/08/image-1.png" width="1185" />
  </figure>
  <p id="0yz2">Our containers are now running properly. Nginx passes our request on to FPM, and FPM generates a response from our codebase and sends it back to our browser.</p>
  <p id="Rhhn">Our crons are also running correctly in the cron container. You can see this by checking the logs of the cron container.</p>
  <pre id="O9u6">$ docker-compose logs laravel.cron
Attaching to laravel-in-kubernetes_laravel.cron_1
laravel.cron_1      | No scheduled commands are ready to run.</pre>
  <h2 id="running-mysql-in-docker-compose-yml">Running MySQL in docker-compose.yml</h2>
  <p id="d3tM">We also need to run MySQL in Docker for local development.</p>
  <p id="tQTs">Sail ships with this by default: if you check the <code>docker-compose.yml.backup</code> file, you will notice a <code>mysql</code> service, which we can copy over as-is and add to our <code>docker-compose.yml</code>.</p>
  <p id="WW1H">Docker Compose automatically loads the <code>.env</code> file from our project, and these are the values referenced in the <code>docker-compose.yml.backup</code> which Sail ships with.</p>
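  <p>As an illustration, the relevant <code>.env</code> entries might look like the fragment below. These values are assumptions based on typical defaults, not something this series has pinned down; use your own. Note that <code>DB_HOST</code> must point at the <code>mysql</code> service name, not <code>localhost</code>, since the application connects over the Compose network:</p>

```
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel_in_kubernetes
DB_USERNAME=laravel
DB_PASSWORD=secret
FORWARD_DB_PORT=3306
```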
  <pre id="y6AE">services:
    [...]
    mysql:
        image: &#x27;mysql:8.0&#x27;
        ports:
            - &#x27;${FORWARD_DB_PORT:-3306}:3306&#x27;
        environment:
            MYSQL_ROOT_PASSWORD: &#x27;${DB_PASSWORD}&#x27;
            MYSQL_DATABASE: &#x27;${DB_DATABASE}&#x27;
            MYSQL_USER: &#x27;${DB_USERNAME}&#x27;
            MYSQL_PASSWORD: &#x27;${DB_PASSWORD}&#x27;
            MYSQL_ALLOW_EMPTY_PASSWORD: &#x27;yes&#x27;
        volumes:
            - &#x27;laravel-in-kubernetes-mysql:/var/lib/mysql&#x27;
        networks:
            - laravel-in-kubernetes
        healthcheck:
          test: [&quot;CMD&quot;, &quot;mysqladmin&quot;, &quot;ping&quot;, &quot;-p${DB_PASSWORD}&quot;]
          retries: 3
          timeout: 5s
          
# At the end of the file
volumes:
    laravel-in-kubernetes-mysql:</pre>
  <p id="oFg4">We can now run <code>docker-compose up</code> again, and MySQL should be running alongside our other services.</p>
  <pre id="slS5">$ docker-compose up -d</pre>
  <h3 id="running-migrations-in-docker-compose">Running migrations in docker-compose</h3>
  <p id="xXCH">To test our MySQL service, and that our application can actually connect to it, we can run the migrations in the FPM container, as it has all of the right dependencies.</p>
  <pre id="LPvd">$ docker-compose exec laravel.fpm php artisan migrate
Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated:  2014_10_12_000000_create_users_table (35.78ms)
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated:  2014_10_12_100000_create_password_resets_table (25.64ms)
Migrating: 2019_08_19_000000_create_failed_jobs_table
Migrated:  2019_08_19_000000_create_failed_jobs_table (30.73ms)
</pre>
  <p id="Qg5m">This means our application can connect to the database, and our migrations have run.</p>
  <p id="yO49">With the volume we attached, we should be able to restart all of the containers, and our data will persist.</p>
  <h2 id="onto-kubernetes">Onto Kubernetes</h2>
  <p id="Kmrw">Now that we have docker-compose running locally, we can move forward onto building our images and pushing them to a registry.</p>
  <p id="txp4"></p>
  <h2 id="RE0w"><u>! PART THREE: Container registries</u></h2>
  <p id="pH9l">In this post, we will take our new Dockerfile and its build stages, build the images, and push them up to a registry, so we can easily use them in Kubernetes.</p>
  <h1 id="building-our-images-and-pushing-them-into-a-registry">Building our images and pushing them into a registry</h1>
  <p id="wCv1">The first thing that needs to happen before we can move into Kubernetes is to build the Docker images containing everything, and ship them to a container registry where Kubernetes can reach them.</p>
  <p id="V4pu">Docker Hub offers free registries, but only one private repository.</p>
  <p id="yeMf">For our use case we are going to use GitLab.</p>
  <p id="kocq">It makes it easy to build CI/CD pipelines, and it has a really nice registry where our images can be stored securely.</p>
  <h2 id="creating-the-registry">Creating the Registry</h2>
  <p id="cQ3E">We need to create a new registry in GitLab.</p>
  <blockquote id="gEoh">If you already have another registry, or prefer using Docker Hub, you may skip this piece.</blockquote>
  <p id="5m5h">You&#x27;ll need a new repository first.</p>
  <p id="WgIo">Once you have created one, go to <em>Packages &amp; Registries &gt; Container Registry</em>, and you&#x27;ll see instructions on how to log in, as well as the URL of your container registry.</p>
  <p id="DrH6">In my case this is <em>registry.gitlab.com/laravel-in-kubernetes/laravel-app</em>.</p>
  <h2 id="login-to-the-registry">Login to the registry</h2>
  <p id="Qo1b">Depending on whether you have two-factor authentication enabled, you might need to generate credentials for your local machine.</p>
  <p id="HLTZ">You can create a pair in <em>Settings &gt; Repository &gt; Deploy Tokens</em>, and use these as a username and password to log in to the registry. The Deploy Token needs write access to the registry.</p>
  <pre id="EqFL">$ docker login registry.gitlab.com -u [username] -p [token]
Login Succeeded</pre>
  <h2 id="building-our-images">Building our images</h2>
  <p id="SLgV">We now need to build our application images, and tag them for our registry.</p>
  <p id="Awk2">In order to do this, we point each build at the specific stage we need, and tag it with a name.</p>
  <pre id="7l4x">$ docker build . -t [your_registry_url]/cli:v0.0.1 --target cli

$ docker build . -t [your_registry_url]/fpm_server:v0.0.1 --target fpm_server

$ docker build . -t [your_registry_url]/web_server:v0.0.1 --target web_server

$ docker build . -t [your_registry_url]/cron:v0.0.1 --target cron</pre>
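  <p>The four build commands differ only in the target name, so they can also be expressed as a loop. The sketch below echoes the commands instead of running them, so you can review them first; the registry URL is a placeholder, not the one used in this series:</p>

```shell
# Placeholder values; substitute your own registry URL and version tag
REGISTRY="registry.example.com/laravel-in-kubernetes/laravel-app"
VERSION="v0.0.1"

# One build per Dockerfile target, tagged for the registry
for target in cli fpm_server web_server cron; do
  echo "docker build . -t ${REGISTRY}/${target}:${VERSION} --target ${target}"
done
```

  <p>Dropping the <code>echo</code> runs the builds for real; this is essentially what the Makefile further down encodes in a reusable form.</p>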
  <h2 id="pushing-our-images">Pushing our images</h2>
  <p id="C0uV">Next, we need to push our images to our new registry, so they can be used with Kubernetes.</p>
  <pre id="to2D">$ docker push [your_registry_url]/cli:v0.0.1
$ docker push [your_registry_url]/fpm_server:v0.0.1
$ docker push [your_registry_url]/web_server:v0.0.1
$ docker push [your_registry_url]/cron:v0.0.1</pre>
  <p id="uwOO">Our images are now available inside the registry, and ready to be used in Kubernetes.</p>
  <h2 id="repeatable-build-steps-with-makefile">Repeatable build steps with Makefile</h2>
  <p id="qhE1">In order for us to easily repeat the build steps, we can use a Makefile to specify our build commands, and parameterise the specific pieces, like our registry URL and the version of our containers.</p>
  <p id="zrvj">In the root of the project, create a <code>Makefile</code></p>
  <pre id="76S1">$ touch Makefile</pre>
  <p id="j5Qu">This file will allow us to express our build commands reproducibly.</p>
  <p id="sjo4">In the new <code>Makefile</code>, add the following contents, which parameterise the version and registry, and then specify the commands.</p>
  <pre id="A2ml"># VERSION defines the version for the docker containers.
# To build a specific set of containers with a version,
# you can use the VERSION as an arg of the docker build command (e.g make docker VERSION=0.0.2)
VERSION ?= v0.0.1

# REGISTRY defines the registry where we store our images.
# To push to a specific registry,
# you can use the REGISTRY as an arg of the docker build command (e.g make docker REGISTRY=my_registry.com/username)
# You may also change the default value if you are using a different registry as a default
REGISTRY ?= registry.gitlab.com/laravel-in-kubernetes/laravel-app


# Commands
docker: docker-build docker-push

docker-build:
	docker build . --target cli -t ${REGISTRY}/cli:${VERSION}
	docker build . --target cron -t ${REGISTRY}/cron:${VERSION}
	docker build . --target fpm_server -t ${REGISTRY}/fpm_server:${VERSION}
	docker build . --target web_server -t ${REGISTRY}/web_server:${VERSION}

docker-push:
	docker push ${REGISTRY}/cli:${VERSION}
	docker push ${REGISTRY}/cron:${VERSION}
	docker push ${REGISTRY}/fpm_server:${VERSION}
	docker push ${REGISTRY}/web_server:${VERSION}
</pre>
  <p id="kQ3e">You can then use a single make command to build and push the containers all together.</p>
  <pre id="fSQF">$ make docker VERSION=v0.0.2

# If you only want to run the builds
$ make docker-build VERSION=v0.0.2

# If you only want to push the images
$ make docker-push VERSION=v0.0.2</pre>
  <h1 id="onto-the-next">Onto the next</h1>
  <p id="HJN6">Next we will set up the Kubernetes cluster where we will run our images.</p>
  <p id="jHRe"></p>
  <h2 id="cDPE"><u>! PART FOUR: Kubernetes Cluster Setup</u></h2>
  <p id="augT">In this post, we will spin up our Kubernetes cluster in DigitalOcean, using Terraform.</p>
  <p id="TMST">Using Terraform means we can easily spin the cluster up and down, and keep all of our configuration declarative.</p>
  <p id="cWeN">If you&#x27;d like to spin up a cluster without Terraform, you can easily do this in the DigitalOcean UI and download the kubeconfig from there.</p>
  <p id="f3XM"></p>
  <h1 id="creating-our-initial-terraform-structure">Creating our initial Terraform structure</h1>
  <p id="VcZs">For this blog series, we will create a separate repository for our Terraform setup, but feel free to create a subdirectory in the root of your project and run the terraform commands from there.</p>
  <p id="bqbv">Create a new directory to act as the base of our new repository.</p>
  <pre id="j3ft">mkdir -p laravel-in-kubernetes-infra
cd laravel-in-kubernetes-infra/</pre>
  <h3 id="terraform-initialisation">Terraform initialisation</h3>
  <p id="Q3KN">In the new directory we need a few files.</p>
  <p id="mDh5">We will start with a file called <code>versions.tf</code> to contain the required versions of our providers.</p>
  <pre id="1xR6">terraform {
  required_providers {
    digitalocean = {
      source = &quot;digitalocean/digitalocean&quot;
      version = &quot;~&gt; 2.11&quot;
    }
  }
}</pre>
  <p id="mZLE">Once that file is created, we can initialise the Terraform base and download the DigitalOcean provider.</p>
  <pre id="uXv8">$ terraform init
[...]
Terraform has been successfully initialized!</pre>
  <p id="lLrd">From here, we can start creating the provider details, and spin up our clusters.</p>
  <h1 id="terraform-provider-setup">Terraform Provider Setup</h1>
  <p id="MQ5m">Next, we need to get an access token from DigitalOcean which Terraform can use when creating infrastructure.</p>
  <p id="5wlF">You can do this by logging into your DigitalOcean account, going to <em>API &gt; Generate New Token</em>, giving it an appropriate name, and making sure it has write access.</p>
  <p id="TjCz">Create a new file called <code>local.tfvars</code> and save the token in that file.</p>
  <pre id="Z893">do_token=&quot;XXX&quot;</pre>
  <p id="T72h">Now we need to ignore the <code>local.tfvars</code> file in our repository along with some other files.</p>
  <p id="OOv9">We also need to register the variable with Terraform, so it knows to look for it, and validate it.</p>
  <p id="mGrV">Create a <code>variables.tf</code> file to declare the variable</p>
  <pre id="Tybm">variable &quot;do_token&quot; {
  type = string
}</pre>
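  <p>Since the token is a secret, you can optionally also mark the variable as sensitive, which stops Terraform from printing its value in plan and apply output (supported since Terraform 0.14). A sketch of the amended declaration:</p>

```hcl
variable "do_token" {
  type      = string
  sensitive = true
}
```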
  <p id="oRJy">At this point we can run <code>terraform validate</code> to make sure all our files are in order.</p>
  <pre id="EUhN">$ terraform validate
Success! The configuration is valid.</pre>
  <h1 id="ignore-terraform-state-files">Ignore Terraform state files</h1>
  <p id="VYEE">Create a <code>.gitignore</code> file matching <a href="https://github.com/github/gitignore/blob/master/Terraform.gitignore" target="_blank">https://github.com/github/gitignore/blob/master/Terraform.gitignore</a>.</p>
  <pre id="krPA"># Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
#
*.tfvars

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
#
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc</pre>
  <p id="3Tsq">Once we ignore sensitive files, we can initialise the directory as a git repo and commit our current changes.</p>
  <h2 id="initialise-git-repo">Initialise Git Repo</h2>
  <pre id="JYY2">$ git init
Initialized empty Git repository in [your_directory]
$ git add .
$ git commit -m &quot;Init&quot;</pre>
  <h1 id="configure-digitalocean-provider">Configure DigitalOcean Provider</h1>
  <p id="00Kl">Create a new file called <code>providers.tf</code> where we can register the DigitalOcean provider with our DigitalOcean token.</p>
  <pre id="IKVC">provider &quot;digitalocean&quot; {
  token = var.do_token
}</pre>
  <p id="n6tN">Remember to add and commit this new file.</p>
  <h1 id="getting-ready-to-run-kubernetes">Getting ready to run Kubernetes</h1>
  <h2 id="kubernetes-version">Kubernetes Version</h2>
  <p id="7N9h">In order to run Kubernetes, we need to define which version of Kubernetes we&#x27;d like to run.</p>
  <p id="bDcx">We&#x27;ll do this using a Terraform Data Source from DigitalOcean to get us the latest patch version of our chosen version, which for this guide, will be the latest DigitalOcean ships, which is 1.21.X</p>
  <p id="20mb">Create a file in the root of your repository called <code>kubernetes.tf</code> containing the data source for versions</p>
  <pre id="Ls7o">data &quot;digitalocean_kubernetes_versions&quot; &quot;kubernetes-version&quot; {
  version_prefix = &quot;1.21.&quot;
}</pre>
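  <p>If you want to see which exact release the data source resolves to, you can temporarily add an output for it. A sketch; the output name here is arbitrary:</p>

```hcl
# Prints the resolved patch version after terraform apply
output "kubernetes-latest-version" {
  value = data.digitalocean_kubernetes_versions.kubernetes-version.latest_version
}
```

  <p>This is the same <code>latest_version</code> attribute the cluster resource consumes below.</p>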
  <p id="RL4j">This should be enough to define the required version.</p>
  <p id="ebj5">DigitalOcean and Terraform will now keep your cluster up to date with the latest patches. These are important for security and stability fixes.</p>
  <h2 id="machine-sizes">Machine Sizes</h2>
  <p id="IKz7">We also need to define which machine sizes we&#x27;d like to run as part of our cluster.</p>
  <p id="LKJq">Kubernetes in DigitalOcean runs using Node Pools.</p>
  <p id="Mh3z">We can use these to have different machines of different capabilities, depending on our needs.</p>
  <p id="vG37">For now, we will create a single Node Pool with some basic machines to run our Laravel application.</p>
  <p id="5k7U">In our <code>kubernetes.tf</code> file, add the data source for the machine sizes we will start off with.</p>
  <pre id="13mN">[...]
data &quot;digitalocean_sizes&quot; &quot;small&quot; {
  filter {
    key    = &quot;slug&quot;
    values = [&quot;s-2vcpu-2gb&quot;]
  }
}</pre>
  <h2 id="region">Region</h2>
  <p id="xfhK">We also need to define a region for where our Kubernetes cluster is going to run.</p>
  <p id="s7vd">We can define this as a variable, to make it easy to change for different folks in different places.</p>
  <p id="uUEx">in <code>variables.tf</code>, add a new variable for the region you would like to use.</p>
  <pre id="J2N6">[...]
variable &quot;do_region&quot; {
  type = string
  default = &quot;fra1&quot;
}</pre>
  <p id="EALE">I have defaulted it to Frankfurt 1 for ease of use, but you can now override it in <code>local.tfvars</code> like so</p>
  <pre id="BTQx">do_region=&quot;fra1&quot;</pre>
  <h1 id="create-our-kubernetes-cluster">Create our Kubernetes cluster</h1>
  <p id="KV4r">The next step we need to look at is actually spinning up our cluster.</p>
  <p id="fqse">This is a pretty simple step. Create a Kubernetes Cluster resource in our <code>kubernetes.tf</code> file, with some extra properties for Cluster management with DigitalOcean.</p>
  <pre id="uVmk">resource &quot;digitalocean_kubernetes_cluster&quot; &quot;laravel-in-kubernetes&quot; {
  name = &quot;laravel-in-kubernetes&quot;
  region = var.do_region

  # Latest patched version of DigitalOcean Kubernetes.
  # We do not want to update minor or major versions automatically.
  version = data.digitalocean_kubernetes_versions.kubernetes-version.latest_version

  # We want any Kubernetes Patches to be added to our cluster automatically.
  # With the version also set to the latest version, this will be covered from two perspectives
  auto_upgrade = true
  maintenance_policy {
    # Run patch upgrades at 4AM on a Sunday morning.
    start_time = &quot;04:00&quot;
    day = &quot;sunday&quot;
  }

  node_pool {
    name = &quot;default-pool&quot;
    size = element(data.digitalocean_sizes.small.sizes, 0).slug
    # We can autoscale our cluster according to use, and if it gets high,
    # We can auto scale to maximum 5 nodes.
    auto_scale = true
    min_nodes = 1
    max_nodes = 5

    # These labels will be available in the node objects inside of Kubernetes,
    # which we can use as taints and tolerations for workloads.
    labels = {
      pool = &quot;default&quot;
      size = &quot;small&quot;
    }
  }
}</pre>
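  <p>Optionally, you can expose a couple of cluster attributes as outputs to confirm the result after apply. A sketch; these attribute names come from the DigitalOcean provider documentation:</p>

```hcl
# The public endpoint of the cluster&#x27;s API server
output "laravel-in-kubernetes-cluster-endpoint" {
  value = digitalocean_kubernetes_cluster.laravel-in-kubernetes.endpoint
}

# The resolved Kubernetes version the cluster is running
output "laravel-in-kubernetes-cluster-version" {
  value = digitalocean_kubernetes_cluster.laravel-in-kubernetes.version
}
```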
  <p id="XEWi">Now that we have added the cluster details, we can validate our Terraform once more</p>
  <pre id="d9dR">$ terraform validate
Success! The configuration is valid.</pre>
  <p id="xvjq">We can now create our Kubernetes cluster</p>
  <pre id="Tq5E">$ terraform apply
var.do_token
  Enter a value: 
</pre>
  <p id="4z4J">Terraform is asking us to pass in a <code>do_token</code>, but we have already specified this in our <code>local.tfvars</code> file.</p>
  <p id="WcHF">Terraform will not automatically pull values from arbitrary <code>.tfvars</code> files, but it will from files with the <code>.auto.tfvars</code> suffix.</p>
  <p id="5ESM">Let&#x27;s rename our <code>local.tfvars</code> to <code>local.auto.tfvars</code></p>
  <pre id="Sbhm">mv local.tfvars local.auto.tfvars</pre>
  <p id="uabF">We should now be able to run <code>terraform apply</code> correctly.</p>
  <pre id="79Qb">$ terraform apply
[...]
Plan: 1 to add, 0 to change, 0 to destroy.
[...]
digitalocean_kubernetes_cluster.laravel-in-kubernetes: Creating...
digitalocean_kubernetes_cluster.laravel-in-kubernetes: Still creating... [10s elapsed]
[...]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.</pre>
  <p id="knnF">Our cluster is now created successfully, and we need to fetch the kubeconfig file.</p>
  <h1 id="fetching-cluster-access-details">Fetching Cluster access details</h1>
  <p id="ISsZ">We need to get a kubeconfig file from DigitalOcean to access our cluster.</p>
  <p id="Xs2Z">We can do this through Terraform with resource attributes, but this does not scale too well with a team, as not everyone should have access to run Terraform locally.</p>
  <p id="9cf1">The other mechanism we can use for this is by utilising <code>doctl</code> <a href="https://github.com/digitalocean/doctl" target="_blank">https://github.com/digitalocean/doctl</a></p>
  <p id="e5g1">You can follow the installation guide to get it up and running locally <a href="https://github.com/digitalocean/doctl#installing-doctl" target="_blank">https://github.com/digitalocean/doctl#installing-doctl</a></p>
  <h2 id="get-the-kubeconfig">Get the kubeconfig</h2>
  <p id="CHKD">Next we need to fetch the kubeconfig using <code>doctl</code>.</p>
  <h3 id="get-the-id-of-our-cluster-first">Get the ID of our cluster first</h3>
  <pre id="Gryz">$ doctl kubernetes clusters list
ID                Name                     Region    Version        Auto Upgrade    Status          Node Pools
[your-id-here]    laravel-in-kubernetes    fra1      1.21.2-do.2    true            running         default-pool
</pre>
  <p id="BynP">Copy the ID from there, and then download the kubeconfig into your local config file.</p>
  <pre id="7blD">$ doctl k8s cluster kubeconfig save [your-id-here]
Notice: Adding cluster credentials to kubeconfig file found in &quot;/Users/chris/.kube/config&quot;
Notice: Setting current-context to do-fra1-laravel-in-kubernetes</pre>
  <p id="18Yb">You should now be able to get pods in your new cluster</p>
  <pre id="v14Q">$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   cilium-8r6qz                       1/1     Running   0          6m33s
kube-system   cilium-operator-6cc67c77f9-4c5vd   1/1     Running   0          9m27s
kube-system   cilium-operator-6cc67c77f9-qhwbb   1/1     Running   0          9m27s
kube-system   coredns-85d9ccbb46-6nkqb           1/1     Running   0          9m27s
kube-system   coredns-85d9ccbb46-hmjbw           1/1     Running   0          9m27s
kube-system   csi-do-node-jppxt                  2/2     Running   0          6m33s
kube-system   do-node-agent-647dj                1/1     Running   0          6m33s
kube-system   kube-proxy-xlldk                   1/1     Running   0          6m33s
</pre>
  <p id="IzWh">This shows that our Kubernetes cluster is running, and we are ready to move on to the next piece.</p>
  <h1 id="onto-the-next">Onto the next</h1>
  <p id="Kj12">Next we are going to spin up a database for our application.</p>
  <p id="m4hD">You can do this using either a Managed Database from DigitalOcean, or run it in your new Kubernetes cluster. The next post has instructions for both approaches.</p>
  <p id="IMIN"></p>
  <h2 id="v6v3"><u>! PART FIVE: Deploying a database for our application</u></h2>
  <p id="PTbG"></p>
  <p id="oInv">Deploying a database for our application can be quite a challenge.</p>
  <p id="i6qP">On one hand, using a managed database makes sense from a management perspective, but might be a bit more expensive than running it ourselves.</p>
  <p id="JW1G">On the other hand, running it ourselves comes with a whole array of possible maintenance issues like Storage, Backups and Restoration.</p>
  <p id="TiAb">Introducing storage into our Kubernetes cluster also makes it quite a bit harder to manage, especially for production-critical workloads.</p>
  <p id="M7dr">In this post we will cover both options.</p>
  <p id="o2ll"></p>
  <h1 id="managed-database">Managed Database</h1>
  <p id="xyGq">The easiest to manage, if you are willing to fork out a couple more bucks, is a managed database.</p>
  <p id="qLOk">Most Cloud providers offer managed databases, including DigitalOcean on which this series is built.</p>
  <p id="NGkb">We are going to use MySQL in this post, as it is, in my opinion, the most used option for Laravel.</p>
  <p id="UdmW">You are welcome to switch this out for Postgres if you are so inclined.</p>
  <p id="xg5E">In the Infrastructure repository we created, we can add a new file called <code>database.tf</code> where we can define the configuration for our DigitalOcean Managed database.</p>
  <pre id="EeFP"># Define some constant values for the different versions of DigitalOcean databases
locals {
  mysql = {
    engine = &quot;mysql&quot;
    version = &quot;8&quot;
  }
  postgres = {
    engine = &quot;pg&quot;
    version = &quot;13&quot; # Available options: 10 | 11 | 12 | 13
  }
}

# We need to create a database cluster in DigitalOcean,
# based on MySQL 8, which is the version DigitalOcean provides.
# You can switch this out for Postgres by changing the &#x60;local.&#x60; pointers to point at postgres.
resource &quot;digitalocean_database_cluster&quot; &quot;laravel-in-kubernetes&quot; {
  name = &quot;laravel-in-kubernetes&quot;
  engine = local.mysql.engine # Replace with &#x60;local.postgres.engine&#x60; if using postgres
  version = local.mysql.version # Replace with &#x60;local.postgres.version&#x60; if using postgres
  size = &quot;db-s-1vcpu-1gb&quot;
  region = var.do_region
  node_count = 1
}

# We want to create a separate database for our application inside the database cluster.
# This way we can share the cluster resources, but have multiple separate databases.
resource &quot;digitalocean_database_db&quot; &quot;laravel-in-kubernetes&quot; {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes.id
  name = &quot;laravel-in-kubernetes&quot;
}

# We want to create a separate user for our application,
# So we can limit access if necessary
# We also use Native Password auth, as it works better with current Laravel versions
resource &quot;digitalocean_database_user&quot; &quot;laravel-in-kubernetes&quot; {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes.id
  name = &quot;laravel-in-kubernetes&quot;
  mysql_auth_plugin = &quot;mysql_native_password&quot;
}

# We want to allow access to the database from our Kubernetes cluster
# We can also add custom IP addresses
# If you would like to connect from your local machine,
# simply add your public IP
resource &quot;digitalocean_database_firewall&quot; &quot;laravel-in-kubernetes&quot; {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes.id

  rule {
    type  = &quot;k8s&quot;
    value = digitalocean_kubernetes_cluster.laravel-in-kubernetes.id
  }

#   rule {
#     type  = &quot;ip_addr&quot;
#     value = &quot;ADD_YOUR_PUBLIC_IP_HERE_IF_NECESSARY&quot;
#   }
}

# We also need to add outputs for the database, to easily be able to reach it.

# Expose the host of the database so we can easily use that when connecting to it.
output &quot;laravel-in-kubernetes-database-host&quot; {
  value = digitalocean_database_cluster.laravel-in-kubernetes.host
}

# Expose the port of the database, as it is usually different from the default ports of MySQL / Postgres
output &quot;laravel-in-kubernetes-database-port&quot; {
  value = digitalocean_database_cluster.laravel-in-kubernetes.port
}
</pre>
  <p id="EcOk">Once we apply that, it might take some time to create the database, but Terraform will print out a database host and port for us.</p>
  <pre id="ZOzr">$ terraform apply 
[...]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

laravel-in-kubernetes-database-host = &quot;XXX&quot;
laravel-in-kubernetes-database-port = 25060
</pre>
  <p id="Sduk">You will now see your database host and port.</p>
  <h2 id="security">Security</h2>
  <p id="CVc5">But what about the username and password?</p>
  <p id="asFN">We could fetch these from Terraform directly using the <code>digitalocean_database_user.laravel-in-kubernetes.password</code> attribute, as shown <a href="https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs/resources/database_user#password" target="_blank">here</a>. The problem with this is that the password will be stored in the Terraform state, and anyone who has the state will be able to access this value, which compromises your database.</p>
  <p id="HzAl">What we want to do is create the initial user with an initial password, and then change that password outside of Terraform.</p>
  <p id="MRqq">There are other solutions to this such as Key Stores provided by Cloud providers, which can be used with the <a href="https://external-secrets.io/" target="_blank">External Secrets Operator</a> to provide these seamlessly in Kubernetes.</p>
  <p id="SHnM">For the moment though, we will use the DigitalOcean UI, to regenerate the password, and use that outside of Terraform for the future.</p>
  <p id="kT17">In the DigitalOcean UI, you can regenerate the password, and store it to use in the next steps.</p>
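  <p>One convenient place to keep the regenerated credentials, ready for the deployment steps later in the series, is a Kubernetes Secret. The manifest below is only a sketch: the Secret name and key names are assumptions, not something the series has defined yet.</p>

```yaml
# Hypothetical Secret holding the regenerated database credentials.
# Replace the placeholder value with the password from the DigitalOcean UI.
apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes-db
type: Opaque
stringData:
  DB_USERNAME: laravel-in-kubernetes
  DB_PASSWORD: "<password-from-digitalocean-ui>"
```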
  <h2 id="laravel-changes">Laravel Changes</h2>
  <p id="UGeE">When using a default DigitalOcean Managed Database with our application, we need to make one change to our actual code base.</p>
  <p id="rOLW">Laravel migrations will fail, because the database does not allow tables without primary keys, with an error such as</p>
  <pre id="uhjG">Migrating: 2014_10_12_100000_create_password_resets_table

In Connection.php line 692:

  SQLSTATE[HY000]: General error: 3750 Unable to create or change a table without
  a primary key, when the system variable &#x27;sql_require_primary_key&#x27; is set.
  Add a primary key to the table or unset this variable to avoid this message.
  Note that tables without a primary key can cause performance problems in
  row-based replication, so please consult your DBA before changing this setting.
  (SQL: create table &#x60;password_resets&#x60; (&#x60;email&#x60; varchar(255) not null,
  &#x60;token&#x60; varchar(255) not null, &#x60;created_at&#x60; timestamp null)
  default character set utf8mb4 collate &#x27;utf8mb4_unicode_ci&#x27;)

In Connection.php line 485:

  SQLSTATE[HY000]: General error: 3750 Unable to create or change a table without
  a primary key, when the system variable &#x27;sql_require_primary_key&#x27; is set.
  Add a primary key to the table or unset this variable to avoid this message.
  Note that tables without a primary key can cause performance problems in
  row-based replication, so please consult your DBA before changing this setting.</pre>
  <p id="X1qV">To get around this error, we can switch off the primary key requirement during migrations.</p>
  <p id="J6VW">It&#x27;s advisable to add primary keys to your tables, but if you have an existing application, it might be a better idea to switch the requirement off first, and add primary keys later, depending on your specific case.</p>
  <p id="sim3">The way I like to do this is by adding a specific statement which catches migration events, and then switches off the primary key constraint.</p>
  <p id="IaAK">In <code>app/Providers/AppServiceProvider.php</code>, add the following to the <code>register</code> method</p>
  <pre id="UBRR">use Illuminate\Database\Events\MigrationsEnded;
use Illuminate\Database\Events\MigrationsStarted;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Event;


/**
 * Register any application services.
 *
 * @return void
 */
public function register()
{
    // https://github.com/laravel/framework/issues/33238#issuecomment-897063577
    Event::listen(MigrationsStarted::class, function () {
        DB::statement(&#x27;SET SESSION sql_require_primary_key=0&#x27;);
    });
    Event::listen(MigrationsEnded::class, function () {
        DB::statement(&#x27;SET SESSION sql_require_primary_key=1&#x27;);
    });
}</pre>
  <p id="smmy">Once we&#x27;ve have done this, we can commit the new fix, and rebuild both our application containers so they contain the new code updates</p>
  <pre id="oE3Q"># Commit the fix
$ git add app/Providers/AppServiceProvider.php
$ git commit -m &quot;Disable Primary Key check for migrations&quot;

# Rebuild our container images
$ make docker-build

# Lastly, push the new container images to our registry
$ make docker-push</pre>
  <p id="Yy6X">When we now run migrations against the managed database, everything should work.</p>
  <p id="uByw">In the next step, we will start deploying our application and run migrations on startup.</p>
  <h1 id="self-managed-database">Self-managed database</h1>
  <p id="762T">If you would like to use your own database running in Kubernetes, you can of course do this.</p>
  <p id="bphK">For running a database in Kubernetes, there are a few things to keep in mind:</p>
  <ul id="6cv0">
    <li id="Zqno">Maintenance, such as backups, upgrades, and security.</li>
    <li id="uhDW">Persistence. You&#x27;re going to need persistent storage so your data survives upgrades and pod rescheduling.</li>
    <li id="yuja">Scalability. Running a distributed database with separate write &amp; read replicas can become quite difficult to manage. As a starting point you will not need to scale your database this way, but in future you might.</li>
  </ul>
  <p id="F1An">With all of this taken into account, we will deploy a MySQL 8 database inside Kubernetes, with persistence backed by a DigitalOcean volume and a manual backup and restore strategy. We won&#x27;t cover monitoring just yet, as this will be covered in depth in a future post.</p>
  <h2 id="creating-a-persistentvolumeclaim-in-kubernetes">Creating a PersistentVolumeClaim in Kubernetes</h2>
  <p id="0fDR">We need to create a PersistentVolumeClaim.</p>
  <p id="qz0I">This will trigger the CSI driver to create a volume in the cloud provider (in this case DigitalOcean), register it in Kubernetes as a PersistentVolume, and bind it to our claim, which we can then use to persist our database data across deployments and upgrades.</p>
  <p id="jV4q">In the next step of the series, we will create a deployment repo to store all our Kubernetes configurations in.</p>
  <p id="Hu3A">Since we need it already, we&#x27;ll jump ahead and create it now.</p>
  <p id="xSdr">Create a new directory for your deployment manifests, with a subdirectory for your database.</p>
  <pre id="R5EI"># First make the deployment directory 
mkdir -p deployment
cd deployment

# Then next create a database directory to store database specific manifests
mkdir -p database</pre>
  <p id="yr1s">Next, create a file called <code>database/persistent-volume-claim.yml</code> where we will store the configuration.</p>
  <pre id="uaSa">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-in-kubernetes-mysql
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi</pre>
  <p id="OY8C">We request only 1Gi of storage for the moment. You can always resize the volume at a later point if necessary.</p>
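If you do need more space later, resizing is just a matter of raising the request and re-applying, assuming the storage class supports volume expansion (recent versions of the DigitalOcean CSI driver do). A sketch of the updated claim:

```yaml
# Same claim as above, with only the storage request raised.
# Assumes the storage class has allowVolumeExpansion enabled.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-in-kubernetes-mysql
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi # was 1Gi; PVC storage can only grow, never shrink
```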
  <p id="rswl">You can apply that to your Kubernetes cluster, and after a few minutes you should see the DigitalOcean volume mounted.</p>
  <pre id="IQOi">$ kubectl apply -f database
persistentvolumeclaim/laravel-in-kubernetes-mysql created
$ kubectl get persistentvolume
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS       REASON   AGE
pvc-47da21f2-113c-4415-b7c0-08e3782ac1c3   1Gi        RWO            Delete           Bound    app/laravel-in-kubernetes-mysql   do-block-storage            16s</pre>
  <p id="okth">You can also see the volume created in the DigitalOcean UI under Volumes.</p>
  <figure id="RhVi" class="m_original">
    <img src="https://chris-vermeulen.com/content/images/2021/08/image-2.png" />
  </figure>
  <p id="jxtc">You&#x27;ll notice that it is not mounted to a particular droplet just yet.</p>
  <p id="Mvyp">The Volume will only be mounted once an application actually tries to use the PVC.</p>
  <p id="9kQn">This is intentional, as the volume will be mounted to the specific Droplet where the pod is running.</p>
  <h2 id="creating-secrets-for-our-mysql-database">Creating Secrets for our MySQL database</h2>
  <p id="yHFk">We need to create a username and password to use with MySQL.</p>
  <p id="pyNA">MySQL allows us to inject these as environment variables, but first we need to store them in a Kubernetes Secret.</p>
  <p id="l61Y">Create a new random password for use in our application.</p>
  <pre id="IKw6">$ LC_ALL=C tr -dc &#x27;A-Za-z0-9&#x27; &lt;/dev/urandom | head -c 20 ; echo
eyeckfIIXw3KX0Rd0GHo</pre>
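If you&#x27;ll be generating several credentials, the same pipeline can be wrapped in a small helper function. A sketch (the <code>gen_password</code> name is our own):

```shell
# Generate a random alphanumeric password of the given length,
# using the same tr/head pipeline as above.
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c "$1"
}

PASSWORD="$(gen_password 20)"
echo "$PASSWORD"
```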
  <p id="ztNc">We also need a username, which in this case we&#x27;ll call <code>laravel-in-kubernetes</code></p>
  <p id="YeBz">Create a new file called <code>secret.yml</code> in the database folder containing our username and password.</p>
  <pre id="yIAL">apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes-mysql
type: Opaque
stringData:
  DB_USERNAME: &quot;laravel-in-kubernetes&quot;
  DB_PASSWORD: &quot;eyeckfIIXw3KX0Rd0GHo&quot;</pre>
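A quick note on <code>stringData</code>: it lets us write the values in plain text, and Kubernetes base64-encodes them into the Secret&#x27;s <code>data</code> field on our behalf. The encoding is trivially reversible, which is why committing this file is a security concern:

```shell
# What Kubernetes stores under .data for our DB_USERNAME key
ENCODED=$(printf '%s' 'laravel-in-kubernetes' | base64)
echo "$ENCODED"   # bGFyYXZlbC1pbi1rdWJlcm5ldGVz

# ...and anyone with access to the manifest can decode it straight back
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"   # laravel-in-kubernetes
```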
  <h3 id="a-note-on-security">A note on security</h3>
  <p id="9gkQ">A better approach would be to not store this secret in version control, as that exposes our passwords to whoever has access to the manifests.</p>
  <p id="3eHA">An alternative solution is to use <a href="https://github.com/bitnami-labs/sealed-secrets" target="_blank">Sealed Secrets</a> or the <a href="https://external-secrets.io/" target="_blank">External Secrets Operator</a> from <a href="https://www.container-solutions.com/" target="_blank">Container Solutions</a></p>
  <p id="z6Bd">For the moment, we will store the secret in a plain manifest to keep the learning simple.</p>
  <p id="Yoih">From here we can apply the secret, making it available to our database in the coming steps.</p>
  <pre id="Ifyr">$ kubectl apply -f database/
secret/laravel-in-kubernetes-mysql created</pre>
  <h2 id="creating-a-statefulset-for-the-database">Creating a StatefulSet for the database</h2>
  <p id="FZXr">In our <code>database</code> folder we can create another file called <code>statefulset.yml</code>, where we will declare our database setup, with liveness and readiness probes, as well as resource requests for more stable scheduling.</p>
  <p id="vYmm">We use a StatefulSet so the database pod keeps a stable identity and is only rescheduled when it really needs to be.</p>
  <pre id="E96X">apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: laravel-in-kubernetes-mysql
  labels:
    tier: backend
    layer: database
spec:
  selector:
    matchLabels:
      tier: backend
      layer: database
  serviceName: laravel-in-kubernetes-mysql
  replicas: 1
  template:
    metadata:
      labels:
        tier: backend
        layer: database
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - name: mysql
          containerPort: 3306
        env:
        - name: MYSQL_RANDOM_ROOT_PASSWORD
          value: &#x27;1&#x27;
        - name: MYSQL_DATABASE
          value: laravel-in-kubernetes
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: laravel-in-kubernetes-mysql
              key: DB_USERNAME
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: laravel-in-kubernetes-mysql
              key: DB_PASSWORD
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        resources:
          requests:
            cpu: 300m
            memory: 256Mi
        livenessProbe:
          exec:
            command:
            - bash
            - -c
            - mysqladmin -u ${MYSQL_USER} -p${MYSQL_PASSWORD} ping
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - bash
            - -c
            - mysql -h 127.0.0.1 -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -e &quot;SELECT 1&quot;
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: laravel-in-kubernetes-mysql</pre>
  <p id="eJEJ">The StatefulSet will start up a single pod containing our database, mount our PersistentVolumeClaim into the container so the data is stored on a DigitalOcean volume, and automatically check for MySQL availability before allowing other pods to connect.</p>
  <p id="wktj">When we redeploy the StatefulSet to upgrade MySQL or change settings, our data will stay persisted, and the CSI driver will remount the volume on whichever node our StatefulSet pod is rescheduled to.</p>
  <h2 id="database-service">Database Service</h2>
  <p id="afSv">The next piece we need is a Kubernetes Service so we can easily connect to our database instance.</p>
  <p id="BxAD">In the <code>database</code> folder, create a new file called <code>service.yml</code> where we can specify the Service details</p>
  <pre id="ieBk">apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-mysql
spec:
  selector:
    tier: backend
    layer: database
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306</pre>
  <p id="s55y">We can apply that, and in future when we&#x27;d like to connect to the database, we can use the Service name <code>laravel-in-kubernetes-mysql</code> as the host and <code>3306</code> as the port.</p>
  <pre id="DAAg">$ kubectl apply -f database/
service/laravel-in-kubernetes-mysql created</pre>
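Within the cluster, the Service is also resolvable by a fully qualified DNS name that follows a fixed pattern, which is handy when connecting across namespaces (assuming the <code>default</code> namespace here):

```shell
# Kubernetes service DNS convention: <service>.<namespace>.svc.<cluster-domain>
SERVICE="laravel-in-kubernetes-mysql"
NAMESPACE="default"
DB_HOST="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$DB_HOST"   # laravel-in-kubernetes-mysql.default.svc.cluster.local
```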
  <h2 id="database-backups">Database backups</h2>
  <p id="yw6q">As we are mounting to a DigitalOcean volume, our data should be fairly safe.</p>
  <p id="4TwL">But there are still a few things we need to take care of.</p>
  <p id="jtIB">For example, if we recreate our cluster for a major version upgrade, we need to manually remount our volume into the new Kubernetes cluster.</p>
  <p id="Ptpl">We also need to make sure that if we accidentally delete the PersistentVolumeClaim, we can restore it from a data source.</p>
  <p id="JAJd">For this and more on Backups, you can have a look at <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" target="_blank">Kubernetes Volume Snapshots</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#using-volume-populators" target="_blank">Kubernetes Volume Data Sources</a>. This will allow you to restore data on failure.</p>
  <p id="2Vrz">There are also tools, such as <a href="https://velero.io/" target="_blank">Velero</a>, which can alleviate a lot of this manual work.</p>
  <h1 id="onto-the-next">Onto the next</h1>
  <p id="nDBq">Next, we will start deploying our application in Kubernetes.</p>
  <p id="ZSWF"></p>
  <h2 id="vKkt"><u>! PART SIX: Deploying Laravel Web App in Kubernetes</u></h2>
  <p id="RAfL"></p>
  <p id="QhAB">In this post we will cover deploying our Laravel Web App inside of Kubernetes.</p>
  <p id="jfcS">This covers our main app and our migrations in Kubernetes.</p>
  <p id="i2DR">This post also assumes you have Dockerised your application using <a href="https://chris-vermeulen.com/laravel-in-kubernetes-part-2/" target="_blank">Part 2</a> &amp; <a href="https://chris-vermeulen.com/laravel-in-kubernetes-part-3/" target="_blank">Part 3</a> of this series. If you containerised your application yourself, you should be able to follow along as long as you have the same style of Docker files. If you have a monolithic Docker image, such as the one from <a href="https://laravel.com/docs/8.x/sail" target="_blank">Laravel Sail</a>, you can simply replace the images in the manifests with your own.</p>
  <p id="4sCk"></p>
  <h1 id="deployment-repo">Deployment Repo</h1>
  <p id="zYWT">First thing we&#x27;ll start with is a fresh repository. This is where we will store all of our deployment manifests, and also where we will deploy from.</p>
  <p id="L2UO">If you followed the self-managed database tutorial in the previous post, you&#x27;ll already have created a deployment repo, and can skip the creation of this directory.</p>
  <p id="WyBB">Start with a fresh directory in your projects folder, or wherever you keep your source code folders.</p>
  <pre id="jej8">mkdir -p laravel-in-kubernetes-deployment
cd laravel-in-kubernetes-deployment</pre>
  <h1 id="common-configuration">Common Configuration</h1>
  <p id="DEA2">We want to create a ConfigMap and Secret which we can use for all the different pieces of our application and easily configure them commonly.</p>
  <h2 id="common-folder">Common folder</h2>
  <p id="s8V1">We&#x27;ll start with a common folder for the common manifests.</p>
  <pre id="JXy6">$ mkdir -p common</pre>
  <h2 id="configmap">ConfigMap</h2>
  <p id="8imD">Create a ConfigMap matching all of the details in the <code>.env</code> file, except the secret values.</p>
  <p id="sGG3">Create a new file called <code>common/app-config.yml</code> with the following content</p>
  <pre id="eo67">apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  APP_NAME: &quot;Laravel&quot;
  APP_ENV: &quot;local&quot;
  APP_DEBUG: &quot;true&quot;
  # Once you have an external URL for your application, you can add it here. 
  APP_URL: &quot;http://laravel-in-kubernetes.test&quot;
  
  # Update the LOG_CHANNEL to stdout for Kubernetes
  LOG_CHANNEL: &quot;stdout&quot;
  LOG_LEVEL: &quot;debug&quot;
  DB_CONNECTION: &quot;mysql&quot;
  DB_HOST: &quot;mysql&quot;
  DB_PORT: &quot;3306&quot;
  DB_DATABASE: &quot;laravel_in_kubernetes&quot;
  BROADCAST_DRIVER: &quot;log&quot;
  CACHE_DRIVER: &quot;file&quot;
  FILESYSTEM_DRIVER: &quot;local&quot;
  QUEUE_CONNECTION: &quot;sync&quot;
  
  # Update the Session driver to Redis, based off part-2 of series
  SESSION_DRIVER: &quot;redis&quot;
  SESSION_LIFETIME: &quot;120&quot;
  MEMCACHED_HOST: &quot;memcached&quot;
  REDIS_HOST: &quot;redis&quot;
  REDIS_PORT: &quot;6379&quot;
  MAIL_MAILER: &quot;smtp&quot;
  MAIL_HOST: &quot;mailhog&quot;
  MAIL_PORT: &quot;1025&quot;
  MAIL_ENCRYPTION: &quot;null&quot;
  MAIL_FROM_ADDRESS: &quot;null&quot;
  MAIL_FROM_NAME: &quot;${APP_NAME}&quot;
  AWS_DEFAULT_REGION: &quot;us-east-1&quot;
  AWS_BUCKET: &quot;&quot;
  AWS_USE_PATH_STYLE_ENDPOINT: &quot;false&quot;
  PUSHER_APP_ID: &quot;&quot;
  PUSHER_APP_CLUSTER: &quot;mt1&quot;
  MIX_PUSHER_APP_KEY: &quot;${PUSHER_APP_KEY}&quot;
</pre>
  <h2 id="secret">Secret</h2>
  <p id="2q11">Create a Secret, matching all the secret details in .env. This is where we will pull in any secret values for our application.</p>
  <p id="0ZE2">Create a new file called <code>common/app-secret.yml</code> with the following content</p>
  <pre id="HDnp">apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes
type: Opaque
stringData:
  APP_KEY: &quot;base64:eQrCXchv9wpGiOqRFaeIGPnqklzvU+A6CZYSMosh1to=&quot;
  DB_USERNAME: &quot;sail&quot;
  DB_PASSWORD: &quot;password&quot;
  REDIS_PASSWORD: &quot;null&quot;
  MAIL_USERNAME: &quot;null&quot;
  MAIL_PASSWORD: &quot;null&quot;
  AWS_ACCESS_KEY_ID: &quot;&quot;
  AWS_SECRET_ACCESS_KEY: &quot;&quot;
  PUSHER_APP_KEY: &quot;&quot;
  PUSHER_APP_SECRET: &quot;&quot;
  MIX_PUSHER_APP_KEY: &quot;${PUSHER_APP_KEY}&quot;
</pre>
  <p id="I7zb">We can apply both of these files for usage in our Deployments.</p>
  <pre id="I4sk">$ kubectl apply -f common/</pre>
  <h2 id="update-configmap-with-database-details">Update ConfigMap with database details</h2>
  <p id="GAIt">We can fill in our database details as well in the ConfigMap and the Secret so our database can connect easily.</p>
  <p id="9G67">In <code>common/app-config.yml</code>, replace the values for the <code>DB_*</code> connection details.</p>
  <pre id="ilR2">apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  DB_CONNECTION: &quot;mysql&quot;
  DB_HOST: &quot;laravel-in-kubernetes-mysql&quot; # Use the host from Terraform if using managed MySQL
  DB_PORT: &quot;3306&quot; # Use the port from Terraform if using managed MySQL
  DB_DATABASE: &quot;laravel-in-kubernetes&quot;
</pre>
  <h2 id="updating-configuration-with-production-details">Updating configuration with production details</h2>
  <p id="XpNE">We also need to update our application configuration with production details, so our app runs in a production like fashion in Kubernetes.</p>
  <p id="aBOQ">In the <code>common/app-config.yml</code>, replace the details with production settings.</p>
  <pre id="pD9X">apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  APP_NAME: &quot;Laravel&quot;
  APP_ENV: &quot;production&quot;
  APP_DEBUG: &quot;false&quot;</pre>
  <h2 id="apply-the-configurations">Apply the configurations</h2>
  <p id="4qjV">We can now apply those into our cluster.</p>
  <pre id="EEQB">$ kubectl apply -f common/
configmap/laravel-in-kubernetes configured
</pre>
  <h2 id="update-secret-with-database-details">Update Secret with database details</h2>
  <p id="gqdC">We also need to fill our Secret with the correct database details</p>
  <pre id="sHMX">apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes
type: Opaque
stringData:
  DB_USERNAME: &quot;XXX&quot; # Replace with your DB username
  DB_PASSWORD: &quot;XXX&quot; # Replace with your DB password
  </pre>
  <p id="pcGt">We can apply that, and then move onto the deployments</p>
  <pre id="OYDv">$ kubectl apply -f common/
secret/laravel-in-kubernetes configured
</pre>
  <h1 id="fpm-deployment">FPM Deployment</h1>
  <p id="KVU3">We need a Deployment to run our application.</p>
  <p id="penU">The Deployment instructs Kubernetes which image to deploy and how many replicas of it to run.</p>
  <h2 id="fpm-directory">FPM Directory</h2>
  <p id="a0h8">First we need to create an <code>fpm</code> directory where we can store all of our FPM Deployment configurations</p>
  <pre id="HFYv">$ mkdir -p fpm</pre>
  <h2 id="fpm-deployment-1">FPM Deployment</h2>
  <p id="QcDp">We&#x27;ll start with a very basic Kubernetes Deployment for our FPM app inside the <code>fpm</code> directory called <code>deployment.yml</code></p>
  <pre id="0G05">apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-fpm
  labels:
    tier: backend
    layer: fpm
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
      layer: fpm
  template:
    metadata:
      labels:
        tier: backend
        layer: fpm
    spec:
      containers:
        - name: fpm
          image: [your_registry_url]/fpm_server:v0.0.1
          ports:
            - containerPort: 9000</pre>
  <p id="meDt">We can now apply that, and we should see the application running correctly.</p>
  <pre id="RzGv">$ kubectl apply -f fpm/deployment.yml 
deployment.apps/laravel-in-kubernetes-fpm created

$ kubectl get deploy,pods
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/laravel-in-kubernetes-fpm   1/1     1            1           58s

NAME                                             READY   STATUS    RESTARTS   AGE
pod/laravel-in-kubernetes-fpm-79fb79c548-2lp7m   1/1     Running   0          59s
</pre>
  <p id="gOJi">You should also be able to see the logs from the FPM pod.</p>
  <pre id="EZdi">$ kubectl logs laravel-in-kubernetes-fpm-79fb79c548-2lp7m
[30-Aug-2021 19:33:49] NOTICE: fpm is running, pid 1
[30-Aug-2021 19:33:49] NOTICE: ready to handle connections</pre>
  <p id="ZBNd">Everything is now running well for our FPM Deployment.</p>
  <h2 id="private-registry">Private Registry</h2>
  <p id="svAG">If you are using a private registry for your images, have a look at the links below for how to authenticate your cluster against it.</p>
  <ul id="Uf4E">
    <li id="PsYn"><a href="https://chris-vermeulen.com/using-gitlab-registry-with-kubernetes/" target="_blank">https://chris-vermeulen.com/using-gitlab-registry-with-kubernetes/</a></li>
    <li id="nQnn"><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" target="_blank">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></li>
  </ul>
  <h2 id="fpm-service">FPM Service</h2>
  <p id="5eRK">We also need a Kubernetes Service. This will expose our FPM container port in Kubernetes for us to use from our future NGINX deployment</p>
  <p id="cqfu">Create a new file <code>service.yml</code> in the <code>fpm</code> directory.</p>
  <pre id="U38q">apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-fpm
spec:
  selector:
    tier: backend
    layer: fpm
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
</pre>
  <p id="XTXH">This will allow us to connect to the FPM container from our Web Server deployment, which we will deploy next.</p>
  <p id="6rEx">First, we need to apply the new Service though</p>
  <pre id="uE5j">$ kubectl apply -f fpm/service.yml    
service/laravel-in-kubernetes-fpm created
</pre>
  <h1 id="web-server-deployment">Web Server Deployment</h1>
  <p id="pU4X">The next piece we need to deploy is our Web Server container, as well as its Service.</p>
  <p id="SIiU">This will help expose our FPM application to the outside world.</p>
  <h2 id="web-server-directory">Web Server Directory</h2>
  <p id="VWmY">Create a new folder called <code>webserver</code></p>
  <pre id="wWRY">mkdir -p webserver</pre>
  <h2 id="web-server-deployment-1">Web Server Deployment</h2>
  <p id="mTHM">Within the <code>webserver</code> folder, create the Web Server <code>deployment.yml</code> file.</p>
  <p id="OW9T">We will also inject the <code>FPM_HOST</code> environment variable to point Nginx at our FPM deployment.</p>
  <pre id="Ykhq">apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-webserver
  labels:
    tier: backend
    layer: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
      layer: webserver
  template:
    metadata:
      labels:
        tier: backend
        layer: webserver
    spec:
      containers:
        - name: webserver
          image: [your_registry_url]/web_server:v0.0.1
          ports:
            - containerPort: 80
          env:
            # Inject the FPM Host as we did with Docker Compose
            - name: FPM_HOST
              value: laravel-in-kubernetes-fpm:9000
</pre>
  <p id="S1rD">We can apply that, and see that our web server is running correctly.</p>
  <pre id="LOYc">$ kubectl apply -f webserver/deployment.yml 
deployment.apps/laravel-in-kubernetes-webserver created

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-79fb79c548-2lp7m         1/1     Running   0          9m9s
laravel-in-kubernetes-webserver-5877867747-zm7zm   1/1     Running   0          6s

$ kubectl logs laravel-in-kubernetes-webserver-5877867747-zm7zm
[...]
2021/08/30 19:42:51 [notice] 1#1: start worker processes
2021/08/30 19:42:51 [notice] 1#1: start worker process 38
2021/08/30 19:42:51 [notice] 1#1: start worker process 39
</pre>
  <p id="ofOC">Our Web Server deployment is now running successfully.</p>
  <p id="5LkI">We are now able to move on to the Service.</p>
  <h2 id="web-server-service">Web Server Service</h2>
  <p id="brE0">We also need a webserver service to expose the nginx deployment to the rest of the cluster.</p>
  <p id="l3oc">Create a new file in the <code>webserver</code> directory called <code>service.yml</code></p>
  <pre id="Cd6R">apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-webserver
spec:
  selector:
    tier: backend
    layer: webserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</pre>
  <p id="JCAq">We can apply that, and test our application, by port-forwarding it to our local machine.</p>
  <pre id="NPuL">$ kubectl apply -f webserver/service.yml 
service/laravel-in-kubernetes-webserver created

$ kubectl port-forward service/laravel-in-kubernetes-webserver 8080:80
Forwarding from 127.0.0.1:8080 -&gt; 80
Forwarding from [::1]:8080 -&gt; 80
</pre>
  <p id="zb3R">Now open up <a href="http://localhost:8080/" target="_blank">http://localhost:8080</a> on your local machine, and you should see your application running in Kubernetes</p>
  <figure id="C5C6" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/08/image.png" width="1185" />
  </figure>
  <p id="YCCl">This means your application is running correctly, and it can serve requests.</p>
  <h1 id="using-the-database">Using the Database</h1>
  <p id="phWV">Next, we need to inject our common ConfigMap and Secret into the FPM deployment, to provide it with all the database details.</p>
  <p id="VCg7">You can see <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" target="_blank">here</a> for a better understanding of how to use secrets and configmaps as environment variables.</p>
  <p id="JsHZ">We are going to use envFrom to directly inject our ConfigMap and Secret into the container.</p>
  <p id="k2Oa">In the FPM deployment</p>
  <pre id="TIHH">apiVersion: apps/v1
kind: Deployment
metadata:
  [...]
spec:
  [...]
  template:
    [...]
    spec:
      containers:
        - name: fpm
          [...]
          envFrom:
            - configMapRef:
                name: laravel-in-kubernetes
            - secretRef:
                name: laravel-in-kubernetes</pre>
  <p id="ktK2">Kubernetes will now inject these values as environment variables when our application starts to run.</p>
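To illustrate what <code>envFrom</code> does: every key in the ConfigMap and Secret simply becomes a process environment variable, exactly as if it had been exported in the container&#x27;s shell, and Laravel&#x27;s <code>env()</code> helper reads them like any other env var. A local simulation (values are placeholders mirroring our manifests):

```shell
# Simulate the environment envFrom injects into the FPM container
export DB_CONNECTION="mysql"
export DB_HOST="laravel-in-kubernetes-mysql"
export DB_PORT="3306"

# Laravel builds its database connection from exactly these variables
echo "${DB_CONNECTION}://${DB_HOST}:${DB_PORT}"
```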
  <p id="BMRy">Apply the new configuration to make sure everything works correctly</p>
  <pre id="IvBm">$ kubectl apply -f fpm/
deployment.apps/laravel-in-kubernetes-fpm configured
service/laravel-in-kubernetes-fpm unchanged

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-84cf5b9bd7-z2jfd         1/1     Running   0          32s
laravel-in-kubernetes-webserver-5877867747-zm7zm   1/1     Running   0          15m

$ kubectl logs laravel-in-kubernetes-fpm-84cf5b9bd7-z2jfd
[30-Aug-2021 19:57:31] NOTICE: fpm is running, pid 1
[30-Aug-2021 19:57:31] NOTICE: ready to handle connections</pre>
  <p id="pNdb">Everything seems to be working swimmingly.</p>
  <h1 id="migrations">Migrations</h1>
  <p id="iidz">The next piece we want to take care of is running migrations for the application.</p>
  <p id="R3pp">I&#x27;ve heard multiple opinions on when to run migrations, and there are multiple ways to do it.</p>
  <p id="L7ls">Here is the option we&#x27;ll go with.</p>
  <h2 id="running-migrations-as-initcontainers">Running migrations as initContainers</h2>
  <p id="2QdT">We&#x27;ll be using a Kubernetes initContainer to run our migrations. This makes it quite simple, and stops any deployment if the migrations don&#x27;t pass first, giving us a clean window to fix any issues and deploy again.</p>
  <p id="YDi0">In our application, we need to add a new initContainer.</p>
  <p id="fbnz">We can go ahead and do this in the <code>fpm/deployment.yml</code> file.</p>
  <pre id="A7uN">apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-fpm
  labels:
    tier: backend
    layer: fpm
spec:
  [...]
  template:
    metadata: [...]
    spec:
      initContainers:
        - name: migrations
          image: [your_registry_url]/cli:v0.0.1
          command:
            - php
          args:
            - artisan
            - migrate
            - --force
          envFrom:
            - configMapRef:
                name: laravel-in-kubernetes
            - secretRef:
                name: laravel-in-kubernetes
      containers:
        - name: fpm
          [...]</pre>
  <p id="0eDt">This will run the migrations in a container before starting up our primary container; only if the migrations succeed will it start our primary app and replace the running instances.</p>
  <p id="i8sw">Let&#x27;s apply that and see the results.</p>
  <pre id="kbIe">$ kubectl apply -f fpm/
deployment.apps/laravel-in-kubernetes-fpm configured

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-856dcb9754-trf65         1/1     Running   0          16s
laravel-in-kubernetes-webserver-5877867747-zm7zm   1/1     Running   0          36m
</pre>
  <p id="542s">Next, we want to check the logs from the migrations initContainer to see if it was successful.</p>
  <pre id="3aXK">$ kubectl logs laravel-in-kubernetes-fpm-856dcb9754-trf65 -c migrations
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated:  2014_10_12_100000_create_password_resets_table (70.34ms)
Migrating: 2019_08_19_000000_create_failed_jobs_table
Migrated:  2019_08_19_000000_create_failed_jobs_table (24.21ms)</pre>
  <p id="Ebng">Our migrations have now run successfully.</p>
  <h3 id="errors">Errors</h3>
  <p id="0HfE">If you receive errors at this point, you can check the logs to see what went wrong.</p>
  <p id="DiGC">Most likely you cannot connect to your database or have provided incorrect credentials.</p>
  <p id="ZmFj">Feel free to comment on this blog, and I&#x27;d be happy to help you figure it out.</p>
  <h1 id="onto-the-next-">Onto the next.</h1>
  <p id="gtqe">In the next episode of this series, we will go over deploying queue workers.</p>
  <p id="FIwI"></p>
  <h2 id="34Iw"><u>! PART SEVEN: Deploying Redis to run Queue workers and cache</u></h2>
  <p id="GZnu"></p>
  <p id="ZSde">In this post, we&#x27;ll go over deploying a Redis instance which our Laravel Queue workers can run against.</p>
  <p id="0Fj4">The Redis instance can also be used for caching inside Laravel, or a second Redis instance can be installed separately for cache.</p>
  <p id="YkLS">We will cover two methods of running a Redis Instance.</p>
  <p id="axs3">On one hand we&#x27;ll use a managed Redis Cluster from DigitalOcean, which alleviates the maintenance burden for us, and gives us a Redis cluster which is immediately ready for use.</p>
  <p id="g9rV">On the other hand, we&#x27;ll deploy a Redis Instance into the Kubernetes cluster. This saves us some money, but does add a whole bunch of management problems into the mix.</p>
  <p id="Fq1v"></p>
  <h1 id="managed-redis">Managed Redis</h1>
  <p id="qPa7">In the same fashion as we did our <a href="https://chris-vermeulen.com/laravel-in-kubernetes-part-5/" target="_blank">database</a>, we will deploy a managed Redis instance in DigitalOcean.</p>
  <p id="ysi0">In the infrastructure repository we created earlier, we can add a new file called <code>redis.tf</code>, where we can store our Terraform configuration for the Redis Instance in DigitalOcean.</p>
  <pre id="H10G">resource &quot;digitalocean_database_cluster&quot; &quot;laravel-in-kubernetes-redis&quot; {
  name = &quot;laravel-in-kubernetes-redis&quot;
  engine = &quot;redis&quot;
  version = &quot;6&quot;
  size = &quot;db-s-1vcpu-1gb&quot;
  region = var.do_region
  node_count = 1
}

# We want to allow access to the database from our Kubernetes cluster
# We can also add custom IP addresses
# If you would like to connect from your local machine,
# simply add your public IP
resource &quot;digitalocean_database_firewall&quot; &quot;laravel-in-kubernetes-redis&quot; {
  cluster_id = digitalocean_database_cluster.laravel-in-kubernetes-redis.id

  rule {
    type  = &quot;k8s&quot;
    value = digitalocean_kubernetes_cluster.laravel-in-kubernetes.id
  }

#   rule {
#     type  = &quot;ip_addr&quot;
#     value = &quot;ADD_YOUR_PUBLIC_IP_HERE_IF_NECESSARY&quot;
#   }
}

output &quot;laravel-in-kubernetes-redis-host&quot; {
  value = digitalocean_database_cluster.laravel-in-kubernetes-redis.host
}

output &quot;laravel-in-kubernetes-redis-port&quot; {
  value = digitalocean_database_cluster.laravel-in-kubernetes-redis.port
}
</pre>
  <p id="sQf3">Let&#x27;s apply that, and we should see a host and port pop up after a little while.</p>
  <pre id="NGAW">$ terraform apply
[...]
Plan: 3 to add, 0 to change, 0 to destroy.
Enter a value: yes

digitalocean_database_cluster.laravel-in-kubernetes-redis: Creating...
[...]
Outputs:

laravel-in-kubernetes-database-host = &quot;XXX&quot;
laravel-in-kubernetes-database-port = 25060
laravel-in-kubernetes-redis-host = &quot;XXX&quot;
laravel-in-kubernetes-redis-port = 25061</pre>
  <p id="PFgP">We now have details for our Redis Instance, but not a username and password.</p>
  <p id="zbNS">Terraform does output these for us, but these would then be stored in the state file, which is not ideal.</p>
  <p id="yY6b">For the moment, you cannot change the password of the deployed Redis Instance in DigitalOcean, so we&#x27;ll use the username and password from Terraform.</p>
  <p id="lfyC">We won&#x27;t output these from Terraform, as they will then show up in logs when we build CI/CD.</p>
  <p id="Xx5m">You can <code>cat</code> the state file, search for the Redis instance, and find the username and password in there.</p>
  <pre id="mGON">$ cat terraform.tfstate | grep &#x27;&quot;name&quot;: &quot;laravel-in-kubernetes-redis&quot;&#x27; -A 20 | grep -e password -e &#x27;&quot;user&quot;&#x27;
&quot;password&quot;: &quot;XXX&quot;,
&quot;user&quot;: &quot;default&quot;,
</pre>
  <p id="iFLD">Store these values somewhere safe, as we&#x27;ll use them in the next step of our deployment.</p>
  <h1 id="self-managed-redis">Self-managed Redis</h1>
  <p id="dJx1">Self-managed Redis means we will run Redis ourselves inside Kubernetes, with Append Only File (AOF) persistence.</p>
  <p id="I3iv">This requires more management than a managed cluster, but saves some cost.</p>
  <p id="AiCM">We&#x27;ll run our Redis instance in a StatefulSet to ensure stable pod identity and storage.</p>
  <p id="yOdL">In our deployment repo, create a new directory called <code>redis</code>. Here we will store all our details for the Redis Cluster.</p>
  <p id="4TTc">Create a new file in the <code>redis</code> directory, called <code>persistent-volume-claim.yml</code>. This is where we will store the configuration for the storage we need provisioned in DigitalOcean.</p>
  <pre id="xhx3">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: laravel-in-kubernetes-redis
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # We are starting with 1GB. We can always increase it later.
      storage: 1Gi</pre>
  <p id="uBLf">Apply that, and we should see the volume created after a few seconds.</p>
  <pre id="6syZ">$ kubectl apply -f redis/persistent-volume-claim.yml 
persistentvolumeclaim/laravel-in-kubernetes-redis created
$ kubectl get persistentvolumes
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS       REASON   AGE
pvc-f5aac936-98f5-48f1-a526-a68bc5c17471   1Gi        RWO            Delete           Bound    default/laravel-in-kubernetes-redis   do-block-storage            25s

</pre>
  <p id="GY9Z">Our volume has been successfully created, and we can move on to actually deploying Redis.</p>
  <p id="geZt">Create a new file in the <code>redis</code> folder called <code>statefulset.yml</code> where we will configure the Redis Node.</p>
  <pre id="ZSrY">apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: laravel-in-kubernetes-redis
  labels:
    tier: backend
    layer: redis
spec:
  serviceName: laravel-in-kubernetes-redis
  selector:
    matchLabels:
      tier: backend
      layer: redis
  replicas: 1
  template:
    metadata:
      labels:
        tier: backend
        layer: redis
    spec:
      containers:
      - name: redis
        image: redis:5.0.4
        command: [&quot;redis-server&quot;, &quot;--appendonly&quot;, &quot;yes&quot;]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-aof
          mountPath: /data
      volumes:
        - name: redis-aof
          persistentVolumeClaim:
            claimName: laravel-in-kubernetes-redis</pre>
  <p id="zskI">As you can see, we are also mounting our PersistentVolumeClaim into the container, so our AOF file will persist across container restarts.</p>
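  <p id="zsk2">As an aside: with more than one replica, a StatefulSet would normally provision one volume per pod through <code>volumeClaimTemplates</code> rather than a single shared claim. A sketch of what that section could look like, assuming the same storage class (the sizing is illustrative):</p>
  <pre id="zsk3"># Replaces the volumes + persistentVolumeClaim pair above.
# Kubernetes then creates one PVC per replica automatically.
  volumeClaimTemplates:
    - metadata:
        name: redis-aof
      spec:
        storageClassName: do-block-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi</pre>
  <p id="zsk4">With a single replica, the explicit PersistentVolumeClaim we created works just as well, so we&#x27;ll stick with it here.</p>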
  <p id="Mjmr">We can go ahead and apply the statefulset, and we should see our Redis pod pop up.</p>
  <pre id="9g2c">$ kubectl apply -f redis/statefulset.yml 
statefulset.apps/laravel-in-kubernetes-redis created

# after a few seconds
$ kubectl get pods
laravel-in-kubernetes-redis-0   1/1     Running   0          18s

# Inspect the logs
$ kubectl logs laravel-in-kubernetes-redis-0
1:C 30 Aug 2021 17:31:16.678 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 30 Aug 2021 17:31:16.678 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 30 Aug 2021 17:31:16.678 # Configuration loaded
1:M 30 Aug 2021 17:31:16.681 * Running mode=standalone, port=6379.
1:M 30 Aug 2021 17:31:16.681 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 30 Aug 2021 17:31:16.681 # Server initialized
1:M 30 Aug 2021 17:31:16.681 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command &#x27;echo never &gt; /sys/kernel/mm/transparent_hugepage/enabled&#x27; as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 30 Aug 2021 17:31:16.681 * Ready to accept connections
</pre>
  <p id="wuoX">We now have Redis successfully running, and we just need to add a service to make it discoverable in Kubernetes.</p>
  <p id="lSrt">Create a new Service file in the <code>redis</code> directory called <code>service.yml</code> where we will store the service for Redis.</p>
  <pre id="DnAb">apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-redis
  labels:
    tier: backend
    layer: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    tier: backend
    layer: redis
  type: ClusterIP</pre>
  <p id="5EBV">Apply that, and we&#x27;ll have a Redis connection ready to go.</p>
  <h1 id="onto-the-next">Onto the next</h1>
  <p id="FIYQ">Next, we&#x27;ll move on to deploying our Queue workers in Kubernetes.</p>
  <p id="HgG0"></p>
  <h2 id="LPAQ"><u>! PART EIGHT: Deploying Laravel Queue workers in Kubernetes</u></h2>
  <p id="St1f"></p>
  <p id="nBjY">In this post we will cover deploying Laravel Queue workers in Kubernetes.</p>
  <p id="Tt7W">Deploying Laravel Queue workers in Kubernetes makes it fairly easy to scale out workers when jobs start piling up, and to release resources when load on the system is low.</p>
  <p id="AT1I"></p>
  <h1 id="queue-connection-update">Queue connection update</h1>
  <p id="xTmW">We need to make sure the Queue Workers can connect to our Redis instance.</p>
  <p id="C8tl">Update the ConfigMap and Secret in the <code>common/</code> directory to contain the new Redis details, and switch the queue driver to Redis.</p>
  <h2 id="updating-the-configmap">Updating the ConfigMap</h2>
  <p id="j8hQ">Update the details in <code>common/app-config.yml</code> for Redis and the queue driver.</p>
  <pre id="jePH">apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-in-kubernetes
data:
  QUEUE_CONNECTION: &quot;redis&quot;
  REDIS_HOST: &quot;XXX&quot;
  REDIS_PORT: &quot;XXX&quot;</pre>
  <p id="f1PQ">We can apply the new ConfigMap.</p>
  <pre id="Ecoe">$ kubectl apply -f common/app-config.yml
configmap/laravel-in-kubernetes configured
</pre>
  <h2 id="updating-the-secret">Updating the Secret</h2>
  <p id="vCgr">Update the details in the <code>common/app-secret.yml</code> to contain the new Redis connection details.</p>
  <pre id="Q2aa">apiVersion: v1
kind: Secret
metadata:
  name: laravel-in-kubernetes
type: Opaque
stringData:
  REDIS_PASSWORD: &quot;XXX&quot; # If you have no password set, you can set this to an empty string</pre>
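  <p id="vCg2">A quick note on the <code>stringData</code> field used above: it accepts plain text and Kubernetes encodes it server-side, whereas the alternative <code>data</code> field requires base64-encoded values. If you ever write the <code>data</code> form by hand, you can encode and verify a value like this (the value below is a placeholder, not a real credential):</p>
  <pre id="vCg3"># Encode a plaintext value for the data: field of a Secret
$ echo -n secret-password | base64
c2VjcmV0LXBhc3N3b3Jk

# Decode it again to double-check
$ echo -n c2VjcmV0LXBhc3N3b3Jk | base64 -d
secret-password</pre>
  <p id="vCg4">The <code>-n</code> flag matters: a trailing newline would become part of the encoded secret and break the Redis password.</p>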
  <p id="cyWo">We can apply the new Secret, and then move on to running the actual Queues.</p>
  <pre id="Yf4w">$ kubectl apply -f common/app-secret.yml
secret/laravel-in-kubernetes configured
</pre>
  <h1 id="queue-directory">Queue directory</h1>
  <p id="o65w">The first thing we&#x27;ll need is a new directory in our <code>deployment</code> repo called <code>queue-workers</code>.</p>
  <p id="FByp">Here is where we will configure our queue-workers.</p>
  <pre id="n5qj">$ mkdir -p queue-workers</pre>
  <h1 id="creating-the-deployment">Creating the deployment</h1>
  <p id="9h0u">Next, we need to create a Deployment for our queue workers, which will run them and be able to scale them for us.</p>
  <p id="yNyw">In the <code>queue-workers</code> directory, create a new file called <code>deployment-default.yml</code>.</p>
  <pre id="eA1o">apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-queue-worker-default
  labels:
    tier: backend
    layer: queue-worker
    queue: default
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
      layer: queue-worker
      queue: default
  template:
    metadata:
      labels:
        tier: backend
        layer: queue-worker
        queue: default
    spec:
      containers:
        - name: queue-worker
          image: [your_registry_url]/cli:v0.0.1
          command:
            - php
          args:
            - artisan
            - queue:work
            - --queue=default
            - --max-jobs=200
          envFrom:
            - configMapRef:
                name: laravel-in-kubernetes
            - secretRef:
                name: laravel-in-kubernetes
</pre>
  <p id="pHVL">This deployment will deploy the queue workers for the default queue only. We will cover adding more queues further down in the post.</p>
  <p id="rgP4">Let&#x27;s apply the new Queue worker, and check that the pod is running correctly.</p>
  <pre id="Ir3V">$ kubectl apply -f queue-workers/deployment-default.yml 
deployment.apps/laravel-in-kubernetes-queue-worker-default created

$ kubectl get pods
NAME                                                          READY   STATUS    RESTARTS   AGE
laravel-in-kubernetes-fpm-856dcb9754-trf65                    1/1     Running   0          10h
laravel-in-kubernetes-queue-worker-default-594bc6f4bb-8swdw   1/1     Running   0          9m38s
laravel-in-kubernetes-webserver-5877867747-zm7zm              1/1     Running   0          10h
</pre>
  <p id="mQVS">That&#x27;s it. The Queue workers are running correctly.</p>
  <h1 id="separate-queues">Separate queues</h1>
  <p id="Xqou">Our current deployment only runs workers for the default queue.</p>
  <p id="GXMv">If we&#x27;d like to add additional workers for more queues, we can simply add another deployment file called <code>deployment-{queue-name}.yml</code>, update the queue label with the new name, and update the <code>--queue</code> flag to the new queue name.</p>
  <p id="1IE1">Once we apply that, we&#x27;ll have a second group of queue workers to run our other queue.</p>
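  <p id="1IE2">For example, a hypothetical <code>deployment-mails.yml</code> for a queue named <code>mails</code> would differ from the default file only in these fields (the queue name is illustrative):</p>
  <pre id="1IE3"># queue-workers/deployment-mails.yml - only the changed fields shown
metadata:
  name: laravel-in-kubernetes-queue-worker-mails
  labels:
    queue: mails
spec:
  selector:
    matchLabels:
      queue: mails
  template:
    metadata:
      labels:
        queue: mails
    spec:
      containers:
        - name: queue-worker
          args:
            - artisan
            - queue:work
            - --queue=mails
            - --max-jobs=200</pre>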
  <h2 id="we-can-also-run-a-single-queue">We can also run a single queue</h2>
  <p id="neHp">If you have not built multiple queues into your application, you can also remove the <code>--queue</code> flag from the queue-worker deployment to have it run all queued jobs.</p>
  <h1 id="onto-the-next">Onto the next</h1>
  <p id="haCS">Next, we&#x27;ll look at running the cron job for our Laravel scheduler.</p>
  <p id="n9gn"></p>
  <h2 id="fg0A"><u>! PART NINE: Deploying the Laravel Scheduler</u></h2>
  <p id="vr3O"></p>
  <p id="zU1S">In this post, we&#x27;ll cover deploying the Laravel Scheduler in Kubernetes.</p>
  <p id="T58a">The Laravel Scheduler takes care of running tasks / jobs on a set schedule or at specific times.</p>
  <p id="prwE"></p>
  <h1 id="kubernetes-cronjob-or-cron-in-a-container">Kubernetes CronJob or Cron in a Container?</h1>
  <p id="Y76a">There are some differences we need to be aware of before jumping willy-nilly into a specific implementation.</p>
  <h2 id="kubernetes-cronjobs">Kubernetes Cronjobs</h2>
  <p id="UqA2">Kubernetes comes with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" target="_blank">built-in Cron mechanism</a> which can be used to run tasks or jobs on a schedule.</p>
  <p id="QUxx">Whilst it is a great mechanism to run jobs on a schedule, we have built most of our scheduling into our Laravel app, to make it more declarative and testable with our codebase.</p>
  <p id="IoJr">We could run our scheduler (which needs to be invoked every minute, just like with regular cron) using a Kubernetes CronJob, but there are a few things to be aware of.</p>
  <p id="jo7I">Kubernetes CronJobs have <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" target="_blank">some limitations</a> to be aware of. They will also create a new pod every minute to run the Laravel Scheduler, and kill it off once it completes.</p>
  <p id="heJl">This means a new pod is scheduled every minute, which can cause churn and the occasional duplicated run. It also means that if there are not enough resources to schedule the pod, our cron will stop running until we fix the issue.</p>
  <p id="htKN">Creating new pods every minute also adds scheduling overhead.</p>
  <h2 id="cron-in-a-container">Cron in a container</h2>
  <p id="O58s">Cron in a container is a bit more lightweight in terms of scheduling, but offers less visibility into how many runs have completed and whether they have started failing.</p>
  <p id="NUMm">Cron will not fail if one of its jobs fails, and it will just continue trudging silently on.</p>
  <p id="bDcS">This might be more performant than the Kubernetes CronJobs, but we might not spot unexpected failures in our Laravel Scheduler.</p>
  <p id="rgiB">For this reason, we are going to use Kubernetes CronJobs, but we will also cover running cron inside a container, for cases where the scheduling overhead is an issue.</p>
  <h1 id="laravel-scheduler-on-one-server-at-a-time">Laravel Scheduler on one server at a time</h1>
  <p id="Ts9v">Laravel has a <a href="https://laravel.com/docs/8.x/scheduling#running-tasks-on-one-server" target="_blank">built in feature for running a task on only one server at a time</a>.</p>
  <p id="Dfci">I strongly recommend using this feature if your jobs are not idempotent, meaning safely re-runnable with the same end result. If you are sending mails or notifications, you want to make sure you don&#x27;t send them twice if a cron run accidentally fires more than once.</p>
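  <p id="Dfc2">In the scheduler definition this is a single chained call. A minimal sketch, assuming a task named <code>emails:send</code> (illustrative) and noting that <code>onOneServer()</code> requires a cache driver that supports locks, such as the Redis instance we just deployed:</p>
  <pre id="Dfc3">// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // The cache lock ensures only one pod runs this task per minute,
    // even if several scheduler pods fire at the same time.
    $schedule-&gt;command(&#x27;emails:send&#x27;)
        -&gt;everyMinute()
        -&gt;onOneServer();
}</pre>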
  <h1 id="kubernetes-cronjob">Kubernetes CronJob</h1>
  <p id="uatH">We want to create a new Kubernetes CronJob object, in which we can specify how to run the scheduler.</p>
  <h2 id="cronjob-folder">Cronjob folder</h2>
  <p id="ZGUC">We&#x27;ll start by creating a new folder in our deployment repo called <code>cron</code></p>
  <pre id="6w6N">$ mkdir -p cron</pre>
  <h2 id="cronjob-resource">CronJob resource</h2>
  <p id="csFD">Within the new <code>cron</code> folder, we can create our new CronJob object, passing in the environment variables in the same way we did before.</p>
  <p id="t4ty">This will instruct Kubernetes to run a pod every minute with our command.</p>
  <p id="jwAj">Create a new file called <code>cronjob.yml</code> in the <code>cron</code> directory.</p>
  <pre id="6qAX">apiVersion: batch/v1
kind: CronJob
metadata:
  name: laravel-in-kubernetes-scheduler
spec:
  schedule: &quot;* * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: scheduler
            image: [your_registry_url]/cli:v0.0.1
            command:
              - php
            args:
              - artisan
              - schedule:run
            envFrom:
              - configMapRef:
                  name: laravel-in-kubernetes
              - secretRef:
                  name: laravel-in-kubernetes
          restartPolicy: OnFailure</pre>
  <p id="hNFF">We can apply that, and watch the pods in Kubernetes. After about a minute a pod should start running.</p>
  <pre id="wCjf">$ kubectl apply -f cron/cronjob.yml
cronjob.batch/laravel-in-kubernetes-scheduler created

$ kubectl get pods
NAME                                                          READY   STATUS      RESTARTS   AGE
[...]
laravel-in-kubernetes-scheduler-27173731-z2cmg                0/1     Completed   0          38s

$ kubectl logs laravel-in-kubernetes-scheduler-27173731-z2cmg
No scheduled commands are ready to run.</pre>
  <p id="bM0f">Our scheduler is now running correctly.</p>
  <p id="4joP">Kubernetes by default will keep the last 3 executions of our CronJob for us to inspect. We can use those to have a look at logs.</p>
  <p id="cwlE">After 5 minutes you should see 3 <code>Completed</code> pods for the scheduler, and you can run logs on each of them.</p>
  <pre id="dlCH">$ kubectl get pods
NAME                                                          READY   STATUS      RESTARTS   AGE
[...]
laravel-in-kubernetes-scheduler-27173732-pgr6t                0/1     Completed   0          2m46s
laravel-in-kubernetes-scheduler-27173733-qg7ld                0/1     Completed   0          106s
laravel-in-kubernetes-scheduler-27173734-m8mdp                0/1     Completed   0          46s
</pre>
  <p id="5Crf">That confirms the scheduler is firing every minute as expected.</p>
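  <p id="5Cr2">Both the retention and the overlap behaviour are configurable on the CronJob spec. The history limits below spell out the Kubernetes defaults explicitly, while <code>concurrencyPolicy: Forbid</code> is an opt-in that prevents overlapping runs:</p>
  <pre id="5Cr3"># cron/cronjob.yml - additional fields under spec:
spec:
  schedule: &quot;* * * * *&quot;
  # Skip a run if the previous one has not finished yet
  concurrencyPolicy: Forbid
  # How many finished pods to keep around for log inspection
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1</pre>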
  <h1 id="cron-in-a-container-1">Cron in a container</h1>
  <p id="krJD">The other way to run the scheduler is to run a dedicated container with cron inside it.</p>
  <p id="nmIV">We previously built our cron image together with our other images, and we can use that image in a container.</p>
  <p id="wt0f">In the same <code>cron</code> directory, create a <code>deployment.yml</code> file containing a Deployment that runs our cron image.</p>
  <pre id="bEXv">apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-in-kubernetes-cron
  labels:
    tier: backend
    layer: cron
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
      layer: cron
  template:
    metadata:
      labels:
        tier: backend
        layer: cron
    spec:
      containers:
        - name: cron
          image: [your_registry_url]/cron:v0.0.1
          envFrom:
            - configMapRef:
                name: laravel-in-kubernetes
            - secretRef:
                name: laravel-in-kubernetes
</pre>
  <p id="9Scc">We can apply that, and a cron container should pop up; if we check its logs, we should start seeing scheduler messages after a minute or so.</p>
  <pre id="gnFD">$ kubectl apply -f cron/deployment.yml 
deployment.apps/laravel-in-kubernetes-cron created

$ kubectl get pods
NAME                                                          READY   STATUS    RESTARTS   AGE
[...]
laravel-in-kubernetes-cron-844c45f6c9-4tdkv                   1/1     Running   0          80s

$ kubectl logs laravel-in-kubernetes-cron-844c45f6c9-4tdkv
No scheduled commands are ready to run.
No scheduled commands are ready to run.
No scheduled commands are ready to run.
No scheduled commands are ready to run.
No scheduled commands are ready to run.</pre>
  <p id="8oDL">The scheduler is now running successfully in a container, so either approach leaves us with a working Laravel scheduler in Kubernetes.</p>
  <h2 id="onto-next">Onto the next</h2>
  <p id="1qQD">Next, we&#x27;ll look at exposing our application through a Load Balancer, using the <a href="https://kubernetes.github.io/ingress-nginx/" target="_blank">Nginx Ingress</a>.</p>
  <p id="CmLP"></p>
  <h2 id="m3RU"><u>! PART TEN: Exposing the application</u></h2>
  <p id="7efX"></p>
  <p id="Yc5x">Our application is now successfully deployed in Kubernetes, but we need to expose it to the outside world.</p>
  <p id="mcVK">We can access it locally by running <code>kubectl port-forward svc/laravel-in-kubernetes-webserver 8080:80</code> and going to <a href="http://localhost:8080/" target="_blank">http://localhost:8080</a>.</p>
  <p id="mfS0">We need to expose our application to the outside world though so our users can access it.</p>
  <p id="wNYN"></p>
  <h1 id="kubernetes-load-balancer">Kubernetes Load Balancer</h1>
  <p id="tmYe">The primary building block for exposing our application in Kubernetes is the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" target="_blank">LoadBalancer Service type</a>.</p>
  <p id="CN5l">It adds a DigitalOcean load balancer pointing at all of our Kubernetes nodes, which in turn point at our services.</p>
  <p id="zWRm">We could simply change the Service type for our webserver service to <code>LoadBalancer</code> and get an external IP to call it on.</p>
  <p id="c0JU">This is not the recommended method of exposing applications, but we&#x27;ll cover it briefly, just so you know it exists and how to use it.</p>
  <p id="Lw2W">In our deployment repo, we can update the <code>webserver/service.yml</code> file to use the type LoadBalancer, apply it, and see its external IP appear after a few minutes.</p>
  <pre id="rATp"># webserver/service.yml
apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-webserver
spec:
  # We can add type LoadBalancer here
  type: LoadBalancer
  selector:
    tier: backend
    layer: webserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</pre>
  <p id="tyGj">Now we can apply that, and wait a few minutes for the LoadBalancer to be created.</p>
  <pre id="V1Fu">$ kubectl apply -f webserver/
service/laravel-in-kubernetes-webserver configured

$ kubectl get svc
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
[...]
laravel-in-kubernetes-webserver   LoadBalancer   10.245.76.55    &lt;pending&gt;     80:30844/TCP   12d

$ # After a few minutes (Took 10 on my end) we should see an external IP.
$ kubectl get svc
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
[...]
laravel-in-kubernetes-webserver   LoadBalancer   10.245.76.55    157.245.20.41   80:30844/TCP   12d
</pre>
  <p id="4UF1">In this case my IP is <code>157.245.20.41</code>. If I open it up in my browser, it shows the application.</p>
  <figure id="tB9G" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-2.png" width="1179" />
  </figure>
  <p id="gdq1">You can also see the load balancer created in the DigitalOcean UI.</p>
  <figure id="hyLw" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-3.png" width="1239" />
  </figure>
  <p id="cStQ">To learn more about configuring Load Balancers in this way, you can have a look at <a href="https://docs.digitalocean.com/products/kubernetes/how-to/configure-load-balancers/" target="_blank">this page for DigitalOcean</a>. It has many configurable settings.</p>
  <p id="EBB7">For the moment though, if you followed along and created the LoadBalancer Service, now would be a good time to delete it, as we are going to create one in the next section.</p>
  <p id="2Oz0">Update the <code>webserver/service.yml</code> file once more and remove the <code>type: LoadBalancer</code> line.</p>
  <pre id="BnqR"># webserver/service.yml
apiVersion: v1
kind: Service
metadata:
  name: laravel-in-kubernetes-webserver
spec:
  # Commented for clarity, but you can simply remove it entirely
  # type: LoadBalancer
  selector:
    tier: backend
    layer: webserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</pre>
  <p id="J0KB">Now we can apply that, and the Load Balancer should be deleted automatically in DigitalOcean.</p>
  <pre id="WlIf">$ kubectl apply -f webserver/
deployment.apps/laravel-in-kubernetes-webserver unchanged
service/laravel-in-kubernetes-webserver configured

$ kubectl get services
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
[...]
laravel-in-kubernetes-webserver   ClusterIP   10.245.76.55    &lt;none&gt;        80/TCP     12d</pre>
  <p id="10bc">You&#x27;ll notice that the external IP no longer exists. You can also check the DigitalOcean UI, and you&#x27;ll see the LoadBalancer no longer exists.</p>
  <figure id="waIH" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-4.png" width="1212" />
  </figure>
  <h1 id="installing-the-nginx-ingress-controller">Installing the Nginx Ingress Controller</h1>
  <p id="UbL8">Our preferred method for exposing applications is to deploy an Ingress controller to the Kubernetes cluster, and then expose the Ingress controller itself using a LoadBalancer.</p>
  <p id="K7GO">This allows us to create a single LoadBalancer for our cluster and all the applications in our cluster, whilst easily creating the correct routing rules, and pointing a DNS entry at our LoadBalancer.</p>
  <p id="Poh4">Overall, this lets us expose our applications easily and apply any custom configuration we need.</p>
  <h2 id="deploying-the-controller">Deploying the controller</h2>
  <p id="zMYA">First we need to deploy the controller. The documentation is available <a href="https://kubernetes.github.io/ingress-nginx/" target="_blank">here</a>.</p>
  <p id="1PhY">We are using the DigitalOcean Kubernetes Service, and will therefore use the DigitalOcean-specific provider.</p>
  <p id="TmKE">You can have a look at all the different providers at <a href="https://kubernetes.github.io/ingress-nginx/deploy/#provider-specific-steps" target="_blank">https://kubernetes.github.io/ingress-nginx/deploy/#provider-specific-steps</a>.</p>
  <p id="sjKo">We want to version-control the Ingress Controller, so we can see any changes if we ever update it. So instead of applying directly from the URL, we will create an <code>ingress-controller</code> directory and save the manifest there.</p>
  <pre id="ZstZ">$ mkdir ingress-controller
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/do/deploy.yaml -O ingress-controller/controller.yml
</pre>
  <p id="sniA">You can inspect this file to see all the parts which get deployed for the Ingress controller.</p>
  <p id="QY74">The defaults should suffice for our application, so we can apply that.</p>
  <pre id="PZfG">$ kubectl apply -f ingress-controller/
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
configmap/ingress-nginx-controller configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller configured
deployment.apps/ingress-nginx-controller configured
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
serviceaccount/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged

$ # After a few minutes (usually about 10), the ingress service will be available with an external IP
$ kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.245.228.253   104.248.101.239   80:30173/TCP,443:31300/TCP   6m21s
</pre>
  <p id="3wxL">The Nginx Ingress Controller is now deployed and ready to be used.</p>
  <h1 id="adding-an-ingress-for-the-application">Adding an Ingress for the application</h1>
  <p id="e12e">The next piece is to add an actual Ingress resource for our application, which configures how requests should be routed to it.</p>
  <p id="frpt">In the Deployment repo once again, we can add this.</p>
  <p id="NATs">In the <code>webserver</code> directory, create a new file called <code>ingress.yml</code> with the following contents.</p>
  <pre id="mlX4">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laravel-in-kubernetes-webserver
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: laravel-in-kubernetes-webserver
            port:
              number: 80
</pre>
  <p id="sAOo">This tells our Ingress Controller how to route requests to our application. In this case the base path on our Ingress will route to our webserver deployment.</p>
  <p id="sQHD">Apply that, and if you open the IP of your service in your browser, you should see your application running successfully through the Ingress.</p>
  <pre id="Mp8t">$ kubectl apply -f webserver/
deployment.apps/laravel-in-kubernetes-webserver unchanged
ingress.networking.k8s.io/laravel-in-kubernetes-webserver created
service/laravel-in-kubernetes-webserver unchanged

$ kubectl get services ingress-nginx-controller -o jsonpath=&#x27;{.status.loadBalancer.ingress[0].ip}&#x27; -n ingress-nginx
104.248.101.239</pre>
  <figure id="oQ1l" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-5.png" width="1348" />
  </figure>
  <p id="0WGl">The application is now exposed on the public domain and going through our Load Balancer.</p>
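  <p id="0WG2">Once we point a domain at the load balancer in a later step, the same Ingress can be scoped to that host by adding a <code>host</code> field to the rule (the domain below is illustrative):</p>
  <pre id="0WG3">spec:
  ingressClassName: nginx
  rules:
  - host: laravel.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: laravel-in-kubernetes-webserver
            port:
              number: 80</pre>
  <p id="0WG4">Without a <code>host</code>, the rule matches all hostnames hitting the controller, which is what we want for now.</p>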
  <h1 id="load-balancer-reports-nodes-as-down">Load Balancer reports nodes as down</h1>
  <p id="C0DU">In DigitalOcean, when you have a LoadBalancer in front of your nodes, it will automatically check the health of the NodePorts exposed by the worker nodes.</p>
  <p id="Wyw4">But if the pods backing the LoadBalancer Service, in this case the ingress controller, are running on only one node, only that node will report as healthy.</p>
  <p id="PNsi">This is not really a problem, and looks more pressing than it necessarily is.</p>
  <p id="i1ek">There are a few ways to fix this though if you think it&#x27;s necessary.</p>
  <h3 id="update-the-ingress-controller-to-a-daemonset">Update the Ingress Controller to a DaemonSet.</h3>
  <p id="4pcc">Updating the Ingress Controller Deployment to a DaemonSet will deploy a pod per node, and DigitalOcean will be able to detect each when doing the HealthChecks.</p>
  <h3 id="update-the-externaltrafficpolicy-for-the-ingress-deployment-to-cluster">Update the externalTrafficPolicy for the Ingress Deployment to Cluster</h3>
  <p id="mWxa">You could set the externalTrafficPolicy on the Ingress Controller Service to <code>Cluster</code>, but this loses the source IP address of the originating client.</p>
  <p id="QJ7J">You can see <a href="https://www.digitalocean.com/community/questions/kubernetes-load-balancer-says-all-but-one-node-is-down#answer_62675" target="_blank">here</a> for more details.</p>
  <h2 id="onto-the-next">Onto the next</h2>
  <p id="win1">Next, we&#x27;re going to look at adding certificates for our API, so we can server the application using https.</p>
  <p id="iquD"></p>
  <h2 id="tKjZ"><u>! PART ELEVEN: Adding Let&#x27;s Encrypt certificates to the application</u></h2>
  <p id="LXDD"></p>
  <p id="hFvJ">The next important piece is to add certificates to our application, so our users can use it securely across the internet.</p>
  <p id="7yOK">We are going to use <a href="https://cert-manager.io/docs/installation/" target="_blank">Cert Manager</a> to achieve this, as it will automatically provision new certificates for us, as well as renew them on a regular basis.</p>
  <p id="lR3u">We will use the Let&#x27;s Encrypt tooling to issue certificates.</p>
  <p id="d1Bj">But first, we need a DNS name for our service.</p>
  <p id="dAgz">For this piece you&#x27;ll need a domain name. I&#x27;ll be using <a href="https://larakube.chris-vermeulen.com" target="_blank">https://larakube.chris-vermeulen.com</a> for this demo.</p>
  <h1 id="setting-up-a-domain-name">Setting up a domain name</h1>
  <p id="lVTy">Setting up a domain name for our Kubernetes cluster is fairly simple.</p>
  <p id="3le0">We need to point either a domain or a subdomain at the LoadBalancer created by the Nginx Ingress Controller.</p>
  <p id="q6Eo">In my case, I am simply pointing laravel-in-kubernetes.chris-vermeulen.com at my load balancer.</p>
  <figure id="F5SZ" class="m_original">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-6.png" width="1244" />
  </figure>
  <p id="ximD">If you are doing this outside of DigitalOcean, you can also create an A record pointing at the IP of your LoadBalancer.</p>
  <pre id="qeJM">$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.245.228.253   104.248.101.239   80:30173/TCP,443:31300/TCP   8d</pre>
  <p id="9bx0">Once you have the IP for the LoadBalancer, you can point the A record directly at it.</p>
  <figure id="9LrJ" class="m_original">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-7.png" width="1220" />
  </figure>
  <p id="rk8V">For more stability, you can also assign a Floating IP (a.k.a. a Static IP) to the LoadBalancer, and use that instead.</p>
  <p id="rTIP">That way, if you ever need to recreate the LoadBalancer, you can keep the same IP.</p>
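  <p id="dns1">Before moving on, it&#x27;s worth confirming the record has propagated. A quick check with <code>dig</code> (substitute your own hostname):</p>
  <pre id="dns2">$ dig +short laravel-in-kubernetes.chris-vermeulen.com
104.248.101.239</pre>
  <p id="dns3">The IP returned should match the EXTERNAL-IP of the ingress-nginx-controller Service shown above.</p>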
  <h1 id="https-error">HTTPS error</h1>
  <p id="vkXj">If you now load the DNS name in your browser, you&#x27;ll notice it immediately throws an insecure warning (I am using Chrome).</p>
  <figure id="fbyK" class="m_original">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-8.png" width="774" />
  </figure>
  <p id="6bPU">This is due to a redirect from HTTP to HTTPS.</p>
  <p id="CFYv">But this is exactly what this post is about. We now need to add SSL certificates to our website to serve it securely.</p>
  <p id="OnQt">We&#x27;ll issue the certs from Let&#x27;s Encrypt as they are secure and free, and easy to manage.</p>
  <h1 id="installing-the-cert-manager">Installing the Cert Manager</h1>
  <p id="Ym4T">The first thing we need to do is install <a href="https://cert-manager.io/docs/" target="_blank">Cert Manager</a>.</p>
  <p id="dBSk">We&#x27;ll do this by using the bundle once again.</p>
  <p id="0Gve">At the time of writing the current version was <a href="https://github.com/jetstack/cert-manager/releases/tag/v1.5.3" target="_blank">v1.5.3</a>. You can see the latest release <a href="https://github.com/jetstack/cert-manager/releases/latest" target="_blank">here</a>.</p>
  <p id="mg9J">We&#x27;ll download the latest bundle and install it in the same way we did the Ingress Controller</p>
  <p id="A6LT">First we need to create a new directory in our deployment repo called <code>cert-manager</code> and download the cert manager bundle there.</p>
  <pre id="5BtJ">$ mkdir -p cert-manager
$ wget https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml -O cert-manager/manager.yml</pre>
  <p id="Wlc4">We now have the local files, and we can install the cert manager in our cluster.</p>
  <pre id="iXhh">$ kubectl apply -f cert-manager/
[...]</pre>
  <p id="S46E">You&#x27;ll now see the cert manager pods running, and we are ready to start issuing certs for our API.</p>
  <pre id="YpU0">$ kubectl get pods -n cert-manager
NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-848f547974-v2pf8             1/1     Running   0          30s
cert-manager-cainjector-54f4cc6b5-95k9v   1/1     Running   0          30s
cert-manager-webhook-7c9588c76-6kxs5      1/1     Running   0          30s</pre>
  <p id="GTi3">You can also use the <a href="https://cert-manager.io/docs/installation/verify/#manual-verification" target="_blank">instructions on the Cert Manager page</a> to verify the installation.</p>
  <h2 id="creating-the-issuer">Creating the issuer</h2>
  <p id="tv6j">Next, we need to create an issuer for our certificates.</p>
  <p id="Xieo">This lets Let&#x27;s Encrypt (or any <a href="https://cert-manager.io/docs/configuration/acme/" target="_blank">ACME</a> issuer) contact you about certificate renewals (the renewals themselves happen automatically with Cert Manager) and other admin pieces.</p>
  <p id="qf8P">I&#x27;ve used Let&#x27;s Encrypt for years now, and never been spammed, ever.</p>
  <p id="ttKv">We also need to create two issuers: one for Let&#x27;s Encrypt staging, so we can test whether our configuration is valid, and a production one to issue the actual certificate.</p>
  <p id="lUFD">This is important so you don&#x27;t run into <a href="https://letsencrypt.org/docs/rate-limits/" target="_blank">Let&#x27;s Encrypt rate limits</a> if you accidentally make a configuration mistake.</p>
  <p id="4eP1">In the <code>cert-manager</code> directory of the deployment repo, create a new file called <code>cluster-issuer.yml</code>, where we can configure our ClusterIssuers.</p>
  <blockquote id="rbxK">We are using ClusterIssuers to keep our setup simple, but you can also use normal Issuers if you want namespaced issuers.</blockquote>
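  <p id="nsi1">For completeness, a namespaced Issuer is the same resource with a different <code>kind</code> and scope. A sketch, reusing the same acme configuration:</p>
  <pre id="nsi2"># Namespaced alternative (sketch)
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: laravel-in-kubernetes-staging
  namespace: default # only issues certificates in this namespace
spec:
  acme: [...] # same acme block as in the ClusterIssuer</pre>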
  <pre id="MgSP">apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: laravel-in-kubernetes-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let&#x27;s Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: chris@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account&#x27;s private key.
      name: laravel-in-kubernetes-staging-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: laravel-in-kubernetes-production
spec:
  acme:
    # You must replace this email address with your own.
    # Let&#x27;s Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: chris@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account&#x27;s private key.
      name: laravel-in-kubernetes-production-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx</pre>
  <p id="BJVk">We can now create our issuers.</p>
  <pre id="cAIt">$ kubectl apply -f cert-manager/cluster-issuer.yml
clusterissuer.cert-manager.io/laravel-in-kubernetes-staging created
clusterissuer.cert-manager.io/laravel-in-kubernetes-production created</pre>
  <p id="s1Pu">Next, we want to check they were created successfully.</p>
  <p id="Sp8g">Let&#x27;s start with the staging one.</p>
  <pre id="dXJg">$ kubectl describe clusterissuer laravel-in-kubernetes-staging
[...]
Status:
  Acme:
    Last Registered Email:  chris@example.com
    Uri:                    XXX
  Conditions:
    Last Transition Time:  2021-09-22T21:02:27Z
    Message:               The ACME account was registered with the ACME server
    Observed Generation:   2
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:                    &lt;none&gt;
</pre>
  <p id="Y93y">We can see <code>Status: True</code> and <code>Type: Ready</code>, which shows us that the ClusterIssuer is correct and working as we need it to.</p>
  <p id="nVdA">Next, we can check the production ClusterIssuer.</p>
  <pre id="H4R4">$ kubectl describe clusterissuer laravel-in-kubernetes-production
[...]
Status:
  Acme:
    Last Registered Email:  chris@example.com
    Uri:                    XXX
  Conditions:
    Last Transition Time:  2021-09-22T21:06:20Z
    Message:               The ACME account was registered with the ACME server
    Observed Generation:   1
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:                    &lt;none&gt;</pre>
  <p id="zsCg">We can see that it too was created successfully.</p>
  <p id="XyXD">Now, we can add a certificate to our ingress.</p>
  <h2 id="fixing-a-small-issue-with-kubernetes">Fixing a small issue with Kubernetes</h2>
  <p id="s5G7">There is an <a href="https://docs.digitalocean.com/products/kubernetes/how-to/configure-load-balancers/#accessing-by-hostname-annotation" target="_blank">existing bug</a> in Kubernetes, propagated through to DigitalOcean, which we need to fix in our cluster first.</p>
  <blockquote id="4GyL">Quick description of the problem.<br />When we add the certificate, cert-manager will deploy an endpoint which confirms we own the domain, and then do some validation with Let&#x27;s Encrypt to issue the certificate.<br />The problem is that we cannot reach the LoadBalancer hostname from inside the cluster, which is where cert-manager is trying to confirm the endpoint.<br />This means it cannot validate that the domain is ours.<br /><br />The solution is to not use the IP as our LoadBalancer endpoint in the Service, but rather the actual hostname.</blockquote>
  <p id="CpFj">We need to update the Ingress Controller&#x27;s Service with an extra annotation setting its external hostname to whatever domain we have assigned to it.</p>
  <p id="AIoD">In the <code>ingress-controller/controller.yml</code> file, search for <code>LoadBalancer</code> to find the Service, and add an extra annotation for the hostname.</p>
  <pre id="llJD">[...]
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: &#x27;true&#x27;
    # We need to add this annotation for the load balancer hostname to fix the bug
    # Replace it with your domain or subdomain
    service.beta.kubernetes.io/do-loadbalancer-hostname: &quot;laravel-in-kubernetes.chris-vermeulen.com&quot;
  labels: [...]
  name: ingress-nginx-controller
  namespace: ingress-nginx
[...]</pre>
  <p id="2IKp">Now we can apply that, and check that it&#x27;s working correctly.</p>
  <pre id="OLO5">$ kubectl apply -f ingress-controller/controller.yml
[...]

$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                                 PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.245.228.253   laravel-in-kubernetes.chris-vermeulen.com   80:30173/TCP,443:31300/TCP   9d</pre>
  <p id="mwGa">You&#x27;ll see that the external ip is now the hostname pointing at our LoadBalancer.</p>
  <p id="82k9">The certificate issuing will now work as we expect it to.</p>
  <h1 id="add-certificates-to-ingress">Add certificates to Ingress</h1>
  <h2 id="issuing-staging-certificate">Issuing staging certificate</h2>
  <p id="J2ZC">Next, let&#x27;s update the ingress, using the <strong>staging</strong> ClusterIssuer to make sure the certificate is going to be issued correctly.</p>
  <p id="LGr8">We need to add 3 things to the <code>webserver/ingress.yml</code>.</p>
  <p id="Rg7n">We need to add an annotation with the cluster-issuer name, a tls section configuration, and a host to the Ingress rules.</p>
  <p id="JbVO">Remember to change the URLs to your domain or subdomain.</p>
  <pre id="86ut">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laravel-in-kubernetes-webserver
  annotations:
    # We need to add the cluster issuer annotation
    cert-manager.io/cluster-issuer: &quot;laravel-in-kubernetes-staging&quot;
spec:
  # We need to add a tls section
  tls:
  - hosts:
    - laravel-in-kubernetes.chris-vermeulen.com
    secretName: laravel-in-kubernetes-tls
  ingressClassName: nginx
  rules:
  # We also need to add a host for our ingress path
  - host: laravel-in-kubernetes.chris-vermeulen.com
    http: [...]
</pre>
  <p id="QbpI">We can now apply the Ingress, and then have a look at the certificate generated to make sure it&#x27;s ready.</p>
  <pre id="Pxaj">$ kubectl apply -f webserver/ingress.yml 
ingress.networking.k8s.io/laravel-in-kubernetes-webserver configured

# Now we can check the certificate to make sure it&#x27;s ready
$ kubectl get certificate
NAME                                READY   SECRET                              AGE
laravel-in-kubernetes-tls           True    laravel-in-kubernetes-tls           37s
</pre>
  <p id="p94m">If your certificate is not showing up correctly, or not marked as ready after a minute or so, you can consult the <a href="https://cert-manager.io/docs/faq/acme/" target="_blank">troubleshooting guide for ACME cert-manager</a>.</p>
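  <p id="tbl1">As a starting point, cert-manager models each step of the issuance as its own resource, so you can walk the chain to see where it is stuck. These are standard cert-manager resource types; the certificate name below assumes the <code>secretName</code> from our Ingress:</p>
  <pre id="tbl2"># Walk the issuance chain from the Certificate down to the ACME challenge
$ kubectl describe certificate laravel-in-kubernetes-tls
$ kubectl get certificaterequests
$ kubectl get orders,challenges
# Describe a stuck challenge to see why it is failing
$ kubectl describe challenge &lt;challenge-name&gt;</pre>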
  <h2 id="issuing-the-production-certificate">Issuing the production certificate</h2>
  <p id="w7Iu">If everything is working correctly, you will need to delete and recreate the Ingress: changing the annotation alone will not be enough to reissue a production certificate, as we also need to recreate the certificate secret.</p>
  <p id="MLh4">So as a first step, let&#x27;s delete the Ingress.</p>
  <pre id="HeHU">$ kubectl delete -f webserver/ingress.yml
ingress.networking.k8s.io &quot;laravel-in-kubernetes-webserver&quot; deleted</pre>
  <p id="X7dZ">Next, let&#x27;s update the Ingress annotation to the production issuer.</p>
  <p id="bxFE">In <code>webserver/ingress.yml</code>, update the annotation for issuer</p>
  <pre id="Ylsi">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laravel-in-kubernetes-webserver
  annotations:
    # Update to production
    cert-manager.io/cluster-issuer: &quot;laravel-in-kubernetes-production&quot;
spec:
  [...]</pre>
  <p id="x4hj">Next we can recreate the Ingress, a certificate will be issued against the production Let&#x27;s Encrypt server, and we should then have HTTPS in the browser when we open the URL.</p>
  <pre id="lDX9">$ kubectl apply -f webserver/ingress.yml
ingress.networking.k8s.io/laravel-in-kubernetes-webserver created

$ kubectl get certificate
NAME                                READY   SECRET                              AGE
laravel-in-kubernetes-tls           True    laravel-in-kubernetes-tls           11s
</pre>
  <p id="tdv0">We now have a production certificate issued by cert-manager through Let&#x27;s Encrypt, and you should see the lock icon in your browser without any issues.</p>
  <figure id="402p" class="m_retina">
    <img src="https://chris-vermeulen.com/content/images/2021/09/image-9.png" width="860.5" />
  </figure>
  <p id="ubrj">We now have certificates set up and working, and our site is secure for people to connect to.</p>
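  <p id="ssl1">If you want to confirm the certificate from the command line as well, <code>openssl</code> can print the issuer and validity dates (substitute your own hostname):</p>
  <pre id="ssl2">$ echo | openssl s_client -connect laravel-in-kubernetes.chris-vermeulen.com:443 2&gt;/dev/null | openssl x509 -noout -issuer -dates</pre>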
  <hr />
  <p id="mKEU">Next, we are going to move on to distributed logging, so we can collect all the logs from our applications in one easily searchable place.</p>
  <p id="ok2S">Source: <a href="https://chris-vermeulen.com/tag/laravel-in-kubernetes/" target="_blank">https://chris-vermeulen.com/tag/laravel-in-kubernetes/</a></p>

]]></content:encoded></item></channel></rss>