Deploying a Node.js/Angular 5 Application to Kubernetes With Docker
1. Introduction
In this article, we containerize (Docker) and deploy a Node.js/Angular 5 application to Kubernetes (Kubernetes Engine on Google Cloud Platform).
The sample project is hosted on Google Cloud Platform Kubernetes Engine.
2. Code Usage
Using the code is pretty much the same as described in the previous article. We will set up an env_vars/application.properties file for our new project.
Dockerfile
Let us begin by understanding the Dockerfile we used to containerize our app. Here is a brief description of what this Dockerfile is doing:
- We build the image on top of the centos/nodejs-8-centos7 image.
- We take in some of the overridable arguments and set environment parameters from them.
- We copy the source from the build context into the $APP_BUILD_DIR directory.
- In $APP_BUILD_DIR, we build the code with npm install and ng build, then move the built distribution to the $APP_BASE_DIR directory.
- We also copy over the configurations for Apache HTTPD to the respective configuration directories /etc/httpd/conf, /etc/httpd/conf.d/, and set the permissions.
- Since we host this application using Apache HTTPD, we set the permissions and run the application as the apache user.
- The entrypoint is an overridable, simple pass-through file that calls the default command.
- The default command basically runs HTTPD with the httpd.conf configuration files we copied earlier to the /etc/httpd/conf directory.
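The actual entrypoint.sh lives under files/ in the repository; based on the description above, a minimal pass-through entrypoint could look like this sketch (the exact contents are an assumption):

```shell
#!/bin/sh
# Pass-through entrypoint: exec whatever command was passed in
# (the Dockerfile's CMD by default), so it replaces the shell
# and runs as PID 1 inside the container.
exec "$@"
```

Because it just `exec`s its arguments, overriding the container command (e.g. `docker run image some-other-cmd`) works without changing the entrypoint.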
FROM centos/nodejs-8-centos7

ARG APP_NAME=pyfln-ui
ARG APP_BASE_DIR=/var/www/html
ARG APP_BUILD_DIR=/opt/app-root/src/
ARG API_ENDPOINT=http://127.0.0.1:8000
ARG APACHE_LOG_DIR=/var/log/httpd

ENV APP_BUILD_DIR $APP_BUILD_DIR
ENV APP_BASE_DIR $APP_BASE_DIR
ENV APP_NAME ${APP_NAME}
ENV API_ENDPOINT ${API_ENDPOINT}
ENV APACHE_LOG_DIR ${APACHE_LOG_DIR}
ENV LD_LIBRARY_PATH /opt/rh/rh-nodejs8/root/usr/lib64
ENV PATH /opt/rh/rh-nodejs8/root/usr/bin:/opt/app-root/src/node_modules/.bin/:/opt/app-root/src/.npm-global/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV NPM_CONFIG_PREFIX /opt/app-root/src/.npm-global

EXPOSE 8080

USER root

COPY files ${APP_BUILD_DIR}/files

#RUN cp ${APP_BUILD_DIR}/files/pyfln.rep /etc/yum.repos.d/ \
#    && update-ca-trust force-enable

RUN yum install -y httpd httpd-tools

RUN cp ${APP_BUILD_DIR}/files/npm/npmrc ~/.npmrc \
    && cp ${APP_BUILD_DIR}/files/httpd/httpd.conf /etc/httpd/conf/ \
    && cp ${APP_BUILD_DIR}/files/httpd/default-site.conf /etc/httpd/conf.d/default-site.conf \
    && chown apache:apache /etc/httpd/conf/httpd.conf \
    && chmod 755 /etc/httpd/conf/httpd.conf \
    && chown -R apache:apache /etc/httpd/conf.d \
    && chmod -R 755 /etc/httpd/conf.d \
    && touch /etc/httpd/logs/error_log /etc/httpd/logs/access_log \
    && chmod -R 766 /etc/httpd/logs \
    && chown -R apache:apache /etc/httpd/logs \
    && touch ${APACHE_LOG_DIR}/error.log ${APACHE_LOG_DIR}/access_log \
    && chown -R apache:apache ${APACHE_LOG_DIR} \
    && chmod -R g+rwX ${APACHE_LOG_DIR} \
    && chown -R apache:apache /var/run/httpd \
    && chmod -R g+rwX ${APACHE_LOG_DIR}

COPY . ${APP_BUILD_DIR}

RUN npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ install -g [email protected] --loglevel=verbose \
    && npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ install -g @angular/[email protected] --loglevel=verbose

RUN cd ${APP_BUILD_DIR} \
    && npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ install --no-optional --loglevel=verbose \
    && npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ run ng build --prod --env=prod --aot --verbose --show-circular-dependencies \
    && mkdir -p ${APP_BASE_DIR} \
    && cp -r ${APP_BUILD_DIR}/dist/. ${APP_BASE_DIR}/ \
    && cp ${APP_BUILD_DIR}/files/entrypoint.sh ${APP_BASE_DIR}/ \
    && chmod -R 0755 $APP_BASE_DIR/ \
    && chown -R apache:apache $APP_BASE_DIR/

WORKDIR $APP_BASE_DIR

USER apache

ENTRYPOINT ["./entrypoint.sh"]
CMD ["/usr/sbin/httpd","-f","/etc/httpd/conf/httpd.conf","-D","FOREGROUND"]
Kubernetes Deployment, Service, and Ingress
Our Deployment, Service, and Ingress files are pretty much the same as in the starter article. We only updated the parameters we pass to these templates. Let us take a look at the code for these files:
1. Deployment
In our deployment, we create a Deployment named __APP_NAME__-dc. The variable __APP_NAME__ is replaced with our parameter kubejencdp-py by our template processing script. We deploy one replica with the container image kubejencdp-py (the __IMAGE__ variable is replaced with the image name by the template processing script). We also pass the __TIMESTAMP__ variable, which is updated by our pipeline with the timestamp of the deployment. This ensures that we pull the latest image even if we apply the same deployment. You can find more information about this trick in this discussion on GitHub. We expose port 8080, as exposed by the container.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: __APP_NAME__-dc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __APP_NAME__
        updateTimestamp: "__TIMESTAMP__"
    spec:
      containers:
      - name: __APP_NAME__-ctr
        image: >-
          __IMAGE__
        ports:
        - name: http-port
          containerPort: 8080
        env:
        - name: API_ENDPOINT
          value: "http://__APP_NAME__-api:8080/"
        - name: DEPLOY_TIMESTAMP
          value: "__TIMESTAMP__"
        imagePullPolicy: Always
2. Service
Our service is pretty straightforward. It exposes port 8080, which is the port of our pod deployed by the deployment above.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: __APP_NAME__
  name: __APP_NAME__-svc
spec:
  ports:
  - name: http-port
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: __APP_NAME__
  sessionAffinity: None
  type: NodePort
3. Ingress
Our ingress exposes the http-port of our service outside the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: __APP_NAME__
  name: __APP_NAME__-ingress
spec:
  backend:
    serviceName: __APP_NAME__-svc
    servicePort: http-port
Jenkins Pipeline
1. Initialization
In our initialization stage, we basically take most of the parameters from the env_vars/application.properties file as described above. The timestamp is taken from the wrapper script below:
def getTimeStamp(){
    return sh (script: "date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]\$//g'", returnStdout: true);
}
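On a GNU/Linux shell, the command wrapped by this function behaves as follows: date prints 14 digits of date/time plus 9 digits of nanoseconds (%N), and the sed strips the trailing 6 digits, leaving millisecond precision:

```shell
# 14 digits (YYYYmmddHHMMSS) + 9 digits of nanoseconds = 23 digits;
# sed removes the last 6, leaving a 17-digit millisecond timestamp.
ts=$(date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]$//g')
echo "$ts"
```

Note that %N is a GNU date extension; on BSD/macOS date it is not expanded, so this relies on the Jenkins agent being Linux.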
And the following function reads the values from the env_vars/application.properties file:
def getEnvVar(String paramName){
    return sh (script: "grep '${paramName}' env_vars/application.properties|cut -d'=' -f2", returnStdout: true).trim();
}
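Assuming env_vars/application.properties holds simple KEY=value pairs (the values below are hypothetical), the grep/cut pipeline inside the helper extracts a single value like so:

```shell
# Hypothetical properties file for illustration
cat > application.properties <<'EOF'
APP_NAME=pyfln-ui
GCLOUD_K8S_CLUSTER_NAME=my-cluster
EOF

# Same extraction the Groovy helper performs for getEnvVar('APP_NAME')
grep 'APP_NAME' application.properties | cut -d'=' -f2
```

One caveat of this approach: grep matches substrings, so a key that is a substring of another key (e.g. NAME vs APP_NAME) would return multiple lines.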
Here's our initialization stage:
stage('Init'){
    steps{
        //checkout scm;
        script{
            env.BASE_DIR = pwd()
            env.CURRENT_BRANCH = env.BRANCH_NAME
            env.IMAGE_TAG = getImageTag(env.CURRENT_BRANCH)
            env.TIMESTAMP = getTimeStamp();
            env.APP_NAME = getEnvVar('APP_NAME')
            env.IMAGE_NAME = getEnvVar('IMAGE_NAME')
            // ...
            env.GCLOUD_K8S_CLUSTER_NAME = getEnvVar('GCLOUD_K8S_CLUSTER_NAME')
            env.JENKINS_GCLOUD_CRED_LOCATION = getEnvVar('JENKINS_GCLOUD_CRED_LOCATION')
        }
    }
}
2. Cleanup
Our cleanup script simply clears out any dangling or stale images.
stage('Cleanup'){
    steps{
        sh '''
        docker rmi $(docker images -f 'dangling=true' -q) || true
        docker rmi $(docker images | sed 1,2d | awk '{print $3}') || true
        '''
    }
}
3. Build
Here we build our Docker image. Please notice that since we will be pushing our image to Docker Hub, the tag we are using contains DOCKER_REGISTRY_URL, which is registry.hub.docker.com, and my DOCKER_PROJECT_NAMESPACE is amitthk. You may want to update these values according to your Docker registry.
stage('Build'){
    steps{
        withEnv(["APP_NAME=${APP_NAME}", "PROJECT_NAME=${PROJECT_NAME}"]){
            sh '''
            docker build -t ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG} --build-arg APP_NAME=${IMAGE_NAME} -f app/Dockerfile app/.
            '''
        }
    }
}
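With hypothetical values filled in (amitthk is the namespace from the article; IMAGE_NAME and RELEASE_TAG below are placeholders), the image tag resolves like this:

```shell
# Hypothetical values for illustration
DOCKER_REGISTRY_URL=registry.hub.docker.com
DOCKER_PROJECT_NAMESPACE=amitthk
IMAGE_NAME=pyfln-ui
RELEASE_TAG=latest

echo "${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}"
# → registry.hub.docker.com/amitthk/pyfln-ui:latest
```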
4. Publish
In order to publish our image to the Docker registry, we make use of the Jenkins credentials defined with the variable JENKINS_DOCKER_CREDENTIALS_ID. To understand how this is set up, please refer to the first article.
stage('Publish'){
    steps{
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
                          credentialsId: "${JENKINS_DOCKER_CREDENTIALS_ID}",
                          usernameVariable: 'DOCKER_USERNAME',
                          passwordVariable: 'DOCKER_PASSWD']]) {
            sh '''
            echo $DOCKER_PASSWD | docker login --username ${DOCKER_USERNAME} --password-stdin ${DOCKER_REGISTRY_URL}
            docker push ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}
            docker logout
            '''
        }
    }
}
5. Deploy
In our Deploy stage, we make use of the Jenkins secret file credential set up in the JENKINS_GCLOUD_CRED_ID variable. Again, to check how this variable is set up, please refer to the first article.
For deployment, we process our deployment, service, and ingress files mentioned above using our simple script named process_files.sh. This script simply replaces some of the build/deployment variables like __APP_NAME__, __TIMESTAMP__, __IMAGE__, etc. that we want to update our deployment/service/ingress with:
if (($# < 5))
then
    echo "Usage : $0 <DOCKER_PROJECT_NAME> <APP_NAME> <IMAGE_TAG> <directory containing k8s files> <timestamp>"
    exit 1
fi

PROJECT_NAME=$1
APP_NAME=$2
IMAGE=$3
WORK_DIR=$4
TIMESTAMP=$5

main(){
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak1 's#__PROJECT_NAME__#'$PROJECT_NAME'#' {} \;
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak2 's#__APP_NAME__#'$APP_NAME'#' {} \;
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak3 's#__IMAGE__#'$IMAGE'#' {} \;
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak4 's#__TIMESTAMP__#'$TIMESTAMP'#' {} \;
}
main
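To see the substitution in action, here is a small standalone demo of the same sed pattern on a sample manifest (the file name and substituted values are hypothetical):

```shell
# Sample manifest containing the template placeholders
cat > sample-deployment.yml <<'EOF'
metadata:
  name: __APP_NAME__-dc
spec:
  image: __IMAGE__
EOF

# The same in-place substitution process_files.sh performs;
# '#' is used as the sed delimiter so the image path's '/' needs no escaping
sed -i.bak 's#__APP_NAME__#pyfln-ui#' sample-deployment.yml
sed -i.bak 's#__IMAGE__#amitthk/pyfln-ui:latest#' sample-deployment.yml
cat sample-deployment.yml
```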
And here is our Deploy stage. We activate our gcloud credential, process our templates using the process_files.sh script mentioned above, then use kubectl to apply the processed templates. We watch our rollout using the kubectl rollout status command:
stage('Deploy'){
    steps{
        withCredentials([file(credentialsId: "${JENKINS_GCLOUD_CRED_ID}", variable: 'JENKINSGCLOUDCREDENTIAL')]) {
            sh """
            gcloud auth activate-service-account --key-file=${JENKINSGCLOUDCREDENTIAL}
            gcloud config set compute/zone asia-southeast1-a
            gcloud config set compute/region asia-southeast1
            gcloud config set project ${GCLOUD_PROJECT_ID}
            gcloud container clusters get-credentials ${GCLOUD_K8S_CLUSTER_NAME}

            chmod +x $BASE_DIR/k8s/process_files.sh
            cd $BASE_DIR/k8s/
            ./process_files.sh "$GCLOUD_PROJECT_ID" "${IMAGE_NAME}" "${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}" "./${IMAGE_NAME}/" ${TIMESTAMP}
            cd $BASE_DIR/k8s/${IMAGE_NAME}/.

            kubectl apply --force=true --all=true --record=true -f $BASE_DIR/k8s/$IMAGE_NAME/
            kubectl rollout status --watch=true --v=8 -f $BASE_DIR/k8s/$IMAGE_NAME/$IMAGE_NAME-deployment.yml

            gcloud auth revoke --all
            """
        }
    }
}
Conclusion
We completed the containerization, build, and deployment of a simple Node.js/Angular 5 application to Kubernetes, using Kubernetes Engine on Google Cloud Platform.