Kubernetes and Java EE 8: Cloud Native Java EE apps on K8s
In the current era of cloud computing, it is extremely important to build cloud-native applications, and when we talk about cloud-native apps (CNA), containerizing your app and orchestrating those containers become extremely vital. So let’s get started.
First of all we will create a Java EE application and deploy it on the WildFly Java EE container. We will package our application as a thin WAR file and then use Docker (the most popular container runtime) to host our WildFly server, which in turn will host our WAR file. For the sake of simplicity we will keep the application as simple as it can be, with bare-minimum Java EE features (like stateless EJBs and CDI). This is what I mean when I say “containerizing an app”.
Let us assume that this simple app will be the back end of a blog site, and we will expose two kinds of REST endpoints from it. One will be a simple “PING” service, which will allow us to quickly verify whether our app is reachable. The second endpoint will list all the users available in our blog application. The focus of this post is to run our Java EE app in a Docker container and eventually run it inside a Kubernetes node, letting Kubernetes orchestrate our application. The complete source code can be found in my GitHub repository.
Kubernetes has the concept of Pods, which are the smallest atomic unit in K8s. A Pod is a K8s abstraction that represents a group of one or more application containers (such as Docker or rkt) and some shared resources for those containers (like storage, volumes, and networking). The important thing here is the group of one or more app containers: for example, our application may have one container running our web app and another container running a NoSQL database, and both of these containers can be clubbed into a single unit inside a Pod. If you are not familiar with Kubernetes concepts like Pods, Nodes, Clusters, Deployments, and Services, then I would suggest going through them on the official K8s website. K8s orchestrates our application containers, so first of all let us create our application container, and then we will see how to create K8s Deployments, expose them, scale them (horizontal scaling), and how K8s can help us with rolling updates.
Let us start with Containerization.
The first step is to create a Java EE Maven application, with the pom file updated properly for the Java and Java EE versions.
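For reference, a typical setup could look like the snippet below. This assumes Java 8 and the Java EE 8 API; check the pom.xml in the repository for the exact versions used there.

<!-- compile against Java 8 -->
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>

<!-- Java EE 8 API, provided by the WildFly container at runtime -->
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>8.0</version>
        <scope>provided</scope>
    </dependency>
</dependencies>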
The first thing to create in a Java EE REST app is the JAX-RS Application class. Do not forget to create it, because that class is the gateway for your JAX-RS resources.
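A minimal sketch of that class is shown below; the class name and the @ApplicationPath value "resources" are my own choices for illustration, not necessarily what the repository uses.

// JaxRsConfiguration.java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// The empty subclass is enough to activate JAX-RS;
// all resources will be served under /resources/*
@ApplicationPath("resources")
public class JaxRsConfiguration extends Application {
}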
As with any other Java EE app, you will have your entity objects (plain models in our case, because we are not using any real database for persistence), DAO repositories (here hardcoded, in-memory, application-scoped repositories) and a service layer. As usual, the service layer consists of EJBs containing the business logic:
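As a rough sketch of that layering (class and method names here are illustrative, not the actual ones from the repository), the repository can be an @ApplicationScoped bean holding a hardcoded list, and the service a @Stateless EJB that delegates to it:

// UserRepository.java - hardcoded, in-memory "persistence" layer
import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class UserRepository {

    private final List<String> users = new ArrayList<>();

    @PostConstruct
    void init() {
        users.add("alice");
        users.add("bob");
    }

    public List<String> findAll() {
        return new ArrayList<>(users);
    }
}

// UserService.java - stateless session bean with the business logic
import java.util.List;
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class UserService {

    @Inject
    private UserRepository repository;

    public List<String> getAllUsers() {
        return repository.findAll();
    }
}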
Now let’s have a quick look at our REST endpoints.
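As a rough illustration, the two resources could look like this; the paths and return types are my guesses, so check the GitHub repository for the real ones.

// PingResource.java - quick reachability check
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("ping")
public class PingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return "pong";
    }
}

// UserResource.java - lists all users of the blog application
import java.util.List;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("users")
public class UserResource {

    @Inject
    private UserService userService;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> getUsers() {
        return userService.getAllUsers();
    }
}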
After putting all the pieces of the Java EE app together, we run the mvn package command to build the WAR file. If we look at the maven-war-plugin configuration in our pom, we can see that the WAR file will be named cloudnative, and by default that becomes its root context path.
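The relevant part of the pom could look roughly like this (a sketch; the actual pom.xml is in the repository):

<build>
    <!-- produces target/cloudnative.war, so the context root becomes /cloudnative -->
    <finalName>cloudnative</finalName>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
            <version>3.2.2</version>
            <configuration>
                <!-- thin WAR without a web.xml -->
                <failOnMissingWebXml>false</failOnMissingWebXml>
            </configuration>
        </plugin>
    </plugins>
</build>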
Now our development is complete and it is time to deploy the WAR we just built to a WildFly server running in a Docker container. So let’s talk a little bit about that.
Every Java EE server out there has an auto-deploy directory: at start-up the server scans this directory and automatically deploys all the artifacts present in it. We will leverage this feature of WildFly and put our WAR file inside its auto-deploy directory. As a result, when we start our Docker container, our application will also be hosted on the designated port of our local system.
NOTE: I have already extended the WildFly Docker image to declare an environment variable DEPLOYMENT_DIR, so that the Dockerfiles in our applications stay short and simple. If you want to change anything there, you can always look at the source in my GitHub repository.
Now coming back to the topic of deployment: let us create a blank Dockerfile in the project root directory and add a few lines to it:
FROM narif/wildfly-admin:1
LABEL maintainer="najeeb.oo7.dd@gmail.com"
COPY target/cloudnative.war $DEPLOYMENT_DIR
After this we open a terminal in the project root directory and issue the docker build command to create a Docker image.
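Going by the image name and tag described below, the command would be something like:

/> docker build -t javaeeandk8:latest .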
This command tells Docker to build an image with the tag javaeeandk8:latest (javaeeandk8 is the image name, latest the version) from the current directory (the trailing . should not be missed). Docker then reads the Dockerfile in the current directory and builds the image in layers.
You can also use Visual Studio to manage your Docker operations. For instance, rather than issuing the build command, you can right-click your Dockerfile and click Build, and Visual Studio will create the image for you.
The first time you run the build command, Docker will pull any required base images from Docker Hub that are not present in your local repository and then complete the build. After the build is completed you should see output like this:
This tells us that our image is complete and now it is time to run a docker container with that image. To do this we will issue a docker run command:
/> docker run -it -p 8080:8080 javaeeandk8:latest
This command runs a Docker container with a random name (you can specify your own name by using the --name option with the run command). -it, as you may have guessed, starts your container in interactive mode. When the logs stop, you can see that your app is now deployed on the default server with the specified context root.
Let us also verify whether our ports are properly exposed. To do so we will look at all the Docker containers currently running via another Docker command: docker ps (to view the running containers) or docker ps -a (to view even the terminated ones).
/> docker ps
/> docker ps -a
Now, to actually make any REST calls to your running application, the first thing is to know your Docker machine’s IP address. There are two ways to find it:
The first is a very simple command, but it only gives you the IP address:
/> docker-machine ip default
The second option gives you tons of other information along with the IP address, like where your SSH keys are stored. It is worth inspecting your docker-machine:
/> docker-machine inspect
Now let’s use this IP address to make some REST calls:
Ping Service:
User Service:
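For example, assuming the docker-machine IP is 192.168.99.100 (a typical default; substitute your own) and the illustrative resource paths from the sketches above, the two calls would look something like:

/> curl http://192.168.99.100:8080/cloudnative/resources/ping
/> curl http://192.168.99.100:8080/cloudnative/resources/users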
So far everything worked perfectly fine.
The only thing left now is to push our working image to Docker Hub, and for that we will issue a bunch of commands:
/> docker login
(enter your username and then your password when prompted)
Once we are logged in, we have to tag our image with a version number:
/> docker tag javaeeandk8:latest narif/javaeeandk8:v1
/> docker push narif/javaeeandk8:v1
And we have our image pushed to Docker Hub, so we can access it from anywhere in the world.
Now we have a Dockerized Java EE app. We can take this image to any cloud provider (Oracle, AWS, etc.) and have it hosted there however we want. This is really powerful!
Let us begin orchestrating our application containers using K8s.
To run Kubernetes on our local desktops, we need Minikube. Minikube creates a single-node Kubernetes cluster on our local desktop using a hypervisor. On Windows Home edition we have to use VirtualBox, and you can follow the details on how to install Minikube here.
NOTE: My assumption is that your personal laptop also runs Windows 10 Home edition (same as mine), and therefore you will also have to install kubectl and Minikube explicitly.
To start our Minikube instance we issue the command:
/> minikube start
This will start Minikube, i.e. a single-node K8s cluster, on our local machine.
Once our Minikube instance is up and running, we have to verify that kubectl is working. To do that we issue:
/> kubectl version
Cool, it works!
Let us now check the nodes. As we are running K8s via Minikube, we will have only a single node:
/> kubectl get nodes
Our playground is now set. Next we need to create a Deployment, and we will do so using the create deployment command:
/> kubectl create deployment javaeeandk8-deployment --image=narif/javaeeandk8:v1
This will create a deployment which we can see using the kubectl get deployments command.
The effect of this Deployment is that it automatically creates a Pod for us and starts running our Java EE application in it. To see the further details of the Pods, we can issue another command, kubectl get pods (to check whether the Pod got created):
/> kubectl get pods
We can see the Pod is in Running status. We can also describe the Pods to dive deeper into the details, like which Docker containers they are running and where those images were pulled from.
/> kubectl describe pods
We can even inspect the logs generated inside the Pod that is running our Docker container; for that we use kubectl logs <pod_name>:
/> kubectl logs <your_pod_name>
For me it will be:
/> kubectl logs javaeeandk8-deployment-54987c64dc-x2wbf
You can even run the exec command to get inside a Pod and play with it (similar to the docker exec command; kubectl exec will also take you inside the running Docker container).
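For example, something like the following should open a shell inside the Pod (assuming the WildFly image ships with bash):

/> kubectl exec -it <your_pod_name> -- /bin/bash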
We have seen a bunch of things here and have already created a Deployment, which in turn created a Pod in which our app is running. But wait, can we access it now? Are we done? The answer is a blunt no. The created Pods and Deployment are accessible only from the K8s cluster’s internal virtual network, i.e. its internal IP addresses, and not from the outside world. To make the app accessible from Postman we still need to do a few more things, namely create a Service to expose our Deployment.
Services are Kubernetes abstractions that define a logical set of Pods and a policy by which to access them. Each Pod gets its own private IP, but those IPs are not accessible from outside the cluster. To access a running Pod from outside the cluster (from Postman in our case) we need to expose it using a Service. In short, Services allow our applications to receive traffic. They come in four types:
1) ClusterIP (default): Exposes the service on an internal IP in the cluster. This type makes the services only reachable from within the cluster.
2) NodePort (the only option supported by Minikube): Exposes the service on the same port of each selected Node in the cluster using NAT. Makes a service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
3) LoadBalancer (supported if you have an external load balancer service from a cloud provider): Creates an external load balancer in the current cloud and assigns a fixed, external IP to the service.
4) ExternalName: Maps the service to an external DNS name by returning a CNAME record for it; no proxying is set up.
Enough of the theory part, time to get our hands dirty.
Let us list the Services running in our cluster:
/> kubectl get services
There is only one service running. Time to create one for our deployment to expose it:
/> kubectl expose deploy/javaeeandk8-deployment --type="NodePort" --port 8080
Now we can describe the newly created Service:
/> kubectl describe svc/javaeeandk8-deployment
Now our application is exposed. Time to access it!
Wait, what is the IP with which I am going to access my services?
The IP will be your Minikube IP and the port will be the NodePort.
If you don’t know your Minikube IP, use this command to get it:
/> echo $(minikube ip)
Note that this IP is not the same as your docker-machine’s IP. They may look similar because both VMs run on the same hypervisor.
Time to bring in Postman.
Ping
Users
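As an illustration, if minikube ip returns 192.168.99.101 and the Service was assigned NodePort 31234 (both values are made up here; use your own Minikube IP and the NodePort shown by kubectl describe above), the requests would go to:

http://192.168.99.101:31234/cloudnative/resources/ping
http://192.168.99.101:31234/cloudnative/resources/users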
This is how we can deploy our Java EE applications on Kubernetes.
This is all good, but the real power of K8s lies in how it can scale our applications and do rolling updates without any downtime. Both of these features remove manual intervention and make an application highly available.
Scaling Application on the Fly
To scale our application, the first thing is to check how many ReplicaSets are available for it, and to do so we get our Deployment and ReplicaSets:
/> kubectl get deploy
/> kubectl get rs
From the above we can deduce a bunch of things: we have 1 ReplicaSet, and when we list the ReplicaSets we can also see the desired number of replicas, how many are currently available, and how many of them are ready.
Now it is time to scale our replicas to 4:
/> kubectl scale deploy/javaeeandk8-deployment --replicas=4
Now the Deployment looks like this:
/> kubectl get deploy
And these replicas have been scaled out to 4 different Pods. The Service automatically load balances requests across these 4 Pods, thus facilitating high availability.
/> kubectl get pods
Four Pods means four running instances of the app. This scaling can even be automated depending upon the load.
To scale down we can use the same command:
/> kubectl scale deploy/javaeeandk8-deployment --replicas=2
And then, when we describe our Deployment, we can see the app has been scaled down to 2 replicas.
NOTE: The preferred way of creating and managing K8s resources is via YAML files. But to give you a comfortable feel for how things actually operate, I have used the CLI in this post.
I will be uploading the deployment YAML files with the source code for the curious ones.
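Until then, here is a minimal sketch of what an equivalent YAML could look like, assuming the same image and NodePort Service used above (this is my illustration, not the exact files from the repository):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: javaeeandk8-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javaeeandk8
  template:
    metadata:
      labels:
        app: javaeeandk8
    spec:
      containers:
      - name: javaeeandk8
        image: narif/javaeeandk8:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javaeeandk8-service
spec:
  type: NodePort
  selector:
    app: javaeeandk8
  ports:
  - port: 8080
    targetPort: 8080

Such a file could then be applied with kubectl apply -f <file>.yaml.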
Thanks for your time.