Tutorial
Boost Java performance with OpenJ9 JITServer in Kubernetes
Learn how to deploy and configure OpenJ9 JITServer on Kubernetes to improve Java application startup time, reduce memory footprint, and deliver consistent performance for cloud-native and containerized workloads
The JITServer technology from the Eclipse OpenJ9 JVM allows you to offload JIT compilations to a server that runs on a local or a remote machine. This mechanism offers numerous advantages, such as faster JVM ramp-up, better application autoscaling, more consistent quality of service (QoS), lower peak memory consumption, and improved application density.
NOTE: The Eclipse OpenJ9 JVM is distributed with the IBM Semeru Runtimes Java distribution. In this context, the JITServer is referred to as the Semeru Cloud Compiler.
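To make the client/server split concrete before moving to Kubernetes, here is a minimal sketch of the two pieces running on a single machine. It assumes an IBM Semeru Runtimes (OpenJ9) installation on the PATH, and myapp.jar is a placeholder for any Java application:
# Start the compilation server (it listens on port 38400 by default,
# matching the port we expose later in Kubernetes)
$ jitserver &
# Run any Java application as a JITServer client and point it at the local server
$ java -XX:+UseJITServer -XX:JITServerAddress=localhost -jar myapp.jar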
Learning objectives
In this tutorial, we walk through a practical example of connecting a client JVM running a Spring Boot app (PetClinic) to a JITServer instance in a Kubernetes environment, namely MicroK8s from Canonical. We picked MicroK8s because it is a lightweight Kubernetes distribution that is easy to install and manage and comes with many auxiliary services like load balancing, service mesh, and observability.
Prerequisites
- Ubuntu -- Linux OS distribution
- MicroK8s -- Lightweight Kubernetes
- KVM -- Kernel Virtual Machine on Linux
- Podman -- Container engine
In these experiments, we installed MicroK8s 1.23 on a KVM virtual machine with eight vCPUs and 16 GB of RAM running on Ubuntu 22.04. How to install and configure the VM and MicroK8s is outside the scope of this tutorial. For MicroK8s, the following add-ons were enabled: dns, registry, storage, rbac, ingress, and prometheus. In Ubuntu, we also installed podman-docker to build the various container images. Note that in MicroK8s all kubectl commands need to be prefixed with microk8s (as in microk8s kubectl). To simplify development, we have created an alias:
$ alias kubectl='microk8s kubectl'
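Optionally, to keep this alias across shell sessions, you can append it to your shell startup file, for example:
$ echo "alias kubectl='microk8s kubectl'" >> ~/.bash_aliases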
Estimated time
Completing this tutorial should take about 30 minutes.
Steps
- Deploying JITServer
- Creating the PetClinic app container image
- Deploying the PetClinic application and connecting it to JITServer
Step 1. Deploying JITServer
In order to run JITServer in containers, we need to build a container image. This can be easily done because JITServer is nothing but a full-fledged OpenJ9 JVM that works in server mode, only performing JIT compilations. Thus, we can start with an OpenJ9 container image and change the CMD instruction to call jitserver instead of java. Since IBM Semeru Runtimes already provide production-ready binaries of OpenJ9 JVM and OpenJDK class libraries, we can simply use that as a base image. A possible Dockerfile is shown below:
FROM docker.io/ibm-semeru-runtimes:open-17.0.3_7-jre
CMD ["/opt/java/openjdk/bin/jitserver"]
However, an even easier alternative is to use an unmodified IBM Semeru Runtimes container image and specify a jitserver argument at runtime. For example:
$ podman run docker.io/ibm-semeru-runtimes:open-17.0.3_7-jre jitserver
Additional options for JITServer can be provided at runtime with the OPENJ9_JAVA_OPTIONS environment variable.
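For example, to make the server log client connections (an option we also use later in the Kubernetes deployment), we could run:
$ podman run -e OPENJ9_JAVA_OPTIONS="-XX:+JITServerLogConnections" \
    docker.io/ibm-semeru-runtimes:open-17.0.3_7-jre jitserver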
Deploying the JITServer container in Kubernetes can be done either through manifest (YML) files or through a Helm chart, and we’ll discuss both alternatives.
(Option A) Deploying JITServer with YML files
For deploying JITServer in MicroK8s, we will create a JITServer.yaml manifest file that defines both the JITServer deployment and JITServer service. The content of our file is shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jitserver
  labels:
    app: jitserver
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitserver
  replicas: 1
  template:
    metadata:
      labels:
        app: jitserver
    spec:
      containers:
      - name: jitserver
        image: docker.io/ibm-semeru-runtimes:open-17.0.3_7-jre
        imagePullPolicy: IfNotPresent
        # Instruct the OpenJ9 JVM to start in server mode
        args: ["jitserver"]
        ports:
        - containerPort: 38400
        resources:
          requests:
            memory: "1200Mi"
            cpu: "1000m"
          limits:
            memory: "1200Mi"
            cpu: "8000m"
        env:
        - name: OPENJ9_JAVA_OPTIONS
          value: "-XX:+JITServerLogConnections"
---
apiVersion: v1
kind: Service
metadata:
  # A client connects to this endpoint
  name: jitserver
spec:
  selector:
    app: jitserver
  ports:
  - protocol: TCP
    port: 38400
    targetPort: 38400
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 86400
The service name defined here (jitserver) should be used in the YML file for the client deployment to connect the client to the server. Note the sessionAffinity spec in the JITServer service definition. This ensures that a client JVM always attempts to connect to the same JITServer pod and is important for performance because JITServer instances cache some information about the connected client JVMs.
Deploy JITServer by applying the manifest file:
$ kubectl apply -f JITServer.yaml
Verify that the JITServer pod is up and running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
jitserver-65585794b4-l9fk8 1/1 Running 0 13m
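Optionally, we can also confirm that the jitserver service defined in the manifest exists and exposes port 38400; this is just a sanity check and the exact output will vary:
$ kubectl get service jitserver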
Additionally, we can inspect the output of the JITServer pod with the kubectl logs command:
$ kubectl logs pod/jitserver-65585794b4-l9fk8
#INFO: StartTime: May 13 12:18:59 2022
#INFO: TimeZone: UTC (UTC)
JITServer is ready to accept incoming requests
For now, delete the JITServer instance that we provisioned with the YML file:
$ kubectl delete -f JITServer.yaml
(Option B) Deploying JITServer with a Helm chart
An alternate way to deploy JITServer is to use the Helm chart in this GitHub repo. The steps involved are detailed in this blog post, but for completeness, we’ll summarize them here. First, we need to add the repo for the JITServer Helm chart:
$ microk8s helm3 repo add openj9 https://github.com/eclipse-openj9/openj9-utils/tree/master/helm-chart
Then we can deploy the chart:
$ microk8s helm3 install myjitserver openj9/openj9-jitserver-chart
This will instantiate a deployment and a service, both named myjitserver-openj9-jitserver-chart.
By default, the JITServer chart uses the docker.io/ibm-semeru-runtimes repository to load an IBM Semeru branded OpenJ9 container image based on Java 8 and the latest release of OpenJ9. If desired, this can be changed with the --set image.repository= and --set image.tag= options given to the helm3 install command.
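For instance, to deploy a Java 11 JITServer instead, using a tag from the compatibility table shown later in this step, an install command could look like this:
$ microk8s helm3 install myjitserver-j11 \
    --set image.repository="docker.io/ibm-semeru-runtimes" \
    --set image.tag="open-11.0.15_10-jre" \
    openj9/openj9-jitserver-chart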
It is important to note that the client JVM and the server must use the same Java version (only Java 8, 11, and 17 are supported at this time) and the same OpenJ9 release. In order to determine the Java version used by JITServer, we can execute java -version inside the JITServer container. First determine the name of the JITServer pod(s):
$ kubectl get pods | grep "myjitserver-openj9-jitserver-chart"
myjitserver-openj9-jitserver-chart-756bfd7565-5f4cv 1/1 Running 0 77m
Then run the exec command in one of the JITServer pods:
$ kubectl exec myjitserver-openj9-jitserver-chart-756bfd7565-5f4cv -- java -version
openjdk version "1.8.0_332"
IBM Semeru Runtime Open Edition (build 1.8.0_332-b09)
Eclipse OpenJ9 VM (build openj9-0.32.0, JRE 1.8.0 Linux amd64-64-Bit
Compressed References 20220422_370 (JIT enabled, AOT enabled)
OpenJ9 - 9a84ec34e
OMR - ab24b6666
JCL - 0b8b8af39a based on jdk8u332-b09)
This tells us that the JITServer uses Java 8 with OpenJ9 release 0.32.0. Let's say that we plan on connecting a Java application based on Java 17 and OpenJ9 release 0.32.0. In this case, we need to launch another JITServer with matching characteristics, which can be done by selecting the appropriate tag from Docker Hub. To simplify the selection process, a table of tags for the IBM Semeru containers can be found in the README file for the JITServer chart. For completeness, we've included the same table below, as it was at the time of this writing:
| OpenJ9 Release | Java 8 | Java 11 | Java 17 |
|---|---|---|---|
| 0.27.0 | open-8u302-b08-jre | open-11.0.12_7-jre | |
| 0.29.0 | open-8u312-b07-jre | open-11.0.13_8-jre | |
| 0.29.1 | | | open-17.0.1_12-jre |
| 0.30.0 | open-8u322-b06-jre | open-11.0.12_7-jre | open-17.0.2_8-jre |
| 0.30.1 | | open-11.0.14.1_1-jre | |
| 0.32.0 | open-8u332-b09-jre | open-11.0.15_10-jre | open-17.0.3_7-jre |
The table indicates that, for our purposes, we need to use a tag of open-17.0.3_7-jre, so the helm3 install command becomes:
$ microk8s helm3 install myjitserver-j17 --set image.tag="open-17.0.3_7-jre" openj9/openj9-jitserver-chart
Verify that the JITServer pod is up and running with:
$ kubectl get pods | grep myjitserver-j17-openj9-jitserver-chart
myjitserver-j17-openj9-jitserver-chart-8668b58cb6-djrjh   1/1   Running   0   109s
NOTE: We can pass additional options to the JVM running JITServer by setting the OPENJ9_JAVA_OPTIONS environment variable like so:
$ microk8s helm3 install myjitserver-j17 --set env[0].name="OPENJ9_JAVA_OPTIONS" --set env[0].value="-XX:+JITServerLogConnections" --set image.tag="open-17.0.3_7-jre" openj9/openj9-jitserver-chart
Step 2. Creating the PetClinic app container image
A bare-bones Dockerfile for our Spring Boot application could look like this:
FROM docker.io/ibm-semeru-runtimes:open-17.0.3_7-jre
WORKDIR /work
RUN chmod 777 /work
COPY --chown=1001:0 spring-petclinic-2.3.0.BUILD-SNAPSHOT.jar /work/application.jar
EXPOSE 8080
USER 1001:0
CMD ["java", "-jar", "application.jar"]
NOTE: spring-petclinic-2.3.0.BUILD-SNAPSHOT.jar is the jar file containing the PetClinic app. If you don’t want to build one yourself, you can download a version with:
$ wget https://raw.githubusercontent.com/mpirvu/dockerized-apps/main/Petclinic/PetclinicContext/spring-petclinic-2.3.0.BUILD-SNAPSHOT.jar
Save the Dockerfile content as Dockerfile_petclinic and create the PetClinic container image with:
$ podman build -f Dockerfile_petclinic -t localhost:32000/petclinic:j17 .
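Optionally, before pushing, we can sanity-check the image by running it locally with podman; this simply starts PetClinic in the foreground on port 8080 (press Ctrl+C to stop):
$ podman run --rm -p 8080:8080 localhost:32000/petclinic:j17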
Then push the image to the MicroK8s built-in container registry:
$ podman push --tls-verify=false localhost:32000/petclinic:j17
NOTE: The --tls-verify=false switch is needed because the built-in registry is insecure. See the MicroK8s documentation for more information on how to work with the MicroK8s built-in registry. An alternative is to use a public registry like Docker Hub or quay.io.
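If you prefer a public registry instead, a hypothetical push to Docker Hub could look like this (replace <your-user> with your Docker Hub account):
$ podman tag localhost:32000/petclinic:j17 docker.io/<your-user>/petclinic:j17
$ podman push docker.io/<your-user>/petclinic:j17
If you go this route, remember to also update the image: field in the PetClinic deployment in Step 3 accordingly.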
Step 3. Deploying the PetClinic application and connecting it to JITServer
For deploying the PetClinic app in Kubernetes, we will create a manifest file to define a deployment and a service. A minimal Petclinic.yaml file is shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
  labels:
    app: petclinic
spec:
  selector:
    matchLabels:
      app: petclinic
  replicas: 1
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
      - name: petclinic
        image: localhost:32000/petclinic:j17
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "500Mi"
            cpu: "500m"
          limits:
            memory: "500Mi"
            cpu: "1000m"
        env:
        # Provide additional options to the JVM
        - name: OPENJ9_JAVA_OPTIONS
          value: "-XX:+UseJITServer -XX:JITServerAddress=myjitserver-j17-openj9-jitserver-chart -XX:+JITServerLogConnections"
---
apiVersion: v1
kind: Service
metadata:
  name: petclinic-service
spec:
  ports:
  - port: 8080
  selector:
    app: petclinic
To connect the PetClinic container with the JITServer service launched by the Helm chart, we have provided the OPENJ9_JAVA_OPTIONS environment variable to pass additional options to the OpenJ9 JVM running the PetClinic app. In our YML file, we specify the following options:
# Provide additional options to the JVM
- name: OPENJ9_JAVA_OPTIONS
  value: "-XX:+UseJITServer -XX:JITServerAddress=myjitserver-j17-openj9-jitserver-chart -XX:+JITServerLogConnections"
Let's take a closer look at the command-line options (prefix -XX) we are passing to the OpenJ9 JVM:
| Option | Description |
|---|---|
| +UseJITServer | Starts the JVM in JITServer client mode. |
| JITServerAddress | Specifies the JITServer host name or IP address for the JITServer client to connect to. |
| +JITServerLogConnections | Enables logging of connection/disconnection events between the client JVM and the JITServer. |
NOTE: See the OpenJ9 documentation for the full list of available OpenJ9 JVM command-line options.
Apply this YML file with:
$ kubectl apply -f Petclinic.yaml
And verify that the JVM connected to the JITServer by looking at the PetClinic pod logs:
$ kubectl get pods | grep petclinic
petclinic-77dfdffdc8-wlhhb   1/1   Running   0   3m20s
$ kubectl logs petclinic-77dfdffdc8-wlhhb | grep JITServer
#JITServer: t= 0 Connected to a server (serverUID=8109933572336871497)
Note: The most common reason for a client JVM not being able to connect to a JITServer is a mismatch between the OpenJ9 release used by the client and the server. This can be confirmed (or ruled out) by executing java -version, both in the client pod and in the server pod. Another possibility for a connection failure is a typo in the name of the JITServer endpoint.
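A quick way to run that check (the pod names below are placeholders; substitute the ones reported by kubectl get pods):
# Compare the Java/OpenJ9 versions on both sides of the connection
$ kubectl exec <petclinic-pod> -- java -version
$ kubectl exec <jitserver-pod> -- java -version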
Summary
In this tutorial, we demonstrated how to take advantage of the OpenJ9 JITServer technology in a Kubernetes setting. Through practical examples, we showed how to deploy JITServer using YML files, how to deploy JITServer using the Helm chart, how to deploy a simple Spring Boot application and connect it to the JITServer compilation service, and how to verify that the connection to JITServer is successful. In future content, we will cover more advanced topics like network traffic encryption, JITServer monitoring with Prometheus, and autoscaling based on custom metrics.