Tutorial
Deploy Quarkus applications to Kubernetes without writing YAML
Automate container deployment with Minikube, Jib, and PostgreSQL using configuration-driven manifests
In this tutorial, you learn how to deploy a containerized app to Minikube with Quarkus. We add PostgreSQL inside the cluster, wire configuration via a ConfigMap, expose the app with a NodePort service, and add liveness and readiness probes. The only manual YAML is the PostgreSQL Deployment and Service; everything for the app itself is generated from application.properties and Quarkus extensions.
Prerequisites
To complete this tutorial, you need a Quarkus project with the Kubernetes and Jib extensions (and optionally a database-backed API) to deploy. You create such a Quarkus project when you complete the Quarkus basics learning path.
Step 1. Install kubectl and Minikube (if needed)
If you already have kubectl and a running Kubernetes cluster (which you do if you completed Step 5 of the containerizing tutorial in the Quarkus basics learning path), you can use that cluster. Otherwise, set up kubectl and Minikube so that you have a consistent local environment.
Install kubectl
kubectl is the standard CLI for talking to a Kubernetes cluster.
Using SDKMAN!:
sdk install kubectl

Using Homebrew:
brew install kubectl
Verify your installation:
kubectl version --client
Install and start Minikube
Minikube runs a local Kubernetes cluster. We use Podman as the driver so you don't need a separate Docker daemon.
Install Minikube. For example, on macOS with Homebrew issue this command:
brew install minikube

See the Minikube installation docs for other methods.
Start a Minikube cluster with Podman and the containerd runtime:
minikube start --driver=podman --container-runtime=containerd \
  --insecure-registry="10.0.0.0/24"

We use containerd (instead of CRI-O) so the registry add-on and add-on verification work without errors. The --insecure-registry flag lets the in-cluster runtime pull from the registry add-on over plain HTTP. Keep this cluster running for the rest of the tutorial. You can stop it later with minikube stop.

Enable the registry add-on so that you can push images from your host into the cluster:

minikube addons enable registry

The registry runs inside the cluster. In Step 6 we use a port-forward so the host can reach it at localhost:5000 for Jib pushes.

Verify the cluster:

kubectl cluster-info

Your kubectl context should point at the Minikube cluster.
Step 2. Add the Minikube extension
You already have quarkus-kubernetes and quarkus-container-image-jib. Add the Minikube extension so that the generated manifests target a local Minikube cluster (for example, correct image pull policy and service defaults).
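Under the hood, the command adds the extension to your pom.xml. The resulting dependency should look roughly like this (the version is managed by the Quarkus BOM, so none is needed):

```xml
<!-- Added by `quarkus extension add quarkus-minikube` -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-minikube</artifactId>
</dependency>
```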
From the root of your project:
quarkus extension add quarkus-minikube
Step 3. Deploy PostgreSQL into Minikube
The Fruit API needs PostgreSQL. Deploy it into the cluster first so the ConfigMap and app configuration in the next steps can assume the database is already there. This is the only manual YAML in the tutorial—everything for the Quarkus application itself is generated by Quarkus. We use the username user, password pass, and database name example for consistency. The image is set to docker.io/library/postgres:16 (fully qualified) so Minikube's container runtime can pull it without short-name resolution issues.
Create a file named postgresql.yaml (for example, in your project root or a k8s folder) with the following content. Copy the block without any extra leading indentation so the YAML parses correctly (the --- document separator must start at column 0):

apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  selector:
    app: postgresql
  ports:
    - port: 5432
      targetPort: 5432
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresql
          image: docker.io/library/postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: user
            - name: POSTGRES_PASSWORD
              value: pass
            - name: POSTGRES_DB
              value: example

Apply the manifest:

kubectl apply -f postgresql.yaml

If you're using Minikube and the pod shows ImageInspectError or stays in ImagePullBackOff, pull the image into Minikube's cache (and use the full image name so the runtime can resolve it):

minikube image pull docker.io/library/postgres:16

Then, delete the pod so it is re-created and can use the loaded image:

kubectl delete pod -l app=postgresql

Wait until the PostgreSQL pod is running:

kubectl get pods -l app=postgresql -w

When the pod shows Running and is ready (1/1), press Ctrl+C to stop watching. The database is now available inside the cluster as the service postgresql on port 5432.
Step 4. Configure the data source using a ConfigMap
Keep configuration separate from code: put the data source settings in a Kubernetes ConfigMap and let Quarkus inject them into the generated manifests.
Create the ConfigMap manifest
Quarkus can augment existing Kubernetes YAML. We add a file that defines the ConfigMap; Quarkus merges it into the generated output and wires the Deployment to use it.
Create the directory:
mkdir -p src/main/kubernetes

Create src/main/kubernetes/common.yml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-datasource-props
data:
  POSTGRESQL_URL: "jdbc:postgresql://postgresql:5432/example"
  POSTGRESQL_USER: "user"
  POSTGRESQL_PASSWORD: "pass"

The keys (POSTGRESQL_URL, POSTGRESQL_USER, POSTGRESQL_PASSWORD) match the environment variable names that we reference in application.properties in the next step.
Wire the ConfigMap in application.properties
In src/main/resources/application.properties, add the following so the data source uses these environment variables and the ConfigMap is attached to the generated Deployment:

# Datasource: use env vars (injected from ConfigMap in Kubernetes)
quarkus.datasource.db-kind=postgresql
%prod.quarkus.datasource.username=${POSTGRESQL_USER}
%prod.quarkus.datasource.password=${POSTGRESQL_PASSWORD}
%prod.quarkus.datasource.jdbc.url=${POSTGRESQL_URL}

# Tell Quarkus to inject this ConfigMap into the generated Kubernetes manifests
quarkus.kubernetes.env.configmaps=postgresql-datasource-props

The %prod. prefix applies those properties only when the prod profile is active (in Kubernetes), so in dev and test the URL and credentials stay unset and Quarkus uses Dev Services, while in production the app reads them from the ConfigMap.

Keep your existing Hibernate ORM settings (for example, quarkus.hibernate-orm.schema-management.strategy=drop-and-create for dev and quarkus.hibernate-orm.log.sql=true) if you still have them. For production you would typically switch to none or validate.
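One way to scope those Hibernate settings by profile is sketched below; adjust it to whatever settings your project already has:

```properties
# Sketch: destructive schema management only in dev, validation in prod
%dev.quarkus.hibernate-orm.schema-management.strategy=drop-and-create
%dev.quarkus.hibernate-orm.log.sql=true
%prod.quarkus.hibernate-orm.schema-management.strategy=validate
```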
When you build for Kubernetes, Quarkus includes the ConfigMap in target/kubernetes/ and adds a reference in the Deployment so the container receives POSTGRESQL_URL, POSTGRESQL_USER, and POSTGRESQL_PASSWORD from the ConfigMap. This means no hardcoded credentials go into the image or into hand-written YAML.
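As a sketch of what to expect, the generated Deployment's pod template should contain an envFrom entry that references the ConfigMap (illustrative fragment; the exact layout varies by Quarkus version):

```yaml
# Illustrative fragment of the generated Deployment (not hand-written)
spec:
  containers:
    - name: jib-tutorial   # matches quarkus.container-image.name
      envFrom:
        - configMapRef:
            name: postgresql-datasource-props
```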
Step 5. Configure the container image and Kubernetes deployment
Add or adjust the following in src/main/resources/application.properties so Jib builds and pushes an image to the in-cluster registry and the generated Kubernetes manifests use NodePort (so you can reach the app with minikube service ... --url). Keep your existing data source and ConfigMap settings.
# Container image (pushed to the Minikube registry addon at localhost:5000)
quarkus.container-image.registry=localhost:5000
quarkus.container-image.group=
quarkus.container-image.name=jib-tutorial
quarkus.container-image.tag=1.0
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.insecure=true
# Kubernetes deployment
quarkus.kubernetes.replicas=1
quarkus.kubernetes.deployment-target=kubernetes
quarkus.kubernetes.ingress.expose=false
quarkus.kubernetes.service-type=NodePort
Here's what these configurations do:
- quarkus.container-image.registry=localhost:5000: Jib pushes to the Minikube registry add-on (exposed on the host via port-forward in Step 6).
- quarkus.container-image.group= (empty): Keeps the image path as localhost:5000/jib-tutorial:1.0.
- quarkus.container-image.push=true: Jib pushes the image to the registry after building.
- quarkus.container-image.insecure=true: Allows push over plain HTTP (the in-cluster registry does not use TLS).
- quarkus.container-image.tag=1.0: A fixed tag avoids the imagePullPolicy: Always default that the :latest tag triggers.
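Putting the image settings together, the container section of the generated Deployment should end up roughly like this (illustrative; the Minikube extension defaults the pull policy to IfNotPresent):

```yaml
# Illustrative: generated container spec after the Step 6 build
containers:
  - name: jib-tutorial
    image: localhost:5000/jib-tutorial:1.0
    imagePullPolicy: IfNotPresent
```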
Update the tests
The basics learning path created FruitResourceTest with a simple POST-and-GET check. Extend it so the build (including Step 6) exercises the full CRUD surface. Tests use the same PostgreSQL data source setup as dev; with a container runtime (Podman or Docker) available, Quarkus Dev Services starts PostgreSQL for tests automatically.

Replace the contents of src/test/java/com/ibm/developer/FruitResourceTest.java with the following:
package com.ibm.developer;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.hasItems;
import static org.hamcrest.CoreMatchers.is;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;
import jakarta.transaction.Transactional;

@QuarkusTest
public class FruitResourceTest {

    @BeforeEach
    @Transactional
    void cleanDatabase() {
        Fruit.deleteAll();
    }

    @Test
    public void testFruitEndpoint() {
        // Insert a fruit
        given()
            .body("{\"name\":\"Banana\", \"color\":\"Yellow\"}")
            .header("Content-Type", "application/json")
            .when().post("/fruits")
            .then()
            .statusCode(200);

        // Check that it is listed
        given()
            .when().get("/fruits")
            .then()
            .statusCode(200)
            .body("name", hasItems("Banana"));
    }

    @Test
    public void testGetById() {
        int id = given()
            .body("{\"name\":\"Apple\", \"color\":\"Red\"}")
            .header("Content-Type", "application/json")
            .when().post("/fruits")
            .then()
            .statusCode(200)
            .extract().path("id");

        given()
            .when().get("/fruits/" + id)
            .then()
            .statusCode(200)
            .body("name", is("Apple"))
            .body("color", is("Red"));
    }

    @Test
    public void testUpdate() {
        int id = given()
            .body("{\"name\":\"Grape\", \"color\":\"Purple\"}")
            .header("Content-Type", "application/json")
            .when().post("/fruits")
            .then()
            .statusCode(200)
            .extract().path("id");

        given()
            .body("{\"name\":\"Grape\", \"color\":\"Green\"}")
            .header("Content-Type", "application/json")
            .when().put("/fruits/" + id)
            .then()
            .statusCode(200)
            .body("color", is("Green"));
    }

    @Test
    public void testDelete() {
        int id = given()
            .body("{\"name\":\"Peach\", \"color\":\"Orange\"}")
            .header("Content-Type", "application/json")
            .when().post("/fruits")
            .then()
            .statusCode(200)
            .extract().path("id");

        given()
            .when().delete("/fruits/" + id)
            .then()
            .statusCode(204);

        given()
            .when().get("/fruits/" + id)
            .then()
            .statusCode(404);
    }
}
The @BeforeEach / @Transactional cleanup calls Fruit.deleteAll() inside a transaction so each test starts with an empty table. Run the tests with:
./mvnw test
Ensure a container runtime is running so Dev Services can start PostgreSQL.
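If you want Dev Services to start the same PostgreSQL version that runs in the cluster, you can pin its image. This is optional; the property below is an assumption based on the standard Quarkus Dev Services configuration, so check it against your Quarkus version:

```properties
# Optional: make Dev Services match the in-cluster PostgreSQL version
quarkus.datasource.devservices.image-name=docker.io/library/postgres:16
```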
Step 6. Build and push the container image
We build the OCI image with Jib and push it to the in-cluster registry. Jib pushes over HTTP; no local tagging or minikube image load is needed. This mirrors a typical CI/CD flow where the build pushes to a registry and the cluster pulls from it.
Start the port-forward so your host can reach the in-cluster registry at localhost:5000. Run this in a separate terminal (or background it) and keep it running during the build:

kubectl port-forward -n kube-system service/registry 5000:80 &

Verify the registry is reachable:

curl -s http://localhost:5000/v2/ && echo "Registry OK"

You should see {} followed by Registry OK.

Build the application and push the container image:

./mvnw clean package

With quarkus.container-image.build=true and quarkus.container-image.push=true, Maven builds the image with Jib and pushes it to localhost:5000/jib-tutorial:1.0 in one step. The Kubernetes manifests are generated under target/kubernetes/.

Note (base image digest warning): You may see a warning that the base image does not use a specific digest and the build may not be reproducible. That means Jib is using a tag (for example, 1.24) for the base image; the registry could change what that tag points to later. You can ignore the warning for local development. For reproducible builds (for example, in CI/CD), pin the base image to a digest in application.properties, for example: quarkus.jib.base-jvm-image=registry.access.redhat.com/ubi9/openjdk-25-runtime@sha256:<digest>. Get the digest from podman inspect <image>:<tag> (use the RepoDigests value) or your registry's image details.

Verify the image is in the registry (optional):

curl -s http://localhost:5000/v2/jib-tutorial/tags/list

You should see {"name":"jib-tutorial","tags":["1.0"]}.
Step 7. Deploy the application
With the image built and pushed to the in-cluster registry (Step 6), deploy the application by applying the generated Minikube manifest.
Apply the generated manifest. The Minikube extension produced target/kubernetes/minikube.yml with the Deployment, Service, and ConfigMap:

kubectl apply -f target/kubernetes/minikube.yml

Inspect what Quarkus generated. Open target/kubernetes/minikube.yml (or target/kubernetes/kubernetes.yml). You should see:

- A Deployment for your Quarkus app and a Service (type NodePort) exposing the application port.
- If you use a database: a ConfigMap named postgresql-datasource-props and a Deployment whose pod template references it (for example, envFrom or valueFrom).

This is what Quarkus produced from your application.properties (and, for database-backed apps, the common.yml fragment). No hand-written Deployment or Service YAML for the app.

Verify that the application pod is running:

kubectl get pods
kubectl get services

Wait until the Quarkus pod is Running and ready. The NodePort service will show a high port (for example, 3xxxx) on the Minikube node. The service name matches quarkus.container-image.name (for example, jib-tutorial).
Step 8. Access and test the running application
Use Minikube to get a URL to the NodePort service so you can reach the app without kubectl port-forward.
Get the URL for your service (use the name from quarkus.container-image.name, for example, jib-tutorial):

minikube service jib-tutorial --url

Copy the printed URL (for example, http://192.168.49.2:31234).

Test the API. Replace BASE_URL with the URL from the previous command.

Create a fruit:

curl -X POST -H "Content-Type: application/json" \
  -d '{"name":"Banana", "color":"Yellow"}' \
  BASE_URL/fruits

You should get a 200 response with the created fruit (including an ID).

List all fruits:

curl BASE_URL/fruits

Get a fruit by ID:

curl BASE_URL/fruits/1

Update a fruit:

curl -X PUT -H "Content-Type: application/json" -d '{"name":"Banana", "color":"Green"}' BASE_URL/fruits/1

Delete a fruit by ID:

curl -X DELETE BASE_URL/fruits/1
The Fruit API is now running in Kubernetes and talking to PostgreSQL in the cluster; the only manual YAML was the database Deployment and Service.
Step 9. Add liveness and readiness probes with SmallRye Health
Kubernetes uses liveness and readiness probes to manage pod lifecycle and traffic. Quarkus can expose health endpoints and add the corresponding probe configuration to the generated Deployment automatically—again with no manual YAML.
Add the SmallRye Health extension:

quarkus extension add quarkus-smallrye-health

Quarkus automatically registers:

- Liveness: GET /q/health/live—is the process alive?
- Readiness: GET /q/health/ready—is the app ready to receive traffic (for example, database connected)?

No extra code is required. The Kubernetes extension wires these into the generated Deployment as livenessProbe and readinessProbe.
Rebuild, push the image, and redeploy (same pattern as Steps 6–7):

./mvnw clean package
kubectl apply -f target/kubernetes/minikube.yml

Ensure the port-forward is still running so the build can push to the registry. The updated manifest includes the health probes; the new pod will roll out. If the Deployment didn't pick up the new image, run kubectl rollout restart deployment jib-tutorial (use the name that matches quarkus.container-image.name).

Inspect the updated manifest. In target/kubernetes/minikube.yml (or kubernetes.yml), find the Deployment and look at the pod template's livenessProbe and readinessProbe sections. They point at /q/health/live and /q/health/ready. No manual probe configuration is needed.
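The generated probe sections should look roughly like this (an illustrative fragment; the port, scheme, and timing fields depend on your configuration and Quarkus version):

```yaml
# Illustrative fragment of the generated pod template (not hand-written)
livenessProbe:
  httpGet:
    path: /q/health/live
    port: 8080     # assumes the default HTTP port
    scheme: HTTP
readinessProbe:
  httpGet:
    path: /q/health/ready
    port: 8080
    scheme: HTTP
```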
Conclusion
You've deployed a Quarkus application to Kubernetes without writing YAML for the app itself. Here's what was automated:
- Manifests: The Kubernetes Deployment, Service, and ConfigMap reference were generated from application.properties and src/main/kubernetes/common.yml.
- Probes: Liveness and readiness probes were added by SmallRye Health and the Quarkus Kubernetes extension—no hand-written probe blocks.
- ConfigMap injection: Data source configuration lived in a ConfigMap; Quarkus wired it into the generated manifests via quarkus.kubernetes.env.configmaps.
- Image build and push: Jib built the OCI image with no Dockerfile and no Docker daemon, and pushed it to the in-cluster registry.

What was manual:

- The PostgreSQL Deployment and Service YAML (the only hand-written Kubernetes manifests in this tutorial).
- Enabling the Minikube registry add-on and running kubectl port-forward so the host can push images to it. In CI/CD you would push to an external registry and reference it in your build config.
Your Fruit API is now deployed and connected to PostgreSQL in the cluster. The next tutorial will secure it with OIDC so that only authenticated users can call the endpoints.