I am going to use the operator to manage my domains in OpenShift. The operator
pattern is common in Kubernetes for managing complex software products that
have special lifecycle requirements that differ from the base assumptions made
by Kubernetes - for example, when a pod holds state that needs to be
saved or migrated before the pod is terminated. The WebLogic Kubernetes operator
includes such built-in knowledge of WebLogic, so it greatly simplifies the
management of WebLogic in a Kubernetes environment. Plus it is completely
open source and supported by Oracle.
Here’s an overview of the process I am going to walk through:
Create a new project (namespace) where I will be deploying WebLogic,
Prepare the project for the WebLogic Kubernetes Operator,
Install the operator,
View the operator logs in Kibana,
Prepare Docker images to run my domain,
Create the WebLogic domain,
Verify access to the WebLogic administration console and WLST,
Deploy a test application into the cluster,
Set up a route to expose the application publicly, and
Test scaling and load balancing.
Before we get started, you should clone the WebLogic operator project
from GitHub. It contains many of the samples and helpers we will need.
In the OpenShift web user interface, create a new project. If you already
have other projects, go to the Application Console, and then click on the
project pulldown at the top and click on “View All Projects” and then the
“Create Project” button. If you don’t have existing projects, OpenShift
will take you right to the create project page when you log in. I called
my project “weblogic” as you can see in the image below:
Then navigate into your project view. Right now it will be empty, as shown below.
Prepare the project for the WebLogic Kubernetes Operator
The easiest way to get the operator Docker image is to just pull it from
Docker Hub, where you can also review details of the image.
You can use the following command to pull the image. You may need to docker login
first if you have not previously done so:
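# pull the operator image; the 2.0 tag here matches the image name we
# will use in the Helm configuration later in this post
docker pull oracle/weblogic-kubernetes-operator:2.0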
Instead of pulling the image and manually copying it onto our OpenShift nodes,
we could also just add an Image Pull Secret to our project (namespace) so
that OpenShift will be able to pull the image for us. We can do this
with the following commands (at this stage we are using a user with the cluster-admin role):
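# create a docker-registry secret named docker-store-secret in the
# weblogic project (this is the same name we link to the service account below)
oc create secret docker-registry docker-store-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL \
  -n weblogic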
In this command, replace DOCKER_USER with your Docker store userid,
DOCKER_PASSWORD with your password, and DOCKER_EMAIL with the email
address associated with your Docker Hub account.
We also need to tell OpenShift to link this secret to our service account.
Assuming we want to use the default service account in our weblogic
project (namespace), we can run this command:
oc secrets link default docker-store-secret --for=pull
(Optional) Build the image yourself
It is also possible to build the image yourself, rather than pulling it
from Docker Hub. If you want to do that, first go to Docker Hub and
accept the license for the Server JRE image,
ensure you have the listed
prerequisites installed, and then run these commands:
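# a sketch of the build, assuming you are working in a clone of the
# operator repository; see the project's developer documentation for the
# full, authoritative steps
cd weblogic-kubernetes-operator
mvn clean install
docker build -t oracle/weblogic-kubernetes-operator:2.0 --no-cache=true .
Deploy Elasticsearch and Kibana
Whichever way you obtained the operator image, we also need somewhere to
send the operator logs. The operator project we cloned earlier includes a
sample YAML file that deploys Elasticsearch and Kibana; the path below is
the one used in the 2.0 samples, so adjust it (and the target namespace)
if your checkout differs:
oc apply -f kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -n weblogic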
After a few moments, you should see the pods running in our namespace:
oc get pods,services
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-75b6f589cb-c9hbw 1/1 Running 0 10s
pod/kibana-746cc75444-nt8pr 1/1 Running 0 10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch ClusterIP 172.30.143.158 <none> 9200/TCP,9300/TCP 10s
service/kibana NodePort 172.30.18.210 <none> 5601:32394/TCP 10s
So based on the service shown above and our project (namespace) named weblogic,
the URL for Elasticsearch will be elasticsearch.weblogic.svc.cluster.local:9200.
We will need this URL later.
Install the operator
Now we are ready to install the operator. In the 2.0 release, we use Helm to
install the operator. So first we need to download Helm and set up Tiller on
our OpenShift cluster (if you have not already installed it).
Before we install Tiller, let's create a cluster role binding to make sure the
default service account in the kube-system namespace (which Tiller will run
under) has the cluster-admin role, which it will need in order to install and manage releases.
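A minimal sketch of this, using the oc adm policy command:
# grant the cluster-admin role to the default service account in kube-system
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:kube-system:default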
Now we can execute helm init to install tiller on the OpenShift cluster.
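# install Tiller; --wait blocks until the Tiller pod is ready
helm init --wait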
Check it was successful with this command:
oc get deploy -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tiller-deploy 1 1 1 1 18s
When you install the operator you can either pass the configuration parameters
into Helm on the command line, or if you prefer, you can store them in a YAML
file and pass that file in. I like to store them in a file. There is a
sample provided, so we can just make a copy and update it with our details.
Set the domainNamespaces parameter to include just weblogic, i.e. the
project (namespace) that we created to install WebLogic in.
Set the image parameter to match the name of the image you pulled from
Docker Hub or built yourself. If you just created the image pull secret
(and did not build your own image), use the value I have shown here:
# image specifies the docker image containing the operator code.
image: "oracle/weblogic-kubernetes-operator:2.0"
Set the imagePullSecrets list to include the secret we created earlier.
If you did not create the secret you can leave this commented out.
- name: "docker-store-secret"
Set the elkIntegrationEnabled parameter to true.
# elkIntegrationEnabled specifies whether or not Elastic integration is enabled.
elkIntegrationEnabled: true
Set the elasticSearchHost to the address of the Elasticsearch server
that we set up earlier.
# elasticSearchHost specifies the hostname of where elasticsearch is running.
# This parameter is ignored if 'elkIntegrationEnabled' is false.
elasticSearchHost: "elasticsearch.weblogic.svc.cluster.local"
Now we can use helm to install the operator with this command. Notice
that I pass in the name of my parameters YAML file in the --values option:
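# a sketch of the install; the chart path is the one in the operator
# project, and custom-values.yaml is my copy of the sample values file
helm install kubernetes/charts/weblogic-operator \
  --name weblogic-operator \
  --namespace weblogic \
  --values custom-values.yaml \
  --wait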
This command will wait until the operator starts up successfully. If it has
to pull the image, that will obviously take a little while, but if this command
does not finish in a minute or so, then it is probably stuck. You can
send it to the background and start looking around to see what went wrong.
Most often it will be a problem pulling the image. If you see the pod has
status ImagePullBackOff then OpenShift was not able to pull the image.
You can verify the pod was created with this command:
oc get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-75b6f589cb-c9hbw 1/1 Running 0 2h
kibana-746cc75444-nt8pr 1/1 Running 0 2h
weblogic-operator-54d99679f-dkg65 1/1 Running 0 48s
View the operator logs in Kibana
Now that we have the operator running, let's take a look at the logs in Kibana.
We installed Kibana earlier. Let’s expose Kibana outside our cluster:
oc expose service kibana
You can check this worked with these commands:
oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
kibana kibana-weblogic.sub11201828382.certificationvc.oraclevcn.com kibana 5601 None
oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP 172.30.143.158 <none> 9200/TCP,9300/TCP 2h
internal-weblogic-operator-svc ClusterIP 172.30.252.148 <none> 8082/TCP 7m
kibana NodePort 172.30.18.210 <none> 5601:32394/TCP 2h
Now you should be able to access Kibana using the OpenShift front-end address
and the node port for the Kibana service. In my case the node port is 32394
and my OpenShift server is accessible to me as openshift
so I would use the address https://openshift:32394.
You will see a page like this one:
Click on the “Create” button, then click on the “Discover” option in the menu
on the left hand side. Now hover over the entries for level and log in the
field list, and click on the “Add” button that appears next to each one.
Now you should have a nice log screen like this:
Great! We have the operator installed. Now we are ready to move on to create
some WebLogic domains.
Prepare Docker images to run the domain
Now we have some choices to make. There are two main ways to run WebLogic
in Docker - we can use a standard Docker image which contains the WebLogic
binaries but keep the domain configuration, applications, etc., outside the
image, for example in a persistent volume; or we can create Docker images
with both the WebLogic binaries and the domain burnt into them. There are
advantages and disadvantages to both approaches, so it really depends on
how we want to treat our domain.
The first approach is good if you just
want to run WebLogic in Kubernetes but you still want to use the admin
console and WLST and so on to manage it. The second approach is better
if you want to drive everything from a CI/CD pipeline where you do not
mutate the running environment, but instead you update the source and then
build new images and roll the environment to uptake them. A number of these
kinds of considerations are listed here.
For the sake of this post, let's use the "domain in image" option (the
second approach described above).
So we will need a base WebLogic image with the necessary patches installed,
and then we will create our domain on top of that. Let’s create a domain
with a web application deployed in it, so that we have something to use
to test our load balancing configuration and scaling later on.
The easiest way to get the base image is to grab it from Oracle using
docker pull store/oracle/weblogic:12.2.1.3
The standard WebLogic Server 12.2.1.3.0
image from Docker Hub has the necessary patches already installed. It is
worth knowing how to install patches, in case you need some additional one-off patches.
If you are not interested in that, skip past the next (optional) section.
(Optional) Manually creating a patched WebLogic image
Here is an example Dockerfile that we can use to install the necessary
patches. You can modify this to add any additional one-off patches that
you need. Follow the pattern already there to copy the patch into the
container, apply it, and then remove the temporary files after you are done.
# Install patches to run WebLogic on Kubernetes
# Start with an unpatched WebLogic 12.2.1.3.0 Docker image
FROM store/oracle/weblogic:12.2.1.3
MAINTAINER Mark Nelson <email@example.com>
# We need patch 29135930 to run WebLogic on Kubernetes
# We will also install the latest PSU which is 28298734
# That prereqs a newer version of OPatch, which is provided by 28186730
# File names of the patch archives downloaded from My Oracle Support,
# passed in as build arguments
ARG PATCH_PKG0
ARG PATCH_PKG2
ARG PATCH_PKG3
# Copy the patches into the container
COPY $PATCH_PKG0 /u01/
COPY $PATCH_PKG2 /u01/
COPY $PATCH_PKG3 /u01/
# Install the psmisc package which is a prereq for 28186730
RUN yum -y install psmisc
# Install the three patches we need - do it all in one command to
# minimize the number of layers and the size of the resulting image.
# Also run opatch cleanup and remove temporary files.
RUN cd /u01 && \
$JAVA_HOME/bin/jar xf /u01/$PATCH_PKG0 && \
$JAVA_HOME/bin/java -jar /u01/6880880/opatch_generic.jar \
-silent oracle_home=/u01/oracle -ignoreSysPrereqs && \
echo "opatch updated" && \
sleep 5 && \
cd /u01 && \
$JAVA_HOME/bin/jar xf /u01/$PATCH_PKG2 && \
cd /u01/28298734 && \
$ORACLE_HOME/OPatch/opatch apply -silent && \
cd /u01 && \
$JAVA_HOME/bin/jar xf /u01/$PATCH_PKG3 && \
cd /u01/29135930 && \
$ORACLE_HOME/OPatch/opatch apply -silent && \
$ORACLE_HOME/OPatch/opatch util cleanup -silent && \
rm /u01/$PATCH_PKG0 && \
rm /u01/$PATCH_PKG2 && \
rm /u01/$PATCH_PKG3 && \
rm -rf /u01/6880880 && \
rm -rf /u01/28298734 && \
rm -rf /u01/29135930
This Dockerfile assumes the patch archives are available in the same
directory. You would need to download the patches from My Oracle
Support and then you can build the image
with this command:
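The patch archive file names below are just examples; substitute the names of
the archives you actually downloaded, and tag the image however you like:
docker build \
  --build-arg PATCH_PKG0=p28186730_139400_Generic.zip \
  --build-arg PATCH_PKG2=p28298734_122130_Generic.zip \
  --build-arg PATCH_PKG3=p29135930_122130_Generic.zip \
  -t oracle/weblogic:12.2.1.3-patched .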
I am going to use the WebLogic Deploy Tooling
to define my domain. If you are not familiar with this tool, you might
want to check it out! It lets you define your domain declaratively instead
of writing custom WLST scripts. For just one domain, maybe not such a big
deal, but if you need to create a lot of domains it is pretty useful. It
also lets you parameterize them, and it can introspect existing domains to
create a model and associated artifacts. You can also use it to “move”
domains from place to place, say from an on-premises install to Kubernetes,
and you can change the version of WebLogic on the way without needing to worry
about differences in WLST from version to version - it takes care of all
that for you. Of course, we don’t need all those features for what we need
to do here, but it is good to know they are there for when you might need them.
I created a GitHub repository with my domain model
here. You can just
clone this repository and then run the commands below to download
the WebLogic Deploy Tooling and then build the domain in a new Docker
image that we will tag my-domain1-image:1.0:
git clone https://github.com/markxnelson/simple-sample-domain
curl -Lo weblogic-deploy.zip https://github.com/oracle/weblogic-deploy-tooling/releases/download/weblogic-deploy-tooling-0.14/weblogic-deploy.zip
# make sure JAVA_HOME is set correctly, and `mvn` is on your PATH
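The repository's README describes the exact build steps; the sketch below
assumes the defaults in the sample, where the Maven build drives the WebLogic
Deploy Tooling and docker build to produce the tagged image:
cd simple-sample-domain
mvn clean install
# verify the image was created with the expected tag
docker images my-domain1-image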
I won't go into all the nitty gritty details of how this works; that's a
subject for another post (if you are interested, take a look at the documentation
in the GitHub project).
But take a look at the simple-topology.yaml file
to get a feel for what is happening:
As you can see it is all parameterized. Most of those properties are defined in a separate properties file:
# These variables are used for substitution in the WDT model file.
# Any port that will be exposed through Docker is put in this file.
# The sample Dockerfile will get the ports from this file and not the WDT model.
On lines 10-17 we are defining a cluster named cluster-1 with two dynamic
servers in it. On 18-25 we are defining the admin server. And on 30-38 we
are defining an application that we want deployed. This is a simple web
application that prints out the IP address of the managed server it is running
on. Here is the main page of that application:
Next, we need to create the domain custom resource. To do this, we prepare
a Kubernetes YAML file as follows. I have removed the comments to make this
more readable; you can find a sample here
which has extensive comments to explain how to create these files:
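# A minimal sketch of domain.yaml; the domain home path and the name of the
# WebLogic credentials secret are examples - adjust them to match your image
apiVersion: "weblogic.oracle/v2"
kind: Domain
metadata:
  name: domain1
  namespace: weblogic
spec:
  domainHome: /u01/oracle/user_projects/domains/domain1
  domainHomeInImage: true
  image: "my-domain1-image:1.0"
  imagePullPolicy: IfNotPresent
  webLogicCredentialsSecret:
    name: domain1-weblogic-credentials
  serverStartPolicy: "IF_NEEDED"
  adminServer:
    serverStartState: "RUNNING"
    adminService:
      channels:
      - channelName: default
        nodePort: 30701
  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 4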
Now we can use this file to create the domain custom resource, using the oc apply command:
oc apply -f domain.yaml
You can verify it was created, and view the resource that was created, with these commands:
oc get domains
oc describe domain domain1
The operator will notice this new domain custom resource and it will react
accordingly. In this case, since we have asked for the admin server and
the servers in the cluster to come to the "RUNNING" state (via the
serverStartState entries above), the operator will start up the admin server first, and then
both managed servers. You can watch this happen using this command:
oc get pods -w
This will print out the current pods, and then update every time there is
a change in status. You can hit Ctrl-C to exit from the command when you
have seen enough.
The operator also creates services for the admin server, each managed server
and the cluster. You can see the services with this command:
oc get services
You will notice a service called domain1-admin-server-external which is
used to expose the admin server’s default channel outside of the cluster,
to allow us to access the admin console and to use WLST. We need to tell
OpenShift to make this service available externally by creating a route
with this command:
oc expose service domain1-admin-server-external --port=default
This will expose that service on the NodePort it declared.
Verify access to the WebLogic administration console and WLST
Now you can start a browser and point it to any one of your worker nodes
and use the NodePort from the service (30701 in the example above) to
access the admin console. For me, since I have an entry in my /etc/hosts
for my OpenShift server, this address is http://openshift:30701/console.
You can log in to the admin console and use it as normal. You might like
to navigate into “Deployments” to verify that our web application is there:
You might also like to go to the "Servers" page to validate that you can see
all of the managed servers:
We can also use WLST against the domain, if desired. To do this, just
start up WLST as normal on your client machine and then use the OpenShift
server address and the NodePort to form the t3 URL. Using the example
above, my URL is t3://openshift:30701:
Initializing WebLogic Scripting Tool (WLST) ...
Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
Connecting to t3://openshift:30701 with userid weblogic ...
Successfully connected to Admin Server "admin-server" that belongs to domain "domain1".
Warning: An insecure protocol was used to connect to the server.
To ensure on-the-wire security, the SSL port or Admin port should be used instead.
You can use WLST as normal, either interactively, or you can run scripts.
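For example, here is a tiny interactive session (a sketch; fill in your own
admin password) that lists the server lifecycle runtimes in the domain:
connect('weblogic', '<password>', 't3://openshift:30701')
# switch to the domain runtime tree and list the server lifecycle
# runtime MBeans, one entry per server in the domain
domainRuntime()
cd('ServerLifeCycleRuntimes')
ls()
disconnect()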
Keep in mind though, that since you have your domain burnt into the image,
when you restart the pods, any changes you made with WLST would be lost.
If you want to make permanent changes, you would need to include the WLST
scripts in the image building process and then re-run it to build a new
version of the image.
Of course, if you have chosen to put your domain in persistent storage instead
of burning it into the image, this caveat would not apply.
Set up a route to expose the application publicly
Now, let’s expose our web application outside the OpenShift cluster. To
do this, we are going to want to set up a load balancer to distribute
requests across all of the managed servers, and then expose the load
balancer itself outside the cluster.
We can use the provided sample
to install the Traefik load balancer using the following command:
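Here is a sketch based on the charts in the operator project's samples; the
release names, namespaces and chart values shown are examples, so check the
samples' README for the exact values:
# install Traefik itself (exposed on a NodePort) and tell it to watch
# the weblogic namespace
helm install stable/traefik \
  --name traefik-operator \
  --namespace traefik \
  --set "kubernetes.namespaces={traefik,weblogic}" \
  --set "serviceType=NodePort"
# create an ingress for domain1 that routes our (made up) hostname to cluster-1
helm install kubernetes/samples/charts/ingress-per-domain \
  --name domain1-ingress \
  --namespace weblogic \
  --set wlsDomain.namespace=weblogic \
  --set wlsDomain.domainUID=domain1 \
  --set traefik.hostname=domain1.org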
Note that you would set the hostname to your real DNS hostname when you do
this for real. In this example, I am just using a made up hostname.
Test scaling and load balancing
Now we can hit the web application to verify the load balancing is working.
You can hit it from a browser, but in that case session affinity will kick
in, so you will likely see a response from the same managed server over and
over again. If you use curl though, you should see it round robin.
You can run curl in a loop using this command:
while true; do curl -v -H 'host: domain1.org' http://openshift:30305/testwebapp/; sleep 1; done
The web application just prints out the name and IP address of the managed
server. So you should see the output alternate between all of the managed
servers in sequence.
Now, let’s scale the cluster down and see what happens. To initiate scaling,
we can just edit the domain custom resource with this command:
oc edit domain domain1
This will open the domain custom resource in an editor. Find the entry
for cluster-1 and underneath that the replicas entry:
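The relevant fragment looks something like this (assuming the cluster is
currently running four replicas, as in the sketch earlier):
clusters:
- clusterName: cluster-1
  replicas: 4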
You can change the replicas to another value, for example 2, and then
save and exit. The operator will notice this change and will react by
gracefully shutting down two of the managed servers. You can watch this
happen with the command:
oc get pods -w
You will also notice in the other window where you have curl running that
those two managed servers no longer get requests. You will also notice that
there are no failed requests - the servers are removed from the domain1-cluster-cluster-1
service early, so they do not receive requests that would end in a connection refused
or timeout. The ingress and the load balancer adjust automatically.
Once the scaling is finished, you might want to scale back up to 4 and
watch the operation in reverse.
Well at this point we have our custom WebLogic domain, with our own
configuration and applications deployed, running on OpenShift under the
control of the operator. We have seen how we can access the admin console,
how to use WLST, how to set up load balancing and expose applications outside
the OpenShift cluster, and how to control scaling.
Here are a few screenshots from the OpenShift console showing what we have running:
In future posts I will look in more detail at related topics like using a
CI/CD pipeline to drive image creation, exporting WebLogic metrics to
Prometheus, and more.