The Open Liberty Operator can be used to deploy and manage applications running on Open Liberty into Kubernetes clusters. You can also perform Day-2 operations such as gathering traces and dumps using the operator.
This documentation refers to the latest codebase. For documentation and samples of older releases, check out the main releases page and navigate to the corresponding tag.
Use the instructions for one of the releases to install the operator into a Kubernetes cluster.
The Open Liberty Operator can be installed to:

- watch its own namespace
- watch another namespace
- watch multiple namespaces
- watch all namespaces in the cluster

Appropriate cluster roles and bindings are required to watch another namespace, watch multiple namespaces, or watch all namespaces in the cluster.
> **Note:** The Open Liberty Operator can only interact with resources it is given permission to interact with through role-based access control (RBAC). Some of the operator features require interacting with resources in other namespaces. In that case, the operator must be installed with the correct `ClusterRole` definitions.
The architecture of the Open Liberty Operator follows the basic controller pattern: the operator container with the controller is deployed into a Pod and listens for incoming resources with `Kind: OpenLibertyApplication`. Creating an `OpenLibertyApplication` custom resource (CR) triggers the Open Liberty Operator to create, update, or delete the Kubernetes resources needed by the application to run on your cluster.

In addition, the Open Liberty Operator makes it easy to perform Day-2 operations on an Open Liberty server running inside a Pod as part of an `OpenLibertyApplication` instance:

* Gather server traces using resource `Kind: OpenLibertyTrace`
* Generate server dumps using resource `Kind: OpenLibertyDump`
Each instance of the `OpenLibertyApplication` CR represents the application to be deployed on the cluster:
```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    type: ClusterIP
    port: 9080
  expose: true
  storage:
    size: 2Gi
    mountPath: "/logs"
```
The following table lists the configurable parameters of the `OpenLibertyApplication` CRD. For the complete OpenAPI v3 representation of these values, see the `OpenLibertyApplication` CRD. Each `OpenLibertyApplication` CR must specify the `applicationImage` parameter. Specifying other parameters is optional.
| Parameter | Description |
|-----------|-------------|
| `version` | The current version of the application. Label `app.kubernetes.io/version` will be added to all resources when the version is defined. |
| `serviceAccountName` | The name of the OpenShift service account to be used during deployment. |
| `applicationImage` | The absolute name of the image to be deployed, containing the registry and the tag. On OpenShift, it can also be set to `<project name>/<image stream name>[:tag]` to reference an image from an image stream. |
| `applicationName` | The name of the application this resource is part of. If not specified, it defaults to the name of the CR. |
| `createAppDefinition` | A boolean to toggle the automatic configuration of Kubernetes resources for the `OpenLibertyApplication` to allow creation of an application definition by kAppNav. The default value is `true`. |
| `pullPolicy` | The policy used when pulling the image. One of: `Always`, `Never`, and `IfNotPresent`. |
| `pullSecret` | If using a registry that requires authentication, the name of the secret containing credentials. |
| `initContainers` | The list of Init Container definitions. |
| `sidecarContainers` | The list of sidecar containers. These are additional containers to be added to the pods. |
| `architecture` | An array of architectures to be considered for deployment. Their position in the array indicates preference. |
| `bindings.embedded` | A YAML object that represents a `ServiceBindingRequest` custom resource. |
| `bindings.autoDetect` | A boolean to toggle whether the operator should automatically detect and use a `ServiceBindingRequest` resource with the `<CR_NAME>-binding` naming format. |
| `bindings.resourceRef` | The name of a `ServiceBindingRequest` custom resource created manually in the same namespace as the application. |
| `service.bindable` | A boolean to toggle whether the operator exposes the application as a bindable service. The default value for this parameter is `false`. |
| `service.port` | The port exposed by the container. |
| `service.targetPort` | The port that the operator assigns to containers inside pods. Defaults to the value of `service.port`. |
| `service.portName` | The name for the port exposed by the container. |
| `service.ports` | An array consisting of service ports. |
| `service.type` | The Kubernetes Service Type. |
| `service.nodePort` | Node proxies this port into your service. Note that once this port is set to a non-zero value, it cannot be reset to zero. |
| `service.annotations` | Annotations to be added to the service. |
| `service.certificate` | A YAML object representing a Certificate. |
| `service.certificateSecretRef` | The name of a secret that already contains the TLS key, certificate, and CA to be mounted in the pod. |
| `service.provides.category` | Service binding type to be provided by this CR. At this time, the only allowed value is `openapi`. |
| `service.provides.protocol` | Protocol of the provided service. Defaults to `http`. |
| `service.provides.context` | Specifies the context root of the service. |
| `service.provides.auth.username` | Optional value to specify the username as a `SecretKeySelector`. |
| `service.provides.auth.password` | Optional value to specify the password as a `SecretKeySelector`. |
| `service.consumes` | An array consisting of services to be consumed by the `OpenLibertyApplication`. |
| `service.consumes[].category` | The type of service binding to be consumed. At this time, the only allowed value is `openapi`. |
| `service.consumes[].name` | The name of the service to be consumed. If binding to an `OpenLibertyApplication`, then this would be the provider's CR name. |
| `service.consumes[].namespace` | The namespace of the service to be consumed. If binding to an `OpenLibertyApplication`, then this would be the provider's CR namespace. |
| `service.consumes[].mountPath` | Optional field to specify the location in the pod where the service binding secret should be mounted. If not specified, the secret keys are injected as environment variables. |
| `createKnativeService` | A boolean to toggle the creation of Knative resources and usage of Knative serving. |
| `expose` | A boolean that toggles the external exposure of this deployment via a Route or a Knative Route resource. |
| `replicas` | The static number of desired replica pods that run simultaneously. |
| `autoscaling.maxReplicas` | Required field for autoscaling. Upper limit for the number of pods that can be set by the autoscaler. It cannot be lower than the minimum number of replicas. |
| `autoscaling.minReplicas` | Lower limit for the number of pods that can be set by the autoscaler. |
| `autoscaling.targetCPUUtilizationPercentage` | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. |
| `resourceConstraints.requests.cpu` | The minimum required CPU core. Specify integers, fractions (e.g. 0.5), or millicore values (e.g. `100m`, where 100m is equivalent to .1 core). Required field for autoscaling. |
| `resourceConstraints.requests.memory` | The minimum memory in bytes. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
| `resourceConstraints.limits.cpu` | The upper limit of CPU core. Specify integers, fractions (e.g. 0.5), or millicore values (e.g. `100m`, where 100m is equivalent to .1 core). |
| `resourceConstraints.limits.memory` | The memory upper limit in bytes. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
| `env` | An array of environment variables following the format of `EnvVar` from the Kubernetes core API. |
| `envFrom` | An array of references to `ConfigMap` or `Secret` resources containing environment variables. |
| `readinessProbe` | A YAML object configuring the Kubernetes readiness probe that controls when the pod is ready to receive traffic. |
| `livenessProbe` | A YAML object configuring the Kubernetes liveness probe that controls when Kubernetes needs to restart the pod. |
| `volumes` | A YAML object representing a pod volume. |
| `volumeMounts` | A YAML object representing a pod volumeMount. |
| `storage.size` | A convenient field to set the size of the persisted storage. Can be overridden by the `storage.volumeClaimTemplate` parameter. |
| `storage.mountPath` | The directory inside the container where this persisted storage will be bound to. |
| `storage.volumeClaimTemplate` | A YAML object representing a `volumeClaimTemplate` component of a `StatefulSet`. |
| `monitoring.labels` | Labels to set on the ServiceMonitor. |
| `monitoring.endpoints` | A YAML snippet representing an array of Endpoint components from ServiceMonitor. |
| `serviceability.size` | A convenient field to request the size of the persisted storage to use for serviceability. Can be overridden by the `serviceability.volumeClaimName` parameter. |
| `serviceability.storageClassName` | A convenient field to request the StorageClassName of the persisted storage to use for serviceability. Can be overridden by the `serviceability.volumeClaimName` parameter. |
| `serviceability.volumeClaimName` | The name of the `PersistentVolumeClaim` resource you created to be used for serviceability. Must be in the same namespace. |
| `route.annotations` | Annotations to be added to the Route. |
| `route.host` | Hostname to be used for the Route. |
| `route.path` | Path to be used for the Route. |
| `route.termination` | TLS termination policy. Can be one of `edge`, `reencrypt`, and `passthrough`. |
| `route.insecureEdgeTerminationPolicy` | HTTP traffic policy with TLS enabled. Can be one of `Allow`, `Redirect`, and `None`. |
| `route.certificate` | A YAML object representing a Certificate. |
| `route.certificateSecretRef` | The name of a secret that already contains the TLS key, certificate, and CA to be used in the route. It can also contain the destination CA certificate. The following keys are valid in the secret: `ca.crt`, `destCA.crt`, `tls.crt`, and `tls.key`. |
| `sso` | Specifies the configuration for single sign-on providers to authenticate with. Specify sensitive fields, such as clientId and clientSecret, for the selected providers by using the `<OpenLibertyApplication_name>-olapp-sso` Secret. |
| `sso.mapToUserRegistry` | Specifies whether to map a user identifier to a registry user. This parameter applies to all providers. |
| `sso.redirectToRPHostAndPort` | Specifies a callback protocol, host, and port number, such as `https://myfrontend.mycompany.com`. This parameter applies to all providers. |
| `sso.github.hostname` | Specifies the host name of your enterprise GitHub, such as `github.mycompany.com`. The default is `github.com`, which is the public GitHub. |
| `sso.oidc` | The list of OpenID Connect (OIDC) providers to authenticate with. Required fields: `discoveryEndpoint`. Specify sensitive fields, such as clientId and clientSecret, by using the `<OpenLibertyApplication_name>-olapp-sso` Secret. |
| `sso.oidc[].discoveryEndpoint` | Specifies a discovery endpoint URL for the OpenID Connect provider. Required field. |
| `sso.oidc[].displayName` | The name of the social login configuration for display. |
| `sso.oidc[].groupNameAttribute` | Specifies the name of the claim. Use its value as the user group membership. |
| `sso.oidc[].hostNameVerificationEnabled` | Specifies whether to enable host name verification when the client contacts the provider. |
| `sso.oidc[].id` | The unique ID for the provider. The default value is `oidc`. |
| `sso.oidc[].realmNameAttribute` | Specifies the name of the claim. Use its value as the subject realm. |
| `sso.oidc[].scope` | Specifies one or more scopes to request. |
| `sso.oidc[].tokenEndpointAuthMethod` | Specifies the required authentication method. |
| `sso.oidc[].userInfoEndpointEnabled` | Specifies whether the UserInfo endpoint is contacted. |
| `sso.oidc[].userNameAttribute` | Specifies the name of the claim. Use its value as the authenticated user principal. |
| `sso.oauth2` | The list of OAuth 2.0 providers to authenticate with. Required fields: `authorizationEndpoint`, `tokenEndpoint`. Specify sensitive fields, such as clientId and clientSecret, by using the `<OpenLibertyApplication_name>-olapp-sso` Secret. |
| `sso.oauth2[].authorizationEndpoint` | Specifies an authorization endpoint URL for the OAuth 2.0 provider. Required field. |
| `sso.oauth2[].tokenEndpoint` | Specifies a token endpoint URL for the OAuth 2.0 provider. Required field. |
| `sso.oauth2[].accessTokenHeaderName` | Name of the header to use when an OAuth access token is forwarded. |
| `sso.oauth2[].accessTokenRequired` | Determines whether the access token that is provided in the request is used for authentication. If the parameter is set to `true`, the client must provide a valid access token. |
| `sso.oauth2[].accessTokenSupported` | Determines whether to support access token authentication if an access token is provided in the request. If the parameter is set to `true` and an access token is provided in the request, then the access token is used as an authentication token. |
| `sso.oauth2[].displayName` | The name of the social login configuration for display. |
| `sso.oauth2[].groupNameAttribute` | Specifies the name of the claim. Use its value as the user group membership. |
| `sso.oauth2[].id` | Specifies the unique ID for the provider. The default value is `oauth2`. |
| `sso.oauth2[].realmName` | Specifies the realm name for this social media. |
| `sso.oauth2[].realmNameAttribute` | Specifies the name of the claim. Use its value as the subject realm. |
| `sso.oauth2[].scope` | Specifies one or more scopes to request. |
| `sso.oauth2[].tokenEndpointAuthMethod` | Specifies the required authentication method. |
| `sso.oauth2[].userNameAttribute` | Specifies the name of the claim. Use its value as the authenticated user principal. |
| `sso.oauth2[].userApi` | The URL for retrieving the user information. |
| `sso.oauth2[].userApiType` | Indicates which specification to use for the user API. |
Use official Open Liberty images and guidelines to create your application image.
Use the following CR to deploy your application image to a Kubernetes environment:
```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
```
The `applicationImage` value must be defined in the `OpenLibertyApplication` CR. On OpenShift, the operator tries to find an image stream name with the `applicationImage` value. The operator falls back to the registry lookup if it is not able to find any image stream that matches the value. If you want to distinguish an image stream called `my-company/my-app` (project: `my-company`, image stream name: `my-app`) from the Docker Hub `my-company/my-app` image, you can use the full image reference `docker.io/my-company/my-app`.
To get information on the deployed CR, use any of the following:

```
oc get olapp my-liberty-app
oc get olapps my-liberty-app
oc get openlibertyapplication my-liberty-app
```
The Open Liberty Operator is based on the generic Runtime Component Operator. For more information on the usage of common functionality, see the Runtime Component Operator documentation below. Note that, in the samples from the links below, the instances of `Kind: RuntimeComponent` must be replaced with `Kind: OpenLibertyApplication`.
For functionality that is unique to the Open Liberty Operator, see the following sections.
The Open Liberty Operator sets a number of environment variables related to console logging by default. The following table shows the variables and their corresponding values.
| Name | Value |
|------|-------|
| `WLP_LOGGING_CONSOLE_LOGLEVEL` | info |
| `WLP_LOGGING_CONSOLE_SOURCE` | message,accessLog,ffdc,audit |
| `WLP_LOGGING_CONSOLE_FORMAT` | json |
To override these default values with your own values, set them manually in your CR `env` list. Refer to Open Liberty's logging documentation for information on values you can set.
```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  env:
    - name: WLP_LOGGING_CONSOLE_FORMAT
      value: "DEV"
    - name: WLP_LOGGING_CONSOLE_SOURCE
      value: "messages,trace,accessLog"
    - name: WLP_LOGGING_CONSOLE_LOGLEVEL
      value: "error"
```
Open Liberty provides capabilities to delegate authentication to external providers. Your application users can log in using their existing accounts for social media providers such as Google, Facebook, LinkedIn, Twitter, GitHub, or any OpenID Connect (OIDC) or OAuth 2.0 client. The Open Liberty Operator allows you to easily configure and manage the single sign-on information for your applications.
Configure and build the application image with single sign-on by following the instructions here.
To specify sensitive information such as client IDs, client secrets, and tokens for the login providers you selected in the application image, create a `Secret` named `<OpenLibertyApplication_name>-olapp-sso` in the same namespace as the `OpenLibertyApplication` instance. In the sample snippets provided below, the `OpenLibertyApplication` is named `my-app`, hence the secret must be named `my-app-olapp-sso`. Both are in the same namespace, called `demo`.
The keys within the `Secret` must follow this naming pattern: `<provider_name>-<sensitive_field_name>`, for example, `google-clientSecret`. Instead of the `-` character in between, you can also use `.` or `_`, for example, `oauth2_userApiToken`.
The Open Liberty Operator watches for the creation and deletion of the SSO secret, as well as any updates to it. Keys that are added, updated, or removed from the Secret are passed down to the application automatically.
```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name of the secret should be in this format: <OpenLibertyApplication_name>-olapp-sso
  name: my-app-olapp-sso
  # Secret must be created in the same namespace as the OpenLibertyApplication instance
  namespace: demo
type: Opaque
data:
  # The keys must be in this format: <provider_name>-<sensitive_field_name>
  github-clientId: bW9vb29vb28=
  github-clientSecret: dGhlbGF1Z2hpbmdjb3c=
  twitter-consumerKey: bW9vb29vb28=
  twitter-consumerSecret: dGhlbGF1Z2hpbmdjb3c=
  oidc-clientId: bW9vb29vb28=
  oidc-clientSecret: dGhlbGF1Z2hpbmdjb3c=
  oauth2-clientId: bW9vb29vb28=
  oauth2-clientSecret: dGhlbGF1Z2hpbmdjb3c=
  oauth2-userApiToken: dGhlbGF1Z2hpbmdjb3c=
```
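Keep in mind that the values in a `Secret`'s `data` section are base64-encoded, not encrypted. As a quick sanity check, the placeholder values above can be produced and inspected with the standard `base64` utility (a minimal sketch; the raw strings are just the decoded placeholders from the snippet above):

```shell
# Encode a raw value for the Secret's data section
printf '%s' 'mooooooo' | base64                        # prints bW9vb29vb28=

# Decode an existing value to check what a Secret contains
printf '%s' 'dGhlbGF1Z2hpbmdjb3c=' | base64 --decode   # prints thelaughingcow
```

Use `printf '%s'` rather than `echo` so that a trailing newline does not sneak into the encoded value.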
Next, configure single sign-on in the `OpenLibertyApplication` CR. At minimum, `sso: {}` must be set in order for the operator to pass the values from the above `Secret` to your application. Refer to the parameters list for additional configurations for `sso`.
In addition, single sign-on requires a secured Service and a secured Route configured with the necessary certificates. Refer to Certificate Manager Integration for more information.

To automatically trust certificates from well-known identity providers, including social login providers such as Google and Facebook, set the environment variable `SEC_TLS_TRUSTDEFAULTCERTS` to `true`. To automatically trust certificates issued by the Kubernetes cluster, set the environment variable `SEC_IMPORT_K8S_CERTS` to `true`. Alternatively, you can include the necessary certificates manually when building the application image, or mount them using a volume when deploying your application.
In the following example, a self-signed certificate is used for secured Service and Route.
```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-app
  namespace: demo
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  env:
    - name: SEC_TLS_TRUSTDEFAULTCERTS
      value: "true"
    - name: SEC_IMPORT_K8S_CERTS
      value: "true"
  sso:
    redirectToRPHostAndPort: https://redirect-url.mycompany.com
    github:
      hostname: github.mycompany.com
    oauth2:
      - authorizationEndpoint: specify-required-value
        tokenEndpoint: specify-required-value
    oidc:
      - discoveryEndpoint: specify-required-value
  service:
    certificate:
      isCA: true
      issuerRef:
        kind: ClusterIssuer
        name: self-signed
    port: 9443
    type: ClusterIP
  expose: true
  route:
    certificate:
      isCA: true
      issuerRef:
        kind: ClusterIssuer
        name: self-signed
    termination: reencrypt
```
The operator can request a client ID and client secret from providers, rather than requiring them in advance. This can simplify deployment: the provider's administrator can supply the information needed for registration once, instead of supplying client IDs and secrets repeatedly. The callback URL from the provider to the client is supplied by the operator, so it doesn't need to be known in advance. Additional attributes named `<provider_name>-autoreg-<field_name>` are added to the Kubernetes secret, as shown below. First, the operator makes an HTTPS request to the `sso.oidc[].discoveryEndpoint` to obtain URLs for subsequent REST calls. Next, it makes additional REST calls to the provider to obtain a client ID and client secret. The Kubernetes secret is then updated with the obtained values. This is tested on OpenShift with Red Hat Single Sign-On (RH-SSO) and IBM Security Verify. See the following example.
```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name of the secret should be in this format: <OpenLibertyApplication_name>-olapp-sso
  name: my-app-olapp-sso
  # Secret must be created in the same namespace as the OpenLibertyApplication instance
  namespace: demo
type: Opaque
data:
  # base64 encode the data before entering it here.
  #
  # Leave the clientId and secret out; registration will obtain them and update their values.
  # oidc-clientId
  # oidc-clientSecret
  #
  # Reserved: <provider>-autoreg-RegisteredClientId and RegisteredClientSecret
  # are used by the operator to store a copy of the clientId and clientSecret values.
  #
  # Automatic registration attributes have -autoreg- after the provider name.
  #
  # Red Hat Single Sign-On requires an initial access token for registration.
  oidc-autoreg-initialAccessToken: xxxxxyyyyy
  #
  # IBM Security Verify requires a special clientId and clientSecret for registration.
  # oidc-autoreg-initialClientId: bW9vb29vb28=
  # oidc-autoreg-initialClientSecret: dGhlbGF1Z2hpbmdjb3c=
  #
  # Optional: grant types are the types of OAuth flows the resulting clients will allow.
  # Default is authorization_code,refresh_token. Specify a comma-separated list.
  # oidc-autoreg-grantTypes: base64 data goes here
  #
  # Optional: scopes limit the types of information about the user that the provider will return.
  # Default is openid,profile. Specify a comma-separated list.
  # oidc-autoreg-scopes: base64 data goes here
  #
  # Optional: to skip TLS certificate checking with the provider during registration,
  # specify insecureTLS as true. Default is false.
  # oidc-autoreg-insecureTLS: dHJ1ZQ==
```
Note: For RH-SSO, optionally set the `sso.oidc[].userNameAttribute` parameter to `preferred_username` to obtain the user ID that was used to log in. For IBM Security Verify, set the parameter to `given_name`.
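For example, with RH-SSO the relevant CR fragment might look like the following (a sketch; the `discoveryEndpoint` value is a placeholder to be replaced with your provider's endpoint):

```yaml
sso:
  oidc:
    - discoveryEndpoint: specify-required-value
      # RH-SSO: report the user ID that was used to log in
      userNameAttribute: preferred_username
      # For IBM Security Verify, use instead:
      # userNameAttribute: given_name
```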
You can use multiple OIDC and OAuth 2.0 providers to authenticate with. First, configure and build the application image with multiple OIDC and/or OAuth 2.0 providers. For example, set `ARG SEC_SSO_PROVIDERS="google oidc:provider1,provider2 oauth2:provider3,provider4"` in your Dockerfile. Each provider name must be unique and must contain only alphanumeric characters.
```yaml
sso:
  oidc:
    - id: provider1
      discoveryEndpoint: specify-required-value
    - id: provider2
      discoveryEndpoint: specify-required-value
  oauth2:
    - id: provider3
      authorizationEndpoint: specify-required-value
      tokenEndpoint: specify-required-value
    - id: provider4
      authorizationEndpoint: specify-required-value
      tokenEndpoint: specify-required-value
```
Next, use the provider name in the SSO `Secret` to specify its client ID and secret, for example, `provider1-clientSecret: dGhlbGF1Z2hpbmdjb3c=`. To configure a parameter for the corresponding provider in the `OpenLibertyApplication` CR, use the `sso.oidc[].id` or `sso.oauth2[].id` parameter, as in the following example.
```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name of the secret should be in this format: <OpenLibertyApplication_name>-olapp-sso
  name: my-app-olapp-sso
  # Secret must be created in the same namespace as the OpenLibertyApplication instance
  namespace: demo
type: Opaque
data:
  # The keys must be in this format: <provider_name>-<sensitive_field_name>
  google-clientId: xxxxxxxxxxxxx
  google-clientSecret: yyyyyyyyyyyyyy
  provider1-clientId: bW9vb29vb28=
  provider1-clientSecret: dGhlbGF1Z2hpbmdjb3c=
  provider2-autoreg-initialClientId: bW9vb29vb28=
  provider2-autoreg-initialClientSecret: dGhlbGF1Z2hpbmdjb3c=
  provider3-clientId: bW9vb29vb28=
  provider3-clientSecret: dGhlbGF1Z2hpbmdjb3c=
  provider4-clientId: bW9vb29vb28=
  provider4-clientSecret: dGhlbGF1Z2hpbmdjb3c=
```
The operator makes it easy to use a single storage for serviceability-related operations, such as gathering server traces or dumps (see Day-2 Operations). The single storage is shared by all Pods of an `OpenLibertyApplication` instance, so you don't need to mount a separate storage for each Pod. Your cluster must be configured to automatically bind the `PersistentVolumeClaim` (PVC) to a `PersistentVolume`, or you must bind it manually.
You can specify the size of the persisted storage to request using the `serviceability.size` parameter. You can also specify which storage class to request using the `serviceability.storageClassName` parameter if you don't want to use the default storage class. The operator automatically creates a `PersistentVolumeClaim` with the specified size and the access modes `ReadWriteMany` and `ReadWriteOnce`. It is mounted at `/serviceability` inside all Pods of the `OpenLibertyApplication` instance.
```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceability:
    size: 1Gi
    storageClassName: nfs
```
You can also create the `PersistentVolumeClaim` yourself and specify its name using the `serviceability.volumeClaimName` parameter. You must create it in the same namespace as the `OpenLibertyApplication` instance.
```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceability:
    volumeClaimName: my-pvc
```
Once a `PersistentVolumeClaim` is created by the operator, its size cannot be updated. It is not deleted when serviceability is disabled or when the `OpenLibertyApplication` is deleted.
See the troubleshooting guide for information on how to investigate and resolve deployment problems.
- The corresponding `OpenLibertyApplication` must already have storage for serviceability configured in order to use the day-2 operations.
- The custom resource (CR) for a day-2 operation must be created in the same namespace as the `OpenLibertyApplication`.
To allow auto-discovery of supported day-2 operations from external tools, the following annotation has been added to the `OpenLibertyApplication` CRD:

```yaml
annotations:
  openliberty.io/day2operations: OpenLibertyTrace,OpenLibertyDump
```

Additionally, each day-2 operation CRD has the following annotation, which indicates the Kubernetes `Kind`(s) the operation applies to:

```yaml
annotations:
  day2operation.openliberty.io/targetKinds: Pod
```
You can request a snapshot of the server status, including different types of server dumps, from an instance of an Open Liberty server running inside a `Pod` by using the Open Liberty Operator and the `OpenLibertyDump` custom resource (CR). To use this feature, the `OpenLibertyApplication` needs to have storage for serviceability already configured. Also, the `OpenLibertyDump` CR must be created in the same namespace as the `Pod` to operate on.
The configurable parameters are:

| Parameter | Description |
|-----------|-------------|
| `podName` | The name of the Pod, which must be in the same namespace as the `OpenLibertyDump` CR. |
| `include` | Optional. The list of memory dump types to request: `thread`, `heap`, `system`. |
Example including heap and thread dump:

```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyDump
metadata:
  name: example-dump
spec:
  podName: Specify_Pod_Name_Here
  include:
    - thread
    - heap
```
The dump file name is added to the `OpenLibertyDump` CR status, and the file is stored in the serviceability folder using a format such as `/serviceability/NAMESPACE/POD_NAME/TIMESTAMP.zip`. Once the dump has started, the CR cannot be reused to take more dumps; a new CR needs to be created for each server dump.
You can check the status of a dump operation using the `status` field inside the CR YAML. You can also run the command `oc get oldump -o wide` to see the status of all dump operations in the current namespace.
Note: System dump might not work on certain Kubernetes versions, such as OpenShift 4.x.
You can request server traces from an instance of an Open Liberty server running inside a `Pod` by using the Open Liberty Operator and the `OpenLibertyTrace` custom resource (CR). To use this feature, the `OpenLibertyApplication` must already have storage for serviceability configured. Also, the `OpenLibertyTrace` CR must be created in the same namespace as the `Pod` to operate on.
The configurable parameters are:

| Parameter | Description |
|-----------|-------------|
| `podName` | The name of the Pod, which must be in the same namespace as the `OpenLibertyTrace` CR. |
| `traceSpecification` | The trace string to be used to selectively enable trace. The default is `*=info`. |
| `maxFileSize` | The maximum size (in MB) that a log file can reach before it is rolled. To disable this attribute, set the value to `0`. By default, the value is `20`. This setting does not apply to the `console.log` file. |
| `maxFiles` | If an enforced maximum file size exists, this setting is used to determine how many of each of the log files are kept. This setting also applies to the number of exception logs that summarize exceptions that occurred on any particular day. |
| `disable` | Set to `true` to stop tracing. |
Example:

```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyTrace
metadata:
  name: example-trace
spec:
  podName: Specify_Pod_Name_Here
  traceSpecification: "*=info:com.ibm.ws.webcontainer*=all"
  maxFileSize: 20
  maxFiles: 5
```
Generated trace files, along with `messages.log` files, are stored in the folder `/serviceability/NAMESPACE/POD_NAME/`.
Once the trace has started, it can be stopped by setting the `disable` parameter to `true`. Deleting the CR also stops the tracing. Changing the `podName` first stops the tracing on the old Pod before enabling traces on the new Pod.
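For instance, to stop a running trace without deleting the CR, the example trace CR from above can be updated with `disable` set (a sketch reusing the example values):

```yaml
apiVersion: openliberty.io/v1beta1
kind: OpenLibertyTrace
metadata:
  name: example-trace
spec:
  podName: Specify_Pod_Name_Here
  traceSpecification: "*=info:com.ibm.ws.webcontainer*=all"
  disable: true
```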
You can check the status of a trace operation using the `status` field inside the CR YAML. You can also run the command `oc get oltrace -o wide` to see the status of all trace operations in the current namespace.
Note: The operator doesn't monitor the Pods. If the Pod is restarted or deleted after the trace is enabled, tracing isn't automatically re-enabled when the Pod comes back up. In that case, the status of the trace operation might not correctly report whether the trace is enabled.