Configuring OpenShift 4.2 Cluster Logging in a Private Cloud environment
The OpenShift Container Platform (OCP) 4.2 installer doesn’t configure cluster-wide logging; it is left to an administrator to configure after the cluster is up and running. This short guide details the steps to configure cluster-wide logging using the Red Hat-provided Cluster Logging Operator on dedicated infrastructure nodes, with storage provided by rook-ceph.
Assumptions
It is assumed that an OCP 4.2 cluster has already been installed and configured in a VMware private cloud environment.
It is assumed that rook-ceph has been configured and that the following storage classes are available:
oc get sc
NAME                        PROVISIONER                     AGE
csi-cephfs                  rook-ceph.cephfs.csi.ceph.com   3d15h
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com      3d15h
Installation of Cluster-wide Logging
Label the `infrastructure` nodes as shown below. These labels will be used later to ensure that the logging components run on isolated nodes.
oc label node infra1 role=infra-node
oc label node infra2 role=infra-node
oc label node infra3 role=infra-node
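To confirm that the labels were applied, you can list the nodes by label; assuming the infrastructure nodes are named infra1 through infra3 as above, all three should appear in the output:

```shell
oc get nodes -l role=infra-node
```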
Copy the text shown below to a file named eo-namespace.yaml. This YAML will be used to create a namespace for the Elasticsearch Operator
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
Execute oc create -f eo-namespace.yaml
Copy the text shown below to a file named `clo-namespace.yaml`. This YAML will be used to create a namespace for Cluster Logging Operator
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-logging: "true"
    openshift.io/cluster-monitoring: "true"
Execute oc create -f clo-namespace.yaml
Copy the text shown below to a file named eo-eg.yaml. This YAML will be used to create the OperatorGroup for the Elasticsearch Operator
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
Execute oc create -f eo-eg.yaml
Copy the text shown below to a file named eo-sub.yaml. This YAML will be used to create a Subscription to subscribe the openshift-operators-redhat namespace to the Elasticsearch Operator
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  generateName: "elasticsearch-"
  namespace: "openshift-operators-redhat"
spec:
  channel: "4.2"
  installPlanApproval: "Automatic"
  source: "redhat-operators"
  sourceNamespace: "openshift-marketplace"
  name: "elasticsearch-operator"
Execute oc create -f eo-sub.yaml
Execute oc project openshift-operators-redhat to change to the openshift-operators-redhat project
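You can verify that the Elasticsearch Operator installed successfully by checking its ClusterServiceVersion; the CSV name and version in the output will vary with the exact operator release, but its phase should reach Succeeded:

```shell
oc get csv -n openshift-operators-redhat
```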
Copy the text shown below to a file named eo-rbac.yaml. This YAML will be used to create an RBAC definition to grant Prometheus permission to access the openshift-operators-redhat namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: openshift-operators-redhat
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: openshift-operators-redhat
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: openshift-operators-redhat
Execute oc create -f eo-rbac.yaml -n openshift-operators-redhat
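To confirm that both the Role and the RoleBinding were created, you can list them in the namespace:

```shell
oc get role,rolebinding -n openshift-operators-redhat
```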
Using the OCP web console, click Operators → OperatorHub
Select Logging and Tracing from the left side menu and then select Cluster Logging from the list of available Operators, and click Install.
On the Create Operator Subscription page, select openshift-logging under A specific namespace on the cluster, change the Update Channel to 4.2, and then click Subscribe.
Navigate to Operators → Installed Operators and wait for the Cluster Logging Operator to show the InstallSucceeded status.
Navigate to the Administration → Custom Resource Definitions page and click on the ClusterLogging CRD.
Click Instances and then click Create Cluster Logging
Replace the YAML with the following. Note that we will set the NodeSelector later as it isn’t supported at this stage:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: rook-ceph-block
        size: 200G
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
Click Create
Wait a few moments and then click Instances again to refresh the page. Click on instance
Click YAML to edit the configuration and add the NodeSelector.
Add the following text to the curation, logStore and visualization sections:
nodeSelector:
  kubernetes.io/os: linux
  role: infra-node
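For example, after the edit the logStore section would look like the fragment below, with the nodeSelector nested under elasticsearch; the visualization and curation sections follow the same pattern under kibana and curator respectively:

```yaml
logStore:
  type: "elasticsearch"
  elasticsearch:
    nodeCount: 3
    nodeSelector:
      kubernetes.io/os: linux
      role: infra-node
    storage:
      storageClassName: rook-ceph-block
      size: 200G
    redundancyPolicy: "SingleRedundancy"
```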
Wait for the pods in the openshift-logging namespace to reach the Running state:
oc get pods -n openshift-logging
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-79d8f55b67-zwxlz       1/1     Running     0          2d3h
curator-1579836600-l2s7z                        0/1     Completed   0          10h
elasticsearch-cdm-xnixjxtb-1-5dbcf85db4-5kgvx   2/2     Running     0          2d3h
elasticsearch-cdm-xnixjxtb-2-69d678b9bf-g2847   2/2     Running     0          2d3h
elasticsearch-cdm-xnixjxtb-3-68f4b89fd8-bwpwt   2/2     Running     0          2d3h
fluentd-2fmvf                                   1/1     Running     0          2d3h
fluentd-2qhjd                                   1/1     Running     0          2d3h
fluentd-gjfgf                                   1/1     Running     0          2d3h
fluentd-hbd8v                                   1/1     Running     0          2d3h
fluentd-nk72r                                   1/1     Running     0          2d3h
fluentd-q2rfn                                   1/1     Running     0          2d3h
fluentd-s2c6v                                   1/1     Running     0          2d3h
fluentd-sf52w                                   1/1     Running     0          2d3h
fluentd-shjk2                                   1/1     Running     0          2d3h
fluentd-sskns                                   1/1     Running     0          2d3h
fluentd-sw5cf                                   1/1     Running     0          2d3h
fluentd-vl2q7                                   1/1     Running     0          2d3h
fluentd-zcmmq                                   1/1     Running     0          2d3h
kibana-65f44f48c9-xj9f6                         2/2     Running     0          2d3h
Using the OCP web console, click Networking → Routes and select the openshift-logging project.
Click on the kibana route to access Kibana and view your logs
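Alternatively, you can retrieve the Kibana hostname from the command line:

```shell
oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'
```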
Conclusion
You’ve now installed the Red Hat provided OpenShift Cluster Logging Operator and configured it to run on dedicated infrastructure nodes.