Setting Up Effective Log Management in Kubernetes

Installing and configuring an effective logging system for Cloudentity stack in any environment.

Logging at Cloudentity

At Cloudentity, we believe that logging is essential for security audits and incident investigations. By analyzing logs, security teams can identify and respond to suspicious activities, detect data breaches, and comply with regulatory standards that require data access and modification tracking.

This article describes how to configure logging for Cloudentity in both on-premises installations and the SaaS solution.

Customer Deployed Installation

For on-premises installations, we recommend configuring Elasticsearch in Kubernetes (using Elastic Cloud on Kubernetes, ECK) to collect and analyze logs and traces (via OpenTelemetry, OTEL). This section includes instructions on configuring and deploying ECK, setting up OpenTelemetry to ingest logs, and visualizing and analyzing the logs collected in Elasticsearch.

Note

For a complete and ready-to-use solution, consider exploring our Cloudentity on Kubernetes via the GitOps approach. Get started with our quickstart guide, and delve deeper with the deployment configuration details.

Prerequisites

  1. Kubernetes cluster installed
  2. Cloudentity deployed on the cluster
  3. ECK with Elasticsearch and Kibana installed
  4. Helm installed and configured to access the cluster
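
You can quickly verify these prerequisites from the command line. The sketch below assumes ECK has already registered its Elasticsearch and Kibana custom resources:

    # Confirm cluster access and the Helm client version
    kubectl cluster-info
    helm version

    # Check that the ECK-managed Elasticsearch and Kibana resources are healthy
    kubectl get elasticsearch,kibana --all-namespaces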

Configuration

To collect Cloudentity logs, install Elastic Filebeat, a lightweight agent for forwarding and centralizing log data.

Each Kubernetes node should run an instance of Filebeat, so the Filebeat pods should be backed by a DaemonSet object. Follow these steps to install Filebeat using the Helm chart:

  1. Add the Elastic Helm charts repo:

    helm repo add elastic https://helm.elastic.co
    
  2. Update Helm repositories:

    helm repo update
    
  3. Prepare the values.yaml configuration file. Below is a minimal version of the file based on our experience:

    daemonset:
      filebeatConfig:
        filebeat.yml: |
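          # Read logs from Cloudentity (acp*) containers and decode each
          # line's JSON payload into the `json` field.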
          filebeat.inputs:
          - type: container
            paths:
              - /var/log/containers/acp*.log
            processors:
            - decode_json_fields:
                fields: ["message"]
                target: json
                max_depth: 1
                add_error_key: true
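          # Enrich each event with host and Kubernetes metadata, and copy the
          # container name into event.dataset and app.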
          processors:
          - add_host_metadata:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"
          - copy_fields:
              fields:
                - from: kubernetes.container.name
                  to: event.dataset
                - from: kubernetes.container.name
                  to: app
              fail_on_error: false
              ignore_missing: true
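          # Map Cloudentity's JSON log keys to Elastic Common Schema (ECS) fields.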
          - rename:
              fields:
              - from: input.type
                to: host.type
              - from: json.cause
                to: error.type
              - from: json.code
                to: event.code
              - from: json.description
                to: event.type
              - from: json.details
                to: event.reason
              - from: json.duration
                to: event.duration
              - from: json.error
                to: error.message
              - from: json.hint
                to: event.kind
              - from: json.host
                to: host.container.ip
              - from: json.ip
                to: client.ip
              - from: json.level
                to: log.level
              - from: json.method
                to: http.request.method
              - from: json.msg
                to: event.action
              - from: json.name
                to: service.name
              - from: json.path
                to: url.path
              - from: json.size
                to: http.response.bytes
              - from: json.stack
                to: error.stack_trace
              - from: json.status
                to: http.response.status_code
              - from: json.sub
                to: user.id
              - from: json.tenantID
                to: tenant.id
              - from: json.traceID
                to: trace.id
              - from: json.userAgent
                to: user_agent.original
              ignore_missing: true
              fail_on_error: false
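          # Cast numeric fields to long so Elasticsearch indexes them consistently.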
          - convert:
              fields:
              - from: event.duration
                type: long
              - from: http.request.bytes
                type: long
              - from: http.response.body.bytes
                type: long
              - from: http.response.status_code
                type: long
              - from: error.code
                type: long
              ignore_missing: true
              fail_on_error: false
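          # Ship events to Elasticsearch over HTTPS; replace the placeholders with
          # your Elasticsearch service address and credentials.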
          output.elasticsearch:
            hosts: ["<elasticsearch svc address>:9200"]
            protocol: "https"
            username: '<elasticsearch username>'
            password: '<elasticsearch password>'
            ssl:
              certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]      
    

    Note: Refer to the official filebeat.yml reference page to learn more about the available options for this file.

  4. Install Filebeat in a dedicated logging namespace:

    helm install filebeat-release --values values.yaml --namespace logging --create-namespace elastic/filebeat
    
  5. Verify installation:

    helm list --all
    kubectl get pods --namespace logging
    

    The output of the above commands should show that the Helm chart is installed and all Filebeat pods are up and running.
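
    For a deeper check, you can confirm that the DaemonSet schedules one Filebeat pod per node and inspect a pod's logs for errors. The pod name below is a placeholder; copy a real one from the kubectl get pods output:

    # One Filebeat pod should be desired and ready per schedulable node
    kubectl get daemonset --namespace logging

    # Inspect a Filebeat pod's logs for connection or parsing errors
    kubectl logs --namespace logging <filebeat pod name> --tail=20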

Hardening

In a production environment, the Elasticsearch credentials defined in the values.yaml file should be stored in a Kubernetes Secret and referenced from that file. Additionally, SSL verification should be enabled, and the CA certificate of Elasticsearch should be provided.
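
A minimal sketch of this setup is shown below. It assumes the Filebeat Helm chart's daemonset.extraEnvs and daemonset.secretMounts values, a Secret named elasticsearch-credentials holding the username and password, and a Secret named elasticsearch-ca holding the CA certificate; all of these names are placeholders for your environment.

First, create the credentials Secret in the logging namespace:

    kubectl create secret generic elasticsearch-credentials \
      --namespace logging \
      --from-literal=username='<elasticsearch username>' \
      --from-literal=password='<elasticsearch password>'

Then reference the Secrets from values.yaml, resolving the credentials through environment variables instead of storing them in the file:

    daemonset:
      # Inject the credentials from the Secret as environment variables
      extraEnvs:
        - name: ELASTICSEARCH_USERNAME
          valueFrom:
            secretKeyRef:
              name: elasticsearch-credentials
              key: username
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-credentials
              key: password
      # Mount the CA certificate from the elasticsearch-ca Secret
      secretMounts:
        - name: elasticsearch-ca
          secretName: elasticsearch-ca
          path: /usr/share/filebeat/certs
      filebeatConfig:
        filebeat.yml: |
          # ... inputs and processors as above ...
          output.elasticsearch:
            hosts: ["<elasticsearch svc address>:9200"]
            protocol: "https"
            username: '${ELASTICSEARCH_USERNAME}'
            password: '${ELASTICSEARCH_PASSWORD}'
            ssl:
              # Verify the server certificate against the mounted CA
              verification_mode: full
              certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]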

Updated: Oct 27, 2023