Helm Chart Quality Assurance at Cloudentity
Like every other part of the development process, Helm chart development must meet quality standards to avoid issues caused by misconfiguration or security vulnerabilities. We handle this in several ways: we start with static checks against predefined rules based on best practices and mandatory requirements, follow up with custom checks implemented by us, and finally use tools that actively scan development and production environments deployed with our charts.
Cloudentity provides Helm Charts for deploying its platform, as well as for connecting the Istio Authorizer or Open Finance mock applications. Read on to understand how we ensure the quality of our charts.
Helm Chart Quality Assurance Tools
We use the following static code analysis tools: Helm Lint, kube-score, Kubeval, and Trivy.
To complement the above, we have a set of custom checks built with Conftest, config-lint, and Polaris.
Static Code Analysis
Helm Lint
Helm Lint is a Helm subcommand that examines a chart and verifies that it would install correctly. It reports fundamental issues such as typos or misconfiguration.
$ helm lint .
==> Linting .
[ERROR] Chart.yaml: version is required
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: validation: chart.metadata.version is required
[ERROR] : unable to load chart
validation: chart.metadata.version is required
Error: 1 chart(s) linted, 1 chart(s) failed
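The version error above is fixed by adding the required field to Chart.yaml. A minimal sketch (the chart name and values are hypothetical):

```yaml
apiVersion: v2
name: kong-authorizer        # hypothetical chart name
description: Example chart metadata
version: 2.3.0               # required by Helm; its absence caused the lint error above
```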
kube-score
kube-score is a static analysis tool that checks Kubernetes files for reliability and security issues. For example, it can flag denial-of-service vulnerabilities caused by missing resource limits.
Follow the instructions on the project's GitHub page to install kube-score. You can expect the following output from the tool:
$ helm template . | docker run -i -v $(pwd):/project zegl/kube-score:latest score --output-format ci -
[OK] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet
[OK] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet
[CRITICAL] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet: (kong-authorizer) Ephemeral Storage limit is not set
[CRITICAL] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet: The pod does not have a matching NetworkPolicy
[OK] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet
[OK] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet
[CRITICAL] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet: (kong-authorizer) ImagePullPolicy is not set to Always
[CRITICAL] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet: (kong-authorizer) The container is running with a low user ID
[CRITICAL] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet: (kong-authorizer) The container is running with a low group ID
[OK] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet
[OK] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet
[CRITICAL] RELEASE-NAME-kong-authorizer apps/v1/DaemonSet: Container has the same readiness and liveness probe
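As an illustration of clearing one of the CRITICAL findings above, the missing ephemeral-storage limit can be declared in the container's resources block; the value below is a hypothetical sketch, not a recommendation:

```yaml
resources:
  limits:
    ephemeral-storage: 1Gi   # hypothetical limit; size it for your workload
```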
Kubeval
Kubeval is a tool that validates Kubernetes configuration files against schemas derived from the Kubernetes OpenAPI specification. It helps you locate misconfigured resources, such as values that fall outside the schema.
Follow the instructions on the project website to install Kubeval. You can expect the following output from the tool:
$ helm template . | kubeval --skip-kinds AuthorizationPolicy,EnvoyFilter
PASS - istio-authorizer/templates/serviceaccount.yaml contains a valid ServiceAccount (RELEASE-NAME-istio-authorizer)
PASS - istio-authorizer/templates/dockerregistry.yaml contains a valid Secret (docker.cloudentity.io)
PASS - istio-authorizer/templates/secret.yaml contains a valid Secret (RELEASE-NAME-istio-authorizer)
PASS - istio-authorizer/templates/configmap.yaml contains a valid ConfigMap (RELEASE-NAME-istio-authorizer)
PASS - istio-authorizer/templates/clusterrole.yaml contains a valid ClusterRole (RELEASE-NAME-istio-authorizer)
PASS - istio-authorizer/templates/clusterrolebinding.yaml contains a valid ClusterRoleBinding (default.RELEASE-NAME-istio-authorizer)
PASS - istio-authorizer/templates/clusterrolebinding.yaml contains a valid ClusterRoleBinding (default.RELEASE-NAME-istio-authorizer-auth-delegator)
PASS - istio-authorizer/templates/service.yaml contains a valid Service (RELEASE-NAME-istio-authorizer)
PASS - istio-authorizer/templates/deployment.yaml contains a valid DaemonSet (RELEASE-NAME-istio-authorizer)
WARN - istio-authorizer/templates/policy.yaml containing a AuthorizationPolicy (default.acp-istio-authorizer-policy) was not validated against a schema
WARN - istio-authorizer/templates/envoyfilter.yaml containing a EnvoyFilter (default.RELEASE-NAME-istio-authorizer) was not validated against a schema
PASS - istio-authorizer/templates/tests/allow-request.yaml contains a valid Pod (RELEASE-NAME-allow-request-validation-test)
PASS - istio-authorizer/templates/tests/block-request.yaml contains a valid Pod (RELEASE-NAME-block-access-validation-test)
Trivy
Trivy is a versatile security scanner that is not limited to static analysis. Its image scanning can spot outdated libraries and tools inside your images, and it can generate an SBOM (Software Bill of Materials) that you may need to provide for auditing purposes. Finally, it also supports active scanning of a running Kubernetes cluster.
Follow the instructions on the project's GitHub page to install Trivy. You can expect the following report from the tool:
$ docker run -v /home/lucas/repo/cloudentity/acp/acp-authorizers/kong-authorizer:/project aquasec/trivy:latest config --ignorefile /project/.trivyignore --exit-code 1 /project
2022-06-24T13:24:20.146Z INFO Misconfiguration scanning is enabled
2022-06-24T13:24:20.728Z INFO Detected config files: 6
templates/deployment.yaml (helm)
================================
Tests: 31 (SUCCESSES: 27, FAILURES: 4, EXCEPTIONS: 0)
Failures: 4 (UNKNOWN: 0, LOW: 4, MEDIUM: 0, HIGH: 0, CRITICAL: 0)
LOW: Container 'kong-authorizer' of DaemonSet 'kong-authorizer' should set 'resources.limits.cpu'
════════════════════════════════════════
Enforcing CPU limits prevents DoS via resource exhaustion.
See https://avd.aquasec.com/misconfig/ksv011
────────────────────────────────────────
templates/deployment.yaml:34-86
────────────────────────────────────────
34 ┌ - name: kong-authorizer
35 │ securityContext:
36 │ allowPrivilegeEscalation: false
37 │ capabilities:
38 │ drop:
39 │ - ALL
40 │ readOnlyRootFilesystem: true
41 │ image: "docker.cloudentity.io/kong-authorizer:2.3.0"
42 └ imagePullPolicy: IfNotPresent
..
────────────────────────────────────────
LOW: Container 'kong-authorizer' of DaemonSet 'kong-authorizer' should set 'resources.requests.cpu'
════════════════════════════════════════
When containers have resource requests specified, the scheduler can make better decisions about which nodes to place pods on and how to deal with resource contention.
See https://avd.aquasec.com/misconfig/ksv015
────────────────────────────────────────
templates/deployment.yaml:34-86
────────────────────────────────────────
34 ┌ - name: kong-authorizer
35 │ securityContext:
36 │ allowPrivilegeEscalation: false
37 │ capabilities:
38 │ drop:
39 │ - ALL
40 │ readOnlyRootFilesystem: true
41 │ image: "docker.cloudentity.io/kong-authorizer:2.3.0"
42 └ imagePullPolicy: IfNotPresent
..
────────────────────────────────────────
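Both LOW findings point at the same missing resources block; a container spec fragment that would satisfy them could look like the sketch below (the values are hypothetical):

```yaml
resources:
  requests:
    cpu: 100m    # helps the scheduler place the pod (KSV015)
  limits:
    cpu: 500m    # caps CPU usage to limit resource exhaustion (KSV011)
```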
Custom Tests Implementation
The previous section presented frameworks centered mostly on checking your files against predefined rules. The tools in this section, by contrast, let you test charts against custom rules. The most significant difference among them is the language used to implement the tests.
Conftest
Conftest is a framework for implementing custom tests in Rego, the Open Policy Agent (OPA) policy language, which you can also find in access policy definitions for Cloudentity and in Gatekeeper for Kubernetes cluster policy definitions.
Example OPA policy:
package main

deny[msg] {
    input.kind == "DaemonSet"
    repo := "docker.cloudentity.io/"
    image := input.spec.template.spec.containers[_].image
    not startswith(image, repo)
    msg := sprintf("Image must be sourced from docker.cloudentity.io: %v", [image])
}
Example output from Conftest, using the policy above:
$ conftest test deployment.yaml
FAIL - deployment.yaml - main - Image must be sourced from docker.cloudentity.io: docker.cloudentity.com/kong-authorizer:2.3.0
8 tests, 7 passed, 0 warnings, 1 failure, 0 exceptions
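To sketch how further checks can be expressed in Rego, here is a hypothetical rule (not part of our actual policy set) that fails any DaemonSet container missing a CPU limit:

```rego
package main

# Hypothetical rule: every DaemonSet container must declare a CPU limit.
deny[msg] {
    input.kind == "DaemonSet"
    container := input.spec.template.spec.containers[_]
    not container.resources.limits.cpu
    msg := sprintf("Container %v must set resources.limits.cpu", [container.name])
}
```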
config-lint
config-lint defines policies in YAML instead of Rego. YAML is common in many DevOps tools and frameworks, so DevOps engineers may feel more at home with it. You can install config-lint by following the instructions on its GitHub page.
Sample policy defined in config-lint:
version: 1
description: Custom rules for Kubernetes
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: DEPLOYMENT_IMAGE_REPOSITORY
    severity: FAILURE
    message: DaemonSet must use a valid image repository
    resource: DaemonSet
    assertions:
      - every:
          key: spec.template.spec.containers
          expressions:
            - key: image
              op: starts-with
              value: "docker.cloudentity.io/"
Sample report from config-lint (rules.yaml contains the definition above):
$ docker run -v $(pwd):/project stelligent/config-lint -rules /project/rules.yaml /project/deployment.yaml
[
  {
    "AssertionMessage": "Every expression fails: And expression fails: image does not start with docker.cloudentity.io/",
    "Category": "",
    "CreatedAt": "2022-06-24T13:59:29Z",
    "Filename": "/project/deployment.yaml",
    "LineNumber": 0,
    "ResourceID": "RELEASE-NAME-kong-authorizer",
    "ResourceType": "DaemonSet",
    "RuleID": "DEPLOYMENT_IMAGE_REPOSITORY",
    "RuleMessage": "DaemonSet must use a valid image repository",
    "Status": "FAILURE"
  }
]
Polaris
Similar to config-lint, Polaris lets you write custom checks in YAML. It also ships with a set of rules based on best practices and can be deployed inside the cluster for auditing purposes. You can install Polaris as a Kubernetes deployment, with Helm, or as a binary from its GitHub page.
Sample policy:
checks:
  imageRegistry: warning
customChecks:
  imageRegistry:
    successMessage: Image comes from allowed registry
    failureMessage: Image should be only from allowed registry
    category: Security
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^cloudentity.io/.+$
For information on how to implement various custom checks, visit the Polaris documentation.
Sample output from Polaris (config.yaml is the policy above):
$ ./polaris audit --config config.yaml --audit-path deployment.yaml
{
  "PolarisOutputVersion": "1.0",
  "AuditTime": "2022-06-29T09:26:00+02:00",
  "SourceType": "Path",
  "SourceName": "deployment.yaml",
  "DisplayName": "deployment.yaml",
  "ClusterInfo": {
    "Version": "unknown",
    "Nodes": 0,
    "Pods": 0,
    "Namespaces": 0,
    "Controllers": 1
  },
  "Results": [
    {
      "Name": "RELEASE-NAME-kong-authorizer",
      "Namespace": "",
      "Kind": "ServiceAccount",
      "Results": {},
      "PodResult": null,
      "CreatedTime": "0001-01-01T00:00:00Z"
    },
    {
      "Name": "docker.cloudentity.io",
      "Namespace": "",
      "Kind": "Secret",
      "Results": {},
      "PodResult": null,
      "CreatedTime": "0001-01-01T00:00:00Z"
    },
    {
      "Name": "RELEASE-NAME-kong-authorizer",
      "Namespace": "",
      "Kind": "Secret",
      "Results": {},
      "PodResult": null,
      "CreatedTime": "0001-01-01T00:00:00Z"
    },
    {
      "Name": "RELEASE-NAME-kong-authorizer",
      "Namespace": "",
      "Kind": "ConfigMap",
      "Results": {},
      "PodResult": null,
      "CreatedTime": "0001-01-01T00:00:00Z"
    },
    {
      "Name": "RELEASE-NAME-kong-authorizer",
      "Namespace": "",
      "Kind": "Service",
      "Results": {},
      "PodResult": null,
      "CreatedTime": "0001-01-01T00:00:00Z"
    },
    {
      "Name": "RELEASE-NAME-kong-authorizer",
      "Namespace": "",
      "Kind": "DaemonSet",
      "Results": {},
      "PodResult": {
        "Name": "",
        "Results": {},
        "ContainerResults": [
          {
            "Name": "kong-authorizer",
            "Results": {
              "imageRegistry": {
                "ID": "imageRegistry",
                "Message": "Image should be only from allowed registry",
                "Details": null,
                "Success": false,
                "Severity": "warning",
                "Category": "Security",
                "Mutations": null,
                "Comments": null
              }
            }
          }
        ]
      },
      "CreatedTime": "0001-01-01T00:00:00Z"
    }
  ],
  "Score": 0
}
Conclusion
As you can see, Cloudentity ensures that its Helm Charts are error-free and safe to use by running a number of quality checks before deploying them.
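To tie the stages together, the checks described in this post can be chained in CI so that any failure blocks the chart from being published. The fragment below is a hypothetical GitHub Actions sketch, not our actual pipeline:

```yaml
# Hypothetical CI job: fail the build if any static check rejects the chart.
jobs:
  chart-qa:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Lint chart
        run: helm lint .
      - name: Render and validate manifests
        run: |
          helm template . > manifests.yaml
          kubeval --skip-kinds AuthorizationPolicy,EnvoyFilter manifests.yaml
          kube-score score manifests.yaml
      - name: Scan for misconfigurations
        run: trivy config --exit-code 1 .
      - name: Custom policies
        run: conftest test manifests.yaml
```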
Like what you see? Register for free to get access to a Cloudentity tenant and start exploring our platform!