Validating Kubernetes YAML for best practices and policies

This article comes from Rancher Labs

Kubernetes workloads are most commonly defined as YAML files. One of the challenges of YAML is that it is quite difficult to express constraints or relationships between manifest files.

What if you want to ensure that all images deployed to the cluster are pulled from a trusted registry? How can you prevent Deployments without PodDisruptionBudgets from being submitted to the cluster?

Integrating static checks lets you catch errors and policy violations earlier in the development lifecycle. And with improved assurance around the validity and security of resource definitions, you can trust that production workloads follow best practices.

The ecosystem of static checking tools for Kubernetes YAML files can be divided into the following categories:

  • API validators: tools that validate a given YAML manifest against the Kubernetes API server.

  • Built-in checkers: tools that ship with opinionated checks for security, best practices, and so on.

  • Custom validators: tools that let you write custom checks in languages such as Rego and JavaScript.

In this article, you will learn and compare six different tools:

  • Kubeval

  • Kube-score

  • Config-lint

  • Copper

  • Conftest

  • Polaris

Let's start!

Verify Deployment

Before you start comparing tools, let's set a baseline: the following manifest describes a Deployment and a Service exposing it. How many best-practice violations can you spot?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  ports:
  - port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    app: http-echo

We will use this YAML file to compare the different tools.

You can find the above manifest (base-valid.yaml), along with the other manifests mentioned in this article, in the Git repository:

https://github.com/amitsaha/kubernetes-static-checkers-demo

The manifest describes a web application that always replies with a "hello-world" message on port 5678.

You can deploy the application in the following ways:

kubectl apply -f hello-world.yaml

You can test it with the following command:

kubectl port-forward svc/http-echo 8080:5678

You can visit http://localhost:8080 and confirm that the application runs as expected. But does it follow best practices?

Let's find out.

Kubeval

Home page: https://www.kubeval.com/

Kubeval's premise is that any interaction with Kubernetes goes through its REST API. Therefore, you can use the API schema to validate whether a given YAML input conforms to it. Let's look at an example.

You can install kubeval according to the instructions on the project website. The latest version at the time of writing is 0.15.0. After the installation, let's run it with the manifest discussed above:

kubeval base-valid.yaml
PASS - base-valid.yaml contains a valid Deployment (http-echo)
PASS - base-valid.yaml contains a valid Service (http-echo)

When successful, kubeval exits with code 0. You can verify the exit code as follows:

echo $?
0

Now let's test kubeval with another manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  ports:
  - port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    app: http-echo

Can you find the problem?

Let's run kubeval:

kubeval kubeval-invalid.yaml
WARN - kubeval-invalid.yaml contains an invalid Deployment (http-echo) - selector: selector is required
PASS - kubeval-invalid.yaml contains a valid Service (http-echo)

# let's check the return value
echo $?
1

The resource failed validation. A Deployment using the apps/v1 API version must contain a selector that matches the Pod template labels. The manifest above does not contain a selector, so running kubeval against it reported an error and a non-zero exit code.
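
The kind of schema check kubeval performs can be illustrated with a toy validator. This sketch is not kubeval's implementation; it only encodes the single rule discussed here (an apps/v1 Deployment must declare spec.selector):

```python
def validate_deployment(manifest: dict) -> list:
    """Toy schema check: an apps/v1 Deployment must declare spec.selector."""
    errors = []
    if manifest.get("apiVersion") == "apps/v1" and manifest.get("kind") == "Deployment":
        if "selector" not in manifest.get("spec", {}):
            errors.append("spec: selector is required")
    return errors

# The invalid manifest from above, as a Python dict.
invalid = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "http-echo"},
    "spec": {"replicas": 2, "template": {"metadata": {"labels": {"app": "http-echo"}}}},
}
print(validate_deployment(invalid))  # ['spec: selector is required']
```

The real tool does this against the full Kubernetes OpenAPI-derived JSON schemas rather than a hand-written rule.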

You may wonder what happens when you run kubectl apply -f with the above manifest?

Let's try:

kubectl apply -f kubeval-invalid.yaml
error: error validating "kubeval-invalid.yaml": error validating data: ValidationError(Deployment.spec):
missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors,
turn validation off with --validate=false

This is exactly the error kubeval warned you about. You can fix the resource by adding the selector, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  ports:
  - port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    app: http-echo

The advantage of tools like kubeval is that you can catch such errors early in the deployment cycle. In addition, you don't need access to a cluster to run the checks -- they can run offline. By default, kubeval validates resources against the latest unreleased Kubernetes API schema. In most cases, however, you may want to run validation against a specific Kubernetes version. You can test a specific API version with the --kubernetes-version flag:

kubeval --kubernetes-version 1.16.1 base-valid.yaml

Note that the version should be of the form Major.Minor.Patch. To see which versions are available for validation, check the JSON schemas on GitHub that kubeval uses to perform validation.

If you need to run kubeval offline, you can download the schemas in advance and point to the local directory with the --schema-location flag. In addition to single YAML files, you can also run kubeval against directories and against standard input. You should also know that kubeval is easy to integrate into a continuous integration pipeline. If you want to include checks before submitting manifests to the cluster, kubeval's support for three output formats may help:

  • Plain text

  • JSON

  • Test anything protocol (TAP)

You can use one of these formats to further parse the output and create a custom summary of the results. However, one limitation of kubeval is that it currently cannot validate Custom Resource Definitions (CRDs), although you can tell kubeval to ignore them.
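
Such a custom summary could be sketched in a few lines of Python. The JSON shape below is illustrative (field names vary between kubeval versions), so treat it as a hypothetical example rather than kubeval's exact output:

```python
import json

# Hypothetical kubeval JSON output; the field names are illustrative
# and may differ between kubeval versions.
raw = """[
  {"filename": "base-valid.yaml", "kind": "Deployment", "status": "valid", "errors": []},
  {"filename": "kubeval-invalid.yaml", "kind": "Deployment", "status": "invalid",
   "errors": ["selector: selector is required"]}
]"""

results = json.loads(raw)

# Group resources by status to build a short pass/fail summary.
summary = {}
for r in results:
    summary.setdefault(r["status"], []).append(f'{r["filename"]}/{r["kind"]}')

for status, resources in sorted(summary.items()):
    print(f"{status}: {len(resources)} resource(s)")
```

The same approach works for the TAP output if your CI system already understands that protocol.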

Although kubeval is a great choice for checking and verifying resources, note that a resource that passes its checks is not guaranteed to follow best practices. For example, using the latest tag in a container image is not considered best practice, yet kubeval does not report it as an error; it validates the YAML without warnings.

What if you want to score the YAML and catch violations such as the latest tag? How do you check your YAML files against best practices?

Kube-score

Home page: https://github.com/zegl/kube-score

Kube-score analyzes YAML manifests and scores them against its built-in checks. These checks are selected based on security recommendations and best practices, such as:

  • Run the container as a non root user.

  • Specify a health check for pods.

  • Define resource requests and limits.

The result of each check can be OK, WARNING, or CRITICAL.

You can try kube-score online or install it locally. At the time of writing, the latest version is 1.7.0. Let's run it against the base-valid.yaml manifest from earlier:

kube-score score base-valid.yaml
apps/v1/Deployment http-echo
[CRITICAL] Container Image Tag
  · http-echo -> Image with latest tag
      Using a fixed tag is recommended to avoid accidental upgrades
[CRITICAL] Pod NetworkPolicy
  · The pod does not have a matching network policy
      Create a NetworkPolicy that targets this pod
[CRITICAL] Pod Probes
  · Container is missing a readinessProbe
      A readinessProbe should be used to indicate when the service is ready to receive traffic.
      Without it, the Pod is risking to receive traffic before it has booted. It is also used during
      rollouts, and can prevent downtime if a new version of the application is failing.
      More information: https://github.com/zegl/kube-score/blob/master/README_PROBES.md
[CRITICAL] Container Security Context
  · http-echo -> Container has no configured security context
      Set securityContext to run the container in a more secure context.
[CRITICAL] Container Resources
  · http-echo -> CPU limit is not set
      Resource limits are recommended to avoid resource DDOS. Set resources.limits.cpu
  · http-echo -> Memory limit is not set
      Resource limits are recommended to avoid resource DDOS. Set resources.limits.memory
  · http-echo -> CPU request is not set
      Resource requests are recommended to make sure that the application can start and run without
      crashing. Set resources.requests.cpu
  · http-echo -> Memory request is not set
      Resource requests are recommended to make sure that the application can start and run without crashing.
      Set resources.requests.memory
[CRITICAL] Deployment has PodDisruptionBudget
  · No matching PodDisruptionBudget was found
      It is recommended to define a PodDisruptionBudget to avoid unexpected downtime during Kubernetes
      maintenance operations, such as when draining a node.
[WARNING] Deployment has host PodAntiAffinity
  · Deployment does not have a host podAntiAffinity set
      It is recommended to set a podAntiAffinity that stops multiple pods from a deployment from
      being scheduled on the same node. This increases availability in case the node becomes unavailable.

The YAML file passed kubeval's checks, but kube-score points out several shortcomings:

  • The readiness probe is missing

  • Memory and CPU requests and limits are missing.

  • Missing PodDisruptionBudgets.

  • Missing anti-affinity rules to maximize availability.

  • The container runs as root.

These are all valid points you should address to make your deployment more robust and reliable. The kube-score command outputs human-readable results including all WARNING and CRITICAL violations, which is great during development. If you plan to use it as part of a continuous integration pipeline, you can use the --output-format ci flag for more concise output, which also prints the checks that pass at level OK:

kube-score score base-valid.yaml --output-format ci
[OK] http-echo apps/v1/Deployment
[OK] http-echo apps/v1/Deployment
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) CPU limit is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Memory limit is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) CPU request is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Memory request is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Image with latest tag
[OK] http-echo apps/v1/Deployment
[CRITICAL] http-echo apps/v1/Deployment: The pod does not have a matching network policy
[CRITICAL] http-echo apps/v1/Deployment: Container is missing a readinessProbe
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Container has no configured security context
[CRITICAL] http-echo apps/v1/Deployment: No matching PodDisruptionBudget was found
[WARNING] http-echo apps/v1/Deployment: Deployment does not have a host podAntiAffinity set
[OK] http-echo v1/Service
[OK] http-echo v1/Service
[OK] http-echo v1/Service
[OK] http-echo v1/Service

Similar to kubeval, kube-score returns a non-zero exit code when a CRITICAL check fails, but you can also configure it to fail on WARNINGs. There are also built-in checks for validating resources against different API versions, similar to kubeval. However, this information is hard-coded in kube-score itself; you cannot choose a different Kubernetes version. This may limit the tool's usefulness if you upgrade your cluster, or if you have several clusters running different versions.

Note that there is an open issue to implement this feature. You can learn more about kube-score on the official website: https://github.com/zegl/kube-score

Kube-score is an excellent tool for enforcing best practices, but what if you want to customize it or add your own rules? You cannot, at the moment: kube-score is not designed to be extensible, and you cannot add or adjust policies. If you want to write custom checks that comply with your organization's policies, you can use one of the next four tools: config-lint, copper, conftest, or polaris.

Config-lint

Config-lint is a tool designed to validate configuration files written in YAML, JSON, Terraform, CSV, and Kubernetes manifests. You can install it using the instructions on the project website:

https://stelligent.github.io/config-lint/#/install

At the time of writing, the latest version is 1.5.0.

Config-lint has no built-in checks for Kubernetes manifests; you must write your own rules to perform validation. The rules are written as YAML files, called rule sets, with the following structure:

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
   # list of rules

Let's take a closer look. The type field indicates what type of configuration you will check with config-lint -- typically Kubernetes manifests.

The files field accepts a directory as input in addition to individual files.

The rules field is where you define your custom checks. For example, suppose you want to check whether images in a Deployment are always pulled from a trusted registry (such as my-company.com/myapp:1.0). A config-lint rule implementing this check could look like this:

- id: MY_DEPLOYMENT_IMAGE_TAG
  severity: FAILURE
  message: Deployment must use a valid image tag
  resource: Deployment
  assertions:
    - every:
        key: spec.template.spec.containers
        expressions:
          - key: image
            op: starts-with
            value: "my-company.com/"

Each rule must have the following properties:

  • id -- a unique identifier for the rule.

  • severity -- must be one of FAILURE, WARNING, or NON_COMPLIANT.

  • message -- the string displayed if the rule is violated.

  • resource -- the type of resource this rule applies to.

  • assertions -- a list of conditions evaluated against the specified resource.

In the rule above, the every assertion checks that every container in the Deployment (key: spec.template.spec.containers) uses a trusted image (that is, an image starting with "my-company.com/").
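
The semantics of the every assertion can be mimicked in a few lines of Python. This is a conceptual sketch of what the check evaluates, not config-lint's implementation:

```python
def every_image_starts_with(deployment: dict, prefix: str) -> bool:
    """Mimic the rule: pass only if every container image starts with prefix."""
    containers = deployment["spec"]["template"]["spec"]["containers"]
    return all(c["image"].startswith(prefix) for c in containers)

deployment = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "http-echo", "image": "hashicorp/http-echo"},
    ]}}},
}
print(every_image_starts_with(deployment, "my-company.com/"))  # False
```

Because the assertion quantifies over all containers, adding a single untrusted sidecar image is enough to make the whole rule fail.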

The complete rule set looks like this:

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: DEPLOYMENT_IMAGE_REPOSITORY
    severity: FAILURE
    message: Deployment must use a valid image repository
    resource: Deployment
    assertions:
      - every:
          key: spec.template.spec.containers
          expressions:
            - key: image
              op: starts-with
              value: "my-company.com/"

To test the check, save the rule set as check_image_repo.yaml.

Now, let's run the validation against the base-valid.yaml file:

config-lint -rules check_image_repo.yaml base-valid.yaml
[
  {
  "AssertionMessage": "Every expression fails: And expression fails: image does not start with my-company.com/",
  "Category": "",
  "CreatedAt": "2020-06-04T01:29:25Z",
  "Filename": "test-data/base-valid.yaml",
  "LineNumber": 0,
  "ResourceID": "http-echo",
  "ResourceType": "Deployment",
  "RuleID": "DEPLOYMENT_IMAGE_REPOSITORY",
  "RuleMessage": "Deployment must use a valid image repository",
  "Status": "FAILURE"
  }
]

The check failed. Now, consider the following manifest with a valid image registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: my-company.com/http-echo:1.0
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678

Run the same check against this manifest and no violations are reported:

config-lint -rules check_image_repo.yaml image-valid-mycompany.yaml
[]

Config-lint is a promising framework that lets you write custom checks for Kubernetes YAML manifests using a YAML DSL. But what if you want to express more complex logic and checks? Is YAML too restrictive? What if you could express these checks in a real programming language?

Copper

Home page: https://github.com/cloud66-oss/copper

Copper V2 is a framework that validates manifests using custom checks -- just like config-lint. However, copper does not use YAML to define the checks. Instead, checks are written in JavaScript, and copper provides a library with some basic helpers for reading Kubernetes objects and reporting errors.

You can install copper following the official documentation. At the time of writing, the latest version is 2.0.1:

https://github.com/cloud66-oss/copper#installation

Similar to config-lint, copper has no built-in checks. Let's write a check to ensure that deployments can only pull container images from trusted registries (such as my-company.com). Create a new file, check_image_repo.js, as follows:

$$.forEach(function($){
    if ($.kind === 'Deployment') {
        $.spec.template.spec.containers.forEach(function(container) {
            var image = new DockerImage(container.image);
            if (image.registry.lastIndexOf('my-company.com/') != 0) {
                errors.add_error('no_company_repo',"Image " + $.metadata.name + " is not from my-company.com repo", 1)
            }
        });
    }
});

Now, let's run this check against our base-valid.yaml manifest using the copper validate command:

copper validate --in=base-valid.yaml --validator=check_image_repo.js
Check no_company_repo failed with severity 1 due to Image http-echo is not from my-company.com repo
Validation failed

As you can imagine, you can write more complex checks, such as verifying domain names in Ingress manifests, or rejecting any Pod that runs as privileged. Copper has some built-in helpers:

The DockerImage function reads the specified input and creates an object containing the following properties:

  • name -- contains the image name

  • tag -- contains the image tag

  • registry -- the image registry

  • registry_url -- contains the protocol and the image registry

  • fqin -- the entire fully qualified image name

The findByName function helps you find a resource by its kind and name in the input file.

The findByLabels function helps you find resources by their kind and labels.

You can see all the available helpers here:

https://github.com/cloud66-oss/copper/tree/master/libjs

By default, copper loads the entire input YAML file into the $$ variable and makes it available in your scripts (if you have used jQuery before, you may find this pattern familiar).

Besides not having to learn a custom language, you get the whole JavaScript language for writing your checks: string interpolation, functions, and so on. It is worth noting that the current copper release embeds an ES5 JavaScript engine rather than ES6. To learn more, visit the project website:

https://github.com/cloud66-oss/copper

If JavaScript is not your preferred language, or if you prefer a language designed for querying and describing policies, you should look at conftest.

Conftest

Conftest is a testing framework for configuration data that can be used to check and verify Kubernetes manifests. Checks are written in Rego, a purpose-built query language.

You can install conftest following the instructions on the project website. At the time of writing, the latest version is 0.18.2:

https://www.conftest.dev/install/

Like config-lint and copper, conftest has no built-in checks, so let's try it out by writing a policy. As in the previous examples, you will check whether containers come from a trusted source.

Create a new directory, conftest-checks, and inside it a file called check_image_registry.rego with the following content:

package main

deny[msg] {

  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  not startswith(image, "my-company.com/")
  msg := sprintf("image '%v' doesn't come from my-company.com repository", [image])
}

Now let's run conftest to validate the base-valid.yaml manifest:

conftest test --policy ./conftest-checks base-valid.yaml
FAIL - base-valid.yaml - image 'hashicorp/http-echo' doesn't come from my-company.com repository
1 tests, 1 passed, 0 warnings, 1 failure

Of course it fails, because the image is not trusted. The Rego file above specifies a deny block, which evaluates to a violation when it is true. When you have multiple deny blocks, conftest checks each of them independently, and a violation of any block results in an overall violation.
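
That evaluation model can be sketched in Python (a conceptual model, not conftest's implementation; the second rule, deny_missing_replicas, is a hypothetical example):

```python
# Conceptual model of conftest's deny semantics: each rule returns a
# violation message when the input matches it, or None otherwise.
def deny_untrusted_image(doc):
    if doc.get("kind") != "Deployment":
        return None
    for c in doc["spec"]["template"]["spec"]["containers"]:
        if not c["image"].startswith("my-company.com/"):
            return f"image '{c['image']}' doesn't come from my-company.com repository"
    return None

def deny_missing_replicas(doc):
    # Hypothetical second rule, just to show independent evaluation.
    if doc.get("kind") == "Deployment" and "replicas" not in doc["spec"]:
        return "Deployment must set spec.replicas"
    return None

RULES = [deny_untrusted_image, deny_missing_replicas]

def evaluate(doc):
    # Every rule runs; any message means the overall result is a failure.
    return [msg for rule in RULES if (msg := rule(doc))]

doc = {"kind": "Deployment",
       "spec": {"replicas": 2, "template": {"spec": {"containers": [
           {"image": "hashicorp/http-echo"}]}}}}
print(evaluate(doc))
```

Here only the image rule fires, but the failure of a single rule is enough to fail the whole run.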

In addition to the default output format, conftest supports JSON, TAP, and table formats via the --output flag. These formats are helpful if you want to integrate the reports with an existing continuous integration pipeline. To help debug policies, conftest also has a handy --trace flag that prints a trace of how conftest parses the specified policy files.

Conftest policies can be published and shared as artifacts in OCI (Open Container Initiative) registries. The push and pull commands allow you to publish an artifact and pull an existing artifact from a remote registry.

Let's walk through publishing the above policy to a local Docker registry using conftest push. Start a local Docker registry with the following command:

docker run -it --rm -p 5000:5000 registry

From another terminal, navigate to the conftest checks directory created above and run the following command:

conftest push 127.0.0.1:5000/amitsaha/opa-bundle-example:latest

The command should complete successfully with the following message:

2020/06/10 14:25:43 pushed bundle with digest: sha256:e9765f201364c1a8a182ca637bc88201db3417bacc091e7ef8211f6c2fd2609c

Now, create a temporary directory and run the conftest pull command to download the bundle into it:

cd $(mktemp -d)
conftest pull 127.0.0.1:5000/amitsaha/opa-bundle-example:latest

You will see a new subdirectory, policy, inside the temporary directory, containing the previously pushed policy file:

tree
.
└── policy
  └── check_image_registry.rego

You can even run the tests directly from the registry:

conftest test --update 127.0.0.1:5000/amitsaha/opa-bundle-example:latest base-valid.yaml
..
FAIL - base-valid.yaml - image 'hashicorp/http-echo' doesn't come from my-company.com repository
2 tests, 1 passed, 0 warnings, 1 failure

Unfortunately, Docker Hub is not yet one of the supported registries. However, if you are using Azure Container Registry (ACR) or running your own container registry, you may be able to use this feature.

The artifact format is the same as the one used by Open Policy Agent (OPA) bundles, which makes it possible for conftest to run tests from existing OPA bundles.

You can learn more about sharing policies and other conftest features on the official website:

https://www.conftest.dev/

Polaris

Home page: https://github.com/FairwindsOps/polaris

The last tool this article explores is Polaris. Polaris can be installed inside a cluster or used as a command-line tool to statically analyze Kubernetes manifests. When run as a command-line tool, it includes several built-in checks covering areas such as security and best practices, similar to kube-score. In addition, you can use it to write custom checks, like config-lint, copper, and conftest. In other words, polaris combines the best of both categories: built-in and custom checkers.

You can follow the instructions on the project website to install the polaris command-line tool. At the time of writing, the latest version is 1.0.3:

https://github.com/FairwindsOps/polaris/blob/master/docs/usage.md#cli

After installation, you can run polaris against the base-valid.yaml manifest with the following command:

polaris audit --audit-path base-valid.yaml

The command prints a JSON-formatted string detailing the checks that were run and the result of each. The output has the following structure:

{
  "PolarisOutputVersion": "1.0",
  "AuditTime": "0001-01-01T00:00:00Z",
  "SourceType": "Path",
  "SourceName": "test-data/base-valid.yaml",
  "DisplayName": "test-data/base-valid.yaml",
  "ClusterInfo": {
    "Version": "unknown",
    "Nodes": 0,
    "Pods": 2,
    "Namespaces": 0,
    "Controllers": 2
  },
  "Results": [
    /* long list */
  ]
}

You can get the full output in the link below:

https://github.com/amitsaha/kubernetes-static-checkers-demo/blob/master/base-valid-polaris-result.json

Similar to kube-score, polaris found that the manifest falls short of recommended best practices, including:

  • Pods missing health checks.

  • Container images without a tag specified.

  • The container runs as root.

  • CPU and memory requests and limits are not set.

Each check is classified with a severity level of warning or danger.

To learn more about the built-in checks, refer to the documentation:

https://github.com/FairwindsOps/polaris/blob/master/docs/usage.md#checks

If you are not interested in the detailed results, passing the flag --format score prints a number in the range 1-100, which polaris calls the score:

polaris audit --audit-path test-data/base-valid.yaml --format score
68

The closer the score is to 100, the higher the compliance. If you check the exit code of the polaris audit command, you will see that it is 0. To make polaris audit exit with a non-zero code, you can use two other flags.

The --set-exit-code-below-score flag accepts a threshold score in the range 1-100 and exits with code 4 when the score falls below the threshold. This is useful when your baseline score is, say, 75 and you want to be alerted when it drops below that.

The --set-exit-code-on-danger flag makes the command exit with code 3 when any danger check fails.

Now let's see how to define a custom polaris check that tests whether the container image in a Deployment comes from a trusted registry. Custom checks are defined in YAML, and the test itself is described with JSON Schema. The following YAML snippet defines a new check, checkImageRepo:

checkImageRepo:
  successMessage: Image registry is valid
  failureMessage: Image registry is not valid
  category: Images
  target: Container
  schema:
    '$schema': http://json-schema.org/draft-07/schema
    type: object
    properties:
      image:
        type: string
        pattern: ^my-company.com/.+$

Let's take a closer look:

  • successMessage is the string displayed when the check succeeds.

  • failureMessage is the message displayed when the check fails.

  • category is one of these categories: Images, Health Checks, Security, Networking, and Resources.

  • target is a string that determines which spec object the check targets; it should be one of Container, Pod, or Controller.

  • The test itself is defined in the schema object using JSON Schema. Here the check uses the pattern keyword to match whether the image comes from an allowed registry.
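
The pattern keyword is an ordinary regular expression, so you can experiment with what it matches using Python's re module. Note that the dot in ^my-company.com/.+$ is unescaped and therefore matches any character:

```python
import re

# The pattern used by the checkImageRepo check above.
pattern = re.compile(r"^my-company.com/.+$")

print(bool(pattern.match("my-company.com/http-echo:1.0")))  # True
print(bool(pattern.match("hashicorp/http-echo")))           # False
# The unescaped '.' matches any character, so this also passes:
print(bool(pattern.match("my-companyXcom/http-echo:1.0")))  # True
```

If you want the dot to match only a literal dot, escape it in the check: ^my-company\.com/.+$.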

To run the check defined above, you need to create a polaris configuration file as follows:

checks:
  checkImageRepo: danger
customChecks:
  checkImageRepo:
    successMessage: Image registry is valid
    failureMessage: Image registry is not valid
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^my-company.com/.+$

Let's analyze this file.

  • The checks field specifies the checks and their severity. Because you want to be alerted when an image is untrusted, checkImageRepo is assigned the danger severity.

  • The checkImageRepo check itself is then defined in the customChecks object.

You can save the above file as custom_check.yaml and then run polaris audit with the YAML manifest you want to validate.

You can test it with the base-valid.yaml manifest:

polaris audit --config custom_check.yaml --audit-path base-valid.yaml

You will find that polaris audit runs only the custom check defined above, and it fails. If you change the container image to my-company.com/http-echo:1.0, polaris reports success. The GitHub repository contains the modified manifest, so you can test the previous command with the image-valid-mycompany.yaml manifest.

But how do you run both the built-in and the custom checks? The configuration file above needs to list all the built-in check identifiers as well, and should look like this:

checks:
  cpuRequestsMissing: warning
  cpuLimitsMissing: warning
  # Other inbuilt checks..
  # ..
  # custom checks
  checkImageRepo: danger
customChecks:
  checkImageRepo:
    successMessage: Image registry is valid
    failureMessage: Image registry is not valid
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^my-company.com/.+$

Here you can see an example of a complete configuration file:

https://github.com/amitsaha/kubernetes-static-checkers-demo/blob/master/polaris-configs/config_with_custom_check.yaml

You can test base-valid.yaml with both the custom and the built-in checks:

polaris audit --config config_with_custom_check.yaml --audit-path base-valid.yaml

Polaris lets you augment the built-in checks with your own custom ones, combining the best of both approaches. However, the lack of a more powerful language, such as Rego or JavaScript, may limit you when writing more complex checks.

To learn more about polaris, check out the project website:

https://github.com/FairwindsOps/polaris

Summary

Although there are many tools to validate, score, and lint Kubernetes YAML files, it is important to have a mental model of how you will design and run the checks. For example, if you want Kubernetes manifests to pass through a pipeline, kubeval can be the first step, because it verifies that object definitions conform to the Kubernetes API schema. Once this check succeeds, you can move on to more elaborate tests, such as standard best practices and custom policies. Kube-score and polaris are the best choices here.

If you have complex requirements and want to customize the checks in detail, you should consider copper, config-lint, and conftest. While conftest and config-lint both use more YAML to define custom validation rules, copper gives you a real programming language, which makes it quite attractive. But should you pick one of these and write all the checks from scratch, or use polaris and write only the additional custom checks? It depends on your situation.

Tags: Kubernetes

Posted by lixid on Wed, 25 May 2022 01:38:57 +0300