Deployment and Operations

Configuring Cloudentity on Kubernetes via GitOps

This article describes how to configure Cloudentity on Kubernetes using the GitOps methodology.

Deploying on an Existing Kubernetes Cluster

While the default setup creates a new Kubernetes cluster using Kind, you might want to deploy Cloudentity and its infrastructure on an already existing cluster. Here’s how:

  • Ensure your kubeconfig, pointing to the desired cluster, exists at ~/.kube/config and is set as the default context.
  • Execute make deploy to initiate FluxCD installation and bootstrap all resources from the acp-on-k8s repository.
  • Monitor the deployment progress using make wait.
  • Confirm a successful deployment by running make run-lightweight-tests.
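
Assuming the kubeconfig already points at the target cluster, the full sequence looks roughly like the following sketch (all make targets come from the acp-on-k8s repository):

# Verify that the current context points at the intended cluster
kubectl config current-context

# Install FluxCD and bootstrap all resources from the acp-on-k8s repository
make deploy

# Wait for all components to become ready
make wait

# Confirm the deployment with the lightweight test suite
make run-lightweight-tests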

Kubernetes cluster considerations:

  • Due to pod affinity constraints, the stack is designed to operate optimally on a minimum of 3 nodes.

  • Ensure that nodes are distributed across a minimum of 3 distinct zones, identified by the topology.kubernetes.io/zone label.

  • The nodes should carry the following labels, which are used as node selectors for specific components:

    compute=true
    fission=true
    nginx=true
    cockroachdb=true
    timescaledb=true
    redis=true
    spicedb=true
    elastic=true
    system=true
    clusterCritical=true
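
    To apply these labels to an existing cluster, you could use kubectl roughly as follows (a sketch; the node name worker-1 and the zone value zone-a are hypothetical, and zones should be spread across at least 3 distinct values):

    # Hypothetical node name and zone; repeat for each node with your own values
    kubectl label node worker-1 topology.kubernetes.io/zone=zone-a

    # Component selector labels expected by the stack
    kubectl label node worker-1 compute=true fission=true nginx=true \
      cockroachdb=true timescaledb=true redis=true spicedb=true \
      elastic=true system=true clusterCritical=true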
    

Adhering to these requirements ensures that the stack functions as designed and achieves its intended performance and reliability characteristics. If these specifications don't align with your needs, you can adjust any of them by customizing the stack; see the section below.

Customizing the Deployment

Adhering to the principles of GitOps, all configuration changes must be tracked in a Git repository. Since the public repository is maintained by Cloudentity, we recommend forking it for custom modifications.

To deploy from a customized repository:

  1. Fork the acp-on-k8s repository.

  2. Make the necessary changes in your forked repository.

    The Cloudentity configuration lives in the apps/acp/base/release.yaml file. Add any configuration under the config.data setting to, for example, enable feature flags (see the sketch after these steps).

    Note

    The configuration provided in the acp-on-k8s repository can be treated as production ready. Cloudentity does not take responsibility for changes made in forked repositories.

    To import data, Cloudentity recommends using the dedicated acp-cd Helm Chart, but for quick, local testing you can use the importJob.data configuration built into the release.yaml file.

  3. Deploy the stack from your repository with make deploy REPO=<url>, where <url> is the link to your forked project.

  4. To specify a branch other than the default main, use make deploy REPO=<url> BRANCH=<branch>.
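
As an illustration of step 2, and assuming release.yaml is a Flux HelmRelease whose chart values sit under spec.values, the config.data block could look roughly like the fragment below (the keys under config.data are hypothetical placeholders; consult the Cloudentity configuration reference for real settings):

# Fragment of apps/acp/base/release.yaml (sketch only, surrounding fields omitted)
spec:
  values:
    config:
      data:
        # Hypothetical feature flag key
        features:
          example_feature: true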

For authenticated access to private repositories, include credentials in the repository URL, for example: REPO=https://user:token@github.com/myorganization/myrepository.
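
Putting these options together, a deployment from a customized fork might look like this (the repository and branch names are illustrative):

# Deploy from a fork, tracking a custom branch
make deploy REPO=https://github.com/myorganization/myrepository BRANCH=my-changes

# The same, with credentials for a private fork
make deploy REPO=https://user:token@github.com/myorganization/myrepository BRANCH=my-changes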

Upon completion, the stack is deployed from your forked repository, and any subsequent changes you commit are automatically deployed thanks to the GitOps methodology.

Access Local Services

To access local services provided by the Cloudentity platform, you need to update your system’s hosts file to recognize specific domain names. Open your /etc/hosts file and append the following entries:

# tenants
127.0.0.1 default.acp.local
127.0.0.1 system.acp.local
127.0.0.1 lightweight-tests.acp.local
# databases
127.0.0.1 cockroachdb.tools.local.acp.local
127.0.0.1 redisinsight.tools.local.acp.local
# logs and traces
127.0.0.1 kibana.tools.local.acp.local
# metrics and alerts
127.0.0.1 grafana.tools.local.acp.local
127.0.0.1 prometheus.tools.local.acp.local
127.0.0.1 alertmanager.tools.local.acp.local
# thanos
127.0.0.1 query.tools.local.acp.local
127.0.0.1 ruler.tools.local.acp.local
127.0.0.1 store.tools.local.acp.local
127.0.0.1 compactor.tools.local.acp.local

This configuration ensures that when you access these domain names on your local machine, they resolve to the local IP address (127.0.0.1).

If you are integrating additional tenants into the system, remember to add corresponding entries to the hosts file. For instance, for a tenant named mytest, you would add:

127.0.0.1 mytest.acp.local

Once set up, you can access specific services that require login credentials:

  • Kibana - Access using admin:p@ssw0rd!.
  • Grafana - Access using admin:p@ssw0rd!.
  • CockroachDB - Access using dev:p@ssw0rd!.
Updated: Nov 2, 2023