Deploying UDS on RKE2
This tutorial demonstrates how to deploy UDS onto a VM-based RKE2 Kubernetes cluster. This scenario is common in on-prem and airgap environments where cloud-based deployments are not feasible.
Prerequisites
- Recommended system requirements
- Hypervisor for running VMs (recommend Lima)
- UDS CLI
Quickstart
The fastest way to get up and running with UDS Core on RKE2 is using the automation and configuration provided in the uds-rke2-demo repo. Follow the instructions in the README to either provision a VM running RKE2 with UDS, or install UDS on an RKE2 cluster directly.
Starting the VM and Installing RKE2
Lima (recommended)
Lima provides a template for quickly spinning up an Ubuntu VM running RKE2 with the appropriate shared network configs. Follow the instructions in uds-rke2-demo to get up and running. The automation in the demo repo uses the following Lima command to provision an Ubuntu VM running RKE2:
if [[ "$(uname)" == "Darwin" ]]; then limactl start template://experimental/rke2 \ --memory 20 --cpus 10 --vm-type=vz --network=vzNAT -yelse limactl start template://experimental/rke2 \ --memory 20 --cpus 10 --vm-type=qemu -yfi
After the VM has been created and RKE2 installed, ensure connectivity by setting the KUBECONFIG:
export KUBECONFIG="$HOME"/.lima/rke2/copied-from-guest/kubeconfig.yaml
Then run kubectl get pods -A to verify that the pods are running.
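If you prefer a scripted check, the following is a minimal sketch (the timeout value is an arbitrary choice) that waits for the node to report Ready before listing pods:

```bash
# Wait up to 5 minutes for the RKE2 node to become Ready
kubectl wait --for=condition=Ready node --all --timeout=300s

# Then list all pods across namespaces
kubectl get pods -A
```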
Other Hypervisors
VM Requirements
Aside from the system requirements mentioned in the prerequisites, you will need to provision a VM running a Linux distribution compatible with RKE2. Additionally, this tutorial assumes the following network configuration:
- The VM has its own IP and is accessible to the host machine by both SSH and HTTPS
- A shared network or bridged setup is recommended (hypervisor port forwarding can also be useful but is often unnecessary)
- Ability to configure DNS resolution (often done by modifying /etc/hosts)
Installing RKE2
SSH into the newly created Linux VM and follow the official quickstart to install RKE2 on the VM. Note that this is a single-server-node setup; there is no need to add agent nodes.
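For reference, the server install boils down to roughly the following (a sketch based on the upstream quickstart at the time of writing; always check the RKE2 docs for current instructions and any OS-specific steps):

```bash
# Download and run the RKE2 installer (server role)
curl -sfL https://get.rke2.io | sudo sh -

# Enable and start the rke2-server service
sudo systemctl enable --now rke2-server.service
```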
After RKE2 is installed, ensure connectivity by running kubectl get pods -A and verifying that the native RKE2 pods are running. You may need to export KUBECONFIG=/etc/rancher/rke2/rke2.yaml (see the official docs). Depending on your VM setup, it may be easier to run this command from the VM itself rather than from the host machine. If running from the host machine, you will need to ensure the Kube API (port 6443) is exposed to the host.
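If you do want to run kubectl from the host, one approach is to copy the kubeconfig out of the VM and repoint it at the VM's address. This is a sketch only: <vm-user> and <vm-ip> are placeholders, and reading /etc/rancher/rke2/rke2.yaml may require root on the VM.

```bash
# Copy the kubeconfig from the VM to the host (may require sudo on the VM)
scp <vm-user>@<vm-ip>:/etc/rancher/rke2/rke2.yaml ~/.kube/rke2-vm.yaml

# The file points at 127.0.0.1; replace it with the VM's IP (GNU sed shown)
sed -i 's/127.0.0.1/<vm-ip>/' ~/.kube/rke2-vm.yaml

export KUBECONFIG=~/.kube/rke2-vm.yaml
kubectl get pods -A
```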
Bootstrapping the Cluster
In order to take advantage of the full range of capabilities UDS provides, the cluster must have the following prerequisites installed:
- Default Storage Class
- Load Balancer Controller
- Object Store
Each of these prerequisites is covered in greater detail below.
Default Storage Class
Since RKE2 does not ship with a default storage class, you will need to install one. For demo purposes, we recommend using the local-path-provisioner by Rancher.
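A minimal install might look like the following sketch (replace <version> with a real release tag and review the manifest before applying; the StorageClass name local-path comes from the upstream manifest):

```bash
# Install Rancher's local-path-provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/<version>/deploy/local-path-storage.yaml

# Mark the resulting StorageClass as the cluster default
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```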
Load Balancer Controller
Although RKE2 ships with an NGINX ingress controller, UDS uses Istio ingress gateways to logically separate admin traffic from other types of traffic coming into the cluster. Using Istio also ensures that traffic within the cluster is encrypted and all applications are integrated into a secure service mesh. More information can be found in the UDS service mesh docs.
UDS ingress gateways are K8s Services of type LoadBalancer. In order to provide an IP to these load balancer services, a load balancer controller, such as MetalLB, must be installed. An example configuration for MetalLB can be found in the demo repo. Note that the base IP used for the MetalLB IPAddressPool will come from the internal IP of the cluster nodes, and can be found with:
uds zarf tools kubectl get nodes -o=jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
Note that the Zarf package in the demo repo configures this IP for you.
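For illustration, a Layer 2 MetalLB configuration might look like the sketch below. The address range is an assumption chosen to match the example gateway IPs used later in this tutorial; pick a range that sits on your node's subnet and is not handed out by DHCP.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: uds-pool
  namespace: metallb-system
spec:
  addresses:
    # Range reserved for LoadBalancer services (assumption for this demo)
    - 192.168.64.200-192.168.64.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: uds-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - uds-pool
```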
Object Store
The UDS log store (Loki) uses object storage to store cluster logs. For demo purposes, we recommend installing Minio to provide object storage. Example Helm values for Minio can be found here.
Loki can be configured to use other buckets or storage providers by using UDS bundle overrides to configure the UDS Loki Helm chart values.
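As a rough sketch of what such an override could look like in the bundle, the snippet below points Loki at an external S3-compatible endpoint. The component name, chart name, and value paths shown are assumptions for illustration; confirm the exact keys against the UDS Core package and the upstream Loki chart.

```yaml
packages:
  - name: core
    repository: ghcr.io/defenseunicorns/packages/uds/core
    ref: <latest>
    overrides:
      loki:        # Zarf component name (assumed)
        loki:      # Helm chart name (assumed)
          values:
            - path: loki.storage.s3.endpoint   # value path (assumed)
              value: https://s3.example.internal
            - path: loki.storage.bucketNames.chunks
              value: uds-loki-chunks
```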
The zarf init package will bootstrap your cluster and make it airgap-ready. This is typically included as part of the uds-bundle.yaml when installing UDS.
Installing UDS
With all prerequisites satisfied, UDS is ready to be installed in the cluster. You can use the automation in the demo repo to install UDS with a single command:
uds run install
Otherwise, a sample uds-bundle.yaml is provided for reference and is partially shown below:
```yaml
kind: UDSBundle
metadata:
  name: uds-rke2-demo
  description: A UDS bundle for deploying the standard UDS Core package on a development cluster
  version: "0.1.0"

packages:
  # prereq packages go here ...

  - name: init
    repository: ghcr.io/zarf-dev/packages/init
    ref: <latest>

  - name: core
    repository: ghcr.io/defenseunicorns/packages/uds/core
    ref: <latest>
    # additional configuration overrides go here
```
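If you are not using the demo repo's tasks, the equivalent manual flow with the UDS CLI is roughly the following sketch (the bundle artifact name depends on the bundle name, architecture, and version, hence the wildcard):

```bash
# Build the bundle from the directory containing uds-bundle.yaml
uds create .

# Deploy the resulting bundle artifact to the cluster
uds deploy uds-bundle-uds-rke2-demo-*.tar.zst --confirm
```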
Accessing UDS Apps
After installing UDS Core, find the IPs of the Istio ingress gateway services. The following command, run from the root of the demo repo, will show the ingress gateway IPs:
uds run get-gw-ips
You can also use the vendored kubectl to get the IPs:
```bash
# admin gateway ips (repeat for other gateways)
uds zarf tools kubectl get svc admin-ingressgateway -n istio-admin-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
After getting the IP, use /etc/hosts (or configure a DNS provider) to enable resolution of UDS Core app hostnames, for example:
```
...
# admin apps use the admin-ingressgateway IP
192.168.64.200 keycloak.admin.uds.dev grafana.admin.uds.dev neuvector.admin.uds.dev

# tenant apps use the tenant-ingressgateway IP
192.168.64.201 sso.uds.dev podinfo.uds.dev
```
UDS Core apps should now be accessible via the host machine’s web browser.
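If a page does not load, a quick way to test routing from the host without editing /etc/hosts is curl's --resolve flag; the hostname and IP below reuse the example values above:

```bash
# Pin grafana.admin.uds.dev to the admin gateway IP for this request only
curl -kI --resolve grafana.admin.uds.dev:443:192.168.64.200 https://grafana.admin.uds.dev
```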
Configuring Keycloak SSO
Keycloak is hardened by default and can be configured further per the documentation in the UDS IdAM docs. To explore the demo environment, we recommend using the following command, run from the root of the demo repo, to run a UDS task that creates a user for accessing UDS services:
uds run setup:keycloak-user --set KEYCLOAK_USER_GROUP="/UDS Core/Admin"
This will create an admin user with the following credentials:
```
username: doug
password: unicorn123!@#UN
role: /UDS Core/Admin
```
These credentials can be used to log into any of the apps in UDS.
Integrating a Mission App
Section titled “Integrating a Mission App”UDS uses a custom Package
resource backed by a UDS K8s controller to automatically integrate and secure mission applications with minimal configuration. An example of such a configuration for the app PodInfo exists in the demo repo. It can be deployed into the UDS RKE2 cluster by running the following command from the root of the repo:
uds run deploy-podinfo
For a more in-depth explanation of Package resources, see the Package CR reference docs and the Integrating an Application with UDS Core tutorial.
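For orientation, a minimal Package resource exposing a PodInfo-style app through the tenant gateway might look like the sketch below. The field values are illustrative assumptions, not the demo repo's actual configuration; consult the Package CR reference docs for the full schema.

```yaml
apiVersion: uds.dev/v1alpha1
kind: Package
metadata:
  name: podinfo
  namespace: podinfo
spec:
  network:
    expose:
      # Route podinfo through the tenant ingress gateway (values assumed)
      - service: podinfo
        selector:
          app.kubernetes.io/name: podinfo
        gateway: tenant
        host: podinfo
        port: 9898
```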