Prerequisites
UDS Core can run in any CNCF conformant Kubernetes distribution that has not reached End-of-Life (EOL). This documentation aims to provide guidance and links to relevant information to help configure your Kubernetes environment and hosts for a successful installation of UDS Core. Note that customizations may be required depending on the specific environment.
Cluster Requirements
When running Kubernetes on any type of host, it is important to ensure you are following the upstream documentation from your Kubernetes distribution regarding prerequisites. A few links to upstream documentation are provided below for convenience.
RKE2
- General installation requirements
- Disabling Firewalld to prevent networking conflicts
- Modifying NetworkManager to prevent CNI conflicts
- Known Issues
K3S
EKS
AKS
UDS Core Requirements
The following are specific requirements for running UDS Core. Some apply to the entire UDS Core stack, while others are specific to certain components. If you encounter issues with a particular component of Core, this list is a good place to validate that you have met the prerequisites for that specific application.
Default Storage Class
Several UDS Core components require persistent volumes that will be provisioned using the default storage class via dynamic volume provisioning. Ensure that your cluster includes a default storage class prior to deploying. You can validate by running the below command (see example output, which includes `(default)` next to the `local-path` storage class):
```console
❯ kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   true                   55s
```
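If your cluster already has a suitable storage class that simply is not marked as default, you can add the default annotation yourself. This is a minimal sketch assuming the class is named `local-path`; substitute the name of your own storage class:

```bash
# Mark an existing storage class as the cluster default
# (assumes the storage class is named "local-path")
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```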
It’s generally beneficial if your storage class supports volume expansion (set `allowVolumeExpansion: true`, provided your provisioner allows it). This enables you to resize volumes when needed. Additionally, be mindful of any size restrictions imposed by your provisioner. For instance, EBS volumes have a minimum size of 1Gi, which could lead to unexpected behavior, especially during Velero’s CSI backup and restore process. These constraints may also necessitate adjustments to default PVC sizes, such as Keycloak’s PVCs, which default to 512Mi in `devMode`.
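For reference, a storage class with expansion enabled might look like the following. This is only a sketch assuming the AWS EBS CSI driver is installed; the class name, provisioner, and parameters are illustrative and will differ in your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-expandable # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com # assumes the AWS EBS CSI driver
parameters:
  type: gp3
allowVolumeExpansion: true # enables resizing of existing volumes
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```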
Network Policy Support
The UDS Operator will dynamically provision network policies to secure traffic between components in UDS Core. To ensure these are effective, validate that your CNI supports enforcing network policies. In addition, UDS Core makes use of some CIDR-based policies for communication with the KubeAPI server. If you are using Cilium, support for node addressability with CIDR-based policies must be enabled with a feature flag.
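For example, if Cilium is installed via Helm, node addressability for CIDR-based policies is typically enabled through its `policy-cidr-match-mode` option. The sketch below assumes a Helm-managed Cilium install and that the value name matches your chart version; confirm both against the Cilium documentation for the version you run:

```bash
# Hypothetical example: allow CIDR-based policies to match node IPs in Cilium.
# The option name (policy-cidr-match-mode / policyCIDRMatchMode) should be verified
# against your Cilium version's documentation before applying.
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set policyCIDRMatchMode=nodes
```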
Istio
Istio requires a number of kernel modules to be loaded for full functionality. Below is a script that will ensure these modules are loaded and persisted across reboots (see also Istio’s upstream requirements list). Ideally this script is run as part of an image build or cloud-init process on each node.
```bash
modules=("br_netfilter" "xt_REDIRECT" "xt_owner" "xt_statistic" "iptable_mangle" "iptable_nat" "xt_conntrack" "xt_tcpudp" "xt_connmark" "xt_mark" "ip_set")
for module in "${modules[@]}"; do
  modprobe "$module"
  echo "$module" >> "/etc/modules-load.d/istio-modules.conf"
done
```
In addition, to run Istio ingress gateways (part of Core) you will need to ensure your cluster supports dynamic load balancer provisioning when services of type LoadBalancer are created. Typically in cloud environments this is handled using a cloud provider’s controller (example: AWS LB Controller). When deploying on-prem, this is commonly done by using a “bare metal” load balancer provisioner like MetalLB or kube-vip. Certain distributions may also include ingress controllers that you will want to disable as they may conflict with Istio (example: RKE2 includes ingress-nginx).
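As an illustration, on RKE2 the bundled ingress-nginx controller can be disabled through the node configuration file (a minimal sketch; adjust to however you manage RKE2 configuration):

```yaml
# /etc/rancher/rke2/config.yaml
disable:
  - rke2-ingress-nginx
```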
Ambient Mode
Istio can be deployed in Ambient Mode by deploying the optional `istio-ambient` component. This mode is still in alpha release and is not recommended for production use or for clusters requiring FIPS compliance. The `istio-ambient` component installs the Istio CNI plugin, which requires specifying the `CNI_CONF_DIR` and `CNI_BIN_DIR` variables. These values can change based on the environment Istio is being deployed into. By default the package will attempt to auto-detect these values and will use the following if not specified:
```yaml
# K3d cluster
cniConfDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
cniBinDir: /bin/
```

```yaml
# K3s cluster
cniConfDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
cniBinDir: /opt/cni/bin/
```

```yaml
# All other clusters
cniConfDir: /etc/cni/net.d
cniBinDir: /opt/cni/bin/
```
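If you are unsure which paths apply to your environment, you can inspect a node directly. This is just a quick check assuming shell access to a node; the paths shown are the common defaults and may differ for your CNI:

```bash
# Confirm where the CNI configuration files and binaries live on this node
ls -la /etc/cni/net.d/
ls -la /opt/cni/bin/
```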
These values can be overwritten when installing core by setting the `cniConfDir` and `cniBinDir` values in the `istio-ambient` component. To set these values, add the following to the `uds-config.yaml` file:
```yaml
variables:
  core-base:
    cni_conf_dir: "foo"
    cni_bin_dir: "bar"
```
or via `--set` if deploying the package via `zarf`:
```bash
uds zarf package deploy uds-core --set CNI_CONF_DIR=/etc/cni/net.d --set CNI_BIN_DIR=/opt/cni/bin
```
NeuVector
NeuVector historically has functioned best when the host is using cgroup v2. Cgroup v2 is enabled by default on many modern Linux distributions, but you may need to enable it depending on your operating system. Enabling this tends to be OS specific, so you will need to evaluate this for your specific hosts.
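A quick way to confirm which cgroup version a host is using is to check the filesystem type mounted at `/sys/fs/cgroup` (a simple check that should work on most systemd-based distributions):

```bash
# Prints "cgroup2fs" when cgroup v2 is in use, or "tmpfs" when the host is still on cgroup v1
stat -fc %T /sys/fs/cgroup/
```

If the host is still on cgroup v1, enabling cgroup v2 typically involves adding `systemd.unified_cgroup_hierarchy=1` to the kernel command line and rebooting, though the exact mechanism is OS specific.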
Vector
In order to ensure that Vector is able to scrape the necessary logs concurrently, you may need to adjust some kernel parameters for your hosts. Below is a script that can be used to adjust these parameters to suitable values and ensure they are persisted across reboots. Ideally this script is run as part of an image build or cloud-init process on each node.
```bash
declare -A sysctl_settings
sysctl_settings["fs.nr_open"]=13181250
sysctl_settings["fs.inotify.max_user_instances"]=1024
sysctl_settings["fs.inotify.max_user_watches"]=1048576
sysctl_settings["fs.file-max"]=13181250

for key in "${!sysctl_settings[@]}"; do
  value="${sysctl_settings[$key]}"
  sysctl -w "$key=$value"
  echo "$key=$value" > "/etc/sysctl.d/$key.conf"
done
sysctl -p
```
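After running the script, you can spot check that the values are in effect (the files written to `/etc/sysctl.d/` ensure they persist across reboots):

```bash
# Verify the running kernel parameters match what the script applied
sysctl fs.nr_open fs.inotify.max_user_instances fs.inotify.max_user_watches fs.file-max
```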
Metrics Server
Metrics server is provided as an optional component in UDS Core and can be enabled if needed. For distros where metrics-server is already provided, ensure that you do NOT enable the metrics-server component. See below for an example of enabling metrics-server if your cluster does not include it.
```yaml
---
- name: uds-core
  repository: ghcr.io/defenseunicorns/packages/private/uds/core
  ref: 0.25.2-unicorn
  optionalComponents:
    - metrics-server
```