Deploying Antrea on a Kind cluster
We support running Antrea inside of Kind clusters on both Linux and macOS hosts. On macOS, support for Kind requires the use of Docker Desktop, instead of the legacy Docker Toolbox.
To deploy a released version of Antrea on an existing Kind cluster, you can simply use the same command as for other types of clusters:
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
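For example, assuming you want to deploy the v1.15.0 release (the tag is purely illustrative; substitute the tag of the release you actually need), the command would be:
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.15.0/antrea.yml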
Create a Kind cluster and deploy Antrea in a few seconds
Using the kind-setup.sh script
To create a simple two worker Node cluster and deploy a released version of Antrea, use:
./ci/kind/kind-setup.sh create <CLUSTER_NAME>
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
Or, for the latest version of Antrea, use:
./ci/kind/kind-setup.sh create <CLUSTER_NAME>
kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
The kind-setup.sh script may execute kubectl commands to set up the cluster, and requires that kubectl be present in your PATH.
To specify a different number of worker Nodes, use --num-workers <NUM>. To specify the IP family of the kind cluster, use --ip-family <ipv4|ipv6|dual>. To specify the Kubernetes version of the kind cluster, use --k8s-version <VERSION>. To specify the Service Cluster IP range, use --service-cidr <CIDR>.
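As an illustration, these options can be combined in a single invocation; the worker count, Kubernetes version and Service CIDR below are placeholder values, not recommendations:
./ci/kind/kind-setup.sh --num-workers 3 --k8s-version v1.29.0 --service-cidr 10.96.0.0/16 create <CLUSTER_NAME>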
If you want to pre-load the Antrea image in each Node (to avoid having each Node pull from the registry), you can use:
tag=<TAG>
cluster=<CLUSTER_NAME>
docker pull antrea/antrea-controller-ubuntu:$tag
docker pull antrea/antrea-agent-ubuntu:$tag
./ci/kind/kind-setup.sh \
--images "antrea/antrea-controller-ubuntu:$tag antrea/antrea-agent-ubuntu:$tag" \
create $cluster
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$tag/antrea.yml
The kind-setup.sh script is a convenience script typically used by developers for testing. For more information on how to create a Kind cluster manually and deploy Antrea, read the following sections.
As an Antrea developer
If you are an Antrea developer and you need to deploy Antrea with your local changes and locally built Antrea image, use:
./ci/kind/kind-setup.sh --antrea-cni create <CLUSTER_NAME>
kind-setup.sh
allows developers to specify the number of worker Nodes, the
docker bridge networks/subnets connected to the worker Nodes (to test Antrea in
different encap modes), and a list of docker images to be pre-loaded in each
Node. For more information on usage, run:
./ci/kind/kind-setup.sh help
As a developer, you will usually want to provide the --antrea-cni flag, so that kind-setup.sh can generate the appropriate Antrea YAML manifest for you on the fly, and apply it to the created cluster directly.
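For instance, a typical local development loop could look like the sketch below, assuming (as described later in this document) that running make at the root of the repository produces the antrea/antrea-controller-ubuntu:latest and antrea/antrea-agent-ubuntu:latest images:
# build the Antrea Docker images from your local changes
make
# create the cluster, pre-load the local images, and let kind-setup.sh
# generate and apply a matching Antrea manifest on the fly
./ci/kind/kind-setup.sh --antrea-cni \
  --images "antrea/antrea-controller-ubuntu:latest antrea/antrea-agent-ubuntu:latest" \
  create <CLUSTER_NAME>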
Create a Kind cluster manually
The only requirement is to use a Kind configuration file which disables the default CNI installed by Kind (kindnet). For example, your configuration file may look like this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: 10.10.0.0/16
nodes:
- role: control-plane
- role: worker
- role: worker
Once you have created your configuration file (let’s call it kind-config.yml), create your cluster with:
kind create cluster --config kind-config.yml
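Since the default CNI is disabled, the Nodes of the new cluster are expected to remain in the NotReady state until Antrea is deployed; you can check their status with:
kubectl get nodes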
Deploy Antrea to your Kind cluster
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
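If you want to block until the Antrea components are ready, one option is to wait on the rollouts; the resource names below are assumed to be the antrea-controller Deployment and the antrea-agent DaemonSet created by the manifest in kube-system (which matches the Pod names shown later in this document):
kubectl -n kube-system rollout status deployment/antrea-controller --timeout=120s
kubectl -n kube-system rollout status daemonset/antrea-agent --timeout=120s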
Deploy a local build of Antrea to your Kind cluster (for developers)
These instructions assume that you have built the Antrea Docker images locally (e.g. by running make from the root of the repository).
# load the Antrea Docker images in the Nodes
kind load docker-image antrea/antrea-controller-ubuntu:latest antrea/antrea-agent-ubuntu:latest
# deploy Antrea
kubectl apply -f build/yamls/antrea.yml
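When iterating on code changes, one possible workflow (a sketch, assuming the manifest keeps using the latest image tag, so that restarting the Pods is enough to pick up freshly loaded images) is:
# rebuild the images with your latest changes
make
# replace the images stored in the Kind Nodes
kind load docker-image antrea/antrea-controller-ubuntu:latest antrea/antrea-agent-ubuntu:latest
# restart the Antrea Pods so that they run the newly loaded images
kubectl -n kube-system rollout restart deployment/antrea-controller
kubectl -n kube-system rollout restart daemonset/antrea-agent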
Check that everything is working
After a few seconds you should be able to observe the following when running
kubectl get -n kube-system pods -l app=antrea
:
NAME                                 READY   STATUS    RESTARTS   AGE
antrea-agent-dgsfs                   2/2     Running   0          8m56s
antrea-agent-nzsmx                   2/2     Running   0          8m56s
antrea-agent-zsztq                   2/2     Running   0          8m56s
antrea-controller-775f4d79f8-6tksp   1/1     Running   0          8m56s
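Once the antrea-agent Pods are running, the Kind Nodes should also transition to the Ready state, which you can confirm with:
kubectl get nodes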
Run the Antrea e2e tests
To run the Antrea e2e test suite on your Kind cluster, please refer to this document.
FAQ
Antrea Agents are not starting on macOS, what could it be?
Some older versions of Docker Desktop did not include all the required Kernel modules to run the Antrea Agent, and in particular the openvswitch Kernel module. See this issue for more information. This issue does not exist with recent Docker Desktop versions (>= 2.5).
Antrea Agents are not starting on Windows, what could it be?
At this time, we do not officially support Antrea for Kind clusters running on
Windows hosts. In recent Docker Desktop versions, the default way of running
Linux containers on Windows is by using the
Docker Desktop WSL 2
backend. However, the Linux
Kernel used by default in WSL 2 does not include all the required Kernel modules
to run the Antrea Agent, and in particular the openvswitch Kernel module. There are 2 different ways to work around this issue, which we will not detail in this document:
- use the Hyper-V backend for Docker Desktop
- build a custom Kernel for WSL, with the required Kernel configuration:
CONFIG_NETFILTER_XT_MATCH_RECENT=y
CONFIG_NETFILTER_XT_TARGET_CT=y
CONFIG_OPENVSWITCH=y
CONFIG_OPENVSWITCH_GRE=y
CONFIG_OPENVSWITCH_VXLAN=y
CONFIG_OPENVSWITCH_GENEVE=y