Documentation for version v0.9.3 is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
Antrea is a Kubernetes network plugin that provides network connectivity and security features for Pod workloads. Considering the scale and dynamism of Kubernetes workloads in a cluster, Network Flow Visibility helps in the management and configuration of Kubernetes resources such as Network Policy, Services, Pods etc., and thereby provides opportunities to enhance the performance and security aspects of Pod workloads.
For visualizing the network flows, Antrea monitors the flows in the Linux conntrack module. These flows are converted into flow records and sent to the configured flow collector. The high-level design is given below:
In Antrea, the basic building block for the Network Flow Visibility is the Flow Exporter feature. Flow Exporter operates within Antrea Agent; it builds and maintains a connection store by polling and dumping flows from conntrack module periodically. Connections from the connection store are exported to a flow collector using the IPFIX protocol, and for this purpose we use the go-ipfix library.
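The poll-and-export cycle described above can be sketched as a simplified model. This is illustrative only, not Antrea's actual implementation; the function name exportPolls and its types are assumptions made for the example.

```go
// A simplified model of the relationship between flow polling and export:
// the agent dumps conntrack connections every poll interval and exports
// records to the collector once every flowExportFrequency polls.
package main

import "fmt"

// exportPolls returns the poll cycles (1-based) at which the exporter would
// send records to the collector, given the total number of polls and the
// configured export frequency.
func exportPolls(totalPolls, exportFrequency int) []int {
	var exports []int
	for poll := 1; poll <= totalPolls; poll++ {
		// Each iteration stands in for one dump of the conntrack module.
		if poll%exportFrequency == 0 {
			exports = append(exports, poll)
		}
	}
	return exports
}

func main() {
	// With flowExportFrequency set to 5, records go out on every 5th poll cycle.
	fmt.Println(exportPolls(12, 5)) // [5 10]
}
```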
To enable the Flow Exporter feature in the Antrea Agent, the following configuration parameters have to be set in the Antrea Agent ConfigMap. Example values are provided in the snippet below.
```yaml
antrea-agent.conf: |
  # FeatureGates is a map of feature names to bools that enable or disable experimental features.
  featureGates:
    # Enable flowexporter which exports polled conntrack connections as IPFIX flow records from each agent to a configured collector.
    FlowExporter: true
    # Enable antrea proxy which provides ServiceLB for in-cluster services in antrea agent.
    # It should be enabled on Windows, otherwise NetworkPolicy will not take effect on
    # Service traffic.
    AntreaProxy: true
  # Provide flow collector address as string with format <IP>:<port>[:<proto>], where proto is tcp or udp. This also enables
  # the flow exporter that sends IPFIX flow records of conntrack flows on OVS bridge. If no L4 transport proto is given,
  # we consider tcp as default.
  flowCollectorAddr: "192.168.86.86:4739:tcp"
  # Provide flow poll interval as a duration string. This determines how often the flow exporter dumps connections from the conntrack module.
  # Flow poll interval should be greater than or equal to 1s (one second).
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  flowPollInterval: "1s"
  # Provide flow export frequency, which is the number of poll cycles elapsed before the flow exporter exports flow records to
  # the flow collector.
  # Flow export frequency should be greater than or equal to 1.
  flowExportFrequency: 5
```
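The flowCollectorAddr format of `<IP>:<port>[:<proto>]` with tcp as the default transport can be validated as sketched below. This is a hedged illustration, not Antrea's actual parser; parseFlowCollectorAddr is a hypothetical helper, and IPv6 bracket handling is omitted for brevity.

```go
// Sketch of validating a flowCollectorAddr value of the form
// <IP>:<port>[:<proto>], defaulting the transport protocol to tcp
// when it is omitted, as the ConfigMap comment describes.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseFlowCollectorAddr splits an address string into host, port, and proto.
// Note: a real parser would also need to handle bracketed IPv6 addresses.
func parseFlowCollectorAddr(addr string) (host string, port int, proto string, err error) {
	parts := strings.Split(addr, ":")
	switch len(parts) {
	case 2:
		proto = "tcp" // documented default when no L4 proto is given
	case 3:
		proto = strings.ToLower(parts[2])
		if proto != "tcp" && proto != "udp" {
			return "", 0, "", fmt.Errorf("unsupported proto %q", parts[2])
		}
	default:
		return "", 0, "", fmt.Errorf("expected <IP>:<port>[:<proto>], got %q", addr)
	}
	port, err = strconv.Atoi(parts[1])
	if err != nil {
		return "", 0, "", fmt.Errorf("invalid port %q", parts[1])
	}
	return parts[0], port, proto, nil
}

func main() {
	host, port, proto, _ := parseFlowCollectorAddr("192.168.86.86:4739")
	fmt.Println(host, port, proto) // 192.168.86.86 4739 tcp
}
```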
Please note that the default values for the flowPollInterval and flowExportFrequency parameters are 5s and 12, respectively.
flowCollectorAddr is a required parameter; the Flow Exporter feature does not work without it.
Currently, the Flow Exporter feature provides visibility for Pod-to-Pod, Pod-to-Node, Node-to-Pod, Node-to-Node and Pod-to-Service network flows, along with associated statistics such as data throughput (bits per second), packet throughput (packets per second), cumulative byte count and cumulative packet count. Pod-to-Service flow visibility is supported only when Antrea Proxy is enabled.
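The throughput statistics mentioned above derive from the change in a flow's cumulative counters between two polls. The arithmetic can be sketched as follows; throughputBps is a hypothetical helper, not Antrea code.

```go
// Illustrative arithmetic for data throughput: bits per second computed
// from two cumulative byte counts of the same flow taken one poll apart.
package main

import "fmt"

// throughputBps converts the delta of cumulative byte counts over an
// interval (in seconds) into bits per second.
func throughputBps(prevBytes, curBytes uint64, intervalSeconds float64) float64 {
	return float64(curBytes-prevBytes) * 8 / intervalSeconds
}

func main() {
	// 125000 bytes transferred in one second is 1 Mbps.
	fmt.Println(throughputBps(0, 125000, 1)) // 1e+06
}
```

Packet throughput follows the same pattern with cumulative packet counts and no factor of 8.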
Kubernetes information such as Node name, Pod name, Pod Namespace, Service name etc. is added to the flow records. For flow records that are exported from any given Antrea Agent, we only provide the information of Kubernetes entities that are local to the Antrea Agent. In the future, we plan to extend this feature to provide information about remote Kubernetes entities such as remote Node name, remote Pod name etc.
Please note that in the case of inter-Node flows, we export only one copy of the flow record, from the source Node where the flow originates, and ignore the flow record from the destination Node, where the destination Pod resides. In the future, this behavior will change when support for Network Policy is added, as the two hosts may apply different Network Policies and rules.
Antrea supports sending IPFIX flow records through the Flow Exporter feature described above. The Elastic Stack (ELK Stack) works as the data collector, data storage and visualization tool for flow records and flow-related information. This document provides the guidelines for deploying Elastic Stack with support for Antrea-specific IPFIX fields in a Kubernetes cluster.
Elastic Stack is a group of open-source products that help collect, store, search, analyze and visualize data in real time. We use Logstash, Elasticsearch and Kibana for Antrea flow visualization. Logstash works as the data collector that centralizes flow records. The Logstash Netflow codec plugin supports the Netflow v5/v9/v10 (IPFIX) protocols for flow data collection. The Flow Exporter feature in the Antrea Agent uses the IPFIX (Netflow v10) protocol to export flow records.
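Every IPFIX (Netflow v10) message starts with a fixed 16-byte header defined in RFC 7011. The sketch below decodes that header to illustrate the wire format; it is not go-ipfix or Logstash code, and parseIPFIXHeader is a name invented for this example.

```go
// Minimal decoder for the 16-byte IPFIX message header (RFC 7011):
// version (always 10), total length, export time, sequence number,
// and observation domain ID, all in network (big-endian) byte order.
package main

import (
	"encoding/binary"
	"fmt"
)

type IPFIXHeader struct {
	Version     uint16 // always 10 for IPFIX
	Length      uint16 // total message length in bytes
	ExportTime  uint32 // seconds since the UNIX epoch
	SequenceNum uint32
	DomainID    uint32 // observation domain ID
}

func parseIPFIXHeader(b []byte) (IPFIXHeader, error) {
	if len(b) < 16 {
		return IPFIXHeader{}, fmt.Errorf("need 16 bytes, got %d", len(b))
	}
	h := IPFIXHeader{
		Version:     binary.BigEndian.Uint16(b[0:2]),
		Length:      binary.BigEndian.Uint16(b[2:4]),
		ExportTime:  binary.BigEndian.Uint32(b[4:8]),
		SequenceNum: binary.BigEndian.Uint32(b[8:12]),
		DomainID:    binary.BigEndian.Uint32(b[12:16]),
	}
	if h.Version != 10 {
		return IPFIXHeader{}, fmt.Errorf("not an IPFIX message: version %d", h.Version)
	}
	return h, nil
}

func main() {
	msg := []byte{0x00, 0x0a, 0x00, 0x10, 0x5f, 0x00, 0x00, 0x00, 0, 0, 0, 1, 0, 0, 0, 2}
	h, _ := parseIPFIXHeader(msg)
	fmt.Println(h.Version, h.Length) // 10 16
}
```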
Exported IPFIX flow records contain the following Antrea-specific fields along with standard IANA fields.
| IPFIX Information Element | Enterprise ID | Field ID | Type |
|---------------------------|---------------|----------|------|
To create all the necessary resources in the `elk-flow-collector` namespace and get everything up and running, run:
```shell
kubectl create namespace elk-flow-collector
kubectl create configmap logstash-configmap -n elk-flow-collector --from-file=build/yamls/elk-flow-collector/logstash/
kubectl apply -f build/yamls/elk-flow-collector/elk-flow-collector.yml -n elk-flow-collector
```
The Kibana dashboard is exposed as a NodePort Service, which can be accessed via the IP of any Node at the Service's NodePort.
`build/yamls/flow/kibana.ndjson` is an auto-generated reusable file containing
pre-built objects for visualizing Pod-to-Pod, Pod-to-Service and Node-to-Node
flow records. To import the dashboards into Kibana, go to
Management -> Saved Objects and import the file.
The following dashboards are pre-built and are recommended for Antrea flow visualization.
An overview of Pod-based flow record information is provided.
Pod-to-Pod Tx and Rx traffic is shown in Sankey diagrams. The corresponding source or destination Pod throughput is visualized using stacked line graphs.
Pod-to-Service traffic is presented similarly to Pod-to-Pod traffic. The corresponding source or destination IP addresses are shown in tooltips.
Flow Records dashboard shows the raw flow records over time with support for filters.
Node Throughput dashboard shows the visualization of inter-Node and intra-Node traffic by aggregating all the Pod traffic per Node.