K8s kube-proxy (IPVS proxy mode)

Based on Kubernetes v1.25

In IPVS proxy mode, kube-proxy implements Service functionality mainly through the iptables rules, IPVS rules, IPSet contents, and dummy interface configured on the host node; all of this is set up in the syncProxyRules func.

  • Compared with the iptables proxy mode, adding a new Service in IPVS mode only adds entries to the IPSets and the IPVS rules; it does not create new iptables chains or rules
  • IPVS mode uses IPSet and IPVS to do the IP matching and the DNAT, so it is more efficient than the linear rule evaluation of iptables and better suited to larger clusters (a few inspection commands are shown below)
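
A few read-only commands for inspecting this state on a node (illustrative; they assume the ipvsadm and ipset tools are installed):

    ipvsadm -Ln                            # IPVS virtual servers (Service IPs) and their real servers (Endpoints)
    ipset list | grep 'Name: KUBE-'        # the KUBE-* IPSets maintained by kube-proxy
    iptables-save -t nat | grep 'KUBE-'    # only a small, fixed set of KUBE chains and rules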

Collecting stale Services and stale Endpoints

In IPVS proxy mode, the logic for identifying stale Services and stale Endpoints is the same as in the iptables proxy mode.

Initializing the iptables content buffers

The IPVS proxier still needs to create some iptables chains and rules, so it also uses an iptables-restore byte stream to buffer and then flush the iptables rules.

At the start of each sync, kube-proxy initializes four content buffers: natChains, natRules, filterChains, and filterRules (their shape is sketched below).
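
As a rough illustration (not the exact payload), the concatenated buffers later form an iptables-restore stream of this shape, with table headers, chain declarations, rules, and a COMMIT per table:

    *nat
    :KUBE-SERVICES - [0:0]
    :KUBE-POSTROUTING - [0:0]
    -A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
    COMMIT
    *filter
    :KUBE-FORWARD - [0:0]
    COMMIT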

Creating the basic iptables chains and rules

kube-proxy calls the createAndLinkKubeChain func to create the basic iptables chains in the nat and filter tables, together with the rules that jump from the built-in iptables chains to these KUBE chains.

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1840

    // createAndLinkKubeChain create all kube chains that ipvs proxier need and write basic link.
    func (proxier *Proxier) createAndLinkKubeChain() {
        for _, ch := range iptablesChains {
            if _, err := proxier.iptables.EnsureChain(ch.table, ch.chain); err != nil {
                klog.ErrorS(err, "Failed to ensure chain exists", "table", ch.table, "chain", ch.chain)
                return
            }
            if ch.table == utiliptables.TableNAT {
                proxier.natChains.Write(utiliptables.MakeChainLine(ch.chain))
            } else {
                proxier.filterChains.Write(utiliptables.MakeChainLine(ch.chain))
            }
        }

        for _, jc := range iptablesJumpChain {
            args := []string{"-m", "comment", "--comment", jc.comment, "-j", string(jc.to)}
            if _, err := proxier.iptables.EnsureRule(utiliptables.Prepend, jc.table, jc.from, args...); err != nil {
                klog.ErrorS(err, "Failed to ensure chain jumps", "table", jc.table, "srcChain", jc.from, "dstChain", jc.to)
            }
        }
    }

kube-proxy first calls the EnsureChain func of the iptables utility package to create the cluster's basic KUBE chains.

The chains contained in iptablesChains are the following:

Table | Chain | Purpose
nat | KUBE-SERVICES | Checks whether a packet's destination IP is one of the Service IPs; if the destination is a NodePort or a LoadBalancer IP, jumps to KUBE-NODE-PORT or KUBE-LOAD-BALANCER
nat | KUBE-POSTROUTING | SNATs certain packets leaving the node, namely those carrying the kube-proxy MARK
nat | KUBE-NODE-PORT | Checks whether the packet's destination is one of the Service NodePorts
nat | KUBE-LOAD-BALANCER | Checks whether the packet's destination is a LoadBalancer IP recorded in the Service's Status field
nat | KUBE-MARK-MASQ | Adds the MARK to packets
filter | KUBE-FORWARD | Accepts or rejects packets based on their MARK, conntrack state, and similar criteria
filter | KUBE-NODE-PORT | Checks whether the packet's destination is one of the Service NodePorts
filter | KUBE-PROXY-FIREWALL | When spec.loadBalancerSourceRanges is configured, drops packets whose source is not in the allow list
filter | KUBE-SOURCE-RANGES-FIREWALL | When spec.loadBalancerSourceRanges is configured, drops packets whose source is not in the allow list

The iptables commands that EnsureChain corresponds to are:

iptables -N KUBE-SERVICES -t nat
iptables -N KUBE-POSTROUTING -t nat
iptables -N KUBE-NODE-PORT -t nat
iptables -N KUBE-LOAD-BALANCER -t nat
iptables -N KUBE-MARK-MASQ -t nat
iptables -N KUBE-FORWARD -t filter
iptables -N KUBE-NODE-PORT -t filter
iptables -N KUBE-PROXY-FIREWALL -t filter
iptables -N KUBE-SOURCE-RANGES-FIREWALL -t filter

After the basic KUBE chains are created, kube-proxy calls the EnsureRule func to add the rules that jump from the built-in iptables chains to these basic KUBE chains. The jumps are recorded in iptablesJumpChain (equivalent commands are shown after the table):

Table | Source chain | Destination chain | Comment
nat | OUTPUT | KUBE-SERVICES | kubernetes service portals
nat | PREROUTING | KUBE-SERVICES | kubernetes service portals
nat | POSTROUTING | KUBE-POSTROUTING | kubernetes postrouting rules
filter | FORWARD | KUBE-FORWARD | kubernetes forwarding rules
filter | INPUT | KUBE-NODE-PORT | kubernetes health check rules
filter | INPUT | KUBE-PROXY-FIREWALL | kube-proxy firewall rules
filter | FORWARD | KUBE-PROXY-FIREWALL | kube-proxy firewall rules
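
In terms of plain iptables commands, these prepended jump rules are roughly equivalent to the following (illustrative; EnsureRule is called with Prepend, i.e. -I):

    iptables -t nat -I OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    iptables -t nat -I PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    iptables -t nat -I POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
    iptables -t filter -I FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
    iptables -t filter -I INPUT -m comment --comment "kubernetes health check rules" -j KUBE-NODE-PORT
    iptables -t filter -I INPUT -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
    iptables -t filter -I FORWARD -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL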

Creating the dummy interface

In IPVS mode, Service IPs need to be bound to a local network device on the host so that, at the routing decision point, packets destined for them are delivered to the INPUT chain and can then be processed by IPVS, which performs the DNAT. For this purpose kube-proxy creates a virtual network device dedicated to holding Service IPs: the dummy interface.

kube-proxy calls the EnsureDummyDevice func, which uses a netlink socket to make sure the dummy interface exists.

The device is called a dummy interface because there is no real NIC behind it; it exists only to assist iptables and routing. The dummy interface is named "kube-ipvs0".
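
Conceptually this is equivalent to the commands below (illustrative only; kube-proxy talks to netlink directly, and 10.96.0.10 is a hypothetical ClusterIP):

    ip link add kube-ipvs0 type dummy          # what EnsureDummyDevice guarantees
    ip addr add 10.96.0.10/32 dev kube-ipvs0   # Service IPs get bound to it during syncProxyRules
    ip addr show kube-ipvs0                    # inspect the Service IPs currently bound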

Creating the IPSets

In IPVS mode, the iptables rules rely on IPSets to match IP addresses and ports.

  • An IPSet matches IPs and ports using hashes or bitmaps, so lookups run in constant time, which significantly improves performance in large clusters
  • kube-proxy creates different IPSets to match the different kinds of Service IPs

The IPSets created by kube-proxy in IPVS mode (an example entry is shown after the table):

Name | Type | Description
KUBE-LOOP-BACK | hash:ip,port,ip | Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose
KUBE-CLUSTER-IP | hash:ip,port | Kubernetes service cluster ip + port for masquerade purpose
KUBE-EXTERNAL-IP | hash:ip,port | Kubernetes service external ip + port for masquerade and filter purpose
KUBE-EXTERNAL-IP-LOCAL | hash:ip,port | Kubernetes service external ip + port with externalTrafficPolicy=local
KUBE-LOAD-BALANCER | hash:ip,port | Kubernetes service lb portal
KUBE-LOAD-BALANCER-LOCAL | hash:ip,port | Kubernetes service load balancer ip + port with externalTrafficPolicy=local
KUBE-LOAD-BALANCER-FW | hash:ip,port | Kubernetes service load balancer ip + port for load balancer with sourceRange
KUBE-LOAD-BALANCER-SOURCE-IP | hash:ip,port,ip | Kubernetes service load balancer ip + port + source IP for packet filter purpose
KUBE-LOAD-BALANCER-SOURCE-CIDR | hash:ip,port,net | Kubernetes service load balancer ip + port + source cidr for packet filter purpose
KUBE-NODE-PORT-TCP | bitmap:port | Kubernetes nodeport TCP port for masquerade purpose
KUBE-NODE-PORT-LOCAL-TCP | bitmap:port | Kubernetes nodeport TCP port with externalTrafficPolicy=local
KUBE-NODE-PORT-UDP | bitmap:port | Kubernetes nodeport UDP port for masquerade purpose
KUBE-NODE-PORT-LOCAL-UDP | bitmap:port | Kubernetes nodeport UDP port with externalTrafficPolicy=local
KUBE-NODE-PORT-SCTP-HASH | hash:ip,port | Kubernetes nodeport SCTP port for masquerade purpose with type 'hash ip:port'
KUBE-NODE-PORT-LOCAL-SCTP-HASH | hash:ip,port | Kubernetes nodeport SCTP port with externalTrafficPolicy=local with type 'hash ip:port'
KUBE-HEALTH-CHECK-NODE-PORT | bitmap:port | Kubernetes health check node port
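
For example, a hypothetical Service with ClusterIP 10.96.0.10 and port 80/TCP shows up in KUBE-CLUSTER-IP as follows (illustrative output):

    ipset list KUBE-CLUSTER-IP
    # Name: KUBE-CLUSTER-IP
    # Type: hash:ip,port
    # Members:
    # 10.96.0.10,tcp:80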

Collecting the node's IP addresses

  • In IPVS mode, the node IPs (nodeIPs) are what ultimately get used to set up the IPVS rules; the differences from iptables mode are:

    • Loopback addresses (those starting with 127) are skipped. Keeping them would make the kernel's cross_local_route_boundary check fail during route validation and drop the packets, so 127.x addresses are excluded. iptables mode does not skip them.
    • If nodeAddrSet contains a zero CIDR, the ipGetter.NodeIPs func is used to collect the IPs of every non-dummy network device and return them as the node's host IPs. iptables mode simply uses the whole zero CIDR and does not enumerate per-device IPs.
  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1106

    if hasNodePort {
        nodeAddrSet, err := utilproxy.GetNodeAddresses(proxier.nodePortAddresses, proxier.networkInterfacer)
        if err != nil {
            klog.ErrorS(err, "Failed to get node IP address matching nodeport cidr")
        } else {
            nodeAddresses = nodeAddrSet.List()
            for _, address := range nodeAddresses {
                a := netutils.ParseIPSloppy(address)
                if a.IsLoopback() {
                    continue
                }
                if utilproxy.IsZeroCIDR(address) {
                    nodeIPs, err = proxier.ipGetter.NodeIPs()
                    if err != nil {
                        klog.ErrorS(err, "Failed to list all node IPs from host")
                    }
                    break
                }
                nodeIPs = append(nodeIPs, a)
            }
        }
    }

Configuring rules for each Service

Configuring the KUBE-LOOP-BACK IPSet

kube-proxy iterates over the Service's backend Endpoints; if an Endpoint runs on the node kube-proxy itself is on, that Endpoint is added to the KUBE-LOOP-BACK IPSet (an example entry is shown below).
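
For a hypothetical local Endpoint 10.244.1.5:8080/TCP, the KUBE-LOOP-BACK entry (type hash:ip,port,ip) records the Endpoint as both destination and source, which is what allows hairpin traffic to be recognized:

    ipset list KUBE-LOOP-BACK
    # Members:
    # 10.244.1.5,tcp:8080,10.244.1.5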

Configuring the ClusterIP IPSet and IPVS rules

The Service's ClusterIP and Port are added to the KUBE-CLUSTER-IP IPSet:

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1188

    // Capture the clusterIP.
    // ipset call
    entry := &utilipset.Entry{
        IP:       svcInfo.ClusterIP().String(),
        Port:     svcInfo.Port(),
        Protocol: protocol,
        SetType:  utilipset.HashIPPort,
    }
    // add service Cluster IP:Port to kubeServiceAccess ip set for the purpose of solving hairpin.
    // proxier.kubeServiceAccessSet.activeEntries.Insert(entry.String())
    if valid := proxier.ipsetList[kubeClusterIPSet].validateEntry(entry); !valid {
        klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeClusterIPSet].Name)
        continue
    }
    proxier.ipsetList[kubeClusterIPSet].activeEntries.Insert(entry.String())

An IPVS VirtualServer and its RealServers are then configured for the ClusterIP and Port; this is the step that actually implements DNAT with IPVS:

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1214

    // We need to bind ClusterIP to dummy interface, so set `bindAddr` parameter to `true` in syncService()
    if err := proxier.syncService(svcPortNameString, serv, true, bindedAddresses); err == nil {
        activeIPVSServices[serv.String()] = true
        activeBindAddrs[serv.Address.String()] = true
        // ExternalTrafficPolicy only works for NodePort and external LB traffic, does not affect ClusterIP
        // So we still need clusterIP rules in onlyNodeLocalEndpoints mode.
        internalNodeLocal := false
        if utilfeature.DefaultFeatureGate.Enabled(features.ServiceInternalTrafficPolicy) && svcInfo.InternalPolicyLocal() {
            internalNodeLocal = true
        }
        if err := proxier.syncEndpoint(svcPortName, internalNodeLocal, serv); err != nil {
            klog.ErrorS(err, "Failed to sync endpoint for service", "servicePortName", svcPortName, "virtualServer", serv)
        }
    } else {
        klog.ErrorS(err, "Failed to sync service", "servicePortName", svcPortName, "virtualServer", serv)
    }

In the syncService func, an IPVS VirtualServer is created directly for the ClusterIP and Port, and the ClusterIP is bound to the dummy interface; syncEndpoint then creates the corresponding RealServers for that VirtualServer from the Endpoints (the resulting node state is sketched after this list):

  • If the Service's internal traffic policy is Cluster, all of the Service's Endpoints are added as RealServers.
  • If the Service's internal traffic policy is Local, there are two cases:
    • If Endpoints of the Service run on the current node, only those local Endpoints are added as RealServers.
    • If no Endpoint of the Service runs on the current node, all of the Service's Endpoints are added as RealServers.
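
The resulting node state for a hypothetical Service with ClusterIP 10.96.0.10:80 and two Endpoints would look roughly like this: the ClusterIP is bound to kube-ipvs0, and an IPVS virtual server with two masqueraded real servers exists:

    ip addr show kube-ipvs0 | grep 10.96.0.10
    ipvsadm -Ln -t 10.96.0.10:80
    # TCP  10.96.0.10:80 rr
    #   -> 10.244.1.5:8080    Masq    1    0    0
    #   -> 10.244.2.7:8080    Masq    1    0    0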

Configuring the ExternalIP IPSet and IPVS rules

Rules are configured for the Service's External IPs and Port. Each External IP needs to be added to an IPSet:

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1231

    // Capture externalIPs.
    for _, externalIP := range svcInfo.ExternalIPStrings() {
        // ipset call
        entry := &utilipset.Entry{
            IP:       externalIP,
            Port:     svcInfo.Port(),
            Protocol: protocol,
            SetType:  utilipset.HashIPPort,
        }

        if svcInfo.ExternalPolicyLocal() {
            if valid := proxier.ipsetList[kubeExternalIPLocalSet].validateEntry(entry); !valid {
                klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeExternalIPLocalSet].Name)
                continue
            }
            proxier.ipsetList[kubeExternalIPLocalSet].activeEntries.Insert(entry.String())
        } else {
            // We have to SNAT packets to external IPs.
            if valid := proxier.ipsetList[kubeExternalIPSet].validateEntry(entry); !valid {
                klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeExternalIPSet].Name)
                continue
            }
            proxier.ipsetList[kubeExternalIPSet].activeEntries.Insert(entry.String())
        }

If the Service's external traffic policy is Local, the entry goes into the KUBE-EXTERNAL-IP-LOCAL IPSet; otherwise it goes into the KUBE-EXTERNAL-IP IPSet.

Notes on configuring the RealServers:

  • If the Service's external traffic policy is Cluster, all of the Service's Endpoints are added as RealServers.

  • If the Service's external traffic policy is Local, there are two cases:

    • If Endpoints of the Service run on the current node, only those local Endpoints are added as RealServers.
    • If no Endpoint of the Service runs on the current node, all of the Service's Endpoints are added as RealServers.

    For access from inside the cluster, this mechanism ensures the Service's Pods are always reachable.

    For access from outside the cluster, in Local mode the iptables rules set up by kube-proxy do not SNAT packets forwarded to Pods on other nodes, so the response leaves the destination Pod with the Pod IP as its source address and goes straight back to the client, which cannot match it to its connection. The net effect is the intended Local-mode behavior: external traffic cannot reach the Service through a node that has no local Pod.

Configuring the LoadBalancer IP IPSet and IPVS rules

Rules are configured for the LoadBalancer ingress IP(s) and Port (example entries are shown after the code).

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1279

    for _, ingress := range svcInfo.LoadBalancerIPStrings() {
        // ipset call
        entry = &utilipset.Entry{
            IP:       ingress,
            Port:     svcInfo.Port(),
            Protocol: protocol,
            SetType:  utilipset.HashIPPort,
        }
        // add service load balancer ingressIP:Port to kubeServiceAccess ip set for the purpose of solving hairpin.
        // proxier.kubeServiceAccessSet.activeEntries.Insert(entry.String())
        // If we are proxying globally, we need to masquerade in case we cross nodes.
        // If we are proxying only locally, we can retain the source IP.
        if valid := proxier.ipsetList[kubeLoadBalancerSet].validateEntry(entry); !valid {
            klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeLoadBalancerSet].Name)
            continue
        }
        proxier.ipsetList[kubeLoadBalancerSet].activeEntries.Insert(entry.String())
        // insert loadbalancer entry to lbIngressLocalSet if service externaltrafficpolicy=local
        if svcInfo.ExternalPolicyLocal() {
            if valid := proxier.ipsetList[kubeLoadBalancerLocalSet].validateEntry(entry); !valid {
                klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeLoadBalancerLocalSet].Name)
                continue
            }
            proxier.ipsetList[kubeLoadBalancerLocalSet].activeEntries.Insert(entry.String())
        }
        if len(svcInfo.LoadBalancerSourceRanges()) != 0 {
            // The service firewall rules are created based on ServiceSpec.loadBalancerSourceRanges field.
            // This currently works for loadbalancers that preserves source ips.
            // For loadbalancers which direct traffic to service NodePort, the firewall rules will not apply.
            if valid := proxier.ipsetList[kubeLoadBalancerFWSet].validateEntry(entry); !valid {
                klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeLoadBalancerFWSet].Name)
                continue
            }
            proxier.ipsetList[kubeLoadBalancerFWSet].activeEntries.Insert(entry.String())
            allowFromNode := false
            for _, src := range svcInfo.LoadBalancerSourceRanges() {
                // ipset call
                entry = &utilipset.Entry{
                    IP:       ingress,
                    Port:     svcInfo.Port(),
                    Protocol: protocol,
                    Net:      src,
                    SetType:  utilipset.HashIPPortNet,
                }
                // enumerate all white list source cidr
                if valid := proxier.ipsetList[kubeLoadBalancerSourceCIDRSet].validateEntry(entry); !valid {
                    klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeLoadBalancerSourceCIDRSet].Name)
                    continue
                }
                proxier.ipsetList[kubeLoadBalancerSourceCIDRSet].activeEntries.Insert(entry.String())

                // ignore error because it has been validated
                _, cidr, _ := netutils.ParseCIDRSloppy(src)
                if cidr.Contains(proxier.nodeIP) {
                    allowFromNode = true
                }
            }
            // generally, ip route rule was added to intercept request to loadbalancer vip from the
            // loadbalancer's backend hosts. In this case, request will not hit the loadbalancer but loop back directly.
            // Need to add the following rule to allow request on host.
            if allowFromNode {
                entry = &utilipset.Entry{
                    IP:       ingress,
                    Port:     svcInfo.Port(),
                    Protocol: protocol,
                    IP2:      ingress,
                    SetType:  utilipset.HashIPPortIP,
                }
                // enumerate all white list source ip
                if valid := proxier.ipsetList[kubeLoadBalancerSourceIPSet].validateEntry(entry); !valid {
                    klog.ErrorS(nil, "Error adding entry to ipset", "entry", entry, "ipset", proxier.ipsetList[kubeLoadBalancerSourceIPSet].Name)
                    continue
                }
                proxier.ipsetList[kubeLoadBalancerSourceIPSet].activeEntries.Insert(entry.String())
            }
        }
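
For a hypothetical LoadBalancer Service with ingress IP 203.0.113.10, port 80/TCP, and loadBalancerSourceRanges ["198.51.100.0/24"], the resulting entries would look roughly like this (illustrative):

    ipset list KUBE-LOAD-BALANCER              # Members: 203.0.113.10,tcp:80
    ipset list KUBE-LOAD-BALANCER-FW           # Members: 203.0.113.10,tcp:80
    ipset list KUBE-LOAD-BALANCER-SOURCE-CIDR  # Members: 203.0.113.10,tcp:80,198.51.100.0/24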

Configuring the NodePort IPSets and IPVS rules

  1. Clear the UDP conntrack entries for the NodePort

    The ClearEntriesForPort func is called, which runs the command below; {NodePort} is the NodePort currently being processed.

    conntrack -D -p UDP --dport {NodePort}

    ⚠️ Note that this clears all UDP conntrack entries for the NodePort indiscriminately, not just the stale ones. It breaks healthy UDP flows, so this approach is problematic even though UDP itself is connectionless.

    kube-proxy fixed this in v1.27.

  2. Depending on the ServicePort's protocol, the entry is added to the KUBE-NODE-PORT-TCP, KUBE-NODE-PORT-UDP, or KUBE-NODE-PORT-SCTP-HASH IPSet. If the Service's external traffic policy is Local, the entry is additionally added to the KUBE-NODE-PORT-LOCAL-TCP, KUBE-NODE-PORT-LOCAL-UDP, or KUBE-NODE-PORT-LOCAL-SCTP-HASH IPSet.

  3. As in the LoadBalancer IP case, an IPVS VirtualServer is created for the NodePort on each of the nodeIPs, and the syncEndpoint func is called to configure the RealServers (see the sketch after this list).
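
For a hypothetical NodePort Service on TCP port 30080, on a node whose IP is 192.168.1.10, the result would look roughly like this (illustrative):

    ipset list KUBE-NODE-PORT-TCP              # Members: 30080
    ipvsadm -Ln -t 192.168.1.10:30080          # one virtual server per (nodeIP, NodePort) pair
    # TCP  192.168.1.10:30080 rr
    #   -> 10.244.1.5:8080    Masq    1    0    0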

Configuring the health check IPSet

If the Service has a HealthCheckNodePort configured, that port is added to the KUBE-HEALTH-CHECK-NODE-PORT IPSet.

Updating the contents of each IPSet

The IP addresses collected above are added to their respective IPSets, and entries that are no longer within the desired set of addresses are removed from them.

Creating the iptables rules that match the IPSets

Unlike iptables mode, in IPVS mode iptables does not perform the DNAT; it merely accepts packets whose destination matches one of the Service IPs (via the IPSets). Those packets are then DNATed by IPVS. All of these rules are written in the writeIptablesRules func; a simplified example follows.
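
A simplified sketch of the kind of nat-table rules that get written (assuming a cluster CIDR of 10.244.0.0/16 and masquerade-all disabled; comments omitted, not the complete set):

    -A KUBE-SERVICES ! -s 10.244.0.0/16 -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT
    -A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
    -A KUBE-NODE-PORT -p tcp -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-MARK-MASQ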

Flushing the iptables buffers to the host

kube-proxy first concatenates the natChains, natRules, filterChains, and filterRules buffers into the iptablesData buffer, then calls the RestoreAll func to apply them:

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L1549

    // Sync iptables rules.
    // NOTE: NoFlushTables is used so we don't flush non-kubernetes chains in the table.
    proxier.iptablesData.Reset()
    proxier.iptablesData.Write(proxier.natChains.Bytes())
    proxier.iptablesData.Write(proxier.natRules.Bytes())
    proxier.iptablesData.Write(proxier.filterChains.Bytes())
    proxier.iptablesData.Write(proxier.filterRules.Bytes())

    klog.V(5).InfoS("Restoring iptables", "rules", string(proxier.iptablesData.Bytes()))
    err = proxier.iptables.RestoreAll(proxier.iptablesData.Bytes(), utiliptables.NoFlushTables, utiliptables.RestoreCounters)
    if err != nil {
        if pErr, ok := err.(utiliptables.ParseError); ok {
            lines := utiliptables.ExtractLines(proxier.iptablesData.Bytes(), pErr.Line(), 3)
            klog.ErrorS(pErr, "Failed to execute iptables-restore", "rules", lines)
        } else {
            klog.ErrorS(err, "Failed to execute iptables-restore", "rules", string(proxier.iptablesData.Bytes()))
        }
        metrics.IptablesRestoreFailuresTotal.Inc()
        return
    }

    The command RestoreAll executes is:

    iptables-restore -w 5 -W 100000 --noflush --counters {BytesData}

Cleaning up stale Service addresses

In IPVS mode, when a Service no longer exists, its addresses and rules must be cleaned up promptly. The cleanup consists of three steps (conceptually equivalent to the commands shown after the code):

  • Remove the Service's IPs from the various IPSets

  • Delete the Service's VirtualServer from IPVS

  • Unbind the Service's IPs from the dummy interface

  • Ref:https://github.com/kubernetes/kubernetes/blob/88e994f6bf8fc88114c5b733e09afea339bea66d/pkg/proxy/ipvs/proxier.go#L2061

    func (proxier *Proxier) cleanLegacyService(activeServices map[string]bool, currentServices map[string]*utilipvs.VirtualServer, legacyBindAddrs map[string]bool) {
        isIPv6 := netutils.IsIPv6(proxier.nodeIP)
        for cs := range currentServices {
            svc := currentServices[cs]
            if proxier.isIPInExcludeCIDRs(svc.Address) {
                continue
            }
            if netutils.IsIPv6(svc.Address) != isIPv6 {
                // Not our family
                continue
            }
            if _, ok := activeServices[cs]; !ok {
                klog.V(4).InfoS("Delete service", "virtualServer", svc)
                if err := proxier.ipvs.DeleteVirtualServer(svc); err != nil {
                    klog.ErrorS(err, "Failed to delete service", "virtualServer", svc)
                }
                addr := svc.Address.String()
                if _, ok := legacyBindAddrs[addr]; ok {
                    klog.V(4).InfoS("Unbinding address", "address", addr)
                    if err := proxier.netlinkHandle.UnbindAddress(addr, defaultDummyDevice); err != nil {
                        klog.ErrorS(err, "Failed to unbind service from dummy interface", "interface", defaultDummyDevice, "address", addr)
                    } else {
                        // In case we delete a multi-port service, avoid trying to unbind multiple times
                        delete(legacyBindAddrs, addr)
                    }
                }
            }
        }
    }
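
Conceptually, for a stale Service whose ClusterIP was 10.96.0.10:80, this cleanup is equivalent to the following commands (illustrative; kube-proxy uses the IPVS and netlink interfaces rather than shelling out):

    ipvsadm -D -t 10.96.0.10:80                # delete the stale IPVS virtual server
    ip addr del 10.96.0.10/32 dev kube-ipvs0   # unbind the address from the dummy interface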

Cleaning up leftover UDP conntrack entries

kube-proxy clears the UDP conntrack entries for the stale Endpoints and stale Services collected at the beginning of the sync.

ClearEntriesForIP clears stale UDP conntrack entries for ClusterIPs, ExternalIPs, and LoadBalancer IPs, using the following command:

conntrack -D --orig-dst {OriginIP} -p UDP

deleteEndpointConnections is responsible for clearing the UDP conntrack entries of stale Endpoints:

conntrack -D --orig-dst {OriginIP} --dst-nat {EndpointIP} -p UDP