I have 4 machines on-prem (1 master, 3 nodes) running a Kubernetes 1.29 cluster built with kubeadm, with the Cilium CNI installed following the official guide. The docs mention that Cilium can act as a LoadBalancer, so I set it up in BGP mode:
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "pool"
spec:
  cidrs:
    - cidr: "10.X.X.0/24"
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: 01-bgp-peering-policy
spec:
  virtualRouters:
  - localASN: 64512
    exportPodCIDR: false
    neighbors:
    - peerAddress: '10.X.X.1/32'
      peerASN: 64512
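For reference, a sketch of how the peering set up above can be verified, assuming the `cilium` CLI (cilium-cli) is available against this cluster:

```
# Per-node BGP session state; the peer should show as established
cilium bgp peers

# Prefixes each node is advertising; LoadBalancer IPs from the pool should
# appear here once a Service of type LoadBalancer receives an external IP
cilium bgp routes
```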
Then I deployed an nginx Pod and a Service of type LoadBalancer:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
        name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
`kubectl get svc` shows:
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   172.16.7.185   10.X.X.2      80:31206/TCP   2d22h
From 10.X.X.30, 10.X.X.31, 10.X.X.32 and 10.X.X.33 (the 1 master and 3 nodes), `curl http://10.X.X.2` works. From any other machine, however, the address is unreachable, and `arp -a` there shows `? (10.X.X.2) at <incomplete> on eno1` (a quick check for this is sketched below).
Does anyone have ideas on how to debug this? Many thanks.
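A possible explanation for the `<incomplete>` ARP entry, not verified against this cluster: if the LoadBalancer pool overlaps the clients' own subnet, they resolve 10.X.X.2 with ARP instead of routing to it, and with Cilium announcing the address only over BGP nothing answers that ARP. A quick check from a non-node machine:

```
# If this prints "10.X.X.2 dev eno1 ..." the host treats the address as
# on-link and ARPs for it; if it prints "10.X.X.2 via <router> ..." the
# traffic is routed and can follow a BGP-advertised path instead.
ip route get 10.X.X.2
```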
#1  YOOHUU  2024-04-22 17:00:47 +08:00
Are the other machines supposed to be accessing port 31206?
#2  pubby  2024-04-22 18:09:12 +08:00
Do you mean machines outside the k8s nodes? I'm not clear about your address layout, in particular these two points:
cidr: "10.X.X.0/24" is the pool used to allocate Service external-ips; it should not overlap with your node subnet.
peerAddress: '10.X.X.1/32' should be the IP of a BGP router outside the k8s cluster.
I haven't used cilium; we use calico, with a bird BGP router deployed outside the cluster as the peer, so machines outside can reach the Service external-ips directly.
#3  sinycn1 (OP)
@pubby Thanks for the reply. The problem is still unsolved, so I'm following up.
1. cidr: "10.X.X.0/24" is for allocating Service external-ips and must not overlap with the node range: I had indeed used the same subnet before; it is now split into 10.206.20.0/22 (nodes) and 10.206.24.0/22 (external-ips).
2. peerAddress: '10.206.20.1/32' should be a BGP router outside the k8s cluster: that is how it is set now.
The routes are still not being advertised. I'll post again when there is an update.
```
cilium bgp routes
(Defaulting to `available ipv4 unicast` routes, please see help for more options)
Node   VRouter   Prefix   NextHop   Age   Attrs
```
#4  pubby  2024-05-07 12:15:55 +08:00
First check whether an external-ip has actually been allocated.
Then you need a BGP peer service outside the cluster; you can deploy one with bird (for example at 192.168.3.100).
On the external main router, add a route so the external-ip range is reached via the BGP peer.
External BGP service configuration:
```code
[root@centos8-bgp-gw ~]# cat /etc/bird.conf
router id 192.168.3.100;
log syslog all;

protocol kernel {
  scan time 10;
  ipv4 {
    #import none;
    export all;        # insert routes into the kernel routing table
  };
  merge paths on;      # Allow export multipath routes (ECMP)
  graceful restart;
}

protocol device {
  scan time 60;
}

# Template example. Using templates to define IBGP route reflector clients.
template bgp k8s_nodes {
  local 192.168.3.100 as 63400;
  neighbor as 63400;
  direct;
  ipv4 {
    import all;
    export none;
  };
}

# for vm-k8s-master
protocol bgp vm_k8s_master from k8s_nodes {
  neighbor 192.168.3.105;
}

# for vm1-node1
protocol bgp vm1_node1 from k8s_nodes {
  neighbor 192.168.4.121;
}

# for vm2-node4
protocol bgp vm2_node4 from k8s_nodes {
  neighbor 192.168.4.104;
}

# for vm2-node5
protocol bgp vm2_node5 from k8s_nodes {
  neighbor 192.168.4.105;
}
```
Then on the BGP peer, check with `ip r` whether routes for those external-ips have been created automatically.
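To make the last two steps in #4 concrete, a sketch (assuming the external-ip pool 10.206.24.0/22 from #3 and the example bird peer 192.168.3.100 above; adjust to the real topology):

```
# On the external main router: send the whole external-ip range via the bird peer
ip route add 10.206.24.0/22 via 192.168.3.100

# On the bird peer: check that routes for the external-ips learned over BGP
# have been installed in the kernel routing table
ip r
```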
#5  sinycn1 (OP)
@pubby Thanks for the update. The problem has since been solved: with VyOS + Cilium BGP the routes are now advertised dynamically. The main part was changing Cilium's configuration; the details are in this issue (filed and resolved by myself): https://github.com/cilium/cilium/issues/32375
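The linked issue has the actual change; as a general note (not taken from that issue), CiliumBGPPeeringPolicy only takes effect when Cilium's BGP control plane feature is enabled, so that is worth double-checking first. A hedged sketch:

```
# Fresh install with the BGP control plane enabled (Helm value name as of recent
# Cilium releases); on an existing install, set the same value via
# `helm upgrade ... --reuse-values`
cilium install --set bgpControlPlane.enabled=true

# Afterwards, re-check sessions and advertised prefixes
cilium bgp peers
cilium bgp routes
```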
#6  a0xbd4CX0DHC1EuT  2024-05-08 10:36:00 +08:00
Is the fact that the external IP cannot be pinged the same problem?
```
[root@node1 metallb]# tcpdump -i any host 172.27.0.7 -s0 -A
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
10:26:33.131273 lo In IP node1 > node1: ICMP echo request, id 6, seq 1, length 64
10:26:33.131305 lo In IP node1 > node1: ICMP node1 protocol 1 port 42460 unreachable, length 92
10:26:34.165170 lo In IP node1 > node1: ICMP echo request, id 6, seq 2, length 64
10:26:34.165194 lo In IP node1 > node1: ICMP node1 protocol 1 port 3927 unreachable, length 92
10:26:35.194465 lo In IP node1 > node1: ICMP echo request, id 6, seq 3, length 64
10:26:35.194505 lo In IP node1 > node1: ICMP node1 protocol 1 port 51171 unreachable, length 92
10:26:36.218668 lo In IP node1 > node1: ICMP echo request, id 6, seq 4, length 64
10:26:36.218798 lo In IP node1 > node1: ICMP node1 protocol 1 port 12676 unreachable, length 92
^C
8 packets captured
21 packets received by filter
0 packets dropped by kernel

[root@node1 metallb]# kubectl get svc -n nginx
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.233.12.235   172.27.0.7    80:32022/TCP   16h

[root@node1 metallb]# curl -I 172.27.0.7
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 08 May 2024 02:32:54 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Powered-By: PHP/8.2.7

[root@node1 metallb]# ping 172.27.0.7
PING 172.27.0.7 (172.27.0.7) 56(84) bytes of data.
From 172.27.0.7 icmp_seq=1 Destination Port Unreachable
From 172.27.0.7 icmp_seq=2 Destination Port Unreachable
From 172.27.0.7 icmp_seq=3 Destination Port Unreachable
From 172.27.0.7 icmp_seq=4 Destination Port Unreachable
^C
--- 172.27.0.7 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3100ms

[root@node1 metallb]# kubectl get pod -A | grep cil
kube-system   cilium-mbjx2                       1/1   Running   2 (57m ago)   24h
kube-system   cilium-operator-5547b984f4-5d9c8   1/1   Running   2 (57m ago)   24h
kube-system   cilium-operator-5547b984f4-z9kgk   1/1   Running   2 (57m ago)   24h
kube-system   cilium-pc8hh                       1/1   Running   2 (57m ago)   24h

[root@node1 metallb]# crictl images | grep cilium
quay.io/cilium/cilium     v1.15.4   aebfd554d3483   209MB
quay.io/cilium/operator   v1.15.4   cf4b9cdd4ba07   36.1MB
```