Cilium Week 5 Notes

๐Ÿ”ง Lab Environment Setup

1. VirtualBox Compatibility Issue

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/Vagrantfile
vagrant up --provider=virtualbox

๐Ÿ“ข Error

vagrant up --provider=virtualbox
The provider 'virtualbox' that was requested to back the machine
'k8s-ctr' is reporting that it isn't usable on this system. The
reason is shown below:
Vagrant has detected that you have a version of VirtualBox installed
that is not supported by this version of Vagrant. Please install one of
the supported versions listed below to use Vagrant:
4.0, 4.1, 4.2, 4.3, 5.0, 5.1, 5.2, 6.0, 6.1, 7.0, 7.1
A Vagrant update may also be available that adds support for the version
you specified. Please check www.vagrantup.com/downloads.html to download
the latest version.
  • Vagrant supports VirtualBox versions only up to 7.1
  • The Arch Linux rolling update installed VirtualBox 7.2, which Vagrant cannot use (a quick check follows)
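
A quick way to confirm the mismatch before changing anything (a small sketch; both are standard CLI calls):

vboxmanage --version   # prints the installed VirtualBox version, e.g. 7.2.x
vagrant --version      # cross-check against the supported list in the error above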

2. Resolution: Switching to libvirt

(1) Set up the libvirt environment

sudo systemctl enable --now libvirtd
sudo usermod -a -G libvirt $USER
newgrp libvirt

(2) Install the vagrant-libvirt plugin

vagrant plugin install vagrant-libvirt

โœ… Output

Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Fetching xml-simple-1.1.9.gem
Fetching racc-1.8.1.gem
Building native extensions. This could take a while...
Fetching nokogiri-1.18.9-x86_64-linux-gnu.gem
Fetching ruby-libvirt-0.8.4.gem
Building native extensions. This could take a while...
Fetching formatador-1.2.0.gem
Fetching fog-core-2.6.0.gem
Fetching fog-xml-0.1.5.gem
Fetching fog-json-1.2.0.gem
Fetching fog-libvirt-0.13.2.gem
Fetching diffy-3.4.4.gem
Fetching vagrant-libvirt-0.12.2.gem
Installed the plugin 'vagrant-libvirt (0.12.2)'!

(3) ๋„คํŠธ์›Œํฌ ํŒจํ‚ค์ง€ ์„ค์น˜

1
2
sudo pacman -S dnsmasq bridge-utils iptables-nft
sudo systemctl restart libvirtd

(4) Enable the default libvirt network

sudo virsh net-start default
sudo virsh net-autostart default
sudo virsh net-list --all

โœ… Output

Network default started
Network default marked as autostarted

Name      State    Autostart   Persistent
--------------------------------------------
default   active   yes         yes

(5) Modify the Vagrantfile

# Variables
K8SV = '1.33.2-1.1' # Kubernetes Version
CONTAINERDV = '1.7.27-1' # Containerd Version
CILIUMV = '1.18.0' # Cilium CNI Version
N = 1 # max number of worker nodes

# Base Image
BOX_IMAGE = "bento/ubuntu-24.04"
BOX_VERSION = "202508.03.0"

Vagrant.configure("2") do |config|
  #-ControlPlane Node
  config.vm.define "k8s-ctr" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    
    subconfig.vm.provider "libvirt" do |libvirt|
      libvirt.cpus = 2
      libvirt.memory = 2560
    end
    
    subconfig.vm.host_name = "k8s-ctr"
    subconfig.vm.network "private_network", ip: "192.168.10.100"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60000, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/init_cfg.sh", args: [ K8SV, CONTAINERDV ]
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/k8s-ctr.sh", args: [ N, CILIUMV, K8SV ]
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/route-add1.sh"
  end

  #-Worker Nodes Subnet1
  (1..N).each do |i|
    config.vm.define "k8s-w#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      
      subconfig.vm.provider "libvirt" do |libvirt|
        libvirt.cpus = 2
        libvirt.memory = 1536
      end
      
      subconfig.vm.host_name = "k8s-w#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.10#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/init_cfg.sh", args: [ K8SV, CONTAINERDV]
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/k8s-w.sh"
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/route-add1.sh"
    end
  end

  #-Router Node
  config.vm.define "router" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    
    subconfig.vm.provider "libvirt" do |libvirt|
      libvirt.cpus = 1
      libvirt.memory = 768
    end
    
    subconfig.vm.host_name = "router"
    subconfig.vm.network "private_network", ip: "192.168.10.200"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60009, auto_correct: true, id: "ssh"
    subconfig.vm.network "private_network", ip: "192.168.20.200", auto_config: false
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/router.sh"
  end

  #-Worker Nodes Subnet2
  config.vm.define "k8s-w0" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    
    subconfig.vm.provider "libvirt" do |libvirt|
      libvirt.cpus = 2
      libvirt.memory = 1536
    end
    
    subconfig.vm.host_name = "k8s-w0"
    subconfig.vm.network "private_network", ip: "192.168.20.100"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/init_cfg.sh", args: [ K8SV, CONTAINERDV]
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/k8s-w.sh"
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/route-add2.sh"
  end
end
  • Removed the VirtualBox-specific settings (vb.customize, vb.name, vb.linked_clone)
  • Changed the provider blocks to libvirt (see the quick checks below)
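
Before booting, the edited Vagrantfile can be sanity-checked, and libvirt can be pinned as the default provider so the --provider flag becomes optional (standard Vagrant usage; a small sketch):

vagrant validate                          # syntax-check the modified Vagrantfile
export VAGRANT_DEFAULT_PROVIDER=libvirt   # make libvirt the default provider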

(6) Bring up the cluster

vagrant up --provider=libvirt

โœ… Output

...
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250815-120434-uhtz9s.sh
    k8s-ctr: >>>> K8S Controlplane config Start <<<<
    k8s-ctr: [TASK 1] Initial Kubernetes
    k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250815-120434-susr2r.sh
    k8s-w1: >>>> K8S Node config Start <<<<
    k8s-w1: [TASK 1] K8S Controlplane Join
    k8s-w0: >>>> Initial Config End <<<<
==> k8s-w0: Running provisioner: shell...
    k8s-w0: Running: /tmp/vagrant-shell20250815-120434-gmyjn3.sh
    k8s-w0: >>>> K8S Node config Start <<<<
    k8s-w0: [TASK 1] K8S Controlplane Join
    k8s-ctr: [TASK 2] Setting kube config file
    k8s-ctr: [TASK 3] Source the completion
    k8s-ctr: [TASK 4] Alias kubectl to k
    k8s-ctr: [TASK 5] Install Kubectx & Kubens
    k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
    k8s-ctr: [TASK 7] Install Cilium CNI
    k8s-ctr: [TASK 8] Install Cilium / Hubble CLI
    k8s-ctr: cilium
    k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250815-120434-45fmm8.sh
    k8s-w1: >>>> Route Add Config Start <<<<
    k8s-w1: >>>> Route Add Config End <<<<
    k8s-ctr: hubble
    k8s-ctr: [TASK 9] Remove node taint
    k8s-ctr: node/k8s-ctr untainted
    k8s-ctr: [TASK 10] local DNS with hosts file
    k8s-ctr: [TASK 11] Dynamically provisioning persistent local storage with Kubernetes
    k8s-ctr: [TASK 13] Install Metrics-server
    k8s-ctr: [TASK 14] Install k9s
    k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250815-120434-5mw3lc.sh
    k8s-ctr: >>>> Route Add Config Start <<<<
    k8s-ctr: >>>> Route Add Config End <<<<
    k8s-w0: >>>> K8S Node config End <<<<
==> k8s-w0: Running provisioner: shell...
    k8s-w0: Running: /tmp/vagrant-shell20250815-120434-dtpox6.sh
    k8s-w0: >>>> Route Add Config Start <<<<
    k8s-w0: >>>> Route Add Config End <<<<

3. Routing and BGP Lab

  • https://docs.frrouting.org/en/stable-10.4/about.html
  • FRR (FRRouting) is installed and a BGP-based network lab is built on top of it
  • bgpControlPlane.enabled=true is set
  • autoDirectNodeRoutes=false → automatic routes to other nodes' Pod CIDRs are disabled, even for nodes on the same network (an install sketch follows)
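
These two values are set when Cilium is installed by the provisioning script. A minimal sketch of what such an install invocation could look like (the actual k8s-ctr.sh is not reproduced here, so the flags beyond the two above are assumptions for a native-routing lab):

cilium install --version 1.18.0 \
  --set bgpControlPlane.enabled=true \
  --set autoDirectNodeRoutes=false \
  --set routingMode=native \
  --set ipv4NativeRoutingCIDR=172.20.0.0/16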

The netplan route appended on the subnet-1 nodes, k8s-ctr and k8s-w1 (route-add1.sh):

cat <<EOT>> /etc/netplan/50-vagrant.yaml
      routes:
      - to: 192.168.20.0/24
        via: 192.168.10.200
      # - to: 172.20.0.0/16
      #   via: 192.168.10.200
EOT

The mirror route on the subnet-2 node, k8s-w0 (route-add2.sh):

cat <<EOT>> /etc/netplan/50-vagrant.yaml
      routes:
      - to: 192.168.10.0/24
        via: 192.168.20.200
      # - to: 172.20.0.0/16
      #   via: 192.168.20.200
EOT
  • Only the minimal internal routes are added to each node
  • Routing information is then exchanged and advertised via BGP

The FRR configuration step from the router provisioning script (router.sh):

echo "[TASK 7] Configure FRR"
apt install frr -y >/dev/null 2>&1
sed -i "s/^bgpd=no/bgpd=yes/g" /etc/frr/daemons

NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
cat << EOF >> /etc/frr/frr.conf
!
router bgp 65000
  bgp router-id $NODEIP
  bgp graceful-restart
  no bgp ebgp-requires-policy
  bgp bestpath as-path multipath-relax
  maximum-paths 4
  network 10.10.1.0/24
EOF

systemctl daemon-reexec >/dev/null 2>&1
systemctl restart frr >/dev/null 2>&1
systemctl enable frr >/dev/null 2>&1
  • FRR is installed on the router VM with bgpd=yes applied, after which it operates as a BGP router

๐Ÿ–ฅ๏ธ [k8s-ctr] Basic Checks After Logging In

1. Verify SSH access from the control plane to each node (k8s-w0, k8s-w1, router)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in k8s-w0 k8s-w1 router ; do echo ">> node : $i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$i hostname; echo; done

โœ… Output

>> node : k8s-w0 <<
Warning: Permanently added 'k8s-w0' (ED25519) to the list of known hosts.
k8s-w0

>> node : k8s-w1 <<
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1

>> node : router <<
Warning: Permanently added 'router' (ED25519) to the list of known hosts.
router

2. Node Join Problem

(1) Symptom

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide

โœ… Output

NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   4m25s   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
k8s-w1    Ready    <none>          4m11s   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
  • The k8s-w0 node is missing from the kubectl get node output

(2) Root cause

vagrant ssh k8s-w0

root@k8s-w0:~# cat /root/kubeadm-join-worker-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "123456.1234567890123456"
    apiServerEndpoint: "192.168.10.100:6443"
    unsafeSkipCAVerification: true
nodeRegistration:
  criSocket: "unix:///run/containerd/containerd.sock"
  kubeletExtraArgs:
    - name: node-ip
      value: "192.168.20.100"
  • ์„ค์ •ํŒŒ์ผ์— ๋”๋ฏธํ† ํฐ์ด ํ•˜๋“œ์ฝ”๋”ฉ ๋˜์–ด์žˆ์–ด์„œ join ์‹คํŒจ

(3) Fix

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubeadm token create --ttl=72h
9llwjd.azxcrk0wd8lkrh45
  • Create a new token on the control plane (an alternative one-liner follows)
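
As an aside, stock kubeadm can also emit a ready-to-run join command in one step, which avoids editing the join config at all (a sketch; not what the lab script uses):

kubeadm token create --ttl 72h --print-join-command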
root@k8s-w0:~# sudo sed -i 's/123456.1234567890123456/9llwjd.azxcrk0wd8lkrh45/g' /root/kubeadm-join-worker-config.yaml
root@k8s-w0:~# cat /root/kubeadm-join-worker-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "9llwjd.azxcrk0wd8lkrh45"
    apiServerEndpoint: "192.168.10.100:6443"
    unsafeSkipCAVerification: true
nodeRegistration:
  criSocket: "unix:///run/containerd/containerd.sock"
  kubeletExtraArgs:
    - name: node-ip
      value: "192.168.20.100"
  • Replace the token value in the k8s-w0 join config file
root@k8s-w0:~# sudo kubeadm reset -f
root@k8s-w0:~# sudo kubeadm join --config="/root/kubeadm-join-worker-config.yaml"

โœ… Output

[preflight] Running pre-flight checks
W0815 22:41:14.831416    4224 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not perform cleanup of CNI plugin configuration,
network filtering rules and kubeconfig files.

For information on how to perform this cleanup manually, please see:
    https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
    
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002299091s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.    
  • Run kubeadm reset -f, then re-run kubeadm join

(4) Result

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -owide

โœ… Output

NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   10m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
k8s-w0    Ready    <none>          53s   v1.33.2   192.168.20.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
k8s-w1    Ready    <none>          10m   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
  • All nodes (k8s-ctr, k8s-w0, k8s-w1) are now joined and Ready

โš™๏ธ [k8s-ctr] Verify the Cilium Configuration

Cilium์—์„œ BGP Control Plane ๊ธฐ๋Šฅ ํ™œ์„ฑํ™” ์—ฌ๋ถ€ ํ™•์ธ

1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i bgp

โœ… Output

bgp-router-id-allocation-ip-pool                  
bgp-router-id-allocation-mode                     default
bgp-secrets-namespace                             kube-system
enable-bgp-control-plane                          true
enable-bgp-control-plane-status-report            true
  • enable-bgp-control-plane = true is confirmed (a Helm sketch for flipping it follows)
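
If the flag ever reported false, it could be flipped on a live installation via Helm; a sketch assuming the usual chart and release names:

helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set bgpControlPlane.enabled=true
kubectl -n kube-system rollout restart ds/cilium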

๐ŸŒ Network Information

1. Check the router's network interfaces

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -br -c -4 addr

โœ… Output

lo               UNKNOWN        127.0.0.1/8 
eth0             UP             192.168.121.180/24 metric 100 
eth1             UP             192.168.10.200/24 
eth2             UP             192.168.20.200/24 
loop1            UNKNOWN        10.10.1.200/24 
loop2            UNKNOWN        10.10.2.200/24

2. Check the control plane's network interface

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c -4 addr show dev eth1

โœ… Output

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s6
    altname ens6
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever

3. Check the worker nodes' network interfaces

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c -4 addr show dev eth1; echo; done

โœ… Output

>> node : k8s-w1 <<
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s6
    altname ens6
    inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever

>> node : k8s-w0 <<
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s6
    altname ens6
    inet 192.168.20.100/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
  • k8s-w1: 192.168.10.101/24 → same subnet as the control plane
  • k8s-w0: 192.168.20.100/24 → the second subnet, behind the router

4. Check the router's routing table

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -c route

โœ… Output

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100 
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200 
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200 
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100 
  • The router has directly connected routes for both 192.168.10.0/24 and 192.168.20.0/24

5. Check the control plane's routing table

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

โœ… Output

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.70 metric 100 
172.20.0.0/24 via 172.20.0.230 dev cilium_host proto kernel src 172.20.0.230 
172.20.0.230 dev cilium_host proto kernel scope link 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.70 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.70 metric 100 
  • With autoDirectNodeRoutes=false, Pod CIDR routes are not registered automatically
  • So even for nodes on the same subnet, the other node's Pod CIDR is absent from the routing table
  • The control plane (k8s-ctr) only carries its own PodCIDR (172.20.0.0/24)

6. Check the worker nodes' routing tables

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route; echo; done

โœ… Output

>> node : k8s-w1 <<
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.62 metric 100 
172.20.1.0/24 via 172.20.1.4 dev cilium_host proto kernel src 172.20.1.4 
172.20.1.4 dev cilium_host proto kernel scope link 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.62 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.62 metric 100 

>> node : k8s-w0 <<
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.122 metric 100 
172.20.2.0/24 via 172.20.2.89 dev cilium_host proto kernel src 172.20.2.89 
172.20.2.89 dev cilium_host proto kernel scope link 
192.168.10.0/24 via 192.168.20.200 dev eth1 proto static 
192.168.20.0/24 dev eth1 proto kernel scope link src 192.168.20.100 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.122 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.122 metric 100 
  • ๊ฐ ์›Œ์ปค ๋…ธ๋“œ(k8s-w1, k8s-w0)๋„ ์ž์‹ ์˜ PodCIDR๋งŒ ๋“ฑ๋ก๋˜์–ด ์žˆ์Œ
  • autoDirectNodeRoutes=false ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ๋…ธ๋“œ์˜ Pod CIDR์€ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€๋˜์ง€ ์•Š์Œ
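
A compact way to see this on all three nodes at once (a sketch reusing the lab's sshpass loop; assumes the provisioned hosts file resolves every node name):

for n in k8s-ctr k8s-w1 k8s-w0; do
  echo ">> node : $n <<"
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$n ip route | grep 172.20
  echo
done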

๐Ÿ“ฆ Deploy a Sample Application and Reproduce the Connectivity Problem

1. Deploy the sample application

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# Result
deployment.apps/webpod created
service/webpod created
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# Result
pod/curl-pod created

2. Check pod scheduling and the Service

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide

โœ… Output

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   3/3     3            3           2m23s   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/webpod   ClusterIP   10.96.54.159   <none>        80/TCP    2m23s   app=webpod

NAME               ENDPOINTS                                        AGE
endpoints/webpod   172.20.0.158:80,172.20.1.65:80,172.20.2.204:80   2m23s
  • The pods landed on three different nodes; note that the podAntiAffinity selector matches app=sample-app, a label none of these pods carry, so the spread here actually comes from the default scheduler rather than the affinity rule
  • Service: ClusterIP 10.96.54.159 assigned
  • Endpoints: the 3 pods sit in three different PodCIDRs (172.20.0.x, 172.20.1.x, 172.20.2.x), as the EndpointSlice view below also shows
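
Since the output warns that v1 Endpoints is deprecated in v1.33+, the same backing pods can be listed through the replacement API (standard EndpointSlice label; a sketch):

kubectl get endpointslices -l kubernetes.io/service-name=webpod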

3. Check the Cilium endpoints

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints

โœ… Output

NAME                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
curl-pod                  64126               ready            172.20.0.64    
webpod-697b545f57-fbtbj   38082               ready            172.20.0.158   
webpod-697b545f57-pxhvr   38082               ready            172.20.1.65    
webpod-697b545f57-rpblf   38082               ready            172.20.2.204 
  • curl-pod → 172.20.0.64 (on the control plane)
  • the 3 webpod pods → 172.20.0.158, 172.20.1.65, 172.20.2.204

4. Run the connectivity test

Send repeated requests from curl-pod to the webpod Service.

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo "---" ; sleep 1; done'

โœ… Output

---
---
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
---
...
  • --connect-timeout 1 aborts any request that gets no response within 1 second
  • Only the webpod pod deployed on k8s-ctr (172.20.0.158) ever responds
  • The webpod pods on the other nodes (k8s-w1, k8s-w0) never respond; hitting their pod IPs directly confirms this, as sketched below
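
Curling the remote pod IPs directly fails the same way, which pins the problem on inter-node pod routing rather than on the Service itself (a sketch using the IPs from the output above):

kubectl exec -it curl-pod -- curl -s --connect-timeout 1 172.20.1.65    # pod on k8s-w1
kubectl exec -it curl-pod -- curl -s --connect-timeout 1 172.20.2.204   # pod on k8s-w0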

๐Ÿ“ก Cilium BGP Control Plane

1. Check the FRR processes

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router
root@router:~# ss -tnlp | grep -iE 'zebra|bgpd'

โœ… Output

LISTEN 0      3          127.0.0.1:2601      0.0.0.0:*    users:(("zebra",pid=3827,fd=23))                                                     
LISTEN 0      3          127.0.0.1:2605      0.0.0.0:*    users:(("bgpd",pid=3832,fd=18))                                                      
LISTEN 0      4096         0.0.0.0:179       0.0.0.0:*    users:(("bgpd",pid=3832,fd=22))                                                      
LISTEN 0      4096            [::]:179          [::]:*    users:(("bgpd",pid=3832,fd=23))
  • bgpd is listening on TCP port 179
root@router:~# ps -ef |grep frr

โœ… Output

root        3814       1  0 22:30 ?        00:00:00 /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd
frr         3827       1  0 22:30 ?        00:00:00 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000000
frr         3832       1  0 22:30 ?        00:00:00 /usr/lib/frr/bgpd -d -F traditional -A 127.0.0.1
frr         3839       1  0 22:30 ?        00:00:00 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
root        4417    4399  0 23:02 pts/1    00:00:00 grep --color=auto frr
  • The watchfrr, zebra, bgpd, and staticd processes are all running

2. Check the FRR configuration (vtysh)

root@router:~# vtysh -c 'show running'

โœ… Output

Building configuration...

Current configuration:
!
frr version 8.4.4
frr defaults traditional
hostname router
log syslog informational
no ipv6 forwarding
service integrated-vtysh-config
!
router bgp 65000
 bgp router-id 192.168.10.200
 no bgp ebgp-requires-policy
 bgp graceful-restart
 bgp bestpath as-path multipath-relax
 !
 address-family ipv4 unicast
  network 10.10.1.0/24
  maximum-paths 4
 exit-address-family
exit
!
end
  • The router's BGP AS number is 65000
  • It is configured to advertise the loopback network 10.10.1.0/24
  • Multipath is allowed (maximum-paths 4)

3. Check the FRR config file

root@router:~# cat /etc/frr/frr.conf 

โœ… Output

# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational
!
router bgp 65000
  bgp router-id 192.168.10.200
  bgp graceful-restart
  no bgp ebgp-requires-policy
  bgp bestpath as-path multipath-relax
  maximum-paths 4
  network 10.10.1.0/24
  • The same configuration is visible in /etc/frr/frr.conf
  • network 10.10.1.0/24 is being advertised

4. Check the BGP state

root@router:~# vtysh -c 'show ip bgp summary'

โœ… Output

% No BGP neighbors found in VRF default
  • No neighbors yet
root@router:~# vtysh -c 'show ip bgp'

โœ… Output

BGP table version is 1, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.10.1.0/24     0.0.0.0                  0         32768 i

Displayed  1 routes and 1 total paths
  • ์ž์‹ ์ด ๋ณด์œ ํ•œ 10.10.1.0/24 ๋„คํŠธ์›Œํฌ๋งŒ ๊ด‘๊ณ  ์ค‘
  • ์™ธ๋ถ€ ๋…ธ๋“œ์™€ ์—ฐ๊ฒฐ๋˜์ง€ ์•Š์•„ BGP ํ…Œ์ด๋ธ”์€ ๋‹จ์ผ ์—”ํŠธ๋ฆฌ๋งŒ ์กด์žฌ

5. Check the router's network interfaces

root@router:~# ip -c addr

โœ… Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:97:4b:c4 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname ens5
    inet 192.168.121.25/24 metric 100 brd 192.168.121.255 scope global dynamic eth0
       valid_lft 2869sec preferred_lft 2869sec
    inet6 fe80::5054:ff:fe97:4bc4/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:24:2d:30 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    altname ens6
    inet 192.168.10.200/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe24:2d30/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:50:33:eb brd ff:ff:ff:ff:ff:ff
    altname enp0s7
    altname ens7
    inet 192.168.20.200/24 brd 192.168.20.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe50:33eb/64 scope link 
       valid_lft forever preferred_lft forever
5: loop1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 22:63:e6:d9:f6:95 brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.200/24 scope global loop1
       valid_lft forever preferred_lft forever
    inet6 fe80::2063:e6ff:fed9:f695/64 scope link 
       valid_lft forever preferred_lft forever
6: loop2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 6e:08:a4:e5:88:c0 brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.200/24 scope global loop2
       valid_lft forever preferred_lft forever
    inet6 fe80::6c08:a4ff:fee5:88c0/64 scope link 
       valid_lft forever preferred_lft forever
  • loop1: 10.10.1.200/24, loop2: 10.10.2.200/24

6. Check the router's routing table

root@router:~# ip -c route

โœ… Output

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100 
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200 
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200 
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100 

7. Add the BGP neighbor configuration

Register the Cilium nodes (k8s-ctr, k8s-w1, k8s-w0) as neighbors on the router.

root@router:~#  cat << EOF >> /etc/frr/frr.conf
  neighbor CILIUM peer-group
  neighbor CILIUM remote-as external
  neighbor 192.168.10.100 peer-group CILIUM
  neighbor 192.168.10.101 peer-group CILIUM
  neighbor 192.168.20.100 peer-group CILIUM 
EOF
  • The CILIUM peer-group keeps the neighbor configuration in one place
  • remote-as external accepts any peer AS different from the local one (the same change could also be applied live; see the vtysh sketch below)
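
For reference, the same neighbors could be added to the running daemon through vtysh instead of appending to frr.conf and restarting (a sketch; vtysh accepts commands on stdin):

vtysh <<'EOF'
configure terminal
router bgp 65000
 neighbor CILIUM peer-group
 neighbor CILIUM remote-as external
 neighbor 192.168.10.100 peer-group CILIUM
 neighbor 192.168.10.101 peer-group CILIUM
 neighbor 192.168.20.100 peer-group CILIUM
end
write memory
EOF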
root@router:~# cat /etc/frr/frr.conf

โœ… Output

# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational
!
router bgp 65000
  bgp router-id 192.168.10.200
  bgp graceful-restart
  no bgp ebgp-requires-policy
  bgp bestpath as-path multipath-relax
  maximum-paths 4
  network 10.10.1.0/24
  neighbor CILIUM peer-group
  neighbor CILIUM remote-as external
  neighbor 192.168.10.100 peer-group CILIUM
  neighbor 192.168.10.101 peer-group CILIUM
  neighbor 192.168.20.100 peer-group CILIUM 
  • Router AS: 65000; the Cilium nodes will peer from AS 65001

8. Restart FRR and check its status

root@router:~# systemctl daemon-reexec && systemctl restart frr
root@router:~# systemctl status frr --no-pager --full

โœ… Output

โ— frr.service - FRRouting
     Loaded: loaded (/usr/lib/systemd/system/frr.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-08-15 23:20:38 KST; 17s ago
       Docs: https://frrouting.readthedocs.io/en/latest/setup.html
    Process: 4540 ExecStart=/usr/lib/frr/frrinit.sh start (code=exited, status=0/SUCCESS)
   Main PID: 4550 (watchfrr)
     Status: "FRR Operational"
      Tasks: 13 (limit: 757)
     Memory: 19.5M (peak: 27.4M)
        CPU: 321ms
     CGroup: /system.slice/frr.service
             โ”œโ”€4550 /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd
             โ”œโ”€4563 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000000
             โ”œโ”€4568 /usr/lib/frr/bgpd -d -F traditional -A 127.0.0.1
             โ””โ”€4575 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1

Aug 15 23:20:38 router watchfrr[4550]: [YFT0P-5Q5YX] Forked background command [pid 4551]: /usr/lib/frr/watchfrr.sh restart all
Aug 15 23:20:38 router zebra[4563]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router bgpd[4568]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router staticd[4575]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 15 23:20:38 router frrinit.sh[4540]:  * Started watchfrr
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 15 23:20:38 router systemd[1]: Started frr.service - FRRouting.

9. Monitor the FRR logs

root@router:~# journalctl -u frr -f

โœ… Output

Aug 15 23:20:38 router watchfrr[4550]: [YFT0P-5Q5YX] Forked background command [pid 4551]: /usr/lib/frr/watchfrr.sh restart all
Aug 15 23:20:38 router zebra[4563]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router bgpd[4568]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router staticd[4575]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 15 23:20:38 router frrinit.sh[4540]:  * Started watchfrr
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 15 23:20:38 router systemd[1]: Started frr.service - FRRouting.

๐Ÿ›ฐ๏ธ Configuring BGP in Cilium

1. Label the nodes that should run BGP

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl label nodes k8s-ctr k8s-w0 k8s-w1 enable-bgp=true

# Result
node/k8s-ctr labeled
node/k8s-w0 labeled
node/k8s-w1 labeled
  • Nodes that should run Cilium BGP get the enable-bgp=true label
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -l enable-bgp=true

โœ… Output

NAME      STATUS   ROLES           AGE   VERSION
k8s-ctr   Ready    control-plane   53m   v1.33.2
k8s-w0    Ready    <none>          43m   v1.33.2
k8s-w1    Ready    <none>          53m   v1.33.2
  • All 3 nodes are labeled

2. Create the Cilium BGP resources

Three CRDs define the BGP behavior in Cilium.

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: "PodCIDR"
---
apiVersion: cilium.io/v2
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  ebgpMultihop: 2
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          advertise: "bgp"
---
apiVersion: cilium.io/v2
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      "enable-bgp": "true"
  bgpInstances:
  - name: "instance-65001"
    localASN: 65001
    peers:
    - name: "tor-switch"
      peerASN: 65000
      peerAddress: 192.168.10.200  # router ip address
      peerConfigRef:
        name: "cilium-peer"
EOF

ciliumbgpadvertisement.cilium.io/bgp-advertisements created
ciliumbgppeerconfig.cilium.io/cilium-peer created
ciliumbgpclusterconfig.cilium.io/cilium-bgp created
  • CiliumBGPAdvertisement
    • advertisementType: PodCIDR → advertise each node's PodCIDR over BGP
  • CiliumBGPPeerConfig
    • advertisements.matchLabels: advertise=bgp selects which advertisements apply
  • CiliumBGPClusterConfig
    • Only nodes labeled enable-bgp=true take part in BGP
    • localASN: 65001, peerASN: 65000
    • peerAddress: 192.168.10.200 (the router's IP)
    • The peer settings are pulled in via the cilium-peer reference

Right after the resources are applied, the journalctl -u frr -f session on the router shows all three peers coming up and delivering End-of-RIB:

Aug 15 23:20:38 router watchfrr[4550]: [YFT0P-5Q5YX] Forked background command [pid 4551]: /usr/lib/frr/watchfrr.sh restart all
Aug 15 23:20:38 router zebra[4563]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router bgpd[4568]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router staticd[4575]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 15 23:20:38 router frrinit.sh[4540]:  * Started watchfrr
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 15 23:20:38 router systemd[1]: Started frr.service - FRRouting.
Aug 15 23:27:15 router bgpd[4568]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.20.100 in vrf default
Aug 15 23:27:15 router bgpd[4568]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.100 in vrf default
Aug 15 23:27:15 router bgpd[4568]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.101 in vrf default

๐Ÿ” Connectivity Checks

1. Verify the BGP session

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep 179
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ss -tnp | grep 179
ESTAB 0      0               192.168.10.100:35791          192.168.10.200:179   users:(("cilium-agent",pid=5170,fd=50))              
ESTAB 0      0      [::ffff:192.168.10.100]:6443    [::ffff:172.20.0.179]:46928 users:(("kube-apiserver",pid=3868,fd=105))   
  • Cilium์€ BGP Listener๊ฐ€ ์•„๋‹ˆ๋ผ Initiator๋กœ ๋™์ž‘ํ•˜์—ฌ ๋„คํŠธ์›Œํฌ ์žฅ๋น„(FRR ๋ผ์šฐํ„ฐ)์™€ TCP 179 ํฌํŠธ๋กœ ์—ฐ๊ฒฐ์„ ์ˆ˜๋ฆฝํ•จ
  • ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ ๋…ธ๋“œ(192.168.10.100:35791)๊ฐ€ ๋ผ์šฐํ„ฐ(192.168.10.200:179)์™€ ์„ธ์…˜์„ ๋งบ์€ ์ƒํƒœ ํ™•์ธ๋จ

2. Check the Cilium BGP peer status

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp peers
Node      Local AS   Peer AS   Peer Address     Session State   Uptime   Family         Received   Advertised
k8s-ctr   65001      65000     192.168.10.200   established     10m59s   ipv4/unicast   4          2    
k8s-w0    65001      65000     192.168.10.200   established     10m59s   ipv4/unicast   4          2    
k8s-w1    65001      65000     192.168.10.200   established     10m59s   ipv4/unicast   4          2    
  • All 3 nodes (k8s-ctr, k8s-w0, k8s-w1) have established sessions with the router
  • Local ASN 65001 and peer ASN 65000 match the configuration

3. Check the PodCIDR advertisements

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes available ipv4 unicast

โœ… Output

Node      VRouter   Prefix          NextHop   Age      Attrs
k8s-ctr   65001     172.20.0.0/24   0.0.0.0   12m43s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w0    65001     172.20.2.0/24   0.0.0.0   12m42s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w1    65001     172.20.1.0/24   0.0.0.0   12m43s   [{Origin: i} {Nexthop: 0.0.0.0}]  
  • ๊ฐ ๋…ธ๋“œ์˜ PodCIDR ๊ด‘๊ณ  ํ™•์ธ๋จ

4. Check the Cilium BGP resources

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpadvertisements,ciliumbgppeerconfigs,ciliumbgpclusterconfigs

โœ… Output

NAME                                                  AGE
ciliumbgpadvertisement.cilium.io/bgp-advertisements   13m

NAME                                        AGE
ciliumbgppeerconfig.cilium.io/cilium-peer   13m

NAME                                          AGE
ciliumbgpclusterconfig.cilium.io/cilium-bgp   13m

๊ฐ ๋…ธ๋“œ๋ณ„ CiliumBGPNodeConfig ๋ฆฌ์†Œ์Šค ํ™•์ธ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq

โœ… Output

{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-ctr",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "7578",
        "uid": "a72d5068-f106-4b37-a0a7-2ad0e72e8f9d"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "establishedTime": "2025-08-15T14:27:14Z",
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peeringState": "established",
                "routeCount": [
                  {
                    "advertised": 2,
                    "afi": "ipv4",
                    "received": 1,
                    "safi": "unicast"
                  }
                ],
                "timers": {
                  "appliedHoldTimeSeconds": 9,
                  "appliedKeepaliveSeconds": 3
                }
              }
            ]
          }
        ]
      }
    },
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-w0",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "7575",
        "uid": "395cc9e2-0f3e-47f3-bce0-169110494292"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "establishedTime": "2025-08-15T14:27:14Z",
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peeringState": "established",
                "routeCount": [
                  {
                    "advertised": 2,
                    "afi": "ipv4",
                    "received": 1,
                    "safi": "unicast"
                  }
                ],
                "timers": {
                  "appliedHoldTimeSeconds": 9,
                  "appliedKeepaliveSeconds": 3
                }
              }
            ]
          }
        ]
      }
    },
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-w1",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "7581",
        "uid": "d98cdab1-5d96-4ecf-ae47-1cc3c80a3071"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "establishedTime": "2025-08-15T14:27:14Z",
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peeringState": "established",
                "routeCount": [
                  {
                    "advertised": 2,
                    "afi": "ipv4",
                    "received": 1,
                    "safi": "unicast"
                  }
                ],
                "timers": {
                  "appliedHoldTimeSeconds": 9,
                  "appliedKeepaliveSeconds": 3
                }
              }
            ]
          }
        ]
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}
  • Local ASN 65001
  • Peer address 192.168.10.200, peer ASN 65000
  • Peering state: established
  • Route count: 2 advertised, 1 received

5. Check the router's kernel routing table

root@router:~# ip -c route | grep bgp
172.20.0.0/24 nhid 29 via 192.168.10.100 dev eth1 proto bgp metric 20 
172.20.1.0/24 nhid 30 via 192.168.10.101 dev eth1 proto bgp metric 20 
172.20.2.0/24 nhid 28 via 192.168.20.100 dev eth2 proto bgp metric 20 
  • The Pod CIDR routes learned over BGP are installed in the FRR router's kernel routing table
  • Traffic for a given Pod range is therefore forwarded to the node that owns it

6. Check the BGP neighbor relationships

root@router:~# vtysh -c 'show ip bgp summary'

โœ… Output

IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 4
RIB entries 7, using 1344 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
192.168.10.100  4      65001       353       356        0    0    0 00:17:29            1        4 N/A
192.168.10.101  4      65001       353       356        0    0    0 00:17:29            1        4 N/A
192.168.20.100  4      65001       353       356        0    0    0 00:17:28            1        4 N/A

Total number of neighbors 3
  • The FRR router (AS 65000) has BGP neighbors with the 3 Cilium nodes (AS 65001)
  • All neighbors are in the Established state (what the router advertises back can be inspected per neighbor; see the sketch below)
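
To see exactly which prefixes the router sends back to one of the Cilium nodes, FRR can dump the per-neighbor view (standard vtysh command; a sketch):

vtysh -c 'show ip bgp neighbors 192.168.10.100 advertised-routes'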

7. Check the advertised BGP routes

root@router:~# vtysh -c 'show ip bgp'
BGP table version is 4, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.10.1.0/24     0.0.0.0                  0         32768 i
*> 172.20.0.0/24    192.168.10.100                         0 65001 i
*> 172.20.1.0/24    192.168.10.101                         0 65001 i
*> 172.20.2.0/24    192.168.20.100                         0 65001 i

Displayed  4 routes and 4 total paths
  • All three Pod CIDRs (172.20.0.0/24, 172.20.1.0/24, 172.20.2.0/24) were received
  • Each next hop is the internal IP of the owning node, confirming the advertisements propagated correctly over BGP

8. Still no pod-to-pod connectivity after BGP peering

Hostname: webpod-697b545f57-fbtbj
---
---
---
---
---
---
---
---
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
---
---
...
  • Cilium์˜ ํŠน์„ฑ์ƒ BGP ์„ธ์…˜์„ ๋งบ๋”๋ผ๋„ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ปค๋„ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์— ๊ฒฝ๋กœ๋ฅผ ์ฃผ์ž…ํ•˜์ง€ ์•Š์Œ (disable-fib ์ƒํƒœ)

๐Ÿ”€ Observing the BGP Message Exchange

1. Run tcpdump on the control plane

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 tcp port 179 -w /tmp/bgp.pcap

# Result
tcpdump: listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes

2. Restart FRR and watch the session re-establish

root@router:~# systemctl restart frr && journalctl -u frr -f

โœ… Output

Aug 16 00:00:33 router watchfrr.sh[4856]: Cannot stop zebra: pid file not found
Aug 16 00:00:33 router zebra[4858]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 16 00:00:33 router bgpd[4863]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 16 00:00:33 router staticd[4870]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 16 00:00:33 router frrinit.sh[4835]:  * Started watchfrr
Aug 16 00:00:33 router systemd[1]: Started frr.service - FRRouting.
Aug 16 00:00:33 router watchfrr[4845]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 16 00:00:33 router watchfrr[4845]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 16 00:00:33 router watchfrr[4845]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 16 00:00:33 router watchfrr[4845]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 16 00:00:39 router bgpd[4863]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.100 in vrf default
Aug 16 00:00:39 router bgpd[4863]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.101 in vrf default
Aug 16 00:00:40 router bgpd[4863]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.20.100 in vrf default

3. Inspect the capture with Termshark

(1) Analyze the captured BGP packets

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# termshark -r /tmp/bgp.pcap

โœ… Output (interactive termshark view; screenshot not preserved)

(2) The control plane connection dropped during packet analysis

vagrant halt k8s-ctr --force
  • The control plane VM was force-stopped
sudo virsh start 5w_k8s-ctr

# Result
Domain '5w_k8s-ctr' started
  • The control plane VM was started again via virsh

๐Ÿ› ๏ธ Connectivity After the Fix

1. Cilium BGP route-installation behavior

  • Cilium์˜ BGP๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์™ธ๋ถ€ ๊ฒฝ๋กœ๋ฅผ ์ปค๋„ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์— ์ฃผ์ž…ํ•˜์ง€ ์•Š์Œ
  • disable-fib ์˜ต์…˜์œผ๋กœ ๋นŒ๋“œ๋˜์–ด ์žˆ์Œ โ†’ ์ปค๋„ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”(FIB)์— BGP ๊ฒฝ๋กœ๋ฅผ ๋ฐ˜์˜ํ•˜์ง€ ์•Š๊ฒ ๋‹ค๋Š” ์˜๋ฏธ
  • ๋”ฐ๋ผ์„œ BGP ํ”ผ์–ด๋กœ๋ถ€ํ„ฐ ๊ฒฝ๋กœ๋Š” ์ˆ˜์‹ ํ–ˆ์ง€๋งŒ, ip route ์ถœ๋ ฅ์—๋Š” ํ•ด๋‹น CIDR ๋Œ€์—ญ์ด ๋‚˜ํƒ€๋‚˜์ง€ ์•Š์Œ

2. Compare the kernel and BGP routes

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

โœ… Output

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.70 metric 100 
172.20.0.0/24 via 172.20.0.230 dev cilium_host proto kernel src 172.20.0.230 
172.20.0.230 dev cilium_host proto kernel scope link 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.70 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.70 metric 100
  • The 172.20.1.0/24 and 172.20.2.0/24 ranges are missing from the kernel table
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes

โœ… Output

(Defaulting to `available ipv4 unicast` routes, please see help for more options)

Node      VRouter   Prefix          NextHop   Age      Attrs
k8s-ctr   65001     172.20.0.0/24   0.0.0.0   9m26s    [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w0    65001     172.20.2.0/24   0.0.0.0   57m18s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w1    65001     172.20.1.0/24   0.0.0.0   57m18s   [{Origin: i} {Nexthop: 0.0.0.0}] 
  • ๊ฐ ๋…ธ๋“œ CIDR ๊ฒฝ๋กœ๊ฐ€ BGP๋กœ ์ •์ƒ ์ˆ˜์‹ ๋œ ์ƒํƒœ ํ™•์ธ ๊ฐ€๋Šฅ

In short, BGP reception is working, but because the routes are not installed in the kernel, traffic still fails.

The curl loop is accordingly still failing for the pods on remote nodes:

Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
...

3. Route the pod range via eth1 explicitly

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip route add 172.20.0.0/16 via 192.168.10.200
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo ip route add 172.20.0.0/16 via 192.168.10.200
sshpass -p 'vagrant' ssh vagrant@k8s-w0 sudo ip route add 172.20.0.0/16 via 192.168.20.200
  • eth0 โ†’ ์ธํ„ฐ๋„ท ํ†ต์‹  ์ „์šฉ
  • eth1 โ†’ Kubernetes ํŒŒ๋“œ ํ†ต์‹  ์ „์šฉ
  • ๋”ฐ๋ผ์„œ ํŒŒ๋“œ ๋Œ€์—ญ(172.20.0.0/16)์„ eth1์„ ํ†ตํ•ด ๋ผ์šฐํŒ…ํ•˜๋„๋ก ๋ช…์‹œ์  ๊ฒฝ๋กœ ์ถ”๊ฐ€
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-pxhvr
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-pxhvr
---
...
  • ๊ฒฝ๋กœ ์ถ”๊ฐ€ ํ›„ ํŒŒ๋“œ ๊ฐ„ ํ†ต์‹  ์ •์ƒ ๋™์ž‘ ํ™•์ธ
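
Note that routes added with `ip route add` do not survive a reboot. A minimal persistence sketch for the Ubuntu guests, assuming netplan manages eth1 (the file name 60-pod-routes.yaml is an arbitrary choice; on k8s-w0 the gateway would be 192.168.20.200 instead):

cat << EOF | sudo tee /etc/netplan/60-pod-routes.yaml
network:
  version: 2
  ethernets:
    eth1:
      routes:
        - to: 172.20.0.0/16
          via: 192.168.10.200
EOF
sudo netplan apply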

โธ๏ธ ๋…ธ๋“œ(k8s-w0) ์œ ์ง€๋ณด์ˆ˜ ์ƒํ™ฉ

1. ๋…ธ๋“œ ์œ ์ง€๋ณด์ˆ˜ ์‹œ์ž‘ (k8s-w0 Drain)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl drain k8s-w0 --ignore-daemonsets

# ๊ฒฐ๊ณผ
node/k8s-w0 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/cilium-envoy-8tgrn, kube-system/cilium-wszbk, kube-system/kube-proxy-xhjtq
evicting pod default/webpod-697b545f57-rpblf
pod/webpod-697b545f57-rpblf evicted
node/k8s-w0 drained
  • kubectl drain ๋ช…๋ น์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ k8s-w0 ๋…ธ๋“œ์˜ ํŒŒ๋“œ๋ฅผ ์•ˆ์ „ํ•˜๊ฒŒ ์ œ๊ฑฐํ•˜๊ณ  ์Šค์ผ€์ค„๋ง์„ ๋ง‰์Œ

2. BGP ๊ธฐ๋Šฅ ๋น„ํ™œ์„ฑํ™” (enable-bgp=false)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl label nodes k8s-w0 enable-bgp=false --overwrite

# ๊ฒฐ๊ณผ
node/k8s-w0 labeled
  • ์œ ์ง€๋ณด์ˆ˜๋ฅผ ์œ„ํ•ด k8s-w0 ๋…ธ๋“œ์˜ enable-bgp ๋ผ๋ฒจ์„ false๋กœ ๋ณ€๊ฒฝ
  • ์ด๋กœ ์ธํ•ด Cilium์˜ BGP ๋ฐ๋ชฌ์ด ํ•ด๋‹น ๋…ธ๋“œ์—์„œ๋Š” ๋™์ž‘ํ•˜์ง€ ์•Š๊ฒŒ ๋จ

3. ๋…ธ๋“œ ์ƒํƒœ ํ™•์ธ (SchedulingDisabled)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node

โœ…ย ์ถœ๋ ฅ

NAME      STATUS                     ROLES           AGE    VERSION
k8s-ctr   Ready                      control-plane   124m   v1.33.2
k8s-w0    Ready,SchedulingDisabled   <none>          115m   v1.33.2
k8s-w1    Ready                      <none>          124m   v1.33.2
  • k8s-w0 ๋…ธ๋“œ ์ƒํƒœ๊ฐ€ Ready,SchedulingDisabled ๋กœ ํ‘œ์‹œ๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs
NAME      AGE
k8s-ctr   70m
k8s-w1    70m
  • k8s-w0 ๋…ธ๋“œ๋Š” BGP NodeConfig ๋ชฉ๋ก์—์„œ๋„ ์ œ์™ธ๋จ

4. BGP ๋ผ์šฐํŠธ/ํ”ผ์–ด ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes

โœ…ย ์ถœ๋ ฅ

(Defaulting to `available ipv4 unicast` routes, please see help for more options)

Node      VRouter   Prefix          NextHop   Age        Attrs
k8s-ctr   65001     172.20.0.0/24   0.0.0.0   23m21s     [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w1    65001     172.20.1.0/24   0.0.0.0   1h11m13s   [{Origin: i} {Nexthop: 0.0.0.0}]  
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp peers

โœ…ย ์ถœ๋ ฅ

Node      Local AS   Peer AS   Peer Address     Session State   Uptime   Family         Received   Advertised
k8s-ctr   65001      65000     192.168.10.200   established     15m58s   ipv4/unicast   3          2    
k8s-w1    65001      65000     192.168.10.200   established     15m58s   ipv4/unicast   3          2  
  • ์ถœ๋ ฅ์—์„œ k8s-w0 ๋ผ์šฐํŠธ ์ •๋ณด๊ฐ€ ์‚ฌ๋ผ์ง
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp summary'"

โœ…ย ์ถœ๋ ฅ

IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 5
RIB entries 5, using 960 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
192.168.10.100  4      65001       400       404        0    0    0 00:19:48            1        3 N/A
192.168.10.101  4      65001       400       404        0    0    0 00:19:49            1        3 N/A
192.168.20.100  4      65001       266       267        0    0    0 00:06:48       Active        0 N/A

Total number of neighbors 3
  • FRR ๋ผ์šฐํ„ฐ์—์„œ๋„ ์ƒํƒœ๊ฐ€ Active ๋กœ ํ‘œ์‹œ๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip route bgp'"

โœ…ย ์ถœ๋ ฅ

Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

B>* 172.20.0.0/24 [20/0] via 192.168.10.100, eth1, weight 1, 00:18:28
B>* 172.20.1.0/24 [20/0] via 192.168.10.101, eth1, weight 1, 00:18:28
  • ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์—์„œ๋„ k8s-w0์˜ CIDR์ด ์ œ๊ฑฐ๋จ

5. ๋…ธ๋“œ ๋ณต๊ตฌ (enable-bgp=true & uncordon)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl label nodes k8s-w0 enable-bgp=true --overwrite

# ๊ฒฐ๊ณผ
node/k8s-w0 labeled
  • ์œ ์ง€๋ณด์ˆ˜๊ฐ€ ๋๋‚œ ํ›„ k8s-w0 ๋ผ๋ฒจ์„ ๋‹ค์‹œ enable-bgp=true ๋กœ ์›๋ณต
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl uncordon k8s-w0

# ๊ฒฐ๊ณผ
node/k8s-w0 uncordoned
  • kubectl uncordon ๋ช…๋ น์–ด๋กœ ์Šค์ผ€์ค„๋ง์„ ํ—ˆ์šฉํ•˜์—ฌ ์ •์ƒ ์ƒํƒœ๋กœ ๋ณต๊ตฌ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node
kubectl get ciliumbgpnodeconfigs
cilium bgp routes
cilium bgp peers

โœ…ย ์ถœ๋ ฅ

NAME      STATUS   ROLES           AGE    VERSION
k8s-ctr   Ready    control-plane   132m   v1.33.2
k8s-w0    Ready    <none>          123m   v1.33.2
k8s-w1    Ready    <none>          132m   v1.33.2

NAME      AGE
k8s-ctr   77m
k8s-w0    47s
k8s-w1    77m

(Defaulting to `available ipv4 unicast` routes, please see help for more options)

Node      VRouter   Prefix          NextHop   Age        Attrs
k8s-ctr   65001     172.20.0.0/24   0.0.0.0   29m55s     [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w0    65001     172.20.2.0/24   0.0.0.0   48s        [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w1    65001     172.20.1.0/24   0.0.0.0   1h17m47s   [{Origin: i} {Nexthop: 0.0.0.0}]   

Node      Local AS   Peer AS   Peer Address     Session State   Uptime   Family         Received   Advertised
k8s-ctr   65001      65000     192.168.10.200   established     22m2s    ipv4/unicast   4          2    
k8s-w0    65001      65000     192.168.10.200   established     46s      ipv4/unicast   4          2    
k8s-w1    65001      65000     192.168.10.200   established     22m2s    ipv4/unicast   4          2

6. ๋…ธ๋“œ๋ณ„ ํŒŒ๋“œ ๋ถ„๋ฐฐ ์‹คํ–‰

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

NAME                      READY   STATUS    RESTARTS      AGE    IP            NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   1 (31m ago)   117m   172.20.0.35   k8s-ctr   <none>           <none>
webpod-697b545f57-fbtbj   1/1     Running   1 (31m ago)   119m   172.20.0.6    k8s-ctr   <none>           <none>
webpod-697b545f57-lzxbc   1/1     Running   0             10m    172.20.1.98   k8s-w1    <none>           <none>
webpod-697b545f57-pxhvr   1/1     Running   0             119m   172.20.1.65   k8s-w1    <none>           <none>
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 0
# ๊ฒฐ๊ณผ
deployment.apps/webpod scaled

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 3
# ๊ฒฐ๊ณผ
deployment.apps/webpod scaled
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

NAME                      READY   STATUS    RESTARTS      AGE    IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   1 (31m ago)   117m   172.20.0.35    k8s-ctr   <none>           <none>
webpod-697b545f57-5twrq   1/1     Running   0             7s     172.20.1.119   k8s-w1    <none>           <none>
webpod-697b545f57-cp7xq   1/1     Running   0             7s     172.20.0.15    k8s-ctr   <none>           <none>
webpod-697b545f57-xtmdx   1/1     Running   0             7s     172.20.2.35    k8s-w0    <none>           <none>
  • kubectl scale ๋ช…๋ น์–ด๋กœ webpod ๋ฐฐํฌ๋ฅผ ์ค„์˜€๋‹ค๊ฐ€ ๋‹ค์‹œ ํ™•์žฅํ•˜์—ฌ ํŒŒ๋“œ๊ฐ€ k8s-w0์—๋„ ์ •์ƒ ๋ฐฐ์น˜๋จ์„ ํ™•์ธ

๐Ÿšซ CRD ์ƒํƒœ ๋ณด๊ณ  ๋น„ํ™œ์„ฑํ™”

  • https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-operation/#disabling-crd-status-report
  • Cilium BGP๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ CiliumBGPNodeConfig ๋ฆฌ์†Œ์Šค์˜ status ํ•„๋“œ์— ํ”ผ์–ด ์ƒํƒœ, ์„ธ์…˜, ๋ผ์šฐํŠธ ์ •๋ณด๋ฅผ ๊ธฐ๋กํ•จ
  • ๊ทธ๋Ÿฌ๋‚˜ ๋Œ€๊ทœ๋ชจ ํด๋Ÿฌ์Šคํ„ฐ์—์„œ๋Š” ์ƒํƒœ ๋ณด๊ณ ๊ฐ€ ๋นˆ๋ฒˆํžˆ ๋ฐœ์ƒํ•ด Kubernetes API ์„œ๋ฒ„์— ๋ถ€ํ•˜๋ฅผ ์ค„ ์ˆ˜ ์žˆ์Œ
  • ๋”ฐ๋ผ์„œ ๊ณต์‹ ๋ฌธ์„œ์—์„œ๋„ bgp status reporting off ์˜ต์…˜ ์‚ฌ์šฉ์„ ๊ถŒ์žฅํ•จ

1. ํ˜„์žฌ BGP ์ƒํƒœ ํ™•์ธ (Status Report ํ™œ์„ฑํ™” ์ƒํƒœ)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq

โœ…ย ์ถœ๋ ฅ

{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-ctr",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "15080",
        "uid": "a72d5068-f106-4b37-a0a7-2ad0e72e8f9d"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "establishedTime": "2025-08-15T15:22:57Z",
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peeringState": "established",
                "routeCount": [
                  {
                    "advertised": 2,
                    "afi": "ipv4",
                    "received": 3,
                    "safi": "unicast"
                  }
                ],
                "timers": {
                  "appliedHoldTimeSeconds": 9,
                  "appliedKeepaliveSeconds": 3
                }
              }
            ]
          }
        ]
      }
    },
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T15:44:11Z",
        "generation": 1,
        "name": "k8s-w0",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "16068",
        "uid": "fd222576-cc33-4f4f-b7cd-c8157fbc8009"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "establishedTime": "2025-08-15T15:44:13Z",
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peeringState": "established",
                "routeCount": [
                  {
                    "advertised": 2,
                    "afi": "ipv4",
                    "received": 3,
                    "safi": "unicast"
                  }
                ],
                "timers": {
                  "appliedHoldTimeSeconds": 9,
                  "appliedKeepaliveSeconds": 3
                }
              }
            ]
          }
        ]
      }
    },
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-w1",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "15076",
        "uid": "d98cdab1-5d96-4ecf-ae47-1cc3c80a3071"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "establishedTime": "2025-08-15T15:22:57Z",
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peeringState": "established",
                "routeCount": [
                  {
                    "advertised": 2,
                    "afi": "ipv4",
                    "received": 3,
                    "safi": "unicast"
                  }
                ],
                "timers": {
                  "appliedHoldTimeSeconds": 9,
                  "appliedKeepaliveSeconds": 3
                }
              }
            ]
          }
        ]
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}
  • ๊ฐ ๋…ธ๋“œ(k8s-ctr, k8s-w0, k8s-w1)์˜ status์— ํ”ผ์–ด ์—ฐ๊ฒฐ ์ƒํƒœ (established), Advertised / Received Routes, Keepalive / HoldTime ๊ฐ’ ๋“ฑ์˜ ์ •๋ณด๊ฐ€ ์ƒ์„ธํžˆ ๊ธฐ๋ก๋จ

2. Helm ์—…๊ทธ๋ ˆ์ด๋“œ๋กœ ์ƒํƒœ ๋ณด๊ณ  ๋น„ํ™œ์„ฑํ™”

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --version 1.18.0 --namespace kube-system --reuse-values \
  --set bgpControlPlane.statusReport.enabled=false
  
# ๊ฒฐ๊ณผ  
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 00:51:38 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.18.0.

For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
  • ์„ฑ๊ณต์ ์œผ๋กœ cilium ์ฐจํŠธ๊ฐ€ ์žฌ๋ฐฐํฌ๋˜๋ฉฐ, ์ดํ›„ BGP ์ƒํƒœ ๋ณด๊ณ ๊ฐ€ ์ค‘๋‹จ๋จ

3. Cilium DaemonSet ๋กค๋ง ์žฌ์‹œ์ž‘

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium

# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted

4. ๊ฒฐ๊ณผ ํ™•์ธ (Status Report ์ œ๊ฑฐ๋จ)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq

โœ…ย ์ถœ๋ ฅ

{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-ctr",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "17327",
        "uid": "a72d5068-f106-4b37-a0a7-2ad0e72e8f9d"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {}
    },
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T15:44:11Z",
        "generation": 1,
        "name": "k8s-w0",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "17231",
        "uid": "fd222576-cc33-4f4f-b7cd-c8157fbc8009"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {}
    },
    {
      "apiVersion": "cilium.io/v2",
      "kind": "CiliumBGPNodeConfig",
      "metadata": {
        "creationTimestamp": "2025-08-15T14:27:12Z",
        "generation": 1,
        "name": "k8s-w1",
        "ownerReferences": [
          {
            "apiVersion": "cilium.io/v2",
            "controller": true,
            "kind": "CiliumBGPClusterConfig",
            "name": "cilium-bgp",
            "uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
          }
        ],
        "resourceVersion": "17229",
        "uid": "d98cdab1-5d96-4ecf-ae47-1cc3c80a3071"
      },
      "spec": {
        "bgpInstances": [
          {
            "localASN": 65001,
            "name": "instance-65001",
            "peers": [
              {
                "name": "tor-switch",
                "peerASN": 65000,
                "peerAddress": "192.168.10.200",
                "peerConfigRef": {
                  "name": "cilium-peer"
                }
              }
            ]
          }
        ]
      },
      "status": {}
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}
  • status ํ•„๋“œ๊ฐ€ ๋น„์–ด ์žˆ์Œ (status: {})
  • ๋” ์ด์ƒ CRD๋ฅผ ํ†ตํ•ด BGP ์ƒํƒœ๊ฐ€ ๊ธฐ๋ก๋˜์ง€ ์•Š์Œ โ†’ API ์„œ๋ฒ„ ๋ถ€ํ•˜ ๊ฐ์†Œ ํšจ๊ณผ

๐Ÿท๏ธ LoadBalancer External IP๋ฅผ BGP๋กœ ๊ด‘๊ณ 

  • https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-v2/#service-virtual-ips
  • Kubernetes Service ํƒ€์ž…์„ LoadBalancer๋กœ ๋ณ€๊ฒฝํ•˜์—ฌ External IP๋ฅผ ํ• ๋‹น๋ฐ›์Œ
  • L2 Announcement์™€ ๋‹ฌ๋ฆฌ BGP๋Š” ๋ผ์šฐํŒ… ๊ธฐ๋ฐ˜์ด๋ผ, External IP๊ฐ€ ๋…ธ๋“œ ๋„คํŠธ์›Œํฌ ๋Œ€์—ญ๊ณผ ๋‹ฌ๋ผ๋„ ๊ด‘๊ณ  ๊ฐ€๋Šฅ

1. Cilium LoadBalancer IP Pool ์ƒ์„ฑ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "cilium-pool"
spec:
  allowFirstLastIPs: "No"
  blocks:
  - cidr: "172.16.1.0/24"
EOF

# ๊ฒฐ๊ณผ
ciliumloadbalancerippool.cilium.io/cilium-pool created
  • CiliumLoadBalancerIPPool CRD๋ฅผ ์ด์šฉํ•ด 172.16.1.0/24 ๋Œ€์—ญ์„ ํ’€๋กœ ๋“ฑ๋ก
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
NAME          DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-pool   false      False         254             3m12s
  • The /24 block (172.16.1.0/24) contains 256 addresses; with allowFirstLastIPs: "No" the first (.0) and last (.255) addresses are excluded, so 256 - 2 = 254 IPs are assignable

2. ์„œ๋น„์Šค ํƒ€์ž… LoadBalancer ์ ์šฉ ๋ฐ External IP ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{"spec": {"type": "LoadBalancer"}}'

# ๊ฒฐ๊ณผ
service/webpod patched
  • ๊ธฐ์กด webpod ์„œ๋น„์Šค๋ฅผ LoadBalancer ํƒ€์ž…์œผ๋กœ ๋ณ€๊ฒฝํ•˜์—ฌ External IP๋ฅผ ํ• ๋‹น๋ฐ›์Œ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod 

โœ…ย ์ถœ๋ ฅ

NAME     TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
webpod   LoadBalancer   10.96.54.159   172.16.1.1    80:32567/TCP   15h
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool

โœ…ย ์ถœ๋ ฅ

NAME          DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-pool   false      False         253             4m50s
  • cilium-pool์—์„œ 1๊ฐœ IP๊ฐ€ ์†Œ๋ชจ๋˜์–ด 172.16.1.1์ด ์™ธ๋ถ€ ์ ‘๊ทผ์šฉ ์ฃผ์†Œ๋กœ ๋ถ€์—ฌ๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe svc webpod | grep 'Traffic Policy'

โœ…ย ์ถœ๋ ฅ

External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
  • Traffic Policy๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ Cluster ๋ชจ๋“œ๋กœ ์„ค์ •๋˜์–ด, ๋ชจ๋“  ๋…ธ๋“œ๋กœ ํŠธ๋ž˜ํ”ฝ์ด ๋ถ„์‚ฐ๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg service list

โœ…ย ์ถœ๋ ฅ

ID   Frontend                Service Type   Backend                                 
1    10.96.12.94:80/TCP      ClusterIP      1 => 172.20.0.232:4245/TCP (active)     
2    0.0.0.0:30003/TCP       NodePort       1 => 172.20.0.13:8081/TCP (active)      
5    10.96.33.159:80/TCP     ClusterIP      1 => 172.20.0.13:8081/TCP (active)      
6    10.96.198.41:443/TCP    ClusterIP      1 => 172.20.0.122:10250/TCP (active)    
7    10.96.0.1:443/TCP       ClusterIP      1 => 192.168.10.100:6443/TCP (active)   
8    10.96.137.113:443/TCP   ClusterIP      1 => 192.168.10.101:4244/TCP (active)   
9    10.96.0.10:53/TCP       ClusterIP      1 => 172.20.0.82:53/TCP (active)        
                                            2 => 172.20.0.104:53/TCP (active)       
10   10.96.0.10:53/UDP       ClusterIP      1 => 172.20.0.82:53/UDP (active)        
                                            2 => 172.20.0.104:53/UDP (active)       
11   10.96.0.10:9153/TCP     ClusterIP      1 => 172.20.0.82:9153/TCP (active)      
                                            2 => 172.20.0.104:9153/TCP (active)     
12   10.96.54.159:80/TCP     ClusterIP      1 => 172.20.0.15:80/TCP (active)        
                                            2 => 172.20.1.119:80/TCP (active)       
                                            3 => 172.20.2.35:80/TCP (active)        
14   0.0.0.0:32567/TCP       NodePort       1 => 172.20.0.15:80/TCP (active)        
                                            2 => 172.20.1.119:80/TCP (active)       
                                            3 => 172.20.2.35:80/TCP (active)        
17   172.16.1.1:80/TCP       LoadBalancer   1 => 172.20.0.15:80/TCP (active)        
                                            2 => 172.20.1.119:80/TCP (active)       
                                            3 => 172.20.2.35:80/TCP (active)   
  • 172.16.1.1:80/TCP LoadBalancer ํ”„๋ก ํŠธ์—”๋“œ๊ฐ€ ํŒŒ๋“œ 3๊ฐœ(172.20.x.x) ๋ฐฑ์—”๋“œ์™€ ๋งคํ•‘๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LBIP=$(kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $LBIP

โœ…ย ์ถœ๋ ฅ

172.16.1.1Hostname: webpod-697b545f57-5twrq
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.119
IP: fe80::dcab:bcff:fee2:3765
RemoteAddr: 192.168.10.100:59608
GET / HTTP/1.1
Host: 172.16.1.1
User-Agent: curl/8.5.0
Accept: */*
  • ํด๋Ÿฌ์Šคํ„ฐ ๋‚ด๋ถ€์—์„œ curl 172.16.1.1 ํ…Œ์ŠคํŠธ ์‹œ ์ •์ƒ์ ์œผ๋กœ ์„œ๋น„์Šค ์‘๋‹ต ํ™•์ธ๋จ

3. router ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ” ๋ชจ๋‹ˆํ„ฐ๋ง

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# watch "sshpass -p 'vagrant' ssh vagrant@router ip -c route"

โœ…ย ์ถœ๋ ฅ

Every 2.0s: sshpass -p 'vagrant' ssh vagrant@router ip -c route   k8s-ctr: Sat Aug 16 14:04:23 2025

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200 
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200 
172.20.0.0/24 nhid 92 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 88 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 94 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200 
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
  • ํ˜„์žฌ๋Š” Cilium BGP๋ฅผ ํ†ตํ•ด Pod CIDR ๋Œ€์—ญ(172.20.0.0/24, 172.20.1.0/24, 172.20.2.0/24) ์ด ๊ด‘๊ณ ๋œ ์ƒํƒœ

4. Cilium BGP Advertisement ํ™•์ธ (Pod CIDR)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumBGPAdvertisement

โœ…ย ์ถœ๋ ฅ

NAME                 AGE
bgp-advertisements   14h
  • ํ˜„์žฌ๋Š” Pod CIDR๋ฅผ ๊ด‘๊ณ ํ•˜๋„๋ก ์„ค์ •๋œ ์ •์ฑ…๋งŒ ์กด์žฌ

5. ์ƒˆ๋กœ์šด BGP Advertisement ์ƒ์„ฑ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get svc

โœ…ย ์ถœ๋ ฅ

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        15h
webpod       LoadBalancer   10.96.54.159   172.16.1.1    80:32567/TCP   15h
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements-lb-exip-webpod
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: "Service"
      service:
        addresses:
          - LoadBalancerIP
      selector:             
        matchExpressions:
          - { key: app, operator: In, values: [ webpod ] }
EOF

# ๊ฒฐ๊ณผ
ciliumbgpadvertisement.cilium.io/bgp-advertisements-lb-exip-webpod created
  • advertisementType: Service๋กœ ์ง€์ •ํ•˜์—ฌ Service์˜ LoadBalancer External IP๋งŒ ๊ด‘๊ณ ํ•˜๋„๋ก ์„ค์ •
  • ํŠน์ • ์„œ๋น„์Šค(app=webpod)์˜ LoadBalancer IP(172.16.1.1)๊ฐ€ ๊ด‘๊ณ  ๋Œ€์ƒ์ด ๋จ
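
The selector matches labels on the Service object itself, not on the pods. A quick sanity check that webpod actually carries the label the advertisement expects:

kubectl get svc webpod --show-labels    # the LABELS column should include app=webpod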

6. router์—์„œ LoadBalancer External IP ๋ผ์šฐํŠธ ๋ฐ˜์˜ ํ™•์ธ

Every 2.0s: sshpass -p 'vagrant' ssh vagrant@router ip -c route   k8s-ctr: Sat Aug 16 14:13:23 2025

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100 
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200 
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200 
172.16.1.1 nhid 105 proto bgp metric 20 
	nexthop via 192.168.10.101 dev eth1 weight 1 
	nexthop via 192.168.10.100 dev eth1 weight 1 
	nexthop via 192.168.20.100 dev eth2 weight 1 
172.20.0.0/24 nhid 92 via 192.168.10.100 dev eth1 proto bgp metric 20 
172.20.1.0/24 nhid 88 via 192.168.10.101 dev eth1 proto bgp metric 20 
172.20.2.0/24 nhid 94 via 192.168.20.100 dev eth2 proto bgp metric 20 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200 
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
  • ์ƒˆ๋กœ์šด CiliumBGPAdvertisement ์ƒ์„ฑ ํ›„, ๋ผ์šฐํ„ฐ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์— 172.16.1.1/32 ๊ฒฝ๋กœ๊ฐ€ ์ถ”๊ฐ€๋จ
  • Nexthop์€ ํด๋Ÿฌ์Šคํ„ฐ ๋‚ด 3๊ฐœ ๋…ธ๋“œ (192.168.10.100, 192.168.10.101, 192.168.20.100)
  • ๋™์ผ ์šฐ์„ ์ˆœ์œ„๋กœ multipath ๋ผ์šฐํŒ…์ด ๊ตฌ์„ฑ๋จ

7. Cilium BGP Advertisement ๋ฐ ์ •์ฑ… ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumBGPAdvertisement

โœ…ย ์ถœ๋ ฅ

NAME                                AGE
bgp-advertisements                  14h
bgp-advertisements-lb-exip-webpod   2m58s
  • CiliumBGPAdvertisement์— ์ƒˆ๋กœ์šด ์ •์ฑ…(bgp-advertisements-lb-exip-webpod)์ด ์ถ”๊ฐ€๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium-dbg bgp route-policies
VRouter   Policy Name                                             Type     Match Peers         Match Families   Match Prefixes (Min..Max Len)   RIB Action   Path Actions
65001     allow-local                                             import                                                                        accept       
65001     tor-switch-ipv4-PodCIDR                                 export   192.168.10.200/32                    172.20.1.0/24 (24..24)          accept       
65001     tor-switch-ipv4-Service-webpod-default-LoadBalancerIP   export   192.168.10.200/32                    172.16.1.1/32 (32..32)          accept
  • ์ •์ฑ… ์ด๋ฆ„์€ ์„œ๋น„์Šค๋ณ„๋กœ ์ƒ์„ฑ๋˜๋ฉฐ, ์ด๋ฒˆ์—๋Š” tor-switch-ipv4-Service-webpod-default-LoadBalancerIP
  • ๊ฒฐ๊ณผ์ ์œผ๋กœ Pod CIDR + Service LoadBalancer IP(172.16.1.1/32) ๊ฐ€ ๋™์‹œ์— export ๋จ

8. Cilium BGP Routes ํ™•์ธ (๋…ธ๋“œ๋ณ„ ๊ด‘๊ณ  ์ƒํƒœ)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes available ipv4 unicast

โœ…ย ์ถœ๋ ฅ

Node      VRouter   Prefix          NextHop   Age         Attrs
k8s-ctr   65001     172.16.1.1/32   0.0.0.0   5m6s        [{Origin: i} {Nexthop: 0.0.0.0}]   
          65001     172.20.0.0/24   0.0.0.0   13h23m31s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w0    65001     172.16.1.1/32   0.0.0.0   5m6s        [{Origin: i} {Nexthop: 0.0.0.0}]   
          65001     172.20.2.0/24   0.0.0.0   13h23m43s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w1    65001     172.16.1.1/32   0.0.0.0   5m6s        [{Origin: i} {Nexthop: 0.0.0.0}]   
          65001     172.20.1.0/24   0.0.0.0   13h23m44s   [{Origin: i} {Nexthop: 0.0.0.0}] 
  • ํด๋Ÿฌ์Šคํ„ฐ์˜ ๋ชจ๋“  ๋…ธ๋“œ(k8s-ctr, k8s-w0, k8s-w1)๊ฐ€ 172.16.1.1/32 ๊ฒฝ๋กœ๋ฅผ BGP๋กœ ๊ด‘๊ณ 
  • ๋ผ์šฐํ„ฐ๋Š” ์ด๋ฅผ ๋ฐ›์•„ ๋ชจ๋“  ๋…ธ๋“œ๋กœ ํŠธ๋ž˜ํ”ฝ์„ ๋ณด๋‚ผ ์ˆ˜ ์žˆ๊ฒŒ ๋จ

9. router BGP ํ…Œ์ด๋ธ”์—์„œ Multipath ๋ฐ˜์˜ ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip route bgp'"

โœ…ย ์ถœ๋ ฅ

Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

B>* 172.16.1.1/32 [20/0] via 192.168.10.100, eth1, weight 1, 00:07:20
  *                      via 192.168.10.101, eth1, weight 1, 00:07:20
  *                      via 192.168.20.100, eth2, weight 1, 00:07:20
B>* 172.20.0.0/24 [20/0] via 192.168.10.100, eth1, weight 1, 00:29:43
B>* 172.20.1.0/24 [20/0] via 192.168.10.101, eth1, weight 1, 00:29:42
B>* 172.20.2.0/24 [20/0] via 192.168.20.100, eth2, weight 1, 00:29:42
  • ๋ผ์šฐํ„ฐ BGP ํ…Œ์ด๋ธ”์—์„œ 172.16.1.1/32 ๊ฒฝ๋กœ๋Š” multipath ์ƒํƒœ๋กœ ๊ธฐ๋ก๋จ
  • 192.168.10.100, 192.168.10.101, 192.168.20.100 ์„ธ ๋…ธ๋“œ๊ฐ€ ๋™์ผ ํ”„๋ฆฌํ”ฝ์Šค๋ฅผ ๊ด‘๊ณ ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ผ์šฐํ„ฐ๋Š” ์„ธ ๊ฒฝ๋กœ๋ฅผ ๋ชจ๋‘ ์œ ํšจ(* valid)๋กœ ์ธ์‹
  • BGP ๊ฒฝ๋กœ ์šฐ์„ ์ˆœ์œ„๊ฐ€ ๋™์ผ โ†’ ์ปค๋„ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์— multipath๋กœ ๋™์‹œ์— ๋ฐ˜์˜๋จ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp summary'"

โœ…ย ์ถœ๋ ฅ

IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 10
RIB entries 9, using 1728 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
192.168.10.100  4      65001      1284      1294        0    0    0 00:31:06            2        5 N/A
192.168.10.101  4      65001      1284      1294        0    0    0 00:31:05            2        5 N/A
192.168.20.100  4      65001      1125      1132        0    0    0 00:31:05            2        5 N/A

Total number of neighbors 3
  • show ip bgp summary ์ถœ๋ ฅ์—์„œ๋„ 3๊ฐœ์˜ ํ”ผ์–ด๊ฐ€ ๋ชจ๋‘ ๋™์ผ ํ”„๋ฆฌํ”ฝ์Šค๋ฅผ ๊ด‘๊ณ ํ–ˆ์Œ์„ ํ™•์ธ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp'"

โœ…ย ์ถœ๋ ฅ

BGP table version is 10, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.10.1.0/24     0.0.0.0                  0         32768 i
*= 172.16.1.1/32    192.168.20.100                         0 65001 i
*=                  192.168.10.101                         0 65001 i
*>                  192.168.10.100                         0 65001 i
*> 172.20.0.0/24    192.168.10.100                         0 65001 i
*> 172.20.1.0/24    192.168.10.101                         0 65001 i
*> 172.20.2.0/24    192.168.20.100                         0 65001 i

Displayed  5 routes and 7 total paths
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp 172.16.1.1/32'"
BGP routing table entry for 172.16.1.1/32, version 10
Paths: (3 available, best #3, table default)
  Advertised to non peer-group peers:
  192.168.10.100 192.168.10.101 192.168.20.100
  65001
    192.168.20.100 from 192.168.20.100 (192.168.20.100)
      Origin IGP, valid, external, multipath
      Last update: Sat Aug 16 14:10:56 2025
  65001
    192.168.10.101 from 192.168.10.101 (192.168.10.101)
      Origin IGP, valid, external, multipath
      Last update: Sat Aug 16 14:10:56 2025
  65001
    192.168.10.100 from 192.168.10.100 (192.168.10.100)
      Origin IGP, valid, external, multipath, best (Router ID)
      Last update: Sat Aug 16 14:10:56 2025
  • ํŠน์ • ๊ฒฝ๋กœ(172.16.1.1/32)๋งŒ ์กฐํšŒํ•ด๋„ multipath ํ•ญ๋ชฉ์ด ๋ชจ๋‘ ํ‘œ์‹œ๋˜๋ฉฐ, best ๊ฒฝ๋กœ๋Š” Router ID ๊ธฐ์ค€์œผ๋กœ ์„ ํƒ๋จ

๐Ÿ›œ router์—์„œ LB EX-IP ํ˜ธ์ถœ ํ™•์ธ

1. router์—์„œ LoadBalancer External IP ํ˜ธ์ถœ ํ…Œ์ŠคํŠธ

root@router:~# LBIP=172.16.1.1
curl -s $LBIP

โœ…ย ์ถœ๋ ฅ

Hostname: webpod-697b545f57-cp7xq
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.15
IP: fe80::4870:31ff:fe42:a8a6
RemoteAddr: 192.168.10.200:43094
GET / HTTP/1.1
Host: 172.16.1.1
User-Agent: curl/8.5.0
Accept: */*
  • ๋ผ์šฐํ„ฐ์—์„œ curl ๋ช…๋ น์œผ๋กœ LB External IP(172.16.1.1) ํ˜ธ์ถœ

2. ๋ผ์šฐํ„ฐ์—์„œ ๋ถ€ํ•˜๋ถ„์‚ฐ ๊ฒฐ๊ณผ ํ™•์ธ

root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr

โœ…ย ์ถœ๋ ฅ

     36 Hostname: webpod-697b545f57-xtmdx
     36 Hostname: webpod-697b545f57-5twrq
     28 Hostname: webpod-697b545f57-cp7xq
  • 100ํšŒ ๋ฐ˜๋ณต ํ˜ธ์ถœ ์‹œ ํŒŒ๋“œ 3๊ฐœ๋กœ ํŠธ๋ž˜ํ”ฝ์ด ๋ถ„์‚ฐ๋˜๋Š” ๊ฒƒ์„ ํ™•์ธ
  • ์ด๋Š” Cilium LB๊ฐ€ multipath๋ฅผ ํ†ตํ•ด ์ •์ƒ์ ์œผ๋กœ ๋ถ€ํ•˜๋ถ„์‚ฐ์„ ์ˆ˜ํ–‰ํ•˜๊ณ  ์žˆ์Œ์„ ์˜๋ฏธ

3. ์‹ค์‹œ๊ฐ„ ํ˜ธ์ถœ ๋ชจ๋‹ˆํ„ฐ๋ง์œผ๋กœ ๋…ธ๋“œ ๋ถ„์‚ฐ ํ™•์ธ

root@router:~# while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done

โœ…ย ์ถœ๋ ฅ

Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34884
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34900
Hostname: webpod-697b545f57-5twrq
RemoteAddr: 192.168.10.100:34916
Hostname: webpod-697b545f57-5twrq
RemoteAddr: 192.168.10.100:34924
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34940
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34946
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34948
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34954
Hostname: webpod-697b545f57-5twrq
RemoteAddr: 192.168.10.100:34964
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34966
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34974
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34986
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:35002
...
  • curl ๋ฐ˜๋ณต ํ˜ธ์ถœ ์‹œ RemoteAddr ํ™•์ธ ๊ฒฐ๊ณผ, ๋™์ผ LB IP ์š”์ฒญ์ด ์—ฌ๋Ÿฌ ๋…ธ๋“œ(192.168.10.100, 192.168.10.200)๋กœ ๋ถ„์‚ฐ๋จ
  • ์ฆ‰, ์™ธ๋ถ€์—์„œ LB IP๋กœ ์ ‘๊ทผ ์‹œ ๋ผ์šฐํ„ฐ๋Š” multipath ๋ผ์šฐํŒ…์„ ํ†ตํ•ด ๋‹ค์ˆ˜ ๋…ธ๋“œ๋กœ ํŠธ๋ž˜ํ”ฝ์„ ์ „๋‹ฌํ•จ

4. Pod ์ˆ˜ ์ถ•์†Œ ํ›„์—๋„ ๋ชจ๋“  ๋…ธ๋“œ๊ฐ€ ๊ด‘๊ณ ๋˜๋Š” ๋ฌธ์ œ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 2
deployment.apps/webpod scaled
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

NAME                      READY   STATUS    RESTARTS      AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   1 (14h ago)   15h   172.20.0.35    k8s-ctr   <none>           <none>
webpod-697b545f57-5twrq   1/1     Running   0             13h   172.20.1.119   k8s-w1    <none>           <none>
webpod-697b545f57-xtmdx   1/1     Running   0             13h   172.20.2.35    k8s-w0    <none>           <none>
  • webpod๋ฅผ replicas=2๋กœ ์ค„์—ฌ ํŒŒ๋“œ๊ฐ€ k8s-w0, k8s-w1์—๋งŒ ์กด์žฌ
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes
(Defaulting to `available ipv4 unicast` routes, please see help for more options)

Node      VRouter   Prefix          NextHop   Age         Attrs
k8s-ctr   65001     172.16.1.1/32   0.0.0.0   17m5s       [{Origin: i} {Nexthop: 0.0.0.0}]   
          65001     172.20.0.0/24   0.0.0.0   13h35m30s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w0    65001     172.16.1.1/32   0.0.0.0   17m5s       [{Origin: i} {Nexthop: 0.0.0.0}]   
          65001     172.20.2.0/24   0.0.0.0   13h35m42s   [{Origin: i} {Nexthop: 0.0.0.0}]   
k8s-w1    65001     172.16.1.1/32   0.0.0.0   17m5s       [{Origin: i} {Nexthop: 0.0.0.0}]   
          65001     172.20.1.0/24   0.0.0.0   13h35m43s   [{Origin: i} {Nexthop: 0.0.0.0}] 
  • ๊ทธ๋Ÿฌ๋‚˜ ์—ฌ์ „ํžˆ ๋ชจ๋“  ๋…ธ๋“œ(k8s-ctr, k8s-w0, k8s-w1)๊ฐ€ 172.16.1.1/32๋ฅผ ๊ด‘๊ณ 
  • ๊ทธ ๊ฒฐ๊ณผ, ๋ผ์šฐํ„ฐ์—์„œ ๋ถˆํ•„์š”ํ•œ ๋…ธ๋“œ(k8s-ctr)๋กœ๋„ ํŠธ๋ž˜ํ”ฝ์ด ์ „๋‹ฌ๋จ โ†’ ๋น„ํšจ์œจ์ ์ธ ๊ฒฝ๋กœ ๋ฐœ์ƒ

5. Pod ๋ถ€์žฌ์—๋„ ๋ผ์šฐํ„ฐ ๊ฒฝ๋กœ๊ฐ€ ์œ ์ง€๋˜๋Š” ๋ฌธ์ œ

  • ํ˜„์žฌ Pod๊ฐ€ ์กด์žฌํ•˜์ง€ ์•Š์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ๋ผ์šฐํ„ฐ์—๋Š” ์—ฌ์ „ํžˆ 172.16.1.1/32 ๊ฒฝ๋กœ๊ฐ€ ์œ ์ง€๋จ
  • ์ด๋Š” ๋ชจ๋“  ๋…ธ๋“œ๊ฐ€ External IP๋ฅผ ๊ด‘๊ณ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ฐœ์ƒํ•˜๋Š” ํ˜„์ƒ์ž„

6. Tcpdump๋กœ ํ™•์ธํ•œ ๋ถˆํ•„์š” ๊ฒฝ๋กœ ํ๋ฆ„

tcpdump -i eth1 -A -s 0 -nn 'tcp port 80'

โœ…ย ์ถœ๋ ฅ

  • Run tcpdump in a fresh terminal on each node (k8s-ctr, k8s-w1, k8s-w0)
root@router:~# LBIP=172.16.1.1
curl -s $LBIP

โœ…ย ์ถœ๋ ฅ

  • 172.16.1.1 ํ˜ธ์ถœ ์‹œ, ์š”์ฒญ์ด k8s-ctr โ†’ k8s-w0๋กœ ์ด๋™ํ•˜๋Š” ํŒจํ‚ท์ด ๋™์‹œ์— ๊ด€์ฐฐ๋จ
  • ์ฆ‰, Pod๊ฐ€ ์—†๋Š” ๋…ธ๋“œ๋กœ๋„ ํŠธ๋ž˜ํ”ฝ์ด ์œ ์ž…๋˜์–ด ๋ถˆํ•„์š”ํ•œ ๊ฒฝ๋กœ๊ฐ€ ์™„์„ฑ๋จ

๐Ÿ“ ExternalTrafficPolicy(Local) ์ ์šฉ ๋ฐ ECMP ํ•ด์‹œ ์ •์ฑ… ๋ณ€๊ฒฝ

1. ExternalTrafficPolicy(Local) ์ ์šฉ

# ๋ชจ๋‹ˆํ„ฐ๋ง
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# watch "sshpass -p 'vagrant' ssh vagrant@router ip -c route"

# k8s-ctr
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch service webpod -p '{"spec":{"externalTrafficPolicy":"Local"}}'

โœ…ย ์ถœ๋ ฅ

  • kubectl patch ๋ช…๋ น์œผ๋กœ Service์˜ externalTrafficPolicy๋ฅผ Local๋กœ ๋ณ€๊ฒฝ
root@router:~# vtysh -c 'show ip bgp'

โœ…ย ์ถœ๋ ฅ

BGP table version is 11, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.10.1.0/24     0.0.0.0                  0         32768 i
*= 172.16.1.1/32    192.168.20.100                         0 65001 i
*>                  192.168.10.101                         0 65001 i
*> 172.20.0.0/24    192.168.10.100                         0 65001 i
*> 172.20.1.0/24    192.168.10.101                         0 65001 i
*> 172.20.2.0/24    192.168.20.100                         0 65001 i

Displayed  5 routes and 6 total paths
  • ๋ณ€๊ฒฝ ์ „(Cluster)์—๋Š” ๋ชจ๋“  ๋…ธ๋“œ๊ฐ€ BGP ๊ด‘๊ณ ๋ฅผ ์ˆ˜ํ–‰ โ†’ pod๊ฐ€ ์—†๋Š” ๋…ธ๋“œ๊นŒ์ง€ ๊ฒฝ๋กœ์— ํฌํ•จ๋จ
  • ๋ณ€๊ฒฝ ํ›„(Local)์—๋Š” pod๊ฐ€ ์กด์žฌํ•˜๋Š” ๋…ธ๋“œ๋งŒ BGP ๊ด‘๊ณ  โ†’ ๋ถˆํ•„์š”ํ•œ ๋ผ์šฐํŒ… ์ œ๊ฑฐ
  • ํ˜„์žฌ๋Š” w0, w1 ๋…ธ๋“œ๋งŒ ๊ด‘๊ณ 

2. Linux ECMP ๊ธฐ๋ณธ ํ•ด์‹œ ์ •์ฑ…

root@router:~# LBIP=172.16.1.1
root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
    100 Hostname: webpod-697b545f57-xtmdx
  • ๋ฆฌ๋ˆ…์Šค๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ L3 ๊ธฐ๋ฐ˜ ํ•ด์‹œ ์ •์ฑ… ์‚ฌ์šฉ
  • ์†Œ์Šค/๋ชฉ์ ์ง€ IP๊ฐ€ ๋™์ผํ•  ๊ฒฝ์šฐ, ์—ฌ๋Ÿฌ ๊ฒฝ๋กœ๊ฐ€ ์žˆ์–ด๋„ ํ•˜๋‚˜์˜ ๊ฒฝ๋กœ๋งŒ ์‚ฌ์šฉ
  • curl ํ…Œ์ŠคํŠธ ์‹œ ํŠน์ • Pod๋งŒ 100% ์‘๋‹ต
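
The active policy can be read back before changing it; 0 selects the L3 (source/destination address) hash and 1 the L4 five-tuple hash:

root@router:~# sysctl net.ipv4.fib_multipath_hash_policy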

3. ECMP Hash Policy ๋ณ€๊ฒฝ

root@router:~# sudo sysctl -w net.ipv4.fib_multipath_hash_policy=1
echo "net.ipv4.fib_multipath_hash_policy=1" >> /etc/sysctl.conf

# ๊ฒฐ๊ณผ
net.ipv4.fib_multipath_hash_policy = 1
  • net.ipv4.fib_multipath_hash_policy=1 ์„ค์ •์œผ๋กœ L4 ํฌํŠธ ๊ธฐ๋ฐ˜ ํ•ด์‹œ ์ ์šฉ
  • ์†Œ์Šค ํฌํŠธ๊ฐ€ ๋‹ฌ๋ผ์งˆ ๊ฒฝ์šฐ ๋‹ค๋ฅธ ๊ฒฝ๋กœ๋ฅผ ํ™œ์šฉ ๊ฐ€๋Šฅ โ†’ ๋ถ€ํ•˜๋ถ„์‚ฐ ๊ฐœ์„ 
root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
     54 Hostname: webpod-697b545f57-5twrq
     46 Hostname: webpod-697b545f57-xtmdx

4. Deployment ํ™•์žฅ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 3

# ๊ฒฐ๊ณผ
deployment.apps/webpod scaled
  • kubectl scale ๋ช…๋ น์œผ๋กœ webpod๋ฅผ 3๊ฐœ Replica๋กœ ํ™•์žฅ
  • ์ƒˆ๋กœ์šด Pod๊ฐ€ ์ƒ์„ฑ๋˜๋ฉด ํ•ด๋‹น ๋…ธ๋“œ์—์„œ ์ฆ‰์‹œ BGP ๊ฒฝ๋กœ ๊ด‘๊ณ  ๋ฐ˜์˜
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

NAME                      READY   STATUS    RESTARTS      AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   1 (14h ago)   16h   172.20.0.35    k8s-ctr   <none>           <none>
webpod-697b545f57-5twrq   1/1     Running   0             14h   172.20.1.119   k8s-w1    <none>           <none>
webpod-697b545f57-npkj5   1/1     Running   0             8s    172.20.0.159   k8s-ctr   <none>           <none>
webpod-697b545f57-xtmdx   1/1     Running   0             14h   172.20.2.35    k8s-w0    <none>           <none>
  • k8s-ctr ๋…ธ๋“œ์—๋„ ์ƒˆ๋กœ์šด Pod(webpod-697b545f57-npkj5)๊ฐ€ ๋ฐฐ์น˜๋˜์–ด 3๊ฐœ Pod๊ฐ€ ๋ชจ๋‘ ๋‹ค๋ฅธ ๋…ธ๋“œ์— ์œ„์น˜

5. ๋ผ์šฐํŒ… ๊ฒฝ๋กœ ํ™•์ธ

root@router:~# ip -c route

โœ…ย ์ถœ๋ ฅ

default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100 
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200 
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200 
172.16.1.1 nhid 114 proto bgp metric 20 
	nexthop via 192.168.10.101 dev eth1 weight 1 
	nexthop via 192.168.10.100 dev eth1 weight 1 
	nexthop via 192.168.20.100 dev eth2 weight 1 
172.20.0.0/24 nhid 92 via 192.168.10.100 dev eth1 proto bgp metric 20 
172.20.1.0/24 nhid 88 via 192.168.10.101 dev eth1 proto bgp metric 20 
172.20.2.0/24 nhid 94 via 192.168.20.100 dev eth2 proto bgp metric 20 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200 
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200 
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100 
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100 
  • With a pod running on k8s-ctr again, its nexthop (192.168.10.100) is re-added to the 172.16.1.1 multipath route

6. ExternalTrafficPolicy ํšจ๊ณผ

  • Cluster ๋ชจ๋“œ: Pod๊ฐ€ ์—†๋Š” ๋…ธ๋“œ๋„ ๊ฒฝ์œ  ๊ฐ€๋Šฅ โ†’ ๋น„ํšจ์œจ์ ์ธ ๋ผ์šฐํŒ… ๋ฐœ์ƒ
  • Local ๋ชจ๋“œ: ์š”์ฒญ์ด ๋„๋‹ฌํ•œ ๋…ธ๋“œ์˜ Pod์—์„œ ์ง์ ‘ ์‘๋‹ต โ†’ ๋ถˆํ•„์š”ํ•œ hop ์ œ๊ฑฐ
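
A one-liner to confirm which mode a Service is currently in:

kubectl get svc webpod -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'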

7. ํŠธ๋ž˜ํ”ฝ ๋ถ„์‚ฐ ๊ฒ€์ฆ

root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
     35 Hostname: webpod-697b545f57-npkj5
     34 Hostname: webpod-697b545f57-5twrq
     31 Hostname: webpod-697b545f57-xtmdx
  • curl ๋ฐ˜๋ณต ํ…Œ์ŠคํŠธ๋กœ 100๋ฒˆ ์š”์ฒญ ์‹œ, 3๊ฐœ Pod๋กœ ๊ท ๋“ฑํ•˜๊ฒŒ ๋ถ„์‚ฐ
  • ExternalTrafficPolicy(Local) + ์Šค์ผ€์ผ ์•„์›ƒ ์กฐํ•ฉ์œผ๋กœ ์•ˆ์ •์ ์ด๊ณ  ํšจ์œจ์ ์ธ ๋ถ€ํ•˜๋ถ„์‚ฐ ๋‹ฌ์„ฑ

BGP + SNAT + Random โ†’ ๊ถŒ์žฅ ๋ฐฉ์‹

  • BGP(ECMP) ๊ธฐ๋ฐ˜ ๋ผ์šฐํŒ…๊ณผ K8S Service(LB EX-IP, ExternalTrafficPolicy: Local)๋ฅผ ์กฐํ•ฉ
  • SNAT ์ ์šฉ + Random ์•Œ๊ณ ๋ฆฌ์ฆ˜ โ†’ ๊ฐ€์žฅ ๋‹จ์ˆœํ•˜๊ณ  ์ผ๋ฐ˜์ ์œผ๋กœ ๊ถŒ์žฅ๋˜๋Š” ๋ฐฉ์‹

๐Ÿงฎ DSR + Maglev ๋ฐฉ์‹ โ†’ ๊ทธ๋‚˜๋งˆ ๊ดœ์ฐฎ์€ ๋ฐฉ์‹

  • BGP(ECMP) + Service(LB EX-IP, ExternalTrafficPolicy: Cluster) + DSR + Maglev
  • Geneve ํ—ค๋” encapsulation ํ•„์š”

1. DSR ๋„์ž… ๋ฐฐ๊ฒฝ

  • ๊ธฐ์กด L4 ๋กœ๋“œ๋ฐธ๋Ÿฐ์„œ๋Š” ์žฅ๋น„ ๊ฐ€๊ฒฉ์ด ๋น„์Œˆ
  • ๋Œ€์•ˆ์œผ๋กœ Direct Server Return(DSR) ๊ฐœ๋… ๋“ฑ์žฅ โ†’ ์„œ๋ฒ„๊ฐ€ ํด๋ผ์ด์–ธํŠธ๋กœ ์ง์ ‘ ์‘๋‹ต์„ ๋ฆฌํ„ด

2. Cilium ์„ค์ • ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium status --verbose

โœ…ย ์ถœ๋ ฅ

...
KubeProxyReplacement Details:
  Status:               True
  Socket LB:            Enabled
  Socket LB Tracing:    Enabled
  Socket LB Coverage:   Full
  Devices:              eth0   fe80::5054:ff:fea7:8e7a 192.168.121.62, eth1   192.168.10.101 fe80::5054:ff:fefb:b52e (Direct Routing)
  Mode:                 SNAT
  Backend Selection:    Random
  Session Affinity:     Enabled
  NAT46/64 Support:     Disabled
  XDP Acceleration:     Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767) 
  - LoadBalancer:   Enabled 
  - externalIPs:    Enabled 
  - HostPort:       Enabled
  Annotations:
  - service.cilium.io/node
  - service.cilium.io/node-selector
  - service.cilium.io/proxy-delegation
  - service.cilium.io/src-ranges-policy
  - service.cilium.io/type
...
  • Mode: SNAT (์ดˆ๊ธฐ ์ƒํƒœ)
  • Backend Selection: Random

3. ๋ชจ๋“ˆ ์ค€๋น„ (Geneve)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# modprobe geneve
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo modprobe geneve ; echo; done
>> node : k8s-w1 <<

>> node : k8s-w0 <<
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
geneve                 49152  0
ip6_udp_tunnel         16384  1 geneve
udp_tunnel             32768  1 geneve

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo lsmod | grep -E 'vxlan|geneve' ; echo; done
>> node : k8s-w1 <<
geneve                 49152  0
ip6_udp_tunnel         16384  1 geneve
udp_tunnel             32768  1 geneve

>> node : k8s-w0 <<
geneve                 49152  0
ip6_udp_tunnel         16384  1 geneve
udp_tunnel             32768  1 geneve
  • ๋ชจ๋“  ๋…ธ๋“œ์— geneve ๋ชจ๋“ˆ ๋กœ๋“œ ์™„๋ฃŒ
  • geneve๋Š” ๋…ธ๋“œ ๊ฐ„ ํŒŒ๋“œ ํ†ต์‹  ์ „์ฒด์— ์“ฐ๋Š” ๊ฒŒ ์•„๋‹ˆ๋ผ, DSR ์ „์†ก ๊ฒฝ์œ  ์‹œ์—๋งŒ ์‚ฌ์šฉ๋จ
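
modprobe does not survive a reboot; to have the module loaded at boot on every node, the standard systemd modules-load.d drop-in can be used (a sketch):

echo geneve | sudo tee /etc/modules-load.d/geneve.conf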

4. Helm์œผ๋กœ Cilium DSR ๋ชจ๋“œ ์ ์šฉ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --version 1.18.0 --namespace kube-system --reuse-values \
  --set tunnelProtocol=geneve --set loadBalancer.mode=dsr --set loadBalancer.dsrDispatch=geneve \
  --set loadBalancer.algorithm=maglev

# ๊ฒฐ๊ณผ
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 17:29:45 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.18.0.

For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium

# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium status --verbose

โœ…ย ์ถœ๋ ฅ

...
KubeProxyReplacement Details:
  Status:                True
  Socket LB:             Enabled
  Socket LB Tracing:     Enabled
  Socket LB Coverage:    Full
  Devices:               eth0   192.168.121.62 fe80::5054:ff:fea7:8e7a, eth1   192.168.10.101 fe80::5054:ff:fefb:b52e (Direct Routing)
  Mode:                  DSR
    DSR Dispatch Mode:   Geneve
  Backend Selection:     Maglev (Table Size: 16381)
  Session Affinity:      Enabled
  NAT46/64 Support:      Disabled
  XDP Acceleration:      Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767) 
  - LoadBalancer:   Enabled 
  - externalIPs:    Enabled 
  - HostPort:       Enabled
  Annotations:
  - service.cilium.io/node
  - service.cilium.io/node-selector
  - service.cilium.io/proxy-delegation
  - service.cilium.io/src-ranges-policy
  - service.cilium.io/type
...
  • Mode: DSR
  • DSR Dispatch Mode: Geneve
  • Backend Selection: Maglev
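
The same switches can also be read from the cilium-config ConfigMap, assuming the Helm values above surface as the usual bpf-lb-* / tunnel-protocol keys:

kubectl -n kube-system get cm cilium-config -o yaml | grep -E 'bpf-lb-mode|bpf-lb-algorithm|bpf-lb-dsr-dispatch|tunnel-protocol'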

5. ExternalTrafficPolicy Cluster๋กœ ์›๋ณต

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

# ๊ฒฐ๊ณผ
service/webpod patched
  • ๋‹ค์‹œ Cluster ๋ชจ๋“œ๋กœ ๋ณต์›

6. DSR ํŒจํ‚ท ์บก์ฒ˜

(1) ํŒจํ‚ท ์บก์ฒ˜ ์ค€๋น„

tcpdump -i eth1 -w /tmp/dsr.pcap
  • k8s-ctr, k8s-w1, k8s-w0 ๋ชจ๋‘ tcpdump ์‹คํ–‰

(2) ์™ธ๋ถ€ ์ ‘๊ทผ ํ…Œ์ŠคํŠธ

root@router:~#  curl -s $LBIP
root@router:~#  curl -s $LBIP

โœ…ย ์ถœ๋ ฅ

  • ์™ธ๋ถ€ ๋ผ์šฐํ„ฐ์—์„œ LBIP๋กœ curl ์š”์ฒญ

(3) ์บก์ฒ˜ ํŒŒ์ผ ํ™•์ธ

vagrant plugin install vagrant-scp

# ๊ฒฐ๊ณผ
Installing the 'vagrant-scp' plugin. This can take a few minutes...
Building native extensions. This could take a while...
Building native extensions. This could take a while...
Fetching vagrant-scp-0.5.9.gem
Installed the plugin 'vagrant-scp (0.5.9)'!
  • Vagrant scp ํ”Œ๋Ÿฌ๊ทธ์ธ ์„ค์น˜
vagrant scp k8s-ctr:/tmp/dsr.pcap .

# ๊ฒฐ๊ณผ
[fog][WARNING] Unrecognized arguments: libvirt_ip_command
Warning: Permanently added '192.168.121.70' (ED25519) to the list of known hosts.
dsr.pcap                                                         100%   94KB  46.4MB/s   00:00 
  • pcap ํŒŒ์ผ์„ Host๋กœ ์ „์†ก ํ›„ Termshark๋กœ ์—ด์–ด ๋ถ„์„
termshark -r dsr.pcap

โœ…ย ์ถœ๋ ฅ

  • ์ถœ๋ฐœ์ง€ IP: 192.168.10.100
  • ๋ชฉ์ ์ง€ IP: 192.168.20.100 (k8s-w0)
  • ์™ธ๋ถ€ ์ ‘๊ทผ์ด ์ปจํŠธ๋กคํ”Œ๋ ˆ์ธ์œผ๋กœ ๋“ค์–ด์˜จ ๋’ค ์›Œ์ปค ๋…ธ๋“œ๋กœ ์ „๋‹ฌ๋œ ๊ฒƒ ํ™•์ธ
  • Geneve encapsulation ํ—ค๋” ์กด์žฌ

  • ์ปจํŠธ๋กคํ”Œ๋ ˆ์ธ(k8s-ctr)์—์„œ ์™ธ๋ถ€ ํด๋ผ์ด์–ธํŠธ(router) ๋กœ ๋ฐ”๋กœ ๋ฆฌํ„ดํ•˜๋Š” ํŒจํ‚ท ์กด์žฌ
  • D.IP / D.Port = ์ตœ์ดˆ ์š”์ฒญ ์‹œ S.IP / S.Port ๊ทธ๋Œ€๋กœ ์œ ์ง€๋จ
  • ์ฆ‰, SNAT์ด ์•„๋‹Œ DSR ๋ฐฉ์‹์œผ๋กœ ์‘๋‹ต์ด ์ „๋‹ฌ๋จ
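
Since Geneve is carried over UDP port 6081 (its IANA-assigned port), the DSR dispatch leg can also be isolated straight from the capture:

tcpdump -i eth1 -nn 'udp port 6081'    # shows only the Geneve-encapsulated k8s-ctr -> k8s-w0 forwarding leg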

๐Ÿงฉ ClusterMesh ํ™˜๊ฒฝ ์ค€๋น„

1. West ํด๋Ÿฌ์Šคํ„ฐ ์ƒ์„ฑ

kind create cluster --name west --image kindest/node:v1.33.2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # sample apps
    hostPort: 30000
  - containerPort: 30001 # hubble ui
    hostPort: 30001
- role: worker
  extraPortMappings:
  - containerPort: 30002 # sample apps
    hostPort: 30002
networking:
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.2.0.0/16"
  disableDefaultCNI: true
  kubeProxyMode: none
EOF

# ๊ฒฐ๊ณผ
Creating cluster "west" ...
 โœ“ Ensuring node image (kindest/node:v1.33.2) ๐Ÿ–ผ 
 โœ“ Preparing nodes ๐Ÿ“ฆ ๐Ÿ“ฆ  
 โœ“ Writing configuration ๐Ÿ“œ 
 โœ“ Starting control-plane ๐Ÿ•น๏ธ 
 โœ“ Installing StorageClass ๐Ÿ’พ 
 โœ“ Joining worker nodes ๐Ÿšœ 
Set kubectl context to "kind-west"
You can now use your cluster with:

kubectl cluster-info --context kind-west

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community ๐Ÿ™‚

2. East ํด๋Ÿฌ์Šคํ„ฐ ์ƒ์„ฑ

kind create cluster --name east --image kindest/node:v1.33.2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000 # sample apps
    hostPort: 31000
  - containerPort: 31001 # hubble ui
    hostPort: 31001
- role: worker
  extraPortMappings:
  - containerPort: 31002 # sample apps
    hostPort: 31002
networking:
  podSubnet: "10.1.0.0/16"
  serviceSubnet: "10.3.0.0/16"
  disableDefaultCNI: true
  kubeProxyMode: none
EOF

# ๊ฒฐ๊ณผ
Creating cluster "east" ...
 โœ“ Ensuring node image (kindest/node:v1.33.2) ๐Ÿ–ผ
 โœ“ Preparing nodes ๐Ÿ“ฆ ๐Ÿ“ฆ  
 โœ“ Writing configuration ๐Ÿ“œ 
 โœ“ Starting control-plane ๐Ÿ•น๏ธ 
 โœ“ Installing StorageClass ๐Ÿ’พ 
 โœ“ Joining worker nodes ๐Ÿšœ 
Set kubectl context to "kind-east"
You can now use your cluster with:

kubectl cluster-info --context kind-east

Not sure what to do next? ๐Ÿ˜…  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

3. ์ปจํ…์ŠคํŠธ ํ™•์ธ

kubectl config get-contexts 

โœ…ย ์ถœ๋ ฅ

CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
*         kind-east   kind-east   kind-east   
          kind-west   kind-west   kind-west   
  • ๋‘ ํด๋Ÿฌ์Šคํ„ฐ๊ฐ€ kubeconfig์— ์ •์ƒ ๋“ฑ๋ก๋จ ํ™•์ธ

4. ๋…ธ๋“œ ๊ธฐ๋ณธ ํˆด ์„ค์น˜

docker exec -it west-control-plane sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it west-worker sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it east-control-plane sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it east-worker sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'

5. Context ์ „ํ™˜ ๋ฐ ๋…ธ๋“œ ์กฐํšŒ

kubectl config set-context kind-east

# ๊ฒฐ๊ณผ
Context "kind-east" modified.
  • Intended to make east the default context; note that `kubectl config set-context` only modifies a context entry — switching is done with `kubectl config use-context kind-east` (here the current context was already kind-east, since it was the last cluster created)
kubectl get node -v=6 --context kind-east

โœ…ย ์ถœ๋ ฅ

I0816 18:18:58.547416  180043 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I0816 18:18:58.547878  180043 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0816 18:18:58.547890  180043 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0816 18:18:58.547898  180043 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0816 18:18:58.547901  180043 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0816 18:18:58.561481  180043 round_trippers.go:560] GET https://127.0.0.1:38631/api?timeout=32s 200 OK in 13 milliseconds
I0816 18:18:58.564716  180043 round_trippers.go:560] GET https://127.0.0.1:38631/apis?timeout=32s 200 OK in 1 milliseconds
I0816 18:18:58.598067  180043 round_trippers.go:560] GET https://127.0.0.1:38631/api/v1/nodes?limit=500 200 OK in 14 milliseconds
NAME                 STATUS     ROLES           AGE     VERSION
east-control-plane   NotReady   control-plane   7m31s   v1.33.2
east-worker          NotReady   <none>          7m20s   v1.33.2
kubectl get node -v=6

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
I0816 18:19:58.446920  180152 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I0816 18:19:58.449078  180152 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0816 18:19:58.449176  180152 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0816 18:19:58.449218  180152 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0816 18:19:58.449263  180152 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0816 18:19:58.465153  180152 round_trippers.go:560] GET https://127.0.0.1:38631/api/v1/nodes?limit=500 200 OK in 7 milliseconds
NAME                 STATUS     ROLES           AGE     VERSION
east-control-plane   NotReady   control-plane   8m31s   v1.33.2
east-worker          NotReady   <none>          8m20s   v1.33.2
  • Prints the east cluster's node info (the current context)

kubectl get node -v=6 --context kind-west

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
I0816 18:21:10.714221  180273 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I0816 18:21:10.719211  180273 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0816 18:21:10.719796  180273 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0816 18:21:10.719902  180273 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0816 18:21:10.719986  180273 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0816 18:21:10.741286  180273 round_trippers.go:560] GET https://127.0.0.1:43999/api?timeout=32s 200 OK in 19 milliseconds
I0816 18:21:10.743285  180273 round_trippers.go:560] GET https://127.0.0.1:43999/apis?timeout=32s 200 OK in 1 milliseconds
I0816 18:21:10.751821  180273 round_trippers.go:560] GET https://127.0.0.1:43999/api/v1/nodes?limit=500 200 OK in 4 milliseconds
NAME                 STATUS     ROLES           AGE   VERSION
west-control-plane   NotReady   control-plane   10m   v1.33.2
west-worker          NotReady   <none>          10m   v1.33.2
  • With the --context flag, the west cluster's nodes can be listed the same way

6. ๋…ธ๋“œ ์ƒํƒœ ํ™•์ธ

1
kubectl get pod -A

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-674b8bbfcf-2tm2l                     0/1     Pending   0          10m
kube-system          coredns-674b8bbfcf-c9qsg                     0/1     Pending   0          10m
kube-system          etcd-east-control-plane                      1/1     Running   0          10m
kube-system          kube-apiserver-east-control-plane            1/1     Running   0          10m
kube-system          kube-controller-manager-east-control-plane   1/1     Running   0          10m
kube-system          kube-scheduler-east-control-plane            1/1     Running   0          10m
local-path-storage   local-path-provisioner-7dc846544d-mwfdc      0/1     Pending   0          10m
  • ํ˜„์žฌ kube-proxy์™€ CNI๋ฅผ ์„ค์น˜ํ•˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋“  ๋…ธ๋“œ ์ƒํƒœ๋Š” NotReady
  • System Pod(coredns, local-path-provisioner ๋“ฑ)๋„ Pending ์ƒํƒœ

7. Set context aliases

alias kwest='kubectl --context kind-west'
alias keast='kubectl --context kind-east'
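
These aliases last only for the current shell session; a minimal sketch (assuming bash) to keep them across sessions is to append them to the rc file:

# Sketch: persist the context aliases across sessions (bash assumed)
cat >> ~/.bashrc << 'EOF'
alias kwest='kubectl --context kind-west'
alias keast='kubectl --context kind-east'
EOF
source ~/.bashrc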

8. ๋…ธ๋“œ ์ƒ์„ธ ์ •๋ณด

1
2
3
4
5
kwest get node -owide

NAME                 STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
west-control-plane   NotReady   control-plane   14m   v1.33.2   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.16.0-arch2-1   containerd://2.1.3
west-worker          NotReady   <none>          14m   v1.33.2   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.16.0-arch2-1   containerd://2.1.3
keast get node -owide

NAME                 STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
east-control-plane   NotReady   control-plane   13m   v1.33.2   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.16.0-arch2-1   containerd://2.1.3
east-worker          NotReady   <none>          13m   v1.33.2   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.16.0-arch2-1   containerd://2.1.3

๐Ÿ•ธ๏ธ Cilium CNI ๋ฐฐํฌ : ClusterMesh

1. Install the Cilium CLI

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi

curl -L --fail --remote-name-all \
  https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}  

sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 56.6M  100 56.6M    0     0  16.1M      0  0:00:03  0:00:03 --:--:-- 18.2M

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    92  100    92    0     0    338      0 --:--:-- --:--:-- --:--:--   338

cilium-linux-amd64.tar.gz: OK
cilium
  • Installs the cilium CLI on the host OS (a quick version check follows below)
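
As a quick sanity check after install, the CLI can report its own build (and the server-side versions once a cluster is reachable):

cilium version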

2. Install Cilium on the west cluster

cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.0.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.1.0.0/16}' \
--set cluster.name=west --set cluster.id=1 \
--context kind-west

# ๊ฒฐ๊ณผ   
๐Ÿ”ฎ Auto-detected Kubernetes kind: kind
โ„น๏ธ  Using Cilium version 1.17.6
โ„น๏ธ  Using cluster name "west"
โ„น๏ธ  Detecting real Kubernetes API server addr and port on Kind
๐Ÿ”ฎ Auto-detected kube-proxy has not been installed
โ„น๏ธ  Cilium will fully replace all functionalities of kube-proxy
  • cilium install deploys version 1.17.6
  • routingMode=native, autoDirectNodeRoutes=true, ipv4NativeRoutingCIDR=10.0.0.0/16
  • cluster.name=west, cluster.id=1 (a config spot check follows below)
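
To confirm these flags actually landed in the agent configuration, the rendered cilium-config ConfigMap can be grepped (a spot check; the key names below are assumed from the Helm values):

cilium config view --context kind-west | grep -E 'cluster-(name|id)|routing-mode|auto-direct-node-routes|ipv4-native-routing-cidr'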

3. Install Cilium on the east cluster

cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.1.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.0.0.0/16}' \
--set cluster.name=east --set cluster.id=2 \
--context kind-east

# ๊ฒฐ๊ณผ
๐Ÿ”ฎ Auto-detected Kubernetes kind: kind
โ„น๏ธ  Using Cilium version 1.17.6
โ„น๏ธ  Using cluster name "east"
โ„น๏ธ  Detecting real Kubernetes API server addr and port on Kind
๐Ÿ”ฎ Auto-detected kube-proxy has not been installed
โ„น๏ธ  Cilium will fully replace all functionalities of kube-proxy
  • routingMode=native, autoDirectNodeRoutes=true, ipv4NativeRoutingCIDR=10.1.0.0/16
  • cluster.name=east, cluster.id=2

4. ์„ค์น˜ ํ™•์ธ (West/East Pod ์ƒํƒœ)

1
kwest get pod -A && keast get pod -A

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          cilium-9jlht                                 1/1     Running   0          2m22s
kube-system          cilium-envoy-5gpxx                           1/1     Running   0          2m22s
kube-system          cilium-envoy-skv7b                           1/1     Running   0          2m22s
kube-system          cilium-operator-7dbb574d5b-drtg2             1/1     Running   0          2m22s
kube-system          cilium-qvpkv                                 1/1     Running   0          2m22s
kube-system          coredns-674b8bbfcf-kwxv5                     1/1     Running   0          34m
kube-system          coredns-674b8bbfcf-nb96t                     1/1     Running   0          34m
kube-system          etcd-west-control-plane                      1/1     Running   0          34m
kube-system          kube-apiserver-west-control-plane            1/1     Running   0          34m
kube-system          kube-controller-manager-west-control-plane   1/1     Running   0          34m
kube-system          kube-scheduler-west-control-plane            1/1     Running   0          34m
local-path-storage   local-path-provisioner-7dc846544d-jrdw8      1/1     Running   0          34m

NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          cilium-envoy-mrzw8                           1/1     Running   0          94s
kube-system          cilium-envoy-vq5r7                           1/1     Running   0          94s
kube-system          cilium-kxddr                                 1/1     Running   0          94s
kube-system          cilium-operator-867f8dc978-44zqb             1/1     Running   0          94s
kube-system          cilium-qn52j                                 1/1     Running   0          94s
kube-system          coredns-674b8bbfcf-2tm2l                     1/1     Running   0          33m
kube-system          coredns-674b8bbfcf-c9qsg                     1/1     Running   0          33m
kube-system          etcd-east-control-plane                      1/1     Running   0          33m
kube-system          kube-apiserver-east-control-plane            1/1     Running   0          33m
kube-system          kube-controller-manager-east-control-plane   1/1     Running   0          33m
kube-system          kube-scheduler-east-control-plane            1/1     Running   0          33m
local-path-storage   local-path-provisioner-7dc846544d-mwfdc      1/1     Running   0          33m
  • The Cilium DaemonSet, Envoy, and Operator pods are all Running
  • CoreDNS, etcd, apiserver, controller-manager, scheduler, and local-path-provisioner are all healthy
cilium status --context kind-west

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
    /ยฏยฏ\
 /ยฏยฏ\__/ยฏยฏ\    Cilium:             OK
 \__/ยฏยฏ\__/    Operator:           OK
 /ยฏยฏ\__/ยฏยฏ\    Envoy DaemonSet:    OK
 \__/ยฏยฏ\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver    
                       hubble-relay             
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium             quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
cilium status --context kind-east

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
    /ยฏยฏ\
 /ยฏยฏ\__/ยฏยฏ\    Cilium:             OK
 \__/ยฏยฏ\__/    Operator:           OK
 /ยฏยฏ\__/ยฏยฏ\    Envoy DaemonSet:    OK
 \__/ยฏยฏ\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver    
                       hubble-relay             
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium             quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
  • DaemonSet, Envoy, Operator → Desired matches Ready (rollout complete)
  • ClusterMesh and Hubble Relay → still disabled

5. Check the IP masquerading config

kwest -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list

โœ…ย ์ถœ๋ ฅ

1
2
3
IP PREFIX/ADDRESS   
10.1.0.0/16              
169.254.0.0/16  
keast -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list

โœ…ย ์ถœ๋ ฅ

1
2
3
IP PREFIX/ADDRESS   
10.0.0.0/16              
169.254.0.0/16           
  • ์„œ๋กœ ์ƒ๋Œ€ ํด๋Ÿฌ์Šคํ„ฐ์˜ Pod CIDR ๋Œ€์—ญ์ด non-masquerade ์ฒ˜๋ฆฌ ๋˜์–ด ์žˆ์Œ

6. Check the CoreDNS config

kubectl describe cm -n kube-system coredns --context kind-west | grep kubernetes

โœ…ย ์ถœ๋ ฅ

1
    kubernetes cluster.local in-addr.arpa ip6.arpa {

kubectl describe cm -n kube-system coredns --context kind-east | grep kubernetes

โœ…ย ์ถœ๋ ฅ

1
    kubernetes cluster.local in-addr.arpa ip6.arpa {
  • Both clusters use the cluster.local domain
  • The ConfigMap shows kubernetes cluster.local in-addr.arpa ip6.arpa

๐Ÿš€ Setting up Cluster Mesh

1. ์ดˆ๊ธฐ ์ƒํƒœ: ์ƒํ˜ธ Pod CIDR ์•Œ ์ˆ˜ ์—†์Œ

1
2
3
4
docker exec -it west-control-plane ip -c route
docker exec -it west-worker ip -c route
docker exec -it east-control-plane ip -c route
docker exec -it east-worker ip -c route

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 10.0.0.19 dev cilium_host proto kernel src 10.0.0.19 
10.0.0.19 dev cilium_host proto kernel scope link 
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel 
10.0.1.0/24 via 10.0.1.99 dev cilium_host proto kernel src 10.0.1.99 
10.0.1.99 dev cilium_host proto kernel scope link 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3

default via 172.18.0.1 dev eth0 
10.1.0.0/24 via 10.1.0.165 dev cilium_host proto kernel src 10.1.0.165 
10.1.0.165 dev cilium_host proto kernel scope link 
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4

default via 172.18.0.1 dev eth0
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel 
10.1.1.0/24 via 10.1.1.122 dev cilium_host proto kernel src 10.1.1.122 
10.1.1.122 dev cilium_host proto kernel scope link 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.5
  • kind runs on top of the Docker network and no BGP is involved
  • So routes for the other cluster's Pod CIDRs do not exist by default
  • ip route confirms each cluster only knows its own Pod CIDRs, not the peer's (a quick per-destination check follows below)
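
A per-destination check makes this concrete (a hypothetical spot check): asking the kernel which route an east Pod IP would take shows it falling through to the Docker default gateway instead of a per-CIDR route:

# Before the mesh is connected, this should resolve via 172.18.0.1 (the default route);
# after the ClusterMesh connection in step 12 it should resolve via the east node's address
docker exec -it west-control-plane ip route get 10.1.0.1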

2. Synchronize the CA secret

keast get secret -n kube-system cilium-ca
NAME        TYPE     DATA   AGE
cilium-ca   Opaque   2      14m

keast delete secret -n kube-system cilium-ca 
secret "cilium-ca" deleted

keast get secret -n kube-system cilium-ca
Error from server (NotFound): secrets "cilium-ca" not found
  • Deletes the east cluster's default cilium-ca secret

kubectl --context kind-west get secret -n kube-system cilium-ca -o yaml | \
kubectl --context kind-east create -f -

# ๊ฒฐ๊ณผ
secret/cilium-ca created
  • Copies the west cluster's cilium-ca secret into the east cluster

keast get secret -n kube-system cilium-ca
NAME        TYPE     DATA   AGE
cilium-ca   Opaque   2      62s
  • ์–‘์ชฝ ํด๋Ÿฌ์Šคํ„ฐ๊ฐ€ ๋™์ผํ•œ CA๋ฅผ ๊ณต์œ ํ•˜์—ฌ ์ƒํ˜ธ TLS ์ธ์ฆ ๊ธฐ๋ฐ˜ ํ†ต์‹  ๊ฐ€๋Šฅ

3. Monitor ClusterMesh status

cilium clustermesh status --context kind-west --wait  
cilium clustermesh status --context kind-east --wait

4. Enable ClusterMesh

cilium clustermesh enable --service-type NodePort --enable-kvstoremesh=false --context kind-west

โœ…ย ์ถœ๋ ฅ

1
cilium clustermesh enable --service-type NodePort --enable-kvstoremesh=false --context kind-east

โœ…ย ์ถœ๋ ฅ

  • ๊ฐ ํด๋Ÿฌ์Šคํ„ฐ์—์„œ cilium clustermesh enable ์‹คํ–‰

5. Check the west cluster's Service/Endpoints

kwest get svc,ep -n kube-system clustermesh-apiserver

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/clustermesh-apiserver   NodePort   10.2.126.217   <none>        2379:32379/TCP   2m57s

NAME                              ENDPOINTS       AGE
endpoints/clustermesh-apiserver   10.0.1.8:2379   2m57s
  • The clustermesh-apiserver Service is created with type NodePort
  • ClusterIP: 10.2.126.217, port 2379:32379/TCP
  • The NodePort (32379) is the channel remote clusters' nodes connect to
  • Endpoint: 10.0.1.8:2379

6. Check the west cluster's pods

kwest get pod -n kube-system -owide | grep clustermesh

โœ…ย ์ถœ๋ ฅ

1
2
clustermesh-apiserver-5cf45db9cc-2g847       2/2     Running     0          4m29s   10.0.1.8     west-worker          <none>           <none>
clustermesh-apiserver-generate-certs-pl6ws   0/1     Completed   0          4m29s   172.18.0.3   west-worker          <none>           <none>

7. Check the east cluster's Service/Endpoints

keast get svc,ep -n kube-system clustermesh-apiserver

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/clustermesh-apiserver   NodePort   10.3.173.28   <none>        2379:32379/TCP   3m47s

NAME                              ENDPOINTS        AGE
endpoints/clustermesh-apiserver   10.1.1.62:2379   3m47s
  • ClusterIP: 10.3.173.28, Port: 2379:32379/TCP
  • Endpoint: 10.1.1.62:2379

8. Monitor ClusterMesh status

watch -d "cilium clustermesh status --context kind-west --wait"
watch -d "cilium clustermesh status --context kind-east --wait"

โœ…ย ์ถœ๋ ฅ

9. Connect the meshes (west → east)

cilium clustermesh connect --context kind-west --destination-context kind-east

โœ…ย ์ถœ๋ ฅ

10. Verify the connection (west/east)

cilium status --context kind-west

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
    /ยฏยฏ\
 /ยฏยฏ\__/ยฏยฏ\    Cilium:             OK
 \__/ยฏยฏ\__/    Operator:           OK
 /ยฏยฏ\__/ยฏยฏ\    Envoy DaemonSet:    OK
 \__/ยฏยฏ\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        OK

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             clustermesh-apiserver    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver    Running: 1
                       hubble-relay             
Cluster Pods:          4/4 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium                   quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
                       cilium-envoy             quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
                       cilium-operator          quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
                       clustermesh-apiserver    quay.io/cilium/clustermesh-apiserver:v1.17.6@sha256:f619e97432db427e1511bf91af3be8ded418c53a353a09629e04c5880659d1df: 2
cilium status --context kind-east

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
    /ยฏยฏ\
 /ยฏยฏ\__/ยฏยฏ\    Cilium:             OK
 \__/ยฏยฏ\__/    Operator:           OK
 /ยฏยฏ\__/ยฏยฏ\    Envoy DaemonSet:    OK
 \__/ยฏยฏ\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        OK

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             clustermesh-apiserver    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver    Running: 1
                       hubble-relay             
Cluster Pods:          4/4 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium                   quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
                       cilium-envoy             quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
                       cilium-operator          quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
                       clustermesh-apiserver    quay.io/cilium/clustermesh-apiserver:v1.17.6@sha256:f619e97432db427e1511bf91af3be8ded418c53a353a09629e04c5880659d1df: 2
  • cilium status now reports ClusterMesh: OK on both clusters
  • DaemonSet, Envoy, Operator, and clustermesh-apiserver are all Running, and the connection is stable

11. ์ƒ์„ธ ์ƒํƒœ ํ™•์ธ (Verbose Mode)

1
kwest exec -it -n kube-system ds/cilium -- cilium status --verbose

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
...
ClusterMesh:   1/1 remote clusters ready, 0 global-services
   east: ready, 2 nodes, 4 endpoints, 3 identities, 0 services, 0 MCS-API service exports, 0 reconnections (last: never)
   โ””  etcd: 1/1 connected, leases=0, lock leases=0, has-quorum=true: endpoint status checks are disabled, ID: b88364e6e9ad8658
   โ””  remote configuration: expected=true, retrieved=true, cluster-id=2, kvstoremesh=false, sync-canaries=true, service-exports=disabled
   โ””  synchronization status: nodes=true, endpoints=true, identities=true, services=true
...   
keast exec -it -n kube-system ds/cilium -- cilium status --verbose

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
...
ClusterMesh:   1/1 remote clusters ready, 0 global-services
   west: ready, 2 nodes, 4 endpoints, 3 identities, 0 services, 0 MCS-API service exports, 0 reconnections (last: never)
   โ””  etcd: 1/1 connected, leases=0, lock leases=0, has-quorum=true: endpoint status checks are disabled, ID: 700452e5b45c47e8
   โ””  remote configuration: expected=true, retrieved=true, cluster-id=1, kvstoremesh=false, sync-canaries=true, service-exports=disabled
   โ””  synchronization status: nodes=true, endpoints=true, identities=true, services=true
...
  • Queries east's view from west and west's view from east
  • Synchronization status: nodes=true, endpoints=true, identities=true, services=true → fully synced
  • etcd is 1/1 connected with quorum established

12. Verify automatic Pod CIDR route injection

docker exec -it west-control-plane ip -c route
docker exec -it west-worker ip -c route
docker exec -it east-control-plane ip -c route
docker exec -it east-worker ip -c route

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 10.0.0.19 dev cilium_host proto kernel src 10.0.0.19 
10.0.0.19 dev cilium_host proto kernel scope link 
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel 
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel 
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel 
10.0.1.0/24 via 10.0.1.99 dev cilium_host proto kernel src 10.0.1.99 
10.0.1.99 dev cilium_host proto kernel scope link 
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel 
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3 

default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel 
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel 
10.1.0.0/24 via 10.1.0.165 dev cilium_host proto kernel src 10.1.0.165 
10.1.0.165 dev cilium_host proto kernel scope link 
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4 

default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel 
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel 
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel 
10.1.1.0/24 via 10.1.1.122 dev cilium_host proto kernel src 10.1.1.122 
10.1.1.122 dev cilium_host proto kernel scope link 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.5 
  • After the ClusterMesh connection, each node's routing table has the other cluster's Pod CIDRs injected automatically
  • As a result, pods in the two clusters can now talk to each other directly (see the agent-side view below)

๐Ÿ‘๏ธ Hubble enable

1. Add and update the Cilium Helm repo

helm repo add cilium https://helm.cilium.io/
helm repo update

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
"cilium" has been added to your repositories

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cilium" chart repository
Update Complete. โŽˆHappy Helming!โŽˆ

2. Enable Hubble on the west cluster

helm upgrade cilium cilium/cilium --version 1.17.6 --namespace kube-system --reuse-values \
--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort --set hubble.ui.service.nodePort=30001 --kube-context kind-west

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 19:29:04 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.17.6.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp

3. Restart the west cluster's Cilium DaemonSet

kwest -n kube-system rollout restart ds/cilium

# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted

4. Enable Hubble on the east cluster

helm upgrade cilium cilium/cilium --version 1.17.6 --namespace kube-system --reuse-values \
--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort --set hubble.ui.service.nodePort=31001 --kube-context kind-east

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 19:30:52 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.17.6.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp

5. Restart the east cluster's Cilium DaemonSet

keast -n kube-system rollout restart ds/cilium

# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted

6. Access the Hubble UI

http://localhost:30001 (west)

http://localhost:31001 (east)
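
Flows can also be tailed from the host CLI instead of the UI (a sketch, assuming the hubble CLI is installed on the host; cilium hubble port-forward exposes Hubble Relay on localhost:4245):

cilium hubble port-forward --context kind-west &
hubble observe --last 10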


โ†”๏ธ west โ†” east ํŒŒ๋“œ๊ฐ„ ์ง์ ‘ ํ†ต์‹ (tcpdump ๊ฒ€์ฆ)

1. ํ…Œ์ŠคํŠธ์šฉ ํŒŒ๋“œ ๋ฐฐํฌ (west/east ๊ฐ๊ฐ)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
cat << EOF | kubectl apply --context kind-west -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

cat << EOF | kubectl apply --context kind-east -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# ๊ฒฐ๊ณผ
pod/curl-pod created
pod/curl-pod created

2. ํŒŒ๋“œ ์ƒํƒœ ๋ฐ IP ํ™•์ธ

1
kwest get pod -A && keast get pod -A

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
NAMESPACE            NAME                                         READY   STATUS      RESTARTS   AGE
default              curl-pod                                     1/1     Running     0          55s
kube-system          cilium-6l82v                                 1/1     Running     0          5m16s
kube-system          cilium-envoy-5gpxx                           1/1     Running     0          54m
kube-system          cilium-envoy-skv7b                           1/1     Running     0          54m
kube-system          cilium-lrpcr                                 1/1     Running     0          5m16s
kube-system          cilium-operator-7dbb574d5b-drtg2             1/1     Running     0          54m
kube-system          clustermesh-apiserver-5cf45db9cc-2g847       2/2     Running     0          32m
kube-system          clustermesh-apiserver-generate-certs-xvddz   0/1     Completed   0          7m53s
kube-system          coredns-674b8bbfcf-kwxv5                     1/1     Running     0          86m
kube-system          coredns-674b8bbfcf-nb96t                     1/1     Running     0          86m
kube-system          etcd-west-control-plane                      1/1     Running     0          86m
kube-system          hubble-relay-5dcd46f5c-rqrvl                 1/1     Running     0          7m54s
kube-system          hubble-ui-76d4965bb6-xkn8k                   2/2     Running     0          7m54s
kube-system          kube-apiserver-west-control-plane            1/1     Running     0          86m
kube-system          kube-controller-manager-west-control-plane   1/1     Running     0          86m
kube-system          kube-scheduler-west-control-plane            1/1     Running     0          86m
local-path-storage   local-path-provisioner-7dc846544d-jrdw8      1/1     Running     0          86m

NAMESPACE            NAME                                         READY   STATUS      RESTARTS   AGE
default              curl-pod                                     1/1     Running     0          55s
kube-system          cilium-7z2kz                                 1/1     Running     0          24m
kube-system          cilium-envoy-mrzw8                           1/1     Running     0          53m
kube-system          cilium-envoy-vq5r7                           1/1     Running     0          53m
kube-system          cilium-operator-867f8dc978-44zqb             1/1     Running     0          53m
kube-system          cilium-thtxk                                 1/1     Running     0          24m
kube-system          clustermesh-apiserver-5cf45db9cc-7wfwz       2/2     Running     0          31m
kube-system          clustermesh-apiserver-generate-certs-5csbq   0/1     Completed   0          6m4s
kube-system          coredns-674b8bbfcf-2tm2l                     1/1     Running     0          85m
kube-system          coredns-674b8bbfcf-c9qsg                     1/1     Running     0          85m
kube-system          etcd-east-control-plane                      1/1     Running     0          85m
kube-system          hubble-relay-5dcd46f5c-6qzn7                 1/1     Running     0          6m5s
kube-system          hubble-ui-76d4965bb6-jg78b                   2/2     Running     0          6m5s
kube-system          kube-apiserver-east-control-plane            1/1     Running     0          85m
kube-system          kube-controller-manager-east-control-plane   1/1     Running     0          85m
kube-system          kube-scheduler-east-control-plane            1/1     Running     0          85m
local-path-storage   local-path-provisioner-7dc846544d-mwfdc      1/1     Running     0          85m
kwest get pod -owide && keast get pod -owide 

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
NAME       READY   STATUS    RESTARTS   AGE    IP          NODE          NOMINATED NODE   READINESS GATES
curl-pod   1/1     Running   0          114s   10.0.1.12   west-worker   <none>           <none>

NAME       READY   STATUS    RESTARTS   AGE    IP          NODE          NOMINATED NODE   READINESS GATES
curl-pod   1/1     Running   0          114s   10.1.1.67   east-worker   <none>           <none>

3. Ping test from the west pod to the east pod

kubectl exec -it curl-pod --context kind-west -- ping 10.1.1.67

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
PING 10.1.1.67 (10.1.1.67) 56(84) bytes of data.
64 bytes from 10.1.1.67: icmp_seq=1 ttl=62 time=0.070 ms
64 bytes from 10.1.1.67: icmp_seq=2 ttl=62 time=0.188 ms
64 bytes from 10.1.1.67: icmp_seq=3 ttl=62 time=0.093 ms
64 bytes from 10.1.1.67: icmp_seq=4 ttl=62 time=0.120 ms
64 bytes from 10.1.1.67: icmp_seq=5 ttl=62 time=0.153 ms
....
  • ์ •์ƒ์ ์œผ๋กœ ์‘๋‹ต(Reply) ์ˆ˜์‹  โ†’ Pod ๊ฐ„ ์ง์ ‘ ํ†ต์‹  ๊ฐ€๋Šฅ ํ™•์ธ

4. ๋ชฉ์ ์ง€ ํŒŒ๋“œ์—์„œ tcpdump ํ™•์ธ

east ํŒŒ๋“œ ๋‚ด๋ถ€์—์„œ tcpdump ์‹คํ–‰

1
kubectl exec -it curl-pod --context kind-east -- tcpdump -i eth0 -nn

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:43:50.833580 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 199, length 64
10:43:50.833627 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 199, length 64
10:43:51.857541 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 200, length 64
10:43:51.857578 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 200, length 64
10:43:52.880956 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 201, length 64
10:43:52.881075 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 201, length 64
10:43:53.904522 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 202, length 64
10:43:53.904555 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 202, length 64
10:43:54.928512 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 203, length 64
10:43:54.928540 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 203, length 64
10:43:55.952560 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 204, length 64
10:43:55.952593 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 204, length 64
10:43:56.976694 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 205, length 64
10:43:56.976763 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 205, length 64
...
  • ICMP requests arrive directly from the west pod's IP (10.0.1.12), and replies go straight back to it
  • Confirms pod ↔ pod direct routing with no NAT translation

5. ๋ชฉ์ ์ง€ ํด๋Ÿฌ์Šคํ„ฐ ๋…ธ๋“œ์—์„œ tcpdump ํ™•์ธ

east-control-plane, east-worker ๋…ธ๋“œ์—์„œ tcpdump ์‹คํ–‰

1
docker exec -it east-control-plane tcpdump -i any icmp -nn

โœ…ย ์ถœ๋ ฅ

1
2
3
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
docker exec -it east-worker tcpdump -i any icmp -nn

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
10:46:31.473504 eth0  In  IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 356, length 64
10:46:31.473530 lxccd8da9e761ca In  IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 356, length 64
10:46:31.473540 eth0  Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 356, length 64
10:46:31.988151 eth0  In  IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:32.496507 eth0  In  IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 357, length 64
10:46:32.496535 lxccd8da9e761ca In  IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 357, length 64
10:46:32.496545 eth0  Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 357, length 64
10:46:33.488946 eth0  In  IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:33.520513 eth0  In  IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 358, length 64
10:46:33.520542 lxccd8da9e761ca In  IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 358, length 64
10:46:33.520554 eth0  Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 358, length 64
10:46:33.569979 eth0  In  IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:34.544531 eth0  In  IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 359, length 64
10:46:34.544557 lxccd8da9e761ca In  IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 359, length 64
10:46:34.544569 eth0  Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 359, length 64
10:46:34.990071 eth0  In  IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:35.568525 eth0  In  IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 360, length 64
10:46:35.568555 lxccd8da9e761ca In  IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 360, length 64
10:46:35.568573 eth0  Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 360, length 64
10:46:36.492093 eth0  In  IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:36.572589 eth0  In  IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:36.593535 eth0  In  IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 361, length 64
...
  • Shows the packets arriving from the west pod being handed by the node straight to the destination pod (eth0 In → lxc device → eth0 Out for the reply)
  • Verifies direct routing, with no intermediate NAT gateway or tunneling

โš–๏ธ Load-balancing & Service Discovery

1. ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค ๋ฐฐํฌ (west / east ํด๋Ÿฌ์Šคํ„ฐ)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
cat << EOF | kubectl apply --context kind-west -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# ๊ฒฐ๊ณผ
deployment.apps/webpod created
service/webpod created
cat << EOF | kubectl apply --context kind-east -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# ๊ฒฐ๊ณผ
deployment.apps/webpod created
service/webpod created
  • The same webpod Deployment and Service are deployed to both clusters
  • The service.cilium.io/global: "true" annotation registers it as a global service (a related sharing knob is sketched below)
  • A ClusterIP service exposing port 80. Note that the podAntiAffinity above matches app=sample-app rather than app=webpod, so it is effectively a no-op; both replicas can land on the same node, as the endpoint IPs in the next step show
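
For reference, the Cilium docs pair the global annotation with a sharing knob: a cluster can keep consuming a global service while withdrawing its own backends via service.cilium.io/shared (a sketch, not run in this lab):

keast annotate service webpod service.cilium.io/shared=false --overwrite   # withdraw east's backends from the global service
keast annotate service webpod service.cilium.io/shared-                    # remove the annotation (shared again by default)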

2. ์—”๋“œํฌ์ธํŠธ ํ™•์ธ

1
kwest get svc,ep webpod && keast get svc,ep webpod

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.2.167.94   <none>        80/TCP    118s

NAME               ENDPOINTS                    AGE
endpoints/webpod   10.0.1.136:80,10.0.1.69:80   118s

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.3.128.46   <none>        80/TCP    56s

NAME               ENDPOINTS                  AGE
endpoints/webpod   10.1.1.6:80,10.1.1.95:80   56s
  • west → 10.0.1.136, 10.0.1.69
  • east → 10.1.1.6, 10.1.1.95
  • The webpod service exists in both clusters, and each cluster's pod IPs are registered to the global service

3. ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค ๋งคํ•‘ ํ™•์ธ (west)

1
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
ID   Frontend                Service Type   Backend                             
1    10.2.0.1:443/TCP        ClusterIP      1 => 172.18.0.2:6443/TCP (active)   
2    10.2.182.189:443/TCP    ClusterIP      1 => 172.18.0.3:4244/TCP (active)   
3    10.2.0.10:53/UDP        ClusterIP      1 => 10.0.1.177:53/UDP (active)     
                                            2 => 10.0.1.115:53/UDP (active)     
4    10.2.0.10:53/TCP        ClusterIP      1 => 10.0.1.177:53/TCP (active)     
                                            2 => 10.0.1.115:53/TCP (active)     
5    10.2.0.10:9153/TCP      ClusterIP      1 => 10.0.1.177:9153/TCP (active)   
                                            2 => 10.0.1.115:9153/TCP (active)   
6    10.2.126.217:2379/TCP   ClusterIP      1 => 10.0.1.8:2379/TCP (active)     
7    172.18.0.3:32379/TCP    NodePort       1 => 10.0.1.8:2379/TCP (active)     
8    0.0.0.0:32379/TCP       NodePort       1 => 10.0.1.8:2379/TCP (active)     
9    10.2.1.9:80/TCP         ClusterIP      1 => 10.0.1.120:4245/TCP (active)   
10   10.2.233.237:80/TCP     ClusterIP      1 => 10.0.1.158:8081/TCP (active)   
11   172.18.0.3:30001/TCP    NodePort       1 => 10.0.1.158:8081/TCP (active)   
12   0.0.0.0:30001/TCP       NodePort       1 => 10.0.1.158:8081/TCP (active)   
13   10.2.167.94:80/TCP      ClusterIP      1 => 10.0.1.69:80/TCP (active)      
                                            2 => 10.0.1.136:80/TCP (active)     
                                            3 => 10.1.1.6:80/TCP (active)       
                                            4 => 10.1.1.95:80/TCP (active) 
  • webpod (10.2.167.94:80/TCP) → spread across all pod IPs in both clusters
    • west: 10.0.1.69:80, 10.0.1.136:80
    • east: 10.1.1.6:80, 10.1.1.95:80

4. ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค ๋งคํ•‘ ํ™•์ธ (east)

1
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
ID   Frontend               Service Type   Backend                             
1    10.3.0.1:443/TCP       ClusterIP      1 => 172.18.0.4:6443/TCP (active)   
2    10.3.107.176:443/TCP   ClusterIP      1 => 172.18.0.5:4244/TCP (active)   
3    10.3.0.10:9153/TCP     ClusterIP      1 => 10.1.1.45:9153/TCP (active)    
                                           2 => 10.1.1.21:9153/TCP (active)    
4    10.3.0.10:53/UDP       ClusterIP      1 => 10.1.1.45:53/UDP (active)      
                                           2 => 10.1.1.21:53/UDP (active)      
5    10.3.0.10:53/TCP       ClusterIP      1 => 10.1.1.45:53/TCP (active)      
                                           2 => 10.1.1.21:53/TCP (active)      
6    10.3.173.28:2379/TCP   ClusterIP      1 => 10.1.1.62:2379/TCP (active)    
7    172.18.0.5:32379/TCP   NodePort       1 => 10.1.1.62:2379/TCP (active)    
8    0.0.0.0:32379/TCP      NodePort       1 => 10.1.1.62:2379/TCP (active)    
9    10.3.54.60:80/TCP      ClusterIP      1 => 10.1.1.198:4245/TCP (active)   
10   10.3.1.28:80/TCP       ClusterIP      1 => 10.1.1.236:8081/TCP (active)   
11   172.18.0.5:31001/TCP   NodePort       1 => 10.1.1.236:8081/TCP (active)   
12   0.0.0.0:31001/TCP      NodePort       1 => 10.1.1.236:8081/TCP (active)   
13   10.3.128.46:80/TCP     ClusterIP      1 => 10.0.1.69:80/TCP (active)      
                                           2 => 10.0.1.136:80/TCP (active)     
                                           3 => 10.1.1.6:80/TCP (active)       
                                           4 => 10.1.1.95:80/TCP (active)
  • ๋™์ผํ•˜๊ฒŒ cilium service list --clustermesh-affinity ์‹คํ–‰
  • webpod (10.3.128.46:80/TCP) โ†’ ๋™์ผํ•œ 4๊ฐœ ์—”๋“œํฌ์ธํŠธ๋กœ ๋ถ„์‚ฐ

5. ํฌ๋กœ์Šค ํด๋Ÿฌ์Šคํ„ฐ ํ˜ธ์ถœ

1
2
kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'
kubectl exec -it curl-pod --context kind-east -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'

โœ…ย ์ถœ๋ ฅ

  • west์˜ curl-pod โ†’ webpod ํ˜ธ์ถœ ์‹œ west + east ์—”๋“œํฌ์ธํŠธ๋กœ ๋กœ๋“œ๋ฐธ๋Ÿฐ์‹ฑ
  • east์˜ curl-pod โ†’ webpod ํ˜ธ์ถœ ์‹œ east + west ์—”๋“œํฌ์ธํŠธ๋กœ ๋กœ๋“œ๋ฐธ๋Ÿฐ์‹ฑ
  • ์ฆ‰, ์„œ๋น„์Šค VIP ํ˜ธ์ถœ๋งŒ์œผ๋กœ ๋‘ ํด๋Ÿฌ์Šคํ„ฐ์˜ Pod ๋ชจ๋‘ ๋Œ€์ƒ์ด ๋จ

6. ๋ ˆํ”Œ๋ฆฌ์นด ์ˆ˜ ์ถ•์†Œ (west 2 โ†’ 1)

1
2
3
4
kwest scale deployment webpod --replicas 1

# ๊ฒฐ๊ณผ
deployment.apps/webpod scaled
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
ID   Frontend                Service Type   Backend                             
1    10.2.0.1:443/TCP        ClusterIP      1 => 172.18.0.2:6443/TCP (active)   
2    10.2.182.189:443/TCP    ClusterIP      1 => 172.18.0.3:4244/TCP (active)   
3    10.2.0.10:53/UDP        ClusterIP      1 => 10.0.1.177:53/UDP (active)     
                                            2 => 10.0.1.115:53/UDP (active)     
4    10.2.0.10:53/TCP        ClusterIP      1 => 10.0.1.177:53/TCP (active)     
                                            2 => 10.0.1.115:53/TCP (active)     
5    10.2.0.10:9153/TCP      ClusterIP      1 => 10.0.1.177:9153/TCP (active)   
                                            2 => 10.0.1.115:9153/TCP (active)   
6    10.2.126.217:2379/TCP   ClusterIP      1 => 10.0.1.8:2379/TCP (active)     
7    172.18.0.3:32379/TCP    NodePort       1 => 10.0.1.8:2379/TCP (active)     
8    0.0.0.0:32379/TCP       NodePort       1 => 10.0.1.8:2379/TCP (active)     
9    10.2.1.9:80/TCP         ClusterIP      1 => 10.0.1.120:4245/TCP (active)   
10   10.2.233.237:80/TCP     ClusterIP      1 => 10.0.1.158:8081/TCP (active)   
11   172.18.0.3:30001/TCP    NodePort       1 => 10.0.1.158:8081/TCP (active)   
12   0.0.0.0:30001/TCP       NodePort       1 => 10.0.1.158:8081/TCP (active)   
13   10.2.167.94:80/TCP      ClusterIP      1 => 10.0.1.69:80/TCP (active)      
                                            2 => 10.1.1.6:80/TCP (active)       
                                            3 => 10.1.1.95:80/TCP (active)   
  • west์— ๋‚จ์€ Pod 1๊ฐœ + east์˜ Pod 2๊ฐœ๋กœ ์„œ๋น„์Šค ํŠธ๋ž˜ํ”ฝ์ด ๋ถ„์‚ฐ๋จ
  • ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค๋Š” Pod ๊ฐœ์ˆ˜์— ๋”ฐ๋ผ ์ž๋™ ๋ฐ˜์˜
1
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
ID   Frontend               Service Type   Backend                             
1    10.3.0.1:443/TCP       ClusterIP      1 => 172.18.0.4:6443/TCP (active)   
2    10.3.107.176:443/TCP   ClusterIP      1 => 172.18.0.5:4244/TCP (active)   
3    10.3.0.10:9153/TCP     ClusterIP      1 => 10.1.1.45:9153/TCP (active)    
                                           2 => 10.1.1.21:9153/TCP (active)    
4    10.3.0.10:53/UDP       ClusterIP      1 => 10.1.1.45:53/UDP (active)      
                                           2 => 10.1.1.21:53/UDP (active)      
5    10.3.0.10:53/TCP       ClusterIP      1 => 10.1.1.45:53/TCP (active)      
                                           2 => 10.1.1.21:53/TCP (active)      
6    10.3.173.28:2379/TCP   ClusterIP      1 => 10.1.1.62:2379/TCP (active)    
7    172.18.0.5:32379/TCP   NodePort       1 => 10.1.1.62:2379/TCP (active)    
8    0.0.0.0:32379/TCP      NodePort       1 => 10.1.1.62:2379/TCP (active)    
9    10.3.54.60:80/TCP      ClusterIP      1 => 10.1.1.198:4245/TCP (active)   
10   10.3.1.28:80/TCP       ClusterIP      1 => 10.1.1.236:8081/TCP (active)   
11   172.18.0.5:31001/TCP   NodePort       1 => 10.1.1.236:8081/TCP (active)   
12   0.0.0.0:31001/TCP      NodePort       1 => 10.1.1.236:8081/TCP (active)   
13   10.3.128.46:80/TCP     ClusterIP      1 => 10.0.1.69:80/TCP (active)      
                                           2 => 10.1.1.6:80/TCP (active)       
                                           3 => 10.1.1.95:80/TCP (active)

7. ๋ ˆํ”Œ๋ฆฌ์นด 0 (west)

1
2
3
4
kwest scale deployment webpod --replicas 0

# ๊ฒฐ๊ณผ
deployment.apps/webpod scaled
  • west์˜ ๋ชจ๋“  Pod ์‚ญ์ œ ํ›„์—๋„ ์„œ๋น„์Šค ์ •์ƒ ๋™์ž‘

kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
ID   Frontend                Service Type   Backend                             
1    10.2.0.1:443/TCP        ClusterIP      1 => 172.18.0.2:6443/TCP (active)   
2    10.2.182.189:443/TCP    ClusterIP      1 => 172.18.0.3:4244/TCP (active)   
3    10.2.0.10:53/UDP        ClusterIP      1 => 10.0.1.177:53/UDP (active)     
                                            2 => 10.0.1.115:53/UDP (active)     
4    10.2.0.10:53/TCP        ClusterIP      1 => 10.0.1.177:53/TCP (active)     
                                            2 => 10.0.1.115:53/TCP (active)     
5    10.2.0.10:9153/TCP      ClusterIP      1 => 10.0.1.177:9153/TCP (active)   
                                            2 => 10.0.1.115:9153/TCP (active)   
6    10.2.126.217:2379/TCP   ClusterIP      1 => 10.0.1.8:2379/TCP (active)     
7    172.18.0.3:32379/TCP    NodePort       1 => 10.0.1.8:2379/TCP (active)     
8    0.0.0.0:32379/TCP       NodePort       1 => 10.0.1.8:2379/TCP (active)     
9    10.2.1.9:80/TCP         ClusterIP      1 => 10.0.1.120:4245/TCP (active)   
10   10.2.233.237:80/TCP     ClusterIP      1 => 10.0.1.158:8081/TCP (active)   
11   172.18.0.3:30001/TCP    NodePort       1 => 10.0.1.158:8081/TCP (active)   
12   0.0.0.0:30001/TCP       NodePort       1 => 10.0.1.158:8081/TCP (active)   
13   10.2.167.94:80/TCP      ClusterIP      1 => 10.1.1.6:80/TCP (active)       
                                            2 => 10.1.1.95:80/TCP (active)
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
ID   Frontend               Service Type   Backend                             
1    10.3.0.1:443/TCP       ClusterIP      1 => 172.18.0.4:6443/TCP (active)   
2    10.3.107.176:443/TCP   ClusterIP      1 => 172.18.0.5:4244/TCP (active)   
3    10.3.0.10:9153/TCP     ClusterIP      1 => 10.1.1.45:9153/TCP (active)    
                                           2 => 10.1.1.21:9153/TCP (active)    
4    10.3.0.10:53/UDP       ClusterIP      1 => 10.1.1.45:53/UDP (active)      
                                           2 => 10.1.1.21:53/UDP (active)      
5    10.3.0.10:53/TCP       ClusterIP      1 => 10.1.1.45:53/TCP (active)      
                                           2 => 10.1.1.21:53/TCP (active)      
6    10.3.173.28:2379/TCP   ClusterIP      1 => 10.1.1.62:2379/TCP (active)    
7    172.18.0.5:32379/TCP   NodePort       1 => 10.1.1.62:2379/TCP (active)    
8    0.0.0.0:32379/TCP      NodePort       1 => 10.1.1.62:2379/TCP (active)    
9    10.3.54.60:80/TCP      ClusterIP      1 => 10.1.1.198:4245/TCP (active)   
10   10.3.1.28:80/TCP       ClusterIP      1 => 10.1.1.236:8081/TCP (active)   
11   172.18.0.5:31001/TCP   NodePort       1 => 10.1.1.236:8081/TCP (active)   
12   0.0.0.0:31001/TCP      NodePort       1 => 10.1.1.236:8081/TCP (active)   
13   10.3.128.46:80/TCP     ClusterIP      1 => 10.1.1.6:80/TCP (active)       
                                           2 => 10.1.1.95:80/TCP (active) 
  • east์˜ Pod ์—”๋“œํฌ์ธํŠธ๋งŒ ์‚ด์•„์žˆ์œผ๋ฏ€๋กœ, ํŠธ๋ž˜ํ”ฝ์€ ์ž๋™์œผ๋กœ east ํด๋Ÿฌ์Šคํ„ฐ๋กœ ๋ผ์šฐํŒ…

8. ๋ ˆํ”Œ๋ฆฌ์นด ์›๋ณต

1
2
kwest scale deployment webpod --replicas 2
deployment.apps/webpod scaled
  • west์˜ ๋ ˆํ”Œ๋ฆฌ์นด ๋‹ค์‹œ 2๊ฐœ๋กœ ๋ณต๊ตฌ
  • ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค ์—”๋“œํฌ์ธํŠธ ๋ชฉ๋ก์— ๋‹ค์‹œ west Pod IP๊ฐ€ ์ถ”๊ฐ€

๐ŸŽฏ Service Affinity

1. ์„œ๋น„์Šค ์–ด๋…ธํ…Œ์ด์…˜ ์„ค์ •

1
2
3
4
5
6
7
kwest annotate service webpod service.cilium.io/affinity=local --overwrite
# ๊ฒฐ๊ณผ
service/webpod annotated

keast annotate service webpod service.cilium.io/affinity=local --overwrite
# ๊ฒฐ๊ณผ
service/webpod annotated
kwest describe svc webpod | grep Annotations -A3
Annotations:              service.cilium.io/affinity: local
                          service.cilium.io/global: true
Selector:                 app=webpod
Type:                     ClusterIP

keast describe svc webpod | grep Annotations -A3
Annotations:              service.cilium.io/affinity: local
                          service.cilium.io/global: true
Selector:                 app=webpod
Type:                     ClusterIP
  • Adds the service.cilium.io/affinity=local annotation to the webpod service in both clusters (other values are sketched below)
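
Besides local, the annotation also accepts remote (prefer the other cluster's backends), and removing it returns the service to even load-balancing (a sketch, not run here):

kwest annotate service webpod service.cilium.io/affinity=remote --overwrite   # prefer remote endpoints
kwest annotate service webpod service.cilium.io/affinity-                     # drop the preference entirely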

2. Verify the affinity behavior (west)

kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
ID   Frontend                Service Type   Backend                                       
1    10.2.0.1:443/TCP        ClusterIP      1 => 172.18.0.2:6443/TCP (active)             
2    10.2.182.189:443/TCP    ClusterIP      1 => 172.18.0.3:4244/TCP (active)             
3    10.2.0.10:53/UDP        ClusterIP      1 => 10.0.1.177:53/UDP (active)               
                                            2 => 10.0.1.115:53/UDP (active)               
4    10.2.0.10:53/TCP        ClusterIP      1 => 10.0.1.177:53/TCP (active)               
                                            2 => 10.0.1.115:53/TCP (active)               
5    10.2.0.10:9153/TCP      ClusterIP      1 => 10.0.1.177:9153/TCP (active)             
                                            2 => 10.0.1.115:9153/TCP (active)             
6    10.2.126.217:2379/TCP   ClusterIP      1 => 10.0.1.8:2379/TCP (active)               
7    172.18.0.3:32379/TCP    NodePort       1 => 10.0.1.8:2379/TCP (active)               
8    0.0.0.0:32379/TCP       NodePort       1 => 10.0.1.8:2379/TCP (active)               
9    10.2.1.9:80/TCP         ClusterIP      1 => 10.0.1.120:4245/TCP (active)             
10   10.2.233.237:80/TCP     ClusterIP      1 => 10.0.1.158:8081/TCP (active)             
11   172.18.0.3:30001/TCP    NodePort       1 => 10.0.1.158:8081/TCP (active)             
12   0.0.0.0:30001/TCP       NodePort       1 => 10.0.1.158:8081/TCP (active)             
13   10.2.167.94:80/TCP      ClusterIP      1 => 10.1.1.6:80/TCP (active)                 
                                            2 => 10.1.1.95:80/TCP (active)                
                                            3 => 10.0.1.159:80/TCP (active) (preferred)   
                                            4 => 10.0.1.107:80/TCP (active) (preferred)
  • In the webpod service's endpoint list, the local Pod IPs (10.0.1.x) are marked (preferred)
  • That is, requests from clients inside the cluster are routed to the local endpoints first (see the single-entry view below)
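
The same preference can also be read for a single service entry. A sketch assuming service ID 13 from the listing above (IDs are assigned per agent and may differ):

# Show only the webpod entry; the local backends should carry the
# preference marker seen in the full listing.
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- \
  cilium service get 13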

3. Verifying Affinity Behavior (east)

keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย Output

ID   Frontend               Service Type   Backend                                      
1    10.3.0.1:443/TCP       ClusterIP      1 => 172.18.0.4:6443/TCP (active)            
2    10.3.107.176:443/TCP   ClusterIP      1 => 172.18.0.5:4244/TCP (active)            
3    10.3.0.10:9153/TCP     ClusterIP      1 => 10.1.1.45:9153/TCP (active)             
                                           2 => 10.1.1.21:9153/TCP (active)             
4    10.3.0.10:53/UDP       ClusterIP      1 => 10.1.1.45:53/UDP (active)               
                                           2 => 10.1.1.21:53/UDP (active)               
5    10.3.0.10:53/TCP       ClusterIP      1 => 10.1.1.45:53/TCP (active)               
                                           2 => 10.1.1.21:53/TCP (active)               
6    10.3.173.28:2379/TCP   ClusterIP      1 => 10.1.1.62:2379/TCP (active)             
7    172.18.0.5:32379/TCP   NodePort       1 => 10.1.1.62:2379/TCP (active)             
8    0.0.0.0:32379/TCP      NodePort       1 => 10.1.1.62:2379/TCP (active)             
9    10.3.54.60:80/TCP      ClusterIP      1 => 10.1.1.198:4245/TCP (active)            
10   10.3.1.28:80/TCP       ClusterIP      1 => 10.1.1.236:8081/TCP (active)            
11   172.18.0.5:31001/TCP   NodePort       1 => 10.1.1.236:8081/TCP (active)            
12   0.0.0.0:31001/TCP      NodePort       1 => 10.1.1.236:8081/TCP (active)            
13   10.3.128.46:80/TCP     ClusterIP      1 => 10.1.1.6:80/TCP (active) (preferred)    
                                           2 => 10.1.1.95:80/TCP (active) (preferred)   
                                           3 => 10.0.1.159:80/TCP (active)              
                                           4 => 10.0.1.107:80/TCP (active) 
  • east ํด๋Ÿฌ์Šคํ„ฐ์—์„œ๋„ ๋™์ผํ•˜๊ฒŒ ๋กœ์ปฌ Pod IP(10.1.1.x)๊ฐ€ (preferred) ๋กœ ํ‘œ์‹œ
  • ๊ธ€๋กœ๋ฒŒ ์„œ๋น„์Šค์ง€๋งŒ, ์ž์‹ ์˜ ํด๋Ÿฌ์Šคํ„ฐ์— ์žˆ๋Š” Pod โ†’ ์ตœ์šฐ์„  ์ฒ˜๋ฆฌ

4. Service Call Test

kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'
kubectl exec -it curl-pod --context kind-east -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'

โœ…ย Output

  • Service calls are concentrated on endpoints within the same cluster (tallied below)
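
To quantify the skew instead of eyeballing it, the responses can be counted. A sketch under the same assumptions as before (webpod answering with a Hostname line):

# 20 requests from the west client, counted per responding Pod;
# with affinity=local, all hits should land on west Pods.
kubectl exec -it curl-pod --context kind-west -- sh -c \
  'for i in $(seq 1 20); do curl -s --connect-timeout 1 webpod | grep Hostname; done' | sort | uniq -c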

5. Scaling Down Replicas (west 2 โ†’ 0)

kwest scale deployment webpod --replicas 0
deployment.apps/webpod scaled
  • west ํด๋Ÿฌ์Šคํ„ฐ์—์„œ ๋ชจ๋“  webpod Pod ์‚ญ์ œ
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

โœ…ย Output

ID   Frontend                Service Type   Backend                             
1    10.2.0.1:443/TCP        ClusterIP      1 => 172.18.0.2:6443/TCP (active)   
2    10.2.182.189:443/TCP    ClusterIP      1 => 172.18.0.3:4244/TCP (active)   
3    10.2.0.10:53/UDP        ClusterIP      1 => 10.0.1.177:53/UDP (active)     
                                            2 => 10.0.1.115:53/UDP (active)     
4    10.2.0.10:53/TCP        ClusterIP      1 => 10.0.1.177:53/TCP (active)     
                                            2 => 10.0.1.115:53/TCP (active)     
5    10.2.0.10:9153/TCP      ClusterIP      1 => 10.0.1.177:9153/TCP (active)   
                                            2 => 10.0.1.115:9153/TCP (active)   
6    10.2.126.217:2379/TCP   ClusterIP      1 => 10.0.1.8:2379/TCP (active)     
7    172.18.0.3:32379/TCP    NodePort       1 => 10.0.1.8:2379/TCP (active)     
8    0.0.0.0:32379/TCP       NodePort       1 => 10.0.1.8:2379/TCP (active)     
9    10.2.1.9:80/TCP         ClusterIP      1 => 10.0.1.120:4245/TCP (active)   
10   10.2.233.237:80/TCP     ClusterIP      1 => 10.0.1.158:8081/TCP (active)   
11   172.18.0.3:30001/TCP    NodePort       1 => 10.0.1.158:8081/TCP (active)   
12   0.0.0.0:30001/TCP       NodePort       1 => 10.0.1.158:8081/TCP (active)   
13   10.2.167.94:80/TCP      ClusterIP      1 => 10.1.1.6:80/TCP (active)       
                                            2 => 10.1.1.95:80/TCP (active)
  • The affinity policy prefers local Pods, but with none left locally, requests are forwarded to the remote east Pods (checked below)
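
The failover is visible from the client side as well. A sketch using the app=webpod selector shown in the describe output earlier:

# Confirm west really has no webpod Pods left ...
kwest get pod -l app=webpod
# ... yet a call through the global service still succeeds (served by east):
kubectl exec -it curl-pod --context kind-west -- curl -s --connect-timeout 1 webpod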

keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
ID   Frontend               Service Type   Backend                                      
1    10.3.0.1:443/TCP       ClusterIP      1 => 172.18.0.4:6443/TCP (active)            
2    10.3.107.176:443/TCP   ClusterIP      1 => 172.18.0.5:4244/TCP (active)            
3    10.3.0.10:9153/TCP     ClusterIP      1 => 10.1.1.45:9153/TCP (active)             
                                           2 => 10.1.1.21:9153/TCP (active)             
4    10.3.0.10:53/UDP       ClusterIP      1 => 10.1.1.45:53/UDP (active)               
                                           2 => 10.1.1.21:53/UDP (active)               
5    10.3.0.10:53/TCP       ClusterIP      1 => 10.1.1.45:53/TCP (active)               
                                           2 => 10.1.1.21:53/TCP (active)               
6    10.3.173.28:2379/TCP   ClusterIP      1 => 10.1.1.62:2379/TCP (active)             
7    172.18.0.5:32379/TCP   NodePort       1 => 10.1.1.62:2379/TCP (active)             
8    0.0.0.0:32379/TCP      NodePort       1 => 10.1.1.62:2379/TCP (active)             
9    10.3.54.60:80/TCP      ClusterIP      1 => 10.1.1.198:4245/TCP (active)            
10   10.3.1.28:80/TCP       ClusterIP      1 => 10.1.1.236:8081/TCP (active)            
11   172.18.0.5:31001/TCP   NodePort       1 => 10.1.1.236:8081/TCP (active)            
12   0.0.0.0:31001/TCP      NodePort       1 => 10.1.1.236:8081/TCP (active)            
13   10.3.128.46:80/TCP     ClusterIP      1 => 10.1.1.6:80/TCP (active) (preferred)    
                                           2 => 10.1.1.95:80/TCP (active) (preferred)
  • east ํด๋Ÿฌ์Šคํ„ฐ๋Š” ์—ฌ์ „ํžˆ ๋กœ์ปฌ Pod๊ฐ€ ์‚ด์•„์žˆ์–ด ์ž์‹ ์˜ Pod๋กœ ์šฐ์„  ์‘๋‹ต

6. Restoring (west replicas=2)

kwest scale deployment webpod --replicas 2
deployment.apps/webpod scaled
  • west์— ๋‹ค์‹œ 2๊ฐœ์˜ Pod ์ƒ์„ฑ โ†’ ๋กœ์ปฌ Pod IP๊ฐ€ ๋‹ค์‹œ (preferred) ๋กœ ํ‘œ์‹œ๋จ
  • ์„œ๋น„์Šค ํ˜ธ์ถœ ์‹œ ๋กœ์ปฌ ์šฐ์„  ์ •์ฑ…์ด ๋‹ค์‹œ ์ ์šฉ๋จ
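
To watch the preference come back after the scale-up, wait for the rollout and re-inspect the entry. A sketch reusing the 10.2.167.94 frontend from the listings above:

# Block until the two new Pods are Ready, then inspect the backends;
# the 10.0.1.x entries should carry (preferred) again.
kwest rollout status deployment webpod
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- \
  cilium service list --clustermesh-affinity | grep -A4 '10.2.167.94'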