Cilium 5์ฃผ์ฐจ ์ ๋ฆฌ
๐ง ์ค์ต ํ๊ฒฝ ๊ตฌ์ฑ
1. VirtualBox Compatibility Issue
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/Vagrantfile
vagrant up --provider=virtualbox
๐ข ์ค๋ฅ ๋ฐ์
1
2
3
4
5
6
7
8
9
10
11
vagrant up --provider=virtualbox
The provider 'virtualbox' that was requested to back the machine
'k8s-ctr' is reporting that it isn't usable on this system. The
reason is shown below:
Vagrant has detected that you have a version of VirtualBox installed
that is not supported by this version of Vagrant. Please install one of
the supported versions listed below to use Vagrant:
4.0, 4.1, 4.2, 4.3, 5.0, 5.1, 5.2, 6.0, 6.1, 7.0, 7.1
A Vagrant update may also be available that adds support for the version
you specified. Please check www.vagrantup.com/downloads.html to download
the latest version.
- Vagrant supports VirtualBox only up to version 7.1
- The Arch Linux rolling release had already updated VirtualBox to 7.2, so the two are incompatible
2. Resolution: Switching to libvirt
(1) Configure the libvirt environment
sudo systemctl enable --now libvirtd
sudo usermod -a -G libvirt $USER
newgrp libvirt
(2) Install the libvirt plugin
vagrant plugin install vagrant-libvirt
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Fetching xml-simple-1.1.9.gem
Fetching racc-1.8.1.gem
Building native extensions. This could take a while...
Fetching nokogiri-1.18.9-x86_64-linux-gnu.gem
Fetching ruby-libvirt-0.8.4.gem
Building native extensions. This could take a while...
Fetching formatador-1.2.0.gem
Fetching fog-core-2.6.0.gem
Fetching fog-xml-0.1.5.gem
Fetching fog-json-1.2.0.gem
Fetching fog-libvirt-0.13.2.gem
Fetching diffy-3.4.4.gem
Fetching vagrant-libvirt-0.12.2.gem
Installed the plugin 'vagrant-libvirt (0.12.2)'!
(3) Install networking packages
sudo pacman -S dnsmasq bridge-utils iptables-nft
sudo systemctl restart libvirtd
(4) Enable the default libvirt network
sudo virsh net-start default
sudo virsh net-autostart default
sudo virsh net-list --all
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
Network default started
Network default marked as autostarted
Name State Autostart Persistent
--------------------------------------------
default active yes yes
(5) Modify the Vagrantfile
# Variables
K8SV = '1.33.2-1.1' # Kubernetes Version
CONTAINERDV = '1.7.27-1' # Containerd Version
CILIUMV = '1.18.0' # Cilium CNI Version
N = 1 # max number of worker nodes

# Base Image
BOX_IMAGE = "bento/ubuntu-24.04"
BOX_VERSION = "202508.03.0"

Vagrant.configure("2") do |config|
#-ControlPlane Node
  config.vm.define "k8s-ctr" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "libvirt" do |libvirt|
      libvirt.cpus = 2
      libvirt.memory = 2560
    end
    subconfig.vm.host_name = "k8s-ctr"
    subconfig.vm.network "private_network", ip: "192.168.10.100"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60000, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/init_cfg.sh", args: [ K8SV, CONTAINERDV ]
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/k8s-ctr.sh", args: [ N, CILIUMV, K8SV ]
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/route-add1.sh"
  end

#-Worker Nodes Subnet1
  (1..N).each do |i|
    config.vm.define "k8s-w#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "libvirt" do |libvirt|
        libvirt.cpus = 2
        libvirt.memory = 1536
      end
      subconfig.vm.host_name = "k8s-w#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.10#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/init_cfg.sh", args: [ K8SV, CONTAINERDV]
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/k8s-w.sh"
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/route-add1.sh"
    end
  end

#-Router Node
  config.vm.define "router" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "libvirt" do |libvirt|
      libvirt.cpus = 1
      libvirt.memory = 768
    end
    subconfig.vm.host_name = "router"
    subconfig.vm.network "private_network", ip: "192.168.10.200"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60009, auto_correct: true, id: "ssh"
    subconfig.vm.network "private_network", ip: "192.168.20.200", auto_config: false
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/router.sh"
  end

#-Worker Nodes Subnet2
  config.vm.define "k8s-w0" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "libvirt" do |libvirt|
      libvirt.cpus = 2
      libvirt.memory = 1536
    end
    subconfig.vm.host_name = "k8s-w0"
    subconfig.vm.network "private_network", ip: "192.168.20.100"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/init_cfg.sh", args: [ K8SV, CONTAINERDV]
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/k8s-w.sh"
    subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/route-add2.sh"
  end
end
- Removed the VirtualBox-specific settings (vb.customize, vb.name, vb.linked_clone)
- Changed the provider to libvirt
(6) Bring up the cluster
vagrant up --provider=libvirt
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
...
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250815-120434-uhtz9s.sh
k8s-ctr: >>>> K8S Controlplane config Start <<<<
k8s-ctr: [TASK 1] Initial Kubernetes
k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250815-120434-susr2r.sh
k8s-w1: >>>> K8S Node config Start <<<<
k8s-w1: [TASK 1] K8S Controlplane Join
k8s-w0: >>>> Initial Config End <<<<
==> k8s-w0: Running provisioner: shell...
k8s-w0: Running: /tmp/vagrant-shell20250815-120434-gmyjn3.sh
k8s-w0: >>>> K8S Node config Start <<<<
k8s-w0: [TASK 1] K8S Controlplane Join
k8s-ctr: [TASK 2] Setting kube config file
k8s-ctr: [TASK 3] Source the completion
k8s-ctr: [TASK 4] Alias kubectl to k
k8s-ctr: [TASK 5] Install Kubectx & Kubens
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: [TASK 7] Install Cilium CNI
k8s-ctr: [TASK 8] Install Cilium / Hubble CLI
k8s-ctr: cilium
k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250815-120434-45fmm8.sh
k8s-w1: >>>> Route Add Config Start <<<<
k8s-w1: >>>> Route Add Config End <<<<
k8s-ctr: hubble
k8s-ctr: [TASK 9] Remove node taint
k8s-ctr: node/k8s-ctr untainted
k8s-ctr: [TASK 10] local DNS with hosts file
k8s-ctr: [TASK 11] Dynamically provisioning persistent local storage with Kubernetes
k8s-ctr: [TASK 13] Install Metrics-server
k8s-ctr: [TASK 14] Install k9s
k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250815-120434-5mw3lc.sh
k8s-ctr: >>>> Route Add Config Start <<<<
k8s-ctr: >>>> Route Add Config End <<<<
k8s-w0: >>>> K8S Node config End <<<<
==> k8s-w0: Running provisioner: shell...
k8s-w0: Running: /tmp/vagrant-shell20250815-120434-dtpox6.sh
k8s-w0: >>>> Route Add Config Start <<<<
k8s-w0: >>>> Route Add Config End <<<<
3. Routing and BGP Lab
- https://docs.frrouting.org/en/stable-10.4/about.html
- FRR (FRRouting) is installed and a BGP-based network is built for this lab
- Cilium is installed with bgpControlPlane.enabled=true and autoDirectNodeRoutes=false
  → nodes on the same network no longer add routes for other nodes' Pod CIDRs automatically (see the sketch below)
cat <<EOT>> /etc/netplan/50-vagrant.yaml
      routes:
      - to: 192.168.20.0/24
        via: 192.168.10.200
      # - to: 172.20.0.0/16
      #   via: 192.168.10.200
EOT
cat <<EOT>> /etc/netplan/50-vagrant.yaml
      routes:
      - to: 192.168.10.0/24
        via: 192.168.20.200
      # - to: 172.20.0.0/16
      #   via: 192.168.20.200
EOT
- ๋ด๋ถ๋ง ์ต์ ๋ผ์ฐํ ๊ท์น๋ง ์ถ๊ฐ
- BGP๋ฅผ ํตํด ๋ผ์ฐํ ์ ๋ณด ๊ตํ ๋ฐ ๊ด๊ณ ์ค์
echo "[TASK 7] Configure FRR"
apt install frr -y >/dev/null 2>&1
sed -i "s/^bgpd=no/bgpd=yes/g" /etc/frr/daemons
NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
cat << EOF >> /etc/frr/frr.conf
!
router bgp 65000
bgp router-id $NODEIP
bgp graceful-restart
no bgp ebgp-requires-policy
bgp bestpath as-path multipath-relax
maximum-paths 4
network 10.10.1.0/24
EOF
systemctl daemon-reexec >/dev/null 2>&1
systemctl restart frr >/dev/null 2>&1
systemctl enable frr >/dev/null 2>&1
- ๋ผ์ฐํฐ VM์๋ FRR ์ค์น ๋ฐ
bgpd=yes
์ ์ฉ ํ BGP ๋ผ์ฐํฐ๋ก ๋์
๐ฅ๏ธ [k8s-ctr] ์ ์ ํ ๊ธฐ๋ณธ ์ ๋ณด ํ์ธ
1. Verify SSH access from the control plane to each node (k8s-w0, k8s-w1, router)
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in k8s-w0 k8s-w1 router ; do echo ">> node : $i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$i hostname; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
>> node : k8s-w0 <<
Warning: Permanently added 'k8s-w0' (ED25519) to the list of known hosts.
k8s-w0
>> node : k8s-w1 <<
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
>> node : router <<
Warning: Permanently added 'router' (ED25519) to the list of known hosts.
router
2. Node Join Issue
(1) Symptom
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
โ ย ์ถ๋ ฅ
1
2
3
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 4m25s v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-64-generic containerd://1.7.27
k8s-w1 Ready <none> 4m11s v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-64-generic containerd://1.7.27
- The k8s-w0 node is missing from the kubectl get node output
(2) Root cause
vagrant ssh k8s-w0
root@k8s-w0:~# cat /root/kubeadm-join-worker-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "123456.1234567890123456"
    apiServerEndpoint: "192.168.10.100:6443"
    unsafeSkipCAVerification: true
nodeRegistration:
  criSocket: "unix:///run/containerd/containerd.sock"
  kubeletExtraArgs:
    - name: node-ip
      value: "192.168.20.100"
- ์ค์ ํ์ผ์ ๋๋ฏธํ ํฐ์ด ํ๋์ฝ๋ฉ ๋์ด์์ด์ join ์คํจ
(3) Fix
(โ|HomeLab:N/A) root@k8s-ctr:~# kubeadm token create --ttl=72h
9llwjd.azxcrk0wd8lkrh45
- ์ปจํธ๋กค ํ๋ ์ธ์์ ์ ํ ํฐ ์์ฑ
root@k8s-w0:~# sudo sed -i 's/123456.1234567890123456/9llwjd.azxcrk0wd8lkrh45/g' /root/kubeadm-join-worker-config.yaml
root@k8s-w0:~# cat /root/kubeadm-join-worker-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "9llwjd.azxcrk0wd8lkrh45"
    apiServerEndpoint: "192.168.10.100:6443"
    unsafeSkipCAVerification: true
nodeRegistration:
  criSocket: "unix:///run/containerd/containerd.sock"
  kubeletExtraArgs:
    - name: node-ip
      value: "192.168.20.100"
- ์์ปค๋ ธ๋0 ์ค์ ํ์ผ ๋ด ํ ํฐ ๊ฐ ๊ต์ฒด
root@k8s-w0:~# sudo kubeadm reset -f
root@k8s-w0:~# sudo kubeadm join --config="/root/kubeadm-join-worker-config.yaml"
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
[preflight] Running pre-flight checks
W0815 22:41:14.831416 4224 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not perform cleanup of CNI plugin configuration,
network filtering rules and kubeconfig files.
For information on how to perform this cleanup manually, please see:
https://k8s.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002299091s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- Re-ran kubeadm join after kubeadm reset -f
(4) Result
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -owide
โ ย ์ถ๋ ฅ
1
2
3
4
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 10m v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-64-generic containerd://1.7.27
k8s-w0 Ready <none> 53s v1.33.2 192.168.20.100 <none> Ubuntu 24.04.2 LTS 6.8.0-64-generic containerd://1.7.27
k8s-w1 Ready <none> 10m v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-64-generic containerd://1.7.27
- ๋ชจ๋ ๋
ธ๋(
k8s-ctr
,k8s-w0
,k8s-w1
) ์ ์ ์ฐ๊ฒฐ
โ๏ธ [k8s-ctr] cilium ์ค์ ํ์ธ
Check whether the BGP Control Plane feature is enabled in Cilium
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i bgp
โ ย ์ถ๋ ฅ
1
2
3
4
5
bgp-router-id-allocation-ip-pool
bgp-router-id-allocation-mode default
bgp-secrets-namespace kube-system
enable-bgp-control-plane true
enable-bgp-control-plane-status-report true
- enable-bgp-control-plane = true confirmed
๐ ๋คํธ์ํฌ ์ ๋ณด ํ์ธ
1. Router network interfaces
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -br -c -4 addr
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
lo UNKNOWN 127.0.0.1/8
eth0 UP 192.168.121.180/24 metric 100
eth1 UP 192.168.10.200/24
eth2 UP 192.168.20.200/24
loop1 UNKNOWN 10.10.1.200/24
loop2 UNKNOWN 10.10.2.200/24
2. Control plane network interface
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c -4 addr show dev eth1
โ ย ์ถ๋ ฅ
1
2
3
4
5
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s6
altname ens6
inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
3. Worker node network interfaces
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c -4 addr show dev eth1; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
>> node : k8s-w1 <<
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s6
altname ens6
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
>> node : k8s-w0 <<
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s6
altname ens6
inet 192.168.20.100/24 brd 192.168.20.255 scope global eth1
valid_lft forever preferred_lft forever
- k8s-w1: 192.168.10.101/24 → same subnet as the control plane
- k8s-w0: 192.168.20.100/24 → a different subnet, behind the router
4. Router routing table
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
- The router handles routing between 192.168.10.0/24 and 192.168.20.0/24
5. Control plane routing table
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.70 metric 100
172.20.0.0/24 via 172.20.0.230 dev cilium_host proto kernel src 172.20.0.230
172.20.0.230 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.70 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.70 metric 100
- Because autoDirectNodeRoutes=false, automatic Pod CIDR route installation is disabled
- So even nodes on the same subnet have no route to each other's Pod CIDR
- The control plane (k8s-ctr) only has its own PodCIDR (172.20.0.0/24)
6. Worker node routing tables
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
>> node : k8s-w1 <<
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.62 metric 100
172.20.1.0/24 via 172.20.1.4 dev cilium_host proto kernel src 172.20.1.4
172.20.1.4 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.62 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.62 metric 100
>> node : k8s-w0 <<
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.122 metric 100
172.20.2.0/24 via 172.20.2.89 dev cilium_host proto kernel src 172.20.2.89
172.20.2.89 dev cilium_host proto kernel scope link
192.168.10.0/24 via 192.168.20.200 dev eth1 proto static
192.168.20.0/24 dev eth1 proto kernel scope link src 192.168.20.100
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.122 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.122 metric 100
- ๊ฐ ์์ปค ๋
ธ๋(
k8s-w1
,k8s-w0
)๋ ์์ ์ PodCIDR๋ง ๋ฑ๋ก๋์ด ์์ autoDirectNodeRoutes=false
๋๋ฌธ์ ๋ค๋ฅธ ๋ ธ๋์ Pod CIDR์ ์๋์ผ๋ก ์ถ๊ฐ๋์ง ์์
๐ฆ ์ํ ์ ํ๋ฆฌ์ผ์ด์ ๋ฐฐํฌ ๋ฐ ํต์ ๋ฌธ์ ํ์ธ
1. Deploy the sample application
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
# Result
deployment.apps/webpod created
service/webpod created
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
# Result
pod/curl-pod created
2. Pod scheduling and Service check
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/webpod 3/3 3 3 2m23s webpod traefik/whoami app=webpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/webpod ClusterIP 10.96.54.159 <none> 80/TCP 2m23s app=webpod
NAME ENDPOINTS AGE
endpoints/webpod 172.20.0.158:80,172.20.1.65:80,172.20.2.204:80 2m23s
- The pods ended up spread one per node (note: the podAntiAffinity rule as written matches app=sample-app rather than app=webpod)
- Service: ClusterIP 10.96.54.159 assigned
- Endpoints: the three pods land in different PodCIDRs (172.20.0.x, 172.20.1.x, 172.20.2.x)
3. Cilium endpoints
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
curl-pod 64126 ready 172.20.0.64
webpod-697b545f57-fbtbj 38082 ready 172.20.0.158
webpod-697b545f57-pxhvr 38082 ready 172.20.1.65
webpod-697b545f57-rpblf 38082 ready 172.20.2.204
- curl-pod → 172.20.0.64 (on the control plane)
- The three webpod pods → 172.20.0.158, 172.20.1.65, 172.20.2.204
4. Communication test
curl-pod sends repeated requests to the webpod Service
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo "---" ; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
---
---
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
---
...
- The --connect-timeout 1 option aborts the connection if there is no response within 1 second
- Only the webpod pod deployed on k8s-ctr (172.20.0.158) responds
- The webpod pods on the other nodes (k8s-w1, k8s-w0) never respond
๐ก Cilium BGP Control Plane
- https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-v2/
- Verify communication after configuring BGP: by default, Cilium's BGP does not inject externally learned routes into the kernel routing table
1. FRR process status
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router
root@router:~# ss -tnlp | grep -iE 'zebra|bgpd'
โ ย ์ถ๋ ฅ
1
2
3
4
LISTEN 0 3 127.0.0.1:2601 0.0.0.0:* users:(("zebra",pid=3827,fd=23))
LISTEN 0 3 127.0.0.1:2605 0.0.0.0:* users:(("bgpd",pid=3832,fd=18))
LISTEN 0 4096 0.0.0.0:179 0.0.0.0:* users:(("bgpd",pid=3832,fd=22))
LISTEN 0 4096 [::]:179 [::]:* users:(("bgpd",pid=3832,fd=23))
- The BGP port (179) is listening
root@router:~# ps -ef |grep frr
โ ย ์ถ๋ ฅ
1
2
3
4
5
root 3814 1 0 22:30 ? 00:00:00 /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd
frr 3827 1 0 22:30 ? 00:00:00 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000000
frr 3832 1 0 22:30 ? 00:00:00 /usr/lib/frr/bgpd -d -F traditional -A 127.0.0.1
frr 3839 1 0 22:30 ? 00:00:00 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
root 4417 4399 0 23:02 pts/1 00:00:00 grep --color=auto frr
- The watchfrr, zebra, bgpd, and staticd processes are running
2. FRR configuration (vtysh)
root@router:~# vtysh -c 'show running'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
Building configuration...
Current configuration:
!
frr version 8.4.4
frr defaults traditional
hostname router
log syslog informational
no ipv6 forwarding
service integrated-vtysh-config
!
router bgp 65000
bgp router-id 192.168.10.200
no bgp ebgp-requires-policy
bgp graceful-restart
bgp bestpath as-path multipath-relax
!
address-family ipv4 unicast
network 10.10.1.0/24
maximum-paths 4
exit-address-family
exit
!
end
- The router's BGP AS number is 65000
- It is configured to advertise the loopback network 10.10.1.0/24
- Multipath is allowed (maximum-paths 4)
3. FRR configuration file
root@router:~# cat /etc/frr/frr.conf
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational
!
router bgp 65000
bgp router-id 192.168.10.200
bgp graceful-restart
no bgp ebgp-requires-policy
bgp bestpath as-path multipath-relax
maximum-paths 4
network 10.10.1.0/24
- The same settings are visible in /etc/frr/frr.conf
- network 10.10.1.0/24 is being advertised
4. BGP status
root@router:~# vtysh -c 'show ip bgp summary'
โ ย ์ถ๋ ฅ
1
% No BGP neighbors found in VRF default
- ์์ง Neighbor ์์
1
root@router:~# vtysh -c 'show ip bgp'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
BGP table version is 1, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*> 10.10.1.0/24 0.0.0.0 0 32768 i
Displayed 1 routes and 1 total paths
- ์์ ์ด ๋ณด์ ํ
10.10.1.0/24
๋คํธ์ํฌ๋ง ๊ด๊ณ ์ค - ์ธ๋ถ ๋ ธ๋์ ์ฐ๊ฒฐ๋์ง ์์ BGP ํ ์ด๋ธ์ ๋จ์ผ ์ํธ๋ฆฌ๋ง ์กด์ฌ
5. Router network interfaces
root@router:~# ip -c addr
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:97:4b:c4 brd ff:ff:ff:ff:ff:ff
altname enp0s5
altname ens5
inet 192.168.121.25/24 metric 100 brd 192.168.121.255 scope global dynamic eth0
valid_lft 2869sec preferred_lft 2869sec
inet6 fe80::5054:ff:fe97:4bc4/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:24:2d:30 brd ff:ff:ff:ff:ff:ff
altname enp0s6
altname ens6
inet 192.168.10.200/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe24:2d30/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:50:33:eb brd ff:ff:ff:ff:ff:ff
altname enp0s7
altname ens7
inet 192.168.20.200/24 brd 192.168.20.255 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe50:33eb/64 scope link
valid_lft forever preferred_lft forever
5: loop1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 22:63:e6:d9:f6:95 brd ff:ff:ff:ff:ff:ff
inet 10.10.1.200/24 scope global loop1
valid_lft forever preferred_lft forever
inet6 fe80::2063:e6ff:fed9:f695/64 scope link
valid_lft forever preferred_lft forever
6: loop2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 6e:08:a4:e5:88:c0 brd ff:ff:ff:ff:ff:ff
inet 10.10.2.200/24 scope global loop2
valid_lft forever preferred_lft forever
inet6 fe80::6c08:a4ff:fee5:88c0/64 scope link
valid_lft forever preferred_lft forever
- Loopback interfaces: loop1 10.10.1.200/24, loop2 10.10.2.200/24
6. Router routing table
root@router:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
7. Add BGP neighbors
On the router, register the Cilium nodes (k8s-ctr, k8s-w1, k8s-w0) as neighbors
root@router:~# cat << EOF >> /etc/frr/frr.conf
neighbor CILIUM peer-group
neighbor CILIUM remote-as external
neighbor 192.168.10.100 peer-group CILIUM
neighbor 192.168.10.101 peer-group CILIUM
neighbor 192.168.20.100 peer-group CILIUM
EOF
- The neighbors are grouped into the CILIUM peer-group to simplify management
- remote-as external automatically accepts a peer in any external (different) AS
root@router:~# cat /etc/frr/frr.conf
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational
!
router bgp 65000
bgp router-id 192.168.10.200
bgp graceful-restart
no bgp ebgp-requires-policy
bgp bestpath as-path multipath-relax
maximum-paths 4
network 10.10.1.0/24
neighbor CILIUM peer-group
neighbor CILIUM remote-as external
neighbor 192.168.10.100 peer-group CILIUM
neighbor 192.168.10.101 peer-group CILIUM
neighbor 192.168.20.100 peer-group CILIUM
- Router AS: 65000, node AS: 65001
8. Restart FRR and check its status
root@router:~# systemctl daemon-reexec && systemctl restart frr
root@router:~# systemctl status frr --no-pager --full
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
โ frr.service - FRRouting
Loaded: loaded (/usr/lib/systemd/system/frr.service; enabled; preset: enabled)
Active: active (running) since Fri 2025-08-15 23:20:38 KST; 17s ago
Docs: https://frrouting.readthedocs.io/en/latest/setup.html
Process: 4540 ExecStart=/usr/lib/frr/frrinit.sh start (code=exited, status=0/SUCCESS)
Main PID: 4550 (watchfrr)
Status: "FRR Operational"
Tasks: 13 (limit: 757)
Memory: 19.5M (peak: 27.4M)
CPU: 321ms
CGroup: /system.slice/frr.service
โโ4550 /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd
โโ4563 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000000
โโ4568 /usr/lib/frr/bgpd -d -F traditional -A 127.0.0.1
โโ4575 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
Aug 15 23:20:38 router watchfrr[4550]: [YFT0P-5Q5YX] Forked background command [pid 4551]: /usr/lib/frr/watchfrr.sh restart all
Aug 15 23:20:38 router zebra[4563]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router bgpd[4568]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router staticd[4575]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 15 23:20:38 router frrinit.sh[4540]: * Started watchfrr
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 15 23:20:38 router systemd[1]: Started frr.service - FRRouting.
9. Monitoring
root@router:~# journalctl -u frr -f
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
Aug 15 23:20:38 router watchfrr[4550]: [YFT0P-5Q5YX] Forked background command [pid 4551]: /usr/lib/frr/watchfrr.sh restart all
Aug 15 23:20:38 router zebra[4563]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router bgpd[4568]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router staticd[4575]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 15 23:20:38 router frrinit.sh[4540]: * Started watchfrr
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 15 23:20:38 router systemd[1]: Started frr.service - FRRouting.
๐ฐ๏ธ Cilium์ BGP ์ค์
1. Label the nodes that will run BGP
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl label nodes k8s-ctr k8s-w0 k8s-w1 enable-bgp=true
# Result
node/k8s-ctr labeled
node/k8s-w0 labeled
node/k8s-w1 labeled
- Apply the enable-bgp=true label to the nodes that should run Cilium BGP
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -l enable-bgp=true
โ ย ์ถ๋ ฅ
1
2
3
4
NAME STATUS ROLES AGE VERSION
k8s-ctr Ready control-plane 53m v1.33.2
k8s-w0 Ready <none> 43m v1.33.2
k8s-w1 Ready <none> 53m v1.33.2
- 3๊ฐ ๋ ธ๋ ๋ผ๋ฒจ๋ง ํ์ธ
2. Cilium BGP resources
Three CRDs are created to define Cilium's BGP behavior
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: "PodCIDR"
---
apiVersion: cilium.io/v2
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  ebgpMultihop: 2
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          advertise: "bgp"
---
apiVersion: cilium.io/v2
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      "enable-bgp": "true"
  bgpInstances:
  - name: "instance-65001"
    localASN: 65001
    peers:
    - name: "tor-switch"
      peerASN: 65000
      peerAddress: 192.168.10.200 # router ip address
      peerConfigRef:
        name: "cilium-peer"
EOF
ciliumbgpadvertisement.cilium.io/bgp-advertisements created
ciliumbgppeerconfig.cilium.io/cilium-peer created
ciliumbgpclusterconfig.cilium.io/cilium-bgp created
- CiliumBGPAdvertisement: advertisementType: PodCIDR → each node's PodCIDR is advertised over BGP
- CiliumBGPPeerConfig: the advertisements to use are selected via advertisements.matchLabels: advertise=bgp
- CiliumBGPClusterConfig:
  - only nodes labeled enable-bgp=true take part in BGP
  - localASN: 65001, peerASN: 65000
  - peerAddress: 192.168.10.200 (the router IP)
  - the peer settings reference the cilium-peer CiliumBGPPeerConfig
After the CRDs are applied, the router's FRR log shows End-of-RIB markers arriving from all three nodes, i.e. the BGP sessions have come up:
Aug 15 23:20:38 router watchfrr[4550]: [YFT0P-5Q5YX] Forked background command [pid 4551]: /usr/lib/frr/watchfrr.sh restart all
Aug 15 23:20:38 router zebra[4563]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router bgpd[4568]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router staticd[4575]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 15 23:20:38 router frrinit.sh[4540]: * Started watchfrr
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 15 23:20:38 router watchfrr[4550]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 15 23:20:38 router systemd[1]: Started frr.service - FRRouting.
Aug 15 23:27:15 router bgpd[4568]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.20.100 in vrf default
Aug 15 23:27:15 router bgpd[4568]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.100 in vrf default
Aug 15 23:27:15 router bgpd[4568]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.101 in vrf default
๐ ํต์ ํ์ธ
1. BGP session establishment
(โ|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep 179
(โ|HomeLab:N/A) root@k8s-ctr:~# ss -tnp | grep 179
ESTAB 0 0 192.168.10.100:35791 192.168.10.200:179 users:(("cilium-agent",pid=5170,fd=50))
ESTAB 0 0 [::ffff:192.168.10.100]:6443 [::ffff:172.20.0.179]:46928 users:(("kube-apiserver",pid=3868,fd=105))
- Cilium acts as the BGP initiator rather than a listener: it opens the connection to the network device's (FRR router's) TCP port 179
- The control plane node (192.168.10.100:35791) has an established session with the router (192.168.10.200:179)
2. Cilium BGP peer status
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp peers
Node Local AS Peer AS Peer Address Session State Uptime Family Received Advertised
k8s-ctr 65001 65000 192.168.10.200 established 10m59s ipv4/unicast 4 2
k8s-w0 65001 65000 192.168.10.200 established 10m59s ipv4/unicast 4 2
k8s-w1 65001 65000 192.168.10.200 established 10m59s ipv4/unicast 4 2
- 3๊ฐ ๋
ธ๋(
k8s-ctr
,k8s-w0
,k8s-w1
) ๋ชจ๋ ๋ผ์ฐํฐ์ established ์ํ ํ์ธ - Local ASN์ 65001, Peer ASN์ 65000์ผ๋ก ์ ์์ ์ผ๋ก ๋งค์นญ
3. PodCIDR advertisement status
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes available ipv4 unicast
โ ย ์ถ๋ ฅ
1
2
3
4
Node VRouter Prefix NextHop Age Attrs
k8s-ctr 65001 172.20.0.0/24 0.0.0.0 12m43s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0 65001 172.20.2.0/24 0.0.0.0 12m42s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1 65001 172.20.1.0/24 0.0.0.0 12m43s [{Origin: i} {Nexthop: 0.0.0.0}]
- ๊ฐ ๋ ธ๋์ PodCIDR ๊ด๊ณ ํ์ธ๋จ
4. Cilium BGP resources
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpadvertisements,ciliumbgppeerconfigs,ciliumbgpclusterconfigs
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
NAME AGE
ciliumbgpadvertisement.cilium.io/bgp-advertisements 13m
NAME AGE
ciliumbgppeerconfig.cilium.io/cilium-peer 13m
NAME AGE
ciliumbgpclusterconfig.cilium.io/cilium-bgp 13m
๊ฐ ๋
ธ๋๋ณ CiliumBGPNodeConfig
๋ฆฌ์์ค ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-ctr",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "7578",
"uid": "a72d5068-f106-4b37-a0a7-2ad0e72e8f9d"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"establishedTime": "2025-08-15T14:27:14Z",
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peeringState": "established",
"routeCount": [
{
"advertised": 2,
"afi": "ipv4",
"received": 1,
"safi": "unicast"
}
],
"timers": {
"appliedHoldTimeSeconds": 9,
"appliedKeepaliveSeconds": 3
}
}
]
}
]
}
},
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-w0",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "7575",
"uid": "395cc9e2-0f3e-47f3-bce0-169110494292"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"establishedTime": "2025-08-15T14:27:14Z",
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peeringState": "established",
"routeCount": [
{
"advertised": 2,
"afi": "ipv4",
"received": 1,
"safi": "unicast"
}
],
"timers": {
"appliedHoldTimeSeconds": 9,
"appliedKeepaliveSeconds": 3
}
}
]
}
]
}
},
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-w1",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "7581",
"uid": "d98cdab1-5d96-4ecf-ae47-1cc3c80a3071"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"establishedTime": "2025-08-15T14:27:14Z",
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peeringState": "established",
"routeCount": [
{
"advertised": 2,
"afi": "ipv4",
"received": 1,
"safi": "unicast"
}
],
"timers": {
"appliedHoldTimeSeconds": 9,
"appliedKeepaliveSeconds": 3
}
}
]
}
]
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": ""
}
}
- Local ASN 65001
- Peer address 192.168.10.200, peer ASN 65000
- Peering state: established
- Route count: advertised 2, received 1
5. Router kernel routing table
root@router:~# ip -c route | grep bgp
172.20.0.0/24 nhid 29 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 30 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 28 via 192.168.20.100 dev eth2 proto bgp metric 20
- The Pod CIDR routes learned via BGP are installed in the FRR router's kernel routing table
- Traffic to a given Pod CIDR is now forwarded to the node that owns it
6. BGP neighbor relationships
root@router:~# vtysh -c 'show ip bgp summary'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 4
RIB entries 7, using 1344 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
192.168.10.100 4 65001 353 356 0 0 0 00:17:29 1 4 N/A
192.168.10.101 4 65001 353 356 0 0 0 00:17:29 1 4 N/A
192.168.20.100 4 65001 353 356 0 0 0 00:17:28 1 4 N/A
Total number of neighbors 3
- The FRR router (AS 65000) has BGP neighbor relationships with the three Cilium nodes (AS 65001)
- All neighbors are connected in the Established state
7. Advertised BGP routes
root@router:~# vtysh -c 'show ip bgp'
BGP table version is 4, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*> 10.10.1.0/24 0.0.0.0 0 32768 i
*> 172.20.0.0/24 192.168.10.100 0 65001 i
*> 172.20.1.0/24 192.168.10.101 0 65001 i
*> 172.20.2.0/24 192.168.20.100 0 65001 i
Displayed 4 routes and 4 total paths
- All Pod CIDRs (172.20.0.0/24, 172.20.1.0/24, 172.20.2.0/24) are received
- Each next hop is the owning node's internal IP, confirming the routes were advertised and propagated correctly over BGP
8. Communication still fails after BGP peering
Hostname: webpod-697b545f57-fbtbj
---
---
---
---
---
---
---
---
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
---
---
...
- Even with BGP sessions established, Cilium by default does not inject the received routes into the kernel routing table (effectively a disable-fib behavior)
๐ BGP ์ ๋ณด ์ ๋ฌ ํ์ธ
1. Run tcpdump on the control plane
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 tcp port 179 -w /tmp/bgp.pcap
# Result
tcpdump: listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
2. Restart FRR and watch the BGP sessions
root@router:~# systemctl restart frr && journalctl -u frr -f
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
Aug 16 00:00:33 router watchfrr.sh[4856]: Cannot stop zebra: pid file not found
Aug 16 00:00:33 router zebra[4858]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 16 00:00:33 router bgpd[4863]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 16 00:00:33 router staticd[4870]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 16 00:00:33 router frrinit.sh[4835]: * Started watchfrr
Aug 16 00:00:33 router systemd[1]: Started frr.service - FRRouting.
Aug 16 00:00:33 router watchfrr[4845]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
Aug 16 00:00:33 router watchfrr[4845]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
Aug 16 00:00:33 router watchfrr[4845]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
Aug 16 00:00:33 router watchfrr[4845]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
Aug 16 00:00:39 router bgpd[4863]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.100 in vrf default
Aug 16 00:00:39 router bgpd[4863]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.101 in vrf default
Aug 16 00:00:40 router bgpd[4863]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.20.100 in vrf default
3. Termshark
(1) Analyze the captured BGP packet file
(โ|HomeLab:N/A) root@k8s-ctr:~# termshark -r /tmp/bgp.pcap
(2) The control plane connection dropped during the packet analysis
vagrant halt k8s-ctr --force
- ์ปจํธ๋กค ํ๋ ์ธ VM ๊ฐ์ ์ข ๋ฃ
1
2
3
4
sudo virsh start 5w_k8s-ctr
# Result
Domain '5w_k8s-ctr' started
- ์ปจํธ๋กค ํ๋ ์ธ VM ์ฌ๊ธฐ๋
๐ ๏ธ ๋ฌธ์ ํด๊ฒฐ ํ ํต์ ํ์ธ
1. How Cilium BGP behaves
- Cilium's BGP does not inject externally learned routes into the kernel routing table by default
- It behaves as if built with a disable-fib option → BGP routes are not reflected into the kernel routing table (FIB)
- So routes are received from the BGP peer, but the corresponding CIDR ranges do not appear in the ip route output
2. Checking the BGP routes
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.70 metric 100
172.20.0.0/24 via 172.20.0.230 dev cilium_host proto kernel src 172.20.0.230
172.20.0.230 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.70 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.70 metric 100
- The 172.20.1.0/24 and 172.20.2.0/24 ranges are missing from the kernel routing table
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
(Defaulting to `available ipv4 unicast` routes, please see help for more options)
Node VRouter Prefix NextHop Age Attrs
k8s-ctr 65001 172.20.0.0/24 0.0.0.0 9m26s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0 65001 172.20.2.0/24 0.0.0.0 57m18s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1 65001 172.20.1.0/24 0.0.0.0 57m18s [{Origin: i} {Nexthop: 0.0.0.0}]
- ๊ฐ ๋ ธ๋ CIDR ๊ฒฝ๋ก๊ฐ BGP๋ก ์ ์ ์์ ๋ ์ํ ํ์ธ ๊ฐ๋ฅ
์ฆ, BGP ์์ ์ ์ ์์ ์ด๋, ์ปค๋ ๋ผ์ฐํ ๋ฐ์์ด ์ ๋์ด ํต์ ๋ถ๊ฐ ์ํ์
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
---
---
---
---
---
Hostname: webpod-697b545f57-fbtbj
---
---
...
3. Route the Pod CIDR range via eth1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip route add 172.20.0.0/16 via 192.168.10.200
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo ip route add 172.20.0.0/16 via 192.168.10.200
sshpass -p 'vagrant' ssh vagrant@k8s-w0 sudo ip route add 172.20.0.0/16 via 192.168.20.200
- eth0 → used for internet access only
- eth1 → used for Kubernetes pod traffic
- So an explicit route is added on each node so that the pod range (172.20.0.0/16) is routed via eth1 (a note on making this persistent follows below)
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-pxhvr
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-rpblf
---
Hostname: webpod-697b545f57-fbtbj
---
Hostname: webpod-697b545f57-pxhvr
---
...
- ๊ฒฝ๋ก ์ถ๊ฐ ํ ํ๋ ๊ฐ ํต์ ์ ์ ๋์ ํ์ธ
โธ๏ธ ๋ ธ๋(k8s-w0) ์ ์ง๋ณด์ ์ํฉ
1. Drain k8s-w0 for maintenance
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl drain k8s-w0 --ignore-daemonsets
# Result
node/k8s-w0 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/cilium-envoy-8tgrn, kube-system/cilium-wszbk, kube-system/kube-proxy-xhjtq
evicting pod default/webpod-697b545f57-rpblf
pod/webpod-697b545f57-rpblf evicted
node/k8s-w0 drained
- kubectl drain safely evicts the pods on k8s-w0 and blocks further scheduling
2. Disable BGP on the node (enable-bgp=false)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl label nodes k8s-w0 enable-bgp=false --overwrite
# Result
node/k8s-w0 labeled
- ์ ์ง๋ณด์๋ฅผ ์ํด
k8s-w0
๋ ธ๋์enable-bgp
๋ผ๋ฒจ์false
๋ก ๋ณ๊ฒฝ - ์ด๋ก ์ธํด Cilium์ BGP ๋ฐ๋ชฌ์ด ํด๋น ๋ ธ๋์์๋ ๋์ํ์ง ์๊ฒ ๋จ
3. Node status (SchedulingDisabled)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node
โ ย ์ถ๋ ฅ
1
2
3
4
NAME STATUS ROLES AGE VERSION
k8s-ctr Ready control-plane 124m v1.33.2
k8s-w0 Ready,SchedulingDisabled <none> 115m v1.33.2
k8s-w1 Ready <none> 124m v1.33.2
- The k8s-w0 node now shows Ready,SchedulingDisabled
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs
NAME AGE
k8s-ctr 70m
k8s-w1 70m
- k8s-w0 has also disappeared from the CiliumBGPNodeConfig list
4. BGP routes and peers
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes
โ ย ์ถ๋ ฅ
1
2
3
4
5
(Defaulting to `available ipv4 unicast` routes, please see help for more options)
Node VRouter Prefix NextHop Age Attrs
k8s-ctr 65001 172.20.0.0/24 0.0.0.0 23m21s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1 65001 172.20.1.0/24 0.0.0.0 1h11m13s [{Origin: i} {Nexthop: 0.0.0.0}]
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp peers
โ ย ์ถ๋ ฅ
1
2
3
Node Local AS Peer AS Peer Address Session State Uptime Family Received Advertised
k8s-ctr 65001 65000 192.168.10.200 established 15m58s ipv4/unicast 3 2
k8s-w1 65001 65000 192.168.10.200 established 15m58s ipv4/unicast 3 2
- ์ถ๋ ฅ์์
k8s-w0
๋ผ์ฐํธ ์ ๋ณด๊ฐ ์ฌ๋ผ์ง
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp summary'"
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 5
RIB entries 5, using 960 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
192.168.10.100 4 65001 400 404 0 0 0 00:19:48 1 3 N/A
192.168.10.101 4 65001 400 404 0 0 0 00:19:49 1 3 N/A
192.168.20.100 4 65001 266 267 0 0 0 00:06:48 Active 0 N/A
Total number of neighbors 3
- On the FRR router, the session to 192.168.20.100 now shows as Active (i.e. no longer Established)
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip route bgp'"
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
f - OpenFabric,
> - selected route, * - FIB route, q - queued, r - rejected, b - backup
t - trapped, o - offload failure
B>* 172.20.0.0/24 [20/0] via 192.168.10.100, eth1, weight 1, 00:18:28
B>* 172.20.1.0/24 [20/0] via 192.168.10.101, eth1, weight 1, 00:18:28
- ๋ผ์ฐํ
ํ
์ด๋ธ์์๋
k8s-w0
์ CIDR์ด ์ ๊ฑฐ๋จ
5. Restore the node (enable-bgp=true & uncordon)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl label nodes k8s-w0 enable-bgp=true --overwrite
# Result
node/k8s-w0 labeled
- ์ ์ง๋ณด์๊ฐ ๋๋ ํ
k8s-w0
๋ผ๋ฒจ์ ๋ค์enable-bgp=true
๋ก ์๋ณต
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl uncordon k8s-w0
# Result
node/k8s-w0 uncordoned
- kubectl uncordon re-enables scheduling and returns the node to normal service
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node
kubectl get ciliumbgpnodeconfigs
cilium bgp routes
cilium bgp peers
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
NAME STATUS ROLES AGE VERSION
k8s-ctr Ready control-plane 132m v1.33.2
k8s-w0 Ready <none> 123m v1.33.2
k8s-w1 Ready <none> 132m v1.33.2
NAME AGE
k8s-ctr 77m
k8s-w0 47s
k8s-w1 77m
(Defaulting to `available ipv4 unicast` routes, please see help for more options)
Node VRouter Prefix NextHop Age Attrs
k8s-ctr 65001 172.20.0.0/24 0.0.0.0 29m55s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0 65001 172.20.2.0/24 0.0.0.0 48s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1 65001 172.20.1.0/24 0.0.0.0 1h17m47s [{Origin: i} {Nexthop: 0.0.0.0}]
Node Local AS Peer AS Peer Address Session State Uptime Family Received Advertised
k8s-ctr 65001 65000 192.168.10.200 established 22m2s ipv4/unicast 4 2
k8s-w0 65001 65000 192.168.10.200 established 46s ipv4/unicast 4 2
k8s-w1 65001 65000 192.168.10.200 established 22m2s ipv4/unicast 4 2
6. Pod distribution across nodes
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 1 (31m ago) 117m 172.20.0.35 k8s-ctr <none> <none>
webpod-697b545f57-fbtbj 1/1 Running 1 (31m ago) 119m 172.20.0.6 k8s-ctr <none> <none>
webpod-697b545f57-lzxbc 1/1 Running 0 10m 172.20.1.98 k8s-w1 <none> <none>
webpod-697b545f57-pxhvr 1/1 Running 0 119m 172.20.1.65 k8s-w1 <none> <none>
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 0
# Result
deployment.apps/webpod scaled
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 3
# Result
deployment.apps/webpod scaled
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 1 (31m ago) 117m 172.20.0.35 k8s-ctr <none> <none>
webpod-697b545f57-5twrq 1/1 Running 0 7s 172.20.1.119 k8s-w1 <none> <none>
webpod-697b545f57-cp7xq 1/1 Running 0 7s 172.20.0.15 k8s-ctr <none> <none>
webpod-697b545f57-xtmdx 1/1 Running 0 7s 172.20.2.35 k8s-w0 <none> <none>
- Scaling the webpod deployment down and back up with kubectl scale confirms that pods are scheduled onto k8s-w0 again
๐ซ CRD ์ํ ๋ณด๊ณ ๋นํ์ฑํ
- https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-operation/#disabling-crd-status-report
- By default, Cilium BGP records peer state, session, and route information in the status field of the CiliumBGPNodeConfig resources
- In large clusters, however, these frequent status updates can put load on the Kubernetes API server
- The official documentation therefore recommends turning BGP status reporting off
1. Current BGP status (status reporting enabled)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-ctr",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "15080",
"uid": "a72d5068-f106-4b37-a0a7-2ad0e72e8f9d"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"establishedTime": "2025-08-15T15:22:57Z",
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peeringState": "established",
"routeCount": [
{
"advertised": 2,
"afi": "ipv4",
"received": 3,
"safi": "unicast"
}
],
"timers": {
"appliedHoldTimeSeconds": 9,
"appliedKeepaliveSeconds": 3
}
}
]
}
]
}
},
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T15:44:11Z",
"generation": 1,
"name": "k8s-w0",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "16068",
"uid": "fd222576-cc33-4f4f-b7cd-c8157fbc8009"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"establishedTime": "2025-08-15T15:44:13Z",
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peeringState": "established",
"routeCount": [
{
"advertised": 2,
"afi": "ipv4",
"received": 3,
"safi": "unicast"
}
],
"timers": {
"appliedHoldTimeSeconds": 9,
"appliedKeepaliveSeconds": 3
}
}
]
}
]
}
},
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-w1",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "15076",
"uid": "d98cdab1-5d96-4ecf-ae47-1cc3c80a3071"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"establishedTime": "2025-08-15T15:22:57Z",
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peeringState": "established",
"routeCount": [
{
"advertised": 2,
"afi": "ipv4",
"received": 3,
"safi": "unicast"
}
],
"timers": {
"appliedHoldTimeSeconds": 9,
"appliedKeepaliveSeconds": 3
}
}
]
}
]
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": ""
}
}
- Each node's (k8s-ctr, k8s-w0, k8s-w1) status records detailed peer information: session state (established), advertised/received routes, and keepalive/hold-time values.
2. Disable status reporting via Helm upgrade
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --version 1.18.0 --namespace kube-system --reuse-values \
--set bgpControlPlane.statusReport.enabled=false
# ๊ฒฐ๊ณผ
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 00:51:38 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
- The cilium chart is upgraded successfully; from this point on BGP status reporting stops.
3. Rolling restart of the Cilium DaemonSet
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
4. Verify the result (status report removed)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq
✅ Output
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-ctr",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "17327",
"uid": "a72d5068-f106-4b37-a0a7-2ad0e72e8f9d"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {}
},
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T15:44:11Z",
"generation": 1,
"name": "k8s-w0",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "17231",
"uid": "fd222576-cc33-4f4f-b7cd-c8157fbc8009"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {}
},
{
"apiVersion": "cilium.io/v2",
"kind": "CiliumBGPNodeConfig",
"metadata": {
"creationTimestamp": "2025-08-15T14:27:12Z",
"generation": 1,
"name": "k8s-w1",
"ownerReferences": [
{
"apiVersion": "cilium.io/v2",
"controller": true,
"kind": "CiliumBGPClusterConfig",
"name": "cilium-bgp",
"uid": "e1f4b328-d375-4a7c-a99b-ed2658602a14"
}
],
"resourceVersion": "17229",
"uid": "d98cdab1-5d96-4ecf-ae47-1cc3c80a3071"
},
"spec": {
"bgpInstances": [
{
"localASN": 65001,
"name": "instance-65001",
"peers": [
{
"name": "tor-switch",
"peerASN": 65000,
"peerAddress": "192.168.10.200",
"peerConfigRef": {
"name": "cilium-peer"
}
}
]
}
]
},
"status": {}
}
],
"kind": "List",
"metadata": {
"resourceVersion": ""
}
}
- The status field is now empty (status: {}).
- BGP state is no longer recorded through the CRD, reducing load on the API server.
๐ท๏ธ Advertising the LoadBalancer External IP over BGP
- https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-v2/#service-virtual-ips
- Change the Kubernetes Service type to LoadBalancer so an External IP is assigned.
- Unlike L2 Announcements, BGP is routing-based, so the External IP can be advertised even when it lies outside the node network range.
1. Create a Cilium LoadBalancer IP Pool
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
name: "cilium-pool"
spec:
allowFirstLastIPs: "No"
blocks:
- cidr: "172.16.1.0/24"
EOF
# ๊ฒฐ๊ณผ
ciliumloadbalancerippool.cilium.io/cilium-pool created
- Register the 172.16.1.0/24 range as a pool using the CiliumLoadBalancerIPPool CRD.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
NAME DISABLED CONFLICTING IPS AVAILABLE AGE
cilium-pool false False 254 3m12s
- A /24 range (172.16.1.0/24) holds 256 addresses; with allowFirstLastIPs: "No" the first and last addresses are excluded, leaving 254 assignable IPs.
2. Change the Service type to LoadBalancer and check the External IP
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{"spec": {"type": "LoadBalancer"}}'
# ๊ฒฐ๊ณผ
service/webpod patched
- Patch the existing webpod Service to type LoadBalancer so it receives an External IP.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod
✅ Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webpod LoadBalancer 10.96.54.159 172.16.1.1 80:32567/TCP 15h
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
✅ Output
NAME DISABLED CONFLICTING IPS AVAILABLE AGE
cilium-pool false False 253 4m50s
- One IP is consumed from cilium-pool; 172.16.1.1 is assigned as the externally reachable address.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe svc webpod | grep 'Traffic Policy'
✅ Output
External Traffic Policy: Cluster
Internal Traffic Policy: Cluster
- Both traffic policies default to Cluster mode, so traffic is spread across all nodes.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg service list
✅ Output
ID Frontend Service Type Backend
1 10.96.12.94:80/TCP ClusterIP 1 => 172.20.0.232:4245/TCP (active)
2 0.0.0.0:30003/TCP NodePort 1 => 172.20.0.13:8081/TCP (active)
5 10.96.33.159:80/TCP ClusterIP 1 => 172.20.0.13:8081/TCP (active)
6 10.96.198.41:443/TCP ClusterIP 1 => 172.20.0.122:10250/TCP (active)
7 10.96.0.1:443/TCP ClusterIP 1 => 192.168.10.100:6443/TCP (active)
8 10.96.137.113:443/TCP ClusterIP 1 => 192.168.10.101:4244/TCP (active)
9 10.96.0.10:53/TCP ClusterIP 1 => 172.20.0.82:53/TCP (active)
2 => 172.20.0.104:53/TCP (active)
10 10.96.0.10:53/UDP ClusterIP 1 => 172.20.0.82:53/UDP (active)
2 => 172.20.0.104:53/UDP (active)
11 10.96.0.10:9153/TCP ClusterIP 1 => 172.20.0.82:9153/TCP (active)
2 => 172.20.0.104:9153/TCP (active)
12 10.96.54.159:80/TCP ClusterIP 1 => 172.20.0.15:80/TCP (active)
2 => 172.20.1.119:80/TCP (active)
3 => 172.20.2.35:80/TCP (active)
14 0.0.0.0:32567/TCP NodePort 1 => 172.20.0.15:80/TCP (active)
2 => 172.20.1.119:80/TCP (active)
3 => 172.20.2.35:80/TCP (active)
17 172.16.1.1:80/TCP LoadBalancer 1 => 172.20.0.15:80/TCP (active)
2 => 172.20.1.119:80/TCP (active)
3 => 172.20.2.35:80/TCP (active)
- The 172.16.1.1:80/TCP LoadBalancer frontend is mapped to the three Pod backends (172.20.x.x).
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LBIP=$(kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $LBIP
✅ Output
172.16.1.1Hostname: webpod-697b545f57-5twrq
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.119
IP: fe80::dcab:bcff:fee2:3765
RemoteAddr: 192.168.10.100:59608
GET / HTTP/1.1
Host: 172.16.1.1
User-Agent: curl/8.5.0
Accept: */*
- A curl 172.16.1.1 test from inside the cluster returns a normal service response.
3. Monitor the router's routing table
(โ|HomeLab:N/A) root@k8s-ctr:~# watch "sshpass -p 'vagrant' ssh vagrant@router ip -c route"
✅ Output
Every 2.0s: sshpass -p 'vagrant' ssh vagrant@router ip -c route k8s-ctr: Sat Aug 16 14:04:23 2025
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
172.20.0.0/24 nhid 92 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 88 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 94 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
- At this point only the Pod CIDR ranges (172.20.0.0/24, 172.20.1.0/24, 172.20.2.0/24) are advertised via Cilium BGP.
4. Check the Cilium BGP Advertisements (Pod CIDR)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumBGPAdvertisement
✅ Output
NAME AGE
bgp-advertisements 14h
- Only the policy that advertises the Pod CIDR exists so far.
5. Create a new BGP Advertisement
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
✅ Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15h
webpod LoadBalancer 10.96.54.159 172.16.1.1 80:32567/TCP 15h
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
name: bgp-advertisements-lb-exip-webpod
labels:
advertise: bgp
spec:
advertisements:
- advertisementType: "Service"
service:
addresses:
- LoadBalancerIP
selector:
matchExpressions:
- { key: app, operator: In, values: [ webpod ] }
EOF
# ๊ฒฐ๊ณผ
ciliumbgpadvertisement.cilium.io/bgp-advertisements-lb-exip-webpod created
- advertisementType: "Service" with addresses: LoadBalancerIP advertises only the Service's LoadBalancer External IP.
- Only the LoadBalancer IP (172.16.1.1) of the selected service (app=webpod) becomes an advertisement target.
6. Confirm the LoadBalancer External IP route on the router
Every 2.0s: sshpass -p 'vagrant' ssh vagrant@router ip -c route k8s-ctr: Sat Aug 16 14:13:23 2025
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
172.16.1.1 nhid 105 proto bgp metric 20
nexthop via 192.168.10.101 dev eth1 weight 1
nexthop via 192.168.10.100 dev eth1 weight 1
nexthop via 192.168.20.100 dev eth2 weight 1
172.20.0.0/24 nhid 92 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 88 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 94 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
- After the new CiliumBGPAdvertisement is created, a 172.16.1.1/32 route appears in the router's routing table.
- The nexthops are the three cluster nodes (192.168.10.100, 192.168.10.101, 192.168.20.100).
- Because they share the same preference, multipath routing is formed.
7. Check Cilium BGP Advertisements and route policies
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumBGPAdvertisement
✅ Output
NAME AGE
bgp-advertisements 14h
bgp-advertisements-lb-exip-webpod 2m58s
- A new CiliumBGPAdvertisement policy (bgp-advertisements-lb-exip-webpod) has been added.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium-dbg bgp route-policies
VRouter Policy Name Type Match Peers Match Families Match Prefixes (Min..Max Len) RIB Action Path Actions
65001 allow-local import accept
65001 tor-switch-ipv4-PodCIDR export 192.168.10.200/32 172.20.1.0/24 (24..24) accept
65001 tor-switch-ipv4-Service-webpod-default-LoadBalancerIP export 192.168.10.200/32 172.16.1.1/32 (32..32) accept
- A route policy is generated per service; this one is named tor-switch-ipv4-Service-webpod-default-LoadBalancerIP.
- As a result, both the Pod CIDRs and the Service LoadBalancer IP (172.16.1.1/32) are exported.
8. Check Cilium BGP routes (per-node advertisement state)
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes available ipv4 unicast
✅ Output
Node VRouter Prefix NextHop Age Attrs
k8s-ctr 65001 172.16.1.1/32 0.0.0.0 5m6s [{Origin: i} {Nexthop: 0.0.0.0}]
65001 172.20.0.0/24 0.0.0.0 13h23m31s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0 65001 172.16.1.1/32 0.0.0.0 5m6s [{Origin: i} {Nexthop: 0.0.0.0}]
65001 172.20.2.0/24 0.0.0.0 13h23m43s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1 65001 172.16.1.1/32 0.0.0.0 5m6s [{Origin: i} {Nexthop: 0.0.0.0}]
65001 172.20.1.0/24 0.0.0.0 13h23m44s [{Origin: i} {Nexthop: 0.0.0.0}]
- Every node in the cluster (k8s-ctr, k8s-w0, k8s-w1) advertises the 172.16.1.1/32 route over BGP.
- The router accepts these routes and can now send traffic to any of the nodes.
9. Confirm multipath in the router's BGP table
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip route bgp'"
✅ Output
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
f - OpenFabric,
> - selected route, * - FIB route, q - queued, r - rejected, b - backup
t - trapped, o - offload failure
B>* 172.16.1.1/32 [20/0] via 192.168.10.100, eth1, weight 1, 00:07:20
* via 192.168.10.101, eth1, weight 1, 00:07:20
* via 192.168.20.100, eth2, weight 1, 00:07:20
B>* 172.20.0.0/24 [20/0] via 192.168.10.100, eth1, weight 1, 00:29:43
B>* 172.20.1.0/24 [20/0] via 192.168.10.101, eth1, weight 1, 00:29:42
B>* 172.20.2.0/24 [20/0] via 192.168.20.100, eth2, weight 1, 00:29:42
- In the router's BGP table the 172.16.1.1/32 prefix is recorded as multipath.
- Because the three nodes (192.168.10.100, 192.168.10.101, 192.168.20.100) advertise the same prefix, the router treats all three paths as valid (*).
- With equal BGP path preference, all of them are installed into the kernel routing table as multipath.
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp summary'"
✅ Output
IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 10
RIB entries 9, using 1728 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
192.168.10.100 4 65001 1284 1294 0 0 0 00:31:06 2 5 N/A
192.168.10.101 4 65001 1284 1294 0 0 0 00:31:05 2 5 N/A
192.168.20.100 4 65001 1125 1132 0 0 0 00:31:05 2 5 N/A
Total number of neighbors 3
- The show ip bgp summary output confirms that all three peers advertise the same prefix.
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp'"
✅ Output
BGP table version is 10, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*> 10.10.1.0/24 0.0.0.0 0 32768 i
*= 172.16.1.1/32 192.168.20.100 0 65001 i
*= 192.168.10.101 0 65001 i
*> 192.168.10.100 0 65001 i
*> 172.20.0.0/24 192.168.10.100 0 65001 i
*> 172.20.1.0/24 192.168.10.101 0 65001 i
*> 172.20.2.0/24 192.168.20.100 0 65001 i
Displayed 5 routes and 7 total paths
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "sudo vtysh -c 'show ip bgp 172.16.1.1/32'"
BGP routing table entry for 172.16.1.1/32, version 10
Paths: (3 available, best #3, table default)
Advertised to non peer-group peers:
192.168.10.100 192.168.10.101 192.168.20.100
65001
192.168.20.100 from 192.168.20.100 (192.168.20.100)
Origin IGP, valid, external, multipath
Last update: Sat Aug 16 14:10:56 2025
65001
192.168.10.101 from 192.168.10.101 (192.168.10.101)
Origin IGP, valid, external, multipath
Last update: Sat Aug 16 14:10:56 2025
65001
192.168.10.100 from 192.168.10.100 (192.168.10.100)
Origin IGP, valid, external, multipath, best (Router ID)
Last update: Sat Aug 16 14:10:56 2025
- Querying just the 172.16.1.1/32 prefix also shows every multipath entry; the best path is chosen by Router ID.
๐ Calling the LB External IP from the router
1. Test calling the LoadBalancer External IP from the router
root@router:~# LBIP=172.16.1.1
curl -s $LBIP
✅ Output
Hostname: webpod-697b545f57-cp7xq
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.15
IP: fe80::4870:31ff:fe42:a8a6
RemoteAddr: 192.168.10.200:43094
GET / HTTP/1.1
Host: 172.16.1.1
User-Agent: curl/8.5.0
Accept: */*
- Call the LB External IP (172.16.1.1) from the router with curl.
2. Check the load-balancing result from the router
root@router:~# for i in {1..100}; do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
✅ Output
36 Hostname: webpod-697b545f57-xtmdx
36 Hostname: webpod-697b545f57-5twrq
28 Hostname: webpod-697b545f57-cp7xq
- 100 repeated calls show traffic spread across the 3 Pods.
- This means Cilium LB, combined with multipath routing, is load balancing as expected.
3. Watch live calls to confirm distribution across nodes
root@router:~# while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done
✅ Output
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34884
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34900
Hostname: webpod-697b545f57-5twrq
RemoteAddr: 192.168.10.100:34916
Hostname: webpod-697b545f57-5twrq
RemoteAddr: 192.168.10.100:34924
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34940
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34946
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34948
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34954
Hostname: webpod-697b545f57-5twrq
RemoteAddr: 192.168.10.100:34964
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34966
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:34974
Hostname: webpod-697b545f57-xtmdx
RemoteAddr: 192.168.10.100:34986
Hostname: webpod-697b545f57-cp7xq
RemoteAddr: 192.168.10.200:35002
...
- Repeated curl calls show, via RemoteAddr, that requests to the same LB IP arrive through different nodes (192.168.10.100, 192.168.10.200).
- In other words, when the LB IP is reached from outside, the router spreads traffic across multiple nodes over the multipath routes.
4. Problem: every node still advertises after scaling Pods down
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 2
deployment.apps/webpod scaled
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
✅ Output
curl-pod 1/1 Running 1 (14h ago) 15h 172.20.0.35 k8s-ctr <none> <none>
webpod-697b545f57-5twrq 1/1 Running 0 13h 172.20.1.119 k8s-w1 <none> <none>
webpod-697b545f57-xtmdx 1/1 Running 0 13h 172.20.2.35 k8s-w0 <none> <none>
- Scaling webpod down to replicas=2 leaves Pods only on k8s-w0 and k8s-w1.
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes
(Defaulting to `available ipv4 unicast` routes, please see help for more options)
Node VRouter Prefix NextHop Age Attrs
k8s-ctr 65001 172.16.1.1/32 0.0.0.0 17m5s [{Origin: i} {Nexthop: 0.0.0.0}]
65001 172.20.0.0/24 0.0.0.0 13h35m30s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0 65001 172.16.1.1/32 0.0.0.0 17m5s [{Origin: i} {Nexthop: 0.0.0.0}]
65001 172.20.2.0/24 0.0.0.0 13h35m42s [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1 65001 172.16.1.1/32 0.0.0.0 17m5s [{Origin: i} {Nexthop: 0.0.0.0}]
65001 172.20.1.0/24 0.0.0.0 13h35m43s [{Origin: i} {Nexthop: 0.0.0.0}]
- Nevertheless, all nodes (k8s-ctr, k8s-w0, k8s-w1) still advertise 172.16.1.1/32.
- As a result the router also forwards traffic to a node without a backend Pod (k8s-ctr), creating an inefficient extra hop.
5. Problem: the router route persists even where no Pod exists
- Even though a node no longer hosts a Pod, the router still keeps the 172.16.1.1/32 route toward it.
- This happens because every node advertises the External IP.
6. Unnecessary path observed with tcpdump
tcpdump -i eth1 -A -s 0 -nn 'tcp port 80'
- Run tcpdump in separate terminals on each node (k8s-ctr, k8s-w1, k8s-w0).
root@router:~# LBIP=172.16.1.1
curl -s $LBIP
- When 172.16.1.1 is called, packets are observed hopping from k8s-ctr to k8s-w0 at the same time.
- That is, traffic first lands on a node without a Pod and is forwarded again, creating an unnecessary path.
๐ Applying ExternalTrafficPolicy(Local) and changing the ECMP hash policy
1. Apply ExternalTrafficPolicy(Local)
# Monitoring
(โ|HomeLab:N/A) root@k8s-ctr:~# watch "sshpass -p 'vagrant' ssh vagrant@router ip -c route"
# k8s-ctr
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch service webpod -p '{"spec":{"externalTrafficPolicy":"Local"}}'
- Change the Service's externalTrafficPolicy to Local with kubectl patch.
root@router:~# vtysh -c 'show ip bgp'
✅ Output
BGP table version is 11, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*> 10.10.1.0/24 0.0.0.0 0 32768 i
*= 172.16.1.1/32 192.168.20.100 0 65001 i
*> 192.168.10.101 0 65001 i
*> 172.20.0.0/24 192.168.10.100 0 65001 i
*> 172.20.1.0/24 192.168.10.101 0 65001 i
*> 172.20.2.0/24 192.168.20.100 0 65001 i
Displayed 5 routes and 6 total paths
- Before the change (Cluster), every node advertised over BGP, so nodes without a Pod were part of the path.
- After the change (Local), only nodes that actually host a Pod advertise, removing the unnecessary routes.
- Currently only the w0 and w1 nodes advertise.
2. Linux ECMP default hash policy
root@router:~# LBIP=172.16.1.1
root@router:~# for i in {1..100}; do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
100 Hostname: webpod-697b545f57-xtmdx
- Linux uses an L3-based (source/destination IP) hash policy by default.
- When the source and destination IPs stay the same, only one of the available paths is used even though multiple routes exist.
- That is why the curl test above gets 100% of its responses from a single Pod; the current policy can be read back as shown below.
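The policy currently in effect on the router can be checked before changing anything (0 = L3 source/destination-IP hashing, the kernel default; 1 = L4 5-tuple hashing):
root@router:~# sysctl net.ipv4.fib_multipath_hash_policy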
3. Change the ECMP hash policy
root@router:~# sudo sysctl -w net.ipv4.fib_multipath_hash_policy=1
echo "net.ipv4.fib_multipath_hash_policy=1" >> /etc/sysctl.conf
# ๊ฒฐ๊ณผ
net.ipv4.fib_multipath_hash_policy = 1
- Setting net.ipv4.fib_multipath_hash_policy=1 applies L4 port-based hashing.
- When the source port differs, a different path can be chosen, improving load distribution.
root@router:~# for i in {1..100}; do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
54 Hostname: webpod-697b545f57-5twrq
46 Hostname: webpod-697b545f57-xtmdx
4. Scale the Deployment back out
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 3
# ๊ฒฐ๊ณผ
deployment.apps/webpod scaled
- Scale webpod back to 3 replicas with kubectl scale.
- As soon as a new Pod is created, its node immediately starts advertising the BGP route.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 1 (14h ago) 16h 172.20.0.35 k8s-ctr <none> <none>
webpod-697b545f57-5twrq 1/1 Running 0 14h 172.20.1.119 k8s-w1 <none> <none>
webpod-697b545f57-npkj5 1/1 Running 0 8s 172.20.0.159 k8s-ctr <none> <none>
webpod-697b545f57-xtmdx 1/1 Running 0 14h 172.20.2.35 k8s-w0 <none> <none>
- A new Pod (webpod-697b545f57-npkj5) is placed on the k8s-ctr node as well, so the three Pods now sit on three different nodes.
5. Check the routing table
root@router:~# ip -c route
✅ Output
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.25 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
172.16.1.1 nhid 114 proto bgp metric 20
nexthop via 192.168.10.101 dev eth1 weight 1
nexthop via 192.168.10.100 dev eth1 weight 1
nexthop via 192.168.20.100 dev eth2 weight 1
172.20.0.0/24 nhid 92 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 88 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 94 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.25 metric 100
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.25 metric 100
- The k8s-ctr node is reflected as a nexthop for 172.16.1.1 again.
6. Effect of ExternalTrafficPolicy
- Cluster mode: traffic may pass through nodes without a Pod, causing inefficient routing.
- Local mode: the request is answered by a Pod on the node it arrives at, removing the extra hop; a quick way to read back the active policy is shown below.
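To confirm which policy a Service is currently using, plain kubectl is enough (prints Cluster or Local):
kubectl get svc webpod -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'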
7. Verify traffic distribution
root@router:~# for i in {1..100}; do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
35 Hostname: webpod-697b545f57-npkj5
34 Hostname: webpod-697b545f57-5twrq
31 Hostname: webpod-697b545f57-xtmdx
- 100 repeated curl requests are spread evenly across the 3 Pods.
- ExternalTrafficPolicy(Local) combined with scaling out gives stable and efficient load balancing.
BGP + SNAT + Random → recommended approach
- Combine BGP(ECMP)-based routing with a K8s Service (LB External IP, ExternalTrafficPolicy: Local).
- SNAT with the Random backend algorithm is the simplest and most generally recommended combination.
๐งฎ DSR + Maglev → a reasonable alternative
- BGP(ECMP) + Service (LB External IP, ExternalTrafficPolicy: Cluster) + DSR + Maglev
- Requires Geneve header encapsulation; the Helm values applied later in this section are sketched below.
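For orientation, these are the settings this section ends up applying, expressed as a values-file sketch (same values as the helm upgrade --set flags further below; the file name dsr-values.yaml is only an example):
cat << 'EOF' > dsr-values.yaml
tunnelProtocol: geneve      # Geneve encapsulation used for the DSR dispatch
loadBalancer:
  mode: dsr                 # Direct Server Return
  dsrDispatch: geneve
  algorithm: maglev         # Maglev consistent hashing for backend selection
EOF
# helm upgrade cilium cilium/cilium --version 1.18.0 -n kube-system --reuse-values -f dsr-values.yaml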
1. Background on DSR
- Traditional L4 load-balancer appliances are expensive.
- Direct Server Return (DSR) emerged as an alternative: the server returns the response directly to the client.
2. Check the current Cilium settings
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium status --verbose
✅ Output
...
KubeProxyReplacement Details:
Status: True
Socket LB: Enabled
Socket LB Tracing: Enabled
Socket LB Coverage: Full
Devices: eth0 fe80::5054:ff:fea7:8e7a 192.168.121.62, eth1 192.168.10.101 fe80::5054:ff:fefb:b52e (Direct Routing)
Mode: SNAT
Backend Selection: Random
Session Affinity: Enabled
NAT46/64 Support: Disabled
XDP Acceleration: Disabled
Services:
- ClusterIP: Enabled
- NodePort: Enabled (Range: 30000-32767)
- LoadBalancer: Enabled
- externalIPs: Enabled
- HostPort: Enabled
Annotations:
- service.cilium.io/node
- service.cilium.io/node-selector
- service.cilium.io/proxy-delegation
- service.cilium.io/src-ranges-policy
- service.cilium.io/type
...
- Mode: SNAT (initial state)
- Backend Selection: Random
3. Prepare the kernel module (Geneve)
(โ|HomeLab:N/A) root@k8s-ctr:~# modprobe geneve
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo modprobe geneve ; echo; done
>> node : k8s-w1 <<
>> node : k8s-w0 <<
(โ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
geneve 49152 0
ip6_udp_tunnel 16384 1 geneve
udp_tunnel 32768 1 geneve
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo lsmod | grep -E 'vxlan|geneve' ; echo; done
>> node : k8s-w1 <<
geneve 49152 0
ip6_udp_tunnel 16384 1 geneve
udp_tunnel 32768 1 geneve
>> node : k8s-w0 <<
geneve 49152 0
ip6_udp_tunnel 16384 1 geneve
udp_tunnel 32768 1 geneve
- The geneve module is loaded on every node.
- Geneve is not used for all node-to-node Pod traffic here; it is only used on the DSR forwarding path. A sketch for persisting the module load follows below.
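modprobe only loads the module into the running kernel; to keep it across reboots, one option is a modules-load.d entry (a minimal sketch, assuming systemd-based nodes; run on each node):
echo geneve | sudo tee /etc/modules-load.d/geneve.conf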
4. Apply Cilium DSR mode with Helm
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --version 1.18.0 --namespace kube-system --reuse-values \
--set tunnelProtocol=geneve --set loadBalancer.mode=dsr --set loadBalancer.dsrDispatch=geneve \
--set loadBalancer.algorithm=maglev
# ๊ฒฐ๊ณผ
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 17:29:45 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium status --verbose
✅ Output
...
KubeProxyReplacement Details:
Status: True
Socket LB: Enabled
Socket LB Tracing: Enabled
Socket LB Coverage: Full
Devices: eth0 192.168.121.62 fe80::5054:ff:fea7:8e7a, eth1 192.168.10.101 fe80::5054:ff:fefb:b52e (Direct Routing)
Mode: DSR
DSR Dispatch Mode: Geneve
Backend Selection: Maglev (Table Size: 16381)
Session Affinity: Enabled
NAT46/64 Support: Disabled
XDP Acceleration: Disabled
Services:
- ClusterIP: Enabled
- NodePort: Enabled (Range: 30000-32767)
- LoadBalancer: Enabled
- externalIPs: Enabled
- HostPort: Enabled
Annotations:
- service.cilium.io/node
- service.cilium.io/node-selector
- service.cilium.io/proxy-delegation
- service.cilium.io/src-ranges-policy
- service.cilium.io/type
...
- Mode: DSR
- DSR Dispatch Mode: Geneve
- Backend Selection: Maglev
5. Revert ExternalTrafficPolicy to Cluster
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
# ๊ฒฐ๊ณผ
service/webpod patched
- The Service is switched back to Cluster mode.
6. DSR packet capture
(1) Prepare the packet capture
tcpdump -i eth1 -w /tmp/dsr.pcap
- Run tcpdump on k8s-ctr, k8s-w1, and k8s-w0.
(2) External access test
root@router:~# curl -s $LBIP
root@router:~# curl -s $LBIP
- Send curl requests to the LB IP from the external router.
(3) Inspect the capture files
vagrant plugin install vagrant-scp
# ๊ฒฐ๊ณผ
Installing the 'vagrant-scp' plugin. This can take a few minutes...
Building native extensions. This could take a while...
Building native extensions. This could take a while...
Fetching vagrant-scp-0.5.9.gem
Installed the plugin 'vagrant-scp (0.5.9)'!
- Install the vagrant-scp plugin.
vagrant scp k8s-ctr:/tmp/dsr.pcap .
# ๊ฒฐ๊ณผ
[fog][WARNING] Unrecognized arguments: libvirt_ip_command
Warning: Permanently added '192.168.121.70' (ED25519) to the list of known hosts.
dsr.pcap 100% 94KB 46.4MB/s 00:00
- Copy the pcap file to the host and open it with Termshark for analysis.
termshark -r dsr.pcap
- Source IP: 192.168.10.100
- Destination IP: 192.168.20.100 (k8s-w0)
- Confirms that external traffic entered through the control plane and was then forwarded to a worker node.
- A Geneve encapsulation header is present.
- Packets also return directly from the control plane (k8s-ctr) to the external client (router).
- The reply's destination IP/port are the source IP/port of the original request, unchanged.
- In other words, the response is delivered DSR-style, without SNAT. A command-line filter sketch for the same capture follows below.
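If you prefer the command line over the Termshark UI, the same capture can be filtered with tshark (sketch only; geneve and tcp.port are standard Wireshark display filters):
tshark -r dsr.pcap -Y 'geneve || tcp.port == 80'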
๐งฉ Preparing the ClusterMesh environment
1. Create the West cluster
kind create cluster --name west --image kindest/node:v1.33.2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000 # sample apps
hostPort: 30000
- containerPort: 30001 # hubble ui
hostPort: 30001
- role: worker
extraPortMappings:
- containerPort: 30002 # sample apps
hostPort: 30002
networking:
podSubnet: "10.0.0.0/16"
serviceSubnet: "10.2.0.0/16"
disableDefaultCNI: true
kubeProxyMode: none
EOF
# ๊ฒฐ๊ณผ
Creating cluster "west" ...
โ Ensuring node image (kindest/node:v1.33.2) ๐ผ
โ Preparing nodes ๐ฆ ๐ฆ
โ Writing configuration ๐
โ Starting control-plane ๐น๏ธ
โ Installing StorageClass ๐พ
โ Joining worker nodes ๐
Set kubectl context to "kind-west"
You can now use your cluster with:
kubectl cluster-info --context kind-west
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community ๐
2. Create the East cluster
kind create cluster --name east --image kindest/node:v1.33.2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 31000 # sample apps
hostPort: 31000
- containerPort: 31001 # hubble ui
hostPort: 31001
- role: worker
extraPortMappings:
- containerPort: 31002 # sample apps
hostPort: 31002
networking:
podSubnet: "10.1.0.0/16"
serviceSubnet: "10.3.0.0/16"
disableDefaultCNI: true
kubeProxyMode: none
EOF
# ๊ฒฐ๊ณผ
Creating cluster "east" ...
โ Ensuring node image (kindest/node:v1.33.2) ๐ผ
โ Preparing nodes ๐ฆ ๐ฆ
โ Writing configuration ๐
โ Starting control-plane ๐น๏ธ
โ Installing StorageClass ๐พ
โ Joining worker nodes ๐
Set kubectl context to "kind-east"
You can now use your cluster with:
kubectl cluster-info --context kind-east
Not sure what to do next? ๐
Check out https://kind.sigs.k8s.io/docs/user/quick-start/
3. Check kubectl contexts
kubectl config get-contexts
✅ Output
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kind-east kind-east kind-east
kind-west kind-west kind-west
- Both clusters are registered correctly in the kubeconfig.
4. Install basic tools on the nodes
docker exec -it west-control-plane sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it west-worker sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it east-control-plane sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it east-worker sh -c 'apt update && apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
5. Switch context and list nodes
kubectl config set-context kind-east
# ๊ฒฐ๊ณผ
Context "kind-east" modified.
- Set the default context to east.
kubectl get node -v=6 --context kind-east
✅ Output
I0816 18:18:58.547416 180043 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I0816 18:18:58.547878 180043 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0816 18:18:58.547890 180043 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0816 18:18:58.547898 180043 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0816 18:18:58.547901 180043 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0816 18:18:58.561481 180043 round_trippers.go:560] GET https://127.0.0.1:38631/api?timeout=32s 200 OK in 13 milliseconds
I0816 18:18:58.564716 180043 round_trippers.go:560] GET https://127.0.0.1:38631/apis?timeout=32s 200 OK in 1 milliseconds
I0816 18:18:58.598067 180043 round_trippers.go:560] GET https://127.0.0.1:38631/api/v1/nodes?limit=500 200 OK in 14 milliseconds
NAME STATUS ROLES AGE VERSION
east-control-plane NotReady control-plane 7m31s v1.33.2
east-worker NotReady <none> 7m20s v1.33.2
1
kubectl get node -v=6
✅ Output
I0816 18:19:58.446920 180152 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I0816 18:19:58.449078 180152 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0816 18:19:58.449176 180152 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0816 18:19:58.449218 180152 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0816 18:19:58.449263 180152 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0816 18:19:58.465153 180152 round_trippers.go:560] GET https://127.0.0.1:38631/api/v1/nodes?limit=500 200 OK in 7 milliseconds
NAME STATUS ROLES AGE VERSION
east-control-plane NotReady control-plane 8m31s v1.33.2
east-worker NotReady <none> 8m20s v1.33.2
- Lists the east cluster's node information.
kubectl get node -v=6 --context kind-west
✅ Output
I0816 18:21:10.714221 180273 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I0816 18:21:10.719211 180273 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0816 18:21:10.719796 180273 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0816 18:21:10.719902 180273 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0816 18:21:10.719986 180273 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0816 18:21:10.741286 180273 round_trippers.go:560] GET https://127.0.0.1:43999/api?timeout=32s 200 OK in 19 milliseconds
I0816 18:21:10.743285 180273 round_trippers.go:560] GET https://127.0.0.1:43999/apis?timeout=32s 200 OK in 1 milliseconds
I0816 18:21:10.751821 180273 round_trippers.go:560] GET https://127.0.0.1:43999/api/v1/nodes?limit=500 200 OK in 4 milliseconds
NAME STATUS ROLES AGE VERSION
west-control-plane NotReady control-plane 10m v1.33.2
west-worker NotReady <none> 10m v1.33.2
- With the --context option the west cluster's information can be viewed as well.
6. Check node status
kubectl get pod -A
✅ Output
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-674b8bbfcf-2tm2l 0/1 Pending 0 10m
kube-system coredns-674b8bbfcf-c9qsg 0/1 Pending 0 10m
kube-system etcd-east-control-plane 1/1 Running 0 10m
kube-system kube-apiserver-east-control-plane 1/1 Running 0 10m
kube-system kube-controller-manager-east-control-plane 1/1 Running 0 10m
kube-system kube-scheduler-east-control-plane 1/1 Running 0 10m
local-path-storage local-path-provisioner-7dc846544d-mwfdc 0/1 Pending 0 10m
- Because neither kube-proxy nor a CNI is installed yet, every node is NotReady.
- System Pods (coredns, local-path-provisioner, etc.) are Pending.
7. Set context aliases
alias kwest='kubectl --context kind-west'
alias keast='kubectl --context kind-east'
8. Node details
kwest get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
west-control-plane NotReady control-plane 14m v1.33.2 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 6.16.0-arch2-1 containerd://2.1.3
west-worker NotReady <none> 14m v1.33.2 172.18.0.3 <none> Debian GNU/Linux 12 (bookworm) 6.16.0-arch2-1 containerd://2.1.3
keast get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
east-control-plane NotReady control-plane 13m v1.33.2 172.18.0.4 <none> Debian GNU/Linux 12 (bookworm) 6.16.0-arch2-1 containerd://2.1.3
east-worker NotReady <none> 13m v1.33.2 172.18.0.5 <none> Debian GNU/Linux 12 (bookworm) 6.16.0-arch2-1 containerd://2.1.3
๐ธ๏ธ Deploying Cilium CNI for ClusterMesh
1. Install the Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all \
https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
✅ Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 56.6M 100 56.6M 0 0 16.1M 0 0:00:03 0:00:03 --:--:-- 18.2M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 92 100 92 0 0 338 0 --:--:-- --:--:-- --:--:-- 338
cilium-linux-amd64.tar.gz: OK
cilium
- Install the cilium CLI on the host OS.
2. Install Cilium on the West cluster
cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.0.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.1.0.0/16}' \
--set cluster.name=west --set cluster.id=1 \
--context kind-west
# ๊ฒฐ๊ณผ
๐ฎ Auto-detected Kubernetes kind: kind
โน๏ธ Using Cilium version 1.17.6
โน๏ธ Using cluster name "west"
โน๏ธ Detecting real Kubernetes API server addr and port on Kind
๐ฎ Auto-detected kube-proxy has not been installed
โน๏ธ Cilium will fully replace all functionalities of kube-proxy
- Install version 1.17.6 with cilium install.
- Key settings: routingMode=native, autoDirectNodeRoutes=true, ipv4NativeRoutingCIDR=10.0.0.0/16.
- Cluster identity: cluster.name=west, cluster.id=1.
3. Install Cilium on the East cluster
cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.1.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.0.0.0/16}' \
--set cluster.name=east --set cluster.id=2 \
--context kind-east
# ๊ฒฐ๊ณผ
๐ฎ Auto-detected Kubernetes kind: kind
โน๏ธ Using Cilium version 1.17.6
โน๏ธ Using cluster name "east"
โน๏ธ Detecting real Kubernetes API server addr and port on Kind
๐ฎ Auto-detected kube-proxy has not been installed
โน๏ธ Cilium will fully replace all functionalities of kube-proxy
- Key settings: routingMode=native, autoDirectNodeRoutes=true, ipv4NativeRoutingCIDR=10.1.0.0/16.
- Cluster identity: cluster.name=east, cluster.id=2.
4. Verify the installation (West/East Pod status)
kwest get pod -A && keast get pod -A
✅ Output
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-9jlht 1/1 Running 0 2m22s
kube-system cilium-envoy-5gpxx 1/1 Running 0 2m22s
kube-system cilium-envoy-skv7b 1/1 Running 0 2m22s
kube-system cilium-operator-7dbb574d5b-drtg2 1/1 Running 0 2m22s
kube-system cilium-qvpkv 1/1 Running 0 2m22s
kube-system coredns-674b8bbfcf-kwxv5 1/1 Running 0 34m
kube-system coredns-674b8bbfcf-nb96t 1/1 Running 0 34m
kube-system etcd-west-control-plane 1/1 Running 0 34m
kube-system kube-apiserver-west-control-plane 1/1 Running 0 34m
kube-system kube-controller-manager-west-control-plane 1/1 Running 0 34m
kube-system kube-scheduler-west-control-plane 1/1 Running 0 34m
local-path-storage local-path-provisioner-7dc846544d-jrdw8 1/1 Running 0 34m
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-envoy-mrzw8 1/1 Running 0 94s
kube-system cilium-envoy-vq5r7 1/1 Running 0 94s
kube-system cilium-kxddr 1/1 Running 0 94s
kube-system cilium-operator-867f8dc978-44zqb 1/1 Running 0 94s
kube-system cilium-qn52j 1/1 Running 0 94s
kube-system coredns-674b8bbfcf-2tm2l 1/1 Running 0 33m
kube-system coredns-674b8bbfcf-c9qsg 1/1 Running 0 33m
kube-system etcd-east-control-plane 1/1 Running 0 33m
kube-system kube-apiserver-east-control-plane 1/1 Running 0 33m
kube-system kube-controller-manager-east-control-plane 1/1 Running 0 33m
kube-system kube-scheduler-east-control-plane 1/1 Running 0 33m
local-path-storage local-path-provisioner-7dc846544d-mwfdc 1/1 Running 0 33m
- The Cilium DaemonSet, Envoy, and Operator are all Running.
- CoreDNS, etcd, apiserver, controller-manager, scheduler, and local-path-provisioner are all healthy.
cilium status --context kind-west
✅ Output
/ยฏยฏ\
/ยฏยฏ\__/ยฏยฏ\ Cilium: OK
\__/ยฏยฏ\__/ Operator: OK
/ยฏยฏ\__/ยฏยฏ\ Envoy DaemonSet: OK
\__/ยฏยฏ\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 2
cilium-envoy Running: 2
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 3/3 managed by Cilium
Helm chart version: 1.17.6
Image versions cilium quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
cilium-operator quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
1
cilium status --context kind-east
✅ Output
/ยฏยฏ\
/ยฏยฏ\__/ยฏยฏ\ Cilium: OK
\__/ยฏยฏ\__/ Operator: OK
/ยฏยฏ\__/ยฏยฏ\ Envoy DaemonSet: OK
\__/ยฏยฏ\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 2
cilium-envoy Running: 2
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 3/3 managed by Cilium
Helm chart version: 1.17.6
Image versions cilium quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
cilium-operator quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
- DaemonSet, Envoy, Operator → Desired and Ready counts match (deployment complete).
- ClusterMesh and Hubble Relay are still disabled.
5. Check the IP masquerading configuration
kwest -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list
✅ Output
IP PREFIX/ADDRESS
10.1.0.0/16
169.254.0.0/16
1
keast -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list
✅ Output
IP PREFIX/ADDRESS
10.0.0.0/16
169.254.0.0/16
- Each cluster treats the other cluster's Pod CIDR as non-masquerade.
6. Check the CoreDNS configuration
kubectl describe cm -n kube-system coredns --context kind-west | grep kubernetes
✅ Output
kubernetes cluster.local in-addr.arpa ip6.arpa {
1
kubectl describe cm -n kube-system coredns --context kind-west | grep kubernetes
✅ Output
kubernetes cluster.local in-addr.arpa ip6.arpa {
- Both clusters use the cluster.local domain.
- The ConfigMap output shows kubernetes cluster.local in-addr.arpa ip6.arpa.
๐ Setting up Cluster Mesh
1. Initial state: the clusters do not know each other's Pod CIDRs
docker exec -it west-control-plane ip -c route
docker exec -it west-worker ip -c route
docker exec -it east-control-plane ip -c route
docker exec -it east-worker ip -c route
✅ Output
default via 172.18.0.1 dev eth0
10.0.0.0/24 via 10.0.0.19 dev cilium_host proto kernel src 10.0.0.19
10.0.0.19 dev cilium_host proto kernel scope link
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2
default via 172.18.0.1 dev eth0
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel
10.0.1.0/24 via 10.0.1.99 dev cilium_host proto kernel src 10.0.1.99
10.0.1.99 dev cilium_host proto kernel scope link
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
default via 172.18.0.1 dev eth0
10.1.0.0/24 via 10.1.0.165 dev cilium_host proto kernel src 10.1.0.165
10.1.0.165 dev cilium_host proto kernel scope link
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4
default via 172.18.0.1 dev eth0
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel
10.1.1.0/24 via 10.1.1.122 dev cilium_host proto kernel src 10.1.1.122
10.1.1.122 dev cilium_host proto kernel scope link
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.5
- The kind environment runs on the Docker network and does not use BGP.
- Therefore no West ↔ East Pod CIDR routes exist by default.
- ip route confirms each cluster only knows its own Pod CIDRs, not the other cluster's.
2. Synchronize the CA Secret
keast get secret -n kube-system cilium-ca
NAME TYPE DATA AGE
cilium-ca Opaque 2 14m
keast delete secret -n kube-system cilium-ca
secret "cilium-ca" deleted
keast get secret -n kube-system cilium-ca
Error from server (NotFound): secrets "cilium-ca" not found
- Delete the default cilium-ca Secret from the east cluster.
kubectl --context kind-west get secret -n kube-system cilium-ca -o yaml | \
kubectl --context kind-east create -f -
# ๊ฒฐ๊ณผ
secret/cilium-ca created
- Export the west cluster's cilium-ca Secret and create it in the east cluster.
keast get secret -n kube-system cilium-ca
NAME TYPE DATA AGE
cilium-ca Opaque 2 62s
- Both clusters now share the same CA, enabling mutual TLS authenticated communication; a quick verification sketch follows below.
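To double-check that the two Secrets really carry the same CA certificate, the ca.crt fields can be compared (a minimal sketch using only standard kubectl and coreutils; the two hashes should be identical):
for ctx in kind-west kind-east; do
  kubectl --context "$ctx" -n kube-system get secret cilium-ca -o jsonpath='{.data.ca\.crt}' | sha256sum
done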
3. Monitor ClusterMesh status
cilium clustermesh status --context kind-west --wait
cilium clustermesh status --context kind-east --wait
4. Enable ClusterMesh
cilium clustermesh enable --service-type NodePort --enable-kvstoremesh=false --context kind-west
1
cilium clustermesh enable --service-type NodePort --enable-kvstoremesh=false --context kind-east
- Run cilium clustermesh enable in each cluster.
5. Check the West cluster's Service/Endpoint
kwest get svc,ep -n kube-system clustermesh-apiserver --context kind-west
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/clustermesh-apiserver NodePort 10.2.126.217 <none> 2379:32379/TCP 2m57s
NAME ENDPOINTS AGE
endpoints/clustermesh-apiserver 10.0.1.8:2379 2m57s
- A clustermesh-apiserver Service is created with type NodePort.
- ClusterIP: 10.2.126.217, Port: 2379:32379/TCP.
- NodePort 32379 opens the channel for node-to-node communication between the clusters.
- Endpoint: 10.0.1.8:2379.
6. Check the West cluster's Pods
kwest get pod -n kube-system -owide | grep clustermesh
✅ Output
clustermesh-apiserver-5cf45db9cc-2g847 2/2 Running 0 4m29s 10.0.1.8 west-worker <none> <none>
clustermesh-apiserver-generate-certs-pl6ws 0/1 Completed 0 4m29s 172.18.0.3 west-worker <none> <none>
7. Check the East cluster's Service/Endpoint
keast get svc,ep -n kube-system clustermesh-apiserver --context kind-east
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/clustermesh-apiserver NodePort 10.3.173.28 <none> 2379:32379/TCP 3m47s
NAME ENDPOINTS AGE
endpoints/clustermesh-apiserver 10.1.1.62:2379 3m47s
- ClusterIP: 10.3.173.28, Port: 2379:32379/TCP.
- Endpoint: 10.1.1.62:2379.
8. Monitor ClusterMesh status
watch -d "cilium clustermesh status --context kind-west --wait"
watch -d "cilium clustermesh status --context kind-east --wait"
9. Connect the clusters (West ↔ East)
cilium clustermesh connect --context kind-west --destination-context kind-east
10. Verify the ClusterMesh connection (West/East)
cilium status --context kind-west
✅ Output
/ยฏยฏ\
/ยฏยฏ\__/ยฏยฏ\ Cilium: OK
\__/ยฏยฏ\__/ Operator: OK
/ยฏยฏ\__/ยฏยฏ\ Envoy DaemonSet: OK
\__/ยฏยฏ\__/ Hubble Relay: disabled
\__/ ClusterMesh: OK
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Deployment clustermesh-apiserver Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 2
cilium-envoy Running: 2
cilium-operator Running: 1
clustermesh-apiserver Running: 1
hubble-relay
Cluster Pods: 4/4 managed by Cilium
Helm chart version: 1.17.6
Image versions cilium quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
cilium-operator quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
clustermesh-apiserver quay.io/cilium/clustermesh-apiserver:v1.17.6@sha256:f619e97432db427e1511bf91af3be8ded418c53a353a09629e04c5880659d1df: 2
1
cilium status --context kind-east
✅ Output
/ยฏยฏ\
/ยฏยฏ\__/ยฏยฏ\ Cilium: OK
\__/ยฏยฏ\__/ Operator: OK
/ยฏยฏ\__/ยฏยฏ\ Envoy DaemonSet: OK
\__/ยฏยฏ\__/ Hubble Relay: disabled
\__/ ClusterMesh: OK
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Deployment clustermesh-apiserver Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 2
cilium-envoy Running: 2
cilium-operator Running: 1
clustermesh-apiserver Running: 1
hubble-relay
Cluster Pods: 4/4 managed by Cilium
Helm chart version: 1.17.6
Image versions cilium quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
cilium-operator quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
clustermesh-apiserver quay.io/cilium/clustermesh-apiserver:v1.17.6@sha256:f619e97432db427e1511bf91af3be8ded418c53a353a09629e04c5880659d1df: 2
- cilium status now reports ClusterMesh: OK on both clusters.
- DaemonSet, Envoy, Operator, and clustermesh-apiserver are all Running and the connection is stable.
11. Detailed status (verbose mode)
kwest exec -it -n kube-system ds/cilium -- cilium status --verbose
✅ Output
...
ClusterMesh: 1/1 remote clusters ready, 0 global-services
east: ready, 2 nodes, 4 endpoints, 3 identities, 0 services, 0 MCS-API service exports, 0 reconnections (last: never)
โ etcd: 1/1 connected, leases=0, lock leases=0, has-quorum=true: endpoint status checks are disabled, ID: b88364e6e9ad8658
โ remote configuration: expected=true, retrieved=true, cluster-id=2, kvstoremesh=false, sync-canaries=true, service-exports=disabled
โ synchronization status: nodes=true, endpoints=true, identities=true, services=true
...
1
keast exec -it -n kube-system ds/cilium -- cilium status --verbose
✅ Output
...
ClusterMesh: 1/1 remote clusters ready, 0 global-services
west: ready, 2 nodes, 4 endpoints, 3 identities, 0 services, 0 MCS-API service exports, 0 reconnections (last: never)
โ etcd: 1/1 connected, leases=0, lock leases=0, has-quorum=true: endpoint status checks are disabled, ID: 700452e5b45c47e8
โ remote configuration: expected=true, retrieved=true, cluster-id=1, kvstoremesh=false, sync-canaries=true, service-exports=disabled
โ synchronization status: nodes=true, endpoints=true, identities=true, services=true
...
- Detailed status of east is queried from the west cluster, and of west from the east cluster
- Synchronization items: nodes=true, endpoints=true, identities=true, services=true → synchronization is healthy
- etcd connection: 1/1 connected, quorum established (a filtered view is sketched below)
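To avoid scrolling through the full verbose output, the ClusterMesh block can be filtered out directly; a small sketch using the lab's kwest/keast aliases (the grep window size is arbitrary):

kwest exec -n kube-system ds/cilium -c cilium-agent -- cilium status --verbose | grep -A 6 "ClusterMesh:"
keast exec -n kube-system ds/cilium -c cilium-agent -- cilium status --verbose | grep -A 6 "ClusterMesh:"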
12. Verify Automatic Pod CIDR Route Injection
docker exec -it west-control-plane ip -c route
docker exec -it west-worker ip -c route
docker exec -it east-control-plane ip -c route
docker exec -it east-worker ip -c route
✅ Output
default via 172.18.0.1 dev eth0
10.0.0.0/24 via 10.0.0.19 dev cilium_host proto kernel src 10.0.0.19
10.0.0.19 dev cilium_host proto kernel scope link
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2
default via 172.18.0.1 dev eth0
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel
10.0.1.0/24 via 10.0.1.99 dev cilium_host proto kernel src 10.0.1.99
10.0.1.99 dev cilium_host proto kernel scope link
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
default via 172.18.0.1 dev eth0
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel
10.1.0.0/24 via 10.1.0.165 dev cilium_host proto kernel src 10.1.0.165
10.1.0.165 dev cilium_host proto kernel scope link
10.1.1.0/24 via 172.18.0.5 dev eth0 proto kernel
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4
default via 172.18.0.1 dev eth0
10.0.0.0/24 via 172.18.0.2 dev eth0 proto kernel
10.0.1.0/24 via 172.18.0.3 dev eth0 proto kernel
10.1.0.0/24 via 172.18.0.4 dev eth0 proto kernel
10.1.1.0/24 via 10.1.1.122 dev cilium_host proto kernel src 10.1.1.122
10.1.1.122 dev cilium_host proto kernel scope link
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.5
- After the ClusterMesh connection, each cluster's node routing table shows the peer cluster's PodCIDRs injected automatically
- As a result, Pods in the two clusters can now communicate with each other directly (a cross-check is sketched below)
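To cross-check where these routes come from, the agent's node list should show the nodes of both clusters together with their Pod CIDRs; a sketch, assuming the in-agent CLI exposes the node list subcommand as in recent releases:

kwest exec -n kube-system ds/cilium -c cilium-agent -- cilium node list   # expect west/* and east/* entries with their CIDRs
keast exec -n kube-system ds/cilium -c cilium-agent -- cilium node list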
Hubble enable
1. Add and Update the Cilium Helm Repository
helm repo add cilium https://helm.cilium.io/
helm repo update
✅ Output
"cilium" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cilium" chart repository
Update Complete. ⎈Happy Helming!⎈
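As an optional sanity check before upgrading, a standard Helm command can confirm that the chart version used below (1.17.6) is actually published in the repo:

helm search repo cilium/cilium --versions | head -5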
2. Enable Hubble on the West Cluster
helm upgrade cilium cilium/cilium --version 1.17.6 --namespace kube-system --reuse-values \
--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort --set hubble.ui.service.nodePort=30001 --kube-context kind-west
✅ Output
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 19:29:04 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.17.6.
For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
3. Restart the Cilium DaemonSet on the West Cluster
kwest -n kube-system rollout restart ds/cilium
# Result
daemonset.apps/cilium restarted
4. Enable Hubble on the East Cluster
helm upgrade cilium cilium/cilium --version 1.17.6 --namespace kube-system --reuse-values \
--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort --set hubble.ui.service.nodePort=31001 --kube-context kind-east
✅ Output
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 16 19:30:52 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.17.6.
For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
5. Restart the Cilium DaemonSet on the East Cluster
keast -n kube-system rollout restart ds/cilium
# Result
daemonset.apps/cilium restarted
6. Verify Hubble UI Access
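A sketch of two ways to reach the UI, assuming the kind node IPs (172.18.0.x) seen in the earlier route output and the NodePorts set above (west 30001, east 31001); the node IPs may differ on your host.

# If the host can reach the kind docker network, the NodePorts respond directly:
curl -s -o /dev/null -w "west hubble-ui: %{http_code}\n" http://172.18.0.2:30001
curl -s -o /dev/null -w "east hubble-ui: %{http_code}\n" http://172.18.0.4:31001
# Otherwise the cilium CLI can port-forward and open the UI:
cilium hubble ui --context kind-west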
west → east Direct Pod-to-Pod Communication (tcpdump Verification)
1. Deploy Test Pods (in both west and east)
cat << EOF | kubectl apply --context kind-west -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
cat << EOF | kubectl apply --context kind-east -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
# Result
pod/curl-pod created
pod/curl-pod created
2. Check Pod Status and IPs
kwest get pod -A && keast get pod -A
✅ Output
NAMESPACE NAME READY STATUS RESTARTS AGE
default curl-pod 1/1 Running 0 55s
kube-system cilium-6l82v 1/1 Running 0 5m16s
kube-system cilium-envoy-5gpxx 1/1 Running 0 54m
kube-system cilium-envoy-skv7b 1/1 Running 0 54m
kube-system cilium-lrpcr 1/1 Running 0 5m16s
kube-system cilium-operator-7dbb574d5b-drtg2 1/1 Running 0 54m
kube-system clustermesh-apiserver-5cf45db9cc-2g847 2/2 Running 0 32m
kube-system clustermesh-apiserver-generate-certs-xvddz 0/1 Completed 0 7m53s
kube-system coredns-674b8bbfcf-kwxv5 1/1 Running 0 86m
kube-system coredns-674b8bbfcf-nb96t 1/1 Running 0 86m
kube-system etcd-west-control-plane 1/1 Running 0 86m
kube-system hubble-relay-5dcd46f5c-rqrvl 1/1 Running 0 7m54s
kube-system hubble-ui-76d4965bb6-xkn8k 2/2 Running 0 7m54s
kube-system kube-apiserver-west-control-plane 1/1 Running 0 86m
kube-system kube-controller-manager-west-control-plane 1/1 Running 0 86m
kube-system kube-scheduler-west-control-plane 1/1 Running 0 86m
local-path-storage local-path-provisioner-7dc846544d-jrdw8 1/1 Running 0 86m
NAMESPACE NAME READY STATUS RESTARTS AGE
default curl-pod 1/1 Running 0 55s
kube-system cilium-7z2kz 1/1 Running 0 24m
kube-system cilium-envoy-mrzw8 1/1 Running 0 53m
kube-system cilium-envoy-vq5r7 1/1 Running 0 53m
kube-system cilium-operator-867f8dc978-44zqb 1/1 Running 0 53m
kube-system cilium-thtxk 1/1 Running 0 24m
kube-system clustermesh-apiserver-5cf45db9cc-7wfwz 2/2 Running 0 31m
kube-system clustermesh-apiserver-generate-certs-5csbq 0/1 Completed 0 6m4s
kube-system coredns-674b8bbfcf-2tm2l 1/1 Running 0 85m
kube-system coredns-674b8bbfcf-c9qsg 1/1 Running 0 85m
kube-system etcd-east-control-plane 1/1 Running 0 85m
kube-system hubble-relay-5dcd46f5c-6qzn7 1/1 Running 0 6m5s
kube-system hubble-ui-76d4965bb6-jg78b 2/2 Running 0 6m5s
kube-system kube-apiserver-east-control-plane 1/1 Running 0 85m
kube-system kube-controller-manager-east-control-plane 1/1 Running 0 85m
kube-system kube-scheduler-east-control-plane 1/1 Running 0 85m
local-path-storage local-path-provisioner-7dc846544d-mwfdc 1/1 Running 0 85m
kwest get pod -owide && keast get pod -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 114s 10.0.1.12 west-worker <none> <none>
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 114s 10.1.1.67 east-worker <none> <none>
3. west → east Pod-to-Pod Ping Test
kubectl exec -it curl-pod --context kind-west -- ping 10.1.1.67
✅ Output
PING 10.1.1.67 (10.1.1.67) 56(84) bytes of data.
64 bytes from 10.1.1.67: icmp_seq=1 ttl=62 time=0.070 ms
64 bytes from 10.1.1.67: icmp_seq=2 ttl=62 time=0.188 ms
64 bytes from 10.1.1.67: icmp_seq=3 ttl=62 time=0.093 ms
64 bytes from 10.1.1.67: icmp_seq=4 ttl=62 time=0.120 ms
64 bytes from 10.1.1.67: icmp_seq=5 ttl=62 time=0.153 ms
....
- Replies are received normally → direct Pod-to-Pod communication is confirmed (a Hubble-based view is sketched below)
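The same ICMP flows can also be viewed through Hubble on the destination side; a hedged sketch using the hubble CLI bundled in the cilium-agent image (ds/cilium picks one agent pod, so use the cilium pod on east-worker if it lands on the other node):

keast exec -n kube-system ds/cilium -c cilium-agent -- hubble observe --protocol ICMPv4 --last 10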
4. tcpdump on the Destination Pod
Run tcpdump inside the east pod:
kubectl exec -it curl-pod --context kind-east -- tcpdump -i eth0 -nn
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:43:50.833580 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 199, length 64
10:43:50.833627 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 199, length 64
10:43:51.857541 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 200, length 64
10:43:51.857578 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 200, length 64
10:43:52.880956 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 201, length 64
10:43:52.881075 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 201, length 64
10:43:53.904522 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 202, length 64
10:43:53.904555 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 202, length 64
10:43:54.928512 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 203, length 64
10:43:54.928540 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 203, length 64
10:43:55.952560 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 204, length 64
10:43:55.952593 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 204, length 64
10:43:56.976694 IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 205, length 64
10:43:56.976763 IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 205, length 64
...
- ICMP requests and replies are seen arriving directly from the west pod's IP (10.0.1.12)
- Verifies direct Pod → Pod routing with no NAT translation
5. tcpdump on the Destination Cluster Nodes
Run tcpdump on the east-control-plane and east-worker nodes:
docker exec -it east-control-plane tcpdump -i any icmp -nn
✅ Output
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
docker exec -it east-worker tcpdump -i any icmp -nn
✅ Output
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
10:46:31.473504 eth0 In IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 356, length 64
10:46:31.473530 lxccd8da9e761ca In IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 356, length 64
10:46:31.473540 eth0 Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 356, length 64
10:46:31.988151 eth0 In IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:32.496507 eth0 In IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 357, length 64
10:46:32.496535 lxccd8da9e761ca In IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 357, length 64
10:46:32.496545 eth0 Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 357, length 64
10:46:33.488946 eth0 In IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:33.520513 eth0 In IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 358, length 64
10:46:33.520542 lxccd8da9e761ca In IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 358, length 64
10:46:33.520554 eth0 Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 358, length 64
10:46:33.569979 eth0 In IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:34.544531 eth0 In IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 359, length 64
10:46:34.544557 lxccd8da9e761ca In IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 359, length 64
10:46:34.544569 eth0 Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 359, length 64
10:46:34.990071 eth0 In IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:35.568525 eth0 In IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 360, length 64
10:46:35.568555 lxccd8da9e761ca In IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 360, length 64
10:46:35.568573 eth0 Out IP 10.1.1.67 > 10.0.1.12: ICMP echo reply, id 2, seq 360, length 64
10:46:36.492093 eth0 In IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:36.572589 eth0 In IP 172.18.0.1 > 172.18.0.5: ICMP 172.18.0.1 udp port 53 unreachable, length 53
10:46:36.593535 eth0 In IP 10.0.1.12 > 10.1.1.67: ICMP echo request, id 2, seq 361, length 64
...
- Packets arriving from the west pod are delivered by the node straight to the destination pod
- Verifies direct routing with no intermediate NAT gateway or tunneling (a config check is sketched below)
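A quick way to confirm the datapath settings behind this behaviour is to dump the agent configuration; a sketch with the cilium CLI (key names can differ slightly between versions):

cilium config view --context kind-west | grep -E "cluster-name|cluster-id|routing-mode|ipv4-native-routing-cidr"
cilium config view --context kind-east | grep -E "cluster-name|cluster-id|routing-mode|ipv4-native-routing-cidr"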
Load-balancing & Service Discovery
1. Deploy the Global Service (west / east clusters)
cat << EOF | kubectl apply --context kind-west -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
# Result
deployment.apps/webpod created
service/webpod created
cat << EOF | kubectl apply --context kind-east -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
# Result
deployment.apps/webpod created
service/webpod created
- The webpod Deployment and Service are deployed identically to both clusters
- The service.cilium.io/global: "true" annotation registers the Service as a global service (a quick annotation check follows below)
- A ClusterIP Service is created, exposing port 80
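A minimal check that the global-service annotation actually landed on both Services, using plain kubectl with the lab aliases:

kwest get svc webpod -o jsonpath='{.metadata.annotations}{"\n"}'
keast get svc webpod -o jsonpath='{.metadata.annotations}{"\n"}'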
2. Check the Endpoints
kwest get svc,ep webpod && keast get svc,ep webpod
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/webpod ClusterIP 10.2.167.94 <none> 80/TCP 118s
NAME ENDPOINTS AGE
endpoints/webpod 10.0.1.136:80,10.0.1.69:80 118s
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/webpod ClusterIP 10.3.128.46 <none> 80/TCP 56s
NAME ENDPOINTS AGE
endpoints/webpod 10.1.1.6:80,10.1.1.95:80 56s
- west → 10.0.1.136, 10.0.1.69
- east → 10.1.1.6, 10.1.1.95
- Both clusters have the webpod Service, and each cluster's Pod IPs are registered with the global service (see the EndpointSlice check below)
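Since the output above warns that v1 Endpoints is deprecated, the same backends can be read from EndpointSlices instead; a small sketch using the standard kubernetes.io/service-name label:

kwest get endpointslices -l kubernetes.io/service-name=webpod
keast get endpointslices -l kubernetes.io/service-name=webpod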
3. Check the Global Service Mapping (west)
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.2.0.1:443/TCP ClusterIP 1 => 172.18.0.2:6443/TCP (active)
2 10.2.182.189:443/TCP ClusterIP 1 => 172.18.0.3:4244/TCP (active)
3 10.2.0.10:53/UDP ClusterIP 1 => 10.0.1.177:53/UDP (active)
2 => 10.0.1.115:53/UDP (active)
4 10.2.0.10:53/TCP ClusterIP 1 => 10.0.1.177:53/TCP (active)
2 => 10.0.1.115:53/TCP (active)
5 10.2.0.10:9153/TCP ClusterIP 1 => 10.0.1.177:9153/TCP (active)
2 => 10.0.1.115:9153/TCP (active)
6 10.2.126.217:2379/TCP ClusterIP 1 => 10.0.1.8:2379/TCP (active)
7 172.18.0.3:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
9 10.2.1.9:80/TCP ClusterIP 1 => 10.0.1.120:4245/TCP (active)
10 10.2.233.237:80/TCP ClusterIP 1 => 10.0.1.158:8081/TCP (active)
11 172.18.0.3:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
12 0.0.0.0:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
13 10.2.167.94:80/TCP ClusterIP 1 => 10.0.1.69:80/TCP (active)
2 => 10.0.1.136:80/TCP (active)
3 => 10.1.1.6:80/TCP (active)
4 => 10.1.1.95:80/TCP (active)
- webpod (10.2.167.94:80/TCP) → distributed across every Pod IP in both the west and east clusters
- west: 10.0.1.69:80, 10.0.1.136:80
- east: 10.1.1.6:80, 10.1.1.95:80
4. Check the Global Service Mapping (east)
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.3.0.1:443/TCP ClusterIP 1 => 172.18.0.4:6443/TCP (active)
2 10.3.107.176:443/TCP ClusterIP 1 => 172.18.0.5:4244/TCP (active)
3 10.3.0.10:9153/TCP ClusterIP 1 => 10.1.1.45:9153/TCP (active)
2 => 10.1.1.21:9153/TCP (active)
4 10.3.0.10:53/UDP ClusterIP 1 => 10.1.1.45:53/UDP (active)
2 => 10.1.1.21:53/UDP (active)
5 10.3.0.10:53/TCP ClusterIP 1 => 10.1.1.45:53/TCP (active)
2 => 10.1.1.21:53/TCP (active)
6 10.3.173.28:2379/TCP ClusterIP 1 => 10.1.1.62:2379/TCP (active)
7 172.18.0.5:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
9 10.3.54.60:80/TCP ClusterIP 1 => 10.1.1.198:4245/TCP (active)
10 10.3.1.28:80/TCP ClusterIP 1 => 10.1.1.236:8081/TCP (active)
11 172.18.0.5:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
12 0.0.0.0:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
13 10.3.128.46:80/TCP ClusterIP 1 => 10.0.1.69:80/TCP (active)
2 => 10.0.1.136:80/TCP (active)
3 => 10.1.1.6:80/TCP (active)
4 => 10.1.1.95:80/TCP (active)
- Running the same cilium service list --clustermesh-affinity command on east
- webpod (10.3.128.46:80/TCP) → distributed across the same 4 endpoints
5. Cross-Cluster Calls
kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'
kubectl exec -it curl-pod --context kind-east -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'
- Calling webpod from west's curl-pod load-balances across the west + east endpoints
- Calling webpod from east's curl-pod load-balances across the east + west endpoints
- In other words, a single service VIP call targets Pods in both clusters (a backend-distribution check is sketched below)
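To see the distribution rather than a scrolling stream, the whoami responses can be counted per backend; a sketch that assumes the response contains a "Hostname:" line (which traefik/whoami prints):

kubectl exec -it curl-pod --context kind-west -- sh -c \
  'for i in $(seq 1 20); do curl -s --connect-timeout 1 webpod | grep Hostname; done | sort | uniq -c'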
6. Scale Down Replicas (west 2 → 1)
kwest scale deployment webpod --replicas 1
# Result
deployment.apps/webpod scaled
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.2.0.1:443/TCP ClusterIP 1 => 172.18.0.2:6443/TCP (active)
2 10.2.182.189:443/TCP ClusterIP 1 => 172.18.0.3:4244/TCP (active)
3 10.2.0.10:53/UDP ClusterIP 1 => 10.0.1.177:53/UDP (active)
2 => 10.0.1.115:53/UDP (active)
4 10.2.0.10:53/TCP ClusterIP 1 => 10.0.1.177:53/TCP (active)
2 => 10.0.1.115:53/TCP (active)
5 10.2.0.10:9153/TCP ClusterIP 1 => 10.0.1.177:9153/TCP (active)
2 => 10.0.1.115:9153/TCP (active)
6 10.2.126.217:2379/TCP ClusterIP 1 => 10.0.1.8:2379/TCP (active)
7 172.18.0.3:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
9 10.2.1.9:80/TCP ClusterIP 1 => 10.0.1.120:4245/TCP (active)
10 10.2.233.237:80/TCP ClusterIP 1 => 10.0.1.158:8081/TCP (active)
11 172.18.0.3:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
12 0.0.0.0:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
13 10.2.167.94:80/TCP ClusterIP 1 => 10.0.1.69:80/TCP (active)
2 => 10.1.1.6:80/TCP (active)
3 => 10.1.1.95:80/TCP (active)
- Service traffic is now spread across the single remaining west Pod plus the two east Pods
- The global service reflects Pod count changes automatically
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.3.0.1:443/TCP ClusterIP 1 => 172.18.0.4:6443/TCP (active)
2 10.3.107.176:443/TCP ClusterIP 1 => 172.18.0.5:4244/TCP (active)
3 10.3.0.10:9153/TCP ClusterIP 1 => 10.1.1.45:9153/TCP (active)
2 => 10.1.1.21:9153/TCP (active)
4 10.3.0.10:53/UDP ClusterIP 1 => 10.1.1.45:53/UDP (active)
2 => 10.1.1.21:53/UDP (active)
5 10.3.0.10:53/TCP ClusterIP 1 => 10.1.1.45:53/TCP (active)
2 => 10.1.1.21:53/TCP (active)
6 10.3.173.28:2379/TCP ClusterIP 1 => 10.1.1.62:2379/TCP (active)
7 172.18.0.5:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
9 10.3.54.60:80/TCP ClusterIP 1 => 10.1.1.198:4245/TCP (active)
10 10.3.1.28:80/TCP ClusterIP 1 => 10.1.1.236:8081/TCP (active)
11 172.18.0.5:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
12 0.0.0.0:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
13 10.3.128.46:80/TCP ClusterIP 1 => 10.0.1.69:80/TCP (active)
2 => 10.1.1.6:80/TCP (active)
3 => 10.1.1.95:80/TCP (active)
7. Scale Replicas to 0 (west)
kwest scale deployment webpod --replicas 0
# Result
deployment.apps/webpod scaled
- The service keeps working even after all west Pods are deleted
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.2.0.1:443/TCP ClusterIP 1 => 172.18.0.2:6443/TCP (active)
2 10.2.182.189:443/TCP ClusterIP 1 => 172.18.0.3:4244/TCP (active)
3 10.2.0.10:53/UDP ClusterIP 1 => 10.0.1.177:53/UDP (active)
2 => 10.0.1.115:53/UDP (active)
4 10.2.0.10:53/TCP ClusterIP 1 => 10.0.1.177:53/TCP (active)
2 => 10.0.1.115:53/TCP (active)
5 10.2.0.10:9153/TCP ClusterIP 1 => 10.0.1.177:9153/TCP (active)
2 => 10.0.1.115:9153/TCP (active)
6 10.2.126.217:2379/TCP ClusterIP 1 => 10.0.1.8:2379/TCP (active)
7 172.18.0.3:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
9 10.2.1.9:80/TCP ClusterIP 1 => 10.0.1.120:4245/TCP (active)
10 10.2.233.237:80/TCP ClusterIP 1 => 10.0.1.158:8081/TCP (active)
11 172.18.0.3:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
12 0.0.0.0:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
13 10.2.167.94:80/TCP ClusterIP 1 => 10.1.1.6:80/TCP (active)
2 => 10.1.1.95:80/TCP (active)
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.3.0.1:443/TCP ClusterIP 1 => 172.18.0.4:6443/TCP (active)
2 10.3.107.176:443/TCP ClusterIP 1 => 172.18.0.5:4244/TCP (active)
3 10.3.0.10:9153/TCP ClusterIP 1 => 10.1.1.45:9153/TCP (active)
2 => 10.1.1.21:9153/TCP (active)
4 10.3.0.10:53/UDP ClusterIP 1 => 10.1.1.45:53/UDP (active)
2 => 10.1.1.21:53/UDP (active)
5 10.3.0.10:53/TCP ClusterIP 1 => 10.1.1.45:53/TCP (active)
2 => 10.1.1.21:53/TCP (active)
6 10.3.173.28:2379/TCP ClusterIP 1 => 10.1.1.62:2379/TCP (active)
7 172.18.0.5:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
9 10.3.54.60:80/TCP ClusterIP 1 => 10.1.1.198:4245/TCP (active)
10 10.3.1.28:80/TCP ClusterIP 1 => 10.1.1.236:8081/TCP (active)
11 172.18.0.5:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
12 0.0.0.0:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
13 10.3.128.46:80/TCP ClusterIP 1 => 10.1.1.6:80/TCP (active)
2 => 10.1.1.95:80/TCP (active)
- Only the east Pod endpoints remain alive, so traffic is automatically routed to the east cluster
8. Restore Replicas
kwest scale deployment webpod --replicas 2
deployment.apps/webpod scaled
- The west replicas are restored back to 2
- The west Pod IPs are added back to the global service's endpoint list
🎯 Service Affinity
1. Set the Service Annotation
kwest annotate service webpod service.cilium.io/affinity=local --overwrite
# Result
service/webpod annotated
keast annotate service webpod service.cilium.io/affinity=local --overwrite
# Result
service/webpod annotated
kwest describe svc webpod | grep Annotations -A3
Annotations: service.cilium.io/affinity: local
service.cilium.io/global: true
Selector: app=webpod
Type: ClusterIP
keast describe svc webpod | grep Annotations -A3
Annotations: service.cilium.io/affinity: local
service.cilium.io/global: true
Selector: app=webpod
Type: ClusterIP
- The service.cilium.io/affinity=local annotation is added to the webpod Service in both the west and east clusters (other accepted values are sketched below)
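Per the Cilium ClusterMesh documentation the annotation also accepts remote and none; a sketch of flipping west to prefer remote backends and then reverting, in case you want to compare behaviours:

kwest annotate service webpod service.cilium.io/affinity=remote --overwrite   # prefer backends in the other cluster
kwest annotate service webpod service.cilium.io/affinity=local --overwrite    # revert to the setting used in this section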
2. Verify Affinity Behavior (west)
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.2.0.1:443/TCP ClusterIP 1 => 172.18.0.2:6443/TCP (active)
2 10.2.182.189:443/TCP ClusterIP 1 => 172.18.0.3:4244/TCP (active)
3 10.2.0.10:53/UDP ClusterIP 1 => 10.0.1.177:53/UDP (active)
2 => 10.0.1.115:53/UDP (active)
4 10.2.0.10:53/TCP ClusterIP 1 => 10.0.1.177:53/TCP (active)
2 => 10.0.1.115:53/TCP (active)
5 10.2.0.10:9153/TCP ClusterIP 1 => 10.0.1.177:9153/TCP (active)
2 => 10.0.1.115:9153/TCP (active)
6 10.2.126.217:2379/TCP ClusterIP 1 => 10.0.1.8:2379/TCP (active)
7 172.18.0.3:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
9 10.2.1.9:80/TCP ClusterIP 1 => 10.0.1.120:4245/TCP (active)
10 10.2.233.237:80/TCP ClusterIP 1 => 10.0.1.158:8081/TCP (active)
11 172.18.0.3:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
12 0.0.0.0:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
13 10.2.167.94:80/TCP ClusterIP 1 => 10.1.1.6:80/TCP (active)
2 => 10.1.1.95:80/TCP (active)
3 => 10.0.1.159:80/TCP (active) (preferred)
4 => 10.0.1.107:80/TCP (active) (preferred)
- In the webpod service backends, the local Pod IPs (10.0.1.x) are marked (preferred)
- That is, requests from clients inside this cluster are routed to local endpoints first by default
3. Verify Affinity Behavior (east)
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.3.0.1:443/TCP ClusterIP 1 => 172.18.0.4:6443/TCP (active)
2 10.3.107.176:443/TCP ClusterIP 1 => 172.18.0.5:4244/TCP (active)
3 10.3.0.10:9153/TCP ClusterIP 1 => 10.1.1.45:9153/TCP (active)
2 => 10.1.1.21:9153/TCP (active)
4 10.3.0.10:53/UDP ClusterIP 1 => 10.1.1.45:53/UDP (active)
2 => 10.1.1.21:53/UDP (active)
5 10.3.0.10:53/TCP ClusterIP 1 => 10.1.1.45:53/TCP (active)
2 => 10.1.1.21:53/TCP (active)
6 10.3.173.28:2379/TCP ClusterIP 1 => 10.1.1.62:2379/TCP (active)
7 172.18.0.5:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
9 10.3.54.60:80/TCP ClusterIP 1 => 10.1.1.198:4245/TCP (active)
10 10.3.1.28:80/TCP ClusterIP 1 => 10.1.1.236:8081/TCP (active)
11 172.18.0.5:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
12 0.0.0.0:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
13 10.3.128.46:80/TCP ClusterIP 1 => 10.1.1.6:80/TCP (active) (preferred)
2 => 10.1.1.95:80/TCP (active) (preferred)
3 => 10.0.1.159:80/TCP (active)
4 => 10.0.1.107:80/TCP (active)
- In the east cluster the local Pod IPs (10.1.1.x) are likewise marked (preferred)
- The service is global, but Pods in the caller's own cluster are served first
4. Service Call Test
kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'
kubectl exec -it curl-pod --context kind-east -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo "---"; done;'
- Service calls are concentrated on endpoints within the caller's own cluster
5. Scale Down Replicas (west 2 → 0)
kwest scale deployment webpod --replicas 0
deployment.apps/webpod scaled
- All webpod Pods in the west cluster are deleted
kwest exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
✅ Output
ID Frontend Service Type Backend
1 10.2.0.1:443/TCP ClusterIP 1 => 172.18.0.2:6443/TCP (active)
2 10.2.182.189:443/TCP ClusterIP 1 => 172.18.0.3:4244/TCP (active)
3 10.2.0.10:53/UDP ClusterIP 1 => 10.0.1.177:53/UDP (active)
2 => 10.0.1.115:53/UDP (active)
4 10.2.0.10:53/TCP ClusterIP 1 => 10.0.1.177:53/TCP (active)
2 => 10.0.1.115:53/TCP (active)
5 10.2.0.10:9153/TCP ClusterIP 1 => 10.0.1.177:9153/TCP (active)
2 => 10.0.1.115:9153/TCP (active)
6 10.2.126.217:2379/TCP ClusterIP 1 => 10.0.1.8:2379/TCP (active)
7 172.18.0.3:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.0.1.8:2379/TCP (active)
9 10.2.1.9:80/TCP ClusterIP 1 => 10.0.1.120:4245/TCP (active)
10 10.2.233.237:80/TCP ClusterIP 1 => 10.0.1.158:8081/TCP (active)
11 172.18.0.3:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
12 0.0.0.0:30001/TCP NodePort 1 => 10.0.1.158:8081/TCP (active)
13 10.2.167.94:80/TCP ClusterIP 1 => 10.1.1.6:80/TCP (active)
2 => 10.1.1.95:80/TCP (active)
- The affinity policy prefers local Pods, but with no local Pods left, requests are forwarded to the remote east Pods
keast exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
ID Frontend Service Type Backend
1 10.3.0.1:443/TCP ClusterIP 1 => 172.18.0.4:6443/TCP (active)
2 10.3.107.176:443/TCP ClusterIP 1 => 172.18.0.5:4244/TCP (active)
3 10.3.0.10:9153/TCP ClusterIP 1 => 10.1.1.45:9153/TCP (active)
2 => 10.1.1.21:9153/TCP (active)
4 10.3.0.10:53/UDP ClusterIP 1 => 10.1.1.45:53/UDP (active)
2 => 10.1.1.21:53/UDP (active)
5 10.3.0.10:53/TCP ClusterIP 1 => 10.1.1.45:53/TCP (active)
2 => 10.1.1.21:53/TCP (active)
6 10.3.173.28:2379/TCP ClusterIP 1 => 10.1.1.62:2379/TCP (active)
7 172.18.0.5:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
8 0.0.0.0:32379/TCP NodePort 1 => 10.1.1.62:2379/TCP (active)
9 10.3.54.60:80/TCP ClusterIP 1 => 10.1.1.198:4245/TCP (active)
10 10.3.1.28:80/TCP ClusterIP 1 => 10.1.1.236:8081/TCP (active)
11 172.18.0.5:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
12 0.0.0.0:31001/TCP NodePort 1 => 10.1.1.236:8081/TCP (active)
13 10.3.128.46:80/TCP ClusterIP 1 => 10.1.1.6:80/TCP (active) (preferred)
2 => 10.1.1.95:80/TCP (active) (preferred)
- The east cluster still has live local Pods, so it keeps answering from its own Pods first
6. Restore (west replicas=2)
kwest scale deployment webpod --replicas 2
deployment.apps/webpod scaled
- Two Pods are created again in west → the local Pod IPs are marked (preferred) again
- The local-preference policy applies to service calls again (an optional cleanup sketch follows below)