Cilium Week 4 Summary
🔧 Lab Environment Setup
1. Network Topology Changes
- ์ง๋ ์ฃผ ์ค์ต์์๋ ๋ชจ๋ k8s ๋ ธ๋๊ฐ ๋์ผ ๋คํธ์ํฌ ๋์ญ์ ์กด์ฌ
- ๋ณ๊ฒฝ ์ง์
- ์ปจํธ๋กคํ๋ ์ธ(
192.168.10.100
), ์์ปค๋ ธ๋1(192.168.10.101
) โ192.168.10.0/24
- ์์ปค๋
ธ๋0(
192.168.20.100
) โ192.168.20.0/24
- ์ปจํธ๋กคํ๋ ์ธ(
- ์๋ก ๋ค๋ฅธ ๋คํธ์ํฌ ๋์ญ ๊ฐ ํต์ ์ ์ํด ๋ผ์ฐํฐ(eth1:
192.168.10.200
, eth2:192.168.20.200
) ์ฌ์ฉ - ๋ผ์ฐํฐ์๋ loopback ์ธํฐํ์ด์ค(loop1, loop2)์ IP Forward ํ์ฑํ
2. ํด๋ฌ์คํฐ ๊ตฌ์ฑ ๋ฐ Pod CIDR ์ค์
- Cilium Cluster Scope IPAM ์ฌ์ฉ
1
2
3
--set ipam.mode="cluster-pool" \
--set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16
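These --set fragments belong to the Cilium Helm install; a sketch of the full command they would attach to (the repo setup is an assumption, and the native-routing flags are inferred from the autoDirectNodeRoutes section below):
helm repo add cilium https://helm.cilium.io && helm repo update
helm install cilium cilium/cilium --namespace kube-system \
  --set ipam.mode="cluster-pool" \
  --set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
  --set ipv4NativeRoutingCIDR=172.20.0.0/16 \
  --set routingMode=native --set autoDirectNodeRoutes=true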
- ๋
ธ๋๋ณ ํ ๋น๋ Pod CIDR
- k8s-ctr:
172.20.0.0/24
- k8s-w1:
172.20.1.0/24
- k8s-w0:
172.20.2.0/24
- k8s-ctr:
3. Resolving the VirtualBox IP Range Restriction
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/4w/Vagrantfile
vagrant up
๐ข ์ค๋ฅ ๋ฐ์
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
...
==> router: Cloning VM...
==> router: Matching MAC address for NAT networking...
==> router: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> router: Setting the name of the VM: router
==> router: Clearing any previously set network interfaces...
The IP address configured for the host-only network is not within the
allowed ranges. Please update the address used to be within the allowed
ranges and run the command again.
Address: 192.168.20.200
Ranges: 192.168.10.0/24
Valid ranges can be modified in the /etc/vbox/networks.conf file. For
more information including valid format see:
https://www.virtualbox.org/manual/ch06.html#network_hostonly
- ์ค๋ฅ ์์ธ:
192.168.20.200
์ด VirtualBox host-only ๋คํธ์ํฌ ํ์ฉ ๋ฒ์ ๋ฐ - ํด๊ฒฐ ๋ฐฉ๋ฒ
/etc/vbox/networks.conf
ํธ์ง- ํ์ฉ ๋ฒ์ ์ถ๊ฐ
* 192.168.0.0/16
vagrant halt
ํvagrant up
์ฌ์คํ
4. Booting the Vagrant-Based k8s Lab VMs and Initial Setup
vagrant up
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'router' up with 'virtualbox' provider...
Bringing machine 'k8s-w0' up with 'virtualbox' provider...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
k8s-ctr: Adapter 1: nat
k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
k8s-ctr: SSH address: 127.0.0.1:60000
k8s-ctr: SSH username: vagrant
k8s-ctr: SSH auth method: private key
k8s-ctr:
k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
k8s-ctr: this with a newly generated keypair for better security.
k8s-ctr:
k8s-ctr: Inserting generated public key within guest...
k8s-ctr: Removing insecure key from the guest if it's present...
k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250807-101867-bkdz3.sh
k8s-ctr: >>>> Initial Config Start <<<<
k8s-ctr: [TASK 1] Setting Profile & Bashrc
k8s-ctr: [TASK 2] Disable AppArmor
k8s-ctr: [TASK 3] Disable and turn off SWAP
k8s-ctr: [TASK 4] Install Packages
k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-ctr: [TASK 6] Install Packages & Helm
k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250807-101867-7y47ad.sh
k8s-ctr: >>>> K8S Controlplane config Start <<<<
k8s-ctr: [TASK 1] Initial Kubernetes
k8s-ctr: [TASK 2] Setting kube config file
k8s-ctr: [TASK 3] Source the completion
k8s-ctr: [TASK 4] Alias kubectl to k
k8s-ctr: [TASK 5] Install Kubectx & Kubens
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: [TASK 7] Install Cilium CNI
k8s-ctr: [TASK 8] Install Cilium / Hubble CLI
k8s-ctr: cilium
k8s-ctr: hubble
k8s-ctr: [TASK 9] Remove node taint
k8s-ctr: node/k8s-ctr untainted
k8s-ctr: [TASK 10] local DNS with hosts file
k8s-ctr: [TASK 11] Dynamically provisioning persistent local storage with Kubernetes
k8s-ctr: [TASK 12] Install Prometheus & Grafana
k8s-ctr: [TASK 13] Install Metrics-server
k8s-ctr: [TASK 14] Install k9s
k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250807-101867-ret6h0.sh
k8s-ctr: >>>> Route Add Config Start <<<<
k8s-ctr: >>>> Route Add Config End <<<<
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
k8s-w1: Adapter 1: nat
k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
k8s-w1: SSH address: 127.0.0.1:60001
k8s-w1: SSH username: vagrant
k8s-w1: SSH auth method: private key
k8s-w1:
k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w1: this with a newly generated keypair for better security.
k8s-w1:
k8s-w1: Inserting generated public key within guest...
k8s-w1: Removing insecure key from the guest if it's present...
k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250807-101867-mkfn7y.sh
k8s-w1: >>>> Initial Config Start <<<<
k8s-w1: [TASK 1] Setting Profile & Bashrc
k8s-w1: [TASK 2] Disable AppArmor
k8s-w1: [TASK 3] Disable and turn off SWAP
k8s-w1: [TASK 4] Install Packages
k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w1: [TASK 6] Install Packages & Helm
k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250807-101867-nsboky.sh
k8s-w1: >>>> K8S Node config Start <<<<
k8s-w1: [TASK 1] K8S Controlplane Join
k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250807-101867-ub8q45.sh
k8s-w1: >>>> Route Add Config Start <<<<
k8s-w1: >>>> Route Add Config End <<<<
==> router: Cloning VM...
==> router: Matching MAC address for NAT networking...
==> router: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> router: Setting the name of the VM: router
==> router: Clearing any previously set network interfaces...
==> router: Preparing network interfaces based on configuration...
router: Adapter 1: nat
router: Adapter 2: hostonly
router: Adapter 3: hostonly
==> router: Forwarding ports...
router: 22 (guest) => 60009 (host) (adapter 1)
==> router: Running 'pre-boot' VM customizations...
==> router: Booting VM...
==> router: Waiting for machine to boot. This may take a few minutes...
router: SSH address: 127.0.0.1:60009
router: SSH username: vagrant
router: SSH auth method: private key
router:
router: Vagrant insecure key detected. Vagrant will automatically replace
router: this with a newly generated keypair for better security.
router:
router: Inserting generated public key within guest...
router: Removing insecure key from the guest if it's present...
router: Key inserted! Disconnecting and reconnecting using new SSH key...
==> router: Machine booted and ready!
==> router: Checking for guest additions in VM...
==> router: Setting hostname...
==> router: Configuring and enabling network interfaces...
==> router: Running provisioner: shell...
router: Running: /tmp/vagrant-shell20250807-101867-8m4vi7.sh
router: >>>> Initial Config Start <<<<
router: [TASK 0] Setting eth2
router: [TASK 1] Setting Profile & Bashrc
router: [TASK 2] Disable AppArmor
router: [TASK 3] Add Kernel setting - IP Forwarding
router: [TASK 4] Setting Dummy Interface
router: [TASK 5] Install Packages
router: [TASK 6] Install Apache
router: >>>> Initial Config End <<<<
==> k8s-w0: Cloning VM...
==> k8s-w0: Matching MAC address for NAT networking...
==> k8s-w0: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w0: Setting the name of the VM: k8s-w0
==> k8s-w0: Clearing any previously set network interfaces...
==> k8s-w0: Preparing network interfaces based on configuration...
k8s-w0: Adapter 1: nat
k8s-w0: Adapter 2: hostonly
==> k8s-w0: Forwarding ports...
k8s-w0: 22 (guest) => 60010 (host) (adapter 1)
==> k8s-w0: Running 'pre-boot' VM customizations...
==> k8s-w0: Booting VM...
==> k8s-w0: Waiting for machine to boot. This may take a few minutes...
k8s-w0: SSH address: 127.0.0.1:60010
k8s-w0: SSH username: vagrant
k8s-w0: SSH auth method: private key
k8s-w0:
k8s-w0: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w0: this with a newly generated keypair for better security.
k8s-w0:
k8s-w0: Inserting generated public key within guest...
k8s-w0: Removing insecure key from the guest if it's present...
k8s-w0: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w0: Machine booted and ready!
==> k8s-w0: Checking for guest additions in VM...
==> k8s-w0: Setting hostname...
==> k8s-w0: Configuring and enabling network interfaces...
==> k8s-w0: Running provisioner: shell...
k8s-w0: Running: /tmp/vagrant-shell20250807-101867-5qlp47.sh
k8s-w0: >>>> Initial Config Start <<<<
k8s-w0: [TASK 1] Setting Profile & Bashrc
k8s-w0: [TASK 2] Disable AppArmor
k8s-w0: [TASK 3] Disable and turn off SWAP
k8s-w0: [TASK 4] Install Packages
k8s-w0: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w0: [TASK 6] Install Packages & Helm
k8s-w0: >>>> Initial Config End <<<<
==> k8s-w0: Running provisioner: shell...
k8s-w0: Running: /tmp/vagrant-shell20250807-101867-vdcosy.sh
k8s-w0: >>>> K8S Node config Start <<<<
k8s-w0: [TASK 1] K8S Controlplane Join
k8s-w0: >>>> K8S Node config End <<<<
==> k8s-w0: Running provisioner: shell...
k8s-w0: Running: /tmp/vagrant-shell20250807-101867-53icua.sh
k8s-w0: >>>> Route Add Config Start <<<<
k8s-w0: >>>> Route Add Config End <<<<
5. ์ปจํธ๋กคํ๋ ์ธ SSH ์ ์
1
vagrant ssh k8s-ctr
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Thu Aug 7 11:32:20 PM KST 2025
System load: 0.5
Usage of /: 29.6% of 30.34GB
Memory usage: 49%
Swap usage: 0%
Processes: 216
Users logged in: 0
IPv4 address for eth0: 10.0.2.15
IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
(โ|HomeLab:N/A) root@k8s-ctr:~#
6. Running k9s
(โ|HomeLab:N/A) root@k8s-ctr:~# k9s
7. ์์ปค๋ ธ๋ ๋ฐ ๋ผ์ฐํฐ ์๊ฒฉ ์ ์ ํ์ธ
1
2
3
4
5
6
7
8
9
10
11
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w0 hostname
Warning: Permanently added 'k8s-w0' (ED25519) to the list of known hosts.
k8s-w0
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@router hostname
Warning: Permanently added 'router' (ED25519) to the list of known hosts.
router
8. Checking Cilium Pod CIDRs
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
"podCIDRs": [
"172.20.0.0/24"
],
--
"podCIDRs": [
"172.20.2.0/24"
],
--
"podCIDRs": [
"172.20.1.0/24"
],
- Confirms the per-node Pod CIDR allocation made by Cilium's cluster-scope IPAM; a compact one-liner alternative is sketched below.
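The same mapping can be printed more compactly with jsonpath (a sketch; podCIDRs lives under spec.ipam in the CiliumNode CRD):
kubectl get ciliumnode -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.ipam.podCIDRs}{"\n"}{end}'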
๐ ๋คํธ์ํฌ ์ ๋ณด ํ์ธ
1. ๋ ธ๋ ๋คํธ์ํฌ ์ ๋ณด ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# k get node -owide
โ ย ์ถ๋ ฅ
1
2
3
4
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 11m v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w0 Ready <none> 5m9s v1.33.2 192.168.20.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w1 Ready <none> 8m43s v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
- The nodes are split between the 192.168.10.x segment (k8s-ctr, k8s-w1) and the 192.168.20.x segment (k8s-w0).
2. ๋ผ์ฐํฐ ์ธํฐํ์ด์ค ์ ๋ณด ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -br -c -4 addr
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
lo UNKNOWN 127.0.0.1/8
eth0 UP 10.0.2.15/24 metric 100
eth1 UP 192.168.10.200/24
eth2 UP 192.168.20.200/24
loop1 UNKNOWN 10.10.1.200/24
loop2 UNKNOWN 10.10.2.200/24
- eth1 (192.168.10.200) and eth2 (192.168.20.200) act as the gateways for their respective segments; the dummy loopback interfaces are also present.
3. ์์ปค๋ ธ๋ ์ธํฐํ์ด์ค ์ ๋ณด ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c -4 addr show dev eth1
โ ย ์ถ๋ ฅ
1
2
3
4
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s8
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
- ์์ปค๋
ธ๋1:
eth1
โ192.168.10.101/24
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w0 ip -c -4 addr show dev eth1
โ ย ์ถ๋ ฅ
1
2
3
4
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s8
inet 192.168.20.100/24 brd 192.168.20.255 scope global eth1
valid_lft forever preferred_lft forever
- ์์ปค๋
ธ๋0:
eth1
โ192.168.20.100/24
4. ๋ผ์ฐํฐ์ ๋ผ์ฐํ ํ ์ด๋ธ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
5. ์ปจํธ๋กคํ๋ ์ธ Static Route ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep static
โ ย ์ถ๋ ฅ
1
2
3
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static    # router dummy interfaces
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static   # pod CIDR range
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static # for reaching worker node 0
6. autoDirectNodeRoutes Behavior
--set routingMode=native --set autoDirectNodeRoutes=true
(1) ์ปจํธ๋กคํ๋ ์ธ ๋ผ์ฐํ ํ ์ด๋ธ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.253 dev cilium_host proto kernel scope link
172.20.1.0/24 via 192.168.10.101 dev eth1 proto kernel
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
- ๊ฐ์ ๋คํธ์ํฌ ๋์ญ ๋ ธ๋๋ผ๋ฆฌ๋ง PodCIDR ์๋ ๋ผ์ฐํ ์ถ๊ฐ๋จ
- ๋ค๋ฅธ ๋คํธ์ํฌ ๋์ญ ๋
ธ๋๋ ์๋ ๋ผ์ฐํ
๋ถ๊ฐ
- L3 ๋คํธ์ํฌ ์ฅ๋น์์ ๋ณ๋ ๋ผ์ฐํ ํ์
(2) ์์ปค๋ ธ๋ ๋ผ์ฐํ ํ ์ด๋ธ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.100 dev eth1 proto kernel
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.1.0/24 via 172.20.1.238 dev cilium_host proto kernel src 172.20.1.238
172.20.1.238 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
- k8s-w1: an automatic route for the control plane's PodCIDR (172.20.0.0/24 via 192.168.10.100) is present.
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w0 ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.20.200 dev eth1 proto static
172.20.0.0/16 via 192.168.20.200 dev eth1 proto static
172.20.2.0/24 via 172.20.2.13 dev cilium_host proto kernel src 172.20.2.13
172.20.2.13 dev cilium_host proto kernel scope link
192.168.10.0/24 via 192.168.20.200 dev eth1 proto static
192.168.20.0/24 dev eth1 proto kernel scope link src 192.168.20.100
- k8s-w0: no routes for the other nodes' PodCIDRs → pod-to-pod communication with them fails.
7. ํต์ ํ์ธ
(1) ๋ผ์ฐํฐ ๋๋ฏธ ์ธํฐํ์ด์ค ํต์ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.10.1.200
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
PING 10.10.1.200 (10.10.1.200) 56(84) bytes of data.
64 bytes from 10.10.1.200: icmp_seq=1 ttl=64 time=0.949 ms
--- 10.10.1.200 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.949/0.949/0.949/0.000 ms
- ์ปจํธ๋กคํ๋ ์ธ(
k8s-ctr
)์์ ๋ผ์ฐํฐ ๋๋ฏธ ์ธํฐํ์ด์ค(10.10.1.200
)๋ก ping ํ ์คํธ - ์๋ต ์ ์ ์์ โ ๋ผ์ฐํฐ์์ ์ฐ๊ฒฐ ์ํ ์ ์ ํ์ธ
(2) ์์ปค๋ ธ๋ ๋ฌผ๋ฆฌ ์ธํฐํ์ด์ค ํต์ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 192.168.20.100
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
PING 192.168.20.100 (192.168.20.100) 56(84) bytes of data.
64 bytes from 192.168.20.100: icmp_seq=1 ttl=63 time=2.22 ms
--- 192.168.20.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.215/2.215/2.215/0.000 ms
- ์ปจํธ๋กคํ๋ ์ธ์์ ์์ปค๋
ธ๋
k8s-w0
์ ๋ฌผ๋ฆฌ ์ธํฐํ์ด์ค(192.168.20.100
)๋ก ping ํ ์คํธ - ์ ์ ์๋ต ์์ โ ์๋ก ๋ค๋ฅธ ๋คํธ์ํฌ ๋์ญ ๊ฐ ๋ฌผ๋ฆฌ IP ํต์ ๊ฐ๋ฅ
(3) ๋ ธ๋ ์กฐ์ธ ์ํ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# k get node -owide
โ ย ์ถ๋ ฅ
1
2
3
4
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 45m v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w0 Ready <none> 39m v1.33.2 192.168.20.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w1 Ready <none> 42m v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
- All nodes, including k8s-w0 (192.168.20.100), are in the Ready state.
- Since physical-IP connectivity works, the cluster join succeeded normally.
(4) Path through the L3 device
(โ|HomeLab:N/A) root@k8s-ctr:~# tracepath -n 192.168.20.100
โ ย ์ถ๋ ฅ
1
2
3
4
5
1?: [LOCALHOST] pmtu 1500
1: 192.168.10.200 0.816ms
1: 192.168.10.200 0.546ms
2: 192.168.20.100 1.048ms reached
Resume: pmtu 1500 hops 2 back 2
- tracepath to 192.168.20.100 confirms the traffic passes through 192.168.10.200 (the router).
- Router → destination node: the 2-hop path is as expected.
๐ฆNative Routing mode
1. ์ํ ์ ํ๋ฆฌ์ผ์ด์ ๋ฐฐํฌ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: webpod
spec:
replicas: 3
selector:
matchLabels:
app: webpod
template:
metadata:
labels:
app: webpod
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- sample-app
topologyKey: "kubernetes.io/hostname"
containers:
- name: webpod
image: traefik/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: webpod
labels:
app: webpod
spec:
selector:
app: webpod
EOFype: ClusterIP0
# ๊ฒฐ๊ณผ
deployment.apps/webpod created
service/webpod created
- The webpod Deployment's 3 pods are spread across the control plane, w1, and w0 nodes.
(โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
nodeName: k8s-ctr
containers:
- name: curl
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# ๊ฒฐ๊ณผ
pod/curl-pod created
- ์ปจํธ๋กคํ๋ ์ธ ๋
ธ๋์
curl-pod
๋ฐฐํฌํด ํ ์คํธ ํ๊ฒฝ ์ค๋น
2. ์๋ํฌ์ธํธ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/webpod 3/3 3 3 116s webpod traefik/whoami app=webpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/webpod ClusterIP 10.96.163.90 <none> 80/TCP 116s app=webpod
NAME ENDPOINTS AGE
endpoints/webpod 172.20.0.66:80,172.20.1.126:80,172.20.2.1:80 116s
- The three pod IPs are confirmed: 172.20.0.66, 172.20.1.126, and 172.20.2.1 (an EndpointSlice-based check is sketched below).
3. Inspecting Cilium Endpoints
(1) Each pod's Cilium-managed IP and security identity
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
curl-pod 63446 ready 172.20.0.96
webpod-697b545f57-fz95q 8469 ready 172.20.2.1
webpod-697b545f57-gkvrf 8469 ready 172.20.0.66
webpod-697b545f57-rpz7h 8469 ready 172.20.1.126
(2) IP-to-identity mapping
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg ip list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
IP IDENTITY SOURCE
0.0.0.0/0 reserved:world
172.20.1.0/24 reserved:world
172.20.2.0/24 reserved:world
10.0.2.15/32 reserved:host
reserved:kube-apiserver
172.20.0.33/32 k8s:app=grafana custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
172.20.0.66/32 k8s:app=webpod custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.0.71/32 k8s:app=prometheus custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
172.20.0.72/32 k8s:app=local-path-provisioner custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=local-path-storage
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account
k8s:io.kubernetes.pod.namespace=local-path-storage
172.20.0.73/32 k8s:app.kubernetes.io/name=hubble-ui custom-resource
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
172.20.0.75/32 k8s:app.kubernetes.io/instance=metrics-server custom-resource
k8s:app.kubernetes.io/name=metrics-server
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=metrics-server
k8s:io.kubernetes.pod.namespace=kube-system
172.20.0.96/32 k8s:app=curl custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.0.163/32 k8s:app.kubernetes.io/name=hubble-relay custom-resource
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-relay
172.20.0.197/32 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system custom-resource
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
172.20.0.248/32 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system custom-resource
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
172.20.0.253/32 reserved:host
reserved:kube-apiserver
172.20.1.126/32 k8s:app=webpod custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.1.238/32 reserved:remote-node
172.20.2.1/32 k8s:app=webpod custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.2.13/32 reserved:remote-node
192.168.10.100/32 reserved:host
reserved:kube-apiserver
192.168.10.101/32 reserved:remote-node
192.168.20.100/32 reserved:remote-node
(3) ์๋ํฌ์ธํธ ์์ธ ์ ๋ณด ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg endpoint list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
275 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
364 Disabled Disabled 18480 k8s:app=prometheus 172.20.0.71 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
406 Disabled Disabled 63446 k8s:app=curl 172.20.0.96 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
465 Disabled Disabled 48231 k8s:app=local-path-provisioner 172.20.0.72 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=local-path-storage
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account
k8s:io.kubernetes.pod.namespace=local-path-storage
810 Disabled Disabled 58623 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.248 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1203 Disabled Disabled 58623 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.197 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1769 Disabled Disabled 8469 k8s:app=webpod 172.20.0.66 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
2060 Disabled Disabled 5827 k8s:app.kubernetes.io/instance=metrics-server 172.20.0.75 ready
k8s:app.kubernetes.io/name=metrics-server
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=metrics-server
k8s:io.kubernetes.pod.namespace=kube-system
2321 Disabled Disabled 22595 k8s:app.kubernetes.io/name=hubble-ui 172.20.0.73 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
2795 Disabled Disabled 7496 k8s:app=grafana 172.20.0.33 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
3315 Disabled Disabled 32707 k8s:app.kubernetes.io/name=hubble-relay 172.20.0.163 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-relay
4. Cilium Service List
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg service list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
ID Frontend Service Type Backend
1 0.0.0.0:30003/TCP NodePort 1 => 172.20.0.73:8081/TCP (active)
4 10.96.67.8:80/TCP ClusterIP 1 => 172.20.0.73:8081/TCP (active)
5 10.96.0.10:53/TCP ClusterIP 1 => 172.20.0.197:53/TCP (active)
2 => 172.20.0.248:53/TCP (active)
6 10.96.0.10:53/UDP ClusterIP 1 => 172.20.0.197:53/UDP (active)
2 => 172.20.0.248:53/UDP (active)
7 10.96.0.10:9153/TCP ClusterIP 1 => 172.20.0.197:9153/TCP (active)
2 => 172.20.0.248:9153/TCP (active)
8 10.96.133.165:443/TCP ClusterIP 1 => 172.20.0.75:10250/TCP (active)
9 0.0.0.0:30002/TCP NodePort 1 => 172.20.0.33:3000/TCP (active)
12 10.96.60.80:3000/TCP ClusterIP 1 => 172.20.0.33:3000/TCP (active)
13 0.0.0.0:30001/TCP NodePort 1 => 172.20.0.71:9090/TCP (active)
16 10.96.97.168:9090/TCP ClusterIP 1 => 172.20.0.71:9090/TCP (active)
17 10.96.213.163:443/TCP ClusterIP 1 => 192.168.10.100:4244/TCP (active)
18 10.96.33.53:80/TCP ClusterIP 1 => 172.20.0.163:4245/TCP (active)
19 10.96.0.1:443/TCP ClusterIP 1 => 192.168.10.100:6443/TCP (active)
20 10.96.163.90:80/TCP ClusterIP 1 => 172.20.0.66:80/TCP (active)
2 => 172.20.1.126:80/TCP (active)
3 => 172.20.2.1:80/TCP (active)
- ์๋น์ค์ Frontend(ClusterIP, NodePort)์ Backend ํ๋ ๋งคํ ์กฐํ
webpod
์๋น์ค10.96.163.90:80/TCP
โ172.20.0.66
,172.20.1.126
,172.20.2.1
๋ฐฑ์๋ ํ๋๋ก ๋ถ์ฐ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
โ ย ์ถ๋ ฅ
1
2
3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 58m
webpod ClusterIP 10.96.163.90 <none> 80/TCP 7m19s
5. BPF Load-Balancing Map
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg bpf lb list | grep 10.96.163.90
โ ย ์ถ๋ ฅ
1
2
3
4
10.96.163.90:80/TCP (2) 172.20.1.126:80/TCP (20) (2)
10.96.163.90:80/TCP (1) 172.20.0.66:80/TCP (20) (1)
10.96.163.90:80/TCP (0) 0.0.0.0:0 (20) (0) [ClusterIP, non-routable]
10.96.163.90:80/TCP (3) 172.20.2.1:80/TCP (20) (3)
- BPF LB map entries for the ClusterIP.
- Traffic is distributed across the backend pods by slot index.
6. BPF NAT Table
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg bpf nat list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
ICMP OUT 192.168.10.100:8403 -> 10.10.1.200:0 XLATE_SRC 192.168.10.100:8403 Created=1305sec ago NeedsCT=1
TCP IN 104.16.100.215:443 -> 10.0.2.15:42902 XLATE_DST 10.0.2.15:42902 Created=665sec ago NeedsCT=1
TCP OUT 10.0.2.15:42902 -> 104.16.100.215:443 XLATE_SRC 10.0.2.15:42902 Created=665sec ago NeedsCT=1
TCP OUT 10.0.2.15:35146 -> 3.94.224.37:443 XLATE_SRC 10.0.2.15:35146 Created=712sec ago NeedsCT=1
TCP OUT 10.0.2.15:40482 -> 98.85.153.80:443 XLATE_SRC 10.0.2.15:40482 Created=665sec ago NeedsCT=1
TCP IN 104.16.98.215:443 -> 10.0.2.15:57124 XLATE_DST 10.0.2.15:57124 Created=708sec ago NeedsCT=1
TCP OUT 10.0.2.15:49870 -> 44.208.254.194:443 XLATE_SRC 10.0.2.15:49870 Created=668sec ago NeedsCT=1
ICMP IN 192.168.20.100:0 -> 192.168.10.100:8432 XLATE_DST 192.168.10.100:8432 Created=1241sec ago NeedsCT=1
TCP IN 44.208.254.194:443 -> 10.0.2.15:49884 XLATE_DST 10.0.2.15:49884 Created=665sec ago NeedsCT=1
TCP OUT 10.0.2.15:57908 -> 3.94.224.37:443 XLATE_SRC 10.0.2.15:57908 Created=666sec ago NeedsCT=1
TCP OUT 10.0.2.15:49884 -> 44.208.254.194:443 XLATE_SRC 10.0.2.15:49884 Created=665sec ago NeedsCT=1
UDP OUT 192.168.10.100:56494 -> 192.168.20.100:44444 XLATE_SRC 192.168.10.100:56494 Created=945sec ago NeedsCT=1
TCP OUT 10.0.2.15:57888 -> 3.94.224.37:443 XLATE_SRC 10.0.2.15:57888 Created=669sec ago NeedsCT=1
UDP OUT 10.0.2.15:52293 -> 10.0.2.3:53 XLATE_SRC 10.0.2.15:52293 Created=712sec ago NeedsCT=1
TCP IN 44.208.254.194:443 -> 10.0.2.15:34700 XLATE_DST 10.0.2.15:34700 Created=710sec ago NeedsCT=1
TCP IN 192.168.10.101:10250 -> 192.168.10.100:43366 XLATE_DST 192.168.10.100:43366 Created=3613sec ago NeedsCT=1
TCP IN 3.94.224.37:443 -> 10.0.2.15:57908 XLATE_DST 10.0.2.15:57908 Created=666sec ago NeedsCT=1
UDP OUT 10.0.2.15:44980 -> 10.0.2.3:53 XLATE_SRC 10.0.2.15:44980 Created=712sec ago NeedsCT=1
UDP IN 192.168.20.100:44445 -> 192.168.10.100:56494 XLATE_DST 192.168.10.100:56494 Created=944sec ago NeedsCT=1
TCP OUT 10.0.2.15:34714 -> 44.208.254.194:443 XLATE_SRC 10.0.2.15:34714 Created=708sec ago NeedsCT=1
TCP IN 3.94.224.37:443 -> 10.0.2.15:35146 XLATE_DST 10.0.2.15:35146 Created=712sec ago NeedsCT=1
TCP OUT 10.0.2.15:49464 -> 104.16.98.215:443 XLATE_SRC 10.0.2.15:49464 Created=664sec ago NeedsCT=1
TCP OUT 10.0.2.15:40472 -> 98.85.153.80:443 XLATE_SRC 10.0.2.15:40472 Created=668sec ago NeedsCT=1
UDP OUT 10.0.2.15:51117 -> 10.0.2.3:53 XLATE_SRC 10.0.2.15:51117 Created=712sec ago NeedsCT=1
TCP OUT 192.168.10.100:36046 -> 192.168.20.100:10250 XLATE_SRC 192.168.10.100:36046 Created=3401sec ago NeedsCT=1
TCP OUT 10.0.2.15:34700 -> 44.208.254.194:443 XLATE_SRC 10.0.2.15:34700 Created=710sec ago NeedsCT=1
TCP OUT 10.0.2.15:51854 -> 98.85.153.80:443 XLATE_SRC 10.0.2.15:51854 Created=711sec ago NeedsCT=1
TCP IN 44.208.254.194:443 -> 10.0.2.15:34684 XLATE_DST 10.0.2.15:34684 Created=712sec ago NeedsCT=1
TCP IN 104.16.98.215:443 -> 10.0.2.15:49454 XLATE_DST 10.0.2.15:49454 Created=665sec ago NeedsCT=1
UDP IN 192.168.20.100:44446 -> 192.168.10.100:56494 XLATE_DST 192.168.10.100:56494 Created=944sec ago NeedsCT=1
TCP IN 44.208.254.194:443 -> 10.0.2.15:34714 XLATE_DST 10.0.2.15:34714 Created=708sec ago NeedsCT=1
TCP IN 104.16.97.215:443 -> 10.0.2.15:43064 XLATE_DST 10.0.2.15:43064 Created=708sec ago NeedsCT=1
TCP IN 98.85.153.80:443 -> 10.0.2.15:40472 XLATE_DST 10.0.2.15:40472 Created=668sec ago NeedsCT=1
UDP IN 10.0.2.3:53 -> 10.0.2.15:39251 XLATE_DST 10.0.2.15:39251 Created=708sec ago NeedsCT=1
TCP OUT 192.168.10.100:43366 -> 192.168.10.101:10250 XLATE_SRC 192.168.10.100:43366 Created=3613sec ago NeedsCT=1
TCP IN 98.85.153.80:443 -> 10.0.2.15:51854 XLATE_DST 10.0.2.15:51854 Created=711sec ago NeedsCT=1
UDP IN 10.0.2.3:53 -> 10.0.2.15:44980 XLATE_DST 10.0.2.15:44980 Created=712sec ago NeedsCT=1
UDP IN 10.0.2.3:53 -> 10.0.2.15:52293 XLATE_DST 10.0.2.15:52293 Created=712sec ago NeedsCT=1
UDP IN 192.168.20.100:44444 -> 192.168.10.100:56494 XLATE_DST 192.168.10.100:56494 Created=945sec ago NeedsCT=1
TCP OUT 10.0.2.15:34684 -> 44.208.254.194:443 XLATE_SRC 10.0.2.15:34684 Created=712sec ago NeedsCT=1
TCP IN 3.94.224.37:443 -> 10.0.2.15:35154 XLATE_DST 10.0.2.15:35154 Created=708sec ago NeedsCT=1
TCP IN 44.208.254.194:443 -> 10.0.2.15:49870 XLATE_DST 10.0.2.15:49870 Created=668sec ago NeedsCT=1
TCP IN 98.85.153.80:443 -> 10.0.2.15:51860 XLATE_DST 10.0.2.15:51860 Created=710sec ago NeedsCT=1
UDP IN 10.0.2.3:53 -> 10.0.2.15:60066 XLATE_DST 10.0.2.15:60066 Created=708sec ago NeedsCT=1
TCP OUT 10.0.2.15:43064 -> 104.16.97.215:443 XLATE_SRC 10.0.2.15:43064 Created=708sec ago NeedsCT=1
UDP OUT 10.0.2.15:60066 -> 10.0.2.3:53 XLATE_SRC 10.0.2.15:60066 Created=708sec ago NeedsCT=1
TCP IN 192.168.20.100:10250 -> 192.168.10.100:36046 XLATE_DST 192.168.10.100:36046 Created=3401sec ago NeedsCT=1
TCP IN 98.85.153.80:443 -> 10.0.2.15:40482 XLATE_DST 10.0.2.15:40482 Created=665sec ago NeedsCT=1
TCP OUT 10.0.2.15:49454 -> 104.16.98.215:443 XLATE_SRC 10.0.2.15:49454 Created=665sec ago NeedsCT=1
TCP OUT 10.0.2.15:57902 -> 3.94.224.37:443 XLATE_SRC 10.0.2.15:57902 Created=667sec ago NeedsCT=1
UDP OUT 192.168.10.100:56494 -> 192.168.20.100:44446 XLATE_SRC 192.168.10.100:56494 Created=944sec ago NeedsCT=1
UDP OUT 10.0.2.15:39251 -> 10.0.2.3:53 XLATE_SRC 10.0.2.15:39251 Created=708sec ago NeedsCT=1
UDP OUT 10.0.2.15:52382 -> 10.0.2.3:53 XLATE_SRC 10.0.2.15:52382 Created=712sec ago NeedsCT=1
TCP IN 3.94.224.37:443 -> 10.0.2.15:57902 XLATE_DST 10.0.2.15:57902 Created=667sec ago NeedsCT=1
UDP IN 10.0.2.3:53 -> 10.0.2.15:52382 XLATE_DST 10.0.2.15:52382 Created=712sec ago NeedsCT=1
TCP OUT 10.0.2.15:35154 -> 3.94.224.37:443 XLATE_SRC 10.0.2.15:35154 Created=708sec ago NeedsCT=1
TCP OUT 10.0.2.15:51860 -> 98.85.153.80:443 XLATE_SRC 10.0.2.15:51860 Created=710sec ago NeedsCT=1
TCP IN 104.16.98.215:443 -> 10.0.2.15:49464 XLATE_DST 10.0.2.15:49464 Created=664sec ago NeedsCT=1
ICMP IN 10.10.1.200:0 -> 192.168.10.100:8403 XLATE_DST 192.168.10.100:8403 Created=1305sec ago NeedsCT=1
ICMP OUT 192.168.10.100:8432 -> 192.168.20.100:0 XLATE_SRC 192.168.10.100:8432 Created=1241sec ago NeedsCT=1
TCP OUT 10.0.2.15:57124 -> 104.16.98.215:443 XLATE_SRC 10.0.2.15:57124 Created=708sec ago NeedsCT=1
TCP IN 3.94.224.37:443 -> 10.0.2.15:57888 XLATE_DST 10.0.2.15:57888 Created=669sec ago NeedsCT=1
UDP IN 10.0.2.3:53 -> 10.0.2.15:51117 XLATE_DST 10.0.2.15:51117 Created=712sec ago NeedsCT=1
UDP OUT 192.168.10.100:56494 -> 192.168.20.100:44445 XLATE_SRC 192.168.10.100:56494 Created=944sec ago NeedsCT=1
- For IN/OUT traffic, the XLATE_SRC (source translation) and XLATE_DST (destination translation) mappings are listed.
- Includes translations between external IPs and internal pod/node IPs.
7. ํ์ฌ ์ฌ์ฉ์ค์ธ ๋งต ํํฐ๋ง
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg map list | grep -v '0 0'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
Name Num entries Num errors Cache enabled
cilium_policy_v2_01769 3 0 true
cilium_policy_v2_00406 3 0 true
cilium_ipcache_v2 22 0 true
cilium_lb4_services_v2 45 0 true
cilium_lb4_reverse_nat 20 0 true
cilium_lb4_reverse_sk 9 0 true
cilium_policy_v2_00275 2 0 true
cilium_policy_v2_00810 3 0 true
cilium_policy_v2_00465 3 0 true
cilium_policy_v2_02060 3 0 true
cilium_policy_v2_00364 3 0 true
cilium_policy_v2_03315 3 0 true
cilium_policy_v2_02321 3 0 true
cilium_policy_v2_02795 3 0 true
cilium_runtime_config 256 0 true
cilium_lxc 13 0 true
cilium_lb4_backends_v3 16 0 true
cilium_policy_v2_01203 3 0 true
- ์ ์ฑ , IP ์บ์, LB ์๋น์ค, ๋ฐฑ์๋ ๋ฑ ํ์ฑ ๋งต ํ์ธ ๊ฐ๋ฅ
8. ์๋น์ค ๋งต ์์ธ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system ds/cilium -- cilium-dbg map get cilium_lb4_services_v2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
Key Value State Error
10.96.60.80:3000/TCP (0) 0 1[0] (12) [0x0 0x0]
10.0.2.15:30003/TCP (0) 0 1[0] (2) [0x42 0x0]
10.96.0.10:53/UDP (0) 0 2[0] (6) [0x0 0x0]
10.0.2.15:30002/TCP (1) 10 0[0] (10) [0x42 0x0]
10.96.67.8:80/TCP (0) 0 1[0] (4) [0x0 0x0]
10.96.0.10:53/TCP (1) 3 0[0] (5) [0x0 0x0]
10.96.0.10:53/TCP (0) 0 2[0] (5) [0x0 0x0]
10.0.2.15:30002/TCP (0) 0 1[0] (10) [0x42 0x0]
10.96.163.90:80/TCP (1) 14 0[0] (20) [0x0 0x0]
192.168.10.100:30001/TCP (1) 11 0[0] (15) [0x42 0x0]
10.96.97.168:9090/TCP (1) 11 0[0] (16) [0x0 0x0]
10.96.0.10:9153/TCP (2) 8 0[0] (7) [0x0 0x0]
10.96.33.53:80/TCP (0) 0 1[0] (18) [0x0 0x0]
192.168.10.100:30001/TCP (0) 0 1[0] (15) [0x42 0x0]
10.96.0.1:443/TCP (0) 0 1[0] (19) [0x0 0x0]
0.0.0.0:30002/TCP (1) 10 0[0] (9) [0x2 0x0]
10.96.133.165:443/TCP (0) 0 1[0] (8) [0x0 0x0]
0.0.0.0:30002/TCP (0) 0 1[0] (9) [0x2 0x0]
10.0.2.15:30001/TCP (1) 11 0[0] (14) [0x42 0x0]
10.96.67.8:80/TCP (1) 12 0[0] (4) [0x0 0x0]
0.0.0.0:30003/TCP (0) 0 1[0] (1) [0x2 0x0]
10.96.163.90:80/TCP (3) 15 0[0] (20) [0x0 0x0]
10.96.60.80:3000/TCP (1) 10 0[0] (12) [0x0 0x0]
0.0.0.0:30001/TCP (1) 11 0[0] (13) [0x2 0x0]
10.96.0.10:9153/TCP (1) 7 0[0] (7) [0x0 0x0]
10.96.163.90:80/TCP (0) 0 3[0] (20) [0x0 0x0]
192.168.10.100:30003/TCP (0) 0 1[0] (3) [0x42 0x0]
10.0.2.15:30001/TCP (0) 0 1[0] (14) [0x42 0x0]
10.96.213.163:443/TCP (1) 2 0[0] (17) [0x0 0x10]
10.96.0.1:443/TCP (1) 1 0[0] (19) [0x0 0x0]
10.96.133.165:443/TCP (1) 9 0[0] (8) [0x0 0x0]
10.96.0.10:53/UDP (1) 5 0[0] (6) [0x0 0x0]
10.96.97.168:9090/TCP (0) 0 1[0] (16) [0x0 0x0]
10.96.0.10:9153/TCP (0) 0 2[0] (7) [0x0 0x0]
10.96.163.90:80/TCP (2) 16 0[0] (20) [0x0 0x0]
192.168.10.100:30002/TCP (1) 10 0[0] (11) [0x42 0x0]
10.96.33.53:80/TCP (1) 13 0[0] (18) [0x0 0x0]
10.96.0.10:53/UDP (2) 6 0[0] (6) [0x0 0x0]
192.168.10.100:30003/TCP (1) 12 0[0] (3) [0x42 0x0]
10.0.2.15:30003/TCP (1) 12 0[0] (2) [0x42 0x0]
192.168.10.100:30002/TCP (0) 0 1[0] (11) [0x42 0x0]
0.0.0.0:30003/TCP (1) 12 0[0] (1) [0x2 0x0]
0.0.0.0:30001/TCP (0) 0 1[0] (13) [0x2 0x0]
10.96.213.163:443/TCP (0) 0 1[0] (17) [0x0 0x10]
10.96.0.10:53/TCP (2) 4 0[0] (5) [0x0 0x0]
- The webpod service IP (10.96.163.90:80/TCP) is mapped to its three backend pods.
- NodePort, ClusterIP, and externally exposed IPs all appear in the map.
๐ก ํต์ ํ์ธ & ํ๋ธ
1. ํต์ ๋ถ์์ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo "---" ; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
---
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-rpz7h
---
---
Hostname: webpod-697b545f57-rpz7h
---
- Calling the webpod service in a loop with curl --connect-timeout 1 shows intermittent missing responses.
- Successful responses always report Hostname webpod-697b545f57-rpz7h or webpod-697b545f57-gkvrf; the pod on k8s-w0 never answers (a direct per-pod check is sketched below).
2. Accessing the Hubble UI
- The curl-pod on the control plane node sends requests to the webpod endpoints deployed across the 3 nodes.
- To trace the path through the router, the next step is to run tcpdump on the router.
3. Checking the webpod IP and a Ping Test
(โ|HomeLab:N/A) root@k8s-ctr:~# export WEBPOD=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD
# ๊ฒฐ๊ณผ
172.20.2.1
- ์์ปค๋
ธ๋0(
k8s-w0
)์ ๋ฐฐํฌ๋webpod
์ IP(172.20.2.1
) ์ถ์ถ ํping
์๋
1
2
3
4
root@router:~# tcpdump -i any icmp -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
- ๋ผ์ฐํฐ์์ tcpdump
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -c 2 -w 1 -W 1 $WEBPOD
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
PING 172.20.2.1 (172.20.2.1) 56(84) bytes of data.
--- 172.20.2.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
command terminated with exit code 1
- ํจํท ์์ค 100% ๋ฐ์ (์๋ต ์์)
4. ๋ผ์ฐํฐ tcpdump ๊ฒฐ๊ณผ ๋ถ์ (ICMP)
1
2
3
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
22:47:59.061351 eth1 In IP 172.20.0.96 > 172.20.2.1: ICMP echo request, id 1155, seq 1, length 64
22:47:59.061370 eth0 Out IP 172.20.0.96 > 172.20.2.1: ICMP echo request, id 1155, seq 1, length 64
- The ICMP echo request from curl-pod (172.20.0.96) to webpod (172.20.2.1) comes IN on eth1.
- It goes OUT on eth0, the internet-facing NAT interface, instead of eth2 where the destination segment lives.
- Cause: the router's routing table has no entry for the 172.20.0.0/16 CIDR, so the packet follows the default route.
(โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 22h 172.20.0.96 k8s-ctr <none> <none>
webpod-697b545f57-fz95q 1/1 Running 0 22h 172.20.2.1 k8s-w0 <none> <none>
webpod-697b545f57-gkvrf 1/1 Running 0 22h 172.20.0.66 k8s-ctr <none> <none>
webpod-697b545f57-rpz7h 1/1 Running 0 22h 172.20.1.126 k8s-w1 <none> <none>
5. ๋ผ์ฐํ ํ ์ด๋ธ ํ์ธ
1
root@router:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
- Confirms that no route is registered for the 172.20.x.x range.
root@router:~# ip route get 172.20.2.1
โ ย ์ถ๋ ฅ
1
2
172.20.2.1 via 10.0.2.2 dev eth0 src 10.0.2.15 uid 0
cache
- ip route get 172.20.2.1 confirms the packet is sent via the default route (10.0.2.2 via eth0); a hypothetical fix is sketched below.
6. TCP Traffic Analysis
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo "---" ; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
Hostname: webpod-697b545f57-rpz7h
---
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
---
---
Hostname: webpod-697b545f57-gkvrf
---
---
Hostname: webpod-697b545f57-gkvrf
---
---
---
---
---
root@router:~# tcpdump -i any tcp port 80 -nn
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
23:01:08.369358 eth1 In IP 172.20.0.96.54708 > 172.20.2.1.80: Flags [S], seq 656654546, win 64240, options [mss 1460,sackOK,TS val 677256276 ecr 0,nop,wscale 7], length 0
23:01:08.369376 eth0 Out IP 172.20.0.96.54708 > 172.20.2.1.80: Flags [S], seq 656654546, win 64240, options [mss 1460,sackOK,TS val 677256276 ecr 0,nop,wscale 7], length 0
23:01:16.429813 eth1 In IP 172.20.0.96.37360 > 172.20.2.1.80: Flags [S], seq 1376149342, win 64240, options [mss 1460,sackOK,TS val 677264336 ecr 0,nop,wscale 7], length 0
23:01:16.429833 eth0 Out IP 172.20.0.96.37360 > 172.20.2.1.80: Flags [S], seq 1376149342, win 64240, options [mss 1460,sackOK,TS val 677264336 ecr 0,nop,wscale 7], length 0
23:01:19.444988 eth1 In IP 172.20.0.96.37368 > 172.20.2.1.80: Flags [S], seq 1991901286, win 64240, options [mss 1460,sackOK,TS val 677267352 ecr 0,nop,wscale 7], length 0
23:01:19.445008 eth0 Out IP 172.20.0.96.37368 > 172.20.2.1.80: Flags [S], seq 1991901286, win 64240, options [mss 1460,sackOK,TS val 677267352 ecr 0,nop,wscale 7], length 0
23:01:22.461799 eth1 In IP 172.20.0.96.41056 > 172.20.2.1.80: Flags [S], seq 958768126, win 64240, options [mss 1460,sackOK,TS val 677270368 ecr 0,nop,wscale 7], length 0
23:01:22.461831 eth0 Out IP 172.20.0.96.41056 > 172.20.2.1.80: Flags [S], seq 958768126, win 64240, options [mss 1460,sackOK,TS val 677270368 ecr 0,nop,wscale 7], length 0
23:01:24.468117 eth1 In IP 172.20.0.96.41064 > 172.20.2.1.80: Flags [S], seq 4136195789, win 64240, options [mss 1460,sackOK,TS val 677272375 ecr 0,nop,wscale 7], length 0
23:01:24.468135 eth0 Out IP 172.20.0.96.41064 > 172.20.2.1.80: Flags [S], seq 4136195789, win 64240, options [mss 1460,sackOK,TS val 677272375 ecr 0,nop,wscale 7], length 0
23:01:26.473963 eth1 In IP 172.20.0.96.41072 > 172.20.2.1.80: Flags [S], seq 2284309943, win 64240, options [mss 1460,sackOK,TS val 677274381 ecr 0,nop,wscale 7], length 0
23:01:26.473981 eth0 Out IP 172.20.0.96.41072 > 172.20.2.1.80: Flags [S], seq 2284309943, win 64240, options [mss 1460,sackOK,TS val 677274381 ecr 0,nop,wscale 7], length 0
23:01:28.480125 eth1 In IP 172.20.0.96.41084 > 172.20.2.1.80: Flags [S], seq 1101677227, win 64240, options [mss 1460,sackOK,TS val 677276387 ecr 0,nop,wscale 7], length 0
23:01:28.480143 eth0 Out IP 172.20.0.96.41084 > 172.20.2.1.80: Flags [S], seq 1101677227, win 64240, options [mss 1460,sackOK,TS val 677276387 ecr 0,nop,wscale 7], length 0
23:01:32.501119 eth1 In IP 172.20.0.96.56438 > 172.20.2.1.80: Flags [S], seq 601326350, win 64240, options [mss 1460,sackOK,TS val 677280408 ecr 0,nop,wscale 7], length 0
23:01:32.501138 eth0 Out IP 172.20.0.96.56438 > 172.20.2.1.80: Flags [S], seq 601326350, win 64240, options [mss 1460,sackOK,TS val 677280408 ecr 0,nop,wscale 7], length 0
- Watching TCP port 80 with tcpdump shows SYN packets coming IN on eth1 and going OUT on eth0.
- This reconfirms that, lacking a route to the destination pod CIDR, the router sends the traffic out the wrong interface.
7. Hubble Port-Forwarding and Flow Monitoring
(1) Port-forwarding first
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
[1] 13992
(โ|HomeLab:N/A) root@k8s-ctr:~# โน๏ธ Hubble Relay is available at 127.0.0.1:4245
- hubble observe shows the k8s-w0 node as unavailable.
- Sessions where only SYNs go out to certain endpoints with no ACK coming back are visible (a filter sketch follows).
๐ฆ Overlay Network (Encapsulation) mode
1. VXLAN Encapsulation ๊ฐ๋ฅ ์ฌ๋ถ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_VXLAN=y|CONFIG_VXLAN=m|CONFIG_GENEVE=y|CONFIG_GENEVE=m|CONFIG_FIB_RULES=y' /boot/config-$(uname -r)
โ ย ์ถ๋ ฅ
1
2
3
CONFIG_FIB_RULES=y # built into the kernel
CONFIG_VXLAN=m # compiled as a module → loaded into the kernel when needed
CONFIG_GENEVE=m # compiled as a module → loaded into the kernel when needed
- ์ปค๋ ์ค์ ํ์ผ(
/boot/config-$(uname -r)
)์์CONFIG_VXLAN=m
,CONFIG_GENEVE=m
ํ์ธ - Encapsulation ๋ฐฉ์(VXLAN, Geneve) ์ง์ ์ฌ๋ถ ์ ๊ฒ
2. Loading the VXLAN Module
(โ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
(โ|HomeLab:N/A) root@k8s-ctr:~# modprobe vxlan
(โ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan'
vxlan 155648 0
ip6_udp_tunnel 16384 1 vxlan
udp_tunnel 32768 1 vxlan
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo modprobe vxlan ; echo; done
>> node : k8s-w1 <<
>> node : k8s-w0 <<
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo lsmod | grep -E 'vxlan|geneve' ; echo; done
>> node : k8s-w1 <<
vxlan 155648 0
ip6_udp_tunnel 16384 1 vxlan
udp_tunnel 32768 1 vxlan
>> node : k8s-w0 <<
vxlan 155648 0
ip6_udp_tunnel 16384 1 vxlan
udp_tunnel 32768 1 vxlan
- modprobe vxlan loads the VXLAN module on the control plane and both worker nodes.
- lsmod confirms the load (vxlan, ip6_udp_tunnel, and udp_tunnel all active).
3. Ping Test (Still in Native Routing Mode)
(1) Extract the IP of the webpod on worker node 1
(โ|HomeLab:N/A) root@k8s-ctr:~# export WEBPOD1=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD1
# ๊ฒฐ๊ณผ
172.20.1.126
(2) Ping from curl-pod
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping $WEBPOD1
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
PING 172.20.1.126 (172.20.1.126) 56(84) bytes of data.
64 bytes from 172.20.1.126: icmp_seq=1 ttl=62 time=0.637 ms
64 bytes from 172.20.1.126: icmp_seq=2 ttl=62 time=0.647 ms
64 bytes from 172.20.1.126: icmp_seq=3 ttl=62 time=0.438 ms
64 bytes from 172.20.1.126: icmp_seq=4 ttl=62 time=0.455 ms
64 bytes from 172.20.1.126: icmp_seq=5 ttl=62 time=0.561 ms
64 bytes from 172.20.1.126: icmp_seq=6 ttl=62 time=0.407 ms
64 bytes from 172.20.1.126: icmp_seq=7 ttl=62 time=0.367 ms
64 bytes from 172.20.1.126: icmp_seq=8 ttl=62 time=0.377 ms
64 bytes from 172.20.1.126: icmp_seq=9 ttl=62 time=0.386 ms
64 bytes from 172.20.1.126: icmp_seq=10 ttl=62 time=0.414 ms
64 bytes from 172.20.1.126: icmp_seq=11 ttl=62 time=0.368 ms
64 bytes from 172.20.1.126: icmp_seq=12 ttl=62 time=0.369 ms
64 bytes from 172.20.1.126: icmp_seq=13 ttl=62 time=0.664 ms
64 bytes from 172.20.1.126: icmp_seq=14 ttl=62 time=0.537 ms
64 bytes from 172.20.1.126: icmp_seq=15 ttl=62 time=0.387 ms
64 bytes from 172.20.1.126: icmp_seq=16 ttl=62 time=0.452 ms
64 bytes from 172.20.1.126: icmp_seq=17 ttl=62 time=0.381 ms
64 bytes from 172.20.1.126: icmp_seq=18 ttl=62 time=0.606 ms
64 bytes from 172.20.1.126: icmp_seq=19 ttl=62 time=0.975 ms
64 bytes from 172.20.1.126: icmp_seq=20 ttl=62 time=0.384 ms
64 bytes from 172.20.1.126: icmp_seq=21 ttl=62 time=0.340 ms
64 bytes from 172.20.1.126: icmp_seq=22 ttl=62 time=0.392 ms
64 bytes from 172.20.1.126: icmp_seq=23 ttl=62 time=0.490 ms
64 bytes from 172.20.1.126: icmp_seq=24 ttl=62 time=0.405 ms
64 bytes from 172.20.1.126: icmp_seq=25 ttl=62 time=0.820 ms
64 bytes from 172.20.1.126: icmp_seq=26 ttl=62 time=0.387 ms
...
- ์ปจํธ๋กคํ๋ ์ธ๊ณผ ์์ปค๋ ธ๋1์ ๊ฐ์ ๋์ญ์ด๋ผ ํต์ ์ ์ ์๋ต
4. Switching Cilium to Overlay Mode (VXLAN)
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --version 1.18.0 --reuse-values \
--set routingMode=tunnel --set tunnelProtocol=vxlan \
--set autoDirectNodeRoutes=false --set installNoConntrackIptablesRules=false
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Fri Aug 8 23:47:06 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
- The Helm upgrade sets routingMode=tunnel and tunnelProtocol=vxlan.
- autoDirectNodeRoutes=false disables direct node routes.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart -n kube-system ds/cilium
# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
- kubectl rollout restart restarts the Cilium DaemonSet (a wait/verify sketch follows).
5. Verifying Overlay Mode
(1) mode=overlay-vxlan confirmed
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium features status | grep datapath_network
โ ย ์ถ๋ ฅ
1
Yes cilium_feature_datapath_network mode=overlay-vxlan 1 1 1
(2) Routing: Network: Tunnel [vxlan] reported
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium status | grep ^Routing
โ ย ์ถ๋ ฅ
1
Routing: Network: Tunnel [vxlan] Host: BPF
(3) ๊ฐ ๋
ธ๋์ cilium_vxlan
์ธํฐํ์ด์ค ์์ฑ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c addr show dev cilium_vxlan
โ ย ์ถ๋ ฅ
1
2
3
4
26: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 8e:c3:47:03:17:dc brd ff:ff:ff:ff:ff:ff
inet6 fe80::8cc3:47ff:fe03:17dc/64 scope link
valid_lft forever preferred_lft forever
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show dev cilium_vxlan ; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
>> node : k8s-w1 <<
8: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 2e:fb:fe:83:8f:85 brd ff:ff:ff:ff:ff:ff
inet6 fe80::2cfb:feff:fe83:8f85/64 scope link
valid_lft forever preferred_lft forever
>> node : k8s-w0 <<
8: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether d6:44:cc:8c:e1:ce brd ff:ff:ff:ff:ff:ff
inet6 fe80::d444:ccff:fe8c:e1ce/64 scope link
valid_lft forever preferred_lft forever
6. Checking the Pod CIDR Routes in the Routing Table (control plane)
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.253 dev cilium_host proto kernel scope link
172.20.1.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253 mtu 1450
172.20.2.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253 mtu 1450
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep cilium_host
โ ย ์ถ๋ ฅ
1
2
3
4
172.20.0.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253
172.20.0.253 dev cilium_host proto kernel scope link
172.20.1.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253 mtu 1450
172.20.2.0/24 via 172.20.0.253 dev cilium_host proto kernel src 172.20.0.253 mtu 1450
- ๊ธฐ์กด์๋ ์๋ ๋ค๋ฅธ ๋
ธ๋์ Pod CIDR(
172.20.1.0/24
,172.20.2.0/24
) ๊ฒฝ๋ก๊ฐcilium_host
์ธํฐํ์ด์ค๋ฅผ ํตํด ๋ฑ๋ก๋จ
7. ํน์ Pod IP ๊ฒฝ๋ก ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip route get 172.20.1.10
โ ย ์ถ๋ ฅ
1
2
172.20.1.10 dev cilium_host src 172.20.0.253 uid 0
cache mtu 1450
(โ|HomeLab:N/A) root@k8s-ctr:~# ip route get 172.20.2.10
โ ย ์ถ๋ ฅ
1
2
172.20.2.10 dev cilium_host src 172.20.0.253 uid 0
cache mtu 1450
- Route lookups for `172.20.1.10` and `172.20.2.10` confirm that traffic goes out via `dev cilium_host`
- The MTU is set to 1450 because of the VXLAN tunnel (see the sketch below)
8. ์์ปค๋ ธ๋ ๋ผ์ฐํ ํ ์ด๋ธ ๋น๊ต
1
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w0 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep cilium_host ; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
>> node : k8s-w1 <<
172.20.0.0/24 via 172.20.1.238 dev cilium_host proto kernel src 172.20.1.238 mtu 1450
172.20.1.0/24 via 172.20.1.238 dev cilium_host proto kernel src 172.20.1.238
172.20.1.238 dev cilium_host proto kernel scope link
172.20.2.0/24 via 172.20.1.238 dev cilium_host proto kernel src 172.20.1.238 mtu 1450
>> node : k8s-w0 <<
172.20.0.0/24 via 172.20.2.13 dev cilium_host proto kernel src 172.20.2.13 mtu 1450
172.20.1.0/24 via 172.20.2.13 dev cilium_host proto kernel src 172.20.2.13 mtu 1450
172.20.2.0/24 via 172.20.2.13 dev cilium_host proto kernel src 172.20.2.13
172.20.2.13 dev cilium_host proto kernel scope link
- ์์ปค๋ ธ๋1, ์์ปค๋ ธ๋0 ๋ชจ๋ ์์ + ๋ค๋ฅธ ๋ ธ๋์ Pod CIDR ๋ผ์ฐํ ๊ฒฝ๋ก ํฌํจ
via
๋ค์ IP๋ ๊ฐ ๋ ธ๋ Cilium Router IP๋ก ์ค์ ๋์ด ์์
9. Checking the Cilium Router IPs
(โ|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2
# check the router-role IPs
kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
โ ย ์ถ๋ ฅ
1
2
3
4
cilium-th5dp cilium-snscc cilium-sjdzj
172.20.0.253 (router)
172.20.1.238 (router)
172.20.2.13 (router)
- ์ปจํธ๋กค ํ๋ ์ธ:
172.20.0.253
, ์์ปค๋ ธ๋1:172.20.1.238
, ์์ปค๋ ธ๋0:172.20.2.13
10. Inspecting the BPF ipcache
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD0 -- cilium-dbg bpf ipcache list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
IP PREFIX/ADDRESS IDENTITY
172.20.0.197/32 identity=58623 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.1.0/24 identity=2 encryptkey=0 tunnelendpoint=192.168.10.101 flags=hastunnel
192.168.10.100/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.71/32 identity=18480 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.163/32 identity=32707 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
192.168.10.101/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
192.168.20.100/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.33/32 identity=7496 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.96/32 identity=63446 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.248/32 identity=58623 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.2.0/24 identity=2 encryptkey=0 tunnelendpoint=192.168.20.100 flags=hastunnel
172.20.0.75/32 identity=5827 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.253/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.1.238/32 identity=6 encryptkey=0 tunnelendpoint=192.168.10.101 flags=hastunnel
172.20.2.1/32 identity=8469 encryptkey=0 tunnelendpoint=192.168.20.100 flags=hastunnel
172.20.2.13/32 identity=6 encryptkey=0 tunnelendpoint=192.168.20.100 flags=hastunnel
0.0.0.0/0 identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.1.126/32 identity=8469 encryptkey=0 tunnelendpoint=192.168.10.101 flags=hastunnel
10.0.2.15/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.66/32 identity=8469 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.72/32 identity=48231 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.73/32 identity=22595 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD1 -- cilium-dbg bpf ipcache list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
IP PREFIX/ADDRESS IDENTITY
172.20.0.66/32 identity=8469 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.73/32 identity=22595 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.197/32 identity=58623 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.248/32 identity=58623 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.1.238/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.2.0/24 identity=2 encryptkey=0 tunnelendpoint=192.168.20.100 flags=hastunnel
192.168.20.100/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
0.0.0.0/0 identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
10.0.2.15/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.71/32 identity=18480 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.75/32 identity=5827 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.96/32 identity=63446 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.253/32 identity=6 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
192.168.10.100/32 identity=7 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.33/32 identity=7496 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.163/32 identity=32707 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.0/24 identity=2 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.1.126/32 identity=8469 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.2.1/32 identity=8469 encryptkey=0 tunnelendpoint=192.168.20.100 flags=hastunnel
192.168.10.101/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.72/32 identity=48231 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.2.13/32 identity=6 encryptkey=0 tunnelendpoint=192.168.20.100 flags=hastunnel
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD2 -- cilium-dbg bpf ipcache list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
IP PREFIX/ADDRESS IDENTITY
172.20.0.72/32 identity=48231 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.248/32 identity=58623 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.2.13/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.66/32 identity=8469 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.96/32 identity=63446 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.163/32 identity=32707 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.1.126/32 identity=8469 encryptkey=0 tunnelendpoint=192.168.10.101 flags=hastunnel
172.20.2.1/32 identity=8469 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
192.168.10.100/32 identity=7 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
0.0.0.0/0 identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
10.0.2.15/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.71/32 identity=18480 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.73/32 identity=22595 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.0/24 identity=2 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.1.238/32 identity=6 encryptkey=0 tunnelendpoint=192.168.10.101 flags=hastunnel
172.20.1.0/24 identity=2 encryptkey=0 tunnelendpoint=192.168.10.101 flags=hastunnel
192.168.20.100/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
172.20.0.33/32 identity=7496 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.75/32 identity=5827 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.197/32 identity=58623 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
172.20.0.253/32 identity=6 encryptkey=0 tunnelendpoint=192.168.10.100 flags=hastunnel
192.168.10.101/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>
- ๊ฐ ๋ ธ๋๋ณ Pod CIDR์ Tunnel Endpoint ๋งคํ ์ ๋ณด ํ์ธ
flags=hastunnel
ํ๋๊ทธ๋ก ํฐ๋ ํต์ ๊ฒฝ๋ก์์ ์๋ณ ๊ฐ๋ฅ- ๋ชจ๋ ๋ ธ๋์์ ๋ค๋ฅธ ๋ ธ๋ Pod CIDR๊ฐ Tunnel Endpoint์ ํจ๊ป ๋ฑ๋ก๋์ด ์์
11. Checking the BPF socknat Mappings
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD0 -- cilium-dbg bpf socknat list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
Socket Cookie Backend -> Frontend
59452 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
4120 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
55815 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
136068 192.168.10.100:-27632 -> 10.96.213.163:-17663 (revnat=4352)
4126 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
55925 172.20.0.75:2600 -> 10.96.133.165:-17663 (revnat=2048)
4202 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
65383 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
4562 192.168.10.100:11033 -> 10.96.0.1:-17663 (revnat=4864)
65386 172.20.0.163:-27376 -> 10.96.33.53:20480 (revnat=4608)
55955 172.20.0.75:2600 -> 10.96.133.165:-17663 (revnat=2048)
- Shows the Backend → Frontend NAT mappings maintained by socket-LB
- Service traffic carried over the VXLAN tunnel is recorded in this NAT table (the load-balancing table it pairs with can be dumped as sketched below)
๐ ํ๋๊ฐ ํต์ ํ์ธ
1. ์ ์ฒด ํ๋ ํต์ ์ํ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo "---" ; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-fz95q
---
Hostname: webpod-697b545f57-fz95q
- ํด๋ฌ์คํฐ ๋ด
webpod
3๊ฐ์curl-pod
๊ฐ ํต์ ์ ์ ๋์ ํ์ธ - ๋ชจ๋ Pod์ ๋ํด ์ ์ ์๋ต์ด ์์ ๋จ
2. ์์ปค๋ ธ๋0 ๋์ IP ํ์ธ
1
2
3
4
5
(โ|HomeLab:N/A) root@k8s-ctr:~# export WEBPOD=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD
# ๊ฒฐ๊ณผ
172.20.2.1
3. Verifying Connectivity with Ping
(1) Ping from `curl-pod` to the `webpod` on worker node 0
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -c 2 -w 1 -W 1 $WEBPOD
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
PING 172.20.2.1 (172.20.2.1) 56(84) bytes of data.
64 bytes from 172.20.2.1: icmp_seq=1 ttl=63 time=1.25 ms
--- 172.20.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.246/1.246/1.246/0.000 ms
command terminated with exit code 1
- 0% ํจํท ์์ค๋ก ์ ์ ์๋ต ํ์ธ (RTT ์ฝ 1.25ms)
(2) Capturing the VXLAN-Encapsulated Packets
root@router:~# tcpdump -i any udp port 8472 -nn
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
00:19:44.022486 eth1 In IP 192.168.10.100.33604 > 192.168.20.100.8472: OTV, flags [I] (0x08), overlay 0, instance 63446
IP 172.20.0.96 > 172.20.2.1: ICMP echo request, id 3995, seq 1, length 64
00:19:44.022498 eth2 Out IP 192.168.10.100.33604 > 192.168.20.100.8472: OTV, flags [I] (0x08), overlay 0, instance 63446
IP 172.20.0.96 > 172.20.2.1: ICMP echo request, id 3995, seq 1, length 64
00:19:44.023183 eth2 In IP 192.168.20.100.45255 > 192.168.10.100.8472: OTV, flags [I] (0x08), overlay 0, instance 8469
IP 172.20.2.1 > 172.20.0.96: ICMP echo reply, id 3995, seq 1, length 64
00:19:44.023187 eth1 Out IP 192.168.20.100.45255 > 192.168.10.100.8472: OTV, flags [I] (0x08), overlay 0, instance 8469
IP 172.20.2.1 > 172.20.0.96: ICMP echo reply, id 3995, seq 1, length 64
- ๋ผ์ฐํฐ์์
tcpdump -i any udp port 8472
๋ก VXLAN ํธ๋ํฝ ์บก์ฒ - Outer ํค๋(๋
ธ๋ IP:
192.168.x.x
) ์์ Inner ํค๋(Pod IP:172.20.x.x
) ํฌํจ ํ์ธ - ICMP ์๋ณธ ํจํท์ด VXLAN ํค๋๋ก ๊ฐ์ธ์ ธ
eth1
โeth2
๋ก ์ ๋ฌ๋๋ ๊ฒฝ๋ก ํ์ธ
4. ๋ฐ๋ณต ์์ฒญ์ผ๋ก ํธ๋ํฝ ์์ฑ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo "---" ; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
Hostname: webpod-697b545f57-fz95q
---
Hostname: webpod-697b545f57-fz95q
---
Hostname: webpod-697b545f57-gkvrf
---
Hostname: webpod-697b545f57-rpz7h
...
- ์๋ต Hostname์ด ์ฌ๋ฌ ํ๋(
fz95q
,gkvrf
,rpz7h
)๋ก ๋ถ์ฐ๋์ด ์์ ๋จ
5. Capturing VXLAN Traffic to a File
root@router:~# tcpdump -i any udp port 8472 -w /tmp/vxlan.pcap
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
^C160 packets captured
162 packets received by filter
0 packets dropped by kernel
- ๋ผ์ฐํฐ์์
tcpdump -i any udp port 8472 -w /tmp/vxlan.pcap
์ผ๋ก VXLAN ํธ๋ํฝ ์ ์ฅ - ์ด 160๊ฐ์ VXLAN ํจํท์ด ์บก์ฒ๋์ด ์ค๋ฒ๋ ์ด ํต์ ํ๋ฆ ํ์ธ ๊ฐ๋ฅ
6. Inspecting with Termshark
root@router:~# termshark -r /tmp/vxlan.pcap -d udp.port==8472,vxlan
- VXLAN ๊ธฐ๋ณธ ํฌํธ(8472)๋ฅผ ์ง์ ํ์ฌ Termshark์์ ์บก์ฒ ํ์ผ์ ๋ถ์
- Outer ํค๋(๋ ธ๋ IP)์ Inner ํค๋(Pod IP) ๊ตฌ์กฐ๊ฐ ํ์ธ๋จ
- ๊ฐ ํจํท์๋ Internet Protocol, UDP, VXLAN ํค๋๊ฐ ์ถ๊ฐ๋์ด ์๋ณธ Pod ๊ฐ ํจํท์ด ๊ฐ์ธ์ง
- ๋ชฉ์ ์ง ๋ ธ๋์์ Decapsulation ์ํ, ์ด ๊ณผ์ ์์ CPU ๋ฑ ์ถ๊ฐ ๋ฆฌ์์ค ์ฌ์ฉ
7. Confirming the Overlay Path with Hubble Observe
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --pod curl-pod
โ ย ์ถ๋ ฅ
1
2
3
4
Aug 8 15:36:51.979: default/curl-pod:48788 (ID:63446) -> default/webpod-697b545f57-fz95q:80 (ID:8469) to-overlay FORWARDED (TCP Flags: SYN)
Aug 8 15:36:51.980: default/curl-pod:48788 (ID:63446) <- default/webpod-697b545f57-fz95q:80 (ID:8469) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Aug 8 15:36:51.980: default/curl-pod:48788 (ID:63446) -> default/webpod-697b545f57-fz95q:80 (ID:8469) to-overlay FORWARDED (TCP Flags: ACK)
...
- The `to-overlay` entries in `hubble observe` confirm that packets traverse the VXLAN tunnel
- The subsequent `to-endpoint` stage shows delivery to the destination pod (a sub-type filter sketch follows)
๐ฏ K8S Service ๊ฐ๋ ์ ๋ฆฌ
(1) Why direct access to Pod IPs is avoided
- Pods can be terminated and recreated at any time, and their IPs change
- Stable access requires a Service that provides a fixed IP or domain name
(2) What a Service does
- Creates a fixed virtual IP (ClusterIP) and DNS name for a set of pods
- Inside the cluster → traffic is routed via the Service ClusterIP
- Outside the cluster → the ClusterIP is not directly reachable
(3) ์ธ๋ถ ๋ ธ์ถ ๋ฐฉ๋ฒ
- NodePort
- Service ํ์
์
NodePort
๋ก ์ค์ - ์ง์ ํฌํธ๊ฐ ๋ชจ๋ ๋ ธ๋์ ๋ฆฌ์จ ์ํ๋ก ์ด๋ฆผ
- ์ธ๋ถ ํด๋ผ์ด์ธํธ โ
๋ ธ๋IP:NodePort
๋ก ์ ๊ทผ - kube-proxy(iptable/IPVS)๋ฅผ ํตํด ๋ฐฑ์๋ Pod๋ก ํธ๋ํฝ ์ ๋ฌ
- Service ํ์
์
- LoadBalancer
- NodePort ๊ธฐ๋ฐ์ ํ์ฅํ ๋ฐฉ์
- ์ธ๋ถ ๋ก๋๋ฐธ๋ฐ์๋ฅผ ํตํด ๋จ์ผ ์ธ๋ถ IP ์ ๊ณต ๋ฐ ํธ๋ํฝ ๋ถ์ฐ
- ํด๋ผ์ฐ๋ ํ๊ฒฝ์ Managed LB ๋๋ On-Premise LB ์ฅ๋น์ ์ฐ๊ณ ๊ฐ๋ฅ
๐๏ธ Service LB-IPAM
[k8s ํด๋ฌ์คํฐ ๋ด๋ถ] webpod ์๋น์ค๋ฅผ LoadBalancer Type ์ค์ with Cilium LB IPAM
1. Checking the Initial LB-IPAM State
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumLoadBalancerIPPool -A
# ๊ฒฐ๊ณผ
No resources found
- ๊ธฐ์กด IP Pool ๋ฆฌ์์ค ์์ ํ์ธ
2. Creating a CiliumLoadBalancerIPPool
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2" # v1.17 : cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
name: "cilium-lb-ippool"
spec:
blocks:
- start: "192.168.10.211"
stop: "192.168.10.215"
EOF
# ๊ฒฐ๊ณผ
ciliumloadbalancerippool.cilium.io/cilium-lb-ippool created
- Creates an IP pool covering the range `192.168.10.211` ~ `192.168.10.215`
- The created resource can be checked with `kubectl get ippools` (a selector-scoped pool sketch follows)
3. ๋ฆฌ์์ค ์ถ์ฝ์ด ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl api-resources | grep -i CiliumLoadBalancerIPPool
โ ย ์ถ๋ ฅ
1
ciliumloadbalancerippools ippools,ippool,lbippool,lbippools cilium.io/v2 false CiliumLoadBalancerIPPool
- Short names such as `ippool` and `lbippool` are available
4. Inspecting the IP Pool Status
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippools
โ ย ์ถ๋ ฅ
1
2
NAME DISABLED CONFLICTING IPS AVAILABLE AGE
cilium-lb-ippool false False 5 4m2s
- ์์ฑ ์งํ
cilium-lb-ippool
์ํ: ์ฌ์ฉ ๊ฐ๋ฅ IP 5๊ฐ, ์ถฉ๋ ์์
5. ์๋น์ค ํ์ ๋ณ๊ฒฝ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
โ ย ์ถ๋ ฅ
1
2
3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
webpod ClusterIP 10.96.163.90 <none> 80/TCP 24h
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{"spec":{"type":"LoadBalancer"}}'
# ๊ฒฐ๊ณผ
service/webpod patched
- ๊ธฐ์กด
ClusterIP
ํ์ ์webpod
์๋น์ค๋ฅผLoadBalancer
๋ก ๋ณ๊ฒฝ
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
โ ย ์ถ๋ ฅ
1
2
3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
webpod LoadBalancer 10.96.163.90 192.168.10.211 80:30276/TCP 24h
- Cilium LB-IPAM immediately assigns an external IP (`192.168.10.211`)
6. ๋ ธ๋ ๋ฐ ํ๋์์ LB-IP ์ ๊ทผ ํ ์คํธ
1
2
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LBIP=$(kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s $LBIP
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
Hostname: webpod-697b545f57-rpz7h
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.126
IP: fe80::3024:a3ff:feb8:6a8
RemoteAddr: 172.20.0.253:48068
GET / HTTP/1.1
Host: 192.168.10.211
User-Agent: curl/8.5.0
Accept: */*
- k8s ๋
ธ๋์์
curl
์์ฒญ ์ ์ ์ ์๋ต ์์ curl-pod
์์ LB-IP ์์ฒญ ์ ๋์ ํ๋ Hostname ๋ฐ RemoteAddr ํ์ธ ๊ฐ๋ฅ
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s $LBIP
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
Hostname: webpod-697b545f57-rpz7h
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.126
IP: fe80::3024:a3ff:feb8:6a8
RemoteAddr: 172.20.0.96:49084
GET / HTTP/1.1
Host: 192.168.10.211
User-Agent: curl/8.14.1
Accept: */*
๋์ ํ๋ ์ด๋ฆ ์ถ๋ ฅ
1
2
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s $LBIP | grep Hostname
Hostname: webpod-697b545f57-rpz7h
๋์ ํ๋ ์ ์ฅ์์ ์์ค IP ์ถ๋ ฅ(Layer3)
1
2
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s $LBIP | grep RemoteAddr
RemoteAddr: 172.20.0.96:37104
7. ์์ฒญ ๋ถ์ฐ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# while true; do kubectl exec -it curl-pod -- curl -s $LBIP | grep Hostname; sleep 0.1; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
Hostname: webpod-697b545f57-gkvrf
Hostname: webpod-697b545f57-rpz7h
Hostname: webpod-697b545f57-gkvrf
Hostname: webpod-697b545f57-rpz7h
Hostname: webpod-697b545f57-fz95q
Hostname: webpod-697b545f57-gkvrf
Hostname: webpod-697b545f57-gkvrf
Hostname: webpod-697b545f57-gkvrf
Hostname: webpod-697b545f57-rpz7h
...
- 0.1์ด ๊ฐ๊ฒฉ ๋ฐ๋ณต ์์ฒญ ์ํ ๊ฒฐ๊ณผ, 3๊ฐ์ ํ๋๋ก ๊ท ๋ฑํ๊ฒ ํธ๋ํฝ ๋ถ๋ฐฐ
(โ|HomeLab:N/A) root@k8s-ctr:~# for i in {1..100}; do kubectl exec -it curl-pod -- curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
โ ย ์ถ๋ ฅ
1
2
3
38 Hostname: webpod-697b545f57-gkvrf
34 Hostname: webpod-697b545f57-rpz7h
28 Hostname: webpod-697b545f57-fz95q
- The `uniq -c` counts show each pod received a similar number of requests
8. Checking IP Pool Usage
The pool originally held 5 IPs; with one now allocated, 4 remain.
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippools
โ ย ์ถ๋ ฅ
1
2
NAME DISABLED CONFLICTING IPS AVAILABLE AGE
cilium-lb-ippool false False 4 14m
- ์ด๊ธฐ 5๊ฐ IP ์ค 1๊ฐ ํ ๋น์ผ๋ก ์ฌ์ฉ ๊ฐ๋ฅ IP 4๊ฐ๋ก ๊ฐ์
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippools -o jsonpath='{.items[*].status.conditions[?(@.type!="cilium.io/PoolConflict")]}' | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
{
"lastTransitionTime": "2025-08-08T16:03:27Z",
"message": "5",
"observedGeneration": 1,
"reason": "noreason",
"status": "Unknown",
"type": "cilium.io/IPsTotal"
}
{
"lastTransitionTime": "2025-08-08T16:03:27Z",
"message": "4",
"observedGeneration": 1,
"reason": "noreason",
"status": "Unknown",
"type": "cilium.io/IPsAvailable"
}
{
"lastTransitionTime": "2025-08-08T16:03:27Z",
"message": "1",
"observedGeneration": 1,
"reason": "noreason",
"status": "Unknown",
"type": "cilium.io/IPsUsed"
}
- Confirms `IPsTotal=5`, `IPsAvailable=4`, `IPsUsed=1`
9. ์๋น์ค ์ปจ๋์ ์์ IPAM ์ํ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath='{.status}' | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
{
"conditions": [
{
"lastTransitionTime": "2025-08-08T16:09:03Z",
"message": "",
"reason": "satisfied",
"status": "True",
"type": "cilium.io/IPAMRequestSatisfied"
}
],
"loadBalancer": {
"ingress": [
{
"ip": "192.168.10.211",
"ipMode": "VIP"
}
]
}
}
- The `cilium.io/IPAMRequestSatisfied` condition is `True`: the LB IP was assigned successfully
- The LoadBalancer ingress entry reports `ipMode` as `VIP`
[k8s ํด๋ฌ์คํฐ ์ธ๋ถ] webpod ์๋น์ค๋ฅผ LoadBalancer External IP๋ก ํธ์ถ ํ์ธ
10. ์ธ๋ถ ๋ผ์ฐํฐ์์ LBIP ํต์ ์๋
1
2
3
4
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
webpod LoadBalancer 10.96.163.90 192.168.10.211 80:30276/TCP 24h
root@router:~# LBIP=192.168.10.211
โ ย ์ถ๋ ฅ
1
2
curl --connect-timeout 1 $LBIP
curl: (28) Failed to connect to 192.168.10.211 port 80 after 1001 ms: Timeout was reached
- ์ฟ ๋ฒ๋คํฐ์ค ํด๋ฌ์คํฐ ์ธ๋ถ(์กฐ์ธ๋์ง ์์ router)์์
192.168.10.211
(webpod LoadBalancer IP)๋ก HTTP ์์ฒญ ์๋ curl --connect-timeout 1
๊ฒฐ๊ณผ, 1์ด ๋๊ธฐ ํ ํ์์์ ๋ฐ์
11. ARP Request Failure
root@router:~# arping -i eth1 $LBIP -c 1
โ ย ์ถ๋ ฅ
1
2
3
4
5
ARPING 192.168.10.211
Timeout
--- 192.168.10.211 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)
- ์๋ต ์์(100% unanswered)
- ๊ฐ์ ๋คํธ์ํฌ ๋์ญ์์ ํต์ ํ๋ ค๋ฉด ๋์ IP์ ๋งค์นญ๋๋ MAC ์ฃผ์๊ฐ ํ์ํ์ง๋ง, ์ด๋ฅผ ๊ฐ์ ธ์ค์ง ๋ชปํจ
12. Checking the ARP Table
root@router:~# arp -a
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
? (192.168.10.101) at 08:00:27:fb:78:e5 [ether] on eth1
? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on eth0
? (192.168.10.100) at 08:00:27:d5:e8:d7 [ether] on eth1
_gateway (10.0.2.2) at 52:55:0a:00:02:02 [ether] on eth0
? (192.168.20.100) at 08:00:27:eb:bf:4f [ether] on eth2
? (192.168.10.211) at <incomplete> on eth1
- In the `arp -a` output, the other hosts have MAC addresses, but `192.168.10.211` is stuck in the `<incomplete>` state
- With no MAC address for the LB IP, communication is impossible
13. ์ง์์ ์ธ ARP ์์ฒญ ํ ์คํธ
1
root@router:~# arping -i eth1 $LBIP -c 100000
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
ARPING 192.168.10.211
Timeout
Timeout
Timeout
Timeout
Timeout
...
- Even `arping -i eth1 192.168.10.211 -c 100000` gets no reply
- This means nothing on the segment answers ARP for the LB IP (the tcpdump sketch below shows the same thing from the node side)
๐ Cilium L2 Announcement
[k8s ํด๋ฌ์คํฐ ์ธ๋ถ] webpod ์๋น์ค๋ฅผ LoadBalancer ExternalIP๋ก ํธ์ถ with L2 Announcements
1. Enabling the Cilium L2 Announcement Feature
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --version 1.18.0 --reuse-values \
--set l2announcements.enabled=true && watch -d kubectl get pod -A
# ๊ฒฐ๊ณผ
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug 9 13:05:19 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
- Helm ์
๊ทธ๋ ์ด๋ ์
l2announcements.enabled=true
์ต์ ์ ์ฉํ์ฌ ๊ธฐ๋ฅ ํ์ฑํ
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart -n kube-system ds/cilium
daemonset.apps/cilium restarted
- Restart the Cilium DaemonSet with `kubectl rollout restart`
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg config --all | grep EnableL2Announcements
EnableL2Announcements : true
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep enable-l2
enable-l2-announcements true
enable-l2-neigh-discovery false
- Both `cilium-dbg config` and `cilium config view` confirm the `EnableL2Announcements: true` state
2. Configuring the L2 Announcement Policy
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2alpha1" # not v2
kind: CiliumL2AnnouncementPolicy
metadata:
name: policy1
spec:
serviceSelector:
matchLabels:
app: webpod
nodeSelector:
matchExpressions:
- key: kubernetes.io/hostname
operator: NotIn
values:
- k8s-w0
interfaces:
- ^eth[1-9]+
externalIPs: true
loadBalancerIPs: true
EOF
# ๊ฒฐ๊ณผ
ciliuml2announcementpolicy.cilium.io/policy1 created
- Creates a `CiliumL2AnnouncementPolicy` resource
- Targets only the `app=webpod` service, and excludes the `k8s-w0` node from announcing the external IP
- Announcing interfaces: NICs matching the `^eth[1-9]+` pattern
- Both `externalIPs` and `loadBalancerIPs` are announced
- Once the policy is applied, the external IP becomes reachable from outside (an ARP responder now exists)
3. Identifying the ARP-Responding Leader Node
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
NAME HOLDER AGE
apiserver-k3qt3hgfvd4qocxh5wccoxpss4 apiserver-k3qt3hgfvd4qocxh5wccoxpss4_020ec706-87a6-4bd5-8d5a-49b96212344a 37h
cilium-l2announce-default-webpod k8s-ctr 2m16s
cilium-operator-resource-lock k8s-ctr-bstpwpr5rc 37h
kube-controller-manager k8s-ctr_5154e252-9521-4f38-8644-bb959f34c46a 37h
kube-scheduler k8s-ctr_8cdbb3a1-acdc-49a5-9f9a-1be875a5ef3e 37h
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep "cilium-l2announce"
cilium-l2announce-default-webpod k8s-ctr 2m44s
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease/cilium-l2announce-default-webpod -o yaml | yq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
{
"apiVersion": "coordination.k8s.io/v1",
"kind": "Lease",
"metadata": {
"creationTimestamp": "2025-08-09T04:13:07Z",
"name": "cilium-l2announce-default-webpod",
"namespace": "kube-system",
"resourceVersion": "36222",
"uid": "285205c3-c95e-497b-987b-de58a9460b84"
},
"spec": {
"acquireTime": "2025-08-09T04:13:07.812116Z",
"holderIdentity": "k8s-ctr",
"leaseDurationSeconds": 15,
"leaseTransitions": 0,
"renewTime": "2025-08-09T04:17:14.673805Z"
}
}
- Confirms the `cilium-l2announce-default-webpod` lease is held by the leader node `k8s-ctr`
(โ|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2
โ ย ์ถ๋ ฅ
1
cilium-bzfjl cilium-t6hbg cilium-vp87t
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD0 -- cilium-dbg shell -- db/show l2-announce
โ ย ์ถ๋ ฅ
1
2
IP NetworkInterface
192.168.10.211 eth1
- ๋ฆฌ๋ ๋
ธ๋์์๋ง External IP(
192.168.10.211)
์ ํด๋น NIC(eth1
) ์ ๋ณด ํ์
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD1 -- cilium-dbg shell -- db/show l2-announce
IP NetworkInterface
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD2 -- cilium-dbg shell -- db/show l2-announce
IP NetworkInterface
- ๋๋จธ์ง ๋ ธ๋์์๋ ํด๋น IP ์ ๋ณด ์์ โ ๋ฆฌ๋ ๋ ธ๋๋ง ARP ์๋ต ์ํ
4. ์ธ๋ถ ๋ผ์ฐํฐ์์ ์ ๊ทผ ํ์ธ
1
root@router:~# curl --connect-timeout 1 $LBIP
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
Hostname: webpod-697b545f57-fz95q
IP: 127.0.0.1
IP: ::1
IP: 172.20.2.1
IP: fe80::7c30:5bff:fe77:1d6d
RemoteAddr: 172.20.0.253:40574
GET / HTTP/1.1
Host: 192.168.10.211
User-Agent: curl/8.5.0
Accept: */*
- ์ธ๋ถ ๋ผ์ฐํฐ์์ LoadBalancer IP(
192.168.10.211
)๋กcurl
์์ฒญ ์ ์ ์ ์๋ต
root@router:~# arp -a
? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on eth0
? (192.168.10.100) at 08:00:27:d5:e8:d7 [ether] on eth1
_gateway (10.0.2.2) at 52:55:0a:00:02:02 [ether] on eth0
? (192.168.20.100) at 08:00:27:eb:bf:4f [ether] on eth2
? (192.168.10.211) at 08:00:27:d5:e8:d7 [ether] on eth1
- In `arp -a`, `192.168.10.211` now shows the same MAC address as the leader node's IP (`192.168.10.100`)
- This confirms the leader node owns the external IP and answers ARP for it
root@router:~# curl -s $LBIP | grep Hostname
Hostname: webpod-697b545f57-rpz7h
root@router:~# curl -s $LBIP | grep RemoteAddr
RemoteAddr: 172.20.0.253:40236
- ์ธ๋ถ์์ ๋ฐ๋ณต ์์ฒญ ์, ๋ชจ๋ ํธ๋ํฝ์ด ๋จผ์ ๋ฆฌ๋ ๋ ธ๋(k8s-ctr)๋ก ๋ค์ด๊ฐ๊ณ ์ดํ ๊ฐ ์์ปค ๋ ธ๋์ Pod๋ก ๋ถ์ฐ ์๋ต๋จ
- L2 Announcement ๊ตฌ์กฐ์ ๋ฆฌ๋ ๋
ธ๋๊ฐ ์ต์ปค(anchor) ์ญํ ์ ์ํํ์ฌ ํญ์ ์ง์
์ง์ ์ด ๋จ
- ์ด๋ก ์ธํ ํธ๋ํฝ ๊ฒฝ์ ๊ฐ ๋ฐ์ํ๋ ๊ฒ์ด ๋์์ ํ๊ณ
๐งฉ Service LB-IPAM ๊ธฐ๋ฅ
1. Creating the netshoot-web Deployment
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: netshoot-web
labels:
app: netshoot-web
spec:
replicas: 3
selector:
matchLabels:
app: netshoot-web
template:
metadata:
labels:
app: netshoot-web
spec:
terminationGracePeriodSeconds: 0
containers:
- name: netshoot
image: nicolaka/netshoot
ports:
- containerPort: 8080
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command: ["sh", "-c"]
args:
- |
while true; do
{ echo -e "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nOK from \$POD_NAME"; } | nc -l -p 8080 -q 1;
done
EOF
# ๊ฒฐ๊ณผ
deployment.apps/netshoot-web created
- ๊ฐ Pod์์ 8080 ํฌํธ๋ฅผ ๋ฆฌ์จํ๊ณ HTTP ์๋ต ์ ์์ ์
POD_NAME
์ถ๋ ฅ
2. Creating a LoadBalancer-Type Service
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: netshoot-web
labels:
app: netshoot-web
spec:
type: LoadBalancer
selector:
app: netshoot-web
ports:
- name: http
port: 80
targetPort: 8080
EOF
# ๊ฒฐ๊ณผ
service/netshoot-web created
- The `netshoot-web` service is created with type LoadBalancer and receives the external IP `192.168.10.212`
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc netshoot-web
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
netshoot-web LoadBalancer 10.96.68.82 192.168.10.212 80:30376/TCP 21s
3. Applying a CiliumL2AnnouncementPolicy
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2alpha1" # not v2
kind: CiliumL2AnnouncementPolicy
metadata:
name: policy2
spec:
serviceSelector:
matchLabels:
app: netshoot-web
nodeSelector:
matchExpressions:
- key: kubernetes.io/hostname
operator: NotIn
values:
- k8s-w0
interfaces:
- ^eth[1-9]+
externalIPs: true
loadBalancerIPs: true
EOF
# ๊ฒฐ๊ณผ
ciliuml2announcementpolicy.cilium.io/policy2 created
- Service์
app=netshoot-web
๋ผ๋ฒจ์ ๊ธฐ๋ฐ์ผ๋ก ๋์ k8s-w0
๋ ธ๋๋ฅผ ์ ์ธํ ๋ ธ๋ ์ค External IP๋ฅผ ๊ด๊ณ ํ ๋ฆฌ๋ ๋ ธ๋ ์ ์
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep "cilium-l2announce"
cilium-l2announce-default-netshoot-web k8s-w1 34s
cilium-l2announce-default-webpod k8s-ctr 82m
- `netshoot-web` → leader node: `k8s-w1`
- `webpod` → leader node: `k8s-ctr`
4. ์๋น์ค ์ ๊ทผ ํ ์คํธ
1
2
3
4
5
6
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc netshoot-web -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LB2IP=$(kubectl get svc netshoot-web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $LB2IP
# ๊ฒฐ๊ณผ
192.168.10.212OK from netshoot-web-5c59d94bd4-rh584
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s $LB2IP
OK from netshoot-web-5c59d94bd4-ndg9c
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s $LB2IP
OK from netshoot-web-5c59d94bd4-rh584
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s $LB2IP
OK from netshoot-web-5c59d94bd4-ghndv
- `curl` requests to the LB IP (`192.168.10.212`) hit a different pod each time → load balancing works as expected
5. ์ธ๋ถ ๋ผ์ฐํฐ์์ ์ ๊ทผ ํ์ธ
(1) ์ธ๋ถ ๋ผ์ฐํฐ์์ External IP ARP ํ์ธ
1
2
3
4
5
6
7
8
9
root@router:~# LB2IP=192.168.10.212
root@router:~# arping -i eth1 $LB2IP -c 2
ARPING 192.168.10.212
60 bytes from 08:00:27:fb:78:e5 (192.168.10.212): index=0 time=799.846 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.212): index=1 time=342.301 usec
--- 192.168.10.212 statistics ---
2 packets transmitted, 2 packets received, 0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.342/0.571/0.800/0.229 ms
- ์ธ๋ถ ๋ผ์ฐํฐ์์
arping
์คํ ์ External IP(192.168.10.212
)๊ฐ ๋ฆฌ๋ ๋ ธ๋(192.168.10.101
, k8s-w1)์ MAC ์ฃผ์๋ก ์๋ต
(2) ์ธ๋ถ ๋ผ์ฐํฐ์์ ์๋น์ค ์ ๊ทผ ํ ์คํธ
1
2
3
4
5
root@router:~# curl -s $LB2IP
OK from netshoot-web-5c59d94bd4-rh584
root@router:~# curl -s $LB2IP
OK from netshoot-web-5c59d94bd4-ghndv
- `curl` requests to the external IP return normal responses
- Different `netshoot-web` pod names in the responses confirm load balancing also works from outside
(3) Checking the ARP Table Mapping
root@router:~# arp -a
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
? (192.168.10.101) at 08:00:27:fb:78:e5 [ether] on eth1
? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on eth0
? (192.168.10.100) at 08:00:27:d5:e8:d7 [ether] on eth1
_gateway (10.0.2.2) at 52:55:0a:00:02:02 [ether] on eth0
? (192.168.10.212) at 08:00:27:fb:78:e5 [ether] on eth1
? (192.168.20.100) at 08:00:27:eb:bf:4f [ether] on eth2
? (192.168.10.211) at 08:00:27:d5:e8:d7 [ether] on eth1
- `arp -a` shows the external IP (`192.168.10.212`) with the same MAC address as the leader node IP (`192.168.10.101`)
- The leader node can differ per service, and that leader handles the ARP replies and external-IP announcements for it
🔁 Requesting IPs : Pinning a Specific External IP on a Service
1. Checking the Current External IP State
root@router:~# arping -i eth1 $LB2IP -c 1000000
- ์๋น์ค External IP(
192.168.10.212
)์ ๋ํด ์ธ๋ถ ๋ผ์ฐํฐ์์arping
์คํ - ์ ์์ ์ผ๋ก ARP ์๋ต์ด ์์ ๋จ
2. ํน์ ์๋น์ค External IP ๊ณ ์ ์ค์
- Add the `lbipam.cilium.io/ips: "192.168.10.215"` annotation to the `Service` resource (one way to do this is sketched below)
- The external IP changes from `192.168.10.212` to the requested `192.168.10.215`
3. ๋ณ๊ฒฝ๋ External IP ARP ์๋ต ํ์ธ
1
root@router:~# arping -i eth1 192.168.10.215 -c 1000000
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
ARPING 192.168.10.215
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=0 time=628.696 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=1 time=506.722 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=2 time=238.185 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=3 time=349.109 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=4 time=410.704 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=5 time=433.840 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=6 time=300.155 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=7 time=232.381 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=8 time=279.912 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=9 time=661.042 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=10 time=413.527 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=11 time=644.416 usec
60 bytes from 08:00:27:fb:78:e5 (192.168.10.215): index=12 time=1.569 msec
...
- ์ธ๋ถ ๋ผ์ฐํฐ์์
arping
์คํ ์, ์ External IP(192.168.10.215
)๊ฐ ๋ฆฌ๋ ๋ ธ๋(k8s-w1)์ MAC ์ฃผ์๋ก ์๋ต
4. ๋ณ๊ฒฝ๋ External IP ์๋น์ค ์ ๊ทผ ํ ์คํธ
1
2
3
4
5
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc netshoot-web -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LB2IP=$(kubectl get svc netshoot-web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $LB2IP
192.168.10.215OK from netshoot-web-5c59d94bd4-ndg9c
- ์ธ๋ถ ๋ผ์ฐํฐ ๋ฐ ํด๋ฌ์คํฐ ๋ด์์
curl
์์ฒญ ์ ์ ์ ์๋ต
🔑 Sharing Keys : Using One External IP Across Different Ports
- Cilium's sharing-key feature lets multiple services share a single external IP on different ports
- By default each LoadBalancer service gets its own external IP 1:1, but a sharing key allows the IP to be reused
1. Creating an Additional LoadBalancer Service
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: netshoot-web2
labels:
app: netshoot-web
spec:
type: LoadBalancer
selector:
app: netshoot-web
ports:
- name: http
port: 8080
targetPort: 8080
EOF
service/netshoot-web2 created
- Creates the `netshoot-web2` service, using port 8080
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -l app=netshoot-web
โ ย ์ถ๋ ฅ
1
2
3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
netshoot-web LoadBalancer 10.96.68.82 192.168.10.215 80:30376/TCP 28m
netshoot-web2 LoadBalancer 10.96.14.206 192.168.10.212 8080:30961/TCP 26s
- ์์ฑ ์งํ ๊ธฐ๋ณธ ๋์์์๋ ์๋ก์ด External IP(
192.168.10.212
)๊ฐ ๋ฐ๊ธ๋จ
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 39h
netshoot-web LoadBalancer 10.96.68.82 192.168.10.215 80:30376/TCP 28m
netshoot-web2 LoadBalancer 10.96.14.206 192.168.10.212 8080:30961/TCP 58s
webpod LoadBalancer 10.96.163.90 192.168.10.211 80:30276/TCP 38h
2. Applying an External IP Sharing Key
Add the following entries to the service's `metadata.annotations`:
"lbipam.cilium.io/ips": "192.168.10.215"
"lbipam.cilium.io/sharing-key": "1234"
- ๋ ์๋น์ค๊ฐ ๋์ผํ External IP๋ฅผ ์ฌ์ฉํ๊ฒ ๋๋ฉด์ 80ํฌํธ์ 8080ํฌํธ๋ก ๊ตฌ๋ถ๋์ด ์ ๊ทผ ๊ฐ๋ฅ
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -l app=netshoot-web
โ ย ์ถ๋ ฅ
1
2
3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
netshoot-web LoadBalancer 10.96.68.82 192.168.10.215 80:30376/TCP 34m
netshoot-web2 LoadBalancer 10.96.14.206 192.168.10.215 8080:30961/TCP 6m41s
3. ๋์ผ External IP ์ฌ์ฉ ์ ๋ฆฌ๋ ๋ ธ๋ ํ์ธ
1
2
3
4
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep "cilium-l2announce"
cilium-l2announce-default-netshoot-web k8s-w1 33m
cilium-l2announce-default-netshoot-web2 k8s-w1 7m32s
cilium-l2announce-default-webpod k8s-ctr 115m
- ๋์ผ IP ์ฌ์ฉ ์ ARP ์๋ต ๋ฐ External IP ๊ด๊ณ ๋ฅผ ๋ด๋นํ๋ ๋ฆฌ๋ ๋
ธ๋๊ฐ ๋์ผํด์ง (
k8s-w1
) - ํน์ ๋ ธ๋๋ก ํธ๋ํฝ ์ง์ค ๊ฐ๋ฅ์ฑ์ด ์์ โ ๋ถํ ๋ถ์ฐ ์ค๊ณ ํ์
4. ์๋น์ค ์ ๊ทผ ํ ์คํธ (80/8080 ํฌํธ)
1
2
3
4
5
root@router:~# curl -s 192.168.10.215
OK from netshoot-web-5c59d94bd4-ghndv
root@router:~# curl -s 192.168.10.215:8080
OK from netshoot-web-5c59d94bd4-ndg9c
- ์ธ๋ถ ๋ผ์ฐํฐ์์
curl
์์ฒญ ์ ์ ์ ์๋ต ํ์ธ