Lab Environment Setup (Arch Linux)
1. VirtualBox Installation and Configuration
(1) Install the required packages
| sudo pacman -S virtualbox virtualbox-host-modules-arch linux-headers
|
(2) Check the VirtualBox version
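The version-check command itself is not captured in these notes; presumably it was something like:
| VBoxManage --version
|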
(3) Load the kernel modules
| sudo modprobe vboxdrv
sudo modprobe vboxnetflt
sudo modprobe vboxnetadp
|
(4) Load the module automatically at boot
| sudo bash -c 'echo vboxdrv > /etc/modules-load.d/virtualbox.conf'
|
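Only vboxdrv is listed above; the two network modules loaded manually in step (3) can be auto-loaded the same way. A minimal sketch:
| # list all three VirtualBox host modules so they load at every boot
sudo tee /etc/modules-load.d/virtualbox.conf <<'EOF'
vboxdrv
vboxnetadp
vboxnetflt
EOF
|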
2. Vagrant Installation and Initialization
(1) Install Vagrant
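The installation command is not captured here; on Arch Linux it would presumably be:
| sudo pacman -S vagrant
|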
(2) Check the Vagrant version
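Again, the command itself is not shown; presumably:
| vagrant --version
|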
(3) Create a working directory and download the Vagrantfile
| mkdir -p cilium-lab/1w && cd cilium-lab/1w
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/1w/Vagrantfile
vagrant up
|
3. Edit the Network Configuration File
vagrant up fails with the error: The IP address configured for the host-only network is not within the allowed ranges.
From VirtualBox 6.1 onward, Host-Only networks may only use IP ranges that are explicitly allowed in /etc/vbox/networks.conf (192.168.56.0/21 by default).
IP specified in the Vagrantfile:
| 192.168.10.100   ← (outside the allowed range)
|
Add the IP range to VirtualBox's allowed list:
| sudo vim /etc/vbox/networks.conf
|
Add a line allowing the lab's host-only range (see below):
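The exact line is not preserved in these notes; given the 192.168.10.x addresses used in the Vagrantfile, it would presumably be:
| * 192.168.10.0/24
|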
Destroy the existing VMs and run vagrant up again
| vagrant destroy -f
vagrant up
|
Output
| Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'k8s-w2' up with 'virtualbox' provider...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
k8s-ctr: Adapter 1: nat
k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
k8s-ctr: SSH address: 127.0.0.1:60000
k8s-ctr: SSH username: vagrant
k8s-ctr: SSH auth method: private key
k8s-ctr:
k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
k8s-ctr: this with a newly generated keypair for better security.
k8s-ctr:
k8s-ctr: Inserting generated public key within guest...
k8s-ctr: Removing insecure key from the guest if it's present...
k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250714-112889-1m78z0.sh
k8s-ctr: >>>> Initial Config Start <<<<
k8s-ctr: [TASK 1] Setting Profile & Change Timezone
k8s-ctr: [TASK 2] Disable AppArmor
k8s-ctr: [TASK 3] Disable and turn off SWAP
k8s-ctr: [TASK 4] Install Packages
k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-ctr: [TASK 6] Install Packages & Helm
k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250714-112889-mpo3zf.sh
k8s-ctr: >>>> K8S Controlplane config Start <<<<
k8s-ctr: [TASK 1] Initial Kubernetes
k8s-ctr: [TASK 2] Setting kube config file
k8s-ctr: [TASK 3] Source the completion
k8s-ctr: [TASK 4] Alias kubectl to k
k8s-ctr: [TASK 5] Install Kubectx & Kubens
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
k8s-w1: Adapter 1: nat
k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
k8s-w1: SSH address: 127.0.0.1:60001
k8s-w1: SSH username: vagrant
k8s-w1: SSH auth method: private key
k8s-w1:
k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w1: this with a newly generated keypair for better security.
k8s-w1:
k8s-w1: Inserting generated public key within guest...
k8s-w1: Removing insecure key from the guest if it's present...
k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250714-112889-tnj10t.sh
k8s-w1: >>>> Initial Config Start <<<<
k8s-w1: [TASK 1] Setting Profile & Change Timezone
k8s-w1: [TASK 2] Disable AppArmor
k8s-w1: [TASK 3] Disable and turn off SWAP
k8s-w1: [TASK 4] Install Packages
k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w1: [TASK 6] Install Packages & Helm
k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250714-112889-gsjiv7.sh
k8s-w1: >>>> K8S Node config Start <<<<
k8s-w1: [TASK 1] K8S Controlplane Join
k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w2: Cloning VM...
==> k8s-w2: Matching MAC address for NAT networking...
==> k8s-w2: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w2: Setting the name of the VM: k8s-w2
==> k8s-w2: Clearing any previously set network interfaces...
==> k8s-w2: Preparing network interfaces based on configuration...
k8s-w2: Adapter 1: nat
k8s-w2: Adapter 2: hostonly
==> k8s-w2: Forwarding ports...
k8s-w2: 22 (guest) => 60002 (host) (adapter 1)
==> k8s-w2: Running 'pre-boot' VM customizations...
==> k8s-w2: Booting VM...
==> k8s-w2: Waiting for machine to boot. This may take a few minutes...
k8s-w2: SSH address: 127.0.0.1:60002
k8s-w2: SSH username: vagrant
k8s-w2: SSH auth method: private key
k8s-w2:
k8s-w2: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w2: this with a newly generated keypair for better security.
k8s-w2:
k8s-w2: Inserting generated public key within guest...
k8s-w2: Removing insecure key from the guest if it's present...
k8s-w2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w2: Machine booted and ready!
==> k8s-w2: Checking for guest additions in VM...
==> k8s-w2: Setting hostname...
==> k8s-w2: Configuring and enabling network interfaces...
==> k8s-w2: Running provisioner: shell...
k8s-w2: Running: /tmp/vagrant-shell20250714-112889-cd8mcd.sh
k8s-w2: >>>> Initial Config Start <<<<
k8s-w2: [TASK 1] Setting Profile & Change Timezone
k8s-w2: [TASK 2] Disable AppArmor
k8s-w2: [TASK 3] Disable and turn off SWAP
k8s-w2: [TASK 4] Install Packages
k8s-w2: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w2: [TASK 6] Install Packages & Helm
k8s-w2: >>>> Initial Config End <<<<
==> k8s-w2: Running provisioner: shell...
k8s-w2: Running: /tmp/vagrant-shell20250714-112889-ncfxlg.sh
k8s-w2: >>>> K8S Node config Start <<<<
k8s-w2: [TASK 1] K8S Controlplane Join
k8s-w2: >>>> K8S Node config End <<<<
|
4. Check the VM Status
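The command is not captured; the states below presumably come from running vagrant status in the same directory:
| vagrant status
|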
Output
| Current machine states:
k8s-ctr running (virtualbox)
k8s-w1 running (virtualbox)
k8s-w2 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
|
5. SSH Access and Node Network Check After Deployment
Check the eth0 interface IP on each node:
| for i in ctr w1 w2 ; do echo ">> node : k8s-$i <<"; vagrant ssh k8s-$i -c 'ip -c -4 addr show dev eth0'; echo; done #
|
Output
| >> node : k8s-ctr <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85640sec preferred_lft 85640sec
>> node : k8s-w1 <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85784sec preferred_lft 85784sec
>> node : k8s-w2 <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85941sec preferred_lft 85941sec
|
6. Connect to k8s-ctr and Check Basic Information
(1) Connect to k8s-ctr
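The login banner below follows from connecting to the control plane VM, presumably with:
| vagrant ssh k8s-ctr
|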
Output
| Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Mon Jul 14 10:50:38 PM KST 2025
System load: 0.27
Usage of /: 19.4% of 30.34GB
Memory usage: 31%
Swap usage: 0%
Processes: 161
Users logged in: 0
IPv4 address for eth0: 10.0.2.15
IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
(⎈|HomeLab:N/A) root@k8s-ctr:~#
|
(2) Basic commands inside k8s-ctr
| (⎈|HomeLab:N/A) root@k8s-ctr:~# whoami
root
(⎈|HomeLab:N/A) root@k8s-ctr:~# pwd
/root
(⎈|HomeLab:N/A) root@k8s-ctr:~# hostnamectl
Static hostname: k8s-ctr
Icon name: computer-vm
Chassis: vm
Machine ID: 4f9fb3fa939a46d788144548529797c4
Boot ID: b47345bdfb114c0f99ef542366fb0ebc
Virtualization: oracle
Operating System: Ubuntu 24.04.2 LTS
Kernel: Linux 6.8.0-53-generic
Architecture: x86-64
Hardware Vendor: innotek GmbH
Hardware Model: VirtualBox
Firmware Version: VirtualBox
Firmware Date: Fri 2006-12-01
Firmware Age: 18y 7month 1w 6d
|
(3) System resource monitoring
| (⎈|HomeLab:N/A) root@k8s-ctr:~# htop
|
(4) Check the /etc/hosts file
| (⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts
|
Output
| 127.0.0.1 localhost
127.0.1.1 vagrant
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.2.1 k8s-ctr k8s-ctr
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2
|
(5) Check inter-node connectivity
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w1
|
Output
| PING k8s-w1 (192.168.10.101) 56(84) bytes of data.
64 bytes from k8s-w1 (192.168.10.101): icmp_seq=1 ttl=64 time=0.421 ms
--- k8s-w1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w2
|
Output
|
PING k8s-w2 (192.168.10.102) 56(84) bytes of data.
64 bytes from k8s-w2 (192.168.10.102): icmp_seq=1 ttl=64 time=0.433 ms
--- k8s-w2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms
|
- Both k8s-w1 and k8s-w2 respond normally with 0% packet loss
(6) Run remote commands on the worker nodes over SSH from the control plane
| (⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
|
Output
| Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w2 hostname
|
Output
| Warning: Permanently added 'k8s-w2' (ED25519) to the list of known hosts.
k8s-w2
|
- Both k8s-w1 and k8s-w2 return their hostnames correctly
(7) Why SSH access works
- Check the sshd process inside the NAT-ed VirtualBox guest
- The SSH session comes in from 10.0.2.2 (the VirtualBox NAT gateway) to 10.0.2.15; the matching port-forward can also be seen from the host, as sketched below
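From the Arch host, the NAT port-forwarding rule Vagrant created (guest 22 → host 60000) can be listed; a sketch, assuming the VM is registered in VirtualBox under the name k8s-ctr as shown in the vagrant up log:
| VBoxManage showvminfo k8s-ctr | grep -i 'rule'   # NAT port-forward rules for the VM
vagrant port k8s-ctr                              # run from the cilium-lab/1w directory
|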
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnp |grep sshd
|
Output
| ESTAB 0 0 [::ffff:10.0.2.15]:22 [::ffff:10.0.2.2]:48922 users:(("sshd",pid=4947,fd=4),("sshd",pid=4902,fd=4))
|
(8) Check the network interfaces and routing
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c addr
|
Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 84849sec preferred_lft 84849sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86357sec preferred_lft 14357sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe80:23b9/64 scope link
valid_lft forever preferred_lft forever
|
- eth0: 10.0.2.15/24 (NAT network) / eth1: 192.168.10.100/24 (Host-Only network)
(9) Check the routing table
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
|
Output
| default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
|
(10) Check the DNS configuration
| (⎈|HomeLab:N/A) root@k8s-ctr:~# resolvectl
|
Output
| Global
Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (eth0)
Current Scopes: DNS
Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.3
DNS Servers: 10.0.2.3
DNS Domain: Davolink
Link 3 (eth1)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
|
7. Check Kubernetes Information on k8s-ctr
(1) Check cluster information
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info
|
Output
| Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
- Kubernetes control plane: https://192.168.10.100:6443
- CoreDNS: https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
(2) Check node status and INTERNAL-IP
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
|
Output
| NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr NotReady control-plane 30m v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w1 NotReady <none> 28m v1.33.2 10.0.2.15 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w2 NotReady <none> 26m v1.33.2 10.0.2.15 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
|
⚠ Problem
- The worker nodes' INTERNAL-IP is set to the NAT range (10.0.2.15)
- For the CNI to work as intended, INTERNAL-IP must sit on the network actually used for Kubernetes API traffic (192.168.10.x); a quick way to list the current values is sketched below
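A minimal sketch for printing each node's InternalIP with jsonpath:
| kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
|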
(3) Check Pod status and IPs
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
|
Output
| NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-674b8bbfcf-7gx6f 0/1 Pending 0 34m <none> <none> <none> <none>
kube-system coredns-674b8bbfcf-mjnst 0/1 Pending 0 34m <none> <none> <none> <none>
kube-system etcd-k8s-ctr 1/1 Running 0 35m 10.0.2.15 k8s-ctr <none> <none>
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 35m 10.0.2.15 k8s-ctr <none> <none>
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 35m 10.0.2.15 k8s-ctr <none> <none>
kube-system kube-proxy-b6zgw 1/1 Running 0 32m 10.0.2.15 k8s-w1 <none> <none>
kube-system kube-proxy-grfn2 1/1 Running 0 34m 10.0.2.15 k8s-ctr <none> <none>
kube-system kube-proxy-p678s 1/1 Running 0 30m 10.0.2.15 k8s-w2 <none> <none>
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 35m 10.0.2.15 k8s-ctr <none> <none>
|
- The CoreDNS pods are Pending with no IP (caused by the missing CNI)
- etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy all report 10.0.2.15
- These must also move to the host network range 192.168.10.x
(4) Inspect the CoreDNS pods in detail
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k describe pod -n kube-system -l k8s-app=kube-dns
|
Output
| Name: coredns-674b8bbfcf-7gx6f
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: coredns
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=674b8bbfcf
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-674b8bbfcf
Containers:
coredns:
Image: registry.k8s.io/coredns/coredns:v1.12.0
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9fss (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
kube-api-access-f9fss:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 37m default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Warning FailedScheduling 118s (x7 over 32m) default-scheduler 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Name: coredns-674b8bbfcf-mjnst
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Service Account: coredns
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=674b8bbfcf
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-674b8bbfcf
Containers:
coredns:
Image: registry.k8s.io/coredns/coredns:v1.12.0
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-55887 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
kube-api-access-55887:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 37m default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Warning FailedScheduling 2m28s (x7 over 32m) default-scheduler 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
|
- Both CoreDNS pods are Pending
- Scheduler events: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }
- In other words, every node is NotReady, so the pods cannot be scheduled; the taints can be checked directly, as sketched below
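A quick way to confirm the not-ready taint on every node (a sketch):
| kubectl describe nodes | grep -E 'Name:|Taints:'
|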
8. Change the INTERNAL-IP of k8s-ctr
(1) Why the change is needed
- During the initial kubeadm setup, the control plane API server address was taken from the node's INTERNAL-IP
- By default that resolved to the eth0 (NAT) IP in 10.0.2.x, which does not match the CNI and Pod network
- The Host-Only network (eth1) range 192.168.10.x must be pinned as the INTERNAL-IP
(2) Check the current kubelet configuration
| (⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
|
Output
| KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
|
(3) Read the eth1 IP into a variable
| (⎈|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
(⎈|HomeLab:N/A) root@k8s-ctr:~# echo $NODEIP
|
(4) Add --node-ip to the kubelet configuration file
| (⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet
|
(5) Verify after applying
| (⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
|
Output
| KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.100 --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
|
The --node-ip=192.168.10.100 option has been added.
9. Change the INTERNAL-IP of the Worker Nodes (k8s-w1/w2)
(1) Change k8s-w1 (log in via vagrant ssh k8s-w1)
| Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Mon Jul 14 11:28:56 PM KST 2025
System load: 0.0
Usage of /: 17.0% of 30.34GB
Memory usage: 22%
Swap usage: 0%
Processes: 147
Users logged in: 0
IPv4 address for eth0: 10.0.2.15
IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
Last login: Mon Jul 14 22:52:44 2025 from 10.0.2.2
root@k8s-w1:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet
root@k8s-w1:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.101 --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
|
(2) Change k8s-w2 (log in via vagrant ssh k8s-w2)
| Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Mon Jul 14 11:29:17 PM KST 2025
System load: 0.0
Usage of /: 17.0% of 30.34GB
Memory usage: 21%
Swap usage: 0%
Processes: 147
Users logged in: 0
IPv4 address for eth0: 10.0.2.15
IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
Last login: Mon Jul 14 22:52:45 2025 from 10.0.2.2
root@k8s-w2:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet
root@k8s-w2:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.102 --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
|
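As an alternative to logging in to each VM, the same edit can be pushed to both workers from k8s-ctr in one pass; a sketch, assuming the vagrant user's password and passwordless sudo as used elsewhere in this lab (do not run it on nodes already patched, or --node-ip will be added twice):
| # apply --node-ip (eth1 address) on each worker, then restart kubelet
for n in k8s-w1 k8s-w2; do
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$n 'sudo bash -s' <<'EOF'
NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet
EOF
done
|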
(3) Verify after the change
Confirm that the new node IPs are applied:
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
|
Output
| NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr NotReady control-plane 50m v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w1 NotReady <none> 47m v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w2 NotReady <none> 46m v1.33.2 192.168.10.102 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
|
(4) Confirm the static Pod IPs changed
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
|
Output
| NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-674b8bbfcf-7gx6f 0/1 Pending 0 52m <none> <none> <none> <none>
kube-system coredns-674b8bbfcf-mjnst 0/1 Pending 0 52m <none> <none> <none> <none>
kube-system etcd-k8s-ctr 1/1 Running 0 52m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 52m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 52m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-proxy-b6zgw 1/1 Running 0 49m 192.168.10.101 k8s-w1 <none> <none>
kube-system kube-proxy-grfn2 1/1 Running 0 52m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-proxy-p678s 1/1 Running 0 48m 192.168.10.102 k8s-w2 <none> <none>
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 52m 192.168.10.100 k8s-ctr <none> <none>
|
Flannel CNI Installation and Verification
1. Check the Cluster State Before Installation
(1) Check the Pod CIDR and Service CIDR specified at kubeadm init
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"
|
Output
| "--service-cluster-ip-range=10.96.0.0/16",
"--cluster-cidr=10.244.0.0/16",
|
(2) Check the CoreDNS Pod status
Before a CNI is installed, no Pod network exists, so the pods cannot get an IP.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
|
Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-674b8bbfcf-7gx6f 0/1 Pending 0 59m <none> <none> <none> <none>
coredns-674b8bbfcf-mjnst 0/1 Pending 0 59m <none> <none> <none> <none>
|
(3) Check the base network state
Inspect the eth0/eth1 interfaces, routes, and bridges:
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
ip -c route
brctl show
|
Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
|
- eth0 is on the NAT network (10.0.2.x); eth1 is on the Host-Only network (192.168.10.x)
(4) Check the iptables state
| (⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
|
Output
| # Generated by iptables-save v1.8.10 (nf_tables) on Mon Jul 14 23:44:12 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Mon Jul 14 23:44:12 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Mon Jul 14 23:44:12 2025
*filter
:INPUT ACCEPT [651613:132029811]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [644732:121246327]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Mon Jul 14 23:44:12 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Mon Jul 14 23:44:12 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-ETI7FUQQE3BS2IXE - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Mon Jul 14 23:44:12 2025
|
Because there is no Pod network yet, the kube-dns (CoreDNS) service has no endpoints:
| -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
|
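This matches what the API reports; a quick cross-check (a sketch):
| kubectl get endpoints -n kube-system kube-dns
|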
| (⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
|
Output
| -P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SERVICES
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
|
(5) Check the CNI configuration directory (currently empty)
| (⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
|
Output
| /etc/cni/net.d/
0 directories, 0 files
|
2. Install the Flannel CNI
(1) Prepare the Flannel namespace and Helm repository
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
helm repo add flannel https://flannel-io.github.io/flannel/
helm repo list
helm search repo flannel
helm show values flannel/flannel
|
Output
| namespace/kube-flannel created
namespace/kube-flannel labeled
"flannel" has been added to your repositories
NAME URL
flannel https://flannel-io.github.io/flannel/
NAME CHART VERSION APP VERSION DESCRIPTION
flannel/flannel v0.27.1 v0.27.1 Install Flannel Network Plugin.
---
global:
imagePullSecrets:
# - name: "a-secret-name"
# The IPv4 cidr pool to create on startup if none exists. Pod IPs will be
# chosen from this range.
podCidr: "10.244.0.0/16"
podCidrv6: ""
flannel:
# kube-flannel image
image:
repository: ghcr.io/flannel-io/flannel
tag: v0.27.1
image_cni:
repository: ghcr.io/flannel-io/flannel-cni-plugin
tag: v1.7.1-flannel1
# cniBinDir is the directory to which the flannel CNI binary is installed.
cniBinDir: "/opt/cni/bin"
# cniConfDir is the directory where the CNI configuration is located.
cniConfDir: "/etc/cni/net.d"
# skipCNIConfigInstallation skips the installation of the flannel CNI config. This is useful when the CNI config is
# provided externally.
skipCNIConfigInstallation: false
# flannel command arguments
enableNFTables: false
args:
- "--ip-masq"
- "--kube-subnet-mgr"
# Backend for kube-flannel. Backend should not be changed
# at runtime. (vxlan, host-gw, wireguard, udp)
# Documentation at https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md
backend: "vxlan"
# Port used by the backend 0 means default value (VXLAN: 8472, Wireguard: 51821, UDP: 8285)
#backendPort: 0
# MTU to use for outgoing packets (VXLAN and Wiregurad) if not defined the MTU of the external interface is used.
# mtu: 1500
#
# VXLAN Configs:
#
# VXLAN Identifier to be used. On Linux default is 1.
#vni: 1
# Enable VXLAN Group Based Policy (Default false)
# GBP: false
# Enable direct routes (default is false)
# directRouting: false
# MAC prefix to be used on Windows. (Defaults is 0E-2A)
# macPrefix: "0E-2A"
#
# Wireguard Configs:
#
# UDP listen port used with IPv6
# backendPortv6: 51821
# Pre shared key to use
# psk: 0
# IP version to use on Wireguard
# tunnelMode: "separate"
# Persistent keep interval to use
# keepaliveInterval: 0
#
cniConf: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
#
# General daemonset configs
#
resources:
requests:
cpu: 100m
memory: 50Mi
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
nodeSelector: {}
netpol:
enabled: false
args:
- "--hostname-override=$(MY_NODE_NAME)"
- "--v=2"
image:
repository: registry.k8s.io/networking/kube-network-policies
tag: v0.7.0
|
(2) Pin the network interface
Create a values file that forces flannel to use eth1 (the host-only network):
| (⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > flannel-values.yaml
podCidr: "10.244.0.0/16"
flannel:
args:
- "--ip-masq"
- "--kube-subnet-mgr"
- "--iface=eth1"
EOF
|
(3) Install Flannel with Helm
| (⎈|HomeLab:N/A) root@k8s-ctr:~# helm install flannel --namespace kube-flannel flannel/flannel -f flannel-values.yaml
|
Output
| NAME: flannel
LAST DEPLOYED: Mon Jul 14 23:49:17 2025
NAMESPACE: kube-flannel
STATUS: deployed
REVISION: 1
TEST SUITE: None
|
(4) Verify the Flannel pods
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-flannel -l app=flannel
|
Output
| Name: kube-flannel-ds-9qdxf
Namespace: kube-flannel
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: flannel
Node: k8s-ctr/192.168.10.100
Start Time: Mon, 14 Jul 2025 23:49:17 +0900
Labels: app=flannel
controller-revision-hash=66c5c78475
pod-template-generation=1
tier=node
Annotations: <none>
Status: Running
IP: 192.168.10.100
IPs:
IP: 192.168.10.100
Controlled By: DaemonSet/kube-flannel-ds
Init Containers:
install-cni-plugin:
Container ID: containerd://7cbb5ee284a7eb7bb13995fd1c656f2d0776973ae1e7cdd3f616fd528270fdcd
Image: ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
Image ID: ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/flannel
/opt/cni/bin/flannel
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 14 Jul 2025 23:49:22 +0900
Finished: Mon, 14 Jul 2025 23:49:22 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt/cni/bin from cni-plugin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46mlr (ro)
install-cni:
Container ID: containerd://cfc3ee53844f9139d8ca75f024746fec9163e629f431b97966d10923612a60eb
Image: ghcr.io/flannel-io/flannel:v0.27.1
Image ID: ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/etc/kube-flannel/cni-conf.json
/etc/cni/net.d/10-flannel.conflist
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 14 Jul 2025 23:49:30 +0900
Finished: Mon, 14 Jul 2025 23:49:30 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/cni/net.d from cni (rw)
/etc/kube-flannel/ from flannel-cfg (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46mlr (ro)
Containers:
kube-flannel:
Container ID: containerd://1fda477f80ac40a4e82bd18368b78b7e391bce1436d69a746ecef65773525920
Image: ghcr.io/flannel-io/flannel:v0.27.1
Image ID: ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
Port: <none>
Host Port: <none>
Command:
/opt/bin/flanneld
--ip-masq
--kube-subnet-mgr
--iface=eth1
State: Running
Started: Mon, 14 Jul 2025 23:49:31 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_NAME: kube-flannel-ds-9qdxf (v1:metadata.name)
POD_NAMESPACE: kube-flannel (v1:metadata.namespace)
EVENT_QUEUE_DEPTH: 5000
CONT_WHEN_CACHE_NOT_READY: false
Mounts:
/etc/kube-flannel/ from flannel-cfg (rw)
/run/flannel from run (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46mlr (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
run:
Type: HostPath (bare host directory volume)
Path: /run/flannel
HostPathType:
cni-plugin:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
flannel-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-flannel-cfg
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
kube-api-access-46mlr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned kube-flannel/kube-flannel-ds-9qdxf to k8s-ctr
Normal Pulling 44s kubelet Pulling image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1"
Normal Pulled 39s kubelet Successfully pulled image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1" in 4.137s (4.137s including waiting). Image size: 4878976 bytes.
Normal Created 39s kubelet Created container: install-cni-plugin
Normal Started 39s kubelet Started container install-cni-plugin
Normal Pulling 38s kubelet Pulling image "ghcr.io/flannel-io/flannel:v0.27.1"
Normal Pulled 31s kubelet Successfully pulled image "ghcr.io/flannel-io/flannel:v0.27.1" in 7.208s (7.208s including waiting). Image size: 32389164 bytes.
Normal Created 31s kubelet Created container: install-cni
Normal Started 31s kubelet Started container install-cni
Normal Pulled 30s kubelet Container image "ghcr.io/flannel-io/flannel:v0.27.1" already present on machine
Normal Created 30s kubelet Created container: kube-flannel
Normal Started 30s kubelet Started container kube-flannel
Name: kube-flannel-ds-c4rxb
Namespace: kube-flannel
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: flannel
Node: k8s-w1/192.168.10.101
Start Time: Mon, 14 Jul 2025 23:49:17 +0900
Labels: app=flannel
controller-revision-hash=66c5c78475
pod-template-generation=1
tier=node
Annotations: <none>
Status: Running
IP: 192.168.10.101
IPs:
IP: 192.168.10.101
Controlled By: DaemonSet/kube-flannel-ds
Init Containers:
install-cni-plugin:
Container ID: containerd://cf6e39697e580d20418e0d1c4efa454479d173ed666612bd752e3f596a44a9bc
Image: ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
Image ID: ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/flannel
/opt/cni/bin/flannel
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 14 Jul 2025 23:49:22 +0900
Finished: Mon, 14 Jul 2025 23:49:22 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt/cni/bin from cni-plugin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nppgp (ro)
install-cni:
Container ID: containerd://3a06f8216980b01adfbcd870361c1ae41371a7be206e5456ad98c0c01e8f90f3
Image: ghcr.io/flannel-io/flannel:v0.27.1
Image ID: ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/etc/kube-flannel/cni-conf.json
/etc/cni/net.d/10-flannel.conflist
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 14 Jul 2025 23:49:30 +0900
Finished: Mon, 14 Jul 2025 23:49:30 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/cni/net.d from cni (rw)
/etc/kube-flannel/ from flannel-cfg (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nppgp (ro)
Containers:
kube-flannel:
Container ID: containerd://f49e28157b71d776ec0526e2b87423fda60439a3e5f9dab351a6e465de53ebdb
Image: ghcr.io/flannel-io/flannel:v0.27.1
Image ID: ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
Port: <none>
Host Port: <none>
Command:
/opt/bin/flanneld
--ip-masq
--kube-subnet-mgr
--iface=eth1
State: Running
Started: Mon, 14 Jul 2025 23:49:31 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_NAME: kube-flannel-ds-c4rxb (v1:metadata.name)
POD_NAMESPACE: kube-flannel (v1:metadata.namespace)
EVENT_QUEUE_DEPTH: 5000
CONT_WHEN_CACHE_NOT_READY: false
Mounts:
/etc/kube-flannel/ from flannel-cfg (rw)
/run/flannel from run (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nppgp (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
run:
Type: HostPath (bare host directory volume)
Path: /run/flannel
HostPathType:
cni-plugin:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
flannel-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-flannel-cfg
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
kube-api-access-nppgp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned kube-flannel/kube-flannel-ds-c4rxb to k8s-w1
Normal Pulling 43s kubelet Pulling image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1"
Normal Pulled 39s kubelet Successfully pulled image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1" in 4.101s (4.101s including waiting). Image size: 4878976 bytes.
Normal Created 39s kubelet Created container: install-cni-plugin
Normal Started 39s kubelet Started container install-cni-plugin
Normal Pulling 38s kubelet Pulling image "ghcr.io/flannel-io/flannel:v0.27.1"
Normal Pulled 31s kubelet Successfully pulled image "ghcr.io/flannel-io/flannel:v0.27.1" in 6.891s (6.891s including waiting). Image size: 32389164 bytes.
Normal Created 31s kubelet Created container: install-cni
Normal Started 31s kubelet Started container install-cni
Normal Pulled 30s kubelet Container image "ghcr.io/flannel-io/flannel:v0.27.1" already present on machine
Normal Created 30s kubelet Created container: kube-flannel
Normal Started 30s kubelet Started container kube-flannel
Name: kube-flannel-ds-q4chw
Namespace: kube-flannel
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: flannel
Node: k8s-w2/192.168.10.102
Start Time: Mon, 14 Jul 2025 23:49:17 +0900
Labels: app=flannel
controller-revision-hash=66c5c78475
pod-template-generation=1
tier=node
Annotations: <none>
Status: Running
IP: 192.168.10.102
IPs:
IP: 192.168.10.102
Controlled By: DaemonSet/kube-flannel-ds
Init Containers:
install-cni-plugin:
Container ID: containerd://8b174a979471fdd203c4572b14db14c8931fdde14b2935e707790c8b913882ce
Image: ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
Image ID: ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/flannel
/opt/cni/bin/flannel
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 14 Jul 2025 23:49:22 +0900
Finished: Mon, 14 Jul 2025 23:49:22 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt/cni/bin from cni-plugin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8bh8 (ro)
install-cni:
Container ID: containerd://d40ae3545353faf7911afb92146557a2996d7bfecd1e47ff9edf8e6d0a23c918
Image: ghcr.io/flannel-io/flannel:v0.27.1
Image ID: ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
Port: <none>
Host Port: <none>
Command:
cp
Args:
-f
/etc/kube-flannel/cni-conf.json
/etc/cni/net.d/10-flannel.conflist
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 14 Jul 2025 23:49:29 +0900
Finished: Mon, 14 Jul 2025 23:49:29 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/cni/net.d from cni (rw)
/etc/kube-flannel/ from flannel-cfg (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8bh8 (ro)
Containers:
kube-flannel:
Container ID: containerd://5305910686764c1ea1a0231fc39a38dd718eeafbb20746d44e52c37e4b17ba72
Image: ghcr.io/flannel-io/flannel:v0.27.1
Image ID: ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
Port: <none>
Host Port: <none>
Command:
/opt/bin/flanneld
--ip-masq
--kube-subnet-mgr
--iface=eth1
State: Running
Started: Mon, 14 Jul 2025 23:49:30 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_NAME: kube-flannel-ds-q4chw (v1:metadata.name)
POD_NAMESPACE: kube-flannel (v1:metadata.namespace)
EVENT_QUEUE_DEPTH: 5000
CONT_WHEN_CACHE_NOT_READY: false
Mounts:
/etc/kube-flannel/ from flannel-cfg (rw)
/run/flannel from run (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8bh8 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
run:
Type: HostPath (bare host directory volume)
Path: /run/flannel
HostPathType:
cni-plugin:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
flannel-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-flannel-cfg
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
kube-api-access-w8bh8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned kube-flannel/kube-flannel-ds-q4chw to k8s-w2
Normal Pulling 43s kubelet Pulling image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1"
Normal Pulled 39s kubelet Successfully pulled image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1" in 4.081s (4.081s including waiting). Image size: 4878976 bytes.
Normal Created 39s kubelet Created container: install-cni-plugin
Normal Started 39s kubelet Started container install-cni-plugin
Normal Pulling 39s kubelet Pulling image "ghcr.io/flannel-io/flannel:v0.27.1"
Normal Pulled 32s kubelet Successfully pulled image "ghcr.io/flannel-io/flannel:v0.27.1" in 6.97s (6.97s including waiting). Image size: 32389164 bytes.
Normal Created 32s kubelet Created container: install-cni
Normal Started 32s kubelet Started container install-cni
Normal Pulled 31s kubelet Container image "ghcr.io/flannel-io/flannel:v0.27.1" already present on machine
Normal Created 31s kubelet Created container: kube-flannel
Normal Started 31s kubelet Started container kube-flannel
|
install-cni-plugin: installs the flannel CNI binary (/opt/cni/bin/flannel)
install-cni: applies the CNI configuration (/etc/cni/net.d/10-flannel.conflist)
(5) Verify CNI binary installation
| (โ|HomeLab:N/A) root@k8s-ctr:~# tree /opt/cni/bin/
|
✅ Output
| /opt/cni/bin/
├── bandwidth
├── bridge
├── dhcp
├── dummy
├── firewall
├── flannel
├── host-device
├── host-local
├── ipvlan
├── LICENSE
├── loopback
├── macvlan
├── portmap
├── ptp
├── README.md
├── sbr
├── static
├── tap
├── tuning
├── vlan
└── vrf
1 directory, 21 files
|
(6) Verify CNI configuration file
| (โ|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/
└── 10-flannel.conflist
1 directory, 1 file
|
Check 10-flannel.conflist:
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat /etc/cni/net.d/10-flannel.conflist | jq
|
✅ Output
| {
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
|
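The flannel entry above is a delegating plugin: when a pod is created it reads the per-node subnet lease that flanneld writes out and generates the config for the bridge/host-local plugins it delegates to. A minimal sketch for inspecting that lease file (the path below is flannel's default subnet file; adjust it if your deployment overrides it):
| # Per-node subnet lease handed to the flannel CNI plugin.
# FLANNEL_NETWORK = cluster CIDR, FLANNEL_SUBNET = this node's /24,
# FLANNEL_MTU = MTU used for cni0/veths (1450 here because of the VXLAN header).
cat /run/flannel/subnet.env
|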
(7) Verify ConfigMap settings
Pod network CIDR: 10.244.0.0/16 / Backend: vxlan
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n kube-flannel kube-flannel-cfg
|
✅ Output
| Name: kube-flannel-cfg
Namespace: kube-flannel
Labels: app=flannel
app.kubernetes.io/managed-by=Helm
tier=node
Annotations: meta.helm.sh/release-name: flannel
meta.helm.sh/release-namespace: kube-flannel
Data
====
cni-conf.json:
----
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json:
----
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
BinaryData
====
Events: <none>
|
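The same data can be pulled straight out of the ConfigMap and pretty-printed; a small sketch using the resource names shown above:
| # Print only the network/backend settings (net-conf.json) from the flannel ConfigMap
kubectl -n kube-flannel get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}' | jq
|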
(8) Verify network interface changes
After the installation, flannel.1, cni0, and veth* interfaces have been added:
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c link
|
✅ Output
|
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether f6:af:58:af:44:e3 brd ff:ff:ff:ff:ff:ff
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
|
(9) Verify routing table changes
Routes seen on the control plane:
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 10.244.
|
✅ Output
| 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
|
10.244.1.0/24 → forwarded via flannel.1 to worker node 1
10.244.2.0/24 → forwarded via flannel.1 to worker node 2
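The "via 10.244.x.0 dev flannel.1 onlink" routes only select the VXLAN device; the remote VTEP is resolved from the neighbor and FDB entries that flanneld programs for each peer node. A quick way to see that mapping on k8s-ctr (standard iproute2 commands):
| # Peer gateway IP (10.244.x.0) -> peer flannel.1 MAC address
ip neigh show dev flannel.1
# Peer MAC address -> underlay node IP (192.168.10.x) used as the VXLAN destination
bridge fdb show dev flannel.1
|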
(10) ping test
| (โ|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.244.1.0
|
✅ Output
| PING 10.244.1.0 (10.244.1.0) 56(84) bytes of data.
64 bytes from 10.244.1.0: icmp_seq=1 ttl=64 time=0.391 ms
--- 10.244.1.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.244.2.0
|
✅ Output
| PING 10.244.2.0 (10.244.2.0) 56(84) bytes of data.
64 bytes from 10.244.2.0: icmp_seq=1 ttl=64 time=0.409 ms
--- 10.244.2.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms
|
ping 10.244.1.0 and ping 10.244.2.0 both answer normally - routing and interface operation confirmed.
(11) Verify worker node interfaces
k8s-w1 - flannel.1: 10.244.1.0/32
✅ Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 81470sec preferred_lft 81470sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86361sec preferred_lft 14361sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe07:f254/64 scope link
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::4c13:ff:fe49:ce71/64 scope link
valid_lft forever preferred_lft forever
|
k8s-w2 - flannel.1: 10.244.2.0/32
✅ Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 81514sec preferred_lft 81514sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86356sec preferred_lft 14356sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.102/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe96:d720/64 scope link
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
inet 10.244.2.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::5419:dcff:fe74:53eb/64 scope link
valid_lft forever preferred_lft forever
|
- Each worker also has a route toward the control plane CIDR (10.244.0.0/24).
(12) Verify the bridge interface
veth470cf46f and vethe4603105 are attached to the cni0 bridge.
| (โ|HomeLab:N/A) root@k8s-ctr:~# brctl show
|
✅ Output
| bridge name bridge id STP enabled interfaces
cni0 8000.f6af58af44e3 no veth470cf46f
vethe4603105
|
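brctl comes from the legacy bridge-utils package; the same attachment information is available from iproute2, which is handy on hosts where bridge-utils is not installed:
| # List every interface enslaved to the cni0 bridge
ip -br link show master cni0
|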
(13) Verify iptables NAT rules
| (โ|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
|
✅ Output
| -P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N FLANNEL-POSTRTG
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-6E7XQMQ4RAYOWTTM
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SEP-IT2ZTR26TO4XFPTO
-N KUBE-SEP-N4G2XR5TDX7PQE7P
-N KUBE-SEP-YIL6JZP7A3QYXJU2
-N KUBE-SEP-ZP3FB6NMPNCO4VBJ
-N KUBE-SEP-ZXMNUKOKXUTL2MK2
-N KUBE-SERVICES
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.3:53" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.3:9153" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.3:53" -j KUBE-SEP-6E7XQMQ4RAYOWTTM
|
The FLANNEL-POSTRTG chain has been created for flannel's masquerade rules.
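To inspect just that chain, with per-rule packet/byte counters so you can see which masquerade rule pod traffic actually hits, something like the following works:
| # -n: numeric output, -v: include packet/byte counters per rule
iptables -t nat -L FLANNEL-POSTRTG -n -v
|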
(14) Check k8s-w1 and k8s-w2 from the control plane
Network interfaces on each node:
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
|
Routing table on each node:
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
|
(15) Verify the control plane interface
The flannel.1 interface is assigned the IP 10.244.0.0/32:
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c a
|
✅ Output
| ...
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e40f:9bff:fe40:c3ec/64 scope link
valid_lft forever preferred_lft forever
...
|
(16) Verify cluster Pod status
The coredns Pods are running normally in the control plane (k8s-ctr) Pod CIDR (10.244.0.x):
| (โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -A -owide
|
✅ Output
|
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel kube-flannel-ds-9qdxf 1/1 Running 0 27m 192.168.10.100 k8s-ctr <none> <none>
kube-flannel kube-flannel-ds-c4rxb 1/1 Running 0 27m 192.168.10.101 k8s-w1 <none> <none>
kube-flannel kube-flannel-ds-q4chw 1/1 Running 0 27m 192.168.10.102 k8s-w2 <none> <none>
kube-system coredns-674b8bbfcf-7gx6f 1/1 Running 0 94m 10.244.0.2 k8s-ctr <none> <none>
kube-system coredns-674b8bbfcf-mjnst 1/1 Running 0 94m 10.244.0.3 k8s-ctr <none> <none>
kube-system etcd-k8s-ctr 1/1 Running 0 95m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 95m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 95m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-proxy-b6zgw 1/1 Running 0 92m 192.168.10.101 k8s-w1 <none> <none>
kube-system kube-proxy-grfn2 1/1 Running 0 94m 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-proxy-p678s 1/1 Running 0 90m 192.168.10.102 k8s-w2 <none> <none>
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 95m 192.168.10.100 k8s-ctr <none> <none>
|
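The 10.244.0.x / 10.244.1.x / 10.244.2.x pod addresses follow the per-node podCIDR that the controller manager carved out of the 10.244.0.0/16 cluster CIDR; it can be listed with a jsonpath query such as:
| # Node name and the /24 podCIDR assigned to it
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
|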
🚀 Deploying a sample application
1. Create a Deployment & Service
podAntiAffinity is used so that pods carrying the same label are not scheduled onto the same node (a quick check of the resulting spread follows the manifest below).
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: webpod
spec:
replicas: 2
selector:
matchLabels:
app: webpod
template:
metadata:
labels:
app: webpod
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- webpod
topologyKey: "kubernetes.io/hostname"
containers:
- name: webpod
image: traefik/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: webpod
labels:
app: webpod
spec:
selector:
app: webpod
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
EOF
# Result
deployment.apps/webpod created
service/webpod created
|
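A quick way to confirm that the anti-affinity actually spread the replicas across nodes (custom-columns is just a compact alternative to the -owide check in step 4 below):
| # Show which node each webpod replica landed on
kubectl get pod -l app=webpod -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
|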
2. Create curl-pod
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
nodeName: k8s-ctr
containers:
- name: curl
image: alpine/curl
command: ["sleep", "36000"]
EOF
# Result
pod/curl-pod created
|
3. Verify pods and containers
| (โ|HomeLab:N/A) root@k8s-ctr:~# crictl ps
|
✅ Output
| CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
3690a55e4d3fa e747d861ab8fd About a minute ago Running curl 0 7802b00f4ec67 curl-pod default
3df0eea47e774 1cf5f116067c6 33 minutes ago Running coredns 0 43cbb7c51580f coredns-674b8bbfcf-mjnst kube-system
5b58bfe8158a1 1cf5f116067c6 33 minutes ago Running coredns 0 99e79c3156525 coredns-674b8bbfcf-7gx6f kube-system
1fda477f80ac4 747b002efa646 33 minutes ago Running kube-flannel 0 db9d5eb73c3c3 kube-flannel-ds-9qdxf kube-flannel
14f050462b6b3 661d404f36f01 2 hours ago Running kube-proxy 0 466896b16a9e0 kube-proxy-grfn2 kube-system
fe228f01dce18 cfed1ff748928 2 hours ago Running kube-scheduler 0 ff4a0b472ee10 kube-scheduler-k8s-ctr kube-system
2dabc9b77c5bb ff4f56c76b82d 2 hours ago Running kube-controller-manager 0 37b1ef75a797b kube-controller-manager-k8s-ctr kube-system
db0f8a21d4e5d ee794efa53d85 2 hours ago Running kube-apiserver 0 2ec28da84e45d kube-apiserver-k8s-ctr kube-system
09c29ceb35443 499038711c081 2 hours ago Running etcd 0 d0c2a84088abf etcd-k8s-ctr kube-system
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo crictl ps ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
86199e4d7a0c5 6fee7566e4273 4 minutes ago Running webpod 0 ca2ae7352a95a webpod-697b545f57-rnqgq default
f49e28157b71d 747b002efa646 35 minutes ago Running kube-flannel 0 fdddbfd293f2c kube-flannel-ds-c4rxb kube-flannel
df1e05d91b486 661d404f36f01 2 hours ago Running kube-proxy 0 1f97c8bfacbe1 kube-proxy-b6zgw kube-system
>> node : k8s-w2 <<
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
52921096fd252 6fee7566e4273 4 minutes ago Running webpod 0 28d5aedbc875d webpod-697b545f57-67wlk default
5305910686764 747b002efa646 35 minutes ago Running kube-flannel 0 e37e09c6351bd kube-flannel-ds-q4chw kube-flannel
2e23a26ab34de 661d404f36f01 2 hours ago Running kube-proxy 0 db95ae957614a kube-proxy-p678s kube-system
|
- Worker node 1: one webpod, plus kube-flannel and kube-proxy
- Worker node 2: one webpod, plus kube-flannel and kube-proxy
4. Verify pod deployment status
| (โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
|
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 4m19s 10.244.0.4 k8s-ctr <none> <none>
webpod-697b545f57-67wlk 1/1 Running 0 6m34s 10.244.2.2 k8s-w2 <none> <none>
webpod-697b545f57-rnqgq 1/1 Running 0 6m34s 10.244.1.2 k8s-w1 <none> <none>
|
5. Verify Service & Endpoints
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
|
✅ Output
| Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/webpod 2/2 2 2 7m25s webpod traefik/whoami app=webpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/webpod ClusterIP 10.96.33.91 <none> 80/TCP 7m25s app=webpod
NAME ENDPOINTS AGE
endpoints/webpod 10.244.1.2:80,10.244.2.2:80 7m25s
|
- ClusterIP: 10.96.33.91
- Endpoints: 10.244.1.2:80, 10.244.2.2:80
The same endpoints can also be confirmed via EndpointSlice:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl api-resources | grep -i endpoint
|
✅ Output
| endpoints ep v1 true Endpoints
endpointslices discovery.k8s.io/v1 true EndpointSlice
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod
|
✅ Output
| NAME ADDRESSTYPE PORTS ENDPOINTS AGE
webpod-wzm9p IPv4 80 10.244.2.2,10.244.1.2 8m33s
|
6. Verify network changes
An additional veth interface has been created on the cni0 bridge:
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c link
|
✅ Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether f6:af:58:af:44:e3 brd ff:ff:ff:ff:ff:ff
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
8: veth51449dca@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether 42:58:20:09:e6:66 brd ff:ff:ff:ff:ff:ff link-netns cni-a2abf4c9-a021-9a05-3db5-2b9f8761bb5a
|
Three veth interfaces are now attached to the cni0 bridge:
| (โ|HomeLab:N/A) root@k8s-ctr:~# brctl show
|
✅ Output
| bridge name bridge id STP enabled interfaces
cni0 8000.f6af58af44e3 no veth470cf46f
veth51449dca
vethe4603105
|
KUBE-SERVICES rules for the webpod service (10.96.33.91) have been added:
| (โ|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
|
✅ Output
| -P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N FLANNEL-POSTRTG
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-6E7XQMQ4RAYOWTTM
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SEP-IT2ZTR26TO4XFPTO
-N KUBE-SEP-N4G2XR5TDX7PQE7P
-N KUBE-SEP-PQBQBGZJJ5FKN3TB
-N KUBE-SEP-WEW7NHLZ4Y5A5ZKF
-N KUBE-SEP-YIL6JZP7A3QYXJU2
-N KUBE-SEP-ZP3FB6NMPNCO4VBJ
-N KUBE-SEP-ZXMNUKOKXUTL2MK2
-N KUBE-SERVICES
-N KUBE-SVC-CNZCPOCNCNOROALA
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -s 10.244.1.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -s 10.244.2.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.2.2:80
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.1.2:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-PQBQBGZJJ5FKN3TB
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.2.2:80" -j KUBE-SEP-WEW7NHLZ4Y5A5ZKF
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.3:53" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.3:9153" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.3:53" -j KUBE-SEP-6E7XQMQ4RAYOWTTM
|
7. Check k8s-w1 and k8s-w2
(1) Verify network interface state
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 7a:2d:30:0b:37:11 brd ff:ff:ff:ff:ff:ff
6: veth5aaee95c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether e6:aa:64:ab:e4:47 brd ff:ff:ff:ff:ff:ff link-netns cni-30ede71c-cd06-5139-e25d-267ce0b09a24
>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 9e:ff:1f:86:1a:31 brd ff:ff:ff:ff:ff:ff
6: veth4ccb4288@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether 5e:dd:78:cd:18:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-1a5528fd-a3b3-87cb-06a3-7b599dba3fc7
|
flannel.1 is the CNI tunnel interface; the pod network is composed of the cni0 bridge and the veth interfaces.
(2) Verify routing tables
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
|
On k8s-w1 the local 10.244.1.0/24 range is routed directly to cni0, while 10.244.0.0/24 and 10.244.2.0/24 are forwarded via flannel.1 (k8s-w2 mirrors this with its own 10.244.2.0/24 on cni0).
(3) Pod communication test
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -l app=webpod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
webpod-697b545f57-67wlk 1/1 Running 0 12m 10.244.2.2 k8s-w2 <none> <none>
webpod-697b545f57-rnqgq 1/1 Running 0 12m 10.244.1.2 k8s-w1 <none> <none>
|
Access a pod directly by IP from the curl-pod running on the control plane:
| (โ|HomeLab:N/A) root@k8s-ctr:~# POD1IP=10.244.1.2
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $POD1IP
Hostname: webpod-697b545f57-rnqgq
IP: 127.0.0.1
IP: ::1
IP: 10.244.1.2
IP: fe80::3861:82ff:fe93:7fed
RemoteAddr: 10.244.0.4:45080
GET / HTTP/1.1
Host: 10.244.1.2
User-Agent: curl/8.14.1
Accept: */*
|
curl 10.244.1.2 → webpod response confirmed.
Access by service name:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep webpod
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/webpod ClusterIP 10.96.33.91 <none> 80/TCP 13m
NAME ENDPOINTS AGE
endpoints/webpod 10.244.1.2:80,10.244.2.2:80 13m
|
curl webpod → requests are load-balanced randomly across the endpoints via the ClusterIP:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
Hostname: webpod-697b545f57-67wlk
IP: 127.0.0.1
IP: ::1
IP: 10.244.2.2
IP: fe80::455:7ff:fe77:973f
RemoteAddr: 10.244.0.4:58490
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
|
The Hostname value alternates between the webpod pods on k8s-w1 and k8s-w2:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-rnqgq
...
|
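Each of these requests is DNAT-ed by the KUBE-SVC/KUBE-SEP rules shown earlier; the live translations can be inspected with the conntrack tool. A sketch, assuming the conntrack CLI is present on the node (kubeadm's preflight checks require it, so it usually is):
| # List conntrack entries whose original destination is the webpod ClusterIP
conntrack -L -d 10.96.33.91
|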
8. Service & Endpoints
Rules forwarding traffic to the webpod service (ClusterIP) have been added to the iptables NAT table:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath="{.spec.clusterIP}"
SVCIP=$(kubectl get svc webpod -o jsonpath="{.spec.clusterIP}")
iptables -t nat -S | grep $SVCIP
|
✅ Output
| 10.96.33.91-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
|
webpod service
- ClusterIP: 10.96.33.91
- Endpoints: 10.244.1.2:80, 10.244.2.2:80
9. Verify iptables NAT rules on the worker nodes
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo iptables -t nat -S | grep $SVCIP ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
>> node : k8s-w2 <<
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
|
🧩 Cilium CNI introduction
Comparison: Standard Container Networking vs Cilium eBPF Networking
https://cilium.io/blog/2021/05/11/cni-benchmark/
https://ebpf.io/
⚙️ Pre-installation checks for Cilium CNI
1. Check Cilium system requirements
Verify the architecture (x86_64) and kernel version (6.8.0-53-generic):
| (โ|HomeLab:N/A) root@k8s-ctr:~# arch
x86_64
(โ|HomeLab:N/A) root@k8s-ctr:~# uname -r
6.8.0-53-generic
|
2. Check basic kernel configuration requirements
| grep -E 'CONFIG_BPF|CONFIG_BPF_SYSCALL|CONFIG_NET_CLS_BPF|CONFIG_BPF_JIT|CONFIG_NET_CLS_ACT|CONFIG_NET_SCH_INGRESS|CONFIG_CRYPTO_SHA1|CONFIG_CRYPTO_USER_API_HASH|CONFIG_CGROUPS|CONFIG_CGROUP_BPF|CONFIG_PERF_EVENTS|CONFIG_SCHEDSTATS' /boot/config-$(uname -r)
|
✅ Output
| CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
# CONFIG_BPF_PRELOAD is not set
CONFIG_BPF_LSM=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_BPF=y
CONFIG_PERF_EVENTS=y
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=m
CONFIG_PERF_EVENTS_INTEL_CSTATE=m
# CONFIG_PERF_EVENTS_AMD_POWER is not set
CONFIG_PERF_EVENTS_AMD_UNCORE=m
CONFIG_PERF_EVENTS_AMD_BRS=y
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_ACT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_SHA1_SSSE3=m
CONFIG_SCHEDSTATS=y
CONFIG_BPF_EVENTS=y
CONFIG_BPF_KPROBE_OVERRIDE=y
|
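Rather than eyeballing the grep output, the required options can be checked one at a time with a small loop; a sketch mirroring the option list used above (both built-in =y and module =m are acceptable):
| # Flag any required kernel option that is neither built-in (=y) nor a module (=m)
for opt in BPF BPF_SYSCALL NET_CLS_BPF BPF_JIT NET_CLS_ACT NET_SCH_INGRESS \
           CRYPTO_SHA1 CRYPTO_USER_API_HASH CGROUPS CGROUP_BPF PERF_EVENTS SCHEDSTATS; do
  grep -Eq "^CONFIG_${opt}=(y|m)" /boot/config-$(uname -r) \
    && echo "CONFIG_${opt}: ok" || echo "CONFIG_${opt}: MISSING"
done
|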
3. Check kernel tunneling/routing options
| (โ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_VXLAN=y|CONFIG_VXLAN=m|CONFIG_GENEVE=y|CONFIG_GENEVE=m|CONFIG_FIB_RULES=y' /boot/config-$(uname -r)
|
✅ Output
| CONFIG_FIB_RULES=y
CONFIG_VXLAN=m
CONFIG_GENEVE=m
|
4. Check geneve module load state
The geneve module is not loaded yet:
| (โ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
|
✅ Output
| vxlan 155648 0
ip6_udp_tunnel 16384 1 vxlan
udp_tunnel 32768 1 vxlan
|
After running modprobe geneve, the geneve module is loaded correctly:
| (โ|HomeLab:N/A) root@k8s-ctr:~# modprobe geneve
(โ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
|
✅ Output
| geneve 49152 0
vxlan 155648 0
ip6_udp_tunnel 16384 2 geneve,vxlan
udp_tunnel 32768 2 geneve,vxlan
|
5. Check Netfilter-related options
| (โ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_NETFILTER_XT_TARGET_TPROXY|CONFIG_NETFILTER_XT_TARGET_MARK|CONFIG_NETFILTER_XT_TARGET_CT|CONFIG_NETFILTER_XT_MATCH_MARK|CONFIG_NETFILTER_XT_MATCH_SOCKET' /boot/config-$(uname -r)
|
✅ Output
| CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
|
6. Check the Netkit device mode option
| (โ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_NETKIT=y|CONFIG_NETKIT=m' /boot/config-$(uname -r)
|
✅ Output (none - the option does not appear in this kernel config)
7. Check the eBPF filesystem mount state
| (โ|HomeLab:N/A) root@k8s-ctr:~# mount | grep /sys/fs/bpf
|
✅ Output
| bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
|
8. Remove the existing Flannel CNI
(1) Run helm uninstall -n kube-flannel flannel
| (โ|HomeLab:N/A) root@k8s-ctr:~# helm uninstall -n kube-flannel flannel
|
✅ Output
| release "flannel" uninstalled
|
(2) Check for remaining resources
| (โ|HomeLab:N/A) root@k8s-ctr:~# helm list -A
|
✅ Output
| NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n kube-flannel
|
✅ Output
| No resources found in kube-flannel namespace.
|
(3) Delete the namespace
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ns kube-flannel
|
✅ Output
| namespace "kube-flannel" deleted
|
(4) Check overall pod status
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
|
✅ Output
| NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default curl-pod 1/1 Running 0 46h 10.244.0.4 k8s-ctr <none> <none>
default webpod-697b545f57-67wlk 1/1 Running 0 46h 10.244.2.2 k8s-w2 <none> <none>
default webpod-697b545f57-rnqgq 1/1 Running 0 46h 10.244.1.2 k8s-w1 <none> <none>
kube-system coredns-674b8bbfcf-7gx6f 1/1 Running 0 47h 10.244.0.2 k8s-ctr <none> <none>
kube-system coredns-674b8bbfcf-mjnst 1/1 Running 0 47h 10.244.0.3 k8s-ctr <none> <none>
kube-system etcd-k8s-ctr 1/1 Running 0 47h 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 47h 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 47h 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-proxy-b6zgw 1/1 Running 0 47h 192.168.10.101 k8s-w1 <none> <none>
kube-system kube-proxy-grfn2 1/1 Running 0 47h 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-proxy-p678s 1/1 Running 0 47h 192.168.10.102 k8s-w2 <none> <none>
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 47h 192.168.10.100 k8s-ctr <none> <none>
|
9. Check state before removing the network interfaces
List the network interfaces on each node (k8s-ctr, k8s-w1, k8s-w2):
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c link
|
✅ Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether f6:af:58:af:44:e3 brd ff:ff:ff:ff:ff:ff
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
8: veth51449dca@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether 42:58:20:09:e6:66 brd ff:ff:ff:ff:ff:ff link-netns cni-a2abf4c9-a021-9a05-3db5-2b9f8761bb5a
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 7a:2d:30:0b:37:11 brd ff:ff:ff:ff:ff:ff
6: veth5aaee95c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether e6:aa:64:ab:e4:47 brd ff:ff:ff:ff:ff:ff link-netns cni-30ede71c-cd06-5139-e25d-267ce0b09a24
>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 9e:ff:1f:86:1a:31 brd ff:ff:ff:ff:ff:ff
6: veth4ccb4288@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
link/ether 5e:dd:78:cd:18:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-1a5528fd-a3b3-87cb-06a3-7b599dba3fc7
|
- Even after flannel is uninstalled, the flannel.1 and cni0 interfaces remain.
10. Check bridge state on each node
| (โ|HomeLab:N/A) root@k8s-ctr:~# brctl show
|
✅ Output
| bridge name bridge id STP enabled interfaces
cni0 8000.f6af58af44e3 no veth470cf46f
veth51449dca
vethe4603105
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
bridge name bridge id STP enabled interfaces
cni0 8000.7a2d300b3711 no veth5aaee95c
>> node : k8s-w2 <<
bridge name bridge id STP enabled interfaces
cni0 8000.9eff1f861a31 no veth4ccb4288
|
- On k8s-ctr, k8s-w1, and k8s-w2 the cni0 bridge and its veth interfaces are all still present.
11. Check the routing table on each node
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
|
✅ Output
| default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
|
Traffic is still being routed through flannel.1 and cni0.
12. Remove the vNICs (flannel.1, cni0)
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip link del flannel.1
ip link del cni0
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del flannel.1 ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
>> node : k8s-w2 <<
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del cni0 ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
>> node : k8s-w2 <<
|
13. Verify after removal
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c link
|
✅ Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
8: veth51449dca@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 42:58:20:09:e6:66 brd ff:ff:ff:ff:ff:ff link-netns cni-a2abf4c9-a021-9a05-3db5-2b9f8761bb5a
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
altname enp0s8
6: veth5aaee95c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether e6:aa:64:ab:e4:47 brd ff:ff:ff:ff:ff:ff link-netns cni-30ede71c-cd06-5139-e25d-267ce0b09a24
>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
altname enp0s8
6: veth4ccb4288@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 5e:dd:78:cd:18:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-1a5528fd-a3b3-87cb-06a3-7b599dba3fc7
|
- The flannel.1 and cni0 interfaces no longer appear on any node
| (โ|HomeLab:N/A) root@k8s-ctr:~# brctl show
# no output
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
>> node : k8s-w2 <<
|
The brctl show output no longer lists the cni0 bridge either.
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
|
✅ Output
| default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
|
- The flannel-related routes (10.244.x.x) are gone; only the base eth0/eth1 routes remain
14. Remove the kube-proxy resources
Delete the kube-proxy DaemonSet and ConfigMap.
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system delete ds kube-proxy
kubectl -n kube-system delete cm kube-proxy
# result
daemonset.apps "kube-proxy" deleted
configmap "kube-proxy" deleted
|
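To double-check that no kube-proxy pods are left running, the DaemonSet pods can be listed by label; a minimal sketch, assuming the standard kubeadm label k8s-app=kube-proxy:
| kubectl -n kube-system get pod -l k8s-app=kube-proxy
# once the DaemonSet is gone this should report that no resources were found
|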
Even after the removal, the existing pods keep their IPs in the 10.244.x.x range.
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
|
✅ Output
| NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default curl-pod 1/1 Running 0 46h 10.244.0.4 k8s-ctr <none> <none>
default webpod-697b545f57-67wlk 1/1 Running 0 46h 10.244.2.2 k8s-w2 <none> <none>
default webpod-697b545f57-rnqgq 1/1 Running 0 46h 10.244.1.2 k8s-w1 <none> <none>
kube-system coredns-674b8bbfcf-7gx6f 0/1 Running 0 2d 10.244.0.2 k8s-ctr <none> <none>
kube-system coredns-674b8bbfcf-mjnst 0/1 Running 0 2d 10.244.0.3 k8s-ctr <none> <none>
kube-system etcd-k8s-ctr 1/1 Running 0 2d 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 2d 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 2d 192.168.10.100 k8s-ctr <none> <none>
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 2d 192.168.10.100 k8s-ctr <none> <none>
|
15. Check pod network connectivity
Confirm that pod-to-service communication now fails; the curl below hangs (a bounded variant follows after it).
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
|
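With no CNI in place the curl above never returns, so it helps to bound it; a small sketch using curl's --max-time option so the failure is reported instead of hanging indefinitely:
| kubectl exec -it curl-pod -- curl --max-time 3 webpod
# fails fast (timeout / resolution error) while no CNI is installed
|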
16. Check the leftover iptables rules
The FLANNEL-POSTRTG, FLANNEL-FWD, and KUBE-* chains are still present.
| (โ|HomeLab:N/A) root@k8s-ctr:~# iptables-save
|
✅ Output
| # Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:54:05 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 22:54:05 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:54:05 2025
*filter
:INPUT ACCEPT [1979062:417540265]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1959906:367460835]
:FLANNEL-FWD - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment "flanneld forward" -j FLANNEL-FWD
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A FLANNEL-FWD -s 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
-A FLANNEL-FWD -d 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Wed Jul 16 22:54:05 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:54:05 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:FLANNEL-POSTRTG - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-ETI7FUQQE3BS2IXE - [0:0]
:KUBE-SEP-PQBQBGZJJ5FKN3TB - [0:0]
:KUBE-SEP-WEW7NHLZ4Y5A5ZKF - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-CNZCPOCNCNOROALA - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -s 10.244.1.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -s 10.244.2.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.2.2:80
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.1.2:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-PQBQBGZJJ5FKN3TB
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.2.2:80" -j KUBE-SEP-WEW7NHLZ4Y5A5ZKF
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Wed Jul 16 22:54:05 2025
|
- Even after flannel has been removed, these iptables chains remain and still affect the network path (a filtered view follows below)
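To list only the leftover chain definitions on every node without scrolling the full dump, the iptables-save output can be filtered; a minimal sketch reusing the same sshpass loop as above:
| iptables-save | grep -E '^:(KUBE|FLANNEL)'
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i "sudo iptables-save | grep -E '^:(KUBE|FLANNEL)'" ; echo; done
|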
17. Reset iptables
Remove the KUBE- and FLANNEL-related rules.
| (โ|HomeLab:N/A) root@k8s-ctr:~# iptables-save | grep -v KUBE | grep -v FLANNEL | iptables-restore
(โ|HomeLab:N/A) root@k8s-ctr:~# iptables-save
|
✅ Output
| # Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:55:46 2025
*mangle
:PREROUTING ACCEPT [2973:530951]
:INPUT ACCEPT [2973:530951]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2840:534805]
:POSTROUTING ACCEPT [2840:534805]
COMMIT
# Completed on Wed Jul 16 22:55:46 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:55:46 2025
*filter
:INPUT ACCEPT [2973:530951]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2840:534805]
COMMIT
# Completed on Wed Jul 16 22:55:46 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:55:46 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 22:55:46 2025
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo iptables-save
|
✅ Output
| # Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:56:36 2025
*mangle
:PREROUTING ACCEPT [25:5510]
:INPUT ACCEPT [25:5510]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:5592]
:POSTROUTING ACCEPT [21:5592]
COMMIT
# Completed on Wed Jul 16 22:56:36 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:56:36 2025
*filter
:INPUT ACCEPT [25:5510]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:5592]
COMMIT
# Completed on Wed Jul 16 22:56:36 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:56:36 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 22:56:36 2025
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 sudo iptables-save
|
✅ Output
| # Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:57:32 2025
*mangle
:PREROUTING ACCEPT [25:5374]
:INPUT ACCEPT [25:5374]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [20:5540]
:POSTROUTING ACCEPT [20:5540]
COMMIT
# Completed on Wed Jul 16 22:57:32 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:57:32 2025
*filter
:INPUT ACCEPT [25:5374]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [20:5540]
COMMIT
# Completed on Wed Jul 16 22:57:32 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:57:32 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 22:57:32 2025
|
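The per-node flushes above can also be run as a single loop over both workers; a minimal sketch, assuming the same vagrant/sshpass access used throughout:
| for i in w1 w2 ; do
  echo ">> node : k8s-$i <<"
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i \
    "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
done
|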
18. Check the state after cleanup
The pods still keep their old 10.244.x.x IPs and, with no CNI installed, remain unable to communicate.
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
|
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 46h 10.244.0.4 k8s-ctr <none> <none>
webpod-697b545f57-67wlk 1/1 Running 0 46h 10.244.2.2 k8s-w2 <none> <none>
webpod-697b545f57-rnqgq 1/1 Running 0 46h 10.244.1.2 k8s-w1 <none> <none>
|
🛠️ Install Cilium
1. Add the Cilium Helm repository
| (โ|HomeLab:N/A) root@k8s-ctr:~# helm repo add cilium https://helm.cilium.io/
# result
"cilium" has been added to your repositories
|
2. Install Cilium (Helm)
- kubeProxyReplacement=true: Cilium fully replaces kube-proxy
- installNoConntrackIptablesRules=true: skip installing the conntrack iptables rules
- bpf.masquerade=true: BPF-based masquerading
| (โ|HomeLab:N/A) root@k8s-ctr:~# helm install cilium cilium/cilium --version 1.17.5 --namespace kube-system \
--set k8sServiceHost=192.168.10.100 --set k8sServicePort=6443 \
--set kubeProxyReplacement=true \
--set routingMode=native \
--set autoDirectNodeRoutes=true \
--set ipam.mode="cluster-pool" \
--set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16 \
--set endpointRoutes.enabled=true \
--set installNoConntrackIptablesRules=true \
--set bpf.masquerade=true \
--set ipv6.enabled=false
|
✅ Output
| NAME: cilium
LAST DEPLOYED: Wed Jul 16 23:03:28 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.
Your release version is 1.17.5.
For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
|
3. Check the Cilium values
| (โ|HomeLab:N/A) root@k8s-ctr:~# helm get values cilium -n kube-system
|
✅ Output
| USER-SUPPLIED VALUES:
autoDirectNodeRoutes: true
bpf:
masquerade: true
endpointRoutes:
enabled: true
installNoConntrackIptablesRules: true
ipam:
mode: cluster-pool
operator:
clusterPoolIPv4PodCIDRList:
- 172.20.0.0/16
ipv4NativeRoutingCIDR: 172.20.0.0/16
ipv6:
enabled: false
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
routingMode: native
|
4. Check CRD creation
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd
|
✅ Output
| NAME CREATED AT
ciliumcidrgroups.cilium.io 2025-07-16T14:04:32Z
ciliumclusterwidenetworkpolicies.cilium.io 2025-07-16T14:04:32Z
ciliumendpoints.cilium.io 2025-07-16T14:04:32Z
ciliumexternalworkloads.cilium.io 2025-07-16T14:04:32Z
ciliumidentities.cilium.io 2025-07-16T14:04:32Z
ciliuml2announcementpolicies.cilium.io 2025-07-16T14:04:32Z
ciliumloadbalancerippools.cilium.io 2025-07-16T14:04:32Z
ciliumnetworkpolicies.cilium.io 2025-07-16T14:04:33Z
ciliumnodeconfigs.cilium.io 2025-07-16T14:04:32Z
ciliumnodes.cilium.io 2025-07-16T14:04:32Z
ciliumpodippools.cilium.io 2025-07-16T14:04:32Z
|
Confirms that the Cilium CRDs, such as ciliumendpoints.cilium.io and ciliumnodes.cilium.io, have been created.
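To narrow the listing down to just the Cilium CRDs, a simple filter works; a small sketch:
| kubectl get crd -o name | grep cilium.io
|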
5. Monitor the Cilium pod status
| watch -d kubectl get pod -A
|
✅ Output
| Every 2.0s: kubectl get pod -A k8s-ctr: Wed Jul 16 23:08:45 2025
NAMESPACE NAME READY STATUS RESTARTS AGE
default curl-pod 1/1 Running 0 46h
default webpod-697b545f57-67wlk 1/1 Running 0 46h
default webpod-697b545f57-rnqgq 1/1 Running 0 46h
kube-system cilium-6pndl 1/1 Running 0 6m5s
kube-system cilium-9bpz2 1/1 Running 0 6m5s
kube-system cilium-envoy-b4zts 1/1 Running 0 6m5s
kube-system cilium-envoy-fkj4l 1/1 Running 0 6m5s
kube-system cilium-envoy-z9mvb 1/1 Running 0 6m5s
kube-system cilium-operator-865bc7f457-rgj7k 1/1 Running 0 6m5s
kube-system cilium-operator-865bc7f457-s2zv2 1/1 Running 0 6m5s
kube-system cilium-zhq77 1/1 Running 0 6m5s
kube-system coredns-674b8bbfcf-697cz 1/1 Running 0 4m44s
kube-system coredns-674b8bbfcf-rtz5g 1/1 Running 0 4m58s
kube-system etcd-k8s-ctr 1/1 Running 0 2d
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 2d
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 2d
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 2d
|
The cilium-*, cilium-operator-*, and cilium-envoy-* pods are all Running, confirming a healthy deployment.
6. Check Cilium status in detail
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose
|
✅ Output
| KVStore: Disabled
Kubernetes: Ok 1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs: ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: True [eth0 10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1 192.168.10.100 fe80::a00:27ff:fe80:23b9 (Direct Routing)]
Host firewall: Disabled
SRv6: Disabled
CNI Chaining: none
CNI Config file: successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium: Ok 1.17.5 (v1.17.5-69aab28c)
NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 3/254 allocated from 172.20.0.0/24,
Allocated addresses:
172.20.0.10 (kube-system/coredns-674b8bbfcf-rtz5g)
172.20.0.117 (health)
172.20.0.187 (router)
IPv4 BIG TCP: Disabled
IPv6 BIG TCP: Disabled
BandwidthManager: Disabled
Routing: Network: Native Host: BPF
Attach Mode: TCX
Device Mode: veth
Masquerading: BPF [eth0, eth1] 172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF: ktime
...
|
- Shows detailed settings such as KubeProxyReplacement enabled, BPF masquerade, and the IPv4 native-routing CIDR
- Confirms the CNI configuration file /host/etc/cni/net.d/05-cilium.conflist was written
7. Inspect the iptables state
| (โ|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
|
✅ Output
| -P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables -t nat -S ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
>> node : k8s-w2 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
|
- After installing Cilium, the NAT table shows CILIUM_ chains and only a minimal set of iptables rules
- Most traffic handling is performed by eBPF programs
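One quick way to see how small the ruleset has become is to count the lines of iptables-save per node and compare them with the roughly 100-line dump captured in step 16; a rough sketch:
| iptables-save | wc -l
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i "sudo iptables-save | wc -l" ; done
|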
8. Check the Services
| (โ|HomeLab:N/A) root@k8s-ctr:~# k get svc -A
|
✅ Output
| NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
default webpod ClusterIP 10.96.33.91 <none> 80/TCP 46h
kube-system cilium-envoy ClusterIP None <none> 9964/TCP 14m
kube-system hubble-peer ClusterIP 10.96.239.219 <none> 443/TCP 14m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d
|
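Since kube-proxy is gone, the Service-to-backend translation now lives in BPF maps rather than in iptables; it can be inspected from inside the agent, a sketch using the same cilium-dbg CLI used earlier:
| kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg service list
|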
Because Cilium processes packets in eBPF, the iptables ruleset stays simple.
| (โ|HomeLab:N/A) root@k8s-ctr:~# iptables-save
|
✅ Output
| # Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*mangle
:PREROUTING ACCEPT [315245:427624291]
:INPUT ACCEPT [315245:427624291]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [304727:93833315]
:POSTROUTING ACCEPT [304727:93833315]
:CILIUM_POST_mangle - [0:0]
:CILIUM_PRE_mangle - [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0x21a60200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 42529 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0x21a60200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 42529 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:CILIUM_OUTPUT_raw - [0:0]
:CILIUM_PRE_raw - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*filter
:INPUT ACCEPT [315245:427624291]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [304727:93833315]
:CILIUM_FORWARD - [0:0]
:CILIUM_INPUT - [0:0]
:CILIUM_OUTPUT - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xe00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0x400/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*nat
:PREROUTING ACCEPT [46:2536]
:INPUT ACCEPT [46:2536]
:OUTPUT ACCEPT [4956:297326]
:POSTROUTING ACCEPT [4956:297326]
:CILIUM_OUTPUT_nat - [0:0]
:CILIUM_POST_nat - [0:0]
:CILIUM_PRE_nat - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
|
📌 Check the Pod CIDR (IPAM)
1. Check the Pod CIDR (IPAM)
Each node's Pod CIDR is still the 10.244.x.x/24 range that was configured while flannel was in use.
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
|
✅ Output
| k8s-ctr 10.244.0.0/24
k8s-w1 10.244.1.0/24
k8s-w2 10.244.2.0/24
|
2. Check the existing Pod IPs
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
|
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 46h 10.244.0.4 k8s-ctr <none> <none>
webpod-697b545f57-67wlk 1/1 Running 0 47h 10.244.2.2 k8s-w2 <none> <none>
webpod-697b545f57-rnqgq 1/1 Running 0 47h 10.244.1.2 k8s-w1 <none> <none>
|
Because the Cilium CIDR has not been applied to these pods yet, communication is still broken.
| (โ|HomeLab:N/A) root@k8s-ctr:~# k exec -it curl-pod -- curl webpod
|
3. Check the CiliumNodes resources
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes
|
✅ Output
| NAME CILIUMINTERNALIP INTERNALIP AGE
k8s-ctr 172.20.0.187 192.168.10.100 19m
k8s-w1 172.20.2.165 192.168.10.101 19m
k8s-w2 172.20.1.96 192.168.10.102 19m
|
For each node, the Cilium-managed internal IP (from the 172.20.x.x range) and the node's InternalIP are shown.
4. Check the CiliumNodes PodCIDRs
Each node gets its own 172.20.x.x/24 CIDR (the network managed by Cilium).
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes -o json | grep podCIDRs -A2
|
✅ Output
| "podCIDRs": [
"172.20.0.0/24"
],
--
"podCIDRs": [
"172.20.2.0/24"
],
--
"podCIDRs": [
"172.20.1.0/24"
],
|
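The same mapping can also be printed per node with a jsonpath query; a sketch, assuming the CiliumNode schema nests the list under .spec.ipam.podCIDRs as the grep output above suggests:
| kubectl get ciliumnodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.ipam.podCIDRs}{"\n"}{end}'
|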
5. Restart the Deployment rollout
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deployment webpod
# result
deployment.apps/webpod restarted
|
As the webpod pods are redeployed, their IPs change to the Cilium-managed CIDR (172.20.x.x) range.
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 47h 10.244.0.4 k8s-ctr <none> <none>
webpod-9894b69cd-v2n4m 1/1 Running 0 15s 172.20.1.88 k8s-w2 <none> <none>
webpod-9894b69cd-zxcnw 1/1 Running 0 19s 172.20.2.124 k8s-w1 <none> <none>
|
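To wait until the restarted ReplicaSet has fully rolled out before checking IPs, kubectl can block on the rollout; a small sketch:
| kubectl rollout status deployment/webpod --timeout=120s
|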
6. Redeploy curl-pod
(1) Delete the existing curl-pod
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete pod curl-pod --grace-period=0
# result
pod "curl-pod" deleted
|
(2) Redeploy curl-pod
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
nodeName: k8s-ctr
containers:
- name: curl
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# result
pod/curl-pod created
|
(3) kubectl get pod -owide after redeployment
The recreated pod is assigned a new IP from the Cilium-managed CIDR (172.20.0.x).
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
|
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 42s 172.20.0.191 k8s-ctr <none> <none>
webpod-9894b69cd-v2n4m 1/1 Running 0 3m32s 172.20.1.88 k8s-w2 <none> <none>
webpod-9894b69cd-zxcnw 1/1 Running 0 3m36s 172.20.2.124 k8s-w1 <none> <none>
|
7. Check the CiliumEndpoints
Each pod's endpoint is in the ready state and shows its Cilium-managed IP (172.20.x.x).
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
|
✅ Output
| NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
curl-pod 13136 ready 172.20.0.191
webpod-9894b69cd-v2n4m 8254 ready 172.20.1.88
webpod-9894b69cd-zxcnw 8254 ready 172.20.2.124
|
8. Check the detailed Cilium endpoint list
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list
|
✅ Output
| ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
425 Disabled Disabled 4 reserved:health 172.20.0.117 ready
1827 Disabled Disabled 13136 k8s:app=curl 172.20.0.191 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
1912 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
3021 Disabled Disabled 39751 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.10 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
|
9. Verify pod-to-pod communication
The webpod Hostname alternates between the two replicas → traffic is flowing correctly over the Cilium network.
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-9894b69cd-v2n4m
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-9894b69cd-zxcnw
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-9894b69cd-v2n4m
|
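A short loop makes the round-robin between the two webpod replicas easier to see; a sketch (dropping -it since no TTY is needed):
| for i in $(seq 1 4); do kubectl exec curl-pod -- curl -s webpod | grep Hostname; done
|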
💻 Install the Cilium CLI
1. Install the Cilium CLI
Download the Cilium CLI from the GitHub releases and install it into /usr/local/bin.
| (โ|HomeLab:N/A) root@k8s-ctr:~# CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz >/dev/null 2>&1
tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz
# result
cilium
|
2. Confirm the path with which cilium
| (โ|HomeLab:N/A) root@k8s-ctr:~# which cilium
# result
/usr/local/bin/cilium
|
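The installed CLI version can also be confirmed; a sketch (recent cilium-cli releases additionally report the Cilium image running in the cluster when a kubeconfig is available):
| cilium version
|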
3. Check Cilium status (CLI)
Run cilium status.
| (โ|HomeLab:N/A) root@k8s-ctr:~# cilium status
|
✅ Output
| /¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium Running: 3
cilium-envoy Running: 3
cilium-operator Running: 2
clustermesh-apiserver
hubble-relay
Cluster Pods: 5/5 managed by Cilium
Helm chart version: 1.17.5
Image versions cilium quay.io/cilium/cilium:v1.17.5@sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6: 3
cilium-envoy quay.io/cilium/cilium-envoy:v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626@sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e: 3
cilium-operator quay.io/cilium/operator-generic:v1.17.5@sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e: 2
|
4. Check the Cilium configuration
The cilium config view command shows the current Cilium agent settings.
| (โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view
|
✅ Output
| agent-not-ready-taint-key node.cilium.io/agent-not-ready
arping-refresh-period 30s
auto-direct-node-routes true
bpf-distributed-lru false
bpf-events-drop-enabled true
bpf-events-policy-verdict-enabled true
bpf-events-trace-enabled true
bpf-lb-acceleration disabled
bpf-lb-algorithm-annotation false
bpf-lb-external-clusterip false
bpf-lb-map-max 65536
bpf-lb-mode-annotation false
bpf-lb-sock false
bpf-lb-source-range-all-types false
bpf-map-dynamic-size-ratio 0.0025
bpf-policy-map-max 16384
bpf-root /sys/fs/bpf
cgroup-root /run/cilium/cgroupv2
cilium-endpoint-gc-interval 5m0s
cluster-id 0
cluster-name default
cluster-pool-ipv4-cidr 172.20.0.0/16
cluster-pool-ipv4-mask-size 24
clustermesh-enable-endpoint-sync false
clustermesh-enable-mcs-api false
cni-exclusive true
cni-log-file /var/run/cilium/cilium-cni.log
custom-cni-conf false
datapath-mode veth
debug false
debug-verbose
default-lb-service-ipam lbipam
direct-routing-skip-unreachable false
dnsproxy-enable-transparent-mode true
dnsproxy-socket-linger-timeout 10
egress-gateway-reconciliation-trigger-interval 1s
enable-auto-protect-node-port-range true
enable-bpf-clock-probe false
enable-bpf-masquerade true
enable-endpoint-health-checking true
enable-endpoint-lockdown-on-policy-overflow false
enable-endpoint-routes true
enable-experimental-lb false
enable-health-check-loadbalancer-ip false
enable-health-check-nodeport true
enable-health-checking true
enable-hubble true
enable-internal-traffic-policy true
enable-ipv4 true
enable-ipv4-big-tcp false
enable-ipv4-masquerade true
enable-ipv6 false
enable-ipv6-big-tcp false
enable-ipv6-masquerade true
enable-k8s-networkpolicy true
enable-k8s-terminating-endpoint true
enable-l2-neigh-discovery true
enable-l7-proxy true
enable-lb-ipam true
enable-local-redirect-policy false
enable-masquerade-to-route-source false
enable-metrics true
enable-node-selector-labels false
enable-non-default-deny-policies true
enable-policy default
enable-policy-secrets-sync true
enable-runtime-device-detection true
enable-sctp false
enable-source-ip-verification true
enable-svc-source-range-check true
enable-tcx true
enable-vtep false
enable-well-known-identities false
enable-xt-socket-fallback true
envoy-access-log-buffer-size 4096
envoy-base-id 0
envoy-keep-cap-netbindservice false
external-envoy-proxy true
health-check-icmp-failure-threshold 3
http-retry-count 3
hubble-disable-tls false
hubble-export-file-max-backups 5
hubble-export-file-max-size-mb 10
hubble-listen-address :4244
hubble-socket-path /var/run/cilium/hubble.sock
hubble-tls-cert-file /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file /var/lib/cilium/tls/hubble/server.key
identity-allocation-mode crd
identity-gc-interval 15m0s
identity-heartbeat-timeout 30m0s
install-no-conntrack-iptables-rules true
ipam cluster-pool
ipam-cilium-node-update-rate 15s
iptables-random-fully false
ipv4-native-routing-cidr 172.20.0.0/16
k8s-require-ipv4-pod-cidr false
k8s-require-ipv6-pod-cidr false
kube-proxy-replacement true
kube-proxy-replacement-healthz-bind-address
max-connected-clusters 255
mesh-auth-enabled true
mesh-auth-gc-interval 5m0s
mesh-auth-queue-size 1024
mesh-auth-rotated-identities-queue-size 1024
monitor-aggregation medium
monitor-aggregation-flags all
monitor-aggregation-interval 5s
nat-map-stats-entries 32
nat-map-stats-interval 30s
node-port-bind-protection true
nodeport-addresses
nodes-gc-interval 5m0s
operator-api-serve-addr 127.0.0.1:9234
operator-prometheus-serve-addr :9963
policy-cidr-match-mode
policy-secrets-namespace cilium-secrets
policy-secrets-only-from-secrets-namespace true
preallocate-bpf-maps false
procfs /host/proc
proxy-connect-timeout 2
proxy-idle-timeout-seconds 60
proxy-initial-fetch-timeout 30
proxy-max-concurrent-retries 128
proxy-max-connection-duration-seconds 0
proxy-max-requests-per-connection 0
proxy-xff-num-trusted-hops-egress 0
proxy-xff-num-trusted-hops-ingress 0
remove-cilium-node-taints true
routing-mode native
service-no-backend-response reject
set-cilium-is-up-condition true
set-cilium-node-taints true
synchronize-k8s-nodes true
tofqdns-dns-reject-response-code refused
tofqdns-enable-dns-compression true
tofqdns-endpoint-max-ip-per-hostname 1000
tofqdns-idle-connection-grace-period 0s
tofqdns-max-deferred-connection-deletes 10000
tofqdns-proxy-response-max-delay 100ms
tunnel-protocol vxlan
tunnel-source-port-range 0-0
unmanaged-pod-watcher-interval 15
vtep-cidr
vtep-endpoint
vtep-mac
vtep-mask
write-cni-conf-when-ready /host/etc/cni/net.d/05-cilium.conflist
|
5. Check the Cilium ConfigMap
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | jq
|
✅ Output
| {
"apiVersion": "v1",
"data": {
"agent-not-ready-taint-key": "node.cilium.io/agent-not-ready",
"arping-refresh-period": "30s",
"auto-direct-node-routes": "true",
"bpf-distributed-lru": "false",
"bpf-events-drop-enabled": "true",
"bpf-events-policy-verdict-enabled": "true",
"bpf-events-trace-enabled": "true",
"bpf-lb-acceleration": "disabled",
"bpf-lb-algorithm-annotation": "false",
"bpf-lb-external-clusterip": "false",
"bpf-lb-map-max": "65536",
"bpf-lb-mode-annotation": "false",
"bpf-lb-sock": "false",
"bpf-lb-source-range-all-types": "false",
"bpf-map-dynamic-size-ratio": "0.0025",
"bpf-policy-map-max": "16384",
"bpf-root": "/sys/fs/bpf",
"cgroup-root": "/run/cilium/cgroupv2",
"cilium-endpoint-gc-interval": "5m0s",
"cluster-id": "0",
"cluster-name": "default",
"cluster-pool-ipv4-cidr": "172.20.0.0/16",
"cluster-pool-ipv4-mask-size": "24",
"clustermesh-enable-endpoint-sync": "false",
"clustermesh-enable-mcs-api": "false",
"cni-exclusive": "true",
"cni-log-file": "/var/run/cilium/cilium-cni.log",
"custom-cni-conf": "false",
"datapath-mode": "veth",
"debug": "false",
"debug-verbose": "",
"default-lb-service-ipam": "lbipam",
"direct-routing-skip-unreachable": "false",
"dnsproxy-enable-transparent-mode": "true",
"dnsproxy-socket-linger-timeout": "10",
"egress-gateway-reconciliation-trigger-interval": "1s",
"enable-auto-protect-node-port-range": "true",
"enable-bpf-clock-probe": "false",
"enable-bpf-masquerade": "true",
"enable-endpoint-health-checking": "true",
"enable-endpoint-lockdown-on-policy-overflow": "false",
"enable-endpoint-routes": "true",
"enable-experimental-lb": "false",
"enable-health-check-loadbalancer-ip": "false",
"enable-health-check-nodeport": "true",
"enable-health-checking": "true",
"enable-hubble": "true",
"enable-internal-traffic-policy": "true",
"enable-ipv4": "true",
"enable-ipv4-big-tcp": "false",
"enable-ipv4-masquerade": "true",
"enable-ipv6": "false",
"enable-ipv6-big-tcp": "false",
"enable-ipv6-masquerade": "true",
"enable-k8s-networkpolicy": "true",
"enable-k8s-terminating-endpoint": "true",
"enable-l2-neigh-discovery": "true",
"enable-l7-proxy": "true",
"enable-lb-ipam": "true",
"enable-local-redirect-policy": "false",
"enable-masquerade-to-route-source": "false",
"enable-metrics": "true",
"enable-node-selector-labels": "false",
"enable-non-default-deny-policies": "true",
"enable-policy": "default",
"enable-policy-secrets-sync": "true",
"enable-runtime-device-detection": "true",
"enable-sctp": "false",
"enable-source-ip-verification": "true",
"enable-svc-source-range-check": "true",
"enable-tcx": "true",
"enable-vtep": "false",
"enable-well-known-identities": "false",
"enable-xt-socket-fallback": "true",
"envoy-access-log-buffer-size": "4096",
"envoy-base-id": "0",
"envoy-keep-cap-netbindservice": "false",
"external-envoy-proxy": "true",
"health-check-icmp-failure-threshold": "3",
"http-retry-count": "3",
"hubble-disable-tls": "false",
"hubble-export-file-max-backups": "5",
"hubble-export-file-max-size-mb": "10",
"hubble-listen-address": ":4244",
"hubble-socket-path": "/var/run/cilium/hubble.sock",
"hubble-tls-cert-file": "/var/lib/cilium/tls/hubble/server.crt",
"hubble-tls-client-ca-files": "/var/lib/cilium/tls/hubble/client-ca.crt",
"hubble-tls-key-file": "/var/lib/cilium/tls/hubble/server.key",
"identity-allocation-mode": "crd",
"identity-gc-interval": "15m0s",
"identity-heartbeat-timeout": "30m0s",
"install-no-conntrack-iptables-rules": "true",
"ipam": "cluster-pool",
"ipam-cilium-node-update-rate": "15s",
"iptables-random-fully": "false",
"ipv4-native-routing-cidr": "172.20.0.0/16",
"k8s-require-ipv4-pod-cidr": "false",
"k8s-require-ipv6-pod-cidr": "false",
"kube-proxy-replacement": "true",
"kube-proxy-replacement-healthz-bind-address": "",
"max-connected-clusters": "255",
"mesh-auth-enabled": "true",
"mesh-auth-gc-interval": "5m0s",
"mesh-auth-queue-size": "1024",
"mesh-auth-rotated-identities-queue-size": "1024",
"monitor-aggregation": "medium",
"monitor-aggregation-flags": "all",
"monitor-aggregation-interval": "5s",
"nat-map-stats-entries": "32",
"nat-map-stats-interval": "30s",
"node-port-bind-protection": "true",
"nodeport-addresses": "",
"nodes-gc-interval": "5m0s",
"operator-api-serve-addr": "127.0.0.1:9234",
"operator-prometheus-serve-addr": ":9963",
"policy-cidr-match-mode": "",
"policy-secrets-namespace": "cilium-secrets",
"policy-secrets-only-from-secrets-namespace": "true",
"preallocate-bpf-maps": "false",
"procfs": "/host/proc",
"proxy-connect-timeout": "2",
"proxy-idle-timeout-seconds": "60",
"proxy-initial-fetch-timeout": "30",
"proxy-max-concurrent-retries": "128",
"proxy-max-connection-duration-seconds": "0",
"proxy-max-requests-per-connection": "0",
"proxy-xff-num-trusted-hops-egress": "0",
"proxy-xff-num-trusted-hops-ingress": "0",
"remove-cilium-node-taints": "true",
"routing-mode": "native",
"service-no-backend-response": "reject",
"set-cilium-is-up-condition": "true",
"set-cilium-node-taints": "true",
"synchronize-k8s-nodes": "true",
"tofqdns-dns-reject-response-code": "refused",
"tofqdns-enable-dns-compression": "true",
"tofqdns-endpoint-max-ip-per-hostname": "1000",
"tofqdns-idle-connection-grace-period": "0s",
"tofqdns-max-deferred-connection-deletes": "10000",
"tofqdns-proxy-response-max-delay": "100ms",
"tunnel-protocol": "vxlan",
"tunnel-source-port-range": "0-0",
"unmanaged-pod-watcher-interval": "15",
"vtep-cidr": "",
"vtep-endpoint": "",
"vtep-mac": "",
"vtep-mask": "",
"write-cni-conf-when-ready": "/host/etc/cni/net.d/05-cilium.conflist"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"meta.helm.sh/release-name": "cilium",
"meta.helm.sh/release-namespace": "kube-system"
},
"creationTimestamp": "2025-07-16T14:03:29Z",
"labels": {
"app.kubernetes.io/managed-by": "Helm"
},
"name": "cilium-config",
"namespace": "kube-system",
"resourceVersion": "18656",
"uid": "ace08746-3cfc-446a-a112-1ed335a5b9c1"
}
}
|
6. Enable Cilium debug mode
Running cilium config set debug true enables debug mode, after which the Cilium pods restart automatically.
| (โ|HomeLab:N/A) root@k8s-ctr:~# cilium config set debug true && watch kubectl get pod -A
✨ Patching ConfigMap cilium-config with debug=true...
♻️ Restarted Cilium pods
Every 2.0s: kubectl get pod -A k8s-ctr: Wed Jul 16 23:42:42 2025
NAMESPACE NAME READY STATUS RESTARTS AGE
default curl-pod 1/1 Running 0 11m
default webpod-9894b69cd-v2n4m 1/1 Running 0 14m
default webpod-9894b69cd-zxcnw 1/1 Running 0 14m
kube-system cilium-64kc9 1/1 Running 0 79s
kube-system cilium-cl6f6 1/1 Running 0 79s
kube-system cilium-envoy-b4zts 1/1 Running 0 38m
kube-system cilium-envoy-fkj4l 1/1 Running 0 38m
kube-system cilium-envoy-z9mvb 1/1 Running 0 38m
kube-system cilium-operator-865bc7f457-rgj7k 1/1 Running 0 38m
kube-system cilium-operator-865bc7f457-s2zv2 1/1 Running 0 38m
kube-system cilium-zzk8d 1/1 Running 0 79s
kube-system coredns-674b8bbfcf-697cz 1/1 Running 0 36m
kube-system coredns-674b8bbfcf-rtz5g 1/1 Running 0 37m
kube-system etcd-k8s-ctr 1/1 Running 0 2d
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 2d
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 2d
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 2d
|
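Whether the flag actually landed can be verified from the ConfigMap or from the agent configuration; a small sketch:
| kubectl -n kube-system get cm cilium-config -o jsonpath='{.data.debug}{"\n"}'
cilium config view | grep -w debug
|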
🌐 Basic network information
1. Check each node's interfaces
cilium_net, cilium_host, lxc_health, and a per-pod lxc* interface are now present on each node.
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 70970sec preferred_lft 70970sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86006sec preferred_lft 14006sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe80:23b9/64 scope link
valid_lft forever preferred_lft forever
9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ee:43:ce:84:cd:8b brd ff:ff:ff:ff:ff:ff
inet6 fe80::ec43:ceff:fe84:cd8b/64 scope link
valid_lft forever preferred_lft forever
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ce:32:1f:30:c0:e2 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.187/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::cc32:1fff:fe30:c0e2/64 scope link
valid_lft forever preferred_lft forever
14: lxc15efa76a1c03@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0e:74:2e:72:27:a0 brd ff:ff:ff:ff:ff:ff link-netns cni-69166b00-fabc-0af1-3874-e887a47b08b0
inet6 fe80::c74:2eff:fe72:27a0/64 scope link
valid_lft forever preferred_lft forever
16: lxc4bee7d32b9d3@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:c2:12:8f:3f:7f brd ff:ff:ff:ff:ff:ff link-netns cni-e912a9e2-ed8c-1436-f857-94e0fcb0cd78
inet6 fe80::d8c2:12ff:fe8f:3f7f/64 scope link
valid_lft forever preferred_lft forever
18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:66:60:a7:ea:23 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::d866:60ff:fea7:ea23/64 scope link
valid_lft forever preferred_lft forever
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 70992sec preferred_lft 70992sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86183sec preferred_lft 14183sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe07:f254/64 scope link
valid_lft forever preferred_lft forever
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 4e:42:1e:e2:03:ec brd ff:ff:ff:ff:ff:ff
inet6 fe80::4c42:1eff:fee2:3ec/64 scope link
valid_lft forever preferred_lft forever
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 5e:62:ab:66:82:0f brd ff:ff:ff:ff:ff:ff
inet 172.20.2.165/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::5c62:abff:fe66:820f/64 scope link
valid_lft forever preferred_lft forever
12: lxcd268a7119386@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e6:d8:89:f3:57:c0 brd ff:ff:ff:ff:ff:ff link-netns cni-2acabd28-9c86-f447-f6f4-9f3ca8e10ca4
inet6 fe80::e4d8:89ff:fef3:57c0/64 scope link
valid_lft forever preferred_lft forever
14: lxc_health@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e6:ec:3f:38:38:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e4ec:3fff:fe38:38a8/64 scope link
valid_lft forever preferred_lft forever
>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 71150sec preferred_lft 71150sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86314sec preferred_lft 14314sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.102/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe96:d720/64 scope link
valid_lft forever preferred_lft forever
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:28:ed:0b:bc:2a brd ff:ff:ff:ff:ff:ff
inet6 fe80::1028:edff:fe0b:bc2a/64 scope link
valid_lft forever preferred_lft forever
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e6:60:2a:76:4c:a7 brd ff:ff:ff:ff:ff:ff
inet 172.20.1.96/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::e460:2aff:fe76:4ca7/64 scope link
valid_lft forever preferred_lft forever
12: lxc61f4da945ad8@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 4e:a7:3d:29:2b:4b brd ff:ff:ff:ff:ff:ff link-netns cni-61bcb4d9-eab1-78d3-9d22-9801294ffb91
inet6 fe80::4ca7:3dff:fe29:2b4b/64 scope link
valid_lft forever preferred_lft forever
14: lxc5fade0857eac@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:c7:2a:1b:a2:5a brd ff:ff:ff:ff:ff:ff link-netns cni-dd3d4f65-5ab6-c3fe-a381-13412ad693b6
inet6 fe80::60c7:2aff:fe1b:a25a/64 scope link
valid_lft forever preferred_lft forever
16: lxc_health@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ba:77:7e:b5:ee:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b877:7eff:feb5:eefa/64 scope link
valid_lft forever preferred_lft forever
|
- k8s-w1 → cilium_host IP: 172.20.2.165
- k8s-w2 → cilium_host IP: 172.20.1.96
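For a compact per-node view of just the Cilium-related interfaces, ip's brief output mode can be filtered; a sketch:
| ip -br link | grep -E 'cilium|lxc'
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i "ip -br link | grep -E 'cilium|lxc'" ; echo; done
|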
2. Check cilium_net / cilium_host on each node
| (โ|HomeLab:N/A) root@k8s-ctr:~# ip -c addr show cilium_net
ip -c addr show cilium_host
|
✅ Output
| 9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ee:43:ce:84:cd:8b brd ff:ff:ff:ff:ff:ff
inet6 fe80::ec43:ceff:fe84:cd8b/64 scope link
valid_lft forever preferred_lft forever
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ce:32:1f:30:c0:e2 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.187/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::cc32:1fff:fe30:c0e2/64 scope link
valid_lft forever preferred_lft forever
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_net ; echo; done
|
โ
ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
| >> node : k8s-w1 <<
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 4e:42:1e:e2:03:ec brd ff:ff:ff:ff:ff:ff
inet6 fe80::4c42:1eff:fee2:3ec/64 scope link
valid_lft forever preferred_lft forever
>> node : k8s-w2 <<
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:28:ed:0b:bc:2a brd ff:ff:ff:ff:ff:ff
inet6 fe80::1028:edff:fe0b:bc2a/64 scope link
valid_lft forever preferred_lft forever
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_host ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 5e:62:ab:66:82:0f brd ff:ff:ff:ff:ff:ff
inet 172.20.2.165/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::5c62:abff:fe66:820f/64 scope link
valid_lft forever preferred_lft forever
>> node : k8s-w2 <<
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e6:60:2a:76:4c:a7 brd ff:ff:ff:ff:ff:ff
inet 172.20.1.96/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::e460:2aff:fe76:4ca7/64 scope link
valid_lft forever preferred_lft forever
|
3. Check the interface used for health checks
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c addr show lxc_health
|
✅ Output
| 18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:66:60:a7:ea:23 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::d866:60ff:fea7:ea23/64 scope link
valid_lft forever preferred_lft forever
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show lxc_health ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
14: lxc_health@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e6:ec:3f:38:38:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e4ec:3fff:fe38:38a8/64 scope link
valid_lft forever preferred_lft forever
>> node : k8s-w2 <<
16: lxc_health@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ba:77:7e:b5:ee:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b877:7eff:feb5:eefa/64 scope link
valid_lft forever preferred_lft forever
|
4. Check cluster health status and IP connectivity
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose
|
✅ Output
| ...
Cluster health: 3/3 reachable (2025-07-16T15:07:31Z)
Name IP Node Endpoints
k8s-ctr (localhost):
Host connectivity to 192.168.10.100:
ICMP to stack: OK, RTT=91.601ยตs
HTTP to agent: OK, RTT=382.5ยตs
Endpoint connectivity to 172.20.0.117:
ICMP to stack: OK, RTT=145.313ยตs
HTTP to agent: OK, RTT=637.185ยตs
k8s-w1:
Host connectivity to 192.168.10.101:
ICMP to stack: OK, RTT=650.087ยตs
HTTP to agent: OK, RTT=954.827ยตs
Endpoint connectivity to 172.20.2.252:
ICMP to stack: OK, RTT=852.728ยตs
HTTP to agent: OK, RTT=975.807ยตs
k8s-w2:
Host connectivity to 192.168.10.102:
ICMP to stack: OK, RTT=469.026ยตs
HTTP to agent: OK, RTT=731.956ยตs
Endpoint connectivity to 172.20.1.139:
ICMP to stack: OK, RTT=584.404ยตs
HTTP to agent: OK, RTT=699.36ยตs
...
|
5. Re-check the health endpoint IP
Confirm that the health endpoint IP is 172.20.0.117.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list | grep health
|
✅ Output
| 977 Disabled Disabled 4 reserved:health 172.20.0.117 ready
|
6. Check Cilium status and IPAM information
Use the cilium-dbg status --all-addresses command to check Cilium IPAM information and the allocated IPs.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --all-addresses
|
✅ Output
| KVStore: Disabled
Kubernetes: Ok 1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs: ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: True [eth0 10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1 fe80::a00:27ff:fe80:23b9 192.168.10.100 (Direct Routing)]
Host firewall: Disabled
SRv6: Disabled
CNI Chaining: none
CNI Config file: successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium: Ok 1.17.5 (v1.17.5-69aab28c)
NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 4/254 allocated from 172.20.0.0/24,
Allocated addresses:
172.20.0.10 (kube-system/coredns-674b8bbfcf-rtz5g [restored])
172.20.0.117 (health)
172.20.0.187 (router)
172.20.0.191 (default/curl-pod [restored])
IPv4 BIG TCP: Disabled
IPv6 BIG TCP: Disabled
BandwidthManager: Disabled
Routing: Network: Native Host: BPF
Attach Mode: TCX
Device Mode: veth
Masquerading: BPF [eth0, eth1] 172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Controller Status: 34/34 healthy
Proxy Status: OK, ip 172.20.0.187, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range: min 256, max 65535
Hubble: Ok Current/Max Flows: 4095/4095 (100.00%), Flows/s: 13.57 Metrics: Disabled
Encryption: Disabled
Cluster health: 3/3 reachable (2025-07-16T15:10:31Z)
Name IP Node Endpoints
Modules Health: Stopped(0) Degraded(0) OK(58)
|
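The per-node IPAM allocations above can also be cross-checked from the Kubernetes side via the CiliumNode objects. A minimal sketch, assuming the default cluster-pool IPAM mode where each node's Pod CIDR is recorded under .spec.ipam.podCIDRs:
| # Show each node's allocated Pod CIDR from its CiliumNode resource
kubectl get ciliumnodes -o custom-columns='NODE:.metadata.name,PODCIDRS:.spec.ipam.podCIDRs'
|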
7. Check ICMP flows in the BPF Conntrack/NAT tables
Print the ICMP session entries in the BPF Conntrack table.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf ct list global | grep ICMP | head -n4
|
✅ Output
| ICMP OUT 192.168.10.100:40449 -> 192.168.10.102:0 expires=16154 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=16094 TxFlagsSeen=0x00 LastTxReport=16094 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0
ICMP IN 192.168.10.102:8955 -> 172.20.0.117:0 expires=15702 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=15642 TxFlagsSeen=0x00 LastTxReport=15642 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=6 IfIndex=0 BackendID=0
ICMP OUT 192.168.10.100:60722 -> 192.168.10.102:0 expires=16034 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=15974 TxFlagsSeen=0x00 LastTxReport=15974 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0
ICMP OUT 192.168.10.100:54893 -> 172.20.1.139:0 expires=15984 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=15924 TxFlagsSeen=0x00 LastTxReport=15924 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0
|
Print the ICMP translation entries recorded in the NAT table.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf nat list | grep ICMP | head -n4
|
✅ Output
| ICMP OUT 192.168.10.100:65340 -> 172.20.1.139:0 XLATE_SRC 192.168.10.100:65340 Created=99sec ago NeedsCT=1
ICMP IN 172.20.1.139:0 -> 192.168.10.100:61858 XLATE_DST 192.168.10.100:61858 Created=419sec ago NeedsCT=1
ICMP IN 172.20.1.139:0 -> 192.168.10.100:65340 XLATE_DST 192.168.10.100:65340 Created=99sec ago NeedsCT=1
ICMP IN 172.20.1.139:0 -> 192.168.10.100:42350 XLATE_DST 192.168.10.100:42350 Created=619sec ago NeedsCT=1
|
8. Check the Pod CIDRs in the routing table
172.20.1.0/24 → via k8s-w2, 172.20.2.0/24 → via k8s-w1
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 172.20 | grep eth1
|
✅ Output
| 172.20.1.0/24 via 192.168.10.102 dev eth1 proto kernel
172.20.2.0/24 via 192.168.10.101 dev eth1 proto kernel
|
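The corresponding routes should also exist on the worker nodes, each pointing to the other nodes' Pod CIDRs. A small sketch reusing the sshpass loop from earlier:
| for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep 172.20 ; echo; done
|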
9. Check each Pod's IP and node
With autoDirectNodeRoutes=true, routes to each node's Pod CIDR are added automatically (a quick way to verify the setting is sketched after the list below).
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
|
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 45m 172.20.0.191 k8s-ctr <none> <none>
webpod-9894b69cd-v2n4m 1/1 Running 0 48m 172.20.1.88 k8s-w2 <none> <none>
webpod-9894b69cd-zxcnw 1/1 Running 0 48m 172.20.2.124 k8s-w1 <none> <none>
|
- curl-pod: 172.20.0.191 (k8s-ctr)
- webpod-v2n4m: 172.20.1.88 (k8s-w2)
- webpod-zxcnw: 172.20.2.124 (k8s-w1)
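A sketch for verifying the autoDirectNodeRoutes setting mentioned above, assuming it is exposed in the cilium-config ConfigMap under the key auto-direct-node-routes:
| # Expect "true" if direct node routes are being installed automatically
kubectl -n kube-system get cm cilium-config -o yaml | grep auto-direct-node-routes
|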
10. Check the CiliumEndpoints resources
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints -A
|
✅ Output
| NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
default curl-pod 13136 ready 172.20.0.191
default webpod-9894b69cd-v2n4m 8254 ready 172.20.1.88
default webpod-9894b69cd-zxcnw 8254 ready 172.20.2.124
kube-system coredns-674b8bbfcf-697cz 39751 ready 172.20.1.127
kube-system coredns-674b8bbfcf-rtz5g 39751 ready 172.20.0.10
|
11. Check the lxc-interface-based routing table
With endpointRoutes.enabled=true, each Pod's dedicated network interface (lxcY) is reflected in the host routing table (a verification sketch follows the output below).
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep lxc
|
✅ Output
| 172.20.0.10 dev lxc15efa76a1c03 proto kernel scope link
172.20.0.117 dev lxc_health proto kernel scope link
172.20.0.191 dev lxc4bee7d32b9d3 proto kernel scope link
|
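Similarly, a quick sketch to confirm the endpointRoutes setting behind these per-endpoint routes, assuming the ConfigMap key is enable-endpoint-routes:
| kubectl -n kube-system get cm cilium-config -o yaml | grep enable-endpoint-routes
|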
Check the lxc routes on each node.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep lxc ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
172.20.2.124 dev lxcd268a7119386 proto kernel scope link
172.20.2.252 dev lxc_health proto kernel scope link
>> node : k8s-w2 <<
172.20.1.88 dev lxc5fade0857eac proto kernel scope link
172.20.1.127 dev lxc61f4da945ad8 proto kernel scope link
172.20.1.139 dev lxc_health proto kernel scope link
|
⚡ Cilium CMD check
- Cilium CMD Cheatsheet - Docs, CMD Reference - Docs
1. Store the Cilium pod names in variables
| (⎈|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2 -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2
|
✅ Output
| cilium-64kc9 cilium-cl6f6 cilium-zzk8d
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default curl-pod 1/1 Running 0 60m
default webpod-9894b69cd-v2n4m 1/1 Running 0 63m
default webpod-9894b69cd-zxcnw 1/1 Running 0 63m
kube-system cilium-64kc9 1/1 Running 0 50m
kube-system cilium-cl6f6 1/1 Running 0 50m
kube-system cilium-envoy-b4zts 1/1 Running 0 87m
kube-system cilium-envoy-fkj4l 1/1 Running 0 87m
kube-system cilium-envoy-z9mvb 1/1 Running 0 87m
kube-system cilium-operator-865bc7f457-rgj7k 1/1 Running 0 87m
kube-system cilium-operator-865bc7f457-s2zv2 1/1 Running 0 87m
kube-system cilium-zzk8d 1/1 Running 0 50m
kube-system coredns-674b8bbfcf-697cz 1/1 Running 0 85m
kube-system coredns-674b8bbfcf-rtz5g 1/1 Running 0 85m
kube-system etcd-k8s-ctr 1/1 Running 0 2d1h
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 2d1h
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 2d1h
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 2d1h
|
2. Define shortcuts (aliases)
Use aliases c0, c1, c2 to call the cilium command on each node easily, and c0bpf, c1bpf, c2bpf as shortcuts for bpftool.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
alias c1="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium"
alias c2="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium"
alias c0bpf="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool"
alias c1bpf="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool"
alias c2bpf="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool"
|
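The aliases embed the resolved pod names, so they go stale whenever the Cilium pods are recreated. A minimal sketch, not part of the original lab, that re-derives the names and shortcuts at every login by appending to root's ~/.bashrc:
| cat <<'EOF' >> ~/.bashrc
# Re-derive the Cilium agent pod names and shortcuts on login
export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2 -o jsonpath='{.items[0].metadata.name}')
alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
alias c1="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium"
alias c2="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium"
alias c0bpf="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool"
alias c1bpf="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool"
alias c2bpf="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool"
EOF
|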
3. Check the Cilium endpoint list
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint list
|
✅ Output
| ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
977 Disabled Disabled 4 reserved:health 172.20.0.117 ready
1827 Disabled Disabled 13136 k8s:app=curl 172.20.0.191 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
1912 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
3021 Disabled Disabled 39751 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.10 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c1 endpoint list
|
✅ Output
| ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
391 Disabled Disabled 1 reserved:host ready
1110 Disabled Disabled 4 reserved:health 172.20.2.252 ready
1126 Disabled Disabled 8254 k8s:app=webpod 172.20.2.124 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c2 endpoint list
|
✅ Output
| ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
401 Disabled Disabled 8254 k8s:app=webpod 172.20.1.88 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
690 Disabled Disabled 39751 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.1.127 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1532 Disabled Disabled 4 reserved:health 172.20.1.139 ready
2914 Disabled Disabled 1 reserved:host ready
|
4. Check the Cilium monitor output
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c1 monitor -v
|
✅ Output
| Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
time="2025-07-17T13:02:16.691606819Z" level=info msg="Initializing dissection cache..." subsys=monitor
-> network flow 0x44d1f314 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:56916 -> 192.168.10.102:4240 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2675, dst [127.0.0.1]:9234 tcp
-> network flow 0xb3726e6 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:36468 -> 192.168.10.100:6443 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp
-> network flow 0xd9c7ab80 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:35138 -> 192.168.10.100:6443 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12434 sock_cookie: 2676, dst [127.0.0.1]:41326 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2677, dst [127.0.0.1]:9878 tcp
-> network flow 0x1dc25546 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:35710 -> 192.168.10.100:6443 tcp ACK
-> network flow 0x0 , identity host->unknown state new ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101 -> 172.20.0.117 EchoRequest
-> network flow 0xff037f8b , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:33586 -> 172.20.0.117:4240 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 13643 sock_cookie: 7037, dst [127.0.0.1]:53790 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2678, dst [127.0.0.1]:9879 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2679, dst [127.0.0.1]:9234 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp
-> network flow 0xc767c399 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:36484 -> 192.168.10.100:6443 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2680, dst [127.0.0.1]:9234 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 13643 sock_cookie: 7040, dst [192.168.10.100]:51010 tcp
...
|
5. Check the Cilium IP list
Check the IPs Cilium manages along with their Identity and Source.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 ip list
|
✅ Output
| IP IDENTITY SOURCE
0.0.0.0/0 reserved:world
10.0.2.15/32 reserved:host
reserved:kube-apiserver
172.20.0.10/32 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system custom-resource
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
172.20.0.117/32 reserved:health
172.20.0.187/32 reserved:host
reserved:kube-apiserver
172.20.0.191/32 k8s:app=curl custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.1.88/32 k8s:app=webpod custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.1.96/32 reserved:remote-node
172.20.1.127/32 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system custom-resource
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
172.20.1.139/32 reserved:health
172.20.2.124/32 k8s:app=webpod custom-resource
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
172.20.2.165/32 reserved:remote-node
172.20.2.252/32 reserved:health
192.168.10.100/32 reserved:host
reserved:kube-apiserver
192.168.10.101/32 reserved:remote-node
192.168.10.102/32 reserved:remote-node
|
Check the IP-to-Identity-ID mapping.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 ip list -n
|
✅ Output
| IP IDENTITY SOURCE
0.0.0.0/0 2
10.0.2.15/32 1
172.20.0.10/32 39751 custom-resource
172.20.0.117/32 4
172.20.0.187/32 1
172.20.0.191/32 13136 custom-resource
172.20.1.88/32 8254 custom-resource
172.20.1.96/32 6
172.20.1.127/32 39751 custom-resource
172.20.1.139/32 4
172.20.2.124/32 8254 custom-resource
172.20.2.165/32 6
172.20.2.252/32 4
192.168.10.100/32 1
192.168.10.101/32 6
192.168.10.102/32 6
|
6. Check the Cilium Identity list
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 identity list
|
✅ Output
| ID LABELS
1 reserved:host
reserved:kube-apiserver
2 reserved:world
3 reserved:unmanaged
4 reserved:health
5 reserved:init
6 reserved:remote-node
7 reserved:kube-apiserver
reserved:remote-node
8 reserved:ingress
9 reserved:world-ipv4
10 reserved:world-ipv6
8254 k8s:app=webpod
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
13136 k8s:app=curl
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
39751 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
|
7. Check the BPF filesystem mount information
Check the mount state of /sys/fs/bpf.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 bpf fs show
|
✅ Output
|
MountID: 1107
ParentID: 1097
Mounted State: true
MountPoint: /sys/fs/bpf
MountOptions: rw,relatime
OptionFields: [master:11]
FilesystemType: bpf
MountSource: bpf
SuperOptions: rw,mode=700
|
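The same mount can also be confirmed directly on the host; a quick sketch:
| mount -t bpf
findmnt /sys/fs/bpf
|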
Check the BPF mount directory structure.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# tree /sys/fs/bpf
|
✅ Output
| /sys/fs/bpf
├── cilium
│   ├── devices
│   │   ├── cilium_host
│   │   │   └── links
│   │   │       ├── cil_from_host
│   │   │       └── cil_to_host
│   │   ├── cilium_net
│   │   │   └── links
│   │   │       └── cil_to_host
│   │   ├── eth0
│   │   │   └── links
│   │   │       ├── cil_from_netdev
│   │   │       └── cil_to_netdev
│   │   └── eth1
│   │       └── links
│   │           ├── cil_from_netdev
│   │           └── cil_to_netdev
│   ├── endpoints
│   │   ├── 1827
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 3021
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   └── 977
│   │       └── links
│   │           ├── cil_from_container
│   │           └── cil_to_container
│   └── socketlb
│       └── links
│           └── cgroup
│               ├── cil_sock4_connect
│               ├── cil_sock4_getpeername
│               ├── cil_sock4_post_bind
│               ├── cil_sock4_recvmsg
│               ├── cil_sock4_sendmsg
│               ├── cil_sock6_connect
│               ├── cil_sock6_getpeername
│               ├── cil_sock6_post_bind
│               ├── cil_sock6_recvmsg
│               └── cil_sock6_sendmsg
└── tc
    └── globals
        ├── cilium_auth_map
        ├── cilium_call_policy
        ├── cilium_calls_00977
        ├── cilium_calls_01827
        ├── cilium_calls_03021
        ├── cilium_calls_hostns_01912
        ├── cilium_calls_netdev_00002
        ├── cilium_calls_netdev_00003
        ├── cilium_calls_netdev_00009
        ├── cilium_ct4_global
        ├── cilium_ct_any4_global
        ├── cilium_egresscall_policy
        ├── cilium_events
        ├── cilium_ipcache
        ├── cilium_ipv4_frag_datagrams
        ├── cilium_l2_responder_v4
        ├── cilium_lb4_affinity
        ├── cilium_lb4_backends_v3
        ├── cilium_lb4_reverse_nat
        ├── cilium_lb4_reverse_sk
        ├── cilium_lb4_services_v2
        ├── cilium_lb4_source_range
        ├── cilium_lb_affinity_match
        ├── cilium_lxc
        ├── cilium_metrics
        ├── cilium_node_map
        ├── cilium_node_map_v2
        ├── cilium_nodeport_neigh4
        ├── cilium_policy_v2_00977
        ├── cilium_policy_v2_01827
        ├── cilium_policy_v2_01912
        ├── cilium_policy_v2_03021
        ├── cilium_ratelimit
        ├── cilium_ratelimit_metrics
        ├── cilium_runtime_config
        ├── cilium_signals
        ├── cilium_skip_lb4
        └── cilium_snat_v4_external
|
8. Check the Cilium service list
Check the ClusterIP services and their backend mappings.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 service list
|
✅ Output
| ID Frontend Service Type Backend
1 10.96.0.1:443/TCP ClusterIP 1 => 192.168.10.100:6443/TCP (active)
2 10.96.33.91:80/TCP ClusterIP 1 => 172.20.2.124:80/TCP (active)
2 => 172.20.1.88:80/TCP (active)
3 10.96.239.219:443/TCP ClusterIP 1 => 192.168.10.100:4244/TCP (active)
4 10.96.0.10:53/UDP ClusterIP 1 => 172.20.0.10:53/UDP (active)
2 => 172.20.1.127:53/UDP (active)
5 10.96.0.10:53/TCP ClusterIP 1 => 172.20.0.10:53/TCP (active)
2 => 172.20.1.127:53/TCP (active)
6 10.96.0.10:9153/TCP ClusterIP 1 => 172.20.0.10:9153/TCP (active)
2 => 172.20.1.127:9153/TCP (active)
|
ClusterIP 10.96.33.91:80/TCP → backends 172.20.2.124:80, 172.20.1.88:80
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc
|
✅ Output
| NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
webpod ClusterIP 10.96.33.91 <none> 80/TCP 2d21h
|
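The same frontend-to-backend mapping is also visible in the datapath's BPF load-balancer map; a sketch filtering for the webpod ClusterIP:
| c0 bpf lb list | grep 10.96.33.91
|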
9. Check the Kubernetes service information
webpod service → ClusterIP 10.96.33.91:80/TCP; webpod endpoints → 172.20.1.88:80, 172.20.2.124:80
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc,ep
|
✅ Output
| Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
service/webpod ClusterIP 10.96.33.91 <none> 80/TCP 2d21h
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.10.100:6443 2d23h
endpoints/webpod 172.20.1.88:80,172.20.2.124:80 2d21h
|
10. Check the Cilium eBPF map list
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map list
|
✅ Output
| E0717 22:11:58.172602 44099 websocket.go:297] Unknown stream id 1, discarding message
Name Num entries Num errors Cache enabled
cilium_lxc 4 0 true
cilium_lb4_services_v2 16 0 true
cilium_lb4_affinity 0 0 true
cilium_lb4_reverse_nat 6 0 true
cilium_lb_affinity_match 0 0 true
cilium_policy_v2_00977 3 0 true
cilium_policy_v2_01912 2 0 true
cilium_ratelimit 0 0 true
cilium_ratelimit_metrics 0 0 true
cilium_lb4_reverse_sk 10 0 true
cilium_ipcache 16 0 true
cilium_lb4_backends_v3 1 0 true
cilium_lb4_source_range 0 0 true
cilium_policy_v2_03021 3 0 true
cilium_policy_v2_01827 3 0 true
cilium_runtime_config 256 0 true
cilium_l2_responder_v4 0 0 false
cilium_node_map 0 0 false
cilium_node_map_v2 0 0 false
cilium_auth_map 0 0 false
cilium_metrics 0 0 false
cilium_skip_lb4 0 0 false
|
11. Check Cilium eBPF map events
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c1 map events cilium_lb4_services_v2
|
✅ Output
| key="10.96.0.1:443/TCP (1)" value="1 0[0] (1) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.1:443/TCP (0)" value="16777216 1[0] (1) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.33.91:80/TCP (1)" value="11 0[0] (2) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.33.91:80/TCP (2)" value="12 0[0] (2) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.33.91:80/TCP (0)" value="16777216 2[0] (2) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="4 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:9153/TCP (1)" value="6 0[0] (4) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:9153/TCP (2)" value="10 0[0] (4) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:9153/TCP (0)" value="16777216 2[0] (4) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/UDP (1)" value="7 0[0] (5) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/UDP (2)" value="8 0[0] (5) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/UDP (0)" value="16777216 2[0] (5) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/TCP (1)" value="5 0[0] (6) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/TCP (2)" value="9 0[0] (6) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/TCP (0)" value="16777216 2[0] (6) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 0[0] (3) [0x0 0x10]" time=2025-07-16T14:40:25Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="<nil>" time=2025-07-16T14:40:25Z action=delete desiredState=to-be-deleted lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="13 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:29Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:29Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="13 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="13 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
|
12. Check the Cilium eBPF policy information
(1) Control plane
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 bpf policy get --all
|
✅ Output
| Endpoint ID: 977
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00977
POLICY DIRECTION LABELS (source:key[=value]) PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress reserved:unknown ANY NONE disabled 95314 1202 0
Allow Ingress reserved:host ANY NONE disabled 62866 709 0
reserved:kube-apiserver
Allow Egress reserved:unknown ANY NONE disabled 0 0 0
Endpoint ID: 1827
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01827
POLICY DIRECTION LABELS (source:key[=value]) PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress reserved:unknown ANY NONE disabled 0 0 0
Allow Ingress reserved:host ANY NONE disabled 0 0 0
reserved:kube-apiserver
Allow Egress reserved:unknown ANY NONE disabled 2112 24 0
Endpoint ID: 1912
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01912
POLICY DIRECTION LABELS (source:key[=value]) PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress reserved:unknown ANY NONE disabled 0 0 0
Allow Egress reserved:unknown ANY NONE disabled 0 0 0
Endpoint ID: 3021
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_03021
POLICY DIRECTION LABELS (source:key[=value]) PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress reserved:unknown ANY NONE disabled 460 4 0
Allow Ingress reserved:host ANY NONE disabled 655293 7514 0
reserved:kube-apiserver
Allow Egress reserved:unknown ANY NONE disabled 66590 771 0
|
(2) Node w1
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c1 bpf policy get --all -n
|
✅ Output
| Endpoint ID: 391
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00391
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 0 0 0
Allow Egress 0 ANY NONE disabled 0 0 0
Endpoint ID: 1110
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01110
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 98726 1249 0
Allow Ingress 1 ANY NONE disabled 63973 722 0
Allow Egress 0 ANY NONE disabled 0 0 0
Endpoint ID: 1126
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01126
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 474 6 0
Allow Ingress 1 ANY NONE disabled 0 0 0
Allow Egress 0 ANY NONE disabled 0 0 0
|
(3) Node w2
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c2 bpf policy get --all -n
|
✅ Output
| Endpoint ID: 401
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00401
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 948 12 0
Allow Ingress 1 ANY NONE disabled 0 0 0
Allow Egress 0 ANY NONE disabled 0 0 0
Endpoint ID: 690
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00690
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 230 2 0
Allow Ingress 1 ANY NONE disabled 665067 7620 0
Allow Egress 0 ANY NONE disabled 71756 785 0
Endpoint ID: 1532
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01532
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 99846 1271 0
Allow Ingress 1 ANY NONE disabled 64447 729 0
Allow Egress 0 ANY NONE disabled 0 0 0
Endpoint ID: 2914
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_02914
POLICY DIRECTION IDENTITY PORT/PROTO PROXY PORT AUTH TYPE BYTES PACKETS PREFIX
Allow Ingress 0 ANY NONE disabled 0 0 0
Allow Egress 0 ANY NONE disabled 0 0 0
|
13. Check device information with the Cilium shell command
(1) Control plane
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 shell -- db/show devices
|
✅ Output
|
Name Index Selected Type MTU HWAddr Flags Addresses
lo 1 false device 65536 up|loopback|running 127.0.0.1, ::1
eth0 2 true device 1500 08:00:27:6b:69:c9 up|broadcast|multicast|running 10.0.2.15, fd17:625c:f037:2:a00:27ff:fe6b:69c9, fe80::a00:27ff:fe6b:69c9
eth1 3 true device 1500 08:00:27:80:23:b9 up|broadcast|multicast|running fe80::a00:27ff:fe80:23b9, 192.168.10.100
cilium_net 9 false veth 1500 ee:43:ce:84:cd:8b up|broadcast|multicast|running fe80::ec43:ceff:fe84:cd8b
cilium_host 10 false veth 1500 ce:32:1f:30:c0:e2 up|broadcast|multicast|running 172.20.0.187, fe80::cc32:1fff:fe30:c0e2
lxc15efa76a1c03 14 false veth 1500 0e:74:2e:72:27:a0 up|broadcast|multicast|running fe80::c74:2eff:fe72:27a0
lxc4bee7d32b9d3 16 false veth 1500 da:c2:12:8f:3f:7f up|broadcast|multicast|running fe80::d8c2:12ff:fe8f:3f7f
lxc_health 18 false veth 1500 da:66:60:a7:ea:23 up|broadcast|multicast|running fe80::d866:60ff:fea7:ea23
|
(2) Node w1
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c1 shell -- db/show devices
|
✅ Output
| Name Index Selected Type MTU HWAddr Flags Addresses
lo 1 false device 65536 up|loopback|running 127.0.0.1, ::1
eth0 2 true device 1500 08:00:27:6b:69:c9 up|broadcast|multicast|running 10.0.2.15, fd17:625c:f037:2:a00:27ff:fe6b:69c9, fe80::a00:27ff:fe6b:69c9
eth1 3 true device 1500 08:00:27:07:f2:54 up|broadcast|multicast|running fe80::a00:27ff:fe07:f254, 192.168.10.101
cilium_net 7 false veth 1500 4e:42:1e:e2:03:ec up|broadcast|multicast|running fe80::4c42:1eff:fee2:3ec
cilium_host 8 false veth 1500 5e:62:ab:66:82:0f up|broadcast|multicast|running 172.20.2.165, fe80::5c62:abff:fe66:820f
lxcd268a7119386 12 false veth 1500 e6:d8:89:f3:57:c0 up|broadcast|multicast|running fe80::e4d8:89ff:fef3:57c0
lxc_health 14 false veth 1500 e6:ec:3f:38:38:a8 up|broadcast|multicast|running fe80::e4ec:3fff:fe38:38a8
|
(3) Node w2
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c2 shell -- db/show devices
|
✅ Output
| Name Index Selected Type MTU HWAddr Flags Addresses
lo 1 false device 65536 up|loopback|running 127.0.0.1, ::1
eth0 2 true device 1500 08:00:27:6b:69:c9 up|broadcast|multicast|running 10.0.2.15, fd17:625c:f037:2:a00:27ff:fe6b:69c9, fe80::a00:27ff:fe6b:69c9
eth1 3 true device 1500 08:00:27:96:d7:20 up|broadcast|multicast|running fe80::a00:27ff:fe96:d720, 192.168.10.102
cilium_net 7 false veth 1500 12:28:ed:0b:bc:2a up|broadcast|multicast|running fe80::1028:edff:fe0b:bc2a
cilium_host 8 false veth 1500 e6:60:2a:76:4c:a7 up|broadcast|multicast|running 172.20.1.96, fe80::e460:2aff:fe76:4ca7
lxc61f4da945ad8 12 false veth 1500 4e:a7:3d:29:2b:4b up|broadcast|multicast|running fe80::4ca7:3dff:fe29:2b4b
lxc5fade0857eac 14 false veth 1500 62:c7:2a:1b:a2:5a up|broadcast|multicast|running fe80::60c7:2aff:fe1b:a25a
lxc_health 16 false veth 1500 ba:77:7e:b5:ee:fa up|broadcast|multicast|running fe80::b877:7eff:feb5:eefa
|
💡 Checking "Pod → Pod" communication between nodes
1. Check the endpoint information
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
kubectl get svc,ep webpod
|
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 22h 172.20.0.191 k8s-ctr <none> <none>
webpod-9894b69cd-v2n4m 1/1 Running 0 22h 172.20.1.88 k8s-w2 <none> <none>
webpod-9894b69cd-zxcnw 1/1 Running 0 22h 172.20.2.124 k8s-w1 <none> <none>
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/webpod ClusterIP 10.96.33.91 <none> 80/TCP 2d22h
NAME ENDPOINTS AGE
endpoints/webpod 172.20.1.88:80,172.20.2.124:80 2d22h
|
- webpod pods (2): 172.20.1.88 (k8s-w2), 172.20.2.124 (k8s-w1)
- webpod service ClusterIP: 10.96.33.91
| (⎈|HomeLab:N/A) root@k8s-ctr:~# WEBPOD1IP=172.20.1.88
|
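Hard-coding the Pod IP breaks as soon as the pod is rescheduled; a sketch that derives it from the API instead, assuming the webpod pods carry the app=webpod label shown in the endpoint list:
| WEBPOD1IP=$(kubectl get pod -l app=webpod -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD1IP
|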
2. Check the path to the destination Pod with the Cilium BPF ipcache map
Destination Pod IP → tunnelendpoint lookup
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache
|
✅ Output
| Key Value State Error
172.20.0.191/32 identity=13136 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
172.20.2.124/32 identity=8254 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none> sync
172.20.2.165/32 identity=6 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none> sync
192.168.10.102/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
172.20.1.139/32 identity=4 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none> sync
192.168.10.101/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
172.20.0.10/32 identity=39751 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
172.20.1.127/32 identity=39751 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none> sync
172.20.0.187/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
172.20.0.117/32 identity=4 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
10.0.2.15/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
172.20.2.252/32 identity=4 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none> sync
172.20.1.96/32 identity=6 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none> sync
172.20.1.88/32 identity=8254 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none> sync
192.168.10.100/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
0.0.0.0/0 identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none> sync
|
Traffic to 172.20.1.88 has tunnelendpoint=192.168.10.102 (forwarded to w2).
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache | grep $WEBPOD1IP
|
✅ Output
| 172.20.1.88/32 identity=8254 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none> sync
|
3. Check curl-pod's LXC interface
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c a
|
✅ Output
| 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 66746sec preferred_lft 66746sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86156sec preferred_lft 14156sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe80:23b9/64 scope link
valid_lft forever preferred_lft forever
9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ee:43:ce:84:cd:8b brd ff:ff:ff:ff:ff:ff
inet6 fe80::ec43:ceff:fe84:cd8b/64 scope link
valid_lft forever preferred_lft forever
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ce:32:1f:30:c0:e2 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.187/32 scope global cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::cc32:1fff:fe30:c0e2/64 scope link
valid_lft forever preferred_lft forever
14: lxc15efa76a1c03@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0e:74:2e:72:27:a0 brd ff:ff:ff:ff:ff:ff link-netns cni-69166b00-fabc-0af1-3874-e887a47b08b0
inet6 fe80::c74:2eff:fe72:27a0/64 scope link
valid_lft forever preferred_lft forever
16: lxc4bee7d32b9d3@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:c2:12:8f:3f:7f brd ff:ff:ff:ff:ff:ff link-netns cni-e912a9e2-ed8c-1436-f857-94e0fcb0cd78
inet6 fe80::d8c2:12ff:fe8f:3f7f/64 scope link
valid_lft forever preferred_lft forever
18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:66:60:a7:ea:23 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::d866:60ff:fea7:ea23/64 scope link
valid_lft forever preferred_lft forever
|
4. Check the attached eBPF programs
| LXC=<name of the most recently created lxc interface on k8s-ctr>
(⎈|HomeLab:N/A) root@k8s-ctr:~# LXC=lxc4bee7d32b9d3
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf net show
c0bpf net show | grep $LXC
|
✅ Output
| xdp:
tc:
eth0(2) tcx/ingress cil_from_netdev prog_id 1876 link_id 16
eth0(2) tcx/egress cil_to_netdev prog_id 1877 link_id 17
eth1(3) tcx/ingress cil_from_netdev prog_id 1889 link_id 18
eth1(3) tcx/egress cil_to_netdev prog_id 1893 link_id 19
cilium_net(9) tcx/ingress cil_to_host prog_id 1869 link_id 15
cilium_host(10) tcx/ingress cil_to_host prog_id 1831 link_id 13
cilium_host(10) tcx/egress cil_from_host prog_id 1834 link_id 14
lxc15efa76a1c03(14) tcx/ingress cil_from_container prog_id 1821 link_id 20
lxc15efa76a1c03(14) tcx/egress cil_to_container prog_id 1828 link_id 21
lxc4bee7d32b9d3(16) tcx/ingress cil_from_container prog_id 1854 link_id 24
lxc4bee7d32b9d3(16) tcx/egress cil_to_container prog_id 1859 link_id 25
lxc_health(18) tcx/ingress cil_from_container prog_id 1849 link_id 27
lxc_health(18) tcx/egress cil_to_container prog_id 1838 link_id 28
flow_dissector:
netfilter:
lxc4bee7d32b9d3(16) tcx/ingress cil_from_container prog_id 1854 link_id 24
lxc4bee7d32b9d3(16) tcx/egress cil_to_container prog_id 1859 link_id 25
|
cil_from_container and cil_to_container are attached at ingress/egress.
Check the details with c0bpf prog show id 1854 and c0bpf prog show id 1859.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf prog show id 1854
1854: sched_cls name cil_from_container tag 41989045bb171bee gpl
loaded_at 2025-07-17T12:49:54+0000 uid 0
xlated 752B jited 579B memlock 4096B map_ids 324,323,58
btf_id 840
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf prog show id 1859
1859: sched_cls name cil_to_container tag 0b3125767ba1861c gpl
loaded_at 2025-07-17T12:49:54+0000 uid 0
xlated 1448B jited 928B memlock 4096B map_ids 324,58,323
btf_id 845
|
5. Check the eBPF map list
| (⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf map list
|
✅ Output
| ...
58: percpu_hash name cilium_metrics flags 0x1
key 8B value 16B max_entries 1024 memlock 19144B
...
323: prog_array name cilium_calls_01 flags 0x0
key 4B value 4B max_entries 50 memlock 720B
owner_prog_type sched_cls owner jited
324: array name .rodata.config flags 0x480
key 4B value 64B max_entries 1 memlock 8192B
btf_id 838 frozen
...
|
- Maps such as cilium_metrics (58), cilium_calls_01 (323), and .rodata.config (324) can be seen.
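Individual maps can then be dumped by name or id for a closer look; a sketch (the output is raw key/value data unless BTF metadata is available):
| c0bpf map dump name cilium_metrics | head
c0bpf map dump id 324
|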
6. Verify actual packet transmission (ngrep)
Run ngrep -tW byline -d eth1 '' 'tcp port 80' on the w1 and w2 nodes.
| ngrep -tW byline -d eth1 '' 'tcp port 80'
|
✅ Output
When the curl request below is made, TCP port 80 packets are seen flowing over the eth1 interface in real time.
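To start the capture on both workers from k8s-ctr without extra terminals, the sshpass pattern used earlier can be reused; a sketch (run each in its own shell, Ctrl-C stops the remote ngrep):
| sshpass -p 'vagrant' ssh -t vagrant@k8s-w1 "sudo ngrep -tW byline -d eth1 '' 'tcp port 80'"
sshpass -p 'vagrant' ssh -t vagrant@k8s-w2 "sudo ngrep -tW byline -d eth1 '' 'tcp port 80'"
|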
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $WEBPOD1IP
|
✅ Output
| Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:33604
GET / HTTP/1.1
Host: 172.20.1.88
User-Agent: curl/8.14.1
Accept: */*
|
🧭 Checking "Pod → Service (ClusterIP)" communication between nodes
When a ClusterIP is accessed from inside a node, the destination is NATed to an actual Pod IP (with kube-proxy this is done by iptables; here Cilium handles it at the socket layer, as shown below).
1. Access webpod via the ClusterIP
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:37496
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
|
2. Check the service information
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d1h
webpod ClusterIP 10.96.33.91 <none> 80/TCP 2d23h
|
3. Packet capture result
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- tcpdump -enni any -q
|
✅ Output
In the actual packet capture the ClusterIP (10.96.33.91) never appears; the traffic goes directly to the destination Pod IP.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
|
✅ Output
4. Check the syscall summary with strace -c
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -c curl -s webpod
|
✅ Output
| Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:45620
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
21.19 0.000217 2 85 mmap
14.94 0.000153 4 33 munmap
13.09 0.000134 2 47 30 open
7.71 0.000079 26 3 sendto
6.25 0.000064 2 22 close
5.57 0.000057 19 3 1 connect
4.69 0.000048 1 28 rt_sigaction
4.30 0.000044 3 14 mprotect
3.71 0.000038 1 27 read
2.64 0.000027 1 24 fcntl
2.54 0.000026 2 9 poll
2.44 0.000025 6 4 socket
1.95 0.000020 3 6 3 recvfrom
1.56 0.000016 1 12 fstat
1.37 0.000014 1 10 lseek
0.78 0.000008 0 14 rt_sigprocmask
0.68 0.000007 2 3 readv
0.68 0.000007 7 1 writev
0.59 0.000006 1 5 getsockname
0.59 0.000006 6 1 eventfd2
0.59 0.000006 2 3 3 ioctl
0.59 0.000006 1 5 setsockopt
0.49 0.000005 5 1 stat
0.29 0.000003 0 4 brk
0.29 0.000003 3 1 getrandom
0.20 0.000002 1 2 geteuid
0.10 0.000001 1 1 getsockopt
0.10 0.000001 1 1 getgid
0.10 0.000001 1 1 getegid
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 getuid
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 1 set_tid_address
------ ----------- ----------- --------- --------- ----------------
100.00 0.001024 2 374 37 total
|
5. Check connect calls with strace -e trace=connect
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=connect curl -s webpod
|
✅ Output
| connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.96.0.10")}, 16) = 0
connect(5, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.96.33.91")}, 16) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.96.33.91")}, 16) = -1 EINPROGRESS (Operation in progress)
Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:56178
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
+++ exited with 0 +++
|
The connect() calls show the connection first going to the ClusterIP (10.96.33.91); the eBPF datapath then redirects it to an actual Pod IP.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d1h
webpod ClusterIP 10.96.33.91 <none> 80/TCP 2d23h
|
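The socket-level translation itself can be watched from the agent; a sketch using the monitor's trace-sock event type (assuming the running Cilium version emits these events, as the [pre-xlate-rev] lines earlier suggest):
| c0 monitor -v --type trace-sock
|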
6. Check the socket's local address with strace -e trace=getsockname
The getsockname results show the local socket address as 172.20.0.191 (the curl-pod IP).
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=getsockname curl -s webpod
getsockname(4, {sa_family=AF_INET, sin_port=htons(43981), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(35772), sin_addr=inet_addr("172.20.0.191")}, [16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(48122), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(48122), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(48122), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:48122
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
+++ exited with 0 +++
|
Although the connection targets the ClusterIP, eBPF optimizes it and delivers the traffic to a specific Pod IP.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=getsockopt curl -s webpod
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
Hostname: webpod-9894b69cd-zxcnw
IP: 127.0.0.1
IP: ::1
IP: 172.20.2.124
IP: fe80::781d:98ff:fe11:9f1a
RemoteAddr: 172.20.0.191:45302
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
+++ exited with 0 +++
|
- Even when accessing the ClusterIP, Cilium's eBPF optimizes at the socket level and connects directly to the actual Pod IP.
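As a final cross-check that socket-level load balancing is what performs this translation, the verbose agent status can be inspected; a sketch (recent Cilium versions print a Socket LB line under the KubeProxyReplacement details):
| c0 status --verbose | grep -A8 "KubeProxyReplacement Details"
|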