
Cilium Week 1 Notes

Lab Environment Setup (Arch Linux)

1. Install and Configure VirtualBox

(1) Install required packages

sudo pacman -S virtualbox virtualbox-host-modules-arch linux-headers

(2) Check the VirtualBox version

VBoxManage --version

✅ Output

7.1.10r169112

(3) Load the kernel modules

sudo modprobe vboxdrv
sudo modprobe vboxnetflt
sudo modprobe vboxnetadp

(4) Auto-load the module at boot

sudo bash -c 'echo vboxdrv > /etc/modules-load.d/virtualbox.conf'
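
To have the host-only and NAT adapters working right after a reboot as well, the two networking modules can be registered the same way; a minimal sketch (modules-load.d takes one module name per line):

sudo bash -c 'printf "vboxdrv\nvboxnetflt\nvboxnetadp\n" > /etc/modules-load.d/virtualbox.conf'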

2. Install and Initialize Vagrant

(1) Install Vagrant

sudo pacman -S vagrant

(2) Check the Vagrant version

vagrant --version

✅ Output

Vagrant 2.4.7

(3) Create a working directory and download the Vagrantfile

mkdir -p cilium-lab/1w && cd cilium-lab/1w
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/1w/Vagrantfile

vagrant up

3. Fix the Network Configuration File

Running vagrant up failed with the error: The IP address configured for the host-only network is not within the allowed ranges

From VirtualBox 6.1 onward, host-only networks may only use explicitly allowed IP ranges

Currently allowed range

192.168.56.0/21

IP specified in the Vagrantfile

192.168.10.100 ❌ (outside the allowed range)

Add the IP range to VirtualBox

sudo vim /etc/vbox/networks.conf

Add the following line

* 192.168.10.0/24
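
networks.conf takes one CIDR per line, prefixed with *. Once the file exists, only the ranges listed in it are allowed, so it is safer to keep the default range alongside the new one; the resulting file would look like:

* 192.168.56.0/21
* 192.168.10.0/24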

Destroy the existing VMs and re-run

vagrant destroy -f
vagrant up

✅ Output

Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'k8s-w2' up with 'virtualbox' provider...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
    k8s-ctr: Adapter 1: nat
    k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
    k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
    k8s-ctr: SSH address: 127.0.0.1:60000
    k8s-ctr: SSH username: vagrant
    k8s-ctr: SSH auth method: private key
    k8s-ctr: 
    k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-ctr: this with a newly generated keypair for better security.
    k8s-ctr: 
    k8s-ctr: Inserting generated public key within guest...
    k8s-ctr: Removing insecure key from the guest if it's present...
    k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250714-112889-1m78z0.sh
    k8s-ctr: >>>> Initial Config Start <<<<
    k8s-ctr: [TASK 1] Setting Profile & Change Timezone
    k8s-ctr: [TASK 2] Disable AppArmor
    k8s-ctr: [TASK 3] Disable and turn off SWAP
    k8s-ctr: [TASK 4] Install Packages
    k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-ctr: [TASK 6] Install Packages & Helm
    k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250714-112889-mpo3zf.sh
    k8s-ctr: >>>> K8S Controlplane config Start <<<<
    k8s-ctr: [TASK 1] Initial Kubernetes
    k8s-ctr: [TASK 2] Setting kube config file
    k8s-ctr: [TASK 3] Source the completion
    k8s-ctr: [TASK 4] Alias kubectl to k
    k8s-ctr: [TASK 5] Install Kubectx & Kubens
    k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
    k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
    k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
    k8s-w1: Adapter 1: nat
    k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
    k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
    k8s-w1: SSH address: 127.0.0.1:60001
    k8s-w1: SSH username: vagrant
    k8s-w1: SSH auth method: private key
    k8s-w1: 
    k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-w1: this with a newly generated keypair for better security.
    k8s-w1: 
    k8s-w1: Inserting generated public key within guest...
    k8s-w1: Removing insecure key from the guest if it's present...
    k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250714-112889-tnj10t.sh
    k8s-w1: >>>> Initial Config Start <<<<
    k8s-w1: [TASK 1] Setting Profile & Change Timezone
    k8s-w1: [TASK 2] Disable AppArmor
    k8s-w1: [TASK 3] Disable and turn off SWAP
    k8s-w1: [TASK 4] Install Packages
    k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-w1: [TASK 6] Install Packages & Helm
    k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250714-112889-gsjiv7.sh
    k8s-w1: >>>> K8S Node config Start <<<<
    k8s-w1: [TASK 1] K8S Controlplane Join
    k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w2: Cloning VM...
==> k8s-w2: Matching MAC address for NAT networking...
==> k8s-w2: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w2: Setting the name of the VM: k8s-w2
==> k8s-w2: Clearing any previously set network interfaces...
==> k8s-w2: Preparing network interfaces based on configuration...
    k8s-w2: Adapter 1: nat
    k8s-w2: Adapter 2: hostonly
==> k8s-w2: Forwarding ports...
    k8s-w2: 22 (guest) => 60002 (host) (adapter 1)
==> k8s-w2: Running 'pre-boot' VM customizations...
==> k8s-w2: Booting VM...
==> k8s-w2: Waiting for machine to boot. This may take a few minutes...
    k8s-w2: SSH address: 127.0.0.1:60002
    k8s-w2: SSH username: vagrant
    k8s-w2: SSH auth method: private key
    k8s-w2: 
    k8s-w2: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-w2: this with a newly generated keypair for better security.
    k8s-w2: 
    k8s-w2: Inserting generated public key within guest...
    k8s-w2: Removing insecure key from the guest if it's present...
    k8s-w2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w2: Machine booted and ready!
==> k8s-w2: Checking for guest additions in VM...
==> k8s-w2: Setting hostname...
==> k8s-w2: Configuring and enabling network interfaces...
==> k8s-w2: Running provisioner: shell...
    k8s-w2: Running: /tmp/vagrant-shell20250714-112889-cd8mcd.sh
    k8s-w2: >>>> Initial Config Start <<<<
    k8s-w2: [TASK 1] Setting Profile & Change Timezone
    k8s-w2: [TASK 2] Disable AppArmor
    k8s-w2: [TASK 3] Disable and turn off SWAP
    k8s-w2: [TASK 4] Install Packages
    k8s-w2: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-w2: [TASK 6] Install Packages & Helm
    k8s-w2: >>>> Initial Config End <<<<
==> k8s-w2: Running provisioner: shell...
    k8s-w2: Running: /tmp/vagrant-shell20250714-112889-ncfxlg.sh
    k8s-w2: >>>> K8S Node config Start <<<<
    k8s-w2: [TASK 1] K8S Controlplane Join
    k8s-w2: >>>> K8S Node config End <<<<

4. Check VM Status

vagrant status

✅ Output

Current machine states:

k8s-ctr                   running (virtualbox)
k8s-w1                    running (virtualbox)
k8s-w2                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
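
The host-side SSH ports that vagrant up forwarded (60000-60002 in the log above) can also be listed per machine:

vagrant port k8s-ctr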

5. SSH Access and Node Network Check After Deployment

Check the eth0 interface IP on each node

for i in ctr w1 w2 ; do echo ">> node : k8s-$i <<"; vagrant ssh k8s-$i -c 'ip -c -4 addr show dev eth0'; echo; done #

✅ Output

>> node : k8s-ctr <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85640sec preferred_lft 85640sec

>> node : k8s-w1 <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85784sec preferred_lft 85784sec

>> node : k8s-w2 <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85941sec preferred_lft 85941sec
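
All three nodes report the same 10.0.2.15 because each VM gets its own VirtualBox NAT adapter on eth0 with an identical address. The node-unique addresses live on eth1 (the host-only adapter), which the same loop can confirm:

for i in ctr w1 w2 ; do echo ">> node : k8s-$i <<"; vagrant ssh k8s-$i -c 'ip -c -4 addr show dev eth1'; echo; done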

6. Connect to k8s-ctr and Check Basic Info

(1) Connect to k8s-ctr

vagrant ssh k8s-ctr

✅ Output

Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Mon Jul 14 10:50:38 PM KST 2025

  System load:           0.27
  Usage of /:            19.4% of 30.34GB
  Memory usage:          31%
  Swap usage:            0%
  Processes:             161
  Users logged in:       0
  IPv4 address for eth0: 10.0.2.15
  IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9

This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento

Use of this system is acceptance of the OS vendor EULA and License Agreements.
(⎈|HomeLab:N/A) root@k8s-ctr:~#

(2) Run basic commands inside k8s-ctr

(⎈|HomeLab:N/A) root@k8s-ctr:~# whoami
root

(⎈|HomeLab:N/A) root@k8s-ctr:~# pwd
/root

(⎈|HomeLab:N/A) root@k8s-ctr:~# hostnamectl
 Static hostname: k8s-ctr
       Icon name: computer-vm
         Chassis: vm
      Machine ID: 4f9fb3fa939a46d788144548529797c4
         Boot ID: b47345bdfb114c0f99ef542366fb0ebc
  Virtualization: oracle
Operating System: Ubuntu 24.04.2 LTS              
          Kernel: Linux 6.8.0-53-generic
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
Firmware Version: VirtualBox
   Firmware Date: Fri 2006-12-01
    Firmware Age: 18y 7month 1w 6d

(3) Monitor system resources

(⎈|HomeLab:N/A) root@k8s-ctr:~# htop

✅ Output (htop screen; screenshot omitted)

(4) Check the /etc/hosts file

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts

✅ Output

127.0.0.1 localhost
127.0.1.1 vagrant

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.2.1 k8s-ctr k8s-ctr
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2

(5) Check inter-node connectivity

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w1

✅ Output

PING k8s-w1 (192.168.10.101) 56(84) bytes of data.
64 bytes from k8s-w1 (192.168.10.101): icmp_seq=1 ttl=64 time=0.421 ms

--- k8s-w1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms
(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w2

✅ Output

PING k8s-w2 (192.168.10.102) 56(84) bytes of data.
64 bytes from k8s-w2 (192.168.10.102): icmp_seq=1 ttl=64 time=0.433 ms

--- k8s-w2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms
  • k8s-w1 / k8s-w2: 0% packet loss each, normal responses

(6) Run remote SSH commands from the control plane to the worker nodes

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname

✅ Output

Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w2 hostname

✅ Output

Warning: Permanently added 'k8s-w2' (ED25519) to the list of known hosts.
k8s-w2
  • k8s-w1 / k8s-w2 both return their hostnames as expected

(7) Why SSH access works

  • Check the sshd sessions in the NATed VirtualBox environment
  • The SSH connection arrives at 10.0.2.15 from 10.0.2.2 (the VirtualBox NAT gateway)
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnp |grep sshd

✅ Output

ESTAB 0      0           [::ffff:10.0.2.15]:22          [::ffff:10.0.2.2]:48922 users:(("sshd",pid=4947,fd=4),("sshd",pid=4902,fd=4))
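
This matches how vagrant ssh works: the host connects to 127.0.0.1:60000 (per the boot log above), VirtualBox forwards it to the guest's port 22, so sshd sees the peer as the NAT gateway 10.0.2.2. The exact forwarding parameters can be printed from the host working directory:

vagrant ssh-config k8s-ctr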

(8) Check network interfaces

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c addr

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 84849sec preferred_lft 84849sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86357sec preferred_lft 14357sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe80:23b9/64 scope link 
       valid_lft forever preferred_lft forever
  • eth0: 10.0.2.15/24 (NAT network) / eth1: 192.168.10.100/24 (host-only network)

(9) Check the routing table

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route

✅ Output

default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100

(10) Check the DNS configuration

(⎈|HomeLab:N/A) root@k8s-ctr:~# resolvectl

✅ Output

Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (eth0)
    Current Scopes: DNS
         Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.3
       DNS Servers: 10.0.2.3
        DNS Domain: Davolink

Link 3 (eth1)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

7. Check Kubernetes Info on k8s-ctr

(1) Check cluster info

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info

✅ Output

Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  • Kubernetes control plane: https://192.168.10.100:6443
  • CoreDNS: https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

(2) Check node status and INTERNAL-IP

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide

✅ Output

NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   30m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          28m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          26m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

⚠ Issues

  • The worker nodes' INTERNAL-IP sits on the NAT range (10.0.2.15)
  • To satisfy the CNI's requirements, the INTERNAL-IP must point to the network actually used for Kubernetes API traffic (192.168.10.x)

(3) Check pod status and IPs

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide

✅ Output

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP          NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-674b8bbfcf-7gx6f          0/1     Pending   0          34m   <none>      <none>    <none>           <none>
kube-system   coredns-674b8bbfcf-mjnst          0/1     Pending   0          34m   <none>      <none>    <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0          35m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0          35m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0          35m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-proxy-b6zgw                  1/1     Running   0          32m   10.0.2.15   k8s-w1    <none>           <none>
kube-system   kube-proxy-grfn2                  1/1     Running   0          34m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-proxy-p678s                  1/1     Running   0          30m   10.0.2.15   k8s-w2    <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0          35m   10.0.2.15   k8s-ctr   <none>           <none>
  • CoreDNS pods: Pending with no IP (caused by the missing CNI)
  • etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy: all using 10.0.2.15
  • The host-side addresses also need to move to the 192.168.10.x range

(4) Inspect the CoreDNS pods

(⎈|HomeLab:N/A) root@k8s-ctr:~# k describe pod -n kube-system -l k8s-app=kube-dns

✅ Output

Name:                 coredns-674b8bbfcf-7gx6f
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 <none>
Labels:               k8s-app=kube-dns
                      pod-template-hash=674b8bbfcf
Annotations:          <none>
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-674b8bbfcf
Containers:
  coredns:
    Image:       registry.k8s.io/coredns/coredns:v1.12.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9fss (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-f9fss:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  37m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  118s (x7 over 32m)  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Name:                 coredns-674b8bbfcf-mjnst
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 <none>
Labels:               k8s-app=kube-dns
                      pod-template-hash=674b8bbfcf
Annotations:          <none>
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-674b8bbfcf
Containers:
  coredns:
    Image:       registry.k8s.io/coredns/coredns:v1.12.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-55887 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-55887:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  37m                  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  2m28s (x7 over 32m)  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  • Both CoreDNS pods are Pending
  • Scheduler events: 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }
  • That is, every node is NotReady, so the pods cannot be scheduled until a CNI is installed

8. Change the k8s-ctr INTERNAL-IP

(1) Why the change is needed

  • During kubeadm init, the control plane's API server address was recorded as its INTERNAL-IP
  • By default the eth0 (NAT) IP (10.0.2.x) was picked, which does not match the CNI and pod network
  • The host-only (eth1) range, 192.168.10.x, must be pinned as the INTERNAL-IP

(2) Check the current kubelet settings

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env

✅ Output

KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

(3) Read the eth1 IP into a variable

(⎈|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
(⎈|HomeLab:N/A) root@k8s-ctr:~# echo $NODEIP

✅ Output

192.168.10.100

(4) Add node-ip to the kubelet config file

(⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet

(5) Verify after applying

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env

✅ Output

KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.100 --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
  • The --node-ip=192.168.10.100 option has been added

9. Change the Worker Node (k8s-w1/w2) INTERNAL-IPs

(1) Change k8s-w1

vagrant ssh k8s-w1 
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Mon Jul 14 11:28:56 PM KST 2025

  System load:           0.0
  Usage of /:            17.0% of 30.34GB
  Memory usage:          22%
  Swap usage:            0%
  Processes:             147
  Users logged in:       0
  IPv4 address for eth0: 10.0.2.15
  IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9

This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento

Use of this system is acceptance of the OS vendor EULA and License Agreements.
Last login: Mon Jul 14 22:52:44 2025 from 10.0.2.2

root@k8s-w1:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet

root@k8s-w1:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.101 --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

(2) Change k8s-w2

vagrant ssh k8s-w2
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Mon Jul 14 11:29:17 PM KST 2025

  System load:           0.0
  Usage of /:            17.0% of 30.34GB
  Memory usage:          21%
  Swap usage:            0%
  Processes:             147
  Users logged in:       0
  IPv4 address for eth0: 10.0.2.15
  IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9

This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento

Use of this system is acceptance of the OS vendor EULA and License Agreements.
Last login: Mon Jul 14 22:52:45 2025 from 10.0.2.2

root@k8s-w2:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec && systemctl restart kubelet

root@k8s-w2:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.102 --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
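
For future rebuilds, the same change can be pushed to both workers from the host in one pass; a minimal sketch, assuming the Vagrant machine names above and passwordless sudo inside the guests:

for i in w1 w2 ; do
  vagrant ssh k8s-$i -c 'NODEIP=$(ip -4 addr show eth1 | grep -oP "(?<=inet\s)\d+(\.\d+){3}"); sudo sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env; sudo systemctl restart kubelet'
done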

(3) Verify the change

Check that the node IPs were applied correctly

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide

✅ Output

NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   50m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          47m   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          46m   v1.33.2   192.168.10.102   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

(4) Verify the static pod IP change

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide

✅ Output

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-674b8bbfcf-7gx6f          0/1     Pending   0          52m   <none>           <none>    <none>           <none>
kube-system   coredns-674b8bbfcf-mjnst          0/1     Pending   0          52m   <none>           <none>    <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0          52m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0          52m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0          52m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-proxy-b6zgw                  1/1     Running   0          49m   192.168.10.101   k8s-w1    <none>           <none>
kube-system   kube-proxy-grfn2                  1/1     Running   0          52m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-proxy-p678s                  1/1     Running   0          48m   192.168.10.102   k8s-w2    <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0          52m   192.168.10.100   k8s-ctr   <none>           <none>

Flannel CNI Installation and Verification

1. Pre-install Cluster State

(1) Check the Pod CIDR and Service CIDR set at kubeadm init

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"

✅ Output

                            "--service-cluster-ip-range=10.96.0.0/16",
                            "--cluster-cidr=10.244.0.0/16",

(2) Check CoreDNS pod status

Before a CNI is installed, the pod network is not configured, so pods cannot get an IP

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide

✅ Output

NAME                       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-7gx6f   0/1     Pending   0          59m   <none>   <none>   <none>           <none>
coredns-674b8bbfcf-mjnst   0/1     Pending   0          59m   <none>   <none>   <none>           <none>

(3) Check the base network state

Check the eth0 and eth1 interfaces

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
ip -c route
brctl show

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8

default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
  • eth0 is the NAT network (10.0.2.x); eth1 is the host-only network (192.168.10.x)

(4) Check the iptables state

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save

✅ Output

# Generated by iptables-save v1.8.10 (nf_tables) on Mon Jul 14 23:44:12 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Mon Jul 14 23:44:12 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Mon Jul 14 23:44:12 2025
*filter
:INPUT ACCEPT [651613:132029811]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [644732:121246327]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Mon Jul 14 23:44:12 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Mon Jul 14 23:44:12 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-ETI7FUQQE3BS2IXE - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Mon Jul 14 23:44:12 2025

Because the pod network does not exist yet, the kube-dns service has no endpoints:

-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
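
The same situation is visible from the API side; the endpoints list stays empty until CoreDNS is scheduled and ready:

kubectl get endpoints -n kube-system kube-dns
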
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SERVICES
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE

(5) Check the CNI config directory → it is empty

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/

✅ Output

/etc/cni/net.d/

0 directories, 0 files
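
This empty directory is also why every node is NotReady: the kubelet keeps reporting the container runtime network as not ready until a CNI config shows up here. A quick check (the exact message wording may differ by version):

kubectl describe node k8s-ctr | grep -i networkready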

2. Install the Flannel CNI

(1) Prepare the Flannel namespace and Helm

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged

helm repo add flannel https://flannel-io.github.io/flannel/
helm repo list
helm search repo flannel
helm show values flannel/flannel

✅ Output

namespace/kube-flannel created
namespace/kube-flannel labeled
"flannel" has been added to your repositories
NAME   	URL                                  
flannel	https://flannel-io.github.io/flannel/
NAME           	CHART VERSION	APP VERSION	DESCRIPTION                    
flannel/flannel	v0.27.1      	v0.27.1    	Install Flannel Network Plugin.
---
global:
  imagePullSecrets:
# - name: "a-secret-name"

# The IPv4 cidr pool to create on startup if none exists. Pod IPs will be
# chosen from this range.
podCidr: "10.244.0.0/16"
podCidrv6: ""

flannel:
  # kube-flannel image
  image:
    repository: ghcr.io/flannel-io/flannel
    tag: v0.27.1
  image_cni:
    repository: ghcr.io/flannel-io/flannel-cni-plugin
    tag: v1.7.1-flannel1
  # cniBinDir is the directory to which the flannel CNI binary is installed.
  cniBinDir: "/opt/cni/bin"
  # cniConfDir is the directory where the CNI configuration is located.
  cniConfDir: "/etc/cni/net.d"
  # skipCNIConfigInstallation skips the installation of the flannel CNI config. This is useful when the CNI config is
  # provided externally.
  skipCNIConfigInstallation: false
  # flannel command arguments
  enableNFTables: false
  args:
  - "--ip-masq"
  - "--kube-subnet-mgr"
  # Backend for kube-flannel. Backend should not be changed
  # at runtime. (vxlan, host-gw, wireguard, udp)
  # Documentation at https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md
  backend: "vxlan"
  # Port used by the backend 0 means default value (VXLAN: 8472, Wireguard: 51821, UDP: 8285)
  #backendPort: 0
  # MTU to use for outgoing packets (VXLAN and Wiregurad) if not defined the MTU of the external interface is used.
  # mtu: 1500
  #
  # VXLAN Configs:
  #
  # VXLAN Identifier to be used. On Linux default is 1.
  #vni: 1
  # Enable VXLAN Group Based Policy (Default false)
  # GBP: false
  # Enable direct routes (default is false)
  # directRouting: false
  # MAC prefix to be used on Windows. (Defaults is 0E-2A)
  # macPrefix: "0E-2A"
  #
  # Wireguard Configs:
  #
  # UDP listen port used with IPv6
  # backendPortv6: 51821
  # Pre shared key to use
  # psk: 0
  # IP version to use on Wireguard
  # tunnelMode: "separate"
  # Persistent keep interval to use
  # keepaliveInterval: 0
  #
  cniConf: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  #
  # General daemonset configs
  #
  resources:
    requests:
      cpu: 100m
      memory: 50Mi
  tolerations:
  - effect: NoExecute
    operator: Exists
  - effect: NoSchedule
    operator: Exists
  nodeSelector: {}

netpol:
  enabled: false
  args:
  - "--hostname-override=$(MY_NODE_NAME)"
  - "--v=2"
  image:
    repository: registry.k8s.io/networking/kube-network-policies
    tag: v0.7.0

(2) Pin the network interface

Every node's default route sits on eth0, the NAT adapter that carries the identical 10.0.2.15 address on every VM, so flanneld is pointed at the host-only interface with --iface=eth1.

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > flannel-values.yaml
podCidr: "10.244.0.0/16"

flannel:
  args:
  - "--ip-masq"
  - "--kube-subnet-mgr"
  - "--iface=eth1"  
EOF

(3) Install Flannel with Helm

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install flannel --namespace kube-flannel flannel/flannel -f flannel-values.yaml

✅ Output

NAME: flannel
LAST DEPLOYED: Mon Jul 14 23:49:17 2025
NAMESPACE: kube-flannel
STATUS: deployed
REVISION: 1
TEST SUITE: None
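
Once the DaemonSet pods are up, each node should carry the CNI config copied in by the install-cni init container plus the subnet file written by flanneld at startup; a quick sanity check on k8s-ctr, using the paths from the chart values above:

cat /etc/cni/net.d/10-flannel.conflist
cat /run/flannel/subnet.env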

(4) Check pod behavior

(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-flannel -l app=flannel

✅ Output

Name:                 kube-flannel-ds-9qdxf
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 k8s-ctr/192.168.10.100
Start Time:           Mon, 14 Jul 2025 23:49:17 +0900
Labels:               app=flannel
                      controller-revision-hash=66c5c78475
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   192.168.10.100
IPs:
  IP:           192.168.10.100
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni-plugin:
    Container ID:  containerd://7cbb5ee284a7eb7bb13995fd1c656f2d0776973ae1e7cdd3f616fd528270fdcd
    Image:         ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
    Image ID:      ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 14 Jul 2025 23:49:22 +0900
      Finished:     Mon, 14 Jul 2025 23:49:22 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46mlr (ro)
  install-cni:
    Container ID:  containerd://cfc3ee53844f9139d8ca75f024746fec9163e629f431b97966d10923612a60eb
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 14 Jul 2025 23:49:30 +0900
      Finished:     Mon, 14 Jul 2025 23:49:30 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46mlr (ro)
Containers:
  kube-flannel:
    Container ID:  containerd://1fda477f80ac40a4e82bd18368b78b7e391bce1436d69a746ecef65773525920
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
      --ip-masq
      --kube-subnet-mgr
      --iface=eth1
    State:          Running
      Started:      Mon, 14 Jul 2025 23:49:31 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:                   kube-flannel-ds-9qdxf (v1:metadata.name)
      POD_NAMESPACE:              kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:          5000
      CONT_WHEN_CACHE_NOT_READY:  false
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46mlr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-46mlr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoExecute op=Exists
                             :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned kube-flannel/kube-flannel-ds-9qdxf to k8s-ctr
  Normal  Pulling    44s   kubelet            Pulling image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1"
  Normal  Pulled     39s   kubelet            Successfully pulled image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1" in 4.137s (4.137s including waiting). Image size: 4878976 bytes.
  Normal  Created    39s   kubelet            Created container: install-cni-plugin
  Normal  Started    39s   kubelet            Started container install-cni-plugin
  Normal  Pulling    38s   kubelet            Pulling image "ghcr.io/flannel-io/flannel:v0.27.1"
  Normal  Pulled     31s   kubelet            Successfully pulled image "ghcr.io/flannel-io/flannel:v0.27.1" in 7.208s (7.208s including waiting). Image size: 32389164 bytes.
  Normal  Created    31s   kubelet            Created container: install-cni
  Normal  Started    31s   kubelet            Started container install-cni
  Normal  Pulled     30s   kubelet            Container image "ghcr.io/flannel-io/flannel:v0.27.1" already present on machine
  Normal  Created    30s   kubelet            Created container: kube-flannel
  Normal  Started    30s   kubelet            Started container kube-flannel

Name:                 kube-flannel-ds-c4rxb
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 k8s-w1/192.168.10.101
Start Time:           Mon, 14 Jul 2025 23:49:17 +0900
Labels:               app=flannel
                      controller-revision-hash=66c5c78475
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   192.168.10.101
IPs:
  IP:           192.168.10.101
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni-plugin:
    Container ID:  containerd://cf6e39697e580d20418e0d1c4efa454479d173ed666612bd752e3f596a44a9bc
    Image:         ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
    Image ID:      ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 14 Jul 2025 23:49:22 +0900
      Finished:     Mon, 14 Jul 2025 23:49:22 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nppgp (ro)
  install-cni:
    Container ID:  containerd://3a06f8216980b01adfbcd870361c1ae41371a7be206e5456ad98c0c01e8f90f3
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 14 Jul 2025 23:49:30 +0900
      Finished:     Mon, 14 Jul 2025 23:49:30 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nppgp (ro)
Containers:
  kube-flannel:
    Container ID:  containerd://f49e28157b71d776ec0526e2b87423fda60439a3e5f9dab351a6e465de53ebdb
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
      --ip-masq
      --kube-subnet-mgr
      --iface=eth1
    State:          Running
      Started:      Mon, 14 Jul 2025 23:49:31 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:                   kube-flannel-ds-c4rxb (v1:metadata.name)
      POD_NAMESPACE:              kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:          5000
      CONT_WHEN_CACHE_NOT_READY:  false
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nppgp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-nppgp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoExecute op=Exists
                             :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned kube-flannel/kube-flannel-ds-c4rxb to k8s-w1
  Normal  Pulling    43s   kubelet            Pulling image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1"
  Normal  Pulled     39s   kubelet            Successfully pulled image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1" in 4.101s (4.101s including waiting). Image size: 4878976 bytes.
  Normal  Created    39s   kubelet            Created container: install-cni-plugin
  Normal  Started    39s   kubelet            Started container install-cni-plugin
  Normal  Pulling    38s   kubelet            Pulling image "ghcr.io/flannel-io/flannel:v0.27.1"
  Normal  Pulled     31s   kubelet            Successfully pulled image "ghcr.io/flannel-io/flannel:v0.27.1" in 6.891s (6.891s including waiting). Image size: 32389164 bytes.
  Normal  Created    31s   kubelet            Created container: install-cni
  Normal  Started    31s   kubelet            Started container install-cni
  Normal  Pulled     30s   kubelet            Container image "ghcr.io/flannel-io/flannel:v0.27.1" already present on machine
  Normal  Created    30s   kubelet            Created container: kube-flannel
  Normal  Started    30s   kubelet            Started container kube-flannel

Name:                 kube-flannel-ds-q4chw
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 k8s-w2/192.168.10.102
Start Time:           Mon, 14 Jul 2025 23:49:17 +0900
Labels:               app=flannel
                      controller-revision-hash=66c5c78475
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   192.168.10.102
IPs:
  IP:           192.168.10.102
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni-plugin:
    Container ID:  containerd://8b174a979471fdd203c4572b14db14c8931fdde14b2935e707790c8b913882ce
    Image:         ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
    Image ID:      ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 14 Jul 2025 23:49:22 +0900
      Finished:     Mon, 14 Jul 2025 23:49:22 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8bh8 (ro)
  install-cni:
    Container ID:  containerd://d40ae3545353faf7911afb92146557a2996d7bfecd1e47ff9edf8e6d0a23c918
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 14 Jul 2025 23:49:29 +0900
      Finished:     Mon, 14 Jul 2025 23:49:29 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8bh8 (ro)
Containers:
  kube-flannel:
    Container ID:  containerd://5305910686764c1ea1a0231fc39a38dd718eeafbb20746d44e52c37e4b17ba72
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
      --ip-masq
      --kube-subnet-mgr
      --iface=eth1
    State:          Running
      Started:      Mon, 14 Jul 2025 23:49:30 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:                   kube-flannel-ds-q4chw (v1:metadata.name)
      POD_NAMESPACE:              kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:          5000
      CONT_WHEN_CACHE_NOT_READY:  false
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8bh8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-w8bh8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoExecute op=Exists
                             :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned kube-flannel/kube-flannel-ds-q4chw to k8s-w2
  Normal  Pulling    43s   kubelet            Pulling image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1"
  Normal  Pulled     39s   kubelet            Successfully pulled image "ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1" in 4.081s (4.081s including waiting). Image size: 4878976 bytes.
  Normal  Created    39s   kubelet            Created container: install-cni-plugin
  Normal  Started    39s   kubelet            Started container install-cni-plugin
  Normal  Pulling    39s   kubelet            Pulling image "ghcr.io/flannel-io/flannel:v0.27.1"
  Normal  Pulled     32s   kubelet            Successfully pulled image "ghcr.io/flannel-io/flannel:v0.27.1" in 6.97s (6.97s including waiting). Image size: 32389164 bytes.
  Normal  Created    32s   kubelet            Created container: install-cni
  Normal  Started    32s   kubelet            Started container install-cni
  Normal  Pulled     31s   kubelet            Container image "ghcr.io/flannel-io/flannel:v0.27.1" already present on machine
  Normal  Created    31s   kubelet            Created container: kube-flannel
  Normal  Started    31s   kubelet            Started container kube-flannel
  • install-cni-plugin: installs the flannel binary (/opt/cni/bin/flannel)
  • install-cni: applies the CNI config (/etc/cni/net.d/10-flannel.conflist)

(5) Verify the CNI binary installation

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /opt/cni/bin/

✅ Output

/opt/cni/bin/
├── bandwidth
├── bridge
├── dhcp
├── dummy
├── firewall
├── flannel
├── host-device
├── host-local
├── ipvlan
├── LICENSE
├── loopback
├── macvlan
├── portmap
├── ptp
├── README.md
├── sbr
├── static
├── tap
├── tuning
├── vlan
└── vrf

1 directory, 21 files

(6) Verify the CNI config file

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/
└── 10-flannel.conflist

1 directory, 1 file

Inspect 10-flannel.conflist:

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/cni/net.d/10-flannel.conflist | jq

✅ Output

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

(7) Verify the ConfigMap

Pod network CIDR: 10.244.0.0/16 / Backend: vxlan

(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n kube-flannel kube-flannel-cfg

✅ Output

Name:         kube-flannel-cfg
Namespace:    kube-flannel
Labels:       app=flannel
              app.kubernetes.io/managed-by=Helm
              tier=node
Annotations:  meta.helm.sh/release-name: flannel
              meta.helm.sh/release-namespace: kube-flannel

Data
====
cni-conf.json:
----
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

net-conf.json:
----
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

BinaryData
====

Events:  <none>
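
The same two JSON documents can be read straight from the ConfigMap data, without the describe formatting; a minimal sketch (the dot in the key name must be escaped inside the jsonpath expression):

# Print only net-conf.json from the kube-flannel-cfg ConfigMap
kubectl get cm -n kube-flannel kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}' | jq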

(8) Check network interface changes

After installation, flannel.1, cni0, and veth* interfaces are added.

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:af:58:af:44:e3 brd ff:ff:ff:ff:ff:ff
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f

(9) Check routing table changes

Routes observed on the control plane:

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 10.244.

✅ Output

10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
  • 10.244.1.0/24 → forwarded to worker node 1's flannel.1
  • 10.244.2.0/24 → forwarded to worker node 2's flannel.1 (see the sketch below)
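
How those "via 10.244.x.0" next hops resolve is visible in the neighbor and VXLAN forwarding tables that flanneld programs; a quick sketch, assuming iproute2's bridge tool is available:

# Static ARP entries for each peer's flannel.1 address
ip neigh show dev flannel.1

# VXLAN FDB entries mapping peer MACs to node (VTEP) IPs
bridge fdb show dev flannel.1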

(10) ping test

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.244.1.0

✅ Output

PING 10.244.1.0 (10.244.1.0) 56(84) bytes of data.
64 bytes from 10.244.1.0: icmp_seq=1 ttl=64 time=0.391 ms

--- 10.244.1.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.244.2.0

✅ Output

PING 10.244.2.0 (10.244.2.0) 56(84) bytes of data.
64 bytes from 10.244.2.0: icmp_seq=1 ttl=64 time=0.409 ms

--- 10.244.2.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms
  • ping 10.244.1.0 and ping 10.244.2.0 both answer normally
  • Routing and interface behavior confirmed (a capture sketch follows)
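
To confirm these pings really cross the VXLAN overlay, one can capture on the underlay NIC; a sketch, assuming tcpdump is installed (flannel's VXLAN backend uses the Linux default UDP port 8472):

# Terminal 1: watch for VXLAN-encapsulated packets on the host network
tcpdump -i eth1 -nn udp port 8472

# Terminal 2: generate cross-node traffic
ping -c 3 10.244.1.0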

(11) Check worker node interfaces

k8s-w1 flannel.1: 10.244.1.0/32

root@k8s-w1:~# ip -c a

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 81470sec preferred_lft 81470sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86361sec preferred_lft 14361sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe07:f254/64 scope link 
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::4c13:ff:fe49:ce71/64 scope link 
       valid_lft forever preferred_lft forever

k8s-w2 flannel.1: 10.244.2.0/32

root@k8s-w2:~# ip -c a

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 81514sec preferred_lft 81514sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86356sec preferred_lft 14356sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.102/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe96:d720/64 scope link 
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::5419:dcff:fe74:53eb/64 scope link 
       valid_lft forever preferred_lft forever
  • ๊ฐ๊ฐ ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ CIDR(10.244.0.0/24)์œผ๋กœ ๋ผ์šฐํŒ… ์„ค์ • ์กด์žฌ

(12) Check the bridge interface

veth470cf46f and vethe4603105 are attached to the cni0 bridge.

(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show

✅ Output

bridge name  bridge id          STP enabled  interfaces
cni0         8000.f6af58af44e3  no           veth470cf46f
                                             vethe4603105

(13) Check iptables NAT rules

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N FLANNEL-POSTRTG
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-6E7XQMQ4RAYOWTTM
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SEP-IT2ZTR26TO4XFPTO
-N KUBE-SEP-N4G2XR5TDX7PQE7P
-N KUBE-SEP-YIL6JZP7A3QYXJU2
-N KUBE-SEP-ZP3FB6NMPNCO4VBJ
-N KUBE-SEP-ZXMNUKOKXUTL2MK2
-N KUBE-SERVICES
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.3:53" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.3:9153" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.3:53" -j KUBE-SEP-6E7XQMQ4RAYOWTTM
  • The FLANNEL-POSTRTG chain has been created (see the sketch below)
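
To see which of these masquerade rules actually match traffic, the chain can be listed with packet counters; a quick sketch:

# Show FLANNEL-POSTRTG rules with per-rule packet/byte counters
iptables -t nat -L FLANNEL-POSTRTG -v -n --line-numbers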

(14) Inspect k8s-w1 and k8s-w2 from the control plane

Network interfaces on each node:

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done

✅ Output

>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff

๊ฐ ๋…ธ๋“œ์˜ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done

โœ…ย ์ถœ๋ ฅ

>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 

>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102

(15) Check control plane interfaces

The flannel.1 interface is assigned 10.244.0.0/32.

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c a

✅ Output

...
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::e40f:9bff:fe40:c3ec/64 scope link 
       valid_lft forever preferred_lft forever
...       

(16) Check cluster Pod status

The coredns Pods run normally in the control plane (k8s-ctr) Pod CIDR (10.244.0.x).

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get pod -A -owide

✅ Output

NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-9qdxf             1/1     Running   0          27m   192.168.10.100   k8s-ctr   <none>           <none>
kube-flannel   kube-flannel-ds-c4rxb             1/1     Running   0          27m   192.168.10.101   k8s-w1    <none>           <none>
kube-flannel   kube-flannel-ds-q4chw             1/1     Running   0          27m   192.168.10.102   k8s-w2    <none>           <none>
kube-system    coredns-674b8bbfcf-7gx6f          1/1     Running   0          94m   10.244.0.2       k8s-ctr   <none>           <none>
kube-system    coredns-674b8bbfcf-mjnst          1/1     Running   0          94m   10.244.0.3       k8s-ctr   <none>           <none>
kube-system    etcd-k8s-ctr                      1/1     Running   0          95m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system    kube-apiserver-k8s-ctr            1/1     Running   0          95m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system    kube-controller-manager-k8s-ctr   1/1     Running   0          95m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system    kube-proxy-b6zgw                  1/1     Running   0          92m   192.168.10.101   k8s-w1    <none>           <none>
kube-system    kube-proxy-grfn2                  1/1     Running   0          94m   192.168.10.100   k8s-ctr   <none>           <none>
kube-system    kube-proxy-p678s                  1/1     Running   0          90m   192.168.10.102   k8s-w2    <none>           <none>
kube-system    kube-scheduler-k8s-ctr            1/1     Running   0          95m   192.168.10.100   k8s-ctr   <none>           <none>

🚀 Deploying a Sample Application

1. Create the Deployment & Service

podAntiAffinity is used so that pods carrying the same label are not scheduled onto the same node.

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# Result
deployment.apps/webpod created
service/webpod created

2. Create curl-pod

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
    - name: curl
      image: alpine/curl
      command: ["sleep", "36000"]
EOF

# Result
pod/curl-pod created

3. Check pods and containers

(⎈|HomeLab:N/A) root@k8s-ctr:~# crictl ps

✅ Output

CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                               NAMESPACE
3690a55e4d3fa       e747d861ab8fd       About a minute ago   Running             curl                      0                   7802b00f4ec67       curl-pod                          default
3df0eea47e774       1cf5f116067c6       33 minutes ago       Running             coredns                   0                   43cbb7c51580f       coredns-674b8bbfcf-mjnst          kube-system
5b58bfe8158a1       1cf5f116067c6       33 minutes ago       Running             coredns                   0                   99e79c3156525       coredns-674b8bbfcf-7gx6f          kube-system
1fda477f80ac4       747b002efa646       33 minutes ago       Running             kube-flannel               0                   db9d5eb73c3c3       kube-flannel-ds-9qdxf              kube-flannel
14f050462b6b3       661d404f36f01       2 hours ago          Running             kube-proxy                0                   466896b16a9e0       kube-proxy-grfn2                  kube-system
fe228f01dce18       cfed1ff748928        2 hours ago          Running             kube-scheduler            0                   ff4a0b472ee10        kube-scheduler-k8s-ctr            kube-system
2dabc9b77c5bb       ff4f56c76b82d        2 hours ago          Running             kube-controller-manager   0                   37b1ef75a797b       kube-controller-manager-k8s-ctr   kube-system
db0f8a21d4e5d       ee794efa53d85       2 hours ago          Running             kube-apiserver            0                   2ec28da84e45d       kube-apiserver-k8s-ctr            kube-system
09c29ceb35443       499038711c081       2 hours ago          Running             etcd                      0                   d0c2a84088abf       etcd-k8s-ctr                      kube-system

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo crictl ps ; echo; done

✅ Output

>> node : k8s-w1 <<
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                       NAMESPACE
86199e4d7a0c5       6fee7566e4273       4 minutes ago       Running             webpod              0                   ca2ae7352a95a       webpod-697b545f57-rnqgq   default
f49e28157b71d       747b002efa646       35 minutes ago      Running             kube-flannel         0                   fdddbfd293f2c       kube-flannel-ds-c4rxb      kube-flannel
df1e05d91b486       661d404f36f01       2 hours ago         Running             kube-proxy          0                   1f97c8bfacbe1       kube-proxy-b6zgw          kube-system

>> node : k8s-w2 <<
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                       NAMESPACE
52921096fd252       6fee7566e4273       4 minutes ago       Running             webpod              0                   28d5aedbc875d       webpod-697b545f57-67wlk   default
5305910686764       747b002efa646       35 minutes ago      Running             kube-flannel         0                   e37e09c6351bd       kube-flannel-ds-q4chw      kube-flannel
2e23a26ab34de       661d404f36f01       2 hours ago         Running             kube-proxy          0                   db95ae957614a       kube-proxy-p678s          kube-system
  • Worker node 1: one webpod, plus kube-flannel and kube-proxy
  • Worker node 2: one webpod, plus kube-flannel and kube-proxy

4. Check pod deployment status

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide

✅ Output

NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          4m19s   10.244.0.4   k8s-ctr   <none>           <none>
webpod-697b545f57-67wlk   1/1     Running   0          6m34s   10.244.2.2   k8s-w2    <none>           <none>
webpod-697b545f57-rnqgq   1/1     Running   0          6m34s   10.244.1.2   k8s-w1    <none>           <none>

5. Check the Service & Endpoints

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide

✅ Output

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   2/2     2            2           7m25s   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/webpod   ClusterIP   10.96.33.91   <none>        80/TCP    7m25s   app=webpod

NAME               ENDPOINTS                     AGE
endpoints/webpod   10.244.1.2:80,10.244.2.2:80   7m25s
  • ClusterIP: 10.96.33.91
  • Endpoints: 10.244.1.2:80, 10.244.2.2:80

The same endpoints are visible through EndpointSlice.

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl api-resources | grep -i endpoint

✅ Output

endpoints                           ep           v1                                true         Endpoints
endpointslices                                   discovery.k8s.io/v1               true         EndpointSlice

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod

✅ Output

NAME           ADDRESSTYPE   PORTS   ENDPOINTS               AGE
webpod-wzm9p   IPv4          80      10.244.2.2,10.244.1.2   8m33s

6. ๋„คํŠธ์›Œํฌ ๋ณ€ํ™” ํ™•์ธ

cni0 ๋ธŒ๋ฆฌ์ง€ ๋ฐ veth ์ธํ„ฐํŽ˜์ด์Šค ์ถ”๊ฐ€ ์ƒ์„ฑ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c link

โœ…ย ์ถœ๋ ฅ

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:af:58:af:44:e3 brd ff:ff:ff:ff:ff:ff
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
8: veth51449dca@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 42:58:20:09:e6:66 brd ff:ff:ff:ff:ff:ff link-netns cni-a2abf4c9-a021-9a05-3db5-2b9f8761bb5a

Three veth interfaces are now attached to the cni0 bridge (an iproute2 alternative follows the output):

(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show

✅ Output

bridge name  bridge id          STP enabled  interfaces
cni0         8000.f6af58af44e3  no           veth470cf46f
                                             veth51449dca
                                             vethe4603105
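
Where bridge-utils is not installed, iproute2 can list the same membership; a minimal sketch:

# List interfaces enslaved to the cni0 bridge
ip -br link show master cni0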

KUBE-SERVICES rules for the webpod service (10.96.33.91) have been added.

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N FLANNEL-POSTRTG
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-6E7XQMQ4RAYOWTTM
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SEP-IT2ZTR26TO4XFPTO
-N KUBE-SEP-N4G2XR5TDX7PQE7P
-N KUBE-SEP-PQBQBGZJJ5FKN3TB
-N KUBE-SEP-WEW7NHLZ4Y5A5ZKF
-N KUBE-SEP-YIL6JZP7A3QYXJU2
-N KUBE-SEP-ZP3FB6NMPNCO4VBJ
-N KUBE-SEP-ZXMNUKOKXUTL2MK2
-N KUBE-SERVICES
-N KUBE-SVC-CNZCPOCNCNOROALA
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -s 10.244.1.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -s 10.244.2.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.2.2:80
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.1.2:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-PQBQBGZJJ5FKN3TB
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.2.2:80" -j KUBE-SEP-WEW7NHLZ4Y5A5ZKF
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.0.3:53" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.0.3:9153" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.3:53" -j KUBE-SEP-6E7XQMQ4RAYOWTTM

7. Inspect k8s-w1 and k8s-w2

(1) Check network interface state

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7a:2d:30:0b:37:11 brd ff:ff:ff:ff:ff:ff
6: veth5aaee95c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether e6:aa:64:ab:e4:47 brd ff:ff:ff:ff:ff:ff link-netns cni-30ede71c-cd06-5139-e25d-267ce0b09a24

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:ff:1f:86:1a:31 brd ff:ff:ff:ff:ff:ff
6: veth4ccb4288@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 5e:dd:78:cd:18:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-1a5528fd-a3b3-87cb-06a3-7b599dba3fc7
  • flannel.1 is the flannel CNI (VXLAN) interface
  • The cni0 bridge and veth interfaces make up the pod network (see the sketch below)
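
The VXLAN parameters behind flannel.1 (VNI, local VTEP address, UDP port) can be inspected with the detailed link view; a quick sketch:

# Show VXLAN details for flannel.1: vxlan id, local VTEP IP, dstport
ip -d link show flannel.1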

(2) Check routing tables

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 

>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
  • Each node routes its own pod CIDR (10.244.1.0/24 or 10.244.2.0/24) directly via cni0
  • The other nodes' CIDRs are forwarded via flannel.1

(3) Pod communication test

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -l app=webpod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
webpod-697b545f57-67wlk   1/1     Running   0          12m   10.244.2.2   k8s-w2   <none>           <none>
webpod-697b545f57-rnqgq   1/1     Running   0          12m   10.244.1.2   k8s-w1   <none>           <none>

Access the pod IP directly from the curl-pod deployed on the control plane:

(⎈|HomeLab:N/A) root@k8s-ctr:~# POD1IP=10.244.1.2
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $POD1IP
Hostname: webpod-697b545f57-rnqgq
IP: 127.0.0.1
IP: ::1
IP: 10.244.1.2
IP: fe80::3861:82ff:fe93:7fed
RemoteAddr: 10.244.0.4:45080
GET / HTTP/1.1
Host: 10.244.1.2
User-Agent: curl/8.14.1
Accept: */*

curl 10.244.1.2 → the webpod responds.

Access via the service name:

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep webpod
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.96.33.91   <none>        80/TCP    13m

NAME               ENDPOINTS                     AGE
endpoints/webpod   10.244.1.2:80,10.244.2.2:80   13m

curl webpod → requests are randomly load-balanced through the ClusterIP:

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
Hostname: webpod-697b545f57-67wlk
IP: 127.0.0.1
IP: ::1
IP: 10.244.2.2
IP: fe80::455:7ff:fe77:973f
RemoteAddr: 10.244.0.4:58490
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

The Hostname value alternates between the webpod pods on k8s-w1 and k8s-w2 (a counting sketch follows the output):

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-rnqgq
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-67wlk
Hostname: webpod-697b545f57-rnqgq
...
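
To quantify the split instead of eyeballing it, a bounded loop can be piped through sort | uniq; a quick sketch (the -t flag is dropped so the output pipes cleanly):

# Send 100 requests and count how many landed on each pod
kubectl exec curl-pod -- sh -c 'for i in $(seq 1 100); do curl -s webpod | grep Hostname; done' | sort | uniq -c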

8. Service & Endpoints

iptables NAT rules forwarding traffic to the webpod service (ClusterIP) have been added.

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath="{.spec.clusterIP}"
SVCIP=$(kubectl get svc webpod -o jsonpath="{.spec.clusterIP}")
iptables -t nat -S | grep $SVCIP

✅ Output

10.96.33.91
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ

The webpod service:

  • ClusterIP: 10.96.33.91
  • Endpoints: 10.244.1.2:80, 10.244.2.2:80

9. Check iptables NAT rules on the workers

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo iptables -t nat -S | grep $SVCIP ; echo; done

✅ Output

>> node : k8s-w1 <<
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ

>> node : k8s-w2 <<
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ

๐Ÿงฉ Introduction to Cilium CNI

Comparison of Standard Container Networking vs Cilium eBPF Networking

https://cilium.io/blog/2021/05/11/cni-benchmark/

https://ebpf.io/


โš™๏ธ Pre-installation Setup for Cilium CNI

1. Checking Cilium System Requirements

Confirm the architecture (x86_64) and kernel version (6.8.0-53-generic)

1
2
3
4
5
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# arch
x86_64

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# uname -r
6.8.0-53-generic
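
The same check can be run on the worker nodes with the sshpass loop used throughout this post (a sketch, assuming the same vagrant credentials):

1
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i 'arch; uname -r' ; echo; done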

2. Checking Basic Kernel Requirements

1
grep -E 'CONFIG_BPF|CONFIG_BPF_SYSCALL|CONFIG_NET_CLS_BPF|CONFIG_BPF_JIT|CONFIG_NET_CLS_ACT|CONFIG_NET_SCH_INGRESS|CONFIG_CRYPTO_SHA1|CONFIG_CRYPTO_USER_API_HASH|CONFIG_CGROUPS|CONFIG_CGROUP_BPF|CONFIG_PERF_EVENTS|CONFIG_SCHEDSTATS' /boot/config-$(uname -r)

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
# CONFIG_BPF_PRELOAD is not set
CONFIG_BPF_LSM=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_BPF=y
CONFIG_PERF_EVENTS=y
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=m
CONFIG_PERF_EVENTS_INTEL_CSTATE=m
# CONFIG_PERF_EVENTS_AMD_POWER is not set
CONFIG_PERF_EVENTS_AMD_UNCORE=m
CONFIG_PERF_EVENTS_AMD_BRS=y
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_ACT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_SHA1_SSSE3=m
CONFIG_SCHEDSTATS=y
CONFIG_BPF_EVENTS=y
CONFIG_BPF_KPROBE_OVERRIDE=y
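
Rather than scanning the grep output by eye, a small loop can assert that each required option is built in (=y) or available as a module (=m), a sketch covering a subset of the options above:

1
2
3
4
for opt in CONFIG_BPF CONFIG_BPF_SYSCALL CONFIG_NET_CLS_BPF CONFIG_BPF_JIT CONFIG_NET_CLS_ACT CONFIG_NET_SCH_INGRESS CONFIG_CGROUPS CONFIG_CGROUP_BPF; do
  # =m options additionally need their module loaded at runtime (see the geneve step below)
  grep -qE "^${opt}=(y|m)" /boot/config-$(uname -r) && echo "$opt: ok" || echo "$opt: MISSING"
done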

3. Checking Kernel Tunneling/Routing Options

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_VXLAN=y|CONFIG_VXLAN=m|CONFIG_GENEVE=y|CONFIG_GENEVE=m|CONFIG_FIB_RULES=y' /boot/config-$(uname -r)

โœ…ย ์ถœ๋ ฅ

1
2
3
CONFIG_FIB_RULES=y
CONFIG_VXLAN=m
CONFIG_GENEVE=m

4. Checking the geneve Module Load State

The geneve module is not loaded yet

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'

โœ…ย ์ถœ๋ ฅ

1
2
3
vxlan                 155648  0
ip6_udp_tunnel         16384  1 vxlan
udp_tunnel             32768  1 vxlan

After running modprobe geneve, the geneve module loads successfully

1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# modprobe geneve
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'

โœ…ย ์ถœ๋ ฅ

1
2
3
4
geneve                 49152  0
vxlan                 155648  0
ip6_udp_tunnel         16384  2 geneve,vxlan
udp_tunnel             32768  2 geneve,vxlan
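
modprobe only loads the module for the current boot. To keep geneve loaded after a reboot, a modules-load.d entry can be added on every node (a sketch, assuming the systemd-modules-load convention):

1
2
echo geneve > /etc/modules-load.d/geneve.conf
for i in w1 w2 ; do sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i 'echo geneve | sudo tee /etc/modules-load.d/geneve.conf' ; done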

5. Checking Netfilter-related Options

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_NETFILTER_XT_TARGET_TPROXY|CONFIG_NETFILTER_XT_TARGET_MARK|CONFIG_NETFILTER_XT_TARGET_CT|CONFIG_NETFILTER_XT_MATCH_MARK|CONFIG_NETFILTER_XT_MATCH_SOCKET' /boot/config-$(uname -r)

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m

6. Checking the Netkit Device Mode Option

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# grep -E 'CONFIG_NETKIT=y|CONFIG_NETKIT=m' /boot/config-$(uname -r)

โœ…ย ์ถœ๋ ฅ

1
CONFIG_NETKIT=y

7. eBPF Filesystem Mount State

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# mount | grep /sys/fs/bpf

โœ…ย ์ถœ๋ ฅ

1
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
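
The mount is already present here; if it were missing, the BPF filesystem could be mounted manually (a sketch of the standard bpffs mount invocation):

1
mount bpffs /sys/fs/bpf -t bpf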

8. Removing the Existing Flannel CNI

(1) Run helm uninstall -n kube-flannel flannel

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm uninstall -n kube-flannel flannel

โœ…ย ์ถœ๋ ฅ

1
release "flannel" uninstalled

(2) Checking for Leftover Resources

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm list -A

โœ…ย ์ถœ๋ ฅ

1
NAME	NAMESPACE	REVISION	UPDATED	STATUS	CHART	APP VERSION
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n kube-flannel

โœ…ย ์ถœ๋ ฅ

1
No resources found in kube-flannel namespace.

(3) Deleting the Namespace

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ns kube-flannel

โœ…ย ์ถœ๋ ฅ

1
namespace "kube-flannel" deleted

(4) Checking All Pod States

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                          1/1     Running   0          46h   10.244.0.4       k8s-ctr   <none>           <none>
default       webpod-697b545f57-67wlk           1/1     Running   0          46h   10.244.2.2       k8s-w2    <none>           <none>
default       webpod-697b545f57-rnqgq           1/1     Running   0          46h   10.244.1.2       k8s-w1    <none>           <none>
kube-system   coredns-674b8bbfcf-7gx6f          1/1     Running   0          47h   10.244.0.2       k8s-ctr   <none>           <none>
kube-system   coredns-674b8bbfcf-mjnst          1/1     Running   0          47h   10.244.0.3       k8s-ctr   <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0          47h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0          47h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0          47h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-proxy-b6zgw                  1/1     Running   0          47h   192.168.10.101   k8s-w1    <none>           <none>
kube-system   kube-proxy-grfn2                  1/1     Running   0          47h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-proxy-p678s                  1/1     Running   0          47h   192.168.10.102   k8s-w2    <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0          47h   192.168.10.100   k8s-ctr   <none>           <none>

9. ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ œ๊ฑฐ ์ „ ์ƒํƒœ ํ™•์ธ

๊ฐ ๋…ธ๋“œ(k8s-ctr, k8s-w1, k8s-w2)์˜ ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์กฐํšŒ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c link

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether e6:0f:9b:40:c3:ec brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:af:58:af:44:e3 brd ff:ff:ff:ff:ff:ff
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
8: veth51449dca@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 42:58:20:09:e6:66 brd ff:ff:ff:ff:ff:ff link-netns cni-a2abf4c9-a021-9a05-3db5-2b9f8761bb5a
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 4e:13:00:49:ce:71 brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7a:2d:30:0b:37:11 brd ff:ff:ff:ff:ff:ff
6: veth5aaee95c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether e6:aa:64:ab:e4:47 brd ff:ff:ff:ff:ff:ff link-netns cni-30ede71c-cd06-5139-e25d-267ce0b09a24

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 56:19:dc:74:53:eb brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:ff:1f:86:1a:31 brd ff:ff:ff:ff:ff:ff
6: veth4ccb4288@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 5e:dd:78:cd:18:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-1a5528fd-a3b3-87cb-06a3-7b599dba3fc7
  • Even after flannel is deleted, the flannel.1 and cni0 interfaces remain

10. Checking Bridge State on Each Node

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# brctl show

โœ…ย ์ถœ๋ ฅ

1
2
3
4
bridge name	bridge id		STP enabled	interfaces
cni0		8000.f6af58af44e3	no		veth470cf46f
							veth51449dca
							vethe4603105
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
>> node : k8s-w1 <<
bridge name	bridge id		STP enabled	interfaces
cni0		8000.7a2d300b3711	no		veth5aaee95c

>> node : k8s-w2 <<
bridge name	bridge id		STP enabled	interfaces
cni0		8000.9eff1f861a31	no		veth4ccb4288
  • k8s-ctr, k8s-w1, and k8s-w2 all still show veth interfaces attached to the cni0 bridge

11. Checking the Routing Table on Each Node

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 

>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
  • Traffic is still being routed via flannel.1 and cni0

12. Removing the vNICs (flannel.1, cni0)

1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip link del flannel.1
ip link del cni0
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del flannel.1 ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
>> node : k8s-w1 <<

>> node : k8s-w2 <<
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del cni0 ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
>> node : k8s-w1 <<

>> node : k8s-w2 <<
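
Flannel can also leave its CNI config behind in /etc/cni/net.d. Listing that directory on each node shows what remains; the conflist filename below is an assumption, so check the ls output before deleting anything:

1
2
ls /etc/cni/net.d/
# rm /etc/cni/net.d/10-flannel.conflist   # assumed default flannel conflist name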

13. Verifying After Removal

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c link

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
6: vethe4603105@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:28:80:b2:a3:a2 brd ff:ff:ff:ff:ff:ff link-netns cni-05426de7-dd90-2656-df69-64505867d5df
7: veth470cf46f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:8d:d9:1c:52:0e brd ff:ff:ff:ff:ff:ff link-netns cni-1d343493-f993-e0d5-e30c-163d1baf2a6f
8: veth51449dca@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:58:20:09:e6:66 brd ff:ff:ff:ff:ff:ff link-netns cni-a2abf4c9-a021-9a05-3db5-2b9f8761bb5a
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
6: veth5aaee95c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:aa:64:ab:e4:47 brd ff:ff:ff:ff:ff:ff link-netns cni-30ede71c-cd06-5139-e25d-267ce0b09a24

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
6: veth4ccb4288@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5e:dd:78:cd:18:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-1a5528fd-a3b3-87cb-06a3-7b599dba3fc7
  • The flannel.1 and cni0 interfaces have all been removed
1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# brctl show
# no output
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
>> node : k8s-w1 <<

>> node : k8s-w2 <<
  • brctl show likewise no longer prints anything for the cni0 bridge
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 

>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102
  • The flannel-related routes (10.244.x.x) are gone; only the base eth0/eth1 routes remain

14. Removing kube-proxy Resources

Delete the DaemonSet and ConfigMap

1
2
3
4
5
6
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system delete ds kube-proxy
kubectl -n kube-system delete cm kube-proxy

# result
daemonset.apps "kube-proxy" deleted
configmap "kube-proxy" deleted

Even after the removal, existing pods keep their IPs in the 10.244.x.x range

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
NAMESPACE     NAME                              READY   STATUS    RESTARTS      AGE   IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                          1/1     Running   0             46h   10.244.0.4       k8s-ctr   <none>           <none>
default       webpod-697b545f57-67wlk           1/1     Running   0             46h   10.244.2.2       k8s-w2    <none>           <none>
default       webpod-697b545f57-rnqgq           1/1     Running   0             46h   10.244.1.2       k8s-w1    <none>           <none>
kube-system   coredns-674b8bbfcf-7gx6f          0/1     Running   0             2d    10.244.0.2       k8s-ctr   <none>           <none>
kube-system   coredns-674b8bbfcf-mjnst          0/1     Running   0             2d    10.244.0.3       k8s-ctr   <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0             2d    192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0             2d    192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0             2d    192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0             2d    192.168.10.100   k8s-ctr   <none>           <none>

15. Checking Pod Network Communication

Confirm that pod networking is broken: the curl below returns no response

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
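
Because the command above can block indefinitely, a bounded variant makes the failure explicit (a sketch using curl's --max-time flag; the non-zero exit code propagates through kubectl exec):

1
kubectl exec -it curl-pod -- curl --max-time 3 webpod || echo 'no response: pod network is down'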

16. Checking Residual iptables Rules

The FLANNEL-POSTRTG, FLANNEL-FWD, and KUBE-* chains are still present

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables-save

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:54:05 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 22:54:05 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:54:05 2025
*filter
:INPUT ACCEPT [1979062:417540265]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1959906:367460835]
:FLANNEL-FWD - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment "flanneld forward" -j FLANNEL-FWD
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A FLANNEL-FWD -s 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
-A FLANNEL-FWD -d 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Wed Jul 16 22:54:05 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:54:05 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:FLANNEL-POSTRTG - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-ETI7FUQQE3BS2IXE - [0:0]
:KUBE-SEP-PQBQBGZJJ5FKN3TB - [0:0]
:KUBE-SEP-WEW7NHLZ4Y5A5ZKF - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-CNZCPOCNCNOROALA - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment "flanneld masq" -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -s 10.244.1.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -s 10.244.2.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-WEW7NHLZ4Y5A5ZKF -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.2.2:80
-A KUBE-SERVICES -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.33.91/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.1.2:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-PQBQBGZJJ5FKN3TB
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.2.2:80" -j KUBE-SEP-WEW7NHLZ4Y5A5ZKF
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Wed Jul 16 22:54:05 2025
  • Even after flannel is removed, these iptables chains linger and can affect the network path (a quick count is shown below)
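
A quick count makes the leftover footprint concrete before cleaning up (a sketch; it simply counts every line mentioning either chain family):

1
iptables-save | grep -cE 'KUBE|FLANNEL'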

17. Resetting iptables

Remove the KUBE- and FLANNEL-related rules

1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables-save | grep -v KUBE | grep -v FLANNEL | iptables-restore
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables-save

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:55:46 2025
*mangle
:PREROUTING ACCEPT [2973:530951]
:INPUT ACCEPT [2973:530951]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2840:534805]
:POSTROUTING ACCEPT [2840:534805]
COMMIT
# Completed on Wed Jul 16 22:55:46 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:55:46 2025
*filter
:INPUT ACCEPT [2973:530951]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2840:534805]
COMMIT
# Completed on Wed Jul 16 22:55:46 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:55:46 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 22:55:46 2025
1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo iptables-save

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:56:36 2025
*mangle
:PREROUTING ACCEPT [25:5510]
:INPUT ACCEPT [25:5510]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:5592]
:POSTROUTING ACCEPT [21:5592]
COMMIT
# Completed on Wed Jul 16 22:56:36 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:56:36 2025
*filter
:INPUT ACCEPT [25:5510]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [21:5592]
COMMIT
# Completed on Wed Jul 16 22:56:36 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:56:36 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 22:56:36 2025
1
2
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 sudo iptables-save

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:57:32 2025
*mangle
:PREROUTING ACCEPT [25:5374]
:INPUT ACCEPT [25:5374]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [20:5540]
:POSTROUTING ACCEPT [20:5540]
COMMIT
# Completed on Wed Jul 16 22:57:32 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:57:32 2025
*filter
:INPUT ACCEPT [25:5374]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [20:5540]
COMMIT
# Completed on Wed Jul 16 22:57:32 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 22:57:32 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 22:57:32 2025

18. State After Cleanup

The pods, however, still hold their original 10.244.x.x IPs, and with no CNI installed they remain unable to communicate

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

1
2
3
4
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          46h   10.244.0.4   k8s-ctr   <none>           <none>
webpod-697b545f57-67wlk   1/1     Running   0          46h   10.244.2.2   k8s-w2    <none>           <none>
webpod-697b545f57-rnqgq   1/1     Running   0          46h   10.244.1.2   k8s-w1    <none>           <none>

๐Ÿ› ๏ธ Installing Cilium

1. Adding the Cilium Helm Repository

1
2
3
4
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm repo add cilium https://helm.cilium.io/

# result
"cilium" has been added to your repositories

2. Installing Cilium (Helm)

  • kubeProxyReplacement=true : Cilium fully replaces kube-proxy
  • installNoConntrackIptablesRules=true : install NOTRACK iptables rules so pod-to-pod traffic bypasses connection tracking
  • bpf.masquerade=true : BPF-based masquerading
1
2
3
4
5
6
7
8
9
10
11
12
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm install cilium cilium/cilium --version 1.17.5 --namespace kube-system \
--set k8sServiceHost=192.168.10.100 --set k8sServicePort=6443 \
--set kubeProxyReplacement=true \
--set routingMode=native \
--set autoDirectNodeRoutes=true \
--set ipam.mode="cluster-pool" \
--set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16 \
--set endpointRoutes.enabled=true \
--set installNoConntrackIptablesRules=true \
--set bpf.masquerade=true \
--set ipv6.enabled=false

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
NAME: cilium
LAST DEPLOYED: Wed Jul 16 23:03:28 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.17.5.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp

3. Checking the Cilium Values

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm get values cilium -n kube-system

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
USER-SUPPLIED VALUES:
autoDirectNodeRoutes: true
bpf:
  masquerade: true
endpointRoutes:
  enabled: true
installNoConntrackIptablesRules: true
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
    - 172.20.0.0/16
ipv4NativeRoutingCIDR: 172.20.0.0/16
ipv6:
  enabled: false
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
routingMode: native

4. Checking CRD Creation

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2025-07-16T14:04:32Z
ciliumclusterwidenetworkpolicies.cilium.io   2025-07-16T14:04:32Z
ciliumendpoints.cilium.io                    2025-07-16T14:04:32Z
ciliumexternalworkloads.cilium.io            2025-07-16T14:04:32Z
ciliumidentities.cilium.io                   2025-07-16T14:04:32Z
ciliuml2announcementpolicies.cilium.io       2025-07-16T14:04:32Z
ciliumloadbalancerippools.cilium.io          2025-07-16T14:04:32Z
ciliumnetworkpolicies.cilium.io              2025-07-16T14:04:33Z
ciliumnodeconfigs.cilium.io                  2025-07-16T14:04:32Z
ciliumnodes.cilium.io                        2025-07-16T14:04:32Z
ciliumpodippools.cilium.io                   2025-07-16T14:04:32Z
  • Cilium CRDs such as ciliumendpoints.cilium.io and ciliumnodes.cilium.io have been created

5. Monitoring Cilium Pod Status

1
watch -d kubectl get pod -A

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
Every 2.0s: kubectl get pod -A                   k8s-ctr: Wed Jul 16 23:08:45 2025

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       curl-pod                           1/1     Running   0          46h
default       webpod-697b545f57-67wlk            1/1     Running   0          46h
default       webpod-697b545f57-rnqgq            1/1     Running   0          46h
kube-system   cilium-6pndl                       1/1     Running   0          6m5s
kube-system   cilium-9bpz2                       1/1     Running   0          6m5s
kube-system   cilium-envoy-b4zts                 1/1     Running   0          6m5s
kube-system   cilium-envoy-fkj4l                 1/1     Running   0          6m5s
kube-system   cilium-envoy-z9mvb                 1/1     Running   0          6m5s
kube-system   cilium-operator-865bc7f457-rgj7k   1/1     Running   0          6m5s
kube-system   cilium-operator-865bc7f457-s2zv2   1/1     Running   0          6m5s
kube-system   cilium-zhq77                       1/1     Running   0          6m5s
kube-system   coredns-674b8bbfcf-697cz           1/1     Running   0          4m44s
kube-system   coredns-674b8bbfcf-rtz5g           1/1     Running   0          4m58s
kube-system   etcd-k8s-ctr                       1/1     Running   0          2d
kube-system   kube-apiserver-k8s-ctr             1/1     Running   0          2d
kube-system   kube-controller-manager-k8s-ctr    1/1     Running   0          2d
kube-system   kube-scheduler-k8s-ctr             1/1     Running   0          2d
  • The cilium-*, cilium-operator-*, and cilium-envoy-* pods are all Running, confirming a healthy deployment

6. Detailed Cilium Status Check

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
KVStore:                Disabled   
Kubernetes:             Ok         1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs:        ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   True   [eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   192.168.10.100 fe80::a00:27ff:fe80:23b9 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
CNI Config file:        successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                 Ok   1.17.5 (v1.17.5-69aab28c)
NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok   
IPAM:                   IPv4: 3/254 allocated from 172.20.0.0/24, 
Allocated addresses:
  172.20.0.10 (kube-system/coredns-674b8bbfcf-rtz5g)
  172.20.0.117 (health)
  172.20.0.187 (router)
IPv4 BIG TCP:           Disabled
IPv6 BIG TCP:           Disabled
BandwidthManager:       Disabled
Routing:                Network: Native   Host: BPF
Attach Mode:            TCX
Device Mode:            veth
Masquerading:           BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
...
  • KubeProxyReplacement is enabled, with BPF masquerade, the IPv4 native-routing CIDR, and other details shown
  • The CNI config file /host/etc/cni/net.d/05-cilium.conflist has been written

7. Inspecting iptables State

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables -t nat -S ; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
>> node : k8s-w1 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat

>> node : k8s-w2 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
  • After installing Cilium, the NAT table shows CILIUM_ chains and only a minimal set of iptables rules
  • Most traffic handling is performed by eBPF programs; the service table now lives in BPF maps, as shown below
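
Since kube-proxy is gone, the service-to-backend mapping that previously lived in KUBE-SVC-* chains can be inspected in Cilium's BPF maps with cilium-dbg:

1
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg bpf lb list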

8. Checking Services

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get svc -A

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
NAMESPACE     NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP                  2d
default       webpod         ClusterIP   10.96.33.91     <none>        80/TCP                   46h
kube-system   cilium-envoy   ClusterIP   None            <none>        9964/TCP                 14m
kube-system   hubble-peer    ClusterIP   10.96.239.219   <none>        443/TCP                  14m
kube-system   kube-dns       ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   2d

Cilium์ด eBPF๋กœ ํŒจํ‚ท์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ๋•Œ๋ฌธ์— iptables ๊ทœ์น™์ด ๋ณต์žกํ•˜์ง€ ์•Š์Œ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables-save

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*mangle
:PREROUTING ACCEPT [315245:427624291]
:INPUT ACCEPT [315245:427624291]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [304727:93833315]
:POSTROUTING ACCEPT [304727:93833315]
:CILIUM_POST_mangle - [0:0]
:CILIUM_PRE_mangle - [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0x21a60200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 42529 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0x21a60200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 42529 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:CILIUM_OUTPUT_raw - [0:0]
:CILIUM_PRE_raw - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*filter
:INPUT ACCEPT [315245:427624291]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [304727:93833315]
:CILIUM_FORWARD - [0:0]
:CILIUM_INPUT - [0:0]
:CILIUM_OUTPUT - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xe00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0x400/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
COMMIT
# Completed on Wed Jul 16 23:18:29 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:18:29 2025
*nat
:PREROUTING ACCEPT [46:2536]
:INPUT ACCEPT [46:2536]
:OUTPUT ACCEPT [4956:297326]
:POSTROUTING ACCEPT [4956:297326]
:CILIUM_OUTPUT_nat - [0:0]
:CILIUM_POST_nat - [0:0]
:CILIUM_PRE_nat - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
COMMIT
# Completed on Wed Jul 16 23:18:29 2025

๐Ÿ”Ž Checking the Pod CIDR (IPAM)

1. Checking the Pod CIDR (IPAM)

Each node's Pod CIDR still shows the 10.244.x.x/24 ranges configured when flannel was in use

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

โœ…ย ์ถœ๋ ฅ

1
2
3
k8s-ctr	10.244.0.0/24
k8s-w1	10.244.1.0/24
k8s-w2	10.244.2.0/24

2. Checking Existing Pod IPs

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

1
2
3
4
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          46h   10.244.0.4   k8s-ctr   <none>           <none>
webpod-697b545f57-67wlk   1/1     Running   0          47h   10.244.2.2   k8s-w2    <none>           <none>
webpod-697b545f57-rnqgq   1/1     Running   0          47h   10.244.1.2   k8s-w1    <none>           <none>

The Cilium CIDR has not been applied to these pods, so communication is still broken

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k exec -it curl-pod -- curl webpod

3. Checking the CiliumNodes Resources

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes 

โœ…ย ์ถœ๋ ฅ

1
2
3
4
NAME      CILIUMINTERNALIP   INTERNALIP       AGE
k8s-ctr   172.20.0.187       192.168.10.100   19m
k8s-w1    172.20.2.165       192.168.10.101   19m
k8s-w2    172.20.1.96        192.168.10.102   19m

๊ฐ ๋…ธ๋“œ์— ๋Œ€ํ•ด Cilium์ด ๊ด€๋ฆฌํ•˜๋Š” InternalIP์™€ Cilium PodCIDR(172.20.x.x ๋Œ€์—ญ) ํ™•์ธ

4. CiliumNodes PodCIDRs ํ™•์ธ

๊ฐ ๋…ธ๋“œ๋ณ„๋กœ 172.20.x.x/24 CIDR์ด ์ถœ๋ ฅ๋จ (Cilium์—์„œ ๊ด€๋ฆฌํ•˜๋Š” ๋„คํŠธ์›Œํฌ)

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes -o json | grep podCIDRs -A2

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
                    "podCIDRs": [
                        "172.20.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "172.20.2.0/24"
                    ],
--
                    "podCIDRs": [
                        "172.20.1.0/24"
                    ],
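
The same values can be pulled without grep via jsonpath (a sketch, assuming the CiliumNode schema exposes the pool under .spec.ipam.podCIDRs):

1
kubectl get ciliumnodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.ipam.podCIDRs}{"\n"}{end}'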

5. Restarting the Deployment Rollout

1
2
3
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deployment webpod
# result
deployment.apps/webpod restarted
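
To block until the new ReplicaSet is fully rolled out before checking IPs, kubectl's rollout status can be used (a sketch):

1
kubectl rollout status deployment webpod --timeout=60s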

As webpod is redeployed, the pods' IPs move into the Cilium-managed CIDR (172.20.x.x)

1
2
3
4
5
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                 1/1     Running   0          47h   10.244.0.4     k8s-ctr   <none>           <none>
webpod-9894b69cd-v2n4m   1/1     Running   0          15s   172.20.1.88    k8s-w2    <none>           <none>
webpod-9894b69cd-zxcnw   1/1     Running   0          19s   172.20.2.124   k8s-w1    <none>           <none>

6. Redeploying curl-pod

(1) Delete the existing curl-pod

1
2
3
4
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~#  kubectl delete pod curl-pod --grace-period=0

# result
pod "curl-pod" deleted

(2) Redeploy curl-pod

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# result
pod/curl-pod created

(3) ์žฌ๋ฐฐํฌ ํ›„ kubectl get pod -owide ์ถœ๋ ฅ

๋‹ค์‹œ ๋ฐฐํฌํ•˜์—ฌ Cilium ๊ด€๋ฆฌ CIDR(172.20.0.x) ๋Œ€์—ญ์˜ IP๋กœ ์ •์ƒ ๋ฐœ๊ธ‰๋จ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ…ย ์ถœ๋ ฅ

1
2
3
4
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                 1/1     Running   0          42s     172.20.0.191   k8s-ctr   <none>           <none>
webpod-9894b69cd-v2n4m   1/1     Running   0          3m32s   172.20.1.88    k8s-w2    <none>           <none>
webpod-9894b69cd-zxcnw   1/1     Running   0          3m36s   172.20.2.124   k8s-w1    <none>           <none>

7. Checking CiliumEndpoints

Each pod's endpoint state shows ready, along with its Cilium-managed IP (172.20.x.x)

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints

โœ…ย ์ถœ๋ ฅ

1
2
3
4
NAME                     SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
curl-pod                 13136               ready            172.20.0.191   
webpod-9894b69cd-v2n4m   8254                ready            172.20.1.88    
webpod-9894b69cd-zxcnw   8254                ready            172.20.2.124 

8. Detailed Cilium Endpoint List

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                     
425        Disabled           Disabled          4          reserved:health                                                                     172.20.0.117   ready   
1827       Disabled           Disabled          13136      k8s:app=curl                                                                        172.20.0.191   ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                     
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                            
                                                           k8s:io.kubernetes.pod.namespace=default                                                                    
1912       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                          ready   
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                
                                                           reserved:host                                                                                              
3021       Disabled           Disabled          39751      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.0.10    ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                            
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                
                                                           k8s:k8s-app=kube-dns

9. Checking Pod-to-Pod Communication

The webpod Hostname alternates between pods → communication over the Cilium network works

1
2
3
4
5
6
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-9894b69cd-v2n4m
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-9894b69cd-zxcnw
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-9894b69cd-v2n4m

๐Ÿ’ป Installing the Cilium CLI

1. Installing the Cilium CLI

Download the Cilium CLI from the GitHub releases and install it into /usr/local/bin

1
2
3
4
5
6
7
8
9
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz >/dev/null 2>&1
tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz

# result
cilium
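
If run before the rm step above, the release's published checksum file can be used to verify the download (a sketch following the same URL pattern):

1
2
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum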

2. Verifying the Path with which cilium

1
2
3
4
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# which cilium

# result
/usr/local/bin/cilium

3. Checking Cilium Status (CLI)

Run cilium status

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium status

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
    /ยฏยฏ\
 /ยฏยฏ\__/ยฏยฏ\    Cilium:             OK
 \__/ยฏยฏ\__/    Operator:           OK
 /ยฏยฏ\__/ยฏยฏ\    Envoy DaemonSet:    OK
 \__/ยฏยฏ\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 2, Ready: 2/2, Available: 2/2
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 2
                       clustermesh-apiserver    
                       hubble-relay             
Cluster Pods:          5/5 managed by Cilium
Helm chart version:    1.17.5
Image versions         cilium             quay.io/cilium/cilium:v1.17.5@sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626@sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.5@sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e: 2
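
For scripting, the CLI can block until all components are ready instead of being polled manually (a sketch using the --wait flag):

1
cilium status --wait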

4. Check the Cilium Configuration

Check the current Cilium agent settings with the cilium config view command (a grep sketch for the key datapath settings follows the output below)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config view

โœ… Output

agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
arping-refresh-period                             30s
auto-direct-node-routes                           true
bpf-distributed-lru                               false
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
bpf-lb-algorithm-annotation                       false
bpf-lb-external-clusterip                         false
bpf-lb-map-max                                    65536
bpf-lb-mode-annotation                            false
bpf-lb-sock                                       false
bpf-lb-source-range-all-types                     false
bpf-map-dynamic-size-ratio                        0.0025
bpf-policy-map-max                                16384
bpf-root                                          /sys/fs/bpf
cgroup-root                                       /run/cilium/cgroupv2
cilium-endpoint-gc-interval                       5m0s
cluster-id                                        0
cluster-name                                      default
cluster-pool-ipv4-cidr                            172.20.0.0/16
cluster-pool-ipv4-mask-size                       24
clustermesh-enable-endpoint-sync                  false
clustermesh-enable-mcs-api                        false
cni-exclusive                                     true
cni-log-file                                      /var/run/cilium/cilium-cni.log
custom-cni-conf                                   false
datapath-mode                                     veth
debug                                             false
debug-verbose                                     
default-lb-service-ipam                           lbipam
direct-routing-skip-unreachable                   false
dnsproxy-enable-transparent-mode                  true
dnsproxy-socket-linger-timeout                    10
egress-gateway-reconciliation-trigger-interval    1s
enable-auto-protect-node-port-range               true
enable-bpf-clock-probe                            false
enable-bpf-masquerade                             true
enable-endpoint-health-checking                   true
enable-endpoint-lockdown-on-policy-overflow       false
enable-endpoint-routes                            true
enable-experimental-lb                            false
enable-health-check-loadbalancer-ip               false
enable-health-check-nodeport                      true
enable-health-checking                            true
enable-hubble                                     true
enable-internal-traffic-policy                    true
enable-ipv4                                       true
enable-ipv4-big-tcp                               false
enable-ipv4-masquerade                            true
enable-ipv6                                       false
enable-ipv6-big-tcp                               false
enable-ipv6-masquerade                            true
enable-k8s-networkpolicy                          true
enable-k8s-terminating-endpoint                   true
enable-l2-neigh-discovery                         true
enable-l7-proxy                                   true
enable-lb-ipam                                    true
enable-local-redirect-policy                      false
enable-masquerade-to-route-source                 false
enable-metrics                                    true
enable-node-selector-labels                       false
enable-non-default-deny-policies                  true
enable-policy                                     default
enable-policy-secrets-sync                        true
enable-runtime-device-detection                   true
enable-sctp                                       false
enable-source-ip-verification                     true
enable-svc-source-range-check                     true
enable-tcx                                        true
enable-vtep                                       false
enable-well-known-identities                      false
enable-xt-socket-fallback                         true
envoy-access-log-buffer-size                      4096
envoy-base-id                                     0
envoy-keep-cap-netbindservice                     false
external-envoy-proxy                              true
health-check-icmp-failure-threshold               3
http-retry-count                                  3
hubble-disable-tls                                false
hubble-export-file-max-backups                    5
hubble-export-file-max-size-mb                    10
hubble-listen-address                             :4244
hubble-socket-path                                /var/run/cilium/hubble.sock
hubble-tls-cert-file                              /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files                        /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file                               /var/lib/cilium/tls/hubble/server.key
identity-allocation-mode                          crd
identity-gc-interval                              15m0s
identity-heartbeat-timeout                        30m0s
install-no-conntrack-iptables-rules               true
ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s
iptables-random-fully                             false
ipv4-native-routing-cidr                          172.20.0.0/16
k8s-require-ipv4-pod-cidr                         false
k8s-require-ipv6-pod-cidr                         false
kube-proxy-replacement                            true
kube-proxy-replacement-healthz-bind-address       
max-connected-clusters                            255
mesh-auth-enabled                                 true
mesh-auth-gc-interval                             5m0s
mesh-auth-queue-size                              1024
mesh-auth-rotated-identities-queue-size           1024
monitor-aggregation                               medium
monitor-aggregation-flags                         all
monitor-aggregation-interval                      5s
nat-map-stats-entries                             32
nat-map-stats-interval                            30s
node-port-bind-protection                         true
nodeport-addresses                                
nodes-gc-interval                                 5m0s
operator-api-serve-addr                           127.0.0.1:9234
operator-prometheus-serve-addr                    :9963
policy-cidr-match-mode                            
policy-secrets-namespace                          cilium-secrets
policy-secrets-only-from-secrets-namespace        true
preallocate-bpf-maps                              false
procfs                                            /host/proc
proxy-connect-timeout                             2
proxy-idle-timeout-seconds                        60
proxy-initial-fetch-timeout                       30
proxy-max-concurrent-retries                      128
proxy-max-connection-duration-seconds             0
proxy-max-requests-per-connection                 0
proxy-xff-num-trusted-hops-egress                 0
proxy-xff-num-trusted-hops-ingress                0
remove-cilium-node-taints                         true
routing-mode                                      native
service-no-backend-response                       reject
set-cilium-is-up-condition                        true
set-cilium-node-taints                            true
synchronize-k8s-nodes                             true
tofqdns-dns-reject-response-code                  refused
tofqdns-enable-dns-compression                    true
tofqdns-endpoint-max-ip-per-hostname              1000
tofqdns-idle-connection-grace-period              0s
tofqdns-max-deferred-connection-deletes           10000
tofqdns-proxy-response-max-delay                  100ms
tunnel-protocol                                   vxlan
tunnel-source-port-range                          0-0
unmanaged-pod-watcher-interval                    15
vtep-cidr                                         
vtep-endpoint                                     
vtep-mac                                          
vtep-mask                                         
write-cni-conf-when-ready                         /host/etc/cni/net.d/05-cilium.conflist
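
A handful of these settings drive this lab's datapath (native routing, cluster-pool IPAM, kube-proxy replacement); rather than scanning the full list, they can be filtered with a plain grep, e.g.:

# pull out the datapath-relevant keys from the full config dump
cilium config view | grep -E 'routing-mode|ipam|kube-proxy-replacement|cluster-pool'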

5. Check the Cilium ConfigMap

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | jq

โœ… Output

{
  "apiVersion": "v1",
  "data": {
    "agent-not-ready-taint-key": "node.cilium.io/agent-not-ready",
    "arping-refresh-period": "30s",
    "auto-direct-node-routes": "true",
    "bpf-distributed-lru": "false",
    "bpf-events-drop-enabled": "true",
    "bpf-events-policy-verdict-enabled": "true",
    "bpf-events-trace-enabled": "true",
    "bpf-lb-acceleration": "disabled",
    "bpf-lb-algorithm-annotation": "false",
    "bpf-lb-external-clusterip": "false",
    "bpf-lb-map-max": "65536",
    "bpf-lb-mode-annotation": "false",
    "bpf-lb-sock": "false",
    "bpf-lb-source-range-all-types": "false",
    "bpf-map-dynamic-size-ratio": "0.0025",
    "bpf-policy-map-max": "16384",
    "bpf-root": "/sys/fs/bpf",
    "cgroup-root": "/run/cilium/cgroupv2",
    "cilium-endpoint-gc-interval": "5m0s",
    "cluster-id": "0",
    "cluster-name": "default",
    "cluster-pool-ipv4-cidr": "172.20.0.0/16",
    "cluster-pool-ipv4-mask-size": "24",
    "clustermesh-enable-endpoint-sync": "false",
    "clustermesh-enable-mcs-api": "false",
    "cni-exclusive": "true",
    "cni-log-file": "/var/run/cilium/cilium-cni.log",
    "custom-cni-conf": "false",
    "datapath-mode": "veth",
    "debug": "false",
    "debug-verbose": "",
    "default-lb-service-ipam": "lbipam",
    "direct-routing-skip-unreachable": "false",
    "dnsproxy-enable-transparent-mode": "true",
    "dnsproxy-socket-linger-timeout": "10",
    "egress-gateway-reconciliation-trigger-interval": "1s",
    "enable-auto-protect-node-port-range": "true",
    "enable-bpf-clock-probe": "false",
    "enable-bpf-masquerade": "true",
    "enable-endpoint-health-checking": "true",
    "enable-endpoint-lockdown-on-policy-overflow": "false",
    "enable-endpoint-routes": "true",
    "enable-experimental-lb": "false",
    "enable-health-check-loadbalancer-ip": "false",
    "enable-health-check-nodeport": "true",
    "enable-health-checking": "true",
    "enable-hubble": "true",
    "enable-internal-traffic-policy": "true",
    "enable-ipv4": "true",
    "enable-ipv4-big-tcp": "false",
    "enable-ipv4-masquerade": "true",
    "enable-ipv6": "false",
    "enable-ipv6-big-tcp": "false",
    "enable-ipv6-masquerade": "true",
    "enable-k8s-networkpolicy": "true",
    "enable-k8s-terminating-endpoint": "true",
    "enable-l2-neigh-discovery": "true",
    "enable-l7-proxy": "true",
    "enable-lb-ipam": "true",
    "enable-local-redirect-policy": "false",
    "enable-masquerade-to-route-source": "false",
    "enable-metrics": "true",
    "enable-node-selector-labels": "false",
    "enable-non-default-deny-policies": "true",
    "enable-policy": "default",
    "enable-policy-secrets-sync": "true",
    "enable-runtime-device-detection": "true",
    "enable-sctp": "false",
    "enable-source-ip-verification": "true",
    "enable-svc-source-range-check": "true",
    "enable-tcx": "true",
    "enable-vtep": "false",
    "enable-well-known-identities": "false",
    "enable-xt-socket-fallback": "true",
    "envoy-access-log-buffer-size": "4096",
    "envoy-base-id": "0",
    "envoy-keep-cap-netbindservice": "false",
    "external-envoy-proxy": "true",
    "health-check-icmp-failure-threshold": "3",
    "http-retry-count": "3",
    "hubble-disable-tls": "false",
    "hubble-export-file-max-backups": "5",
    "hubble-export-file-max-size-mb": "10",
    "hubble-listen-address": ":4244",
    "hubble-socket-path": "/var/run/cilium/hubble.sock",
    "hubble-tls-cert-file": "/var/lib/cilium/tls/hubble/server.crt",
    "hubble-tls-client-ca-files": "/var/lib/cilium/tls/hubble/client-ca.crt",
    "hubble-tls-key-file": "/var/lib/cilium/tls/hubble/server.key",
    "identity-allocation-mode": "crd",
    "identity-gc-interval": "15m0s",
    "identity-heartbeat-timeout": "30m0s",
    "install-no-conntrack-iptables-rules": "true",
    "ipam": "cluster-pool",
    "ipam-cilium-node-update-rate": "15s",
    "iptables-random-fully": "false",
    "ipv4-native-routing-cidr": "172.20.0.0/16",
    "k8s-require-ipv4-pod-cidr": "false",
    "k8s-require-ipv6-pod-cidr": "false",
    "kube-proxy-replacement": "true",
    "kube-proxy-replacement-healthz-bind-address": "",
    "max-connected-clusters": "255",
    "mesh-auth-enabled": "true",
    "mesh-auth-gc-interval": "5m0s",
    "mesh-auth-queue-size": "1024",
    "mesh-auth-rotated-identities-queue-size": "1024",
    "monitor-aggregation": "medium",
    "monitor-aggregation-flags": "all",
    "monitor-aggregation-interval": "5s",
    "nat-map-stats-entries": "32",
    "nat-map-stats-interval": "30s",
    "node-port-bind-protection": "true",
    "nodeport-addresses": "",
    "nodes-gc-interval": "5m0s",
    "operator-api-serve-addr": "127.0.0.1:9234",
    "operator-prometheus-serve-addr": ":9963",
    "policy-cidr-match-mode": "",
    "policy-secrets-namespace": "cilium-secrets",
    "policy-secrets-only-from-secrets-namespace": "true",
    "preallocate-bpf-maps": "false",
    "procfs": "/host/proc",
    "proxy-connect-timeout": "2",
    "proxy-idle-timeout-seconds": "60",
    "proxy-initial-fetch-timeout": "30",
    "proxy-max-concurrent-retries": "128",
    "proxy-max-connection-duration-seconds": "0",
    "proxy-max-requests-per-connection": "0",
    "proxy-xff-num-trusted-hops-egress": "0",
    "proxy-xff-num-trusted-hops-ingress": "0",
    "remove-cilium-node-taints": "true",
    "routing-mode": "native",
    "service-no-backend-response": "reject",
    "set-cilium-is-up-condition": "true",
    "set-cilium-node-taints": "true",
    "synchronize-k8s-nodes": "true",
    "tofqdns-dns-reject-response-code": "refused",
    "tofqdns-enable-dns-compression": "true",
    "tofqdns-endpoint-max-ip-per-hostname": "1000",
    "tofqdns-idle-connection-grace-period": "0s",
    "tofqdns-max-deferred-connection-deletes": "10000",
    "tofqdns-proxy-response-max-delay": "100ms",
    "tunnel-protocol": "vxlan",
    "tunnel-source-port-range": "0-0",
    "unmanaged-pod-watcher-interval": "15",
    "vtep-cidr": "",
    "vtep-endpoint": "",
    "vtep-mac": "",
    "vtep-mask": "",
    "write-cni-conf-when-ready": "/host/etc/cni/net.d/05-cilium.conflist"
  },
  "kind": "ConfigMap",
  "metadata": {
    "annotations": {
      "meta.helm.sh/release-name": "cilium",
      "meta.helm.sh/release-namespace": "kube-system"
    },
    "creationTimestamp": "2025-07-16T14:03:29Z",
    "labels": {
      "app.kubernetes.io/managed-by": "Helm"
    },
    "name": "cilium-config",
    "namespace": "kube-system",
    "resourceVersion": "18656",
    "uid": "ace08746-3cfc-446a-a112-1ed335a5b9c1"
  }
}
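
A single key can also be read without jq; a jsonpath sketch (the key name must match the data field exactly):

# read one setting straight from the ConfigMap data
kubectl get cm -n kube-system cilium-config -o jsonpath='{.data.routing-mode}'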

6. Enable Cilium Debug Mode

Enable debug mode with the cilium config set debug true command; the Cilium pods restart automatically afterwards (a verification sketch follows the output below)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config set debug true && watch kubectl get pod -A

โœจ Patching ConfigMap cilium-config with debug=true...
โ™ป๏ธ  Restarted Cilium pods
Every 2.0s: kubectl get pod -A                  k8s-ctr: Wed Jul 16 23:42:42 2025

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       curl-pod                           1/1     Running   0          11m
default       webpod-9894b69cd-v2n4m             1/1     Running   0          14m
default       webpod-9894b69cd-zxcnw             1/1     Running   0          14m
kube-system   cilium-64kc9                       1/1     Running   0          79s
kube-system   cilium-cl6f6                       1/1     Running   0          79s
kube-system   cilium-envoy-b4zts                 1/1     Running   0          38m
kube-system   cilium-envoy-fkj4l                 1/1     Running   0          38m
kube-system   cilium-envoy-z9mvb                 1/1     Running   0          38m
kube-system   cilium-operator-865bc7f457-rgj7k   1/1     Running   0          38m
kube-system   cilium-operator-865bc7f457-s2zv2   1/1     Running   0          38m
kube-system   cilium-zzk8d                       1/1     Running   0          79s
kube-system   coredns-674b8bbfcf-697cz           1/1     Running   0          36m
kube-system   coredns-674b8bbfcf-rtz5g           1/1     Running   0          37m
kube-system   etcd-k8s-ctr                       1/1     Running   0          2d
kube-system   kube-apiserver-k8s-ctr             1/1     Running   0          2d
kube-system   kube-controller-manager-k8s-ctr    1/1     Running   0          2d
kube-system   kube-scheduler-k8s-ctr             1/1     Running   0          2d
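
The new value can be confirmed in the running config, and debug can be turned back off the same way (the pods restart again on change):

# verify, then revert the debug setting
cilium config view | grep debug
cilium config set debug false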

๐Ÿ“‹ Checking Basic Network Information

1. ๊ฐ ๋…ธ๋“œ์˜ ์ธํ„ฐํŽ˜์ด์Šค ์ •๋ณด ํ™•์ธ

cilium_net, cilium_host, lxc_health, and the per-pod lxc* interfaces are all present (a veth-only listing sketch follows the per-node outputs below)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 70970sec preferred_lft 70970sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86006sec preferred_lft 14006sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe80:23b9/64 scope link 
       valid_lft forever preferred_lft forever
9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:43:ce:84:cd:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec43:ceff:fe84:cd8b/64 scope link 
       valid_lft forever preferred_lft forever
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:32:1f:30:c0:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.187/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::cc32:1fff:fe30:c0e2/64 scope link 
       valid_lft forever preferred_lft forever
14: lxc15efa76a1c03@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:74:2e:72:27:a0 brd ff:ff:ff:ff:ff:ff link-netns cni-69166b00-fabc-0af1-3874-e887a47b08b0
    inet6 fe80::c74:2eff:fe72:27a0/64 scope link 
       valid_lft forever preferred_lft forever
16: lxc4bee7d32b9d3@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:c2:12:8f:3f:7f brd ff:ff:ff:ff:ff:ff link-netns cni-e912a9e2-ed8c-1436-f857-94e0fcb0cd78
    inet6 fe80::d8c2:12ff:fe8f:3f7f/64 scope link 
       valid_lft forever preferred_lft forever
18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:66:60:a7:ea:23 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::d866:60ff:fea7:ea23/64 scope link 
       valid_lft forever preferred_lft forever
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr ; echo; done

โœ… Output

>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 70992sec preferred_lft 70992sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86183sec preferred_lft 14183sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:07:f2:54 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.101/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe07:f254/64 scope link 
       valid_lft forever preferred_lft forever
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:42:1e:e2:03:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4c42:1eff:fee2:3ec/64 scope link 
       valid_lft forever preferred_lft forever
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:62:ab:66:82:0f brd ff:ff:ff:ff:ff:ff
    inet 172.20.2.165/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::5c62:abff:fe66:820f/64 scope link 
       valid_lft forever preferred_lft forever
12: lxcd268a7119386@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:d8:89:f3:57:c0 brd ff:ff:ff:ff:ff:ff link-netns cni-2acabd28-9c86-f447-f6f4-9f3ca8e10ca4
    inet6 fe80::e4d8:89ff:fef3:57c0/64 scope link 
       valid_lft forever preferred_lft forever
14: lxc_health@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:ec:3f:38:38:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::e4ec:3fff:fe38:38a8/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 71150sec preferred_lft 71150sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86314sec preferred_lft 14314sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:96:d7:20 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.102/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe96:d720/64 scope link 
       valid_lft forever preferred_lft forever
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:28:ed:0b:bc:2a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1028:edff:fe0b:bc2a/64 scope link 
       valid_lft forever preferred_lft forever
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:60:2a:76:4c:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.20.1.96/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::e460:2aff:fe76:4ca7/64 scope link 
       valid_lft forever preferred_lft forever
12: lxc61f4da945ad8@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:a7:3d:29:2b:4b brd ff:ff:ff:ff:ff:ff link-netns cni-61bcb4d9-eab1-78d3-9d22-9801294ffb91
    inet6 fe80::4ca7:3dff:fe29:2b4b/64 scope link 
       valid_lft forever preferred_lft forever
14: lxc5fade0857eac@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:c7:2a:1b:a2:5a brd ff:ff:ff:ff:ff:ff link-netns cni-dd3d4f65-5ab6-c3fe-a381-13412ad693b6
    inet6 fe80::60c7:2aff:fe1b:a25a/64 scope link 
       valid_lft forever preferred_lft forever
16: lxc_health@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:77:7e:b5:ee:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::b877:7eff:feb5:eefa/64 scope link 
       valid_lft forever preferred_lft forever
  • k8s-w1 โ†’ cilium_host IP: 172.20.2.165
  • k8s-w2 โ†’ cilium_host IP: 172.20.1.96
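
The lxc* devices are the host-side ends of veth pairs; listing only veth devices makes this easier to scan (standard iproute2 filtering):

# show only the veth-type interfaces on the current node
ip -c link show type veth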

2. ๊ฐ ๋…ธ๋“œ์˜ cilium_net / cilium_host ์ •๋ณด ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c addr show cilium_net
ip -c addr show cilium_host

โœ… Output

9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:43:ce:84:cd:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec43:ceff:fe84:cd8b/64 scope link 
       valid_lft forever preferred_lft forever
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:32:1f:30:c0:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.187/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::cc32:1fff:fe30:c0e2/64 scope link 
       valid_lft forever preferred_lft forever
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_net  ; echo; done

โœ… Output

>> node : k8s-w1 <<
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:42:1e:e2:03:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4c42:1eff:fee2:3ec/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:28:ed:0b:bc:2a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1028:edff:fe0b:bc2a/64 scope link 
       valid_lft forever preferred_lft forever
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_host ; echo; done

โœ… Output

>> node : k8s-w1 <<
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:62:ab:66:82:0f brd ff:ff:ff:ff:ff:ff
    inet 172.20.2.165/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::5c62:abff:fe66:820f/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:60:2a:76:4c:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.20.1.96/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::e460:2aff:fe76:4ca7/64 scope link 
       valid_lft forever preferred_lft forever

3. ํ—ฌ์Šค์ฒดํฌ ์šฉ๋„๋กœ ๋™์ž‘ํ•˜๋Š” ์ธํ„ฐํŽ˜์ด์Šค ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c addr show lxc_health

โœ… Output

18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:66:60:a7:ea:23 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::d866:60ff:fea7:ea23/64 scope link 
       valid_lft forever preferred_lft forever
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show lxc_health  ; echo; done

โœ… Output

>> node : k8s-w1 <<
14: lxc_health@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:ec:3f:38:38:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::e4ec:3fff:fe38:38a8/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
16: lxc_health@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:77:7e:b5:ee:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::b877:7eff:feb5:eefa/64 scope link 
       valid_lft forever preferred_lft forever

4. Check Cluster Health and IP Connectivity

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose

โœ… Output

...
Cluster health:   3/3 reachable   (2025-07-16T15:07:31Z)
Name              IP              Node   Endpoints
  k8s-ctr (localhost):
    Host connectivity to 192.168.10.100:
      ICMP to stack:   OK, RTT=91.601ยตs
      HTTP to agent:   OK, RTT=382.5ยตs
    Endpoint connectivity to 172.20.0.117:
      ICMP to stack:   OK, RTT=145.313ยตs
      HTTP to agent:   OK, RTT=637.185ยตs
  k8s-w1:
    Host connectivity to 192.168.10.101:
      ICMP to stack:   OK, RTT=650.087ยตs
      HTTP to agent:   OK, RTT=954.827ยตs
    Endpoint connectivity to 172.20.2.252:
      ICMP to stack:   OK, RTT=852.728ยตs
      HTTP to agent:   OK, RTT=975.807ยตs
  k8s-w2:
    Host connectivity to 192.168.10.102:
      ICMP to stack:   OK, RTT=469.026ยตs
      HTTP to agent:   OK, RTT=731.956ยตs
    Endpoint connectivity to 172.20.1.139:
      ICMP to stack:   OK, RTT=584.404ยตs
      HTTP to agent:   OK, RTT=699.36ยตs
...      

5. Re-check the health Endpoint IP

health ์—”๋“œํฌ์ธํŠธ IP๊ฐ€ 172.20.0.117 ์ž„์„ ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list | grep health

โœ… Output

977        Disabled           Disabled          4          reserved:health                                                                     172.20.0.117   ready

6. Cilium ์ƒํƒœ ๋ฐ IPAM ์ •๋ณด ํ™•์ธ

Check Cilium IPAM information and the allocated IPs with the cilium-dbg status --all-addresses command (a per-node pod CIDR sketch follows the output below)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --all-addresses

โœ… Output

KVStore:                Disabled   
Kubernetes:             Ok         1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs:        ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   True   [eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   fe80::a00:27ff:fe80:23b9 192.168.10.100 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
CNI Config file:        successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                 Ok   1.17.5 (v1.17.5-69aab28c)
NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok   
IPAM:                   IPv4: 4/254 allocated from 172.20.0.0/24, 
Allocated addresses:
  172.20.0.10 (kube-system/coredns-674b8bbfcf-rtz5g [restored])
  172.20.0.117 (health)
  172.20.0.187 (router)
  172.20.0.191 (default/curl-pod [restored])
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Routing:                 Network: Native   Host: BPF
Attach Mode:             TCX
Device Mode:             veth
Masquerading:            BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Controller Status:       34/34 healthy
Proxy Status:            OK, ip 172.20.0.187, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 13.57   Metrics: Disabled
Encryption:              Disabled        
Cluster health:          3/3 reachable   (2025-07-16T15:10:31Z)
Name                     IP              Node   Endpoints
Modules Health:          Stopped(0) Degraded(0) OK(58)
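
The per-node pod CIDRs backing this IPAM view are stored on the CiliumNode objects; a jsonpath sketch (assuming the cluster-pool IPAM fields shown in the config above):

# print each node's name and its assigned pod CIDRs
kubectl get ciliumnodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.ipam.podCIDRs}{"\n"}{end}'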

7. Check ICMP Flows in the BPF Conntrack/NAT Tables

Print the ICMP session entries from the BPF conntrack table

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf ct list global | grep ICMP |head -n4

โœ… Output

ICMP OUT 192.168.10.100:40449 -> 192.168.10.102:0 expires=16154 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=16094 TxFlagsSeen=0x00 LastTxReport=16094 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0 
ICMP IN 192.168.10.102:8955 -> 172.20.0.117:0 expires=15702 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=15642 TxFlagsSeen=0x00 LastTxReport=15642 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=6 IfIndex=0 BackendID=0 
ICMP OUT 192.168.10.100:60722 -> 192.168.10.102:0 expires=16034 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=15974 TxFlagsSeen=0x00 LastTxReport=15974 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0 
ICMP OUT 192.168.10.100:54893 -> 172.20.1.139:0 expires=15984 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=15924 TxFlagsSeen=0x00 LastTxReport=15924 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0

NAT ํ…Œ์ด๋ธ”์— ๊ธฐ๋ก๋œ ICMP ํŠธ๋ž˜ํ”ฝ ๋ณ€ํ™˜ ์ •๋ณด ์ถœ๋ ฅ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf nat list | grep ICMP |head -n4

โœ… Output

ICMP OUT 192.168.10.100:65340 -> 172.20.1.139:0 XLATE_SRC 192.168.10.100:65340 Created=99sec ago NeedsCT=1
ICMP IN 172.20.1.139:0 -> 192.168.10.100:61858 XLATE_DST 192.168.10.100:61858 Created=419sec ago NeedsCT=1
ICMP IN 172.20.1.139:0 -> 192.168.10.100:65340 XLATE_DST 192.168.10.100:65340 Created=99sec ago NeedsCT=1
ICMP IN 172.20.1.139:0 -> 192.168.10.100:42350 XLATE_DST 192.168.10.100:42350 Created=619sec ago NeedsCT=1
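
The same tables also hold TCP entries; swapping the grep pattern shows them (same command shape as above):

# list the first few TCP entries from the conntrack table
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf ct list global | grep TCP | head -n4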

8. Check Pod CIDRs in the Routing Table

172.20.1.0/24 is reached via k8s-w2, and 172.20.2.0/24 via k8s-w1 (an ip route get sketch follows the output below)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 172.20 | grep eth1

โœ… Output

172.20.1.0/24 via 192.168.10.102 dev eth1 proto kernel 
172.20.2.0/24 via 192.168.10.101 dev eth1 proto kernel
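
For a single destination, ip route get confirms which next hop the kernel would actually pick (using one of the webpod IPs shown in the next step):

# resolve the route the kernel would use for this pod IP
ip route get 172.20.1.88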

9. Check Each Pod's IP and Node

With autoDirectNodeRoutes=true, routes to each node's Pod CIDR are added automatically

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide

โœ… Output

NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                 1/1     Running   0          45m   172.20.0.191   k8s-ctr   <none>           <none>
webpod-9894b69cd-v2n4m   1/1     Running   0          48m   172.20.1.88    k8s-w2    <none>           <none>
webpod-9894b69cd-zxcnw   1/1     Running   0          48m   172.20.2.124   k8s-w1    <none>           <none>
  • curl-pod: 172.20.0.191 (k8s-ctr)
  • webpod-v2n4m: 172.20.1.88 (k8s-w2)
  • webpod-zxcnw: 172.20.2.124 (k8s-w1)

10. Check CiliumEndpoints Resources

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~#  kubectl get ciliumendpoints -A

โœ… Output

NAMESPACE     NAME                       SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
default       curl-pod                   13136               ready            172.20.0.191   
default       webpod-9894b69cd-v2n4m     8254                ready            172.20.1.88    
default       webpod-9894b69cd-zxcnw     8254                ready            172.20.2.124   
kube-system   coredns-674b8bbfcf-697cz   39751               ready            172.20.1.127   
kube-system   coredns-674b8bbfcf-rtz5g   39751               ready            172.20.0.10    
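
The SECURITY IDENTITY column maps to cluster-scoped CiliumIdentity objects, which can be listed directly (identity numbers differ per cluster):

# list the CiliumIdentity CRDs backing the identities above
kubectl get ciliumidentities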

11. Check lxc-Interface-Based Routing Table Entries

With endpointRoutes.enabled=true, each pod's dedicated network interface (lxcY) is reflected in the host routing table

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep lxc

โœ… Output

172.20.0.10 dev lxc15efa76a1c03 proto kernel scope link 
172.20.0.117 dev lxc_health proto kernel scope link 
172.20.0.191 dev lxc4bee7d32b9d3 proto kernel scope link

๊ฐ ๋…ธ๋“œ๋ณ„ lxc ๋ผ์šฐํŒ… ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep lxc ; echo; done

โœ… Output

>> node : k8s-w1 <<
172.20.2.124 dev lxcd268a7119386 proto kernel scope link 
172.20.2.252 dev lxc_health proto kernel scope link 

>> node : k8s-w2 <<
172.20.1.88 dev lxc5fade0857eac proto kernel scope link 
172.20.1.127 dev lxc61f4da945ad8 proto kernel scope link 
172.20.1.139 dev lxc_health proto kernel scope link

โšก Checking Cilium Commands

  • Cilium CMD Cheatsheet - Docs, CMD Reference - Docs

1. Store the Cilium Pod Names in Variables

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2

โœ… Output

cilium-64kc9 cilium-cl6f6 cilium-zzk8d
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       curl-pod                           1/1     Running   0          60m
default       webpod-9894b69cd-v2n4m             1/1     Running   0          63m
default       webpod-9894b69cd-zxcnw             1/1     Running   0          63m
kube-system   cilium-64kc9                       1/1     Running   0          50m
kube-system   cilium-cl6f6                       1/1     Running   0          50m
kube-system   cilium-envoy-b4zts                 1/1     Running   0          87m
kube-system   cilium-envoy-fkj4l                 1/1     Running   0          87m
kube-system   cilium-envoy-z9mvb                 1/1     Running   0          87m
kube-system   cilium-operator-865bc7f457-rgj7k   1/1     Running   0          87m
kube-system   cilium-operator-865bc7f457-s2zv2   1/1     Running   0          87m
kube-system   cilium-zzk8d                       1/1     Running   0          50m
kube-system   coredns-674b8bbfcf-697cz           1/1     Running   0          85m
kube-system   coredns-674b8bbfcf-rtz5g           1/1     Running   0          85m
kube-system   etcd-k8s-ctr                       1/1     Running   0          2d1h
kube-system   kube-apiserver-k8s-ctr             1/1     Running   0          2d1h
kube-system   kube-controller-manager-k8s-ctr    1/1     Running   0          2d1h
kube-system   kube-scheduler-k8s-ctr             1/1     Running   0          2d1h

2. Define Shorthand Aliases

  • Aliases c0, c1, c2 invoke the cilium command on each node's agent
  • Aliases c0bpf, c1bpf, c2bpf do the same for bpftool (a short usage sketch follows the alias block below)
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
alias c1="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium"
alias c2="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium"

alias c0bpf="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool"
alias c1bpf="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool"
alias c2bpf="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool"
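
With the aliases defined, any agent subcommand becomes a one-liner; a sketch, assuming the aliases above are set in the current shell:

# one-line agent health summary, and the first few loaded BPF maps
c0 status --brief
c0bpf map list | head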

3. Cilium ์—”๋“œํฌ์ธํŠธ ๋ชฉ๋ก ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint list

โœ… Output

ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                     
977        Disabled           Disabled          4          reserved:health                                                                     172.20.0.117   ready   
1827       Disabled           Disabled          13136      k8s:app=curl                                                                        172.20.0.191   ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                     
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                            
                                                           k8s:io.kubernetes.pod.namespace=default                                                                    
1912       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                          ready   
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                
                                                           reserved:host                                                                                              
3021       Disabled           Disabled          39751      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.0.10    ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                            
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                
                                                           k8s:k8s-app=kube-dns
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# c1 endpoint list

โœ… Output

ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6   IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                 
391        Disabled           Disabled          1          reserved:host                                                                                  ready   
1110       Disabled           Disabled          4          reserved:health                                                                 172.20.2.252   ready   
1126       Disabled           Disabled          8254       k8s:app=webpod                                                                  172.20.2.124   ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                 
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                               
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                        
                                                           k8s:io.kubernetes.pod.namespace=default 
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# c2 endpoint list

โœ… Output

ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                     
401        Disabled           Disabled          8254       k8s:app=webpod                                                                      172.20.1.88    ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                     
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                            
                                                           k8s:io.kubernetes.pod.namespace=default                                                                    
690        Disabled           Disabled          39751      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.1.127   ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                            
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                
                                                           k8s:k8s-app=kube-dns                                                                                       
1532       Disabled           Disabled          4          reserved:health                                                                     172.20.1.139   ready   
2914       Disabled           Disabled          1          reserved:host                                                                                      ready

4. Check Cilium Monitor Output

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# c1 monitor -v

โœ… Output

Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
time="2025-07-17T13:02:16.691606819Z" level=info msg="Initializing dissection cache..." subsys=monitor
-> network flow 0x44d1f314 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:56916 -> 192.168.10.102:4240 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2675, dst [127.0.0.1]:9234 tcp 
-> network flow 0xb3726e6 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:36468 -> 192.168.10.100:6443 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7036, dst [127.0.0.1]:51394 tcp 
-> network flow 0xd9c7ab80 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:35138 -> 192.168.10.100:6443 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12434 sock_cookie: 2676, dst [127.0.0.1]:41326 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2677, dst [127.0.0.1]:9878 tcp 
-> network flow 0x1dc25546 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:35710 -> 192.168.10.100:6443 tcp ACK
-> network flow 0x0 , identity host->unknown state new ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101 -> 172.20.0.117 EchoRequest
-> network flow 0xff037f8b , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:33586 -> 172.20.0.117:4240 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 13643 sock_cookie: 7037, dst [127.0.0.1]:53790 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2678, dst [127.0.0.1]:9879 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2679, dst [127.0.0.1]:9234 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7038, dst [127.0.0.1]:55230 tcp 
-> network flow 0xc767c399 , identity host->unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:36484 -> 192.168.10.100:6443 tcp ACK
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 7892 sock_cookie: 2680, dst [127.0.0.1]:9234 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 12362 sock_cookie: 7039, dst [127.0.0.1]:55238 tcp 
CPU 01: [pre-xlate-rev] cgroup_id: 13643 sock_cookie: 7040, dst [192.168.10.100]:51010 tcp
...
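
The raw stream is noisy; monitor can be narrowed by event type, for example to packet drops only (a sketch using the same alias):

# show only drop events from the datapath
c1 monitor --type drop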

5. Check the Cilium IP List

Cilium์ด ๊ด€๋ฆฌํ•˜๋Š” IP์™€ ํ•ด๋‹น Identity, Source ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~#  c0 ip list

โœ… Output

IP                  IDENTITY                                                                     SOURCE
0.0.0.0/0           reserved:world                                                               
10.0.2.15/32        reserved:host                                                                
                    reserved:kube-apiserver                                                      
172.20.0.10/32      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   custom-resource
                    k8s:io.cilium.k8s.policy.cluster=default                                     
                    k8s:io.cilium.k8s.policy.serviceaccount=coredns                              
                    k8s:io.kubernetes.pod.namespace=kube-system                                  
                    k8s:k8s-app=kube-dns                                                         
172.20.0.117/32     reserved:health                                                              
172.20.0.187/32     reserved:host                                                                
                    reserved:kube-apiserver                                                      
172.20.0.191/32     k8s:app=curl                                                                 custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       
                    k8s:io.cilium.k8s.policy.cluster=default                                     
                    k8s:io.cilium.k8s.policy.serviceaccount=default                              
                    k8s:io.kubernetes.pod.namespace=default                                      
172.20.1.88/32      k8s:app=webpod                                                               custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       
                    k8s:io.cilium.k8s.policy.cluster=default                                     
                    k8s:io.cilium.k8s.policy.serviceaccount=default                              
                    k8s:io.kubernetes.pod.namespace=default                                      
172.20.1.96/32      reserved:remote-node                                                         
172.20.1.127/32     k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   custom-resource
                    k8s:io.cilium.k8s.policy.cluster=default                                     
                    k8s:io.cilium.k8s.policy.serviceaccount=coredns                              
                    k8s:io.kubernetes.pod.namespace=kube-system                                  
                    k8s:k8s-app=kube-dns                                                         
172.20.1.139/32     reserved:health                                                              
172.20.2.124/32     k8s:app=webpod                                                               custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       
                    k8s:io.cilium.k8s.policy.cluster=default                                     
                    k8s:io.cilium.k8s.policy.serviceaccount=default                              
                    k8s:io.kubernetes.pod.namespace=default                                      
172.20.2.165/32     reserved:remote-node                                                         
172.20.2.252/32     reserved:health                                                              
192.168.10.100/32   reserved:host                                                                
                    reserved:kube-apiserver                                                      
192.168.10.101/32   reserved:remote-node                                                         
192.168.10.102/32   reserved:remote-node

Check IP-to-Identity ID mappings

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 ip list -n

✅ Output

IP                  IDENTITY   SOURCE
0.0.0.0/0           2          
10.0.2.15/32        1          
172.20.0.10/32      39751      custom-resource
172.20.0.117/32     4          
172.20.0.187/32     1          
172.20.0.191/32     13136      custom-resource
172.20.1.88/32      8254       custom-resource
172.20.1.96/32      6          
172.20.1.127/32     39751      custom-resource
172.20.1.139/32     4          
172.20.2.124/32     8254       custom-resource
172.20.2.165/32     6          
172.20.2.252/32     4          
192.168.10.100/32   1          
192.168.10.101/32   6          
192.168.10.102/32   6

6. Check the Cilium Identity list

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 identity list

✅ Output

ID      LABELS
1       reserved:host
        reserved:kube-apiserver
2       reserved:world
3       reserved:unmanaged
4       reserved:health
5       reserved:init
6       reserved:remote-node
7       reserved:kube-apiserver
        reserved:remote-node
8       reserved:ingress
9       reserved:world-ipv4
10      reserved:world-ipv6
8254    k8s:app=webpod
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=default
        k8s:io.kubernetes.pod.namespace=default
13136   k8s:app=curl
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=default
        k8s:io.kubernetes.pod.namespace=default
39751   k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=coredns
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=kube-dns
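
The numeric IDs in this list can be resolved back to their label sets on demand. A minimal sketch, assuming identity allocation is CRD-backed (which the custom-resource SOURCE column earlier suggests); 8254 is simply the webpod identity from this environment:

# Resolve a single identity ID to its labels via the agent
c0 identity get 8254

# With CRD-backed identities, the same objects are visible as Kubernetes resources
kubectl get ciliumidentities.cilium.io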

7. Check BPF filesystem mount information

Check the /sys/fs/bpf mount state

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 bpf fs show

✅ Output

MountID:          1107
ParentID:         1097
Mounted State:    true
MountPoint:       /sys/fs/bpf
MountOptions:     rw,relatime
OptionFields:     [master:11]
FilesystemType:   bpf
MountSource:      bpf
SuperOptions:     rw,mode=700
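
The same mount can be verified from the host side without the Cilium CLI; a quick sketch using standard tooling:

# Host-side view of the BPF filesystem mount
findmnt /sys/fs/bpf
mount | grep /sys/fs/bpf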

Inspect the BPF mount directory structure

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /sys/fs/bpf

✅ Output

/sys/fs/bpf
โ”œโ”€โ”€ cilium
│   ├── devices
│   │   ├── cilium_host
│   │   │   └── links
│   │   │       ├── cil_from_host
│   │   │       └── cil_to_host
│   │   ├── cilium_net
│   │   │   └── links
│   │   │       └── cil_to_host
│   │   ├── eth0
│   │   │   └── links
│   │   │       ├── cil_from_netdev
│   │   │       └── cil_to_netdev
│   │   └── eth1
│   │       └── links
│   │           ├── cil_from_netdev
│   │           └── cil_to_netdev
│   ├── endpoints
│   │   ├── 1827
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 3021
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   └── 977
│   │       └── links
│   │           ├── cil_from_container
│   │           └── cil_to_container
│   └── socketlb
│       └── links
│           └── cgroup
│               ├── cil_sock4_connect
│               ├── cil_sock4_getpeername
│               ├── cil_sock4_post_bind
│               ├── cil_sock4_recvmsg
│               ├── cil_sock4_sendmsg
│               ├── cil_sock6_connect
│               ├── cil_sock6_getpeername
│               ├── cil_sock6_post_bind
│               ├── cil_sock6_recvmsg
│               └── cil_sock6_sendmsg
โ””โ”€โ”€ tc
    โ””โ”€โ”€ globals
        โ”œโ”€โ”€ cilium_auth_map
        โ”œโ”€โ”€ cilium_call_policy
        โ”œโ”€โ”€ cilium_calls_00977
        โ”œโ”€โ”€ cilium_calls_01827
        โ”œโ”€โ”€ cilium_calls_03021
        โ”œโ”€โ”€ cilium_calls_hostns_01912
        โ”œโ”€โ”€ cilium_calls_netdev_00002
        โ”œโ”€โ”€ cilium_calls_netdev_00003
        โ”œโ”€โ”€ cilium_calls_netdev_00009
        โ”œโ”€โ”€ cilium_ct4_global
        โ”œโ”€โ”€ cilium_ct_any4_global
        โ”œโ”€โ”€ cilium_egresscall_policy
        โ”œโ”€โ”€ cilium_events
        โ”œโ”€โ”€ cilium_ipcache
        โ”œโ”€โ”€ cilium_ipv4_frag_datagrams
        โ”œโ”€โ”€ cilium_l2_responder_v4
        โ”œโ”€โ”€ cilium_lb4_affinity
        โ”œโ”€โ”€ cilium_lb4_backends_v3
        โ”œโ”€โ”€ cilium_lb4_reverse_nat
        โ”œโ”€โ”€ cilium_lb4_reverse_sk
        โ”œโ”€โ”€ cilium_lb4_services_v2
        โ”œโ”€โ”€ cilium_lb4_source_range
        โ”œโ”€โ”€ cilium_lb_affinity_match
        โ”œโ”€โ”€ cilium_lxc
        โ”œโ”€โ”€ cilium_metrics
        โ”œโ”€โ”€ cilium_node_map
        โ”œโ”€โ”€ cilium_node_map_v2
        โ”œโ”€โ”€ cilium_nodeport_neigh4
        โ”œโ”€โ”€ cilium_policy_v2_00977
        โ”œโ”€โ”€ cilium_policy_v2_01827
        โ”œโ”€โ”€ cilium_policy_v2_01912
        โ”œโ”€โ”€ cilium_policy_v2_03021
        โ”œโ”€โ”€ cilium_ratelimit
        โ”œโ”€โ”€ cilium_ratelimit_metrics
        โ”œโ”€โ”€ cilium_runtime_config
        โ”œโ”€โ”€ cilium_signals
        โ”œโ”€โ”€ cilium_skip_lb4
        โ””โ”€โ”€ cilium_snat_v4_external
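
Every entry under tc/globals is a pinned map, so it can also be read directly with bpftool. A sketch using the c0bpf alias (bpftool inside the cilium pod); the pin path comes from the tree above:

# Show metadata and the first entries of a pinned map
c0bpf map show pinned /sys/fs/bpf/tc/globals/cilium_ipcache
c0bpf map dump pinned /sys/fs/bpf/tc/globals/cilium_ipcache | head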

8. Check the Cilium service list

Check the ClusterIP service-to-backend mappings

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 service list

✅ Output

ID   Frontend                Service Type   Backend                                 
1    10.96.0.1:443/TCP       ClusterIP      1 => 192.168.10.100:6443/TCP (active)   
2    10.96.33.91:80/TCP      ClusterIP      1 => 172.20.2.124:80/TCP (active)       
                                            2 => 172.20.1.88:80/TCP (active)        
3    10.96.239.219:443/TCP   ClusterIP      1 => 192.168.10.100:4244/TCP (active)   
4    10.96.0.10:53/UDP       ClusterIP      1 => 172.20.0.10:53/UDP (active)        
                                            2 => 172.20.1.127:53/UDP (active)       
5    10.96.0.10:53/TCP       ClusterIP      1 => 172.20.0.10:53/TCP (active)        
                                            2 => 172.20.1.127:53/TCP (active)       
6    10.96.0.10:9153/TCP     ClusterIP      1 => 172.20.0.10:9153/TCP (active)      
                                            2 => 172.20.1.127:9153/TCP (active)

ClusterIP 10.96.33.91:80/TCP โ†’ Backend 172.20.2.124:80, 172.20.1.88:80

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc

✅ Output

NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   2d23h
webpod       ClusterIP   10.96.33.91   <none>        80/TCP    2d21h
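
The table printed by c0 service list is backed by the cilium_lb4_services_v2 and cilium_lb4_backends_v3 maps; the raw datapath view can be dumped as well. A sketch:

# Frontend → backend mappings as the BPF datapath sees them
c0 bpf lb list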

9. Check Kubernetes service information

  • webpod → ClusterIP 10.96.33.91:80/TCP
  • webpod → 172.20.1.88:80, 172.20.2.124:80

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc,ep

✅ Output

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   2d23h
service/webpod       ClusterIP   10.96.33.91   <none>        80/TCP    2d21h

NAME                   ENDPOINTS                        AGE
endpoints/kubernetes   192.168.10.100:6443              2d23h
endpoints/webpod       172.20.1.88:80,172.20.2.124:80   2d21h

10. Check the Cilium eBPF map list

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map list

✅ Output

E0717 22:11:58.172602   44099 websocket.go:297] Unknown stream id 1, discarding message
 Name                       Num entries   Num errors   Cache enabled
cilium_lxc                 4             0            true
cilium_lb4_services_v2     16            0            true
cilium_lb4_affinity        0             0            true
cilium_lb4_reverse_nat     6             0            true
cilium_lb_affinity_match   0             0            true
cilium_policy_v2_00977     3             0            true
cilium_policy_v2_01912     2             0            true
cilium_ratelimit           0             0            true
cilium_ratelimit_metrics   0             0            true
cilium_lb4_reverse_sk      10            0            true
cilium_ipcache             16            0            true
cilium_lb4_backends_v3     1             0            true
cilium_lb4_source_range    0             0            true
cilium_policy_v2_03021     3             0            true
cilium_policy_v2_01827     3             0            true
cilium_runtime_config      256           0            true
cilium_l2_responder_v4     0             0            false
cilium_node_map            0             0            false
cilium_node_map_v2         0             0            false
cilium_auth_map            0             0            false
cilium_metrics             0             0            false
cilium_skip_lb4            0             0            false
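
c0 map get works for any cache-enabled map in this list, the same way it is used with cilium_ipcache later on. For example, cilium_lxc (4 entries above) holds the node's local endpoints; a sketch:

# Dump the agent's cached view of the local endpoint map
c0 map get cilium_lxc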

11. Watch Cilium eBPF map events

(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 map events cilium_lb4_services_v2

✅ Output

key="10.96.0.1:443/TCP (1)" value="1 0[0] (1) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.1:443/TCP (0)" value="16777216 1[0] (1) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.33.91:80/TCP (1)" value="11 0[0] (2) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.33.91:80/TCP (2)" value="12 0[0] (2) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.33.91:80/TCP (0)" value="16777216 2[0] (2) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="4 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:9153/TCP (1)" value="6 0[0] (4) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:9153/TCP (2)" value="10 0[0] (4) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:9153/TCP (0)" value="16777216 2[0] (4) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/UDP (1)" value="7 0[0] (5) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/UDP (2)" value="8 0[0] (5) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/UDP (0)" value="16777216 2[0] (5) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/TCP (1)" value="5 0[0] (6) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/TCP (2)" value="9 0[0] (6) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.0.10:53/TCP (0)" value="16777216 2[0] (6) [0x0 0x0]" time=2025-07-16T14:40:24Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 0[0] (3) [0x0 0x10]" time=2025-07-16T14:40:25Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="<nil>" time=2025-07-16T14:40:25Z action=delete desiredState=to-be-deleted lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="13 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:29Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:29Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="13 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (1)" value="13 0[0] (3) [0x0 0x0]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"
key="10.96.239.219:443/TCP (0)" value="16777216 1[0] (3) [0x0 0x10]" time=2025-07-16T14:40:30Z action=update desiredState=sync lastError="<nil>"

12. Check Cilium eBPF policy information

(1) Control plane

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 bpf policy get --all

✅ Output

Endpoint ID: 977
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00977

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    95314   1202      0        
Allow    Ingress     reserved:host                 ANY          NONE         disabled    62866   709       0        
                     reserved:kube-apiserver                                                                        
Allow    Egress      reserved:unknown              ANY          NONE         disabled    0       0         0        

Endpoint ID: 1827
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01827

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    0       0         0        
Allow    Ingress     reserved:host                 ANY          NONE         disabled    0       0         0        
                     reserved:kube-apiserver                                                                        
Allow    Egress      reserved:unknown              ANY          NONE         disabled    2112    24        0        

Endpoint ID: 1912
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01912

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    0       0         0        
Allow    Egress      reserved:unknown              ANY          NONE         disabled    0       0         0        

Endpoint ID: 3021
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_03021

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES    PACKETS   PREFIX   
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    460      4         0        
Allow    Ingress     reserved:host                 ANY          NONE         disabled    655293   7514      0        
                     reserved:kube-apiserver                                                                         
Allow    Egress      reserved:unknown              ANY          NONE         disabled    66590    771       0

(2) Worker node w1

(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 bpf policy get --all -n

✅ Output

Endpoint ID: 391
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00391

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    0       0         0        
Allow    Egress      0          ANY          NONE         disabled    0       0         0        

Endpoint ID: 1110
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01110

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    98726   1249      0        
Allow    Ingress     1          ANY          NONE         disabled    63973   722       0        
Allow    Egress      0          ANY          NONE         disabled    0       0         0        

Endpoint ID: 1126
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01126

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    474     6         0        
Allow    Ingress     1          ANY          NONE         disabled    0       0         0        
Allow    Egress      0          ANY          NONE         disabled    0       0         0

(3) Worker node w2

(⎈|HomeLab:N/A) root@k8s-ctr:~# c2 bpf policy get --all -n

✅ Output

Endpoint ID: 401
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00401

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    948     12        0        
Allow    Ingress     1          ANY          NONE         disabled    0       0         0        
Allow    Egress      0          ANY          NONE         disabled    0       0         0        

Endpoint ID: 690
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_00690

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES    PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    230      2         0        
Allow    Ingress     1          ANY          NONE         disabled    665067   7620      0        
Allow    Egress      0          ANY          NONE         disabled    71756    785       0        

Endpoint ID: 1532
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_01532

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    99846   1271      0        
Allow    Ingress     1          ANY          NONE         disabled    64447   729       0        
Allow    Egress      0          ANY          NONE         disabled    0       0         0        

Endpoint ID: 2914
Path: /sys/fs/bpf/tc/globals/cilium_policy_v2_02914

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX   
Allow    Ingress     0          ANY          NONE         disabled    0       0         0        
Allow    Egress      0          ANY          NONE         disabled    0       0         0 
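
With -n the LABELS column collapses to numeric identities: 0 is the allow-all wildcard (reserved:unknown) and 1 is reserved:host, matching the labeled control-plane output above. Any other number can be resolved the same way; a sketch:

# Map a numeric identity back to its labels on the w2 agent
c2 identity get 1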

13. Check device information with the Cilium shell command

(1) Control plane

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 shell -- db/show devices

✅ Output

Name              Index   Selected   Type     MTU     HWAddr              Flags                            Addresses
lo                1       false      device   65536                       up|loopback|running              127.0.0.1, ::1
eth0              2       true       device   1500    08:00:27:6b:69:c9   up|broadcast|multicast|running   10.0.2.15, fd17:625c:f037:2:a00:27ff:fe6b:69c9, fe80::a00:27ff:fe6b:69c9
eth1              3       true       device   1500    08:00:27:80:23:b9   up|broadcast|multicast|running   fe80::a00:27ff:fe80:23b9, 192.168.10.100
cilium_net        9       false      veth     1500    ee:43:ce:84:cd:8b   up|broadcast|multicast|running   fe80::ec43:ceff:fe84:cd8b
cilium_host       10      false      veth     1500    ce:32:1f:30:c0:e2   up|broadcast|multicast|running   172.20.0.187, fe80::cc32:1fff:fe30:c0e2
lxc15efa76a1c03   14      false      veth     1500    0e:74:2e:72:27:a0   up|broadcast|multicast|running   fe80::c74:2eff:fe72:27a0
lxc4bee7d32b9d3   16      false      veth     1500    da:c2:12:8f:3f:7f   up|broadcast|multicast|running   fe80::d8c2:12ff:fe8f:3f7f
lxc_health        18      false      veth     1500    da:66:60:a7:ea:23   up|broadcast|multicast|running   fe80::d866:60ff:fea7:ea23

(2) Worker node w1

(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 shell -- db/show devices

✅ Output

Name              Index   Selected   Type     MTU     HWAddr              Flags                            Addresses
lo                1       false      device   65536                       up|loopback|running              127.0.0.1, ::1
eth0              2       true       device   1500    08:00:27:6b:69:c9   up|broadcast|multicast|running   10.0.2.15, fd17:625c:f037:2:a00:27ff:fe6b:69c9, fe80::a00:27ff:fe6b:69c9
eth1              3       true       device   1500    08:00:27:07:f2:54   up|broadcast|multicast|running   fe80::a00:27ff:fe07:f254, 192.168.10.101
cilium_net        7       false      veth     1500    4e:42:1e:e2:03:ec   up|broadcast|multicast|running   fe80::4c42:1eff:fee2:3ec
cilium_host       8       false      veth     1500    5e:62:ab:66:82:0f   up|broadcast|multicast|running   172.20.2.165, fe80::5c62:abff:fe66:820f
lxcd268a7119386   12      false      veth     1500    e6:d8:89:f3:57:c0   up|broadcast|multicast|running   fe80::e4d8:89ff:fef3:57c0
lxc_health        14      false      veth     1500    e6:ec:3f:38:38:a8   up|broadcast|multicast|running   fe80::e4ec:3fff:fe38:38a8

(3) Worker node w2

(⎈|HomeLab:N/A) root@k8s-ctr:~# c2 shell -- db/show devices

✅ Output

Name              Index   Selected   Type     MTU     HWAddr              Flags                            Addresses
lo                1       false      device   65536                       up|loopback|running              127.0.0.1, ::1
eth0              2       true       device   1500    08:00:27:6b:69:c9   up|broadcast|multicast|running   10.0.2.15, fd17:625c:f037:2:a00:27ff:fe6b:69c9, fe80::a00:27ff:fe6b:69c9
eth1              3       true       device   1500    08:00:27:96:d7:20   up|broadcast|multicast|running   fe80::a00:27ff:fe96:d720, 192.168.10.102
cilium_net        7       false      veth     1500    12:28:ed:0b:bc:2a   up|broadcast|multicast|running   fe80::1028:edff:fe0b:bc2a
cilium_host       8       false      veth     1500    e6:60:2a:76:4c:a7   up|broadcast|multicast|running   172.20.1.96, fe80::e460:2aff:fe76:4ca7
lxc61f4da945ad8   12      false      veth     1500    4e:a7:3d:29:2b:4b   up|broadcast|multicast|running   fe80::4ca7:3dff:fe29:2b4b
lxc5fade0857eac   14      false      veth     1500    62:c7:2a:1b:a2:5a   up|broadcast|multicast|running   fe80::60c7:2aff:fe1b:a25a
lxc_health        16      false      veth     1500    ba:77:7e:b5:ee:fa   up|broadcast|multicast|running   fe80::b877:7eff:feb5:eefa

📡 Verifying node-to-node 'Pod → Pod' communication

1. Check endpoint information

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
kubectl get svc,ep webpod

✅ Output

NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                 1/1     Running   0          22h   172.20.0.191   k8s-ctr   <none>           <none>
webpod-9894b69cd-v2n4m   1/1     Running   0          22h   172.20.1.88    k8s-w2    <none>           <none>
webpod-9894b69cd-zxcnw   1/1     Running   0          22h   172.20.2.124   k8s-w1    <none>           <none>

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.96.33.91   <none>        80/TCP    2d22h

NAME               ENDPOINTS                        AGE
endpoints/webpod   172.20.1.88:80,172.20.2.124:80   2d22h
  • Two webpod Pods: 172.20.1.88 (k8s-w2), 172.20.2.124 (k8s-w1)
  • webpod Service ClusterIP: 10.96.33.91

(⎈|HomeLab:N/A) root@k8s-ctr:~# WEBPOD1IP=172.20.1.88

2. Trace the path to the destination Pod via the Cilium BPF ipcache map

Destination Pod IP → tunnelendpoint lookup

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache

✅ Output

Key                 Value                                                                    State   Error
172.20.0.191/32     identity=13136 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>          sync    
172.20.2.124/32     identity=8254 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none>    sync    
172.20.2.165/32     identity=6 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none>       sync    
192.168.10.102/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync    
172.20.1.139/32     identity=4 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>       sync    
192.168.10.101/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync    
172.20.0.10/32      identity=39751 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>          sync    
172.20.1.127/32     identity=39751 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>   sync    
172.20.0.187/32     identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync    
172.20.0.117/32     identity=4 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync    
10.0.2.15/32        identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync    
172.20.2.252/32     identity=4 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none>       sync    
172.20.1.96/32      identity=6 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>       sync    
172.20.1.88/32      identity=8254 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>    sync    
192.168.10.100/32   identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync    
0.0.0.0/0           identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>              sync

For traffic to 172.20.1.88 the entry shows tunnelendpoint=192.168.10.102, so the packet is forwarded to w2

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache | grep $WEBPOD1IP

✅ Output

172.20.1.88/32      identity=8254 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>    sync
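
The same lookup can be made against the BPF map itself with a longest-prefix-match query; a sketch:

# LPM lookup of the destination Pod IP in the datapath ipcache
c0 bpf ipcache get $WEBPOD1IP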

3. Check curl-pod's LXC interface

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c a

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 66746sec preferred_lft 66746sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86156sec preferred_lft 14156sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:80:23:b9 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe80:23b9/64 scope link 
       valid_lft forever preferred_lft forever
9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:43:ce:84:cd:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec43:ceff:fe84:cd8b/64 scope link 
       valid_lft forever preferred_lft forever
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:32:1f:30:c0:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.187/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::cc32:1fff:fe30:c0e2/64 scope link 
       valid_lft forever preferred_lft forever
14: lxc15efa76a1c03@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:74:2e:72:27:a0 brd ff:ff:ff:ff:ff:ff link-netns cni-69166b00-fabc-0af1-3874-e887a47b08b0
    inet6 fe80::c74:2eff:fe72:27a0/64 scope link 
       valid_lft forever preferred_lft forever
16: lxc4bee7d32b9d3@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:c2:12:8f:3f:7f brd ff:ff:ff:ff:ff:ff link-netns cni-e912a9e2-ed8c-1436-f857-94e0fcb0cd78
    inet6 fe80::d8c2:12ff:fe8f:3f7f/64 scope link 
       valid_lft forever preferred_lft forever
18: lxc_health@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:66:60:a7:ea:23 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::d866:60ff:fea7:ea23/64 scope link 
       valid_lft forever preferred_lft forever
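
Which lxc device belongs to curl-pod can be derived from the veth peer indexes: inside the pod, eth0 carries an @ifN suffix pointing at the host-side ifindex. A sketch, assuming the curl-pod image ships iproute2:

# Inside the pod: eth0@ifN, where N is the host-side peer ifindex
kubectl exec curl-pod -- ip -o link show eth0

# On the host: the interface with that index is the pod's lxc device (16 here)
ip -o link show | grep '^16:'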

4. Check which eBPF programs are attached

LXC=<name of the newest lxc interface on k8s-ctr>
(⎈|HomeLab:N/A) root@k8s-ctr:~# LXC=lxc4bee7d32b9d3

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf net show
c0bpf net show | grep $LXC

✅ Output

xdp:

tc:
eth0(2) tcx/ingress cil_from_netdev prog_id 1876 link_id 16 
eth0(2) tcx/egress cil_to_netdev prog_id 1877 link_id 17 
eth1(3) tcx/ingress cil_from_netdev prog_id 1889 link_id 18 
eth1(3) tcx/egress cil_to_netdev prog_id 1893 link_id 19 
cilium_net(9) tcx/ingress cil_to_host prog_id 1869 link_id 15 
cilium_host(10) tcx/ingress cil_to_host prog_id 1831 link_id 13 
cilium_host(10) tcx/egress cil_from_host prog_id 1834 link_id 14 
lxc15efa76a1c03(14) tcx/ingress cil_from_container prog_id 1821 link_id 20 
lxc15efa76a1c03(14) tcx/egress cil_to_container prog_id 1828 link_id 21 
lxc4bee7d32b9d3(16) tcx/ingress cil_from_container prog_id 1854 link_id 24 
lxc4bee7d32b9d3(16) tcx/egress cil_to_container prog_id 1859 link_id 25 
lxc_health(18) tcx/ingress cil_from_container prog_id 1849 link_id 27 
lxc_health(18) tcx/egress cil_to_container prog_id 1838 link_id 28 

flow_dissector:

netfilter:

lxc4bee7d32b9d3(16) tcx/ingress cil_from_container prog_id 1854 link_id 24 
lxc4bee7d32b9d3(16) tcx/egress cil_to_container prog_id 1859 link_id 25
  • cil_from_container and cil_to_container are attached at ingress and egress

Inspect the details with c0bpf prog show id 1854 and c0bpf prog show id 1859

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf prog show id 1854
1854: sched_cls  name cil_from_container  tag 41989045bb171bee  gpl
	loaded_at 2025-07-17T12:49:54+0000  uid 0
	xlated 752B  jited 579B  memlock 4096B  map_ids 324,323,58
	btf_id 840
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf prog show id 1859
1859: sched_cls  name cil_to_container  tag 0b3125767ba1861c  gpl
	loaded_at 2025-07-17T12:49:54+0000  uid 0
	xlated 1448B  jited 928B  memlock 4096B  map_ids 324,58,323
	btf_id 845
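
bpftool can also disassemble the loaded programs; the IDs are the ephemeral ones shown above. A sketch:

# First instructions of the translated (verifier-rewritten) bytecode
c0bpf prog dump xlated id 1854 | head -20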

5. Check the eBPF map list

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf map list

✅ Output

...
58: percpu_hash  name cilium_metrics  flags 0x1
	key 8B  value 16B  max_entries 1024  memlock 19144B
...
323: prog_array  name cilium_calls_01  flags 0x0
	key 4B  value 4B  max_entries 50  memlock 720B
	owner_prog_type sched_cls  owner jited
324: array  name .rodata.config  flags 0x480
	key 4B  value 64B  max_entries 1  memlock 8192B
	btf_id 838  frozen
...
  • cilium_metrics (58), cilium_calls_01 (323), and .rodata.config (324) match the map_ids referenced by the programs above

6. Verify actual packet transmission (ngrep)

Run ngrep -tW byline -d eth1 '' 'tcp port 80' on the w1 and w2 nodes

ngrep -tW byline -d eth1 '' 'tcp port 80'

✅ Output

When the curl request is sent, TCP port 80 packets can be seen flowing over the eth1 interface in real time

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $WEBPOD1IP

✅ Output

Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:33604
GET / HTTP/1.1
Host: 172.20.1.88
User-Agent: curl/8.14.1
Accept: */*


🧭 Verifying node-to-node 'Pod → Service (ClusterIP)' communication

With kube-proxy, iptables on the node would destination-NAT the ClusterIP to an actual Pod IP; with Cilium's socket LB, eBPF performs that translation at the socket level instead, as the traces below show

1. Access webpod via the ClusterIP

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:37496
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

2. Check Service information

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   3d1h
webpod       ClusterIP   10.96.33.91   <none>        80/TCP    2d23h

3. Packet capture results

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- tcpdump -enni any -q

✅ Output

In the actual packet capture, the ClusterIP (10.96.33.91) never appears; the traffic goes directly to the destination Pod IP

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod

✅ Output

4. Summarize syscalls with strace -c

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -c curl -s webpod

✅ Output

Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:45620
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 21.19    0.000217           2        85           mmap
 14.94    0.000153           4        33           munmap
 13.09    0.000134           2        47        30 open
  7.71    0.000079          26         3           sendto
  6.25    0.000064           2        22           close
  5.57    0.000057          19         3         1 connect
  4.69    0.000048           1        28           rt_sigaction
  4.30    0.000044           3        14           mprotect
  3.71    0.000038           1        27           read
  2.64    0.000027           1        24           fcntl
  2.54    0.000026           2         9           poll
  2.44    0.000025           6         4           socket
  1.95    0.000020           3         6         3 recvfrom
  1.56    0.000016           1        12           fstat
  1.37    0.000014           1        10           lseek
  0.78    0.000008           0        14           rt_sigprocmask
  0.68    0.000007           2         3           readv
  0.68    0.000007           7         1           writev
  0.59    0.000006           1         5           getsockname
  0.59    0.000006           6         1           eventfd2
  0.59    0.000006           2         3         3 ioctl
  0.59    0.000006           1         5           setsockopt
  0.49    0.000005           5         1           stat
  0.29    0.000003           0         4           brk
  0.29    0.000003           3         1           getrandom
  0.20    0.000002           1         2           geteuid
  0.10    0.000001           1         1           getsockopt
  0.10    0.000001           1         1           getgid
  0.10    0.000001           1         1           getegid
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         1           getuid
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         1           set_tid_address
------ ----------- ----------- --------- --------- ----------------
100.00    0.001024           2       374        37 total

5. Check connect() calls with strace -e trace=connect

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=connect curl -s webpod

✅ Output

connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.96.0.10")}, 16) = 0
connect(5, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.96.33.91")}, 16) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.96.33.91")}, 16) = -1 EINPROGRESS (Operation in progress)
Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:56178
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

+++ exited with 0 +++

The connect() trace shows the connection being opened to the ClusterIP (10.96.33.91)

eBPF then rewires the connection to the actual Pod IP
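
The rewrite is done by the cil_sock4_connect program attached at the cgroup level (the socketlb pins seen earlier). A sketch to list those attachments, assuming Cilium's usual cgroup2 mount at /run/cilium/cgroupv2:

# Show eBPF programs attached to the cgroup hierarchy (socket LB hooks)
c0bpf cgroup tree /run/cilium/cgroupv2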

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   3d1h
webpod       ClusterIP   10.96.33.91   <none>        80/TCP    2d23h

6. Check the socket's local address with strace -e trace=getsockname

getsockname shows the local socket address as 172.20.0.191 (the curl-pod IP)

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=getsockname curl -s webpod
getsockname(4, {sa_family=AF_INET, sin_port=htons(43981), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(35772), sin_addr=inet_addr("172.20.0.191")}, [16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(48122), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(48122), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(48122), sin_addr=inet_addr("172.20.0.191")}, [128 => 16]) = 0
Hostname: webpod-9894b69cd-v2n4m
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.88
IP: fe80::1c48:9ff:fe0c:94d8
RemoteAddr: 172.20.0.191:48122
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

+++ exited with 0 +++

The connection targeted the ClusterIP, but eBPF transparently delivers it to a specific Pod IP; the getsockopt trace below happens to hit the other webpod replica

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=getsockopt curl -s webpod
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
Hostname: webpod-9894b69cd-zxcnw
IP: 127.0.0.1
IP: ::1
IP: 172.20.2.124
IP: fe80::781d:98ff:fe11:9f1a
RemoteAddr: 172.20.0.191:45302
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

+++ exited with 0 +++
  • Even when accessing the ClusterIP, Cilium's eBPF translates at the socket level and connects directly to the actual Pod IP
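
The reverse mapping that keeps this transparent to applications lives in cilium_lb4_reverse_sk (10 entries in the map list earlier); it lets calls such as getpeername() still report the original ClusterIP. A sketch:

# Socket-level reverse-NAT entries: socket cookie → original ClusterIP frontend
c0 map get cilium_lb4_reverse_sk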
This post is licensed under CC BY 4.0 by the author.