Cilium Week 3 Summary
🔧 Lab Environment Setup
- Cluster nodes
  - Control plane: 192.168.10.100 (memory raised from 2GB to 2.5GB)
  - Worker node: 192.168.10.101 (1 node)
  - Network range: 192.168.10.0/24
  - Taint removed from the control plane → pods can be scheduled on it
  - kubeadm-init-ctr-config.yaml uses the version variable K8S_VERSION_PLACEHOLDER for reusability
- Router
  - Address: 192.168.10.200
  - Relays traffic between the internal network (10.10.0.0/16) and the Kubernetes network (192.168.10.0/24)
  - Static route on the k8s nodes: to 10.10.0.0/16 → via 192.168.10.200
  - IP forwarding enabled
  - Two dummy interfaces created: loop1 (10.10.1.200), loop2 (10.10.2.200) (see the sketch below)
๐ ์ค์ต ํ๊ฒฝ ๋ฐฐํฌ
1. Vagrantfile ๋ค์ด๋ก๋ ๋ฐ ๊ฐ์๋จธ์ ๊ตฌ์ฑ
1
2
3
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/3w/Vagrantfile
vagrant up
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'router' up with 'virtualbox' provider...
==> k8s-ctr: Preparing master VM for linked clones...
k8s-ctr: This is a one time operation. Once the master VM is prepared,
k8s-ctr: it will be used as a base for linked clones, making the creation
k8s-ctr: of new VMs take milliseconds on a modern system.
==> k8s-ctr: Importing base box 'bento/ubuntu-24.04'...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
k8s-ctr: Adapter 1: nat
k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
k8s-ctr: SSH address: 127.0.0.1:60000
k8s-ctr: SSH username: vagrant
k8s-ctr: SSH auth method: private key
k8s-ctr:
k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
k8s-ctr: this with a newly generated keypair for better security.
k8s-ctr:
k8s-ctr: Inserting generated public key within guest...
k8s-ctr: Removing insecure key from the guest if it's present...
k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250730-27828-acul9.sh
k8s-ctr: >>>> Initial Config Start <<<<
k8s-ctr: [TASK 1] Setting Profile & Bashrc
k8s-ctr: [TASK 2] Disable AppArmor
k8s-ctr: [TASK 3] Disable and turn off SWAP
k8s-ctr: [TASK 4] Install Packages
k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-ctr: [TASK 6] Install Packages & Helm
k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250730-27828-zl78rn.sh
k8s-ctr: >>>> K8S Controlplane config Start <<<<
k8s-ctr: [TASK 1] Initial Kubernetes
k8s-ctr: [TASK 2] Setting kube config file
k8s-ctr: [TASK 3] Source the completion
k8s-ctr: [TASK 4] Alias kubectl to k
k8s-ctr: [TASK 5] Install Kubectx & Kubens
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: [TASK 7] Install Cilium CNI
k8s-ctr: [TASK 8] Install Cilium / Hubble CLI
k8s-ctr: cilium
k8s-ctr: hubble
k8s-ctr: [TASK 9] Remove node taint
k8s-ctr: node/k8s-ctr untainted
k8s-ctr: [TASK 10] local DNS with hosts file
k8s-ctr: [TASK 11] Install Prometheus & Grafana
k8s-ctr: [TASK 12] Dynamically provisioning persistent local storage with Kubernetes
k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250730-27828-7fwjno.sh
k8s-ctr: >>>> Route Add Config Start <<<<
k8s-ctr: >>>> Route Add Config End <<<<
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
k8s-w1: Adapter 1: nat
k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
k8s-w1: SSH address: 127.0.0.1:60001
k8s-w1: SSH username: vagrant
k8s-w1: SSH auth method: private key
k8s-w1:
k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w1: this with a newly generated keypair for better security.
k8s-w1:
k8s-w1: Inserting generated public key within guest...
k8s-w1: Removing insecure key from the guest if it's present...
k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250730-27828-km5kmk.sh
k8s-w1: >>>> Initial Config Start <<<<
k8s-w1: [TASK 1] Setting Profile & Bashrc
k8s-w1: [TASK 2] Disable AppArmor
k8s-w1: [TASK 3] Disable and turn off SWAP
k8s-w1: [TASK 4] Install Packages
k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w1: [TASK 6] Install Packages & Helm
k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250730-27828-fmg78c.sh
k8s-w1: >>>> K8S Node config Start <<<<
k8s-w1: [TASK 1] K8S Controlplane Join
k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250730-27828-ila0lv.sh
k8s-w1: >>>> Route Add Config Start <<<<
k8s-w1: >>>> Route Add Config End <<<<
==> router: Cloning VM...
==> router: Matching MAC address for NAT networking...
==> router: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> router: Setting the name of the VM: router
==> router: Clearing any previously set network interfaces...
==> router: Preparing network interfaces based on configuration...
router: Adapter 1: nat
router: Adapter 2: hostonly
==> router: Forwarding ports...
router: 22 (guest) => 60009 (host) (adapter 1)
==> router: Running 'pre-boot' VM customizations...
==> router: Booting VM...
==> router: Waiting for machine to boot. This may take a few minutes...
router: SSH address: 127.0.0.1:60009
router: SSH username: vagrant
router: SSH auth method: private key
router: Warning: Connection reset. Retrying...
router:
router: Vagrant insecure key detected. Vagrant will automatically replace
router: this with a newly generated keypair for better security.
router:
router: Inserting generated public key within guest...
router: Removing insecure key from the guest if it's present...
router: Key inserted! Disconnecting and reconnecting using new SSH key...
==> router: Machine booted and ready!
==> router: Checking for guest additions in VM...
==> router: Setting hostname...
==> router: Configuring and enabling network interfaces...
==> router: Running provisioner: shell...
router: Running: /tmp/vagrant-shell20250730-27828-2x1jkp.sh
router: >>>> Initial Config Start <<<<
router: [TASK 1] Setting Profile & Bashrc
router: [TASK 2] Disable AppArmor
router: [TASK 3] Add Kernel setting - IP Forwarding
router: [TASK 4] Setting Dummy Interface
router: [TASK 5] Install Packages
router: [TASK 6] Install Apache
router: >>>> Initial Config End <<<<
2. ์ปจํธ๋กค ํ๋ ์ธ ๋ ธ๋ ์ ์
1
vagrant ssh k8s-ctr
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Wed Jul 30 02:46:22 PM KST 2025
System load: 0.28
Usage of /: 29.2% of 30.34GB
Memory usage: 51%
Swap usage: 0%
Processes: 217
Users logged in: 0
IPv4 address for eth0: 10.0.2.15
IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
(โ|HomeLab:N/A) root@k8s-ctr:~#
3. ์์ปค ๋ ธ๋ SSH ํต์ ํ์ธ
์ปจํธ๋กค ํ๋ ์ธ์์ ์์ปค ๋ ธ๋์ SSH ์ ์ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
โ ย ์ถ๋ ฅ
1
2
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
4. ํด๋ฌ์คํฐ ๋คํธ์ํฌ CIDR ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"
โ ย ์ถ๋ ฅ
1
2
"--service-cluster-ip-range=10.96.0.0/16",
"--cluster-cidr=10.244.0.0/16",
5. ๋ ธ๋ ์ํ ๋ฐ ๋ด๋ถ IP ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
โ ย ์ถ๋ ฅ
1
2
3
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 7m38s v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w1 Ready <none> 5m38s v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
- ๋ด๋ถ IP ํ์ธ ๊ฐ๋ฅ (
192.168.10.100
,192.168.10.101
)
6. ์ฟ ๋ฒ๋คํฐ์ค IPAM ๋ฐ ํ๋ ๋คํธ์ํฌ ์ํ ํ์ธ
(1) ๋ ธ๋๋ณ Pod CIDR ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
โ ย ์ถ๋ ฅ
1
2
k8s-ctr 10.244.0.0/24
k8s-w1 10.244.1.0/24
- Shows the Pod CIDRs that kube-controller-manager assigned to each node
(2) Check the Pod CIDRs Cilium uses
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
"podCIDRs": [
"10.244.0.0/24"
],
--
"podCIDRs": [
"10.244.1.0/24"
],
- The CiliumNode resources show the Pod CIDRs as each node perceives them
(3) Check the IPAM mode
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
โ ย ์ถ๋ ฅ
1
2
ipam kubernetes
ipam-cilium-node-update-rate 15s
- In kubernetes mode, Kubernetes allocates the IPs and Cilium uses them as-is
(4) Check Cilium endpoint IPs
Query the ciliumendpoints resources to see the actual IPs assigned to the pods:
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
cilium-monitoring grafana-5c69859d9-wdb82 22795 ready 10.244.0.104
cilium-monitoring prometheus-6fc896bc5d-bxnd5 1213 ready 10.244.0.65
kube-system coredns-674b8bbfcf-9pxvx 28565 ready 10.244.0.199
kube-system coredns-674b8bbfcf-khjhq 28565 ready 10.244.0.59
kube-system hubble-relay-5dcd46f5c-5r79v 17061 ready 10.244.0.122
kube-system hubble-ui-76d4965bb6-xmdp8 2452 ready 10.244.0.80
local-path-storage local-path-provisioner-74f9666bc9-scg4s 56893 ready 10.244.0.253
- 10.244.0.x → control-plane node
- 10.244.1.x → worker node
๐ถ k9s ์ค์น ๋ฐ ์คํ ์ ๋ฆฌ
1. k9s ์ค์น
1
2
(โ|HomeLab:N/A) root@k8s-ctr:~# wget https://github.com/derailed/k9s/releases/latest/download/k9s_linux_amd64.deb -O /tmp/k9s_linux_amd64.deb
apt install /tmp/k9s_linux_amd64.deb
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
--2025-07-30 14:55:17-- https://github.com/derailed/k9s/releases/latest/download/k9s_linux_amd64.deb
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/derailed/k9s/releases/download/v0.50.9/k9s_linux_amd64.deb [following]
--2025-07-30 14:55:17-- https://github.com/derailed/k9s/releases/download/v0.50.9/k9s_linux_amd64.deb
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://release-assets.githubusercontent.com/github-production-release-asset/167596393/68b2cb87-c3c4-4c08-8ebe-b8aaa51894f5?sp=r&sv=2018-11-09&sr=b&spr=https&se=2025-07-30T06%3A41%3A09Z&rscd=attachment%3B+filename%3Dk9s_linux_amd64.deb&rsct=application%2Foctet-stream&skoid=96c2d410-5711-43a1-aedd-ab1947aa7ab0&sktid=398a6654-997b-47e9-b12b-9515b896b4de&skt=2025-07-30T05%3A40%3A14Z&ske=2025-07-30T06%3A41%3A09Z&sks=b&skv=2018-11-09&sig=JeO%2BpcQvqHA9Cn%2F9LNC%2FVbGkvi%2BA2WVntygiGkgYwwk%3D&jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmVsZWFzZS1hc3NldHMuZ2l0aHVidXNlcmNvbnRlbnQuY29tIiwia2V5Ijoia2V5MSIsImV4cCI6MTc1Mzg1NTIyMywibmJmIjoxNzUzODU0OTIzLCJwYXRoIjoicmVsZWFzZWFzc2V0cHJvZHVjdGlvbi5ibG9iLmNvcmUud2luZG93cy5uZXQifQ.lj7UoO3dvLsG-a_0jHncvKP_C05qv3_v8-1Ne7RIpK0&response-content-disposition=attachment%3B%20filename%3Dk9s_linux_amd64.deb&response-content-type=application%2Foctet-stream [following]
--2025-07-30 14:55:18-- https://release-assets.githubusercontent.com/github-production-release-asset/167596393/68b2cb87-c3c4-4c08-8ebe-b8aaa51894f5?sp=r&sv=2018-11-09&sr=b&spr=https&se=2025-07-30T06%3A41%3A09Z&rscd=attachment%3B+filename%3Dk9s_linux_amd64.deb&rsct=application%2Foctet-stream&skoid=96c2d410-5711-43a1-aedd-ab1947aa7ab0&sktid=398a6654-997b-47e9-b12b-9515b896b4de&skt=2025-07-30T05%3A40%3A14Z&ske=2025-07-30T06%3A41%3A09Z&sks=b&skv=2018-11-09&sig=JeO%2BpcQvqHA9Cn%2F9LNC%2FVbGkvi%2BA2WVntygiGkgYwwk%3D&jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmVsZWFzZS1hc3NldHMuZ2l0aHVidXNlcmNvbnRlbnQuY29tIiwia2V5Ijoia2V5MSIsImV4cCI6MTc1Mzg1NTIyMywibmJmIjoxNzUzODU0OTIzLCJwYXRoIjoicmVsZWFzZWFzc2V0cHJvZHVjdGlvbi5ibG9iLmNvcmUud2luZG93cy5uZXQifQ.lj7UoO3dvLsG-a_0jHncvKP_C05qv3_v8-1Ne7RIpK0&response-content-disposition=attachment%3B%20filename%3Dk9s_linux_amd64.deb&response-content-type=application%2Foctet-stream
Resolving release-assets.githubusercontent.com (release-assets.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to release-assets.githubusercontent.com (release-assets.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38258230 (36M) [application/octet-stream]
Saving to: โ/tmp/k9s_linux_amd64.debโ
/tmp/k9s_linux_amd64.de 100%[==============================>] 36.49M 17.9MB/s in 2.0s
2025-07-30 14:55:20 (17.9 MB/s) - โ/tmp/k9s_linux_amd64.debโ saved [38258230/38258230]
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'k9s' instead of '/tmp/k9s_linux_amd64.deb'
The following NEW packages will be installed:
k9s
0 upgraded, 1 newly installed, 0 to remove and 175 not upgraded.
Need to get 0 B/38.3 MB of archives.
After this operation, 124 MB of additional disk space will be used.
Get:1 /tmp/k9s_linux_amd64.deb k9s amd64 0.50.9 [38.3 MB]
Selecting previously unselected package k9s.
(Reading database ... 51864 files and directories currently installed.)
Preparing to unpack /tmp/k9s_linux_amd64.deb ...
Unpacking k9s (0.50.9) ...
Setting up k9s (0.50.9) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
2. Run k9s
(โ|HomeLab:N/A) root@k8s-ctr:~# k9s
๐ Cilium IPAM ์ค์ต
1. IPAM ๊ฐ๋ ๋ฐ Cilium ๋ชจ๋
Kubernetes Host Scope
- Each node uses its own dedicated PodCIDR
- kube-controller-manager allocates and manages the IP ranges
- Each node is assigned a predefined CIDR block
Cilium Cluster Scope
- Cilium manages the IP pool itself and allocates dynamically
- The default mode when no explicit IPAM setting is given
- Integration with external IPAM (AWS ENI, Azure IPAM, etc.) is also possible
Helm values for selecting either mode are sketched below.
2. ๋ฉํฐ CIDR ๋ฐ Multi-pool ์ ์ฝ์ฌํญ
ํด๋ฌ์คํฐ ๋ด ๋ณต์ CIDR ๊ตฌ์ฑ
- Cilium์ ํด๋ฌ์คํฐ ๋ด ์ฌ๋ฌ CIDR ๋ธ๋ก์ ์ง์
- ์ ์ฝ์ฌํญ:
vxlan
,geneve
๊ฐ์ ํฐ๋ ๊ธฐ๋ฐ ๋ผ์ฐํ ๋ชจ๋์์๋ Multi-pool ๋ฏธ์ง์ - ํ์ฅ์ฑ: ํน์ ๋ ธ๋์ Pod ์์๊ฐ ์ฆ๊ฐํ์ฌ CIDR์ด ๋ถ์กฑํ ๊ฒฝ์ฐ, ํด๋น ๋ ธ๋์๋ง ์ถ๊ฐ CIDR์ ํ ๋นํ์ฌ ์ ์ฐํ ํ์ฅ ๊ฐ๋ฅ
3. ๋ ธ๋๋ณ Pod CIDR ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
โ ย ์ถ๋ ฅ
1
2
k8s-ctr 10.244.0.0/24
k8s-w1 10.244.1.0/24
- In a Kubernetes Host Scope IPAM environment, each node is automatically assigned a /24 CIDR block
4. Check the kube-controller-manager settings
(โ|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system kube-controller-manager-k8s-ctr
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
Name: kube-controller-manager-k8s-ctr
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: k8s-ctr/192.168.10.100
Start Time: Wed, 30 Jul 2025 14:41:28 +0900
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash: 2da908bf08a691927af74a336851f6e1
kubernetes.io/config.mirror: 2da908bf08a691927af74a336851f6e1
kubernetes.io/config.seen: 2025-07-30T14:41:20.396308103+09:00
kubernetes.io/config.source: file
Status: Running
SeccompProfile: RuntimeDefault
IP: 192.168.10.100
IPs:
IP: 192.168.10.100
Controlled By: Node/k8s-ctr
Containers:
kube-controller-manager:
Container ID: containerd://fb984494600e1c9a3755783595ee377a07d82efade606d941f2c162a604eed32
Image: registry.k8s.io/kube-controller-manager:v1.33.2
Image ID: registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081
Port: <none>
Host Port: <none>
Command:
kube-controller-manager
--allocate-node-cidrs=true
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
--bind-address=127.0.0.1
--client-ca-file=/etc/kubernetes/pki/ca.crt
--cluster-cidr=10.244.0.0/16
--cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--service-cluster-ip-range=10.96.0.0/16
--use-service-account-credentials=true
State: Running
Started: Wed, 30 Jul 2025 14:41:24 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 200m
Liveness: http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
Startup: http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
flexvolume-dir:
Type: HostPath (bare host directory volume)
Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
Events: <none>
- --allocate-node-cidrs=true: enables automatic per-node CIDR allocation
- --cluster-cidr=10.244.0.0/16: sets the cluster-wide Pod IP range
- --service-cluster-ip-range=10.96.0.0/16: sets the Service IP range
The one-liner below is one way to pull these flags out directly.
5. Check the Pod CIDRs Cilium recognizes
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
"podCIDRs": [
"10.244.0.0/24"
],
--
"podCIDRs": [
"10.244.1.0/24"
],
- ์ปจํธ๋กค ํ๋ ์ธ ๋
ธ๋:
10.244.0.0/24
- ์์ปค ๋
ธ๋:
10.244.1.0/24
6. Check Cilium endpoint IP allocation
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
cilium-monitoring grafana-5c69859d9-wdb82 22795 ready 10.244.0.104
cilium-monitoring prometheus-6fc896bc5d-bxnd5 1213 ready 10.244.0.65
kube-system coredns-674b8bbfcf-9pxvx 28565 ready 10.244.0.199
kube-system coredns-674b8bbfcf-khjhq 28565 ready 10.244.0.59
kube-system hubble-relay-5dcd46f5c-5r79v 17061 ready 10.244.0.122
kube-system hubble-ui-76d4965bb6-xmdp8 2452 ready 10.244.0.80
local-path-storage local-path-provisioner-74f9666bc9-scg4s 56893 ready 10.244.0.253
- ๋ชจ๋ Pod๊ฐ ์ปจํธ๋กค ํ๋ ์ธ ๋
ธ๋์ CIDR ๋ฒ์(
10.244.0.0/24
) ๋ด์์ IP ํ ๋น๋ฐ์ - IP ํ ๋น์ด ์ ์์ ์ผ๋ก ์ด๋ฃจ์ด์ง๊ณ ๋ชจ๋ Endpoint๊ฐ
ready
์ํ
๐ฆ ์ํ ์ ํ๋ฆฌ์ผ์ด์ ๋ฐฐํฌ ๋ฐ ํ์ธ & Termshark
1. ์ํ ์ ํ๋ฆฌ์ผ์ด์ ๋ฐฐํฌ (webpod)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: webpod
spec:
replicas: 2
selector:
matchLabels:
app: webpod
template:
metadata:
labels:
app: webpod
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
                - webpod  # match this Deployment's own app label so the two replicas spread across nodes
topologyKey: "kubernetes.io/hostname"
containers:
- name: webpod
image: traefik/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: webpod
labels:
app: webpod
spec:
selector:
app: webpod
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
EOF
# ๊ฒฐ๊ณผ
deployment.apps/webpod created
service/webpod created
2. Deploy a curl test pod (curl-pod)
(โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
nodeName: k8s-ctr
containers:
- name: curl
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# ๊ฒฐ๊ณผ
pod/curl-pod created
- nodeName: k8s-ctr pins the pod explicitly to the control-plane node
3. ๋ฆฌ์์ค ๋ฐฐํฌ ์ํ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/webpod 2/2 2 2 97s webpod traefik/whoami app=webpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/webpod ClusterIP 10.96.152.212 <none> 80/TCP 97s app=webpod
NAME ENDPOINTS AGE
endpoints/webpod 10.244.0.1:80,10.244.1.96:80 96s
- Deployment: both pods created and running normally
- Service: ClusterIP type, assigned 10.96.152.212
- Endpoints: 10.244.0.1:80 (pod on the control-plane node), 10.244.1.96:80 (pod on the worker node)
4. Query Cilium endpoint information
(1) Check the EndpointSlice
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod
โ ย ์ถ๋ ฅ
1
2
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
webpod-2wrvt IPv4 80 10.244.0.1,10.244.1.96 118s
(2) Detailed Cilium endpoint information
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
147 Disabled Disabled 28565 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 10.244.0.199 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
318 Disabled Disabled 5580 k8s:app=curl 10.244.0.27 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
853 Disabled Disabled 2452 k8s:app.kubernetes.io/name=hubble-ui 10.244.0.80 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
1009 Disabled Disabled 12497 k8s:app=webpod 10.244.0.1 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
1043 Disabled Disabled 56893 k8s:app=local-path-provisioner 10.244.0.253 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=local-path-storage
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account
k8s:io.kubernetes.pod.namespace=local-path-storage
1452 Disabled Disabled 28565 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 10.244.0.59 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1680 Disabled Disabled 1213 k8s:app=prometheus 10.244.0.65 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
1694 Disabled Disabled 17061 k8s:app.kubernetes.io/name=hubble-relay 10.244.0.122 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-relay
2772 Disabled Disabled 22795 k8s:app=grafana 10.244.0.104 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
3358 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
5. ์๋น์ค ํต์ ํ ์คํธ
(1) ๋จ์ผ ์์ฒญ ํ ์คํธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
โ ย ์ถ๋ ฅ
1
Hostname: webpod-697b545f57-bpzn9
(2) ์ฐ์ ์์ฒญ์ ํตํ ๋ก๋ ๋ฐธ๋ฐ์ฑ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
Hostname: webpod-697b545f57-xb8fd
Hostname: webpod-697b545f57-xb8fd
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-xb8fd
...
- ๋ ๊ฐ์ ์๋ก ๋ค๋ฅธ Pod(
bpzn9
,xb8fd
) ๊ฐ์ ํธ๋ํฝ์ด ๋ถ์ฐ๋จ - DNS ๊ธฐ๋ฐ ์๋น์ค ๋์ค์ปค๋ฒ๋ฆฌ๊ฐ ์ ์ ๋์ (
webpod
์๋น์ค๋ช ์ผ๋ก ์ ๊ทผ)
6. Hubble flow tracing hands-on
(1) Find the Hubble UI web address
(โ|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo -e "http://$NODEIP:30003"
โ ย ์ถ๋ ฅ
1
http://192.168.10.100:30003
(2) ์ง์์ ์ธ curl ์์ฒญ ์ํ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
- curl์ด default ๋ค์์คํ์ด์ค์ ์๋ webpod ์๋น์ค๋ช ์ผ๋ก ๋ค์ด๊ฐ๋๊ฑธ ํ์ธํ ์ ์๋ค.
(3) Port-forward Hubble Relay
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
โ ย ์ถ๋ ฅ
1
2
[1] 10026
โน๏ธ Hubble Relay is available at 127.0.0.1:4245
- The Hubble Relay gRPC API becomes reachable on localhost port 4245
(4) ์ค์๊ฐ ๋คํธ์ํฌ ํ๋ฆ ๋ชจ๋ํฐ๋ง
1
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --pod curl-pod
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
Jul 30 06:30:30.990: default/curl-pod:53176 (ID:5580) <- default/webpod-697b545f57-xb8fd:80 (ID:12497) to-network FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:30.990: default/curl-pod:53176 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:32.254: default/curl-pod (ID:5580) <> 10.96.152.212:80 (world) pre-xlate-fwd TRACED (TCP)
Jul 30 06:30:32.254: default/curl-pod (ID:5580) <> default/webpod-697b545f57-bpzn9:80 (ID:12497) post-xlate-fwd TRANSLATED (TCP)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.255: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:32.255: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.256: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.256: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:33.263: default/curl-pod (ID:5580) <> 10.96.152.212:80 (world) pre-xlate-fwd TRACED (TCP)
Jul 30 06:30:33.263: default/curl-pod (ID:5580) <> default/webpod-697b545f57-bpzn9:80 (ID:12497) post-xlate-fwd TRANSLATED (TCP)
Jul 30 06:30:33.263: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:33.263: default/curl-pod:58942 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:33.263: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) <- default/webpod-697b545f57-xb8fd:80 (ID:12497) to-network FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
- 10.96.152.212:80: the ClusterIP service address (labeled world)
- pre-xlate-fwd: traced before NAT translation, as the socket load balancer intercepts access to the service IP
- post-xlate-fwd: after NAT translation, rewritten to the actual Pod IP
One way to list the mapping behind this translation is shown below.
TCP ์ฐ๊ฒฐ ์๋ช ์ฃผ๊ธฐ
1
2
3
4
5
TCP Flags: SYN # ์ฐ๊ฒฐ ์์
TCP Flags: SYN, ACK # ์ฐ๊ฒฐ ์๋ฝ
TCP Flags: ACK # ์ฐ๊ฒฐ ํ์ธ
TCP Flags: ACK, PSH # HTTP ๋ฐ์ดํฐ ์ ์ก
TCP Flags: ACK, FIN # ์ฐ๊ฒฐ ์ข
๋ฃ
7. ์๋น์ค ์ ๋ณด ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# k get svc
โ ย ์ถ๋ ฅ
1
2
3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 57m
webpod ClusterIP 10.96.152.212 <none> 80/TCP 19m
- The webpod service's ClusterIP 10.96.152.212 matches the service address seen in the Hubble logs
- Cilium's socket-level load balancer transparently translates the service IP into a real Pod IP
8. ๋คํธ์ํฌ ํจํท ์บก์ฒ ๋ถ์
(1) tcpdump๋ฅผ ํตํ ์ค์๊ฐ ๋ชจ๋ํฐ๋ง
1
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 tcp port 80 -nn
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
15:43:14.755578 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [S], seq 501953752, win 64240, options [mss 1460,sackOK,TS val 1594519 ecr 0,nop,wscale 7], length 0
15:43:14.756290 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [S.], seq 2849751208, ack 501953753, win 65160, options [mss 1460,sackOK,TS val 3721394349 ecr 1594519,nop,wscale 7], length 0
15:43:14.756381 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 1594520 ecr 3721394349], length 0
15:43:14.756622 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [P.], seq 1:71, ack 1, win 502, options [nop,nop,TS val 1594521 ecr 3721394349], length 70: HTTP: GET / HTTP/1.1
15:43:14.757363 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [.], ack 71, win 509, options [nop,nop,TS val 3721394350 ecr 1594521], length 0
15:43:14.757855 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [P.], seq 1:321, ack 71, win 509, options [nop,nop,TS val 3721394351 ecr 1594521], length 320: HTTP: HTTP/1.1 200 OK
15:43:14.757884 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [.], ack 321, win 501, options [nop,nop,TS val 1594522 ecr 3721394351], length 0
15:43:14.758124 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [F.], seq 71, ack 321, win 501, options [nop,nop,TS val 1594522 ecr 3721394351], length 0
15:43:14.758448 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [F.], seq 321, ack 72, win 509, options [nop,nop,TS val 3721394352 ecr 1594522], length 0
15:43:14.758485 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [.], ack 322, win 501, options [nop,nop,TS val 1594522 ecr 3721394352], length 0
15:43:16.770376 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [S], seq 2173259033, win 64240, options [mss 1460,sackOK,TS val 1596534 ecr 0,nop,wscale 7], length 0
15:43:16.771075 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [S.], seq 1449700480, ack 2173259034, win 65160, options [mss 1460,sackOK,TS val 3721396364 ecr 1596534,nop,wscale 7], length 0
15:43:16.771133 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 1596535 ecr 3721396364], length 0
15:43:16.771167 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [P.], seq 1:71, ack 1, win 502, options [nop,nop,TS val 1596535 ecr 3721396364], length 70: HTTP: GET / HTTP/1.1
15:43:16.771658 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [.], ack 71, win 509, options [nop,nop,TS val 3721396365 ecr 1596535], length 0
15:43:16.772436 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [P.], seq 1:321, ack 71, win 509, options [nop,nop,TS val 3721396366 ecr 1596535], length 320: HTTP: HTTP/1.1 200 OK
15:43:16.772479 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [.], ack 321, win 501, options [nop,nop,TS val 1596536 ecr 3721396366], length 0
15:43:16.772648 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [F.], seq 71, ack 321, win 501, options [nop,nop,TS val 1596537 ecr 3721396366], length 0
15:43:16.773058 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [F.], seq 321, ack 72, win 509, options [nop,nop,TS val 3721396366 ecr 1596537], length 0
15:43:16.773093 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [.], ack 322, win 501, options [nop,nop,TS val 1596537 ecr 3721396366], length 0
15:43:17.778477 IP 10.244.0.27.52802 > 10.244.1.96.80: Flags [S], seq 1698202645, win 64240, options [mss 1460,sackOK,TS val 1597542 ecr 0,nop,wscale 7], length 0
15:43:17.779167 IP 10.244.1.96.80 > 10.244.0.27.52802: Flags [S.], seq 4294649790, ack 1698202646, win 65160, options [mss 1460,sackOK,TS val 3721397372 ecr 1597542,nop,wscale 7], length 0
...
- 10.244.0.27: curl-pod's actual IP address
- 10.244.1.96: webpod's actual IP address (worker node)
- The service IP (10.96.152.212) never appears at the packet level
- eBPF performs the NAT translation transparently at the kernel level (see the BPF map peek below)
(2) ํจํท ์บก์ฒ ํ์ผ ์์ฑ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 tcp port 80 -w /tmp/http.pcap
โ ย ์ถ๋ ฅ
1
2
3
4
tcpdump: listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C30 packets captured
30 packets received by filter
0 packets dropped by kernel
(3) Analyze with Termshark
(โ|HomeLab:N/A) root@k8s-ctr:~# termshark -r /tmp/http.pcap
๐ [Cilium] Cluster Scope & ๋ง์ด๊ทธ๋ ์ด์ ์ค์ต
- https://docs.cilium.io/en/stable/network/concepts/ipam/cluster-pool/
- https://docs.cilium.io/en/stable/network/kubernetes/ipam-cluster-pool/
1. ๊ฐ์
- ๋ชฉํ: Kubernetes Host Scope์์ Cilium Cluster Scope IPAM ๋ชจ๋๋ก ๋ง์ด๊ทธ๋ ์ด์
- IP ๋์ญ ๋ณ๊ฒฝ: 10.244.0.0/16 โ 172.20.0.0/16
- ๊ด๋ฆฌ ์ฃผ์ฒด ๋ณ๊ฒฝ: kube-controller-manager โ Cilium Operator
ํต์ ํ์ธ ๋ชฉ์ , ๋ฐ๋ณต ์์ฒญ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
2. Change the IPAM mode
(1) First attempt (fails)
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set ipam.mode="cluster-pool" --set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} --set ipv4NativeRoutingCIDR=172.20.0.0/16
โ ย ์ถ๋ ฅ
1
Error: UPGRADE FAILED: template: cilium/templates/cilium-operator/deployment.yaml:145:26: executing "cilium/templates/cilium-operator/deployment.yaml" at <.Values.k8sServiceHostRef.name>: nil pointer evaluating interface {}.name
(2) ๋ฌธ์ ํด๊ฒฐ: Values ์ ์ ๐ก
๊ธฐ์กด ๊ฐ์ clean-values.yaml
๋ก ๋ฐฑ์
1
(โ|HomeLab:N/A) root@k8s-ctr:~# helm get values cilium -n kube-system > clean-values.yaml
์ค๋ฅ ์ ๋ฐ ํญ๋ชฉ ์ ๊ฑฐ ํ final-values.yaml
์์ฑํ์ฌ ์ค์ ์ ์
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
(โ|HomeLab:N/A) root@k8s-ctr:~# cat > final-values.yaml << EOF
autoDirectNodeRoutes: true
bpf:
masquerade: true
debug:
enabled: true
endpointHealthChecking:
enabled: false
endpointRoutes:
enabled: true
healthChecking: false
hubble:
enabled: true
metrics:
enableOpenMetrics: true
enabled:
- dns
- drop
- tcp
- flow
- port-distribution
- icmp
- httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction
relay:
enabled: true
ui:
enabled: true
service:
nodePort: 30003
type: NodePort
installNoConntrackIptablesRules: true
ipam:
mode: cluster-pool
operator:
clusterPoolIPv4PodCIDRList:
- "172.20.0.0/16"
ipv4NativeRoutingCIDR: 172.20.0.0/16
k8s:
requireIPv4PodCIDR: true
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
operator:
prometheus:
enabled: true
replicas: 1
prometheus:
enabled: true
routingMode: native
EOF
(3) ์ค์ ์ ์ฉ: IPAM cluster-pool๋ก ๋ณ๊ฒฝ ์ฑ๊ณต
1
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system -f final-values.yaml
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Wed Jul 30 16:26:05 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
3. Restart the Cilium components
(1) Restart the Cilium Operator
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart deploy/cilium-operator
# ๊ฒฐ๊ณผ
deployment.apps/cilium-operator restarted
(2) Restart the Cilium DaemonSet
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
4. Check in k9s
(โ|HomeLab:N/A) root@k8s-ctr:~# k9s
โ
ย ์ถ๋ ฅ - default ๋ค์์คํ์ด์ค
5. Confirm the IPAM mode change
The IPAM mode has been changed to cluster-pool successfully:
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
โ ย ์ถ๋ ฅ
1
2
ipam cluster-pool
ipam-cilium-node-update-rate 15s
6. Pod CIDR change not yet applied
(1) CiliumNode still holds the old CIDRs
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
"podCIDRs": [
"10.244.0.0/24"
],
--
"podCIDRs": [
"10.244.1.0/24"
],
(2) Check the Pod IPs in CiliumEndpoint
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
cilium-monitoring grafana-5c69859d9-wdb82 22795 ready 10.244.0.104
cilium-monitoring prometheus-6fc896bc5d-bxnd5 1213 ready 10.244.0.65
default curl-pod 5580 ready 10.244.0.27
default webpod-697b545f57-bpzn9 12497 ready 10.244.0.1
default webpod-697b545f57-xb8fd 12497 ready 10.244.1.96
kube-system coredns-674b8bbfcf-9pxvx 28565 ready 10.244.0.199
kube-system coredns-674b8bbfcf-khjhq 28565 ready 10.244.0.59
kube-system hubble-relay-5b48c999f9-cvjjc 17061 ready 10.244.1.67
kube-system hubble-ui-655f947f96-tcrrp 2452 ready 10.244.1.66
local-path-storage local-path-provisioner-74f9666bc9-scg4s 56893 ready 10.244.0.253
๊ทธ๋ฌ๋, ํต์ ์ ์๋๊ณ ์์
7. Finding out why the IPAM change hasn't applied
(1) Check the podCIDRs values in the CiliumNode resources
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
"podCIDRs": [
"10.244.0.0/24"
],
--
"podCIDRs": [
"10.244.1.0/24"
],
(2) ๊ธฐ์กด CiliumNode์ IP ์ ์ง ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode
โ ย ์ถ๋ ฅ
1
2
3
NAME CILIUMINTERNALIP INTERNALIP AGE
k8s-ctr 10.244.0.70 192.168.10.100 130m
k8s-w1 10.244.1.175 192.168.10.101 128m
8. Delete the CiliumNode resources and restart
(1) Delete the worker node's CiliumNode
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ciliumnode k8s-w1
# ๊ฒฐ๊ณผ
ciliumnode.cilium.io "k8s-w1" deleted
(2) Restart the Cilium DaemonSet
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
(3) ๋ณ๊ฒฝ๋ Pod CIDRs ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
"podCIDRs": [
"10.244.0.0/24"
],
--
"podCIDRs": [
"172.20.0.0/24"
],
9. ์ปจํธ๋กคํ๋ ์ธ ๋ ธ๋๋ CIDR ์ฌ์ค์
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
cilium-monitoring grafana-5c69859d9-wdb82 22795 ready 10.244.0.104
cilium-monitoring prometheus-6fc896bc5d-bxnd5 1213 ready 10.244.0.65
default curl-pod 5580 ready 10.244.0.27
default webpod-697b545f57-bpzn9 12497 ready 10.244.0.1
kube-system coredns-674b8bbfcf-9pxvx 28565 ready 10.244.0.199
kube-system coredns-674b8bbfcf-khjhq 28565 ready 10.244.0.59
local-path-storage local-path-provisioner-74f9666bc9-scg4s 56893 ready 10.244.0.253
(1) ์ปจํธ๋กคํ๋ ์ธ ๋ ธ๋ ์ญ์
1
2
3
4
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ciliumnode k8s-ctr
# ๊ฒฐ๊ณผ
ciliumnode.cilium.io "k8s-ctr" deleted
(2) Restart the DaemonSet
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
# ๊ฒฐ๊ณผ
daemonset.apps/cilium restarted
(3) ๋ณ๊ฒฝ๋ Pod CIDRs ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
"podCIDRs": [
"172.20.1.0/24"
],
--
"podCIDRs": [
"172.20.0.0/24"
],
10. ์๋ํฌ์ธํธ ๋ฐ ๋ผ์ฐํ ๊ฒฝ๋ก ํ์ธ
(1) ๋ณ๊ฒฝ๋ Endpoint IP ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
โ ย ์ถ๋ ฅ
1
2
3
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
kube-system coredns-674b8bbfcf-gbnm8 28565 ready 172.20.0.186
kube-system coredns-674b8bbfcf-vvgfm 28565 ready 172.20.1.144
(2) ๋ผ์ฐํ ๊ฒฝ๋ก ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel
172.20.1.144 dev lxcf2a822e72a6e proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.186 dev lxc80130454cb70 proto kernel scope link
172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
11. ๊ธฐ์กด Pod์ IP ์ ์ง ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide | grep 10.244.
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
cilium-monitoring grafana-5c69859d9-wdb82 0/1 Running 0 143m 10.244.0.104 k8s-ctr <none> <none>
cilium-monitoring prometheus-6fc896bc5d-bxnd5 1/1 Running 0 143m 10.244.0.65 k8s-ctr <none> <none>
default curl-pod 1/1 Running 0 105m 10.244.0.27 k8s-ctr <none> <none>
default webpod-697b545f57-bpzn9 1/1 Running 0 106m 10.244.0.1 k8s-ctr <none> <none>
default webpod-697b545f57-xb8fd 1/1 Running 0 106m 10.244.1.96 k8s-w1 <none> <none>
kube-system hubble-relay-5b48c999f9-cvjjc 0/1 Running 5 (28s ago) 39m 10.244.1.67 k8s-w1 <none> <none>
kube-system hubble-ui-655f947f96-tcrrp 1/2 CrashLoopBackOff 6 (106s ago) 39m 10.244.1.66 k8s-w1 <none> <none>
local-path-storage local-path-provisioner-74f9666bc9-scg4s 1/1 Running 0 143m 10.244.0.253 k8s-ctr <none> <none>
12. Restart the Deployment resources
Restart the system and monitoring pods:
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart deploy/hubble-relay deploy/hubble-ui
kubectl -n cilium-monitoring rollout restart deploy/prometheus deploy/grafana
kubectl rollout restart deploy/webpod
โ ย ์ถ๋ ฅ
1
2
3
4
5
deployment.apps/hubble-relay restarted
deployment.apps/hubble-ui restarted
deployment.apps/prometheus restarted
deployment.apps/grafana restarted
deployment.apps/webpod restarted
13. ์๋ ์์ฑ ํ๋ ์ญ์ ๋ฐ ์ฌ๋ฐฐํฌ
(1) curl-pod ์ญ์
1
2
3
4
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete pod curl-pod
# ์ถ๋ ฅ
pod "curl-pod" deleted
(2) Redeploy curl-pod
(โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
nodeName: k8s-ctr
containers:
- name: curl
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# ๊ฒฐ๊ณผ
pod/curl-pod created
14. ์ IP ํ ๋น ์ํ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
cilium-monitoring grafana-6bc98cff96-h74hv 22795 ready 172.20.0.67
cilium-monitoring prometheus-597ff4d4c5-hzrsx 1213 ready 172.20.0.17
default curl-pod 5580 ready 172.20.1.236
default webpod-556878d5d7-7p8bn 12497 ready 172.20.1.40
default webpod-556878d5d7-r4dmh 12497 ready 172.20.0.130
kube-system coredns-674b8bbfcf-gbnm8 28565 ready 172.20.0.186
kube-system coredns-674b8bbfcf-vvgfm 28565 ready 172.20.1.144
kube-system hubble-relay-c8db994db-5hc26 17061 ready 172.20.0.190
kube-system hubble-ui-5c5855f4bf-8dkrf 2452 ready 172.20.0.162
(โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
NAMESPACE NAME READY STATUS RESTARTS AGE
cilium-monitoring grafana-6bc98cff96-h74hv 1/1 Running 0 3m31s
cilium-monitoring prometheus-597ff4d4c5-hzrsx 1/1 Running 0 3m31s
default curl-pod 1/1 Running 0 110s
default webpod-556878d5d7-7p8bn 1/1 Running 0 3m4s
default webpod-556878d5d7-r4dmh 1/1 Running 0 3m30s
kube-system cilium-8nxg4 1/1 Running 0 8m42s
kube-system cilium-envoy-mn4qm 1/1 Running 0 44m
kube-system cilium-envoy-zgsk4 1/1 Running 0 44m
kube-system cilium-kl2mj 1/1 Running 0 8m42s
kube-system cilium-operator-765ddcc649-ft64f 1/1 Running 0 38m
kube-system coredns-674b8bbfcf-gbnm8 1/1 Running 0 8m19s
kube-system coredns-674b8bbfcf-vvgfm 1/1 Running 0 8m4s
kube-system etcd-k8s-ctr 1/1 Running 0 149m
kube-system hubble-relay-c8db994db-5hc26 1/1 Running 0 3m31s
kube-system hubble-ui-5c5855f4bf-8dkrf 2/2 Running 0 3m31s
kube-system kube-apiserver-k8s-ctr 1/1 Running 0 149m
kube-system kube-controller-manager-k8s-ctr 1/1 Running 0 149m
kube-system kube-proxy-5ccc4 1/1 Running 0 147m
kube-system kube-proxy-mzn7t 1/1 Running 0 149m
kube-system kube-scheduler-k8s-ctr 1/1 Running 0 149m
local-path-storage local-path-provisioner-74f9666bc9-scg4s 1/1 Running 0 148m
15. Communication test from curl-pod
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-r4dmh
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-r4dmh
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-r4dmh
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-7p8bn
...
16. Resolve the Hubble port-forward conflict
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
โ ย ์ถ๋ ฅ
1
2
3
[2] 34662
(โ|HomeLab:N/A) root@k8s-ctr:~#
Error: Unable to port forward: failed to port forward: failed to port forward: unable to listen on any of the requested ports: [{4245 4245}]
๊ธฐ์กด ํฌํธ ์ถฉ๋ ํ์ธ ๋ฐ ์ข ๋ฃ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep 4245
โ ย ์ถ๋ ฅ
1
2
LISTEN 0 4096 127.0.0.1:4245 0.0.0.0:* users:(("cilium",pid=10026,fd=7))
LISTEN 0 4096 [::1]:4245 [::]:* users:(("cilium",pid=10026,fd=8))
(โ|HomeLab:N/A) root@k8s-ctr:~# kill -9 10026
# ๊ฒฐ๊ณผ
[1]+ Killed cilium hubble port-forward
17. Restart the Hubble port-forward cleanly
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
โ ย ์ถ๋ ฅ
1
2
[1] 34787
(โ|HomeLab:N/A) root@k8s-ctr:~# โน๏ธ Hubble Relay is available at 127.0.0.1:4245
Hubble ์ํ ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble status
โ ย ์ถ๋ ฅ
1
2
3
4
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 38.00
Connected Nodes: 2/2
๐ง IPAM ๋ชจ๋ ๋ณ๊ฒฝ์ ์ ์คํ๊ฒ
IPAM ๋ชจ๋๋ฅผ ๋ณ๊ฒฝํ๋ ์์ ์ ๋จ์ํ Pod CIDR ๋์ญ์ ๋ฐ๊พธ๋ ๊ฒ๋ณด๋ค ํจ์ฌ ๋ ํฐ ๋ฆฌ์คํฌ๋ฅผ ๋๋ฐํจ. ์ด๊ธฐ ํด๋ฌ์คํฐ ์ค๊ณ ๋จ๊ณ์์ ์ฌ์ฉํ IPAM ๋ชจ๋๋ฅผ ์ ์คํ๊ฒ ๊ฒฐ์ ํด์ผ ํ๋ฉฐ, ํฅํ ํด๋ฌ์คํฐ ํ์ฅ(์ค์ผ์ผ์ ) ๊ณํ์ด๋ ๋คํธ์ํฌ ๊ตฌ์กฐ๊น์ง ๊ณ ๋ คํด ์ค์ ํ๋ ๊ฒ์ ๊ถ์ฅํจ.
๐งญ Routing
1. ํ๋ ์ํ ๋ฐ IP ํ์ธ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
โ ย ์ถ๋ ฅ
1
2
3
4
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 57m 172.20.1.236 k8s-ctr <none> <none>
webpod-556878d5d7-7p8bn 1/1 Running 0 58m 172.20.1.40 k8s-ctr <none> <none>
webpod-556878d5d7-r4dmh 1/1 Running 0 59m 172.20.0.130 k8s-w1 <none> <none>
Export the IPs of the two webpod pods:
(โ|HomeLab:N/A) root@k8s-ctr:~# export WEBPODIP1=$(kubectl get -l app=webpod pods --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].status.podIP}')
export WEBPODIP2=$(kubectl get -l app=webpod pods --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPODIP1 $WEBPODIP2
โ ย ์ถ๋ ฅ
1
172.20.1.40 172.20.0.130
2. ํ๋ ๊ฐ ํต์ ํ์ธ (ping)
curl-pod
โ webpod-2
๋ก ping ์๋
1
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping $WEBPODIP2
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
PING 172.20.0.130 (172.20.0.130) 56(84) bytes of data.
64 bytes from 172.20.0.130: icmp_seq=1 ttl=62 time=0.433 ms
64 bytes from 172.20.0.130: icmp_seq=2 ttl=62 time=0.657 ms
64 bytes from 172.20.0.130: icmp_seq=3 ttl=62 time=0.554 ms
64 bytes from 172.20.0.130: icmp_seq=4 ttl=62 time=0.374 ms
64 bytes from 172.20.0.130: icmp_seq=5 ttl=62 time=0.990 ms
64 bytes from 172.20.0.130: icmp_seq=6 ttl=62 time=0.486 ms
64 bytes from 172.20.0.130: icmp_seq=7 ttl=62 time=0.446 ms
64 bytes from 172.20.0.130: icmp_seq=8 ttl=62 time=0.533 ms
...
- ICMP replies received normally
- Pod-to-pod communication works without issues (native routing operating correctly)
3. ๋ผ์ฐํ ํ ์ด๋ธ ํ์ธ (k8s-ctr ๋ ธ๋)
1
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel
172.20.1.40 dev lxc0895f39b5225 proto kernel scope link
172.20.1.144 dev lxcf2a822e72a6e proto kernel scope link
172.20.1.236 dev lxcd63c3c1415ff proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
- 172.20.0.0/24 via 192.168.10.101: traffic for webpod-2's network range is routed via worker node 1's IP (verified below with ip route get)
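You can also ask the kernel which route it would pick for webpod-2's IP (the expected answer follows from the table above):

ip route get 172.20.0.130
# Expected: 172.20.0.130 via 192.168.10.101 dev eth1 src 192.168.10.100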
4. ๋ผ์ฐํ ํ ์ด๋ธ ํ์ธ (k8s-w1 ๋ ธ๋)
webpod-2 (172.20.0.130
)๋ veth ์ธํฐํ์ด์ค์ ์ง์ ์ฐ๊ฒฐ
1
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.17 dev lxce960d096d8a4 proto kernel scope link
172.20.0.67 dev lxcd23f85153e89 proto kernel scope link
172.20.0.130 dev lxc097ff224d206 proto kernel scope link
172.20.0.162 dev lxc4fe9abccf909 proto kernel scope link
172.20.0.186 dev lxc80130454cb70 proto kernel scope link
172.20.0.190 dev lxcb2f1076877d3 proto kernel scope link
172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
- A route back to k8s-ctr, where curl-pod runs, is in place
5. Check the traffic flow with the Hubble CLI
hubble observe -f --pod curl-pod
✅ Output
Jul 30 09:15:15.857: default/curl-pod (ID:5580) -> default/webpod-556878d5d7-r4dmh (ID:12497) to-network FORWARDED (ICMPv4 EchoRequest)
Jul 30 09:15:15.858: default/curl-pod (ID:5580) <- default/webpod-556878d5d7-r4dmh (ID:12497) to-endpoint FORWARDED (ICMPv4 EchoReply)
Jul 30 09:15:16.848: default/curl-pod (ID:5580) -> default/webpod-556878d5d7-r4dmh (ID:12497) to-endpoint FORWARDED (ICMPv4 EchoRequest)
Jul 30 09:15:16.848: default/curl-pod (ID:5580) <- default/webpod-556878d5d7-r4dmh (ID:12497) to-network FORWARDED (ICMPv4 EchoReply)
...
- ICMP EchoRequest/EchoReply traffic is logged in real time
- Source: curl-pod, Destination: webpod-2
6. Capture network packets with tcpdump
Use tcpdump -i eth1 icmp to watch the ICMP traffic in real time
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:20:34.129970 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 636, length 64
18:20:34.130563 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 636, length 64
18:20:35.153607 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 637, length 64
18:20:35.154045 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 637, length 64
18:20:36.178084 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 638, length 64
18:20:36.179263 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 638, length 64
18:20:37.179611 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 639, length 64
18:20:37.179994 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 639, length 64
18:20:38.225687 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 640, length 64
18:20:38.226119 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 640, length 64
...
- Direct communication between curl-pod (172.20.1.236) and webpod-2 (172.20.0.130)
- Packets travel with their native IPs, with no overlay tunneling (VXLAN, Geneve); see the check below
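To double-check that the datapath really is in native routing mode rather than tunneling, the agent configuration can be inspected (a sketch; routing-mode is the config key used by recent Cilium releases):

cilium config view | grep -E 'routing-mode|tunnel'
# Expected in this lab: routing-mode=native, with no vxlan/geneve tunnel protocol set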
(โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 72m 172.20.1.236 k8s-ctr <none> <none>
webpod-556878d5d7-7p8bn 1/1 Running 0 73m 172.20.1.40 k8s-ctr <none> <none>
webpod-556878d5d7-r4dmh 1/1 Running 0 73m 172.20.0.130 k8s-w1 <none> <none>
7. Save the tcpdump capture and analyze it with termshark
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -w /tmp/icmp.pcap
✅ Output
tcpdump: listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C8 packets captured
10 packets received by filter
0 packets dropped by kernel
The source and destination IPs appear unchanged; there is no encapsulation
(โ|HomeLab:N/A) root@k8s-ctr:~# termshark -r /tmp/icmp.pcap
🌐 Masquerading
Masquerading overview
- The scheme by which devices with private internal IPs are NATed behind a single public IP (as with a home router) to reach the outside world
- Similarly, in Kubernetes, a Pod talking to the external internet needs to be masqueraded behind the node's IP
How masquerading works in Kubernetes
- When a Pod goes out to the internet, its source IP is SNATed (masqueraded) to the node IP (a classic iptables sketch follows below)
- Reason: most node IPs are routable to the outside world, while pod IPs are not
- Only traffic leaving the cluster is masqueraded; in-cluster communication is excluded
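For reference, the classic netfilter form of this rule looks roughly like the line below. This is illustrative only, with this lab's Pod CIDR plugged in; Cilium implements the same behavior in eBPF rather than iptables:

# SNAT traffic leaving the Pod CIDR for any non-cluster destination (sketch)
iptables -t nat -A POSTROUTING -s 172.20.0.0/16 ! -d 172.20.0.0/16 -o eth0 -j MASQUERADE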
How Cilium performs masquerading
- All traffic leaving the cluster has its source IP rewritten to the node IP
- However, pod-to-pod traffic and traffic destined for in-cluster node IPs is not masqueraded
- e.g. when talking to a pod on another node, the source IP must not be rewritten to the node IP → an exception is needed
The setting that carves out the exception: ipv4-native-routing-cidr
- Configured like ipv4-native-routing-cidr: 10.0.0.0/8 — traffic destined for IPs inside that CIDR is not masqueraded
- Usually set to the cluster's Pod CIDR range (a Helm sketch follows below)
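In the Helm chart this setting is exposed as ipv4NativeRoutingCIDR; a sketch using this lab's Pod CIDR (adjust the value to your own cluster):

helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set ipv4NativeRoutingCIDR=172.20.0.0/16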
1. Check the masquerading status
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium status | grep Masquerading
✅ Output
Masquerading: BPF [eth0, eth1] 172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
2. Check ipv4-native-routing-cidr
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ipv4-native-routing-cidr
✅ Output
ipv4-native-routing-cidr 172.20.0.0/16
- Traffic destined for this CIDR (the cluster's Pod range) is excluded from masquerading
3. Check whether pod-to-pod traffic is masqueraded
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -nn
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:46:02.125979 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 2131, length 64
18:46:02.126385 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 2131, length 64
18:46:03.153938 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 2132, length 64
18:46:03.154695 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 2132, length 64
18:46:04.154704 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 2133, length 64
18:46:04.155285 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 2133, length 64
...
- Inspecting the ICMP request/reply with tcpdump shows the source IP preserved end to end
- Masquerading is not applied to pod-to-pod traffic
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping 192.168.10.101
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -nn
✅ Output
64 bytes from 192.168.10.101: icmp_seq=1 ttl=63 time=0.333 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=63 time=0.535 ms
64 bytes from 192.168.10.101: icmp_seq=3 ttl=63 time=0.499 ms
...
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:48:32.790099 IP 172.20.1.236 > 192.168.10.101: ICMP echo request, id 9180, seq 1, length 64
18:48:32.790388 IP 192.168.10.101 > 172.20.1.236: ICMP echo reply, id 9180, seq 1, length 64
18:48:33.809718 IP 172.20.1.236 > 192.168.10.101: ICMP echo request, id 9180, seq 2, length 64
18:48:33.810202 IP 192.168.10.101 > 172.20.1.236: ICMP echo reply, id 9180, seq 2, length 64
18:48:34.833711 IP 172.20.1.236 > 192.168.10.101: ICMP echo request, id 9180, seq 3, length 64
18:48:34.834176 IP 192.168.10.101 > 172.20.1.236: ICMP echo reply, id 9180, seq 3, length 64
...
- Had masquerading (NAT) applied, the source IP would have been rewritten to the node IP (e.g. 192.168.10.100)
- So traffic to another in-cluster node is not masqueraded either
🧪 Masquerading hands-on
1. Lab overview
- Goal: check whether masquerading applies when a pod talks to a server outside the cluster (the router)
- Cases compared:
  - traffic between nodes inside the cluster
  - traffic to the external server (router)
- Setup: curl-pod, webpod, and the external router (192.168.10.200)
- All on the same network segment: 192.168.10.0/24
2. In-cluster communication: no masquerading
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s webpod | grep Hostname
Hostname: webpod-556878d5d7-r4dmh
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s webpod | grep Hostname
Hostname: webpod-556878d5d7-7p8bn
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -nn
root@router:~# tcpdump -i eth1 icmp -nn
The source IP stays the pod IP (172.20.1.236)
kubectl exec -it curl-pod -- ping 192.168.10.101
(โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 121m 172.20.1.236 k8s-ctr <none> <none>
webpod-556878d5d7-7p8bn 1/1 Running 0 122m 172.20.1.40 k8s-ctr <none> <none>
webpod-556878d5d7-r4dmh 1/1 Running 0 123m 172.20.0.130 k8s-w1 <none> <none>
3. Communication with the external router: masquerading occurs
The source IP is NATed to the IP of the node hosting the pod (192.168.10.100), not the pod IP
kubectl exec -it curl-pod -- ping 192.168.10.200
- Cilium automatically masquerades traffic leaving the cluster
- Even on the same network segment, the router is not a cluster node, so NAT applies
4. Watch the live traffic flow with hubble observe
hubble observe -f --pod curl-pod
- Watch the traffic generated by curl-pod in real time
- When calling the external server (router: 192.168.10.200), the source IP is masqueraded to the node IP
5. Confirm masquerading on TCP port 80 as well
kubectl exec -it curl-pod -- curl -s webpod
kubectl exec -it curl-pod -- curl -s 192.168.10.200
- External call (192.168.10.200) → the source IP becomes the node IP
6. Check the router's loopback interfaces
The router (192.168.10.200) has two dummy interfaces
root@router:~# ip -br -c -4 addr
✅ Output
lo UNKNOWN 127.0.0.1/8
eth0 UP 10.0.2.15/24 metric 100
eth1 UP 192.168.10.200/24
loop1 UNKNOWN 10.10.1.200/24
loop2 UNKNOWN 10.10.2.200/24
root@router:~# ip -c a
✅ Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
valid_lft 69510sec preferred_lft 69510sec
inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 85902sec preferred_lft 13902sec
inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:dc:00:69 brd ff:ff:ff:ff:ff:ff
altname enp0s8
inet 192.168.10.200/24 brd 192.168.10.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fedc:69/64 scope link
valid_lft forever preferred_lft forever
4: loop1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether ba:83:60:b7:5a:f4 brd ff:ff:ff:ff:ff:ff
inet 10.10.1.200/24 scope global loop1
valid_lft forever preferred_lft forever
inet6 fe80::9cd5:7fff:fe2e:29f7/64 scope link
valid_lft forever preferred_lft forever
5: loop2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 9e:23:52:ad:1f:72 brd ff:ff:ff:ff:ff:ff
inet 10.10.2.200/24 scope global loop2
valid_lft forever preferred_lft forever
inet6 fe80::58a6:ffff:fe7a:8424/64 scope link
valid_lft forever preferred_lft forever
7. Static routes on the k8s nodes
Every node has a static route sending the 10.10.0.0/16 range via the router (192.168.10.200)
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep static
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
8. Traffic to the loopback interfaces is masqueraded as well
HTTP requests from curl-pod to 10.10.1.200 and 10.10.2.200
kubectl exec -it curl-pod -- curl -s 10.10.1.200
kubectl exec -it curl-pod -- curl -s 10.10.2.200
- These IPs belong to the router's loop1 and loop2 interfaces
- Thanks to masquerading, replies are sent back to the node IP → communication succeeds (see the capture sketch below)
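To observe the NAT directly, one can capture on the router while issuing the request from the pod; the source address should show up as the node IP (192.168.10.100), not the pod IP. A sketch:

# Terminal 1, on the router: watch HTTP traffic arriving on eth1
root@router:~# tcpdump -i eth1 tcp port 80 -nn

# Terminal 2, on k8s-ctr: generate a request from the pod
kubectl exec -it curl-pod -- curl -s 10.10.1.200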
⚙️ ip-masq-agent (Cilium's eBPF implementation)
- One of Cilium's advanced masquerading features: NAT (masquerading) can be skipped for traffic toward specific CIDRs
- Useful when you want to avoid NAT on cluster-external traffic and talk with the pod IP directly
- https://github.com/kubernetes-sigs/ip-masq-agent
1. Enable and configure ipMasqAgent
Enable ip-masq-agent and set the exempt CIDRs via Helm
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.10.1.0/24,10.10.2.0/24}'
✅ Output
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Wed Jul 30 22:12:33 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
2. Verify the configuration
(1) Check the generated ConfigMap
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system ip-masq-agent -o yaml | yq
✅ Output
{
"apiVersion": "v1",
"data": {
"config": "{\"nonMasqueradeCIDRs\":[\"10.10.1.0/24\",\"10.10.2.0/24\"]}"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"meta.helm.sh/release-name": "cilium",
"meta.helm.sh/release-namespace": "kube-system"
},
"creationTimestamp": "2025-07-30T13:12:35Z",
"labels": {
"app.kubernetes.io/managed-by": "Helm"
},
"name": "ip-masq-agent",
"namespace": "kube-system",
"resourceVersion": "38148",
"uid": "14aaeb96-1b47-42cb-97f6-680fe73e8be6"
}
}
nonMasqueradeCIDRs contains 10.10.1.0/24 and 10.10.2.0/24
(2) Confirm the ConfigMap exists
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system
✅ Output
NAME DATA AGE
cilium-config 154 7h32m
cilium-envoy-config 1 7h32m
coredns 1 7h32m
extension-apiserver-authentication 6 7h32m
hubble-relay-config 1 7h32m
hubble-ui-nginx 1 7h32m
ip-masq-agent 1 104s
kube-apiserver-legacy-service-account-token-tracking 1 7h32m
kube-proxy 2 7h32m
kube-root-ca.crt 1 7h32m
kubeadm-config 1 7h32m
kubelet-config 1 7h32m
The ip-masq-agent entry has been added
(3) Confirm Cilium picked up the setting
(โ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i ip-masq
✅ Output
enable-ip-masq-agent true
enable-ip-masq-agent: true confirmed
(4) Check the BPF ipmasq exemption CIDRs
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list
✅ Output
IP PREFIX/ADDRESS
10.10.1.0/24
10.10.2.0/24
169.254.0.0/16
3. Failure case without NAT (router route missing)
- curl-pod (172.20.1.236) sends a request to 10.10.1.200
- The router should send its reply back to the Pod IP, but it has no route for it, so the reply never arrives
Root cause
- The router knows no route for the 172.20.0.0/16 range (the Pod CIDR)
- The pod sends its request to the internal server, and the reply never comes back
- In the TCP handshake the SYN goes out, but no SYN-ACK is received → communication fails (see the capture sketch below)
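The one-sided handshake is easy to see with a SYN-focused capture on the node; a sketch:

# Capture SYN and SYN-ACK packets on eth1; with the router route missing,
# outgoing SYNs toward 10.10.1.200 appear but no SYN-ACK ever returns.
tcpdump -i eth1 'tcp[tcpflags] & tcp-syn != 0' -nn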
4. Check the pod IPs
(โ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 5h16m 172.20.1.236 k8s-ctr <none> <none>
webpod-556878d5d7-7p8bn 1/1 Running 0 5h17m 172.20.1.40 k8s-ctr <none> <none>
webpod-556878d5d7-r4dmh 1/1 Running 0 5h17m 172.20.0.130 k8s-w1 <none> <none>
- curl-pod: 172.20.1.236 (on the control-plane node)
- webpod: 172.20.0.130 (on worker node 1)
5. Check the router's routing table
root@router:~# ip -c route
✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
- The router has no route for 172.20.x.x and treats those addresses as unknown destinations
- Such packets are sent to the default gateway (10.0.2.2) → they leak out toward the internet
6. Check the node's routing table
(โ|HomeLab:N/A) root@k8s-ctr:~# ip -c route
✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel
172.20.1.40 dev lxc0895f39b5225 proto kernel scope link
172.20.1.144 dev lxcf2a822e72a6e proto kernel scope link
172.20.1.236 dev lxcd63c3c1415ff proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
7. Add static routes on the router
root@router:~# ip route add 172.20.1.0/24 via 192.168.10.100
ip route add 172.20.0.0/24 via 192.168.10.101
- Now the router can forward Pod CIDR traffic to exactly the right node
- 172.20.1.0/24 → the control-plane node, 172.20.0.0/24 → worker node 1
8. Confirm the static routes took effect
root@router:~# ip -c route | grep 172.20
✅ Output
172.20.0.0/24 via 192.168.10.101 dev eth1
172.20.1.0/24 via 192.168.10.100 dev eth1
- The router can now correctly route reply traffic addressed to pod IPs
9. Confirm communication works
Direct communication with the internal network succeeds, without NAT
kubectl exec -it curl-pod -- curl -s 10.10.1.200
kubectl exec -it curl-pod -- curl -s 10.10.2.200
📡 CoreDNS and NodeLocalDNS
1. Check /etc/resolv.conf inside the pod
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- cat /etc/resolv.conf
✅ Output
search default.svc.cluster.local svc.cluster.local cluster.local Davolink
nameserver 10.96.0.10
options ndots:5
2. Check the kubelet DNS settings
(โ|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/config.yaml | grep cluster -A1
✅ Output
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
3. Check the CoreDNS service
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system kube-dns
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 8h
NAME ENDPOINTS AGE
endpoints/kube-dns 172.20.0.186:53,172.20.1.144:53,172.20.0.186:53 + 3 more... 8h
4. Check the CoreDNS pods
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns
✅ Output
NAME READY STATUS RESTARTS AGE
coredns-674b8bbfcf-gbnm8 1/1 Running 0 6h12m
coredns-674b8bbfcf-vvgfm 1/1 Running 0 6h12m
- CoreDNS runs as 2 replicas in the kube-system namespace
- The pods can be filtered with the k8s-app=kube-dns label
5. Inspect a CoreDNS pod in detail
(โ|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system -l k8s-app=kube-dns
✅ Output
...
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
...
6. Check the CoreDNS Corefile
(โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n kube-system coredns
✅ Output
Name: coredns
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
Corefile:
----
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30 {
disable success cluster.local
disable denial cluster.local
}
loop
reload
loadbalance
}
BinaryData
====
Events: <none>
7. Check /etc/resolv.conf on the node
(โ|HomeLab:N/A) root@k8s-ctr:~# cat /etc/resolv.conf
✅ Output
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search Davolink
- The node itself uses the stub resolver managed by systemd-resolved
- Queries actually go to 127.0.0.53, which acts as a local DNS proxy
8. Check the node's upstream DNS (resolvectl)
(โ|HomeLab:N/A) root@k8s-ctr:~# resolvectl
✅ Output
Global
Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (eth0)
Current Scopes: DNS
Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.3
DNS Servers: 10.0.2.3
DNS Domain: Davolink
Link 3 (eth1)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 4 (cilium_net)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 5 (cilium_host)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 9 (lxcfeeee14aa766)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 25 (lxcf2a822e72a6e)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 27 (lxc0895f39b5225)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 29 (lxcd63c3c1415ff)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
- The upstream DNS is set to 10.0.2.3 (the VirtualBox NAT DNS)
- It acts as the external nameserver for the VM guest and handles upstream queries
🔍 Verifying DNS queries from a pod
1. Check the pod and CoreDNS IPs
Check the pod IPs and node placement of curl-pod and the webpod pods
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-pod 1/1 Running 0 6h17m 172.20.1.236 k8s-ctr <none> <none>
webpod-556878d5d7-7p8bn 1/1 Running 0 6h18m 172.20.1.40 k8s-ctr <none> <none>
webpod-556878d5d7-r4dmh 1/1 Running 0 6h19m 172.20.0.130 k8s-w1 <none> <none>
The two CoreDNS pods run on the k8s-ctr and k8s-w1 nodes respectively
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-674b8bbfcf-gbnm8 1/1 Running 0 6h24m 172.20.0.186 k8s-w1 <none> <none>
coredns-674b8bbfcf-vvgfm 1/1 Running 0 6h24m 172.20.1.144 k8s-ctr <none> <none>
2. Scale CoreDNS down to one replica (for lab convenience)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment -n kube-system coredns --replicas 1
# Result
deployment.apps/coredns scaled
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-674b8bbfcf-vvgfm 1/1 Running 0 6h25m 172.20.1.144 k8s-ctr <none> <none>
3. Check the CoreDNS cache metrics (before querying)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#
✅ Output
coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 1
coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 0
coredns_cache_misses_total{server="dns://:53",view="",zones="."} 3170
coredns_cache_requests_total{server="dns://:53",view="",zones="."} 3170
4. Internal DNS query test (webpod)
- Run nslookup webpod → internal DNS works
- webpod.default.svc.cluster.local → returns the IP 10.96.152.212
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup webpod
5. Debug the query (nslookup -debug)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup -debug webpod
✅ Output
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
webpod.default.svc.cluster.local, type = A, class = IN
ANSWERS:
-> webpod.default.svc.cluster.local
internet address = 10.96.152.212
ttl = 30
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
------------
Name: webpod.default.svc.cluster.local
Address: 10.96.152.212
;; Got recursion not available from 10.96.0.10
------------
QUESTIONS:
webpod.default.svc.cluster.local, type = AAAA, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1753885699
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
- Shows the A/AAAA query flow, the TTLs, and the authority-section responses
6. External domain query test (google.com)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup -debug google.com
✅ Output
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
google.com.default.svc.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1753885699
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find google.com.default.svc.cluster.local: NXDOMAIN
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
google.com.svc.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1753885699
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find google.com.svc.cluster.local: NXDOMAIN
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
google.com.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1753885699
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find google.com.cluster.local: NXDOMAIN
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
google.com.Davolink, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> .
origin = a.root-servers.net
mail addr = nstld.verisign-grs.com
serial = 2025073000
refresh = 1800
retry = 900
expire = 604800
minimum = 86400
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find google.com.Davolink: NXDOMAIN
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
google.com, type = A, class = IN
ANSWERS:
-> google.com
internet address = 142.251.222.14
ttl = 30
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
------------
Non-authoritative answer:
Name: google.com
Address: 142.251.222.14
------------
QUESTIONS:
google.com, type = AAAA, class = IN
ANSWERS:
-> google.com
has AAAA address 2404:6800:4005:813::200e
ttl = 30
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
------------
Name: google.com
Address: 2404:6800:4005:813::200e
command terminated with exit code 1
- A google.com query walks the search domains in order
- google.com.default.svc.cluster.local → NXDOMAIN
- google.com → normal answer (both A and AAAA received)
- The CoreDNS logs confirm it answered the A-record query issued from curl-pod (see the FQDN check below)
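Since ndots:5 pushes short names through the search list first, appending a trailing dot makes the name fully qualified and skips the search-domain expansion entirely; a quick check:

# google.com. (with the trailing dot) is an FQDN, so the resolver queries it
# directly instead of trying google.com.default.svc.cluster.local first.
kubectl exec -it curl-pod -- nslookup google.com.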
7. Check the CoreDNS cache metrics (after querying)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#
✅ Output
coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 2
coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 2
coredns_cache_misses_total{server="dns://:53",view="",zones="."} 3188
coredns_cache_requests_total{server="dns://:53",view="",zones="."} 3188
8. Tail the CoreDNS logs
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system logs -l k8s-app=kube-dns -f
✅ Output
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.0
linux/amd64, go1.23.3, 51e11f1
9. Inspect CoreDNS in k9s
(โ|HomeLab:N/A) root@k8s-ctr:~# k9s
10. Internal domain query test (webpod)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup webpod
✅ Output
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: webpod.default.svc.cluster.local
Address: 10.96.152.212
;; Got recursion not available from 10.96.0.10
11. Trace the internal DNS query flow with Hubble
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --port 53
✅ Output
Jul 30 14:56:15.979: 10.0.2.15:44999 (host) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 30 14:57:36.751: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 14:57:36.751: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 14:57:36.751: default/curl-pod:32772 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.752: default/curl-pod:32772 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.752: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 14:57:36.752: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 14:57:36.754: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 14:57:36.754: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 14:57:36.754: default/curl-pod:57903 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.754: default/curl-pod:57903 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.754: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 14:57:36.754: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
- curl-pod → request to 10.96.0.10:53
- Both the query and the reply pass through Cilium's forwarding and translation stages
12. Inspect internal DNS packets with tcpdump
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i any udp port 53 -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
23:57:36.751572 lxcd63c3c1415ff In IP 172.20.1.236.32772 > 172.20.1.144.53: 19286+ A? webpod.default.svc.cluster.local. (50)
23:57:36.752160 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.32772: 19286*- 1/0/0 A 10.96.152.212 (98)
23:57:36.754268 lxcd63c3c1415ff In IP 172.20.1.236.57903 > 172.20.1.144.53: 5377+ AAAA? webpod.default.svc.cluster.local. (50)
23:57:36.754408 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.57903: 5377*- 0/1/0 (143)
- The A? and AAAA? queries reach the CoreDNS pod
- The reply returns the IP 10.96.152.212
13. External domain query test (google.com)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup google.com
✅ Output
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: google.com
Address: 142.250.196.110
Name: google.com
Address: 2404:6800:4004:825::200e
- The "recursion not available" message is printed several times, but
- the A record (142.250.196.110) and AAAA record (2404:6800:4004:825::200e) are ultimately returned correctly
14. Trace the external domain query flow with Hubble
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --port 53
✅ Output
Jul 30 15:01:56.982: 10.0.2.15:58614 (host) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 30 15:02:20.444: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.444: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.444: default/curl-pod:46923 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: default/curl-pod:46923 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.446: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.446: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.446: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.446: default/curl-pod:42237 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: default/curl-pod:42237 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.446: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.448: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.448: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.449: default/curl-pod:50175 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.449: default/curl-pod:50175 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.449: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.449: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.450: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.450: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.450: default/curl-pod:58567 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: kube-system/coredns-674b8bbfcf-vvgfm:57647 (ID:28565) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.455: kube-system/coredns-674b8bbfcf-vvgfm:57647 (ID:28565) <- 10.0.2.3:53 (world) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.455: default/curl-pod:58567 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.455: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.455: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.456: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.456: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.456: default/curl-pod:35291 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.461: default/curl-pod:35291 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.461: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.461: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.463: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.463: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.463: default/curl-pod:47474 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.475: default/curl-pod:47474 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.476: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.476: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
- Flow confirmed: curl-pod → CoreDNS → the external nameserver (10.0.2.3)
- CoreDNS forwards the query to the external DNS for recursion
- The CoreDNS → 10.0.2.3 UDP forwarding is visible
15. Analyze external DNS query packets with tcpdump
(โ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i any udp port 53 -nn
✅ Output
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
00:02:20.444113 lxcd63c3c1415ff In IP 172.20.1.236.46923 > 172.20.1.144.53: 7116+ A? google.com.default.svc.cluster.local. (54)
00:02:20.444467 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.46923: 7116 NXDomain*- 0/1/0 (147)
00:02:20.446052 lxcd63c3c1415ff In IP 172.20.1.236.42237 > 172.20.1.144.53: 5495+ A? google.com.svc.cluster.local. (46)
00:02:20.446537 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.42237: 5495 NXDomain*- 0/1/0 (139)
00:02:20.448803 lxcd63c3c1415ff In IP 172.20.1.236.50175 > 172.20.1.144.53: 9828+ A? google.com.cluster.local. (42)
00:02:20.448969 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.50175: 9828 NXDomain*- 0/1/0 (135)
00:02:20.450674 lxcd63c3c1415ff In IP 172.20.1.236.58567 > 172.20.1.144.53: 41730+ A? google.com.Davolink. (37)
00:02:20.450858 lxcf2a822e72a6e In IP 172.20.1.144.57647 > 10.0.2.3.53: 7727+ A? google.com.Davolink. (37)
00:02:20.450875 eth0 Out IP 10.0.2.15.57647 > 10.0.2.3.53: 7727+ A? google.com.Davolink. (37)
00:02:20.455142 eth0 In IP 10.0.2.3.53 > 10.0.2.15.57647: 7727 NXDomain 0/1/0 (112)
00:02:20.455305 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.58567: 41730 NXDomain 0/1/0 (112)
00:02:20.456958 lxcd63c3c1415ff In IP 172.20.1.236.35291 > 172.20.1.144.53: 23450+ A? google.com. (28)
00:02:20.457191 lxcf2a822e72a6e In IP 172.20.1.144.57647 > 10.0.2.3.53: 12952+ A? google.com. (28)
00:02:20.457204 eth0 Out IP 10.0.2.15.57647 > 10.0.2.3.53: 12952+ A? google.com. (28)
00:02:20.460993 eth0 In IP 10.0.2.3.53 > 10.0.2.15.57647: 12952 1/0/0 A 142.250.196.110 (44)
00:02:20.461181 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.35291: 23450 1/0/0 A 142.250.196.110 (54)
00:02:20.463949 lxcd63c3c1415ff In IP 172.20.1.236.47474 > 172.20.1.144.53: 49159+ AAAA? google.com. (28)
00:02:20.464413 lxcf2a822e72a6e In IP 172.20.1.144.57647 > 10.0.2.3.53: 52829+ AAAA? google.com. (28)
00:02:20.464427 eth0 Out IP 10.0.2.15.57647 > 10.0.2.3.53: 52829+ AAAA? google.com. (28)
00:02:20.473156 eth0 In IP 10.0.2.3.53 > 10.0.2.15.57647: 52829 1/0/0 AAAA 2404:6800:4004:825::200e (56)
00:02:20.473605 lxcf2a822e72a6e In IP 172.20.1.144.53 > 172.20.1.236.47474: 49159 1/0/0 AAAA 2404:6800:4004:825::200e (66)
- The search domains are applied first (google.com.default.svc.cluster.local. and so on) and fail in turn (NXDOMAIN)
- The final query for google.com. succeeds for both A and AAAA
- Requests are forwarded to the external DNS server 10.0.2.3, and the replies are received
16. Check the CoreDNS cache metrics
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#
✅ Output
coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 2
coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 2
coredns_cache_hits_total{server="dns://:53",type="denial",view="",zones="."} 1
coredns_cache_hits_total{server="dns://:53",type="success",view="",zones="."} 2
coredns_cache_misses_total{server="dns://:53",view="",zones="."} 3223
coredns_cache_requests_total{server="dns://:53",view="",zones="."} 3226
🔧 NodeLocalDNS hands-on + Cilium Local Redirect Policy
1. Back up the iptables state (before install)
(โ|HomeLab:N/A) root@k8s-ctr:~# iptables-save | tee before.txt
2. Download the NodeLocal DNS manifest
(โ|HomeLab:N/A) root@k8s-ctr:~# wget https://github.com/kubernetes/kubernetes/raw/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
✅ Output
--2025-07-31 00:17:53-- https://github.com/kubernetes/kubernetes/raw/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml [following]
--2025-07-31 00:17:53-- https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5377 (5.3K) [text/plain]
Saving to: โnodelocaldns.yamlโ
nodelocaldns.yaml 100%[==============================>] 5.25K --.-KB/s in 0s
2025-07-31 00:17:53 (61.7 MB/s) - โnodelocaldns.yamlโ saved [5377/5377]
3. Set the environment variables
(โ|HomeLab:N/A) root@k8s-ctr:~# kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
domain='cluster.local'
localdns='169.254.20.10'
echo $kubedns $domain $localdns
✅ Output
10.96.0.10 cluster.local 169.254.20.10
4. Substitute the variables in the manifest
(โ|HomeLab:N/A) root@k8s-ctr:~# sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
5. Install NodeLocalDNS
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f nodelocaldns.yaml
# Result
serviceaccount/node-local-dns created
service/kube-dns-upstream created
configmap/node-local-dns created
daemonset.apps/node-local-dns created
service/node-local-dns created
6. Verify the install (check the pods)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=node-local-dns -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-local-dns-ggjmt 1/1 Running 0 30s 192.168.10.100 k8s-ctr <none> <none>
node-local-dns-kzthr 1/1 Running 0 30s 192.168.10.101 k8s-w1 <none> <none>
7. Add logging (edit the ConfigMap)
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl edit cm -n kube-system node-local-dns
✅ Output
...
cluster.local:53 {
log
errors
cache {
success 9984 30
denial 9984 5
}
reload
loop
bind 169.254.20.10 10.96.0.10
forward . __PILLAR__CLUSTER__DNS__ {
force_tcp
}
prometheus :9253
health 169.254.20.10:8080
}
...
.:53 {
log
errors
cache 30
reload
loop
bind 169.254.20.10 10.96.0.10
forward . __PILLAR__UPSTREAM__SERVERS__
prometheus :9253
}
configmap/node-local-dns edited
- log added to the cluster.local:53 and .:53 blocks of the Corefile
8. Restart the DaemonSet
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds node-local-dns
# Result
daemonset.apps/node-local-dns restarted
9. Check the applied ConfigMap
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system node-local-dns
✅ Output
Name: node-local-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
Annotations: <none>
Data
====
Corefile:
----
cluster.local:53 {
log
errors
cache {
success 9984 30
denial 9984 5
}
reload
loop
bind 169.254.20.10 10.96.0.10
forward . __PILLAR__CLUSTER__DNS__ {
force_tcp
}
prometheus :9253
health 169.254.20.10:8080
}
in-addr.arpa:53 {
errors
cache 30
reload
loop
bind 169.254.20.10 10.96.0.10
forward . __PILLAR__CLUSTER__DNS__ {
force_tcp
}
prometheus :9253
}
ip6.arpa:53 {
errors
cache 30
reload
loop
bind 169.254.20.10 10.96.0.10
forward . __PILLAR__CLUSTER__DNS__ {
force_tcp
}
prometheus :9253
}
.:53 {
log
errors
cache 30
reload
loop
bind 169.254.20.10 10.96.0.10
forward . __PILLAR__UPSTREAM__SERVERS__
prometheus :9253
}
BinaryData
====
Events: <none>
- Queries arriving at 169.254.20.10 are answered by node-local-dns (a quick dig check follows)
- The cluster.local domain is handled as internal DNS, while external domains are forwarded upstream
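A quick way to confirm the node-local listener answers (a sketch; assumes dig is available, as it is in the netshoot image used by curl-pod):

# Query the link-local address that node-local-dns binds on each node
kubectl exec -it curl-pod -- dig @169.254.20.10 webpod.default.svc.cluster.local +short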
10. Back up the iptables state (after install)
(โ|HomeLab:N/A) root@k8s-ctr:~# iptables-save | tee after.txt
11. Diff the iptables state
Compare the iptables rules from before and after installing NodeLocalDNS
(โ|HomeLab:N/A) root@k8s-ctr:~# diff before.txt after.txt
✅ Output
1c1
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
19,20c19,20
< # Completed on Thu Jul 31 00:16:21 2025
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
25a26,29
> -A PREROUTING -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A PREROUTING -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A PREROUTING -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A PREROUTING -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
26a31,42
> -A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 10.96.0.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 169.254.20.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
38,39c54,55
< # Completed on Thu Jul 31 00:16:21 2025
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
41c57
< :INPUT ACCEPT [6292389:2731707993]
---
> :INPUT ACCEPT [6565229:2825821888]
43c59
< :OUTPUT ACCEPT [6187388:1591005004]
---
> :OUTPUT ACCEPT [6451665:1655130253]
54a71,74
> -A INPUT -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A INPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A INPUT -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A INPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
64a85,88
> -A OUTPUT -s 10.96.0.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A OUTPUT -s 169.254.20.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
84,85c108,109
< # Completed on Thu Jul 31 00:16:21 2025
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
105a130
> :KUBE-SEP-6O2WVHQLDKFLZQRS - [0:0]
110a136
> :KUBE-SEP-IG5OB37KH2LEQCBN - [0:0]
114a141
> :KUBE-SVC-BRK3P4PPQWCLKOAN - [0:0]
118a146
> :KUBE-SVC-FXR4M2CWOGAZGGYD - [0:0]
136a165,166
> -A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
> -A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
141,142d170
< -A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
< -A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
153a182,183
> -A KUBE-SEP-6O2WVHQLDKFLZQRS -s 172.20.1.144/32 -m comment --comment "kube-system/kube-dns-upstream:dns" -j KUBE-MARK-MASQ
> -A KUBE-SEP-6O2WVHQLDKFLZQRS -p udp -m comment --comment "kube-system/kube-dns-upstream:dns" -m udp -j DNAT --to-destination 172.20.1.144:53
163a194,195
> -A KUBE-SEP-IG5OB37KH2LEQCBN -s 172.20.1.144/32 -m comment --comment "kube-system/kube-dns-upstream:dns-tcp" -j KUBE-MARK-MASQ
> -A KUBE-SEP-IG5OB37KH2LEQCBN -p tcp -m comment --comment "kube-system/kube-dns-upstream:dns-tcp" -m tcp -j DNAT --to-destination 172.20.1.144:53
167a200,201
> -A KUBE-SERVICES -d 10.96.152.212/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
> -A KUBE-SERVICES -d 10.96.206.199/32 -p tcp -m comment --comment "kube-system/hubble-ui:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-ZGWW2L4XLRSDZ3EF
170a205,206
> -A KUBE-SERVICES -d 10.96.31.170/32 -p udp -m comment --comment "kube-system/kube-dns-upstream:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-FXR4M2CWOGAZGGYD
> -A KUBE-SERVICES -d 10.96.31.170/32 -p tcp -m comment --comment "kube-system/kube-dns-upstream:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-BRK3P4PPQWCLKOAN
176,177d211
< -A KUBE-SERVICES -d 10.96.152.212/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
< -A KUBE-SERVICES -d 10.96.206.199/32 -p tcp -m comment --comment "kube-system/hubble-ui:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-ZGWW2L4XLRSDZ3EF
180a215,216
> -A KUBE-SVC-BRK3P4PPQWCLKOAN ! -s 10.244.0.0/16 -d 10.96.31.170/32 -p tcp -m comment --comment "kube-system/kube-dns-upstream:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
> -A KUBE-SVC-BRK3P4PPQWCLKOAN -m comment --comment "kube-system/kube-dns-upstream:dns-tcp -> 172.20.1.144:53" -j KUBE-SEP-IG5OB37KH2LEQCBN
189a226,227
> -A KUBE-SVC-FXR4M2CWOGAZGGYD ! -s 10.244.0.0/16 -d 10.96.31.170/32 -p udp -m comment --comment "kube-system/kube-dns-upstream:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
> -A KUBE-SVC-FXR4M2CWOGAZGGYD -m comment --comment "kube-system/kube-dns-upstream:dns -> 172.20.1.144:53" -j KUBE-SEP-6O2WVHQLDKFLZQRS
201c239
< # Completed on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
Key additions
- DNS traffic to 169.254.20.10 (and the kube-dns ClusterIP) now skips conntrack (NOTRACK)
- ACCEPT rules were added for the DNS port (53); a quick grep follows
12. Check the DNS settings inside the pod
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- cat /etc/resolv.conf
✅ Output
search default.svc.cluster.local svc.cluster.local cluster.local Davolink
nameserver 10.96.0.10
options ndots:5
13. Watch the DNS logs
kubectl -n kube-system logs -l k8s-app=kube-dns -f
kubectl -n kube-system logs -l k8s-app=node-local-dns -f
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete pod curl-pod
# Result
pod "curl-pod" deleted
14. Redeploy curl-pod
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
containers:
- name: curl
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# Result
pod/curl-pod created
🚀 Cilium Local Redirect Policy: --set localRedirectPolicy=true
1. What Cilium's LocalRedirectPolicy does
- With localRedirectPolicy=true, Cilium delivers DNS requests via eBPF directly to the node-local-dns pod on the same node
- This bypasses the default Kubernetes path (IPVS/iptables); Cilium handles the traffic itself
- https://docs.cilium.io/en/stable/network/kubernetes/local-redirect-policy/
2. Update the Cilium configuration
(โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set localRedirectPolicy=true
✅ Output
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Thu Jul 31 00:52:32 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.18.0.
For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
3. Restart Cilium
(1) Restart cilium-operator
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deploy cilium-operator -n kube-system
# Result
deployment.apps/cilium-operator restarted
(2) Restart the cilium DaemonSet
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart ds cilium -n kube-system
# Result
daemonset.apps/cilium restarted
4. Download and adapt the NodeLocal DNS manifest for local redirect
(1) Download the NodeLocal DNS manifest provided by Cilium
(โ|HomeLab:N/A) root@k8s-ctr:~# wget https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns.yaml
✅ Output
--2025-07-31 00:54:30-- https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3493 (3.4K) [text/plain]
Saving to: โnode-local-dns.yamlโ
node-local-dns.yaml 100%[==============================>] 3.41K --.-KB/s in 0s
2025-07-31 00:54:31 (51.1 MB/s) - โnode-local-dns.yamlโ saved [3493/3493]
(2) Substitute the placeholder so it forwards to the existing kube-dns ClusterIP
(โ|HomeLab:N/A) root@k8s-ctr:~# kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})
sed -i "s/__PILLAR__DNS__SERVER__/$kubedns/g;" node-local-dns.yaml
(3) After substituting __PILLAR__DNS__SERVER__, compare against the earlier manifest with vi -d
vi -d nodelocaldns.yaml node-local-dns.yaml
5. ์์ ๋ NodeLocal DNS ๋ฐฐํฌ
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f node-local-dns.yaml
# Result
serviceaccount/node-local-dns configured
service/kube-dns-upstream configured
configmap/node-local-dns configured
daemonset.apps/node-local-dns configured
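Verify that the node-local-dns pods are up on each node:

kubectl -n kube-system get pod -l k8s-app=node-local-dns -o wide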
6. Check the related resources
(⎈|HomeLab:N/A) root@k8s-ctr:~# k get cm -n kube-system
▼ Output
NAME DATA AGE
cilium-config 155 10h
cilium-envoy-config 1 10h
coredns 1 10h
extension-apiserver-authentication 6 10h
hubble-relay-config 1 10h
hubble-ui-nginx 1 10h
ip-masq-agent 1 167m
kube-apiserver-legacy-service-account-token-tracking 1 10h
kube-proxy 2 10h
kube-root-ca.crt 1 10h
kubeadm-config 1 10h
kubelet-config 1 10h
node-local-dns 1 39m
7. Edit the ConfigMap (add log to the Corefile)
Add the log directive to the Corefile using k9s or kubectl edit cm
(⎈|HomeLab:N/A) root@k8s-ctr:~# k9s
8. Verify the Corefile
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system node-local-dns
▼ Output
Name: node-local-dns
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
Corefile:
----
cluster.local:53 {
    log
    errors
    cache {
        success 9984 30
        denial 9984 5
    }
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
        force_tcp
    }
    prometheus :9253
    health
}
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
        force_tcp
    }
    prometheus :9253
}
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
        force_tcp
    }
    prometheus :9253
}
.:53 {
    log
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__UPSTREAM__SERVERS__
    prometheus :9253
}
BinaryData
====
Events: <none>
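The __PILLAR__CLUSTER__DNS__ / __PILLAR__UPSTREAM__SERVERS__ placeholders above are substituted by node-cache at startup. To see the rendered file, you can read it from inside a pod; the /etc/Corefile path is an assumption based on the upstream node-local-dns manifest:

kubectl -n kube-system exec ds/node-local-dns -- cat /etc/Corefile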
9. Download and apply the LocalRedirectPolicy resource
(⎈|HomeLab:N/A) root@k8s-ctr:~# wget https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml
▼ Output
--2025-07-31 01:04:50-- https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 452 [text/plain]
Saving to: ‘node-local-dns-lrp.yaml’
node-local-dns-lrp.yaml 100%[===============================================================================================>] 452 --.-KB/s in 0s
2025-07-31 01:04:50 (35.5 MB/s) - ‘node-local-dns-lrp.yaml’ saved [452/452]
Inspect the structure of the downloaded node-local-dns-lrp.yaml
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat node-local-dns-lrp.yaml | yq
▼ Output
{
  "apiVersion": "cilium.io/v2",
  "kind": "CiliumLocalRedirectPolicy",
  "metadata": {
    "name": "nodelocaldns",
    "namespace": "kube-system"
  },
  "spec": {
    "redirectFrontend": {
      "serviceMatcher": {
        "serviceName": "kube-dns",
        "namespace": "kube-system"
      }
    },
    "redirectBackend": {
      "localEndpointSelector": {
        "matchLabels": {
          "k8s-app": "node-local-dns"
        }
      },
      "toPorts": [
        {
          "port": "53",
          "name": "dns",
          "protocol": "UDP"
        },
        {
          "port": "53",
          "name": "dns-tcp",
          "protocol": "TCP"
        }
      ]
    }
  }
}
- Traffic destined for the kube-dns service in the kube-system namespace is redirected to port 53 (UDP/TCP) of the pod carrying the node-local-dns label in the same namespace (an addressMatcher-based variant is sketched below)
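For reference, the frontend can also match a plain IP instead of a Service. A minimal sketch using addressMatcher (hypothetical example, not applied in this lab; field names follow the Cilium LRP documentation):

cat << EOF > lrp-addr-example.yaml
apiVersion: cilium.io/v2
kind: CiliumLocalRedirectPolicy
metadata:
  name: nodelocaldns-addr
  namespace: kube-system
spec:
  redirectFrontend:
    addressMatcher:
      ip: "169.254.20.10"        # hypothetical frontend IP
      toPorts:
      - port: "53"
        name: dns
        protocol: UDP
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns
    toPorts:
    - port: "53"
      name: dns
      protocol: UDP
EOF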
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml
# Result
ciliumlocalredirectpolicy.cilium.io/nodelocaldns created
10. Verify the policy is applied
(1) Check the created LRP
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumLocalRedirectPolicy -A
▼ Output
NAMESPACE NAME AGE
kube-system nodelocaldns 29s
(2) Check the redirected service entries
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg service list | grep LocalRedirect
17 10.96.0.10:53/UDP LocalRedirect 1 => 172.20.0.211:53/UDP (active)
18 10.96.0.10:53/TCP LocalRedirect 1 => 172.20.0.211:53/TCP (active)
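The backend address (172.20.0.211 here) should be the node-local-dns endpoint on this node, which can be cross-checked against the agent's endpoint list:

kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list | grep node-local-dns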
11. Test and check the logs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system logs -l k8s-app=node-local-dns -f
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup www.google.com
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: www.google.com
Address: 172.217.161.36
Name: www.google.com
Address: 2404:6800:4005:81a::2004
The three "Got recursion not available" warnings are expected here: the search-domain lookups hit the authoritative cluster.local zones, whose answers carry the aa flag but no ra flag, as the log lines below confirm.

▼ node-local-dns log output
loop
bind 0.0.0.0
forward . /etc/resolv.conf
prometheus :9253
}
[INFO] Reloading
[INFO] plugin/reload: Running configuration MD5 = a9f86d268572cb5d3a3b2400ed98aff3
[INFO] Reloading complete
[INFO] 127.0.0.1:40579 - 43725 "HINFO IN 1286845318690505901.8683751169514593307.cluster.local. udp 71 false 512" NXDOMAIN qr,aa,rd 164 0.001900209s
[INFO] 127.0.0.1:56654 - 7833 "HINFO IN 2926695405806933559.8651839986208145762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009552781s
[INFO] 172.20.0.69:37103 - 36022 "A IN www.google.com.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.002148806s
[INFO] 172.20.0.69:45793 - 8828 "A IN www.google.com.davolink. udp 41 false 512" NXDOMAIN qr,rd,ra 116 0.006312427s
[INFO] 172.20.0.69:41214 - 60955 "A IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 62 0.004218626s
[INFO] 172.20.0.69:47527 - 57259 "AAAA IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 74 0.004593533s
[INFO] 172.20.0.69:33415 - 17625 "A IN www.google.com.default.svc.cluster.local. udp 58 false 512" NXDOMAIN qr,aa,rd 151 0.001869527s
[INFO] 172.20.0.69:35698 - 26190 "A IN www.google.com.svc.cluster.local. udp 50 false 512" NXDOMAIN qr,aa,rd 143 0.001329426s
[INFO] 172.20.0.69:44150 - 38926 "A IN www.google.com.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.000965516s
[INFO] 172.20.0.69:58440 - 55923 "A IN www.google.com.davolink. udp 41 false 512" NXDOMAIN qr,rd,ra 116 0.006998827s
[INFO] 172.20.0.69:59737 - 33042 "A IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 62 0.004193042s
[INFO] 172.20.0.69:39931 - 18384 "AAAA IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 74 0.003735194s
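The search-path NXDOMAINs (cluster.local, davolink) and the final NOERROR answers all come from the pod's own node now. As a last sanity check, you could watch port 53 on the node and confirm queries are answered locally rather than travelling to a remote coredns replica (interface names and the exact traffic pattern will vary with the datapath):

tcpdump -i any -nn udp port 53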