
Cilium Study, Week 3 Notes

๐Ÿ”ง ์‹ค์Šต ํ™˜๊ฒฝ ๊ตฌ์„ฑ

  • ํด๋Ÿฌ์Šคํ„ฐ ๋…ธ๋“œ
    • ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ: 192.168.10.100 (๋ฉ”๋ชจ๋ฆฌ 2GB โ†’ 2.5GB ์ƒํ–ฅ)
    • ์›Œ์ปค ๋…ธ๋“œ: 192.168.10.101 (1๋Œ€)
    • ๋„คํŠธ์›Œํฌ ๋Œ€์—ญ: 192.168.10.0/24
    • ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ์— taint ์ œ๊ฑฐ โ†’ ํŒŒ๋“œ ๋ฐฐํฌ ๊ฐ€๋Šฅ
    • kubeadm-init-ctr-config.yaml ์‚ฌ์šฉ ์‹œ ๋ฒ„์ „ ๋ณ€์ˆ˜ K8S_VERSION_PLACEHOLDER๋กœ ์žฌ์‚ฌ์šฉ์„ฑ ํ™•๋ณด
  • ๋ผ์šฐํ„ฐ
    • ์ฃผ์†Œ: 192.168.10.200
    • ์‚ฌ๋‚ด๋ง(10.10.0.0/16)๊ณผ ์ฟ ๋ฒ„๋„คํ‹ฐ์Šค ๋„คํŠธ์›Œํฌ(192.168.10.0/24) ๊ฐ„ ํ†ต์‹  ์ค‘๊ณ„
    • static route ์„ค์ •:

      to: 10.10.0.0/16 โ†’ via: 192.168.10.200

    • IP forwarding ํ™œ์„ฑํ™”
    • dummy ์ธํ„ฐํŽ˜์ด์Šค 2๊ฐœ ์ƒ์„ฑ
      • loop1: 10.10.1.200
      • loop2: 10.10.2.200
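
The router-side items above (IP forwarding plus the two dummy interfaces) come down to a handful of sysctl/ip commands. The sketch below only prints them: the actual provisioning script ships with the Vagrantfile, so treat the exact commands as assumptions, and running them for real would need root on the router VM.

```shell
# Dry-run sketch of the router provisioning steps. Collected into a
# variable and printed, not executed (assumed commands; the real script
# is part of the Vagrantfile's shell provisioners).
router_setup=$(cat <<'EOF'
sysctl -w net.ipv4.ip_forward=1
ip link add loop1 type dummy
ip addr add 10.10.1.200/24 dev loop1
ip link set loop1 up
ip link add loop2 type dummy
ip addr add 10.10.2.200/24 dev loop2
ip link set loop2 up
EOF
)
printf '%s\n' "$router_setup"
```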

๐Ÿš€ ์‹ค์Šต ํ™˜๊ฒฝ ๋ฐฐํฌ

1. Download the Vagrantfile and bring up the VMs

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/3w/Vagrantfile

vagrant up

โœ…ย ์ถœ๋ ฅ

Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'router' up with 'virtualbox' provider...
==> k8s-ctr: Preparing master VM for linked clones...
    k8s-ctr: This is a one time operation. Once the master VM is prepared,
    k8s-ctr: it will be used as a base for linked clones, making the creation
    k8s-ctr: of new VMs take milliseconds on a modern system.
==> k8s-ctr: Importing base box 'bento/ubuntu-24.04'...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
    k8s-ctr: Adapter 1: nat
    k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
    k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
    k8s-ctr: SSH address: 127.0.0.1:60000
    k8s-ctr: SSH username: vagrant
    k8s-ctr: SSH auth method: private key
    k8s-ctr: 
    k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-ctr: this with a newly generated keypair for better security.
    k8s-ctr: 
    k8s-ctr: Inserting generated public key within guest...
    k8s-ctr: Removing insecure key from the guest if it's present...
    k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250730-27828-acul9.sh
    k8s-ctr: >>>> Initial Config Start <<<<
    k8s-ctr: [TASK 1] Setting Profile & Bashrc
    k8s-ctr: [TASK 2] Disable AppArmor
    k8s-ctr: [TASK 3] Disable and turn off SWAP
    k8s-ctr: [TASK 4] Install Packages
    k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-ctr: [TASK 6] Install Packages & Helm
    k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250730-27828-zl78rn.sh
    k8s-ctr: >>>> K8S Controlplane config Start <<<<
    k8s-ctr: [TASK 1] Initial Kubernetes
    k8s-ctr: [TASK 2] Setting kube config file
    k8s-ctr: [TASK 3] Source the completion
    k8s-ctr: [TASK 4] Alias kubectl to k
    k8s-ctr: [TASK 5] Install Kubectx & Kubens
    k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
    k8s-ctr: [TASK 7] Install Cilium CNI
    k8s-ctr: [TASK 8] Install Cilium / Hubble CLI
    k8s-ctr: cilium
    k8s-ctr: hubble
    k8s-ctr: [TASK 9] Remove node taint
    k8s-ctr: node/k8s-ctr untainted
    k8s-ctr: [TASK 10] local DNS with hosts file
    k8s-ctr: [TASK 11] Install Prometheus & Grafana
    k8s-ctr: [TASK 12] Dynamically provisioning persistent local storage with Kubernetes
    k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-ctr: Running provisioner: shell...
    k8s-ctr: Running: /tmp/vagrant-shell20250730-27828-7fwjno.sh
    k8s-ctr: >>>> Route Add Config Start <<<<
    k8s-ctr: >>>> Route Add Config End <<<<
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
    k8s-w1: Adapter 1: nat
    k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
    k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
    k8s-w1: SSH address: 127.0.0.1:60001
    k8s-w1: SSH username: vagrant
    k8s-w1: SSH auth method: private key
    k8s-w1: 
    k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-w1: this with a newly generated keypair for better security.
    k8s-w1: 
    k8s-w1: Inserting generated public key within guest...
    k8s-w1: Removing insecure key from the guest if it's present...
    k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250730-27828-km5kmk.sh
    k8s-w1: >>>> Initial Config Start <<<<
    k8s-w1: [TASK 1] Setting Profile & Bashrc
    k8s-w1: [TASK 2] Disable AppArmor
    k8s-w1: [TASK 3] Disable and turn off SWAP
    k8s-w1: [TASK 4] Install Packages
    k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-w1: [TASK 6] Install Packages & Helm
    k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250730-27828-fmg78c.sh
    k8s-w1: >>>> K8S Node config Start <<<<
    k8s-w1: [TASK 1] K8S Controlplane Join
    k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w1: Running provisioner: shell...
    k8s-w1: Running: /tmp/vagrant-shell20250730-27828-ila0lv.sh
    k8s-w1: >>>> Route Add Config Start <<<<
    k8s-w1: >>>> Route Add Config End <<<<
==> router: Cloning VM...
==> router: Matching MAC address for NAT networking...
==> router: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> router: Setting the name of the VM: router
==> router: Clearing any previously set network interfaces...
==> router: Preparing network interfaces based on configuration...
    router: Adapter 1: nat
    router: Adapter 2: hostonly
==> router: Forwarding ports...
    router: 22 (guest) => 60009 (host) (adapter 1)
==> router: Running 'pre-boot' VM customizations...
==> router: Booting VM...
==> router: Waiting for machine to boot. This may take a few minutes...
    router: SSH address: 127.0.0.1:60009
    router: SSH username: vagrant
    router: SSH auth method: private key
    router: Warning: Connection reset. Retrying...
    router: 
    router: Vagrant insecure key detected. Vagrant will automatically replace
    router: this with a newly generated keypair for better security.
    router: 
    router: Inserting generated public key within guest...
    router: Removing insecure key from the guest if it's present...
    router: Key inserted! Disconnecting and reconnecting using new SSH key...
==> router: Machine booted and ready!
==> router: Checking for guest additions in VM...
==> router: Setting hostname...
==> router: Configuring and enabling network interfaces...
==> router: Running provisioner: shell...
    router: Running: /tmp/vagrant-shell20250730-27828-2x1jkp.sh
    router: >>>> Initial Config Start <<<<
    router: [TASK 1] Setting Profile & Bashrc
    router: [TASK 2] Disable AppArmor
    router: [TASK 3] Add Kernel setting - IP Forwarding
    router: [TASK 4] Setting Dummy Interface
    router: [TASK 5] Install Packages
    router: [TASK 6] Install Apache
    router: >>>> Initial Config End <<<<

2. ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ ๋…ธ๋“œ ์ ‘์†

vagrant ssh k8s-ctr

โœ…ย ์ถœ๋ ฅ

Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Wed Jul 30 02:46:22 PM KST 2025

  System load:           0.28
  Usage of /:            29.2% of 30.34GB
  Memory usage:          51%
  Swap usage:            0%
  Processes:             217
  Users logged in:       0
  IPv4 address for eth0: 10.0.2.15
  IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9

This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento

Use of this system is acceptance of the OS vendor EULA and License Agreements.
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# 

3. ์›Œ์ปค ๋…ธ๋“œ SSH ํ†ต์‹  ํ™•์ธ

์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ์—์„œ ์›Œ์ปค ๋…ธ๋“œ์— SSH ์ ‘์† ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname

โœ…ย ์ถœ๋ ฅ

Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1

4. ํด๋Ÿฌ์Šคํ„ฐ ๋„คํŠธ์›Œํฌ CIDR ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"

โœ…ย ์ถœ๋ ฅ

                            "--service-cluster-ip-range=10.96.0.0/16",
                            "--cluster-cidr=10.244.0.0/16",

5. ๋…ธ๋“œ ์ƒํƒœ ๋ฐ ๋‚ด๋ถ€ IP ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide

โœ…ย ์ถœ๋ ฅ

NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   7m38s   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    <none>          5m38s   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
  • ๋‚ด๋ถ€ IP ํ™•์ธ ๊ฐ€๋Šฅ (192.168.10.100, 192.168.10.101)

6. ์ฟ ๋ฒ„๋„คํ‹ฐ์Šค IPAM ๋ฐ ํŒŒ๋“œ ๋„คํŠธ์›Œํฌ ์ƒํƒœ ํ™•์ธ

(1) ๋…ธ๋“œ๋ณ„ Pod CIDR ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

โœ…ย ์ถœ๋ ฅ

k8s-ctr	10.244.0.0/24
k8s-w1	10.244.1.0/24
  • Shows the Pod CIDR kube-controller-manager allocated to each node

(2) Check the Pod CIDRs Cilium uses

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2

โœ…ย ์ถœ๋ ฅ

                    "podCIDRs": [
                        "10.244.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "10.244.1.0/24"
                    ],
  • The CiliumNode resources show the Pod CIDR each node has registered

(3) Check the IPAM mode

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam

โœ…ย ์ถœ๋ ฅ

ipam                                              kubernetes
ipam-cilium-node-update-rate                      15s
  • In kubernetes mode, Kubernetes allocates the IPs and Cilium uses them as-is

(4) Check Cilium endpoint IPs

Query the ciliumendpoints resources to see the actual IPs assigned to the pods

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints -A

โœ…ย ์ถœ๋ ฅ

NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-wdb82                   22795               ready            10.244.0.104   
cilium-monitoring    prometheus-6fc896bc5d-bxnd5               1213                ready            10.244.0.65    
kube-system          coredns-674b8bbfcf-9pxvx                  28565               ready            10.244.0.199   
kube-system          coredns-674b8bbfcf-khjhq                  28565               ready            10.244.0.59    
kube-system          hubble-relay-5dcd46f5c-5r79v              17061               ready            10.244.0.122   
kube-system          hubble-ui-76d4965bb6-xmdp8                2452                ready            10.244.0.80    
local-path-storage   local-path-provisioner-74f9666bc9-scg4s   56893               ready            10.244.0.253
  • 10.244.0.x → control plane node
  • 10.244.1.x → worker node
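
Because each node owns one 10.244.X.0/24 block, the owning node can be read straight off the third octet. A small awk sketch over a few sample IPs (10.244.1.96 is included as an illustrative worker-node address):

```shell
# Map pod IPs to nodes via the third octet of their 10.244.X.0/24 block
# (X=0 → k8s-ctr, X=1 → k8s-w1 in this lab).
mapped=$(printf '%s\n' 10.244.0.104 10.244.0.199 10.244.1.96 |
  awk -F. '{ print $0, "->", ($3 == 0 ? "k8s-ctr" : "k8s-w1") }')
printf '%s\n' "$mapped"
# → 10.244.0.104 -> k8s-ctr
#   10.244.0.199 -> k8s-ctr
#   10.244.1.96 -> k8s-w1
```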

๐Ÿถ k9s ์„ค์น˜ ๋ฐ ์‹คํ–‰ ์ •๋ฆฌ

1. Install k9s

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# wget https://github.com/derailed/k9s/releases/latest/download/k9s_linux_amd64.deb -O /tmp/k9s_linux_amd64.deb
apt install /tmp/k9s_linux_amd64.deb

โœ…ย ์ถœ๋ ฅ

--2025-07-30 14:55:17--  https://github.com/derailed/k9s/releases/latest/download/k9s_linux_amd64.deb
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/derailed/k9s/releases/download/v0.50.9/k9s_linux_amd64.deb [following]
--2025-07-30 14:55:17--  https://github.com/derailed/k9s/releases/download/v0.50.9/k9s_linux_amd64.deb
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://release-assets.githubusercontent.com/github-production-release-asset/167596393/68b2cb87-c3c4-4c08-8ebe-b8aaa51894f5?sp=r&sv=2018-11-09&sr=b&spr=https&se=2025-07-30T06%3A41%3A09Z&rscd=attachment%3B+filename%3Dk9s_linux_amd64.deb&rsct=application%2Foctet-stream&skoid=96c2d410-5711-43a1-aedd-ab1947aa7ab0&sktid=398a6654-997b-47e9-b12b-9515b896b4de&skt=2025-07-30T05%3A40%3A14Z&ske=2025-07-30T06%3A41%3A09Z&sks=b&skv=2018-11-09&sig=JeO%2BpcQvqHA9Cn%2F9LNC%2FVbGkvi%2BA2WVntygiGkgYwwk%3D&jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmVsZWFzZS1hc3NldHMuZ2l0aHVidXNlcmNvbnRlbnQuY29tIiwia2V5Ijoia2V5MSIsImV4cCI6MTc1Mzg1NTIyMywibmJmIjoxNzUzODU0OTIzLCJwYXRoIjoicmVsZWFzZWFzc2V0cHJvZHVjdGlvbi5ibG9iLmNvcmUud2luZG93cy5uZXQifQ.lj7UoO3dvLsG-a_0jHncvKP_C05qv3_v8-1Ne7RIpK0&response-content-disposition=attachment%3B%20filename%3Dk9s_linux_amd64.deb&response-content-type=application%2Foctet-stream [following]
--2025-07-30 14:55:18--  https://release-assets.githubusercontent.com/github-production-release-asset/167596393/68b2cb87-c3c4-4c08-8ebe-b8aaa51894f5?sp=r&sv=2018-11-09&sr=b&spr=https&se=2025-07-30T06%3A41%3A09Z&rscd=attachment%3B+filename%3Dk9s_linux_amd64.deb&rsct=application%2Foctet-stream&skoid=96c2d410-5711-43a1-aedd-ab1947aa7ab0&sktid=398a6654-997b-47e9-b12b-9515b896b4de&skt=2025-07-30T05%3A40%3A14Z&ske=2025-07-30T06%3A41%3A09Z&sks=b&skv=2018-11-09&sig=JeO%2BpcQvqHA9Cn%2F9LNC%2FVbGkvi%2BA2WVntygiGkgYwwk%3D&jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmVsZWFzZS1hc3NldHMuZ2l0aHVidXNlcmNvbnRlbnQuY29tIiwia2V5Ijoia2V5MSIsImV4cCI6MTc1Mzg1NTIyMywibmJmIjoxNzUzODU0OTIzLCJwYXRoIjoicmVsZWFzZWFzc2V0cHJvZHVjdGlvbi5ibG9iLmNvcmUud2luZG93cy5uZXQifQ.lj7UoO3dvLsG-a_0jHncvKP_C05qv3_v8-1Ne7RIpK0&response-content-disposition=attachment%3B%20filename%3Dk9s_linux_amd64.deb&response-content-type=application%2Foctet-stream
Resolving release-assets.githubusercontent.com (release-assets.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to release-assets.githubusercontent.com (release-assets.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38258230 (36M) [application/octet-stream]
Saving to: โ€˜/tmp/k9s_linux_amd64.debโ€™

/tmp/k9s_linux_amd64.de 100%[==============================>]  36.49M  17.9MB/s    in 2.0s    

2025-07-30 14:55:20 (17.9 MB/s) - โ€˜/tmp/k9s_linux_amd64.debโ€™ saved [38258230/38258230]

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'k9s' instead of '/tmp/k9s_linux_amd64.deb'
The following NEW packages will be installed:
  k9s
0 upgraded, 1 newly installed, 0 to remove and 175 not upgraded.
Need to get 0 B/38.3 MB of archives.
After this operation, 124 MB of additional disk space will be used.
Get:1 /tmp/k9s_linux_amd64.deb k9s amd64 0.50.9 [38.3 MB]
Selecting previously unselected package k9s.
(Reading database ... 51864 files and directories currently installed.)
Preparing to unpack /tmp/k9s_linux_amd64.deb ...
Unpacking k9s (0.50.9) ...
Setting up k9s (0.50.9) ...
Scanning processes...                                                                          
Scanning linux images...                                                                       

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.

2. Run k9s

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k9s

โœ…ย ์ถœ๋ ฅ


๐ŸŒ Cilium IPAM ์‹ค์Šต

1. IPAM concepts and Cilium modes

Kubernetes Host Scope

  • ๋…ธ๋“œ๋ณ„๋กœ ๊ณ ์ •๋œ PodCIDR๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“œ
  • KubeControllerManager๊ฐ€ IP ๋ฒ”์œ„๋ฅผ ํ• ๋‹น ๋ฐ ๊ด€๋ฆฌ
  • ๊ฐ ๋…ธ๋“œ์— ๋ฏธ๋ฆฌ ์ •์˜๋œ CIDR ๋ธ”๋ก์ด ํ• ๋‹น๋จ

Cilium Cluster Scope

  • Cilium์ด ์ž์ฒด์ ์œผ๋กœ IP ํ’€์„ ๊ด€๋ฆฌํ•˜๋ฉฐ ๋™์ ์œผ๋กœ ํ• ๋‹น
  • ๋ณ„๋„ IPAM ์„ค์ •์ด ์—†์„ ๊ฒฝ์šฐ ๊ธฐ๋ณธ์ ์œผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋“œ
  • ์™ธ๋ถ€ IPAM(AWS ENI, Azure IPAM ๋“ฑ)๊ณผ์˜ ์—ฐ๋™๋„ ๊ฐ€๋Šฅ

2. ๋ฉ€ํ‹ฐ CIDR ๋ฐ Multi-pool ์ œ์•ฝ์‚ฌํ•ญ

ํด๋Ÿฌ์Šคํ„ฐ ๋‚ด ๋ณต์ˆ˜ CIDR ๊ตฌ์„ฑ

  • Cilium์€ ํด๋Ÿฌ์Šคํ„ฐ ๋‚ด ์—ฌ๋Ÿฌ CIDR ๋ธ”๋ก์„ ์ง€์›
  • ์ œ์•ฝ์‚ฌํ•ญ: vxlan, geneve ๊ฐ™์€ ํ„ฐ๋„ ๊ธฐ๋ฐ˜ ๋ผ์šฐํŒ… ๋ชจ๋“œ์—์„œ๋Š” Multi-pool ๋ฏธ์ง€์›
  • ํ™•์žฅ์„ฑ: ํŠน์ • ๋…ธ๋“œ์˜ Pod ์ˆ˜์š”๊ฐ€ ์ฆ๊ฐ€ํ•˜์—ฌ CIDR์ด ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ, ํ•ด๋‹น ๋…ธ๋“œ์—๋งŒ ์ถ”๊ฐ€ CIDR์„ ํ• ๋‹นํ•˜์—ฌ ์œ ์—ฐํ•œ ํ™•์žฅ ๊ฐ€๋Šฅ

Kubernetes Host Scope

3. ๋…ธ๋“œ๋ณ„ Pod CIDR ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

โœ…ย ์ถœ๋ ฅ

k8s-ctr	10.244.0.0/24
k8s-w1	10.244.1.0/24
  • With Kubernetes host-scope IPAM, each node is automatically assigned a /24 CIDR block
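
Carving a /24 per node out of the /16 cluster CIDR bounds the address plan; quick shell arithmetic makes the limits explicit:

```shell
# Address-plan arithmetic for --cluster-cidr=10.244.0.0/16 with a /24
# block per node (pure math; no cluster access needed).
cluster_prefix=16
node_prefix=24
node_blocks=$(( 1 << (node_prefix - cluster_prefix) ))   # 2^8 = 256 blocks
ips_per_node=$(( (1 << (32 - node_prefix)) - 2 ))        # 254 host addresses
echo "max nodes: $node_blocks, usable pod IPs per node: $ips_per_node"
# → max nodes: 256, usable pod IPs per node: 254
```

So this plan tops out at 256 nodes with at most 254 pod addresses each (the CNI may reserve a few of those for itself).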

4. Inspect the kube-controller-manager configuration

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system kube-controller-manager-k8s-ctr

โœ…ย ์ถœ๋ ฅ

Name:                 kube-controller-manager-k8s-ctr
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 k8s-ctr/192.168.10.100
Start Time:           Wed, 30 Jul 2025 14:41:28 +0900
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 2da908bf08a691927af74a336851f6e1
                      kubernetes.io/config.mirror: 2da908bf08a691927af74a336851f6e1
                      kubernetes.io/config.seen: 2025-07-30T14:41:20.396308103+09:00
                      kubernetes.io/config.source: file
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   192.168.10.100
IPs:
  IP:           192.168.10.100
Controlled By:  Node/k8s-ctr
Containers:
  kube-controller-manager:
    Container ID:  containerd://fb984494600e1c9a3755783595ee377a07d82efade606d941f2c162a604eed32
    Image:         registry.k8s.io/kube-controller-manager:v1.33.2
    Image ID:      registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/16
      --use-service-account-credentials=true
    State:          Running
      Started:      Wed, 30 Jul 2025 14:41:24 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        200m
    Liveness:     http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:            <none>
  • --allocate-node-cidrs=true: enables automatic per-node CIDR allocation
  • --cluster-cidr=10.244.0.0/16: the cluster-wide pod IP range
  • --service-cluster-ip-range=10.96.0.0/16: the service IP range

5. Cilium์ด ์ธ์‹ํ•œ Pod CIDR ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2

โœ…ย ์ถœ๋ ฅ

                    "podCIDRs": [
                        "10.244.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "10.244.1.0/24"
                    ],
  • ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ ๋…ธ๋“œ: 10.244.0.0/24
  • ์›Œ์ปค ๋…ธ๋“œ: 10.244.1.0/24

6. Verify Cilium endpoint IP allocation

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A

โœ…ย ์ถœ๋ ฅ

NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-wdb82                   22795               ready            10.244.0.104   
cilium-monitoring    prometheus-6fc896bc5d-bxnd5               1213                ready            10.244.0.65    
kube-system          coredns-674b8bbfcf-9pxvx                  28565               ready            10.244.0.199   
kube-system          coredns-674b8bbfcf-khjhq                  28565               ready            10.244.0.59    
kube-system          hubble-relay-5dcd46f5c-5r79v              17061               ready            10.244.0.122   
kube-system          hubble-ui-76d4965bb6-xmdp8                2452                ready            10.244.0.80    
local-path-storage   local-path-provisioner-74f9666bc9-scg4s   56893               ready            10.244.0.253
  • ๋ชจ๋“  Pod๊ฐ€ ์ปจํŠธ๋กค ํ”Œ๋ ˆ์ธ ๋…ธ๋“œ์˜ CIDR ๋ฒ”์œ„(10.244.0.0/24) ๋‚ด์—์„œ IP ํ• ๋‹น๋ฐ›์Œ
  • IP ํ• ๋‹น์ด ์ •์ƒ์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง€๊ณ  ๋ชจ๋“  Endpoint๊ฐ€ ready ์ƒํƒœ

๐Ÿฆˆ ์ƒ˜ํ”Œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฐํฌ ๋ฐ ํ™•์ธ & Termshark

1. ์ƒ˜ํ”Œ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฐํฌ (webpod)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# ๊ฒฐ๊ณผ
deployment.apps/webpod created
service/webpod created

2. Deploy a pod for curl tests (curl-pod)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# ๊ฒฐ๊ณผ
pod/curl-pod created
  • nodeName: k8s-ctr: explicitly pins the pod to the control plane node
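
When curl-pod later curls the service by its short name webpod, cluster DNS expands the name through the pod's search domains. A tiny sketch of the FQDN it ends up resolving (cluster.local is the default cluster domain, assumed here):

```shell
# Build the in-cluster DNS name that "curl webpod" from curl-pod
# resolves to: <service>.<namespace>.svc.<cluster-domain>.
svc=webpod
ns=default
domain=cluster.local   # kubeadm default; an assumption for this lab
fqdn="$svc.$ns.svc.$domain"
echo "$fqdn"
# → webpod.default.svc.cluster.local
```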

3. ๋ฆฌ์†Œ์Šค ๋ฐฐํฌ ์ƒํƒœ ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide

โœ…ย ์ถœ๋ ฅ

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   2/2     2            2           97s   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/webpod   ClusterIP   10.96.152.212   <none>        80/TCP    97s   app=webpod

NAME               ENDPOINTS                      AGE
endpoints/webpod   10.244.0.1:80,10.244.1.96:80   96s
  • Deployment: 2 pods created and running normally
  • Service: ClusterIP type with 10.96.152.212 assigned
  • Endpoints
    • 10.244.0.1:80 (pod on the control plane node)
    • 10.244.1.96:80 (pod on the worker node)

4. Inspect Cilium endpoint information

(1) Check the EndpointSlice

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod

โœ…ย ์ถœ๋ ฅ

NAME           ADDRESSTYPE   PORTS   ENDPOINTS                AGE
webpod-2wrvt   IPv4          80      10.244.0.1,10.244.1.96   118s

(2) Cilium endpoint details

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list

โœ…ย ์ถœ๋ ฅ

ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                         IPv6   IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                            
147        Disabled           Disabled          28565      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                 10.244.0.199   ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                   
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                       
                                                           k8s:k8s-app=kube-dns                                                                                              
318        Disabled           Disabled          5580       k8s:app=curl                                                                               10.244.0.27    ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                            
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                                   
                                                           k8s:io.kubernetes.pod.namespace=default                                                                           
853        Disabled           Disabled          2452       k8s:app.kubernetes.io/name=hubble-ui                                                       10.244.0.80    ready   
                                                           k8s:app.kubernetes.io/part-of=cilium                                                                              
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                        
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui                                                                 
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                       
                                                           k8s:k8s-app=hubble-ui                                                                                             
1009       Disabled           Disabled          12497      k8s:app=webpod                                                                             10.244.0.1     ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                            
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                                   
                                                           k8s:io.kubernetes.pod.namespace=default                                                                           
1043       Disabled           Disabled          56893      k8s:app=local-path-provisioner                                                             10.244.0.253   ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=local-path-storage                                 
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account                                    
                                                           k8s:io.kubernetes.pod.namespace=local-path-storage                                                                
1452       Disabled           Disabled          28565      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                 10.244.0.59    ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                   
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                       
                                                           k8s:k8s-app=kube-dns                                                                                              
1680       Disabled           Disabled          1213       k8s:app=prometheus                                                                         10.244.0.65    ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                  
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                            
                                                           k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                 
1694       Disabled           Disabled          17061      k8s:app.kubernetes.io/name=hubble-relay                                                    10.244.0.122   ready   
                                                           k8s:app.kubernetes.io/part-of=cilium                                                                              
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                        
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay                                                              
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                       
                                                           k8s:k8s-app=hubble-relay                                                                                          
2772       Disabled           Disabled          22795      k8s:app=grafana                                                                            10.244.0.104   ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                  
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                          
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                                   
                                                           k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                 
3358       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                                 ready   
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                       
                                                           reserved:host  

5. Service Communication Test

(1) Single request test

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname

โœ… Output

Hostname: webpod-697b545f57-bpzn9

(2) Load-balancing check with repeated requests

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

โœ… Output

Hostname: webpod-697b545f57-xb8fd
Hostname: webpod-697b545f57-xb8fd
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-xb8fd
...
  • ๋‘ ๊ฐœ์˜ ์„œ๋กœ ๋‹ค๋ฅธ Pod(bpzn9, xb8fd) ๊ฐ„์— ํŠธ๋ž˜ํ”ฝ์ด ๋ถ„์‚ฐ๋จ
  • DNS ๊ธฐ๋ฐ˜ ์„œ๋น„์Šค ๋””์Šค์ปค๋ฒ„๋ฆฌ๊ฐ€ ์ •์ƒ ๋™์ž‘ (webpod ์„œ๋น„์Šค๋ช…์œผ๋กœ ์ ‘๊ทผ)
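The spread across the two backends can be tallied with a small pipeline. A minimal sketch: the heredoc below stands in for the curl loop's output (live, you could run a bounded loop instead of `while true` and pipe it the same way):

```shell
# Count how many responses each backend pod served;
# the heredoc stands in for the output of the curl loop above.
cat <<'EOF' | sort | uniq -c | sort -rn
Hostname: webpod-697b545f57-xb8fd
Hostname: webpod-697b545f57-xb8fd
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-bpzn9
Hostname: webpod-697b545f57-xb8fd
EOF
```

This prints one line per backend with its request count, so an uneven split is immediately visible.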

6. Hubble Flow Tracing Lab

(1) Check the Hubble UI web address

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo -e "http://$NODEIP:30003"

โœ… Output

http://192.168.10.100:30003

(2) ์ง€์†์ ์ธ curl ์š”์ฒญ ์ˆ˜ํ–‰

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

  • curl์ด default ๋„ค์ž„์ŠคํŽ˜์ด์Šค์— ์žˆ๋Š” webpod ์„œ๋น„์Šค๋ช…์œผ๋กœ ๋“ค์–ด๊ฐ€๋Š”๊ฑธ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.

(3) Port-forward Hubble Relay

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&

โœ… Output

[1] 10026
โ„น๏ธ  Hubble Relay is available at 127.0.0.1:4245
  • The gRPC API is now reachable on local port 4245

(4) Monitor network flows in real time

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --pod curl-pod

โœ… Output

Jul 30 06:30:30.990: default/curl-pod:53176 (ID:5580) <- default/webpod-697b545f57-xb8fd:80 (ID:12497) to-network FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:30.990: default/curl-pod:53176 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:32.254: default/curl-pod (ID:5580) <> 10.96.152.212:80 (world) pre-xlate-fwd TRACED (TCP)
Jul 30 06:30:32.254: default/curl-pod (ID:5580) <> default/webpod-697b545f57-bpzn9:80 (ID:12497) post-xlate-fwd TRANSLATED (TCP)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.255: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:32.255: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.256: default/curl-pod:58930 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:32.256: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:33.263: default/curl-pod (ID:5580) <> 10.96.152.212:80 (world) pre-xlate-fwd TRACED (TCP)
Jul 30 06:30:33.263: default/curl-pod (ID:5580) <> default/webpod-697b545f57-bpzn9:80 (ID:12497) post-xlate-fwd TRANSLATED (TCP)
Jul 30 06:30:33.263: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:33.263: default/curl-pod:58942 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:33.263: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.264: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <> default/webpod-697b545f57-bpzn9 (ID:12497) pre-xlate-rev TRACED (TCP)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 30 06:30:33.265: default/curl-pod:58942 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) <- default/webpod-697b545f57-xb8fd:80 (ID:12497) to-network FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 30 06:30:34.018: default/curl-pod:53190 (ID:5580) -> default/webpod-697b545f57-xb8fd:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
  • 10.96.152.212:80: the ClusterIP service address (labeled world)
  • pre-xlate-fwd: trace before NAT translation; the socket load balancer receives the request addressed to the service IP
  • post-xlate-fwd: trace after NAT translation; the destination is now the actual Pod IP

TCP connection lifecycle

TCP Flags: SYN        # connection initiation
TCP Flags: SYN, ACK   # connection accepted
TCP Flags: ACK        # handshake complete
TCP Flags: ACK, PSH   # HTTP data transfer
TCP Flags: ACK, FIN   # connection teardown
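The same lifecycle can be extracted mechanically from `hubble observe` output with `grep -o`. A sketch, with the heredoc standing in for three of the captured lines above:

```shell
# Pull out just the TCP flag sequence from hubble observe lines;
# the heredoc stands in for live `hubble observe` output.
cat <<'EOF' | grep -o 'TCP Flags: [A-Z, ]*'
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 30 06:30:32.254: default/curl-pod:58930 (ID:5580) <- default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 30 06:30:32.257: default/curl-pod:58930 (ID:5580) -> default/webpod-697b545f57-bpzn9:80 (ID:12497) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
EOF
# → TCP Flags: SYN
#   TCP Flags: SYN, ACK
#   TCP Flags: ACK, FIN
```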

7. Check Service Information

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get svc

โœ… Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   57m
webpod       ClusterIP   10.96.152.212   <none>        80/TCP    19m
  • The ClusterIP of the webpod service, 10.96.152.212, matches the service address seen in the Hubble logs
  • Cilium's socket-level load balancer transparently translates the service IP into an actual Pod IP

8. ๋„คํŠธ์›Œํฌ ํŒจํ‚ท ์บก์ฒ˜ ๋ถ„์„

(1) tcpdump๋ฅผ ํ†ตํ•œ ์‹ค์‹œ๊ฐ„ ๋ชจ๋‹ˆํ„ฐ๋ง

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 tcp port 80 -nn

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
15:43:14.755578 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [S], seq 501953752, win 64240, options [mss 1460,sackOK,TS val 1594519 ecr 0,nop,wscale 7], length 0
15:43:14.756290 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [S.], seq 2849751208, ack 501953753, win 65160, options [mss 1460,sackOK,TS val 3721394349 ecr 1594519,nop,wscale 7], length 0
15:43:14.756381 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 1594520 ecr 3721394349], length 0
15:43:14.756622 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [P.], seq 1:71, ack 1, win 502, options [nop,nop,TS val 1594521 ecr 3721394349], length 70: HTTP: GET / HTTP/1.1
15:43:14.757363 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [.], ack 71, win 509, options [nop,nop,TS val 3721394350 ecr 1594521], length 0
15:43:14.757855 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [P.], seq 1:321, ack 71, win 509, options [nop,nop,TS val 3721394351 ecr 1594521], length 320: HTTP: HTTP/1.1 200 OK
15:43:14.757884 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [.], ack 321, win 501, options [nop,nop,TS val 1594522 ecr 3721394351], length 0
15:43:14.758124 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [F.], seq 71, ack 321, win 501, options [nop,nop,TS val 1594522 ecr 3721394351], length 0
15:43:14.758448 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [F.], seq 321, ack 72, win 509, options [nop,nop,TS val 3721394352 ecr 1594522], length 0
15:43:14.758485 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [.], ack 322, win 501, options [nop,nop,TS val 1594522 ecr 3721394352], length 0
15:43:16.770376 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [S], seq 2173259033, win 64240, options [mss 1460,sackOK,TS val 1596534 ecr 0,nop,wscale 7], length 0
15:43:16.771075 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [S.], seq 1449700480, ack 2173259034, win 65160, options [mss 1460,sackOK,TS val 3721396364 ecr 1596534,nop,wscale 7], length 0
15:43:16.771133 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 1596535 ecr 3721396364], length 0
15:43:16.771167 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [P.], seq 1:71, ack 1, win 502, options [nop,nop,TS val 1596535 ecr 3721396364], length 70: HTTP: GET / HTTP/1.1
15:43:16.771658 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [.], ack 71, win 509, options [nop,nop,TS val 3721396365 ecr 1596535], length 0
15:43:16.772436 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [P.], seq 1:321, ack 71, win 509, options [nop,nop,TS val 3721396366 ecr 1596535], length 320: HTTP: HTTP/1.1 200 OK
15:43:16.772479 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [.], ack 321, win 501, options [nop,nop,TS val 1596536 ecr 3721396366], length 0
15:43:16.772648 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [F.], seq 71, ack 321, win 501, options [nop,nop,TS val 1596537 ecr 3721396366], length 0
15:43:16.773058 IP 10.244.1.96.80 > 10.244.0.27.41702: Flags [F.], seq 321, ack 72, win 509, options [nop,nop,TS val 3721396366 ecr 1596537], length 0
15:43:16.773093 IP 10.244.0.27.41702 > 10.244.1.96.80: Flags [.], ack 322, win 501, options [nop,nop,TS val 1596537 ecr 3721396366], length 0
15:43:17.778477 IP 10.244.0.27.52802 > 10.244.1.96.80: Flags [S], seq 1698202645, win 64240, options [mss 1460,sackOK,TS val 1597542 ecr 0,nop,wscale 7], length 0
15:43:17.779167 IP 10.244.1.96.80 > 10.244.0.27.52802: Flags [S.], seq 4294649790, ack 1698202646, win 65160, options [mss 1460,sackOK,TS val 3721397372 ecr 1597542,nop,wscale 7], length 0
...
  • 10.244.0.27: the actual IP of curl-pod
  • 10.244.1.96: the actual IP of webpod (on the worker node)
  • The service IP (10.96.152.212) never appears at the packet level
  • eBPF performs the NAT translation transparently at the kernel level
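That the service IP never reaches the wire can be verified by grepping the capture text for it. A sketch over two of the captured lines above; live, you would pipe the tcpdump output through the same grep:

```shell
# Search the capture summary for the ClusterIP; grep finds nothing,
# so the fallback message prints, confirming only Pod IPs appear on eth1.
cat <<'EOF' | grep '10\.96\.152\.212' || echo "service IP absent: NAT applied before the packet left the pod"
15:43:14.755578 IP 10.244.0.27.41700 > 10.244.1.96.80: Flags [S], length 0
15:43:14.756290 IP 10.244.1.96.80 > 10.244.0.27.41700: Flags [S.], length 0
EOF
```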

(2) Write a packet capture file

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 tcp port 80 -w /tmp/http.pcap

โœ… Output

tcpdump: listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C30 packets captured
30 packets received by filter
0 packets dropped by kernel

(3) Analyze with Termshark

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# termshark -r /tmp/http.pcap

โœ… Output


๐Ÿ [Cilium] Cluster Scope & ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์‹ค์Šต

1. Overview

  • Goal: migrate IPAM from Kubernetes Host Scope to Cilium Cluster Scope mode
  • IP range change: 10.244.0.0/16 → 172.20.0.0/16
  • Management ownership change: kube-controller-manager → Cilium Operator

Keep a repeated-request loop running to verify connectivity throughout the migration

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

2. Change the IPAM Mode

(1) First attempt (fails)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set ipam.mode="cluster-pool" --set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} --set ipv4NativeRoutingCIDR=172.20.0.0/16

โœ… Output

Error: UPGRADE FAILED: template: cilium/templates/cilium-operator/deployment.yaml:145:26: executing "cilium/templates/cilium-operator/deployment.yaml" at <.Values.k8sServiceHostRef.name>: nil pointer evaluating interface {}.name

(2) Fix: clean up the Helm values ๐Ÿ’ก

The failure appears to come from --reuse-values: the stored release values predate the chart's k8sServiceHostRef key, so the template dereferences a nil value. Rebuilding a minimal values file avoids the stale keys.

Back up the current values to clean-values.yaml

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm get values cilium -n kube-system > clean-values.yaml

Remove the entries that trigger the error and write the cleaned configuration to final-values.yaml

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat > final-values.yaml << EOF
autoDirectNodeRoutes: true
bpf:
  masquerade: true
debug:
  enabled: true
endpointHealthChecking:
  enabled: false
endpointRoutes:
  enabled: true
healthChecking: false
hubble:
  enabled: true
  metrics:
    enableOpenMetrics: true
    enabled:
    - dns
    - drop
    - tcp
    - flow
    - port-distribution
    - icmp
    - httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction
  relay:
    enabled: true
  ui:
    enabled: true
    service:
      nodePort: 30003
      type: NodePort
installNoConntrackIptablesRules: true
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - "172.20.0.0/16"
ipv4NativeRoutingCIDR: 172.20.0.0/16
k8s:
  requireIPv4PodCIDR: true
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
operator:
  prometheus:
    enabled: true
  replicas: 1
prometheus:
  enabled: true
routingMode: native
EOF

(3) Apply the configuration: IPAM successfully switched to cluster-pool

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system -f final-values.yaml

โœ… Output

Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Wed Jul 30 16:26:05 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.18.0.

For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp

3. Restart Cilium Components

(1) Restart the Cilium Operator

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart deploy/cilium-operator

# Result
deployment.apps/cilium-operator restarted

(2) Restart the Cilium DaemonSet

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium

# Result
daemonset.apps/cilium restarted

4. Check the k9s View

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k9s

โœ… Output: default namespace

โœ… Output: all namespaces

5. Verify the IPAM Mode Change

The IPAM mode has been switched to cluster-pool

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam

โœ… Output

ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s

6. Confirm the New Pod CIDR Is Not Yet Applied

(1) CiliumNode resources still hold the old CIDRs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2

โœ… Output

                    "podCIDRs": [
                        "10.244.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "10.244.1.0/24"
                    ],

(2) Check Pod IPs on the CiliumEndpoints

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A

โœ… Output

NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-wdb82                   22795               ready            10.244.0.104   
cilium-monitoring    prometheus-6fc896bc5d-bxnd5               1213                ready            10.244.0.65    
default              curl-pod                                  5580                ready            10.244.0.27    
default              webpod-697b545f57-bpzn9                   12497               ready            10.244.0.1     
default              webpod-697b545f57-xb8fd                   12497               ready            10.244.1.96    
kube-system          coredns-674b8bbfcf-9pxvx                  28565               ready            10.244.0.199   
kube-system          coredns-674b8bbfcf-khjhq                  28565               ready            10.244.0.59    
kube-system          hubble-relay-5b48c999f9-cvjjc             17061               ready            10.244.1.67    
kube-system          hubble-ui-655f947f96-tcrrp                2452                ready            10.244.1.66    
local-path-storage   local-path-provisioner-74f9666bc9-scg4s   56893               ready            10.244.0.253

Even so, communication continues to work

7. Investigate Why the IPAM Change Has Not Taken Effect

Existing CiliumNode resources keep the CIDR they were originally allocated; the operator hands out addresses from the new pool only when a CiliumNode object is recreated.

(1) Check the podCIDRs value on the CiliumNode resources

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2

โœ… Output

                    "podCIDRs": [
                        "10.244.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "10.244.1.0/24"
                    ],

(2) Confirm the existing CiliumNode IPs are unchanged

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode

โœ… Output

NAME      CILIUMINTERNALIP   INTERNALIP       AGE
k8s-ctr   10.244.0.70        192.168.10.100   130m
k8s-w1    10.244.1.175       192.168.10.101   128m

8. Delete the CiliumNode Resource and Restart

(1) Delete the worker node's CiliumNode

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ciliumnode k8s-w1

# Result
ciliumnode.cilium.io "k8s-w1" deleted

(2) Restart the Cilium DaemonSet

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium

# Result
daemonset.apps/cilium restarted

(3) ๋ณ€๊ฒฝ๋œ Pod CIDRs ํ™•์ธ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
                    "podCIDRs": [
                        "10.244.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "172.20.0.0/24"
                    ],

9. ์ปจํŠธ๋กคํ”Œ๋ ˆ์ธ ๋…ธ๋“œ๋„ CIDR ์žฌ์„ค์ •

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-wdb82                   22795               ready            10.244.0.104   
cilium-monitoring    prometheus-6fc896bc5d-bxnd5               1213                ready            10.244.0.65    
default              curl-pod                                  5580                ready            10.244.0.27    
default              webpod-697b545f57-bpzn9                   12497               ready            10.244.0.1     
kube-system          coredns-674b8bbfcf-9pxvx                  28565               ready            10.244.0.199   
kube-system          coredns-674b8bbfcf-khjhq                  28565               ready            10.244.0.59    
local-path-storage   local-path-provisioner-74f9666bc9-scg4s   56893               ready            10.244.0.253

(1) ์ปจํŠธ๋กคํ”Œ๋ ˆ์ธ ๋…ธ๋“œ ์‚ญ์ œ

1
2
3
4
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ciliumnode k8s-ctr

# ๊ฒฐ๊ณผ
ciliumnode.cilium.io "k8s-ctr" deleted

(2) Restart the DaemonSet

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium

# Result
daemonset.apps/cilium restarted

(3) ๋ณ€๊ฒฝ๋œ Pod CIDRs ํ™•์ธ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
                    "podCIDRs": [
                        "172.20.1.0/24"
                    ],
--
                    "podCIDRs": [
                        "172.20.0.0/24"
                    ],

10. ์—”๋“œํฌ์ธํŠธ ๋ฐ ๋ผ์šฐํŒ… ๊ฒฝ๋กœ ํ™•์ธ

(1) ๋ณ€๊ฒฝ๋œ Endpoint IP ํ™•์ธ

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A

โœ…ย ์ถœ๋ ฅ

1
2
3
NAMESPACE     NAME                       SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
kube-system   coredns-674b8bbfcf-gbnm8   28565               ready            172.20.0.186   
kube-system   coredns-674b8bbfcf-vvgfm   28565               ready            172.20.1.144  

(2) Check the routing table

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

โœ… Output

default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static 
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel 
172.20.1.144 dev lxcf2a822e72a6e proto kernel scope link 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route

โœ… Output

default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static 
172.20.0.186 dev lxc80130454cb70 proto kernel scope link 
172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 
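With native routing and autoDirectNodeRoutes, each node's pod /24 appears as a route via the peer node's eth1 address. A sketch that lists those inter-node routes; the heredoc stands in for the worker's `ip route` output above:

```shell
# Print "pod CIDR -> next hop" pairs for the new pod network;
# the heredoc stands in for live `ip route` output on k8s-w1.
cat <<'EOF' | awk '$1 ~ /^172\.20\./ && $2 == "via" {print $1, "->", $3}'
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.186 dev lxc80130454cb70 proto kernel scope link
172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
EOF
# → 172.20.1.0/24 -> 192.168.10.100
```

Local endpoints (the lxc* device routes) have no `via` hop and are filtered out, leaving only the cross-node path.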

11. Confirm Existing Pods Keep Their Old IPs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide | grep 10.244.

โœ… Output

cilium-monitoring    grafana-5c69859d9-wdb82                   0/1     Running            0              143m    10.244.0.104     k8s-ctr   <none>           <none>
cilium-monitoring    prometheus-6fc896bc5d-bxnd5               1/1     Running            0              143m    10.244.0.65      k8s-ctr   <none>           <none>
default              curl-pod                                  1/1     Running            0              105m    10.244.0.27      k8s-ctr   <none>           <none>
default              webpod-697b545f57-bpzn9                   1/1     Running            0              106m    10.244.0.1       k8s-ctr   <none>           <none>
default              webpod-697b545f57-xb8fd                   1/1     Running            0              106m    10.244.1.96      k8s-w1    <none>           <none>
kube-system          hubble-relay-5b48c999f9-cvjjc             0/1     Running            5 (28s ago)    39m     10.244.1.67      k8s-w1    <none>           <none>
kube-system          hubble-ui-655f947f96-tcrrp                1/2     CrashLoopBackOff   6 (106s ago)   39m     10.244.1.66      k8s-w1    <none>           <none>
local-path-storage   local-path-provisioner-74f9666bc9-scg4s   1/1     Running            0              143m    10.244.0.253     k8s-ctr   <none>           <none>
  • hubble-relay and hubble-ui are restarting, likely because their old 10.244.x.x addresses are no longer routable after the CIDR change

12. Restart Deployment Resources

Restart the system and monitoring pods

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart deploy/hubble-relay deploy/hubble-ui
kubectl -n cilium-monitoring rollout restart deploy/prometheus deploy/grafana
kubectl rollout restart deploy/webpod

โœ… Output

deployment.apps/hubble-relay restarted
deployment.apps/hubble-ui restarted
deployment.apps/prometheus restarted
deployment.apps/grafana restarted
deployment.apps/webpod restarted

13. ์ˆ˜๋™ ์ƒ์„ฑ ํŒŒ๋“œ ์‚ญ์ œ ๋ฐ ์žฌ๋ฐฐํฌ

(1) curl-pod ์‚ญ์ œ

1
2
3
4
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete pod curl-pod

# ์ถœ๋ ฅ
pod "curl-pod" deleted

(2) Redeploy curl-pod

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# Result
pod/curl-pod created

14. Verify the newly allocated IPs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A

✅ Output
NAMESPACE           NAME                           SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring   grafana-6bc98cff96-h74hv        22795               ready            172.20.0.67    
cilium-monitoring   prometheus-597ff4d4c5-hzrsx     1213                ready            172.20.0.17    
default             curl-pod                       5580                ready            172.20.1.236   
default             webpod-556878d5d7-7p8bn        12497               ready            172.20.1.40    
default             webpod-556878d5d7-r4dmh        12497               ready            172.20.0.130   
kube-system         coredns-674b8bbfcf-gbnm8       28565               ready            172.20.0.186   
kube-system         coredns-674b8bbfcf-vvgfm       28565               ready            172.20.1.144   
kube-system         hubble-relay-c8db994db-5hc26   17061               ready            172.20.0.190   
kube-system         hubble-ui-5c5855f4bf-8dkrf     2452                ready            172.20.0.162 
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get pod -A

✅ Output
NAMESPACE            NAME                                      READY   STATUS    RESTARTS   AGE
cilium-monitoring    grafana-6bc98cff96-h74hv                   1/1     Running   0          3m31s
cilium-monitoring    prometheus-597ff4d4c5-hzrsx                1/1     Running   0          3m31s
default              curl-pod                                  1/1     Running   0          110s
default              webpod-556878d5d7-7p8bn                   1/1     Running   0          3m4s
default              webpod-556878d5d7-r4dmh                   1/1     Running   0          3m30s
kube-system          cilium-8nxg4                              1/1     Running   0          8m42s
kube-system          cilium-envoy-mn4qm                        1/1     Running   0          44m
kube-system          cilium-envoy-zgsk4                        1/1     Running   0          44m
kube-system          cilium-kl2mj                              1/1     Running   0          8m42s
kube-system          cilium-operator-765ddcc649-ft64f          1/1     Running   0          38m
kube-system          coredns-674b8bbfcf-gbnm8                  1/1     Running   0          8m19s
kube-system          coredns-674b8bbfcf-vvgfm                  1/1     Running   0          8m4s
kube-system          etcd-k8s-ctr                              1/1     Running   0          149m
kube-system          hubble-relay-c8db994db-5hc26              1/1     Running   0          3m31s
kube-system          hubble-ui-5c5855f4bf-8dkrf                2/2     Running   0          3m31s
kube-system          kube-apiserver-k8s-ctr                    1/1     Running   0          149m
kube-system          kube-controller-manager-k8s-ctr           1/1     Running   0          149m
kube-system          kube-proxy-5ccc4                          1/1     Running   0          147m
kube-system          kube-proxy-mzn7t                          1/1     Running   0          149m
kube-system          kube-scheduler-k8s-ctr                    1/1     Running   0          149m
local-path-storage   local-path-provisioner-74f9666bc9-scg4s   1/1     Running   0          148m

15. Connectivity test from curl-pod

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

✅ Output
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-r4dmh
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-r4dmh
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-r4dmh
Hostname: webpod-556878d5d7-7p8bn
Hostname: webpod-556878d5d7-7p8bn
...

16. Resolving a Hubble port-forward conflict

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&

✅ Output
[2] 34662
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# 
Error: Unable to port forward: failed to port forward: failed to port forward: unable to listen on any of the requested ports: [{4245 4245}]

Identify the process holding port 4245 and kill it

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep 4245

✅ Output
LISTEN 0      4096        127.0.0.1:4245       0.0.0.0:*    users:(("cilium",pid=10026,fd=7))                      
LISTEN 0      4096            [::1]:4245          [::]:*    users:(("cilium",pid=10026,fd=8))
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kill -9 10026

# Result
[1]+  Killed                  cilium hubble port-forward

17. Restart the Hubble port-forward cleanly

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&

✅ Output
[1] 34787
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# โ„น๏ธ  Hubble Relay is available at 127.0.0.1:4245

Check Hubble status

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# hubble status

✅ Output
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 38.00
Connected Nodes: 2/2

🔧 Change the IPAM mode with care

Changing the IPAM mode involves far more risk than simply swapping the Pod CIDR range. The IPAM mode should be chosen carefully at the initial cluster-design stage, taking future scale-out plans and the overall network topology into account.


🧭 Routing

1. Check pod status and IPs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

✅ Output
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          57m   172.20.1.236   k8s-ctr   <none>           <none>
webpod-556878d5d7-7p8bn   1/1     Running   0          58m   172.20.1.40    k8s-ctr   <none>           <none>
webpod-556878d5d7-r4dmh   1/1     Running   0          59m   172.20.0.130   k8s-w1    <none>           <none>

Capture the webpod 1 and 2 pod IPs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# export WEBPODIP1=$(kubectl get -l app=webpod pods --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].status.podIP}')
export WEBPODIP2=$(kubectl get -l app=webpod pods --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].status.podIP}')
echo $WEBPODIP1 $WEBPODIP2

✅ Output
172.20.1.40 172.20.0.130

2. Verify pod-to-pod connectivity (ping)

Ping from curl-pod to webpod-2

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping $WEBPODIP2

✅ Output
PING 172.20.0.130 (172.20.0.130) 56(84) bytes of data.
64 bytes from 172.20.0.130: icmp_seq=1 ttl=62 time=0.433 ms
64 bytes from 172.20.0.130: icmp_seq=2 ttl=62 time=0.657 ms
64 bytes from 172.20.0.130: icmp_seq=3 ttl=62 time=0.554 ms
64 bytes from 172.20.0.130: icmp_seq=4 ttl=62 time=0.374 ms
64 bytes from 172.20.0.130: icmp_seq=5 ttl=62 time=0.990 ms
64 bytes from 172.20.0.130: icmp_seq=6 ttl=62 time=0.486 ms
64 bytes from 172.20.0.130: icmp_seq=7 ttl=62 time=0.446 ms
64 bytes from 172.20.0.130: icmp_seq=8 ttl=62 time=0.533 ms
...
  • ICMP echo replies are received normally
  • Pod-to-pod communication works (native routing is operating correctly)
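
The ttl=62 in the ping output is itself a hint about the path: assuming the netshoot image's ping starts at the common default TTL of 64, each L3 forwarding hop decrements it by one, and the pod-to-pod path crosses the two node hosts. A trivial sketch of that bookkeeping:

```shell
# TTL accounting for the curl-pod -> webpod-2 path (assumption: initial TTL 64).
ttl=64
for hop in k8s-ctr k8s-w1; do   # each routed hop decrements TTL by 1
  ttl=$((ttl - 1))
done
echo "ttl=$ttl"
```

An overlay tunnel that delivered the inner packet without intermediate L3 hops would typically show a higher TTL, so 62 is consistent with native routing.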

3. Check the routing table (k8s-ctr node)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static 
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel 
172.20.1.40 dev lxc0895f39b5225 proto kernel scope link 
172.20.1.144 dev lxcf2a822e72a6e proto kernel scope link 
172.20.1.236 dev lxcd63c3c1415ff proto kernel scope link 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
  • 172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel
    • To reach the range webpod-2 belongs to, traffic is sent via worker node 1's IP

4. Check the routing table (k8s-w1 node)

webpod-2 (172.20.0.130) is attached directly via a veth interface

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route

✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static 
172.20.0.17 dev lxce960d096d8a4 proto kernel scope link 
172.20.0.67 dev lxcd23f85153e89 proto kernel scope link 
172.20.0.130 dev lxc097ff224d206 proto kernel scope link 
172.20.0.162 dev lxc4fe9abccf909 proto kernel scope link 
172.20.0.186 dev lxc80130454cb70 proto kernel scope link 
172.20.0.190 dev lxcb2f1076877d3 proto kernel scope link 
172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
  • 172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel
    • The return route to k8s-ctr, where curl-pod lives, is in place

5. Observe traffic flows with the Hubble CLI

hubble observe -f --pod curl-pod

✅ Output
Jul 30 09:15:15.857: default/curl-pod (ID:5580) -> default/webpod-556878d5d7-r4dmh (ID:12497) to-network FORWARDED (ICMPv4 EchoRequest)
Jul 30 09:15:15.858: default/curl-pod (ID:5580) <- default/webpod-556878d5d7-r4dmh (ID:12497) to-endpoint FORWARDED (ICMPv4 EchoReply)
Jul 30 09:15:16.848: default/curl-pod (ID:5580) -> default/webpod-556878d5d7-r4dmh (ID:12497) to-endpoint FORWARDED (ICMPv4 EchoRequest)
Jul 30 09:15:16.848: default/curl-pod (ID:5580) <- default/webpod-556878d5d7-r4dmh (ID:12497) to-network FORWARDED (ICMPv4 EchoReply)
...

  • ICMP EchoRequest / EchoReply traffic is logged in real time
  • Source: curl-pod, Destination: webpod-2

6. Capture packets with tcpdump

Watch live ICMP traffic with tcpdump -i eth1 icmp

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp

✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:20:34.129970 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 636, length 64
18:20:34.130563 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 636, length 64
18:20:35.153607 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 637, length 64
18:20:35.154045 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 637, length 64
18:20:36.178084 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 638, length 64
18:20:36.179263 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 638, length 64
18:20:37.179611 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 639, length 64
18:20:37.179994 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 639, length 64
18:20:38.225687 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 640, length 64
18:20:38.226119 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 640, length 64
...
  • curl-pod (172.20.1.236) → webpod-2 (172.20.0.130) communicate directly
  • Packets carry the native pod IPs, with no overlay tunneling (VXLAN, Geneve)
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide

✅ Output
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          72m   172.20.1.236   k8s-ctr   <none>           <none>
webpod-556878d5d7-7p8bn   1/1     Running   0          73m   172.20.1.40    k8s-ctr   <none>           <none>
webpod-556878d5d7-r4dmh   1/1     Running   0          73m   172.20.0.130   k8s-w1    <none>           <none>

7. Save the tcpdump capture and analyze it with termshark

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -w /tmp/icmp.pcap

✅ Output
tcpdump: listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C8 packets captured
10 packets received by filter
0 packets dropped by kernel

The source and destination IPs are visible unchanged; there is no encapsulation

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# termshark -r /tmp/icmp.pcap

✅ Output


🔀 Masquerading

Masquerading overview

  • Devices with private internal IPs reach the outside world through a home router, NATed behind a single public IP
  • Similarly, in Kubernetes a Pod needs to be masqueraded to its node's IP when it talks to the external internet

Masquerading behavior in Kubernetes

  • When a Pod egresses to the internet, its source IP is Source-NATed (masqueraded) to the node IP
  • Reason: in most setups only the node IPs are routable to the external network
  • Only outbound traffic is masqueraded; intra-cluster communication is excluded

How Cilium masquerades

  • The source IP of all traffic leaving the cluster is rewritten to the node IP
  • However, pod-to-pod traffic and traffic to in-cluster node IPs are not masqueraded
  • e.g. when talking to a Pod on another node, the source IP must not be rewritten to the node IP → an exception is needed

Exception setting: ipv4-native-routing-cidr

  • Set it like ipv4-native-routing-cidr: 10.0.0.0/8
    • Traffic destined within that CIDR is not masqueraded
    • It is usually set to the cluster's Pod CIDR range
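
In Helm terms this agent flag maps to a chart value (a sketch; `ipv4NativeRoutingCIDR` is the value name used by recent Cilium chart versions, and the CIDR below is this lab's Pod range):

```shell
# Pin the native-routing CIDR at upgrade time so egress inside it is never SNATed.
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set ipv4NativeRoutingCIDR=172.20.0.0/16
```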

1. Check masquerading status

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent  -- cilium status | grep Masquerading

✅ Output
Masquerading:            BPF   [eth0, eth1]   172.20.0.0/16  [IPv4: Enabled, IPv6: Disabled]

2. Check ipv4-native-routing-cidr

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config view  | grep ipv4-native-routing-cidr

✅ Output
ipv4-native-routing-cidr                          172.20.0.0/16
  • ๊ฐ™์€ ํด๋Ÿฌ์Šคํ„ฐ ๋‚ด Node IP๋กœ ๊ฐ€๋Š” ํŠธ๋ž˜ํ”ฝ์€ Masquerading ์ œ์™ธ ๋Œ€์ƒ

3. Verify masquerading on pod-to-pod traffic

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -nn

✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:46:02.125979 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 2131, length 64
18:46:02.126385 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 2131, length 64
18:46:03.153938 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 2132, length 64
18:46:03.154695 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 2132, length 64
18:46:04.154704 IP 172.20.1.236 > 172.20.0.130: ICMP echo request, id 9174, seq 2133, length 64
18:46:04.155285 IP 172.20.0.130 > 172.20.1.236: ICMP echo reply, id 9174, seq 2133, length 64
...
  • tcpdump shows the ICMP request/reply with the source IP preserved end to end
  • Masquerading is not applied to pod-to-pod communication
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping 192.168.10.101
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -nn

✅ Output
64 bytes from 192.168.10.101: icmp_seq=1 ttl=63 time=0.333 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=63 time=0.535 ms
64 bytes from 192.168.10.101: icmp_seq=3 ttl=63 time=0.499 ms
...
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:48:32.790099 IP 172.20.1.236 > 192.168.10.101: ICMP echo request, id 9180, seq 1, length 64
18:48:32.790388 IP 192.168.10.101 > 172.20.1.236: ICMP echo reply, id 9180, seq 1, length 64
18:48:33.809718 IP 172.20.1.236 > 192.168.10.101: ICMP echo request, id 9180, seq 2, length 64
18:48:33.810202 IP 192.168.10.101 > 172.20.1.236: ICMP echo reply, id 9180, seq 2, length 64
18:48:34.833711 IP 172.20.1.236 > 192.168.10.101: ICMP echo request, id 9180, seq 3, length 64
18:48:34.834176 IP 192.168.10.101 > 172.20.1.236: ICMP echo reply, id 9180, seq 3, length 64
...
  • If masquerading (NAT) had occurred, the source IP would have changed to the node IP (e.g. 192.168.10.100)
  • So masquerading is not applied to traffic toward other cluster nodes either

🧪 Masquerading hands-on

1. Lab overview

  • Goal: verify whether traffic from a Pod to a server outside the cluster (the router) is masqueraded
  • Compared paths
    • communication between nodes inside the cluster
    • communication with the external server (router)
  • Environment
    • curl-pod, webpod, and the external router server (192.168.10.200)
    • same network segment: 192.168.10.0/24

2. Intra-cluster communication: not masqueraded

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s webpod | grep Hostname
Hostname: webpod-556878d5d7-r4dmh

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s webpod | grep Hostname
Hostname: webpod-556878d5d7-7p8bn
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp -nn
root@router:~# tcpdump -i eth1 icmp -nn

✅ Output

The source IP stays the Pod IP (172.20.1.236)

kubectl exec -it curl-pod -- ping 192.168.10.101

✅ Output
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          121m   172.20.1.236   k8s-ctr   <none>           <none>
webpod-556878d5d7-7p8bn   1/1     Running   0          122m   172.20.1.40    k8s-ctr   <none>           <none>
webpod-556878d5d7-r4dmh   1/1     Running   0          123m   172.20.0.130   k8s-w1    <none>           <none>

3. Communication with outside the cluster (router): masquerading occurs

The source IP is NATed to the IP of the node hosting the Pod (192.168.10.100) rather than the Pod IP

kubectl exec -it curl-pod -- ping 192.168.10.200

✅ Output

  • Cilium automatically masquerades traffic leaving the cluster
  • Even on the same network segment, the router is an external server rather than a node, so NAT is applied

4. Watch the live flows with hubble observe

hubble observe -f --pod curl-pod

✅ Output

  • Shows the traffic flows generated by curl-pod in real time
  • When calling the external server (router: 192.168.10.200), the source IP is masqueraded to the node IP before leaving

5. Confirm masquerading on TCP port 80 as well

kubectl exec -it curl-pod -- curl -s webpod

kubectl exec -it curl-pod -- curl -s 192.168.10.200

  • External communication (to 192.168.10.200) → the source IP is the node IP

6. ๋ผ์šฐํ„ฐ์˜ Loopback ์ธํ„ฐํŽ˜์ด์Šค ํ™•์ธ

๋ผ์šฐํ„ฐ(192.168.10.200)์—๋Š” 2๊ฐœ์˜ ๋”๋ฏธ ์ธํ„ฐํŽ˜์ด์Šค ์กด์žฌ

root@router:~# ip -br -c -4 addr

✅ Output
lo               UNKNOWN        127.0.0.1/8 
eth0             UP             10.0.2.15/24 metric 100 
eth1             UP             192.168.10.200/24 
loop1            UNKNOWN        10.10.1.200/24 
loop2            UNKNOWN        10.10.2.200/24 
root@router:~# ip -c a

✅ Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 69510sec preferred_lft 69510sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 85902sec preferred_lft 13902sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:dc:00:69 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.200/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fedc:69/64 scope link 
       valid_lft forever preferred_lft forever
4: loop1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ba:83:60:b7:5a:f4 brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.200/24 scope global loop1
       valid_lft forever preferred_lft forever
    inet6 fe80::9cd5:7fff:fe2e:29f7/64 scope link 
       valid_lft forever preferred_lft forever
5: loop2: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 9e:23:52:ad:1f:72 brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.200/24 scope global loop2
       valid_lft forever preferred_lft forever
    inet6 fe80::58a6:ffff:fe7a:8424/64 scope link 
       valid_lft forever preferred_lft forever

7. Static routing on the k8s nodes

Every node has a static route sending the 10.10.0.0/16 range via the router (192.168.10.200)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep static
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static

8. Traffic to the loopback interfaces is also masqueraded

Send HTTP requests from curl-pod to 10.10.1.200 and 10.10.2.200

kubectl exec -it curl-pod -- curl -s 10.10.1.200

kubectl exec -it curl-pod -- curl -s 10.10.2.200

  • These IPs belong to the router's loop1 and loop2 interfaces
  • Because of masquerading, replies are sent back to the node IP → communication works

⚙️ ip-masq-agent configuration (Cilium's eBPF implementation)

  • One of Cilium's advanced masquerading features: NAT (masquerading) can be skipped for traffic to specific CIDRs
  • Useful when Pods should reach networks outside the cluster directly with their Pod IPs, avoiding NAT
  • https://github.com/kubernetes-sigs/ip-masq-agent
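
The upstream agent consumes a small YAML config of the same shape Cilium reads from its ConfigMap (a sketch; the CIDRs are this lab's exempt ranges, and `masqLinkLocal` defaults to false):

```shell
# Standalone ip-masq-agent style config: destinations in nonMasqueradeCIDRs
# are not SNATed; masqLinkLocal: false keeps 169.254.0.0/16 exempt as well.
cat <<'EOF' > ip-masq-agent-config.yaml
nonMasqueradeCIDRs:
  - 10.10.1.0/24
  - 10.10.2.0/24
masqLinkLocal: false
EOF
```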

1. Enable and configure ipMasqAgent

Enable ip-masq-agent and set the exempt CIDRs via Helm

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.10.1.0/24,10.10.2.0/24}'

✅ Output
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Wed Jul 30 22:12:33 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.18.0.

For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp

2. Verify the applied configuration

(1) Check the generated ConfigMap

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system ip-masq-agent -o yaml | yq

✅ Output
{
  "apiVersion": "v1",
  "data": {
    "config": "{\"nonMasqueradeCIDRs\":[\"10.10.1.0/24\",\"10.10.2.0/24\"]}"
  },
  "kind": "ConfigMap",
  "metadata": {
    "annotations": {
      "meta.helm.sh/release-name": "cilium",
      "meta.helm.sh/release-namespace": "kube-system"
    },
    "creationTimestamp": "2025-07-30T13:12:35Z",
    "labels": {
      "app.kubernetes.io/managed-by": "Helm"
    },
    "name": "ip-masq-agent",
    "namespace": "kube-system",
    "resourceVersion": "38148",
    "uid": "14aaeb96-1b47-42cb-97f6-680fe73e8be6"
  }
}
  • nonMasqueradeCIDRs includes 10.10.1.0/24 and 10.10.2.0/24

(2) Confirm the ConfigMap exists

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system

✅ Output
NAME                                                   DATA   AGE
cilium-config                                           154    7h32m
cilium-envoy-config                                     1      7h32m
coredns                                                1      7h32m
extension-apiserver-authentication                     6      7h32m
hubble-relay-config                                     1      7h32m
hubble-ui-nginx                                        1      7h32m
ip-masq-agent                                          1      104s
kube-apiserver-legacy-service-account-token-tracking   1      7h32m
kube-proxy                                             2      7h32m
kube-root-ca.crt                                       1      7h32m
kubeadm-config                                          1      7h32m
kubelet-config                                          1      7h32m
  • The ip-masq-agent entry has been added

(3) Confirm Cilium picked up the setting

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cilium config view  | grep -i ip-masq

✅ Output
enable-ip-masq-agent                              true
  • enable-ip-masq-agent: true is set

(4) Check the BPF ipmasq exemption CIDRs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list

✅ Output
IP PREFIX/ADDRESS   
10.10.1.0/24             
10.10.2.0/24             
169.254.0.0/16

3. Failure case without NAT (before the router routes are configured)

curl-pod (172.20.1.236) → request to 10.10.1.200

  • The router must send its reply back to the Pod IP, but it has no route for that range, so the reply never arrives

Root cause

  • The router knows no route for the 172.20.0.0/16 range (the Pod CIDR)
  • The Pod sends a request to the internal server and the response never returns
  • In the TCP handshake, the SYN is sent but no SYN-ACK is received → the connection fails

4. Check the Pod IPs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get pod -owide

✅ Output
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          5h16m   172.20.1.236   k8s-ctr   <none>           <none>
webpod-556878d5d7-7p8bn   1/1     Running   0          5h17m   172.20.1.40    k8s-ctr   <none>           <none>
webpod-556878d5d7-r4dmh   1/1     Running   0          5h17m   172.20.0.130   k8s-w1    <none>           <none>
  • curl-pod: 172.20.1.236 (on the control-plane node)
  • webpod: 172.20.0.130 (on worker node 1)

5. Check the router's routing table

root@router:~# ip -c route

✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200 
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200 
  • The router has no route for 172.20.x.x, so it treats those destinations as unknown
  • It forwards such packets to its default gateway (10.0.2.2), sending them out toward the internet

6. Check the node's routing table

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# ip -c route

✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static 
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel 
172.20.1.40 dev lxc0895f39b5225 proto kernel scope link 
172.20.1.144 dev lxcf2a822e72a6e proto kernel scope link 
172.20.1.236 dev lxcd63c3c1415ff proto kernel scope link 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 

7. Add static routes on the router

root@router:~# ip route add 172.20.1.0/24 via 192.168.10.100
ip route add 172.20.0.0/24 via 192.168.10.101
  • With these routes the router can forward Pod CIDR traffic to the correct node
  • 172.20.1.0/24 → control-plane node, 172.20.0.0/24 → worker node 1
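
Note that `ip route add` entries do not survive a reboot. One way to persist them (a sketch, assuming the router is an Ubuntu box whose eth1 is netplan-managed, as in this lab; the drop-in file name is illustrative):

```shell
# Write a netplan drop-in with the two Pod CIDR routes, then apply it.
cat <<'EOF' > /etc/netplan/60-pod-routes.yaml
network:
  version: 2
  ethernets:
    eth1:
      routes:
        - to: 172.20.1.0/24
          via: 192.168.10.100
        - to: 172.20.0.0/24
          via: 192.168.10.101
EOF
netplan apply
```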

8. Verify the static routes took effect

root@router:~# ip -c route | grep 172.20

✅ Output
172.20.0.0/24 via 192.168.10.101 dev eth1 
172.20.1.0/24 via 192.168.10.100 dev eth1 
  • The router can now correctly route reply traffic destined for Pod IPs

9. Confirm communication now works

Direct communication with the internal network succeeds, with no NAT

kubectl exec -it curl-pod -- curl -s 10.10.1.200

kubectl exec -it curl-pod -- curl -s 10.10.2.200


📡 CoreDNS, NodeLocalDNS

1. Check /etc/resolv.conf inside a pod

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- cat /etc/resolv.conf

✅ Output
search default.svc.cluster.local svc.cluster.local cluster.local Davolink
nameserver 10.96.0.10
options ndots:5
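
The `ndots:5` line matters: a name containing fewer than five dots is first tried with each search domain appended, which is why a bare `webpod` resolves as `webpod.default.svc.cluster.local`. A toy model of that expansion order (a sketch of standard glibc resolver behavior; a real resolver stops at the first domain that answers, and no actual DNS queries are sent here):

```shell
# Search-list expansion under ndots:5; the search list is copied from the
# pod's resolv.conf above.
expand_query() {
  local name=$1 dots=${1//[!.]/}   # dots = only the '.' characters of the name
  if [ "${#dots}" -lt 5 ]; then    # fewer dots than ndots: try search domains first
    local d
    for d in default.svc.cluster.local svc.cluster.local cluster.local Davolink; do
      echo "$name.$d"
    done
  fi
  echo "$name"                     # finally tried as written (absolute)
}
expand_query webpod
```

This also explains the extra DNS round-trips for external names like example.com: four search-domain lookups fail before the literal name is tried.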

2. Check the kubelet DNS configuration

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/config.yaml | grep cluster -A1

✅ Output
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""

3. Check the CoreDNS Service

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system kube-dns

✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   8h

NAME                 ENDPOINTS                                                     AGE
endpoints/kube-dns   172.20.0.186:53,172.20.1.144:53,172.20.0.186:53 + 3 more...   8h

4. Check the CoreDNS Pods

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns

✅ Output
NAME                       READY   STATUS    RESTARTS   AGE
coredns-674b8bbfcf-gbnm8   1/1     Running   0          6h12m
coredns-674b8bbfcf-vvgfm   1/1     Running   0          6h12m
  • CoreDNS runs as 2 replicas in the kube-system namespace
  • The pods can be filtered with the label k8s-app=kube-dns

5. Inspect the CoreDNS Pods

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system -l k8s-app=kube-dns

✅ Output
...
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
...

6. Check the CoreDNS Corefile

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n kube-system coredns

✅ Output
Name:         coredns
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
Corefile:
----
.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30 {
       disable success cluster.local
       disable denial cluster.local
    }
    loop
    reload
    loadbalance
}

BinaryData
====

Events:  <none>

7. ๋…ธ๋“œ์˜ /etc/resolv.conf ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat /etc/resolv.conf

โœ… Output

# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search Davolink
  • The node itself uses the stub resolver managed by systemd-resolved
  • Queries actually go to 127.0.0.53, which acts as a local DNS proxy

8. Check the node's upstream DNS (resolvectl)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# resolvectl 

โœ… Output

Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (eth0)
    Current Scopes: DNS
         Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.3
       DNS Servers: 10.0.2.3
        DNS Domain: Davolink

Link 3 (eth1)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 4 (cilium_net)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 5 (cilium_host)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 9 (lxcfeeee14aa766)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 25 (lxcf2a822e72a6e)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 27 (lxc0895f39b5225)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 29 (lxcd63c3c1415ff)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  • ์ƒ์œ„ DNS๋Š” 10.0.2.3์œผ๋กœ ์„ค์ •๋˜์–ด ์žˆ์Œ (ex. VirtualBox NAT DNS)
  • ์ด๋Š” VM ๊ฒŒ์ŠคํŠธ์˜ ์™ธ๋ถ€ ๋„ค์ž„์„œ๋ฒ„ ์—ญํ• ์„ ํ•˜๋ฉฐ, upstream ์งˆ์˜๋ฅผ ๋‹ด๋‹น

๐Ÿ” Verifying DNS Queries from a Pod

1. Check the Pod and CoreDNS IPs

Check the Pod IPs and node placement of curl-pod and the webpod Pods

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide

โœ… Output

NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          6h17m   172.20.1.236   k8s-ctr   <none>           <none>
webpod-556878d5d7-7p8bn   1/1     Running   0          6h18m   172.20.1.40    k8s-ctr   <none>           <none>
webpod-556878d5d7-r4dmh   1/1     Running   0          6h19m   172.20.0.130   k8s-w1    <none>           <none>

The two CoreDNS Pods run on the k8s-ctr and k8s-w1 nodes respectively

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide

โœ… Output

NAME                       READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-gbnm8   1/1     Running   0          6h24m   172.20.0.186   k8s-w1    <none>           <none>
coredns-674b8bbfcf-vvgfm   1/1     Running   0          6h24m   172.20.1.144   k8s-ctr   <none>           <none>

2. Scale down the CoreDNS Pods (for lab convenience)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment -n kube-system coredns --replicas 1

# Result
deployment.apps/coredns scaled
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide

โœ… Output

NAME                       READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-vvgfm   1/1     Running   0          6h25m   172.20.1.144   k8s-ctr   <none>           <none>

3. Check the CoreDNS metrics (before queries)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#

โœ… Output

coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 1
coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 0
coredns_cache_misses_total{server="dns://:53",view="",zones="."} 3170
coredns_cache_requests_total{server="dns://:53",view="",zones="."} 3170

4. Internal DNS query test (webpod)

  • Run nslookup webpod → internal DNS resolves correctly
  • webpod.default.svc.cluster.local → returns the IP 10.96.152.212
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup webpod

โœ… Output

5. ์งˆ์˜ ๋””๋ฒ„๊น… (nslookup -debug)

1
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup -debug webpod

โœ… Output

;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

------------
    QUESTIONS:
	webpod.default.svc.cluster.local, type = A, class = IN
    ANSWERS:
    ->  webpod.default.svc.cluster.local
	internet address = 10.96.152.212
	ttl = 30
    AUTHORITY RECORDS:
    ADDITIONAL RECORDS:
------------
Name:	webpod.default.svc.cluster.local
Address: 10.96.152.212
;; Got recursion not available from 10.96.0.10
------------
    QUESTIONS:
	webpod.default.svc.cluster.local, type = AAAA, class = IN
    ANSWERS:
    AUTHORITY RECORDS:
    ->  cluster.local
	origin = ns.dns.cluster.local
	mail addr = hostmaster.cluster.local
	serial = 1753885699
	refresh = 7200
	retry = 1800
	expire = 86400
	minimum = 30
	ttl = 30
    ADDITIONAL RECORDS:
------------
  • Shows the A/AAAA query flow, the TTL, and the authority-section responses

6. External domain query test (google.com)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup -debug google.com

โœ… Output

;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

------------
    QUESTIONS:
	google.com.default.svc.cluster.local, type = A, class = IN
    ANSWERS:
    AUTHORITY RECORDS:
    ->  cluster.local
	origin = ns.dns.cluster.local
	mail addr = hostmaster.cluster.local
	serial = 1753885699
	refresh = 7200
	retry = 1800
	expire = 86400
	minimum = 30
	ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.default.svc.cluster.local: NXDOMAIN
;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

------------
    QUESTIONS:
	google.com.svc.cluster.local, type = A, class = IN
    ANSWERS:
    AUTHORITY RECORDS:
    ->  cluster.local
	origin = ns.dns.cluster.local
	mail addr = hostmaster.cluster.local
	serial = 1753885699
	refresh = 7200
	retry = 1800
	expire = 86400
	minimum = 30
	ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.svc.cluster.local: NXDOMAIN
;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

------------
    QUESTIONS:
	google.com.cluster.local, type = A, class = IN
    ANSWERS:
    AUTHORITY RECORDS:
    ->  cluster.local
	origin = ns.dns.cluster.local
	mail addr = hostmaster.cluster.local
	serial = 1753885699
	refresh = 7200
	retry = 1800
	expire = 86400
	minimum = 30
	ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.cluster.local: NXDOMAIN
Server:		10.96.0.10
Address:	10.96.0.10#53

------------
    QUESTIONS:
	google.com.Davolink, type = A, class = IN
    ANSWERS:
    AUTHORITY RECORDS:
    ->  .
	origin = a.root-servers.net
	mail addr = nstld.verisign-grs.com
	serial = 2025073000
	refresh = 1800
	retry = 900
	expire = 604800
	minimum = 86400
	ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.Davolink: NXDOMAIN
Server:		10.96.0.10
Address:	10.96.0.10#53

------------
    QUESTIONS:
	google.com, type = A, class = IN
    ANSWERS:
    ->  google.com
	internet address = 142.251.222.14
	ttl = 30
    AUTHORITY RECORDS:
    ADDITIONAL RECORDS:
------------
Non-authoritative answer:
Name:	google.com
Address: 142.251.222.14
------------
    QUESTIONS:
	google.com, type = AAAA, class = IN
    ANSWERS:
    ->  google.com
	has AAAA address 2404:6800:4005:813::200e
	ttl = 30
    AUTHORITY RECORDS:
    ADDITIONAL RECORDS:
------------
Name:	google.com
Address: 2404:6800:4005:813::200e

command terminated with exit code 1

google.com ์งˆ์˜ ์‹œ search ๋„๋ฉ”์ธ์„ ๋”ฐ๋ผ ์ˆœ์ฐจ ์งˆ์˜๋จ

  • google.com.default.svc.cluster.local โ†’ NXDOMAIN
  • google.com โ†’ ์ •์ƒ ์‘๋‹ต (A/AAAA ๋ชจ๋‘ ์ˆ˜์‹ )

curl-pod์—์„œ ์ˆ˜ํ–‰ํ•œ DNS ์งˆ์˜์— ๋Œ€ํ•ด CoreDNS๊ฐ€ A ๋ ˆ์ฝ”๋“œ๋กœ ์‘๋‹ตํ•œ ๊ฒƒ์„ ๋กœ๊ทธ์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Œ

7. Check the CoreDNS metrics (after queries)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#

โœ… Output

coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 2
coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 2
coredns_cache_misses_total{server="dns://:53",view="",zones="."} 3188
coredns_cache_requests_total{server="dns://:53",view="",zones="."} 3188

8. Tail the CoreDNS logs in real time

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system logs -l k8s-app=kube-dns -f

โœ… Output

maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.0
linux/amd64, go1.23.3, 51e11f1

9. Check CoreDNS in k9s

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k9s

10. Internal domain query test (webpod)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup webpod

โœ… Output

;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

Name:	webpod.default.svc.cluster.local
Address: 10.96.152.212
;; Got recursion not available from 10.96.0.10

11. Trace the internal DNS query flow with Hubble

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --port 53

โœ… Output

Jul 30 14:56:15.979: 10.0.2.15:44999 (host) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 30 14:57:36.751: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 14:57:36.751: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 14:57:36.751: default/curl-pod:32772 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.752: default/curl-pod:32772 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.752: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 14:57:36.752: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 14:57:36.754: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 14:57:36.754: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 14:57:36.754: default/curl-pod:57903 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.754: default/curl-pod:57903 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 14:57:36.754: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 14:57:36.754: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
  • curl-pod sends its request to 10.96.0.10:53
  • The query and response pass through Cilium's forwarding and service-translation steps

12. Inspect the internal DNS packets with tcpdump

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i any udp port 53 -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
23:57:36.751572 lxcd63c3c1415ff In  IP 172.20.1.236.32772 > 172.20.1.144.53: 19286+ A? webpod.default.svc.cluster.local. (50)
23:57:36.752160 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.32772: 19286*- 1/0/0 A 10.96.152.212 (98)
23:57:36.754268 lxcd63c3c1415ff In  IP 172.20.1.236.57903 > 172.20.1.144.53: 5377+ AAAA? webpod.default.svc.cluster.local. (50)
23:57:36.754408 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.57903: 5377*- 0/1/0 (143)
  • Confirms that the A? and AAAA? queries reach the CoreDNS Pod
  • Confirms that the response returns the IP 10.96.152.212

13. External domain query test (google.com)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup google.com

โœ… Output

;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

Non-authoritative answer:
Name:	google.com
Address: 142.250.196.110
Name:	google.com
Address: 2404:6800:4004:825::200e
  • The "recursion not available" message is printed several times
  • But ultimately the A record (142.250.196.110) and AAAA record (2404:6800:4004:825::200e) are returned normally

14. Trace the external domain query flow with Hubble

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --port 53

โœ… Output

Jul 30 15:01:56.982: 10.0.2.15:58614 (host) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 30 15:02:20.444: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.444: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.444: default/curl-pod:46923 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: default/curl-pod:46923 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.446: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.446: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.446: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.446: default/curl-pod:42237 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: default/curl-pod:42237 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.446: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.446: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.448: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.448: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.449: default/curl-pod:50175 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.449: default/curl-pod:50175 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.449: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.449: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.450: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.450: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.450: default/curl-pod:58567 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: kube-system/coredns-674b8bbfcf-vvgfm:57647 (ID:28565) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.450: 10.0.2.3:53 (world) <> kube-system/coredns-674b8bbfcf-vvgfm (ID:28565) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.455: kube-system/coredns-674b8bbfcf-vvgfm:57647 (ID:28565) <- 10.0.2.3:53 (world) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.455: default/curl-pod:58567 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.455: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.455: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.456: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.456: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.456: default/curl-pod:35291 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.461: default/curl-pod:35291 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.461: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.461: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
Jul 30 15:02:20.463: default/curl-pod (ID:5580) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 30 15:02:20.463: default/curl-pod (ID:5580) <> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) post-xlate-fwd TRANSLATED (UDP)
Jul 30 15:02:20.463: default/curl-pod:47474 (ID:5580) -> kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.475: default/curl-pod:47474 (ID:5580) <- kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) to-endpoint FORWARDED (UDP)
Jul 30 15:02:20.476: kube-system/coredns-674b8bbfcf-vvgfm:53 (ID:28565) <> default/curl-pod (ID:5580) pre-xlate-rev TRACED (UDP)
Jul 30 15:02:20.476: 10.96.0.10:53 (world) <> default/curl-pod (ID:5580) post-xlate-rev TRANSLATED (UDP)
  • Shows the flow curl-pod → CoreDNS → external nameserver (10.0.2.3)
  • CoreDNS forwards the query to the external DNS for recursive resolution
  • CoreDNS → 10.0.2.3 UDP forwarding is confirmed

15. Analyze the external DNS query packets with tcpdump

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i any udp port 53 -nn

โœ… Output

tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
00:02:20.444113 lxcd63c3c1415ff In  IP 172.20.1.236.46923 > 172.20.1.144.53: 7116+ A? google.com.default.svc.cluster.local. (54)
00:02:20.444467 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.46923: 7116 NXDomain*- 0/1/0 (147)
00:02:20.446052 lxcd63c3c1415ff In  IP 172.20.1.236.42237 > 172.20.1.144.53: 5495+ A? google.com.svc.cluster.local. (46)
00:02:20.446537 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.42237: 5495 NXDomain*- 0/1/0 (139)
00:02:20.448803 lxcd63c3c1415ff In  IP 172.20.1.236.50175 > 172.20.1.144.53: 9828+ A? google.com.cluster.local. (42)
00:02:20.448969 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.50175: 9828 NXDomain*- 0/1/0 (135)
00:02:20.450674 lxcd63c3c1415ff In  IP 172.20.1.236.58567 > 172.20.1.144.53: 41730+ A? google.com.Davolink. (37)
00:02:20.450858 lxcf2a822e72a6e In  IP 172.20.1.144.57647 > 10.0.2.3.53: 7727+ A? google.com.Davolink. (37)
00:02:20.450875 eth0  Out IP 10.0.2.15.57647 > 10.0.2.3.53: 7727+ A? google.com.Davolink. (37)
00:02:20.455142 eth0  In  IP 10.0.2.3.53 > 10.0.2.15.57647: 7727 NXDomain 0/1/0 (112)
00:02:20.455305 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.58567: 41730 NXDomain 0/1/0 (112)
00:02:20.456958 lxcd63c3c1415ff In  IP 172.20.1.236.35291 > 172.20.1.144.53: 23450+ A? google.com. (28)
00:02:20.457191 lxcf2a822e72a6e In  IP 172.20.1.144.57647 > 10.0.2.3.53: 12952+ A? google.com. (28)
00:02:20.457204 eth0  Out IP 10.0.2.15.57647 > 10.0.2.3.53: 12952+ A? google.com. (28)
00:02:20.460993 eth0  In  IP 10.0.2.3.53 > 10.0.2.15.57647: 12952 1/0/0 A 142.250.196.110 (44)
00:02:20.461181 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.35291: 23450 1/0/0 A 142.250.196.110 (54)
00:02:20.463949 lxcd63c3c1415ff In  IP 172.20.1.236.47474 > 172.20.1.144.53: 49159+ AAAA? google.com. (28)
00:02:20.464413 lxcf2a822e72a6e In  IP 172.20.1.144.57647 > 10.0.2.3.53: 52829+ AAAA? google.com. (28)
00:02:20.464427 eth0  Out IP 10.0.2.15.57647 > 10.0.2.3.53: 52829+ AAAA? google.com. (28)
00:02:20.473156 eth0  In  IP 10.0.2.3.53 > 10.0.2.15.57647: 52829 1/0/0 AAAA 2404:6800:4004:825::200e (56)
00:02:20.473605 lxcf2a822e72a6e In  IP 172.20.1.144.53 > 172.20.1.236.47474: 49159 1/0/0 AAAA 2404:6800:4004:825::200e (66)
  • After the search domains are applied, google.com.default.svc.cluster.local. and the other candidates fail in turn (NXDOMAIN)
  • Finally, the A and AAAA queries for google.com. succeed
  • The requests are forwarded to the external DNS server 10.0.2.3 and the responses are received

16. Check the CoreDNS cache metrics

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#

โœ… Output

coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 2
coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 2
coredns_cache_hits_total{server="dns://:53",type="denial",view="",zones="."} 1
coredns_cache_hits_total{server="dns://:53",type="success",view="",zones="."} 2
coredns_cache_misses_total{server="dns://:53",view="",zones="."} 3223
coredns_cache_requests_total{server="dns://:53",view="",zones="."} 3226
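
From the counters above, the cache hit ratio can be derived as hits / (hits + misses). A small sketch that sums the counters from the Prometheus text format (sample lines copied from the output above, label sets abbreviated):

```python
# Sample coredns_cache_* counter lines from the output above (labels trimmed).
metrics = """\
coredns_cache_hits_total{server="dns://:53",type="denial"} 1
coredns_cache_hits_total{server="dns://:53",type="success"} 2
coredns_cache_misses_total{server="dns://:53"} 3223
"""

def metric_sum(text, name):
    """Sum all samples of a counter across its label sets."""
    return sum(float(line.rsplit(" ", 1)[1])
               for line in text.splitlines()
               if line.startswith(name + "{"))

hits = metric_sum(metrics, "coredns_cache_hits_total")      # 3.0
misses = metric_sum(metrics, "coredns_cache_misses_total")  # 3223.0
print(f"cache hit ratio: {hits / (hits + misses):.4%}")
```

The ratio is tiny here because the misses counter also includes the readiness/health lookups accumulated over the whole 6+ hours, while only a handful of test queries could hit the cache.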

๐Ÿง  NodeLocalDNS Lab + Cilium Local Redirect Policy

1. Back up the iptables state (before installation)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables-save | tee before.txt

2. Download the NodeLocal DNS manifest

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# wget https://github.com/kubernetes/kubernetes/raw/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

โœ… Output

--2025-07-31 00:17:53--  https://github.com/kubernetes/kubernetes/raw/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml [following]
--2025-07-31 00:17:53--  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5377 (5.3K) [text/plain]
Saving to: โ€˜nodelocaldns.yamlโ€™

nodelocaldns.yaml       100%[==============================>]   5.25K  --.-KB/s    in 0s      

2025-07-31 00:17:53 (61.7 MB/s) - โ€˜nodelocaldns.yamlโ€™ saved [5377/5377]

3. Set environment variables

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
domain='cluster.local'
localdns='169.254.20.10'
echo $kubedns $domain $localdns

โœ… Output

10.96.0.10 cluster.local 169.254.20.10

4. Substitute the variables in the manifest

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
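
The sed command performs three placeholder substitutions in one pass; the same rewrite sketched in Python (placeholder names from the upstream manifest, values from step 3):

```python
# Placeholder -> value map built in step 3.
subs = {
    "__PILLAR__LOCAL__DNS__": "169.254.20.10",
    "__PILLAR__DNS__DOMAIN__": "cluster.local",
    "__PILLAR__DNS__SERVER__": "10.96.0.10",
}

# One representative manifest line before substitution.
line = "bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__"
for placeholder, value in subs.items():
    line = line.replace(placeholder, value)
print(line)  # bind 169.254.20.10 10.96.0.10
```

Note that `__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` are deliberately left in place: node-cache fills those in at runtime.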

5. Install NodeLocalDNS

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f nodelocaldns.yaml

# Result
serviceaccount/node-local-dns created
service/kube-dns-upstream created
configmap/node-local-dns created
daemonset.apps/node-local-dns created
service/node-local-dns created

6. Verify the installation (check the Pods)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=node-local-dns -owide

โœ… Output

NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
node-local-dns-ggjmt   1/1     Running   0          30s   192.168.10.100   k8s-ctr   <none>           <none>
node-local-dns-kzthr   1/1     Running   0          30s   192.168.10.101   k8s-w1    <none>           <none>

7. Add logging (edit the ConfigMap)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl edit cm -n kube-system node-local-dns

โœ… Output

...
    cluster.local:53 {
        log
        errors
        cache {
                success 9984 30
                denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 10.96.0.10
        forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8080
        }
...
    .:53 {
        log
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 10.96.0.10
        forward . __PILLAR__UPSTREAM__SERVERS__
        prometheus :9253
        }

configmap/node-local-dns edited
  • Adds the log directive to the cluster.local:53 and .:53 blocks in the Corefile

8. Restart the DaemonSet

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds node-local-dns

# Result
daemonset.apps/node-local-dns restarted

9. Verify the applied ConfigMap

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system node-local-dns

โœ… Output

Name:         node-local-dns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=Reconcile
Annotations:  <none>

Data
====
Corefile:
----
cluster.local:53 {
    log
    errors
    cache {
            success 9984 30
            denial 9984 5
    }
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    health 169.254.20.10:8080
    }
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
.:53 {
    log
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__UPSTREAM__SERVERS__
    prometheus :9253
    }

BinaryData
====

Events:  <none>
  • Queries arriving at 169.254.20.10 are answered by node-local-dns
  • The cluster.local domain is served internally, while external domains are forwarded upstream
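
The section title also mentions Cilium's Local Redirect Policy: with Cilium's eBPF kube-proxy replacement, Pod traffic to the kube-dns ClusterIP is load-balanced in eBPF and can bypass the iptables NOTRACK rules above, so Cilium instead offers a CiliumLocalRedirectPolicy to steer such traffic to the node-local cache. A sketch of the policy shape based on the Cilium docs (assumes Cilium was installed with localRedirectPolicy enabled; verify the exact fields against your Cilium version):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumLocalRedirectPolicy
metadata:
  name: nodelocaldns
  namespace: kube-system
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: kube-dns        # redirect traffic addressed to this Service
      namespace: kube-system
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns    # deliver to the node-local-dns Pod on the same node
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP
```

With such a policy applied, DNS queries from Pods would be redirected to the node-local-dns Pod on the same node without changing each Pod's /etc/resolv.conf.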

10. Back up the iptables state (after installation)

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# iptables-save | tee after.txt

11. Compare iptables with diff

Check the iptables differences before and after installing NodeLocalDNS

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# diff before.txt after.txt 

โœ… Output

1c1
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
19,20c19,20
< # Completed on Thu Jul 31 00:16:21 2025
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
25a26,29
> -A PREROUTING -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A PREROUTING -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A PREROUTING -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A PREROUTING -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
26a31,42
> -A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 10.96.0.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 8080 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 169.254.20.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
> -A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: skip conntrack" -j NOTRACK
38,39c54,55
< # Completed on Thu Jul 31 00:16:21 2025
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
41c57
< :INPUT ACCEPT [6292389:2731707993]
---
> :INPUT ACCEPT [6565229:2825821888]
43c59
< :OUTPUT ACCEPT [6187388:1591005004]
---
> :OUTPUT ACCEPT [6451665:1655130253]
54a71,74
> -A INPUT -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A INPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A INPUT -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A INPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
64a85,88
> -A OUTPUT -s 10.96.0.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A OUTPUT -s 169.254.20.10/32 -p udp -m udp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
> -A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 53 -m comment --comment "NodeLocal DNS Cache: allow DNS traffic" -j ACCEPT
84,85c108,109
< # Completed on Thu Jul 31 00:16:21 2025
< # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025
> # Generated by iptables-save v1.8.10 (nf_tables) on Thu Jul 31 00:35:13 2025
105a130
> :KUBE-SEP-6O2WVHQLDKFLZQRS - [0:0]
110a136
> :KUBE-SEP-IG5OB37KH2LEQCBN - [0:0]
114a141
> :KUBE-SVC-BRK3P4PPQWCLKOAN - [0:0]
118a146
> :KUBE-SVC-FXR4M2CWOGAZGGYD - [0:0]
136a165,166
> -A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -m nfacct --nfacct-name  localhost_nps_accepted_pkts -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
> -A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
141,142d170
< -A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -m nfacct --nfacct-name  localhost_nps_accepted_pkts -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
< -A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/hubble-ui:http" -m tcp --dport 30003 -j KUBE-EXT-ZGWW2L4XLRSDZ3EF
153a182,183
> -A KUBE-SEP-6O2WVHQLDKFLZQRS -s 172.20.1.144/32 -m comment --comment "kube-system/kube-dns-upstream:dns" -j KUBE-MARK-MASQ
> -A KUBE-SEP-6O2WVHQLDKFLZQRS -p udp -m comment --comment "kube-system/kube-dns-upstream:dns" -m udp -j DNAT --to-destination 172.20.1.144:53
163a194,195
> -A KUBE-SEP-IG5OB37KH2LEQCBN -s 172.20.1.144/32 -m comment --comment "kube-system/kube-dns-upstream:dns-tcp" -j KUBE-MARK-MASQ
> -A KUBE-SEP-IG5OB37KH2LEQCBN -p tcp -m comment --comment "kube-system/kube-dns-upstream:dns-tcp" -m tcp -j DNAT --to-destination 172.20.1.144:53
167a200,201
> -A KUBE-SERVICES -d 10.96.152.212/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
> -A KUBE-SERVICES -d 10.96.206.199/32 -p tcp -m comment --comment "kube-system/hubble-ui:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-ZGWW2L4XLRSDZ3EF
170a205,206
> -A KUBE-SERVICES -d 10.96.31.170/32 -p udp -m comment --comment "kube-system/kube-dns-upstream:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-FXR4M2CWOGAZGGYD
> -A KUBE-SERVICES -d 10.96.31.170/32 -p tcp -m comment --comment "kube-system/kube-dns-upstream:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-BRK3P4PPQWCLKOAN
176,177d211
< -A KUBE-SERVICES -d 10.96.152.212/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
< -A KUBE-SERVICES -d 10.96.206.199/32 -p tcp -m comment --comment "kube-system/hubble-ui:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-ZGWW2L4XLRSDZ3EF
180a215,216
> -A KUBE-SVC-BRK3P4PPQWCLKOAN ! -s 10.244.0.0/16 -d 10.96.31.170/32 -p tcp -m comment --comment "kube-system/kube-dns-upstream:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
> -A KUBE-SVC-BRK3P4PPQWCLKOAN -m comment --comment "kube-system/kube-dns-upstream:dns-tcp -> 172.20.1.144:53" -j KUBE-SEP-IG5OB37KH2LEQCBN
189a226,227
> -A KUBE-SVC-FXR4M2CWOGAZGGYD ! -s 10.244.0.0/16 -d 10.96.31.170/32 -p udp -m comment --comment "kube-system/kube-dns-upstream:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
> -A KUBE-SVC-FXR4M2CWOGAZGGYD -m comment --comment "kube-system/kube-dns-upstream:dns -> 172.20.1.144:53" -j KUBE-SEP-6O2WVHQLDKFLZQRS
201c239
< # Completed on Thu Jul 31 00:16:21 2025
---
> # Completed on Thu Jul 31 00:35:13 2025

Key additions

  • DNS traffic to 10.96.0.10 and 169.254.20.10 is excluded from conntrack (NOTRACK)
  • ACCEPT rules are added for DNS port 53

12. Check DNS settings inside the pod

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- cat /etc/resolv.conf

โœ… Output

search default.svc.cluster.local svc.cluster.local cluster.local Davolink
nameserver 10.96.0.10
options ndots:5

13. Check DNS logs

kubectl -n kube-system logs -l k8s-app=kube-dns -f
kubectl -n kube-system logs -l k8s-app=node-local-dns -f
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete pod curl-pod

# Result
pod "curl-pod" deleted

14. Redeploy curl-pod

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# Result
pod/curl-pod created

๐Ÿ“ Cilium Local Redirect Policy : --set localRedirectPolicy=true

1. Cilium LocalRedirectPolicy overview

  • With localRedirectPolicy=true, Cilium uses eBPF to deliver DNS requests directly to the node-local-dns pod on the same node
  • This bypasses the default Kubernetes datapath (iptables/IPVS) so that Cilium handles the traffic itself
  • https://docs.cilium.io/en/stable/network/kubernetes/local-redirect-policy/

2. Update the Cilium configuration

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set localRedirectPolicy=true

โœ… Output

Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Thu Jul 31 00:52:32 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.18.0.

For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp

3. Restart Cilium

(1) Restart cilium-operator

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deploy cilium-operator -n kube-system

# Result
deployment.apps/cilium-operator restarted

(2) Restart the cilium DaemonSet

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart ds cilium -n kube-system

# Result
daemonset.apps/cilium restarted
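Once the pods have rolled, you can confirm the Helm value actually landed in the agent configuration. A quick sketch — note that the ConfigMap key name (`enable-local-redirect-policy`) is an assumption based on the usual Helm-value-to-config mapping:

```shell
# Check that the Helm value was rendered into cilium-config.
# (Key name assumed: enable-local-redirect-policy)
kubectl -n kube-system get cm cilium-config -o yaml | grep -i local-redirect
```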

4. Download and modify the NodeLocal DNS manifest for Local Redirect

(1) Cilium์—์„œ ์ œ๊ณตํ•˜๋Š” NodeLocal DNS ์„ค์ • ํŒŒ์ผ ๋‹ค์šด๋กœ๋“œ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# wget https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns.yaml

โœ… Output

--2025-07-31 00:54:30--  https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3493 (3.4K) [text/plain]
Saving to: โ€˜node-local-dns.yamlโ€™

node-local-dns.yaml     100%[==============================>]   3.41K  --.-KB/s    in 0s      

2025-07-31 00:54:31 (51.1 MB/s) - โ€˜node-local-dns.yamlโ€™ saved [3493/3493]

(2) Substitute the existing kube-dns ClusterIP as the forward target

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})
sed -i "s/__PILLAR__DNS__SERVER__/$kubedns/g;" node-local-dns.yaml
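A quick sanity check that the substitution actually took effect — a sketch; the placeholder count should be zero and the ClusterIP should now appear in the forward lines:

```shell
# The placeholder must be gone after the sed pass...
grep -c '__PILLAR__DNS__SERVER__' node-local-dns.yaml   # expect 0
# ...and the kube-dns ClusterIP should appear in its place.
grep -n "$kubedns" node-local-dns.yaml | head -n 3
```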

(3) After substituting __PILLAR__DNS__SERVER__, compare against the original manifest with vi -d

vi -d nodelocaldns.yaml node-local-dns.yaml


5. ์ˆ˜์ •๋œ NodeLocal DNS ๋ฐฐํฌ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f node-local-dns.yaml

# Result
serviceaccount/node-local-dns configured
service/kube-dns-upstream configured
configmap/node-local-dns configured
daemonset.apps/node-local-dns configured

6. Check related resources

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k get cm -n kube-system

โœ… Output

NAME                                                   DATA   AGE
cilium-config                                          155    10h
cilium-envoy-config                                    1      10h
coredns                                                1      10h
extension-apiserver-authentication                     6      10h
hubble-relay-config                                    1      10h
hubble-ui-nginx                                        1      10h
ip-masq-agent                                          1      167m
kube-apiserver-legacy-service-account-token-tracking   1      10h
kube-proxy                                             2      10h
kube-root-ca.crt                                       1      10h
kubeadm-config                                         1      10h
kubelet-config                                         1      10h
node-local-dns                                         1      39m

7. Edit the ConfigMap (add logging to the Corefile)

k9s ๋˜๋Š” kubectl edit cm์„ ์‚ฌ์šฉํ•˜์—ฌ Corefile์— log ์„ค์ • ์ถ”๊ฐ€

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# k9s
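If an interactive editor is not an option (e.g. in a script), the same change can be applied non-interactively. A rough sketch — unlike the manual edit, this inserts log into every server block, and assumes GNU sed:

```shell
# Dump the ConfigMap, insert a `log` line above each `errors` directive
# in the embedded Corefile, then re-apply. Affects ALL server blocks.
kubectl -n kube-system get cm node-local-dns -o yaml \
  | sed 's/^\(\s*\)errors$/\1log\n\1errors/' \
  | kubectl apply -f -
```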

8. Corefile ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system node-local-dns

โœ… Output

Name:         node-local-dns
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
Corefile:
----
cluster.local:53 {
    log
    errors
    cache {
            success 9984 30
            denial 9984 5
    }
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    health
    }
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
.:53 {
    log
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__UPSTREAM__SERVERS__
    prometheus :9253
    }

BinaryData
====

Events:  <none>

9. Download and apply the LocalRedirectPolicy resource

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# wget https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml

โœ… Output

--2025-07-31 01:04:50--  https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 452 [text/plain]
Saving to: โ€˜node-local-dns-lrp.yamlโ€™

node-local-dns-lrp.yaml                       100%[===============================================================================================>]     452  --.-KB/s    in 0s      

2025-07-31 01:04:50 (35.5 MB/s) - โ€˜node-local-dns-lrp.yamlโ€™ saved [452/452]

Inspect the internal structure of the downloaded node-local-dns-lrp.yaml

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# cat node-local-dns-lrp.yaml | yq

โœ… Output

{
  "apiVersion": "cilium.io/v2",
  "kind": "CiliumLocalRedirectPolicy",
  "metadata": {
    "name": "nodelocaldns",
    "namespace": "kube-system"
  },
  "spec": {
    "redirectFrontend": {
      "serviceMatcher": {
        "serviceName": "kube-dns",
        "namespace": "kube-system"
      }
    },
    "redirectBackend": {
      "localEndpointSelector": {
        "matchLabels": {
          "k8s-app": "node-local-dns"
        }
      },
      "toPorts": [
        {
          "port": "53",
          "name": "dns",
          "protocol": "UDP"
        },
        {
          "port": "53",
          "name": "dns-tcp",
          "protocol": "TCP"
        }
      ]
    }
  }
}
  • Traffic destined for the kube-dns service in the kube-system namespace
  • is redirected to port 53 (UDP/TCP) of pods in the same namespace labeled k8s-app=node-local-dns
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml

# Result
ciliumlocalredirectpolicy.cilium.io/nodelocaldns created

10. Verify the policy is applied

(1) ์ƒ์„ฑ๋œ LRP ํ™•์ธ

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumLocalRedirectPolicy -A

โœ… Output

NAMESPACE     NAME           AGE
kube-system   nodelocaldns   29s

(2) Check the actual redirect service status

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg service list | grep LocalRedirect
17   10.96.0.10:53/UDP       LocalRedirect   1 => 172.20.0.211:53/UDP (active)       
18   10.96.0.10:53/TCP       LocalRedirect   1 => 172.20.0.211:53/TCP (active) 

11. Test and check logs

(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system logs -l k8s-app=node-local-dns -f
(โŽˆ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup www.google.com
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
;; Got recursion not available from 10.96.0.10
Server:		10.96.0.10
Address:	10.96.0.10#53

Non-authoritative answer:
Name:	www.google.com
Address: 172.217.161.36
Name:	www.google.com
Address: 2404:6800:4005:81a::2004
    loop
    bind 0.0.0.0
    forward . /etc/resolv.conf
    prometheus :9253
    }
[INFO] Reloading
[INFO] plugin/reload: Running configuration MD5 = a9f86d268572cb5d3a3b2400ed98aff3
[INFO] Reloading complete
[INFO] 127.0.0.1:40579 - 43725 "HINFO IN 1286845318690505901.8683751169514593307.cluster.local. udp 71 false 512" NXDOMAIN qr,aa,rd 164 0.001900209s
[INFO] 127.0.0.1:56654 - 7833 "HINFO IN 2926695405806933559.8651839986208145762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009552781s
[INFO] 172.20.0.69:37103 - 36022 "A IN www.google.com.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.002148806s
[INFO] 172.20.0.69:45793 - 8828 "A IN www.google.com.davolink. udp 41 false 512" NXDOMAIN qr,rd,ra 116 0.006312427s
[INFO] 172.20.0.69:41214 - 60955 "A IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 62 0.004218626s
[INFO] 172.20.0.69:47527 - 57259 "AAAA IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 74 0.004593533s
[INFO] 172.20.0.69:33415 - 17625 "A IN www.google.com.default.svc.cluster.local. udp 58 false 512" NXDOMAIN qr,aa,rd 151 0.001869527s
[INFO] 172.20.0.69:35698 - 26190 "A IN www.google.com.svc.cluster.local. udp 50 false 512" NXDOMAIN qr,aa,rd 143 0.001329426s
[INFO] 172.20.0.69:44150 - 38926 "A IN www.google.com.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.000965516s
[INFO] 172.20.0.69:58440 - 55923 "A IN www.google.com.davolink. udp 41 false 512" NXDOMAIN qr,rd,ra 116 0.006998827s
[INFO] 172.20.0.69:59737 - 33042 "A IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 62 0.004193042s
[INFO] 172.20.0.69:39931 - 18384 "AAAA IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 74 0.003735194s
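The log above also illustrates the resolv.conf ndots:5 behavior: the bare name www.google.com is first expanded through the search list (default.svc.cluster.local, svc.cluster.local, cluster.local, davolink), producing the NXDOMAIN round-trips before the final NOERROR answers. Appending a trailing dot marks the name fully qualified and skips that expansion; a quick way to compare:

```shell
# A trailing dot makes the name an FQDN, so the search-list expansion
# (and its NXDOMAIN queries) should not show up in the DNS log.
kubectl exec -it curl-pod -- nslookup www.google.com.
```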

This post is licensed under CC BY 4.0 by the author.