🔧 Lab Environment Setup
1. Download the Vagrantfile and provision the VMs
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/6w/Vagrantfile
vagrant up
Output:
Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'router' up with 'virtualbox' provider...
==> k8s-ctr: Box 'bento/ubuntu-24.04' could not be found. Attempting to find and install...
k8s-ctr: Box Provider: virtualbox
k8s-ctr: Box Version: 202508.03.0
==> k8s-ctr: Loading metadata for box 'bento/ubuntu-24.04'
k8s-ctr: URL: https://vagrantcloud.com/api/v2/vagrant/bento/ubuntu-24.04
==> k8s-ctr: Adding box 'bento/ubuntu-24.04' (v202508.03.0) for provider: virtualbox (amd64)
k8s-ctr: Downloading: https://vagrantcloud.com/bento/boxes/ubuntu-24.04/versions/202508.03.0/providers/virtualbox/amd64/vagrant.box
==> k8s-ctr: Successfully added box 'bento/ubuntu-24.04' (v202508.03.0) for 'virtualbox (amd64)'!
==> k8s-ctr: Preparing master VM for linked clones...
k8s-ctr: This is a one time operation. Once the master VM is prepared,
k8s-ctr: it will be used as a base for linked clones, making the creation
k8s-ctr: of new VMs take milliseconds on a modern system.
==> k8s-ctr: Importing base box 'bento/ubuntu-24.04'...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202508.03.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
k8s-ctr: Adapter 1: nat
k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
k8s-ctr: SSH address: 127.0.0.1:60000
k8s-ctr: SSH username: vagrant
k8s-ctr: SSH auth method: private key
k8s-ctr:
k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
k8s-ctr: this with a newly generated keypair for better security.
k8s-ctr:
k8s-ctr: Inserting generated public key within guest...
k8s-ctr: Removing insecure key from the guest if it's present...
k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
k8s-ctr: The guest additions on this VM do not match the installed version of
k8s-ctr: VirtualBox! In most cases this is fine, but in rare cases it can
k8s-ctr: prevent things such as shared folders from working properly. If you see
k8s-ctr: shared folder errors, please make sure the guest additions within the
k8s-ctr: virtual machine match the version of VirtualBox you have installed on
k8s-ctr: your host and reload your VM.
k8s-ctr:
k8s-ctr: Guest Additions Version: 7.1.12
k8s-ctr: VirtualBox Version: 7.2
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250823-57237-anxscg.sh
k8s-ctr: >>>> Initial Config Start <<<<
k8s-ctr: [TASK 1] Setting Profile & Bashrc
k8s-ctr: [TASK 2] Disable AppArmor
k8s-ctr: [TASK 3] Disable and turn off SWAP
k8s-ctr: [TASK 4] Install Packages
k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-ctr: [TASK 6] Install Packages & Helm
k8s-ctr: [TASK 7] Install pwru
k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250823-57237-5746j8.sh
k8s-ctr: >>>> K8S Controlplane config Start <<<<
k8s-ctr: [TASK 1] Initial Kubernetes
k8s-ctr: [TASK 2] Setting kube config file
k8s-ctr: [TASK 3] Source the completion
k8s-ctr: [TASK 4] Alias kubectl to k
k8s-ctr: [TASK 5] Install Kubectx & Kubens
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: [TASK 7] Install Cilium CNI
k8s-ctr: [TASK 8] Install Cilium / Hubble CLI
k8s-ctr: cilium
k8s-ctr: hubble
k8s-ctr: [TASK 9] Remove node taint
k8s-ctr: node/k8s-ctr untainted
k8s-ctr: [TASK 10] local DNS with hosts file
k8s-ctr: [TASK 11] Dynamically provisioning persistent local storage with Kubernetes
k8s-ctr: [TASK 13] Install Metrics-server
k8s-ctr: [TASK 14] Install k9s
k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250823-57237-aenbtn.sh
k8s-ctr: >>>> Route Add Config Start <<<<
k8s-ctr: >>>> Route Add Config End <<<<
==> k8s-w1: Box 'bento/ubuntu-24.04' could not be found. Attempting to find and install...
k8s-w1: Box Provider: virtualbox
k8s-w1: Box Version: 202508.03.0
==> k8s-w1: Loading metadata for box 'bento/ubuntu-24.04'
k8s-w1: URL: https://vagrantcloud.com/api/v2/vagrant/bento/ubuntu-24.04
==> k8s-w1: Adding box 'bento/ubuntu-24.04' (v202508.03.0) for provider: virtualbox (amd64)
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202508.03.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
k8s-w1: Adapter 1: nat
k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
k8s-w1: SSH address: 127.0.0.1:60001
k8s-w1: SSH username: vagrant
k8s-w1: SSH auth method: private key
k8s-w1:
k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w1: this with a newly generated keypair for better security.
k8s-w1:
k8s-w1: Inserting generated public key within guest...
k8s-w1: Removing insecure key from the guest if it's present...
k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
k8s-w1: The guest additions on this VM do not match the installed version of
k8s-w1: VirtualBox! In most cases this is fine, but in rare cases it can
k8s-w1: prevent things such as shared folders from working properly. If you see
k8s-w1: shared folder errors, please make sure the guest additions within the
k8s-w1: virtual machine match the version of VirtualBox you have installed on
k8s-w1: your host and reload your VM.
k8s-w1:
k8s-w1: Guest Additions Version: 7.1.12
k8s-w1: VirtualBox Version: 7.2
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250823-57237-ijctwz.sh
k8s-w1: >>>> Initial Config Start <<<<
k8s-w1: [TASK 1] Setting Profile & Bashrc
k8s-w1: [TASK 2] Disable AppArmor
k8s-w1: [TASK 3] Disable and turn off SWAP
k8s-w1: [TASK 4] Install Packages
k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w1: [TASK 6] Install Packages & Helm
k8s-w1: [TASK 7] Install pwru
k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250823-57237-i6tnfg.sh
k8s-w1: >>>> K8S Node config Start <<<<
k8s-w1: [TASK 1] K8S Controlplane Join
k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250823-57237-u615su.sh
k8s-w1: >>>> Route Add Config Start <<<<
k8s-w1: >>>> Route Add Config End <<<<
==> router: Box 'bento/ubuntu-24.04' could not be found. Attempting to find and install...
router: Box Provider: virtualbox
router: Box Version: 202508.03.0
==> router: Loading metadata for box 'bento/ubuntu-24.04'
router: URL: https://vagrantcloud.com/api/v2/vagrant/bento/ubuntu-24.04
==> router: Adding box 'bento/ubuntu-24.04' (v202508.03.0) for provider: virtualbox (amd64)
==> router: Cloning VM...
==> router: Matching MAC address for NAT networking...
==> router: Checking if box 'bento/ubuntu-24.04' version '202508.03.0' is up to date...
==> router: Setting the name of the VM: router
==> router: Clearing any previously set network interfaces...
==> router: Preparing network interfaces based on configuration...
router: Adapter 1: nat
router: Adapter 2: hostonly
router: Adapter 3: hostonly
==> router: Forwarding ports...
router: 22 (guest) => 60009 (host) (adapter 1)
==> router: Running 'pre-boot' VM customizations...
==> router: Booting VM...
==> router: Waiting for machine to boot. This may take a few minutes...
router: SSH address: 127.0.0.1:60009
router: SSH username: vagrant
router: SSH auth method: private key
router:
router: Vagrant insecure key detected. Vagrant will automatically replace
router: this with a newly generated keypair for better security.
router:
router: Inserting generated public key within guest...
router: Removing insecure key from the guest if it's present...
router: Key inserted! Disconnecting and reconnecting using new SSH key...
==> router: Machine booted and ready!
==> router: Checking for guest additions in VM...
router: The guest additions on this VM do not match the installed version of
router: VirtualBox! In most cases this is fine, but in rare cases it can
router: prevent things such as shared folders from working properly. If you see
router: shared folder errors, please make sure the guest additions within the
router: virtual machine match the version of VirtualBox you have installed on
router: your host and reload your VM.
router:
router: Guest Additions Version: 7.1.12
router: VirtualBox Version: 7.2
==> router: Setting hostname...
==> router: Configuring and enabling network interfaces...
==> router: Running provisioner: shell...
router: Running: /tmp/vagrant-shell20250823-57237-9r906a.sh
router: >>>> Initial Config Start <<<<
router: [TASK 0] Setting eth2
router: [TASK 1] Setting Profile & Bashrc
router: [TASK 2] Disable AppArmor
router: [TASK 3] Add Kernel setting - IP Forwarding
router: [TASK 5] Install Packages
router: [TASK 6] Install Apache
router: >>>> Initial Config End <<<<
2. Check /etc/hosts
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts
Output:
127.0.0.1 localhost
127.0.1.1 vagrant
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.2.1 k8s-ctr k8s-ctr
192.168.10.100 k8s-ctr
192.168.10.200 router
192.168.20.100 k8s-w0
192.168.10.101 k8s-w1
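The format is the standard hosts-file one: an IP followed by one or more names, with `#` starting a comment. A minimal sketch that builds a name → IP table from it (`parse_hosts` is a hypothetical helper; the sample text is trimmed from the output above):

```python
# Sample content trimmed from the lab's /etc/hosts above.
HOSTS = """\
127.0.0.1 localhost
192.168.10.100 k8s-ctr
192.168.10.200 router
192.168.20.100 k8s-w0
192.168.10.101 k8s-w1
"""

def parse_hosts(text: str) -> dict[str, str]:
    """Build a hostname -> IP map from hosts-file text."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:                     # one IP may carry several names
            table[name] = ip
    return table

print(parse_hosts(HOSTS)["k8s-w1"])  # → 192.168.10.101
```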
3. Verify SSH access between nodes
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in k8s-w1 router ; do echo ">> node : $i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$i hostname; echo; done
Output:
>> node : k8s-w1 <<
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
>> node : router <<
Warning: Permanently added 'router' (ED25519) to the list of known hosts.
router
4. Check the Cilium Pod CIDRs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
Output:
"podCIDRs": [
"172.20.0.0/24"
],
--
"podCIDRs": [
"172.20.1.0/24"
],
- k8s-ctr: 172.20.0.0/24
- k8s-w1: 172.20.1.0/24
5. Check the Cilium IPAM mode
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam cluster-pool
ipam-cilium-node-update-rate 15s
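With `ipam=cluster-pool`, the Cilium operator carves each node's Pod CIDR out of a cluster-wide pool — here 172.20.0.0/16 split into /24s, matching the podCIDRs above. A sketch of that carving with Python's `ipaddress` module (sequential first-fit allocation is an assumption for illustration, not Cilium's exact allocator):

```python
import ipaddress

def allocate_pod_cidrs(pool: str, mask: int, nodes: list[str]) -> dict[str, str]:
    """Hand out one per-node CIDR of the given mask from the cluster pool."""
    subnets = ipaddress.ip_network(pool).subnets(new_prefix=mask)
    return {node: str(next(subnets)) for node in nodes}

cidrs = allocate_pod_cidrs("172.20.0.0/16", 24, ["k8s-ctr", "k8s-w1"])
print(cidrs)  # {'k8s-ctr': '172.20.0.0/24', 'k8s-w1': '172.20.1.0/24'}
```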
6. Check the iptables rules
(1) nat table
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
Output:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
(2) filter table
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t filter -S
Output:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N CILIUM_FORWARD
-N CILIUM_INPUT
-N CILIUM_OUTPUT
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xe00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0x400/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
(3) mangle table
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t mangle -S
Output:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_POST_mangle
-N CILIUM_PRE_mangle
-N KUBE-IPTABLES-HINT
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0x99b20200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 45721 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0x99b20200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 45721 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t mangle -S | grep -i proxy
Output:
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0x99b20200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 45721 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0x99b20200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 45721 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
- Matching TCP/UDP traffic is TPROXY-redirected to the cilium-dns-egress proxy
- This ensures Pod → proxy → external service traffic actually passes through the proxy
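The mark tests in these rules follow iptables `--mark value/mask` semantics: a packet matches when its fwmark ANDed with the mask equals the value. A small sketch (the interpretation of the mark constants is inferred from the rules above and is an assumption about Cilium's mark layout):

```python
def mark_matches(mark: int, value: int, mask: int) -> bool:
    """iptables '-m mark --mark value/mask': compare only the masked bits."""
    return (mark & mask) == value

# 0x200 in the 0xf00 nibble is the to-proxy test used by the rules above.
assert mark_matches(0x200, 0x200, 0xf00)
# The TPROXY rules match the full mark 0x99b20200; its low nibble group is
# still 0x200, so the generic 0x200/0xf00 proxy test fires too (the upper
# bytes appear to encode the proxy port 45721 in network byte order).
assert mark_matches(0x99b20200, 0x200, 0xf00)
# The host-origin mark 0xc00 set in CILIUM_OUTPUT does not match.
assert not mark_matches(0xc00, 0x200, 0xf00)
print("mark checks ok")
```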
(4) raw table
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t raw -S
Output:
-P PREROUTING ACCEPT
-P OUTPUT ACCEPT
-N CILIUM_OUTPUT_raw
-N CILIUM_PRE_raw
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t raw -S | grep -i proxy
Output:
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
- conntrack is disabled (NOTRACK) for pod traffic and proxy traffic
- This ensures proxy return traffic and L7 proxy upstream traffic are not broken by kernel conntrack
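The NOTRACK decision these rules implement for pod traffic can be sketched as a simple CIDR-membership check (`should_notrack` is a hypothetical helper; the real rules also key on marks and output interfaces):

```python
import ipaddress

POD_POOL = ipaddress.ip_network("172.20.0.0/16")   # cluster-pool CIDR from above

def should_notrack(src: str, dst: str) -> bool:
    """Mirror the CILIUM_*_raw pod rules: pod-pool traffic bypasses conntrack."""
    return (ipaddress.ip_address(src) in POD_POOL
            or ipaddress.ip_address(dst) in POD_POOL)

assert should_notrack("172.20.0.5", "8.8.8.8")         # pod egress: NOTRACK
assert should_notrack("192.168.10.100", "172.20.1.9")  # to a pod: NOTRACK
assert not should_notrack("10.0.2.15", "1.1.1.1")      # host egress: tracked
print("notrack checks ok")
```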
🚀 Deploy a Sample Application and Verify Connectivity
1. Deploy the sample application
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
# Result
deployment.apps/webpod created
service/webpod created
2. Deploy the curl-pod Pod
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
# Result
pod/curl-pod created
3. Add an iptables DROP rule
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t filter -I OUTPUT 1 -m tcp --proto tcp --dst 1.1.1.1/32 -j DROP
- Inserts a rule at the top of the iptables OUTPUT chain (-I OUTPUT 1) to drop TCP traffic destined for 1.1.1.1
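`-I OUTPUT 1` matters because iptables evaluates a chain top-down and the first matching rule wins, so the DROP must sit ahead of the cilium-feeder jumps. A toy model of that first-match evaluation (the rule tuples and chain-policy fallback are simplifications; a real jump target continues into a sub-chain rather than terminating):

```python
# Each rule: (predicate, target). First match wins; otherwise the chain
# policy applies (the OUTPUT policy is ACCEPT in the listing above).
def evaluate_chain(rules, packet, policy="ACCEPT"):
    for predicate, target in rules:
        if predicate(packet):
            return target
    return policy

output_chain = [
    (lambda p: p["proto"] == "tcp" and p["dst"] == "1.1.1.1", "DROP"),  # -I OUTPUT 1
    (lambda p: True, "CILIUM_OUTPUT"),  # cilium-feeder jump matches everything
]

assert evaluate_chain(output_chain, {"proto": "tcp", "dst": "1.1.1.1"}) == "DROP"
assert evaluate_chain(output_chain, {"proto": "tcp", "dst": "8.8.8.8"}) == "CILIUM_OUTPUT"
print("chain checks ok")
```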
4. Monitor with pwru
(⎈|HomeLab:N/A) root@k8s-ctr:~# pwru 'dst host 1.1.1.1 and tcp and dst port 80'
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl 1.1.1.1 -v
* Trying 1.1.1.1:80...
Monitoring output:
2025/08/23 16:08:26 Attaching kprobes (via kprobe-multi)...
1642 / 1642 [-------------------------------------------------------------------------------------------------------------] 100.00% ? p/s
2025/08/23 16:08:26 Attached (ignored 0)
2025/08/23 16:08:26 Listening for events..
SKB CPU PROCESS NETNS MARK/x IFACE PROTO MTU LEN TUPLE FUNC
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0000 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) ip_local_out
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0000 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) __ip_local_out
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) nf_hook_slow
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) kfree_skb_reason(SKB_DROP_REASON_NETFILTER_DROP)
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) skb_release_head_state
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 0 60 10.0.2.15:60312->1.1.1.1:80(tcp) tcp_wfree
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 0 60 10.0.2.15:60312->1.1.1.1:80(tcp) skb_release_data
0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 0 60 10.0.2.15:60312->1.1.1.1:80(tcp) kfree_skbmem
0xffff9879031f36e8 2 <empty>:0 4026531840 0 0 0x0800 0 60 10.0.2.15:60312->1.1.1.1:80(tcp) __skb_clone
0xffff9879031f36e8 2 <empty>:0 0 0 0 0x0800 0 60 10.0.2.15:60312->1.1.1.1:80(tcp) __copy_skb_header
0xffff9879031f36e8 2 <empty>:0 4026531840 0 0 0x0000 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) ip_local_out
0xffff9879031f36e8 2 <empty>:0 4026531840 0 0 0x0000 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) __ip_local_out
0xffff9879031f36e8 2 <empty>:0 4026531840 0 0 0x0800 1500 60 10.0.2.15:60312->1.1.1.1:80(tcp) nf_hook_slow
- pwru monitors traffic destined for 1.1.1.1:80
- Running curl 1.1.1.1 -v then shows the packet being dropped
- SKB_DROP_REASON_NETFILTER_DROP → the packet was blocked by the iptables DROP rule
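Each pwru event line ends with the traced kernel function, and a drop surfaces as `kfree_skb_reason(REASON)`. A small parsing sketch over one of the lines captured above (`drop_reason` is a hypothetical helper, not part of pwru):

```python
import re

# One event line from the pwru capture above.
LINE = ("0xffff9879031f36e8 2 ~r/bin/curl:7924 4026531840 0 0 0x0800 1500 60 "
        "10.0.2.15:60312->1.1.1.1:80(tcp) kfree_skb_reason(SKB_DROP_REASON_NETFILTER_DROP)")

def drop_reason(line):
    """Return the SKB drop reason if the traced function is kfree_skb_reason."""
    m = re.search(r"kfree_skb_reason\((\w+)\)", line)
    return m.group(1) if m else None

print(drop_reason(LINE))  # → SKB_DROP_REASON_NETFILTER_DROP
```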
Cilium Service Mesh
1. What is a service mesh?
- A service mesh is an infrastructure layer that implements the cross-cutting concern of application networking transparently, outside the application code
- Developers do not have to implement network features (mTLS, routing, load balancing, etc.) themselves; the service mesh handles them on their behalf
2. Cilium-based service mesh
- Cilium is a CNI-based platform that handles L3-L7 networking through eBPF and Envoy
- A per-node Envoy is deployed, providing service-mesh features without sidecars
- The architecture and philosophy resemble Istio Ambient Mesh, but the implementations differ
3. How traffic is handled
(1) L3/L4
- Handled directly by Cilium in eBPF
- Performance benefit: traffic is controlled at the kernel level → lower overhead
- Similar to Istio Ambient Mesh's per-node model, but Istio remains Envoy-centric
(2) L7
- Traffic is forwarded through the Envoy proxy
- Envoy handles routing, filtering, and L7 policy
- The Cilium agent pushes Envoy's configuration dynamically
4. Comparison with Istio Ambient Mesh
(1) Similarities
- Per-node Envoy model
- Sidecar-less service mesh
(2) Differences
- Istio: L3, L4, and L7 are all handled by Envoy
- Cilium: L3/L4 are handled in eBPF; only L7 goes through Envoy
- Cilium's kernel-level (eBPF) processing yields better performance and resource efficiency
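The L3/L4-in-eBPF versus L7-in-Envoy split can be sketched as a per-flow dispatch decision (a toy model, not Cilium's actual datapath code; the flow dict keys are invented for illustration):

```python
def datapath_for(flow: dict) -> str:
    """Toy dispatch: flows needing L7 treatment detour through Envoy,
    everything else stays on the kernel-level eBPF fast path."""
    if flow.get("l7_policy") or flow.get("ingress_route"):
        return "envoy"   # userspace proxy: routing, filtering, L7 policy
    return "ebpf"        # L3/L4 handled entirely in the kernel

assert datapath_for({"dst_port": 80, "l7_policy": True}) == "envoy"
assert datapath_for({"dst_port": 443}) == "ebpf"
print("dispatch checks ok")
```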
📌 K8S Ingress Support
1. Prerequisites
- NodePort must be enabled (nodePort.enabled=true)
- or kube-proxy replacement mode must be used (kubeProxyReplacement=true)
- The L7 proxy must be enabled: l7Proxy=true (on by default)
2. How Ingress works
- Cilium Ingress / Gateway API can be exposed as a LoadBalancer or NodePort Service
- Host-network exposure is also supported as an option
- Traffic flow
  - Traffic entering the Service port is intercepted by Cilium's eBPF code
  - It is then handed to the Envoy proxy in userspace, using the kernel's TPROXY feature
  - Envoy processes the traffic while preserving the original destination IP/port and source IP/port
3. Traffic data flow
4. Ingress caveats
- Cilium Ingress and the Cilium Gateway API cannot be enabled at the same time
- Other Ingress controllers (e.g. the NGINX Ingress Controller) can still be used alongside
5. Check the L7 load-balancing algorithm
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -E '^loadbalancer|l7'
Output:
enable-l7-proxy true
loadbalancer-l7 envoy
loadbalancer-l7-algorithm round_robin
loadbalancer-l7-ports
- Cilium load-balances L7 traffic through Envoy
- The current configuration shows round_robin in use
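round_robin simply hands successive requests to successive backends. A minimal sketch of the policy (the backend addresses are made up; this is not Envoy's implementation):

```python
import itertools

class RoundRobin:
    """Minimal round-robin picker, the policy loadbalancer-l7-algorithm selects."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

lb = RoundRobin(["172.20.0.10:80", "172.20.1.20:80"])
picks = [lb.pick() for _ in range(4)]
print(picks)  # → ['172.20.0.10:80', '172.20.1.20:80', '172.20.0.10:80', '172.20.1.20:80']
```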
6. Check the Ingress-reserved IPs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium ip list | grep ingress
Output:
172.20.0.86/32 reserved:ingress
172.20.1.225/32 reserved:ingress
- A reserved IP is allocated on each node for handling Ingress traffic
- These IPs are reserved internally so that Envoy on each node can process Ingress traffic
7. Check the cilium-envoy DaemonSet
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ds -n kube-system cilium-envoy -owide
Output:
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
cilium-envoy 2 2 2 2 2 kubernetes.io/os=linux 137m cilium-envoy quay.io/cilium/cilium-envoy:v1.34.4-1754895458-68cffdfa568b6b226d70a7ef81fc65dda3b890bf@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2 k8s-app=cilium-envoy
- cilium-envoy is deployed to every node as a DaemonSet
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=cilium-envoy -owide
Output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-envoy-hwwwn 1/1 Running 0 137m 192.168.10.100 k8s-ctr <none> <none>
cilium-envoy-jmvfl 1/1 Running 0 135m 192.168.10.101 k8s-w1 <none> <none>
- One Pod runs on each of the control-plane (k8s-ctr) and worker (k8s-w1) nodes
8. Inspect the cilium-envoy Pods
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system -l k8s-app=cilium-envoy
Output:
...
Containers:
  cilium-envoy:
    Container ID:   containerd://f0f15510265c3662b0381b0a6a93575bbc7ea6b819720763108942604d4a906f
    Image:          quay.io/cilium/cilium-envoy:v1.34.4-1754895458-68cffdfa568b6b226d70a7ef81fc65dda3b890bf@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2
    Image ID:       quay.io/cilium/cilium-envoy@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2
    Port:           9964/TCP
    Host Port:      9964/TCP
    Command:
      /usr/bin/cilium-envoy-starter
    Args:
      --
      -c /var/run/cilium/envoy/bootstrap-config.json
      --base-id 0
      --log-level info
    State:          Running
      Started:      Sat, 23 Aug 2025 14:32:49 +0900
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://127.0.0.1:9878/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
    Readiness:      http-get http://127.0.0.1:9878/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
    Startup:        http-get http://127.0.0.1:9878/healthz delay=5s timeout=1s period=2s #success=1 #failure=105
    Environment:
      K8S_NODE_NAME:             (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:      kube-system (v1:metadata.namespace)
      KUBERNETES_SERVICE_HOST:   192.168.10.100
      KUBERNETES_SERVICE_PORT:   6443
    Mounts:
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium/envoy/ from envoy-config (ro)
      /var/run/cilium/envoy/artifacts from envoy-artifacts (ro)
      /var/run/cilium/envoy/sockets from envoy-sockets (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hpv9k (ro)
...
Volumes:
  envoy-sockets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium/envoy/sockets
    HostPathType:  DirectoryOrCreate
  envoy-artifacts:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium/envoy/artifacts
    HostPathType:  DirectoryOrCreate
  envoy-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cilium-envoy-config
    Optional:  false
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate
  kube-api-access-hpv9k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
- The Envoy listener runs on port 9964/TCP
- The BPF maps (/sys/fs/bpf) and the Envoy configuration (/var/run/cilium/envoy/) are mounted via HostPath
9. Check the Envoy sockets and config files
(⎈|HomeLab:N/A) root@k8s-ctr:~# ls -al /var/run/cilium/envoy/sockets
Output:
total 0
drwxr-xr-x 3 root root 120 Aug 23 14:33 .
drwxr-xr-x 4 root root 80 Aug 23 14:32 ..
srw-rw---- 1 root 1337 0 Aug 23 14:33 access_log.sock
srwxr-xr-x 1 root root 0 Aug 23 14:32 admin.sock
drwxr-xr-x 3 root root 60 Aug 23 14:33 envoy
srw-rw---- 1 root 1337 0 Aug 23 14:33 xds.sock
- Unix sockets exist under /var/run/cilium/envoy/sockets: admin.sock, access_log.sock, xds.sock, etc.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium-envoy -- ls -al /var/run/cilium/envoy
Output:
total 12
drwxrwxrwx 5 root root 4096 Aug 23 05:32 .
drwxr-xr-x 3 root root 4096 Aug 23 05:32 ..
drwxr-xr-x 2 root root 4096 Aug 23 05:32 ..2025_08_23_05_32_12.382647784
lrwxrwxrwx 1 root root 31 Aug 23 05:32 ..data -> ..2025_08_23_05_32_12.382647784
drwxr-xr-x 2 root root 40 Aug 23 05:32 artifacts
lrwxrwxrwx 1 root root 28 Aug 23 05:32 bootstrap-config.json -> ..data/bootstrap-config.json
drwxr-xr-x 3 root root 120 Aug 23 05:33 sockets
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get configmap cilium-envoy-config -o json \
| jq -r '.data["bootstrap-config.json"]' \
| jq .
Output:
| {
"admin": {
"address": {
"pipe": {
"path": "/var/run/cilium/envoy/sockets/admin.sock"
}
}
},
...
"listeners": [
{
"address": {
"socketAddress": {
"address": "0.0.0.0",
"portValue": 9964
}
},
...
|
- bootstrap-config.json is loaded from the cilium-envoy-config ConfigMap
- Listener port: 9964
- Admin socket: /var/run/cilium/envoy/sockets/admin.sock
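The bootstrap config is plain JSON, so the admin socket path and listener ports can be pulled out with a few lookups. A sketch against a trimmed-down copy of the fields shown above:

```python
import json

# Trimmed-down bootstrap-config.json, mirroring only the fields shown above.
BOOTSTRAP = json.loads("""
{
  "admin": {"address": {"pipe": {"path": "/var/run/cilium/envoy/sockets/admin.sock"}}},
  "listeners": [
    {"address": {"socketAddress": {"address": "0.0.0.0", "portValue": 9964}}}
  ]
}
""")

admin_sock = BOOTSTRAP["admin"]["address"]["pipe"]["path"]
ports = [l["address"]["socketAddress"]["portValue"] for l in BOOTSTRAP["listeners"]]
print(admin_sock, ports)  # → /var/run/cilium/envoy/sockets/admin.sock [9964]
```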
10. Check the BPF maps and eBPF programs
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /sys/fs/bpf
Output:
/sys/fs/bpf
├── cilium
│   ├── devices
│   │   ├── cilium_host
│   │   │   └── links
│   │   │       ├── cil_from_host
│   │   │       └── cil_to_host
│   │   ├── cilium_net
│   │   │   └── links
│   │   │       └── cil_to_host
│   │   ├── eth0
│   │   │   └── links
│   │   │       ├── cil_from_netdev
│   │   │       └── cil_to_netdev
│   │   └── eth1
│   │       └── links
│   │           ├── cil_from_netdev
│   │           └── cil_to_netdev
│   ├── endpoints
│   │   ├── 1161
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 1294
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 1308
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 1347
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 1449
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 2479
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   ├── 3045
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   └── 83
│   │       └── links
│   │           ├── cil_from_container
│   │           └── cil_to_container
│   └── socketlb
│       └── links
│           └── cgroup
│               ├── cil_sock4_connect
│               ├── cil_sock4_getpeername
│               ├── cil_sock4_post_bind
│               ├── cil_sock4_recvmsg
│               ├── cil_sock4_sendmsg
│               ├── cil_sock6_connect
│               ├── cil_sock6_getpeername
│               ├── cil_sock6_post_bind
│               ├── cil_sock6_recvmsg
│               ├── cil_sock6_sendmsg
│               └── cil_sock_release
└── tc
    └── globals
        ├── cilium_auth_map
        ├── cilium_call_policy
        ├── cilium_calls_00083
        ├── cilium_calls_01161
        ├── cilium_calls_01294
        ├── cilium_calls_01308
        ├── cilium_calls_01347
        ├── cilium_calls_01449
        ├── cilium_calls_02479
        ├── cilium_calls_03045
        ├── cilium_calls_hostns_00637
        ├── cilium_calls_netdev_00002
        ├── cilium_calls_netdev_00003
        ├── cilium_calls_netdev_00004
        ├── cilium_ct4_global
        ├── cilium_ct_any4_global
        ├── cilium_egresscall_policy
        ├── cilium_events
        ├── cilium_ipcache_v2
        ├── cilium_ipv4_frag_datagrams
        ├── cilium_l2_responder_v4
        ├── cilium_lb4_affinity
        ├── cilium_lb4_backends_v3
        ├── cilium_lb4_reverse_nat
        ├── cilium_lb4_reverse_sk
        ├── cilium_lb4_services_v2
        ├── cilium_lb4_source_range
        ├── cilium_lb_affinity_match
        ├── cilium_lxc
        ├── cilium_metrics
        ├── cilium_node_map_v2
        ├── cilium_nodeport_neigh4
        ├── cilium_policystats
        ├── cilium_policy_v2_00083
        ├── cilium_policy_v2_00637
        ├── cilium_policy_v2_01161
        ├── cilium_policy_v2_01294
        ├── cilium_policy_v2_01308
        ├── cilium_policy_v2_01347
        ├── cilium_policy_v2_01449
        ├── cilium_policy_v2_02479
        ├── cilium_policy_v2_03045
        ├── cilium_ratelimit
        ├── cilium_ratelimit_metrics
        ├── cilium_runtime_config
        ├── cilium_signals
        ├── cilium_skip_lb4
        ├── cilium_snat_v4_alloc_retries
        └── cilium_snat_v4_external
33 directories, 83 files
- Various BPF objects are pinned under the /sys/fs/bpf/cilium directory: devices, endpoints, socketlb, tc/globals, etc.
- Cilium uses eBPF for socket-level load balancing, packet processing path control, and policy enforcement
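Note the naming link between the two subtrees above: each ID under cilium/endpoints reappears zero-padded to five digits in the tc/globals map names. A small sketch of that convention (the helper names here are illustrative, not Cilium APIs):

```shell
# Map an endpoint ID (a directory under /sys/fs/bpf/cilium/endpoints)
# to the per-endpoint map names pinned under /sys/fs/bpf/tc/globals.
calls_map_for()  { printf 'cilium_calls_%05d' "$1"; }
policy_map_for() { printf 'cilium_policy_v2_%05d' "$1"; }

for EP_ID in 83 1161 3045; do
  echo "$EP_ID -> $(calls_map_for "$EP_ID"), $(policy_map_for "$EP_ID")"
done
```

This is why endpoint 83 shows up in the listing as both endpoints/83 and tc/globals/cilium_calls_00083 / cilium_policy_v2_00083.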
11. Check the cilium-envoy service and endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system cilium-envoy
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cilium-envoy ClusterIP None <none> 9964/TCP 145m
NAME ENDPOINTS AGE
endpoints/cilium-envoy 192.168.10.100:9964,192.168.10.101:9964 145m
- The cilium-envoy service (ClusterIP) exists
- It maps to port 9964 of the per-node Envoy instances
- Endpoints: 192.168.10.100:9964, 192.168.10.101:9964
12. Check the cilium-ingress service
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system cilium-ingress
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cilium-ingress LoadBalancer 10.96.199.58 <pending> 80:31809/TCP,443:30358/TCP 145m
NAME ENDPOINTS AGE
endpoints/cilium-ingress 192.192.192.192:9999 145m
- The cilium-ingress service is created as type LoadBalancer
- ClusterIP: 10.96.199.58
- External-IP: <pending> (LB IPAM not configured yet)
- Ports: 80, 443
- The Endpoints are assigned an internal logical IP (192.192.192.192:9999)
- This acts as a logical endpoint through which every node can receive Ingress traffic
⚙️ Verify after configuring LB IPAM: CiliumL2AnnouncementPolicy
1. Check the L2 announcement setting
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep l2
✅ Output
enable-l2-announcements true
enable-l2-neigh-discovery false
2. Create a LoadBalancer IP pool
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
name: "cilium-lb-ippool"
spec:
blocks:
- start: "192.168.10.211"
stop: "192.168.10.215"
EOF
# Result
ciliumloadbalancerippool.cilium.io/cilium-lb-ippool created
- The CiliumLoadBalancerIPPool resource allocates the range 192.168.10.211 ~ 192.168.10.215 as an IP pool
3. Check the allocated LB External IP
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
✅ Output
NAME               DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-lb-ippool false False 4 16s
- 1 IP (192.168.10.211) out of the 5-address pool is already in use, leaving 4 available
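The "in use" count follows from simple arithmetic on the pool block; a minimal pure-shell sketch using the block boundaries from the manifest above and the IPS AVAILABLE column:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

START=192.168.10.211   # block start from the CiliumLoadBalancerIPPool
STOP=192.168.10.215    # block stop
AVAILABLE=4            # "IPS AVAILABLE" column from kubectl get ippool

TOTAL=$(( $(ip_to_int "$STOP") - $(ip_to_int "$START") + 1 ))
IN_USE=$(( TOTAL - AVAILABLE ))
echo "pool size: $TOTAL, in use: $IN_USE"
```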
(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc -A
✅ Output
NAMESPACE     NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 153m
default webpod ClusterIP 10.96.10.7 <none> 80/TCP 59m
kube-system cilium-envoy ClusterIP None <none> 9964/TCP 153m
kube-system cilium-ingress LoadBalancer 10.96.199.58 192.168.10.211 80:31809/TCP,443:30358/TCP 153m
kube-system hubble-metrics ClusterIP None <none> 9965/TCP 153m
kube-system hubble-peer ClusterIP 10.96.28.176 <none> 443/TCP 153m
kube-system hubble-relay ClusterIP 10.96.83.176 <none> 80/TCP 153m
kube-system hubble-ui NodePort 10.96.222.67 <none> 80:30003/TCP 153m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 153m
kube-system metrics-server ClusterIP 10.96.80.249 <none> 443/TCP 153m
- 192.168.10.211 is automatically assigned as the External IP of the cilium-ingress service
- Service type: LoadBalancer, with ports 80 and 443 open
4. Configure a Cilium L2 Announcement Policy
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
name: policy1
spec:
interfaces:
- eth1
externalIPs: true
loadBalancerIPs: true
EOF
# Result
ciliuml2announcementpolicy.cilium.io/policy1 created
- The CiliumL2AnnouncementPolicy resource enables L2 (ARP) announcements on eth1
- The LoadBalancer IP becomes reachable from outside the cluster
5. Check the L2 announcement leader node
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep "cilium-l2announce"
✅ Output
cilium-l2announce-kube-system-cilium-ingress   k8s-w1
- Through leader election (the Lease mechanism), a single node announces the LB IP
- Current leader: k8s-w1
- Traffic arriving at the LB IP from outside is therefore delivered to the worker node (k8s-w1) by default
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease/cilium-l2announce-kube-system-cilium-ingress -o yaml | yq
✅ Output
{
"apiVersion": "coordination.k8s.io/v1",
"kind": "Lease",
"metadata": {
"creationTimestamp": "2025-08-23T08:06:52Z",
"name": "cilium-l2announce-kube-system-cilium-ingress",
"namespace": "kube-system",
"resourceVersion": "12038",
"uid": "9bd1ef62-bbb3-4dfe-b7f1-f8108cac6028"
},
"spec": {
"acquireTime": "2025-08-23T08:06:52.738938Z",
"holderIdentity": "k8s-w1",
"leaseDurationSeconds": 15,
"leaseTransitions": 0,
"renewTime": "2025-08-23T08:09:19.294357Z"
}
}
6. Check the LB External IP
(⎈|HomeLab:N/A) root@k8s-ctr:~# LBIP=$(kubectl get svc -n kube-system cilium-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LBIP
# Result
192.168.10.211
7. In-cluster access test (arping)
(⎈|HomeLab:N/A) root@k8s-ctr:~# arping -i eth1 $LBIP -c 2
✅ Output
ARPING 192.168.10.211
60 bytes from 08:00:27:92:a6:9d (192.168.10.211): index=0 time=499.579 usec
60 bytes from 08:00:27:92:a6:9d (192.168.10.211): index=1 time=586.194 usec
--- 192.168.10.211 statistics ---
2 packets transmitted, 2 packets received, 0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.500/0.543/0.586/0.043 ms
- An ARP request from the control plane node (k8s-ctr) to the LB IP receives a normal reply
8. External access test (router node)
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router sudo arping -i eth1 $LBIP -c 2
✅ Output
ARPING 192.168.10.211
60 bytes from 08:00:27:92:a6:9d (192.168.10.211): index=0 time=227.688 usec
60 bytes from 08:00:27:92:a6:9d (192.168.10.211): index=1 time=686.348 usec
--- 192.168.10.211 statistics ---
2 packets transmitted, 2 packets received, 0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.228/0.457/0.686/0.229 ms
- An ARP request from the external node (router) to the LB IP also receives a normal reply
- In other words, the LB External IP (192.168.10.211) is reachable from outside the cluster
🌐 Ingress HTTP Example: XFF check
1. Deploy the Bookinfo sample application
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo.yaml
# Result
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
- The details, ratings, reviews (v1~v3), and productpage services and pods are created
2. Check service and pod status
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod,svc,ep
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY STATUS RESTARTS AGE
pod/curl-pod 1/1 Running 0 71m
pod/details-v1-766844796b-26zq8 1/1 Running 0 64s
pod/productpage-v1-54bb874995-58dcm 1/1 Running 0 64s
pod/ratings-v1-5dc79b6bcd-4ngj8 1/1 Running 0 64s
pod/reviews-v1-598b896c9d-jn7cn 1/1 Running 0 64s
pod/reviews-v2-556d6457d-nhshc 1/1 Running 0 64s
pod/reviews-v3-564544b4d6-hztdc 1/1 Running 0 64s
pod/webpod-697b545f57-8qdms 1/1 Running 0 72m
pod/webpod-697b545f57-cscj8 1/1 Running 0 72m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/details ClusterIP 10.96.86.35 <none> 9080/TCP 64s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 166m
service/productpage ClusterIP 10.96.6.128 <none> 9080/TCP 64s
service/ratings ClusterIP 10.96.195.236 <none> 9080/TCP 64s
service/reviews ClusterIP 10.96.172.158 <none> 9080/TCP 64s
service/webpod ClusterIP 10.96.10.7 <none> 80/TCP 72m
NAME ENDPOINTS AGE
endpoints/details 172.20.1.43:9080 64s
endpoints/kubernetes 192.168.10.100:6443 166m
endpoints/productpage 172.20.1.147:9080 64s
endpoints/ratings 172.20.1.210:9080 64s
endpoints/reviews 172.20.1.176:9080,172.20.1.178:9080,172.20.1.239:9080 64s
endpoints/webpod 172.20.0.57:80,172.20.1.146:80 72m
- Unlike Istio's sidecar model, Cilium operates with a per-node Envoy and no sidecars, a structure similar to Ambient Mesh
3. Check the Cilium IngressClass
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingressclasses.networking.k8s.io
✅ Output
NAME     CONTROLLER                     PARAMETERS   AGE
cilium cilium.io/ingress-controller <none> 167m
4. Create the Ingress resource
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: basic-ingress
namespace: default
spec:
ingressClassName: cilium
rules:
- http:
paths:
- backend:
service:
name: details
port:
number: 9080
path: /details
pathType: Prefix
- backend:
service:
name: productpage
port:
number: 9080
path: /
pathType: Prefix
EOF
# Result
ingress.networking.k8s.io/basic-ingress created
- Path-based routing rules are configured:
  - /details → details:9080
  - / (default) → productpage:9080
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -n kube-system cilium-ingress
✅ Output
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
cilium-ingress LoadBalancer 10.96.199.58 192.168.10.211 80:31809/TCP,443:30358/TCP 171m
- The Ingress is bound to the cilium-ingress LoadBalancer service (192.168.10.211)
5. Check the Ingress service and resources
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingress
✅ Output
NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
basic-ingress cilium * 192.168.10.211 80 2m16s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe ingress
✅ Output
Name:             basic-ingress
Labels: <none>
Namespace: default
Address: 192.168.10.211
Ingress Class: cilium
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/details details:9080 (172.20.1.43:9080)
/ productpage:9080 (172.20.1.147:9080)
Annotations: <none>
Events: <none>
- Requests to /details are forwarded to the details-v1 pod; requests to / are forwarded to the productpage-v1 pod
6. Ingress access test
(1) Check the LB IP
(⎈|HomeLab:N/A) root@k8s-ctr:~# LBIP=$(kubectl get svc -n kube-system cilium-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LBIP
# Result
192.168.10.211
(2) Request /
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w "%{http_code}\n" http://$LBIP/
200
- 200 OK (productpage service)
(3) Request /details/1
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w "%{http_code}\n" http://$LBIP/details/1
200
- 200 OK (details service)
(4) Request /ratings
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w "%{http_code}\n" http://$LBIP/ratings
404
- 404 Not Found (no matching path)
7. Hubble L7 monitoring
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
hubble observe -f -t l7
(1) Request / and /details/1
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w "%{http_code}\n" http://$LBIP/
curl -so /dev/null -w "%{http_code}\n" http://$LBIP/details/1
✅ Monitoring result
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
hubble observe -f -t l7
[1] 9942
ℹ️  Hubble Relay is available at 127.0.0.1:4245
Aug 23 08:43:53.904: 127.0.0.1:58506 (ingress) -> default/productpage-v1-54bb874995-58dcm:9080 (ID:8100) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/)
Aug 23 08:43:53.909: 127.0.0.1:58506 (ingress) <- default/productpage-v1-54bb874995-58dcm:9080 (ID:8100) http-response FORWARDED (HTTP/1.1 200 6ms (GET http://192.168.10.211/))
Aug 23 08:43:53.929: 127.0.0.1:58518 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 08:43:53.931: 127.0.0.1:58518 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) http-response FORWARDED (HTTP/1.1 200 3ms (GET http://192.168.10.211/details/1))
- The / request is forwarded to productpage-v1 and returns 200; the /details/1 request is forwarded to details-v1 and returns 200
(2) Request /ratings
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w "%{http_code}\n" http://$LBIP/ratings
✅ Monitoring result
Aug 23 08:45:30.871: 127.0.0.1:35458 (ingress) -> default/productpage-v1-54bb874995-58dcm:9080 (ID:8100) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/ratings)
Aug 23 08:45:30.874: 127.0.0.1:35458 (ingress) <- default/productpage-v1-54bb874995-58dcm:9080 (ID:8100) http-response FORWARDED (HTTP/1.1 404 4ms (GET http://192.168.10.211/ratings))
- The /ratings request is forwarded to productpage-v1 and returns 404
(3) Check in the Hubble UI
8. Confirm the Envoy proxy in the response headers
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -s -v http://$LBIP/
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET / HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: envoy
< date: Sat, 23 Aug 2025 08:50:24 GMT
< content-type: text/html; charset=utf-8
< content-length: 2080
< x-envoy-upstream-service-time: 6
<
...
- The response headers include server: envoy and x-envoy-upstream-service-time
- This means the request was processed via the Cilium Envoy proxy
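The same check can be scripted; a minimal sketch that scans captured response headers for the Envoy markers (the header sample below is abbreviated from the curl output above):

```shell
# Abbreviated response headers from the curl -v output above.
HEADERS='HTTP/1.1 200 OK
server: envoy
date: Sat, 23 Aug 2025 08:50:24 GMT
content-type: text/html; charset=utf-8
x-envoy-upstream-service-time: 6'

# The request went through Envoy if the server header says so
# or any x-envoy-* header is present (case-insensitive).
if printf '%s\n' "$HEADERS" | grep -qiE '^(server: *envoy|x-envoy-)'; then
  VIA_ENVOY=yes
else
  VIA_ENVOY=no
fi
echo "via envoy: $VIA_ENVOY"
```

On the live cluster, `curl -sI http://$LBIP/` would feed the same check with real headers.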
9. Check the productpage pod location
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -l app=productpage -owide
✅ Output
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
productpage-v1-54bb874995-58dcm 1/1 Running 0 36m 172.20.1.147 k8s-w1 <none> <none>
- productpage-v1 pod IP: 172.20.1.147
- Scheduled node: k8s-w1
- The veth interface therefore needs to be checked on the k8s-w1 node
10. Check the veth interface and routing information
vagrant ssh k8s-w1
root@k8s-w1:~# ip -c route
✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
1.214.68.2 via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
61.41.153.2 via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
172.20.0.0/24 via 192.168.10.100 dev eth1 proto kernel
172.20.1.43 dev lxc18af75bb5442 proto kernel scope link
172.20.1.146 dev lxc31dabfbe894f proto kernel scope link
172.20.1.147 dev lxcc960423e84e9 proto kernel scope link
172.20.1.176 dev lxc53d718f372d3 proto kernel scope link
172.20.1.178 dev lxc298e2d514c9d proto kernel scope link
172.20.1.210 dev lxc7edf0846d346 proto kernel scope link
172.20.1.239 dev lxc0214606ce0ed proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
- The ip -c route output shows the 172.20.1.147 pod uses the veth interface lxcc960423e84e9
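The pod-IP-to-veth lookup can be scripted by matching the pod's host-scope /32 route; a minimal sketch, shown here against a captured sample of the route output above (on a live node, `ROUTES=$(ip route)` would feed it the real table):

```shell
# Sample of the per-pod /32 routes from the k8s-w1 routing table above.
ROUTES='172.20.1.43 dev lxc18af75bb5442 proto kernel scope link
172.20.1.146 dev lxc31dabfbe894f proto kernel scope link
172.20.1.147 dev lxcc960423e84e9 proto kernel scope link'
# On a live node: ROUTES=$(ip route)

POD_IP=172.20.1.147
# The device after "dev" on the pod's route is its veth interface.
VETH=$(printf '%s\n' "$ROUTES" | awk -v ip="$POD_IP" '$1 == ip { print $3 }')
echo "veth for $POD_IP: $VETH"
```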
11. Prepare HTTP packet capture with ngrep
root@k8s-w1:~# PROID=172.20.1.147
root@k8s-w1:~# PROVETH=lxcc960423e84e9
root@k8s-w1:~# ngrep -tW byline -d $PROVETH '' 'tcp port 9080'
# monitoring, waiting for packets..
lxcc960423e84e9: no IPv4 address assigned: Cannot assign requested address
interface: lxcc960423e84e9
filter: ( tcp port 9080 ) and ((ip || ip6) || (vlan && (ip || ip6)))
- Monitor port 9080 traffic on the veth (lxcc960423e84e9) used by the productpage pod
12. Packet monitoring result
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -s http://$LBIP
✅ Monitoring result
root@k8s-w1:~# ngrep -tW byline -d $PROVETH '' 'tcp port 9080'
lxcc960423e84e9: no IPv4 address assigned: Cannot assign requested address
interface: lxcc960423e84e9
filter: ( tcp port 9080 ) and ((ip || ip6) || (vlan && (ip || ip6)))
###
T 2025/08/23 18:00:13.976515 172.20.1.147:9080 -> 172.20.0.86:41677 [AP] #3
HTTP/1.1 200 OK.
Server: gunicorn.
Date: Sat, 23 Aug 2025 09:00:13 GMT.
Connection: keep-alive.
Content-Type: text/html; charset=utf-8.
Content-Length: 2080.
.
#
T 2025/08/23 18:00:13.976588 172.20.1.147:9080 -> 172.20.0.86:41677 [AP] #4
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
table {
color: #333;
background: white;
border: 1px solid grey;
font-size: 12pt;
border-collapse: collapse;
width: 100%;
}
table thead th,
table tfoot th {
color: #fff;
background: #466BB0;
}
table caption {
padding: .5em;
}
table th,
table td {
padding: .5em;
border: 1px solid lightgrey;
}
</style>
<script src="static/tailwind/tailwind.css"></script>
<div class="mx-auto px-4 sm:px-6 lg:px-8">
<div class="flex flex-col space-y-5 py-32 mx-auto max-w-7xl">
<h3 class="text-2xl">Hello! This is a simple bookstore application consisting of three services as shown below
</h3>
<table class="table table-condensed table-bordered table-hover"><tr><th>name</th><td>http://details:9080</td></tr><tr><th>endpoint</th><td>details</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><thead><tr><th>name</th><th>endpoint</th><th>children</th></tr></thead><tbody><tr><td>http://details:9080</td><td>details</td><td></td></tr><tr><td>http://reviews:9080</td><td>reviews</td><td><table class="table table-condensed table-bordered table-hover"><thead><tr><th>name</th><th>endpoint</th><th>children</th></tr></thead><tbody><tr><td>http://ratings:9080</td><td>ratings</td><td></td></tr></tbody></table></td></tr></tbody></table></td></tr></table>
<p>
Click on one of the links below to auto generate a request to the backend as a real user or a tester
</p>
<ul>
<li>
<a href="/productpage?u=normal" class="text-blue-500 hover:text-blue-600">Normal user</a>
</li>
<li>
<a href="/productpage?u=test" class="text-blue-500 hover:text-blue-600">Test user</a>
</li>
</ul>
</div>
</div>
##
📦 Install and configure Ingress-Nginx: verify coexistence with Cilium ingress
1. Add the Ingress-Nginx Helm repo
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Result
"ingress-nginx" has been added to your repositories
2. Install the Ingress-Nginx controller
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace -n ingress-nginx
# Result
NAME: ingress-nginx
LAST DEPLOYED: Sat Aug 23 20:54:59 2025
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: foo
spec:
ingressClassName: nginx
rules:
- host: www.example.com
http:
paths:
- pathType: Prefix
backend:
service:
name: exampleService
port:
number: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
3. Check the IngressClasses (Cilium + Nginx coexistence)
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingressclasses.networking.k8s.io
✅ Output
NAME     CONTROLLER                     PARAMETERS   AGE
cilium cilium.io/ingress-controller <none> 6h24m
nginx k8s.io/ingress-nginx <none> 85s
- Both IngressClasses exist at the same time, confirming they can coexist
4. Check the Ingress-Nginx service
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -n ingress-nginx
✅ Output
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller LoadBalancer 10.96.254.255 192.168.10.212 80:31190/TCP,443:30278/TCP 2m5s
ingress-nginx-controller-admission ClusterIP 10.96.103.242 <none> 443/TCP 2m5s
- The ingress-nginx-controller service is created as type LoadBalancer
- EXTERNAL-IP: 192.168.10.212, ports 80 and 443
5. Check the Ingress-Nginx pod status
(⎈|HomeLab:N/A) root@k8s-ctr:~# k get pod -n ingress-nginx
✅ Output
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-67bbdf7d8d-cwkzh 1/1 Running 0 2m37s
6. Create an Nginx Ingress resource targeting webpod
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webpod-ingress-nginx
namespace: default
spec:
ingressClassName: nginx
rules:
- host: nginx.webpod.local
http:
paths:
- backend:
service:
name: webpod
port:
number: 80
path: /
pathType: Prefix
EOF
# Result
ingress.networking.k8s.io/webpod-ingress-nginx created
- The Ingress is created with ingressClassName: nginx
- Requests for the nginx.webpod.local host are forwarded to the webpod service
7. Check the Ingress resources
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingress -w
✅ Output
NAME                   CLASS    HOSTS                ADDRESS          PORTS   AGE
basic-ingress cilium * 192.168.10.211 80 3h38m
webpod-ingress-nginx nginx nginx.webpod.local 192.168.10.212 80 31s
- basic-ingress → Cilium Ingress (192.168.10.211)
- webpod-ingress-nginx → Nginx Ingress (192.168.10.212)
8. Nginx Ingress access test
(⎈|HomeLab:N/A) root@k8s-ctr:~# LB2IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl $LB2IP
curl -H "Host: nginx.webpod.local" $LB2IP
✅ Output
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
Hostname: webpod-697b545f57-8qdms
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.57
IP: fe80::2421:36ff:fe60:fa6c
RemoteAddr: 172.20.1.96:60838
GET / HTTP/1.1
Host: nginx.webpod.local
User-Agent: curl/8.5.0
Accept: */*
X-Forwarded-For: 192.168.10.100
X-Forwarded-Host: nginx.webpod.local
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.10.100
X-Request-Id: 750fe9441ca3603d7eebca1fa6adaf42
X-Scheme: http
- The webpod application responds normally
- The request headers include X-Forwarded-For, X-Real-Ip, X-Request-Id, etc.
- The Nginx Ingress correctly forwards the client IP and proxy information headers
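Pulling the original client IP out of such a response is a one-liner; a minimal sketch against an abbreviated sample of the webpod echo above (X-Forwarded-For can hold a comma-separated chain, so the first entry is taken):

```shell
# Abbreviated webpod echo output from the test above.
RESPONSE='RemoteAddr: 172.20.1.96:60838
GET / HTTP/1.1
Host: nginx.webpod.local
X-Forwarded-For: 192.168.10.100
X-Real-Ip: 192.168.10.100'

# The original client IP is the first entry of X-Forwarded-For.
CLIENT_IP=$(printf '%s\n' "$RESPONSE" | awk -F': ' '/^X-Forwarded-For:/ { split($2, a, ", "); print a[1] }')
echo "client ip: $CLIENT_IP"
```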
💡 Dedicated mode
1. Confirm the limits of shared mode
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingress
✅ Output
NAME                   CLASS    HOSTS                ADDRESS          PORTS   AGE
basic-ingress cilium * 192.168.10.211 80 3h42m
webpod-ingress-nginx nginx nginx.webpod.local 192.168.10.212 80 4m52s
(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc -n kube-system
✅ Output
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
cilium-envoy ClusterIP None <none> 9964/TCP 6h32m
cilium-ingress LoadBalancer 10.96.199.58 192.168.10.211 80:31809/TCP,443:30358/TCP 6h32m
hubble-metrics ClusterIP None <none> 9965/TCP 6h32m
hubble-peer ClusterIP 10.96.28.176 <none> 443/TCP 6h32m
hubble-relay ClusterIP 10.96.83.176 <none> 80/TCP 6h32m
hubble-ui NodePort 10.96.222.67 <none> 80:30003/TCP 6h32m
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6h32m
metrics-server ClusterIP 10.96.80.249 <none> 443/TCP 6h32m
- By default, Cilium Ingress operates in shared mode
- Every Ingress resource shares the same IP (192.168.10.211)
- Even as resources grow, the same LoadBalancer IP is used, so per-Ingress IP separation is impossible
2. Create a dedicated mode Ingress
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webpod-ingress
namespace: default
annotations:
ingress.cilium.io/loadbalancer-mode: dedicated
spec:
ingressClassName: cilium
rules:
- http:
paths:
- backend:
service:
name: webpod
port:
number: 80
path: /
pathType: Prefix
EOF
# Result
ingress.networking.k8s.io/webpod-ingress created
- The ingress.cilium.io/loadbalancer-mode: dedicated annotation is added
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe ingress webpod-ingress
kubectl get ingress
✅ Output
Name:             webpod-ingress
Labels: <none>
Namespace: default
Address: 192.168.10.213
Ingress Class: cilium
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ webpod:80 (172.20.0.57:80,172.20.1.146:80)
Annotations: ingress.cilium.io/loadbalancer-mode: dedicated
Events: <none>
NAME CLASS HOSTS ADDRESS PORTS AGE
basic-ingress cilium * 192.168.10.211 80 3h44m
webpod-ingress cilium * 192.168.10.213 80 11s
webpod-ingress-nginx nginx nginx.webpod.local 192.168.10.212 80 7m11s
- When the webpod-ingress Ingress resource is created, a dedicated LoadBalancer IP (192.168.10.213) is allocated
3. Verify the dedicated Ingress service
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep cilium-ingress-webpod-ingress
✅ Output
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cilium-ingress-webpod-ingress LoadBalancer 10.96.98.203 192.168.10.213 80:30656/TCP,443:31333/TCP 107s
NAME ENDPOINTS AGE
endpoints/cilium-ingress-webpod-ingress 192.192.192.192:9999 106s
- The cilium-ingress-webpod-ingress LoadBalancer service is created automatically
- EXTERNAL-IP: 192.168.10.213
4. Check the Cilium L2 announcement leader
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get lease -n kube-system | grep ingress
✅ Output
cilium-l2announce-default-cilium-ingress-webpod-ingress    k8s-w1    3m25s
cilium-l2announce-ingress-nginx-ingress-nginx-controller k8s-w1 14m
cilium-l2announce-kube-system-cilium-ingress k8s-ctr 4h2m
- cilium-l2announce-default-cilium-ingress-webpod-ingress → k8s-w1
- The external IP of webpod-ingress (192.168.10.213) is also announced by the k8s-w1 node
5. Check webpod pod placement and routing
(1) Check the pods
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -l app=webpod -owide
✅ Output
NAME                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
webpod-697b545f57-8qdms 1/1 Running 0 5h4m 172.20.0.57 k8s-ctr <none> <none>
webpod-697b545f57-cscj8 1/1 Running 0 5h4m 172.20.1.146 k8s-w1 <none> <none>
- 2 webpod pods → 172.20.0.57 (k8s-ctr), 172.20.1.146 (k8s-w1)
(2) Check the routing table on each node
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
1.214.68.2 via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
61.41.153.2 via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
172.20.0.26 dev lxccd6108c2c13a proto kernel scope link
172.20.0.51 dev lxc9d0051e26697 proto kernel scope link
172.20.0.57 dev lxc8d6118f480a3 proto kernel scope link
172.20.0.74 dev lxc090a59154842 proto kernel scope link
172.20.0.88 dev lxcbe10a3d0424e proto kernel scope link
172.20.0.94 dev lxc90422e45e688 proto kernel scope link
172.20.0.161 dev lxc3ade05345e37 proto kernel scope link
172.20.0.211 dev lxc406361bdd07d proto kernel scope link
172.20.1.0/24 via 192.168.10.101 dev eth1 proto kernel
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
root@k8s-w1:~# ip -c route
✅ Output
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
1.214.68.2 via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
61.41.153.2 via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
172.20.0.0/24 via 192.168.10.100 dev eth1 proto kernel
172.20.1.43 dev lxc18af75bb5442 proto kernel scope link
172.20.1.96 dev lxc4ef349547833 proto kernel scope link
172.20.1.146 dev lxc31dabfbe894f proto kernel scope link
172.20.1.147 dev lxcc960423e84e9 proto kernel scope link
172.20.1.176 dev lxc53d718f372d3 proto kernel scope link
172.20.1.178 dev lxc298e2d514c9d proto kernel scope link
172.20.1.210 dev lxc7edf0846d346 proto kernel scope link
172.20.1.239 dev lxc0214606ce0ed proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
WPODVETH=lxc8d6118f480a3 # k8s-ctr
WPODVETH=lxc31dabfbe894f # k8s-w1
- Identify the veth interfaces attached to the pod IPs (lxc8d6118f480a3, lxc31dabfbe894f)
6. Capture traffic with ngrep
(⎈|HomeLab:N/A) root@k8s-ctr:~# WPODVETH=lxc8d6118f480a3
(โ|HomeLab:N/A) root@k8s-ctr:~# ngrep -tW byline -d $WPODVETH '' 'tcp port 80'
lxc8d6118f480a3: no IPv4 address assigned: Cannot assign requested address
interface: lxc8d6118f480a3
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
root@k8s-w1:~# WPODVETH=lxc31dabfbe894f
root@k8s-w1:~# ngrep -tW byline -d $WPODVETH '' 'tcp port 80'
lxc31dabfbe894f: no IPv4 address assigned: Cannot assign requested address
interface: lxc31dabfbe894f
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
- Capture tcp port 80 traffic on the veth interfaces
- Confirm the flow of an external request (curl http://192.168.10.213) being delivered to the webpod pods
7. Request monitoring result
(⎈|HomeLab:N/A) root@k8s-ctr:~# LB2IP=$(kubectl get svc cilium-ingress-webpod-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router curl -s http://$LB2IP
sshpass -p 'vagrant' ssh vagrant@router curl -s http://$LB2IP
...
✅ Monitoring result (k8s-w1)
root@k8s-w1:~# ngrep -tW byline -d $WPODVETH '' 'tcp port 80'
lxc31dabfbe894f: no IPv4 address assigned: Cannot assign requested address
interface: lxc31dabfbe894f
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
####
T 2025/08/23 21:19:58.384218 10.0.2.15:43560 -> 172.20.1.146:80 [AP] #4
GET / HTTP/1.1.
host: 192.168.10.213.
user-agent: curl/8.5.0.
accept: */*.
x-forwarded-for: 192.168.10.200.
x-forwarded-proto: http.
x-envoy-internal: true.
x-request-id: fa64b1ec-2d89-4fa1-ae36-62471ea0ed50.
.
##
T 2025/08/23 21:19:58.387967 172.20.1.146:80 -> 10.0.2.15:43560 [AP] #6
HTTP/1.1 200 OK.
Date: Sat, 23 Aug 2025 12:19:58 GMT.
Content-Length: 342.
Content-Type: text/plain; charset=utf-8.
.
Hostname: webpod-697b545f57-cscj8
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.146
IP: fe80::24f0:47ff:fe1f:a7f9
RemoteAddr: 10.0.2.15:43560
GET / HTTP/1.1.
Host: 192.168.10.213.
User-Agent: curl/8.5.0.
Accept: */*.
X-Envoy-Internal: true.
X-Forwarded-For: 192.168.10.200.
X-Forwarded-Proto: http.
X-Request-Id: fa64b1ec-2d89-4fa1-ae36-62471ea0ed50.
.
#####
- Host: 192.168.10.213
- X-Forwarded-For: 192.168.10.200 (the external client IP)
- X-Envoy-Internal: true
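The original client survives in these headers. A small sketch (`client_ip` is an illustrative helper; the header values are taken from the capture above):

```python
def client_ip(headers: dict) -> str:
    """The leftmost X-Forwarded-For entry is the original client, if present."""
    xff = headers.get("X-Forwarded-For", "")
    return xff.split(",")[0].strip() if xff else headers.get("RemoteAddr", "")

# Headers observed on the webpod side of the capture above.
captured = {
    "Host": "192.168.10.213",
    "X-Forwarded-For": "192.168.10.200",
    "X-Envoy-Internal": "true",
}
print(client_ip(captured))  # 192.168.10.200
```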
✅ Monitoring results (k8s-ctr)
(โ|HomeLab:N/A) root@k8s-ctr:~# ngrep -tW byline -d $WPODVETH '' 'tcp port 80'
lxc8d6118f480a3: no IPv4 address assigned: Cannot assign requested address
interface: lxc8d6118f480a3
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
###
T 2025/08/23 21:22:00.133975 172.20.0.57:80 -> 172.20.1.225:39417 [AP] #3
HTTP/1.1 200 OK.
Date: Sat, 23 Aug 2025 12:22:00 GMT.
Content-Length: 344.
Content-Type: text/plain; charset=utf-8.
.
Hostname: webpod-697b545f57-8qdms
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.57
IP: fe80::2421:36ff:fe60:fa6c
RemoteAddr: 172.20.1.225:39417
GET / HTTP/1.1.
Host: 192.168.10.213.
User-Agent: curl/8.5.0.
Accept: */*.
X-Envoy-Internal: true.
X-Forwarded-For: 192.168.10.200.
X-Forwarded-Proto: http.
X-Request-Id: 03f4b6de-58c7-4be4-856a-863300f29dd9.
.
- The same request is also delivered to the webpod pod on k8s-ctr (172.20.0.57)
- The response log likewise includes X-Forwarded-For
- Confirms that Cilium Envoy spreads traffic across the nodes via L2 Announcement and LB IPAM
📌 Ingress and Network Policy Example
1. Apply an external traffic lockdown policy (external-lockdown)
(1) Create the CiliumClusterwideNetworkPolicy
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "external-lockdown"
spec:
  description: "Block all the traffic originating from outside of the cluster"
  endpointSelector: {}
  ingress:
  - fromEntities:
    - cluster
EOF
# Result
ciliumclusterwidenetworkpolicy.cilium.io/external-lockdown created
- Blocks all traffic originating from outside the cluster
(2) curl from inside the cluster (k8s-ctr)
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --identity ingress
(โ|HomeLab:N/A) root@k8s-ctr:~# curl --fail -v http://"$LBIP"/details/1
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< content-length: 15
< content-type: text/plain
< date: Sat, 23 Aug 2025 12:26:34 GMT
< server: envoy
* The requested URL returned error: 403
* Closing connection
curl: (22) The requested URL returned error: 403
✅ Monitoring results
Aug 23 12:28:32.277: 127.0.0.1:44836 (ingress) -> 127.0.0.1:12430 (world) http-request DROPPED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:28:32.277: 127.0.0.1:44836 (ingress) <- 127.0.0.1:12430 (world) http-response FORWARDED (HTTP/1.1 403 0ms (GET http://192.168.10.211/details/1))
- Returns 403 Forbidden
- Hubble log: the request from world to the ingress identity is DROPPED
(3) curl from the external router (192.168.10.200)
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "curl -s --fail -v http://"$LBIP"/details/1"
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< content-length: 15
< content-type: text/plain
< date: Sat, 23 Aug 2025 12:30:16 GMT
< server: envoy
* The requested URL returned error: 403
* Closing connection
✅ Monitoring results
Aug 23 12:30:16.575: 192.168.10.200:35670 (ingress) -> kube-system/cilium-ingress:80 (world) http-request DROPPED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:30:16.575: 192.168.10.200:35670 (ingress) <- kube-system/cilium-ingress:80 (world) http-response FORWARDED (HTTP/1.1 403 0ms (GET http://192.168.10.211/details/1))
2. Add an allow policy for a specific CIDR (allow-cidr)
(1) Apply the CIDR-based allow policy
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-cidr"
spec:
  description: "Allow all the traffic originating from a specific CIDR"
  endpointSelector:
    matchExpressions:
    - key: reserved:ingress
      operator: Exists
  ingress:
  - fromCIDRSet:
    # Please update the CIDR to match your environment
    - cidr: 192.168.10.200/32
    - cidr: 127.0.0.1/32
EOF
# Result
ciliumclusterwidenetworkpolicy.cilium.io/allow-cidr created
- Allowed source CIDRs: 192.168.10.200/32, 127.0.0.1/32
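The fromCIDRSet match can be reproduced with Python's ipaddress module. A sketch of the L3 allow decision for the two CIDRs above (`cidr_allows` is illustrative, not Cilium code):

```python
import ipaddress

# CIDRs from the allow-cidr policy above.
ALLOWED_CIDRS = ["192.168.10.200/32", "127.0.0.1/32"]

def cidr_allows(src_ip: str) -> bool:
    """True if src_ip falls inside any allowed CIDR (an L3-only verdict)."""
    ip = ipaddress.ip_address(src_ip)
    return any(ip in ipaddress.ip_network(c) for c in ALLOWED_CIDRS)

print(cidr_allows("192.168.10.200"))  # True  -> the router's curl succeeds
print(cidr_allows("192.168.10.50"))   # False -> still blocked by external-lockdown
```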
(2) curl from inside the cluster (k8s-ctr)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl --fail -v http://"$LBIP"/details/1
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< server: envoy
< date: Sat, 23 Aug 2025 12:34:42 GMT
< content-length: 178
< x-envoy-upstream-service-time: 5
<
* Connection #0 to host 192.168.10.211 left intact
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}
Aug 23 12:34:42.205: 172.20.0.86:35693 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) policy-verdict:L3-Only INGRESS ALLOWED (TCP Flags: SYN)
Aug 23 12:34:42.205: 172.20.0.86:35693 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 23 12:34:42.205: 172.20.0.86:35693 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: SYN, ACK)
Aug 23 12:34:42.205: 172.20.0.86:35693 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK)
Aug 23 12:34:42.206: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.206: 172.20.0.86:35693 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:34:42.206: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.207: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.207: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.207: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.207: 127.0.0.1:60484 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:34:42.208: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.208: 172.20.0.86:35693 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:34:42.208: 172.20.0.86:35693 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:34:42.210: 127.0.0.1:60484 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) http-response FORWARDED (HTTP/1.1 200 5ms (GET http://192.168.10.211/details/1))
Aug 23 12:35:12.250: 172.20.0.86:35693 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:35:12.251: 172.20.0.86:35693 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
(3) curl from the external router (192.168.10.200)
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "curl -s --fail -v http://"$LBIP"/details/1"
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< server: envoy
< date: Sat, 23 Aug 2025 12:37:37 GMT
< content-length: 178
< x-envoy-upstream-service-time: 3
<
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}{ [178 bytes data]
* Connection #0 to host 192.168.10.211 left intact
(โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --identity ingress
Aug 23 12:37:36.295: 172.20.0.86:45021 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:37:36.295: 172.20.0.86:45021 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:37:37.580: 172.20.0.86:40729 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) policy-verdict:L3-Only INGRESS ALLOWED (TCP Flags: SYN)
Aug 23 12:37:37.580: 172.20.0.86:40729 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 23 12:37:37.580: 172.20.0.86:40729 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: SYN, ACK)
Aug 23 12:37:37.580: 172.20.0.86:40729 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK)
Aug 23 12:37:37.580: 172.20.0.86:40729 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:37:37.580: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.581: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.581: 192.168.10.200:43760 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:37:37.581: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.581: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.581: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.582: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.582: 172.20.0.86:40729 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:37:37.583: 192.168.10.200:43760 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) http-response FORWARDED (HTTP/1.1 200 4ms (GET http://192.168.10.211/details/1))
Aug 23 12:37:37.582: 172.20.0.86:40729 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:37:43.871: 172.20.0.86:37363 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:37:43.871: 172.20.0.86:37363 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:38:07.626: 172.20.0.86:40729 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:38:07.627: 172.20.0.86:40729 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
3. Set a default-deny policy (default-deny)
(1) Create the default-deny policy
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "default-deny"
spec:
  description: "Block all the traffic (except DNS) by default"
  egress:
  - toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: '53'
        protocol: UDP
      rules:
        dns:
        - matchPattern: '*'
  endpointSelector:
    matchExpressions:
    - key: io.kubernetes.pod.namespace
      operator: NotIn
      values:
      - kube-system
EOF
# Result
ciliumclusterwidenetworkpolicy.cilium.io/default-deny created
- Blocks all traffic by default
- As the only exception, DNS requests (UDP 53) to the kube-dns pods in the kube-system namespace remain allowed
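The NotIn endpointSelector can be sketched as a simple predicate (illustrative only): pods outside kube-system are selected by default-deny, kube-system pods are not.

```python
def selected_by_default_deny(pod_labels: dict) -> bool:
    """matchExpressions: io.kubernetes.pod.namespace NotIn [kube-system]."""
    return pod_labels.get("io.kubernetes.pod.namespace") not in {"kube-system"}

print(selected_by_default_deny({"io.kubernetes.pod.namespace": "default"}))      # True
print(selected_by_default_deny({"io.kubernetes.pod.namespace": "kube-system"}))  # False
```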
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumclusterwidenetworkpolicy
✅ Output
NAME                VALID
allow-cidr          True
default-deny        True
external-lockdown   True
(2) curl from inside the cluster (k8s-ctr)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl --fail -v http://"$LBIP"/details/1
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< content-length: 15
< content-type: text/plain
< date: Sat, 23 Aug 2025 12:40:32 GMT
< server: envoy
* The requested URL returned error: 403
* Closing connection
curl: (22) The requested URL returned error: 403
Aug 23 12:40:32.550: 127.0.0.1:59492 (ingress) -> 127.0.0.1:12430 (ID:16777218) http-request DROPPED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:40:32.550: 127.0.0.1:59492 (ingress) <- 127.0.0.1:12430 (ID:16777218) http-response FORWARDED (HTTP/1.1 403 0ms (GET http://192.168.10.211/details/1))
(3) curl from the external router (192.168.10.200)
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "curl -s --fail -v http://"$LBIP"/details/1"
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< content-length: 15
< content-type: text/plain
< date: Sat, 23 Aug 2025 12:41:26 GMT
< server: envoy
* The requested URL returned error: 403
* Closing connection
Aug 23 12:41:27.149: 192.168.10.200:37178 (ingress) -> kube-system/cilium-ingress:80 (world) http-request DROPPED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:41:27.149: 192.168.10.200:37178 (ingress) <- kube-system/cilium-ingress:80 (world) http-response FORWARDED (HTTP/1.1 403 0ms (GET http://192.168.10.211/details/1))
4. Add an ingress traffic allow policy (allow-ingress-egress)
(1) Add the allow-ingress-egress policy
(โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-ingress-egress
spec:
  description: "Allow all the egress traffic from reserved ingress identity to any endpoints in the cluster"
  endpointSelector:
    matchExpressions:
    - key: reserved:ingress
      operator: Exists
  egress:
  - toEntities:
    - cluster
EOF
# Result
ciliumclusterwidenetworkpolicy.cilium.io/allow-ingress-egress created
- Adds an allow policy so that traffic arriving via the ingress can be forwarded to services inside the cluster
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumclusterwidenetworkpolicy
✅ Output
NAME                   VALID
allow-cidr             True
allow-ingress-egress   True
default-deny           True
external-lockdown      True
(2) curl from inside the cluster (k8s-ctr)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl --fail -v http://"$LBIP"/details/1
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< server: envoy
< date: Sat, 23 Aug 2025 12:44:00 GMT
< content-length: 178
< x-envoy-upstream-service-time: 2
<
* Connection #0 to host 192.168.10.211 left intact
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}
Aug 23 12:44:00.546: 172.20.0.86:44387 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) policy-verdict:L3-Only INGRESS ALLOWED (TCP Flags: SYN)
Aug 23 12:44:00.546: 172.20.0.86:44387 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 23 12:44:00.546: 172.20.0.86:44387 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: SYN, ACK)
Aug 23 12:44:00.547: 172.20.0.86:44387 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK)
Aug 23 12:44:00.547: 172.20.0.86:44387 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:44:00.547: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.547: 127.0.0.1:53060 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:44:00.547: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.548: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.548: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.548: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.549: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.549: 172.20.0.86:44387 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:44:00.549: 172.20.0.86:44387 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:44:00.550: 127.0.0.1:53060 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) http-response FORWARDED (HTTP/1.1 200 3ms (GET http://192.168.10.211/details/1))
Aug 23 12:44:30.593: 172.20.0.86:44387 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:44:30.594: 172.20.0.86:44387 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
(3) curl from the external router (192.168.10.200)
(โ|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router "curl -s --fail -v http://"$LBIP"/details/1"
✅ Output
* Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
> GET /details/1 HTTP/1.1
> Host: 192.168.10.211
> User-Agent: curl/8.5.0
> Accept: */*
>
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}< HTTP/1.1 200 OK
< content-type: application/json
< server: envoy
< date: Sat, 23 Aug 2025 12:45:02 GMT
< content-length: 178
< x-envoy-upstream-service-time: 3
<
{ [178 bytes data]
* Connection #0 to host 192.168.10.211 left intact
Aug 23 12:45:02.313: 172.20.0.86:36533 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) policy-verdict:L3-Only INGRESS ALLOWED (TCP Flags: SYN)
Aug 23 12:45:02.313: 172.20.0.86:36533 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 23 12:45:02.313: 172.20.0.86:36533 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: SYN, ACK)
Aug 23 12:45:02.314: 172.20.0.86:36533 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK)
Aug 23 12:45:02.314: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.314: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.314: 192.168.10.200:41366 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 23 12:45:02.315: 172.20.0.86:36533 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:45:02.315: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.315: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.315: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.316: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.316: 172.20.0.86:36533 (ingress) <> default/details-v1-766844796b-26zq8 (ID:7384) pre-xlate-rev TRACED (TCP)
Aug 23 12:45:02.316: 172.20.0.86:36533 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, PSH)
Aug 23 12:45:02.317: 192.168.10.200:41366 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) http-response FORWARDED (HTTP/1.1 200 4ms (GET http://192.168.10.211/details/1))
Aug 23 12:45:32.355: 172.20.0.86:36533 (ingress) <- default/details-v1-766844796b-26zq8:9080 (ID:7384) to-network FORWARDED (TCP Flags: ACK, FIN)
Aug 23 12:45:32.356: 172.20.0.86:36533 (ingress) -> default/details-v1-766844796b-26zq8:9080 (ID:7384) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
- Returns a normal 200 OK response
- The ingress → details service request now passes with an INGRESS ALLOWED verdict
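How the four clusterwide policies combine for traffic reaching the ingress identity can be approximated in a few lines. This is only a model of the behavior observed above, not Cilium's evaluation engine; `ingress_verdict` and its parameters are hypothetical.

```python
import ipaddress

# CIDRs from the allow-cidr policy.
ALLOWED_CIDRS = [ipaddress.ip_network(c) for c in ("192.168.10.200/32", "127.0.0.1/32")]

def ingress_verdict(src_ip: str, src_is_cluster: bool) -> str:
    """With external-lockdown + default-deny in place, traffic to the ingress
    identity is allowed only from inside the cluster or from an allow-cidr match."""
    ip = ipaddress.ip_address(src_ip)
    if src_is_cluster or any(ip in net for net in ALLOWED_CIDRS):
        return "ALLOWED"
    return "DROPPED"

print(ingress_verdict("192.168.10.200", src_is_cluster=False))  # ALLOWED (the router)
print(ingress_verdict("172.30.1.1", src_is_cluster=False))      # DROPPED (other external source)
```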
5. Delete the policies
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete CiliumClusterwideNetworkPolicy --all
# Result
ciliumclusterwidenetworkpolicy.cilium.io "allow-cidr" deleted
ciliumclusterwidenetworkpolicy.cilium.io "allow-ingress-egress" deleted
ciliumclusterwidenetworkpolicy.cilium.io "default-deny" deleted
ciliumclusterwidenetworkpolicy.cilium.io "external-lockdown" deleted
📌 Ingress Path Types Example
1. Deploy the path-types example resources
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types.yaml
# Result
deployment.apps/exactpath created
deployment.apps/prefixpath created
deployment.apps/prefixpath2 created
deployment.apps/implpath created
deployment.apps/implpath2 created
service/prefixpath created
service/prefixpath2 created
service/exactpath created
service/implpath created
service/implpath2 created
- Creates five Deployments (exactpath, prefixpath, prefixpath2, implpath, implpath2) and a matching Service for each
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types.yaml
✅ Output
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/exactpath     1/1     1            1           41s
deployment.apps/prefixpath    1/1     1            1           40s
deployment.apps/prefixpath2   1/1     1            1           40s
deployment.apps/implpath      1/1     1            1           40s
deployment.apps/implpath2     1/1     1            1           40s
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/prefixpath    ClusterIP   10.96.231.211   <none>        80/TCP    40s
service/prefixpath2   ClusterIP   10.96.27.239    <none>        80/TCP    40s
service/exactpath     ClusterIP   10.96.82.188    <none>        80/TCP    40s
service/implpath      ClusterIP   10.96.56.152    <none>        80/TCP    40s
service/implpath2     ClusterIP   10.96.107.112   <none>        80/TCP    40s
- Services created: exactpath, prefixpath, prefixpath2, implpath, implpath2
2. Create the Ingress resource
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types-ingress.yaml
# Result
ingress.networking.k8s.io/multiple-path-types created
(โ|HomeLab:N/A) root@k8s-ctr:~# k get ingress
✅ Output
NAME                   CLASS    HOSTS                   ADDRESS          PORTS   AGE
basic-ingress          cilium   *                       192.168.10.211   80      4h29m
multiple-path-types    cilium   pathtypes.example.com   192.168.10.211   80      37s
webpod-ingress         cilium   *                       192.168.10.213   80      45m
webpod-ingress-nginx   nginx    nginx.webpod.local      192.168.10.212   80      52m
- Host: pathtypes.example.com
(โ|HomeLab:N/A) root@k8s-ctr:~# kc describe ingress multiple-path-types
✅ Output
Name: multiple-path-types
Labels: <none>
Namespace: default
Address: 192.168.10.211
Ingress Class: cilium
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
pathtypes.example.com
/exact exactpath:80 (172.20.1.38:3000)
/ prefixpath:80 (172.20.1.168:3000)
/prefix prefixpath2:80 (172.20.1.76:3000)
/impl implpath:80 (172.20.1.238:3000)
/impl.+ implpath2:80 (172.20.1.202:3000)
Annotations: <none>
Events: <none>
- The Ingress rules map each path to a Service:
  - /exact → exactpath
  - / → prefixpath
  - /prefix → prefixpath2
  - /impl → implpath
  - /impl.+ → implpath2
3. Inspect the Ingress YAML
(โ|HomeLab:N/A) root@k8s-ctr:~# kc get ingress multiple-path-types -o yaml
✅ Output
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"multiple-path-types","namespace":"default"},"spec":{"ingressClassName":"cilium","rules":[{"host":"pathtypes.example.com","http":{"paths":[{"backend":{"service":{"name":"exactpath","port":{"number":80}}},"path":"/exact","pathType":"Exact"},{"backend":{"service":{"name":"prefixpath","port":{"number":80}}},"path":"/","pathType":"Prefix"},{"backend":{"service":{"name":"prefixpath2","port":{"number":80}}},"path":"/prefix","pathType":"Prefix"},{"backend":{"service":{"name":"implpath","port":{"number":80}}},"path":"/impl","pathType":"ImplementationSpecific"},{"backend":{"service":{"name":"implpath2","port":{"number":80}}},"path":"/impl.+","pathType":"ImplementationSpecific"}]}}]}}
  creationTimestamp: "2025-08-23T12:51:02Z"
  generation: 1
  name: multiple-path-types
  namespace: default
  resourceVersion: "34361"
  uid: e533c642-50b6-4901-b760-6e39b05d3bf1
spec:
  ingressClassName: cilium
  rules:
  - host: pathtypes.example.com
    http:
      paths:
      - backend:
          service:
            name: exactpath
            port:
              number: 80
        path: /exact
        pathType: Exact
      - backend:
          service:
            name: prefixpath
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: prefixpath2
            port:
              number: 80
        path: /prefix
        pathType: Prefix
      - backend:
          service:
            name: implpath
            port:
              number: 80
        path: /impl
        pathType: ImplementationSpecific
      - backend:
          service:
            name: implpath2
            port:
              number: 80
        path: /impl.+
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.211
- Each path declares an explicit pathType:
  - /exact → Exact
  - /, /prefix → Prefix
  - /impl, /impl.+ → ImplementationSpecific
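The three pathType behaviors can be sketched in a few lines. Treating ImplementationSpecific as a regex is an assumption based on the /impl.+ rule, and the precedence below (Exact, then regex, then longest Prefix) only approximates the actual Envoy route ordering:

```python
import re

# (path, pathType, service) taken from the Ingress above.
RULES = [
    ("/exact",  "Exact",                  "exactpath"),
    ("/",       "Prefix",                 "prefixpath"),
    ("/prefix", "Prefix",                 "prefixpath2"),
    ("/impl",   "ImplementationSpecific", "implpath"),
    ("/impl.+", "ImplementationSpecific", "implpath2"),
]

def route(path: str):
    # Exact: literal string equality wins first.
    for p, t, svc in RULES:
        if t == "Exact" and path == p:
            return svc
    # ImplementationSpecific: treated as a full-match regex (assumption), longest pattern first.
    for p, t, svc in sorted(RULES, key=lambda r: -len(r[0])):
        if t == "ImplementationSpecific" and re.fullmatch(p, path):
            return svc
    # Prefix: longest segment-wise prefix wins.
    for p, t, svc in sorted(RULES, key=lambda r: -len(r[0])):
        if t == "Prefix" and (p == "/" or path == p or path.startswith(p + "/")):
            return svc
    return None

for p in ("/", "/exact", "/prefix", "/impl", "/implementation"):
    print(p, "->", route(p))
```

This reproduces the routing observed in the curl tests below: / → prefixpath, /exact → exactpath, /prefix → prefixpath2, /impl → implpath, /implementation → implpath2.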
4. Call the base path / (Prefix match)
(โ|HomeLab:N/A) root@k8s-ctr:~# export PATHTYPE_IP=`k get ing multiple-path-types -o json | jq -r '.status.loadBalancer.ingress[0].ip'`
curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE_IP/ | jq
✅ Output
{
"path": "/",
"host": "pathtypes.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/8.5.0"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"10.0.2.15"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"15cc02d8-35da-4b9c-8501-4e9868b136b2"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "prefixpath-5d6b989d4-jfn4b"
}
Check the pod names
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod | grep path
✅ Output
exactpath-7488f8c6c6-zkkxw    1/1   Running   0   5m34s
implpath-7d8bf85676-wlkrg     1/1   Running   0   5m34s
implpath2-56c97c8556-k4qp6    1/1   Running   0   5m34s
prefixpath-5d6b989d4-jfn4b    1/1   Running   0   5m34s
prefixpath2-b7c7c9568-jnvwj   1/1   Running   0   5m34s
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE_IP/ | grep -E 'path|pod'
"path": "/",
"host": "pathtypes.example.com",
"pod": "prefixpath-5d6b989d4-jfn4b"
- Calling http://pathtypes.example.com/ routes to the prefixpath pod
- Path: /, Pod: prefixpath-xxxxx
5. Call /exact (Exact match)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE_IP/exact | grep -E 'path|pod'
"path": "/exact",
"host": "pathtypes.example.com",
"pod": "exactpath-7488f8c6c6-zkkxw"
- Calling http://pathtypes.example.com/exact matches the exactpath pod
- Path: /exact, Pod: exactpath-xxxxx
6. Call /prefix (prefixpath2 match)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE_IP/prefix | grep -E 'path|pod'
"path": "/prefix",
"host": "pathtypes.example.com",
"pod": "prefixpath2-b7c7c9568-jnvwj"
- Calling http://pathtypes.example.com/prefix matches the prefixpath2 pod
- Path: /prefix, Pod: prefixpath2-xxxxx
7. Call /impl (ImplementationSpecific → implpath match)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE_IP/impl | grep -E 'path|pod'
"path": "/impl",
"host": "pathtypes.example.com",
"pod": "implpath-7d8bf85676-wlkrg"
- Calling http://pathtypes.example.com/impl matches the implpath pod
- Path: /impl, Pod: implpath-xxxxx
8. Call /implementation (ImplementationSpecific → implpath2 match)
(โ|HomeLab:N/A) root@k8s-ctr:~# curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE_IP/implementation | grep -E 'path|pod'
"path": "/implementation",
"host": "pathtypes.example.com",
"pod": "implpath2-56c97c8556-k4qp6"
- Calling http://pathtypes.example.com/implementation matches the implpath2 pod
- Path: /implementation, Pod: implpath2-xxxxx
📌 Ingress Example with TLS Termination
1. Install mkcert
(โ|HomeLab:N/A) root@k8s-ctr:~# apt install mkcert -y
# Result
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
mkcert
0 upgraded, 1 newly installed, 0 to remove and 23 not upgraded.
Need to get 1,418 kB of archives.
After this operation, 3,672 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu noble-updates/universe amd64 mkcert amd64 1.4.4-1ubuntu3.2 [1,418 kB]
Fetched 1,418 kB in 4s (401 kB/s)
Selecting previously unselected package mkcert.
(Reading database ... 52139 files and directories currently installed.)
Preparing to unpack .../mkcert_1.4.4-1ubuntu3.2_amd64.deb ...
Unpacking mkcert (1.4.4-1ubuntu3.2) ...
Setting up mkcert (1.4.4-1ubuntu3.2) ...
Processing triggers for man-db (2.12.0-4build2) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
|
- Used to generate TLS certificates locally
2. Generate a wildcard domain certificate
| (⎈|HomeLab:N/A) root@k8s-ctr:~# mkcert '*.cilium.rocks'
# Result
Created a new local CA 💥
Note: the local CA is not installed in the system trust store.
Run "mkcert -install" for certificates to be trusted automatically ⚠️
Created a new certificate valid for the following names 📜
- "*.cilium.rocks"
Reminder: X.509 wildcards only go one level deep, so this won't match a.b.cilium.rocks ℹ️
The certificate is at "./_wildcard.cilium.rocks.pem" and the key at "./_wildcard.cilium.rocks-key.pem" ✅
It will expire on 23 November 2027 🗓
|
- The certificate is valid for the *.cilium.rocks domain
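If mkcert is not available, a roughly comparable wildcard certificate can be produced with plain openssl. A sketch (it skips mkcert's local-CA step, so the result is self-signed; assumes OpenSSL 1.1.1+ for `-addext`, and the file names are arbitrary):

```shell
# Self-signed wildcard cert, similar in shape to mkcert's output
# (no local CA here -- the certificate signs itself).
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -subj "/O=openssl development certificate/CN=*.cilium.rocks" \
  -addext "subjectAltName=DNS:*.cilium.rocks" \
  -keyout wildcard-key.pem -out wildcard.pem

# The SAN carries the wildcard; as mkcert warns, X.509 wildcards
# only go one level deep.
openssl x509 -in wildcard.pem -noout -text | grep -A1 'Subject Alternative Name'
```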
3. Inspect the certificate details
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ls -l *.pem
|
✅ Output
| -rw------- 1 root root 1704 Aug 23 22:07 _wildcard.cilium.rocks-key.pem
-rw-r--r-- 1 root root 1452 Aug 23 22:07 _wildcard.cilium.rocks.pem
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# openssl x509 -in _wildcard.cilium.rocks.pem -text -noout
|
✅ Output
| Certificate:
Data:
Version: 3 (0x2)
Serial Number:
b8:cb:f9:a7:5a:4b:41:d7:c1:89:db:80:f1:d6:57:a1
Signature Algorithm: sha256WithRSAEncryption
Issuer: O = mkcert development CA, OU = root@k8s-ctr, CN = mkcert root@k8s-ctr
Validity
Not Before: Aug 23 13:07:13 2025 GMT
Not After : Nov 23 13:07:13 2027 GMT
Subject: O = mkcert development certificate, OU = root@k8s-ctr
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a7:46:6d:c6:15:3c:80:77:29:d4:93:e2:de:24:
e9:7d:c8:b6:5a:58:e4:6a:f7:db:85:e5:bf:c8:c6:
47:52:bb:9a:aa:17:fc:8e:d9:59:fa:f2:f4:ac:36:
47:7c:da:42:b8:91:3f:3e:61:2b:4e:55:11:01:86:
1a:3a:4b:1f:1a:99:62:69:42:6d:be:4c:dd:56:2d:
09:54:83:45:a7:45:86:9c:c4:0b:c3:fe:3f:aa:b3:
d4:e3:47:d5:99:2c:11:88:c9:68:2a:91:40:15:68:
b5:60:fe:71:6b:01:b5:08:5f:90:45:94:4c:1a:6a:
6d:3b:24:00:3f:03:b2:af:4e:74:32:19:88:99:63:
c3:b0:fe:79:98:82:15:6c:b3:a8:72:4e:83:92:94:
d0:af:ce:99:1d:16:c6:7f:da:b7:2b:e0:2c:9e:cc:
89:12:58:22:9b:11:0d:db:4a:01:e4:23:72:82:9c:
7c:62:0a:65:6c:05:fc:6e:2e:f0:54:0f:eb:09:71:
3b:93:29:48:b2:f6:37:0b:4e:3e:49:71:37:1b:80:
ca:a1:39:96:5b:21:4f:b7:d6:bc:61:df:31:e8:23:
0d:8d:82:56:21:e6:f4:86:2d:9c:ce:38:c8:a6:b6:
0a:03:93:e1:e4:9d:eb:b6:39:21:90:48:81:96:8c:
7b:b1
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Authority Key Identifier:
7A:2D:48:1E:DF:73:58:5A:EA:80:7A:1E:5B:06:FB:F7:C9:F5:9A:D0
X509v3 Subject Alternative Name:
DNS:*.cilium.rocks
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
4e:f8:31:03:4f:ec:da:6c:67:57:05:2a:00:db:1d:96:54:e5:
a9:db:5c:ac:d7:62:f0:6a:96:9a:fd:8e:2c:30:e4:72:61:3e:
b1:01:9e:1b:20:c1:3c:8c:e6:63:e6:c6:48:4e:da:a9:06:8d:
7e:f4:7a:62:b7:42:08:38:24:9b:af:5d:24:52:56:b1:31:73:
78:64:68:7f:df:fc:1f:8b:3b:fe:eb:92:a1:23:db:e6:1a:3e:
56:87:06:9c:96:71:52:96:13:a0:fa:0b:e8:07:51:c0:99:8e:
d8:26:8d:64:41:76:6d:14:ed:ba:ae:25:db:54:3a:0b:6d:83:
6e:d3:1a:46:7a:9d:66:0b:12:1e:25:a6:ed:14:57:49:92:28:
09:4d:11:f3:1d:4c:f2:43:11:f6:34:3a:90:09:8e:d3:fb:e2:
29:6b:7d:57:83:33:e7:a0:3a:ee:a9:ce:4a:dd:ec:71:07:a3:
7b:3f:4c:40:68:75:bc:5c:35:ce:5c:4f:40:59:86:3c:c9:d8:
7b:e2:e4:77:cd:71:3b:bc:8c:5d:f2:52:1c:1f:19:43:38:65:
03:9a:31:8e:f6:d3:4d:71:e4:63:af:f3:ca:af:d8:d7:bd:29:
e6:6b:5d:00:09:59:8e:96:03:25:67:4e:6a:60:62:84:ab:34:
2a:c3:d2:a1:a6:5a:d8:b2:ef:dc:3b:59:9d:3a:2e:1b:65:5c:
69:13:2a:22:5a:58:e6:a8:c2:fd:7a:08:54:10:95:df:74:92:
9d:c5:53:b1:2f:ca:10:45:d5:25:38:d3:fe:c8:79:c4:81:a1:
c7:a4:0c:7e:11:bd:e8:ee:98:a7:3c:af:3e:02:5c:5a:b9:d4:
ca:91:14:39:4d:7d:b3:4a:86:2d:fa:a4:8c:56:8f:42:cd:72:
35:c5:f1:c6:c5:ea:af:f8:4b:86:46:0e:14:98:82:71:e0:39:
a7:9f:68:a2:72:3b:ee:45:e3:66:da:a1:7c:38:ef:99:93:b9:
17:4c:e3:16:26:6e
|
- Issuer: mkcert local CA
- Subject: mkcert development certificate
- Extended Key Usage: TLS Web Server Authentication
- Subject Alternative Name: DNS:*.cilium.rocks
| (⎈|HomeLab:N/A) root@k8s-ctr:~# openssl rsa -in _wildcard.cilium.rocks-key.pem -text -noout
|
✅ Output
| Private-Key: (2048 bit, 2 primes)
modulus:
00:a7:46:6d:c6:15:3c:80:77:29:d4:93:e2:de:24:
e9:7d:c8:b6:5a:58:e4:6a:f7:db:85:e5:bf:c8:c6:
47:52:bb:9a:aa:17:fc:8e:d9:59:fa:f2:f4:ac:36:
47:7c:da:42:b8:91:3f:3e:61:2b:4e:55:11:01:86:
1a:3a:4b:1f:1a:99:62:69:42:6d:be:4c:dd:56:2d:
09:54:83:45:a7:45:86:9c:c4:0b:c3:fe:3f:aa:b3:
d4:e3:47:d5:99:2c:11:88:c9:68:2a:91:40:15:68:
b5:60:fe:71:6b:01:b5:08:5f:90:45:94:4c:1a:6a:
6d:3b:24:00:3f:03:b2:af:4e:74:32:19:88:99:63:
c3:b0:fe:79:98:82:15:6c:b3:a8:72:4e:83:92:94:
d0:af:ce:99:1d:16:c6:7f:da:b7:2b:e0:2c:9e:cc:
89:12:58:22:9b:11:0d:db:4a:01:e4:23:72:82:9c:
7c:62:0a:65:6c:05:fc:6e:2e:f0:54:0f:eb:09:71:
3b:93:29:48:b2:f6:37:0b:4e:3e:49:71:37:1b:80:
ca:a1:39:96:5b:21:4f:b7:d6:bc:61:df:31:e8:23:
0d:8d:82:56:21:e6:f4:86:2d:9c:ce:38:c8:a6:b6:
0a:03:93:e1:e4:9d:eb:b6:39:21:90:48:81:96:8c:
7b:b1
publicExponent: 65537 (0x10001)
privateExponent:
4a:77:30:f1:28:8d:09:87:82:e8:ae:79:25:79:7b:
34:52:c0:d3:11:95:86:05:17:05:d1:94:82:15:ba:
b4:9a:ed:ac:61:07:3e:b4:85:b9:10:a5:59:70:c4:
7c:51:51:b8:86:78:88:15:8b:c8:d0:57:c4:bc:e5:
3a:24:2d:11:93:4c:db:1d:06:6b:dc:1e:00:7a:06:
18:48:64:1e:a5:f5:da:1d:f0:3a:ed:19:7c:ad:97:
cd:22:32:75:80:c7:c1:84:1f:ca:2b:65:42:e2:9d:
34:33:b1:5b:f8:a3:95:b9:ad:29:3c:6e:70:a8:06:
3e:78:b5:5f:58:0f:18:b8:f4:a3:2d:8d:0c:b7:69:
41:1b:6e:2e:89:1d:d2:3f:f9:ca:9d:c2:16:61:a6:
64:7c:c3:35:9a:04:3d:ac:fa:7c:61:36:a3:90:9d:
12:c8:b7:cf:ed:2f:f8:15:67:08:64:8d:b3:2d:3f:
bc:3e:21:7c:32:89:17:c2:87:bc:ec:97:1d:0b:46:
4a:b6:af:62:ff:9b:69:90:90:d5:f8:4a:a9:cf:0a:
24:82:90:94:b6:af:72:32:04:87:3a:be:11:45:d6:
34:b8:e0:b2:07:21:8a:c3:5a:ba:a0:b9:60:8f:cb:
76:01:29:d8:a6:ed:01:69:59:c1:e6:8c:a0:12:52:
c1
prime1:
00:c2:10:ba:0e:17:bd:2f:3f:49:f0:e0:08:6c:93:
90:e9:10:dc:4c:db:28:d9:ef:a5:c0:48:a4:48:cc:
0f:e2:09:0e:61:52:81:39:b0:c5:ae:df:df:6d:cf:
1e:58:01:7d:c3:eb:a7:07:9d:3d:13:a8:fd:7d:28:
4c:43:6a:2b:6a:a5:a1:67:3e:88:50:5a:e3:4a:87:
df:4b:e6:73:62:0b:a0:cb:1c:aa:1c:71:74:38:56:
29:9a:01:03:5f:79:fb:e8:27:31:1f:7b:cb:e7:18:
d5:02:36:b9:85:54:af:bd:e6:36:51:df:2d:52:04:
75:7f:6c:27:97:66:9b:ce:e9
prime2:
00:dc:a8:ec:00:99:1c:2b:44:4e:d9:de:65:87:04:
cc:d3:73:aa:8e:ea:fb:da:32:5e:6d:75:d1:cf:af:
ba:4c:bb:2d:18:13:f9:f2:30:e6:a8:32:a9:17:bf:
ad:c5:dc:55:45:39:e9:bb:e6:81:af:35:76:89:8d:
e4:b7:2f:4c:0c:f8:b6:c8:9d:d5:f5:90:b4:9a:0a:
02:9a:56:58:9e:2d:00:28:57:64:25:72:1a:12:27:
25:39:f4:21:3a:0c:c1:5b:2a:6d:fb:59:c6:d1:d9:
29:35:2e:f8:3d:23:fb:8a:17:7b:22:68:fd:76:6d:
19:7b:a0:86:1d:e8:86:19:89
exponent1:
00:bb:ea:3e:7f:0e:f5:9e:1e:86:96:bc:18:ec:2a:
28:13:c6:c3:cb:98:1c:02:8c:4f:cf:d5:87:5f:06:
5a:0a:e2:02:0e:b3:39:76:87:79:a4:50:1b:a5:d4:
fc:e8:f2:e8:b9:22:22:e6:f8:96:ff:c6:8f:4e:f6:
0a:82:ec:f7:85:99:44:a5:18:a4:1e:4b:23:f4:33:
85:41:95:b0:fc:ab:8d:d3:9f:ba:09:38:c2:f5:1a:
59:c6:4a:5c:dd:c2:dc:ee:7b:a4:d1:7b:aa:81:e8:
45:ba:93:0b:29:d4:4b:28:73:16:fe:48:41:36:3e:
23:18:73:be:12:98:c5:1e:a1
exponent2:
28:86:4d:71:7e:ad:11:b5:25:d6:15:66:07:f9:f5:
b4:0f:d6:0f:11:50:d7:ad:d0:71:c1:e5:9a:82:e7:
70:18:20:1b:ff:ef:33:6f:8f:ac:ec:40:e9:bf:a6:
26:64:64:b4:a6:02:2b:24:16:45:1b:48:7d:d2:b5:
62:7f:29:34:68:a9:09:07:13:b8:59:af:11:6b:2d:
37:33:0c:6b:60:50:5b:18:51:a8:1e:e0:72:dd:05:
95:33:24:b7:08:88:aa:39:8c:a1:50:c7:96:da:8f:
1a:d9:59:75:c1:19:fd:2e:7a:ca:c3:05:69:ae:77:
b3:a0:b3:80:e2:a7:5e:21
coefficient:
1c:17:f5:ec:99:be:04:b5:5d:c5:91:a8:ab:d5:f2:
bf:07:de:79:77:a2:06:d8:d6:88:c8:50:a9:bd:51:
85:98:1f:70:b8:36:79:ca:01:45:55:f3:f7:c6:5e:
e7:59:b3:d9:b2:e2:7e:be:9d:7e:60:8d:dc:fe:52:
ae:14:e5:cd:0b:4a:92:da:ef:5e:91:8f:b8:a9:69:
9a:22:5d:e7:60:69:43:7b:f4:72:5a:a6:ce:79:14:
f5:d9:2a:6e:64:f8:69:dd:b0:f7:4f:56:00:41:d7:
a6:51:5d:2e:5e:b7:95:0c:3b:3d:d3:9a:48:c4:4a:
86:7b:c9:cb:ff:44:6c:6f
|
- The private key (2048-bit) details can be inspected
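A quick sanity check that a key file really pairs with a certificate is to hash each side's public key and compare. A sketch with a throwaway pair (with the files above you would point the same two commands at _wildcard.cilium.rocks.pem and _wildcard.cilium.rocks-key.pem):

```shell
# Generate a throwaway pair, then confirm key and cert carry the
# same public key by comparing SHA-256 digests of each side.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=paircheck.local" -keyout demo-key.pem -out demo-cert.pem

cert_pub=$(openssl x509 -in demo-cert.pem -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in demo-key.pem -pubout | openssl sha256)

[ "$cert_pub" = "$key_pub" ] && echo "key matches cert"
```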
4. Create a Kubernetes Secret
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl create secret tls demo-cert --key=_wildcard.cilium.rocks-key.pem --cert=_wildcard.cilium.rocks.pem
# Result
secret/demo-cert created
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret demo-cert -o json | jq
|
✅ Output
| {
"apiVersion": "v1",
"data": {
"tls.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVBekNDQW11Z0F3SUJBZ0lSQUxqTCthZGFTMEhYd1luYmdQSFdWNkV3RFFZSktvWklodmNOQVFFTEJRQXcKVlRFZU1Cd0dBMVVFQ2hNVmJXdGpaWEowSUdSbGRtVnNiM0J0Wlc1MElFTkJNUlV3RXdZRFZRUUxEQXh5YjI5MApRR3M0Y3kxamRISXhIREFhQmdOVkJBTU1FMjFyWTJWeWRDQnliMjkwUUdzNGN5MWpkSEl3SGhjTk1qVXdPREl6Ck1UTXdOekV6V2hjTk1qY3hNVEl6TVRNd056RXpXakJBTVNjd0pRWURWUVFLRXg1dGEyTmxjblFnWkdWMlpXeHYKY0cxbGJuUWdZMlZ5ZEdsbWFXTmhkR1V4RlRBVEJnTlZCQXNNREhKdmIzUkFhemh6TFdOMGNqQ0NBU0l3RFFZSgpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLZEdiY1lWUElCM0tkU1Q0dDRrNlgzSXRscFk1R3IzCjI0WGx2OGpHUjFLN21xb1gvSTdaV2ZyeTlLdzJSM3phUXJpUlB6NWhLMDVWRVFHR0dqcExIeHFaWW1sQ2JiNU0KM1ZZdENWU0RSYWRGaHB6RUM4UCtQNnF6MU9OSDFaa3NFWWpKYUNxUlFCVm90V0QrY1dzQnRRaGZrRVdVVEJwcQpiVHNrQUQ4RHNxOU9kRElaaUpsanc3RCtlWmlDRld5enFISk9nNUtVMEsvT21SMFd4bi9hdHl2Z0xKN01pUkpZCklwc1JEZHRLQWVRamNvS2NmR0lLWld3Ri9HNHU4RlFQNndseE81TXBTTEwyTnd0T1BrbHhOeHVBeXFFNWxsc2gKVDdmV3ZHSGZNZWdqRFkyQ1ZpSG05SVl0bk00NHlLYTJDZ09UNGVTZDY3WTVJWkJJZ1phTWU3RUNBd0VBQWFOagpNR0V3RGdZRFZSMFBBUUgvQkFRREFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUI4R0ExVWRJd1FZCk1CYUFGSG90U0I3ZmMxaGE2b0I2SGxzRysvZko5WnJRTUJrR0ExVWRFUVFTTUJDQ0Rpb3VZMmxzYVhWdExuSnYKWTJ0ek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQmdRQk8rREVEVCt6YWJHZFhCU29BMngyV1ZPV3AyMXlzMTJMdwphcGFhL1k0c01PUnlZVDZ4QVo0YklNRThqT1pqNXNaSVR0cXBCbzErOUhwaXQwSUlPQ1NicjEwa1VsYXhNWE40ClpHaC8zL3dmaXp2KzY1S2hJOXZtR2o1V2h3YWNsbkZTbGhPZytndm9CMUhBbVk3WUpvMWtRWFp0Rk8yNnJpWGIKVkRvTGJZTnUweHBHZXAxbUN4SWVKYWJ0RkZkSmtpZ0pUUkh6SFV6eVF4SDJORHFRQ1k3VCsrSXBhMzFYZ3pQbgpvRHJ1cWM1SzNleHhCNk43UDB4QWFIVzhYRFhPWEU5QVdZWTh5ZGg3NHVSM3pYRTd2SXhkOGxJY0h4bERPR1VECm1qR085dE5OY2VSanIvUEtyOWpYdlNubWExMEFDVm1PbGdNbFowNXFZR0tFcXpRcXc5S2hwbHJZc3UvY08xbWQKT2k0YlpWeHBFeW9pV2xqbXFNTDllZ2hVRUpYZmRKS2R4Vk94TDhvUVJkVWxPTlAreUhuRWdhSEhwQXgrRWIzbwo3cGluUEs4K0FseGF1ZFRLa1JRNVRYMnpTb1l0K3FTTVZvOUN6WEkxeGZIR3hlcXYrRXVHUmc0VW1JSng0RG1uCm4yaWljanZ1UmVObTJxRjhPTytaazdrWFRPTVdKbTQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"tls.key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ25SbTNHRlR5QWR5blUKaytMZUpPbDl5TFphV09ScTk5dUY1Yi9JeGtkU3U1cXFGL3lPMlZuNjh2U3NOa2Q4MmtLNGtUOCtZU3RPVlJFQgpoaG82U3g4YW1XSnBRbTIrVE4xV0xRbFVnMFduUllhY3hBdkQvaitxczlUalI5V1pMQkdJeVdncWtVQVZhTFZnCi9uRnJBYlVJWDVCRmxFd2FhbTA3SkFBL0E3S3ZUblF5R1lpWlk4T3cvbm1ZZ2hWc3M2aHlUb09TbE5Ddnpwa2QKRnNaLzJyY3I0Q3lleklrU1dDS2JFUTNiU2dIa0kzS0NuSHhpQ21Wc0JmeHVMdkJVRCtzSmNUdVRLVWl5OWpjTApUajVKY1RjYmdNcWhPWlpiSVUrMzFyeGgzekhvSXcyTmdsWWg1dlNHTFp6T09NaW10Z29EaytIa25ldTJPU0dRClNJR1dqSHV4QWdNQkFBRUNnZ0VBU25jdzhTaU5DWWVDNks1NUpYbDdORkxBMHhHVmhnVVhCZEdVZ2hXNnRKcnQKckdFSFByU0Z1UkNsV1hERWZGRlJ1SVo0aUJXTHlOQlh4THpsT2lRdEVaTk0yeDBHYTl3ZUFIb0dHRWhrSHFYMQoyaDN3T3UwWmZLMlh6U0l5ZFlESHdZUWZ5aXRsUXVLZE5ET3hXL2lqbGJtdEtUeHVjS2dHUG5pMVgxZ1BHTGowCm95Mk5ETGRwUVJ0dUxva2Qwai81eXAzQ0ZtR21aSHpETlpvRVBhejZmR0UybzVDZEVzaTN6KzB2K0JWbkNHU04Kc3kwL3ZENGhmREtKRjhLSHZPeVhIUXRHU3Jhdll2K2JhWkNRMWZoS3FjOEtKSUtRbExhdmNqSUVoenErRVVYVwpOTGpnc2djaGlzTmF1cUM1WUkvTGRnRXAyS2J0QVdsWndlYU1vQkpTd1FLQmdRRENFTG9PRjcwdlAwbnc0QWhzCms1RHBFTnhNMnlqWjc2WEFTS1JJekEvaUNRNWhVb0U1c01XdTM5OXR6eDVZQVgzRDY2Y0huVDBUcVAxOUtFeEQKYWl0cXBhRm5Qb2hRV3VOS2g5OUw1bk5pQzZETEhLb2NjWFE0VmltYUFRTmZlZnZvSnpFZmU4dm5HTlVDTnJtRgpWSys5NWpaUjN5MVNCSFYvYkNlWFpwdk82UUtCZ1FEY3FPd0FtUndyUkU3WjNtV0hCTXpUYzZxTzZ2dmFNbDV0CmRkSFByN3BNdXkwWUUvbnlNT2FvTXFrWHY2M0YzRlZGT2VtNzVvR3ZOWGFKamVTM0wwd00rTGJJbmRYMWtMU2EKQ2dLYVZsaWVMUUFvVjJRbGNob1NKeVU1OUNFNkRNRmJLbTM3V2NiUjJTazFMdmc5SS91S0Yzc2lhUDEyYlJsNwpvSVlkNklZWmlRS0JnUUM3Nmo1L0R2V2VIb2FXdkJqc0tpZ1R4c1BMbUJ3Q2pFL1AxWWRmQmxvSzRnSU9zemwyCmgzbWtVQnVsMVB6bzh1aTVJaUxtK0piL3hvOU85Z3FDN1BlRm1VU2xHS1FlU3lQME00VkJsYkQ4cTQzVG43b0oKT01MMUdsbkdTbHpkd3R6dWU2VFJlNnFCNkVXNmt3c3AxRXNvY3hiK1NFRTJQaU1ZYzc0U21NVWVvUUtCZ0NpRwpUWEYrclJHMUpkWVZaZ2Y1OWJRUDFnOFJVTmV0MEhIQjVacUM1M0FZSUJ2Lzd6TnZqNnpzUU9tL3BpWmtaTFNtCkFpc2tGa1ViU0gzU3RXSi9LVFJvcVFrSEU3aFpyeEZyTFRjekRHdGdVRnNZVWFnZTRITGRCWlV6SkxjSWlLbzUKaktGUXg1YmFqeHJaV1hYQkdmMHVlc3JEQldtdWQ3T2dz
NERpcDE0aEFvR0FIQmYxN0ptK0JMVmR4WkdvcTlYeQp2d2ZlZVhlaUJ0aldpTWhRcWIxUmhaZ2ZjTGcyZWNvQlJWWHo5OFplNTFtejJiTGlmcjZkZm1DTjNQNVNyaFRsCnpRdEtrdHJ2WHBHUHVLbHBtaUpkNTJCcFEzdjBjbHFtem5rVTlka3FibVQ0YWQydzkwOVdBRUhYcGxGZExsNjMKbFF3N1BkT2FTTVJLaG52SnkvOUViRzg9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K"
},
"kind": "Secret",
"metadata": {
"creationTimestamp": "2025-08-23T13:12:33Z",
"name": "demo-cert",
"namespace": "default",
"resourceVersion": "38913",
"uid": "ac3c16e7-8b10-4336-b108-9e82780e467b"
},
"type": "kubernetes.io/tls"
}
|
- demo-cert Secret created (type kubernetes.io/tls)
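The `tls.crt` and `tls.key` values in that Secret are nothing more than the base64-encoded PEM files; the round-trip kubectl performs can be reproduced locally. A sketch with a stand-in one-line fragment:

```shell
# The Secret's data fields hold base64 of the raw PEM bytes.
printf '%s' '-----BEGIN CERTIFICATE-----' > pem-fragment.txt

encoded=$(base64 < pem-fragment.txt | tr -d '\n')   # what lands in .data
decoded=$(printf '%s' "$encoded" | base64 -d)       # what consumers read back

[ "$decoded" = "$(cat pem-fragment.txt)" ] && echo "round-trip ok"
```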
5. Deploy a TLS-enabled Ingress resource
| (⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-ingress
namespace: default
spec:
ingressClassName: cilium
rules:
- host: webpod.cilium.rocks
http:
paths:
- backend:
service:
name: webpod
port:
number: 80
path: /
pathType: Prefix
- host: bookinfo.cilium.rocks
http:
paths:
- backend:
service:
name: details
port:
number: 9080
path: /details
pathType: Prefix
- backend:
service:
name: productpage
port:
number: 9080
path: /
pathType: Prefix
tls:
- hosts:
- webpod.cilium.rocks
- bookinfo.cilium.rocks
secretName: demo-cert
EOF
# Result
ingress.networking.k8s.io/tls-ingress created
|
- webpod.cilium.rocks → Service webpod (80)
- bookinfo.cilium.rocks → Service details (9080, /details)
- bookinfo.cilium.rocks → Service productpage (9080, /)
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingress tls-ingress
|
✅ Output
| NAME CLASS HOSTS ADDRESS PORTS AGE
tls-ingress cilium webpod.cilium.rocks,bookinfo.cilium.rocks 192.168.10.211 80, 443 24s
|
- Ingress: tls-ingress
- Address: 192.168.10.211
- Ports: 80, 443
6. Update the local Trust Store
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ls -al /etc/ssl/certs/ca-certificates.crt
-rw-r--r-- 1 root root 219342 Feb 17 2025 /etc/ssl/certs/ca-certificates.crt
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# mkcert -install
# Result
The local CA is now installed in the system trust store! ⚡️
|
- Running mkcert -install registers the local CA in the system trust store
| (⎈|HomeLab:N/A) root@k8s-ctr:~# ls -al /etc/ssl/certs/ca-certificates.crt
-rw-r--r-- 1 root root 220956 Aug 23 22:32 /etc/ssl/certs/ca-certificates.crt
|
- /etc/ssl/certs/ca-certificates.crt grew (219 KB → 220 KB) and its modification time changed
7. Check the CA storage location
| (⎈|HomeLab:N/A) root@k8s-ctr:~# mkcert -CAROOT
/root/.local/share/mkcert
|
8. Verify the CA certificate
(1) Check the system certificate list
| (⎈|HomeLab:N/A) root@k8s-ctr:~# tail -n 50 /etc/ssl/certs/ca-certificates.crt
|
✅ Output
| ...
-----BEGIN CERTIFICATE-----
MIIEejCCAuKgAwIBAgIRAJsI9METlrdS4UZuSo7f+PMwDQYJKoZIhvcNAQELBQAw
VTEeMBwGA1UEChMVbWtjZXJ0IGRldmVsb3BtZW50IENBMRUwEwYDVQQLDAxyb290
QGs4cy1jdHIxHDAaBgNVBAMME21rY2VydCByb290QGs4cy1jdHIwHhcNMjUwODIz
MTMwNzEzWhcNMzUwODIzMTMwNzEzWjBVMR4wHAYDVQQKExVta2NlcnQgZGV2ZWxv
cG1lbnQgQ0ExFTATBgNVBAsMDHJvb3RAazhzLWN0cjEcMBoGA1UEAwwTbWtjZXJ0
IHJvb3RAazhzLWN0cjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAPJb
lt837RYLLz5BD2VRvBlTk2eT1kbMqG9rNYLZ+gtRv+e929O8u8D38fyDVQ+aYlG7
7YpTSXWPp7TGNT4gURsMXfriG2bdFqFJQzgUl/8UiEfRTF7diA8VJwkEKN0uMm5L
O9feRHSCMr6udIHk/aJD8f4TblCX7zbccggZ4vtdz8iKZ16bLriK8epOz3jYArX4
DLyb7NhX16FdmyMPXRsgwmRMlK5vmEYI/+oC85PhnwFEeuX31LHw7mbihHWFfK+o
y6jOeJ0Fp4wp9rhHntq3kLwMaiH/Qgmi+RW90Ax8BwIQ8tWwWlt4iBTEdNWveOsF
dzzFQ9sM7h8VLYkEF9DiX86/h0Wf0MwLZp8a08auPCMnBrdezDkPrDGDZ/zMgDbh
eZIM5InprvEA2OexLVoPrFHOrCYOUj3prjxtj7aZQjjDPOSPa1RivGFJ7rtzODhF
kI3hCgq3LxxyG2JRAmT3aJaE6RoG28Vr0ZPm7Q5dKNDS5/j+L+QN0ViwzPoqnwID
AQABo0UwQzAOBgNVHQ8BAf8EBAMCAgQwEgYDVR0TAQH/BAgwBgEB/wIBADAdBgNV
HQ4EFgQUei1IHt9zWFrqgHoeWwb798n1mtAwDQYJKoZIhvcNAQELBQADggGBAGTX
VV+cxMiYCGUA6/ZLr9R+jUuLlHxxybddaAA2Z6QW6GnVwrw+M5Nw7/Jt6QdlYj7+
X1d8bKAjJGW5bkWVz1u27FVtaDQRx5fkqzuMrpxGdxzC1/PO+97bPNofYCNlenqc
ExHyPqqO6CDREZHqbd2Eeu4h92bJcW6I3Y0WJefTItllJODmchv/kpMTWS8g5Zrs
F5TnSOijiKeXwVvgDZiVa7uYx3Zlot27PXUjTg6Mx40IKRyv7K+fDz7oRFgAT4JB
qbTQLA808WpDe7H6Uen5n7Sxecf2VO6QIo0uj7dmRYFNAS2GB1Xcx6vjpCy7yS/U
YH1teXziNLrYn7aI+WxVyFYI1nAQEQIqBDFVP2jmqquTEDkoti1yW5HXGubwqacu
8lGVkYeAKQN5M4kkoEQzBwola2FfTuCSr2Lntl9vjwECddJzvGN7MTa43hs2dRV+
Dwiizd8XxSmrNi5eAXcZafoh5Txk3FirEPEDjlpMfeLqUS6u2BF+mANvylbqHg==
-----END CERTIFICATE-----
|
(2) Extract the certificate to a file
| (⎈|HomeLab:N/A) root@k8s-ctr:~# vi 1.pem
|
- The certificate block stored in /etc/ssl/certs/ca-certificates.crt is copied into a separate file (1.pem)
- Used afterwards to inspect the details with OpenSSL
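Rather than hand-editing in vi, the trailing block can be cut out mechanically. A sketch with an inline dummy bundle (the same awk works on /etc/ssl/certs/ca-certificates.crt, where the tail output above showed the mkcert CA sitting last):

```shell
# Build a two-cert dummy bundle, then let awk keep only the LAST
# BEGIN...END block by resetting its buffer at every BEGIN line.
printf '%s\n' \
  '-----BEGIN CERTIFICATE-----' 'AAAA' '-----END CERTIFICATE-----' \
  '-----BEGIN CERTIFICATE-----' 'BBBB' '-----END CERTIFICATE-----' \
  > bundle.pem

awk '/BEGIN CERTIFICATE/{buf=""} {buf=buf $0 "\n"} END{printf "%s", buf}' \
  bundle.pem > 1.pem

cat 1.pem   # only the second (last) block remains
```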
(3) Inspect the CA certificate details
The certificate carries CA:TRUE and identifies the local CA.
| (⎈|HomeLab:N/A) root@k8s-ctr:~# openssl x509 -in 1.pem -text -noout
|
✅ Output
| Certificate:
Data:
Version: 3 (0x2)
Serial Number:
9b:08:f4:c1:13:96:b7:52:e1:46:6e:4a:8e:df:f8:f3
Signature Algorithm: sha256WithRSAEncryption
Issuer: O = mkcert development CA, OU = root@k8s-ctr, CN = mkcert root@k8s-ctr
Validity
Not Before: Aug 23 13:07:13 2025 GMT
Not After : Aug 23 13:07:13 2035 GMT
Subject: O = mkcert development CA, OU = root@k8s-ctr, CN = mkcert root@k8s-ctr
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (3072 bit)
Modulus:
00:f2:5b:96:df:37:ed:16:0b:2f:3e:41:0f:65:51:
bc:19:53:93:67:93:d6:46:cc:a8:6f:6b:35:82:d9:
fa:0b:51:bf:e7:bd:db:d3:bc:bb:c0:f7:f1:fc:83:
55:0f:9a:62:51:bb:ed:8a:53:49:75:8f:a7:b4:c6:
35:3e:20:51:1b:0c:5d:fa:e2:1b:66:dd:16:a1:49:
43:38:14:97:ff:14:88:47:d1:4c:5e:dd:88:0f:15:
27:09:04:28:dd:2e:32:6e:4b:3b:d7:de:44:74:82:
32:be:ae:74:81:e4:fd:a2:43:f1:fe:13:6e:50:97:
ef:36:dc:72:08:19:e2:fb:5d:cf:c8:8a:67:5e:9b:
2e:b8:8a:f1:ea:4e:cf:78:d8:02:b5:f8:0c:bc:9b:
ec:d8:57:d7:a1:5d:9b:23:0f:5d:1b:20:c2:64:4c:
94:ae:6f:98:46:08:ff:ea:02:f3:93:e1:9f:01:44:
7a:e5:f7:d4:b1:f0:ee:66:e2:84:75:85:7c:af:a8:
cb:a8:ce:78:9d:05:a7:8c:29:f6:b8:47:9e:da:b7:
90:bc:0c:6a:21:ff:42:09:a2:f9:15:bd:d0:0c:7c:
07:02:10:f2:d5:b0:5a:5b:78:88:14:c4:74:d5:af:
78:eb:05:77:3c:c5:43:db:0c:ee:1f:15:2d:89:04:
17:d0:e2:5f:ce:bf:87:45:9f:d0:cc:0b:66:9f:1a:
d3:c6:ae:3c:23:27:06:b7:5e:cc:39:0f:ac:31:83:
67:fc:cc:80:36:e1:79:92:0c:e4:89:e9:ae:f1:00:
d8:e7:b1:2d:5a:0f:ac:51:ce:ac:26:0e:52:3d:e9:
ae:3c:6d:8f:b6:99:42:38:c3:3c:e4:8f:6b:54:62:
bc:61:49:ee:bb:73:38:38:45:90:8d:e1:0a:0a:b7:
2f:1c:72:1b:62:51:02:64:f7:68:96:84:e9:1a:06:
db:c5:6b:d1:93:e6:ed:0e:5d:28:d0:d2:e7:f8:fe:
2f:e4:0d:d1:58:b0:cc:fa:2a:9f
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 Subject Key Identifier:
7A:2D:48:1E:DF:73:58:5A:EA:80:7A:1E:5B:06:FB:F7:C9:F5:9A:D0
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
64:d7:55:5f:9c:c4:c8:98:08:65:00:eb:f6:4b:af:d4:7e:8d:
4b:8b:94:7c:71:c9:b7:5d:68:00:36:67:a4:16:e8:69:d5:c2:
bc:3e:33:93:70:ef:f2:6d:e9:07:65:62:3e:fe:5f:57:7c:6c:
a0:23:24:65:b9:6e:45:95:cf:5b:b6:ec:55:6d:68:34:11:c7:
97:e4:ab:3b:8c:ae:9c:46:77:1c:c2:d7:f3:ce:fb:de:db:3c:
da:1f:60:23:65:7a:7a:9c:13:11:f2:3e:aa:8e:e8:20:d1:11:
91:ea:6d:dd:84:7a:ee:21:f7:66:c9:71:6e:88:dd:8d:16:25:
e7:d3:22:d9:65:24:e0:e6:72:1b:ff:92:93:13:59:2f:20:e5:
9a:ec:17:94:e7:48:e8:a3:88:a7:97:c1:5b:e0:0d:98:95:6b:
bb:98:c7:76:65:a2:dd:bb:3d:75:23:4e:0e:8c:c7:8d:08:29:
1c:af:ec:af:9f:0f:3e:e8:44:58:00:4f:82:41:a9:b4:d0:2c:
0f:34:f1:6a:43:7b:b1:fa:51:e9:f9:9f:b4:b1:79:c7:f6:54:
ee:90:22:8d:2e:8f:b7:66:45:81:4d:01:2d:86:07:55:dc:c7:
ab:e3:a4:2c:bb:c9:2f:d4:60:7d:6d:79:7c:e2:34:ba:d8:9f:
b6:88:f9:6c:55:c8:56:08:d6:70:10:11:02:2a:04:31:55:3f:
68:e6:aa:ab:93:10:39:28:b6:2d:72:5b:91:d7:1a:e6:f0:a9:
a7:2e:f2:51:95:91:87:80:29:03:79:33:89:24:a0:44:33:07:
0a:25:6b:61:5f:4e:e0:92:af:62:e7:b6:5f:6f:8f:01:02:75:
d2:73:bc:63:7b:31:36:b8:de:1b:36:75:15:7e:0f:08:a2:cd:
df:17:c5:29:ab:36:2e:5e:01:77:19:69:fa:21:e5:3c:64:dc:
58:ab:10:f1:03:8e:5a:4c:7d:e2:ea:51:2e:ae:d8:11:7e:98:
03:6f:ca:56:ea:1e
|
- Issuer / Subject: mkcert development CA (CN = mkcert root@k8s-ctr)
- Validity: 2025-08-23 ~ 2035-08-23 (valid for 10 years)
- X509v3 Subject Key Identifier: unique identifier of the local CA
9. HTTPS access test (Bookinfo service)
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingress tls-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LBIP=$(kubectl get ingress tls-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Result
192.168.10.211
|
| (⎈|HomeLab:N/A) root@k8s-ctr:~# curl -s --resolve bookinfo.cilium.rocks:443:${LBIP} https://bookinfo.cilium.rocks/details/1 | jq
|
✅ Output
| {
"id": 1,
"author": "William Shakespeare",
"year": 1595,
"type": "paperback",
"pages": 200,
"publisher": "PublisherA",
"language": "English",
"ISBN-10": "1234567890",
"ISBN-13": "123-1234567890"
}
|
- JSON response returned successfully (book details)
10. HTTPS access test (Webpod service)
| (⎈|HomeLab:N/A) root@k8s-ctr:~# curl -s --resolve webpod.cilium.rocks:443:${LBIP} https://webpod.cilium.rocks/ -v
|
✅ Output
| * Added webpod.cilium.rocks:443:192.168.10.211 to DNS cache
* Hostname webpod.cilium.rocks was found in DNS cache
* Trying 192.168.10.211:443...
* Connected to webpod.cilium.rocks (192.168.10.211) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
* subject: O=mkcert development certificate; OU=root@k8s-ctr
* start date: Aug 23 13:07:13 2025 GMT
* expire date: Nov 23 13:07:13 2027 GMT
* subjectAltName: host "webpod.cilium.rocks" matched cert's "*.cilium.rocks"
* issuer: O=mkcert development CA; OU=root@k8s-ctr; CN=mkcert root@k8s-ctr
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (3072/128 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/1.x
> GET / HTTP/1.1
> Host: webpod.cilium.rocks
> User-Agent: curl/8.5.0
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
< HTTP/1.1 200 OK
< date: Sat, 23 Aug 2025 13:39:46 GMT
< content-length: 345
< content-type: text/plain; charset=utf-8
< x-envoy-upstream-service-time: 1
< server: envoy
<
Hostname: webpod-697b545f57-cscj8
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.146
IP: fe80::24f0:47ff:fe1f:a7f9
RemoteAddr: 172.20.0.86:43311
GET / HTTP/1.1
Host: webpod.cilium.rocks
User-Agent: curl/8.5.0
Accept: */*
X-Envoy-Internal: true
X-Forwarded-For: 10.0.2.15
X-Forwarded-Proto: https
X-Request-Id: 63c478c3-0b64-4dba-93f9-52ef0594b739
* Connection #0 to host webpod.cilium.rocks left intact
|
- TLSv1.3 handshake succeeded
- Certificate verification: OK (trusted via the mkcert CA)
- Response headers include server: envoy and x-envoy-upstream-service-time
- The Webpod pod's response was returned
11. Delete the Ingress resources
| (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ingress basic-ingress tls-ingress webpod-ingress
# Result
ingress.networking.k8s.io "basic-ingress" deleted
ingress.networking.k8s.io "tls-ingress" deleted
ingress.networking.k8s.io "webpod-ingress" deleted
|