🖥️ Lab Environment Setup
1. Download the Vagrantfile and bring up the VMs

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/Vagrantfile
vagrant up
Output:
Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'k8s-w2' up with 'virtualbox' provider...
==> k8s-ctr: Preparing master VM for linked clones...
k8s-ctr: This is a one time operation. Once the master VM is prepared,
k8s-ctr: it will be used as a base for linked clones, making the creation
k8s-ctr: of new VMs take milliseconds on a modern system.
==> k8s-ctr: Importing base box 'bento/ubuntu-24.04'...
==> k8s-ctr: Cloning VM...
==> k8s-ctr: Matching MAC address for NAT networking...
==> k8s-ctr: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-ctr: Setting the name of the VM: k8s-ctr
==> k8s-ctr: Clearing any previously set network interfaces...
==> k8s-ctr: Preparing network interfaces based on configuration...
k8s-ctr: Adapter 1: nat
k8s-ctr: Adapter 2: hostonly
==> k8s-ctr: Forwarding ports...
k8s-ctr: 22 (guest) => 60000 (host) (adapter 1)
==> k8s-ctr: Running 'pre-boot' VM customizations...
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
k8s-ctr: SSH address: 127.0.0.1:60000
k8s-ctr: SSH username: vagrant
k8s-ctr: SSH auth method: private key
k8s-ctr:
k8s-ctr: Vagrant insecure key detected. Vagrant will automatically replace
k8s-ctr: this with a newly generated keypair for better security.
k8s-ctr:
k8s-ctr: Inserting generated public key within guest...
k8s-ctr: Removing insecure key from the guest if it's present...
k8s-ctr: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-ctr: Machine booted and ready!
==> k8s-ctr: Checking for guest additions in VM...
==> k8s-ctr: Setting hostname...
==> k8s-ctr: Configuring and enabling network interfaces...
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250725-4641-sd34li.sh
k8s-ctr: >>>> Initial Config Start <<<<
k8s-ctr: [TASK 1] Setting Profile & Bashrc
k8s-ctr: [TASK 2] Disable AppArmor
k8s-ctr: [TASK 3] Disable and turn off SWAP
k8s-ctr: [TASK 4] Install Packages
k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-ctr: [TASK 6] Install Packages & Helm
k8s-ctr: >>>> Initial Config End <<<<
==> k8s-ctr: Running provisioner: shell...
k8s-ctr: Running: /tmp/vagrant-shell20250725-4641-ffu2wg.sh
k8s-ctr: >>>> K8S Controlplane config Start <<<<
k8s-ctr: [TASK 1] Initial Kubernetes
k8s-ctr: [TASK 2] Setting kube config file
k8s-ctr: [TASK 3] Source the completion
k8s-ctr: [TASK 4] Alias kubectl to k
k8s-ctr: [TASK 5] Install Kubectx & Kubens
k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
k8s-ctr: [TASK 7] Install Cilium CNI
k8s-ctr: [TASK 8] Install Cilium CLI
k8s-ctr: cilium
k8s-ctr: [TASK 9] local DNS with hosts file
k8s-ctr: >>>> K8S Controlplane Config End <<<<
==> k8s-w1: Cloning VM...
==> k8s-w1: Matching MAC address for NAT networking...
==> k8s-w1: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w1: Setting the name of the VM: k8s-w1
==> k8s-w1: Clearing any previously set network interfaces...
==> k8s-w1: Preparing network interfaces based on configuration...
k8s-w1: Adapter 1: nat
k8s-w1: Adapter 2: hostonly
==> k8s-w1: Forwarding ports...
k8s-w1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-w1: Running 'pre-boot' VM customizations...
==> k8s-w1: Booting VM...
==> k8s-w1: Waiting for machine to boot. This may take a few minutes...
k8s-w1: SSH address: 127.0.0.1:60001
k8s-w1: SSH username: vagrant
k8s-w1: SSH auth method: private key
k8s-w1:
k8s-w1: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w1: this with a newly generated keypair for better security.
k8s-w1:
k8s-w1: Inserting generated public key within guest...
k8s-w1: Removing insecure key from the guest if it's present...
k8s-w1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w1: Machine booted and ready!
==> k8s-w1: Checking for guest additions in VM...
==> k8s-w1: Setting hostname...
==> k8s-w1: Configuring and enabling network interfaces...
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250725-4641-wbk3an.sh
k8s-w1: >>>> Initial Config Start <<<<
k8s-w1: [TASK 1] Setting Profile & Bashrc
k8s-w1: [TASK 2] Disable AppArmor
k8s-w1: [TASK 3] Disable and turn off SWAP
k8s-w1: [TASK 4] Install Packages
k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w1: [TASK 6] Install Packages & Helm
k8s-w1: >>>> Initial Config End <<<<
==> k8s-w1: Running provisioner: shell...
k8s-w1: Running: /tmp/vagrant-shell20250725-4641-3fipel.sh
k8s-w1: >>>> K8S Node config Start <<<<
k8s-w1: [TASK 1] K8S Controlplane Join
k8s-w1: >>>> K8S Node config End <<<<
==> k8s-w2: Cloning VM...
==> k8s-w2: Matching MAC address for NAT networking...
==> k8s-w2: Checking if box 'bento/ubuntu-24.04' version '202502.21.0' is up to date...
==> k8s-w2: Setting the name of the VM: k8s-w2
==> k8s-w2: Clearing any previously set network interfaces...
==> k8s-w2: Preparing network interfaces based on configuration...
k8s-w2: Adapter 1: nat
k8s-w2: Adapter 2: hostonly
==> k8s-w2: Forwarding ports...
k8s-w2: 22 (guest) => 60002 (host) (adapter 1)
==> k8s-w2: Running 'pre-boot' VM customizations...
==> k8s-w2: Booting VM...
==> k8s-w2: Waiting for machine to boot. This may take a few minutes...
k8s-w2: SSH address: 127.0.0.1:60002
k8s-w2: SSH username: vagrant
k8s-w2: SSH auth method: private key
k8s-w2:
k8s-w2: Vagrant insecure key detected. Vagrant will automatically replace
k8s-w2: this with a newly generated keypair for better security.
k8s-w2:
k8s-w2: Inserting generated public key within guest...
k8s-w2: Removing insecure key from the guest if it's present...
k8s-w2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-w2: Machine booted and ready!
==> k8s-w2: Checking for guest additions in VM...
==> k8s-w2: Setting hostname...
==> k8s-w2: Configuring and enabling network interfaces...
==> k8s-w2: Running provisioner: shell...
k8s-w2: Running: /tmp/vagrant-shell20250725-4641-9v5rvk.sh
k8s-w2: >>>> Initial Config Start <<<<
k8s-w2: [TASK 1] Setting Profile & Bashrc
k8s-w2: [TASK 2] Disable AppArmor
k8s-w2: [TASK 3] Disable and turn off SWAP
k8s-w2: [TASK 4] Install Packages
k8s-w2: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
k8s-w2: [TASK 6] Install Packages & Helm
k8s-w2: >>>> Initial Config End <<<<
==> k8s-w2: Running provisioner: shell...
k8s-w2: Running: /tmp/vagrant-shell20250725-4641-c0575o.sh
k8s-w2: >>>> K8S Node config Start <<<<
k8s-w2: [TASK 1] K8S Controlplane Join
k8s-w2: >>>> K8S Node config End <<<<
2. Connect to the VMs and Check Basic Info
(1) Connect to k8s-ctr
Output:
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Fri Jul 25 08:33:03 PM KST 2025
System load: 0.57
Usage of /: 25.0% of 30.34GB
Memory usage: 50%
Swap usage: 0%
Processes: 171
Users logged in: 0
IPv4 address for eth0: 10.0.2.15
IPv6 address for eth0: fd17:625c:f037:2:a00:27ff:fe6b:69c9
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
(⎈|HomeLab:N/A) root@k8s-ctr:~#
(2) Check the Hostname/IP Mapping
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts
Output:
127.0.0.1 localhost
127.0.1.1 vagrant
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.2.1 k8s-ctr k8s-ctr
192.168.10.100 k8s-ctr
(3) SSH into k8s-w1 and k8s-w2 with sshpass and check their hostnames
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w2 hostname
Warning: Permanently added 'k8s-w2' (ED25519) to the list of known hosts.
k8s-w2
3. Check the kubeadm-config Settings
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system kubeadm-config
Output:
Name: kubeadm-config
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
ClusterConfiguration:
----
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.33.2
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/16
proxy:
disabled: true
scheduler: {}
BinaryData
====
Events: <none>
- Network settings confirmed: dnsDomain: cluster.local, podSubnet: 10.244.0.0/16, serviceSubnet: 10.96.0.0/16 (a quick extraction command follows)
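A quick way to pull out just these values from the ConfigMap (a small convenience command, assuming the ClusterConfiguration YAML is stored under the data key shown above):

kubectl get cm -n kube-system kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep -A3 'networking:'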
4. Check the kubelet-config Settings
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system kubelet-config
Output:
Name: kubelet-config
Namespace: kube-system
Labels: <none>
Annotations: kubeadm.kubernetes.io/component-config.hash: sha256:0ff07274ab31cc8c0f9d989e90179a90b6e9b633c8f3671993f44185a0791127
Data
====
kubelet:
----
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
crashLoopBackOff: {}
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
BinaryData
====
Events: <none>
5. Check Node Status and IPs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 12m v1.33.2 192.168.10.100 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w1 Ready <none> 9m38s v1.33.2 192.168.10.101 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
k8s-w2 Ready <none> 7m40s v1.33.2 192.168.10.102 <none> Ubuntu 24.04.2 LTS 6.8.0-53-generic containerd://1.7.27
6. Check kubeadm-flags.env on Each Node
- Node IPs: k8s-ctr: 192.168.10.100, k8s-w1: 192.168.10.101, k8s-w2: 192.168.10.102
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
Output:
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.100 --pod-infra-container-image=registry.k8s.io/pause:3.10"
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i cat /var/lib/kubelet/kubeadm-flags.env ; echo; done
Output:
>> node : k8s-w1 <<
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.101 --pod-infra-container-image=registry.k8s.io/pause:3.10"
>> node : k8s-w2 <<
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.102 --pod-infra-container-image=registry.k8s.io/pause:3.10"
7. Check the Pod CIDRs
(1) Pod CIDRs allocated by the kube-controller-manager
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
Output:
k8s-ctr 10.244.0.0/24
k8s-w1 10.244.1.0/24
k8s-w2 10.244.2.0/24
(2) Pod CIDRs managed by Cilium
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
Output:
"podCIDRs": [
"172.20.0.0/24"
],
--
"podCIDRs": [
"172.20.1.0/24"
],
--
"podCIDRs": [
"172.20.2.0/24"
],
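For a per-node pairing without the raw JSON, a jsonpath query can print each node next to its Cilium-managed CIDR (a sketch, assuming the CiliumNode CRD keeps the pool under .spec.ipam.podCIDRs, as the grep output above suggests):

kubectl get ciliumnode -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.ipam.podCIDRs}{"\n"}{end}'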
8. Check Cilium Status
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
Output:
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 3
cilium-envoy Running: 3
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 2/2 managed by Cilium
Helm chart version: 1.17.6
Image versions cilium quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 3
cilium-envoy quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 3
cilium-operator quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
9. Inspect Detailed Cilium Configuration
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view
Output:
agent-not-ready-taint-key node.cilium.io/agent-not-ready
arping-refresh-period 30s
auto-direct-node-routes true
bpf-distributed-lru false
bpf-events-drop-enabled true
bpf-events-policy-verdict-enabled true
bpf-events-trace-enabled true
bpf-lb-acceleration disabled
bpf-lb-algorithm-annotation false
bpf-lb-external-clusterip false
bpf-lb-map-max 65536
bpf-lb-mode-annotation false
bpf-lb-sock false
bpf-lb-source-range-all-types false
bpf-map-dynamic-size-ratio 0.0025
bpf-policy-map-max 16384
bpf-root /sys/fs/bpf
cgroup-root /run/cilium/cgroupv2
cilium-endpoint-gc-interval 5m0s
cluster-id 0
cluster-name default
cluster-pool-ipv4-cidr 172.20.0.0/16
cluster-pool-ipv4-mask-size 24
clustermesh-enable-endpoint-sync false
clustermesh-enable-mcs-api false
cni-exclusive true
cni-log-file /var/run/cilium/cilium-cni.log
custom-cni-conf false
datapath-mode veth
debug true
debug-verbose
default-lb-service-ipam lbipam
direct-routing-skip-unreachable false
dnsproxy-enable-transparent-mode true
dnsproxy-socket-linger-timeout 10
egress-gateway-reconciliation-trigger-interval 1s
enable-auto-protect-node-port-range true
enable-bpf-clock-probe false
enable-bpf-masquerade true
enable-endpoint-health-checking false
enable-endpoint-lockdown-on-policy-overflow false
enable-endpoint-routes true
enable-experimental-lb false
enable-health-check-loadbalancer-ip false
enable-health-check-nodeport true
enable-health-checking false
enable-hubble false
enable-internal-traffic-policy true
enable-ipv4 true
enable-ipv4-big-tcp false
enable-ipv4-masquerade true
enable-ipv6 false
enable-ipv6-big-tcp false
enable-ipv6-masquerade true
enable-k8s-networkpolicy true
enable-k8s-terminating-endpoint true
enable-l2-neigh-discovery true
enable-l7-proxy true
enable-lb-ipam true
enable-local-redirect-policy false
enable-masquerade-to-route-source false
enable-metrics true
enable-node-selector-labels false
enable-non-default-deny-policies true
enable-policy default
enable-policy-secrets-sync true
enable-runtime-device-detection true
enable-sctp false
enable-source-ip-verification true
enable-svc-source-range-check true
enable-tcx true
enable-vtep false
enable-well-known-identities false
enable-xt-socket-fallback true
envoy-access-log-buffer-size 4096
envoy-base-id 0
envoy-keep-cap-netbindservice false
external-envoy-proxy true
health-check-icmp-failure-threshold 3
http-retry-count 3
identity-allocation-mode crd
identity-gc-interval 15m0s
identity-heartbeat-timeout 30m0s
install-no-conntrack-iptables-rules true
ipam cluster-pool
ipam-cilium-node-update-rate 15s
iptables-random-fully false
ipv4-native-routing-cidr 172.20.0.0/16
k8s-require-ipv4-pod-cidr false
k8s-require-ipv6-pod-cidr false
kube-proxy-replacement true
kube-proxy-replacement-healthz-bind-address
max-connected-clusters 255
mesh-auth-enabled true
mesh-auth-gc-interval 5m0s
mesh-auth-queue-size 1024
mesh-auth-rotated-identities-queue-size 1024
monitor-aggregation medium
monitor-aggregation-flags all
monitor-aggregation-interval 5s
nat-map-stats-entries 32
nat-map-stats-interval 30s
node-port-bind-protection true
nodeport-addresses
nodes-gc-interval 5m0s
operator-api-serve-addr 127.0.0.1:9234
operator-prometheus-serve-addr :9963
policy-cidr-match-mode
policy-secrets-namespace cilium-secrets
policy-secrets-only-from-secrets-namespace true
preallocate-bpf-maps false
procfs /host/proc
proxy-connect-timeout 2
proxy-idle-timeout-seconds 60
proxy-initial-fetch-timeout 30
proxy-max-concurrent-retries 128
proxy-max-connection-duration-seconds 0
proxy-max-requests-per-connection 0
proxy-xff-num-trusted-hops-egress 0
proxy-xff-num-trusted-hops-ingress 0
remove-cilium-node-taints true
routing-mode native
service-no-backend-response reject
set-cilium-is-up-condition true
set-cilium-node-taints true
synchronize-k8s-nodes true
tofqdns-dns-reject-response-code refused
tofqdns-enable-dns-compression true
tofqdns-endpoint-max-ip-per-hostname 1000
tofqdns-idle-connection-grace-period 0s
tofqdns-max-deferred-connection-deletes 10000
tofqdns-proxy-response-max-delay 100ms
tunnel-protocol vxlan
tunnel-source-port-range 0-0
unmanaged-pod-watcher-interval 15
vtep-cidr
vtep-endpoint
vtep-mac
vtep-mask
write-cni-conf-when-ready /host/etc/cni/net.d/05-cilium.conflist
- Shows the current settings for CNI behavior, BPF policy, the L7 proxy, metrics, the DNS proxy, and more (a filtered spot-check follows)
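To spot-check just the keys that matter for this lab's datapath (IPAM, routing, kube-proxy replacement, masquerading), the list can be filtered; the key names are taken from the output above:

cilium config view | grep -E '^(ipam|routing-mode|auto-direct-node-routes|kube-proxy-replacement|enable-bpf-masquerade) '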
10. Check the Cilium Agent's Internal Runtime Configuration
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system -c cilium-agent -it ds/cilium -- cilium-dbg config
Output:
##### Read-write configurations #####
ConntrackAccounting : Disabled
ConntrackLocal : Disabled
Debug : Disabled
DebugLB : Disabled
DebugPolicy : Enabled
DropNotification : Enabled
MonitorAggregationLevel : Medium
PolicyAccounting : Enabled
PolicyAuditMode : Disabled
PolicyTracing : Disabled
PolicyVerdictNotification : Enabled
SourceIPVerification : Enabled
TraceNotification : Enabled
PolicyEnforcement : default
11. Check Cilium Endpoint Status
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints -A
Output:
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
kube-system coredns-674b8bbfcf-8lcmh 3235 ready 172.20.0.21
kube-system coredns-674b8bbfcf-gjs6z 3235 ready 172.20.0.11
- Their IPv4 addresses: 172.20.0.21 and 172.20.0.11
12. Background: Monitoring in Cilium and Why Hubble Was Built
- Cilium offers stronger monitoring support than most other CNIs
- Because it is eBPF-based, it does not mesh easily with existing monitoring systems, and standing up separate logging and monitoring stacks for it is not trivial
- To overcome this, the Cilium project developed Hubble alongside Cilium from the early stages so that monitoring is supported out of the box
13. Pre-installation Checks for Hubble
(1) Check whether Hubble is enabled
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i hubble
Output:
enable-hubble false
(2) Check whether Hubble-related Secrets exist
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret -n kube-system | grep -iE 'cilium-ca|hubble'
- Nothing is printed
- Hubble is not installed yet; these Secrets will be created during installation
(3) Check the currently open TCP ports
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep -iE 'cilium|hubble' | tee before.txt
Output:
LISTEN 0 4096 127.0.0.1:46395 0.0.0.0:* users:(("cilium-agent",pid=5229,fd=42))
LISTEN 0 4096 127.0.0.1:9234 0.0.0.0:* users:(("cilium-operator",pid=4772,fd=9))
LISTEN 0 4096 0.0.0.0:9964 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=25))
LISTEN 0 4096 0.0.0.0:9964 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=24))
LISTEN 0 4096 127.0.0.1:9890 0.0.0.0:* users:(("cilium-agent",pid=5229,fd=6))
LISTEN 0 4096 127.0.0.1:9891 0.0.0.0:* users:(("cilium-operator",pid=4772,fd=6))
LISTEN 0 4096 127.0.0.1:9878 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=27))
LISTEN 0 4096 127.0.0.1:9878 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=26))
LISTEN 0 4096 127.0.0.1:9879 0.0.0.0:* users:(("cilium-agent",pid=5229,fd=61))
LISTEN 0 4096 *:9963 *:* users:(("cilium-operator",pid=4772,fd=7))
- Record the port state before enabling Hubble so it can be compared afterwards
🔭 Installing and Configuring Hubble
1. Install Hubble (Helm upgrade)
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort \
--set hubble.ui.service.nodePort=31234 \
--set hubble.export.static.enabled=true \
--set hubble.export.static.filePath=/var/run/cilium/hubble/events.log \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true \
--set hubble.metrics.enableOpenMetrics=true \
--set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"
Output:
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Fri Jul 25 21:03:35 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.17.6.
For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
- The UI service is exposed via NodePort 31234
- Prometheus and OpenMetrics are enabled (a quick check of the recorded values follows)
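To double-check what Helm actually recorded for the release (only the user-supplied overrides, not the rendered ConfigMap), the values can be read back:

helm get values cilium -n kube-system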
2. Check Status After Installation
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
Output:
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: OK
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 3
cilium-envoy Running: 3
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay Running: 1
hubble-ui Running: 1
Cluster Pods: 4/4 managed by Cilium
Helm chart version: 1.17.6
Image versions cilium quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 3
cilium-envoy quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 3
cilium-operator quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
hubble-relay quay.io/cilium/hubble-relay:v1.17.6@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60: 1
hubble-ui quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15: 1
hubble-ui quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392: 1
- Hubble Relay now reports OK
- hubble-relay and hubble-ui pods have been deployed
3. Check the Config Changes
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i hubble
Output:
enable-hubble true
enable-hubble-open-metrics true
hubble-disable-tls false
hubble-export-allowlist
hubble-export-denylist
hubble-export-fieldmask
hubble-export-file-max-backups 5
hubble-export-file-max-size-mb 10
hubble-export-file-path /var/run/cilium/hubble/events.log
hubble-listen-address :4244
hubble-metrics dns drop tcp flow port-distribution icmp httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction
hubble-metrics-server :9965
hubble-metrics-server-enable-tls false
hubble-socket-path /var/run/cilium/hubble.sock
hubble-tls-cert-file /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file /var/lib/cilium/tls/hubble/server.key
- enable-hubble: true
- hubble-metrics-server: :9965
- hubble-listen-address: :4244
- hubble-export-file-path: /var/run/cilium/hubble/events.log
- TLS cert/key paths have been added
4. Verify Secret Creation
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret -n kube-system | grep -iE 'cilium-ca|hubble'
Output:
cilium-ca Opaque 2 3m43s
hubble-relay-client-certs kubernetes.io/tls 3 3m43s
hubble-server-certs kubernetes.io/tls 3 3m43s
- Secrets that did not exist before installation have now been created
5. Check Port Changes
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep -iE 'cilium|hubble' | tee after.txt
Output:
LISTEN 0 4096 127.0.0.1:46395 0.0.0.0:* users:(("cilium-agent",pid=7666,fd=53))
LISTEN 0 4096 127.0.0.1:9234 0.0.0.0:* users:(("cilium-operator",pid=4772,fd=9))
LISTEN 0 4096 0.0.0.0:9964 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=25))
LISTEN 0 4096 0.0.0.0:9964 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=24))
LISTEN 0 4096 127.0.0.1:9890 0.0.0.0:* users:(("cilium-agent",pid=7666,fd=6))
LISTEN 0 4096 127.0.0.1:9891 0.0.0.0:* users:(("cilium-operator",pid=4772,fd=6))
LISTEN 0 4096 127.0.0.1:9878 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=27))
LISTEN 0 4096 127.0.0.1:9878 0.0.0.0:* users:(("cilium-envoy",pid=4828,fd=26))
LISTEN 0 4096 127.0.0.1:9879 0.0.0.0:* users:(("cilium-agent",pid=7666,fd=58))
LISTEN 0 4096 *:4244 *:* users:(("cilium-agent",pid=7666,fd=49))
LISTEN 0 4096 *:9965 *:* users:(("cilium-agent",pid=7666,fd=31))
LISTEN 0 4096 *:9963 *:* users:(("cilium-operator",pid=4772,fd=7))
LISTEN 0 4096 *:9962 *:* users:(("cilium-agent",pid=7666,fd=7))
- Ports added compared to before the install: 4244, 9965, 9962
- 4244: the Hubble server embedded in each node's cilium-agent listens here
- 9965: the Hubble metrics server
- 9962: the cilium-agent Prometheus metrics port (enabled by prometheus.enabled=true)
(⎈|HomeLab:N/A) root@k8s-ctr:~# vi -d before.txt after.txt
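vimdiff is interactive; for a scriptable comparison the same check works with plain diff (note the agent restarted during the upgrade, so changed PIDs add noise beyond the new listeners):

diff -u before.txt after.txt                          # full diff, PID changes included
diff before.txt after.txt | grep -E '4244|9965|9962'  # just the new Hubble/metrics listeners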
6. Check Port 4244 on Each Node
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo ss -tnlp |grep 4244 ; echo; done
Output:
>> node : k8s-w1 <<
LISTEN 0 4096 *:4244 *:* users:(("cilium-agent",pid=6068,fd=52))
>> node : k8s-w2 <<
LISTEN 0 4096 *:4244 *:* users:(("cilium-agent",pid=5995,fd=52))
- Port 4244 is open on each node (k8s-w1, k8s-w2)
- This means Hubble collects data through a server built into each node's cilium-agent
7. Verify Hubble Relay Operation
(1) hubble-relay pod status
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=hubble-relay
Output:
NAME READY STATUS RESTARTS AGE
hubble-relay-5dcd46f5c-s4wqq 1/1 Running 0 9m3s
- The hubble-relay pod is deployed and Running
- It collects flow data over port 4245/TCP
(2) hubble-relay pod details
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system -l k8s-app=hubble-relay
Output:
Name: hubble-relay-5dcd46f5c-s4wqq
Namespace: kube-system
Priority: 0
Service Account: hubble-relay
Node: k8s-w2/192.168.10.102
Start Time: Fri, 25 Jul 2025 21:03:38 +0900
Labels: app.kubernetes.io/name=hubble-relay
app.kubernetes.io/part-of=cilium
k8s-app=hubble-relay
pod-template-hash=5dcd46f5c
Annotations: <none>
Status: Running
IP: 172.20.2.58
IPs:
IP: 172.20.2.58
Controlled By: ReplicaSet/hubble-relay-5dcd46f5c
Containers:
hubble-relay:
Container ID: containerd://d385feef54ef21ba44f32d93715f5c95124d84cd878a410bbcd11be2cc9bf62d
Image: quay.io/cilium/hubble-relay:v1.17.6@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60
Image ID: quay.io/cilium/hubble-relay@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60
Port: 4245/TCP
Host Port: 0/TCP
Command:
hubble-relay
Args:
serve
--debug
State: Running
Started: Fri, 25 Jul 2025 21:04:03 +0900
Ready: True
Restart Count: 0
Liveness: grpc <pod>:4222 delay=10s timeout=10s period=10s #success=1 #failure=12
Readiness: grpc <pod>:4222 delay=0s timeout=3s period=10s #success=1 #failure=3
Startup: grpc <pod>:4222 delay=10s timeout=1s period=3s #success=1 #failure=20
Environment: <none>
Mounts:
/etc/hubble-relay from config (ro)
/var/lib/hubble-relay/tls from tls (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hubble-relay-config
Optional: false
tls:
Type: Projected (a volume that contains injected data from multiple sources)
SecretName: hubble-relay-client-certs
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m40s default-scheduler Successfully assigned kube-system/hubble-relay-5dcd46f5c-s4wqq to k8s-w2
Warning FailedMount 9m39s kubelet MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
Normal Pulling 9m24s kubelet Pulling image "quay.io/cilium/hubble-relay:v1.17.6@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60"
Normal Pulled 9m15s kubelet Successfully pulled image "quay.io/cilium/hubble-relay:v1.17.6@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60" in 8.579s (8.579s including waiting). Image size: 29149253 bytes.
Normal Created 9m15s kubelet Created container: hubble-relay
Normal Started 9m15s kubelet Started container hubble-relay
(3) Service and endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc get svc,ep -n kube-system hubble-relay
Output:
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hubble-relay ClusterIP 10.96.50.64 <none> 80/TCP 11m
NAME ENDPOINTS AGE
endpoints/hubble-relay 172.20.2.58:4245 11m
- service/hubble-relay is wired to endpoints/hubble-relay
- Traffic to ClusterIP 10.96.50.64 on 80/TCP is forwarded internally to port 4245
(4) How Hubble Relay talks to Hubble Peer
- hubble-relay discovers peers through the hubble-peer service (ClusterIP:443) and collects data from port :4244 on each node
- The flow data comes from the Hubble server inside each node's cilium-agent
- As nodes are added, the hubble-peer endpoints grow automatically and the collection scope expands with them
8. List the ConfigMaps
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system
Output:
NAME DATA AGE
cilium-config 158 51m
cilium-envoy-config 1 51m
coredns 1 51m
extension-apiserver-authentication 6 51m
hubble-relay-config 1 13m
hubble-ui-nginx 1 13m
kube-apiserver-legacy-service-account-token-tracking 1 51m
kube-root-ca.crt 1 51m
kubeadm-config 1 51m
kubelet-config 1 51m
- Hubble-related ConfigMaps such as hubble-relay-config and hubble-ui-nginx have been created
9. Check the Hubble Relay Settings
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system hubble-relay-config
Output:
Name: hubble-relay-config
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: cilium
meta.helm.sh/release-namespace: kube-system
Data
====
config.yaml:
----
cluster-name: default
peer-service: "hubble-peer.kube-system.svc.cluster.local.:443"
listen-address: :4245
gops: true
gops-port: "9893"
retry-timeout:
sort-buffer-len-max:
sort-buffer-drain-timeout:
tls-hubble-client-cert-file: /var/lib/hubble-relay/tls/client.crt
tls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key
tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt
disable-server-tls: true
BinaryData
====
Events: <none>
- peer-service points at hubble-peer.kube-system.svc.cluster.local.:443
- listen-address: :4245, so Hubble Relay serves on port 4245
- TLS certificate paths (/var/lib/hubble-relay/tls/) and other details are visible here
10. Check the hubble-peer Service and Endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system hubble-peer
Output:
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hubble-peer ClusterIP 10.96.99.7 <none> 443/TCP 15m
NAME ENDPOINTS AGE
endpoints/hubble-peer 192.168.10.100:4244,192.168.10.101:4244,192.168.10.102:4244 15m
- The hubble-peer service is created with ClusterIP 10.96.99.7
- Its endpoints are port 4244 on every node: 192.168.10.100:4244, 192.168.10.101:4244, 192.168.10.102:4244
- Each cilium-agent runs in the host network, exposing port 4244 directly on the node
11. Inspect the hubble-ui Pod
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system -l k8s-app=hubble-ui
Output:
Name: hubble-ui-76d4965bb6-p4blh
Namespace: kube-system
Priority: 0
Service Account: hubble-ui
Node: k8s-w1/192.168.10.101
Start Time: Fri, 25 Jul 2025 21:03:38 +0900
Labels: app.kubernetes.io/name=hubble-ui
app.kubernetes.io/part-of=cilium
k8s-app=hubble-ui
pod-template-hash=76d4965bb6
Annotations: <none>
Status: Running
IP: 172.20.1.79
IPs:
IP: 172.20.1.79
Controlled By: ReplicaSet/hubble-ui-76d4965bb6
Containers:
frontend:
Container ID: containerd://e4f17c9e17383bd95bb54c77040bad5c4e0bbe619c99306f72edf3c29433bbd9
Image: quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392
Image ID: quay.io/cilium/hubble-ui@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392
Port: 8081/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 25 Jul 2025 21:04:02 +0900
Ready: True
Restart Count: 0
Liveness: http-get http://:8081/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/nginx/conf.d/default.conf from hubble-ui-nginx-conf (rw,path="nginx.conf")
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62g8t (ro)
backend:
Container ID: containerd://339fc85862a2589dfaced99bcc55e1abe29d23d59dd8e35e64566fca9f5cf65c
Image: quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15
Image ID: quay.io/cilium/hubble-ui-backend@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15
Port: 8090/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 25 Jul 2025 21:04:10 +0900
Ready: True
Restart Count: 0
Environment:
EVENTS_SERVER_PORT: 8090
FLOWS_API_ADDR: hubble-relay:80
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62g8t (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
hubble-ui-nginx-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hubble-ui-nginx
Optional: false
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-62g8t:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned kube-system/hubble-ui-76d4965bb6-p4blh to k8s-w1
Normal Pulling 19m kubelet Pulling image "quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392"
Normal Pulled 19m kubelet Successfully pulled image "quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392" in 7.822s (7.822s including waiting). Image size: 11129482 bytes.
Normal Created 19m kubelet Created container: frontend
Normal Started 19m kubelet Started container frontend
Normal Pulling 19m kubelet Pulling image "quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15"
Normal Pulled 19m kubelet Successfully pulled image "quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15" in 7.109s (7.109s including waiting). Image size: 20317203 bytes.
Normal Created 19m kubelet Created container: backend
Normal Started 19m kubelet Started container backend
- The hubble-ui pod runs two containers: frontend (8081/TCP) and backend (8090/TCP)
- The frontend (Nginx) proxies to the backend, which serves the flow data received from Hubble Relay to the UI
- The backend container reaches Hubble Relay via FLOWS_API_ADDR=hubble-relay:80
12. Check the Hubble UI Nginx Config
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n kube-system hubble-ui-nginx
Output:
Name: hubble-ui-nginx
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: cilium
meta.helm.sh/release-namespace: kube-system
Data
====
nginx.conf:
----
server {
listen 8081;
listen [::]:8081;
server_name localhost;
root /app;
index index.html;
client_max_body_size 1G;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
location /api {
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_pass http://127.0.0.1:8090;
}
location / {
# double `/index.html` is required here
try_files $uri $uri/ /index.html /index.html;
}
# Liveness probe
location /healthz {
access_log off;
add_header Content-Type text/plain;
return 200 'ok';
}
}
}
BinaryData
====
Events: <none>
- Nginx listens on port 8081
- /api requests are proxied to the backend (127.0.0.1:8090); everything else falls back to /index.html
- A /healthz endpoint supports health checks (a quick probe follows)
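The /healthz endpoint can be exercised directly; a minimal sketch using a temporary port-forward (the local port 18081 is an arbitrary choice):

kubectl -n kube-system port-forward deploy/hubble-ui 18081:8081 >/dev/null 2>&1 &
sleep 2 && curl -s http://127.0.0.1:18081/healthz ; echo    # expect: ok
kill %1    # stop the port-forward (assumes it is job %1)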
13. Check the hubble-ui Service and Endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system hubble-ui
Output:
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hubble-ui NodePort 10.96.93.209 <none> 80:31234/TCP 22m
NAME ENDPOINTS AGE
endpoints/hubble-ui 172.20.1.79:8081 22m
- The hubble-ui service is created as NodePort 31234
- ClusterIP 10.96.93.209:80 maps to pod 172.20.1.79:8081
- Externally, the Hubble UI is reachable at <control-plane IP>:31234
14. Open the Hubble UI in a Browser
(⎈|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo -e "http://$NODEIP:31234"
Output:
http://192.168.10.100:31234
- Switching to the kube-system namespace in the UI shows detailed flow information collected via Hubble Relay
- Clicking a flow reveals its details (source, destination, protocol, and so on)
💻 Installing the Hubble Client
(⎈|HomeLab:N/A) root@k8s-ctr:~# HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
which hubble
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 24.6M 100 24.6M 0 0 9606k 0 0:00:02 0:00:02 --:--:-- 12.6M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 92 100 92 0 0 187 0 --:--:-- --:--:-- --:--:-- 187
hubble
/usr/local/bin/hubble
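The install block downloads the .sha256sum file but never verifies it; checking the tarball before extraction is a small extra step, mirroring the upstream install instructions:

sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum    # expect: hubble-linux-amd64.tar.gz: OK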
To use the Hubble client, you must point it at the address of a reachable Hubble Relay:
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble status
failed getting status: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused"
🔌 Verifying Access to the Hubble API
1. Start a Port-forward to the Hubble API
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&
[1] 8917
(⎈|HomeLab:N/A) root@k8s-ctr:~# ℹ️  Hubble Relay is available at 127.0.0.1:4245
- Starts a background port-forward to Hubble Relay (port 4245)
2. Verify the Port-forwarded Hubble API Port
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep 4245
Output:
LISTEN 0 4096 127.0.0.1:4245 0.0.0.0:* users:(("cilium",pid=8917,fd=7))
LISTEN 0 4096 [::1]:4245 [::]:* users:(("cilium",pid=8917,fd=8))
- 127.0.0.1:4245 and [::1]:4245 are now open, owned by the cilium process
3. Check Hubble Status
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble status
Output:
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12,285/12,285 (100.00%)
Flows/s: 35.77
Connected Nodes: 3/3
4. Check the Hubble CLI Configuration
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble config view
Output:
basic-auth-password: ""
basic-auth-username: ""
config: /root/.config/hubble/config.yaml
debug: false
kube-context: ""
kube-namespace: kube-system
kubeconfig: ""
port-forward: false
port-forward-port: "4245"
request-timeout: 12s
server: localhost:4245
timeout: 5s
tls: false
tls-allow-insecure: false
tls-ca-cert-files: []
tls-client-cert-file: ""
tls-client-key-file: ""
tls-server-name: ""
5. Check the Hubble CLI Server Option
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble help status | grep 'server string'
Output:
--server string Address of a Hubble server. Ignored when --input-file or --port-forward is provided. (default "localhost:4245")
- The --server flag lets you point the CLI at any Hubble server address (see the note below)
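Since this cluster uses native routing with directly routable pod IPs, and the relay's server-side TLS is disabled in this config, it may also work to skip the port-forward and aim the CLI straight at the relay pod (untested sketch; 172.20.2.58 is the relay pod IP from the endpoint listing above):

hubble status --server 172.20.2.58:4245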
6. Query CiliumEndpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -n kube-system
Output:
NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
coredns-674b8bbfcf-8lcmh 3235 ready 172.20.0.21
coredns-674b8bbfcf-gjs6z 3235 ready 172.20.0.11
hubble-relay-5dcd46f5c-s4wqq 40607 ready 172.20.2.58
hubble-ui-76d4965bb6-p4blh 14998 ready 172.20.1.79
7. Observe Live Traffic with hubble observe
- TCP/UDP flows involving hubble-relay, hubble-ui, coredns, and others are recorded in real time
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe
Output:
Jul 26 01:38:52.399: kube-system/hubble-relay-5dcd46f5c-s4wqq:60070 (ID:40607) -> 192.168.10.102:4244 (host) to-stack FORWARDED (TCP Flags: ACK)
Jul 26 01:38:52.613: kube-system/hubble-relay-5dcd46f5c-s4wqq:53066 (ID:40607) <- 192.168.10.100:4244 (kube-apiserver) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) -> kube-system/hubble-relay-5dcd46f5c-s4wqq:4222 (ID:40607) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) <- kube-system/hubble-relay-5dcd46f5c-s4wqq:4222 (ID:40607) to-stack FORWARDED (TCP Flags: SYN, ACK)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) -> kube-system/hubble-relay-5dcd46f5c-s4wqq:4222 (ID:40607) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) -> kube-system/hubble-relay-5dcd46f5c-s4wqq:4222 (ID:40607) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:53.990: 10.0.2.15:41170 (host) <- kube-system/hubble-relay-5dcd46f5c-s4wqq:4222 (ID:40607) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:38:53.991: 10.0.2.15:41170 (host) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:53.991: 10.0.2.15:41170 (host) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:53.991: 10.0.2.15:41170 (host) -> kube-system/hubble-relay-5dcd46f5c-s4wqq:4222 (ID:40607) to-endpoint FORWARDED (TCP Flags: ACK, RST)
Jul 26 01:38:54.690: 10.0.2.15:44558 (host) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:54.690: 10.0.2.15:44558 (host) -> kube-system/hubble-ui-76d4965bb6-p4blh:8081 (ID:14998) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:38:54.690: 10.0.2.15:44558 (host) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:54.690: 10.0.2.15:44558 (host) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:54.691: 10.0.2.15:44558 (host) <- kube-system/hubble-ui-76d4965bb6-p4blh:8081 (ID:14998) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:38:54.691: 10.0.2.15:44558 (host) <- kube-system/hubble-ui-76d4965bb6-p4blh:8081 (ID:14998) to-stack FORWARDED (TCP Flags: ACK, FIN)
Jul 26 01:38:54.691: 10.0.2.15:44558 (host) -> kube-system/hubble-ui-76d4965bb6-p4blh:8081 (ID:14998) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 26 01:38:54.836: 127.0.0.1:32922 (world) <> 192.168.10.102 (host) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:54.943: 127.0.0.1:54908 (world) <> 192.168.10.101 (host) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.466: 192.168.10.100:54270 (kube-apiserver) -> kube-system/hubble-ui-76d4965bb6-p4blh:8081 (ID:14998) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:38:55.467: 127.0.0.1:8090 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.467: 127.0.0.1:8090 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.467: 127.0.0.1:39880 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.467: 127.0.0.1:39880 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.467: 127.0.0.1:39880 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.468: 127.0.0.1:8090 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.468: 127.0.0.1:8090 (world) <> kube-system/hubble-ui-76d4965bb6-p4blh (ID:14998) pre-xlate-rev TRACED (TCP)
Jul 26 01:38:55.468: 192.168.10.100:54270 (kube-apiserver) <- kube-system/hubble-ui-76d4965bb6-p4blh:8081 (ID:14998) to-network FORWARDED (TCP Flags: ACK, PSH)
...
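The firehose can be narrowed with observe's filter flags, for example by namespace, protocol, or verdict (standard hubble observe options):

hubble observe -f --namespace kube-system --protocol TCP    # follow kube-system TCP flows
hubble observe --verdict DROPPED --last 20                  # the most recent drops, if any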
8. Define Shortcut Aliases for the Cilium Agents
(⎈|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2 -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2
alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
alias c1="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium"
alias c2="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium"
alias c0bpf="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool"
alias c1bpf="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool"
alias c2bpf="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool"
Output:
cilium-kk8gc cilium-x4kvl cilium-w4fhn
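A quick sanity check that the aliases point at the right agents, using subcommands from the agent CLI and bpftool:

c0 status --brief    # one-line health summary from the k8s-ctr agent
c0bpf net show       # tc/XDP programs the agent attached on k8s-ctr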
🎮 Star Wars Demo & Hubble UI Warm-up
Deploy the Demo Application
1. Deploy the Star Wars demo application
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/http-sw-app.yaml
# Result
service/deathstar created
deployment.apps/deathstar created
pod/tiefighter created
pod/xwing created
2. Check the Pod Labels
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod --show-labels
Output:
NAME READY STATUS RESTARTS AGE LABELS
deathstar-8c4c77fb7-4lc96 1/1 Running 0 29s app.kubernetes.io/name=deathstar,class=deathstar,org=empire,pod-template-hash=8c4c77fb7
deathstar-8c4c77fb7-dwnlx 1/1 Running 0 29s app.kubernetes.io/name=deathstar,class=deathstar,org=empire,pod-template-hash=8c4c77fb7
tiefighter 1/1 Running 0 29s app.kubernetes.io/name=tiefighter,class=tiefighter,org=empire
xwing 1/1 Running 0 29s app.kubernetes.io/name=xwing,class=xwing,org=alliance
3. Check the Deployment, Service, and Endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep deathstar
Output:
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/deathstar 2/2 2 2 96s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/deathstar ClusterIP 10.96.192.29 <none> 80/TCP 96s
NAME ENDPOINTS AGE
endpoints/deathstar 172.20.1.188:80,172.20.2.9:80 96s
4. List All CiliumEndpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
Output:
NAMESPACE NAME SECURITY IDENTITY ENDPOINT STATE IPV4 IPV6
default deathstar-8c4c77fb7-4lc96 55683 ready 172.20.2.9
default deathstar-8c4c77fb7-dwnlx 37697 ready 172.20.1.188
default tiefighter 19379 ready 172.20.2.116
default xwing 37711 ready 172.20.1.61
kube-system coredns-674b8bbfcf-8lcmh 3235 ready 172.20.0.21
kube-system coredns-674b8bbfcf-gjs6z 3235 ready 172.20.0.11
kube-system hubble-relay-5dcd46f5c-s4wqq 40607 ready 172.20.2.58
kube-system hubble-ui-76d4965bb6-p4blh 14998 ready 172.20.1.79
5. List CiliumIdentities
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumidentities.cilium.io
Output:
NAME NAMESPACE AGE
14998 kube-system 13h
19379 default 2m47s
3235 kube-system 14h
37697 default 2m47s
37711 default 2m47s
40607 kube-system 13h
55683 default 2m47s
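To see which label set a numeric identity stands for, the identity object can be dumped directly (a sketch; the CiliumIdentity CRD stores the labels in a top-level security-labels field):

kubectl get ciliumidentities.cilium.io 19379 -o jsonpath='{.security-labels}' ; echo    # tiefighter's labels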
6. Check the Endpoint List from a Cilium Agent
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium endpoint list
Output:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
184 Disabled Disabled 37697 k8s:app.kubernetes.io/name=deathstar 172.20.1.188 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
827 Disabled Disabled 14998 k8s:app.kubernetes.io/name=hubble-ui 172.20.1.79 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
2272 Disabled Disabled 37711 k8s:app.kubernetes.io/name=xwing 172.20.1.61 ready
k8s:class=xwing
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=alliance
2439 Disabled Disabled 1 reserved:host ready
- Lists the endpoints present on the node the selected agent runs on
- Shows the deathstar, hubble-ui, xwing, and host endpoints
7. Check the Endpoint List on Each Node
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint list
Output:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
1083 Disabled Disabled 3235 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.21 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1119 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
1122 Disabled Disabled 3235 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.11 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
- Shows the kube-system endpoints (coredns, etc.) on the control-plane node
(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 endpoint list
Output:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
184 Disabled Disabled 37697 k8s:app.kubernetes.io/name=deathstar 172.20.1.188 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
827 Disabled Disabled 14998 k8s:app.kubernetes.io/name=hubble-ui 172.20.1.79 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
2272 Disabled Disabled 37711 k8s:app.kubernetes.io/name=xwing 172.20.1.61 ready
k8s:class=xwing
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=alliance
2439 Disabled Disabled 1 reserved:host ready
- Shows the deathstar, hubble-ui, and xwing endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# c2 endpoint list
Output:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
61 Disabled Disabled 40607 k8s:app.kubernetes.io/name=hubble-relay 172.20.2.58 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-relay
115 Disabled Disabled 55683 k8s:app.kubernetes.io/name=deathstar 172.20.2.9 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
451 Disabled Disabled 1 reserved:host ready
2643 Disabled Disabled 19379 k8s:app.kubernetes.io/name=tiefighter 172.20.2.116 ready
k8s:class=tiefighter
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
|
- Shows the hubble-relay, deathstar, and tiefighter endpoints
- The labels each endpoint carries are what drive policy decisions
🔎 Check the Current Access State
1. Verify that deathstar is currently reachable
| (โ|HomeLab:N/A) root@k8s-ctr:~# c1 endpoint list | grep -iE 'xwing|tiefighter|deathstar'
|
✅ Output
| 184 Disabled Disabled 37697 k8s:app.kubernetes.io/name=deathstar 172.20.1.188 ready
k8s:class=deathstar
2272 Disabled Disabled 37711 k8s:app.kubernetes.io/name=xwing 172.20.1.61 ready
k8s:class=xwing
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# c2 endpoint list | grep -iE 'xwing|tiefighter|deathstar'
|
✅ Output
| 115 Disabled Disabled 55683 k8s:app.kubernetes.io/name=deathstar 172.20.2.9 ready
k8s:class=deathstar
2643 Disabled Disabled 19379 k8s:app.kubernetes.io/name=tiefighter 172.20.2.116 ready
k8s:class=tiefighter
|
- The deathstar service is designed to admit only ships carrying the org=empire label
- No network policy (CNP) has been applied yet, so both xwing and tiefighter can reach it
- The deathstar, xwing, and tiefighter pods are all in the ready state on their nodes
2. Monitor each node (c0/c1/c2)
Use the cilium monitor command to watch traffic flow in real time
| (โ|HomeLab:N/A) root@k8s-ctr:~# c0 monitor -v -v
|
✅ Output
| Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
------------------------------------------------------------------------------
time="2025-07-26T01:56:34.86612445Z" level=info msg="Initializing dissection cache..." subsys=monitor
Ethernet {Contents=[..14..] Payload=[..78..] SrcMAC=08:00:27:cf:db:ea DstMAC=08:00:27:36:0a:32 EthernetType=IPv4 Length=0}
IPv4 {Contents=[..20..] Payload=[..56..] Version=4 IHL=5 TOS=0 Length=76 Id=13401 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=28728 SrcIP=192.168.10.100 DstIP=192.168.10.102 Options=[] Padding=[]}
TCP {Contents=[..32..] Payload=[..24..] SrcPort=56870 DstPort=10250 Seq=1360120777 Ack=634515328 DataOffset=8 FIN=false SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=991 Checksum=38489 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:3450183246/4076572814 0xcda59e4ef2fb908e)] Padding=[] Multipath=false}
CPU 01: MARK 0xe50d4628 FROM 1119 to-network: 90 bytes (90 captured), state established, interface eth1, , identity host->unknown, orig-ip 192.168.10.100
------------------------------------------------------------------------------
Ethernet {Contents=[..14..] Payload=[..62..] SrcMAC=f6:8a:5d:17:4b:ae DstMAC=fa:3a:73:16:b2:e8 EthernetType=IPv4 Length=0}
IPv4 {Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=1306 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=32116 SrcIP=10.0.2.15 DstIP=172.20.0.11 Options=[] Padding=[]}
TCP {Contents=[..40..] Payload=[] SrcPort=46488 DstPort=8181(intermapper) Seq=3425383573 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=64240 Checksum=47196 Urgent=0 Options=[..5..] Padding=[] Multipath=false}
CPU 01: MARK 0x7bc3aa86 FROM 1122 to-endpoint: 74 bytes (74 captured), state new, interface lxc13029d3e0d35, , identity host->3235, orig-ip 10.0.2.15, to endpoint 1122
------------------------------------------------------------------------------
...
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# c1 monitor -v -v
|
✅ Output
| Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
------------------------------------------------------------------------------
time="2025-07-26T01:56:44.690724179Z" level=info msg="Initializing dissection cache..." subsys=monitor
Ethernet {Contents=[..14..] Payload=[..62..] SrcMAC=4a:f1:4d:a8:56:1f DstMAC=62:83:f4:95:c7:a1 EthernetType=IPv4 Length=0}
IPv4 {Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=47457 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=51176 SrcIP=10.0.2.15 DstIP=172.20.1.79 Options=[] Padding=[]}
TCP {Contents=[..40..] Payload=[] SrcPort=49718 DstPort=8081(sunproxyadmin) Seq=2450151608 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=64240 Checksum=47520 Urgent=0 Options=[..5..] Padding=[] Multipath=false}
CPU 01: MARK 0xfe7f56b8 FROM 827 to-endpoint: 74 bytes (74 captured), state new, interface lxcd54448478341, , identity host->14998, orig-ip 10.0.2.15, to endpoint 827
------------------------------------------------------------------------------
Ethernet {Contents=[..14..] Payload=[..62..] SrcMAC=62:83:f4:95:c7:a1 DstMAC=4a:f1:4d:a8:56:1f EthernetType=IPv4 Length=0}
IPv4 {Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=0 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=33098 SrcIP=172.20.1.79 DstIP=10.0.2.15 Options=[] Padding=[]}
TCP {Contents=[..40..] Payload=[] SrcPort=8081(sunproxyadmin) DstPort=49718 Seq=1872207516 Ack=2450151609 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=65160 Checksum=47520 Urgent=0 Options=[..5..] Padding=[] Multipath=false}
CPU 01: MARK 0x2bf841f5 FROM 827 to-stack: 74 bytes (74 captured), state reply, , identity 14998->host, orig-ip 0.0.0.0
------------------------------------------------------------------------------
...
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# c2 monitor -v -v
|
✅ Output
| Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
------------------------------------------------------------------------------
time="2025-07-26T01:56:56.589423599Z" level=info msg="Initializing dissection cache..." subsys=monitor
Ethernet {Contents=[..14..] Payload=[..54..] SrcMAC=08:00:27:36:0a:32 DstMAC=08:00:27:cf:db:ea EthernetType=IPv4 Length=0}
IPv4 {Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=5472 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=36681 SrcIP=192.168.10.102 DstIP=192.168.10.100 Options=[] Padding=[]}
TCP {Contents=[..32..] Payload=[] SrcPort=47870 DstPort=6443(sun-sr-https) Seq=2993297840 Ack=594274701 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=6125 Checksum=38465 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:4076599536/3450204970 0xf2fbf8f0cda5f32a)] Padding=[] Multipath=false}
CPU 01: MARK 0xd11396d8 FROM 451 to-network: 66 bytes (66 captured), state established, interface eth1, , identity host->unknown, orig-ip 192.168.10.102
------------------------------------------------------------------------------
Ethernet {Contents=[..14..] Payload=[..62..] SrcMAC=b2:be:eb:e9:e7:5a DstMAC=5a:0d:e7:cf:c6:f4 EthernetType=IPv4 Length=0}
IPv4 {Contents=[..20..] Payload=[..40..] Version=4 IHL=5 TOS=0 Length=60 Id=2669 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=30194 SrcIP=10.0.2.15 DstIP=172.20.2.58 Options=[] Padding=[]}
TCP {Contents=[..40..] Payload=[] SrcPort=49850 DstPort=4222 Seq=1225774167 Ack=0 DataOffset=10 FIN=false SYN=true RST=false PSH=false ACK=false URG=false ECE=false CWR=false NS=false Window=64240 Checksum=47755 Urgent=0 Options=[..5..] Padding=[] Multipath=false}
CPU 01: MARK 0xb856b576 FROM 61 to-endpoint: 74 bytes (74 captured), state new, interface lxc96f91b710874, , identity host->40607, orig-ip 10.0.2.15, to endpoint 61
------------------------------------------------------------------------------
...
|
- Watch the packet flow in real time: TCP SYN, ACK, data transfer, and so on
3. Observe all traffic with Hubble Observe
Hubble Relay lets you observe the traffic of every node at once
| (โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f
|
✅ Output
| Jul 26 01:59:37.530: kube-system/coredns-674b8bbfcf-gjs6z:49010 (ID:3235) -> 192.168.10.100:6443 (host) to-stack FORWARDED (TCP Flags: ACK)
Jul 26 01:59:37.625: kube-system/hubble-ui-76d4965bb6-p4blh:59568 (ID:14998) -> 192.168.10.100:6443 (kube-apiserver) to-network FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:59:37.626: kube-system/hubble-ui-76d4965bb6-p4blh:59568 (ID:14998) <- 192.168.10.100:6443 (kube-apiserver) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 01:59:37.717: 10.0.2.15:54464 (host) -> 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Jul 26 01:59:37.737: 127.0.0.1:51666 (world) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:37.737: 127.0.0.1:51666 (world) <> kube-system/hubble-relay-5dcd46f5c-s4wqq (ID:40607) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.190: 127.0.0.1:34518 (world) <> 192.168.10.100 (host) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:49862 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:49862 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:49862 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:49862 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.522: 127.0.0.1:49862 (world) <> kube-system/coredns-674b8bbfcf-gjs6z (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:49868 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:49868 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:49868 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:8080 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:49868 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:38.524: 127.0.0.1:49868 (world) <> kube-system/coredns-674b8bbfcf-8lcmh (ID:3235) pre-xlate-rev TRACED (TCP)
Jul 26 01:59:39.277: 192.168.10.102:47870 (host) -> 192.168.10.100:6443 (kube-apiserver) to-network FORWARDED (TCP Flags: ACK)
Jul 26 01:59:39.318: 192.168.10.101:55504 (host) -> 192.168.10.100:6443 (kube-apiserver) to-network FORWARDED (TCP Flags: ACK)
Jul 26 01:59:39.414: 10.0.2.15:37754 (host) -> kube-system/coredns-674b8bbfcf-8lcmh:8080 (ID:3235) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 26 01:59:39.414: 10.0.2.15:37754 (host) <- kube-system/coredns-674b8bbfcf-8lcmh:8080 (ID:3235) to-stack FORWARDED (TCP Flags: SYN, ACK)
Jul 26 01:59:39.414: 10.0.2.15:37754 (host) -> kube-system/coredns-674b8bbfcf-8lcmh:8080 (ID:3235) to-endpoint FORWARDED (TCP Flags: ACK)
...
|
4. Request test from xwing to deathstar
| kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
The xwing → deathstar request succeeds
5. Request test from tiefighter to deathstar
| kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
The tiefighter → deathstar request succeeds
6. Check the deathstar, xwing, and tiefighter endpoints
| (โ|HomeLab:N/A) root@k8s-ctr:~# c1 endpoint list | grep -iE 'xwing|tiefighter|deathstar'
c2 endpoint list | grep -iE 'xwing|tiefighter|deathstar'
|
✅ Output
| 184 Disabled Disabled 37697 k8s:app.kubernetes.io/name=deathstar 172.20.1.188 ready
k8s:class=deathstar
2272 Disabled Disabled 37711 k8s:app.kubernetes.io/name=xwing 172.20.1.61 ready
k8s:class=xwing
115 Disabled Disabled 55683 k8s:app.kubernetes.io/name=deathstar 172.20.2.9 ready
k8s:class=deathstar
2643 Disabled Disabled 19379 k8s:app.kubernetes.io/name=tiefighter 172.20.2.116 ready
k8s:class=tiefighter
|
Extract the IDENTITY values:
| (โ|HomeLab:N/A) root@k8s-ctr:~# XWINGID=37711
TIEFIGHTERID=19379
DEATHSTARID=37697
DEATHSTARID=55683
|
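Note that DEATHSTARID is assigned twice above, so only the second value (55683) remains set in the shell. Rather than copying the numbers by hand, the identities can also be read from the CiliumEndpoint objects; a small sketch, assuming the pods live in the default namespace:
| XWINGID=$(kubectl get ciliumendpoint xwing -o jsonpath='{.status.identity.id}')
TIEFIGHTERID=$(kubectl get ciliumendpoint tiefighter -o jsonpath='{.status.identity.id}')
echo "$XWINGID $TIEFIGHTERID"
|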
7. Monitor only traffic originating from xwing
| hubble observe -f --from-identity $XWINGID
|
Watch the request flow toward deathstar:
| kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
✅ Output
8. Monitor only xwing UDP traffic
| hubble observe -f --protocol udp --from-identity $XWINGID
|
| kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
9. Monitor only xwing TCP traffic (commands sketched below)
You can follow the full TCP lifecycle here: the 3-way handshake (SYN → SYN/ACK → ACK), data transfer, and the closing FIN packets
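A TCP-only filter on the xwing identity, analogous to the UDP filter in the previous step, is presumably what this step runs:
| hubble observe -f --protocol tcp --from-identity $XWINGID
|
Trigger the flow again from another terminal:
| kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|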
10. Request test from tiefighter to deathstar
| hubble observe -f --protocol tcp --from-identity $DEATHSTARID
|
| kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
- The tiefighter → deathstar request succeeds
- hubble observe shows a normal request/response flow
🔒 Apply an L3/L4 Policy
1. Apply the L3/L4 policy
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/sw_l3_l4_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
|
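For reference, the applied manifest matches the kc describe cnp rule1 output shown later in this section; its content is essentially:
| apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
|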
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cnp
NAME AGE VALID
rule1 72s True
|
2. Request test: xwing → deathstar
| (โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --type drop
|
Attempt the call; with the policy in place, the xwing request should now hang and show up as a DROPPED event in the observe output above:
| kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing --connect-timeout 2
|
3. Request test: tiefighter → deathstar
| hubble observe -f --protocol tcp --from-identity $DEATHSTARID
|
| kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
- tiefighter carries the org=empire label, so access to deathstar is allowed
- The allowed traffic shows up in hubble observe
4. Check the endpoint state after the policy is applied (c0, c1, c2)
| (โ|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint list
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
1083 Disabled Disabled 3235 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.21 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1119 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
1122 Disabled Disabled 3235 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 172.20.0.11 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# c1 endpoint list
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
184 Enabled Disabled 37697 k8s:app.kubernetes.io/name=deathstar 172.20.1.188 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
827 Disabled Disabled 14998 k8s:app.kubernetes.io/name=hubble-ui 172.20.1.79 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
2272 Disabled Disabled 37711 k8s:app.kubernetes.io/name=xwing 172.20.1.61 ready
k8s:class=xwing
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=alliance
2439 Disabled Disabled 1 reserved:host ready
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# c2 endpoint list
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
61 Disabled Disabled 40607 k8s:app.kubernetes.io/name=hubble-relay 172.20.2.58 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-relay
115 Enabled Disabled 55683 k8s:app.kubernetes.io/name=deathstar 172.20.2.9 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
451 Disabled Disabled 1 reserved:host ready
2643 Disabled Disabled 19379 k8s:app.kubernetes.io/name=tiefighter 172.20.2.116 ready
k8s:class=tiefighter
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
|
- xwing carries org=alliance, which the policy does not match, so its access is denied
- tiefighter is org=empire and is therefore allowed
- POLICY (ingress) ENFORCEMENT is now Enabled on the deathstar endpoints only
5. Inspect the CiliumNetworkPolicy in detail
The policy takes effect immediately and enforces traffic control
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cnp rule1
Name: rule1
Namespace: default
Labels: <none>
Annotations: <none>
API Version: cilium.io/v2
Kind: CiliumNetworkPolicy
Metadata:
Creation Timestamp: 2025-07-26T03:40:08Z
Generation: 1
Resource Version: 23448
UID: c6537744-baa5-4a00-baf3-b68515037c3f
Spec:
Description: L3-L4 policy to restrict deathstar access to empire ships only
Endpoint Selector:
Match Labels:
Class: deathstar
Org: empire
Ingress:
From Endpoints:
Match Labels:
Org: empire
To Ports:
Ports:
Port: 80
Protocol: TCP
Status:
Conditions:
Last Transition Time: 2025-07-26T03:40:08Z
Message: Policy validation succeeded
Status: True
Type: Valid
Events: <none>
|
📦 Life of a Packet
- In Cilium, L7 processing is handled by the cilium-envoy daemonset
- eBPF handles the traffic path, while L7-level processing is performed by the cilium-envoy instance deployed on each node
- https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/
1. Check the cilium-envoy daemonset
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
cilium 3 3 3 3 3 kubernetes.io/os=linux 16h
cilium-envoy 3 3 3 3 3 kubernetes.io/os=linux 16h
|
2. Check the cilium-envoy pods
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=cilium-envoy -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-envoy-fmwrj 1/1 Running 0 16h 192.168.10.102 k8s-w2 <none> <none>
cilium-envoy-wtq2z 1/1 Running 0 16h 192.168.10.100 k8s-ctr <none> <none>
cilium-envoy-zzvpw 1/1 Running 0 16h 192.168.10.101 k8s-w1 <none> <none>
|
3. Inspect the cilium-envoy configuration in detail
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe ds -n kube-system cilium-envoy
Name: cilium-envoy
Namespace: kube-system
Selector: k8s-app=cilium-envoy
Node-Selector: kubernetes.io/os=linux
Labels: app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cilium-envoy
app.kubernetes.io/part-of=cilium
k8s-app=cilium-envoy
name=cilium-envoy
Annotations: deprecated.daemonset.template.generation: 1
meta.helm.sh/release-name: cilium
meta.helm.sh/release-namespace: kube-system
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app.kubernetes.io/name=cilium-envoy
app.kubernetes.io/part-of=cilium
k8s-app=cilium-envoy
name=cilium-envoy
Service Account: cilium-envoy
Containers:
cilium-envoy:
Image: quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171
Port: 9964/TCP
Host Port: 9964/TCP
Command:
/usr/bin/cilium-envoy-starter
Args:
--
-c /var/run/cilium/envoy/bootstrap-config.json
--base-id 0
--log-level info
Liveness: http-get http://127.0.0.1:9878/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
Readiness: http-get http://127.0.0.1:9878/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
Startup: http-get http://127.0.0.1:9878/healthz delay=5s timeout=1s period=2s #success=1 #failure=105
Environment:
K8S_NODE_NAME: (v1:spec.nodeName)
CILIUM_K8S_NAMESPACE: (v1:metadata.namespace)
KUBERNETES_SERVICE_HOST: 192.168.10.100
KUBERNETES_SERVICE_PORT: 6443
Mounts:
/sys/fs/bpf from bpf-maps (rw)
/var/run/cilium/envoy/ from envoy-config (ro)
/var/run/cilium/envoy/artifacts from envoy-artifacts (ro)
/var/run/cilium/envoy/sockets from envoy-sockets (rw)
Volumes:
envoy-sockets:
Type: HostPath (bare host directory volume)
Path: /var/run/cilium/envoy/sockets
HostPathType: DirectoryOrCreate
envoy-artifacts:
Type: HostPath (bare host directory volume)
Path: /var/run/cilium/envoy/artifacts
HostPathType: DirectoryOrCreate
envoy-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: cilium-envoy-config
Optional: false
bpf-maps:
Type: HostPath (bare host directory volume)
Path: /sys/fs/bpf
HostPathType: DirectoryOrCreate
Priority Class Name: system-node-critical
Node-Selectors: kubernetes.io/os=linux
Tolerations: op=Exists
Events: <none>
|
- Volume mounts: /var/run/cilium/envoy/sockets and /sys/fs/bpf
4. Check the socket connections
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- ss -xnp | grep -i -envoy
u_str ESTAB 0 0 /var/run/cilium/envoy/sockets/admin.sock 27023 * 27022
u_str ESTAB 0 0 /var/run/cilium/envoy/sockets/xds.sock 36056 * 36915 users:(("cilium-agent",pid=1,fd=65))
u_str ESTAB 0 0 /var/run/cilium/envoy/sockets/admin.sock 27017 * 27016
|
- The envoy sockets (admin.sock, xds.sock) are connected in u_str ESTAB state
- The users:(("cilium-agent",pid=1,fd=65)) field shows that the peer holding the connection is cilium-agent
🌐 Applying and Testing an HTTP-based L7 Policy
1. Test before applying the L7 policy
From the tiefighter pod, send a PUT request to deathstar's maintenance API (/v1/exhaust-port)
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
|
✅ Output
| Panic: deathstar exploded
goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
/code/src/github.com/empire/deathstar/
temp/main.go:9 +0x64
main.main()
/code/src/github.com/empire/deathstar/
temp/main.go:5 +0x85
|
- Panic: deathstar exploded → the call went through because there is no L7-level control yet
- Hubble observes the flow only at L3/L4 and reports it as FORWARDED, even though the application itself blew up
2. Update the existing L3/L4 policy (rule1) to an L7 policy
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/sw_l3_l4_l7_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 configured
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cnp
|
✅ Output
| Name: rule1
Namespace: default
Labels: <none>
Annotations: <none>
API Version: cilium.io/v2
Kind: CiliumNetworkPolicy
Metadata:
Creation Timestamp: 2025-07-26T03:40:08Z
Generation: 2
Resource Version: 27633
UID: c6537744-baa5-4a00-baf3-b68515037c3f
Spec:
Description: L7 policy to restrict access to specific HTTP call
Endpoint Selector:
Match Labels:
Class: deathstar
Org: empire
Ingress:
From Endpoints:
Match Labels:
Org: empire
To Ports:
Ports:
Port: 80
Protocol: TCP
Rules:
Http:
Method: POST
Path: /v1/request-landing
Status:
Conditions:
Last Transition Time: 2025-07-26T03:40:08Z
Message: Policy validation succeeded
Status: True
Type: Valid
Events: <none>
|
- Destination: endpoints labeled org=empire, class=deathstar
- Allow rule: only the HTTP POST method on path /v1/request-landing
- All other HTTP methods and paths are blocked
3. Monitor the L7 policy with Hubble
(1) Monitor HTTP traffic for the deathstar pods
| (โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --pod deathstar --protocol http
|
✅ Output
(2) Call the allowed API from tiefighter
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
|
(3) L7 details are now visible in the flows
4. Test a disallowed API request and check the drop log
Calling a disallowed API, or attempting access that does not match the policy, produces a DROPPED event in Hubble
Monitor by pod name:
| (โ|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --pod xwing
|
These drops differ from an L7 deny: xwing is blocked at L3/L4 (its packets are dropped outright), whereas a request that violates only the HTTP rule is rejected by the proxy at L7; see the example below
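For example, repeating the maintenance-API call from earlier is now rejected by the HTTP rule; per the upstream Star Wars demo the proxy answers with an explicit denial (expected response shown, not a captured session):
| kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
|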
5. Clean up the lab resources
(1) Delete the resources
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/http-sw-app.yaml
kubectl delete cnp rule1
service "deathstar" deleted
deployment.apps "deathstar" deleted
pod "tiefighter" deleted
pod "xwing" deleted
ciliumnetworkpolicy.cilium.io "rule1" deleted
|
(2) Verify the deletion
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cnp
No resources found in default namespace.
|
📤 Configuring the Hubble Exporter
The Hubble Exporter supports file rotation, size limits, filters, and field masks; a Helm sketch follows below
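These settings come from Helm values; a minimal sketch for enabling and tuning the static exporter (value names as documented for recent Cilium charts; verify them against your chart version):
| helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set hubble.export.fileMaxSizeMb=10 \
  --set hubble.export.fileMaxBackups=5 \
  --set hubble.export.static.enabled=true \
  --set hubble.export.static.filePath=/var/run/cilium/hubble/events.log
|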
1. Check the Hubble Exporter settings
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | grep hubble-export
|
✅ Output
| "hubble-export-allowlist": "",
"hubble-export-denylist": "",
"hubble-export-fieldmask": "",
"hubble-export-file-max-backups": "5",
"hubble-export-file-max-size-mb": "10",
"hubble-export-file-path": "/var/run/cilium/hubble/events.log",
|
2. Tail the Hubble Exporter flow log in real time
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -- tail -f /var/run/cilium/hubble/events.log
|
✅ Output
3. View the Hubble Exporter log as JSON
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -- sh -c 'tail -f /var/run/cilium/hubble/events.log' | jq
|
✅ Output
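Since every line is a JSON flow record, jq can also filter the stream; a small sketch that keeps only drops, assuming each record wraps the flow under a top-level flow key:
| kubectl -n kube-system exec ds/cilium -- sh -c 'tail -f /var/run/cilium/hubble/events.log' \
  | jq 'select(.flow.verdict == "DROPPED") | {time, src: .flow.source.pod_name, dst: .flow.destination.pod_name}'
|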
📊 Running Prometheus & Grafana
1. Deploy a sample application (webpod)
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: webpod
spec:
replicas: 2
selector:
matchLabels:
app: webpod
template:
metadata:
labels:
app: webpod
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
          - webpod # must match the pod template's app label for the anti-affinity to spread the replicas
topologyKey: "kubernetes.io/hostname"
containers:
- name: webpod
image: traefik/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: webpod
labels:
app: webpod
spec:
selector:
app: webpod
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
EOF
# Result
deployment.apps/webpod created
service/webpod created
|
2. Deploy curl-pod
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: curl-pod
labels:
app: curl
spec:
nodeName: k8s-ctr
containers:
- name: curl
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# Result
pod/curl-pod created
|
3. Check the deployment and the Cilium endpoints
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list
|
✅ Output
| ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
772 Disabled Disabled 7053 k8s:app=webpod 172.20.1.197 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
827 Disabled Disabled 14998 k8s:app.kubernetes.io/name=hubble-ui 172.20.1.79 ready
k8s:app.kubernetes.io/part-of=cilium
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-ui
2439 Disabled Disabled 1 reserved:host ready
|
4. Test access to the webpod service from curl-pod
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-697b545f57-kg77r
(โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-697b545f57-mbjvf
...
|
Keep calling the service so that metrics are generated:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
|
🛠️ Installing Prometheus & Grafana
1. Install Prometheus & Grafana
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes/addons/prometheus/monitoring-example.yaml
# Result
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/grafana-hubble-l7-http-metrics-by-workload created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created
|
2. Check the Prometheus & Grafana resources
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,pod,svc,ep -n cilium-monitoring
|
✅ Output
| Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 45s
deployment.apps/prometheus 1/1 1 1 45s
NAME READY STATUS RESTARTS AGE
pod/grafana-5c69859d9-cd9fn 1/1 Running 0 45s
pod/prometheus-6fc896bc5d-85d4w 1/1 Running 0 45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 10.96.169.141 <none> 3000/TCP 45s
service/prometheus ClusterIP 10.96.253.138 <none> 9090/TCP 45s
NAME ENDPOINTS AGE
endpoints/grafana 172.20.1.207:3000 45s
endpoints/prometheus 172.20.2.76:9090 45s
|
3. Check the Grafana dashboard ConfigMaps
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n cilium-monitoring
|
✅ Output
| NAME DATA AGE
grafana-cilium-dashboard 1 92s
grafana-cilium-operator-dashboard 1 91s
grafana-config 3 92s
grafana-hubble-dashboard 1 91s
grafana-hubble-l7-http-metrics-by-workload 1 91s
kube-root-ca.crt 1 92s
prometheus 1 91s
|
4. Check the Prometheus server configuration
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n cilium-monitoring prometheus
|
✅ Output
| Name: prometheus
Namespace: cilium-monitoring
Labels: <none>
Annotations: <none>
Data
====
prometheus.yaml:
----
global:
scrape_interval: 10s
scrape_timeout: 10s
evaluation_interval: 10s
rule_files:
- "/etc/prometheus-rules/*.rules"
scrape_configs:
# https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L79
- job_name: 'kubernetes-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_k8s_app]
action: keep
regex: cilium
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: (.+)(?::\d+);(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: service
# https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L156
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (.+):(?:\d+);(\d+)
replacement: ${1}:${2}
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
- source_labels: [__meta_kubernetes_pod_container_port_number]
action: keep
regex: \d+
# https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L119
- job_name: 'kubernetes-services'
metrics_path: /metrics
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:${2}
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: service
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
BinaryData
====
Events: <none>
|
5. Check the Grafana server configuration
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n cilium-monitoring grafana-config
|
✅ Output
| Name: grafana-config
Namespace: cilium-monitoring
Labels: app=grafana
...
config.yaml:
providers:
- name: 'cilium-dashboard-config'
type: file
options:
path: '/configmap/dashboards/'
...
grafana.ini:
app_mode = production
[server]
http_port = 3000
[security]
admin_user = admin
admin_password = admin
disable_gravatar = true
[auth.anonymous]
enabled = true
org_name = Main Org.
org_role = Admin
[metrics]
enabled = true
interval_seconds = 10
...
prometheus-datasource.yaml:
datasources:
- name: prometheus
type: prometheus
url: http://prometheus:9090
editable: true
...
|
📈 Deploying Cilium and Hubble with Metrics Enabled
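The ports checked below exist because metrics were enabled at install time; for reference, the usual Helm switches look roughly like this (the metric list is illustrative, not necessarily this lab's exact values):
| helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set prometheus.enabled=true \
  --set operator.prometheus.enabled=true \
  --set hubble.metrics.enableOpenMetrics=true \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2}"
|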
1. Check the listening ports on the host
| (โ|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep -E '9962|9963|9965'
|
✅ Output
| LISTEN 0 4096 *:9965 *:* users:(("cilium-agent",pid=7666,fd=31))
LISTEN 0 4096 *:9963 *:* users:(("cilium-operator",pid=4772,fd=7))
LISTEN 0 4096 *:9962 *:* users:(("cilium-agent",pid=7666,fd=7))
|
- 9962: cilium-agent metrics
- 9963: cilium-operator metrics
- 9965: Hubble metrics (served by cilium-agent)
2. Check the ports on each worker node
| (โ|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo ss -tnlp | grep -E '9962|9963|9965' ; echo; done
|
✅ Output
| >> node : k8s-w1 <<
LISTEN 0 4096 *:9965 *:* users:(("cilium-agent",pid=6068,fd=32))
LISTEN 0 4096 *:9962 *:* users:(("cilium-agent",pid=6068,fd=7))
>> node : k8s-w2 <<
LISTEN 0 4096 *:9962 *:* users:(("cilium-agent",pid=5995,fd=7))
LISTEN 0 4096 *:9965 *:* users:(("cilium-agent",pid=5995,fd=31))
|
- 9962: cilium-agent metrics
- 9965: Hubble metrics (cilium-agent)
🌐 NodePort Setup and Prometheus & Grafana Web Access from the Host PC
1. Check the services in the cilium-monitoring namespace
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -n cilium-monitoring
|
✅ Output
| NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.169.141 <none> 3000/TCP 11m
prometheus ClusterIP 10.96.253.138 <none> 9090/TCP 11m
|
- Both services start as ClusterIP, so they are unreachable from outside the cluster
2. Switch to NodePort for external access
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc -n cilium-monitoring prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n cilium-monitoring grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
# Result
service/prometheus patched
service/grafana patched
|
- Change Prometheus and Grafana to NodePort so they are reachable externally
- Prometheus: 9090 → NodePort 30001
- Grafana: 3000 → NodePort 30002
3. Verify the NodePort change
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -n cilium-monitoring
|
✅ Output
| NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.96.169.141 <none> 3000:30002/TCP 13m
prometheus NodePort 10.96.253.138 <none> 9090:30001/TCP 13m
|
4. Check the web access URLs
| (โ|HomeLab:N/A) root@k8s-ctr:~# echo "http://192.168.10.100:30001"
echo "http://192.168.10.100:30002"
|
✅ Output
| http://192.168.10.100:30001 # prometheus
http://192.168.10.100:30002 # grafana
|
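A quick reachability check from the host without a browser; /-/ready and /api/health are the standard health endpoints of Prometheus and Grafana respectively:
| curl -s http://192.168.10.100:30001/-/ready
curl -s http://192.168.10.100:30002/api/health
|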
(Screenshots: Prometheus and Grafana web UIs)
(Note) When opening the Prometheus web UI, a time difference between the server and the browser was observed
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n cilium-monitoring deploy/prometheus -- date
Sat Jul 26 05:21:01 UTC 2025
(โ|HomeLab:N/A) root@k8s-ctr:~# date
Sat Jul 26 02:21:13 PM KST 2025
# Fix
# Rebooting all VMs and reconnecting resolved the issue
|
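Rebooting is the heavy-handed fix; assuming the guests run a standard time-sync daemon, forcing a resync should also work (which of the two services is present depends on the image):
| # if systemd-timesyncd is present
sudo systemctl restart systemd-timesyncd
# if chrony is present
sudo chronyc makestep
|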
5. Check the Prometheus default settings
6. Scrape targets discovered via service discovery
Targets that are actually discovered appear on the Targets screen
- e.g. the cilium-agent metrics endpoint → http://192.168.10.102:9962/metrics
7. Configure the Grafana data source
- Grafana uses the Prometheus service name in the same namespace (prometheus:9090) as its data source
- The Grafana pod connects to port 9090 of the prometheus service to pull metrics
8. Check the Service and Endpoints
| (โ|HomeLab:N/A) root@k8s-ctr:~# k get svc -n cilium-monitoring
|
✅ Output
| NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.96.169.141 <none> 3000:30002/TCP 42m
prometheus NodePort 10.96.253.138 <none> 9090:30001/TCP 42m
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# k get svc,ep -n cilium-monitoring
|
✅ Output
| Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana NodePort 10.96.169.141 <none> 3000:30002/TCP 44m
service/prometheus NodePort 10.96.253.138 <none> 9090:30001/TCP 44m
NAME ENDPOINTS AGE
endpoints/grafana 172.20.1.207:3000 44m
endpoints/prometheus 172.20.2.76:9090 44m
|
- grafana → 172.20.1.207:3000
- prometheus → 172.20.2.76:9090
9. Check the default Grafana dashboards
The four dashboards pre-loaded through ConfigMaps are registered automatically
10. Check the Cilium Metrics dashboard
11. Understanding the PromQL queries
(1) Analyzing the map ops (average node) panel
Query shown in the metrics browser:
| topk(5, avg(rate(cilium_bpf_map_ops_total{k8s_app="cilium", pod=~"$pod"}[5m])) by (pod, map_name, operation))
|
(2) Build the query up step by step in Prometheus
Start from the raw counter:
| cilium_bpf_map_ops_total
|
Compute the per-second rate over the last 5 minutes:
| rate(cilium_bpf_map_ops_total{k8s_app="cilium"}[5m])
|
Average the values across the matching time series:
| avg(rate(cilium_bpf_map_ops_total{k8s_app="cilium"}[5m])) by (pod, map_name, operation)
|
Use by with the aggregation function to choose which labels to group on:
| avg(rate(cilium_bpf_map_ops_total{k8s_app="cilium"}[5m])) by (pod)
|
| avg(rate(cilium_bpf_map_ops_total{k8s_app="cilium"}[5m])) by (pod, map_name, operation)
|
Finally, select the k largest series (note the by clause belongs to avg and sits inside topk):
| topk(5, avg(rate(cilium_bpf_map_ops_total{k8s_app="cilium"}[5m])) by (pod, map_name, operation))
|
12. Grafana variables
| topk(5, avg(rate(cilium_bpf_map_ops_total{k8s_app="cilium", pod=~"$pod"}[5m])) by (pod, map_name, operation))
|
- pod=~"$pod" references a Grafana dashboard variable
The variable is populated from Prometheus with:
| label_values(cilium_version, pod)
|
This label_values query runs against Prometheus and lists the pod label values of the cilium_version metric
13. Hubble dashboards
Sections: General Processing, Network, Network Policy, HTTP, DNS
⚙️ How Cilium Metrics Are Configured and Collected
1. Check the Cilium metrics annotations
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe pod -n kube-system -l k8s-app=cilium | grep prometheus
|
✅ Output
| prometheus.io/port: 9962
prometheus.io/scrape: true
Message: ved:unknown\" destination_label:\"reserved:host\" destination_label:\"reserved:remote-node\" destination_label:\"k8s:app=prometheus\",destination_label:\"k8s:k8s-app=kube-dns\" destination_port:\"53\",source_fqdn:\"*.cluster.local*\",destination_fqdn:\"*.cluster.local*\"}" buffer_size=4095 number_of_flows=17011 subsys=hubble took=1h4m50.122419715s whitelist="{source_pod:\"default/\" reply:false,destination_pod:\"default/\" reply:false}"
prometheus.io/port: 9962
prometheus.io/scrape: true
prometheus.io/port: 9962
prometheus.io/scrape: true
|
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl describe pod -n kube-system -l name=cilium-operator | grep prometheus
|
✅ Output
| Annotations: prometheus.io/port: 9963
prometheus.io/scrape: true
|
2. Metric collection via Prometheus service discovery
cilium-agent exposes metrics on port 9962 and cilium-operator on port 9963
3. Check the Prometheus ConfigMap
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n cilium-monitoring prometheus
|
✅ Output
| ...
# https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L156
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (.+):(?:\d+);(\d+)
replacement: ${1}:${2}
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
- source_labels: [__meta_kubernetes_pod_container_port_number]
action: keep
regex: \d+
...
|
💾 How Hubble Metrics Are Configured and Collected
1. Check that the headless hubble-metrics service exists
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -n kube-system hubble-metrics
|
✅ Output
| NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hubble-metrics ClusterIP None <none> 9965/TCP 19h
|
2. Inspect the hubble-metrics service
| (โ|HomeLab:N/A) root@k8s-ctr:~# kc describe svc -n kube-system hubble-metrics
|
✅ Output
| Name: hubble-metrics
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=hubble
app.kubernetes.io/part-of=cilium
k8s-app=hubble
Annotations: meta.helm.sh/release-name: cilium
meta.helm.sh/release-namespace: kube-system
prometheus.io/port: 9965
prometheus.io/scrape: true
Selector: k8s-app=cilium
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: hubble-metrics 9965/TCP
TargetPort: hubble-metrics/TCP
Endpoints: 192.168.10.101:9965,192.168.10.102:9965,192.168.10.100:9965
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
|
📡 L7 Protocol Visibility
1. What an L7 network policy means
- An L7 network policy provides visibility and at the same time enforces traffic control
- DNS rules and HTTP rules can be configured separately
- Traffic that does not match the rules is dropped
2. Controlling egress traffic for pods in the default namespace
Apply an egress policy to every pod in the default namespace:
- DNS port 53: allow lookups for all domains (matchPattern: "*")
- HTTP ports 80 and 8080: allow requests to other pods in the default namespace
- Set the http: [{}] rule to enable L7 visibility
3. Deploy the L7 policy
| (โ|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l7-visibility"
spec:
endpointSelector:
matchLabels:
"k8s:io.kubernetes.pod.namespace": default # default ๋ค์์คํ์ด์ค ์์ ๋ชจ๋ Pod์ ๋ํด egress ์ ์ฑ
์ด ์ ์ฉ
egress:
- toPorts:
- ports:
- port: "53"
          protocol: ANY # allow both TCP and UDP
rules:
dns:
            - matchPattern: "*" # allow lookups for all domains; enables L7 DNS visibility
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": default
toPorts:
- ports:
- port: "80" # default ๋ค๋ฅธ ํ๋์ HTTP TCP 80 ์์ฒญ ํ์ฉ
protocol: TCP
- port: "8080" # default ๋ค๋ฅธ ํ๋์ HTTP TCP 8080 ์์ฒญ ํ์ฉ
protocol: TCP
rules:
          http: [{}] # allow all HTTP requests; enables L7 visibility
EOF
# Result
ciliumnetworkpolicy.cilium.io/l7-visibility created
|
The policy is applied successfully:
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl get cnp -o yaml
|
✅ Output
| apiVersion: v1
items:
- apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cilium.io/v2","kind":"CiliumNetworkPolicy","metadata":{"annotations":{},"name":"l7-visibility","namespace":"default"},"spec":{"egress":[{"toPorts":[{"ports":[{"port":"53","protocol":"ANY"}],"rules":{"dns":[{"matchPattern":"*"}]}}]},{"toEndpoints":[{"matchLabels":{"k8s:io.kubernetes.pod.namespace":"default"}}],"toPorts":[{"ports":[{"port":"80","protocol":"TCP"},{"port":"8080","protocol":"TCP"}],"rules":{"http":[{}]}}]}],"endpointSelector":{"matchLabels":{"k8s:io.kubernetes.pod.namespace":"default"}}}}
creationTimestamp: "2025-07-26T07:28:30Z"
generation: 1
name: l7-visibility
namespace: default
resourceVersion: "49854"
uid: 5b8c403d-5fbf-48ee-b978-0dae890621ef
spec:
egress:
- toPorts:
- ports:
- port: "53"
protocol: ANY
rules:
dns:
- matchPattern: '*'
- toEndpoints:
- matchLabels:
k8s:io.kubernetes.pod.namespace: default
toPorts:
- ports:
- port: "80"
protocol: TCP
- port: "8080"
protocol: TCP
rules:
http:
- {}
endpointSelector:
matchLabels:
k8s:io.kubernetes.pod.namespace: default
status:
conditions:
- lastTransitionTime: "2025-07-26T07:28:30Z"
message: Policy validation succeeded
status: "True"
type: Valid
kind: List
metadata:
resourceVersion: ""
|
4. Verify communication after the L7 policy is applied
Request test from curl-pod to webpod (a header check is sketched below):
- The Hostname response is received normally
- X-Envoy headers appear in the HTTP response, confirming the request passed through cilium-envoy
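To see the proxy's fingerprint directly, dump the response headers from curl-pod; an x-envoy-* header (typically x-envoy-upstream-service-time) is the usual Envoy marker:
| kubectl exec -it curl-pod -- curl -s -D - -o /dev/null webpod | grep -i envoy
|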
5. Repeated requests
| (โ|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
|
🛡️ Security Implications & Lab
1. Security considerations for L7 traffic monitoring
- L7 traffic monitoring can leak sensitive data into logs: usernames, passwords, query parameters, and so on
- A security policy and masking of sensitive data are therefore required
2. Observing user_id exposure
- In the Hubble UI / monitoring output, the user_id value shows up in plain text
- This is exactly the kind of sensitive-data exposure L7 traffic monitoring can cause
3. Configure redaction (masking) of sensitive data
| (โ|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set extraArgs="{--hubble-redact-enabled,--hubble-redact-http-urlquery}"
|
✅ Output
| Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Jul 26 16:55:55 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.17.6.
For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
|
Query parameters and other sensitive values (such as user_id) no longer appear in the Hubble output