
Istio Study: Week 9 Notes

☸️ kind: Deploy k8s (1.32.2)

1. Create a Kind Cluster (Control Plane + 2 Workers)

kind create cluster --name myk8s --image kindest/node:v1.32.2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample Application
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # kube-ops-view
    hostPort: 30005
- role: worker
- role: worker
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.1.0/24
EOF

✅ Output

Creating cluster "myk8s" ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼 
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-myk8s"
You can now use your cluster with:

kubectl cluster-info --context kind-myk8s

Have a nice day! 👋
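
The extraPortMappings in the kind config above only matter once a Service is actually exposed as NodePort on a matching port. As a minimal sketch (assumptions: Kiali keeps its default port 20001 and its default app.kubernetes.io/name=kiali label), a Service like the following would make the Kiali UI reachable at http://localhost:30003 on the host:

```yaml
# Sketch (assumed defaults: Kiali port 20001, standard labels): expose
# Kiali via NodePort 30003 to match the kind hostPort mapping above.
apiVersion: v1
kind: Service
metadata:
  name: kiali-nodeport    # illustrative name
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: kiali   # verify against your Kiali install
  ports:
  - name: http
    port: 20001
    targetPort: 20001
    nodePort: 30003   # must match a hostPort mapped by kind
```

The same pattern applies to the other mapped ports (30000-30005) for Prometheus, Grafana, tracing, and kube-ops-view.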

2. Check Cluster Node Status

docker ps

✅ Output

CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                                                             NAMES
22747018eabb   kindest/node:v1.32.2   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   0.0.0.0:30000-30005->30000-30005/tcp, 127.0.0.1:44959->6443/tcp   myk8s-control-plane
42a8f93275a7   kindest/node:v1.32.2   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                                                                     myk8s-worker
0476037a89b7   kindest/node:v1.32.2   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                                                                     myk8s-worker2

3. Install Required Utilities on Each Node

for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'apt update && apt install tree psmisc lsof ipset wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'; echo; done

✅ Output

...
Setting up bind9-libs:amd64 (1:9.18.33-1~deb12u2) ...
Setting up openssh-client (1:9.2p1-2+deb12u6) ...
Setting up libxext6:amd64 (2:1.3.4-1+b1) ...
Setting up dbus-daemon (1.14.10-1~deb12u1) ...
Setting up libnet1:amd64 (1.1.6+dfsg-3.2) ...
Setting up libpcap0.8:amd64 (1.10.3-1) ...
Setting up dbus (1.14.10-1~deb12u1) ...
invoke-rc.d: policy-rc.d denied execution of start.
/usr/sbin/policy-rc.d returned 101, not running 'start dbus.service'
Setting up libgdbm-compat4:amd64 (1.23-3) ...
Setting up xauth (1:1.1.2-1) ...
Setting up bind9-host (1:9.18.33-1~deb12u2) ...
Setting up libperl5.36:amd64 (5.36.0-7+deb12u2) ...
Setting up tcpdump (4.99.3-1) ...
Setting up ngrep (1.47+ds1-5+b1) ...
Setting up perl (5.36.0-7+deb12u2) ...
Setting up bind9-dnsutils (1:9.18.33-1~deb12u2) ...
Setting up dnsutils (1:9.18.33-1~deb12u2) ...
Setting up liberror-perl (0.17029-2) ...
Setting up git (1:2.39.5-0+deb12u2) ...
Processing triggers for libc-bin (2.36-9+deb12u9) ...

4. List the Kind Docker Networks

docker network ls

✅ Output

NETWORK ID     NAME      DRIVER    SCOPE
64b0df261790   bridge    bridge    local
bb4d74152d4a   host      host      local
dbf072d0a217   kind      bridge    local
056dcb2c01d1   none      null      local

5. Inspect the Kind Bridge Network

docker inspect kind

✅ Output

[
    {
        "Name": "kind",
        "Id": "dbf072d0a217f53e0b62f42cee01bcecc1b2f6ea216475178db001f2e38681f5",
        "Created": "2025-01-26T16:18:22.33980443+09:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                },
                {
                    "Subnet": "fc00:f853:ccd:e793::/64",
                    "Gateway": "fc00:f853:ccd:e793::1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0476037a89b7c29dff4584506215eb077af038d807d1b2c1463fb7e61cef8910": {
                "Name": "myk8s-worker2",
                "EndpointID": "a0bdda25e785e7d2c25d5f7bbcd62f16f929b210e314c53b42b9cee8e12e6744",
                "MacAddress": "be:c2:93:3b:fb:74",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": "fc00:f853:ccd:e793::2/64"
            },
            "22747018eabbc4550d68aa51b861613774b827b03baa9b720d362747c0a3ea86": {
                "Name": "myk8s-control-plane",
                "EndpointID": "afca91598beaa690677283b0d489084cd3a469e19b32a8ac08b4224ff5d224be",
                "MacAddress": "06:27:aa:c2:4b:88",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": "fc00:f853:ccd:e793::3/64"
            },
            "42a8f93275a7454296d00ea74e7f33414893d5f22ded2c33882236ebf08ead5c": {
                "Name": "myk8s-worker",
                "EndpointID": "f33db179a60ae6554c546f8b597ec2fc35499ebf3b8e559b1da17f5deab38db9",
                "MacAddress": "22:57:10:81:c9:8e",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": "fc00:f853:ccd:e793::4/64"
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

6. Start the mypc Test Container

docker run -d --rm --name mypc --network kind --ip 172.18.0.100 nicolaka/netshoot sleep infinity

# Result
2046c5ac8dd06893ec7f67033e1a4278481ef1e14fb8a3ec1f487db2714d8cb2

7. List All Containers

docker ps

✅ Output

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                                             NAMES
2046c5ac8dd0   nicolaka/netshoot      "sleep infinity"         20 seconds ago   Up 17 seconds                                                                     mypc
22747018eabb   kindest/node:v1.32.2   "/usr/local/bin/entr…"   6 minutes ago    Up 6 minutes    0.0.0.0:30000-30005->30000-30005/tcp, 127.0.0.1:44959->6443/tcp   myk8s-control-plane
42a8f93275a7   kindest/node:v1.32.2   "/usr/local/bin/entr…"   6 minutes ago    Up 6 minutes                                                                      myk8s-worker
0476037a89b7   kindest/node:v1.32.2   "/usr/local/bin/entr…"   6 minutes ago    Up 6 minutes                                                                      myk8s-worker2

8. Check Container IPs on the Kind Network

docker ps -q | xargs docker inspect --format '{{.Name}} {{.NetworkSettings.Networks.kind.IPAddress}}'

✅ Output

/mypc 172.18.0.100
/myk8s-control-plane 172.18.0.3
/myk8s-worker 172.18.0.4
/myk8s-worker2 172.18.0.2

🧩 Deploying MetalLB

1. Apply the MetalLB Manifest

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml

✅ Output

namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/servicel2statuses.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/metallb-webhook-cert created
service/metallb-webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created

2. Check the MetalLB CRDs

kubectl get crd

✅ Output

NAME                           CREATED AT
bfdprofiles.metallb.io         2025-06-07T07:18:05Z
bgpadvertisements.metallb.io   2025-06-07T07:18:05Z
bgppeers.metallb.io            2025-06-07T07:18:05Z
communities.metallb.io         2025-06-07T07:18:05Z
ipaddresspools.metallb.io      2025-06-07T07:18:05Z
l2advertisements.metallb.io    2025-06-07T07:18:05Z
servicel2statuses.metallb.io   2025-06-07T07:18:05Z

3. Check MetalLB Pod Status

kubectl get pod -n metallb-system

✅ Output

NAME                         READY   STATUS    RESTARTS   AGE
controller-bb5f47665-29lwc   1/1     Running   0          53s
speaker-f7qvl                1/1     Running   0          53s
speaker-hcfq8                1/1     Running   0          53s
speaker-lr429                1/1     Running   0          53s

4. Apply the IPAddressPool and L2Advertisement Resources

cat << EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.201-172.18.255.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
EOF

✅ Output

ipaddresspool.metallb.io/default created
l2advertisement.metallb.io/default created

5. Verify the IPAddressPool and L2Advertisement

kubectl get IPAddressPool,L2Advertisement -A

✅ Output

NAMESPACE        NAME                               AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
metallb-system   ipaddresspool.metallb.io/default   true          false             ["172.18.255.201-172.18.255.220"]

NAMESPACE        NAME                                 IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
metallb-system   l2advertisement.metallb.io/default   ["default"]                                
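
To confirm the pool actually hands out addresses, any LoadBalancer Service works as a smoke test: MetalLB should assign it an EXTERNAL-IP from 172.18.255.201-220. A sketch (the name and selector are illustrative, not part of this setup):

```yaml
# Sketch: MetalLB should give this Service an EXTERNAL-IP from the pool above.
apiVersion: v1
kind: Service
metadata:
  name: lb-test   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: lb-test  # point this at any running pod's labels
  ports:
  - port: 80
    targetPort: 80
```

Because this is an L2Advertisement, the assigned IP is answered via ARP on the kind bridge, so it is reachable from mypc and the other containers on the 172.18.0.0/16 network.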

🚀 Installing Istio 1.26.0: Ambient Profile

1. Enter myk8s-control-plane and Install istioctl

docker exec -it myk8s-control-plane bash

root@myk8s-control-plane:/# export ISTIOV=1.26.0
echo 'export ISTIOV=1.26.0' >> /root/.bashrc

curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
istioctl version --remote=false

✅ Output

Downloading istio-1.26.0 from https://github.com/istio/istio/releases/download/1.26.0/istio-1.26.0-linux-amd64.tar.gz ...

Istio 1.26.0 download complete!

The Istio release archive has been downloaded to the istio-1.26.0 directory.

To configure the istioctl client tool for your workstation,
add the /istio-1.26.0/bin directory to your environment path variable with:
	export PATH="$PATH:/istio-1.26.0/bin"

Begin the Istio pre-installation check by running:
	istioctl x precheck 

Try Istio in ambient mode
	https://istio.io/latest/docs/ambient/getting-started/
Try Istio in sidecar mode
	https://istio.io/latest/docs/setup/getting-started/
Install guides for ambient mode
	https://istio.io/latest/docs/ambient/install/
Install guides for sidecar mode
	https://istio.io/latest/docs/setup/install/

Need more information? Visit https://istio.io/latest/docs/ 
client version: 1.26.0

2. Install Istio with the Ambient Profile

root@myk8s-control-plane:/# istioctl install --set profile=ambient --set meshConfig.accessLogFile=/dev/stdout --skip-confirmation

✅ Output

        |\          
        | \         
        |  \        
        |   \       
      /||    \      
     / ||     \     
    /  ||      \    
   /   ||       \   
  /    ||        \  
 /     ||         \ 
/______||__________\
____________________
  \__       _____/  
     \_____/        

✔ Istio core installed ⛵️                                                                                                                         
✔ Istiod installed 🧠                                                                                                                             
✔ CNI installed 🪢                                                                                                                                
✔ Ztunnel installed 🔒                                                                                                                            
✔ Installation complete                                                                                                                           
The ambient profile has been installed successfully, enjoy Istio without sidecars!
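
Note that the ambient profile captures no traffic by itself: workloads join the mesh per namespace via the istio.io/dataplane-mode=ambient label, after which istio-cni redirects their traffic through the node's ztunnel. A sketch of opting a namespace in:

```yaml
# Opting a namespace into ambient mode: the label below tells istio-cni /
# ztunnel to capture its pods; no sidecar injection is involved.
apiVersion: v1
kind: Namespace
metadata:
  name: default   # or any application namespace
  labels:
    istio.io/dataplane-mode: ambient
```

Until a namespace carries this label, its workloads appear in ztunnel-config output but their traffic is not tunneled over HBONE.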

3. Install the Kubernetes Gateway API CRDs

root@myk8s-control-plane:/# kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

✅ Output

customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/grpcroutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
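
These CRDs provide the GatewayClass, Gateway, and HTTPRoute kinds; with Istio installed, a Gateway that names the istio gatewayClass is reconciled into an ingress gateway deployment. A minimal sketch (the resource name is illustrative):

```yaml
# Sketch: a north-south Gateway handled by Istio's "istio" GatewayClass.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: sample-gateway   # illustrative name
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same   # only routes in this namespace may attach
```

With MetalLB in place, the generated gateway Service of type LoadBalancer would receive an external IP from the pool configured earlier.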

4. Install the Addons

root@myk8s-control-plane:/# kubectl apply -f istio-$ISTIOV/samples/addons

✅ Output

serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
service/kiali created
deployment.apps/kiali created
serviceaccount/loki created
configmap/loki created
configmap/loki-runtime created
clusterrole.rbac.authorization.k8s.io/loki-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/loki-clusterrolebinding created
service/loki-memberlist created
service/loki-headless created
service/loki created
statefulset.apps/loki created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

5. Exit the Control-Plane Shell

root@myk8s-control-plane:/# exit
exit

6. Verify the Istio Installation (Gateway CRDs)

kubectl get crd  | grep gateways

✅ Output

gateways.gateway.networking.k8s.io          2025-06-07T07:26:54Z
gateways.networking.istio.io                2025-06-07T07:25:24Z

7. Check the Gateway API Resources

kubectl api-resources | grep Gateway

✅ Output

gatewayclasses                      gc           gateway.networking.k8s.io/v1        false        GatewayClass
gateways                            gtw          gateway.networking.k8s.io/v1        true         Gateway
gateways                            gw           networking.istio.io/v1              true         Gateway

8. Inspect the Istio ConfigMap

kubectl describe cm -n istio-system istio

✅ Output

Name:         istio
Namespace:    istio-system
Labels:       app.kubernetes.io/instance=istio
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=istiod
              app.kubernetes.io/part-of=istio
              app.kubernetes.io/version=1.26.0
              helm.sh/chart=istiod-1.26.0
              install.operator.istio.io/owning-resource=unknown
              install.operator.istio.io/owning-resource-namespace=istio-system
              istio.io/rev=default
              operator.istio.io/component=Pilot
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.26.0
              release=istio
Annotations:  <none>

Data
====
mesh:
----
accessLogFile: /dev/stdout
defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  image:
    imageType: distroless
  proxyMetadata:
    ISTIO_META_ENABLE_HBONE: "true"
defaultProviders:
  metrics:
  - prometheus
enablePrometheusMerge: true
rootNamespace: istio-system
trustDomain: cluster.local

meshNetworks:
----
networks: {}

BinaryData
====

Events:  <none>

9. Check Istio ztunnel Status

docker exec -it myk8s-control-plane istioctl proxy-status

✅ Output

NAME                           CLUSTER        CDS         LDS         EDS         RDS         ECDS        ISTIOD                     VERSION
ztunnel-4bls2.istio-system     Kubernetes     IGNORED     IGNORED     IGNORED     IGNORED     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-kczj2.istio-system     Kubernetes     IGNORED     IGNORED     IGNORED     IGNORED     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-wr6pp.istio-system     Kubernetes     IGNORED     IGNORED     IGNORED     IGNORED     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0

10. Check ztunnel Enrollment per Workload

docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            kubernetes                                  172.18.0.3                     None     TCP
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP

11. Check ztunnel Enrollment per Service

docker exec -it myk8s-control-plane istioctl ztunnel-config service

✅ Output

NAMESPACE      SERVICE NAME            SERVICE VIP  WAYPOINT ENDPOINTS
default        kubernetes              10.200.1.1   None     1/1
istio-system   grafana                 10.200.1.136 None     1/1
istio-system   istiod                  10.200.1.163 None     1/1
istio-system   jaeger-collector        10.200.1.144 None     1/1
istio-system   kiali                   10.200.1.133 None     1/1
istio-system   loki                    10.200.1.200 None     1/1
istio-system   loki-headless                        None     1/1
istio-system   loki-memberlist                      None     1/1
istio-system   prometheus              10.200.1.41  None     1/1
istio-system   tracing                 10.200.1.229 None     1/1
istio-system   zipkin                  10.200.1.117 None     1/1
kube-system    kube-dns                10.200.1.10  None     2/2
metallb-system metallb-webhook-service 10.200.1.89  None     1/1
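
Every row above shows WAYPOINT None: ztunnel alone handles only L4 concerns (mTLS, TCP-level authorization). L7 features require a waypoint proxy, which in Gateway API terms is a Gateway using the istio-waypoint class. A sketch of a namespace-scoped waypoint, roughly what istioctl waypoint apply generates:

```yaml
# Sketch: a per-namespace waypoint proxy for ambient mode. Services then
# route L7 traffic through it once labeled istio.io/use-waypoint: waypoint.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  labels:
    istio.io/waypoint-for: service   # serve the namespace's services
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008      # HBONE tunnel port used by ztunnel
    protocol: HBONE
```

After a waypoint is deployed, the WAYPOINT column in ztunnel-config output shows its name for the services it fronts.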

12. Inspect iptables Rules on Each Node

for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'iptables-save'; echo; done

✅ Output

node : myk8s-control-plane
# Generated by iptables-save v1.8.9 (nf_tables) on Sat Jun  7 07:33:35 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sat Jun  7 07:33:35 2025
# Generated by iptables-save v1.8.9 (nf_tables) on Sat Jun  7 07:33:35 2025
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sat Jun  7 07:33:35 2025
# Generated by iptables-save v1.8.9 (nf_tables) on Sat Jun  7 07:33:35 2025
*nat
:PREROUTING ACCEPT [483:51481]
:INPUT ACCEPT [216:13458]
:OUTPUT ACCEPT [9640:578471]
:POSTROUTING ACCEPT [9749:590198]
:DOCKER_OUTPUT - [0:0]
:DOCKER_POSTROUTING - [0:0]
:ISTIO_POSTRT - [0:0]
:KIND-MASQ-AGENT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-2VZBTCTWVROYNW5S - [0:0]
:KUBE-SEP-2XZJVPRY2PQVE3B3 - [0:0]
:KUBE-SEP-56NKIH2AGMF3225M - [0:0]
:KUBE-SEP-57OB7LMKUA4REB6U - [0:0]
:KUBE-SEP-6GODNNVFRWQ66GUT - [0:0]
:KUBE-SEP-AGD4XG6YTTFBC5MP - [0:0]
:KUBE-SEP-BK4SURRAVDQBOPR4 - [0:0]
:KUBE-SEP-BTHPIFJSQR2I7YRG - [0:0]
:KUBE-SEP-EOOOBC4NP4L4Q2B3 - [0:0]
:KUBE-SEP-HYFTSL3KNTBQVM56 - [0:0]
:KUBE-SEP-IHVFB7RO5ADTY6F5 - [0:0]
:KUBE-SEP-IPT6REQVTVHSQECW - [0:0]
:KUBE-SEP-KFUQ2ADNBCFWH5LK - [0:0]
:KUBE-SEP-O7QD6W2Z5ZWUKA37 - [0:0]
:KUBE-SEP-OUCY3KVQXMID5GFB - [0:0]
:KUBE-SEP-QEFWPNF3O63RIU3Y - [0:0]
:KUBE-SEP-QKX4QX54UKWK6JIY - [0:0]
:KUBE-SEP-RT3F6VLY3P67FIV3 - [0:0]
:KUBE-SEP-SDODBYH4DETTTORW - [0:0]
:KUBE-SEP-TMGNMEREWWYSM5SB - [0:0]
:KUBE-SEP-VIIXX4VJ3A6FHLRW - [0:0]
:KUBE-SEP-VQYPEMZ5ZXGF6COE - [0:0]
:KUBE-SEP-X33LYSV5PIJ3PHXQ - [0:0]
:KUBE-SEP-XVHB3NIW2NQLTFP3 - [0:0]
:KUBE-SEP-XWEOB3JN6VI62DQQ - [0:0]
:KUBE-SEP-ZEA5VGCBA2QNA7AK - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-A4N66M5KWTPIOJ3M - [0:0]
:KUBE-SVC-CG3LQLBYYHBKATGN - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-GZ25SP4UFGF7SAVL - [0:0]
:KUBE-SVC-H7T7T7XMCTV7IA7W - [0:0]
:KUBE-SVC-HHG7U5KRFPWQTOCU - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-KQQXNCC3T6BSGKIX - [0:0]
:KUBE-SVC-LTFVHWNLEPNOR2EL - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NVNLZVDQSGQUD3NM - [0:0]
:KUBE-SVC-PGVURPSX4NOENZDL - [0:0]
:KUBE-SVC-QI3XYPITNLHPCCQT - [0:0]
:KUBE-SVC-RJIBBGIJBATPMI6Z - [0:0]
:KUBE-SVC-SEST5XGLUQ5J34LB - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TRDPJ2HAZ5FCHNBZ - [0:0]
:KUBE-SVC-VL7ZZ6D5B63AFR4Y - [0:0]
:KUBE-SVC-WHNIZNLB5XFXIX2C - [0:0]
:KUBE-SVC-X2LFBJAZKZE3QCOK - [0:0]
:KUBE-SVC-XHUBMW47Y5G3ICIS - [0:0]
:KUBE-SVC-XJMB3L73YF5UUKWH - [0:0]
:KUBE-SVC-Y3OVZYCKHGYTKGDA - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -d 172.18.0.1/32 -j DOCKER_OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -d 172.18.0.1/32 -j DOCKER_OUTPUT
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -d 172.18.0.1/32 -j DOCKER_POSTROUTING
-A POSTROUTING -m addrtype ! --dst-type LOCAL -m comment --comment "kind-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain" -j KIND-MASQ-AGENT
-A POSTROUTING -j ISTIO_POSTRT
-A DOCKER_OUTPUT -d 172.18.0.1/32 -p tcp -m tcp --dport 53 -j DNAT --to-destination 127.0.0.11:46183
-A DOCKER_OUTPUT -d 172.18.0.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 127.0.0.11:40395
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p tcp -m tcp --sport 46183 -j SNAT --to-source 172.18.0.1:53
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p udp -m udp --sport 40395 -j SNAT --to-source 172.18.0.1:53
-A ISTIO_POSTRT -p tcp -m owner --socket-exists -m set --match-set istio-inpod-probes-v4 dst -j SNAT --to-source 169.254.7.127
-A KIND-MASQ-AGENT -d 10.10.0.0/16 -m comment --comment "kind-masq-agent: local traffic is not subject to MASQUERADE" -j RETURN
-A KIND-MASQ-AGENT -m comment --comment "kind-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain)" -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-2VZBTCTWVROYNW5S -s 10.10.2.9/32 -m comment --comment "istio-system/loki:http-metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-2VZBTCTWVROYNW5S -p tcp -m comment --comment "istio-system/loki:http-metrics" -m tcp -j DNAT --to-destination 10.10.2.9:3100
-A KUBE-SEP-2XZJVPRY2PQVE3B3 -s 10.10.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-2XZJVPRY2PQVE3B3 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.10.0.2:53
-A KUBE-SEP-56NKIH2AGMF3225M -s 10.10.2.7/32 -m comment --comment "istio-system/prometheus:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-56NKIH2AGMF3225M -p tcp -m comment --comment "istio-system/prometheus:http" -m tcp -j DNAT --to-destination 10.10.2.7:9090
-A KUBE-SEP-57OB7LMKUA4REB6U -s 10.10.2.5/32 -m comment --comment "istio-system/jaeger-collector:grpc-otel" -j KUBE-MARK-MASQ
-A KUBE-SEP-57OB7LMKUA4REB6U -p tcp -m comment --comment "istio-system/jaeger-collector:grpc-otel" -m tcp -j DNAT --to-destination 10.10.2.5:4317
-A KUBE-SEP-6GODNNVFRWQ66GUT -s 10.10.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-6GODNNVFRWQ66GUT -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.10.0.3:9153
-A KUBE-SEP-AGD4XG6YTTFBC5MP -s 10.10.2.5/32 -m comment --comment "istio-system/zipkin:http-query" -j KUBE-MARK-MASQ
-A KUBE-SEP-AGD4XG6YTTFBC5MP -p tcp -m comment --comment "istio-system/zipkin:http-query" -m tcp -j DNAT --to-destination 10.10.2.5:9411
-A KUBE-SEP-BK4SURRAVDQBOPR4 -s 10.10.1.2/32 -m comment --comment "metallb-system/metallb-webhook-service" -j KUBE-MARK-MASQ
-A KUBE-SEP-BK4SURRAVDQBOPR4 -p tcp -m comment --comment "metallb-system/metallb-webhook-service" -m tcp -j DNAT --to-destination 10.10.1.2:9443
-A KUBE-SEP-BTHPIFJSQR2I7YRG -s 10.10.2.4/32 -m comment --comment "istio-system/grafana:service" -j KUBE-MARK-MASQ
-A KUBE-SEP-BTHPIFJSQR2I7YRG -p tcp -m comment --comment "istio-system/grafana:service" -m tcp -j DNAT --to-destination 10.10.2.4:3000
-A KUBE-SEP-EOOOBC4NP4L4Q2B3 -s 10.10.2.5/32 -m comment --comment "istio-system/jaeger-collector:http-otel" -j KUBE-MARK-MASQ
-A KUBE-SEP-EOOOBC4NP4L4Q2B3 -p tcp -m comment --comment "istio-system/jaeger-collector:http-otel" -m tcp -j DNAT --to-destination 10.10.2.5:4318
-A KUBE-SEP-HYFTSL3KNTBQVM56 -s 10.10.2.9/32 -m comment --comment "istio-system/loki:grpc" -j KUBE-MARK-MASQ
-A KUBE-SEP-HYFTSL3KNTBQVM56 -p tcp -m comment --comment "istio-system/loki:grpc" -m tcp -j DNAT --to-destination 10.10.2.9:9095
-A KUBE-SEP-IHVFB7RO5ADTY6F5 -s 10.10.2.6/32 -m comment --comment "istio-system/kiali:http-metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-IHVFB7RO5ADTY6F5 -p tcp -m comment --comment "istio-system/kiali:http-metrics" -m tcp -j DNAT --to-destination 10.10.2.6:9090
-A KUBE-SEP-IPT6REQVTVHSQECW -s 10.10.2.5/32 -m comment --comment "istio-system/tracing:grpc-query" -j KUBE-MARK-MASQ
-A KUBE-SEP-IPT6REQVTVHSQECW -p tcp -m comment --comment "istio-system/tracing:grpc-query" -m tcp -j DNAT --to-destination 10.10.2.5:16685
-A KUBE-SEP-KFUQ2ADNBCFWH5LK -s 10.10.2.5/32 -m comment --comment "istio-system/jaeger-collector:http-zipkin" -j KUBE-MARK-MASQ
-A KUBE-SEP-KFUQ2ADNBCFWH5LK -p tcp -m comment --comment "istio-system/jaeger-collector:http-zipkin" -m tcp -j DNAT --to-destination 10.10.2.5:9411
-A KUBE-SEP-O7QD6W2Z5ZWUKA37 -s 10.10.2.6/32 -m comment --comment "istio-system/kiali:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-O7QD6W2Z5ZWUKA37 -p tcp -m comment --comment "istio-system/kiali:http" -m tcp -j DNAT --to-destination 10.10.2.6:20001
-A KUBE-SEP-OUCY3KVQXMID5GFB -s 10.10.1.3/32 -m comment --comment "istio-system/istiod:grpc-xds" -j KUBE-MARK-MASQ
-A KUBE-SEP-OUCY3KVQXMID5GFB -p tcp -m comment --comment "istio-system/istiod:grpc-xds" -m tcp -j DNAT --to-destination 10.10.1.3:15010
-A KUBE-SEP-QEFWPNF3O63RIU3Y -s 10.10.2.5/32 -m comment --comment "istio-system/jaeger-collector:jaeger-collector-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-QEFWPNF3O63RIU3Y -p tcp -m comment --comment "istio-system/jaeger-collector:jaeger-collector-http" -m tcp -j DNAT --to-destination 10.10.2.5:14268
-A KUBE-SEP-QKX4QX54UKWK6JIY -s 172.18.0.3/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-QKX4QX54UKWK6JIY -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.18.0.3:6443
-A KUBE-SEP-RT3F6VLY3P67FIV3 -s 10.10.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-RT3F6VLY3P67FIV3 -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.10.0.2:9153
-A KUBE-SEP-SDODBYH4DETTTORW -s 10.10.2.5/32 -m comment --comment "istio-system/tracing:http-query" -j KUBE-MARK-MASQ
-A KUBE-SEP-SDODBYH4DETTTORW -p tcp -m comment --comment "istio-system/tracing:http-query" -m tcp -j DNAT --to-destination 10.10.2.5:16686
-A KUBE-SEP-TMGNMEREWWYSM5SB -s 10.10.1.3/32 -m comment --comment "istio-system/istiod:https-dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-TMGNMEREWWYSM5SB -p tcp -m comment --comment "istio-system/istiod:https-dns" -m tcp -j DNAT --to-destination 10.10.1.3:15012
-A KUBE-SEP-VIIXX4VJ3A6FHLRW -s 10.10.1.3/32 -m comment --comment "istio-system/istiod:https-webhook" -j KUBE-MARK-MASQ
-A KUBE-SEP-VIIXX4VJ3A6FHLRW -p tcp -m comment --comment "istio-system/istiod:https-webhook" -m tcp -j DNAT --to-destination 10.10.1.3:15017
-A KUBE-SEP-VQYPEMZ5ZXGF6COE -s 10.10.2.5/32 -m comment --comment "istio-system/jaeger-collector:jaeger-collector-grpc" -j KUBE-MARK-MASQ
-A KUBE-SEP-VQYPEMZ5ZXGF6COE -p tcp -m comment --comment "istio-system/jaeger-collector:jaeger-collector-grpc" -m tcp -j DNAT --to-destination 10.10.2.5:14250
-A KUBE-SEP-X33LYSV5PIJ3PHXQ -s 10.10.1.3/32 -m comment --comment "istio-system/istiod:http-monitoring" -j KUBE-MARK-MASQ
-A KUBE-SEP-X33LYSV5PIJ3PHXQ -p tcp -m comment --comment "istio-system/istiod:http-monitoring" -m tcp -j DNAT --to-destination 10.10.1.3:15014
-A KUBE-SEP-XVHB3NIW2NQLTFP3 -s 10.10.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-XVHB3NIW2NQLTFP3 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.10.0.2:53
-A KUBE-SEP-XWEOB3JN6VI62DQQ -s 10.10.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-XWEOB3JN6VI62DQQ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.10.0.3:53
-A KUBE-SEP-ZEA5VGCBA2QNA7AK -s 10.10.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZEA5VGCBA2QNA7AK -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.10.0.3:53
-A KUBE-SERVICES -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:http-monitoring cluster IP" -m tcp --dport 15014 -j KUBE-SVC-XHUBMW47Y5G3ICIS
-A KUBE-SERVICES -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:jaeger-collector-grpc cluster IP" -m tcp --dport 14250 -j KUBE-SVC-VL7ZZ6D5B63AFR4Y
-A KUBE-SERVICES -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:http-zipkin cluster IP" -m tcp --dport 9411 -j KUBE-SVC-HHG7U5KRFPWQTOCU
-A KUBE-SERVICES -d 10.200.1.133/32 -p tcp -m comment --comment "istio-system/kiali:http-metrics cluster IP" -m tcp --dport 9090 -j KUBE-SVC-KQQXNCC3T6BSGKIX
-A KUBE-SERVICES -d 10.200.1.200/32 -p tcp -m comment --comment "istio-system/loki:http-metrics cluster IP" -m tcp --dport 3100 -j KUBE-SVC-QI3XYPITNLHPCCQT
-A KUBE-SERVICES -d 10.200.1.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:https-webhook cluster IP" -m tcp --dport 443 -j KUBE-SVC-WHNIZNLB5XFXIX2C
-A KUBE-SERVICES -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:https-dns cluster IP" -m tcp --dport 15012 -j KUBE-SVC-CG3LQLBYYHBKATGN
-A KUBE-SERVICES -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:jaeger-collector-http cluster IP" -m tcp --dport 14268 -j KUBE-SVC-SEST5XGLUQ5J34LB
-A KUBE-SERVICES -d 10.200.1.133/32 -p tcp -m comment --comment "istio-system/kiali:http cluster IP" -m tcp --dport 20001 -j KUBE-SVC-TRDPJ2HAZ5FCHNBZ
-A KUBE-SERVICES -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:grpc-xds cluster IP" -m tcp --dport 15010 -j KUBE-SVC-NVNLZVDQSGQUD3NM
-A KUBE-SERVICES -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:grpc-otel cluster IP" -m tcp --dport 4317 -j KUBE-SVC-PGVURPSX4NOENZDL
-A KUBE-SERVICES -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:http-otel cluster IP" -m tcp --dport 4318 -j KUBE-SVC-X2LFBJAZKZE3QCOK
-A KUBE-SERVICES -d 10.200.1.200/32 -p tcp -m comment --comment "istio-system/loki:grpc cluster IP" -m tcp --dport 9095 -j KUBE-SVC-XJMB3L73YF5UUKWH
-A KUBE-SERVICES -d 10.200.1.41/32 -p tcp -m comment --comment "istio-system/prometheus:http cluster IP" -m tcp --dport 9090 -j KUBE-SVC-RJIBBGIJBATPMI6Z
-A KUBE-SERVICES -d 10.200.1.229/32 -p tcp -m comment --comment "istio-system/tracing:http-query cluster IP" -m tcp --dport 80 -j KUBE-SVC-A4N66M5KWTPIOJ3M
-A KUBE-SERVICES -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.200.1.136/32 -p tcp -m comment --comment "istio-system/grafana:service cluster IP" -m tcp --dport 3000 -j KUBE-SVC-Y3OVZYCKHGYTKGDA
-A KUBE-SERVICES -d 10.200.1.229/32 -p tcp -m comment --comment "istio-system/tracing:grpc-query cluster IP" -m tcp --dport 16685 -j KUBE-SVC-H7T7T7XMCTV7IA7W
-A KUBE-SERVICES -d 10.200.1.117/32 -p tcp -m comment --comment "istio-system/zipkin:http-query cluster IP" -m tcp --dport 9411 -j KUBE-SVC-LTFVHWNLEPNOR2EL
-A KUBE-SERVICES -d 10.200.1.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.200.1.89/32 -p tcp -m comment --comment "metallb-system/metallb-webhook-service cluster IP" -m tcp --dport 443 -j KUBE-SVC-GZ25SP4UFGF7SAVL
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-A4N66M5KWTPIOJ3M ! -s 10.10.0.0/16 -d 10.200.1.229/32 -p tcp -m comment --comment "istio-system/tracing:http-query cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-A4N66M5KWTPIOJ3M -m comment --comment "istio-system/tracing:http-query -> 10.10.2.5:16686" -j KUBE-SEP-SDODBYH4DETTTORW
-A KUBE-SVC-CG3LQLBYYHBKATGN ! -s 10.10.0.0/16 -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:https-dns cluster IP" -m tcp --dport 15012 -j KUBE-MARK-MASQ
-A KUBE-SVC-CG3LQLBYYHBKATGN -m comment --comment "istio-system/istiod:https-dns -> 10.10.1.3:15012" -j KUBE-SEP-TMGNMEREWWYSM5SB
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.10.0.0/16 -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.10.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XVHB3NIW2NQLTFP3
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.10.0.3:53" -j KUBE-SEP-ZEA5VGCBA2QNA7AK
-A KUBE-SVC-GZ25SP4UFGF7SAVL ! -s 10.10.0.0/16 -d 10.200.1.89/32 -p tcp -m comment --comment "metallb-system/metallb-webhook-service cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-GZ25SP4UFGF7SAVL -m comment --comment "metallb-system/metallb-webhook-service -> 10.10.1.2:9443" -j KUBE-SEP-BK4SURRAVDQBOPR4
-A KUBE-SVC-H7T7T7XMCTV7IA7W ! -s 10.10.0.0/16 -d 10.200.1.229/32 -p tcp -m comment --comment "istio-system/tracing:grpc-query cluster IP" -m tcp --dport 16685 -j KUBE-MARK-MASQ
-A KUBE-SVC-H7T7T7XMCTV7IA7W -m comment --comment "istio-system/tracing:grpc-query -> 10.10.2.5:16685" -j KUBE-SEP-IPT6REQVTVHSQECW
-A KUBE-SVC-HHG7U5KRFPWQTOCU ! -s 10.10.0.0/16 -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:http-zipkin cluster IP" -m tcp --dport 9411 -j KUBE-MARK-MASQ
-A KUBE-SVC-HHG7U5KRFPWQTOCU -m comment --comment "istio-system/jaeger-collector:http-zipkin -> 10.10.2.5:9411" -j KUBE-SEP-KFUQ2ADNBCFWH5LK
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.10.0.0/16 -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.10.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-RT3F6VLY3P67FIV3
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.10.0.3:9153" -j KUBE-SEP-6GODNNVFRWQ66GUT
-A KUBE-SVC-KQQXNCC3T6BSGKIX ! -s 10.10.0.0/16 -d 10.200.1.133/32 -p tcp -m comment --comment "istio-system/kiali:http-metrics cluster IP" -m tcp --dport 9090 -j KUBE-MARK-MASQ
-A KUBE-SVC-KQQXNCC3T6BSGKIX -m comment --comment "istio-system/kiali:http-metrics -> 10.10.2.6:9090" -j KUBE-SEP-IHVFB7RO5ADTY6F5
-A KUBE-SVC-LTFVHWNLEPNOR2EL ! -s 10.10.0.0/16 -d 10.200.1.117/32 -p tcp -m comment --comment "istio-system/zipkin:http-query cluster IP" -m tcp --dport 9411 -j KUBE-MARK-MASQ
-A KUBE-SVC-LTFVHWNLEPNOR2EL -m comment --comment "istio-system/zipkin:http-query -> 10.10.2.5:9411" -j KUBE-SEP-AGD4XG6YTTFBC5MP
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.10.0.0/16 -d 10.200.1.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 172.18.0.3:6443" -j KUBE-SEP-QKX4QX54UKWK6JIY
-A KUBE-SVC-NVNLZVDQSGQUD3NM ! -s 10.10.0.0/16 -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:grpc-xds cluster IP" -m tcp --dport 15010 -j KUBE-MARK-MASQ
-A KUBE-SVC-NVNLZVDQSGQUD3NM -m comment --comment "istio-system/istiod:grpc-xds -> 10.10.1.3:15010" -j KUBE-SEP-OUCY3KVQXMID5GFB
-A KUBE-SVC-PGVURPSX4NOENZDL ! -s 10.10.0.0/16 -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:grpc-otel cluster IP" -m tcp --dport 4317 -j KUBE-MARK-MASQ
-A KUBE-SVC-PGVURPSX4NOENZDL -m comment --comment "istio-system/jaeger-collector:grpc-otel -> 10.10.2.5:4317" -j KUBE-SEP-57OB7LMKUA4REB6U
-A KUBE-SVC-QI3XYPITNLHPCCQT ! -s 10.10.0.0/16 -d 10.200.1.200/32 -p tcp -m comment --comment "istio-system/loki:http-metrics cluster IP" -m tcp --dport 3100 -j KUBE-MARK-MASQ
-A KUBE-SVC-QI3XYPITNLHPCCQT -m comment --comment "istio-system/loki:http-metrics -> 10.10.2.9:3100" -j KUBE-SEP-2VZBTCTWVROYNW5S
-A KUBE-SVC-RJIBBGIJBATPMI6Z ! -s 10.10.0.0/16 -d 10.200.1.41/32 -p tcp -m comment --comment "istio-system/prometheus:http cluster IP" -m tcp --dport 9090 -j KUBE-MARK-MASQ
-A KUBE-SVC-RJIBBGIJBATPMI6Z -m comment --comment "istio-system/prometheus:http -> 10.10.2.7:9090" -j KUBE-SEP-56NKIH2AGMF3225M
-A KUBE-SVC-SEST5XGLUQ5J34LB ! -s 10.10.0.0/16 -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:jaeger-collector-http cluster IP" -m tcp --dport 14268 -j KUBE-MARK-MASQ
-A KUBE-SVC-SEST5XGLUQ5J34LB -m comment --comment "istio-system/jaeger-collector:jaeger-collector-http -> 10.10.2.5:14268" -j KUBE-SEP-QEFWPNF3O63RIU3Y
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.10.0.0/16 -d 10.200.1.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.10.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2XZJVPRY2PQVE3B3
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.10.0.3:53" -j KUBE-SEP-XWEOB3JN6VI62DQQ
-A KUBE-SVC-TRDPJ2HAZ5FCHNBZ ! -s 10.10.0.0/16 -d 10.200.1.133/32 -p tcp -m comment --comment "istio-system/kiali:http cluster IP" -m tcp --dport 20001 -j KUBE-MARK-MASQ
-A KUBE-SVC-TRDPJ2HAZ5FCHNBZ -m comment --comment "istio-system/kiali:http -> 10.10.2.6:20001" -j KUBE-SEP-O7QD6W2Z5ZWUKA37
-A KUBE-SVC-VL7ZZ6D5B63AFR4Y ! -s 10.10.0.0/16 -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:jaeger-collector-grpc cluster IP" -m tcp --dport 14250 -j KUBE-MARK-MASQ
-A KUBE-SVC-VL7ZZ6D5B63AFR4Y -m comment --comment "istio-system/jaeger-collector:jaeger-collector-grpc -> 10.10.2.5:14250" -j KUBE-SEP-VQYPEMZ5ZXGF6COE
-A KUBE-SVC-WHNIZNLB5XFXIX2C ! -s 10.10.0.0/16 -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:https-webhook cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-WHNIZNLB5XFXIX2C -m comment --comment "istio-system/istiod:https-webhook -> 10.10.1.3:15017" -j KUBE-SEP-VIIXX4VJ3A6FHLRW
-A KUBE-SVC-X2LFBJAZKZE3QCOK ! -s 10.10.0.0/16 -d 10.200.1.144/32 -p tcp -m comment --comment "istio-system/jaeger-collector:http-otel cluster IP" -m tcp --dport 4318 -j KUBE-MARK-MASQ
-A KUBE-SVC-X2LFBJAZKZE3QCOK -m comment --comment "istio-system/jaeger-collector:http-otel -> 10.10.2.5:4318" -j KUBE-SEP-EOOOBC4NP4L4Q2B3
-A KUBE-SVC-XHUBMW47Y5G3ICIS ! -s 10.10.0.0/16 -d 10.200.1.163/32 -p tcp -m comment --comment "istio-system/istiod:http-monitoring cluster IP" -m tcp --dport 15014 -j KUBE-MARK-MASQ
-A KUBE-SVC-XHUBMW47Y5G3ICIS -m comment --comment "istio-system/istiod:http-monitoring -> 10.10.1.3:15014" -j KUBE-SEP-X33LYSV5PIJ3PHXQ
-A KUBE-SVC-XJMB3L73YF5UUKWH ! -s 10.10.0.0/16 -d 10.200.1.200/32 -p tcp -m comment --comment "istio-system/loki:grpc cluster IP" -m tcp --dport 9095 -j KUBE-MARK-MASQ
-A KUBE-SVC-XJMB3L73YF5UUKWH -m comment --comment "istio-system/loki:grpc -> 10.10.2.9:9095" -j KUBE-SEP-HYFTSL3KNTBQVM56
-A KUBE-SVC-Y3OVZYCKHGYTKGDA ! -s 10.10.0.0/16 -d 10.200.1.136/32 -p tcp -m comment --comment "istio-system/grafana:service cluster IP" -m tcp --dport 3000 -j KUBE-MARK-MASQ
-A KUBE-SVC-Y3OVZYCKHGYTKGDA -m comment --comment "istio-system/grafana:service -> 10.10.2.4:3000" -j KUBE-SEP-BTHPIFJSQR2I7YRG
COMMIT
# Completed on Sat Jun  7 07:33:35 2025
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

node : myk8s-worker
...

node : myk8s-worker2
...
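In the KUBE-SVC chains above, kube-proxy load-balances across endpoints with `-m statistic --mode random --probability …`: for a Service with n endpoints, rule i (0-based) carries probability 1/(n-i), and the final rule needs no statistic match because it always fires. That is why the two-endpoint kube-dns chain (KUBE-SVC-TCOU7JCQXEZGVUNU) shows one rule at 0.5 followed by a plain jump. A quick arithmetic check of that scheme, no cluster required:

```shell
# rule i (0-based) of n matches with probability 1/(n-i): rule 0 fires
# 1/n of the time, and each fall-through re-balances over the endpoints
# that remain, so every endpoint is selected with equal probability.
n=2   # the kube-dns service above has two endpoints
for i in $(seq 0 $((n - 1))); do
  awk -v n="$n" -v i="$i" \
    'BEGIN { printf "rule %d: --probability %.11f\n", i, 1 / (n - i) }'
done
# → rule 0: --probability 0.50000000000
# → rule 1: --probability 1.00000000000   (emitted without a statistic match)
```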

13. Changing the Istio observability tools to NodePort services

prometheus(30001), grafana(30002), kiali(30003), tracing(30004)

kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'

✅ Output

service/prometheus patched
service/grafana patched
service/kiali patched
service/tracing patched

Kiali UI: http://127.0.0.1:30003
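The four patches above differ only in the service name and port numbers, so they can be driven from a small table instead of being typed out four times. A minimal sketch — `patch_nodeport` is a hypothetical helper, and the values are the ones used above; the `kubectl` invocation is only echoed here so the loop can be tried without a cluster:

```shell
# hypothetical helper: build the NodePort JSON patch for one service.
# args: $1=service $2=port $3=targetPort $4=nodePort
patch_nodeport() {
  printf '{"spec": {"type": "NodePort", "ports": [{"port": %s, "targetPort": %s, "nodePort": %s}]}}' "$2" "$3" "$4"
}

while read -r svc port target nodeport; do
  patch="$(patch_nodeport "$svc" "$port" "$target" "$nodeport")"
  echo "kubectl patch svc -n istio-system $svc -p '$patch'"
  # on a live cluster, run the command instead of echoing it:
  # kubectl patch svc -n istio-system "$svc" -p "$patch"
done <<'EOF'
prometheus 9090 9090 30001
grafana 3000 3000 30002
kiali 20001 20001 30003
tracing 80 16686 30004
EOF
```

Note that `tracing` is the only entry whose `targetPort` (16686, the Jaeger query port) differs from its service `port` (80).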


🧩 Checking Istio CNI status and logs

1. Check the Istio CNI DaemonSet status

kubectl get ds -n istio-system

✅ Output

NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
istio-cni-node   3         3         3       3            3           kubernetes.io/os=linux   16m
ztunnel          3         3         3       3            3           kubernetes.io/os=linux   16m
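Both DaemonSets are healthy when READY equals DESIRED. That check can be scripted with a one-line awk filter — here fed from a heredoc copy of the output above so it can be tried offline; on a live cluster, pipe `kubectl get ds -n istio-system` into the same function:

```shell
# compare column 2 (DESIRED) against column 4 (READY) for every
# DaemonSet row; NR > 1 skips the header line.
check_ds_ready() {
  awk 'NR > 1 && $2 != $4 { bad = 1 } END { print (bad ? "NOT READY" : "ALL READY") }'
}

check_ds_ready <<'EOF'
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
istio-cni-node   3         3         3       3            3           kubernetes.io/os=linux   16m
ztunnel          3         3         3       3            3           kubernetes.io/os=linux   16m
EOF
# → ALL READY
```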

2. List the istio-cni-node Pods

kubectl get pod -n istio-system -l k8s-app=istio-cni-node -owide

✅ Output

NAME                   READY   STATUS    RESTARTS   AGE   IP          NODE                  NOMINATED NODE   READINESS GATES
istio-cni-node-7kst6   1/1     Running   0          17m   10.10.1.4   myk8s-worker          <none>           <none>
istio-cni-node-dpqsv   1/1     Running   0          17m   10.10.2.2   myk8s-worker2         <none>           <none>
istio-cni-node-rfx6w   1/1     Running   0          17m   10.10.0.5   myk8s-control-plane   <none>           <none>

3. Inspect the istio-cni-node Pod details

kubectl describe pod -n istio-system -l k8s-app=istio-cni-node

✅ Output

Name:                 istio-cni-node-7kst6
Namespace:            istio-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      istio-cni
Node:                 myk8s-worker/172.18.0.4
Start Time:           Sat, 07 Jun 2025 16:25:35 +0900
Labels:               app.kubernetes.io/instance=istio
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=istio-cni
                      app.kubernetes.io/part-of=istio
                      app.kubernetes.io/version=1.26.0
                      controller-revision-hash=69565b7cc7
                      helm.sh/chart=cni-1.26.0
                      istio.io/dataplane-mode=none
                      k8s-app=istio-cni-node
                      pod-template-generation=1
                      sidecar.istio.io/inject=false
Annotations:          container.apparmor.security.beta.kubernetes.io/install-cni: unconfined
                      prometheus.io/path: /metrics
                      prometheus.io/port: 15014
                      prometheus.io/scrape: true
                      sidecar.istio.io/inject: false
Status:               Running
IP:                   10.10.1.4
IPs:
  IP:           10.10.1.4
Controlled By:  DaemonSet/istio-cni-node
Containers:
  install-cni:
    Container ID:  containerd://78f0e92ac48223148fe634580d14e30d3c3c62bb5cd5ab39c4decff81f1b76b5
    Image:         docker.io/istio/install-cni:1.26.0-distroless
    Image ID:      docker.io/istio/install-cni@sha256:e69cea606f6fe75907602349081f78ddb0a94417199f9022f7323510abef65cb
    Port:          15014/TCP
    Host Port:     0/TCP
    Command:
      install-cni
    Args:
      --log_output_level=info
    State:          Running
      Started:      Sat, 07 Jun 2025 16:25:46 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   100Mi
    Readiness:  http-get http://:8000/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      istio-cni-config  ConfigMap  Optional: false
    Environment:
      REPAIR_NODE_NAME:            (v1:spec.nodeName)
      REPAIR_RUN_AS_DAEMON:       true
      REPAIR_SIDECAR_ANNOTATION:  sidecar.istio.io/status
      ALLOW_SWITCH_TO_HOST_NS:    true
      NODE_NAME:                   (v1:spec.nodeName)
      GOMEMLIMIT:                 node allocatable (limits.memory)
      GOMAXPROCS:                 node allocatable (limits.cpu)
      POD_NAME:                   istio-cni-node-7kst6 (v1:metadata.name)
      POD_NAMESPACE:              istio-system (v1:metadata.namespace)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /host/proc from cni-host-procfs (ro)
      /host/var/run/netns from cni-netns-dir (rw)
      /var/run/istio-cni from cni-socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gwk9 (ro)
      /var/run/ztunnel from cni-ztunnel-sock-dir (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-host-procfs:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cni-ztunnel-sock-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ztunnel
    HostPathType:  DirectoryOrCreate
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/istio-cni
    HostPathType:  
  cni-netns-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/netns
    HostPathType:  DirectoryOrCreate
  kube-api-access-6gwk9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  17m   default-scheduler  Successfully assigned istio-system/istio-cni-node-7kst6 to myk8s-worker
  Normal   Pulling    17m   kubelet            Pulling image "docker.io/istio/install-cni:1.26.0-distroless"
  Normal   Pulled     17m   kubelet            Successfully pulled image "docker.io/istio/install-cni:1.26.0-distroless" in 10.868s (10.868s including waiting). Image size: 47376201 bytes.
  Normal   Created    17m   kubelet            Created container: install-cni
  Normal   Started    17m   kubelet            Started container install-cni
  Warning  Unhealthy  17m   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503

Name:                 istio-cni-node-dpqsv
Namespace:            istio-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      istio-cni
Node:                 myk8s-worker2/172.18.0.2
Start Time:           Sat, 07 Jun 2025 16:25:35 +0900
Labels:               app.kubernetes.io/instance=istio
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=istio-cni
                      app.kubernetes.io/part-of=istio
                      app.kubernetes.io/version=1.26.0
                      controller-revision-hash=69565b7cc7
                      helm.sh/chart=cni-1.26.0
                      istio.io/dataplane-mode=none
                      k8s-app=istio-cni-node
                      pod-template-generation=1
                      sidecar.istio.io/inject=false
Annotations:          container.apparmor.security.beta.kubernetes.io/install-cni: unconfined
                      prometheus.io/path: /metrics
                      prometheus.io/port: 15014
                      prometheus.io/scrape: true
                      sidecar.istio.io/inject: false
Status:               Running
IP:                   10.10.2.2
IPs:
  IP:           10.10.2.2
Controlled By:  DaemonSet/istio-cni-node
Containers:
  install-cni:
    Container ID:  containerd://daad034515ef510652efe8e1cba35398f63be4091040d6e7d446538bfd36bf0c
    Image:         docker.io/istio/install-cni:1.26.0-distroless
    Image ID:      docker.io/istio/install-cni@sha256:e69cea606f6fe75907602349081f78ddb0a94417199f9022f7323510abef65cb
    Port:          15014/TCP
    Host Port:     0/TCP
    Command:
      install-cni
    Args:
      --log_output_level=info
    State:          Running
      Started:      Sat, 07 Jun 2025 16:25:46 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   100Mi
    Readiness:  http-get http://:8000/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      istio-cni-config  ConfigMap  Optional: false
    Environment:
      REPAIR_NODE_NAME:            (v1:spec.nodeName)
      REPAIR_RUN_AS_DAEMON:       true
      REPAIR_SIDECAR_ANNOTATION:  sidecar.istio.io/status
      ALLOW_SWITCH_TO_HOST_NS:    true
      NODE_NAME:                   (v1:spec.nodeName)
      GOMEMLIMIT:                 node allocatable (limits.memory)
      GOMAXPROCS:                 node allocatable (limits.cpu)
      POD_NAME:                   istio-cni-node-dpqsv (v1:metadata.name)
      POD_NAMESPACE:              istio-system (v1:metadata.namespace)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /host/proc from cni-host-procfs (ro)
      /host/var/run/netns from cni-netns-dir (rw)
      /var/run/istio-cni from cni-socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7nv4k (ro)
      /var/run/ztunnel from cni-ztunnel-sock-dir (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-host-procfs:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cni-ztunnel-sock-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ztunnel
    HostPathType:  DirectoryOrCreate
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/istio-cni
    HostPathType:  
  cni-netns-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/netns
    HostPathType:  DirectoryOrCreate
  kube-api-access-7nv4k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  17m   default-scheduler  Successfully assigned istio-system/istio-cni-node-dpqsv to myk8s-worker2
  Normal  Pulling    17m   kubelet            Pulling image "docker.io/istio/install-cni:1.26.0-distroless"
  Normal  Pulled     17m   kubelet            Successfully pulled image "docker.io/istio/install-cni:1.26.0-distroless" in 10.13s (10.13s including waiting). Image size: 47376201 bytes.
  Normal  Created    17m   kubelet            Created container: install-cni
  Normal  Started    17m   kubelet            Started container install-cni

Name:                 istio-cni-node-rfx6w
Namespace:            istio-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      istio-cni
Node:                 myk8s-control-plane/172.18.0.3
Start Time:           Sat, 07 Jun 2025 16:25:35 +0900
Labels:               app.kubernetes.io/instance=istio
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=istio-cni
                      app.kubernetes.io/part-of=istio
                      app.kubernetes.io/version=1.26.0
                      controller-revision-hash=69565b7cc7
                      helm.sh/chart=cni-1.26.0
                      istio.io/dataplane-mode=none
                      k8s-app=istio-cni-node
                      pod-template-generation=1
                      sidecar.istio.io/inject=false
Annotations:          container.apparmor.security.beta.kubernetes.io/install-cni: unconfined
                      prometheus.io/path: /metrics
                      prometheus.io/port: 15014
                      prometheus.io/scrape: true
                      sidecar.istio.io/inject: false
Status:               Running
IP:                   10.10.0.5
IPs:
  IP:           10.10.0.5
Controlled By:  DaemonSet/istio-cni-node
Containers:
  install-cni:
    Container ID:  containerd://a329e0aab8b5fc24704bc72db46fa5a7184445aefe51b303e164b5cc5db5d406
    Image:         docker.io/istio/install-cni:1.26.0-distroless
    Image ID:      docker.io/istio/install-cni@sha256:e69cea606f6fe75907602349081f78ddb0a94417199f9022f7323510abef65cb
    Port:          15014/TCP
    Host Port:     0/TCP
    Command:
      install-cni
    Args:
      --log_output_level=info
    State:          Running
      Started:      Sat, 07 Jun 2025 16:25:45 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   100Mi
    Readiness:  http-get http://:8000/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      istio-cni-config  ConfigMap  Optional: false
    Environment:
      REPAIR_NODE_NAME:            (v1:spec.nodeName)
      REPAIR_RUN_AS_DAEMON:       true
      REPAIR_SIDECAR_ANNOTATION:  sidecar.istio.io/status
      ALLOW_SWITCH_TO_HOST_NS:    true
      NODE_NAME:                   (v1:spec.nodeName)
      GOMEMLIMIT:                 node allocatable (limits.memory)
      GOMAXPROCS:                 node allocatable (limits.cpu)
      POD_NAME:                   istio-cni-node-rfx6w (v1:metadata.name)
      POD_NAMESPACE:              istio-system (v1:metadata.namespace)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /host/proc from cni-host-procfs (ro)
      /host/var/run/netns from cni-netns-dir (rw)
      /var/run/istio-cni from cni-socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfpf5 (ro)
      /var/run/ztunnel from cni-ztunnel-sock-dir (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-host-procfs:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cni-ztunnel-sock-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ztunnel
    HostPathType:  DirectoryOrCreate
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/istio-cni
    HostPathType:  
  cni-netns-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/netns
    HostPathType:  DirectoryOrCreate
  kube-api-access-rfpf5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  17m   default-scheduler  Successfully assigned istio-system/istio-cni-node-rfx6w to myk8s-control-plane
  Normal  Pulling    17m   kubelet            Pulling image "docker.io/istio/install-cni:1.26.0-distroless"
  Normal  Pulled     17m   kubelet            Successfully pulled image "docker.io/istio/install-cni:1.26.0-distroless" in 9.179s (9.18s including waiting). Image size: 47376201 bytes.
  Normal  Created    17m   kubelet            Created container: install-cni
  Normal  Started    17m   kubelet            Started container install-cni

4. Check the CNI binary directory (/opt/cni/bin)

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'ls -l /opt/cni/bin'; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
node : myk8s-control-plane
total 70820
-rwxr-xr-x 1 root root  4059492 Mar  5 12:52 host-local
-rwxr-xr-x 1 root root 54538424 Jun  7 07:25 istio-cni
-rwxr-xr-x 1 root root  4114956 Mar  5 12:52 loopback
-rwxr-xr-x 1 root root  4713142 Mar  5 12:52 portmap
-rwxr-xr-x 1 root root  5080390 Mar  5 12:52 ptp

node : myk8s-worker
total 70820
-rwxr-xr-x 1 root root  4059492 Mar  5 12:52 host-local
-rwxr-xr-x 1 root root 54538424 Jun  7 07:25 istio-cni
-rwxr-xr-x 1 root root  4114956 Mar  5 12:52 loopback
-rwxr-xr-x 1 root root  4713142 Mar  5 12:52 portmap
-rwxr-xr-x 1 root root  5080390 Mar  5 12:52 ptp

node : myk8s-worker2
total 70820
-rwxr-xr-x 1 root root  4059492 Mar  5 12:52 host-local
-rwxr-xr-x 1 root root 54538424 Jun  7 07:25 istio-cni
-rwxr-xr-x 1 root root  4114956 Mar  5 12:52 loopback
-rwxr-xr-x 1 root root  4713142 Mar  5 12:52 portmap
-rwxr-xr-x 1 root root  5080390 Mar  5 12:52 ptp

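The same per-node loop appears throughout this post; wrapping it once in a small function keeps the later checks shorter. A minimal sketch (the `run_on_nodes` name is mine, not from the lab guide; it assumes the three kind nodes created in step 1):

```shell
# Hypothetical helper wrapping the per-node loop used throughout this post.
# Assumes nodes myk8s-control-plane, myk8s-worker, myk8s-worker2 exist.
run_on_nodes() {
  for node in control-plane worker worker2; do
    echo "node : myk8s-$node"
    docker exec myk8s-"$node" sh -c "$1"
    echo
  done
}

# Example: the same check as above, in one line
# run_on_nodes 'ls -l /opt/cni/bin'
```
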
5. Check the CNI config file directory (/etc/cni/net.d)

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'ls -l /etc/cni/net.d'; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
node : myk8s-control-plane
total 4
-rw-r--r-- 1 root root 861 Jun  7 07:25 10-kindnet.conflist

node : myk8s-worker
total 4
-rw-r--r-- 1 root root 861 Jun  7 07:25 10-kindnet.conflist

node : myk8s-worker2
total 4
-rw-r--r-- 1 root root 861 Jun  7 07:25 10-kindnet.conflist

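Note that no new config file appeared: the Istio CNI agent chains itself into the existing kindnet conflist instead of installing its own. The sketch below extracts the plugin chain from a sample conflist; the JSON content is a hypothetical abbreviation of what the chained file looks like, not a dump from this lab.

```shell
# Hypothetical abbreviation of a chained conflist after istio-cni installs.
cat > /tmp/10-kindnet.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "kindnet",
  "plugins": [
    { "type": "ptp" },
    { "type": "portmap" },
    { "type": "istio-cni" }
  ]
}
EOF

# Extract the plugin chain; istio-cni should appear last
grep -o '"type": "[a-z-]*"' /tmp/10-kindnet.conflist
# On a live node, inspect the real file instead:
# docker exec myk8s-control-plane cat /etc/cni/net.d/10-kindnet.conflist
```
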
6. Check the Istio CNI runtime socket directory (/var/run/istio-cni)

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'ls -l /var/run/istio-cni'; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
node : myk8s-control-plane
total 8
-rw------- 1 root root 2990 Jun  7 07:25 istio-cni-kubeconfig
-rw------- 1 root root  171 Jun  7 07:25 istio-cni.log
srw-rw-rw- 1 root root    0 Jun  7 07:25 log.sock
srw-rw-rw- 1 root root    0 Jun  7 07:25 pluginevent.sock

node : myk8s-worker
total 8
-rw------- 1 root root 2981 Jun  7 07:25 istio-cni-kubeconfig
-rw------- 1 root root  171 Jun  7 07:25 istio-cni.log
srw-rw-rw- 1 root root    0 Jun  7 07:25 log.sock
srw-rw-rw- 1 root root    0 Jun  7 07:25 pluginevent.sock

node : myk8s-worker2
total 8
-rw------- 1 root root 2982 Jun  7 07:25 istio-cni-kubeconfig
-rw------- 1 root root 1335 Jun  7 07:27 istio-cni.log
srw-rw-rw- 1 root root    0 Jun  7 07:25 log.sock
srw-rw-rw- 1 root root    0 Jun  7 07:25 pluginevent.sock

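The leading `s` in `srw-rw-rw-` marks `log.sock` and `pluginevent.sock` as UNIX sockets, which the CNI plugin binary uses to talk to the node agent; `istio-cni.log` is the plugin's own log file next to them. A sketch for reading that log, guarded so it degrades to a note when the kind cluster from step 1 is not running:

```shell
# Read the CNI plugin's log on one node (node name from this lab run).
if command -v docker >/dev/null 2>&1 && docker ps --format '{{.Names}}' 2>/dev/null | grep -q myk8s-worker2; then
  istio_cni_log=$(docker exec myk8s-worker2 sh -c 'cat /var/run/istio-cni/istio-cni.log')
else
  istio_cni_log="kind cluster not running; skipping"
fi
echo "$istio_cni_log"
```
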
7. Check the network namespace files (/var/run/netns)

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'ls -l /var/run/netns'; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
node : myk8s-control-plane
total 0
-r--r--r-- 1 root root 0 Jun  7 06:54 cni-1ad352d6-873b-0f5d-9c82-54b44e853fd7
-r--r--r-- 1 root root 0 Jun  7 06:54 cni-861f44e6-937b-4d15-8df6-757b9759cc74
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-8f0e1fff-31a4-bc8f-82d9-86ba81886dfd
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-d348a25f-f731-b77c-dbaa-27b8d1dfa25a
-r--r--r-- 1 root root 0 Jun  7 06:54 cni-de7ad285-b86f-3164-14f2-a220466f8aa1

node : myk8s-worker
total 0
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-25f25e6c-522e-f81f-5ebe-7053a7add3f0
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-4afc4244-9e76-c6c2-e7de-2fe41a280b65
-r--r--r-- 1 root root 0 Jun  7 07:18 cni-a3b2c636-67d4-0b64-f34c-4dadb40c753a
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-d6cc6548-5a67-08f7-df08-5b85d4c75cbc

node : myk8s-worker2
total 0
-r--r--r-- 1 root root 0 Jun  7 07:27 cni-1f8eace2-6620-216e-bb4d-c4b86ee5c6c4
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-277efe1a-9bbe-2abf-9a8d-74328563969c
-r--r--r-- 1 root root 0 Jun  7 07:27 cni-3771908e-b0ff-603f-2996-7a7adf7075c7
-r--r--r-- 1 root root 0 Jun  7 07:27 cni-ac03b3ba-9368-6cf8-7cd4-d352d7efc40f
-r--r--r-- 1 root root 0 Jun  7 07:27 cni-ad228616-57c2-1fc6-e9cc-a850f148c51b
-r--r--r-- 1 root root 0 Jun  7 07:27 cni-f3dd8ca3-3f82-195a-d587-4b42ab11d2f8
-r--r--r-- 1 root root 0 Jun  7 07:25 cni-ff483962-b91b-3571-0f8b-66d75d099e53

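Each `cni-*` file above is a bind-mount anchor that keeps a pod's network namespace alive after the pause process created it. The namespace itself is just an inode, visible for any process under `/proc` — a quick check that runs anywhere on Linux, no cluster needed:

```shell
# Print the current shell's network namespace inode; these are the same
# inode numbers that lsns shows in the NS column in the next step.
readlink /proc/$$/ns/net
# prints something like net:[4026531840]
```
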
8. Inspect network namespaces in detail (lsns -t net)

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'lsns -t net'; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
node : myk8s-control-plane
        NS TYPE NPROCS   PID USER     NETNSID NSFS                                                COMMAND
4026533248 net      32     1 root  unassigned                                                     /sbin/init
4026533441 net       2  1451 65535          1 /run/netns/cni-861f44e6-937b-4d15-8df6-757b9759cc74 /pause
4026533500 net       2  1461 65535          2 /run/netns/cni-de7ad285-b86f-3164-14f2-a220466f8aa1 /pause
4026533559 net       2  1467 65535          3 /run/netns/cni-1ad352d6-873b-0f5d-9c82-54b44e853fd7 /pause
4026533863 net       2  3526 65535          4 /run/netns/cni-8f0e1fff-31a4-bc8f-82d9-86ba81886dfd /pause
4026534128 net       2  3711 65535          5 /run/netns/cni-d348a25f-f731-b77c-dbaa-27b8d1dfa25a /pause

node : myk8s-worker
        NS TYPE NPROCS   PID USER      NETNSID NSFS                                                COMMAND
4026533179 net       2  2049 nobody          1 /run/netns/cni-a3b2c636-67d4-0b64-f34c-4dadb40c753a /pause
4026533313 net      19     1 root   unassigned                                                     /sbin/init
4026533794 net       2  2440 65535           2 /run/netns/cni-25f25e6c-522e-f81f-5ebe-7053a7add3f0 /pause
4026533923 net       2  2597 65535           3 /run/netns/cni-d6cc6548-5a67-08f7-df08-5b85d4c75cbc /pause
4026534067 net       2  2785 65535           4 /run/netns/cni-4afc4244-9e76-c6c2-e7de-2fe41a280b65 /pause

node : myk8s-worker2
        NS TYPE NPROCS   PID USER     NETNSID NSFS                                                COMMAND
4026533177 net      22     1 root  unassigned                                                     /sbin/init
4026533983 net       2  2301 65535          1 /run/netns/cni-ff483962-b91b-3571-0f8b-66d75d099e53 /pause
4026534132 net       2  2488 65535          2 /run/netns/cni-277efe1a-9bbe-2abf-9a8d-74328563969c /pause
4026534270 net       2  2703 65535          3 /run/netns/cni-ac03b3ba-9368-6cf8-7cd4-d352d7efc40f /pause
4026534331 net       2  2748 65535          4 /run/netns/cni-1f8eace2-6620-216e-bb4d-c4b86ee5c6c4 /pause
4026534400 net       2  2842 65535          5 /run/netns/cni-ad228616-57c2-1fc6-e9cc-a850f148c51b /pause
4026534465 net       3  2933 65535          6 /run/netns/cni-f3dd8ca3-3f82-195a-d587-4b42ab11d2f8 /pause
4026534530 net       5  3327 10001          7 /run/netns/cni-3771908e-b0ff-603f-2996-7a7adf7075c7 /pause

9. Check the istio-cni-node DaemonSet pod logs

1
kubectl logs -n istio-system -l k8s-app=istio-cni-node -f

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
2025-06-07T07:25:46.531525Z	info	cni-agent	initial installation complete, start watching for re-installation
2025-06-07T07:25:49.943034Z	info	cni-plugin	excluded istio-system/ztunnel-kczj2 pod because it has proxy type ztunnel	pod=istio-system/ztunnel-kczj2
2025-06-07T07:25:54.996406Z	info	cni-agent	new ztunnel connected, total connected: 1	conn_uuid=93ca02ef-b17e-4530-8606-bfa17738bf4d
2025-06-07T07:25:54.996621Z	info	cni-agent	received hello from ztunnel	conn_uuid=93ca02ef-b17e-4530-8606-bfa17738bf4d version=V1
2025-06-07T07:27:32.837642Z	info	cni-plugin	excluded because it does not have istio-proxy container (have [grafana])	pod=istio-system/grafana-65bfb5f855-jfmdl
2025-06-07T07:27:32.917263Z	info	cni-plugin	excluded because it does not have istio-proxy container (have [jaeger])	pod=istio-system/jaeger-868fbc75d7-4lq87
2025-06-07T07:27:33.020510Z	info	cni-plugin	excluded because it does not have istio-proxy container (have [kiali])	pod=istio-system/kiali-6d774d8bb8-zkx5r
2025-06-07T07:27:33.215794Z	info	cni-plugin	excluded because it does not have istio-proxy container (have [prometheus-server prometheus-server-configmap-reload])	pod=istio-system/prometheus-689cc795d4-vrlxd
2025-06-07T07:27:34.739441Z	info	cni-plugin	excluded because it does not have istio-proxy container (have [helper-pod])	pod=local-path-storage/helper-pod-create-pvc-88e3e94b-8a2e-458f-8e76-66918aa62d52
2025-06-07T07:27:39.722136Z	info	cni-plugin	excluded because it does not have istio-proxy container (have [loki loki-sc-rules])	pod=istio-system/loki-0

2025-06-07T07:25:45.579505Z	info	cni-agent	configuration requires updates, (re)writing CNI config file at "": CNI config file "" preempted by "/host/etc/cni/net.d/10-kindnet.conflist"
2025-06-07T07:25:45.579796Z	info	cni-agent	created CNI config /host/etc/cni/net.d/10-kindnet.conflist
2025-06-07T07:25:45.579801Z	info	cni-agent	initial installation complete, start watching for re-installation
2025-06-07T07:25:49.942244Z	info	cni-plugin	excluded istio-system/ztunnel-4bls2 pod because it has proxy type ztunnel	pod=istio-system/ztunnel-4bls2
2025-06-07T07:25:55.298169Z	info	cni-agent	new ztunnel connected, total connected: 1	conn_uuid=c48bae5a-9998-486f-870e-cfb14ae50f33
2025-06-07T07:25:55.298288Z	info	cni-agent	received hello from ztunnel	conn_uuid=c48bae5a-9998-486f-870e-cfb14ae50f33 version=V1

2025-06-07T07:25:47.264802Z	info	cni-agent	configuration requires updates, (re)writing CNI config file at "": CNI config file "" preempted by "/host/etc/cni/net.d/10-kindnet.conflist"
2025-06-07T07:25:47.265184Z	info	cni-agent	created CNI config /host/etc/cni/net.d/10-kindnet.conflist
2025-06-07T07:25:47.265200Z	info	cni-agent	initial installation complete, start watching for re-installation
2025-06-07T07:25:49.920768Z	info	cni-plugin	excluded istio-system/ztunnel-wr6pp pod because it has proxy type ztunnel	pod=istio-system/ztunnel-wr6pp
2025-06-07T07:25:55.651439Z	info	cni-agent	new ztunnel connected, total connected: 1	conn_uuid=bb9587e4-4a75-4c42-8c69-976421363d2e
2025-06-07T07:25:55.651617Z	info	cni-agent	received hello from ztunnel	conn_uuid=bb9587e4-4a75-4c42-8c69-976421363d2e version=V1

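With the `-l` selector, log lines from all three agents arrive interleaved. kubectl's `--prefix` flag labels each line with its source pod, which makes the merged stream attributable. Guarded so the snippet degrades to a note when the cluster is not reachable:

```shell
# Per-pod-labelled logs from the istio-cni-node DaemonSet.
if command -v kubectl >/dev/null 2>&1 && kubectl get ds -n istio-system istio-cni-node >/dev/null 2>&1; then
  cni_logs=$(kubectl logs -n istio-system -l k8s-app=istio-cni-node --prefix --tail=5)
else
  cni_logs="cluster not reachable; skipping"
fi
echo "$cni_logs"
```
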
🛰️ Check ztunnel Configuration and Status

1. List ztunnel pods and check their nodes

1
kubectl get pod -n istio-system -l app=ztunnel -owide

✅ Output

1
2
3
4
NAME            READY   STATUS    RESTARTS   AGE   IP          NODE                  NOMINATED NODE   READINESS GATES
ztunnel-4bls2   1/1     Running   0          24m   10.10.0.6   myk8s-control-plane   <none>           <none>
ztunnel-kczj2   1/1     Running   0          24m   10.10.2.3   myk8s-worker2         <none>           <none>
ztunnel-wr6pp   1/1     Running   0          24m   10.10.1.5   myk8s-worker          <none>           <none>

2. Save ztunnel pod names to variables

1
2
3
4
ZPOD1NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[0].metadata.name}")
ZPOD2NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[1].metadata.name}")
ZPOD3NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[2].metadata.name}")
echo $ZPOD1NAME $ZPOD2NAME $ZPOD3NAME

✅ Output

1
ztunnel-4bls2 ztunnel-kczj2 ztunnel-wr6pp

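The three fixed-index jsonpath lookups above can also be collapsed into a single call. Sketch below, stubbed with the pod names from this run so it executes without a cluster; on a live cluster, use the commented line instead:

```shell
# One jsonpath call fetches every ztunnel pod name at once.
ZPODS="ztunnel-4bls2 ztunnel-kczj2 ztunnel-wr6pp"   # stub from this run
# ZPODS=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath='{.items[*].metadata.name}')
set -- $ZPODS
echo "found $# ztunnel pods: $*"
# → found 3 ztunnel pods: ztunnel-4bls2 ztunnel-kczj2 ztunnel-wr6pp
```
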
3. Check ztunnel pod details

1
kubectl describe pod -n istio-system -l app=ztunnel

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
Name:                 ztunnel-4bls2
Namespace:            istio-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      ztunnel
Node:                 myk8s-control-plane/172.18.0.3
Start Time:           Sat, 07 Jun 2025 16:25:49 +0900
Labels:               app=ztunnel
                      app.kubernetes.io/instance=istio
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=ztunnel
                      app.kubernetes.io/part-of=istio
                      app.kubernetes.io/version=1.26.0
                      controller-revision-hash=56d59487df
                      helm.sh/chart=ztunnel-1.26.0
                      istio.io/dataplane-mode=none
                      pod-template-generation=1
                      sidecar.istio.io/inject=false
Annotations:          prometheus.io/port: 15020
                      prometheus.io/scrape: true
                      sidecar.istio.io/inject: false
Status:               Running
IP:                   10.10.0.6
IPs:
  IP:           10.10.0.6
Controlled By:  DaemonSet/ztunnel
Containers:
  istio-proxy:
    Container ID:  containerd://0c1ef5dcb31047254da84bdf13a5d83a25a8c5effad13d36b6868317a1358fa3
    Image:         docker.io/istio/ztunnel:1.26.0-distroless
    Image ID:      docker.io/istio/ztunnel@sha256:d711b5891822f4061c0849b886b4786f96b1728055333cbe42a99d0aeff36dbe
    Port:          15020/TCP
    Host Port:     0/TCP
    Args:
      proxy
      ztunnel
    State:          Running
      Started:      Sat, 07 Jun 2025 16:25:55 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      200m
      memory:   512Mi
    Readiness:  http-get http://:15021/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CA_ADDRESS:                        istiod.istio-system.svc:15012
      XDS_ADDRESS:                       istiod.istio-system.svc:15012
      RUST_LOG:                          info
      RUST_BACKTRACE:                    1
      ISTIO_META_CLUSTER_ID:             Kubernetes
      INPOD_ENABLED:                     true
      TERMINATION_GRACE_PERIOD_SECONDS:  30
      POD_NAME:                          ztunnel-4bls2 (v1:metadata.name)
      POD_NAMESPACE:                     istio-system (v1:metadata.namespace)
      NODE_NAME:                          (v1:spec.nodeName)
      INSTANCE_IP:                        (v1:status.podIP)
      SERVICE_ACCOUNT:                    (v1:spec.serviceAccountName)
      ISTIO_META_ENABLE_HBONE:           true
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cg9cp (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/ztunnel from cni-ztunnel-sock-dir (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  cni-ztunnel-sock-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ztunnel
    HostPathType:  DirectoryOrCreate
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-cg9cp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned istio-system/ztunnel-4bls2 to myk8s-control-plane
  Normal  Pulling    25m   kubelet            Pulling image "docker.io/istio/ztunnel:1.26.0-distroless"
  Normal  Pulled     25m   kubelet            Successfully pulled image "docker.io/istio/ztunnel:1.26.0-distroless" in 5.154s (5.154s including waiting). Image size: 12963044 bytes.
  Normal  Created    25m   kubelet            Created container: istio-proxy
  Normal  Started    25m   kubelet            Started container istio-proxy

Name:                 ztunnel-kczj2
Namespace:            istio-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      ztunnel
Node:                 myk8s-worker2/172.18.0.2
Start Time:           Sat, 07 Jun 2025 16:25:49 +0900
Labels:               app=ztunnel
                      app.kubernetes.io/instance=istio
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=ztunnel
                      app.kubernetes.io/part-of=istio
                      app.kubernetes.io/version=1.26.0
                      controller-revision-hash=56d59487df
                      helm.sh/chart=ztunnel-1.26.0
                      istio.io/dataplane-mode=none
                      pod-template-generation=1
                      sidecar.istio.io/inject=false
Annotations:          prometheus.io/port: 15020
                      prometheus.io/scrape: true
                      sidecar.istio.io/inject: false
Status:               Running
IP:                   10.10.2.3
IPs:
  IP:           10.10.2.3
Controlled By:  DaemonSet/ztunnel
Containers:
  istio-proxy:
    Container ID:  containerd://bf06e12cbc28de94f931fab730b9441a78599d8bff1088e03a335035a4bdb7f2
    Image:         docker.io/istio/ztunnel:1.26.0-distroless
    Image ID:      docker.io/istio/ztunnel@sha256:d711b5891822f4061c0849b886b4786f96b1728055333cbe42a99d0aeff36dbe
    Port:          15020/TCP
    Host Port:     0/TCP
    Args:
      proxy
      ztunnel
    State:          Running
      Started:      Sat, 07 Jun 2025 16:25:54 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      200m
      memory:   512Mi
    Readiness:  http-get http://:15021/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CA_ADDRESS:                        istiod.istio-system.svc:15012
      XDS_ADDRESS:                       istiod.istio-system.svc:15012
      RUST_LOG:                          info
      RUST_BACKTRACE:                    1
      ISTIO_META_CLUSTER_ID:             Kubernetes
      INPOD_ENABLED:                     true
      TERMINATION_GRACE_PERIOD_SECONDS:  30
      POD_NAME:                          ztunnel-kczj2 (v1:metadata.name)
      POD_NAMESPACE:                     istio-system (v1:metadata.namespace)
      NODE_NAME:                          (v1:spec.nodeName)
      INSTANCE_IP:                        (v1:status.podIP)
      SERVICE_ACCOUNT:                    (v1:spec.serviceAccountName)
      ISTIO_META_ENABLE_HBONE:           true
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-prpx6 (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/ztunnel from cni-ztunnel-sock-dir (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  cni-ztunnel-sock-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ztunnel
    HostPathType:  DirectoryOrCreate
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-prpx6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned istio-system/ztunnel-kczj2 to myk8s-worker2
  Normal  Pulling    25m   kubelet            Pulling image "docker.io/istio/ztunnel:1.26.0-distroless"
  Normal  Pulled     25m   kubelet            Successfully pulled image "docker.io/istio/ztunnel:1.26.0-distroless" in 4.831s (4.831s including waiting). Image size: 12963044 bytes.
  Normal  Created    25m   kubelet            Created container: istio-proxy
  Normal  Started    25m   kubelet            Started container istio-proxy

Name:                 ztunnel-wr6pp
Namespace:            istio-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      ztunnel
Node:                 myk8s-worker/172.18.0.4
Start Time:           Sat, 07 Jun 2025 16:25:49 +0900
Labels:               app=ztunnel
                      app.kubernetes.io/instance=istio
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=ztunnel
                      app.kubernetes.io/part-of=istio
                      app.kubernetes.io/version=1.26.0
                      controller-revision-hash=56d59487df
                      helm.sh/chart=ztunnel-1.26.0
                      istio.io/dataplane-mode=none
                      pod-template-generation=1
                      sidecar.istio.io/inject=false
Annotations:          prometheus.io/port: 15020
                      prometheus.io/scrape: true
                      sidecar.istio.io/inject: false
Status:               Running
IP:                   10.10.1.5
IPs:
  IP:           10.10.1.5
Controlled By:  DaemonSet/ztunnel
Containers:
  istio-proxy:
    Container ID:  containerd://5f86194922f0b506448ad5ed83048f61c2f1e532283e7bcc6fb513b070bda8bb
    Image:         docker.io/istio/ztunnel:1.26.0-distroless
    Image ID:      docker.io/istio/ztunnel@sha256:d711b5891822f4061c0849b886b4786f96b1728055333cbe42a99d0aeff36dbe
    Port:          15020/TCP
    Host Port:     0/TCP
    Args:
      proxy
      ztunnel
    State:          Running
      Started:      Sat, 07 Jun 2025 16:25:55 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      200m
      memory:   512Mi
    Readiness:  http-get http://:15021/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CA_ADDRESS:                        istiod.istio-system.svc:15012
      XDS_ADDRESS:                       istiod.istio-system.svc:15012
      RUST_LOG:                          info
      RUST_BACKTRACE:                    1
      ISTIO_META_CLUSTER_ID:             Kubernetes
      INPOD_ENABLED:                     true
      TERMINATION_GRACE_PERIOD_SECONDS:  30
      POD_NAME:                          ztunnel-wr6pp (v1:metadata.name)
      POD_NAMESPACE:                     istio-system (v1:metadata.namespace)
      NODE_NAME:                          (v1:spec.nodeName)
      INSTANCE_IP:                        (v1:status.podIP)
      SERVICE_ACCOUNT:                    (v1:spec.serviceAccountName)
      ISTIO_META_ENABLE_HBONE:           true
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-79x8x (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/ztunnel from cni-ztunnel-sock-dir (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  cni-ztunnel-sock-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/ztunnel
    HostPathType:  DirectoryOrCreate
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-79x8x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned istio-system/ztunnel-wr6pp to myk8s-worker
  Normal  Pulling    25m   kubelet            Pulling image "docker.io/istio/ztunnel:1.26.0-distroless"
  Normal  Pulled     25m   kubelet            Successfully pulled image "docker.io/istio/ztunnel:1.26.0-distroless" in 5.515s (5.515s including waiting). Image size: 12963044 bytes.
  Normal  Created    25m   kubelet            Created container: istio-proxy
  Normal  Started    25m   kubelet            Started container istio-proxy

4. Install the kubectl pexec plugin

kubectl krew install pexec

✅ Output

Updated the local copy of plugin index.
  New plugins available:
    * ai
    * cwide
    * mtv
    * node-resource
    * preq
    * resource-backup
    * xdsnap
  Upgrades available for installed plugins:
    * krew v0.4.4 -> v0.4.5
    * view-secret v0.13.0 -> v0.14.0
Installing plugin: pexec
Installed plugin: pexec
\
 | Use this plugin:
 | 	kubectl pexec
 | Documentation:
 | 	https://github.com/ssup2/kpexec
 | Caveats:
 | \
 |  | pexec requires the privileges to run privileged pods with hostPID.
 | /
/
WARNING: You installed plugin "pexec" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
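The next step references $ZPOD1NAME, which is assumed to hold the name of one ztunnel pod. On a live cluster it could be captured with a label selector; the sketch below shows the same "take the first pod" selection, simulated against sample output so the snippet runs without a cluster (the extra pod names are made up for illustration):

```shell
# With a live cluster, the equivalent would be:
#   ZPOD1NAME=$(kubectl get pod -n istio-system -l app=ztunnel \
#                 -o jsonpath='{.items[0].metadata.name}')
# Here we simulate that selection against sample pod names; only
# ztunnel-wr6pp comes from the capture above, the rest are hypothetical.
sample='ztunnel-wr6pp
ztunnel-5zhh8
ztunnel-gp4cv'
ZPOD1NAME=$(printf '%s\n' "$sample" | head -n 1)
echo "$ZPOD1NAME"
```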

5. Enter the ztunnel pod (using pexec)

kubectl pexec $ZPOD1NAME -it -T -n istio-system -- bash

✅ Output

Defaulting container name to istio-proxy.
Create cnsenter pod (cnsenter-ta1pzrh62s)
Wait to run cnsenter pod (cnsenter-ta1pzrh62s)
If you don't see a command prompt, try pressing enter.
bash-5.1# 

6. Check the ztunnel pod's IP addresses

bash-5.1# ip -c addr

✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:7d:8e:5c:37:ce brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.0.6/24 brd 10.10.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::607d:8eff:fe5c:37ce/64 scope link 
       valid_lft forever preferred_lft forever
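The `eth0@if7` suffix in the output means the pod's `eth0` is one end of a veth pair whose peer has interface index 7 on the node. A small sketch extracting that peer index from a saved `ip addr` line (the sample line is copied from the output above):

```shell
# Parse the host-side peer ifindex out of "2: eth0@if7: ...".
line='2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000'
peer=$(echo "$line" | sed -n 's/^[0-9]*: eth0@if\([0-9]*\):.*/\1/p')
echo "$peer"
```

On the node, listing interfaces with `ip link` and looking for that index should show the matching host-side veth device.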

7. Check iptables policies (mangle, nat)

bash-5.1# iptables -t mangle -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
bash-5.1# iptables -t nat -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
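Both tables show only the default chain policies (`-P` lines) and no actual rules: in ambient mode the traffic-redirection rules are installed inside each workload pod's network namespace (by istio-cni), which is why the ztunnel pod's own tables are empty. A quick self-contained check that such a listing contains zero rule (`-A`) lines:

```shell
# Count -A (append rule) lines in an `iptables -S` style listing; 0 means the
# table holds nothing beyond default chain policies. Sample copied from above.
sample='-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT'
rules=$(printf '%s\n' "$sample" | grep -c '^-A' || true)
echo "$rules"
```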

8. Check ztunnel's open ports (ss)

Ports 15000, 15020, and 15021 are listening.

bash-5.1# ss -tnlp
bash-5.1# ss -tnp

✅ Output

State         Recv-Q        Send-Q               Local Address:Port                Peer Address:Port       Process                                
LISTEN        0             1024                     127.0.0.1:15000                    0.0.0.0:*           users:(("ztunnel",pid=1,fd=14))       
LISTEN        0             1024                             *:15020                          *:*           users:(("ztunnel",pid=1,fd=17))       
LISTEN        0             1024                             *:15021                          *:*           users:(("ztunnel",pid=1,fd=9))        
LISTEN        0             1024                         [::1]:15000                       [::]:*           users:(("ztunnel",pid=1,fd=15))       

State        Recv-Q        Send-Q               Local Address:Port                Peer Address:Port        Process                                
ESTAB        0             0                        127.0.0.1:15000                  127.0.0.1:47216        users:(("ztunnel",pid=1,fd=21))       
ESTAB        0             0                        127.0.0.1:47138                  127.0.0.1:15000                                              
ESTAB        0             0                        127.0.0.1:15000                  127.0.0.1:47218        users:(("ztunnel",pid=1,fd=22))       
ESTAB        0             0                        127.0.0.1:15000                  127.0.0.1:47138        users:(("ztunnel",pid=1,fd=20))       
ESTAB        0             0                        127.0.0.1:47218                  127.0.0.1:15000                                              
ESTAB        0             0                        10.10.0.6:53638               10.200.1.163:15012        users:(("ztunnel",pid=1,fd=18))       
ESTAB        0             0                        127.0.0.1:47216                  127.0.0.1:15000
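The listening ports can be pulled out of a saved `ss -tnlp` capture with awk (sample lines copied from the output above; on a live shell the same pipeline can be fed from `ss -tnlp` directly):

```shell
# Field 4 of each LISTEN line is Local Address:Port; split on ':' and keep
# the last piece to get the port, then de-duplicate numerically.
sample='LISTEN 0 1024 127.0.0.1:15000 0.0.0.0:* users:(("ztunnel",pid=1,fd=14))
LISTEN 0 1024 *:15020 *:* users:(("ztunnel",pid=1,fd=17))
LISTEN 0 1024 *:15021 *:* users:(("ztunnel",pid=1,fd=9))
LISTEN 0 1024 [::1]:15000 [::]:* users:(("ztunnel",pid=1,fd=15))'
ports=$(printf '%s\n' "$sample" | awk '/^LISTEN/ {n=split($4,a,":"); print a[n]}' | sort -un)
echo $ports
```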

Check the Unix domain socket connections for /var/run/ztunnel/ztunnel.sock

bash-5.1# ss -xnp

✅ Output

Netid         State         Recv-Q         Send-Q                                 Local Address:Port                   Peer Address:Port          Process                                                                                                                                           
u_seq         ESTAB         0              0                                                  * 150690                            * 140247         users:(("ztunnel",pid=1,fd=19))                                                                                                                  
u_seq         ESTAB         0              0                      /var/run/ztunnel/ztunnel.sock 140247                            * 150690                                                                                                                                                          
u_str         ESTAB         0              0                                                  * 149963                            * 149964         users:(("ztunnel",pid=1,fd=13),("ztunnel",pid=1,fd=8),("ztunnel",pid=1,fd=6))                                                                    
u_str         ESTAB         0              0                                                  * 149964                            * 149963         users:(("ztunnel",pid=1,fd=7))  

9. Check ztunnel's Prometheus metrics

bash-5.1# curl -s http://localhost:15020/metrics

✅ Output

# HELP istio_build Istio component build info.
# TYPE istio_build gauge
istio_build{component="ztunnel",tag="unknown"} 1
# HELP istio_xds_connection_terminations The total number of completed connections to xds server (unstable).
# TYPE istio_xds_connection_terminations counter
istio_xds_connection_terminations_total{reason="Reconnect"} 1
# HELP istio_xds_message Total number of messages received (unstable).
# TYPE istio_xds_message counter
istio_xds_message_total{url="type.googleapis.com/istio.workload.Address"} 19
istio_xds_message_total{url="type.googleapis.com/istio.security.Authorization"} 4
# HELP istio_xds_message_bytes Total number of bytes received (unstable).
# TYPE istio_xds_message_bytes counter
# UNIT istio_xds_message_bytes bytes
istio_xds_message_bytes_total{url="type.googleapis.com/istio.workload.Address"} 17060
istio_xds_message_bytes_total{url="type.googleapis.com/istio.security.Authorization"} 0
# HELP istio_tcp_connections_opened The total number of TCP connections opened.
# TYPE istio_tcp_connections_opened counter
# HELP istio_tcp_connections_closed The total number of TCP connections closed.
# TYPE istio_tcp_connections_closed counter
# HELP istio_tcp_received_bytes The size of total bytes received during request in case of a TCP connection.
# TYPE istio_tcp_received_bytes counter
# HELP istio_tcp_sent_bytes The size of total bytes sent during response in case of a TCP connection.
# TYPE istio_tcp_sent_bytes counter
# HELP istio_on_demand_dns The total number of requests that used on-demand DNS (unstable).
# TYPE istio_on_demand_dns counter
# HELP istio_dns_requests Total number of DNS requests (unstable).
# TYPE istio_dns_requests counter
# HELP istio_dns_upstream_requests Total number of DNS requests forwarded to upstream (unstable).
# TYPE istio_dns_upstream_requests counter
# HELP istio_dns_upstream_failures Total number of DNS requests that failed to forward upstream (unstable).
# TYPE istio_dns_upstream_failures counter
# HELP istio_dns_upstream_request_duration_seconds Total time in seconds Istio takes to get DNS response from upstream (unstable).
# TYPE istio_dns_upstream_request_duration_seconds histogram
# UNIT istio_dns_upstream_request_duration_seconds seconds
# HELP workload_manager_active_proxy_count The total number current workloads with active proxies (unstable).
# TYPE workload_manager_active_proxy_count gauge
workload_manager_active_proxy_count 0
# HELP workload_manager_pending_proxy_count The total number current workloads with pending proxies (unstable).
# TYPE workload_manager_pending_proxy_count gauge
workload_manager_pending_proxy_count 0
# HELP workload_manager_proxies_started The total number of proxies that were started (unstable).
# TYPE workload_manager_proxies_started counter
workload_manager_proxies_started_total 0
# HELP workload_manager_proxies_stopped The total number of proxies that were stopped (unstable).
# TYPE workload_manager_proxies_stopped counter
workload_manager_proxies_stopped_total 0
# EOF
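Individual counters can be extracted from a scrape with standard text tools. The pipeline below sums the xDS message counters; the sample lines are copied from the scrape above, and on a live pod the same awk could be fed from `curl -s http://localhost:15020/metrics`:

```shell
# Two istio_xds_message_total samples from the scrape above.
sample='istio_xds_message_total{url="type.googleapis.com/istio.workload.Address"} 19
istio_xds_message_total{url="type.googleapis.com/istio.security.Authorization"} 4'
# Sum the final (value) field of every istio_xds_message_total sample: 19 + 4.
total=$(printf '%s\n' "$sample" | awk '/^istio_xds_message_total/ {sum += $NF} END {print sum}')
echo "$total"
```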

10. Check ztunnel's configuration (config_dump)

bash-5.1# curl -s http://localhost:15000/config_dump

✅ Output

{
  "certificates": [],
  "config": {
    "adminAddr": {
      "Localhost": [
        true,
        15000
      ]
    },
    "altCaHostname": null,
    "altXdsHostname": null,
    "caAddress": "https://istiod.istio-system.svc:15012",
    "caHeaders": [],
    "caRootCert": {
      "File": "./var/run/secrets/istio/root-cert.pem"
    },
    "clusterDomain": "cluster.local",
    "clusterId": "Kubernetes",
    "connectionWindowSize": 16777216,
    "dnsProxy": true,
    "dnsProxyAddr": {
      "Localhost": [
        true,
        15053
      ]
    },
    "dnsResolverCfg": {
      "domain": null,
      "name_servers": [
        {
          "bind_addr": null,
          "http_endpoint": null,
          "protocol": "udp",
          "socket_addr": "10.200.1.10:53",
          "tls_dns_name": null,
          "trust_negative_responses": false
        },
        {
          "bind_addr": null,
          "http_endpoint": null,
          "protocol": "tcp",
          "socket_addr": "10.200.1.10:53",
          "tls_dns_name": null,
          "trust_negative_responses": false
        }
      ],
      "search": [
        "istio-system.svc.cluster.local",
        "svc.cluster.local",
        "cluster.local",
        "tail1af747.ts.net",
        "davolink"
      ]
    },
    "dnsResolverOpts": {
      "attempts": 2,
      "avoid_local_udp_ports": [],
      "cache_size": 4096,
      "case_randomization": false,
      "check_names": true,
      "edns0": false,
      "ip_strategy": "Ipv4AndIpv6",
      "ndots": 5,
      "negative_max_ttl": null,
      "negative_min_ttl": null,
      "num_concurrent_reqs": 2,
      "os_port_selection": false,
      "positive_max_ttl": null,
      "positive_min_ttl": null,
      "preserve_intermediates": true,
      "recursion_desired": true,
      "server_ordering_strategy": "QueryStatistics",
      "timeout": {
        "nanos": 0,
        "secs": 5
      },
      "trust_anchor": null,
      "try_tcp_on_error": false,
      "use_hosts_file": "Auto",
      "validate": false
    },
    "fakeCa": false,
    "fakeSelfInbound": false,
    "frameSize": 1048576,
    "illegalPorts": [
      15008,
      15006,
      15001
    ],
    "inboundAddr": "[::]:15008",
    "inboundPlaintextAddr": "[::]:15006",
    "inpodPortReuse": true,
    "inpodUds": "/var/run/ztunnel/ztunnel.sock",
    "localNode": "myk8s-control-plane",
    "localhostAppTunnel": true,
    "network": "",
    "numWorkerThreads": 2,
    "outboundAddr": "[::]:15001",
    "packetMark": 1337,
    "poolMaxStreamsPerConn": 100,
    "poolUnusedReleaseTimeout": {
      "nanos": 0,
      "secs": 300
    },
    "proxy": true,
    "proxyArgs": "proxy ztunnel",
    "proxyMetadata": {
      "CLUSTER_ID": "Kubernetes",
      "ENABLE_HBONE": "true",
      "ISTIO_VERSION": "1.26.0"
    },
    "proxyMode": "Shared",
    "proxyWorkloadInformation": null,
    "readinessAddr": {
      "SocketAddr": "[::]:15021"
    },
    "requireOriginalSource": null,
    "secretTtl": {
      "nanos": 0,
      "secs": 86400
    },
    "selfTerminationDeadline": {
      "nanos": 0,
      "secs": 25
    },
    "socketConfig": {
      "keepaliveEnabled": true,
      "keepaliveInterval": {
        "nanos": 0,
        "secs": 180
      },
      "keepaliveRetries": 9,
      "keepaliveTime": {
        "nanos": 0,
        "secs": 180
      },
      "userTimeoutEnabled": false
    },
    "socks5Addr": null,
    "statsAddr": {
      "SocketAddr": "[::]:15020"
    },
    "windowSize": 4194304,
    "xdsAddress": "https://istiod.istio-system.svc:15012",
    "xdsHeaders": [],
    "xdsOnDemand": false,
    "xdsRootCert": {
      "File": "./var/run/secrets/istio/root-cert.pem"
    }
  },
  "policies": [],
  "services": [
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/grafana-65bfb5f855-jfmdl": {
          "port": {
            "3000": 3000
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/grafana-65bfb5f855-jfmdl"
        }
      },
      "hostname": "grafana.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "grafana",
      "namespace": "istio-system",
      "ports": {
        "3000": 3000
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.136"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/istiod-86b6b7ff7-d7q7f": {
          "port": {
            "15010": 15010,
            "15012": 15012,
            "15014": 15014,
            "443": 15017
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/istiod-86b6b7ff7-d7q7f"
        }
      },
      "hostname": "istiod.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "istiod",
      "namespace": "istio-system",
      "ports": {
        "15010": 15010,
        "15012": 15012,
        "15014": 15014,
        "443": 15017
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.163"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87": {
          "port": {
            "14250": 14250,
            "14268": 14268,
            "4317": 4317,
            "4318": 4318,
            "9411": 9411
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87"
        }
      },
      "hostname": "jaeger-collector.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "jaeger-collector",
      "namespace": "istio-system",
      "ports": {
        "14250": 14250,
        "14268": 14268,
        "4317": 4317,
        "4318": 4318,
        "9411": 9411
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.144"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/kiali-6d774d8bb8-zkx5r": {
          "port": {
            "20001": 20001,
            "9090": 9090
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/kiali-6d774d8bb8-zkx5r"
        }
      },
      "hostname": "kiali.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "kiali",
      "namespace": "istio-system",
      "ports": {
        "20001": 20001,
        "9090": 9090
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.133"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/kube-system/coredns-668d6bf9bc-k6lf9": {
          "port": {
            "53": 53,
            "9153": 9153
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/kube-system/coredns-668d6bf9bc-k6lf9"
        },
        "Kubernetes//Pod/kube-system/coredns-668d6bf9bc-xbtkx": {
          "port": {
            "53": 53,
            "9153": 9153
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/kube-system/coredns-668d6bf9bc-xbtkx"
        }
      },
      "hostname": "kube-dns.kube-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "kube-dns",
      "namespace": "kube-system",
      "ports": {
        "53": 53,
        "9153": 9153
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.10"
      ]
    },
    {
      "endpoints": {
        "Kubernetes/discovery.k8s.io/EndpointSlice/default/kubernetes/172.18.0.3": {
          "port": {
            "443": 6443
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes/discovery.k8s.io/EndpointSlice/default/kubernetes/172.18.0.3"
        }
      },
      "hostname": "kubernetes.default.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "kubernetes",
      "namespace": "default",
      "ports": {
        "443": 6443
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.1"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/loki-0": {
          "port": {
            "3100": 3100
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/loki-0"
        }
      },
      "hostname": "loki-headless.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "loki-headless",
      "namespace": "istio-system",
      "ports": {
        "3100": 0
      },
      "subjectAltNames": [],
      "vips": []
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/loki-0": {
          "port": {
            "7946": 7946
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/loki-0"
        }
      },
      "hostname": "loki-memberlist.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "loki-memberlist",
      "namespace": "istio-system",
      "ports": {
        "7946": 0
      },
      "subjectAltNames": [],
      "vips": []
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/loki-0": {
          "port": {
            "3100": 3100,
            "9095": 9095
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/loki-0"
        }
      },
      "hostname": "loki.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "loki",
      "namespace": "istio-system",
      "ports": {
        "3100": 0,
        "9095": 0
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.200"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/metallb-system/controller-bb5f47665-29lwc": {
          "port": {
            "443": 9443
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/metallb-system/controller-bb5f47665-29lwc"
        }
      },
      "hostname": "metallb-webhook-service.metallb-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "metallb-webhook-service",
      "namespace": "metallb-system",
      "ports": {
        "443": 9443
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.89"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/prometheus-689cc795d4-vrlxd": {
          "port": {
            "9090": 9090
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/prometheus-689cc795d4-vrlxd"
        }
      },
      "hostname": "prometheus.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "prometheus",
      "namespace": "istio-system",
      "ports": {
        "9090": 9090
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.41"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87": {
          "port": {
            "16685": 16685,
            "80": 16686
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87"
        }
      },
      "hostname": "tracing.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "tracing",
      "namespace": "istio-system",
      "ports": {
        "16685": 16685,
        "80": 16686
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.229"
      ]
    },
    {
      "endpoints": {
        "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87": {
          "port": {
            "9411": 9411
          },
          "status": "Healthy",
          "workloadUid": "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87"
        }
      },
      "hostname": "zipkin.istio-system.svc.cluster.local",
      "ipFamilies": "IPv4",
      "name": "zipkin",
      "namespace": "istio-system",
      "ports": {
        "9411": 9411
      },
      "subjectAltNames": [],
      "vips": [
        "/10.200.1.117"
      ]
    }
  ],
  "stagedServices": {},
  "staticConfig": {
    "policies": [],
    "services": [],
    "workloads": []
  },
  "version": {
    "buildProfile": "release",
    "buildStatus": "Clean",
    "gitRevision": "2f601957bd172b34990612f4d8f847cadf4e880d",
    "gitTag": "1.26.0-beta.0-1-g2f60195",
    "istioVersion": "unknown",
    "rustVersion": "1.85.1",
    "version": "2f601957bd172b34990612f4d8f847cadf4e880d"
  },
  "workloadState": {},
  "workloads": [
    {
      "canonicalName": "cnsenter-ta1pzrh62s",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "cnsenter-ta1pzrh62s",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "default",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/cnsenter-ta1pzrh62s",
      "workloadIps": [
        "10.10.0.7"
      ],
      "workloadName": "cnsenter-ta1pzrh62s",
      "workloadType": "pod"
    },
    {
      "canonicalName": "grafana",
      "canonicalRevision": "11.3.1",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "grafana-65bfb5f855-jfmdl",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "grafana",
      "services": [
        "istio-system/grafana.istio-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/grafana-65bfb5f855-jfmdl",
      "workloadIps": [
        "10.10.2.4"
      ],
      "workloadName": "grafana",
      "workloadType": "pod"
    },
    {
      "canonicalName": "istio-cni",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "istio-cni-node-7kst6",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "istio-cni",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/istio-cni-node-7kst6",
      "workloadIps": [
        "10.10.1.4"
      ],
      "workloadName": "istio-cni-node",
      "workloadType": "pod"
    },
    {
      "canonicalName": "istio-cni",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "istio-cni-node-dpqsv",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "istio-cni",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/istio-cni-node-dpqsv",
      "workloadIps": [
        "10.10.2.2"
      ],
      "workloadName": "istio-cni-node",
      "workloadType": "pod"
    },
    {
      "canonicalName": "istio-cni",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "istio-cni-node-rfx6w",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "istio-cni",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/istio-cni-node-rfx6w",
      "workloadIps": [
        "10.10.0.5"
      ],
      "workloadName": "istio-cni-node",
      "workloadType": "pod"
    },
    {
      "canonicalName": "istiod",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "istiod-86b6b7ff7-d7q7f",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "istiod",
      "services": [
        "istio-system/istiod.istio-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/istiod-86b6b7ff7-d7q7f",
      "workloadIps": [
        "10.10.1.3"
      ],
      "workloadName": "istiod",
      "workloadType": "pod"
    },
    {
      "canonicalName": "jaeger",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "jaeger-868fbc75d7-4lq87",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "default",
      "services": [
        "istio-system/jaeger-collector.istio-system.svc.cluster.local",
        "istio-system/tracing.istio-system.svc.cluster.local",
        "istio-system/zipkin.istio-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/jaeger-868fbc75d7-4lq87",
      "workloadIps": [
        "10.10.2.5"
      ],
      "workloadName": "jaeger",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kiali",
      "canonicalRevision": "v2.8.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kiali-6d774d8bb8-zkx5r",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "kiali",
      "services": [
        "istio-system/kiali.istio-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/kiali-6d774d8bb8-zkx5r",
      "workloadIps": [
        "10.10.2.6"
      ],
      "workloadName": "kiali",
      "workloadType": "pod"
    },
    {
      "canonicalName": "loki",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "loki-0",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "loki",
      "services": [
        "istio-system/loki-memberlist.istio-system.svc.cluster.local",
        "istio-system/loki.istio-system.svc.cluster.local",
        "istio-system/loki-headless.istio-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/loki-0",
      "workloadIps": [
        "10.10.2.9"
      ],
      "workloadName": "loki",
      "workloadType": "pod"
    },
    {
      "canonicalName": "prometheus",
      "canonicalRevision": "v3.2.1",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "prometheus-689cc795d4-vrlxd",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "prometheus",
      "services": [
        "istio-system/prometheus.istio-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/prometheus-689cc795d4-vrlxd",
      "workloadIps": [
        "10.10.2.7"
      ],
      "workloadName": "prometheus",
      "workloadType": "pod"
    },
    {
      "canonicalName": "ztunnel",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "ztunnel-4bls2",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "ztunnel",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/ztunnel-4bls2",
      "workloadIps": [
        "10.10.0.6"
      ],
      "workloadName": "ztunnel",
      "workloadType": "pod"
    },
    {
      "canonicalName": "ztunnel",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "ztunnel-kczj2",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "ztunnel",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/ztunnel-kczj2",
      "workloadIps": [
        "10.10.2.3"
      ],
      "workloadName": "ztunnel",
      "workloadType": "pod"
    },
    {
      "canonicalName": "ztunnel",
      "canonicalRevision": "1.26.0",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "ztunnel-wr6pp",
      "namespace": "istio-system",
      "networkMode": "Standard",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "ztunnel",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/istio-system/ztunnel-wr6pp",
      "workloadIps": [
        "10.10.1.5"
      ],
      "workloadName": "ztunnel",
      "workloadType": "pod"
    },
    {
      "canonicalName": "coredns",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "coredns-668d6bf9bc-k6lf9",
      "namespace": "kube-system",
      "networkMode": "Standard",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "coredns",
      "services": [
        "kube-system/kube-dns.kube-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/coredns-668d6bf9bc-k6lf9",
      "workloadIps": [
        "10.10.0.2"
      ],
      "workloadName": "coredns",
      "workloadType": "pod"
    },
    {
      "canonicalName": "coredns",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "coredns-668d6bf9bc-xbtkx",
      "namespace": "kube-system",
      "networkMode": "Standard",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "coredns",
      "services": [
        "kube-system/kube-dns.kube-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/coredns-668d6bf9bc-xbtkx",
      "workloadIps": [
        "10.10.0.3"
      ],
      "workloadName": "coredns",
      "workloadType": "pod"
    },
    {
      "canonicalName": "etcd-myk8s-control-plane",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "etcd-myk8s-control-plane",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "default",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/etcd-myk8s-control-plane",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "etcd-myk8s-control-plane",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kindnet",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kindnet-g9qmc",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "kindnet",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kindnet-g9qmc",
      "workloadIps": [
        "172.18.0.2"
      ],
      "workloadName": "kindnet",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kindnet",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kindnet-lc2q2",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "kindnet",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kindnet-lc2q2",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "kindnet",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kindnet",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kindnet-njcw4",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "kindnet",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kindnet-njcw4",
      "workloadIps": [
        "172.18.0.4"
      ],
      "workloadName": "kindnet",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kube-apiserver-myk8s-control-plane",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kube-apiserver-myk8s-control-plane",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "default",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kube-apiserver-myk8s-control-plane",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "kube-apiserver-myk8s-control-plane",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kube-controller-manager-myk8s-control-plane",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kube-controller-manager-myk8s-control-plane",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "default",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kube-controller-manager-myk8s-control-plane",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "kube-controller-manager-myk8s-control-plane",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kube-proxy",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kube-proxy-h2qb5",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "kube-proxy",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kube-proxy-h2qb5",
      "workloadIps": [
        "172.18.0.2"
      ],
      "workloadName": "kube-proxy",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kube-proxy",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kube-proxy-jmfg8",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "kube-proxy",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kube-proxy-jmfg8",
      "workloadIps": [
        "172.18.0.4"
      ],
      "workloadName": "kube-proxy",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kube-proxy",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kube-proxy-nswxj",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "kube-proxy",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kube-proxy-nswxj",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "kube-proxy",
      "workloadType": "pod"
    },
    {
      "canonicalName": "kube-scheduler-myk8s-control-plane",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kube-scheduler-myk8s-control-plane",
      "namespace": "kube-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "default",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/kube-system/kube-scheduler-myk8s-control-plane",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "kube-scheduler-myk8s-control-plane",
      "workloadType": "pod"
    },
    {
      "canonicalName": "local-path-provisioner",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "local-path-provisioner-7dc846544d-vzdcv",
      "namespace": "local-path-storage",
      "networkMode": "Standard",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "local-path-provisioner-service-account",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/local-path-storage/local-path-provisioner-7dc846544d-vzdcv",
      "workloadIps": [
        "10.10.0.4"
      ],
      "workloadName": "local-path-provisioner",
      "workloadType": "pod"
    },
    {
      "canonicalName": "metallb",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "controller-bb5f47665-29lwc",
      "namespace": "metallb-system",
      "networkMode": "Standard",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "controller",
      "services": [
        "metallb-system/metallb-webhook-service.metallb-system.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/metallb-system/controller-bb5f47665-29lwc",
      "workloadIps": [
        "10.10.1.2"
      ],
      "workloadName": "controller",
      "workloadType": "pod"
    },
    {
      "canonicalName": "metallb",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "speaker-f7qvl",
      "namespace": "metallb-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-worker2",
      "protocol": "TCP",
      "serviceAccount": "speaker",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/metallb-system/speaker-f7qvl",
      "workloadIps": [
        "172.18.0.2"
      ],
      "workloadName": "speaker",
      "workloadType": "pod"
    },
    {
      "canonicalName": "metallb",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "speaker-hcfq8",
      "namespace": "metallb-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-worker",
      "protocol": "TCP",
      "serviceAccount": "speaker",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/metallb-system/speaker-hcfq8",
      "workloadIps": [
        "172.18.0.4"
      ],
      "workloadName": "speaker",
      "workloadType": "pod"
    },
    {
      "canonicalName": "metallb",
      "canonicalRevision": "latest",
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "speaker-lr429",
      "namespace": "metallb-system",
      "networkMode": "HostNetwork",
      "node": "myk8s-control-plane",
      "protocol": "TCP",
      "serviceAccount": "speaker",
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes//Pod/metallb-system/speaker-lr429",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadName": "speaker",
      "workloadType": "pod"
    },
    {
      "capacity": 1,
      "clusterId": "Kubernetes",
      "name": "kubernetes",
      "namespace": "default",
      "networkMode": "HostNetwork",
      "protocol": "TCP",
      "serviceAccount": "default",
      "services": [
        "default/kubernetes.default.svc.cluster.local"
      ],
      "status": "Healthy",
      "trustDomain": "cluster.local",
      "uid": "Kubernetes/discovery.k8s.io/EndpointSlice/default/kubernetes/172.18.0.3",
      "workloadIps": [
        "172.18.0.3"
      ],
      "workloadType": "deployment"
    }
  ]
}

11. Exit and clean up the cnsenter pod

1
2
3
bash-5.1# exit
exit
Delete cnsenter pod (cnsenter-ta1pzrh62s)

12. Verify the ztunnel Unix socket exists on each node

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'ls -l /var/run/ztunnel'; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
node : myk8s-control-plane
total 0
srwxr-xr-x 1 root root 0 Jun  7 07:25 ztunnel.sock

node : myk8s-worker
total 0
srwxr-xr-x 1 root root 0 Jun  7 07:25 ztunnel.sock

node : myk8s-worker2
total 0
srwxr-xr-x 1 root root 0 Jun  7 07:25 ztunnel.sock

13. Check the ztunnel DaemonSet pod logs

1
kubectl logs -n istio-system -l app=ztunnel -f

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
2025-06-07T07:28:22.650671Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:28:28.655956Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:28:33.109876Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:29:19.663998Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:54:19.596751Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:57:54.510284Z	info	xds::client:xds{id=2}	Stream established	
2025-06-07T07:57:54.510344Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=44 removes=0
2025-06-07T07:57:54.510624Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.security.Authorization" size=0 removes=0
2025-06-07T08:00:38.356854Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T08:00:38.482796Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=0 removes=1
2025-06-07T07:28:22.650622Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:28:28.655964Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:28:33.110072Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:29:19.664038Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:54:19.597443Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:58:03.470961Z	info	xds::client:xds{id=2}	Stream established	
2025-06-07T07:58:03.471034Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=44 removes=0
2025-06-07T07:58:03.471190Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.security.Authorization" size=0 removes=0
2025-06-07T08:00:38.356857Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T08:00:38.482789Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=0 removes=1
2025-06-07T07:28:22.650616Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:28:28.655952Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:28:33.109961Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:29:19.664020Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:54:19.596945Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:54:53.249075Z	info	xds::client:xds{id=2}	Stream established	
2025-06-07T07:54:53.249237Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=44 removes=0
2025-06-07T07:54:53.249584Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.security.Authorization" size=0 removes=0
2025-06-07T08:00:38.356862Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T08:00:38.482818Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=0 removes=1
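
The log lines above have a fixed shape, so stream activity can be summarized with a quick awk pass. This is a sketch over a few captured rows (not a live cluster); note how stream `id=1` is the original xDS connection and `id=2` the one re-established after istiod restarted:

```shell
# Tally "received response" log lines per xDS stream id.
# The heredoc rows are copied from the ztunnel log output above.
awk '/received response/ && match($0, /id=[0-9]+/) {
  count[substr($0, RSTART, RLENGTH)]++
}
END { for (id in count) print id, count[id] }' <<'EOF'
2025-06-07T07:28:22.650671Z	info	xds::client:xds{id=1}	received response	type_url="type.googleapis.com/istio.workload.Address" size=1 removes=0
2025-06-07T07:57:54.510344Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.workload.Address" size=44 removes=0
2025-06-07T07:57:54.510624Z	info	xds::client:xds{id=2}	received response	type_url="type.googleapis.com/istio.security.Authorization" size=0 removes=0
EOF
```

It prints one count per stream id (order unspecified), e.g. `id=1 1` and `id=2 2` for the sample rows.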

📁 Deploying and Verifying the Bookinfo Application

1. Verify the Istio directory exists

1
docker exec -it myk8s-control-plane ls -l istio-1.26.0

✅ Output

1
2
3
4
5
6
7
8
total 40
-rw-r--r--  1 root root 11357 May  7 11:05 LICENSE
-rw-r--r--  1 root root  6927 May  7 11:05 README.md
drwxr-x---  2 root root  4096 May  7 11:05 bin
-rw-r-----  1 root root   983 May  7 11:05 manifest.yaml
drwxr-xr-x  4 root root  4096 May  7 11:05 manifests
drwxr-xr-x 27 root root  4096 May  7 11:05 samples
drwxr-xr-x  3 root root  4096 May  7 11:05 tools

2. Deploy the Bookinfo sample application

1
docker exec -it myk8s-control-plane kubectl apply -f istio-1.26.0/samples/bookinfo/platform/kube/bookinfo.yaml

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

3. Check the status of the deployed resources

1
kubectl get deploy,pod,svc,ep

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/details-v1       1/1     1            1           41s
deployment.apps/productpage-v1   0/1     1            0           41s
deployment.apps/ratings-v1       1/1     1            1           41s
deployment.apps/reviews-v1       1/1     1            1           41s
deployment.apps/reviews-v2       1/1     1            1           41s
deployment.apps/reviews-v3       0/1     1            0           41s

NAME                                  READY   STATUS              RESTARTS   AGE
pod/details-v1-766844796b-brc95       1/1     Running             0          41s
pod/productpage-v1-54bb874995-gkz6q   0/1     ContainerCreating   0          41s
pod/ratings-v1-5dc79b6bcd-8f7vz       1/1     Running             0          41s
pod/reviews-v1-598b896c9d-9dnx5       1/1     Running             0          41s
pod/reviews-v2-556d6457d-jdksf        1/1     Running             0          41s
pod/reviews-v3-564544b4d6-p2lj2       0/1     ContainerCreating   0          41s

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/details       ClusterIP   10.200.1.34    <none>        9080/TCP   41s
service/kubernetes    ClusterIP   10.200.1.1     <none>        443/TCP    70m
service/productpage   ClusterIP   10.200.1.135   <none>        9080/TCP   41s
service/ratings       ClusterIP   10.200.1.184   <none>        9080/TCP   41s
service/reviews       ClusterIP   10.200.1.179   <none>        9080/TCP   41s

NAME                    ENDPOINTS                         AGE
endpoints/details       10.10.2.10:9080                   41s
endpoints/kubernetes    172.18.0.3:6443                   70m
endpoints/productpage   <none>                            41s
endpoints/ratings       10.10.2.11:9080                   41s
endpoints/reviews       10.10.2.12:9080,10.10.2.13:9080   41s
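
Note that `productpage-v1` and `reviews-v3` are still `ContainerCreating` above. Before running request tests, it can help to block until everything is up; a minimal sketch (assuming the default namespace and the deployment names from `bookinfo.yaml`):

```shell
# Wait until each Bookinfo deployment reports the Available condition.
for d in details-v1 ratings-v1 reviews-v1 reviews-v2 reviews-v3 productpage-v1; do
  kubectl wait --for=condition=available --timeout=120s "deployment/$d"
done
```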

4. Check the ztunnel service configuration

1
docker exec -it myk8s-control-plane istioctl ztunnel-config service

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
NAMESPACE      SERVICE NAME            SERVICE VIP  WAYPOINT ENDPOINTS
default        details                 10.200.1.34  None     1/1
default        kubernetes              10.200.1.1   None     1/1
default        productpage             10.200.1.135 None     1/1
default        ratings                 10.200.1.184 None     1/1
default        reviews                 10.200.1.179 None     3/3
istio-system   grafana                 10.200.1.136 None     1/1
istio-system   istiod                  10.200.1.163 None     1/1
istio-system   jaeger-collector        10.200.1.144 None     1/1
istio-system   kiali                   10.200.1.133 None     1/1
istio-system   loki                    10.200.1.200 None     1/1
istio-system   loki-headless                        None     1/1
istio-system   loki-memberlist                      None     1/1
istio-system   prometheus              10.200.1.41  None     1/1
istio-system   tracing                 10.200.1.229 None     1/1
istio-system   zipkin                  10.200.1.117 None     1/1
kube-system    kube-dns                10.200.1.10  None     2/2
metallb-system metallb-webhook-service 10.200.1.89  None     1/1
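
`loki-headless` and `loki-memberlist` show an empty SERVICE VIP column because they are headless Services. Since the table is whitespace-delimited, rows missing the VIP have one fewer field and can be picked out with awk (a sketch over a few captured rows):

```shell
# Normal rows have 5 fields (namespace, name, vip, waypoint, endpoints);
# headless services drop the VIP and leave only 4.
awk 'NR > 1 && NF == 4 { print $1 "/" $2 }' <<'EOF'
NAMESPACE      SERVICE NAME            SERVICE VIP  WAYPOINT ENDPOINTS
istio-system   loki                    10.200.1.200 None     1/1
istio-system   loki-headless                        None     1/1
istio-system   loki-memberlist                      None     1/1
EOF
```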

5. Check the ztunnel workload configuration

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     TCP
default            kubernetes                                  172.18.0.3                     None     TCP
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     TCP
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     TCP
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     TCP
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     TCP
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     TCP
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP
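
A per-node tally of the table above can again be pulled out with awk (a sketch over a subset of the captured rows; `NF == 6` keeps only rows that actually carry a NODE column, skipping the service-derived `kubernetes` entry):

```shell
# Count ztunnel-visible workloads per node; $4 is the NODE column.
awk 'NF == 6 { count[$4]++ } END { for (n in count) print n, count[n] }' <<'EOF'
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
EOF
```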

6. Check ztunnel proxy status (ztunnel is not Envoy, so the Envoy xDS columns show IGNORED)

1
docker exec -it myk8s-control-plane istioctl proxy-status

✅ Output

1
2
3
4
NAME                           CLUSTER        CDS         LDS         EDS         RDS         ECDS        ISTIOD                     VERSION
ztunnel-4bls2.istio-system     Kubernetes     IGNORED     IGNORED     IGNORED     IGNORED     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-kczj2.istio-system     Kubernetes     IGNORED     IGNORED     IGNORED     IGNORED     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-wr6pp.istio-system     Kubernetes     IGNORED     IGNORED     IGNORED     IGNORED     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0

7. Create a ServiceAccount for request testing

1
2
3
4
kubectl create sa netshoot

# Result
serviceaccount/netshoot created

8. Create a netshoot pod for request testing

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  serviceAccountName: netshoot
  nodeName: myk8s-control-plane
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# Result
pod/netshoot created

9. Test a request to the Bookinfo web page

1
kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title

✅ Output

1
<title>Simple Bookstore App</title>

10. Continuous Bookinfo request test (repeated checks)

1
while true; do kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
<title>Simple Bookstore App</title>
2025-06-07 17:09:13
<title>Simple Bookstore App</title>
2025-06-07 17:09:14
<title>Simple Bookstore App</title>
2025-06-07 17:09:16
<title>Simple Bookstore App</title>
2025-06-07 17:09:18
<title>Simple Bookstore App</title>
2025-06-07 17:09:19
...

🌐 Exposing the Bookinfo Application Externally (Gateway API)

1. Review the Gateway and HTTPRoute manifest

1
docker exec -it myk8s-control-plane cat istio-1.26.0/samples/bookinfo/gateway-api/bookinfo-gateway.yaml

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
spec:
  parentRefs:
  - name: bookinfo-gateway
  rules:
  - matches:
    - path:
        type: Exact
        value: /productpage
    - path:
        type: PathPrefix
        value: /static
    - path:
        type: Exact
        value: /login
    - path:
        type: Exact
        value: /logout
    - path:
        type: PathPrefix
        value: /api/v1/products
    backendRefs:
    - name: productpage
      port: 9080
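
The route forwards only the listed paths to `productpage`; anything else gets a 404 from the gateway. The difference between `Exact` and `PathPrefix` can be sketched in plain shell (a hypothetical `route_match` helper, not how the gateway is implemented — note that Gateway API `PathPrefix` matches whole path segments, so `/staticfoo` does not match the `/static` prefix):

```shell
# Mirror the HTTPRoute rules above: Exact matches for /productpage,
# /login, /logout; segment-wise prefix matches for /static and
# /api/v1/products. Prints the backendRef on a hit.
route_match() {
  case "$1" in
    /productpage|/login|/logout) echo "productpage:9080" ;;
    /static|/static/*|/api/v1/products|/api/v1/products/*) echo "productpage:9080" ;;
    *) echo "no-route" ;;
  esac
}
route_match /productpage        # productpage:9080
route_match /static/jquery.js   # productpage:9080
route_match /admin              # no-route
```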

2. Apply the Gateway resources

1
2
3
4
5
docker exec -it myk8s-control-plane kubectl apply -f istio-1.26.0/samples/bookinfo/gateway-api/bookinfo-gateway.yaml

# Result
gateway.gateway.networking.k8s.io/bookinfo-gateway created
httproute.gateway.networking.k8s.io/bookinfo created

3. Verify the created Gateway resource

1
kubectl get gateway

✅ Output

1
2
NAME               CLASS   ADDRESS          PROGRAMMED   AGE
bookinfo-gateway   istio   172.18.255.201   True         20s

4. Verify the created HTTPRoute

1
kubectl get HTTPRoute

✅ Output

1
2
NAME       HOSTNAMES   AGE
bookinfo               45s

5. Check the Gateway service and endpoints

1
kubectl get svc,ep bookinfo-gateway-istio

✅ Output

1
2
3
4
5
NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                        AGE
service/bookinfo-gateway-istio   LoadBalancer   10.200.1.248   172.18.255.201   15021:30821/TCP,80:31868/TCP   80s

NAME                               ENDPOINTS                      AGE
endpoints/bookinfo-gateway-istio   10.10.1.6:15021,10.10.1.6:80   80s

6. Inspect the Gateway Pod details

1
kubectl get pod -l gateway.istio.io/managed=istio.io-gateway-controller -owide

✅ Output

1
2
NAME                                      READY   STATUS    RESTARTS   AGE    IP          NODE           NOMINATED NODE   READINESS GATES
bookinfo-gateway-istio-6cbd9bcd49-dlqgn   1/1     Running   0          107s   10.10.1.6   myk8s-worker   <none>           <none>

7. Check Docker container status

1
docker ps

✅ Output

1
2
3
4
5
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                                                             NAMES
2046c5ac8dd0   nicolaka/netshoot      "sleep infinity"         About an hour ago   Up About an hour                                                                     mypc
22747018eabb   kindest/node:v1.32.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   0.0.0.0:30000-30005->30000-30005/tcp, 127.0.0.1:44959->6443/tcp   myk8s-control-plane
42a8f93275a7   kindest/node:v1.32.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     myk8s-worker
0476037a89b7   kindest/node:v1.32.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     myk8s-worker2

8. Extract the Gateway EXTERNAL-IP and save it as an environment variable

1
2
kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

✅ Output

1
172.18.255.201

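If jsonpath is unavailable, or you want to sanity-check the value, the same field can be pulled out of plain `-o json` output with standard text tools. A sketch over a trimmed sample of the Service JSON (only the `status.loadBalancer` stanza is kept, and the sample itself is illustrative):

```shell
#!/usr/bin/env bash
# Trimmed sample of `kubectl get svc bookinfo-gateway-istio -o json`.
svc_json='{
  "status": {
    "loadBalancer": {
      "ingress": [ { "ip": "172.18.255.201" } ]
    }
  }
}'

# Extract the first ingress IP without jq or jsonpath.
GWLB=$(printf '%s\n' "$svc_json" | grep -o '"ip": *"[^"]*"' | head -1 | cut -d'"' -f4)
echo "$GWLB"   # 172.18.255.201
```

With `jq` installed, `jq -r '.status.loadBalancer.ingress[0].ip'` is the more robust equivalent.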
9. Send repeated requests to Bookinfo from outside

1
2
GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do docker exec -it mypc curl $GWLB/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
<title>Simple Bookstore App</title>
2025-06-07 17:16:35
<title>Simple Bookstore App</title>
2025-06-07 17:16:36
<title>Simple Bookstore App</title>
2025-06-07 17:16:37
<title>Simple Bookstore App</title>
2025-06-07 17:16:38
<title>Simple Bookstore App</title>
2025-06-07 17:16:39
...

10. Pin the port so it is reachable from the local PC

Pin the NodePort of the LoadBalancer Service

1
2
3
4
kubectl patch svc bookinfo-gateway-istio -p '{"spec": {"type": "LoadBalancer", "ports": [{"port": 80, "targetPort": 80, "nodePort": 30000}]}}' 

# Result
service/bookinfo-gateway-istio patched

11. Re-check the Service status

1
kubectl get svc bookinfo-gateway-istio

✅ Output

1
2
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                        AGE
bookinfo-gateway-istio   LoadBalancer   10.200.1.248   172.18.255.201   15021:30821/TCP,80:30000/TCP   5m49s

12. Access it from a web browser

http://127.0.0.1:30000/productpage


☁️ Adding the Application to the Ambient Mesh

1. Apply the Ambient Mesh setting to the default namespace

1
2
3
4
kubectl label namespace default istio.io/dataplane-mode=ambient

# Result
namespace/default labeled

2. Confirm Ambient enrollment (query proxy status)

1
docker exec -it myk8s-control-plane istioctl proxy-status

✅ Output

1
2
3
4
5
NAME                                                CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
bookinfo-gateway-istio-6cbd9bcd49-dlqgn.default     Kubernetes     SYNCED (13m)     SYNCED (13m)     SYNCED (32s)     SYNCED (13m)     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-4bls2.istio-system                          Kubernetes     IGNORED          IGNORED          IGNORED          IGNORED          IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-kczj2.istio-system                          Kubernetes     IGNORED          IGNORED          IGNORED          IGNORED          IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-wr6pp.istio-system                          Kubernetes     IGNORED          IGNORED          IGNORED          IGNORED          IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0

3. List the Pods (no sidecars present)

1
kubectl get pod

✅ Output

1
2
3
4
5
6
7
8
9
10
NAME                                      READY   STATUS    RESTARTS   AGE
bookinfo-gateway-istio-6cbd9bcd49-dlqgn   1/1     Running   0          14m
details-v1-766844796b-brc95               1/1     Running   0          21m
netshoot                                  1/1     Running   0          17m
productpage-v1-54bb874995-gkz6q           1/1     Running   0          21m
ratings-v1-5dc79b6bcd-8f7vz               1/1     Running   0          21m
reviews-v1-598b896c9d-9dnx5               1/1     Running   0          21m
reviews-v2-556d6457d-jdksf                1/1     Running   0          21m
reviews-v3-564544b4d6-p2lj2               1/1     Running   0          21m

4. Inspect Ambient workload configuration

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     HBONE
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP

5. Inspect the configuration of a specific workload (details-v1)

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --address 10.10.2.10

✅ Output

1
2
NAMESPACE POD NAME                    ADDRESS    NODE          WAYPOINT PROTOCOL
default   details-v1-766844796b-brc95 10.10.2.10 myk8s-worker2 None     HBONE

6. Print the workload configuration as JSON

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --address 10.10.2.10 -o json

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
[
    {
        "uid": "Kubernetes//Pod/default/details-v1-766844796b-brc95",
        "workloadIps": [
            "10.10.2.10"
        ],
        "protocol": "HBONE",
        "name": "details-v1-766844796b-brc95",
        "namespace": "default",
        "serviceAccount": "bookinfo-details",
        "workloadName": "details-v1",
        "workloadType": "pod",
        "canonicalName": "details",
        "canonicalRevision": "v1",
        "clusterId": "Kubernetes",
        "trustDomain": "cluster.local",
        "locality": {},
        "node": "myk8s-worker2",
        "status": "Healthy",
        "hostname": "",
        "capacity": 1,
        "applicationTunnel": {
            "protocol": ""
        }
    }
]
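The JSON form is convenient for scripting, e.g. listing only HBONE-enrolled workloads. A sketch over an illustrative two-entry sample of the `ztunnel-config workload -o json` output (fields trimmed down to what the filter needs):

```shell
#!/usr/bin/env bash
# Illustrative two-entry sample of `istioctl ztunnel-config workload -o json`.
workloads='[
  {"name": "details-v1-766844796b-brc95", "namespace": "default", "protocol": "HBONE"},
  {"name": "grafana-65bfb5f855-jfmdl", "namespace": "istio-system", "protocol": "TCP"}
]'

# Keep only HBONE entries, then print the workload names.
printf '%s\n' "$workloads" \
  | grep -o '{[^}]*}' \
  | grep '"protocol": "HBONE"' \
  | grep -o '"name": "[^"]*"' \
  | cut -d'"' -f4
```

With `jq`, the more robust equivalent is `jq -r '.[] | select(.protocol == "HBONE") | .name'`.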

7. Enter the productpage container and check its network state

1
2
PPOD=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
kubectl pexec $PPOD -it -T -- bash

✅ Output

1
2
3
4
5
Defaulting container name to productpage.
Create cnsenter pod (cnsenter-7hjox5wwzh)
Wait to run cnsenter pod (cnsenter-7hjox5wwzh)
If you don't see a command prompt, try pressing enter.
bash-5.1# 
1
bash-5.1# iptables -t mangle -S

✅ Output

1
2
3
4
5
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
1
bash-5.1# iptables -t nat -S

✅ Output

1
2
3
4
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
1
bash-5.1# ss -tnlp

✅ Output

1
2
3
4
5
6
7
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port  Process                                                                 
LISTEN   0        128            127.0.0.1:15053          0.0.0.0:*                                                                             
LISTEN   0        2048                   *:9080                 *:*      users:(("gunicorn",pid=21,fd=5),("gunicorn",pid=20,fd=5),("gunicorn",pid=19,fd=5),("gunicorn",pid=18,fd=5),("gunicorn",pid=17,fd=5),("gunicorn",pid=16,fd=5),("gunicorn",pid=15,fd=5),("gunicorn",pid=14,fd=5),("gunicorn",pid=1,fd=5))
LISTEN   0        128                [::1]:15053             [::]:*                                                                             
LISTEN   0        128                    *:15008                *:*                                                                             
LISTEN   0        128                    *:15001                *:*                                                                             
LISTEN   0        128                    *:15006                *:* 
1
bash-5.1# ls -l  /var/run/ztunnel

✅ Output

1
2
total 0
srwxr-xr-x    1 root     root             0 Jun  7 07:25 ztunnel.sock
1
2
3
bash-5.1# exit
exit
Delete cnsenter pod (cnsenter-7hjox5wwzh)

8. Check ipset state on each node

1
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node ipset list; echo; done

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
node : myk8s-control-plane
Name: istio-inpod-probes-v4
Type: hash:ip
Revision: 2
Header: family inet hashsize 1024 maxelem 65536 comment
Size in memory: 333
References: 1
Number of entries: 1
Members:
10.10.0.8 comment "32e9b533-ac63-4db0-b2eb-f64ada35c975"

Name: istio-inpod-probes-v6
Type: hash:ip
Revision: 2
Header: family inet6 hashsize 1024 maxelem 65536 comment
Size in memory: 224
References: 1
Number of entries: 0
Members:

node : myk8s-worker
Name: istio-inpod-probes-v4
Type: hash:ip
Revision: 2
Header: family inet hashsize 1024 maxelem 65536 comment
Size in memory: 216
References: 1
Number of entries: 0
Members:

Name: istio-inpod-probes-v6
Type: hash:ip
Revision: 2
Header: family inet6 hashsize 1024 maxelem 65536 comment
Size in memory: 224
References: 1
Number of entries: 0
Members:

node : myk8s-worker2
Name: istio-inpod-probes-v4
Type: hash:ip
Revision: 2
Header: family inet hashsize 1024 maxelem 65536 comment
Size in memory: 918
References: 1
Number of entries: 6
Members:
10.10.2.14 comment "4b7c7be8-df5f-498c-a662-f14e3f849094"
10.10.2.10 comment "cf44f4b3-6010-4a70-bcbd-08fc05707f18"
10.10.2.13 comment "a17efd41-b81f-4ac1-8df1-a2cee2344235"
10.10.2.15 comment "f0a4f4d6-b763-4f4b-8e92-8a7612bc84e3"
10.10.2.12 comment "742f2f22-8277-4338-b6ce-3a143c657ab3"
10.10.2.11 comment "cadd4937-f644-4c9a-89ca-2ed9db3094df"

Name: istio-inpod-probes-v6
Type: hash:ip
Revision: 2
Header: family inet6 hashsize 1024 maxelem 65536 comment
Size in memory: 224
References: 1
Number of entries: 0
Members:

9. Watch traffic flow live in the ztunnel Pod logs

1
kubectl -n istio-system logs -l app=ztunnel -f | egrep "inbound|outbound"

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
2025-06-07T08:42:48.739221Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="7ms"
2025-06-07T08:42:48.739320Z	info	access	connection complete	src.addr=10.10.2.15:35590 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="7ms"
2025-06-07T08:42:49.815951Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:49.816041Z	info	access	connection complete	src.addr=10.10.2.15:59690 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:49.823126Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="5ms"
2025-06-07T08:42:49.823215Z	info	access	connection complete	src.addr=10.10.2.15:35594 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="6ms"
2025-06-07T08:42:50.903808Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:24:59.141447Z	info	proxy::inbound	listener established	address=[::]:15008 component="inbound" transparent=true
2025-06-07T08:24:59.141463Z	info	proxy::inbound_passthrough	listener established	address=[::]:15006 component="inbound plaintext" transparent=true
2025-06-07T08:24:59.141473Z	info	proxy::outbound	listener established	address=[::]:15001 component="outbound" transparent=true
2025-06-07T08:42:50.903914Z	info	access	connection complete	src.addr=10.10.2.15:59698 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:50.909135Z	info	access	connection complete	src.addr=10.10.2.15:38792 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=519 bytes_recv=192 duration="3ms"
2025-06-07T08:42:50.909236Z	info	access	connection complete	src.addr=10.10.2.15:35602 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=519 duration="4ms"
2025-06-07T08:42:51.979954Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:51.980042Z	info	access	connection complete	src.addr=10.10.2.15:59712 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:51.987492Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="5ms"
2025-06-07T08:42:51.987617Z	info	access	connection complete	src.addr=10.10.2.15:35608 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="6ms"
2025-06-07T08:42:52.911500Z	info	access	connection complete	src.addr=10.10.1.6:54914 src.workload="bookinfo-gateway-istio-6cbd9bcd49-dlqgn" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-gateway-istio" dst.addr=10.10.2.15:15008 dst.hbone_addr=10.10.2.15:9080 dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-54bb874995-gkz6q" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=64939 bytes_recv=1705 duration="6383ms"
2025-06-07T08:42:53.051206Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:53.051326Z	info	access	connection complete	src.addr=10.10.2.15:55236 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:53.058424Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="5ms"
2025-06-07T08:42:53.058516Z	info	access	connection complete	src.addr=10.10.2.15:37366 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="6ms"
2025-06-07T08:42:54.126914Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:54.127025Z	info	access	connection complete	src.addr=10.10.2.15:55242 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:54.133883Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="5ms"
2025-06-07T08:42:54.134021Z	info	access	connection complete	src.addr=10.10.2.15:37372 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="6ms"
2025-06-07T08:42:55.060772Z	info	access	connection complete	src.addr=10.10.1.6:54930 src.workload="bookinfo-gateway-istio-6cbd9bcd49-dlqgn" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-gateway-istio" dst.addr=10.10.2.15:15008 dst.hbone_addr=10.10.2.15:9080 dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-54bb874995-gkz6q" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=30490 bytes_recv=682 duration="3085ms"
2025-06-07T08:42:55.212610Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:55.212662Z	info	access	connection complete	src.addr=10.10.2.15:55250 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:55.233248Z	info	access	connection complete	src.addr=10.10.2.15:54086 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.14:15008 dst.hbone_addr=10.10.2.14:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-564544b4d6-p2lj2" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=599 bytes_recv=192 duration="19ms"
2025-06-07T08:42:55.233308Z	info	access	connection complete	src.addr=10.10.2.15:37384 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.14:15008 dst.hbone_addr=10.10.2.14:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-564544b4d6-p2lj2" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=599 duration="19ms"
2025-06-07T08:42:56.135738Z	info	access	connection complete	src.addr=10.10.1.6:54914 src.workload="bookinfo-gateway-istio-6cbd9bcd49-dlqgn" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-gateway-istio" dst.addr=10.10.2.15:15008 dst.hbone_addr=10.10.2.15:9080 dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-54bb874995-gkz6q" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=15245 bytes_recv=341 duration="2014ms"
2025-06-07T08:42:56.292939Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="1ms"
2025-06-07T08:42:56.292985Z	info	access	connection complete	src.addr=10.10.2.15:55264 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="1ms"
2025-06-07T08:42:56.298925Z	info	access	connection complete	src.addr=10.10.2.15:38792 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=519 bytes_recv=192 duration="5ms"
2025-06-07T08:42:56.299045Z	info	access	connection complete	src.addr=10.10.2.15:37398 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=519 duration="5ms"
2025-06-07T08:42:57.361355Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:57.361448Z	info	access	connection complete	src.addr=10.10.2.15:55278 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:57.368329Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="4ms"
2025-06-07T08:42:57.368393Z	info	access	connection complete	src.addr=10.10.2.15:37410 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="4ms"
2025-06-07T08:42:57.636821Z	info	access	connection complete	src.addr=10.10.2.14:56892 src.workload="reviews-v3-564544b4d6-p2lj2" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" dst.addr=10.10.2.11:15008 dst.hbone_addr=10.10.2.11:9080 dst.service="ratings.default.svc.cluster.local" dst.workload="ratings-v1-5dc79b6bcd-8f7vz" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-ratings" direction="outbound" bytes_sent=216 bytes_recv=222 duration="10005ms"
2025-06-07T08:42:57.637067Z	info	access	connection complete	src.addr=10.10.2.14:59288 src.workload="reviews-v3-564544b4d6-p2lj2" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" dst.addr=10.10.2.11:15008 dst.hbone_addr=10.10.2.11:9080 dst.service="ratings.default.svc.cluster.local" dst.workload="ratings-v1-5dc79b6bcd-8f7vz" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=216 duration="10005ms"
2025-06-07T08:42:58.434195Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:58.434244Z	info	access	connection complete	src.addr=10.10.2.15:55290 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="3ms"
2025-06-07T08:42:58.438533Z	info	access	connection complete	src.addr=10.10.2.15:38792 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=519 bytes_recv=192 duration="3ms"
2025-06-07T08:42:58.438605Z	info	access	connection complete	src.addr=10.10.2.15:37426 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=519 duration="3ms"
2025-06-07T08:42:59.369893Z	info	access	connection complete	src.addr=10.10.1.6:54930 src.workload="bookinfo-gateway-istio-6cbd9bcd49-dlqgn" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-gateway-istio" dst.addr=10.10.2.15:15008 dst.hbone_addr=10.10.2.15:9080 dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-54bb874995-gkz6q" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=40091 bytes_recv=1023 duration="4162ms"
2025-06-07T08:42:59.512415Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T08:42:59.512513Z	info	access	connection complete	src.addr=10.10.2.15:55292 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T08:42:59.516801Z	info	access	connection complete	src.addr=10.10.2.15:38792 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=519 bytes_recv=192 duration="3ms"
...
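The "connection complete" access logs above are plain key=value text, so routine questions (which destination services appear most often, how long connections last) can be answered with standard text tools. A minimal sketch, with a simplified excerpt of the log format inlined so it runs standalone (the `/tmp` file path is arbitrary); against a live cluster you would pipe `kubectl logs -n istio-system -l app=ztunnel` through the same pipeline:

```shell
# Summarize ztunnel access logs: connection count per destination service.
# Simplified sample lines in the same key=value shape as the output above.
cat > /tmp/ztunnel-access.log <<'EOF'
2025-06-07T08:42:56.292939Z info access connection complete src.addr=10.10.2.15:40614 dst.service="details.default.svc.cluster.local" direction="inbound" duration="1ms"
2025-06-07T08:42:56.298925Z info access connection complete src.addr=10.10.2.15:38792 dst.service="reviews.default.svc.cluster.local" direction="inbound" duration="5ms"
2025-06-07T08:42:57.361355Z info access connection complete src.addr=10.10.2.15:40614 dst.service="details.default.svc.cluster.local" direction="inbound" duration="2ms"
EOF
# Pull out the dst.service field and count occurrences per service.
grep -o 'dst.service="[^"]*"' /tmp/ztunnel-access.log | sort | uniq -c | sort -rn
```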

10. Enter a ztunnel Pod and Inspect Its Internal Network State

kubectl get pod -n istio-system -l app=ztunnel -owide
kubectl get pod -n istio-system -l app=ztunnel
ZPOD1NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[0].metadata.name}")
ZPOD2NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[1].metadata.name}")
ZPOD3NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[2].metadata.name}")
echo $ZPOD1NAME $ZPOD2NAME $ZPOD3NAME

✅ Output

NAME            READY   STATUS    RESTARTS   AGE   IP          NODE                  NOMINATED NODE   READINESS GATES
ztunnel-4bls2   1/1     Running   0          78m   10.10.0.6   myk8s-control-plane   <none>           <none>
ztunnel-kczj2   1/1     Running   0          78m   10.10.2.3   myk8s-worker2         <none>           <none>
ztunnel-wr6pp   1/1     Running   0          78m   10.10.1.5   myk8s-worker          <none>           <none>

NAME            READY   STATUS    RESTARTS   AGE
ztunnel-4bls2   1/1     Running   0          78m
ztunnel-kczj2   1/1     Running   0          78m
ztunnel-wr6pp   1/1     Running   0          78m

ztunnel-4bls2 ztunnel-kczj2 ztunnel-wr6pp
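The three per-index jsonpath lookups above can be collapsed into a single query that returns every ztunnel pod name at once. A sketch, with the kubectl result stubbed using this lab's pod names so the loop runs without a cluster:

```shell
# One jsonpath query instead of items[0]..items[2]:
#   ZPODS=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath='{.items[*].metadata.name}')
# Stubbed here with the names from this lab:
ZPODS="ztunnel-4bls2 ztunnel-kczj2 ztunnel-wr6pp"
set -- $ZPODS                  # split the space-separated list into $1..$n
echo "found $# ztunnel pods"
for pod in "$@"; do
  echo "ztunnel pod: $pod"
done
```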
kubectl pexec $ZPOD1NAME -it -T -n istio-system -- bash

✅ Output

Defaulting container name to istio-proxy.
Create cnsenter pod (cnsenter-yi26ybqru8)
Wait to run cnsenter pod (cnsenter-yi26ybqru8)
If you don't see a command prompt, try pressing enter.
bash-5.1# 
bash-5.1# iptables -t mangle -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
bash-5.1# iptables -t nat -S

✅ Output

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
bash-5.1# ss -tnlp

✅ Output

State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port  Process                                                                 
LISTEN   0        1024           127.0.0.1:15000          0.0.0.0:*      users:(("ztunnel",pid=1,fd=14))                                        
LISTEN   0        1024                   *:15020                *:*      users:(("ztunnel",pid=1,fd=17))                                        
LISTEN   0        1024                   *:15021                *:*      users:(("ztunnel",pid=1,fd=9))                                         
LISTEN   0        1024               [::1]:15000             [::]:*      users:(("ztunnel",pid=1,fd=15))
bash-5.1# ss -xnp

✅ Output

Netid  State  Recv-Q   Send-Q                     Local Address:Port                               Peer Address:Port                            Process                                                                 
u_seq  ESTAB  0        0                                      * 150690                                        * 140247                           users:(("ztunnel",pid=1,fd=19))                                        
u_seq  ESTAB  0        0          /var/run/ztunnel/ztunnel.sock 140247                                        * 150690                                                                                                  
u_str  ESTAB  0        0                                      * 149963                                        * 149964                           users:(("ztunnel",pid=1,fd=13),("ztunnel",pid=1,fd=8),("ztunnel",pid=1,fd=6))
u_str  ESTAB  0        0                                      * 149964                                        * 149963                           users:(("ztunnel",pid=1,fd=7))

11. Check ztunnel Metrics Collection

bash-5.1# curl -s http://localhost:15020/metrics | grep '^[^#]'

✅ Output

istio_build{component="ztunnel",tag="unknown"} 1
istio_xds_connection_terminations_total{reason="Reconnect"} 2
istio_xds_message_total{url="type.googleapis.com/istio.workload.Address"} 38
istio_xds_message_total{url="type.googleapis.com/istio.security.Authorization"} 5
istio_xds_message_bytes_total{url="type.googleapis.com/istio.workload.Address"} 31731
istio_xds_message_bytes_total{url="type.googleapis.com/istio.security.Authorization"} 0
workload_manager_active_proxy_count 1
workload_manager_pending_proxy_count 0
workload_manager_proxies_started_total 1
workload_manager_proxies_stopped_total 0
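A single gauge can be read straight out of the Prometheus text format scraped above. A sketch with the payload inlined so it runs standalone; inside the pod you would pipe `curl -s http://localhost:15020/metrics` into the same awk:

```shell
# Read one gauge from Prometheus text format: "<name>{labels} <value>" or
# "<name> <value>" per line, so $1 is the metric name and $2 its value.
cat > /tmp/ztunnel-metrics.txt <<'EOF'
istio_build{component="ztunnel",tag="unknown"} 1
workload_manager_active_proxy_count 1
workload_manager_pending_proxy_count 0
EOF
awk '$1 == "workload_manager_active_proxy_count" {print $2}' /tmp/ztunnel-metrics.txt
```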
bash-5.1# exit
exit
Delete cnsenter pod (cnsenter-yi26ybqru8)

12. Exclude the netshoot Workload from the Ambient Mesh

docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     HBONE
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP
kubectl label pod netshoot istio.io/dataplane-mode=none

# Result
pod/netshoot labeled

Re-run the workload listing to confirm that the netshoot workload now shows PROTOCOL=TCP, i.e., it has been excluded from the Ambient Mesh.

docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     TCP
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP
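Filtering the workload table for PROTOCOL=HBONE is a quick way to list only the workloads still captured by the mesh. A sketch with a few simplified rows of the table above inlined so it runs standalone; on the cluster, you would pipe `istioctl ztunnel-config workload` through the same awk:

```shell
# Print the pod name ($2) of every row whose last column is HBONE.
cat > /tmp/ztunnel-workloads.txt <<'EOF'
default netshoot                        10.10.0.8  myk8s-control-plane None TCP
default productpage-v1-54bb874995-gkz6q 10.10.2.15 myk8s-worker2       None HBONE
default reviews-v1-598b896c9d-9dnx5     10.10.2.12 myk8s-worker2       None HBONE
EOF
awk '$NF == "HBONE" {print $2}' /tmp/ztunnel-workloads.txt
```

After the label change above, netshoot no longer appears in this filtered list.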

Related ztunnel TCP metrics to watch in Prometheus:

istio_tcp_sent_bytes_total

istio_tcp_connections_opened_total


🧭 Checking istio-proxy Information

1. Check istio-proxy Sync Status

docker exec -it myk8s-control-plane istioctl proxy-status

✅ Output

NAME                                                CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
bookinfo-gateway-istio-6cbd9bcd49-dlqgn.default     Kubernetes     SYNCED (28m)     SYNCED (28m)     SYNCED (28m)     SYNCED (28m)     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-4bls2.istio-system                          Kubernetes     IGNORED          IGNORED          IGNORED          IGNORED          IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-kczj2.istio-system                          Kubernetes     IGNORED          IGNORED          IGNORED          IGNORED          IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-wr6pp.istio-system                          Kubernetes     IGNORED          IGNORED          IGNORED          IGNORED          IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
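A healthy mesh shows every xDS column as SYNCED (or IGNORED, as the ztunnel rows above report, since ztunnel does not consume Envoy's full xDS set); anything else, such as STALE, deserves attention. A sketch that flags such rows, with two simplified rows of the output above inlined (the "(28m)" age suffixes dropped so the columns stay whitespace-separated); on the cluster you would pipe `istioctl proxy-status | tail -n +2` in instead:

```shell
# Print any proxy whose CDS column ($3) is neither SYNCED nor IGNORED.
cat > /tmp/proxy-status.txt <<'EOF'
bookinfo-gateway-istio-6cbd9bcd49-dlqgn.default Kubernetes SYNCED SYNCED SYNCED SYNCED IGNORED istiod-86b6b7ff7-d7q7f 1.26.0
ztunnel-4bls2.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-d7q7f 1.26.0
EOF
awk '$3 != "SYNCED" && $3 != "IGNORED" {stale=1; print "stale:", $1}
     END {if (!stale) print "all proxies in sync"}' /tmp/proxy-status.txt
```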

2. Check the Proxy's Listener Configuration

docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/bookinfo-gateway-istio

✅ Output

ADDRESSES PORT  MATCH DESTINATION
          0     ALL   Cluster: connect_originate
0.0.0.0   80    ALL   Route: http.80
0.0.0.0   15021 ALL   Inline Route: /healthz/ready*
0.0.0.0   15090 ALL   Inline Route: /stats/prometheus*

3. Check the Proxy's HTTP Route Configuration

docker exec -it myk8s-control-plane istioctl proxy-config route deploy/bookinfo-gateway-istio

✅ Output

NAME        VHOST NAME     DOMAINS     MATCH                           VIRTUAL SERVICE
http.80     *:80           *           /productpage                    bookinfo-0-istio-autogenerated-k8s-gateway.default
http.80     *:80           *           /logout                         bookinfo-0-istio-autogenerated-k8s-gateway.default
http.80     *:80           *           /login                          bookinfo-0-istio-autogenerated-k8s-gateway.default
http.80     *:80           *           PathPrefix:/api/v1/products     bookinfo-0-istio-autogenerated-k8s-gateway.default
http.80     *:80           *           PathPrefix:/static              bookinfo-0-istio-autogenerated-k8s-gateway.default
            backend        *           /healthz/ready*                 
            backend        *           /stats/prometheus* 

4. Check the Clusters the Proxy Is Aware Of

docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/bookinfo-gateway-istio

✅ Output

SERVICE FQDN                                                 PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
BlackHoleCluster                                             -         -          -             STATIC           
agent                                                        -         -          -             STATIC           
bookinfo-gateway-istio.default.svc.cluster.local             80        -          outbound      EDS              
bookinfo-gateway-istio.default.svc.cluster.local             15021     -          outbound      EDS              
connect_originate                                            -         -          -             ORIGINAL_DST     
details.default.svc.cluster.local                            9080      -          outbound      EDS              
grafana.istio-system.svc.cluster.local                       3000      -          outbound      EDS              
istiod.istio-system.svc.cluster.local                        443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local                        15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local                        15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local                        15014     -          outbound      EDS              
jaeger-collector.istio-system.svc.cluster.local              4317      -          outbound      EDS              
jaeger-collector.istio-system.svc.cluster.local              4318      -          outbound      EDS              
jaeger-collector.istio-system.svc.cluster.local              9411      -          outbound      EDS              
jaeger-collector.istio-system.svc.cluster.local              14250     -          outbound      EDS              
jaeger-collector.istio-system.svc.cluster.local              14268     -          outbound      EDS              
kiali.istio-system.svc.cluster.local                         9090      -          outbound      EDS              
kiali.istio-system.svc.cluster.local                         20001     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local                       53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local                       9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local                         443       -          outbound      EDS              
loki-headless.istio-system.svc.cluster.local                 3100      -          outbound      EDS              
loki-memberlist.istio-system.svc.cluster.local               7946      -          outbound      EDS              
loki.istio-system.svc.cluster.local                          3100      -          outbound      EDS              
loki.istio-system.svc.cluster.local                          9095      -          outbound      EDS              
metallb-webhook-service.metallb-system.svc.cluster.local     443       -          outbound      EDS              
productpage.default.svc.cluster.local                        9080      -          outbound      EDS              
prometheus.istio-system.svc.cluster.local                    9090      -          outbound      EDS              
prometheus_stats                                             -         -          -             STATIC           
ratings.default.svc.cluster.local                            9080      -          outbound      EDS              
reviews.default.svc.cluster.local                            9080      -          outbound      EDS              
sds-grpc                                                     -         -          -             STATIC           
tracing.istio-system.svc.cluster.local                       80        -          outbound      EDS              
tracing.istio-system.svc.cluster.local                       16685     -          outbound      EDS              
xds-grpc                                                     -         -          -             STATIC           
zipkin.istio-system.svc.cluster.local                        9411      -          outbound      EDS

5. Check the Endpoints the Proxy Recognizes and Their Status

docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/bookinfo-gateway-istio --status healthy

✅ Output

ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.10.0.2:53                                            HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.10.0.2:9153                                          HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.10.0.3:53                                            HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.10.0.3:9153                                          HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.10.1.2:9443                                          HEALTHY     OK                outbound|443||metallb-webhook-service.metallb-system.svc.cluster.local
10.10.1.3:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.10.1.3:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.10.1.3:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.10.1.3:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.10.1.6:80                                            HEALTHY     OK                outbound|80||bookinfo-gateway-istio.default.svc.cluster.local
10.10.1.6:15021                                         HEALTHY     OK                outbound|15021||bookinfo-gateway-istio.default.svc.cluster.local
10.10.2.15:15008                                        HEALTHY     OK                connect_originate
10.10.2.4:3000                                          HEALTHY     OK                outbound|3000||grafana.istio-system.svc.cluster.local
10.10.2.5:4317                                          HEALTHY     OK                outbound|4317||jaeger-collector.istio-system.svc.cluster.local
10.10.2.5:4318                                          HEALTHY     OK                outbound|4318||jaeger-collector.istio-system.svc.cluster.local
10.10.2.5:9411                                          HEALTHY     OK                outbound|9411||jaeger-collector.istio-system.svc.cluster.local
10.10.2.5:9411                                          HEALTHY     OK                outbound|9411||zipkin.istio-system.svc.cluster.local
10.10.2.5:14250                                         HEALTHY     OK                outbound|14250||jaeger-collector.istio-system.svc.cluster.local
10.10.2.5:14268                                         HEALTHY     OK                outbound|14268||jaeger-collector.istio-system.svc.cluster.local
10.10.2.5:16685                                         HEALTHY     OK                outbound|16685||tracing.istio-system.svc.cluster.local
10.10.2.5:16686                                         HEALTHY     OK                outbound|80||tracing.istio-system.svc.cluster.local
10.10.2.6:9090                                          HEALTHY     OK                outbound|9090||kiali.istio-system.svc.cluster.local
10.10.2.6:20001                                         HEALTHY     OK                outbound|20001||kiali.istio-system.svc.cluster.local
10.10.2.7:9090                                          HEALTHY     OK                outbound|9090||prometheus.istio-system.svc.cluster.local
10.10.2.9:3100                                          HEALTHY     OK                outbound|3100||loki-headless.istio-system.svc.cluster.local
10.10.2.9:3100                                          HEALTHY     OK                outbound|3100||loki.istio-system.svc.cluster.local
10.10.2.9:7946                                          HEALTHY     OK                outbound|7946||loki-memberlist.istio-system.svc.cluster.local
10.10.2.9:9095                                          HEALTHY     OK                outbound|9095||loki.istio-system.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
172.18.0.3:6443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
envoy://connect_originate/10.10.2.10:9080               HEALTHY     OK                outbound|9080||details.default.svc.cluster.local
envoy://connect_originate/10.10.2.11:9080               HEALTHY     OK                outbound|9080||ratings.default.svc.cluster.local
envoy://connect_originate/10.10.2.12:9080               HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local
envoy://connect_originate/10.10.2.13:9080               HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local
envoy://connect_originate/10.10.2.14:9080               HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local
envoy://connect_originate/10.10.2.15:9080               HEALTHY     OK                outbound|9080||productpage.default.svc.cluster.local
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc

6. Check the proxy's Secret (TLS certificates, etc.) configuration

1
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/bookinfo-gateway-istio

✅ Output

1
2
3
RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                        NOT AFTER                NOT BEFORE
default           Cert Chain     ACTIVE     true           1a961f22a83f19df7f350b293d526839     2025-06-08T08:11:58Z     2025-06-07T08:09:58Z
ROOTCA            CA             ACTIVE     true           8dae6d7752ac495efac249ceb5279185     2035-06-05T07:25:32Z     2025-06-07T07:25:32Z
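A quick sanity check on the validity windows above (timestamps copied from the table; illustrative only): the workload certificate is valid for roughly 24 hours, while the root CA is valid for about 10 years. istiod re-issues the short-lived workload certificate automatically before it expires.

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # Timestamps in the istioctl output are UTC ("Z") ISO-8601
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# NOT AFTER minus NOT BEFORE, copied from the proxy-config secret table above
workload = parse("2025-06-08T08:11:58Z") - parse("2025-06-07T08:09:58Z")
rootca   = parse("2035-06-05T07:25:32Z") - parse("2025-06-07T07:25:32Z")

print(workload)     # 1 day, 0:02:00 -> workload certs are short-lived and rotated
print(rootca.days)  # 3650           -> the root CA lives ~10 years
```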

🧾 ztunnel-config

1. List the ztunnel-config commands

1
docker exec -it myk8s-control-plane istioctl ztunnel-config

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
A group of commands used to update or retrieve Ztunnel configuration from a Ztunnel instance.

Usage:
  istioctl ztunnel-config [command]

Aliases:
  ztunnel-config, zc

Examples:
  # Retrieve summary about workload configuration
  istioctl ztunnel-config workload

  # Retrieve summary about certificates
  istioctl ztunnel-config certificates

Available Commands:
  all         Retrieves all configuration for the specified Ztunnel pod.
  certificate Retrieves certificate for the specified Ztunnel pod.
  connections Retrieves connections for the specified Ztunnel pod.
  log         Retrieves logging levels of the Ztunnel instance in the specified pod.
  policy      Retrieves policies for the specified Ztunnel pod.
  service     Retrieves services for the specified Ztunnel pod.
  workload    Retrieves workload configuration for the specified Ztunnel pod.

Flags:
  -h, --help   help for ztunnel-config

Global Flags:
      --as string               Username to impersonate for the operation. User could be a regular user or a service account in a namespace
      --as-group stringArray    Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string           UID to impersonate for the operation.
      --context string          Kubernetes configuration context
  -i, --istioNamespace string   Istio system namespace (default "istio-system")
  -c, --kubeconfig string       Kubernetes configuration file
  -n, --namespace string        Kubernetes namespace
      --vklog Level             number for the log level verbosity. Like -v flag. ex: --vklog=9

Use "istioctl ztunnel-config [command] --help" for more information about a command.

2. Query all services in the cluster

1
docker exec -it myk8s-control-plane istioctl ztunnel-config service

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
NAMESPACE      SERVICE NAME            SERVICE VIP  WAYPOINT ENDPOINTS
default        bookinfo-gateway-istio  10.200.1.248 None     1/1
default        details                 10.200.1.34  None     1/1
default        kubernetes              10.200.1.1   None     1/1
default        productpage             10.200.1.135 None     1/1
default        ratings                 10.200.1.184 None     1/1
default        reviews                 10.200.1.179 None     3/3
istio-system   grafana                 10.200.1.136 None     1/1
istio-system   istiod                  10.200.1.163 None     1/1
istio-system   jaeger-collector        10.200.1.144 None     1/1
istio-system   kiali                   10.200.1.133 None     1/1
istio-system   loki                    10.200.1.200 None     1/1
istio-system   loki-headless                        None     1/1
istio-system   loki-memberlist                      None     1/1
istio-system   prometheus              10.200.1.41  None     1/1
istio-system   tracing                 10.200.1.229 None     1/1
istio-system   zipkin                  10.200.1.117 None     1/1
kube-system    kube-dns                10.200.1.10  None     2/2
metallb-system metallb-webhook-service 10.200.1.89  None     1/1

3. Query services in a specific namespace from a specific node

1
docker exec -it myk8s-control-plane istioctl ztunnel-config service --service-namespace default --node myk8s-worker

✅ Output

1
2
3
4
5
6
7
NAMESPACE SERVICE NAME           SERVICE VIP  WAYPOINT ENDPOINTS
default   bookinfo-gateway-istio 10.200.1.248 None     1/1
default   details                10.200.1.34  None     1/1
default   kubernetes             10.200.1.1   None     1/1
default   productpage            10.200.1.135 None     1/1
default   ratings                10.200.1.184 None     1/1
default   reviews                10.200.1.179 None     3/3

4. Output service information as JSON

1
docker exec -it myk8s-control-plane istioctl ztunnel-config service --service-namespace default --node myk8s-worker -o json

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
[
    {
        "name": "bookinfo-gateway-istio",
        "namespace": "default",
        "hostname": "bookinfo-gateway-istio.default.svc.cluster.local",
        "vips": [
            "/10.200.1.248"
        ],
        "ports": {
            "15021": 15021,
            "80": 80
        },
        "endpoints": {
            "Kubernetes//Pod/default/bookinfo-gateway-istio-6cbd9bcd49-dlqgn": {
                "workloadUid": "Kubernetes//Pod/default/bookinfo-gateway-istio-6cbd9bcd49-dlqgn",
                "service": "",
                "port": {
                    "15021": 15021,
                    "80": 80
                }
            }
        },
        "ipFamilies": "IPv4"
    },
    {
        "name": "details",
        "namespace": "default",
        "hostname": "details.default.svc.cluster.local",
        "vips": [
            "/10.200.1.34"
        ],
        "ports": {
            "9080": 9080
        },
        "endpoints": {
            "Kubernetes//Pod/default/details-v1-766844796b-brc95": {
                "workloadUid": "Kubernetes//Pod/default/details-v1-766844796b-brc95",
                "service": "",
                "port": {
                    "9080": 9080
                }
            }
        },
        "ipFamilies": "IPv4"
    },
    {
        "name": "kubernetes",
        "namespace": "default",
        "hostname": "kubernetes.default.svc.cluster.local",
        "vips": [
            "/10.200.1.1"
        ],
        "ports": {
            "443": 6443
        },
        "endpoints": {
            "Kubernetes/discovery.k8s.io/EndpointSlice/default/kubernetes/172.18.0.3": {
                "workloadUid": "Kubernetes/discovery.k8s.io/EndpointSlice/default/kubernetes/172.18.0.3",
                "service": "",
                "port": {
                    "443": 6443
                }
            }
        },
        "ipFamilies": "IPv4"
    },
    {
        "name": "productpage",
        "namespace": "default",
        "hostname": "productpage.default.svc.cluster.local",
        "vips": [
            "/10.200.1.135"
        ],
        "ports": {
            "9080": 9080
        },
        "endpoints": {
            "Kubernetes//Pod/default/productpage-v1-54bb874995-gkz6q": {
                "workloadUid": "Kubernetes//Pod/default/productpage-v1-54bb874995-gkz6q",
                "service": "",
                "port": {
                    "9080": 9080
                }
            }
        },
        "ipFamilies": "IPv4"
    },
    {
        "name": "ratings",
        "namespace": "default",
        "hostname": "ratings.default.svc.cluster.local",
        "vips": [
            "/10.200.1.184"
        ],
        "ports": {
            "9080": 9080
        },
        "endpoints": {
            "Kubernetes//Pod/default/ratings-v1-5dc79b6bcd-8f7vz": {
                "workloadUid": "Kubernetes//Pod/default/ratings-v1-5dc79b6bcd-8f7vz",
                "service": "",
                "port": {
                    "9080": 9080
                }
            }
        },
        "ipFamilies": "IPv4"
    },
    {
        "name": "reviews",
        "namespace": "default",
        "hostname": "reviews.default.svc.cluster.local",
        "vips": [
            "/10.200.1.179"
        ],
        "ports": {
            "9080": 9080
        },
        "endpoints": {
            "Kubernetes//Pod/default/reviews-v1-598b896c9d-9dnx5": {
                "workloadUid": "Kubernetes//Pod/default/reviews-v1-598b896c9d-9dnx5",
                "service": "",
                "port": {
                    "9080": 9080
                }
            },
            "Kubernetes//Pod/default/reviews-v2-556d6457d-jdksf": {
                "workloadUid": "Kubernetes//Pod/default/reviews-v2-556d6457d-jdksf",
                "service": "",
                "port": {
                    "9080": 9080
                }
            },
            "Kubernetes//Pod/default/reviews-v3-564544b4d6-p2lj2": {
                "workloadUid": "Kubernetes//Pod/default/reviews-v3-564544b4d6-p2lj2",
                "service": "",
                "port": {
                    "9080": 9080
                }
            }
        },
        "ipFamilies": "IPv4"
    }
]
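The ENDPOINTS column in the table view is simply the number of entries under each service's `endpoints` map in this JSON. A small illustrative sketch against a trimmed sample of the output above:

```python
import json

# Trimmed sample of `istioctl ztunnel-config service -o json` (same shape as above)
raw = """
[
  {"name": "reviews", "namespace": "default",
   "hostname": "reviews.default.svc.cluster.local",
   "vips": ["/10.200.1.179"],
   "endpoints": {
     "Kubernetes//Pod/default/reviews-v1-598b896c9d-9dnx5": {},
     "Kubernetes//Pod/default/reviews-v2-556d6457d-jdksf": {},
     "Kubernetes//Pod/default/reviews-v3-564544b4d6-p2lj2": {}}},
  {"name": "details", "namespace": "default",
   "hostname": "details.default.svc.cluster.local",
   "vips": ["/10.200.1.34"],
   "endpoints": {"Kubernetes//Pod/default/details-v1-766844796b-brc95": {}}}
]
"""

# Count endpoints per service -- matches the ENDPOINTS column (e.g. reviews 3/3)
counts = {svc["name"]: len(svc["endpoints"]) for svc in json.loads(raw)}
print(counts)  # {'reviews': 3, 'details': 1}
```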

5. Check workload configuration across the entire cluster

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     TCP
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP
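Only the Bookinfo pods enrolled in the ambient mesh show `HBONE` in the PROTOCOL column; everything else (including the ztunnel pods themselves and the gateway pod) stays plain `TCP`. A quick illustrative tally over a few rows copied from the table above:

```python
from collections import Counter

# A few rows copied from the `istioctl ztunnel-config workload` output above
rows = """
default      details-v1-766844796b-brc95      10.10.2.10 myk8s-worker2       None HBONE
default      productpage-v1-54bb874995-gkz6q  10.10.2.15 myk8s-worker2       None HBONE
default      netshoot                         10.10.0.8  myk8s-control-plane None TCP
istio-system ztunnel-kczj2                    10.10.2.3  myk8s-worker2       None TCP
""".strip().splitlines()

# The last whitespace-separated field is the PROTOCOL column
by_protocol = Counter(line.split()[-1] for line in rows)
print(by_protocol)  # Counter({'HBONE': 2, 'TCP': 2})
```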

6. Query workloads in a specific namespace only

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --workload-namespace default

✅ Output

1
2
3
4
5
6
7
8
9
10
NAMESPACE POD NAME                                ADDRESS    NODE                WAYPOINT PROTOCOL
default   bookinfo-gateway-istio-6cbd9bcd49-dlqgn 10.10.1.6  myk8s-worker        None     TCP
default   details-v1-766844796b-brc95             10.10.2.10 myk8s-worker2       None     HBONE
default   kubernetes                              172.18.0.3                     None     TCP
default   netshoot                                10.10.0.8  myk8s-control-plane None     TCP
default   productpage-v1-54bb874995-gkz6q         10.10.2.15 myk8s-worker2       None     HBONE
default   ratings-v1-5dc79b6bcd-8f7vz             10.10.2.11 myk8s-worker2       None     HBONE
default   reviews-v1-598b896c9d-9dnx5             10.10.2.12 myk8s-worker2       None     HBONE
default   reviews-v2-556d6457d-jdksf              10.10.2.13 myk8s-worker2       None     HBONE
default   reviews-v3-564544b4d6-p2lj2             10.10.2.14 myk8s-worker2       None     HBONE

7. Query workloads with a namespace + node filter

Note that `--node` selects which node's ztunnel instance to read the configuration from; it does not filter where the workloads run, so the listing below matches the previous step.

1
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --workload-namespace default --node myk8s-worker2

✅ Output

1
2
3
4
5
6
7
8
9
10
NAMESPACE POD NAME                                ADDRESS    NODE                WAYPOINT PROTOCOL
default   bookinfo-gateway-istio-6cbd9bcd49-dlqgn 10.10.1.6  myk8s-worker        None     TCP
default   details-v1-766844796b-brc95             10.10.2.10 myk8s-worker2       None     HBONE
default   kubernetes                              172.18.0.3                     None     TCP
default   netshoot                                10.10.0.8  myk8s-control-plane None     TCP
default   productpage-v1-54bb874995-gkz6q         10.10.2.15 myk8s-worker2       None     HBONE
default   ratings-v1-5dc79b6bcd-8f7vz             10.10.2.11 myk8s-worker2       None     HBONE
default   reviews-v1-598b896c9d-9dnx5             10.10.2.12 myk8s-worker2       None     HBONE
default   reviews-v2-556d6457d-jdksf              10.10.2.13 myk8s-worker2       None     HBONE
default   reviews-v3-564544b4d6-p2lj2             10.10.2.14 myk8s-worker2       None     HBONE

🛍️ Inspecting productpage

1. Open a Bash shell in the productpage workload (`kubectl pexec` is a plugin that spawns a cnsenter helper pod, as the output shows)

1
2
PPOD=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
kubectl pexec $PPOD -it -T -- bash

✅ Output

1
2
3
4
5
Defaulting container name to productpage.
Create cnsenter pod (cnsenter-k4jcuxs3l5)
Wait to run cnsenter pod (cnsenter-k4jcuxs3l5)
If you don't see a command prompt, try pressing enter.
bash-5.1# 

2. Check connections inside productpage (TCP socket state)

1
2
3
4
5
6
7
8
9
10
bash-5.1# ss -tnp
State Recv-Q Send-Q       Local Address:Port         Peer Address:Port  Process                                                                 
ESTAB 0      0                10.10.1.6:46435          10.10.2.15:9080                                                                          
ESTAB 0      0               10.10.2.15:59558          10.10.2.13:15008                                                                         
ESTAB 0      0               10.10.2.15:54086          10.10.2.14:15008                                                                         
ESTAB 0      0               10.10.2.15:38792          10.10.2.12:15008                                                                         
ESTAB 0      0               10.10.2.15:40614          10.10.2.10:15008                                                                         
ESTAB 0      0      [::ffff:10.10.2.15]:15008  [::ffff:10.10.1.6]:54914                                                                         
ESTAB 0      0      [::ffff:10.10.2.15]:15008  [::ffff:10.10.1.6]:54930                                                                         
ESTAB 0      0      [::ffff:10.10.2.15]:9080   [::ffff:10.10.1.6]:46435  users:(("gunicorn",pid=14,fd=11))

3. Check listening ports inside productpage

Besides the application listener on 9080 (gunicorn), the 15001/15006/15008 listeners and the 127.0.0.1:15053 DNS proxy below belong to ztunnel, which binds its sockets inside each pod's network namespace in ambient mode.

1
2
3
4
5
6
7
8
bash-5.1# ss -tnlp
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port  Process                                                                 
LISTEN   0        128            127.0.0.1:15053          0.0.0.0:*                                                                             
LISTEN   0        2048                   *:9080                 *:*      users:(("gunicorn",pid=21,fd=5),("gunicorn",pid=20,fd=5),("gunicorn",pid=19,fd=5),("gunicorn",pid=18,fd=5),("gunicorn",pid=17,fd=5),("gunicorn",pid=16,fd=5),("gunicorn",pid=15,fd=5),("gunicorn",pid=14,fd=5),("gunicorn",pid=1,fd=5))
LISTEN   0        128                [::1]:15053             [::]:*                                                                             
LISTEN   0        128                    *:15008                *:*                                                                             
LISTEN   0        128                    *:15001                *:*                                                                             
LISTEN   0        128                    *:15006                *:*

4. Verify encryption

Capture traffic on the HBONE port (15008): the payload should appear only as ciphertext, confirming the tunnel is mTLS-encrypted.

1
bash-5.1# tcpdump -i eth0 -A -s 0 -nn 'tcp port 15008'

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
..

..:...i..,...$.....n.....
...E.........O.4.i..
}.s.......l......0..#()...4Q..T1.}bDr.a.0..	..TL...9..)...\...~$.....i.0....8...?.@...."9[...;.I..?..3>q.........\.....\BEB......Y._...JkfO..
.....R..s..j........F.6.....A.....l?R	C....m...%3...!...	..._a.=..&2v).d.P..:..l.....O.sK..........c..S.,..P..`. .....?.	...o?...:9..a.>.%{.s|]...c...Y...N.....L....
l.w.t....D..R.1.Z?.......N.>.m....V~...!..T..Ba.d..\2.x.Do!u....8...eo.3`u.%.2.......[v........F./bJL<.p.y.y...j..d..._.........7.@\.for.z_...~0.c:J..`..yQ....*5......t.e.....f..
.W...ITYO_&.....e.(..>.W $.~....@....<.r6...{.. ....L..Q`.0.g.I..!~H..}..3.............nE..Dp...e.P.J8=.......L...^.........3...&....-ny.....#H.M..J...n.?r.
.+'U.~.G..C.{.F%.1....#u|]..........p..?|EHq....V.X.Z...xx.F.>7....'....}}1.....V.u.......7 |.:...`j...j.j..a....z1),8
....uf..*..R.>+o,|......`.6:...nq
Eqz.F..v.p...-....L...[.#sjG...s.z..'xBK....Q>x.....U4......k..l../%....1*).'.!B.vO..Y.f...y.?..5M.......6`.h.......?..:P..!a&.q.N..'....>D...=.[..!..F.o..d..r~.e..z.Jk.H.Hb.....!...=..W..{.X.UA...G....*hyj....t.A...N...Q!\..;..k....V....G......
10:58:41.163991 IP 10.10.2.15.15008 > 10.10.1.6.54914: Flags [P.], seq 43357:44412, ack 1188, win 682, options [nop,nop,TS val 2616070213 ecr 4275544335], length 1055
E..S..@.@...

..

..:...i."K...$.....n.....
...E............ey..O..........8g..z...&<.....B rn..oX9.}#...".pM..tI.......11p.*.:..ee.../N.....Dk|]D{,.mq.>6....d..,...Z ..!.>V...L....8.......j..<.....S.z_.5.....e..<B.le..x6...[....!.t.Jt/.f#....|.}...A....~n.......A$.".j..$.w..Ra..`...Z.....d8#...8Xq.].)m.K....D.;.F.......I[...kA...'..9.Q.F.f.-q&S.._...._.M.q.a...-K<8=...!..o.U....:e........z..e..... ..|.....M..*..z=..3R...}MV.BEU].q....'........t...brk.....<v..s.KC..y;;i...../ .Z-l...?.....b...e.W.{....,....g)8.Y.(9.OZ...`TyT..L:.#..W.
%O,6p .......`.eF..E...........}...:y....9.......:y.).8&}.L....b...
.E.jX.o.L.{.<..F...8....(L..Qy..t.#.........w.G...e\.U*....b...no.h..O....6.	...ir....],....E.&i........I~Q_GB.....u..e...R3.8_..t...P.dY...AL.5..C?...F1...I..k..........:..X=.o..G..h...e..w\.u.4.....ep[....S.c./..P9.og.W.3+o.3.j......e....j-....#.!.Uv.3N....Pz....m!.~..X.4.P...4.	....5q]....t.!0h
.6?.!......8.b..<..y..]2.{.R.. ...r..Y..N%.E%g..........Y......}..s5.jQ....:....4.T.
W..:.~n..M.4..../...0
1..M..	...4.{).!.'}......y...T........
...	.t..{c...}ex.......X................]y.ai.......W
10:58:41.163999 IP 10.10.2.15.15008 > 10.10.1.6.54914: Flags [P.], seq 44412:45467, ack 1188, win 682, options [nop,nop,TS val 2616070213 ecr 4275544335], length 1055
E..S..@.@...

..

..:...i.&j...$.....n.....
...E............-....U.....0.....}
..I.mn7!.7...W,.&.4O.d].....~....w...+=tVp.Q...?....P.....T9.X.g..E.K5..`?|..t.<..qs...!...*..|....j....?x...(..aK...ysE.o....{4~.....)%....ER...m.......Nh.,k7.`.9.....47q.6........K........Y.B._.....f....A...[.....Hy.)..'...a.Y....o..%H..c......r.}..w.....~dJ.a..u..Lb........X\_....K%0f..#....E.h../....?.A8|
M]%..=\Db.....?..l@&p..c........E|..84.L..w.W..W...........-_...e_\>..u@..-.;..-.Q.s....tb.d!.d..v.~..}W*........$.l..B......R.m.......!ohy.....W!4..}....I.....G.%.}.^...2XAj.~...Z.MD[.'(.D..Z.../.......F.t...D.....U.[....6..DM...s._...N....m.;...o....H.1Q.8....OJ..T.#_.|F(..d..?..13..q$.&.$.8..\..q.FqQ.+..j^.*.^.p........@"..".........."#..:`?}......x.bu5.d4.;Fn.~{..(-*..F;f.@{...S.w..m{~.pr...m.0..
....q.R..>i....h.Y.p)go.p~.Vc..V.........YL*.W`....Q....#...<z...?8.........=&['...l.?......WJ..vg...<Qg...t.aO...._&GmY/..Xdr.]5..........F...dM.B.UkZi.z.....!Vd..*..e-]f.6..8..R}.9kh8...T)....b.L...@...../..9...P3..G.	.<.)..).9....y.Y....D.~.D..../.
.r........mO....x.q.w.G	Mj....P....Y.q.:....^...V.K7..GM..].!
10:58:41.164007 IP 10.10.2.15.15008 > 10.10.1.6.54914: Flags [P.], seq 45467:46522, ack 1188, win 682, options [nop,nop,TS val 2616070213 ecr 4275544335], length 1055
E..S..@.@...

..

..:...i.*....$.....n.....
...E...........e96f.j..A`....f...w.y5.....G...U...y.Y...q....LI..'.C.$.Q[a...4W.....v...t...x."[..`?......A....qNgCd........}......o.........L1m.f8	...+.3$.........&1C.'..'\T-.5.7.t..[..Eal..V..6.2!.............o;.`..u..x.{.M.(.y.C....;<..KB&Zk..\..U.uRtW#.......\e.j..1.c.<`
D0p{.{.&\.....b+	....Q...(......;.@....:!...GZ3AX...>.n....\.B....].M.6..$<.p...	*..!...........z..7G...#......C.s.^.....T.8F.P....so...t....:.R......:.w
....:%H.g*...X.D.S[.P...uFUGv...`.y.z.	l..tTs.6%...-...N.@Lyu....U!J.A<.Jz/......j....qz].E].'v1....s<0G..%...2....C/..@..Q`=.o....*......GJ $\63..d...qCM...8.7B..q.b.y.$.`...gh...0,Q.H..k.5..z.X...~`..\..x......\	MeV....C...z..L
..E......z.....H[.`.;.\t.......*_../N..H.b.":jI.l".7......].".G.&3...J0.....p.1.....Y.P.M..Fm.......T.W...._E.r./.8..K.Q.*ABs^..	@t..M...:..[H..<._....E.Y......Q.....H..i$?0.....Q..<..t.r.....(...x.....!..v^
..V.....^Ya..~...
.sK5.[....7T...d...e....x...q.....*.....r._..P......3.'.=....d....m..GXW..<..uU....6R.
;<...;Y.......f..j.qC.`...1%..r(.......3!U.....!.......~=?......p. ,..H5..x.vv.....x....D.
10:58:41.164015 IP 10.10.2.15.15008 > 10.10.1.6.54914: Flags [P.], seq 46522:47285, ack 1188, win 682, options [nop,nop,TS val 2616070213 ecr 4275544335], length 763
E../..@.@...

..

..:...i......$.....J.....
...E.........q.x.....C.Hy..<Ea.\=$.`....	)..f.]..:....9.i..}..0.u+|...IM...i*...[(!d.k.7]/A...2....4.)wi;
.0...._z.<....Xl.SS..2.....n...D...x:.....-F.....2..-...~{.....v....Z..}J`l...L..-.J.T[..g.:......t.d.....|....w..<...V...^..V.bL...'~.bI......FiRG.G.5..25...U)...e`qqO.`E..T...<...8.....L=l..?..IuP..f?..t...O.<.4Hcbu8S..C.K.......m.^...-..'.:.../.D.%Q...q:y...Uq.Jw.`m.v..	....a.....1LND.........
.........QsS.g..J...m..V`6...Bj.)^7..B.r.o.../...c.Y........^.../org....y..[.%.y....8z..[l=.....c7...........M..!'R.....?.k(..e.%f.w....,)OP\..A."2...H...if.7Yx..9...=...t.N...Y~...['.>]z....i.%..Wp..........G.W.T..C/m-...^.<}S...|.....:
D.....!...cp$."...c.IG+J.S.k{....A/>..........i.7.E4co.4..wF	...q.d..d..f..._../[....J.u.......f..!..'.3....\.o..FS. .......B.).^
10:58:41.164089 IP 10.10.1.6.54914 > 10.10.2.15.15008: Flags [.], ack 47285, win 663, options [nop,nop,TS val 4275544335 ecr 2616070213], length 0
E..49.@.>...

..

....:....$i.1......O.....
.......E
10:58:41.200604 IP 10.10.2.15.40614 > 10.10.2.10.15008: Flags [.], ack 1933, win 9677, options [nop,nop,TS val 1456032130 ecr 2660710008], length 0
E..43L@.@..K

..

5. Install ngrep (for packet inspection)

1
bash-5.1# apk update && apk add ngrep

✅ Output

1
2
3
4
5
6
7
8
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.12-98-g1d183746afa [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.12-94-g0551adbecc [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]
OK: 13907 distinct packages available
(1/1) Installing ngrep (1.47-r1)
Executing busybox-1.32.1-r2.trigger
OK: 196 MiB in 118 packages

6. Inspect port 15008 traffic with ngrep (again, only ciphertext is visible)

1
bash-5.1# ngrep -tW byline -d eth0 '' 'tcp port 15008'

✅ Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
T 2025/06/07 11:00:17.686892 10.10.2.15:15008 -> 10.10.1.6:54930 [AP] #166
.....cb...V...........B...=;+....%L9..~..}>Q.%JM....s.....rZ.#5....4'.b......'3T.......K.?. .....2.....u.c....=.=.....i.l~...R...g
;U.I.)...........5?...su.^...h||.H.....id.....'.mp....._K..Ok;....[M.....,.&.....6b.M....]...@x[*...Y..=.t.{.u.
i.WJ.1..z..(L=..^.mk.0R.v....-1wq......i%....A......;k..*.{...*v......ze.......K&.Q...\..0.I.f.....6..Sa.^.8.....<....aY3X/.Z.Wz.-...%.a.N.U...yD.!+R.;.......?.g..Cz.C_.e.k...@.Rx[38.>(o!...#...2.....kZO...5...U.M..p'.BL..Y.k.....Vo....ev..OWhXQ.C..B$...@..iz>.H.......9-pN.8&X.%.......K...'.<..6.9A...,*Y.&..X...'R.R.............2.'.*..A.......J2....&T.efh..w.e{x..:...VN..........+0.T.WN.....a.. p..}....mr6H.L...VA.........K6....E...9.....n...X5:.*.8OL.q.${L.L..E.e...-0M....S.'..v......jxXs...=.!.....X..Ia80.N2YhF+.DM....7B).....!.[..,..\...V.k*k.g.S-...UKVB..N..d
@.*.%2.O.K...!.}.X.3..l.......1..,...b.Y.?....F.pS.,K.U....T.....mjq6..\........Z.JZ.z.p..}.M&..NDB..>\P_i....YN.d...9......RHQ...B...{]'...D..#.]..#.......Z.`v..$v|.U........5....KRI..*...c.@;..'.....W.RV.(.p..wb..+..........i..<t...g....*.3
#
T 2025/06/07 11:00:17.686898 10.10.2.15:15008 -> 10.10.1.6:54930 [AP] #167
.....VhnI...E,!.......:o)..78..c.L.../..,....3M(...../i.L:5I..(.X.8*.v......../..f.i...n..?.qa..k.m.k....S...'w...L...a,.&'UH\..E}..cq..B.....s..."=.J..d.....{.F.s.B.....1......c.W..A/......?....[KP.4.f..}.k.g.y.......!..,..'l.+C/.........M....Z.s.k........|....F.f#...Yq....Z..}.|..u...8k.:h.-...Bu.1.hwk.=fW.Tu.l.....OC.....E.^......Z.CE......~@....0.&N.R.3........#...fu.g....=...5.F{.C@<s...*.{....!S. D.y.~..xv#.&gu.].9i.........*.=..Xh.
......5......h:......{8..!...k....'.~.....Y.)^...&.H...y.q.|B].3..J.."ZT.....{....X\.A.....L.R.o....4+A.W..4_..O..Az!.q......8.o.^9i.d.>......o..:}
.(.yd....n.......4.4.......=!m.!#h......Y\*...$......j
`...f...h.8..s.._....;`...C.....d......1n....
...Ldf......;K.Nm_."<)O......u.#.E\}W...!.U.0P...8RP......C.h.+..<>..W.9.....67.Hx..a.9k=......x.....$..t._..6.....gy........&...'.]............:.5...'..N...._.L.p.....sPR
/
..D... ...>.V..,?.{..'. W=..%..
.\...Tc.I...Hw\.}..+...|0.{...]..JQ.....{e..8v.4..p.........
\..........g.i..#.q+..3......./..B.>.m..6.qn.\J(......L~0R...._..g[..+...*.=.*..{...?p5.......d#
#
T 2025/06/07 11:00:17.686904 10.10.2.15:15008 -> 10.10.1.6:54930 [AP] #168
..........z|.g..>........g.!^.|`..Y..J
ZC1....C.4.P...?8.O.j..._....\.....j.?..\.US."...N...f].....H.K.
#
T 2025/06/07 11:00:18.754805 10.10.2.15:40614 -> 10.10.2.10:15008 [AP] #176
....vZ.A-..+...JT...DU.<....J..#m.xv..,.....jl....x.A.W...(......p^L.piQ\0... ^.]V....{..e`J...OhW_.e?Y L.g....*$#.kkIy!G..
#
T 2025/06/07 11:00:18.757957 10.10.2.15:38792 -> 10.10.2.12:15008 [AP] #185
....vj..[".s....C.,i.%..}r..........o~.........<..O.?O....Rv(.jou.........!.FT5;............n.H..f.A.Y...G(7...X...eG.v.A.m
#
... (remaining packets omitted — every payload captured on port 15008 is equally unreadable ciphertext)
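
A rough plausibility check in plain shell (the sample payload line is copied from the capture above; this is an illustration, not a real protocol detector): plaintext HTTP would contain readable tokens such as `GET ` or `HTTP/1.1`, while the HBONE payload on port 15008 carries none.

```shell
# One payload line copied from the ngrep capture above (port 15008 traffic).
payload='..........z|.g..>........g.!^.|`..Y..J'

# Plaintext HTTP would expose method/version tokens; mTLS ciphertext does not.
if printf '%s' "$payload" | grep -qE 'GET |POST |HTTP/1\.[01]'; then
  echo "readable HTTP found - traffic is NOT encrypted"
else
  echo "no readable HTTP - consistent with mTLS-encrypted HBONE"
fi
```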

7. Verify the ztunnel socket exists

bash-5.1# ls -l /var/run/ztunnel

✅ Output

total 0
srwxr-xr-x    1 root     root             0 Jun  7 07:25 ztunnel.sock

8. Exit and clean up the cnsenter session

bash-5.1# exit
exit
Delete cnsenter pod (cnsenter-k4jcuxs3l5)

🔐 Verifying that mTLS is Active in the Ambient Mesh

1. Check the HBONE protocol at the workload level (ztunnel-config workload)

docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     TCP
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP

2. Check SPIFFE-based mTLS authentication in the ztunnel logs

kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"

✅ Output

2025-06-07T11:06:49.409887Z	info	access	connection complete	src.addr=10.10.2.15:60766 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.12:15008 dst.hbone_addr=10.10.2.12:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-598b896c9d-9dnx5" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=519 duration="1ms"
2025-06-07T11:06:50.257696Z	info	access	connection complete	src.addr=10.10.1.6:54930 src.workload="bookinfo-gateway-istio-6cbd9bcd49-dlqgn" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-gateway-istio" dst.addr=10.10.2.15:15008 dst.hbone_addr=10.10.2.15:9080 dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-54bb874995-gkz6q" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=9603 bytes_recv=341 duration="2018ms"
2025-06-07T11:06:50.510535Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T11:06:50.510666Z	info	access	connection complete	src.addr=10.10.2.15:51494 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T11:06:50.515110Z	info	access	connection complete	src.addr=10.10.2.15:59558 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="3ms"
2025-06-07T11:06:50.515239Z	info	access	connection complete	src.addr=10.10.2.15:60770 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.13:15008 dst.hbone_addr=10.10.2.13:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-jdksf" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="3ms"
2025-06-07T11:06:51.593038Z	info	access	connection complete	src.addr=10.10.2.15:40614 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="inbound" bytes_sent=358 bytes_recv=192 duration="2ms"
2025-06-07T11:06:51.593147Z	info	access	connection complete	src.addr=10.10.2.15:51508 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.10:15008 dst.hbone_addr=10.10.2.10:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-766844796b-brc95" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=192 bytes_recv=358 duration="2ms"
2025-06-07T11:06:51.597888Z	info	access	connection complete	src.addr=10.10.2.15:54086 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.14:15008 dst.hbone_addr=10.10.2.14:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-564544b4d6-p2lj2" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=599 bytes_recv=192 duration="3ms"
2025-06-07T11:06:51.598078Z	info	access	connection complete	src.addr=10.10.2.15:60780 src.workload="productpage-v1-54bb874995-gkz6q" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.14:15008 dst.hbone_addr=10.10.2.14:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-564544b4d6-p2lj2" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=599 duration="3ms"

  • If the src.identity and dst.identity fields in ztunnel's traffic logs carry spiffe:// URIs, the connection was authenticated with mTLS
  • Each connection complete entry also records the direction (inbound/outbound) and the authenticated identity of both peers
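
As a quick illustration (plain shell over one sample log line from the output above — not an istioctl feature), the fields that establish mTLS can be extracted like this:

```shell
# A single "connection complete" entry, shortened to the relevant fields.
log='src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound"'

# Pull out each quoted value; both identities starting with spiffe://
# means the connection was mutually authenticated.
src=$(echo "$log" | grep -o 'src\.identity="[^"]*"' | cut -d'"' -f2)
dst=$(echo "$log" | grep -o 'dst\.identity="[^"]*"' | cut -d'"' -f2)
dir=$(echo "$log" | grep -o 'direction="[^"]*"' | cut -d'"' -f2)

echo "direction=$dir"
echo "src=$src"
echo "dst=$dst"
```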

🛡️ Secure Application Access : L4 Authorization Policy

1. Re-enroll the netshoot pod in Ambient Mode

docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     TCP
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP
kubectl label pod netshoot istio.io/dataplane-mode=ambient --overwrite 

# Result
pod/netshoot labeled

2. Confirm Ambient enrollment (including netshoot)

netshoot now shows the HBONE protocol (Ambient Mode)

docker exec -it myk8s-control-plane istioctl ztunnel-config workload

✅ Output

NAMESPACE          POD NAME                                    ADDRESS    NODE                WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-6cbd9bcd49-dlqgn     10.10.1.6  myk8s-worker        None     TCP
default            details-v1-766844796b-brc95                 10.10.2.10 myk8s-worker2       None     HBONE
default            kubernetes                                  172.18.0.3                     None     TCP
default            netshoot                                    10.10.0.8  myk8s-control-plane None     HBONE
default            productpage-v1-54bb874995-gkz6q             10.10.2.15 myk8s-worker2       None     HBONE
default            ratings-v1-5dc79b6bcd-8f7vz                 10.10.2.11 myk8s-worker2       None     HBONE
default            reviews-v1-598b896c9d-9dnx5                 10.10.2.12 myk8s-worker2       None     HBONE
default            reviews-v2-556d6457d-jdksf                  10.10.2.13 myk8s-worker2       None     HBONE
default            reviews-v3-564544b4d6-p2lj2                 10.10.2.14 myk8s-worker2       None     HBONE
istio-system       grafana-65bfb5f855-jfmdl                    10.10.2.4  myk8s-worker2       None     TCP
istio-system       istio-cni-node-7kst6                        10.10.1.4  myk8s-worker        None     TCP
istio-system       istio-cni-node-dpqsv                        10.10.2.2  myk8s-worker2       None     TCP
istio-system       istio-cni-node-rfx6w                        10.10.0.5  myk8s-control-plane None     TCP
istio-system       istiod-86b6b7ff7-d7q7f                      10.10.1.3  myk8s-worker        None     TCP
istio-system       jaeger-868fbc75d7-4lq87                     10.10.2.5  myk8s-worker2       None     TCP
istio-system       kiali-6d774d8bb8-zkx5r                      10.10.2.6  myk8s-worker2       None     TCP
istio-system       loki-0                                      10.10.2.9  myk8s-worker2       None     TCP
istio-system       prometheus-689cc795d4-vrlxd                 10.10.2.7  myk8s-worker2       None     TCP
istio-system       ztunnel-4bls2                               10.10.0.6  myk8s-control-plane None     TCP
istio-system       ztunnel-kczj2                               10.10.2.3  myk8s-worker2       None     TCP
istio-system       ztunnel-wr6pp                               10.10.1.5  myk8s-worker        None     TCP
kube-system        coredns-668d6bf9bc-k6lf9                    10.10.0.2  myk8s-control-plane None     TCP
kube-system        coredns-668d6bf9bc-xbtkx                    10.10.0.3  myk8s-control-plane None     TCP
kube-system        etcd-myk8s-control-plane                    172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-g9qmc                               172.18.0.2 myk8s-worker2       None     TCP
kube-system        kindnet-lc2q2                               172.18.0.3 myk8s-control-plane None     TCP
kube-system        kindnet-njcw4                               172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-apiserver-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-controller-manager-myk8s-control-plane 172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-proxy-h2qb5                            172.18.0.2 myk8s-worker2       None     TCP
kube-system        kube-proxy-jmfg8                            172.18.0.4 myk8s-worker        None     TCP
kube-system        kube-proxy-nswxj                            172.18.0.3 myk8s-control-plane None     TCP
kube-system        kube-scheduler-myk8s-control-plane          172.18.0.3 myk8s-control-plane None     TCP
local-path-storage local-path-provisioner-7dc846544d-vzdcv     10.10.0.4  myk8s-control-plane None     TCP
metallb-system     controller-bb5f47665-29lwc                  10.10.1.2  myk8s-worker        None     TCP
metallb-system     speaker-f7qvl                               172.18.0.2 myk8s-worker2       None     TCP
metallb-system     speaker-hcfq8                               172.18.0.4 myk8s-worker        None     TCP
metallb-system     speaker-lr429                               172.18.0.3 myk8s-control-plane None     TCP

3. Create an L4 AuthorizationPolicy to control access to productpage

kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/netshoot
EOF

# Result
authorizationpolicy.security.istio.io/productpage-viewer created
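
The principals values follow the SPIFFE identity layout <trust-domain>/ns/<namespace>/sa/<serviceaccount>. A tiny hypothetical helper (principal is plain shell for illustration, not an istioctl command) makes the mapping explicit:

```shell
# Hypothetical helper: build the principal string for a ServiceAccount.
# Layout: <trust-domain>/ns/<namespace>/sa/<serviceaccount>
principal() {
  printf '%s/ns/%s/sa/%s\n' "${3:-cluster.local}" "$1" "$2"
}

principal default netshoot
# → cluster.local/ns/default/sa/netshoot
```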

4. Verify the L4 policy was created

kubectl get authorizationpolicy

✅ Output

NAME                 ACTION   AGE
productpage-viewer   ALLOW    73s

5. Confirm the RBAC policy was applied in the ztunnel logs

kubectl logs ds/ztunnel -n istio-system -f | grep -E RBAC

✅ Output

Found 3 pods, using pod/ztunnel-kczj2
2025-06-07T11:22:12.130018Z	info	xds:xds{id=8}	handling RBAC update productpage-viewer	

6. Access productpage via the Gateway (confirm denial)

GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do docker exec -it mypc curl $GWLB/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done

✅ Output

2025-06-07 20:26:18
2025-06-07 20:26:19
2025-06-07 20:26:20
2025-06-07 20:26:21
...

  • Only timestamps are printed: the gateway's service account is not in the allow list, so every request is denied and no <title> line comes back

7. Access productpage from netshoot (confirm success)

kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title

✅ Output

<title>Simple Bookstore App</title>
while true; do kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done

✅ Output

<title>Simple Bookstore App</title>
2025-06-07 20:25:42
<title>Simple Bookstore App</title>
2025-06-07 20:25:44
<title>Simple Bookstore App</title>
2025-06-07 20:25:45
<title>Simple Bookstore App</title>
2025-06-07 20:25:46
...

8. Update the policy to also allow the gateway service account

kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/netshoot
        - cluster.local/ns/default/sa/bookinfo-gateway-istio
EOF

# Result
authorizationpolicy.security.istio.io/productpage-viewer configured

9. Confirm gateway → productpage now responds normally

2025-06-07 20:27:19
2025-06-07 20:27:20
2025-06-07 20:27:21
2025-06-07 20:27:22
2025-06-07 20:27:23
2025-06-07 20:27:24
2025-06-07 20:27:25
2025-06-07 20:27:26
2025-06-07 20:27:27
<title>Simple Bookstore App</title>
2025-06-07 20:27:28
<title>Simple Bookstore App</title>
2025-06-07 20:27:29
<title>Simple Bookstore App</title>
2025-06-07 20:27:30

  • The first requests are still denied while the updated policy propagates; once ztunnel applies the RBAC update, <title> responses resume

10. Confirm the policy update in the ztunnel logs

kubectl logs ds/ztunnel -n istio-system -f | grep -E RBAC

✅ Output

Found 3 pods, using pod/ztunnel-kczj2
2025-06-07T11:22:12.130018Z	info	xds:xds{id=8}	handling RBAC update productpage-viewer	
2025-06-07T11:27:28.469577Z	info	xds:xds{id=8}	handling RBAC update productpage-viewer

🛠️ Configure waypoint proxies - Docs

1. Create and apply the Waypoint Proxy

docker exec -it myk8s-control-plane istioctl waypoint apply -n default

# Result
✅ waypoint default/waypoint applied

2. List Gateway resources

kubectl get gateway

✅ Output

NAME               CLASS            ADDRESS          PROGRAMMED   AGE
bookinfo-gateway   istio            172.18.255.201   True         3h22m
waypoint           istio-waypoint   10.200.1.127     True         48s

3. Inspect the waypoint Gateway resource in detail

kubectl get gateway waypoint -o yaml

✅ Output

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  creationTimestamp: "2025-06-07T11:33:52Z"
  generation: 1
  name: waypoint
  namespace: default
  resourceVersion: "38622"
  uid: 33ae8cb0-b764-4b85-b4f2-663008efc5ef
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: mesh
    port: 15008
    protocol: HBONE
status:
  addresses:
  - type: IPAddress
    value: 10.200.1.127
  - type: Hostname
    value: waypoint.default.svc.cluster.local
  conditions:
  - lastTransitionTime: "2025-06-07T11:33:52Z"
    message: Resource accepted
    observedGeneration: 1
    reason: Accepted
    status: "True"
    type: Accepted
  - lastTransitionTime: "2025-06-07T11:33:54Z"
    message: Resource programmed, assigned to service(s) waypoint.default.svc.cluster.local:15008
    observedGeneration: 1
    reason: Programmed
    status: "True"
    type: Programmed
  listeners:
  - attachedRoutes: 0
    conditions:
    - lastTransitionTime: "2025-06-07T11:33:52Z"
      message: No errors found
      observedGeneration: 1
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2025-06-07T11:33:52Z"
      message: No errors found
      observedGeneration: 1
      reason: NoConflicts
      status: "False"
      type: Conflicted
    - lastTransitionTime: "2025-06-07T11:33:52Z"
      message: No errors found
      observedGeneration: 1
      reason: Programmed
      status: "True"
      type: Programmed
    - lastTransitionTime: "2025-06-07T11:33:52Z"
      message: No errors found
      observedGeneration: 1
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    name: mesh
    supportedKinds: []

4. Check the Waypoint Proxy pod status

kubectl get pod -l service.istio.io/canonical-name=waypoint -owide

✅ Output

NAME                      READY   STATUS    RESTARTS   AGE     IP          NODE           NOMINATED NODE   READINESS GATES
waypoint-66b59898-p8fx8   1/1     Running   0          2m18s   10.10.1.7   myk8s-worker   <none>           <none>

5. List waypoint resources

docker exec -it myk8s-control-plane istioctl waypoint list

✅ Output

NAME         REVISION     PROGRAMMED
waypoint     default      True

6. Check the waypoint proxy status

docker exec -it myk8s-control-plane istioctl waypoint status

✅ Output

NAMESPACE     NAME         STATUS     TYPE           REASON         MESSAGE
default       waypoint     True       Programmed     Programmed     Resource programmed, assigned to service(s) waypoint.default.svc.cluster.local:15008

7. Check the status of all Istio proxies

docker exec -it myk8s-control-plane istioctl proxy-status

✅ Output

NAME                                                CLUSTER        CDS                LDS                EDS                RDS                ECDS        ISTIOD                     VERSION
bookinfo-gateway-istio-6cbd9bcd49-dlqgn.default     Kubernetes     SYNCED (2m32s)     SYNCED (2m32s)     SYNCED (2m32s)     SYNCED (2m32s)     IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
waypoint-66b59898-p8fx8.default                     Kubernetes     SYNCED (4m18s)     SYNCED (4m18s)     IGNORED            IGNORED            IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-4bls2.istio-system                          Kubernetes     IGNORED            IGNORED            IGNORED            IGNORED            IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-kczj2.istio-system                          Kubernetes     IGNORED            IGNORED            IGNORED            IGNORED            IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0
ztunnel-wr6pp.istio-system                          Kubernetes     IGNORED            IGNORED            IGNORED            IGNORED            IGNORED     istiod-86b6b7ff7-d7q7f     1.26.0

8. Check the Waypoint Proxy's certificate status

docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/waypoint

✅ Output

RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                        NOT AFTER                NOT BEFORE
default           Cert Chain     ACTIVE     true           7a2fddfc5fa5cf492ccc1f787645b754     2025-06-08T11:33:53Z     2025-06-07T11:31:53Z
ROOTCA            CA             ACTIVE     true           8dae6d7752ac495efac249ceb5279185     2035-06-05T07:25:32Z     2025-06-07T07:25:32Z
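
A back-of-the-envelope check in plain shell (GNU date assumed) confirms the workload cert above was issued on the usual ~24-hour rotation window:

```shell
# NOT BEFORE / NOT AFTER values copied from the istioctl output above.
not_before='2025-06-07T11:31:53Z'
not_after='2025-06-08T11:33:53Z'

# Difference in seconds, then formatted as hours and minutes.
secs=$(( $(date -ud "$not_after" +%s) - $(date -ud "$not_before" +%s) ))
echo "validity window: $((secs / 3600)) h $((secs % 3600 / 60)) m"
# → validity window: 24 h 2 m
```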

9. Check cluster pod status

kubectl get pod

✅ Output

NAME                                      READY   STATUS    RESTARTS   AGE
bookinfo-gateway-istio-6cbd9bcd49-dlqgn   1/1     Running   0          3h28m
details-v1-766844796b-brc95               1/1     Running   0          3h35m
netshoot                                  1/1     Running   0          3h32m
productpage-v1-54bb874995-gkz6q           1/1     Running   0          3h35m
ratings-v1-5dc79b6bcd-8f7vz               1/1     Running   0          3h35m
reviews-v1-598b896c9d-9dnx5               1/1     Running   0          3h35m
reviews-v2-556d6457d-jdksf                1/1     Running   0          3h35m
reviews-v3-564544b4d6-p2lj2               1/1     Running   0          3h35m
waypoint-66b59898-p8fx8                   1/1     Running   0          6m13s

10. Enter the Waypoint proxy pod (kubectl pexec plugin)

kubectl pexec waypoint-66b59898-p8fx8 -it -T -- bash

✅ Output

Defaulting container name to istio-proxy.
Create cnsenter pod (cnsenter-yg1hwuu9vg)
Wait to run cnsenter pod (cnsenter-yg1hwuu9vg)
If you don't see a command prompt, try pressing enter.
bash-5.1# 

11. Check Envoy Prometheus metrics

bash-5.1# curl -s http://localhost:15020/stats/prometheus

✅ Output

# HELP istio_agent_cert_expiry_seconds The time remaining, in seconds, before the certificate chain will expire. A negative value indicates the cert is expired.
# TYPE istio_agent_cert_expiry_seconds gauge
istio_agent_cert_expiry_seconds{resource_name="default"} 85907.959110295
# HELP istio_agent_go_gc_duration_seconds A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles.
# TYPE istio_agent_go_gc_duration_seconds summary
istio_agent_go_gc_duration_seconds{quantile="0"} 3.5689e-05
istio_agent_go_gc_duration_seconds{quantile="0.25"} 4.1504e-05
istio_agent_go_gc_duration_seconds{quantile="0.5"} 6.2092e-05
istio_agent_go_gc_duration_seconds{quantile="0.75"} 7.7163e-05
istio_agent_go_gc_duration_seconds{quantile="1"} 0.000115585
istio_agent_go_gc_duration_seconds_sum 0.000562115
istio_agent_go_gc_duration_seconds_count 9
# HELP istio_agent_go_gc_gogc_percent Heap size target percentage configured by the user, otherwise 100. This value is set by the GOGC environment variable, and the runtime/debug.SetGCPercent function. Sourced from /gc/gogc:percent.
# TYPE istio_agent_go_gc_gogc_percent gauge
istio_agent_go_gc_gogc_percent 100
# HELP istio_agent_go_gc_gomemlimit_bytes Go runtime memory limit configured by the user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment variable, and the runtime/debug.SetMemoryLimit function. Sourced from /gc/gomemlimit:bytes.
# TYPE istio_agent_go_gc_gomemlimit_bytes gauge
istio_agent_go_gc_gomemlimit_bytes 1.073741824e+09
# HELP istio_agent_go_goroutines Number of goroutines that currently exist.
# TYPE istio_agent_go_goroutines gauge
istio_agent_go_goroutines 57
# HELP istio_agent_go_info Information about the Go environment.
# TYPE istio_agent_go_info gauge
istio_agent_go_info{version="go1.24.2"} 1
# HELP istio_agent_go_memstats_alloc_bytes Number of bytes allocated in heap and currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE istio_agent_go_memstats_alloc_bytes gauge
istio_agent_go_memstats_alloc_bytes 8.125608e+06
# HELP istio_agent_go_memstats_alloc_bytes_total Total number of bytes allocated in heap until now, even if released already. Equals to /gc/heap/allocs:bytes.
# TYPE istio_agent_go_memstats_alloc_bytes_total counter
istio_agent_go_memstats_alloc_bytes_total 2.5861128e+07
# HELP istio_agent_go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. Equals to /memory/classes/profiling/buckets:bytes.
# TYPE istio_agent_go_memstats_buck_hash_sys_bytes gauge
istio_agent_go_memstats_buck_hash_sys_bytes 1.455881e+06
# HELP istio_agent_go_memstats_frees_total Total number of heap objects frees. Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE istio_agent_go_memstats_frees_total counter
istio_agent_go_memstats_frees_total 124008
# HELP istio_agent_go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. Equals to /memory/classes/metadata/other:bytes.
# TYPE istio_agent_go_memstats_gc_sys_bytes gauge
istio_agent_go_memstats_gc_sys_bytes 3.663136e+06
# HELP istio_agent_go_memstats_heap_alloc_bytes Number of heap bytes allocated and currently in use, same as go_memstats_alloc_bytes. Equals to /memory/classes/heap/objects:bytes.
# TYPE istio_agent_go_memstats_heap_alloc_bytes gauge
istio_agent_go_memstats_heap_alloc_bytes 8.125608e+06
# HELP istio_agent_go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. Equals to /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE istio_agent_go_memstats_heap_idle_bytes gauge
istio_agent_go_memstats_heap_idle_bytes 4.702208e+06
# HELP istio_agent_go_memstats_heap_inuse_bytes Number of heap bytes that are in use. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes
# TYPE istio_agent_go_memstats_heap_inuse_bytes gauge
istio_agent_go_memstats_heap_inuse_bytes 1.0797056e+07
# HELP istio_agent_go_memstats_heap_objects Number of currently allocated objects. Equals to /gc/heap/objects:objects.
# TYPE istio_agent_go_memstats_heap_objects gauge
istio_agent_go_memstats_heap_objects 25423
# HELP istio_agent_go_memstats_heap_released_bytes Number of heap bytes released to OS. Equals to /memory/classes/heap/released:bytes.
# TYPE istio_agent_go_memstats_heap_released_bytes gauge
istio_agent_go_memstats_heap_released_bytes 3.055616e+06
# HELP istio_agent_go_memstats_heap_sys_bytes Number of heap bytes obtained from system. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE istio_agent_go_memstats_heap_sys_bytes gauge
istio_agent_go_memstats_heap_sys_bytes 1.5499264e+07
# HELP istio_agent_go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE istio_agent_go_memstats_last_gc_time_seconds gauge
istio_agent_go_memstats_last_gc_time_seconds 1.7492964841990168e+09
# HELP istio_agent_go_memstats_mallocs_total Total number of heap objects allocated, both live and gc-ed. Semantically a counter version for go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects.
# TYPE istio_agent_go_memstats_mallocs_total counter
istio_agent_go_memstats_mallocs_total 149431
# HELP istio_agent_go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE istio_agent_go_memstats_mcache_inuse_bytes gauge
istio_agent_go_memstats_mcache_inuse_bytes 2416
# HELP istio_agent_go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes + /memory/classes/metadata/mcache/free:bytes.
# TYPE istio_agent_go_memstats_mcache_sys_bytes gauge
istio_agent_go_memstats_mcache_sys_bytes 15704
# HELP istio_agent_go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE istio_agent_go_memstats_mspan_inuse_bytes gauge
istio_agent_go_memstats_mspan_inuse_bytes 119200
# HELP istio_agent_go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes + /memory/classes/metadata/mspan/free:bytes.
# TYPE istio_agent_go_memstats_mspan_sys_bytes gauge
istio_agent_go_memstats_mspan_sys_bytes 146880
# HELP istio_agent_go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE istio_agent_go_memstats_next_gc_bytes gauge
istio_agent_go_memstats_next_gc_bytes 1.5679154e+07
# HELP istio_agent_go_memstats_other_sys_bytes Number of bytes used for other system allocations. Equals to /memory/classes/other:bytes.
# TYPE istio_agent_go_memstats_other_sys_bytes gauge
istio_agent_go_memstats_other_sys_bytes 572871
# HELP istio_agent_go_memstats_stack_inuse_bytes Number of bytes obtained from system for stack allocator in non-CGO environments. Equals to /memory/classes/heap/stacks:bytes.
# TYPE istio_agent_go_memstats_stack_inuse_bytes gauge
istio_agent_go_memstats_stack_inuse_bytes 1.277952e+06
# HELP istio_agent_go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. Equals to /memory/classes/heap/stacks:bytes + /memory/classes/os-stacks:bytes.
# TYPE istio_agent_go_memstats_stack_sys_bytes gauge
istio_agent_go_memstats_stack_sys_bytes 1.277952e+06
# HELP istio_agent_go_memstats_sys_bytes Number of bytes obtained from system. Equals to /memory/classes/total:byte.
# TYPE istio_agent_go_memstats_sys_bytes gauge
istio_agent_go_memstats_sys_bytes 2.2631688e+07
# HELP istio_agent_go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. Sourced from /sched/gomaxprocs:threads.
# TYPE istio_agent_go_sched_gomaxprocs_threads gauge
istio_agent_go_sched_gomaxprocs_threads 2
# HELP istio_agent_go_threads Number of OS threads created.
# TYPE istio_agent_go_threads gauge
istio_agent_go_threads 9
...

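Among these metrics, `istio_agent_cert_expiry_seconds` is the one worth watching: it tells you directly how long the waypoint's certificate has left. A sketch that converts the gauge to hours — the sample line is copied from the scrape above; against the live pod you would pipe `curl -s http://localhost:15020/stats/prometheus` in instead:

```shell
# Sample line copied from the scrape above; pipe the live curl output in practice.
metric='istio_agent_cert_expiry_seconds{resource_name="default"} 85907.959110295'

# Match the gauge line (skipping # HELP / # TYPE) and convert seconds to hours.
echo "$metric" | awk '/^istio_agent_cert_expiry_seconds/ { printf "cert expires in %.1f hours\n", $2 / 3600 }'
# → cert expires in 23.9 hours
```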
12. Check listening ports (LISTEN state)

bash-5.1# ss -tnlp

✅ Output

State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port  Process                                                                 
LISTEN   0        4096             0.0.0.0:15008          0.0.0.0:*      users:(("envoy",pid=21,fd=35))                                         
LISTEN   0        4096             0.0.0.0:15008          0.0.0.0:*      users:(("envoy",pid=21,fd=34))                                         
LISTEN   0        4096             0.0.0.0:15021          0.0.0.0:*      users:(("envoy",pid=21,fd=23))                                         
LISTEN   0        4096             0.0.0.0:15021          0.0.0.0:*      users:(("envoy",pid=21,fd=22))                                         
LISTEN   0        4096             0.0.0.0:15090          0.0.0.0:*      users:(("envoy",pid=21,fd=21))                                         
LISTEN   0        4096             0.0.0.0:15090          0.0.0.0:*      users:(("envoy",pid=21,fd=20))                                         
LISTEN   0        4096           127.0.0.1:15000          0.0.0.0:*      users:(("envoy",pid=21,fd=18))                                         
LISTEN   0        4096                   *:15020                *:*      users:(("pilot-agent",pid=1,fd=7)) 

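Every port in this listing is a standard Istio proxy port. A quick-reference sketch of the port roles (per the Istio docs; Bash 4+ for the associative array):

```shell
# Roles of the standard Istio proxy ports seen in the ss output above.
declare -A port_role=(
  [15000]="Envoy admin interface (bound to 127.0.0.1 only)"
  [15008]="HBONE mTLS tunnel port (ambient mode)"
  [15020]="Merged Prometheus telemetry served by pilot-agent"
  [15021]="Health/readiness checks"
  [15090]="Envoy Prometheus metrics"
)
for p in 15000 15008 15020 15021 15090; do
  printf "%s  %s\n" "$p" "${port_role[$p]}"
done
```

Note that only 15000 is loopback-bound (admin interface), and 15020 is held by `pilot-agent` rather than Envoy — it merges Envoy's 15090 stats with the agent's own metrics.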
13. Check established TCP sessions

bash-5.1# ss -tnp

✅ Output

State Recv-Q Send-Q      Local Address:Port          Peer Address:Port  Process                                                                 
ESTAB 0      0               127.0.0.1:15090            127.0.0.1:49074  users:(("envoy",pid=21,fd=36))                                         
ESTAB 0      0               10.10.1.7:37320         10.200.1.163:15012  users:(("pilot-agent",pid=1,fd=11))                                    
ESTAB 0      0               127.0.0.1:46778            127.0.0.1:15020  users:(("envoy",pid=21,fd=42))                                         
ESTAB 0      0               10.10.1.7:37336         10.200.1.163:15012  users:(("pilot-agent",pid=1,fd=15))                                    
ESTAB 0      0               127.0.0.1:49074            127.0.0.1:15090  users:(("pilot-agent",pid=1,fd=8))                                     
ESTAB 0      0               127.0.0.1:38686            127.0.0.1:15020  users:(("envoy",pid=21,fd=37))                                         
ESTAB 0      0               127.0.0.1:33642            127.0.0.1:15000  users:(("envoy",pid=21,fd=39))                                         
ESTAB 0      0               127.0.0.1:15000            127.0.0.1:33642  users:(("envoy",pid=21,fd=40))                                         
ESTAB 0      0      [::ffff:10.10.1.7]:15020   [::ffff:10.10.2.7]:37456  users:(("pilot-agent",pid=1,fd=3))                                     
ESTAB 0      0      [::ffff:127.0.0.1]:15020   [::ffff:127.0.0.1]:46778  users:(("pilot-agent",pid=1,fd=20))                                    
ESTAB 0      0      [::ffff:127.0.0.1]:15020   [::ffff:127.0.0.1]:38686  users:(("pilot-agent",pid=1,fd=18)) 

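The two `pilot-agent` sessions to `10.200.1.163:15012` are connections to istiod — 15012 is the port istiod serves xDS configuration and CA (certificate-signing) requests on over TLS. A sketch that tallies sessions per peer port from a saved `ss -tnp` scrape (sample lines copied from above):

```shell
# Two sample lines copied from the ss -tnp output above.
ss_out='ESTAB 0 0 10.10.1.7:37320 10.200.1.163:15012 users:(("pilot-agent",pid=1,fd=11))
ESTAB 0 0 10.10.1.7:37336 10.200.1.163:15012 users:(("pilot-agent",pid=1,fd=15))'

# Field 5 is the peer address; split on ":" and tally by port.
echo "$ss_out" | awk '{ split($5, a, ":"); count[a[2]]++ } END { for (p in count) print p, count[p] }'
# → 15012 2
```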
14. Check Unix socket listening state

bash-5.1# ss -xnlp

✅ Output

Netid State  Recv-Q Send-Q                                Local Address:Port         Peer Address:Port   Process                                
u_str LISTEN 0      4096                            etc/istio/proxy/XDS 1955717                 * 0       users:(("pilot-agent",pid=1,fd=12))   
u_str LISTEN 0      4096     var/run/secrets/workload-spiffe-uds/socket 1955716                 * 0       users:(("pilot-agent",pid=1,fd=10))

15. Check Unix domain socket connections

bash-5.1# ss -xnp

✅ Output

Netid State Recv-Q  Send-Q                                Local Address:Port        Peer Address:Port     Process                               
u_str ESTAB 0       0        var/run/secrets/workload-spiffe-uds/socket 1951589                * 1957271   users:(("pilot-agent",pid=1,fd=16))  
u_str ESTAB 0       0                                                 * 1957271                * 1951589   users:(("envoy",pid=21,fd=32))       
u_str ESTAB 0       0                                                 * 1958250                * 1955719   users:(("envoy",pid=21,fd=19))       
u_str ESTAB 0       0                               etc/istio/proxy/XDS 1955719                * 1958250   users:(("pilot-agent",pid=1,fd=13)) 

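These Unix-domain sessions pair Envoy with pilot-agent: Envoy pulls its configuration through `etc/istio/proxy/XDS` (pilot-agent's local xDS proxy to istiod) and its workload certificates through the SDS socket `var/run/secrets/workload-spiffe-uds/socket`. A sketch that extracts the listener socket paths from a saved `ss -xnlp` scrape (sample lines copied from step 14):

```shell
# Sample lines copied from the ss -xnlp output in step 14.
ss_out='u_str LISTEN 0 4096 etc/istio/proxy/XDS 1955717 * 0 users:(("pilot-agent",pid=1,fd=12))
u_str LISTEN 0 4096 var/run/secrets/workload-spiffe-uds/socket 1955716 * 0 users:(("pilot-agent",pid=1,fd=10))'

# Field 5 is the socket path for LISTEN entries.
echo "$ss_out" | awk '$2 == "LISTEN" { print $5 }'
# → etc/istio/proxy/XDS
# → var/run/secrets/workload-spiffe-uds/socket
```

Both listeners belong to `pilot-agent`; the ESTAB rows above show one Envoy file descriptor connected to each.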
16. Exit and clean up the cnsenter pod

bash-5.1# exit
exit
Delete cnsenter pod (cnsenter-yg1hwuu9vg)
This post is licensed under CC BY 4.0 by the author.