Istio Week 6 Notes
☸️ Deploying k8s (1.23.17): NodePort (30000 HTTP, 30005 HTTPS)
1. Download the source code
git clone https://github.com/AcornPublishing/istio-in-action
cd istio-in-action/book-source-code-master
pwd # your own working directory path
# Result
/home/devshin/workspace/istio/istio-in-action/book-source-code-master
2. Create the Kind cluster
kind create cluster --name myk8s --image kindest/node:v1.23.17 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample Application (istio-ingressgateway) HTTP
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # Sample Application (istio-ingressgateway) HTTPS
    hostPort: 30005
  - containerPort: 30006 # TCP Route
    hostPort: 30006
  - containerPort: 30007 # kube-ops-view
    hostPort: 30007
  extraMounts: # this section can be omitted
  - hostPath: /home/devshin/workspace/istio/istio-in-action/book-source-code-master # set to your own working directory path
    containerPath: /istiobook
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.0.0/22
EOF
✅ Output
Creating cluster "myk8s" ...
✓ Ensuring node image (kindest/node:v1.23.17) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-myk8s"
You can now use your cluster with:
kubectl cluster-info --context kind-myk8s
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
3. Verify the cluster was created
docker ps
✅ Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2dc54e55d2af kindest/node:v1.23.17 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:30000-30007->30000-30007/tcp, 127.0.0.1:40987->6443/tcp myk8s-control-plane
4. Install basic tools on the node
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree psmisc lsof wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'
✅ Output
...
Setting up bind9-libs:amd64 (1:9.18.33-1~deb12u2) ...
Setting up openssh-client (1:9.2p1-2+deb12u5) ...
Setting up libxext6:amd64 (2:1.3.4-1+b1) ...
Setting up dbus-daemon (1.14.10-1~deb12u1) ...
Setting up libnet1:amd64 (1.1.6+dfsg-3.2) ...
Setting up libpcap0.8:amd64 (1.10.3-1) ...
Setting up dbus (1.14.10-1~deb12u1) ...
invoke-rc.d: policy-rc.d denied execution of start.
/usr/sbin/policy-rc.d returned 101, not running 'start dbus.service'
Setting up libgdbm-compat4:amd64 (1.23-3) ...
Setting up xauth (1:1.1.2-1) ...
Setting up bind9-host (1:9.18.33-1~deb12u2) ...
Setting up libperl5.36:amd64 (5.36.0-7+deb12u2) ...
Setting up tcpdump (4.99.3-1) ...
Setting up ngrep (1.47+ds1-5+b1) ...
Setting up perl (5.36.0-7+deb12u2) ...
Setting up bind9-dnsutils (1:9.18.33-1~deb12u2) ...
Setting up dnsutils (1:9.18.33-1~deb12u2) ...
Setting up liberror-perl (0.17029-2) ...
Setting up git (1:2.39.5-0+deb12u2) ...
Processing triggers for libc-bin (2.36-9+deb12u4) ...
🛡️ Installing Istio 1.17.8
1. Enter the myk8s-control-plane node
docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/#
2. (Optional) Verify the code files are mounted
root@myk8s-control-plane:/# tree /istiobook/ -L 1
✅ Output
/istiobook/
|-- 2025-04-27-190930_1_roundrobin.json
|-- 2025-04-27-191213_2_roundrobin.json
|-- 2025-04-27-191803_3_random.json
|-- 2025-04-27-220131_4_random.json
|-- 2025-04-27-221302_5_least_conn.json
|-- README.md
|-- appendices
|-- bin
|-- ch10
|-- ch11
|-- ch12
|-- ch13
|-- ch14
|-- ch2
|-- ch3
|-- ch4
|-- ch5
|-- ch6
|-- ch7
|-- ch8
|-- ch9
|-- forum-2.json
|-- prom-values-2.yaml
|-- services
`-- webapp-routes.json
17 directories, 9 files
3. Install istioctl
root@myk8s-control-plane:/# export ISTIOV=1.17.8
echo 'export ISTIOV=1.17.8' >> /root/.bashrc
curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
istioctl version --remote=false
✅ Output
Downloading istio-1.17.8 from https://github.com/istio/istio/releases/download/1.17.8/istio-1.17.8-linux-amd64.tar.gz ...
Istio 1.17.8 download complete!
The Istio release archive has been downloaded to the istio-1.17.8 directory.
To configure the istioctl client tool for your workstation,
add the /istio-1.17.8/bin directory to your environment path variable with:
export PATH="$PATH:/istio-1.17.8/bin"
Begin the Istio pre-installation check by running:
istioctl x precheck
Try Istio in ambient mode
https://istio.io/latest/docs/ambient/getting-started/
Try Istio in sidecar mode
https://istio.io/latest/docs/setup/getting-started/
Install guides for ambient mode
https://istio.io/latest/docs/ambient/install/
Install guides for sidecar mode
https://istio.io/latest/docs/setup/install/
Need more information? Visit https://istio.io/latest/docs/
1.17.8
4. Deploy the Istio control plane with the demo profile
The demo profile runs with a higher debug logging level.
root@myk8s-control-plane:/# istioctl install --set profile=demo --set values.global.proxy.privileged=true -y
✅ Output
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete Making this installation the default for injection and validation.
Thank you for installing Istio 1.17. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/hMHGiwZHPU7UQRWe9
5. Install the addons
kubectl apply -f istio-$ISTIOV/samples/addons
✅ Output
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
6. Exit the control-plane container
root@myk8s-control-plane:/# exit
exit
7. Verify the installed resources
(1) List every resource in the istio-system namespace
kubectl get all,svc,ep,sa,cm,secret,pdb -n istio-system
✅ Output
NAME READY STATUS RESTARTS AGE
pod/grafana-b854c6c8-p55xh 1/1 Running 0 64s
pod/istio-egressgateway-85df6b84b7-md4m2 1/1 Running 0 2m3s
pod/istio-ingressgateway-6bb8fb6549-2mlk2 1/1 Running 0 2m3s
pod/istiod-8d74787f-6jlfq 1/1 Running 0 2m15s
pod/jaeger-5556cd8fcf-gfjtt 1/1 Running 0 64s
pod/kiali-648847c8c4-dsq49 1/1 Running 0 64s
pod/prometheus-7b8b9dd44c-lppsf 2/2 Running 0 64s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 10.200.1.188 <none> 3000/TCP 64s
service/istio-egressgateway ClusterIP 10.200.3.180 <none> 80/TCP,443/TCP 2m3s
service/istio-ingressgateway LoadBalancer 10.200.0.6 <pending> 15021:31287/TCP,80:32484/TCP,443:32372/TCP,31400:30235/TCP,15443:31757/TCP 2m3s
service/istiod ClusterIP 10.200.2.22 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2m15s
service/jaeger-collector ClusterIP 10.200.1.90 <none> 14268/TCP,14250/TCP,9411/TCP 64s
service/kiali ClusterIP 10.200.1.108 <none> 20001/TCP,9090/TCP 64s
service/prometheus ClusterIP 10.200.0.173 <none> 9090/TCP 64s
service/tracing ClusterIP 10.200.3.237 <none> 80/TCP,16685/TCP 64s
service/zipkin ClusterIP 10.200.3.199 <none> 9411/TCP 64s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 64s
deployment.apps/istio-egressgateway 1/1 1 1 2m3s
deployment.apps/istio-ingressgateway 1/1 1 1 2m3s
deployment.apps/istiod 1/1 1 1 2m15s
deployment.apps/jaeger 1/1 1 1 64s
deployment.apps/kiali 1/1 1 1 64s
deployment.apps/prometheus 1/1 1 1 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-b854c6c8 1 1 1 64s
replicaset.apps/istio-egressgateway-85df6b84b7 1 1 1 2m3s
replicaset.apps/istio-ingressgateway-6bb8fb6549 1 1 1 2m3s
replicaset.apps/istiod-8d74787f 1 1 1 2m15s
replicaset.apps/jaeger-5556cd8fcf 1 1 1 64s
replicaset.apps/kiali-648847c8c4 1 1 1 64s
replicaset.apps/prometheus-7b8b9dd44c 1 1 1 64s
NAME ENDPOINTS AGE
endpoints/grafana 10.10.0.9:3000 64s
endpoints/istio-egressgateway 10.10.0.7:8080,10.10.0.7:8443 2m3s
endpoints/istio-ingressgateway 10.10.0.6:15443,10.10.0.6:15021,10.10.0.6:31400 + 2 more... 2m3s
endpoints/istiod 10.10.0.5:15012,10.10.0.5:15010,10.10.0.5:15017 + 1 more... 2m15s
endpoints/jaeger-collector 10.10.0.8:9411,10.10.0.8:14250,10.10.0.8:14268 64s
endpoints/kiali 10.10.0.10:9090,10.10.0.10:20001 64s
endpoints/prometheus 10.10.0.11:9090 64s
endpoints/tracing 10.10.0.8:16685,10.10.0.8:16686 64s
endpoints/zipkin 10.10.0.8:9411 64s
NAME SECRETS AGE
serviceaccount/default 1 2m17s
serviceaccount/grafana 1 64s
serviceaccount/istio-egressgateway-service-account 1 2m3s
serviceaccount/istio-ingressgateway-service-account 1 2m3s
serviceaccount/istio-reader-service-account 1 2m16s
serviceaccount/istiod 1 2m16s
serviceaccount/istiod-service-account 1 2m16s
serviceaccount/kiali 1 64s
serviceaccount/prometheus 1 64s
NAME DATA AGE
configmap/grafana 4 64s
configmap/istio 2 2m15s
configmap/istio-ca-root-cert 1 2m5s
configmap/istio-gateway-deployment-leader 0 2m5s
configmap/istio-gateway-status-leader 0 2m5s
configmap/istio-grafana-dashboards 2 64s
configmap/istio-leader 0 2m5s
configmap/istio-namespace-controller-election 0 2m5s
configmap/istio-services-grafana-dashboards 4 64s
configmap/istio-sidecar-injector 2 2m15s
configmap/kiali 1 64s
configmap/kube-root-ca.crt 1 2m17s
configmap/prometheus 5 64s
NAME TYPE DATA AGE
secret/default-token-ssdgn kubernetes.io/service-account-token 3 2m17s
secret/grafana-token-4st5q kubernetes.io/service-account-token 3 64s
secret/istio-ca-secret istio.io/ca-root 5 2m5s
secret/istio-egressgateway-service-account-token-6g9gg kubernetes.io/service-account-token 3 2m3s
secret/istio-ingressgateway-service-account-token-9nxhn kubernetes.io/service-account-token 3 2m3s
secret/istio-reader-service-account-token-b5s9x kubernetes.io/service-account-token 3 2m16s
secret/istiod-service-account-token-gsss5 kubernetes.io/service-account-token 3 2m16s
secret/istiod-token-vp4bk kubernetes.io/service-account-token 3 2m16s
secret/kiali-token-fks7p kubernetes.io/service-account-token 3 64s
secret/prometheus-token-54xwl kubernetes.io/service-account-token 3 64s
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
poddisruptionbudget.policy/istio-egressgateway 1 N/A 0 2m3s
poddisruptionbudget.policy/istio-ingressgateway 1 N/A 0 2m3s
poddisruptionbudget.policy/istiod 1 N/A 0 2m15s
(2) List the Istio CRDs
kubectl get crd | grep istio.io | sort
✅ Output
authorizationpolicies.security.istio.io 2025-05-17T03:26:25Z
destinationrules.networking.istio.io 2025-05-17T03:26:25Z
envoyfilters.networking.istio.io 2025-05-17T03:26:25Z
gateways.networking.istio.io 2025-05-17T03:26:25Z
istiooperators.install.istio.io 2025-05-17T03:26:25Z
peerauthentications.security.istio.io 2025-05-17T03:26:25Z
proxyconfigs.networking.istio.io 2025-05-17T03:26:25Z
requestauthentications.security.istio.io 2025-05-17T03:26:25Z
serviceentries.networking.istio.io 2025-05-17T03:26:25Z
sidecars.networking.istio.io 2025-05-17T03:26:25Z
telemetries.telemetry.istio.io 2025-05-17T03:26:25Z
virtualservices.networking.istio.io 2025-05-17T03:26:25Z
wasmplugins.extensions.istio.io 2025-05-17T03:26:25Z
workloadentries.networking.istio.io 2025-05-17T03:26:25Z
workloadgroups.networking.istio.io 2025-05-17T03:26:25Z
8. Set up a namespace for the exercises
kubectl create ns istioinaction
kubectl label namespace istioinaction istio-injection=enabled
kubectl get ns --show-labels
✅ Output
namespace/istioinaction created
namespace/istioinaction labeled
NAME STATUS AGE LABELS
default Active 10m kubernetes.io/metadata.name=default
istio-system Active 4m11s kubernetes.io/metadata.name=istio-system
istioinaction Active 1s istio-injection=enabled,kubernetes.io/metadata.name=istioinaction
kube-node-lease Active 10m kubernetes.io/metadata.name=kube-node-lease
kube-public Active 10m kubernetes.io/metadata.name=kube-public
kube-system Active 10m kubernetes.io/metadata.name=kube-system
local-path-storage Active 10m kubernetes.io/metadata.name=local-path-storage
9. Change istio-ingressgateway to NodePort and adjust its traffic policy
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 8080, "nodePort": 30000}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 8443, "nodePort": 30005}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec":{"externalTrafficPolicy": "Local"}}'
kubectl describe svc -n istio-system istio-ingressgateway
✅ Output
service/istio-ingressgateway patched
service/istio-ingressgateway patched
service/istio-ingressgateway patched
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=unknown
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.17.8
release=istio
Annotations: <none>
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.0.6
IPs: 10.200.0.6
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31287/TCP
Endpoints: 10.10.0.6:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30000/TCP
Endpoints: 10.10.0.6:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 30005/TCP
Endpoints: 10.10.0.6:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 30235/TCP
Endpoints: 10.10.0.6:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 31757/TCP
Endpoints: 10.10.0.6:15443
Session Affinity: None
External Traffic Policy: Local
Internal Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 0s service-controller LoadBalancer -> NodePort
10. Istio observability tools: patch the services to NodePort in one pass
kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'
✅ Output
service/prometheus patched
service/grafana patched
service/kiali patched
service/tracing patched
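As a quick sanity check, each dashboard NodePort can be probed from the host (a minimal sketch; the ports follow the mappings above):
for port in 30001 30002 30003 30004; do
  # expect an HTTP status such as 200 or 302 from each dashboard
  curl -s -o /dev/null -w "localhost:${port} -> %{http_code}\n" "http://localhost:${port}"
done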
❌ The most common mistake: a misconfigured data plane
1. Deploy the sample application
kubectl apply -f services/catalog/kubernetes/catalog.yaml -n istioinaction # deploy catalog v1
kubectl apply -f ch10/catalog-deployment-v2.yaml -n istioinaction # deploy catalog v2
kubectl apply -f ch10/catalog-gateway.yaml -n istioinaction # deploy the catalog-gateway
kubectl apply -f ch10/catalog-virtualservice-subsets-v1-v2.yaml -n istioinaction
✅ Output
serviceaccount/catalog created
service/catalog created
deployment.apps/catalog created
deployment.apps/catalog-v2 created
gateway.networking.istio.io/catalog-gateway created
virtualservice.networking.istio.io/catalog-v1-v2 created
2. Inspect the Gateway resource
cat ch10/catalog-gateway.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: catalog-gateway
  namespace: istioinaction
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "catalog.istioinaction.io"
    port:
      number: 80
      name: http
      protocol: HTTP
3. Inspect the VirtualService resource
cat ch10/catalog-virtualservice-subsets-v1-v2.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-v1-v2
  namespace: istioinaction
spec:
  hosts:
  - "catalog.istioinaction.io"
  gateways:
  - "catalog-gateway"
  http:
  - route:
    - destination:
        host: catalog.istioinaction.svc.cluster.local
        subset: version-v1
        port:
          number: 80
      weight: 20
    - destination:
        host: catalog.istioinaction.svc.cluster.local
        subset: version-v2
        port:
          number: 80
      weight: 80
4. Check the Deployment and Service status
kubectl get deploy,svc -n istioinaction
✅ Output
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/catalog 1/1 1 1 116s
deployment.apps/catalog-v2 2/2 2 2 116s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/catalog ClusterIP 10.200.3.2 <none> 80/TCP 116s
5. Check the Gateway and VirtualService status
kubectl get gw,vs -n istioinaction
✅ Output
NAME AGE
gateway.networking.istio.io/catalog-gateway 2m47s
NAME GATEWAYS HOSTS AGE
virtualservice.networking.istio.io/catalog-v1-v2 ["catalog-gateway"] ["catalog.istioinaction.io"] 2m47s
6. Check the ingress gateway logs
NC (NoClusterFound): the upstream cluster was not found
kubectl logs -n istio-system -l app=istio-ingressgateway -f
✅ Output
2025-05-17T03:26:46.438741Z info cache Root cert has changed, start rotating root cert
2025-05-17T03:26:46.438764Z info ads XDS: Incremental Pushing:0 ConnectedEndpoints:2 Version:
2025-05-17T03:26:46.438815Z info cache returned workload certificate from cache ttl=23h59m59.561192303s
2025-05-17T03:26:46.438818Z info cache returned workload trust anchor from cache ttl=23h59m59.561184436s
2025-05-17T03:26:46.438914Z info cache returned workload trust anchor from cache ttl=23h59m59.561087612s
2025-05-17T03:26:46.438969Z info ads SDS: PUSH request for node:istio-ingressgateway-6bb8fb6549-2mlk2.istio-system resources:1 size:4.0kB resource:default
2025-05-17T03:26:46.439063Z info ads SDS: PUSH request for node:istio-ingressgateway-6bb8fb6549-2mlk2.istio-system resources:1 size:1.1kB resource:ROOTCA
2025-05-17T03:26:46.439117Z info cache returned workload trust anchor from cache ttl=23h59m59.560887326s
2025-05-17T03:26:47.984580Z info Readiness succeeded in 1.922836442s
2025-05-17T03:26:47.984873Z info Envoy proxy is ready
7. Repeatedly call the application endpoint
for i in {1..100}; do curl http://catalog.istioinaction.io:30000/items -w "\nStatus Code %{http_code}\n"; sleep .5; done
✅ Output
Status Code 503
Status Code 503
Status Code 503
...
[2025-05-17T03:40:27.455Z] "GET /items HTTP/1.1" 503 NC cluster_not_found - "-" 0 0 0 - "172.18.0.1" "curl/8.13.0" "4cf735c9-d09b-9769-9bf0-83fefa1b6931" "catalog.istioinaction.io:30000" "-" - - 10.10.0.6:8080 172.18.0.1:40804 - -
[2025-05-17T03:40:27.970Z] "GET /items HTTP/1.1" 503 NC cluster_not_found - "-" 0 0 0 - "172.18.0.1" "curl/8.13.0" "4f9c451d-9cd4-9508-bb67-2773fc37adbd" "catalog.istioinaction.io:30000" "-" - - 10.10.0.6:8080 172.18.0.1:40808 - -
[2025-05-17T03:40:28.490Z] "GET /items HTTP/1.1" 503 NC cluster_not_found - "-" 0 0 0 - "172.18.0.1" "curl/8.13.0" "53485abd-ca82-9138-89ba-24edb3a24420" "catalog.istioinaction.io:30000" "-" - - 10.10.0.6:8080 172.18.0.1:40814 - -
...
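The NC response flag in these access logs is Envoy shorthand for "no cluster found": the route matched, but the cluster it points at does not exist in the proxy's configuration. A quick filter for such entries (a minimal sketch):
kubectl logs -n istio-system -l app=istio-ingressgateway --tail=-1 | grep ' NC '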
🔍 Checking whether the data plane is up to date
1. Check the data plane sync status
docker exec -it myk8s-control-plane istioctl proxy-status
✅ Output
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
catalog-6cf4b97d-7d6gb.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-6jlfq 1.17.8
catalog-v2-56c97f6db-8wnq6.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-6jlfq 1.17.8
catalog-v2-56c97f6db-fl74q.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-6jlfq 1.17.8
istio-egressgateway-85df6b84b7-md4m2.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-8d74787f-6jlfq 1.17.8
istio-ingressgateway-6bb8fb6549-2mlk2.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-6jlfq 1.17.8
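All proxies report SYNCED here. When a single proxy lags, istioctl proxy-status can also be pointed at one workload to diff the configuration Istiod holds against what that Envoy last acknowledged (a sketch, following istioctl's documented <type>/<name> form):
docker exec -it myk8s-control-plane istioctl proxy-status deploy/istio-ingressgateway -n istio-system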
2. Identify the missing configuration: validating subsets with Kiali
Subset not found - https://v1-41.kiali.io/docs/features/validations/#kia1107—subset-not-found
⚠️ Discovering misconfigurations with istioctl
1. Analyze the namespace configuration
docker exec -it myk8s-control-plane istioctl analyze -n istioinaction
✅ Output
Error [IST0101] (VirtualService istioinaction/catalog-v1-v2) Referenced host+subset in destinationrule not found: "catalog.istioinaction.svc.cluster.local+version-v1"
Error [IST0101] (VirtualService istioinaction/catalog-v1-v2) Referenced host+subset in destinationrule not found: "catalog.istioinaction.svc.cluster.local+version-v2"
Error: Analyzers found issues when analyzing namespace: istioinaction.
See https://istio.io/v1.17/docs/reference/config/analysis for more information about causes and resolutions.
2. Check the analyze command's exit code
echo $? # (note) 0 means success
✅ Output
79
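Because istioctl analyze exits non-zero when it finds issues (79 here), it slots naturally into CI; a minimal sketch:
# abort a deploy pipeline if analysis reports issues
if ! istioctl analyze -n istioinaction; then
  echo "istioctl analyze reported issues; aborting deploy" >&2
  exit 1
fi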
3. Store the catalog pod name in a variable
kubectl get pod -n istioinaction -l app=catalog -o jsonpath='{.items[0].metadata.name}'
CATALOG_POD1=$(kubectl get pod -n istioinaction -l app=catalog -o jsonpath='{.items[0].metadata.name}')
✅ Output
catalog-6cf4b97d-7d6gb
4. Describe the Istio configuration affecting a specific pod
docker exec -it myk8s-control-plane istioctl x des pod -n istioinaction $CATALOG_POD1
✅ Output
Pod: catalog-6cf4b97d-7d6gb
Pod Revision: default
Pod Ports: 3000 (catalog), 15090 (istio-proxy)
--------------------
Service: catalog
Port: http 80/HTTP targets pod port 3000
--------------------
Effective PeerAuthentication:
Workload mTLS mode: PERMISSIVE
Exposed on Ingress Gateway http://172.18.0.2
VirtualService: catalog-v1-v2
WARNING: No destinations match pod subsets (checked 1 HTTP routes)
Warning: Route to subset version-v1 but NO DESTINATION RULE defining subsets!
Warning: Route to subset version-v2 but NO DESTINATION RULE defining subsets!
5. Inspect the DestinationRule definition
cat ch10/catalog-destinationrule-v1-v2.yaml
✅ Output
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
  namespace: istioinaction
spec:
  host: catalog.istioinaction.svc.cluster.local
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
6. Apply the DestinationRule and re-validate the configuration
kubectl apply -f ch10/catalog-destinationrule-v1-v2.yaml
docker exec -it myk8s-control-plane istioctl x des pod -n istioinaction $CATALOG_POD1
✅ Output
destinationrule.networking.istio.io/catalog created
Pod: catalog-6cf4b97d-7d6gb
Pod Revision: default
Pod Ports: 3000 (catalog), 15090 (istio-proxy)
--------------------
Service: catalog
Port: http 80/HTTP targets pod port 3000
DestinationRule: catalog for "catalog.istioinaction.svc.cluster.local"
Matching subsets: version-v1 # matching subset
(Non-matching subsets version-v2) # non-matching subset
No Traffic Policy
--------------------
Effective PeerAuthentication:
Workload mTLS mode: PERMISSIVE
Exposed on Ingress Gateway http://172.18.0.2
VirtualService: catalog-v1-v2 # the VirtualService routing traffic to this pod
Weight 20%
7. Revert the resources after the test
kubectl delete -f ch10/catalog-destinationrule-v1-v2.yaml
# Result
destinationrule.networking.istio.io "catalog" deleted
🧰 The Envoy admin interface
1. Port-forward the Envoy admin interface
kubectl port-forward deploy/catalog -n istioinaction 15000:15000
# Result
Forwarding from 127.0.0.1:15000 -> 15000
Forwarding from [::1]:15000 -> 15000
2. Check the size of the full Envoy config dump
curl -s localhost:15000/config_dump | wc -l
✅ Output
13952
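The full dump is rarely needed; the admin interface also exposes narrower, standard Envoy endpoints that are easier to read (a sketch against the same port-forward):
curl -s localhost:15000/listeners # one-line listener summary
curl -s localhost:15000/clusters | head -n 20 # per-endpoint cluster state
curl -s localhost:15000/stats | grep retry | head # counters, filtered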
🧭 Querying proxy configuration with istioctl
1. Check the ingress gateway's Envoy listener configuration
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway -n istio-system
✅ Output
ADDRESS PORT MATCH DESTINATION
0.0.0.0 8080 ALL Route: http.8080 # requests on port 8080 are routed according to the route http.8080
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
## The listener is bound to port 8080.
## Traffic on that listener is routed according to the route named http.8080.
2. Check the ingress gateway service's port mappings
kubectl get svc -n istio-system istio-ingressgateway -o yaml | grep "ports:" -A10
✅ Output
ports:
- name: status-port
  nodePort: 31287
  port: 15021
  protocol: TCP
  targetPort: 15021
- name: http2
  nodePort: 30000
  port: 80
  protocol: TCP
  targetPort: 8080
📍 Querying Envoy route configuration
1. Check the http.8080 route configuration
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway -n istio-system --name http.8080
✅ Output
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8080 catalog.istioinaction.io /* catalog-v1-v2.istioinaction
2. Inspect the routing details (JSON)
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway -n istio-system --name http.8080 -o json
✅ Output
[
{
"name": "http.8080",
"virtualHosts": [
{
"name": "catalog.istioinaction.io:80",
"domains": [
"catalog.istioinaction.io"
],
"routes": [
{
"match": {
"prefix": "/" # 일치해야 하는 라우팅 규칙
},
"route": {
"weightedClusters": {
"clusters": [ # 규칙이 일치할 때 트래픽을 라우팅하는 클러스터
{
"name": "outbound|80|version-v1|catalog.istioinaction.svc.cluster.local",
"weight": 20
},
{
"name": "outbound|80|version-v2|catalog.istioinaction.svc.cluster.local",
"weight": 80
}
],
"totalWeight": 100
},
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts",
"typedConfig": {
"@type": "type.googleapis.com/envoy.extensions.retry.host.previous_hosts.v3.PreviousHostsPredicate"
}
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking.istio.io/v1alpha3/namespaces/istioinaction/virtual-service/catalog-v1-v2"
}
}
},
"decorator": {
"operation": "catalog-v1-v2:80/*"
}
}
],
"includeRequestAttemptCount": true
}
],
"validateClusters": false,
"ignorePortInHostMatching": true
}
]
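To pull just the weighted clusters out of that JSON, the command can be piped into jq (a sketch; dropping -t from docker exec keeps the output pipe-friendly):
docker exec myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway -n istio-system \
--name http.8080 -o json | jq '.[0].virtualHosts[0].routes[0].route.weightedClusters.clusters[] | {name, weight}'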
🧬 Querying Envoy cluster configuration
1. Query the base cluster configuration (without a subset)
docker exec -it myk8s-control-plane istioctl proxy-config clusters deploy/istio-ingressgateway -n istio-system \
--fqdn catalog.istioinaction.svc.cluster.local --port 80
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
catalog.istioinaction.svc.cluster.local 80 - outbound EDS
2. Query the cluster for a specific subset
docker exec -it myk8s-control-plane istioctl proxy-config clusters deploy/istio-ingressgateway -n istio-system \
--fqdn catalog.istioinaction.svc.cluster.local --port 80 --subset version-v1
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
No destination rule currently defines this subset, so only the header row comes back.
3. Review the DestinationRule definition (the YAML about to be applied)
docker exec -it myk8s-control-plane cat /istiobook/ch10/catalog-destinationrule-v1-v2.yaml
✅ Output
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
  namespace: istioinaction
spec:
  host: catalog.istioinaction.svc.cluster.local
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
4. Analyze the configuration file for errors before applying it
Use istioctl analyze <yaml> to validate configuration before it is applied.
docker exec -it myk8s-control-plane istioctl analyze /istiobook/ch10/catalog-destinationrule-v1-v2.yaml -n istioinaction
✅ Output
✔ No validation issues found when analyzing /istiobook/ch10/catalog-destinationrule-v1-v2.yaml.
5. Apply the DestinationRule
kubectl apply -f ch10/catalog-destinationrule-v1-v2.yaml
# Result
destinationrule.networking.istio.io/catalog created
6. Re-query the clusters to confirm the subsets took effect
docker exec -it myk8s-control-plane istioctl proxy-config clusters deploy/istio-ingressgateway -n istio-system \
--fqdn catalog.istioinaction.svc.cluster.local --port 80
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
catalog.istioinaction.svc.cluster.local 80 - outbound EDS catalog.istioinaction
catalog.istioinaction.svc.cluster.local 80 version-v1 outbound EDS catalog.istioinaction
catalog.istioinaction.svc.cluster.local 80 version-v2 outbound EDS catalog.istioinaction
7. Re-check routing from the pod's point of view
Use istioctl x describe pod to check how the VirtualService and DestinationRule are wired together.
CATALOG_POD1=$(kubectl get pod -n istioinaction -l app=catalog -o jsonpath='{.items[0].metadata.name}')
docker exec -it myk8s-control-plane istioctl x des pod -n istioinaction $CATALOG_POD1
✅ Output
Pod: catalog-6cf4b97d-7d6gb
Pod Revision: default
Pod Ports: 3000 (catalog), 15090 (istio-proxy)
--------------------
Service: catalog
Port: http 80/HTTP targets pod port 3000
DestinationRule: catalog for "catalog.istioinaction.svc.cluster.local"
Matching subsets: version-v1
(Non-matching subsets version-v2)
No Traffic Policy
--------------------
Effective PeerAuthentication:
Workload mTLS mode: PERMISSIVE
Exposed on Ingress Gateway http://172.18.0.2
VirtualService: catalog-v1-v2
Weight 20%
8. Re-analyze the whole namespace
docker exec -it myk8s-control-plane istioctl analyze -n istioinaction
✅ Output
✔ No validation issues found when analyzing namespace: istioinaction.
9. Call the application to verify the problem is fixed
curl http://catalog.istioinaction.io:30000/items
✅ Output
[
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
},
{
"id": 2,
"color": "cyan",
"department": "Clothing",
"name": "Atlas Shirt",
"price": "127.00"
},
{
"id": 3,
"color": "teal",
"department": "Clothing",
"name": "Small Metal Shoes",
"price": "232.00"
},
{
"id": 4,
"color": "red",
"department": "Watches",
"name": "Red Dragon Watch",
"price": "232.00"
}
]
⚙️ How are clusters configured?
Detailed Envoy cluster query (JSON output)
docker exec -it myk8s-control-plane istioctl proxy-config clusters deploy/istio-ingressgateway -n istio-system \
--fqdn catalog.istioinaction.svc.cluster.local --port 80 --subset version-v1 -o json
✅ Output
[
{
"transportSocketMatches": [
{
"name": "tlsMode-istio",
"match": {
"tlsMode": "istio"
},
"transportSocket": {
"name": "envoy.transport_sockets.tls",
"typedConfig": {
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
"commonTlsContext": {
"tlsParams": {
"tlsMinimumProtocolVersion": "TLSv1_2",
"tlsMaximumProtocolVersion": "TLSv1_3"
},
"tlsCertificateSdsSecretConfigs": [
{
"name": "default",
"sdsConfig": {
"apiConfigSource": {
"apiType": "GRPC",
"transportApiVersion": "V3",
"grpcServices": [
{
"envoyGrpc": {
"clusterName": "sds-grpc"
}
}
],
"setNodeOnFirstMessageOnly": true
},
"initialFetchTimeout": "0s",
"resourceApiVersion": "V3"
}
}
],
"combinedValidationContext": {
"defaultValidationContext": {
"matchSubjectAltNames": [
{
"exact": "spiffe://cluster.local/ns/istioinaction/sa/catalog"
}
]
},
"validationContextSdsSecretConfig": {
"name": "ROOTCA",
"sdsConfig": {
"apiConfigSource": {
"apiType": "GRPC",
"transportApiVersion": "V3",
"grpcServices": [
{
"envoyGrpc": {
"clusterName": "sds-grpc"
}
}
],
"setNodeOnFirstMessageOnly": true
},
"initialFetchTimeout": "0s",
"resourceApiVersion": "V3"
}
}
},
"alpnProtocols": [
"istio-peer-exchange",
"istio"
]
},
"sni": "outbound_.80_.version-v1_.catalog.istioinaction.svc.cluster.local"
}
}
},
{
"name": "tlsMode-disabled",
"match": {},
"transportSocket": {
"name": "envoy.transport_sockets.raw_buffer",
"typedConfig": {
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.raw_buffer.v3.RawBuffer"
}
}
}
],
"name": "outbound|80|version-v1|catalog.istioinaction.svc.cluster.local",
"type": "EDS",
"edsClusterConfig": {
"edsConfig": {
"ads": {},
"initialFetchTimeout": "0s",
"resourceApiVersion": "V3"
},
"serviceName": "outbound|80|version-v1|catalog.istioinaction.svc.cluster.local"
},
"connectTimeout": "10s",
"lbPolicy": "LEAST_REQUEST",
"circuitBreakers": {
"thresholds": [
{
"maxConnections": 4294967295,
"maxPendingRequests": 4294967295,
"maxRequests": 4294967295,
"maxRetries": 4294967295,
"trackRemaining": true
}
]
},
"commonLbConfig": {
"localityWeightedLbConfig": {}
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking.istio.io/v1alpha3/namespaces/istioinaction/destination-rule/catalog",
"default_original_port": 80,
"services": [
{
"host": "catalog.istioinaction.svc.cluster.local",
"name": "catalog",
"namespace": "istioinaction"
}
],
"subset": "version-v1"
}
}
},
"filters": [
{
"name": "istio.metadata_exchange",
"typedConfig": {
"@type": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange",
"protocol": "istio-peer-exchange"
}
}
]
}
]
🛰️ Querying Envoy cluster endpoints
1. Check the actual endpoints Envoy routes to
docker exec -it myk8s-control-plane istioctl proxy-config endpoints deploy/istio-ingressgateway -n istio-system \
--cluster "outbound|80|version-v1|catalog.istioinaction.svc.cluster.local"
✅ Output
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.10.0.12:3000 HEALTHY OK outbound|80|version-v1|catalog.istioinaction.svc.cluster.local
2. Verify a workload exists at that endpoint IP
kubectl get pod -n istioinaction --field-selector status.podIP=10.10.0.12 -owide --show-labels
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
catalog-6cf4b97d-7d6gb 2/2 Running 0 3h50m 10.10.0.12 myk8s-control-plane <none> <none> app=catalog,pod-template-hash=6cf4b97d,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=catalog,service.istio.io/canonical-revision=v1,version=v1
The workload really exists, and the Envoy configuration is now fully in place to route traffic to it.
🧯 Troubleshooting application issues
1. Pre-verify that responses are healthy
for i in {1..9999}; do curl http://catalog.istioinaction.io:30000/items -w "\nStatus Code %{http_code}\n"; sleep 1; done
✅ Output
[
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
},
{
"id": 2,
"color": "cyan",
"department": "Clothing",
"name": "Atlas Shirt",
"price": "127.00"
},
{
"id": 3,
"color": "teal",
"department": "Clothing",
"name": "Small Metal Shoes",
"price": "232.00"
},
{
"id": 4,
"color": "red",
"department": "Watches",
"name": "Red Dragon Watch",
"price": "232.00"
}
]
Status Code 200
...
2. Observe the traffic in Grafana
Use the Istio Mesh dashboard to visually check the catalog workload's traffic and metrics.
3. Pick one of the catalog v2 pods and assign it to a variable
CATALOG_POD=$(kubectl get pods -l version=v2 -n istioinaction -o jsonpath={.items..metadata.name} | cut -d ' ' -f1)
echo $CATALOG_POD
✅ Output
catalog-v2-56c97f6db-8wnq6
4. Send the catalog pod a request that induces latency
kubectl -n istioinaction exec -c catalog $CATALOG_POD \
-- curl -s -X POST -H "Content-Type: application/json" \
-d '{"active": true, "type": "latency", "volatile": true}' \
localhost:3000/blowup ;
✅ Output
blowups=[object Object]
5. Call the service repeatedly to observe the added latency
for i in {1..9999}; do curl http://catalog.istioinaction.io:30000/items -w "\nStatus Code %{http_code}\n"; sleep 1; done
✅ Output
[
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
},
{
"id": 2,
"color": "cyan",
"department": "Clothing",
"name": "Atlas Shirt",
"price": "127.00"
},
{
"id": 3,
"color": "teal",
"department": "Clothing",
"name": "Small Metal Shoes",
"price": "232.00"
},
{
"id": 4,
"color": "red",
"department": "Watches",
"name": "Red Dragon Watch",
"price": "232.00"
}
]
Status Code 200
6. Apply a 0.5s timeout to the VirtualService
(1) List the VirtualService resources
kubectl get vs -n istioinaction
✅ Output
NAME GATEWAYS HOSTS AGE
catalog-v1-v2 ["catalog-gateway"] ["catalog.istioinaction.io"] 4h53m
(2) Use kubectl patch to set the HTTP timeout to 0.5 seconds
kubectl patch vs catalog-v1-v2 -n istioinaction --type json \
-p '[{"op": "add", "path": "/spec/http/0/timeout", "value": "0.5s"}]'
# Result
virtualservice.networking.istio.io/catalog-v1-v2 patched
7. Verify the VirtualService change was applied
kubectl get vs catalog-v1-v2 -n istioinaction -o jsonpath='{.spec.http[?(@.timeout=="0.5s")]}' | jq
✅ Output
{
"route": [
{
"destination": {
"host": "catalog.istioinaction.svc.cluster.local",
"port": {
"number": 80
},
"subset": "version-v1"
},
"weight": 20
},
{
"destination": {
"host": "catalog.istioinaction.svc.cluster.local",
"port": {
"number": 80
},
"subset": "version-v2"
},
"weight": 80
}
],
"timeout": "0.5s"
}
8. Re-request the service and confirm 504s appear after the timeout
for i in {1..9999}; do curl http://catalog.istioinaction.io:30000/items -w "\nStatus Code %{http_code}\n"; sleep 1; done
✅ Output
[
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
},
{
"id": 2,
"color": "cyan",
"department": "Clothing",
"name": "Atlas Shirt",
"price": "127.00"
},
{
"id": 3,
"color": "teal",
"department": "Clothing",
"name": "Small Metal Shoes",
"price": "232.00"
},
{
"id": 4,
"color": "red",
"department": "Watches",
"name": "Red Dragon Watch",
"price": "232.00"
}
]
Status Code 200
upstream request timeout
Status Code 504
upstream request timeout
Status Code 504
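To undo the experiment later, the timeout can be removed with the inverse JSON patch (a sketch mirroring the add patch above):
kubectl patch vs catalog-v1-v2 -n istioinaction --type json \
-p '[{"op": "remove", "path": "/spec/http/0/timeout"}]'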
9. Stream the timeout entries from the ingress gateway logs
kubectl logs -n istio-system -l app=istio-ingressgateway -f
✅ Output
[2025-05-17T08:31:49.673Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "ddbb601f-e23b-9c3c-b4d8-696c510fa62a" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:60458 10.10.0.6:8080 172.18.0.1:39818 - -
[2025-05-17T08:31:50.296Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "b561e6be-2eb8-967f-a1b8-62f27797de98" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:51144 10.10.0.6:8080 172.18.0.1:39824 - -
[2025-05-17T08:31:50.690Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "9c8a7f37-97bd-90a5-b2e3-27a5cacd3393" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:51106 10.10.0.6:8080 172.18.0.1:39830 - -
[2025-05-17T08:31:51.308Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "06442ac5-45aa-9dc1-9df9-a368349324d3" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:49314 10.10.0.6:8080 172.18.0.1:39840 - -
[2025-05-17T08:31:51.701Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 410 409 "172.18.0.1" "curl/8.13.0" "99f995bf-a090-993f-bdbf-dd1c8caab531" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:54274 10.10.0.6:8080 172.18.0.1:39850 - -
[2025-05-17T08:31:52.322Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 499 - "172.18.0.1" "curl/8.13.0" "665c29ba-1596-989d-b1d9-603f65457c8e" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:60144 10.10.0.6:8080 172.18.0.1:39860 - -
[2025-05-17T08:31:53.127Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "b406314f-fef6-9f0e-acaa-293a5ae27d76" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:56056 10.10.0.6:8080 172.18.0.1:39862 - -
[2025-05-17T08:31:54.139Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 1 "172.18.0.1" "curl/8.13.0" "135eb6ae-3f58-9926-a012-25391a84b02b" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:56056 10.10.0.6:8080 172.18.0.1:39884 - -
[2025-05-17T08:31:53.833Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 500 - "172.18.0.1" "curl/8.13.0" "2878fecb-5f3e-95e8-87a0-cfd080cbb1d0" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:33400 10.10.0.6:8080 172.18.0.1:39872 - -
[2025-05-17T08:31:55.361Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 497 497 "172.18.0.1" "curl/8.13.0" "24c5650c-b785-97cb-a1dc-f3116ea01a8a" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:57812 10.10.0.6:8080 172.18.0.1:39900 - -
[2025-05-17T08:31:56.869Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "ac3c564b-db42-9f06-9769-947005879e22" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:40450 10.10.0.6:8080 172.18.0.1:39914 - -
[2025-05-17T08:31:57.885Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 500 - "172.18.0.1" "curl/8.13.0" "40a31598-e5ea-982f-891e-c8e8122e5ed7" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:57812 10.10.0.6:8080 172.18.0.1:39004 - -
[2025-05-17T08:31:59.406Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 367 366 "172.18.0.1" "curl/8.13.0" "bb8da2d1-ec36-9874-8d5d-6d61ef638539" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:34204 10.10.0.6:8080 172.18.0.1:39012 - -
[2025-05-17T08:32:00.783Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 3 2 "172.18.0.1" "curl/8.13.0" "9c5cd684-a71c-9082-84c1-916983f622cd" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:47318 10.10.0.6:8080 172.18.0.1:39024 - -
[2025-05-17T08:32:01.798Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 289 289 "172.18.0.1" "curl/8.13.0" "896af886-997f-9c17-bfec-765b7a549f8c" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:34164 10.10.0.6:8080 172.18.0.1:39026 - -
[2025-05-17T08:32:03.097Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 4 3 "172.18.0.1" "curl/8.13.0" "38870d95-8689-9c2b-a955-8702237d9aba" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:40450 10.10.0.6:8080 172.18.0.1:39032 - -
[2025-05-17T08:32:04.112Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 3 3 "172.18.0.1" "curl/8.13.0" "5d0cfbe6-7f0e-9db0-a80b-86b2e4c6369e" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:41612 10.10.0.6:8080 172.18.0.1:39034 - -
[2025-05-17T08:32:05.134Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 2 "172.18.0.1" "curl/8.13.0" "030b87f1-a0f4-95c7-a2ce-d374389e10a6" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:36646 10.10.0.6:8080 172.18.0.1:39042 - -
[2025-05-17T08:32:06.155Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 3 2 "172.18.0.1" "curl/8.13.0" "f675b04c-874f-9cdb-81fe-114598960595" "catalog.istioinaction.io:30000" "10.10.0.12:3000" outbound|80|version-v1|catalog.istioinaction.svc.cluster.local 10.10.0.6:41592 10.10.0.6:8080 172.18.0.1:39052 - -
[2025-05-17T08:32:07.168Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 7 7 "172.18.0.1" "curl/8.13.0" "ec3030a1-dfb6-9578-bac1-907d5a2f04bb" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:40350 10.10.0.6:8080 172.18.0.1:33540 - -
[2025-05-17T08:32:08.187Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 1 "172.18.0.1" "curl/8.13.0" "f0230eef-5532-940b-8347-7cb4748611eb" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:51126 10.10.0.6:8080 172.18.0.1:33548 - -
[2025-05-17T08:32:09.199Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 302 301 "172.18.0.1" "curl/8.13.0" "4ccfd24e-8e9a-9708-8480-d4d953be3f59" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:45308 10.10.0.6:8080 172.18.0.1:33554 - -
[2025-05-17T08:32:10.514Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 5 5 "172.18.0.1" "curl/8.13.0" "d1535c70-a1e1-9796-ba8d-87438b0c8382" "catalog.istioinaction.io:30000" "10.10.0.14:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:51144 10.10.0.6:8080 172.18.0.1:33556 - -
....
10. Filter the 504 error logs from the ingress gateway
kubectl logs -n istio-system -l app=istio-ingressgateway -f | grep 504
✅ Output
[2025-05-17T08:33:01.777Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 500 - "172.18.0.1" "curl/8.13.0" "a5072404-fe80-98d5-af94-185adc5f0c92" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:51846 10.10.0.6:8080 172.18.0.1:48518 - -
[2025-05-17T08:33:09.618Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 501 - "172.18.0.1" "curl/8.13.0" "2492025a-3fea-9468-8d37-89273cef19e5" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:53376 10.10.0.6:8080 172.18.0.1:38128 - -
[2025-05-17T08:33:11.146Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 501 - "172.18.0.1" "curl/8.13.0" "f829ebde-22b8-9b0f-9e42-22b2e20cfead" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:53382 10.10.0.6:8080 172.18.0.1:38136 - -
....
11. Stream the istio-proxy logs of the catalog v2 pods
kubectl logs -n istioinaction -l version=v2 -c istio-proxy -f
✅ Output
[2025-05-17T08:33:38.954Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 11 9 "172.18.0.1" "curl/8.13.0" "8ff3dd9b-4803-9924-905c-3aea0f1d1731" "catalog.istioinaction.io:30000" "10.10.0.14:3000" inbound|3000|| 127.0.0.6:42347 10.10.0.14:3000 172.18.0.1:0 outbound_.80_.version-v2_.catalog.istioinaction.svc.cluster.local default
2025-05-17T08:33:44.126862Z info xdsproxy connected to upstream XDS server: istiod.istio-system.svc:15012
[2025-05-17T08:33:49.022Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 2 1 "172.18.0.1" "curl/8.13.0" "169e8391-8409-9e0c-85bb-53e4af150fbc" "catalog.istioinaction.io:30000" "10.10.0.14:3000" inbound|3000|| 127.0.0.6:57703 10.10.0.14:3000 172.18.0.1:0 outbound_.80_.version-v2_.catalog.istioinaction.svc.cluster.local default
[2025-05-17T08:33:52.089Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 1 1 "172.18.0.1" "curl/8.13.0" "4d07c358-6d2c-9650-9362-c4e6b0453445" "catalog.istioinaction.io:30000" "10.10.0.14:3000" inbound|3000|| 127.0.0.6:57703 10.10.0.14:3000 172.18.0.1:0 outbound_.80_.version-v2_.catalog.istioinaction.svc.cluster.local default
[2025-05-17T08:33:56.156Z] "GET /items HTTP/1.1" 200 - via_upstream - "-" 0 502 1 1 "172.18.0.1" "curl/8.13.0" "42fc582d-bb75-9772-8f23-17b9f3b28c3b" "catalog.istioinaction.io:30000" "10.10.0.14:3000" inbound|3000|| 127.0.0.6:44565 10.10.0.14:3000 172.18.0.1:0 outbound_.80_.version-v2_.catalog.istioinaction.svc.cluster.local default
...
🧾 Understanding the Envoy access log + changing its format
1. Inspect the existing Envoy access log
kubectl logs -n istio-system -l app=istio-ingressgateway -f | grep 504
✅ Output
[2025-05-17T08:39:35.198Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 500 - "172.18.0.1" "curl/8.13.0" "d42cf869-6b0b-96e1-bf93-7190c05d3d34" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:59144 10.10.0.6:8080 172.18.0.1:43574 - -
[2025-05-17T08:39:41.111Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 500 - "172.18.0.1" "curl/8.13.0" "00b32cb2-8c4a-991d-83ab-f2a5bf6b81ba" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:41300 10.10.0.6:8080 172.18.0.1:47294 - -
[2025-05-17T08:40:08.024Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 500 - "172.18.0.1" "curl/8.13.0" "716d72fa-68b7-9152-b35c-4086ee0be7c8" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:52204 10.10.0.6:8080 172.18.0.1:59286 - -
[2025-05-17T08:40:16.301Z] "GET /items HTTP/1.1" 504 UT response_timeout - "-" 0 24 499 - "172.18.0.1" "curl/8.13.0" "53b55a05-569c-950c-b120-20773dda534f" "catalog.istioinaction.io:30000" "10.10.0.13:3000" outbound|80|version-v2|catalog.istioinaction.svc.cluster.local 10.10.0.6:51026 10.10.0.6:8080 172.18.0.1:59360 - -
....
2. Change the log format to JSON in MeshConfig
Add accessLogEncoding: JSON to the mesh section of the istio configmap.
kubectl edit -n istio-system cm istio
configmap/istio edited
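To confirm the edit landed without reopening the editor, grep the mesh block of the configmap (a minimal sketch):
kubectl get cm istio -n istio-system -o jsonpath='{.data.mesh}' | grep accessLog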
3. Confirm the new JSON log format
kubectl logs -n istio-system -l app=istio-ingressgateway -f | jq
✅ Output
...
{
"x_forwarded_for": "172.18.0.1",
"upstream_service_time": null,
"connection_termination_details": null,
"upstream_cluster": "outbound|80|version-v2|catalog.istioinaction.svc.cluster.local",
"request_id": "93b6396c-c45c-9d1e-ac38-b3dabc48dccb",
"method": "GET",
"response_flags": "UT",
"protocol": "HTTP/1.1",
"upstream_host": "10.10.0.13:3000",
"response_code_details": "response_timeout",
"requested_server_name": null,
"downstream_local_address": "10.10.0.6:8080",
"upstream_transport_failure_reason": null,
"route_name": null,
"upstream_local_address": "10.10.0.6:36090",
"path": "/items",
"downstream_remote_address": "172.18.0.1:39542",
"bytes_received": 0,
"bytes_sent": 24,
"response_code": 504,
"start_time": "2025-05-17T08:44:06.127Z",
"user_agent": "curl/8.13.0",
"duration": 500,
"authority": "catalog.istioinaction.io:30000"
}
...
4. Look up the IP of the slow catalog v2 pod
CATALOG_POD=$(kubectl get pods -l version=v2 -n istioinaction -o jsonpath={.items..metadata.name} | cut -d ' ' -f1)
kubectl get pod -n istioinaction $CATALOG_POD -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
catalog-v2-56c97f6db-8wnq6 2/2 Running 0 5h10m 10.10.0.13 myk8s-control-plane <none> <none>
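With JSON encoding in place, the access log is machine-filterable; a sketch that keeps only entries whose upstream is this slow pod (jq's fromjson? quietly skips non-JSON lines):
kubectl logs -n istio-system -l app=istio-ingressgateway --tail=-1 | jq -R 'fromjson? | select(.upstream_host == "10.10.0.13:3000")'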
🔥 Raising the Envoy gateway's logging level
1. Check the gateway Envoy's current log levels
docker exec -it myk8s-control-plane istioctl proxy-config log deploy/istio-ingressgateway -n istio-system
✅ Output
istio-ingressgateway-6bb8fb6549-2mlk2.istio-system:
active loggers:
admin: warning
alternate_protocols_cache: warning
aws: warning
assert: warning
backtrace: warning
cache_filter: warning
client: warning
config: warning
connection: warning
conn_handler: warning
decompression: warning
dns: warning
dubbo: warning
envoy_bug: warning
ext_authz: warning
ext_proc: warning
rocketmq: warning
file: warning
filter: warning
forward_proxy: warning
grpc: warning
happy_eyeballs: warning
hc: warning
health_checker: warning
http: warning
http2: warning
hystrix: warning
init: warning
io: warning
jwt: warning
kafka: warning
key_value_store: warning
lua: warning
main: warning
matcher: warning
misc: error
mongo: warning
multi_connection: warning
oauth2: warning
quic: warning
quic_stream: warning
pool: warning
rate_limit_quota: warning
rbac: warning
rds: warning
redis: warning
router: warning
runtime: warning
stats: warning
secret: warning
tap: warning
testing: warning
thrift: warning
tracing: warning
upstream: warning
udp: warning
wasm: warning
websocket: warning
2. Set specific components' log level to debug
Raise the connection, http, router, and pool loggers to debug.
docker exec -it myk8s-control-plane istioctl proxy-config log deploy/istio-ingressgateway -n istio-system \
--level http:debug,router:debug,connection:debug,pool:debug
✅ Output
istio-ingressgateway-6bb8fb6549-2mlk2.istio-system:
active loggers:
admin: warning
alternate_protocols_cache: warning
aws: warning
assert: warning
backtrace: warning
cache_filter: warning
client: warning
config: warning
connection: debug
conn_handler: warning
decompression: warning
dns: warning
dubbo: warning
envoy_bug: warning
ext_authz: warning
ext_proc: warning
rocketmq: warning
file: warning
filter: warning
forward_proxy: warning
grpc: warning
happy_eyeballs: warning
hc: warning
health_checker: warning
http: debug
http2: warning
hystrix: warning
init: warning
io: warning
jwt: warning
kafka: warning
key_value_store: warning
lua: warning
main: warning
matcher: warning
misc: error
mongo: warning
multi_connection: warning
oauth2: warning
quic: warning
quic_stream: warning
pool: debug
rate_limit_quota: warning
rbac: warning
rds: warning
redis: warning
router: debug
runtime: warning
stats: warning
secret: warning
tap: warning
testing: warning
thrift: warning
tracing: warning
upstream: warning
udp: warning
wasm: warning
websocket: warning
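istioctl is driving Envoy's admin interface under the hood, so the same toggle can be flipped directly (a sketch assuming a port-forward to the gateway pod; POST /logging is a standard Envoy admin endpoint):
kubectl -n istio-system port-forward deploy/istio-ingressgateway 15000:15000 &
curl -s -X POST "localhost:15000/logging?connection=debug" # prints the resulting logger levels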
3. Capture the full debug log to a file
kubectl logs -n istio-system -l app=istio-ingressgateway -f > istio-igw-log.txt
4. Debugging the 504 response_timeout responses
(1) Filter the 504 logs
2025-05-17T08:50:37.970643Z debug envoy http external/envoy/source/common/http/filter_manager.cc:967 [C11988][S12570926387111490963] Sending local reply with details response_timeout thread=48
2025-05-17T08:50:37.970716Z debug envoy http external/envoy/source/common/http/conn_manager_impl.cc:1687 [C11988][S12570926387111490963] encoding headers via codec (end_stream=false):
':status', '504'
'content-length', '24'
'content-type', 'text/plain'
'date', 'Sat, 17 May 2025 08:50:37 GMT'
'server', 'istio-envoy'
thread=48
(2) Trace the connection ID (C11988)
2025-05-17T08:50:37.469925Z debug envoy http external/envoy/source/common/http/conn_manager_impl.cc:329 [C11988] new stream thread=48
2025-05-17T08:50:37.469985Z debug envoy http external/envoy/source/common/http/conn_manager_impl.cc:1049 [C11988][S12570926387111490963] request headers complete (end_stream=true):
':authority', 'catalog.istioinaction.io:30000'
':path', '/items'
':method', 'GET'
'user-agent', 'curl/8.13.0'
'accept', '*/*'
thread=48
(3) Confirm the request-to-cluster match
2025-05-17T08:50:37.469992Z debug envoy http external/envoy/source/common/http/conn_manager_impl.cc:1032 [C11988][S12570926387111490963] request end stream thread=48
2025-05-17T08:50:37.470011Z debug envoy connection external/envoy/source/common/network/connection_impl.h:92 [C11988] current connecting state: false thread=48
2025-05-17T08:50:37.470111Z debug envoy router external/envoy/source/common/router/router.cc:470 [C11988][S12570926387111490963] cluster 'outbound|80|version-v2|catalog.istioinaction.svc.cluster.local' match for URL '/items' thread=48
2025-05-17T08:50:37.470157Z debug envoy router external/envoy/source/common/router/router.cc:678 [C11988][S12570926387111490963] router decoding headers:
':authority', 'catalog.istioinaction.io:30000'
':path', '/items'
':method', 'GET'
':scheme', 'http'
'user-agent', 'curl/8.13.0'
'accept', '*/*'
'x-forwarded-for', '172.18.0.1'
'x-forwarded-proto', 'http'
'x-envoy-internal', 'true'
'x-request-id', '556487c7-4e8b-9465-ba02-4a876f016439'
'x-envoy-decorator-operation', 'catalog-v1-v2:80/*'
'x-envoy-peer-metadata', 'ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKGwoMSU5TVEFOQ0VfSVBTEgsaCTEwLjEwLjAuNgoZCg1JU1RJT19WRVJTSU9OEggaBjEuMTcuOAqcAwoGTEFCRUxTEpEDKo4DCh0KA2FwcBIWGhRpc3Rpby1pbmdyZXNzZ2F0ZXdheQoTCgVjaGFydBIKGghnYXRld2F5cwoUCghoZXJpdGFnZRIIGgZUaWxsZXIKNgopaW5zdGFsbC5vcGVyYXRvci5pc3Rpby5pby9vd25pbmctcmVzb3VyY2USCRoHdW5rbm93bgoZCgVpc3RpbxIQGg5pbmdyZXNzZ2F0ZXdheQoZCgxpc3Rpby5pby9yZXYSCRoHZGVmYXVsdAowChtvcGVyYXRvci5pc3Rpby5pby9jb21wb25lbnQSERoPSW5ncmVzc0dhdGV3YXlzChIKB3JlbGVhc2USBxoFaXN0aW8KOQofc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtbmFtZRIWGhRpc3Rpby1pbmdyZXNzZ2F0ZXdheQovCiNzZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1yZXZpc2lvbhIIGgZsYXRlc3QKIgoXc2lkZWNhci5pc3Rpby5pby9pbmplY3QSBxoFZmFsc2UKGgoHTUVTSF9JRBIPGg1jbHVzdGVyLmxvY2FsCi8KBE5BTUUSJxolaXN0aW8taW5ncmVzc2dhdGV3YXktNmJiOGZiNjU0OS0ybWxrMgobCglOQU1FU1BBQ0USDhoMaXN0aW8tc3lzdGVtCl0KBU9XTkVSElQaUmt1YmVybmV0ZXM6Ly9hcGlzL2FwcHMvdjEvbmFtZXNwYWNlcy9pc3Rpby1zeXN0ZW0vZGVwbG95bWVudHMvaXN0aW8taW5ncmVzc2dhdGV3YXkKFwoRUExBVEZPUk1fTUVUQURBVEESAioACicKDVdPUktMT0FEX05BTUUSFhoUaXN0aW8taW5ncmVzc2dhdGV3YXk='
'x-envoy-peer-metadata-id', 'router~10.10.0.6~istio-ingressgateway-6bb8fb6549-2mlk2.istio-system~istio-system.svc.cluster.local'
'x-envoy-expected-rq-timeout-ms', '500'
'x-envoy-attempt-count', '1'
thread=48
- Confirms that the request was mapped to the expected cluster
(4) Upstream connection state and request termination flow
2025-05-17T08:50:37.470174Z debug envoy pool external/envoy/source/common/conn_pool/conn_pool_base.cc:265 [C11950] using existing fully connected connection thread=48
2025-05-17T08:50:37.470178Z debug envoy pool external/envoy/source/common/conn_pool/conn_pool_base.cc:182 [C11950] creating stream thread=48
2025-05-17T08:50:37.470188Z debug envoy router external/envoy/source/common/router/upstream_request.cc:581 [C11988][S12570926387111490963] pool ready thread=48
2025-05-17T08:50:37.970287Z debug envoy router external/envoy/source/common/router/router.cc:947 [C11988][S12570926387111490963] upstream timeout thread=48
2025-05-17T08:50:37.970353Z debug envoy router external/envoy/source/common/router/upstream_request.cc:500 [C11988][S12570926387111490963] resetting pool request thread=48
2025-05-17T08:50:37.970373Z debug envoy connection external/envoy/source/common/network/connection_impl.cc:139 [C11950] closing data_to_write=0 type=1 thread=48
2025-05-17T08:50:37.970380Z debug envoy connection external/envoy/source/common/network/connection_impl.cc:250 [C11950] closing socket: 1 thread=48
2025-05-17T08:50:37.970486Z debug envoy connection external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:320 [C11950] SSL shutdown: rc=0 thread=48
2025-05-17T08:50:37.970552Z debug envoy pool external/envoy/source/common/conn_pool/conn_pool_base.cc:484 [C11950] client disconnected, failure reason: thread=48
2025-05-17T08:50:37.970599Z debug envoy pool external/envoy/source/common/conn_pool/conn_pool_base.cc:454 invoking idle callbacks - is_draining_for_deletion_=false thread=48
- Confirms that an existing connection was reused and a stream was created
- Confirms the upstream timeout log entry
- Confirms the TLS connection teardown
(5) Finally, the 504 response is sent
2025-05-17T08:50:37.970643Z debug envoy http external/envoy/source/common/http/filter_manager.cc:967 [C11988][S12570926387111490963] Sending local reply with details response_timeout thread=48
2025-05-17T08:50:37.970716Z debug envoy http external/envoy/source/common/http/conn_manager_impl.cc:1687 [C11988][S12570926387111490963] encoding headers via codec (end_stream=false):
':status', '504'
'content-length', '24'
'content-type', 'text/plain'
'date', 'Sat, 17 May 2025 08:50:37 GMT'
'server', 'istio-envoy'
thread=48
2025-05-17T08:50:37.971159Z debug envoy pool external/envoy/source/common/conn_pool/conn_pool_base.cc:215 [C11950] destroying stream: 0 remaining thread=48
2025-05-17T08:50:37.971895Z debug envoy connection external/envoy/source/common/network/connection_impl.cc:656 [C11988] remote close thread=48
2025-05-17T08:50:37.971912Z debug envoy connection external/envoy/source/common/network/connection_impl.cc:250 [C11988] closing socket: 0 thread=48
- Shows the point where Envoy returns the 504 response to the client
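To reconstruct a single failed request end to end, filtering the saved log by its connection and stream IDs ties steps (1)–(5) together. A sketch, assuming the same istio-igw-log.txt file and the IDs captured above; substitute the [Cxxx]/[Sxxx] pair from your own log:
# Follow the downstream connection, its stream, and the upstream connection
grep -E '\[C11988\]|\[S12570926387111490963\]|\[C11950\]' istio-igw-log.txt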
🧪 Inspecting network traffic with tcpdump
1. Check the slow pod's information
CATALOG_POD=$(kubectl get pods -l version=v2 -n istioinaction -o jsonpath={.items..metadata.name} | cut -d ' ' -f1)
kubectl get pod -n istioinaction $CATALOG_POD -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
catalog-v2-56c97f6db-8wnq6 2/2 Running 0 5h31m 10.10.0.13 myk8s-control-plane <none> <none>
2. Check the catalog service and endpoint information
kubectl get svc,ep -n istioinaction
✅ Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/catalog ClusterIP 10.200.3.2 <none> 80/TCP 5h32m
NAME ENDPOINTS AGE
endpoints/catalog 10.10.0.12:3000,10.10.0.13:3000,10.10.0.14:3000 5h32m
3. Check istio-proxy privileges and network interface information
(1) Verify sudo privileges
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo whoami
✅ Output
root
(2) Print all addresses on the eth0 and lo interfaces
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- ip -c addr
✅ Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 6a:fb:01:b9:43:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.0.13/24 brd 10.10.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::68fb:1ff:feb9:431f/64 scope link
valid_lft forever preferred_lft forever
(3) Check detailed information for the eth0 interface
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- ip add show dev eth0
✅ Output
2: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 6a:fb:01:b9:43:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.0.13/24 brd 10.10.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::68fb:1ff:feb9:431f/64 scope link
valid_lft forever preferred_lft forever
(4) Check detailed information for the lo interface
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- ip add show dev lo
✅ Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4. Capturing istio-proxy network packets
(1) Dump TCP packets on port 3000 of the eth0 interface (quiet output)
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i eth0 tcp port 3000 -nnq
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
09:14:49.103817 IP 10.10.0.13.3000 > 10.10.0.6.38620: tcp 1754
09:14:49.103851 IP 10.10.0.6.38620 > 10.10.0.13.3000: tcp 0
09:14:50.114934 IP 10.10.0.6.38620 > 10.10.0.13.3000: tcp 1662
09:14:50.114965 IP 10.10.0.13.3000 > 10.10.0.6.38620: tcp 0
09:14:50.614176 IP 10.10.0.6.38620 > 10.10.0.13.3000: tcp 24
09:14:50.614244 IP 10.10.0.6.38620 > 10.10.0.13.3000: tcp 0
09:14:50.614417 IP 10.10.0.13.3000 > 10.10.0.6.38620: tcp 24
09:14:50.614486 IP 10.10.0.6.38620 > 10.10.0.13.3000: tcp 0
09:14:54.671920 IP 10.10.0.6.48292 > 10.10.0.13.3000: tcp 0
09:14:54.671946 IP 10.10.0.13.3000 > 10.10.0.6.48292: tcp 0
09:14:54.671966 IP 10.10.0.6.48292 > 10.10.0.13.3000: tcp 0
09:14:54.672116 IP 10.10.0.6.48292 > 10.10.0.13.3000: tcp 2223
09:14:54.672127 IP 10.10.0.13.3000 > 10.10.0.6.48292: tcp 0
09:14:54.672865 IP 10.10.0.13.3000 > 10.10.0.6.48292: tcp 218
09:14:54.672914 IP 10.10.0.6.48292 > 10.10.0.13.3000: tcp 0
09:14:54.673181 IP 10.10.0.6.48292 > 10.10.0.13.3000: tcp 64
...
(2) Dump TCP packets on port 3000 of the eth0 interface (standard output)
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i eth0 tcp port 3000 -nn
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
09:15:42.184340 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [S], seq 1403529227, win 64240, options [mss 1460,sackOK,TS val 3018578004 ecr 0,nop,wscale 7], length 0
09:15:42.184360 IP 10.10.0.13.3000 > 10.10.0.6.40720: Flags [S.], seq 2122704047, ack 1403529228, win 65160, options [mss 1460,sackOK,TS val 452260130 ecr 3018578004,nop,wscale 7], length 0
09:15:42.184376 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [.], ack 1, win 502, options [nop,nop,TS val 3018578004 ecr 452260130], length 0
09:15:42.184493 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [P.], seq 1:2224, ack 1, win 502, options [nop,nop,TS val 3018578004 ecr 452260130], length 2223
09:15:42.184501 IP 10.10.0.13.3000 > 10.10.0.6.40720: Flags [.], ack 2224, win 544, options [nop,nop,TS val 452260130 ecr 3018578004], length 0
09:15:42.184784 IP 10.10.0.13.3000 > 10.10.0.6.40720: Flags [P.], seq 1:219, ack 2224, win 544, options [nop,nop,TS val 452260131 ecr 3018578004], length 218
09:15:42.184803 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [.], ack 219, win 501, options [nop,nop,TS val 3018578005 ecr 452260131], length 0
09:15:42.185016 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [P.], seq 2224:2288, ack 219, win 501, options [nop,nop,TS val 3018578005 ecr 452260131], length 64
09:15:42.185110 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [P.], seq 2288:3950, ack 219, win 501, options [nop,nop,TS val 3018578005 ecr 452260131], length 1662
09:15:42.185116 IP 10.10.0.13.3000 > 10.10.0.6.40720: Flags [.], ack 3950, win 570, options [nop,nop,TS val 452260131 ecr 3018578005], length 0
09:15:42.684019 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [P.], seq 3950:3974, ack 219, win 501, options [nop,nop,TS val 3018578504 ecr 452260131], length 24
09:15:42.684071 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [F.], seq 3974, ack 219, win 501, options [nop,nop,TS val 3018578504 ecr 452260131], length 0
09:15:42.684148 IP 10.10.0.13.3000 > 10.10.0.6.40720: Flags [P.], seq 219:4021, ack 3975, win 570, options [nop,nop,TS val 452260630 ecr 3018578504], length 3802
09:15:42.684209 IP 10.10.0.6.40720 > 10.10.0.13.3000: Flags [R], seq 1403533202, win 0, length 0
...
(3) Dump TCP packets on port 3000 of the eth0 interface (decoded output, with name resolution)
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i eth0 tcp port 3000
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
09:16:29.533660 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [S], seq 1742576366, win 64240, options [mss 1460,sackOK,TS val 3018625354 ecr 0,nop,wscale 7], length 0
09:16:29.533684 IP catalog-v2-56c97f6db-8wnq6.3000 > 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986: Flags [S.], seq 3244274317, ack 1742576367, win 65160, options [mss 1460,sackOK,TS val 452307480 ecr 3018625354,nop,wscale 7], length 0
09:16:29.533710 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [.], ack 1, win 502, options [nop,nop,TS val 3018625354 ecr 452307480], length 0
09:16:29.533905 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [P.], seq 1:518, ack 1, win 502, options [nop,nop,TS val 3018625354 ecr 452307480], length 517
09:16:29.533924 IP catalog-v2-56c97f6db-8wnq6.3000 > 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986: Flags [.], ack 518, win 506, options [nop,nop,TS val 452307480 ecr 3018625354], length 0
09:16:29.536973 IP catalog-v2-56c97f6db-8wnq6.3000 > 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986: Flags [P.], seq 1:2172, ack 518, win 506, options [nop,nop,TS val 452307483 ecr 3018625354], length 2171
09:16:29.536993 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [.], ack 2172, win 536, options [nop,nop,TS val 3018625357 ecr 452307483], length 0
09:16:29.540000 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [P.], seq 518:2503, ack 2172, win 536, options [nop,nop,TS val 3018625360 ecr 452307483], length 1985
09:16:29.540010 IP catalog-v2-56c97f6db-8wnq6.3000 > 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986: Flags [.], ack 2503, win 538, options [nop,nop,TS val 452307486 ecr 3018625360], length 0
09:16:29.540126 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [P.], seq 2503:4165, ack 2172, win 536, options [nop,nop,TS val 3018625360 ecr 452307486], length 1662
09:16:29.540132 IP catalog-v2-56c97f6db-8wnq6.3000 > 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986: Flags [.], ack 4165, win 563, options [nop,nop,TS val 452307486 ecr 3018625360], length 0
09:16:30.033685 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [P.], seq 4165:4189, ack 2172, win 536, options [nop,nop,TS val 3018625854 ecr 452307486], length 24
09:16:30.033772 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [F.], seq 4189, ack 2172, win 536, options [nop,nop,TS val 3018625854 ecr 452307486], length 0
09:16:30.033821 IP catalog-v2-56c97f6db-8wnq6.3000 > 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986: Flags [P.], seq 2172:5974, ack 4190, win 563, options [nop,nop,TS val 452307980 ecr 3018625854], length 3802
09:16:30.033894 IP 10-10-0-6.istio-ingressgateway.istio-system.svc.cluster.local.41986 > catalog-v2-56c97f6db-8wnq6.3000: Flags [R], seq 1742580556, win 0, length 0
...
(4) Dump TCP packets on the lo interface
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i lo -nnq
✅ Output
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on lo, link-type EN10MB (Ethernet), snapshot length 262144 bytes
09:17:26.132152 IP 127.0.0.1.58946 > 127.0.0.1.15020: tcp 213
09:17:26.132262 IP 127.0.0.1.15020 > 127.0.0.1.58946: tcp 75
09:17:26.132272 IP 127.0.0.1.58946 > 127.0.0.1.15020: tcp 0
09:17:27.479289 IP 127.0.0.6.40371 > 10.10.0.13.3000: tcp 0
09:17:27.479316 IP 10.10.0.13.3000 > 127.0.0.6.40371: tcp 0
09:17:27.479331 IP 127.0.0.6.40371 > 10.10.0.13.3000: tcp 0
09:17:27.479444 IP 127.0.0.6.40371 > 10.10.0.13.3000: tcp 669
09:17:27.479454 IP 10.10.0.13.3000 > 127.0.0.6.40371: tcp 0
09:17:27.979214 IP 127.0.0.6.40371 > 10.10.0.13.3000: tcp 0
09:17:27.979538 IP 10.10.0.13.3000 > 127.0.0.6.40371: tcp 0
09:17:27.979554 IP 127.0.0.6.40371 > 10.10.0.13.3000: tcp 0
09:17:28.132293 IP 127.0.0.1.50086 > 127.0.0.1.15020: tcp 213
09:17:28.132418 IP 127.0.0.1.15020 > 127.0.0.1.50086: tcp 75
09:17:28.132428 IP 127.0.0.1.50086 > 127.0.0.1.15020: tcp 0
09:17:28.628157 IP 127.0.0.1.51180 > 127.0.0.1.15090: tcp 295
09:17:28.628416 IP 127.0.0.1.46722 > 127.0.0.1.15000: tcp 411
09:17:28.630421 IP 127.0.0.1.15000 > 127.0.0.1.46722: tcp 65483
09:17:28.630434 IP 127.0.0.1.46722 > 127.0.0.1.15000: tcp 0
09:17:28.630469 IP 127.0.0.1.15000 > 127.0.0.1.46722: tcp 65483
...
(5) Dump TCP packets on port 3000 across all interfaces
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i any tcp port 3000 -nnq
✅ Output
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
09:18:33.340849 lo In IP 10.10.0.13.3000 > 127.0.0.6.41829: tcp 0
09:18:33.341031 lo In IP 127.0.0.6.41829 > 10.10.0.13.3000: tcp 0
09:18:33.341056 lo In IP 10.10.0.13.3000 > 127.0.0.6.41829: tcp 0
09:18:34.766974 eth0 In IP 10.10.0.6.52800 > 10.10.0.13.3000: tcp 1662
09:18:34.766984 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.52800: tcp 0
09:18:34.767350 lo In IP 127.0.0.6.52779 > 10.10.0.13.3000: tcp 0
09:18:34.767361 lo In IP 10.10.0.13.3000 > 127.0.0.6.52779: tcp 0
09:18:34.767371 lo In IP 127.0.0.6.52779 > 10.10.0.13.3000: tcp 0
09:18:34.767449 lo In IP 127.0.0.6.52779 > 10.10.0.13.3000: tcp 669
09:18:34.767455 lo In IP 10.10.0.13.3000 > 127.0.0.6.52779: tcp 0
09:18:34.955015 lo In IP 10.10.0.13.3000 > 127.0.0.6.52779: tcp 866
09:18:34.955031 lo In IP 127.0.0.6.52779 > 10.10.0.13.3000: tcp 0
09:18:34.955712 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.52800: tcp 1754
09:18:34.955764 eth0 In IP 10.10.0.6.52800 > 10.10.0.13.3000: tcp 0
09:18:35.713706 lo In IP 10.10.0.13.3000 > 127.0.0.6.58771: tcp 0
09:18:35.713828 lo In IP 127.0.0.6.58771 > 10.10.0.13.3000: tcp 0
09:18:35.713865 lo In IP 10.10.0.13.3000 > 127.0.0.6.58771: tcp 0
09:18:35.970357 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 0
09:18:35.970379 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.57626: tcp 0
09:18:35.970394 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 0
09:18:35.970546 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 2223
09:18:35.970556 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.57626: tcp 0
09:18:35.970898 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.57626: tcp 218
09:18:35.970925 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 0
09:18:35.971175 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 64
09:18:35.971309 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 1662
09:18:35.971317 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.57626: tcp 0
09:18:35.971609 lo In IP 127.0.0.6.52779 > 10.10.0.13.3000: tcp 669
09:18:35.971621 lo In IP 10.10.0.13.3000 > 127.0.0.6.52779: tcp 0
09:18:36.470705 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 24
09:18:36.470744 eth0 In IP 10.10.0.6.57626 > 10.10.0.13.3000: tcp 0
09:18:36.470778 eth0 Out IP 10.10.0.13.3000 > 10.10.0.6.57626: tcp 3802
...
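When the pod serves several clients at once, the -i any capture gets noisy, so narrowing the capture filter to the ingress gateway's address keeps only the hop under investigation. A sketch, assuming the gateway pod IP 10.10.0.6 seen in the dumps above:
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i any 'tcp port 3000 and host 10.10.0.6' -nnq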
5. Print a detailed description of the Pod containing istio-proxy
kubectl describe pod -n istioinaction $CATALOG_POD
✅ Output
Name: catalog-v2-56c97f6db-8wnq6
Namespace: istioinaction
Priority: 0
Service Account: catalog
Node: myk8s-control-plane/172.18.0.2
Start Time: Sat, 17 May 2025 12:35:49 +0900
Labels: app=catalog
pod-template-hash=56c97f6db
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=catalog
service.istio.io/canonical-revision=v2
version=v2
Annotations: kubectl.kubernetes.io/default-container: catalog
kubectl.kubernetes.io/default-logs-container: catalog
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
sidecar.istio.io/status:
{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-env...
Status: Running
IP: 10.10.0.13
IPs:
IP: 10.10.0.13
Controlled By: ReplicaSet/catalog-v2-56c97f6db
Init Containers:
istio-init:
Container ID: containerd://bb7e69bb3cd2279e5c6ab6a3f65f605a888f0251c71b47377102301421f954c7
Image: docker.io/istio/proxyv2:1.17.8
Image ID: docker.io/istio/proxyv2@sha256:d33fd90e25c59f4f7378d1b9dd0eebbb756e03520ab09cf303a43b51b5cb01b8
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
-b
*
-d
15090,15021,15020
--log_output_level=default:info
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 17 May 2025 12:35:50 +0900
Finished: Sat, 17 May 2025 12:35:50 +0900
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 10m
memory: 40Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j477f (ro)
Containers:
catalog:
Container ID: containerd://1d6ad1491d6602dcdcbf543ff6a7d3c37e642a0f8652ac92f95cf00b62626a78
Image: istioinaction/catalog
Image ID: docker.io/istioinaction/catalog@sha256:304226b8b076ec363f72c0cd13d60ae1a913680a9f2e61e33254d1de5b34f8fb
Port: 3000/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 17 May 2025 12:36:02 +0900
Ready: True
Restart Count: 0
Environment:
KUBERNETES_NAMESPACE: istioinaction (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j477f (ro)
istio-proxy:
Container ID: containerd://7a627c8ad03dd2dd33e41b9ea5d206e4641052b81f45b9f6e5c4b7b1d77f36fc
Image: docker.io/istio/proxyv2:1.17.8
Image ID: docker.io/istio/proxyv2@sha256:d33fd90e25c59f4f7378d1b9dd0eebbb756e03520ab09cf303a43b51b5cb01b8
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--log_output_level=default:info
--concurrency
2
State: Running
Started: Sat, 17 May 2025 12:36:03 +0900
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 10m
memory: 40Mi
Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
Environment:
JWT_POLICY: third-party-jwt
PILOT_CERT_PROVIDER: istiod
CA_ADDR: istiod.istio-system.svc:15012
POD_NAME: catalog-v2-56c97f6db-8wnq6 (v1:metadata.name)
POD_NAMESPACE: istioinaction (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
HOST_IP: (v1:status.hostIP)
PROXY_CONFIG: {}
ISTIO_META_POD_PORTS: [
{"name":"http","containerPort":3000,"protocol":"TCP"}
]
ISTIO_META_APP_CONTAINERS: catalog
ISTIO_META_CLUSTER_ID: Kubernetes
ISTIO_META_NODE_NAME: (v1:spec.nodeName)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_META_WORKLOAD_NAME: catalog-v2
ISTIO_META_OWNER: kubernetes://apis/apps/v1/namespaces/istioinaction/deployments/catalog-v2
ISTIO_META_MESH_ID: cluster.local
TRUST_DOMAIN: cluster.local
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/credential-uds from credential-socket (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j477f (ro)
/var/run/secrets/tokens from istio-token (rw)
/var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
/var/run/secrets/workload-spiffe-uds from workload-socket (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
workload-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
credential-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
workload-certs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
kube-api-access-j477f:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
6. Save a port-3000 packet dump to a file with tcpdump
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- sudo tcpdump -i any tcp port 3000 -w /var/lib/istio/data/dump.pcap
7. Verify that the saved .pcap file exists
kubectl exec -it -n istioinaction $CATALOG_POD -c istio-proxy -- ls -l /var/lib/istio/data/
✅ Output
total 144
-rw-r--r-- 1 tcpdump tcpdump 143499 May 17 09:21 dump.pcap
8. Download the .pcap file locally
kubectl cp -n istioinaction -c istio-proxy $CATALOG_POD:var/lib/istio/data/dump.pcap ./dump.pcap
9. Start packet analysis locally with termshark
termshark dump.pcap
10. Wireshark analysis items
(1) Inspect the Client Hello to verify the SNI
The target cluster's domain name is included → outbound_.80_.version-v2_.catalog.istioinaction.svc.cluster.local
(2) Identify the encrypted content (Encrypted Application Data)
(3) Identify the plaintext HTTP communication segments
(4) Add a display filter for TCP RST and FIN/ACK flags (a tshark sketch applying the same filter follows below)
((tcp.stream == 1 and http) or tcp.flags == 0x0011 or tcp.flags == 0x0004)
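The same display filter also works non-interactively with tshark, which is convenient for scripting. A sketch, assuming the dump.pcap file downloaded earlier:
tshark -r dump.pcap -Y '((tcp.stream == 1 and http) or tcp.flags == 0x0011 or tcp.flags == 0x0004)'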
📈 Understanding your application through Envoy telemetry
1. Measure the rate of requests with the DC response flag (aggregated by response code, version, and Pod)
sum(irate(istio_requests_total{reporter="destination", destination_service=~"catalog.istioinaction.svc.cluster.local",response_flags="DC"}[5m])) by(response_code, pod, version)
2. Sort request volume by response code/version/Pod in descending order, based on total request count
sort_desc(sum(irate(istio_requests_total{reporter="destination", destination_service=~"catalog.istioinaction.svc.cluster.local"}[5m]))by(response_code, pod, version))
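The same PromQL can be run without a UI through the Prometheus HTTP API. A sketch, assuming Prometheus is exposed on NodePort 30001 as in this cluster's setup:
curl -sG 'http://127.0.0.1:30001/api/v1/query' \
  --data-urlencode 'query=sum(irate(istio_requests_total{reporter="destination",destination_service=~"catalog.istioinaction.svc.cluster.local",response_flags="DC"}[5m])) by (response_code, pod, version)' | jq '.data.result'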
📊 Troubleshooting Istio components
1. Delete all existing Istio resources
kubectl delete -n istioinaction deploy,svc,gw,vs,dr,envoyfilter --all
✅ Output
deployment.apps "catalog" deleted
deployment.apps "catalog-v2" deleted
service "catalog" deleted
gateway.networking.istio.io "catalog-gateway" deleted
virtualservice.networking.istio.io "catalog-v1-v2" deleted
destinationrule.networking.istio.io "catalog" deleted
2. Redeploy the sample applications
kubectl apply -f services/catalog/kubernetes/catalog.yaml -n istioinaction
kubectl apply -f services/webapp/kubernetes/webapp.yaml -n istioinaction
kubectl apply -f services/webapp/istio/webapp-catalog-gw-vs.yaml -n istioinaction
✅ Output
serviceaccount/catalog unchanged
service/catalog created
deployment.apps/catalog created
serviceaccount/webapp created
service/webapp created
deployment.apps/webapp created
gateway.networking.istio.io/coolstore-gateway created
virtualservice.networking.istio.io/webapp-virtualservice created
3. Send repeated requests through the WebApp (catalog lookup)
while true; do curl -s http://webapp.istioinaction.io:30000/api/catalog ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done
✅ Output
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]2025-05-17 20:29:04
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]2025-05-17 20:29:05
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]2025-05-17 20:29:06
...
4. Check Istio-Proxy's listening service ports
kubectl -n istioinaction exec -it deploy/webapp -c istio-proxy -- netstat -tnl
✅ Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:15090 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15090 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15006 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15006 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15021 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:15021 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:15004 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN
tcp6 0 0 :::15020 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN
5. Check which process owns each Istio-Proxy port (pilot-agent, Envoy)
kubectl -n istioinaction exec -it deploy/webapp -c istio-proxy -- ss -tnlp
✅ Output
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 0.0.0.0:15090 0.0.0.0:* users:(("envoy",pid=34,fd=23))
LISTEN 0 4096 0.0.0.0:15090 0.0.0.0:* users:(("envoy",pid=34,fd=22))
LISTEN 0 4096 0.0.0.0:15001 0.0.0.0:* users:(("envoy",pid=34,fd=36))
LISTEN 0 4096 0.0.0.0:15001 0.0.0.0:* users:(("envoy",pid=34,fd=35))
LISTEN 0 4096 0.0.0.0:15006 0.0.0.0:* users:(("envoy",pid=34,fd=38))
LISTEN 0 4096 0.0.0.0:15006 0.0.0.0:* users:(("envoy",pid=34,fd=37))
LISTEN 0 4096 0.0.0.0:15021 0.0.0.0:* users:(("envoy",pid=34,fd=25))
LISTEN 0 4096 0.0.0.0:15021 0.0.0.0:* users:(("envoy",pid=34,fd=24))
LISTEN 0 4096 127.0.0.1:15004 0.0.0.0:* users:(("pilot-agent",pid=1,fd=11))
LISTEN 0 4096 127.0.0.1:15000 0.0.0.0:* users:(("envoy",pid=34,fd=18))
LISTEN 0 4096 *:15020 *:* users:(("pilot-agent",pid=1,fd=3))
LISTEN 0 4096 *:8080 *:*
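The 15000 admin port owned by Envoy above can be queried directly for the proxy's own view of its configuration. A sketch using Envoy's standard admin endpoints:
# List the listeners Envoy is actually serving
kubectl -n istioinaction exec -it deploy/webapp -c istio-proxy -- curl -s localhost:15000/listeners
# Check the Envoy build and uptime
kubectl -n istioinaction exec -it deploy/webapp -c istio-proxy -- curl -s localhost:15000/server_info | head -20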
6. Check Istio-Proxy's readiness probe information (health-check port 15021)
kubectl describe pod -n istioinaction -l app=webapp | grep Readiness:
✅ Output
Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
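The probe endpoint can be hit directly from inside the proxy container to confirm it answers 200. A sketch against the 15021 health-check port listed above:
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- curl -s -o /dev/null -w '%{http_code}\n' localhost:15021/healthz/ready
# Expected result for a healthy proxy (assumption)
200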
Endpoints for inspecting and troubleshooting the Istio agent
1. Create a Deployment for testing the liveness probe
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liveness-http
  namespace: istioinaction
spec:
  selector:
    matchLabels:
      app: liveness-http
      version: v1
  template:
    metadata:
      labels:
        app: liveness-http
        version: v1
    spec:
      containers:
      - name: liveness-http
        image: docker.io/istio/health:example
        ports:
        - containerPort: 8001
        livenessProbe:
          httpGet:
            path: /foo
            port: 8001
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
# Result
deployment.apps/liveness-http created
2. Verify that the liveness-http pod was created
kubectl get pod -n istioinaction -l app=liveness-http
✅ Output
NAME READY STATUS RESTARTS AGE
liveness-http-58bdfbd469-hdj6t 2/2 Running 0 27s
3. Check the liveness-http pod's details
kubectl describe pod -n istioinaction -l app=liveness-http
✅ Output
...
Containers:
liveness-http:
Container ID: containerd://24eab6ef3bc6328d388eb949e917e20ee57ff238141fb5c341efe74c1e7baa79
Image: docker.io/istio/health:example
Image ID: docker.io/istio/health@sha256:d8a2ff91d87f800b4661bec5aaadf73d33de296d618081fa36a0d1cbfb45d3d5
Port: 8001/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 17 May 2025 20:38:01 +0900
Ready: True
Restart Count: 0
Liveness: http-get http://:15020/app-health/liveness-http/livez delay=5s timeout=1s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qbnsp (ro)
istio-proxy:
Container ID: containerd://b643f3815994f7e7d6bbd507b82c81ba84dedecb7b763e18313270958f04e14a
Image: docker.io/istio/proxyv2:1.17.8
Image ID: docker.io/istio/proxyv2@sha256:d33fd90e25c59f4f7378d1b9dd0eebbb756e03520ab09cf303a43b51b5cb01b8
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--log_output_level=default:info
--concurrency
2
State: Running
Started: Sat, 17 May 2025 20:38:01 +0900
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 10m
memory: 40Mi
Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
Environment:
JWT_POLICY: third-party-jwt
PILOT_CERT_PROVIDER: istiod
CA_ADDR: istiod.istio-system.svc:15012
POD_NAME: liveness-http-58bdfbd469-hdj6t (v1:metadata.name)
POD_NAMESPACE: istioinaction (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
HOST_IP: (v1:status.hostIP)
PROXY_CONFIG: {}
ISTIO_META_POD_PORTS: [
{"containerPort":8001,"protocol":"TCP"}
]
ISTIO_META_APP_CONTAINERS: liveness-http
ISTIO_META_CLUSTER_ID: Kubernetes
ISTIO_META_NODE_NAME: (v1:spec.nodeName)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_META_WORKLOAD_NAME: liveness-http
ISTIO_META_OWNER: kubernetes://apis/apps/v1/namespaces/istioinaction/deployments/liveness-http
ISTIO_META_MESH_ID: cluster.local
TRUST_DOMAIN: cluster.local
ISTIO_KUBE_APP_PROBERS: {"/app-health/liveness-http/livez":{"httpGet":{"path":"/foo","port":8001,"scheme":"HTTP"},"timeoutSeconds":1}}
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/credential-uds from credential-socket (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qbnsp (ro)
/var/run/secrets/tokens from istio-token (rw)
/var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
/var/run/secrets/workload-spiffe-uds from workload-socket (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
workload-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
credential-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
workload-certs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
kube-api-access-qbnsp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned istioinaction/liveness-http-58bdfbd469-hdj6t to myk8s-control-plane
Normal Pulled 49s kubelet Container image "docker.io/istio/proxyv2:1.17.8" already present on machine
Normal Created 49s kubelet Created container istio-init
Normal Started 49s kubelet Started container istio-init
Normal Pulling 49s kubelet Pulling image "docker.io/istio/health:example"
Normal Pulled 37s kubelet Successfully pulled image "docker.io/istio/health:example" in 11.12867579s (11.128683521s including waiting)
Normal Created 37s kubelet Created container liveness-http
Normal Started 37s kubelet Started container liveness-http
Normal Pulled 37s kubelet Container image "docker.io/istio/proxyv2:1.17.8" already present on machine
Normal Created 37s kubelet Created container istio-proxy
Normal Started 37s kubelet Started container istio-proxy
4. Check the HTTP endpoint configured in the livenessProbe
kubectl get pod -n istioinaction -l app=liveness-http -o json | jq '.items[0].spec.containers[0].livenessProbe.httpGet'
✅ Output
{
"path": "/app-health/liveness-http/livez",
"port": 15020,
"scheme": "HTTP"
}
5. Verify the health check from the Istio-Proxy container
kubectl exec -n istioinaction deploy/liveness-http -c istio-proxy -- curl -s localhost:15020/app-health/liveness-http/livez -v
✅ Output
* Trying 127.0.0.1:15020...
* Connected to localhost (127.0.0.1) port 15020 (#0)
> GET /app-health/liveness-http/livez HTTP/1.1
> Host: localhost:15020
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sat, 17 May 2025 11:41:37 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
6. Clean up after the exercise (delete liveness-http)
kubectl delete deploy liveness-http -n istioinaction
# Result
deployment.apps "liveness-http" deleted
🧑‍✈️ Querying Istio Pilot debug endpoints through the agent
1. Check Istio-Proxy readiness status (health-check port 15020)
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- curl -s localhost:15020/healthz/ready -v
✅ Output
* Trying 127.0.0.1:15020...
* Connected to localhost (127.0.0.1) port 15020 (#0)
> GET /healthz/ready HTTP/1.1
> Host: localhost:15020
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sat, 17 May 2025 11:43:09 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
2. Check WebApp's merged Prometheus statistics (includes istio_agent and envoy metrics)
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- curl -s localhost:15020/stats/prometheus
✅ Output
...
envoy_listener_manager_lds_update_duration_bucket{le="60000"} 3
envoy_listener_manager_lds_update_duration_bucket{le="300000"} 3
envoy_listener_manager_lds_update_duration_bucket{le="600000"} 3
envoy_listener_manager_lds_update_duration_bucket{le="1800000"} 3
envoy_listener_manager_lds_update_duration_bucket{le="3600000"} 3
envoy_listener_manager_lds_update_duration_bucket{le="+Inf"} 3
envoy_listener_manager_lds_update_duration_sum{} 50.6000000000000014210854715202
envoy_listener_manager_lds_update_duration_count{} 3
# TYPE envoy_server_initialization_time_ms histogram
envoy_server_initialization_time_ms_bucket{le="0.5"} 0
envoy_server_initialization_time_ms_bucket{le="1"} 0
envoy_server_initialization_time_ms_bucket{le="5"} 0
envoy_server_initialization_time_ms_bucket{le="10"} 0
envoy_server_initialization_time_ms_bucket{le="25"} 0
envoy_server_initialization_time_ms_bucket{le="50"} 0
envoy_server_initialization_time_ms_bucket{le="100"} 0
envoy_server_initialization_time_ms_bucket{le="250"} 0
envoy_server_initialization_time_ms_bucket{le="500"} 1
envoy_server_initialization_time_ms_bucket{le="1000"} 1
envoy_server_initialization_time_ms_bucket{le="2500"} 1
envoy_server_initialization_time_ms_bucket{le="5000"} 1
envoy_server_initialization_time_ms_bucket{le="10000"} 1
envoy_server_initialization_time_ms_bucket{le="30000"} 1
envoy_server_initialization_time_ms_bucket{le="60000"} 1
envoy_server_initialization_time_ms_bucket{le="300000"} 1
envoy_server_initialization_time_ms_bucket{le="600000"} 1
envoy_server_initialization_time_ms_bucket{le="1800000"} 1
envoy_server_initialization_time_ms_bucket{le="3600000"} 1
envoy_server_initialization_time_ms_bucket{le="+Inf"} 1
envoy_server_initialization_time_ms_sum{} 345
envoy_server_initialization_time_ms_count{} 1
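Because 15020 serves the merged scrape, grepping the output by metric prefix quickly separates agent metrics from Envoy metrics. A sketch based on the istio_agent and envoy prefixes shown above:
# Count Envoy-originated series in the merged output
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- curl -s localhost:15020/stats/prometheus | grep -c '^envoy_'
# Peek at the agent-originated series
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- curl -s localhost:15020/stats/prometheus | grep '^istio_agent' | head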
3. Port-forward for local access to the Istio agent port (15020)
kubectl port-forward deploy/webapp -n istioinaction 15020:15020
✅ Output
Forwarding from 127.0.0.1:15020 -> 15020
Forwarding from [::1]:15020 -> 15020
4. Check Istio agent → Pilot synchronization state (/debug/syncz)
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- curl -s localhost:15004/debug/syncz | jq
✅ Output
{
"versionInfo": "2025-05-17T11:28:09Z/7",
"resources": [
{
"@type": "type.googleapis.com/envoy.service.status.v3.ClientConfig",
"node": {
"id": "istio-egressgateway-85df6b84b7-md4m2.istio-system",
"metadata": {
"CLUSTER_ID": "Kubernetes"
}
},
"genericXdsConfigs": [
{
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
"configStatus": "NOT_SENT"
},
{
"typeUrl": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.core.v3.TypedExtensionConfig",
"configStatus": "NOT_SENT"
}
]
},
{
"@type": "type.googleapis.com/envoy.service.status.v3.ClientConfig",
"node": {
"id": "catalog-6cf4b97d-dkbtp.istioinaction",
"metadata": {
"CLUSTER_ID": "Kubernetes"
}
},
"genericXdsConfigs": [
{
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.core.v3.TypedExtensionConfig",
"configStatus": "NOT_SENT"
}
]
},
{
"@type": "type.googleapis.com/envoy.service.status.v3.ClientConfig",
"node": {
"id": "webapp-7685bcb84-n2452.istioinaction",
"metadata": {
"CLUSTER_ID": "Kubernetes"
}
},
"genericXdsConfigs": [
{
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.core.v3.TypedExtensionConfig",
"configStatus": "NOT_SENT"
}
]
},
{
"@type": "type.googleapis.com/envoy.service.status.v3.ClientConfig",
"node": {
"id": "istio-ingressgateway-6bb8fb6549-2mlk2.istio-system",
"metadata": {
"CLUSTER_ID": "Kubernetes"
}
},
"genericXdsConfigs": [
{
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
"configStatus": "SYNCED"
},
{
"typeUrl": "type.googleapis.com/envoy.config.core.v3.TypedExtensionConfig",
"configStatus": "NOT_SENT"
}
]
}
],
"typeUrl": "istio.io/debug/syncz",
"nonce": "ea044911-5335-48e1-bd50-414a225545ff",
"controlPlane": {
"identifier": "{\"Component\":\"istiod\",\"ID\":\"istiod-8d74787f-6jlfq\",\"Info\":{\"version\":\"1.17.8\",\"revision\":\"a781f9ee6c511d8f22140d8990c31e577b2a9676\",\"golang_version\":\"go1.20.10\",\"status\":\"Clean\",\"tag\":\"1.17.8\"}}"
}
}
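istioctl summarizes the same per-proxy SYNCED/NOT SENT state, so it is a quicker first check before reading /debug/syncz by hand. A sketch, assuming istioctl is on the PATH inside the control-plane node as set up during installation:
docker exec -it myk8s-control-plane istioctl proxy-status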
🧩 Information exposed by Istio Pilot
1. Check Pilot's (istiod) service binding ports (netstat)
kubectl -n istio-system exec -it deploy/istiod -- netstat -tnl
✅ Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:9876 0.0.0.0:* LISTEN
tcp6 0 0 :::15012 :::* LISTEN
tcp6 0 0 :::15014 :::* LISTEN
tcp6 0 0 :::15010 :::* LISTEN
tcp6 0 0 :::15017 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN
2. Check Pilot's process and port mapping (ss -tnlp)
kubectl -n istio-system exec -it deploy/istiod -- ss -tnlp
✅ Output
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.1:9876 0.0.0.0:* users:(("pilot-discovery",pid=1,fd=8))
LISTEN 0 4096 *:15012 *:* users:(("pilot-discovery",pid=1,fd=10))
LISTEN 0 4096 *:15014 *:* users:(("pilot-discovery",pid=1,fd=9))
LISTEN 0 4096 *:15010 *:* users:(("pilot-discovery",pid=1,fd=11))
LISTEN 0 4096 *:15017 *:* users:(("pilot-discovery",pid=1,fd=12))
LISTEN 0 4096 *:8080 *:* users:(("pilot-discovery",pid=1,fd=3))
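Port 15014 above is the monitoring port the Pod annotates for Prometheus scraping, so control-plane metrics can be pulled from it directly. A sketch checking the xDS push counters:
kubectl -n istio-system exec -it deploy/istiod -- curl -s localhost:15014/metrics | grep pilot_xds | head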
3. Check the istiod Pod's status and configuration
kubectl describe pod -n istio-system -l app=istiod
✅ Output
Name: istiod-8d74787f-6jlfq
Namespace: istio-system
Priority: 0
Service Account: istiod
Node: myk8s-control-plane/172.18.0.2
Start Time: Sat, 17 May 2025 12:26:26 +0900
Labels: app=istiod
install.operator.istio.io/owning-resource=unknown
istio=pilot
istio.io/rev=default
operator.istio.io/component=Pilot
pod-template-hash=8d74787f
sidecar.istio.io/inject=false
Annotations: prometheus.io/port: 15014
prometheus.io/scrape: true
sidecar.istio.io/inject: false
Status: Running
IP: 10.10.0.5
IPs:
IP: 10.10.0.5
Controlled By: ReplicaSet/istiod-8d74787f
Containers:
discovery:
Container ID: containerd://a43bf00ced61e81740c73036874569e34bb6c81e5b0c834b560aceb05b77dc5c
Image: docker.io/istio/pilot:1.17.8
Image ID: docker.io/istio/pilot@sha256:cb9e7b1b1c7b8dcea37d5173b87c40f38a5ae7b44799adfdcf8574c57a52ad2c
Ports: 8080/TCP, 15010/TCP, 15017/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
discovery
--monitoringAddr=:15014
--log_output_level=default:info
--domain
cluster.local
--keepaliveMaxServerConnectionAge
30m
State: Running
Started: Sat, 17 May 2025 18:35:49 +0900
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Sat, 17 May 2025 12:26:36 +0900
Finished: Sat, 17 May 2025 18:35:48 +0900
Ready: True
Restart Count: 1
Requests:
cpu: 10m
memory: 100Mi
Readiness: http-get http://:8080/ready delay=1s timeout=5s period=3s #success=1 #failure=3
Environment:
REVISION: default
JWT_POLICY: third-party-jwt
PILOT_CERT_PROVIDER: istiod
POD_NAME: istiod-8d74787f-6jlfq (v1:metadata.name)
POD_NAMESPACE: istio-system (v1:metadata.namespace)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
KUBECONFIG: /var/run/secrets/remote/config
PILOT_TRACE_SAMPLING: 100
PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_OUTBOUND: true
PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_INBOUND: true
ISTIOD_ADDR: istiod.istio-system.svc:15012
PILOT_ENABLE_ANALYSIS: false
CLUSTER_ID: Kubernetes
Mounts:
/etc/cacerts from cacerts (ro)
/var/run/secrets/istio-dns from local-certs (rw)
/var/run/secrets/istiod/ca from istio-csr-ca-configmap (ro)
/var/run/secrets/istiod/tls from istio-csr-dns-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8gkj8 (ro)
/var/run/secrets/remote from istio-kubeconfig (ro)
/var/run/secrets/tokens from istio-token (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
local-certs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
cacerts:
Type: Secret (a volume populated by a Secret)
SecretName: cacerts
Optional: true
istio-kubeconfig:
Type: Secret (a volume populated by a Secret)
SecretName: istio-kubeconfig
Optional: true
istio-csr-dns-cert:
Type: Secret (a volume populated by a Secret)
SecretName: istiod-tls
Optional: true
istio-csr-ca-configmap:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: true
kube-api-access-8gkj8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
4. Port-forward for local access to Pilot's debug endpoints (:8080)
kubectl -n istio-system port-forward deploy/istiod 8080
✅ Output
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
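With the port-forward in place, istiod's debug index lists the available endpoints before you drill into a specific one. A sketch; assuming /debug serves an index page, while individual endpoints such as /debug/registryz return JSON:
curl -s http://localhost:8080/debug
curl -s http://localhost:8080/debug/registryz | jq 'length'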
5. Check ADS (Aggregated Discovery Service) debugging information
curl -s http://localhost:8080/debug/adsz | jq
✅ Output
{
"totalClients": 4,
"clients": [
{
"connectionId": "catalog-6cf4b97d-dkbtp.istioinaction-21",
"connectedAt": "2025-05-17T11:28:01.9191482Z",
"address": "10.10.0.15:43056",
"labels": {
"app": "catalog",
"kubernetes.io/hostname": "myk8s-control-plane",
"pod-template-hash": "6cf4b97d",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "catalog",
"service.istio.io/canonical-revision": "v1",
"topology.istio.io/cluster": "Kubernetes",
"version": "v1"
},
"metadata": {
"PROXY_CONFIG": {
"configPath": "./etc/istio/proxy",
"binaryPath": "/usr/local/bin/envoy",
"serviceCluster": "istio-proxy",
"drainDuration": "45s",
"discoveryAddress": "istiod.istio-system.svc:15012",
"proxyAdminPort": 15000,
"controlPlaneAuthPolicy": "MUTUAL_TLS",
"statNameLength": 189,
"concurrency": 2,
"tracing": {
"zipkin": {
"address": "zipkin.istio-system:9411"
}
},
"statusPort": 15020,
"terminationDrainDuration": "5s"
},
"ISTIO_VERSION": "1.17.8",
"LABELS": {
"app": "catalog",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "catalog",
"service.istio.io/canonical-revision": "v1",
"version": "v1"
},
"ANNOTATIONS": {
"kubectl.kubernetes.io/default-container": "catalog",
"kubectl.kubernetes.io/default-logs-container": "catalog",
"kubernetes.io/config.seen": "2025-05-17T11:28:00.358696466Z",
"kubernetes.io/config.source": "api",
"prometheus.io/path": "/stats/prometheus",
"prometheus.io/port": "15020",
"prometheus.io/scrape": "true",
"sidecar.istio.io/status": "{\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"workload-socket\",\"credential-socket\",\"workload-certs\",\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istio-token\",\"istiod-ca-cert\"],\"imagePullSecrets\":null,\"revision\":\"default\"}"
},
"INSTANCE_IPS": "10.10.0.15",
"NAMESPACE": "istioinaction",
"NODE_NAME": "myk8s-control-plane",
"WORKLOAD_NAME": "catalog",
"INTERCEPTION_MODE": "REDIRECT",
"SERVICE_ACCOUNT": "catalog",
"MESH_ID": "cluster.local",
"CLUSTER_ID": "Kubernetes",
"POD_PORTS": "[{\"name\":\"http\",\"containerPort\":3000,\"protocol\":\"TCP\"}]",
"ENVOY_STATUS_PORT": 15021,
"ENVOY_PROMETHEUS_PORT": 15090
},
"locality": {},
"watches": {
"type.googleapis.com/envoy.config.cluster.v3.Cluster": [],
"type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment": [
"outbound|80||webapp.istioinaction.svc.cluster.local",
"outbound|80||catalog.istioinaction.svc.cluster.local",
"outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|443||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|80||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|14268||jaeger-collector.istio-system.svc.cluster.local",
"outbound|80||tracing.istio-system.svc.cluster.local",
"outbound|443||istiod.istio-system.svc.cluster.local",
"outbound|15012||istiod.istio-system.svc.cluster.local",
"outbound|53||kube-dns.kube-system.svc.cluster.local",
"outbound|14250||jaeger-collector.istio-system.svc.cluster.local",
"outbound|443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9411||zipkin.istio-system.svc.cluster.local",
"outbound|443||kubernetes.default.svc.cluster.local",
"outbound|9090||prometheus.istio-system.svc.cluster.local",
"outbound|9090||kiali.istio-system.svc.cluster.local",
"outbound|3000||grafana.istio-system.svc.cluster.local",
"outbound|80||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|16685||tracing.istio-system.svc.cluster.local",
"outbound|15014||istiod.istio-system.svc.cluster.local",
"outbound|15010||istiod.istio-system.svc.cluster.local",
"outbound|9153||kube-dns.kube-system.svc.cluster.local",
"outbound|20001||kiali.istio-system.svc.cluster.local",
"outbound|9411||jaeger-collector.istio-system.svc.cluster.local",
"outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local"
],
"type.googleapis.com/envoy.config.listener.v3.Listener": [],
"type.googleapis.com/envoy.config.route.v3.RouteConfiguration": [
"16685",
"9090",
"grafana.istio-system.svc.cluster.local:3000",
"15014",
"15010",
"20001",
"80",
"kube-dns.kube-system.svc.cluster.local:9153",
"jaeger-collector.istio-system.svc.cluster.local:14268",
"9411",
"jaeger-collector.istio-system.svc.cluster.local:14250",
"istio-ingressgateway.istio-system.svc.cluster.local:15021"
]
}
},
{
"connectionId": "istio-egressgateway-85df6b84b7-md4m2.istio-system-24",
"connectedAt": "2025-05-17T11:37:54.871176993Z",
"address": "10.10.0.7:59422",
"labels": {
"app": "istio-egressgateway",
"chart": "gateways",
"heritage": "Tiller",
"install.operator.istio.io/owning-resource": "unknown",
"istio": "egressgateway",
"istio.io/rev": "default",
"kubernetes.io/hostname": "myk8s-control-plane",
"operator.istio.io/component": "EgressGateways",
"pod-template-hash": "85df6b84b7",
"release": "istio",
"service.istio.io/canonical-name": "istio-egressgateway",
"service.istio.io/canonical-revision": "latest",
"sidecar.istio.io/inject": "false",
"topology.istio.io/cluster": "Kubernetes"
},
"metadata": {
"PROXY_CONFIG": {
"configPath": "./etc/istio/proxy",
"binaryPath": "/usr/local/bin/envoy",
"serviceCluster": "istio-proxy",
"drainDuration": "45s",
"discoveryAddress": "istiod.istio-system.svc:15012",
"proxyAdminPort": 15000,
"controlPlaneAuthPolicy": "MUTUAL_TLS",
"statNameLength": 189,
"tracing": {
"zipkin": {
"address": "zipkin.istio-system:9411"
}
},
"statusPort": 15020,
"terminationDrainDuration": "5s"
},
"ISTIO_VERSION": "1.17.8",
"LABELS": {
"app": "istio-egressgateway",
"chart": "gateways",
"heritage": "Tiller",
"install.operator.istio.io/owning-resource": "unknown",
"istio": "egressgateway",
"istio.io/rev": "default",
"operator.istio.io/component": "EgressGateways",
"release": "istio",
"service.istio.io/canonical-name": "istio-egressgateway",
"service.istio.io/canonical-revision": "latest",
"sidecar.istio.io/inject": "false"
},
"ANNOTATIONS": {
"kubernetes.io/config.seen": "2025-05-17T03:26:38.211822242Z",
"kubernetes.io/config.source": "api",
"prometheus.io/path": "/stats/prometheus",
"prometheus.io/port": "15020",
"prometheus.io/scrape": "true",
"sidecar.istio.io/inject": "false"
},
"INSTANCE_IPS": "10.10.0.7",
"NAMESPACE": "istio-system",
"NODE_NAME": "myk8s-control-plane",
"WORKLOAD_NAME": "istio-egressgateway",
"SERVICE_ACCOUNT": "istio-egressgateway-service-account",
"MESH_ID": "cluster.local",
"CLUSTER_ID": "Kubernetes",
"UNPRIVILEGED_POD": "true",
"ENVOY_STATUS_PORT": 15021,
"ENVOY_PROMETHEUS_PORT": 15090
},
"locality": {},
"watches": {
"type.googleapis.com/envoy.config.cluster.v3.Cluster": [],
"type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment": [
"outbound|80||webapp.istioinaction.svc.cluster.local",
"outbound|80||catalog.istioinaction.svc.cluster.local",
"outbound|80||tracing.istio-system.svc.cluster.local",
"outbound|9090||kiali.istio-system.svc.cluster.local",
"outbound|14268||jaeger-collector.istio-system.svc.cluster.local",
"outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|443||istiod.istio-system.svc.cluster.local",
"outbound|80||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|20001||kiali.istio-system.svc.cluster.local",
"outbound|80||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9411||zipkin.istio-system.svc.cluster.local",
"outbound|3000||grafana.istio-system.svc.cluster.local",
"outbound|15014||istiod.istio-system.svc.cluster.local",
"outbound|9153||kube-dns.kube-system.svc.cluster.local",
"outbound|443||kubernetes.default.svc.cluster.local",
"outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9090||prometheus.istio-system.svc.cluster.local",
"outbound|15010||istiod.istio-system.svc.cluster.local",
"outbound|15012||istiod.istio-system.svc.cluster.local",
"outbound|16685||tracing.istio-system.svc.cluster.local",
"outbound|14250||jaeger-collector.istio-system.svc.cluster.local",
"outbound|9411||jaeger-collector.istio-system.svc.cluster.local",
"outbound|53||kube-dns.kube-system.svc.cluster.local",
"outbound|443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|443||istio-egressgateway.istio-system.svc.cluster.local"
],
"type.googleapis.com/envoy.config.listener.v3.Listener": []
}
},
{
"connectionId": "istio-ingressgateway-6bb8fb6549-2mlk2.istio-system-23",
"connectedAt": "2025-05-17T11:37:08.20952824Z",
"address": "10.10.0.6:47376",
"labels": {
"app": "istio-ingressgateway",
"chart": "gateways",
"heritage": "Tiller",
"install.operator.istio.io/owning-resource": "unknown",
"istio": "ingressgateway",
"istio.io/rev": "default",
"kubernetes.io/hostname": "myk8s-control-plane",
"operator.istio.io/component": "IngressGateways",
"pod-template-hash": "6bb8fb6549",
"release": "istio",
"service.istio.io/canonical-name": "istio-ingressgateway",
"service.istio.io/canonical-revision": "latest",
"sidecar.istio.io/inject": "false",
"topology.istio.io/cluster": "Kubernetes"
},
"metadata": {
"PROXY_CONFIG": {
"configPath": "./etc/istio/proxy",
"binaryPath": "/usr/local/bin/envoy",
"serviceCluster": "istio-proxy",
"drainDuration": "45s",
"discoveryAddress": "istiod.istio-system.svc:15012",
"proxyAdminPort": 15000,
"controlPlaneAuthPolicy": "MUTUAL_TLS",
"statNameLength": 189,
"tracing": {
"zipkin": {
"address": "zipkin.istio-system:9411"
}
},
"statusPort": 15020,
"terminationDrainDuration": "5s"
},
"ISTIO_VERSION": "1.17.8",
"LABELS": {
"app": "istio-ingressgateway",
"chart": "gateways",
"heritage": "Tiller",
"install.operator.istio.io/owning-resource": "unknown",
"istio": "ingressgateway",
"istio.io/rev": "default",
"operator.istio.io/component": "IngressGateways",
"release": "istio",
"service.istio.io/canonical-name": "istio-ingressgateway",
"service.istio.io/canonical-revision": "latest",
"sidecar.istio.io/inject": "false"
},
"ANNOTATIONS": {
"kubernetes.io/config.seen": "2025-05-17T03:26:38.207585091Z",
"kubernetes.io/config.source": "api",
"prometheus.io/path": "/stats/prometheus",
"prometheus.io/port": "15020",
"prometheus.io/scrape": "true",
"sidecar.istio.io/inject": "false"
},
"INSTANCE_IPS": "10.10.0.6",
"NAMESPACE": "istio-system",
"NODE_NAME": "myk8s-control-plane",
"WORKLOAD_NAME": "istio-ingressgateway",
"SERVICE_ACCOUNT": "istio-ingressgateway-service-account",
"MESH_ID": "cluster.local",
"CLUSTER_ID": "Kubernetes",
"UNPRIVILEGED_POD": "true",
"ENVOY_STATUS_PORT": 15021,
"ENVOY_PROMETHEUS_PORT": 15090
},
"locality": {},
"watches": {
"type.googleapis.com/envoy.config.cluster.v3.Cluster": [],
"type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment": [
"outbound|80||webapp.istioinaction.svc.cluster.local",
"outbound|80||catalog.istioinaction.svc.cluster.local",
"outbound|15014||istiod.istio-system.svc.cluster.local",
"outbound|9411||zipkin.istio-system.svc.cluster.local",
"outbound|443||istiod.istio-system.svc.cluster.local",
"outbound|80||tracing.istio-system.svc.cluster.local",
"outbound|53||kube-dns.kube-system.svc.cluster.local",
"outbound|20001||kiali.istio-system.svc.cluster.local",
"outbound|14268||jaeger-collector.istio-system.svc.cluster.local",
"outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|3000||grafana.istio-system.svc.cluster.local",
"outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|15010||istiod.istio-system.svc.cluster.local",
"outbound|443||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|9090||prometheus.istio-system.svc.cluster.local",
"outbound|80||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|9411||jaeger-collector.istio-system.svc.cluster.local",
"outbound|14250||jaeger-collector.istio-system.svc.cluster.local",
"outbound|443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9153||kube-dns.kube-system.svc.cluster.local",
"outbound|15012||istiod.istio-system.svc.cluster.local",
"outbound|443||kubernetes.default.svc.cluster.local",
"outbound|16685||tracing.istio-system.svc.cluster.local",
"outbound|9090||kiali.istio-system.svc.cluster.local",
"outbound|80||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local"
],
"type.googleapis.com/envoy.config.listener.v3.Listener": [],
"type.googleapis.com/envoy.config.route.v3.RouteConfiguration": [
"http.8080"
]
}
},
{
"connectionId": "webapp-7685bcb84-n2452.istioinaction-22",
"connectedAt": "2025-05-17T11:28:08.438869239Z",
"address": "10.10.0.16:58574",
"labels": {
"app": "webapp",
"kubernetes.io/hostname": "myk8s-control-plane",
"pod-template-hash": "7685bcb84",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "webapp",
"service.istio.io/canonical-revision": "latest",
"topology.istio.io/cluster": "Kubernetes"
},
"metadata": {
"PROXY_CONFIG": {
"configPath": "./etc/istio/proxy",
"binaryPath": "/usr/local/bin/envoy",
"serviceCluster": "istio-proxy",
"drainDuration": "45s",
"discoveryAddress": "istiod.istio-system.svc:15012",
"proxyAdminPort": 15000,
"controlPlaneAuthPolicy": "MUTUAL_TLS",
"statNameLength": 189,
"concurrency": 2,
"tracing": {
"zipkin": {
"address": "zipkin.istio-system:9411"
}
},
"statusPort": 15020,
"terminationDrainDuration": "5s"
},
"ISTIO_VERSION": "1.17.8",
"LABELS": {
"app": "webapp",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "webapp",
"service.istio.io/canonical-revision": "latest"
},
"ANNOTATIONS": {
"kubectl.kubernetes.io/default-container": "webapp",
"kubectl.kubernetes.io/default-logs-container": "webapp",
"kubernetes.io/config.seen": "2025-05-17T11:28:00.509096896Z",
"kubernetes.io/config.source": "api",
"prometheus.io/path": "/stats/prometheus",
"prometheus.io/port": "15020",
"prometheus.io/scrape": "true",
"sidecar.istio.io/status": "{\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"workload-socket\",\"credential-socket\",\"workload-certs\",\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istio-token\",\"istiod-ca-cert\"],\"imagePullSecrets\":null,\"revision\":\"default\"}"
},
"INSTANCE_IPS": "10.10.0.16",
"NAMESPACE": "istioinaction",
"NODE_NAME": "myk8s-control-plane",
"WORKLOAD_NAME": "webapp",
"INTERCEPTION_MODE": "REDIRECT",
"SERVICE_ACCOUNT": "webapp",
"MESH_ID": "cluster.local",
"CLUSTER_ID": "Kubernetes",
"POD_PORTS": "[{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}]",
"ENVOY_STATUS_PORT": 15021,
"ENVOY_PROMETHEUS_PORT": 15090
},
"locality": {},
"watches": {
"type.googleapis.com/envoy.config.cluster.v3.Cluster": [],
"type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment": [
"outbound|80||webapp.istioinaction.svc.cluster.local",
"outbound|443||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|15014||istiod.istio-system.svc.cluster.local",
"outbound|9090||prometheus.istio-system.svc.cluster.local",
"outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9090||kiali.istio-system.svc.cluster.local",
"outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9411||jaeger-collector.istio-system.svc.cluster.local",
"outbound|9411||zipkin.istio-system.svc.cluster.local",
"outbound|443||kubernetes.default.svc.cluster.local",
"outbound|16685||tracing.istio-system.svc.cluster.local",
"outbound|80||catalog.istioinaction.svc.cluster.local",
"outbound|15010||istiod.istio-system.svc.cluster.local",
"outbound|20001||kiali.istio-system.svc.cluster.local",
"outbound|80||istio-egressgateway.istio-system.svc.cluster.local",
"outbound|80||tracing.istio-system.svc.cluster.local",
"outbound|443||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|14250||jaeger-collector.istio-system.svc.cluster.local",
"outbound|14268||jaeger-collector.istio-system.svc.cluster.local",
"outbound|53||kube-dns.kube-system.svc.cluster.local",
"outbound|3000||grafana.istio-system.svc.cluster.local",
"outbound|15012||istiod.istio-system.svc.cluster.local",
"outbound|80||istio-ingressgateway.istio-system.svc.cluster.local",
"outbound|9153||kube-dns.kube-system.svc.cluster.local",
"outbound|443||istiod.istio-system.svc.cluster.local"
],
"type.googleapis.com/envoy.config.listener.v3.Listener": [],
"type.googleapis.com/envoy.config.route.v3.RouteConfiguration": [
"20001",
"jaeger-collector.istio-system.svc.cluster.local:14250",
"grafana.istio-system.svc.cluster.local:3000",
"istio-ingressgateway.istio-system.svc.cluster.local:15021",
"kube-dns.kube-system.svc.cluster.local:9153",
"80",
"15014",
"9090",
"15010",
"16685",
"9411",
"jaeger-collector.istio-system.svc.cluster.local:14268"
]
}
}
]
}
6. Trigger an XDS configuration push to every managed proxy
curl -s http://localhost:8080/debug/adsz\?push\=true
✅ Output
Pushed to 4 servers
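istiod exposes several other debug endpoints on the same port. As a quick sketch (assuming the same port-forward to istiod is still active), these dump the service registry and the ingested Istio configuration:
# Service registry known to istiod
curl -s http://localhost:8080/debug/registryz | jq '.[].hostname'
# Number of Istio configuration resources istiod has ingested
curl -s http://localhost:8080/debug/configz | jq 'length'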
7. Check the AuthorizationPolicy objects applied in each namespace
curl -s http://localhost:8080/debug/authorizationz | jq
✅ Output
{
"authorization_policies": {
"namespace_to_policies": {},
"root_namespace": "istio-system"
}
}
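The list is empty because no AuthorizationPolicy exists yet. As a minimal sketch (a hypothetical deny-all policy, not part of the lab), applying one and re-running the query should populate namespace_to_policies:
# Hypothetical "allow-nothing" policy: an empty spec denies all requests in the namespace
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: istioinaction
spec: {}
EOF
curl -s http://localhost:8080/debug/authorizationz | jq
kubectl delete authorizationpolicy allow-nothing -n istioinaction   # clean up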
8. Check the XDS sync status between Pilot and every managed proxy
curl -s http://localhost:8080/debug/syncz | jq
✅ Output
[
{
"cluster_id": "Kubernetes",
"proxy": "webapp-7685bcb84-n2452.istioinaction",
"istio_version": "1.17.8",
"cluster_sent": "677c1678-3ec6-4ee4-9fd9-19d454106fc1",
"cluster_acked": "677c1678-3ec6-4ee4-9fd9-19d454106fc1",
"listener_sent": "7d524ffe-bd0e-411b-ab2c-a64a5b5bcc6e",
"listener_acked": "7d524ffe-bd0e-411b-ab2c-a64a5b5bcc6e",
"route_sent": "2cdff0e1-5fff-44c4-835c-d6edd741f4ac",
"route_acked": "2cdff0e1-5fff-44c4-835c-d6edd741f4ac",
"endpoint_sent": "2a912cbe-0228-4723-b454-27441bcdfbbf",
"endpoint_acked": "2a912cbe-0228-4723-b454-27441bcdfbbf"
},
{
"cluster_id": "Kubernetes",
"proxy": "istio-ingressgateway-6bb8fb6549-2mlk2.istio-system",
"istio_version": "1.17.8",
"cluster_sent": "c6a45349-0714-447d-b881-51d2846c90cc",
"cluster_acked": "c6a45349-0714-447d-b881-51d2846c90cc",
"listener_sent": "04e5a01d-5246-47a0-83bb-e9eed734ea46",
"listener_acked": "04e5a01d-5246-47a0-83bb-e9eed734ea46",
"route_sent": "de594fba-8f9b-4517-be0e-3efde37ac020",
"route_acked": "de594fba-8f9b-4517-be0e-3efde37ac020",
"endpoint_sent": "63d26817-50cf-4c49-9d73-acdcde87ba6d",
"endpoint_acked": "63d26817-50cf-4c49-9d73-acdcde87ba6d"
},
{
"cluster_id": "Kubernetes",
"proxy": "istio-egressgateway-85df6b84b7-md4m2.istio-system",
"istio_version": "1.17.8",
"cluster_sent": "9793b982-3efb-45ee-806f-efaf5524f9b4",
"cluster_acked": "9793b982-3efb-45ee-806f-efaf5524f9b4",
"listener_sent": "40336e9b-3ed8-4ff7-99d5-6d741c3b2c82",
"listener_acked": "40336e9b-3ed8-4ff7-99d5-6d741c3b2c82",
"endpoint_sent": "12eb1d9c-3ce1-4d2f-a70f-2f141045ba86",
"endpoint_acked": "12eb1d9c-3ce1-4d2f-a70f-2f141045ba86"
},
{
"cluster_id": "Kubernetes",
"proxy": "catalog-6cf4b97d-dkbtp.istioinaction",
"istio_version": "1.17.8",
"cluster_sent": "fb76ad32-aec1-486d-b8c4-fac1db9dc87a",
"cluster_acked": "fb76ad32-aec1-486d-b8c4-fac1db9dc87a",
"listener_sent": "da34e716-5173-47c9-a7a3-0b04c94b32a7",
"listener_acked": "da34e716-5173-47c9-a7a3-0b04c94b32a7",
"route_sent": "ad24c45b-b76f-498b-87d0-ee53f75b4325",
"route_acked": "ad24c45b-b76f-498b-87d0-ee53f75b4325",
"endpoint_sent": "a0e05dab-c6e1-4e8f-aa14-1fb8cbb85a63",
"endpoint_acked": "a0e05dab-c6e1-4e8f-aa14-1fb8cbb85a63"
}
]
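A proxy is in sync when every *_sent nonce matches its *_acked counterpart, so a jq filter (a sketch) surfaces only the proxies that have not yet acknowledged the latest push:
curl -s http://localhost:8080/debug/syncz | jq '.[]
  | select(.cluster_sent != .cluster_acked
        or .listener_sent != .listener_acked
        or .route_sent != .route_acked
        or .endpoint_sent != .endpoint_acked)'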
9. Port-forward the Pilot debug port (9876)
kubectl -n istio-system port-forward deploy/istiod 9876
✅ Output
Forwarding from 127.0.0.1:9876 -> 9876
Forwarding from [::1]:9876 -> 9876
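Port 9876 serves istiod's ControlZ introspection UI. With the port-forward active you can open it in a browser; alternatively, istioctl can set up the tunnel and open the page in one step:
open http://localhost:9876   # macOS; use xdg-open on Linux
istioctl dashboard controlz deployment/istiod.istio-system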
🔄 Tear down and reinstall for the next lab
1. Delete the Kind cluster
kind delete cluster --name myk8s
# Result
Deleting cluster "myk8s" ...
Deleted nodes: ["myk8s-control-plane"]
2. Create the Kind cluster
kind create cluster --name myk8s --image kindest/node:v1.23.17 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample Application (istio-ingressgateway) HTTP
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # Sample Application (istio-ingressgateway) HTTPS
    hostPort: 30005
  - containerPort: 30006 # TCP Route
    hostPort: 30006
  - containerPort: 30007 # kube-ops-view
    hostPort: 30007
  extraMounts: # optional, can be omitted
  - hostPath: /home/devshin/workspace/istio/istio-in-action/book-source-code-master # set to your own pwd path
    containerPath: /istiobook
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.0.0/22
EOF
✅ Output
Creating cluster "myk8s" ...
✓ Ensuring node image (kindest/node:v1.23.17) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-myk8s"
You can now use your cluster with:
kubectl cluster-info --context kind-myk8s
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
3. Verify cluster creation
docker ps
✅ Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
873b6a125d02 kindest/node:v1.23.17 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:30000-30007->30000-30007/tcp, 127.0.0.1:38461->6443/tcp myk8s-control-plane
4. Install basic tools on the node
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree psmisc lsof wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'
✅ Output
...
Setting up bind9-libs:amd64 (1:9.18.33-1~deb12u2) ...
Setting up openssh-client (1:9.2p1-2+deb12u6) ...
Setting up libxext6:amd64 (2:1.3.4-1+b1) ...
Setting up dbus-daemon (1.14.10-1~deb12u1) ...
Setting up libnet1:amd64 (1.1.6+dfsg-3.2) ...
Setting up libpcap0.8:amd64 (1.10.3-1) ...
Setting up dbus (1.14.10-1~deb12u1) ...
invoke-rc.d: policy-rc.d denied execution of start.
/usr/sbin/policy-rc.d returned 101, not running 'start dbus.service'
Setting up libgdbm-compat4:amd64 (1.23-3) ...
Setting up xauth (1:1.1.2-1) ...
Setting up bind9-host (1:9.18.33-1~deb12u2) ...
Setting up libperl5.36:amd64 (5.36.0-7+deb12u2) ...
Setting up tcpdump (4.99.3-1) ...
Setting up ngrep (1.47+ds1-5+b1) ...
Setting up perl (5.36.0-7+deb12u2) ...
Setting up bind9-dnsutils (1:9.18.33-1~deb12u2) ...
Setting up dnsutils (1:9.18.33-1~deb12u2) ...
Setting up liberror-perl (0.17029-2) ...
Setting up git (1:2.39.5-0+deb12u2) ...
Processing triggers for libc-bin (2.36-9+deb12u4) ...
5. Install kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30007 --set env.TZ="Asia/Seoul" --namespace kube-system
✅ Output
"geek-cookbook" already exists with the same configuration, skipping
NAME: kube-ops-view
LAST DEPLOYED: Sat May 17 21:08:52 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services kube-ops-view)
export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
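Since the chart pins the NodePort to 30007 and the Kind node maps that port to the host, the UI should be reachable directly (a convenience sketch; the scale fragment is optional):
open "http://localhost:30007/#scale=1.5"   # macOS; use xdg-open on Linux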
6. Install metrics-server
(1) Install metrics-server with Helm
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system
✅ Output
"metrics-server" already exists with the same configuration, skipping
NAME: metrics-server
LAST DEPLOYED: Sat May 17 21:09:31 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server *
***********************************************************************
Chart version: 3.12.2
App version: 0.7.2
Image tag: registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
(2) Check the resource status
kubectl get all -n kube-system -l app.kubernetes.io/instance=metrics-server
✅ Output
NAME READY STATUS RESTARTS AGE
pod/metrics-server-65bb6f47b6-28tck 0/1 Running 0 27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/metrics-server ClusterIP 10.200.2.144 <none> 443/TCP 27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/metrics-server 0/1 1 0 27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/metrics-server-65bb6f47b6 1 1 0 27s
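The pod still shows 0/1 READY right after install; waiting on the rollout and then querying the metrics API confirms it is serving (a quick check, assuming the default timeout is enough):
kubectl rollout status deployment/metrics-server -n kube-system --timeout=120s
kubectl top node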
7. Enter the myk8s-control-plane node
docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/#
8. Install istioctl
root@myk8s-control-plane:/# export ISTIOV=1.17.8
echo 'export ISTIOV=1.17.8' >> /root/.bashrc
curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
✅ Output
Downloading istio-1.17.8 from https://github.com/istio/istio/releases/download/1.17.8/istio-1.17.8-linux-amd64.tar.gz ...
Istio 1.17.8 download complete!
The Istio release archive has been downloaded to the istio-1.17.8 directory.
To configure the istioctl client tool for your workstation,
add the /istio-1.17.8/bin directory to your environment path variable with:
export PATH="$PATH:/istio-1.17.8/bin"
Begin the Istio pre-installation check by running:
istioctl x precheck
Try Istio in ambient mode
https://istio.io/latest/docs/ambient/getting-started/
Try Istio in sidecar mode
https://istio.io/latest/docs/setup/getting-started/
Install guides for ambient mode
https://istio.io/latest/docs/ambient/install/
Install guides for sidecar mode
https://istio.io/latest/docs/setup/install/
Need more information? Visit https://istio.io/latest/docs/
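A quick sanity check that the copied binary matches the pinned version and that the cluster passes the pre-install checks:
root@myk8s-control-plane:/# istioctl version --remote=false
istioctl x precheck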
9. Deploy the control plane with the demo profile
root@myk8s-control-plane:/# istioctl install --set profile=demo --set values.global.proxy.privileged=true --set meshConfig.accessLogEncoding=JSON -y
✅ Output
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete Making this installation the default for injection and validation.
Thank you for installing Istio 1.17. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/hMHGiwZHPU7UQRWe9
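Optionally, verify the installation and confirm the control-plane pods came up:
root@myk8s-control-plane:/# istioctl verify-install
kubectl get pod -n istio-system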
10. Install the addons
root@myk8s-control-plane:/# kubectl apply -f istio-$ISTIOV/samples/addons
✅ Output
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
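The addon deployments take a moment to pull images; waiting on each rollout avoids racing the next steps (a convenience sketch):
root@myk8s-control-plane:/# for d in kiali grafana prometheus jaeger; do kubectl rollout status deployment/$d -n istio-system; done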
11. Exit the control-plane container
root@myk8s-control-plane:/# exit
exit
12. Set up the namespace for the lab
kubectl create ns istioinaction
kubectl label namespace istioinaction istio-injection=enabled
kubectl get ns --show-labels
✅ Output
namespace/istioinaction created
namespace/istioinaction labeled
NAME STATUS AGE LABELS
default Active 11m kubernetes.io/metadata.name=default
istio-system Active 3m57s kubernetes.io/metadata.name=istio-system
istioinaction Active 1s istio-injection=enabled,kubernetes.io/metadata.name=istioinaction
kube-node-lease Active 11m kubernetes.io/metadata.name=kube-node-lease
kube-public Active 11m kubernetes.io/metadata.name=kube-public
kube-system Active 11m kubernetes.io/metadata.name=kube-system
local-path-storage Active 11m kubernetes.io/metadata.name=local-path-storage
13. Patch the istio-ingressgateway NodePorts and traffic policy
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 8080, "nodePort": 30000}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 8443, "nodePort": 30005}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec":{"externalTrafficPolicy": "Local"}}'
kubectl describe svc -n istio-system istio-ingressgateway
✅ Output
service/istio-ingressgateway patched
service/istio-ingressgateway patched
service/istio-ingressgateway patched
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=unknown
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.17.8
release=istio
Annotations: <none>
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.3.109
IPs: 10.200.3.109
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 30830/TCP
Endpoints: 10.10.0.8:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30000/TCP
Endpoints: 10.10.0.8:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 30005/TCP
Endpoints: 10.10.0.8:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 30019/TCP
Endpoints: 10.10.0.8:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30880/TCP
Endpoints: 10.10.0.8:15443
Session Affinity: None
External Traffic Policy: Local
Internal Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 0s service-controller LoadBalancer -> NodePort
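With http2 remapped to NodePort 30000 and the Kind hostPort mapping in place, the gateway is reachable from the host. A quick connectivity check (a sketch; until a Gateway resource is bound to port 80 the connection may simply be refused, afterwards expect a 200 or 404):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:30000/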
14. Istio observability tools: patch the NodePort services in one pass
kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'
✅ Output
service/prometheus patched
service/grafana patched
service/kiali patched
service/tracing patched
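Each UI is now pinned to its host port (a convenience sketch; use xdg-open on Linux):
open http://localhost:30001   # Prometheus
open http://localhost:30002   # Grafana
open http://localhost:30003   # Kiali
open http://localhost:30004   # Tracing (Jaeger)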
🛠️ The four golden signals of the control plane
1. Set up the lab: deploy the catalog service and Istio resources
kubectl -n istioinaction apply -f services/catalog/kubernetes/catalog.yaml
kubectl -n istioinaction apply -f ch11/catalog-virtualservice.yaml
kubectl -n istioinaction apply -f ch11/catalog-gateway.yaml
✅ Output
serviceaccount/catalog created
service/catalog created
deployment.apps/catalog created
virtualservice.networking.istio.io/catalog created
gateway.networking.istio.io/catalog-gateway created
2. Check the deployed resources (Deployment, Gateway, VirtualService)
kubectl get deploy,gw,vs -n istioinaction
✅ Output
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/catalog 1/1 1 1 34s
NAME AGE
gateway.networking.istio.io/catalog-gateway 33s
NAME GATEWAYS HOSTS AGE
virtualservice.networking.istio.io/catalog ["catalog-gateway"] ["catalog.istioinaction.io"] 33s
3. Hit the catalog service in a loop (curl + timestamp)
while true; do curl -s http://catalog.istioinaction.io:30000/items ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done
✅ Output
[
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
},
{
"id": 2,
"color": "cyan",
"department": "Clothing",
"name": "Atlas Shirt",
"price": "127.00"
},
{
"id": 3,
"color": "teal",
"department": "Clothing",
"name": "Small Metal Shoes",
"price": "232.00"
},
{
"id": 4,
"color": "red",
"department": "Watches",
"name": "Red Dragon Watch",
"price": "232.00"
}
]2025-05-17 21:28:04
...
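For the hostname to resolve, catalog.istioinaction.io must point at the local node. If it is not already configured, add a hosts entry (an assumption about your environment; adjust as needed):
echo '127.0.0.1 catalog.istioinaction.io' | sudo tee -a /etc/hosts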
4. Check the Istio control-plane metrics (Pilot's /metrics endpoint)
kubectl exec -it -n istio-system deploy/istiod -- curl localhost:15014/metrics
✅ Output
...
# HELP pilot_xds_send_time Total time in seconds Pilot takes to send generated configuration.
# TYPE pilot_xds_send_time histogram
pilot_xds_send_time_bucket{le="0.01"} 129
pilot_xds_send_time_bucket{le="0.1"} 129
pilot_xds_send_time_bucket{le="1"} 129
pilot_xds_send_time_bucket{le="3"} 129
pilot_xds_send_time_bucket{le="5"} 129
pilot_xds_send_time_bucket{le="10"} 129
pilot_xds_send_time_bucket{le="20"} 129
pilot_xds_send_time_bucket{le="30"} 129
pilot_xds_send_time_bucket{le="+Inf"} 129
pilot_xds_send_time_sum 0.0026577
pilot_xds_send_time_count 129
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 2.64
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.073741816e+09
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 22
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.32882432e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.74748401013e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 5.124386816e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP sidecar_injection_requests_total Total number of sidecar injection requests.
# TYPE sidecar_injection_requests_total counter
sidecar_injection_requests_total 1
# HELP sidecar_injection_success_total Total number of successful sidecar injection requests.
# TYPE sidecar_injection_success_total counter
sidecar_injection_success_total 1
# HELP webhook_patch_attempts_total Webhook patching attempts
# TYPE webhook_patch_attempts_total counter
webhook_patch_attempts_total{name="istio-revision-tag-default"} 1
webhook_patch_attempts_total{name="istio-sidecar-injector"} 3
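The full dump is long; grepping for the push-related metric families makes the golden-signal metrics used below easier to spot (a sketch):
kubectl exec -it -n istio-system deploy/istiod -- curl -s localhost:15014/metrics | grep -E '^pilot_(xds_pushes|proxy_convergence_time|proxy_queue_time|xds_push_time)'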
⏱️ Latency: the time needed to update the data plane
1. Duplicate the existing Proxy Push Time panel
2. Proxy Queue Time: PromQL - pilot_proxy_queue_time
histogram_quantile(0.5, sum(rate(pilot_proxy_queue_time_bucket[1m])) by (le))
histogram_quantile(0.9, sum(rate(pilot_proxy_queue_time_bucket[1m])) by (le))
histogram_quantile(0.99, sum(rate(pilot_proxy_queue_time_bucket[1m])) by (le))
histogram_quantile(0.999, sum(rate(pilot_proxy_queue_time_bucket[1m])) by (le))
3. XDS Push Time: PromQL - pilot_xds_push_time_bucket
histogram_quantile(0.5, sum(rate(pilot_xds_push_time_bucket[1m])) by (le))
histogram_quantile(0.9, sum(rate(pilot_xds_push_time_bucket[1m])) by (le))
histogram_quantile(0.99, sum(rate(pilot_xds_push_time_bucket[1m])) by (le))
histogram_quantile(0.999, sum(rate(pilot_xds_push_time_bucket[1m])) by (le))
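The end-to-end distribution time, from a triggering event until a proxy has received the update, follows the same quantile pattern via pilot_proxy_convergence_time (assuming the default Pilot metrics):
histogram_quantile(0.99, sum(rate(pilot_proxy_convergence_time_bucket[1m])) by (le))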
💥 Saturation: how full is the control plane (CPU and memory)?
1. Measure Istiod container CPU usage with the container_cpu_usage_seconds_total metric
# Cumulative cpu time consumed by the container in core-seconds
container_cpu_usage_seconds_total
container_cpu_usage_seconds_total{container="discovery"}
container_cpu_usage_seconds_total{container="discovery", pod=~"istiod-.*|istio-pilot-.*"}
sum(irate(container_cpu_usage_seconds_total{container="discovery", pod=~"istiod-.*|istio-pilot-.*"}[1m]))
2. Measure Istiod process CPU usage with the process_cpu_seconds_total metric
# Total user and system CPU time spent in seconds
process_cpu_seconds_total{app="istiod"}
irate(process_cpu_seconds_total{app="istiod"}[1m])
3. Check real-time resource usage of the Istiod pod with kubectl top
kubectl top pod -n istio-system -l app=istiod --containers=true
✅ Output
POD NAME CPU(cores) MEMORY(bytes)
istiod-8d74787f-q8md7 discovery 4m 72Mi
4. Check resource usage of the catalog workload and its sidecar with kubectl top
kubectl top pod -n istioinaction --containers=true
✅ Output
POD NAME CPU(cores) MEMORY(bytes)
catalog-6cf4b97d-66qhb catalog 3m 26Mi
catalog-6cf4b97d-66qhb istio-proxy 6m 49Mi
📶 Traffic: how much load is the control plane under?
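Incoming load (configuration changes and push triggers) and outgoing load (XDS pushes to the proxies) can be read from the standard Pilot counters, for example:
# Incoming: config updates and full-push triggers
sum(rate(pilot_inbound_updates{app="istiod"}[1m]))
sum(rate(pilot_push_triggers{app="istiod"}[1m]))
# Outgoing: XDS pushes broken down by type (cds/eds/lds/rds)
sum(irate(pilot_xds_pushes{app="istiod"}[1m])) by (type)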
❌ Errors: what is the control plane's failure rate?
# Check the Legend for each query pattern
Legend(Rejected CDS Configs) : sum(pilot_xds_cds_reject{app="istiod"}) or (absent(pilot_xds_cds_reject{app="istiod"}) - 1)
Legend(Rejected EDS Configs) : sum(pilot_xds_eds_reject{app="istiod"}) or (absent(pilot_xds_eds_reject{app="istiod"}) - 1)
Legend(Rejected RDS Configs) : sum(pilot_xds_rds_reject{app="istiod"}) or (absent(pilot_xds_rds_reject{app="istiod"}) - 1)
Legend(Rejected LDS Configs) : sum(pilot_xds_lds_reject{app="istiod"}) or (absent(pilot_xds_lds_reject{app="istiod"}) - 1)
Legend(Write Timeouts) : sum(rate(pilot_xds_write_timeout{app="istiod"}[1m]))
Legend(Internal Errors) : sum(rate(pilot_total_xds_internal_errors{app="istiod"}[1m]))
Legend(Config Rejection Rate) : sum(rate(pilot_total_xds_rejects{app="istiod"}[1m]))
Legend(Push Context Errors) : sum(rate(pilot_xds_push_context_errors{app="istiod"}[1m]))
Legend(Push Timeouts) : sum(rate(pilot_xds_write_timeout{app="istiod"}[1m]))
🧰 Preparing the workspace (dummy workloads and bulk resource deployment lab)
1. Initial deployment of the catalog service and Istio resources
kubectl -n istioinaction apply -f services/catalog/kubernetes/catalog.yaml
kubectl -n istioinaction apply -f ch11/catalog-virtualservice.yaml
kubectl -n istioinaction apply -f ch11/catalog-gateway.yaml
2. Inspect the dummy workload definition (10 replicas)
cat ch11/sleep-dummy-workloads.yaml
✅ Output
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 10
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: governmentpaas/curl-ssl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true
---
3. Deploy the dummy workload and service resources
kubectl -n istioinaction apply -f ch11/sleep-dummy-workloads.yaml
✅ Output
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created
4. Check all catalog + sleep workload resources (Deploy, Service, Pod)
kubectl get deploy,svc,pod -n istioinaction
✅ Output
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/catalog 1/1 1 1 38m
deployment.apps/sleep 10/10 10 10 35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/catalog ClusterIP 10.200.3.223 <none> 80/TCP 38m
service/sleep ClusterIP 10.200.1.114 <none> 80/TCP 35s
NAME READY STATUS RESTARTS AGE
pod/catalog-6cf4b97d-66qhb 2/2 Running 0 38m
pod/sleep-6f8cfb8c8f-5dbrv 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-7ghkg 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-9qsf4 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-c9z9p 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-gtpz4 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-h48cb 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-njgml 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-p6kdz 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-qrsnx 2/2 Running 0 35s
pod/sleep-6f8cfb8c8f-sgpxx 2/2 Running 0 35s
5. Check proxy sync status (istioctl proxy-status)
docker exec -it myk8s-control-plane istioctl proxy-status
✅ Output
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
catalog-6cf4b97d-66qhb.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
istio-egressgateway-85df6b84b7-8l2b4.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-8d74787f-q8md7 1.17.8
istio-ingressgateway-6bb8fb6549-479j2.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-5dbrv.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-7ghkg.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-9qsf4.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-c9z9p.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-gtpz4.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-h48cb.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-njgml.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-p6kdz.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-qrsnx.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
sleep-6f8cfb8c8f-sgpxx.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-8d74787f-q8md7 1.17.8
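Rows showing STALE would indicate proxies whose last push has not been acknowledged; a quick filter (a sketch) keeps the output useful as the workload count grows:
docker exec -it myk8s-control-plane sh -c "istioctl proxy-status | grep -E 'STALE|ERROR' || echo 'all proxies synced'"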
6. Check the Envoy cluster information for the sleep service
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/catalog.istioinaction --fqdn sleep.istioinaction.svc.cluster.local
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
sleep.istioinaction.svc.cluster.local 80 - outbound EDS
7. List the endpoints Envoy knows about
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/catalog.istioinaction
✅ Output
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.10.0.10:9411 HEALTHY OK outbound|9411||jaeger-collector.istio-system.svc.cluster.local
10.10.0.10:9411 HEALTHY OK outbound|9411||zipkin.istio-system.svc.cluster.local
10.10.0.10:14250 HEALTHY OK outbound|14250||jaeger-collector.istio-system.svc.cluster.local
10.10.0.10:14268 HEALTHY OK outbound|14268||jaeger-collector.istio-system.svc.cluster.local
10.10.0.10:16685 HEALTHY OK outbound|16685||tracing.istio-system.svc.cluster.local
10.10.0.10:16686 HEALTHY OK outbound|80||tracing.istio-system.svc.cluster.local
10.10.0.11:3000 HEALTHY OK outbound|3000||grafana.istio-system.svc.cluster.local
10.10.0.12:9090 HEALTHY OK outbound|9090||kiali.istio-system.svc.cluster.local
10.10.0.12:20001 HEALTHY OK outbound|20001||kiali.istio-system.svc.cluster.local
10.10.0.13:9090 HEALTHY OK outbound|9090||prometheus.istio-system.svc.cluster.local
10.10.0.14:3000 HEALTHY OK inbound|3000||
10.10.0.14:3000 HEALTHY OK outbound|80||catalog.istioinaction.svc.cluster.local
10.10.0.15:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.16:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.17:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.18:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.19:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.20:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.21:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.22:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.23:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.24:80 HEALTHY OK outbound|80||sleep.istioinaction.svc.cluster.local
10.10.0.3:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.10.0.3:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.10.0.4:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.10.0.4:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.10.0.5:8080 HEALTHY OK outbound|8080||kube-ops-view.kube-system.svc.cluster.local
10.10.0.6:10250 HEALTHY OK outbound|443||metrics-server.kube-system.svc.cluster.local
10.10.0.7:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.10.0.7:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.10.0.7:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.10.0.7:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.10.0.8:8080 HEALTHY OK outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.10.0.8:8443 HEALTHY OK outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.10.0.8:15021 HEALTHY OK outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.10.0.8:15443 HEALTHY OK outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.10.0.8:31400 HEALTHY OK outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.10.0.9:8080 HEALTHY OK outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.10.0.9:8443 HEALTHY OK outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.200.3.89:9411 HEALTHY OK zipkin
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
172.18.0.2:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
unix://./etc/istio/proxy/XDS HEALTHY OK xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket HEALTHY OK sds-grpc
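The list can be narrowed to a single Envoy cluster with --cluster, which is handy for checking just the ten sleep replicas (a sketch):
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/catalog.istioinaction --cluster "outbound|80||sleep.istioinaction.svc.cluster.local"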
📈 Load testing with a large-scale resource deployment
1. Bulk-deploy 600 resources (200 each of Service, Gateway, and VirtualService)
kubectl -n istioinaction apply -f ch11/resources-600.yaml
✅ Output
gateway.networking.istio.io/service510b-0-gateway created
service/service510b-0 created
virtualservice.networking.istio.io/service510b-0 created
gateway.networking.istio.io/servicec7dc-1-gateway created
service/servicec7dc-1 created
virtualservice.networking.istio.io/servicec7dc-1 created
gateway.networking.istio.io/serviced38c-2-gateway created
service/serviced38c-2 created
virtualservice.networking.istio.io/serviced38c-2 created
gateway.networking.istio.io/service7a3e-3-gateway created
service/service7a3e-3 created
virtualservice.networking.istio.io/service7a3e-3 created
gateway.networking.istio.io/serviced79b-4-gateway created
service/serviced79b-4 created
virtualservice.networking.istio.io/serviced79b-4 created
gateway.networking.istio.io/service810d-5-gateway created
service/service810d-5 created
virtualservice.networking.istio.io/service810d-5 created
gateway.networking.istio.io/servicefce2-6-gateway created
service/servicefce2-6 created
virtualservice.networking.istio.io/servicefce2-6 created
gateway.networking.istio.io/serviceffaa-7-gateway created
service/serviceffaa-7 created
virtualservice.networking.istio.io/serviceffaa-7 created
gateway.networking.istio.io/servicedd66-8-gateway created
service/servicedd66-8 created
virtualservice.networking.istio.io/servicedd66-8 created
gateway.networking.istio.io/serviced166-9-gateway created
service/serviced166-9 created
virtualservice.networking.istio.io/serviced166-9 created
gateway.networking.istio.io/service9089-10-gateway created
service/service9089-10 created
virtualservice.networking.istio.io/service9089-10 created
gateway.networking.istio.io/service0924-11-gateway created
service/service0924-11 created
virtualservice.networking.istio.io/service0924-11 created
gateway.networking.istio.io/service8933-12-gateway created
service/service8933-12 created
virtualservice.networking.istio.io/service8933-12 created
gateway.networking.istio.io/service913b-13-gateway created
service/service913b-13 created
virtualservice.networking.istio.io/service913b-13 created
gateway.networking.istio.io/service7e13-14-gateway created
service/service7e13-14 created
virtualservice.networking.istio.io/service7e13-14 created
gateway.networking.istio.io/service2976-15-gateway created
service/service2976-15 created
virtualservice.networking.istio.io/service2976-15 created
gateway.networking.istio.io/service533b-16-gateway created
service/service533b-16 created
...
2. Count the services now in the namespace
kubectl get svc -n istioinaction --no-headers=true | wc -l
✅ Output
202
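202 matches expectations: the 200 dummy services plus catalog and sleep. The Gateway and VirtualService counts can be checked the same way (a sketch):
kubectl get gateways.networking.istio.io -n istioinaction --no-headers=true | wc -l
kubectl get virtualservices.networking.istio.io -n istioinaction --no-headers=true | wc -l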