🔧 Preparing the AWS EKS Environment (Arch Linux)
1. System Update
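No command was captured for this step; a minimal sketch, assuming yay as the AUR helper used throughout this post (any pacman wrapper works the same way):

yay -Syu   # sync the package databases and upgrade every installed package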
2. Install AWS CLI
yay -S aws-cli-v2
# Verify installation
aws --version
# Result
aws-cli/2.24.0 Python/3.13.1 Linux/6.13.1-arch1-1 source/x86_64.arch
3. Install eksctl
The latest eksctl release is required (it supports Kubernetes 1.31).
ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then ARCH="amd64"; fi
if [[ "$ARCH" == "aarch64" ]]; then ARCH="arm64"; fi
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_${ARCH}.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
sudo chmod +x /usr/local/bin/eksctl
# Verify installation
eksctl version
# Result
0.203.0
4. Install kubectl
yay -S kubectl
# Verify installation
kubectl version --client=true
# Result
Client Version: v1.32.1
Kustomize Version: v5.5.0
5. Install Helm
yay -S helm
# Verify installation
helm version
# Result
version.BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
6. Install krew and Add Plugins
yay -S krew
# Verify installation
kubectl krew version
# Result
OPTION            VALUE
GitTag            v0.4.4
GitCommit         343e657
IndexURI          https://github.com/kubernetes-sigs/krew-index.git
BasePath          /home/devshin/.krew
IndexPath         /home/devshin/.krew/index/default
InstallPath       /home/devshin/.krew/store
BinPath           /home/devshin/.krew/bin
DetectedPlatform  linux/amd64
echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.zshrc
echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
source ~/.zshrc   # when using zsh
source ~/.bashrc  # when using bash
kubectl krew install neat get-all df-pv stern
kubectl krew list
# Result
PLUGIN   VERSION
df-pv    v0.3.0
get-all  v1.3.8
krew     v0.4.4
neat     v2.0.4
stern    v1.32.0
7. Install kube-ps1
yay -S kube-ps1
echo "source /opt/kube-ps1/kube-ps1.sh" >> ~/.bashrc
echo 'PS1="[\u@\h \W $(kube_ps1)]$ "' >> ~/.bashrc
source ~/.bashrc
echo "source /opt/kube-ps1/kube-ps1.sh" >> ~/.zshrc
echo 'PROMPT="$(kube_ps1)$PROMPT"' >> ~/.zshrc
source ~/.zshrc
8. Install kubectx
No commands were captured for this step; a sketch is shown below.
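A minimal sketch, assuming the kubectx package (it ships both the kubectx and kubens binaries; kubens is used later to switch namespaces):

yay -S kubectx
# Verify installation
kubectx --help   # switch between kubeconfig contexts
kubens --help    # switch between namespaces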
9. Install kubecolor and Set Aliases
yay -S kubecolor
echo "alias k=kubectl" >> ~/.bashrc
echo "alias kubectl=kubecolor" >> ~/.bashrc
echo 'complete -F __start_kubectl kubecolor' >> ~/.bashrc
source ~/.bashrc
echo "alias k=kubectl" >> ~/.zshrc
echo "alias kubectl=kubecolor" >> ~/.zshrc
echo 'autoload -U compinit && compinit' >> ~/.zshrc
echo "compdef kubecolor=kubectl" >> ~/.zshrc
source ~/.zshrc
10. (Optional) Install the AWS Session Manager Plugin
yay -S aws-session-manager-plugin
# Verify installation
session-manager-plugin --version
# Result
1.2.707.0
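The plugin is invoked by the AWS CLI rather than run directly; a typical usage sketch (the instance ID is a placeholder):

aws ssm start-session --target i-0123456789abcdef0   # opens a shell session without SSH or an open port 22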
11. (Optional) Install sshpass
yay -S sshpass
# Verify installation
sshpass -V
# Result
sshpass 1.10
(C) 2006-2011 Lingnu Open Source Consulting Ltd.
(C) 2015-2016, 2021-2022 Shachar Shemesh
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.
Using "assword" as the default password prompt indicator.
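A typical invocation sketch (host and password are placeholders), for feeding a password to ssh non-interactively:

sshpass -p 'MyPassw0rd' ssh -o StrictHostKeyChecking=no user@192.0.2.10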
12. (Optional) Install Wireshark
Used for packet capture and for inspecting packet contents in captured files.
yay -S wireshark-qt
# Wireshark may not be able to capture with regular user privileges, so add your user to the wireshark group
sudo usermod -aG wireshark $USER
newgrp wireshark  # apply the group change immediately
# Verify installation
wireshark --version
# Result
Wireshark 4.4.3.
1. Configure AWS Credentials
aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXX
Default region name [None]: ap-northeast-2
Default output format [None]: json
- AWS Access Key ID: enter the issued Access Key ID
- AWS Secret Access Key: enter the Secret Access Key
- Default region name: ap-northeast-2 (Seoul region; any region can be chosen)
- Default output format: json or yaml (default: json)
2. Deploy the Practice Environment with CloudFormation
cd Downloads
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-2week.yaml
aws cloudformation deploy --template-file ~/Downloads/myeks-2week.yaml \
    --stack-name myeks --parameter-overrides KeyName=kp-aews SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2
# Result
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - myeks
A single t2.small EC2 instance named operator-host is deployed.
3. Connect to the Deployed EC2 Instance
(1) After the CloudFormation stack finishes deploying, print the operator EC2 IP
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text
✅ Output
(2) SSH into the operator EC2
# ssh -i kp-aews.pem ec2-user@15.165.15.90
ssh -i kp-aews.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
4. Key Management with ssh-agent
(1) Start ssh-agent and add the key
eval "$(ssh-agent -s)"
# Result: Agent pid <pid>
ssh-add ~/Downloads/kp-aews.pem
(2) SSH access (passwordless, without specifying the key file)
ssh ec2-user@15.165.15.90
5. Configure EKS Access on the Operator EC2 (operator-host)
(1) Configure AWS IAM credentials
[root@operator-host ~]# aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXX
Default region name [None]: ap-northeast-2
Default output format [None]: json
(2) Verify the IAM user (get-caller-identity)
[root@operator-host ~]# aws sts get-caller-identity --query Arn
"arn:aws:iam::378102432899:user/eks-user"
6. Verify the Peering Connection
myeks-VPC and operator-VPC are connected through a VPC peering connection.
myeks-VPC route table
- From the myeks-VPC public subnets, traffic destined for 172.20.0.0/16 reaches the operator VPC directly over the peering connection, without going through the internet.
operator-VPC route table
- From the operator-VPC public subnet, traffic destined for 192.168.0.0/16 reaches myeks-VPC directly over the peering connection, without going through the internet.
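A quick CLI sketch for confirming these routes (the JMESPath filter simply picks out routes that point at a peering connection; it assumes the AWS CLI profile configured above):

aws ec2 describe-route-tables \
    --query 'RouteTables[].Routes[?VpcPeeringConnectionId != `null`].[DestinationCidrBlock,VpcPeeringConnectionId]' \
    --output text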
🚢 Deploying EKS with eksctl
1. Set the Cluster Name Variable
export CLUSTER_NAME=myeks
2. Set the myeks-VPC Variable
export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" --query 'Vpcs[*].VpcId' --output text)
echo $VPCID
✅ Output
The myeks-VPC ID, as listed in the AWS console under VPC > Your VPCs: vpc-02725f328b257230c
3. Set the myeks Public Subnet Variables
Assign the IDs of the public subnets in AZ1, AZ2, and AZ3 to variables.
export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet3" --query "Subnets[0].[SubnetId]" --output text)
echo $PubSubnet1 $PubSubnet2 $PubSubnet3
✅ Output
subnet-0b53307d4d544b3bf subnet-019936bd535b68960 subnet-0d19099a6b73555f8
Subnet CIDR ranges
- AZ1: 192.168.1.0/24, AZ2: 192.168.2.0/24, AZ3: 192.168.3.0/24
4. Create the myeks.yaml File
Items to change: VPC ID, Public Subnet1 ID, Public Subnet2 ID, Public Subnet3 ID, SSH Public Key Name
echo $VPCID
vpc-02725f328b257230c

echo $PubSubnet1 $PubSubnet2 $PubSubnet3
subnet-0b53307d4d544b3bf subnet-019936bd535b68960 subnet-0d19099a6b73555f8
myeks.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.31"
kubernetesNetworkConfig:
  ipFamily: IPv4
iam:
  vpcResourceControllerPolicy: true
  withOIDC: true
accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true # if you only want to allow private access to the cluster
    publicAccess: true # if you want to allow public access to the cluster
  id: vpc-02725f328b257230c # change to your environment's value
  manageSharedNodeSecurityGroupRules: true # if you want to manage the rules of the shared node security group
  nat:
    gateway: Disable
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: subnet-0b53307d4d544b3bf # change to your environment's value
      ap-northeast-2b:
        az: ap-northeast-2b
        cidr: 192.168.2.0/24
        id: subnet-019936bd535b68960 # change to your environment's value
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.3.0/24
        id: subnet-0d19099a6b73555f8 # change to your environment's value
addons:
- name: vpc-cni # no version is specified so it deploys the default version
  version: latest # auto discovers the latest available
  attachPolicyARNs: # attach IAM policies to the add-on's service account
  - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
  configurationValues: |-
    enableNetworkPolicy: "true"
- name: kube-proxy
  version: latest
- name: coredns
  version: latest
- name: metrics-server
  version: latest
privateCluster:
  enabled: false
  skipEndpointCreation: false
managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false # Disable ALB Ingress Controller
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: true # Enable AWS Load Balancer Controller
      certManager: true # Enable cert-manager
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: true # Enable ExternalDNS
      fsx: false
      imageBuilder: true
      xRay: false
  instanceSelector: {}
  instanceType: t3.medium
  preBootstrapCommands:
  # install additional packages
  - "dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y"
  # disable hyperthreading
  - "for n in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | cut -s -d, -f2- | tr ',' '\n' | sort -un); do echo 0 > /sys/devices/system/cpu/cpu${n}/online; done"
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: ng1
  maxSize: 3
  minSize: 3
  name: ng1
  privateNetworking: false
  releaseVersion: ""
  securityGroups:
    withLocal: null
    withShared: null
  ssh:
    allow: true
    publicKeyName: kp-aews # change to your environment's value
  tags:
    alpha.eksctl.io/nodegroup-name: ng1
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 30
  volumeThroughput: 125
  volumeType: gp3
5. Deploy EKS with the Final yaml
(1) Set the kubeconfig file path
export KUBECONFIG=~/Downloads/kubeconfig
(2) Deploy
eksctl create cluster -f myeks.yaml --verbose 4
Deployment complete.
6. Verify Basic Information After Deployment
(1) API server endpoint and OIDC information
The API Server Endpoint and the OpenID Connect provider URL (oidc) are shown.
(2) Compute
The AMI (AL2023) can be checked under Node groups.
(3) Networking
Access is set to Public and Private.
(4) Add-ons
In VPC CNI > Edit, check the IRSA permission setting → verify the corresponding IAM Role.
(5) Access
In IAM Access Entries, verify the credential user that was used during installation.
The AmazonEKSClusterAdminPolicy managed policy is included, so the user can access and manage the cluster from the EKS console.
7. Verify EKS Information
✅ Output (from kubectl cluster-info)
Kubernetes control plane is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
8. Switch the Default Namespace
kubens default
# Result
✔ Active namespace is "default"
9. Check the Current Kubernetes Context
cat $KUBECONFIG | grep current-context
✅ Output
current-context: eks-user@myeks.ap-northeast-2.eksctl.io
10. Rename the Kubernetes Context
kubectl config rename-context "eks-user@myeks.ap-northeast-2.eksctl.io" "eksworkshop"
# Result
Context "eks-user@myeks.ap-northeast-2.eksctl.io" renamed to "eksworkshop".
cat $KUBECONFIG | grep current-context
✅ Output
current-context: eksworkshop
11. Verify Node Information
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
✅ Output
NAME                                               STATUS   ROLES    AGE    VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   169m   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2a
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   169m   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2b
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   169m   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2c
✅ Output (with verbose API logging, presumably kubectl get node -v6)
I0212 15:08:46.728964  137397 loader.go:402] Config loaded from file: /home/devshin/Downloads/kubeconfig
I0212 15:08:46.729339  137397 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0212 15:08:46.729350  137397 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0212 15:08:46.729355  137397 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0212 15:08:46.729359  137397 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0212 15:08:47.170830  137397 round_trippers.go:560] GET https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 436 milliseconds
NAME                                               STATUS   ROLES    AGE    VERSION
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   170m   v1.31.4-eks-aeac579
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   170m   v1.31.4-eks-aeac579
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   170m   v1.31.4-eks-aeac579
12. Verify Pod Information
✅ Output (presumably kubectl get pod -A)
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-6mctb                    2/2     Running   0          173m
kube-system   aws-node-b66dj                    2/2     Running   0          173m
kube-system   aws-node-rf79g                    2/2     Running   0          173m
kube-system   coredns-86f5954566-gqf97          1/1     Running   0          177m
kube-system   coredns-86f5954566-nntgz          1/1     Running   0          177m
kube-system   kube-proxy-jg5qj                  1/1     Running   0          173m
kube-system   kube-proxy-t2sqh                  1/1     Running   0          173m
kube-system   kube-proxy-w96mt                  1/1     Running   0          173m
kube-system   metrics-server-86bbfd75bb-j72mf   1/1     Running   0          177m
kube-system   metrics-server-86bbfd75bb-pbpkd   1/1     Running   0          177m
kubectl get pdb -n kube-system
✅ Output
NAME             MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
coredns          N/A             1                 1                     178m
metrics-server   N/A             1                 1                     178m
13. Verify the Managed Node Group
eksctl get nodegroup --cluster $CLUSTER_NAME
✅ Output
CLUSTER  NODEGROUP  STATUS  CREATED               MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID                ASG NAME                                      TYPE
myeks    ng1        ACTIVE  2025-02-12T03:17:03Z  3         3         3                 t3.medium      AL2023_x86_64_STANDARD  eks-ng1-84ca7c14-790d-ef45-8256-776960f87794  managed
aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name ng1 | jq
✅ Output
{
  "nodegroup": {
    "nodegroupName": "ng1",
    "nodegroupArn": "arn:aws:eks:ap-northeast-2:378102432899:nodegroup/myeks/ng1/84ca7c14-790d-ef45-8256-776960f87794",
    "clusterName": "myeks",
    "version": "1.31",
    "releaseVersion": "1.31.4-20250203",
    "createdAt": "2025-02-12T12:17:03.928000+09:00",
    "modifiedAt": "2025-02-12T15:08:14.235000+09:00",
    "status": "ACTIVE",
    "capacityType": "ON_DEMAND",
    "scalingConfig": {
      "minSize": 3,
      "maxSize": 3,
      "desiredSize": 3
    },
    "instanceTypes": [
      "t3.medium"
    ],
    "subnets": [
      "subnet-0b53307d4d544b3bf",
      "subnet-019936bd535b68960",
      "subnet-0d19099a6b73555f8"
    ],
    "amiType": "AL2023_x86_64_STANDARD",
    "nodeRole": "arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-aJPw1zdjbXYF",
    "labels": {
      "alpha.eksctl.io/cluster-name": "myeks",
      "alpha.eksctl.io/nodegroup-name": "ng1"
    },
    "resources": {
      "autoScalingGroups": [
        {
          "name": "eks-ng1-84ca7c14-790d-ef45-8256-776960f87794"
        }
      ]
    },
    "health": {
      "issues": []
    },
    "updateConfig": {
      "maxUnavailable": 1
    },
    "launchTemplate": {
      "name": "eksctl-myeks-nodegroup-ng1",
      "version": "1",
      "id": "lt-0ec600e4f000289da"
    },
    "tags": {
      "aws:cloudformation:stack-name": "eksctl-myeks-nodegroup-ng1",
      "alpha.eksctl.io/cluster-name": "myeks",
      "alpha.eksctl.io/nodegroup-name": "ng1",
      "aws:cloudformation:stack-id": "arn:aws:cloudformation:ap-northeast-2:378102432899:stack/eksctl-myeks-nodegroup-ng1/c502af50-e8ef-11ef-89e7-022a714bdf75",
      "eksctl.cluster.k8s.io/v1alpha1/cluster-name": "myeks",
      "aws:cloudformation:logical-id": "ManagedNodeGroup",
      "alpha.eksctl.io/nodegroup-type": "managed",
      "alpha.eksctl.io/eksctl-version": "0.203.0"
    }
  }
}
14. Verify EKS Add-ons
eksctl get addon --cluster $CLUSTER_NAME
✅ Output
2025-02-12 15:29:21 [ℹ]  Kubernetes version "1.31" in use by cluster "myeks"
2025-02-12 15:29:21 [ℹ]  getting all addons
2025-02-12 15:29:23 [ℹ]  to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME            VERSION             STATUS  ISSUES  IAMROLE                                                                        UPDATE AVAILABLE  CONFIGURATION VALUES         POD IDENTITY ASSOCIATION ROLES
coredns         v1.11.4-eksbuild.2  ACTIVE  0
kube-proxy      v1.31.3-eksbuild.2  ACTIVE  0
metrics-server  v0.7.2-eksbuild.1   ACTIVE  0
vpc-cni         v1.19.2-eksbuild.5  ACTIVE  0       arn:aws:iam::378102432899:role/eksctl-myeks-addon-vpc-cni-Role1-fGF6qGwGjFyL                     enableNetworkPolicy: "true"
15. Connect to the Managed Node Group (EC2) and Verify Node Information
(1) Check the EC2 instances' IAM Role
An IAM Role is mapped in each EC2 instance's security settings and attached through the EC2 instance profile.
It matches the iam settings under managedNodeGroups and provides the permissions required to run the AWS Load Balancer Controller.
The awsLoadBalancerController, certManager, and externalDNS permissions are enabled.
managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false # Disable ALB Ingress Controller
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: true # Enable AWS Load Balancer Controller
      certManager: true # Enable cert-manager
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: true # Enable ExternalDNS
      fsx: false
      imageBuilder: true
      xRay: false
(2) Check the instances' public IPs
[root@operator-host ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{InstanceID:InstanceId, PublicIPAdd:PublicIpAddress, PrivateIPAdd:PrivateIpAddress, InstanceName:Tags[?Key=='Name']|[0].Value, Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
✅ Output
| -----------------------------------------------------------------------------------------
| DescribeInstances |
+----------------------+-----------------+----------------+-----------------+-----------+
| InstanceID | InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+----------------------+-----------------+----------------+-----------------+-----------+
| i-0d14501e2c8353ae6 | myeks-ng1-Node | 192.168.3.72 | 43.201.115.81 | running |
| i-00db22b426cc7efb5 | operator-host | 172.20.1.100 | 15.165.15.90 | running |
| i-090451779dfb774e9 | myeks-ng1-Node | 192.168.1.193 | 43.202.57.204 | running |
| i-0e8bab88cd0a40ae8 | myeks-ng1-Node | 192.168.2.52 | 15.164.179.214 | running |
+----------------------+-----------------+----------------+-----------------+-----------+
(3) Set the instance public IP variables
export N1=43.202.57.204
export N2=15.164.179.214
export N3=43.201.115.81
echo $N1, $N2, $N3
✅ Output
43.202.57.204, 15.164.179.214, 43.201.115.81
(4) ping test
✅ Output
# Output: ping fails (100% packet loss) — the node security group does not yet allow traffic from the local machine
PING 43.202.57.204 (43.202.57.204) 56(84) bytes of data.
--- 43.202.57.204 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1037ms
(5) Check the managed worker node security groups
- A node group Remote Access security group and the EKS cluster security group exist
- Remote Access security group: sg-08385fad7996593f1
- EKS cluster security group: sg-073b338fccd7776e7
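A sketch for looking the Remote Access security group up from the CLI instead of the console (the wildcard assumes eksctl's usual *remoteAccess* naming for this group):

aws ec2 describe-security-groups \
    --filters Name=group-name,Values='*remoteAccess*' \
    --query 'SecurityGroups[*].[GroupId,GroupName]' --output text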
(6) Configure the Remote Access security group
- Add your home public IP and the operator server's internal IP (172.20.1.100) as source IPs
- Set the Remote Access security group ID variable
export MNSGID=sg-08385fad7996593f1
- Add an inbound rule for your home public IP
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr $(curl -s ipinfo.io/ip)/32
# Result
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-0e2dcc79069298593",
            "GroupId": "sg-08385fad7996593f1",
            "GroupOwnerId": "378102432899",
            "IsEgress": false,
            "IpProtocol": "-1",
            "FromPort": -1,
            "ToPort": -1,
            "CidrIpv4": "182.230.60.93/32",
            "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-0e2dcc79069298593"
        }
    ]
}
- Add an inbound rule for the operator server's internal IP
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr 172.20.1.100/32
# Result
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-062028d793058f3d1",
            "GroupId": "sg-08385fad7996593f1",
            "GroupOwnerId": "378102432899",
            "IsEgress": false,
            "IpProtocol": "-1",
            "FromPort": -1,
            "ToPort": -1,
            "CidrIpv4": "172.20.1.100/32",
            "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-062028d793058f3d1"
        }
    ]
}
(7) ping test
✅ Output
# ping succeeds
PING 43.202.57.204 (43.202.57.204) 56(84) bytes of data.
64 bytes from 43.202.57.204: icmp_seq=1 ttl=115 time=6.78 ms
64 bytes from 43.202.57.204: icmp_seq=2 ttl=115 time=20.1 ms
--- 43.202.57.204 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 6.783/13.453/20.123/6.670 ms
# ping test from the operator host
[root@operator-host ~]# ping -c 2 192.168.1.193
✅ Output
# ping succeeds
PING 192.168.1.193 (192.168.1.193) 56(84) bytes of data.
64 bytes from 192.168.1.193: icmp_seq=1 ttl=127 time=0.227 ms
64 bytes from 192.168.1.193: icmp_seq=2 ttl=127 time=0.498 ms
--- 192.168.1.193 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 0.227/0.362/0.498/0.136 ms
- ping from the operator host to the worker node subnet succeeds
- Access is possible because an inbound rule for the operator VPC source (172.20.1.100/32) was added to the myeks security group
(8) Verify SSH Access to the Worker Nodes
ssh ec2-user@$N1
A newer release of "Amazon Linux" is available.
Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Feb 3 23:59:19 2025 from 52.94.123.246
[ec2-user@ip-192-168-1-193 ~]$
ssh ec2-user@$N2
A newer release of "Amazon Linux" is available.
Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Feb 3 23:59:19 2025 from 52.94.123.246
[ec2-user@ip-192-168-2-52 ~]$
ssh ec2-user@$N3
A newer release of "Amazon Linux" is available.
Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Feb 3 23:59:19 2025 from 52.94.123.246
[ec2-user@ip-192-168-3-72 ~]$
(9) Connect from the Operator EC2 to the Worker Nodes and Check Information
Set the node IP variables (from the operator host the nodes are reached by their private IPs):
[root@operator-host ~]# export N1=192.168.1.193
[root@operator-host ~]# export N2=192.168.2.52
[root@operator-host ~]# export N3=192.168.3.72
[root@operator-host ~]# echo $N1, $N2, $N3
192.168.1.193, 192.168.2.52, 192.168.3.72
ping test to each worker node
[root@operator-host ~]# ping -c 2 $N1
✅ Output
PING 192.168.1.193 (192.168.1.193) 56(84) bytes of data.
64 bytes from 192.168.1.193: icmp_seq=1 ttl=127 time=0.232 ms
64 bytes from 192.168.1.193: icmp_seq=2 ttl=127 time=0.242 ms
--- 192.168.1.193 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1007ms
rtt min/avg/max/mdev = 0.232/0.237/0.242/0.005 ms
16. Verify Node Information
(1) Check each node's host information
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i hostnamectl; echo; done
✅ Output
>> node 43.202.57.204 <<
Static hostname: ip-192-168-1-193.ap-northeast-2.compute.internal
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: ec2e13998d891ffbacc5d853155112da
Boot ID: 3b6c5280edb84574b64c4edc1f352d1e
Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250128
CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
Kernel: Linux 6.1.124-134.200.amzn2023.x86_64
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.medium
Firmware Version: 1.0
>> node 15.164.179.214 <<
Static hostname: ip-192-168-2-52.ap-northeast-2.compute.internal
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: ec2e325020be14105530b18b0e81710a
Boot ID: 1fdda72d00cd465d8074e7c6238882a9
Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250128
CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
Kernel: Linux 6.1.124-134.200.amzn2023.x86_64
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.medium
Firmware Version: 1.0
>> node 43.201.115.81 <<
Static hostname: ip-192-168-3-72.ap-northeast-2.compute.internal
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: ec2951238a61149817871c92befba51d
Boot ID: 9a1cd0af9840421a9234e72a44777415
Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250128
CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
Kernel: Linux 6.1.124-134.200.amzn2023.x86_64
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.medium
Firmware Version: 1.0
(2) Check each node's network interfaces
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done
✅ Output
>> node 43.202.57.204 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:23:93:a6:bc:61 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.1.193/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
valid_lft 3137sec preferred_lft 3137sec
inet6 fe80::23:93ff:fea6:bc61/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: eni01a4864c88a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 56:be:6a:bd:2d:2a brd ff:ff:ff:ff:ff:ff link-netns cni-a59ccd2b-5db2-0159-b78e-e0797b300a23
inet6 fe80::54be:6aff:febd:2d2a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:49:8e:e5:f1:05 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.1.237/24 brd 192.168.1.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::49:8eff:fee5:f105/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: eni61c5a949744@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 52:41:42:c3:03:3f brd ff:ff:ff:ff:ff:ff link-netns cni-74caca86-36e2-a922-2920-c2c8c00e7b43
inet6 fe80::5041:42ff:fec3:33f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
>> node 15.164.179.214 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:c5:5b:d1:77:57 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.2.52/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
valid_lft 3138sec preferred_lft 3138sec
inet6 fe80::4c5:5bff:fed1:7757/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enibce2df30e87@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 4a:c8:23:ab:85:39 brd ff:ff:ff:ff:ff:ff link-netns cni-5408bb28-ad79-a3f3-3a60-9442968852b1
inet6 fe80::48c8:23ff:feab:8539/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:91:4b:4c:9c:e9 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.2.42/24 brd 192.168.2.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::491:4bff:fe4c:9ce9/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: enic99196c7a64@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 8a:00:6e:37:b1:71 brd ff:ff:ff:ff:ff:ff link-netns cni-9343fb30-bdc8-5fab-b46d-3a5db58f8007
inet6 fe80::8800:6eff:fe37:b171/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
>> node 43.201.115.81 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 3144sec preferred_lft 3144sec
inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
(3) Check each node's routing table
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done
✅ Output
>> node 43.202.57.204 <<
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024
192.168.1.59 dev eni61c5a949744 scope link
192.168.1.90 dev eni01a4864c88a scope link
>> node 15.164.179.214 <<
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.52 metric 1024
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.52 metric 1024
192.168.2.72 dev enibce2df30e87 scope link
192.168.2.248 dev enic99196c7a64 scope link
>> node 43.201.115.81 <<
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024
(4) Check each node's cgroup version
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i stat -fc %T /sys/fs/cgroup/; echo; done
✅ Output
>> node 43.202.57.204 <<
cgroup2fs
>> node 15.164.179.214 <<
cgroup2fs
>> node 43.201.115.81 <<
cgroup2fs
(5) Check each node's kubelet configuration
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo cat /etc/kubernetes/kubelet/config.json.d/00-nodeadm.conf | jq; echo; done
✅ Output
>> node 43.202.57.204 <<
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "kind": "KubeletConfiguration",
  "maxPods": 17
}
>> node 15.164.179.214 <<
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "kind": "KubeletConfiguration",
  "maxPods": 17
}
>> node 43.201.115.81 <<
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "kind": "KubeletConfiguration",
  "maxPods": 17
}
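The maxPods value of 17 is not arbitrary: with the default VPC CNI settings it follows from the instance type's ENI limits. For a t3.medium (3 ENIs, 6 private IPv4 addresses per ENI, per AWS's published limits) the usual formula is ENIs × (IPs per ENI − 1) + 2; a quick arithmetic sketch:

# t3.medium: 3 ENIs, 6 private IPv4 addresses per ENI
ENIS=3; IPS_PER_ENI=6
echo $(( ENIS * (IPS_PER_ENI - 1) + 2 ))   # 17 — each ENI's primary IP is unusable for pods; +2 covers host-networked pods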
17. Configure EKS Use from the Operator EC2
(1) Configure IAM credentials
[root@operator-host ~]# aws configure
AWS Access Key ID : XXXXXXXXXXXXXXXXXX
AWS Secret Access Key : XXXXXXXXXXXXXXXXXX
Default region name [ap-northeast-2]: ap-northeast-2
Default output format [json]: json
(2) Verify the IAM identity
[root@operator-host ~]# aws sts get-caller-identity --query Arn
✅ Output
"arn:aws:iam::378102432899:user/eks-user"
(3) Generate the kubeconfig
[root@operator-host ~]# cat ~/.kube/config
cat: /root/.kube/config: No such file or directory
[root@operator-host ~]# aws eks update-kubeconfig --name myeks --user-alias eks-user
Added new context eks-user to /root/.kube/config
(4) Check the kubeconfig contents
(eks-user:N/A) [root@operator-host ~]# cat ~/.kube/config
✅ Output
Includes the EKS cluster information, certificate, and user entry.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJYnRmWjRVZ2Fwdkl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBeU1USXdNekF6TURSYUZ3MHpOVEF5TVRBd016QTRNRFJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM5UVd3L3U5UGhSOVpueEtYU3oxemZmR1p3OHlyNzJXS0lZVHozc1c4c2hXUlhpSzh6UHo4RmVaMHQKU2Q3Ym13RzdTK1JvbXltbTVreVQ3K2F1ZWt5SGZHOExhZzNKRUlMWnBQL2lOaFNGRUsvWWlSYkN0VGY3ZWp0NAovOFlwWG9Ic0pDbXpjZ3ZvQ0dwT0dnZ2dHUFh3dlNBM0lLWW5iRmVYNENTbjVUcmpVWXpURjZYVm0wSVZMT05PCjFIY1lJK0ZxQit2dDVGd3FoOVVFYUsrRHBVa3krWmswdi9FOVJ1ZDU1QzFuWUdsRFJBNnNVS1NoamZORzZ6bS8KcjZ0QUFyUmsvTDN1cGkvajMyVHE5amF2dW1ScHlHNTM2YlhIeUpULzFuU2Y5dTF4ZzNnbThYWWtLZUdXeko3TApQbzc2aHJoUnF1eEVnd0hHaysvNGV1ZzIzQi8xQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRMGpTUFNVN1pqcWFOTTJBa0NJN1FqK2RQV2xqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUtoSXd2cmNOTQpuNS9WS2RsdFBpTjdrcTBIZ0tHdFVMbS8zQnlSZ2daSHY3MkFKNVk3bUdsZUFabUUzOFEyNVV6M2NjT25DOEVsCjNOYnlSdzFGMFdCYnR6TU1iNmxnSDZMWW5TcHZlckROWUZ5SGNQQmZKQmRLd0hyTFhBTXNGUlNCczRUL0Nhdm0KdE4reEZaVkh5azRBbk81VlN6QjJRbmJ5dENqbmRxc3ZjajJOWS9WK3ZQbzJMQzZBckh6TXhodmF2eVRyVEtDLwpCK242c2tQM0FTNy9hT3JEeEhyaHlXQUFLUGpqSXhBeUhsZS9Db1FBQjVoNTFQQ0d3NlFZRlR0N3hyaHVMWjdSCjFMc3FIazlobGRXUDRSdjZ5Skx0a0ErOVRCdEM1Y251Z1d1VHFFUXBQZVExTWRLcTgyY2M1aFlaZXFHdXFTRjgKVU9tcHRXTFVsbVpHCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com
  name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
contexts:
- context:
    cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
    user: eks-user
  name: eks-user
current-context: eks-user
kind: Config
preferences: {}
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - myeks
      - --output
      - json
      command: aws
(5) Resolve the EKS API DNS name
(eks-user:N/A) [root@operator-host ~]# APIDNS=$(aws eks describe-cluster --name myeks | jq -r .cluster.endpoint | cut -d '/' -f 3)
(eks-user:N/A) [root@operator-host ~]# dig +short $APIDNS
✅ Output
43.201.119.98
15.165.73.21
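The operator host gets the endpoint's public IPs because it sits in a different VPC. With privateAccess: true, hosts inside the cluster VPC resolve the same name to the control plane's private ENI IPs; a quick check (a sketch, assuming the node IP variables from step 15(9) are set on the operator host, and dig is available on the node via the bind-utils package installed at bootstrap):

(eks-user:N/A) [root@operator-host ~]# ssh ec2-user@$N1 dig +short $APIDNS   # expect private 192.168.x.x addresses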
(6) Check cluster information
(eks-user:N/A) [root@operator-host ~]# kubectl cluster-info
✅ Output
Kubernetes control plane is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(7) Check node information
(eks-user:N/A) [root@operator-host ~]# kubectl get node -v6
✅ Output
I0212 17:07:06.500164    3826 loader.go:395] Config loaded from file: /root/.kube/config
I0212 17:07:07.311634    3826 round_trippers.go:553] GET https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 805 milliseconds
NAME                                               STATUS   ROLES    AGE     VERSION
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   4h48m   v1.31.4-eks-aeac579
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   4h48m   v1.31.4-eks-aeac579
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   4h48m   v1.31.4-eks-aeac579
🌐 Checking Basic Network Information
1. Check CNI Information
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
✅ Output
amazon-k8s-cni-init:v1.19.2-eksbuild.5
amazon-k8s-cni:v1.19.2-eksbuild.5
amazon
2. Check Node IPs
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
✅ Output
| ------------------------------------------------------------------
| DescribeInstances |
+----------------+-----------------+------------------+----------+
| InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+----------------+-----------------+------------------+----------+
| myeks-ng1-Node| 192.168.3.72 | 43.201.115.81 | running |
| operator-host | 172.20.1.100 | 15.165.15.90 | running |
| myeks-ng1-Node| 192.168.1.193 | 43.202.57.204 | running |
| myeks-ng1-Node| 192.168.2.52 | 15.164.179.214 | running |
+----------------+-----------------+------------------+----------+
3. Check Pod Information
✅ Output (presumably kubectl get pod -A)
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-6mctb                    2/2     Running   0          5h8m
kube-system   aws-node-b66dj                    2/2     Running   0          5h8m
kube-system   aws-node-rf79g                    2/2     Running   0          5h8m
kube-system   coredns-86f5954566-gqf97          1/1     Running   0          5h12m
kube-system   coredns-86f5954566-nntgz          1/1     Running   0          5h12m
kube-system   kube-proxy-jg5qj                  1/1     Running   0          5h8m
kube-system   kube-proxy-t2sqh                  1/1     Running   0          5h8m
kube-system   kube-proxy-w96mt                  1/1     Running   0          5h8m
kube-system   metrics-server-86bbfd75bb-j72mf   1/1     Running   0          5h12m
kube-system   metrics-server-86bbfd75bb-pbpkd   1/1     Running   0          5h12m
4. Check Pod IPs
kubectl get pod -n kube-system -o=custom-columns=NAME:.metadata.name,IP:.status.podIP,STATUS:.status.phase
✅ Output
Why do aws-node-6mctb (192.168.1.193) and kube-proxy-w96mt (192.168.1.193) have the same IP as their node?
Because they run with host networking and therefore share the host's network namespace.
NAME                              IP              STATUS
aws-node-6mctb                    192.168.1.193   Running
aws-node-b66dj                    192.168.2.52    Running
aws-node-rf79g                    192.168.3.72    Running
coredns-86f5954566-gqf97          192.168.1.59    Running
coredns-86f5954566-nntgz          192.168.2.248   Running
kube-proxy-jg5qj                  192.168.3.72    Running
kube-proxy-t2sqh                  192.168.2.52    Running
kube-proxy-w96mt                  192.168.1.193   Running
metrics-server-86bbfd75bb-j72mf   192.168.1.90    Running
metrics-server-86bbfd75bb-pbpkd   192.168.2.72    Running
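This can be confirmed from the pod spec itself; a quick sketch (the pod name is from the output above and will differ in your cluster):

kubectl get pod -n kube-system aws-node-6mctb -o jsonpath='{.spec.hostNetwork}'   # prints: true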
5. Check Node Network Information
(1) Check the CNI log files
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i tree /var/log/aws-routed-eni; echo; done
✅ Output
>> node 43.202.57.204 <<
/var/log/aws-routed-eni
├── ebpf-sdk.log
├── egress-v6-plugin.log
├── ipamd.log
├── network-policy-agent.log
└── plugin.log
0 directories, 5 files
>> node 15.164.179.214 <<
/var/log/aws-routed-eni
├── ebpf-sdk.log
├── egress-v6-plugin.log
├── ipamd.log
├── network-policy-agent.log
└── plugin.log
0 directories, 5 files
>> node 43.201.115.81 <<
/var/log/aws-routed-eni
├── ebpf-sdk.log
├── ipamd.log
└── network-policy-agent.log
0 directories, 3 files
(2) Check each node's IP addresses (brief)
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -br -c addr; echo; done
✅ Output
>> node 43.202.57.204 <<
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.1.193/24 metric 1024 fe80::23:93ff:fea6:bc61/64
eni01a4864c88a@if3 UP fe80::54be:6aff:febd:2d2a/64
ens6 UP 192.168.1.237/24 fe80::49:8eff:fee5:f105/64
eni61c5a949744@if3 UP fe80::5041:42ff:fec3:33f/64
>> node 15.164.179.214 <<
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.2.52/24 metric 1024 fe80::4c5:5bff:fed1:7757/64
enibce2df30e87@if3 UP fe80::48c8:23ff:feab:8539/64
ens6 UP 192.168.2.42/24 fe80::491:4bff:fe4c:9ce9/64
enic99196c7a64@if3 UP fe80::8800:6eff:fe37:b171/64
>> node 43.201.115.81 <<
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.3.72/24 metric 1024 fe80::8d5:3dff:fe4a:54bd/64
(3) Check detailed network interface information
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done
✅ Output
>> node 43.202.57.204 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:23:93:a6:bc:61 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.1.193/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
valid_lft 2264sec preferred_lft 2264sec
inet6 fe80::23:93ff:fea6:bc61/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: eni01a4864c88a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 56:be:6a:bd:2d:2a brd ff:ff:ff:ff:ff:ff link-netns cni-a59ccd2b-5db2-0159-b78e-e0797b300a23
inet6 fe80::54be:6aff:febd:2d2a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:49:8e:e5:f1:05 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.1.237/24 brd 192.168.1.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::49:8eff:fee5:f105/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: eni61c5a949744@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 52:41:42:c3:03:3f brd ff:ff:ff:ff:ff:ff link-netns cni-74caca86-36e2-a922-2920-c2c8c00e7b43
inet6 fe80::5041:42ff:fec3:33f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
>> node 15.164.179.214 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:c5:5b:d1:77:57 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.2.52/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
valid_lft 2265sec preferred_lft 2265sec
inet6 fe80::4c5:5bff:fed1:7757/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enibce2df30e87@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 4a:c8:23:ab:85:39 brd ff:ff:ff:ff:ff:ff link-netns cni-5408bb28-ad79-a3f3-3a60-9442968852b1
inet6 fe80::48c8:23ff:feab:8539/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:91:4b:4c:9c:e9 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.2.42/24 brd 192.168.2.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::491:4bff:fe4c:9ce9/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: enic99196c7a64@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 8a:00:6e:37:b1:71 brd ff:ff:ff:ff:ff:ff link-netns cni-9343fb30-bdc8-5fab-b46d-3a5db58f8007
inet6 fe80::8800:6eff:fe37:b171/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
>> node 43.201.115.81 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 2270sec preferred_lft 2270sec
inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
💡 Verifying That Pods Use Secondary IPv4 Addresses
1. Check the CoreDNS Pod IPs
kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
✅ Output
The CoreDNS pods are using secondary IPv4 addresses (e.g. 192.168.2.248).
NAME                       READY   STATUS    RESTARTS   AGE     IP              NODE                                               NOMINATED NODE   READINESS GATES
coredns-86f5954566-gqf97   1/1     Running   0          5h39m   192.168.1.59    ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
coredns-86f5954566-nntgz   1/1     Running   0          5h39m   192.168.2.248   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>
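The same addresses show up as secondary private IPs on the node ENIs; a CLI sketch for cross-checking one of them (the IP is taken from the output above):

aws ec2 describe-network-interfaces \
    --filters Name=addresses.private-ip-address,Values=192.168.2.248 \
    --query 'NetworkInterfaces[*].PrivateIpAddresses[*].[PrivateIpAddress,Primary]' --output text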
2. Check the Routing Tables
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done
✅ Output
>> node 43.202.57.204 <<
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024
192.168.1.59 dev eni61c5a949744 scope link
192.168.1.90 dev eni01a4864c88a scope link
>> node 15.164.179.214 <<
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.52 metric 1024
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.52 metric 1024
192.168.2.72 dev enibce2df30e87 scope link
192.168.2.248 dev enic99196c7a64 scope link
>> node 43.201.115.81 <<
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024
3. Hands-On: Watching ENI Growth and Pod IP Assignment
(1) Monitor the ENIs and routing table on each node
# terminal 1
ssh ec2-user@$N1
watch -d "ip link | egrep 'ens|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"
# terminal 2
ssh ec2-user@$N2
watch -d "ip link | egrep 'ens|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"
# terminal 3
ssh ec2-user@$N3
watch -d "ip link | egrep 'ens|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"
(2) Create the netshoot-pod deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: netshoot-pod
  template:
    metadata:
      labels:
        app: netshoot-pod
    spec:
      containers:
      - name: netshoot-pod
        image: nicolaka/netshoot
        command: ["tail"]
        args: ["-f", "/dev/null"]
      terminationGracePeriodSeconds: 0
EOF
netshoot pod creation in progress
netshoot pod creation complete
(3) Check the pod IPs after creation
✅ Output
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          69s   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          69s   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          69s   192.168.2.226   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>
4. Set Pod Name Variables
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[0].metadata.name}')
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[1].metadata.name}')
PODNAME3=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[2].metadata.name}')
5. Check Node Routing Information
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
netshoot-pod-744bd84b46-4cfhr 1/1 Running 0 95m 192.168.3.146 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
netshoot-pod-744bd84b46-hkbd6 1/1 Running 0 95m 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
netshoot-pod-744bd84b46-pzv6d 1/1 Running 0 95m 192.168.2.226 ip-192-168-2-52.ap-northeast-2.compute.internal <none> <none>
NAME IP
netshoot-pod-744bd84b46-4cfhr 192.168.3.146
netshoot-pod-744bd84b46-hkbd6 192.168.1.176
netshoot-pod-744bd84b46-pzv6d 192.168.2.226
>> node 43.202.57.204 <<
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024
192.168.1.59 dev eni61c5a949744 scope link
192.168.1.90 dev eni01a4864c88a scope link
192.168.1.176 dev eni1d5278cfd98 scope link
>> node 15.164.179.214 <<
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.52 metric 1024
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.52 metric 1024
192.168.2.72 dev enibce2df30e87 scope link
192.168.2.226 dev enia1aaba2012e scope link
192.168.2.248 dev enic99196c7a64 scope link
>> node 43.201.115.81 <<
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024
192.168.3.146 dev enia32f5632a44 scope link
6. Checking the Pod IP and Entering Its Network Namespace
(1) Check the pod IP and its ENI attachment
[ec2-user@ip-192-168-3-72 ~]$ ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 3318sec preferred_lft 3318sec
inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enia32f5632a44@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 76:e0:80:d5:57:23 brd ff:ff:ff:ff:ff:ff link-netns cni-48fae262-f8c0-ab50-61f1-b90d6344980d
inet6 fe80::74e0:80ff:fed5:5723/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:da:c7:72:d0:e1 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.3.120/24 brd 192.168.3.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::8da:c7ff:fe72:d0e1/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
A new ENI (enia32f5632a44@if3) has been added and is connected to the pod's network namespace.
3: enia32f5632a44@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
    link/ether 76:e0:80:d5:57:23 brd ff:ff:ff:ff:ff:ff link-netns cni-48fae262-f8c0-ab50-61f1-b90d6344980d
    inet6 fe80::74e0:80ff:fed5:5723/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
(2) Check the network namespace with lsns
The pause container's PID (110946) can be identified:
[ec2-user@ip-192-168-3-72 ~]$ sudo lsns -t net
        NS TYPE NPROCS    PID USER       NETNSID NSFS                                                COMMAND
4026531840 net     115      1 root    unassigned                                                     /usr/li
4026532216 net       2 110946 65535            0 /run/netns/cni-48fae262-f8c0-ab50-61f1-b90d6344980d /pause
(3) Enter the network namespace with nsenter
[ec2-user@ip-192-168-3-72 ~]$ PID=110946
[ec2-user@ip-192-168-3-72 ~]$ sudo nsenter -t $PID -n ip -c addr
✅ Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
3: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether e2:ab:55:84:93:dc brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.3.146/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::e0ab:55ff:fe84:93dc/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
(4) Pod IP verification results
- The pod IP (192.168.3.146) is shown; this is the pod's own IP, not the node's
- The pod's eth0@if3 interface is paired with the node-side ENI enia32f5632a44@if3
✅ Output
NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          114m   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          114m   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          114m   192.168.2.226   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>
7. Enter the Pod with Zsh and Check Network Information
(1) Check the pod list
✅ Output
NAME                            READY   STATUS    RESTARTS   AGE
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          117m
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          117m
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          117m
(2) Enter the pod with zsh and check the network
kubectl exec -it $PODNAME1 -- zsh
# Result
                    dP            dP                           dP
                    88            88                           88
88d888b. .d8888b. d8888P .d8888b. 88d888b. .d8888b. .d8888b. d8888P
88'  `88 88ooood8   88   Y8ooooo. 88'  `88 88'  `88 88'  `88   88
88    88 88.  ...   88         88 88    88 88.  .88 88.  .88   88
dP    dP `88888P'   dP   `88888P' dP    dP `88888P' `88888P'   dP
Welcome to Netshoot! (github.com/nicolaka/netshoot)
Version: 0.13
Inside the netshoot container, check the network information with ip -c addr:
netshoot-pod-744bd84b46-4cfhr [ ~ ] ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
3: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether e2:ab:55:84:93:dc brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.3.146/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::e0ab:55ff:fe84:93dc/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
(3) Check the pod's DNS resolver configuration
netshoot-pod-744bd84b46-4cfhr [ ~ ] cat /etc/resolv.conf
✅ Output
search default.svc.cluster.local svc.cluster.local cluster.local ap-northeast-2.compute.internal
nameserver 10.100.0.10
options ndots:5
8. Pod ์ ๋ณด ํ์ธ
โ
ย ์ถ๋ ฅ
1
2
3
4
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
netshoot-pod-744bd84b46-4cfhr 1/1 Running 0 3h3m 192.168.3.146 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
netshoot-pod-744bd84b46-hkbd6 1/1 Running 0 3h3m 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
netshoot-pod-744bd84b46-pzv6d 1/1 Running 0 3h3m 192.168.2.226 ip-192-168-2-52.ap-northeast-2.compute.internal <none> <none>
9. Pod-to-Pod communication test
(1) Assign Pod IP variables

PODIP1=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[0].status.podIP}')
PODIP2=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[1].status.podIP}')
PODIP3=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[2].status.podIP}')
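A quick sanity check that the three variables were populated:

echo $PODIP1 $PODIP2 $PODIP3
# e.g. 192.168.3.146 192.168.1.176 192.168.2.226 (per the pod list above; ordering may differ)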
(2) Ping from Pod 1 to Pod 2

kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2
✅ Output

PING 192.168.1.176 (192.168.1.176) 56(84) bytes of data.
64 bytes from 192.168.1.176: icmp_seq=1 ttl=125 time=1.38 ms
64 bytes from 192.168.1.176: icmp_seq=2 ttl=125 time=1.23 ms
--- 192.168.1.176 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.225/1.303/1.381/0.078 ms
(3) Packet capture to verify the traffic
Capture only ICMP packets on each worker node:

[ec2-user@ip-192-168-1-193 ~]$ sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
[ec2-user@ip-192-168-2-52 ~]$ sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
[ec2-user@ip-192-168-3-72 ~]$ sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
(4) Ping between Pods

kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2
Capture on node ip-192-168-1-193 (the destination Pod's node):

13:00:13.795567 ens5 In IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.795632 eni1d5278cfd98 Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.795646 eni1d5278cfd98 In IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:13.795654 ens5 Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:14.796938 ens5 In IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.796970 eni1d5278cfd98 Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.796989 eni1d5278cfd98 In IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
13:00:14.796996 ens5 Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
Capture on node ip-192-168-3-72 (the source Pod's node):

13:00:13.794944 enia32f5632a44 In IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.794990 ens5 Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.796201 ens5 In IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:13.796236 enia32f5632a44 Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:14.796330 enia32f5632a44 In IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.796363 ens5 Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.797526 ens5 In IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
13:00:14.797542 enia32f5632a44 Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
tcpdump analysis
- ICMP requests and replies flow through the ens5 and eni1d5278cfd98 interfaces
- Traffic uses the original Pod IPs, with no NAT applied
10. Check the network path
(1) Check the ip rules

[ec2-user@ip-192-168-1-193 ~]$ ip rule
✅ Output

0: from all lookup local
512: from all to 192.168.1.90 lookup main
512: from all to 192.168.1.59 lookup main
512: from all to 192.168.1.176 lookup main
1024: from all fwmark 0x80/0x80 lookup main
32766: from all lookup main
32767: from all lookup default
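To check which rule and table a given destination actually resolves through, ip route get can be used; a quick sketch with a Pod IP from the rules above:

[ec2-user@ip-192-168-1-193 ~]$ ip route get 192.168.1.176
# expected: routed via the pod's host-side veth (eni1d5278cfd98) per the main table below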
(2) Check the ip route tables
Local IP information for each interface:

[ec2-user@ip-192-168-1-193 ~]$ ip route show table local
✅ Output

local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
local 192.168.1.193 dev ens5 proto kernel scope host src 192.168.1.193
local 192.168.1.237 dev ens6 proto kernel scope host src 192.168.1.237
broadcast 192.168.1.255 dev ens5 proto kernel scope link src 192.168.1.193
broadcast 192.168.1.255 dev ens6 proto kernel scope link src 192.168.1.237
Default gateway and per-ENI route entries:

[ec2-user@ip-192-168-1-193 ~]$ ip route show table main
✅ Output

default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024
192.168.1.59 dev eni61c5a949744 scope link
192.168.1.90 dev eni01a4864c88a scope link
192.168.1.176 dev eni1d5278cfd98 scope link
🐶 Pod-to-External Communication Test
- Ping from a Pod to an external host and observe the NAT process
- Pods communicate internally with their private IPs, but traffic leaving to the internet is NATed to the node's dynamic public IP

1. Ping from the Pod to an external host

kubectl exec -it $PODNAME1 -- ping -c 1 www.google.com
✅ Output

13:35:28.230157 enia32f5632a44 In IP 192.168.3.146 > 172.217.161.228: ICMP echo request, id 116, seq 1, length 64
13:35:28.230179 ens5 Out IP 192.168.3.72 > 172.217.161.228: ICMP echo request, id 35777, seq 1, length 64
13:35:28.256496 ens5 In IP 172.217.161.228 > 192.168.3.72: ICMP echo reply, id 35777, seq 1, length 64
13:35:28.256529 enia32f5632a44 Out IP 172.217.161.228 > 192.168.3.146: ICMP echo reply, id 116, seq 1, length 64
2. Continuous ping to an external host

kubectl exec -it $PODNAME1 -- ping -i 0.1 www.google.com
64 bytes from sin01s16-in-f4.1e100.net (172.217.25.164): icmp_seq=174 ttl=47 time=35.0 ms
On the veth (enia32f5632a44) the Pod IP is visible, but on ens5 the node IP appears: NAT has been applied.
13:37:01.153999 ens5 In IP 172.217.25.164 > 192.168.3.72: ICMP echo reply, id 48046, seq 174, length 64
13:37:01.154037 enia32f5632a44 Out IP 172.217.25.164 > 192.168.3.146: ICMP echo reply, id 122, seq 174, length 64
3. Check the nodes' public IPs

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i curl -s ipinfo.io/ip; echo; echo; done
✅ Output
The IPs printed are the nodes' dynamic public IPs.

>> node 43.202.57.204 <<
43.202.57.204
>> node 15.164.179.214 <<
15.164.179.214
>> node 43.201.115.81 <<
43.201.115.81
When a Pod talks to the internet, its traffic leaves NATed to the node's public IP.
4. Verify NAT for Pod internet traffic

for i in $PODNAME1 $PODNAME2 $PODNAME3; do echo ">> Pod : $i <<"; kubectl exec -it $i -- curl -s ipinfo.io/ip; echo; echo; done
✅ Output

>> Pod : netshoot-pod-744bd84b46-4cfhr <<
43.201.115.81
>> Pod : netshoot-pod-744bd84b46-hkbd6 <<
43.202.57.204
>> Pod : netshoot-pod-744bd84b46-pzv6d <<
15.164.179.214
5. Check the routing table
Inspect the node's network interfaces and routing table over SSH:

ssh ec2-user@$N3
A newer release of "Amazon Linux" is available.
Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Wed Feb 12 11:05:36 2025 from 182.230.60.93
[ec2-user@ip-192-168-3-72 ~]$
(1) Check the network interfaces

[ec2-user@ip-192-168-3-72 ~]$ ip -c addr
✅ Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 2513sec preferred_lft 2513sec
inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enia32f5632a44@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 76:e0:80:d5:57:23 brd ff:ff:ff:ff:ff:ff link-netns cni-48fae262-f8c0-ab50-61f1-b90d6344980d
inet6 fe80::74e0:80ff:fed5:5723/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:da:c7:72:d0:e1 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.3.120/24 brd 192.168.3.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::8da:c7ff:fe72:d0e1/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
(2) Check the routing table

[ec2-user@ip-192-168-3-72 ~]$ ip route show table main
✅ Output

default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024
192.168.3.146 dev enia32f5632a44 scope link
- Traffic leaving via the default route (ens5) has the iptables SNAT rule applied, translating the source to the node IP (192.168.3.72)
(3) Check the iptables rules

[ec2-user@ip-192-168-3-72 ~]$ sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 -d 192.168.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j RETURN
-A AWS-SNAT-CHAIN-0 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 192.168.3.72 --random-fully
- The 192.168.0.0/16 range (VPC internal) communicates without SNAT; the SNAT rule applies only to external traffic
(4) Why NAT is not applied to Pod-to-Pod traffic
- Intra-VPC traffic is not NATed, while traffic to the external internet is SNATed
- Applying NAT inside the same VPC would hide the client IP and add overhead, so the internal network communicates directly without NAT
- For external internet traffic, the source is rewritten with SNAT --to-source 192.168.3.72 before leaving
6. AWS-SNAT-CHAIN-0 SNAT rules and Pod traffic
(1) Zero the iptables counters

sudo iptables -t filter --zero; sudo iptables -t nat --zero; sudo iptables -t mangle --zero; sudo iptables -t raw --zero
(2) Monitor the AWS-SNAT-CHAIN-0, KUBE-POSTROUTING, and POSTROUTING chains with watch

watch -d 'sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list KUBE-POSTROUTING; echo ; sudo iptables -v --numeric --table nat --list POSTROUTING'
|
(3) AWS-SNAT-CHAIN-0 rule analysis
- The 192.168.0.0/16 range bypasses SNAT (RETURN)
- External destinations are SNATed, rewriting the source to the EC2 node IP 192.168.3.72
7. Connection tracking with conntrack
Check NATed connection state with the conntrack command:

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo conntrack -L -n |grep -v '169.254.169'; echo; done
✅ Output

conntrack v1.4.5 (conntrack-tools):
icmp 1 28 src=172.30.66.58 dst=8.8.8.8 type=8 code=0 id=34392 src=8.8.8.8 dst=172.30.85.242 type=0 code=0 id=50705 mark=128 use=1
tcp 6 23 TIME_WAIT src=172.30.66.58 dst=34.117.59.81 sport=58144 dport=80 src=34.117.59.81 dst=172.30.85.242 sport=80 dport=44768 [ASSURED] mark=128 use=1
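To narrow the view to a single Pod's flows, conntrack can filter by source address; a sketch using the Pod IP from earlier:

sudo conntrack -L -n --src 192.168.3.146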
8. Pod reachability test
Ping a Pod from operator-host:

(eks-user:N/A) [root@operator-host ~]# POD1IP=192.168.1.176
(eks-user:N/A) [root@operator-host ~]# ping -c 1 $POD1IP
# Output
PING 192.168.1.176 (192.168.1.176) 56(84) bytes of data.
64 bytes from 192.168.1.176: icmp_seq=1 ttl=126 time=0.674 ms
--- 192.168.1.176 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms
Within the VPC, traffic flows directly without NAT:

15:07:35.091537 ens5 In IP 172.20.1.100 > 192.168.1.176: ICMP echo request, id 5445, seq 1, length 64
15:07:35.091600 eni1d5278cfd98 Out IP 172.20.1.100 > 192.168.1.176: ICMP echo request, id 5445, seq 1, length 64
15:07:35.091614 eni1d5278cfd98 In IP 192.168.1.176 > 172.20.1.100: ICMP echo reply, id 5445, seq 1, length 64
15:07:35.091621 ens5 Out IP 192.168.1.176 > 172.20.1.100: ICMP echo reply, id 5445, seq 1, length 64
Inside the VPC, traffic is not NATed; for external traffic, the AWS-SNAT-CHAIN-0 rule SNATs the source to the EC2 node IP, as confirmed above.
📞 Pod 1 → Operator Server EC2 Communication
1. Check the VPC CNI environment variables

(eks-user:N/A) [root@operator-host ~]# kubectl get ds aws-node -n kube-system -o json | jq '.spec.template.spec.containers[0].env'
2. Ping the operator server EC2 (172.20.1.100) from Pod 1

(eks-user:N/A) [root@operator-host ~]# kubectl exec -it $PODNAME1 -- ping 172.20.1.100
3. tcpdump on the worker node hosting Pod 1
Confirm the NAT translation with tcpdump:

(eks-user:N/A) [root@operator-host ~]# sudo tcpdump -i any -nn icmp
The Pod IP is SNATed to the node IP for this outbound traffic:

15:35:03.714838 enia32f5632a44 In IP 192.168.3.146 > 172.20.1.100: ICMP echo request, id 140, seq 77, length 64
15:35:03.714869 ens5 Out IP 192.168.3.72 > 172.20.1.100: ICMP echo request, id 56175, seq 77, length 64
4. Configuring SNAT-free communication with an extended internal (on-premises) network range
(1) Connect to the worker node (192.168.1.193)

ssh ec2-user@$N1
A newer release of "Amazon Linux" is available.
Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Wed Feb 12 09:05:57 2025 from 182.230.60.93
[ec2-user@ip-192-168-1-193 ~]$
(2) Check the NAT policy

[ec2-user@ip-192-168-1-193 ~]$ sudo iptables -t filter --zero; sudo iptables -t nat --zero; sudo iptables -t mangle --zero; sudo iptables -t raw --zero
[ec2-user@ip-192-168-1-193 ~]$ watch -d 'sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list KUBE-POSTROUTING; echo ; sudo iptables -v --numeric --table nat --list POSTROUTING'
(3) Exclude a network range from SNAT

kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS=172.20.0.0/16
# Output
daemonset.apps/aws-node env updated
(4) Confirm the NAT policy change
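One way to confirm the change is to dump the SNAT chain again; with the exclusion set, an additional rule covering 172.20.0.0/16 is expected ahead of the SNAT rule (the exact rule layout can vary by VPC CNI version):

sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'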
(5) Verify the SNAT exclusion with a ping test

(eks-user:N/A) [root@operator-host ~]# kubectl exec -it $PODNAME1 -- ping 172.20.1.100
15:53:57.159915 ens5 In IP 172.20.1.100 > 192.168.3.146: ICMP echo reply, id 146, seq 23, length 64
15:53:57.159929 enia32f5632a44 Out IP 172.20.1.100 > 192.168.3.146: ICMP echo reply, id 146, seq 23, length 64
The following entry is now present:

kubectl get ds aws-node -n kube-system -o json | jq '.spec.template.spec.containers[0].env'
[
...
{
"name": "AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS",
"value": "172.20.0.0/16"
}
...
]
Unlike the earlier state where this traffic was SNATed out ens5, the 172.20.0.0/16 range now communicates directly without NAT.
(6) Run the iptables monitoring command and watch the AWS-SNAT-CHAIN-0 counters increase
ssh ec2-user@$N3
A newer release of "Amazon Linux" is available.
Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Wed Feb 12 14:21:22 2025 from 182.230.60.93
[ec2-user@ip-192-168-3-72 ~]$ watch -d 'sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list KUBE-POSTROUTING; echo ; sudo iptables -v --numeric --table nat --list POSTROUTING'
5. Wrap up the lab by deleting the deployment
kubectl delete deploy netshoot-pod

📝 Limiting the Number of Pods per Node
1. Install kube-ops-view
- kube-ops-view is a tool that visualizes node and Pod placement

helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=LoadBalancer --set env.TZ="Asia/Seoul" --namespace kube-system
# Output on successful install
"geek-cookbook" already exists with the same configuration, skipping
NAME: kube-ops-view
LAST DEPLOYED: Thu Feb 13 01:05:07 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w kube-ops-view'
export SERVICE_IP=$(kubectl get svc --namespace kube-system kube-ops-view -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:8080
2. Get the kube-ops-view URL (1.5x scale)

kubectl get svc -n kube-system kube-ops-view -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "KUBE-OPS-VIEW URL = http://"$1":8080/#scale=1.5"}'
✅ Output

KUBE-OPS-VIEW URL = http://a87c9a7b2db234e36a4a71827a7fbe54-363200146.ap-northeast-2.elb.amazonaws.com:8080/#scale=1.5
3. Check the kube-ops-view service

kubectl get svc -n kube-system kube-ops-view
✅ Output

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-ops-view LoadBalancer 10.100.146.32 a87c9a7b2db234e36a4a71827a7fbe54-363200146.ap-northeast-2.elb.amazonaws.com 8080:32597/TCP 19h
- Installing the chart also creates a LoadBalancer Service; the cloud controller manager provisions a load balancer from the cloud provider automatically
- The service is reachable through the load balancer domain assigned as its External IP
- The port is set to 8080 (a port-forward alternative is sketched below)
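If exposing kube-ops-view through a public load balancer is undesirable, port-forwarding is a simple alternative; a sketch using the service name and port as installed above:

kubectl port-forward -n kube-system svc/kube-ops-view 8080:8080
# then open http://localhost:8080/#scale=1.5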
4. Check worker node instance type limits
Query the limits for the t3 family:

aws ec2 describe-instance-types --filters Name=instance-type,Values=t3.\* \
--query "InstanceTypes[].{Type: InstanceType, MaxENI: NetworkInfo.MaximumNetworkInterfaces, IPv4addr: NetworkInfo.Ipv4AddressesPerInterface}" \
--output table
✅ Output

--------------------------------------
| DescribeInstanceTypes |
+----------+----------+--------------+
| IPv4addr | MaxENI | Type |
+----------+----------+--------------+
| 12 | 3 | t3.large |
| 6 | 3 | t3.medium |
| 15 | 4 | t3.xlarge |
| 15 | 4 | t3.2xlarge |
| 2 | 2 | t3.nano |
| 2 | 2 | t3.micro |
| 4 | 3 | t3.small |
+----------+----------+--------------+
- The maximum number of Pods that can be scheduled depends on the EC2 instance type
- The number of attachable ENIs and the number of secondary IPs per ENI are fixed per instance type

5. Calculating the maximum Pod count for t3.medium
- Max ENIs: 3
- IPs per ENI: 6
- Calculation: (6 - 1) * 3 = 15
- One IP on each ENI is the primary address and cannot be given to a Pod
- That leaves 15 IPs assignable to Pods
- Additional consideration: the CNI daemonset and kube-proxy count as 2 more Pods that share the node IP
- Result: a t3.medium node can run up to 17 Pods (the same arithmetic is scripted below)
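A minimal sketch scripting the formula against the EC2 API, assuming the AWS CLI and a configured region (formula: ENIs × (IPs per ENI − 1) + 2):

TYPE=t3.medium
read MAX_ENI IPS <<< "$(aws ec2 describe-instance-types --instance-types $TYPE \
  --query 'InstanceTypes[0].NetworkInfo.[MaximumNetworkInterfaces,Ipv4AddressesPerInterface]' \
  --output text)"
echo "$TYPE max pods = $(( MAX_ENI * (IPS - 1) + 2 ))"  # 3 * (6 - 1) + 2 = 17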
6. Check worker node details
Each worker node reports a capacity of 17 Pods:

kubectl describe node | grep Allocatable: -A6
✅ Output

Allocatable:
cpu: 1930m
ephemeral-storage: 27845546346
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3364544Ki
pods: 17
--
Allocatable:
cpu: 1930m
ephemeral-storage: 27845546346
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3364536Ki
pods: 17
--
Allocatable:
cpu: 1930m
ephemeral-storage: 27845546346
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3364544Ki
pods: 17
🚀 Creating the Maximum Number of Pods
1. Monitor the three worker node EC2 instances

[ec2-user@ip-192-168-1-193 ~]$ while true; do ip -br -c addr show && echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
[ec2-user@ip-192-168-2-52 ~]$ while true; do ip -br -c addr show && echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
[ec2-user@ip-192-168-3-72 ~]$ while true; do ip -br -c addr show && echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
2. State before creating the deployment

--------------
2025-02-13 12:20:21
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.2.52/24 metric 1024 fe80::4c5:5bff:fed1:7757/64
enibce2df30e87@if3 UP fe80::48c8:23ff:feab:8539/64
enic99196c7a64@if3 UP fe80::8800:6eff:fe37:b171/64
ens7 UP 192.168.2.138/24 fe80::487:86ff:fe90:b33d/64
--------------
--------------
2025-02-13 12:21:10
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.3.72/24 metric 1024 fe80::8d5:3dff:fe4a:54bd/64
enif2c43957bf8@if3 UP fe80::a0e3:15ff:fe62:4731/64
ens7 UP 192.168.3.218/24 fe80::80c:3cff:fe65:9c27/64
--------------
Every 2.0s: kubectl get pods -o wide gram88: 09:22:04 PM
in 0.538s (0)
No resources found in default namespace.
3. Create the deployment

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF
4. State after creating the deployment
ENIs added:
- eni189e26c0cf7@if3 is UP: a new host-side interface was attached
- eni422a7c2b629@if3 is UP: a new host-side interface was attached

--------------
2025-02-13 12:24:00
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.1.193/24 metric 1024 fe80::23:93ff:fea6:bc61/64
eni01a4864c88a@if3 UP fe80::54be:6aff:febd:2d2a/64
eni61c5a949744@if3 UP fe80::5041:42ff:fec3:33f/64
ens7 UP 192.168.1.232/24 fe80::d8:5dff:fe80:2cc9/64
eni189e26c0cf7@if3 UP fe80::b463:11ff:fec2:9e12/64
--------------
--------------
2025-02-13 12:24:36
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 192.168.3.72/24 metric 1024 fe80::8d5:3dff:fe4a:54bd/64
enif2c43957bf8@if3 UP fe80::a0e3:15ff:fe62:4731/64
ens7 UP 192.168.3.218/24 fe80::80c:3cff:fe65:9c27/64
eni422a7c2b629@if3 UP fe80::b455:d1ff:fed1:d8f6/64
--------------
Pods scheduled:

Every 2.0s: kubectl get pods -o wide gram88: 09:25:54 PM in 0.487s (0)
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6c8cb99bb9-77rg2 1/1 Running 0 3m5s 192.168.3.170 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
nginx-deployment-6c8cb99bb9-lcfx9 1/1 Running 0 3m5s 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
5. Scale to 8 Pods

kubectl scale deployment nginx-deployment --replicas=8
# Output
deployment.apps/nginx-deployment scaled
6. Scale to 30 Pods
- The increased Pod count triggers attachment of another ENI (ens6), for 3 in total
- With 3 ENIs attached, the private IPv4 addresses assigned per ENI also grow

kubectl scale deployment nginx-deployment --replicas=30
# Output
deployment.apps/nginx-deployment scaled
7. Scale to 50 Pods

kubectl scale deployment nginx-deployment --replicas=50
# Output
deployment.apps/nginx-deployment scaled
Some Pods remain in the Pending state.

✅ Output

NAME READY STATUS RESTARTS AGE
nginx-deployment-6c8cb99bb9-4mz8l 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-5lbxg 1/1 Running 0 4m10s
nginx-deployment-6c8cb99bb9-5m868 1/1 Running 0 4m11s
nginx-deployment-6c8cb99bb9-77rg2 1/1 Running 0 8m19s
nginx-deployment-6c8cb99bb9-7f5tw 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-7jvvw 1/1 Running 0 4m10s
nginx-deployment-6c8cb99bb9-7kzcw 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-7nzn6 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-7qchj 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-8s2z8 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-98tfm 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-9pvbx 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-b2qjf 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-bgxsj 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-g54wb 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-gkpcw 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-h8jtz 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-h9ltb 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-hdbx8 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-hh659 1/1 Running 0 4m10s
nginx-deployment-6c8cb99bb9-hmfz8 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-jnm66 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-kbmgh 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-kjkxd 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-kzfk2 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-lcfx9 1/1 Running 0 8m19s
nginx-deployment-6c8cb99bb9-lh6l6 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-lvq6b 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-lxqcr 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-mgqcd 1/1 Running 0 28s
nginx-deployment-6c8cb99bb9-pvhps 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-qbpmp 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-qwh72 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-r58vr 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-s7bsw 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-sdxw7 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-t2h2t 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-tb47g 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-tklxp 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-ttbzh 1/1 Running 0 4m10s
nginx-deployment-6c8cb99bb9-v5dfl 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-v888x 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-vstpp 1/1 Running 0 2m38s
nginx-deployment-6c8cb99bb9-w2dr8 1/1 Running 0 4m10s
nginx-deployment-6c8cb99bb9-ww4c9 0/1 Pending 0 27s
nginx-deployment-6c8cb99bb9-xk74m 1/1 Running 0 2m39s
nginx-deployment-6c8cb99bb9-xn6k7 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-xp5tt 1/1 Running 0 28s
nginx-deployment-6c8cb99bb9-z2jwk 1/1 Running 0 27s
nginx-deployment-6c8cb99bb9-z4649 0/1 Pending 0 27s
The Pod has no IP assigned, so its network namespace cannot be prepared:

k describe pod nginx-deployment-6c8cb99bb9-7nzn6
✅ Output

Name: nginx-deployment-6c8cb99bb9-7nzn6
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: app=nginx
pod-template-hash=6c8cb99bb9
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/nginx-deployment-6c8cb99bb9
Containers:
nginx:
Image: nginx:alpine
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cnct2 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-cnct2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 61s default-scheduler 0/3 nodes are available: 3 Too many pods. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
8. Delete the deployment

kubectl delete deploy nginx-deployment
# Output
deployment.apps "nginx-deployment" deleted
⚖️ Deploying the AWS Load Balancer Controller
1. Check the CRDs before installation

✅ Output

NAME CREATED AT
cninodes.vpcresources.k8s.aws 2025-02-12T03:09:25Z
eniconfigs.crd.k8s.amazonaws.com 2025-02-12T03:13:55Z
policyendpoints.networking.k8s.aws 2025-02-12T03:09:25Z
securitygrouppolicies.vpcresources.k8s.aws 2025-02-12T03:09:25Z
2. Install the Helm chart

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME
# Output
"eks" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "eks" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "geek-cookbook" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME: aws-load-balancer-controller
LAST DEPLOYED: Thu Feb 13 21:53:34 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
3. Verify the installation

✅ Output
ingressclassparams.elbv2.k8s.aws and targetgroupbindings.elbv2.k8s.aws have been added:

NAME CREATED AT
cninodes.vpcresources.k8s.aws 2025-02-12T03:09:25Z
eniconfigs.crd.k8s.amazonaws.com 2025-02-12T03:13:55Z
ingressclassparams.elbv2.k8s.aws 2025-02-13T12:53:32Z
policyendpoints.networking.k8s.aws 2025-02-12T03:09:25Z
securitygrouppolicies.vpcresources.k8s.aws 2025-02-12T03:09:25Z
targetgroupbindings.elbv2.k8s.aws 2025-02-13T12:53:32Z
kubectl explain ingressclassparams.elbv2.k8s.aws
kubectl explain targetgroupbindings.elbv2.k8s.aws
✅ Output

GROUP: elbv2.k8s.aws
KIND: IngressClassParams
VERSION: v1beta1
DESCRIPTION:
IngressClassParams is the Schema for the IngressClassParams API
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <ObjectMeta>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
IngressClassParamsSpec defines the desired state of IngressClassParams
GROUP: elbv2.k8s.aws
KIND: TargetGroupBinding
VERSION: v1beta1
DESCRIPTION:
TargetGroupBinding is the Schema for the TargetGroupBinding API
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <ObjectMeta>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
TargetGroupBindingSpec defines the desired state of TargetGroupBinding
status <Object>
TargetGroupBindingStatus defines the observed state of TargetGroupBinding
🕸️ Service/Pod Deployment Test with NLB
1. Monitoring

watch -d kubectl get pod,svc,ep,endpointslices
2. Create the deployment & service (the Service selector, lost in the original, is restored here)

cat << EOF > echo-service-nlb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: aews-websrv
        image: k8s.gcr.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-ip-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8080"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  selector:
    app: deploy-websrv
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
EOF

kubectl apply -f echo-service-nlb.yaml
deployment.apps/deploy-echo created
service/svc-nlb-ip-type created
3. Check the load balancer state
The network load balancer is provisioning:

aws elbv2 describe-load-balancers --query 'LoadBalancers[*].State.Code' --output text
✅ Output

4. Check deployment and Pod information

✅ Output

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/deploy-echo 2/2 2 2 3m21s
NAME READY STATUS RESTARTS AGE
pod/deploy-echo-bf9bdb8bc-qcq6k 1/1 Running 0 3m21s
pod/deploy-echo-bf9bdb8bc-xlpz4 1/1 Running 0 3m21s
5. Check the service, endpoints, ingress class params, and target group bindings

kubectl get svc,ep,ingressclassparams,targetgroupbindings
✅ Output

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 33h
service/svc-nlb-ip-type LoadBalancer 10.100.253.160 k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com 80:30138/TCP 3m47s
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.2.87:443,192.168.3.197:443 33h
endpoints/svc-nlb-ip-type 192.168.1.176:8080,192.168.3.77:8080 3m47s
NAME GROUP-NAME SCHEME IP-ADDRESS-TYPE AGE
ingressclassparams.elbv2.k8s.aws/alb 9m48s
NAME SERVICE-NAME SERVICE-PORT TARGET-TYPE AGE
targetgroupbinding.elbv2.k8s.aws/k8s-default-svcnlbip-f4a394e732 svc-nlb-ip-type 80 ip 3m43s
targetgroupbinding.elbv2.k8s.aws/k8s-default-svcnlbip-f4a394e732 matches the information above.
The Deregistration delay in the Target Group Binding attributes is a concept similar to graceful shutdown: it is the time a deregistering target is given to drain connections before removal.

6. Edit the Target Group Binding attributes
Open echo-service-nlb.yaml and add the following annotation line:

service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=60

To keep the lab fast, the deregistration delay is shortened to 60 seconds.
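To confirm the attribute actually reached the target group, it can be read back with the AWS CLI; a sketch assuming the target group name from the binding above:

TG_ARN=$(aws elbv2 describe-target-groups --names k8s-default-svcnlbip-f4a394e732 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
aws elbv2 describe-target-group-attributes --target-group-arn $TG_ARN \
  --query 'Attributes[?Key==`deregistration_delay.timeout_seconds`]'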
7. Apply

kubectl apply -f echo-service-nlb.yaml
# Output
deployment.apps/deploy-echo unchanged
service/svc-nlb-ip-type configured
The Deregistration delay is now 60 seconds.
The network load balancer transitions to Active, and the Target Group's targets report Healthy.
8. Check the web access URL

kubectl get svc svc-nlb-ip-type -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "Pod Web URL = http://"$1 }'
✅ Output

Pod Web URL = http://k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com
Access to the server is confirmed (http://k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com).
9. Pod log monitoring
Tail the Pod logs with Stern:

kubectl stern -l app=deploy-websrv
Load distribution test:

(eks-user:N/A) [root@operator-host ~]# NLB=$(kubectl get svc svc-nlb-ip-type -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
(eks-user:N/A) [root@operator-host ~]# curl -s $NLB
✅ Output

Hostname: deploy-echo-bf9bdb8bc-xlpz4
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.0 - lua: 10008
Request Information:
client_address=192.168.2.29
method=GET
real path=/
query=
request_version=1.1
request_uri=http://k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com:8080/
Request Headers:
accept=*/*
host=k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com
user-agent=curl/8.3.0
Request Body:
-no body in request-
Send 100 requests and check the distribution across Pods:

(eks-user:N/A) [root@operator-host ~]# for i in {1..100}; do curl -s $NLB | grep Hostname ; done | sort | uniq -c | sort -nr
✅ Output

56 Hostname: deploy-echo-bf9bdb8bc-qcq6k
44 Hostname: deploy-echo-bf9bdb8bc-xlpz4
Of the 100 requests, 56 went to Pod deploy-echo-bf9bdb8bc-qcq6k and 44 to Pod deploy-echo-bf9bdb8bc-xlpz4.
10. Check the NLB listener and targets in the AWS Console
Requests to the NLB domain are received by the listener on port 80 and, per the resource map, forwarded to 192.168.1.176 and 192.168.3.77 on port 8080.
192.168.1.176 and 192.168.3.77 are the Pod IPs shown below:
k get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-bf9bdb8bc-qcq6k 1/1 Running 0 27m 192.168.3.77 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
deploy-echo-bf9bdb8bc-xlpz4 1/1 Running 0 27m 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
11. Increase the target count by scaling replicas
Scale replicas from 2 to 3 to grow the target group to 3 targets:

kubectl scale deployment deploy-echo --replicas=3
# Output
deployment.apps/deploy-echo scaled
12. Delete the lab resources

kubectl delete deploy deploy-echo; kubectl delete svc svc-nlb-ip-type
# Output
deployment.apps "deploy-echo" deleted
service "svc-nlb-ip-type" deleted
🛤️ Ingress Lab
An Ingress exposes in-cluster services (ClusterIP, NodePort, LoadBalancer) to the outside over HTTP/HTTPS, acting as a web proxy.
1. Deploy the game Pods, Service, and Ingress (the Ingress manifest, garbled in the original, is reconstructed from its remnants and the describe output below)

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-2048
              port:
                number: 80
EOF
# Output
namespace/game-2048 created
deployment.apps/deployment-2048 created
service/service-2048 created
ingress.networking.k8s.io/ingress-2048 created
2. Monitoring

watch -d kubectl get pod,ingress,svc,ep,endpointslices -n game-2048
✅ Output

Every 2.0s: kubectl get pod,ingress,svc,ep,endpointslices -n game-2048 gram88: 10:37:51 PM
in 0.718s (0)
NAME READY STATUS RESTARTS AGE
pod/deployment-2048-7df5f9886b-lxtmt 1/1 Running 0 95s
pod/deployment-2048-7df5f9886b-zw8n9 1/1 Running 0 95s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/ingress-2048 alb * k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com 80 95s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/service-2048 NodePort 10.100.135.188 <none> 80:31872/TCP 95s
NAME ENDPOINTS AGE
endpoints/service-2048 192.168.1.176:80,192.168.3.19:80 95s
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
endpointslice.discovery.k8s.io/service-2048-2h29k IPv4 80 192.168.3.19,192.168.1.176 95s
3. Verify creation
Check the ingress, service, endpoints, and Pods in the game-2048 namespace:

kubectl get ingress,svc,ep,pod -n game-2048
✅ Output

NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/ingress-2048 alb * k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com 80 2m36s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/service-2048 NodePort 10.100.135.188 <none> 80:31872/TCP 2m36s
NAME ENDPOINTS AGE
endpoints/service-2048 192.168.1.176:80,192.168.3.19:80 2m36s
NAME READY STATUS RESTARTS AGE
pod/deployment-2048-7df5f9886b-lxtmt 1/1 Running 0 2m36s
pod/deployment-2048-7df5f9886b-zw8n9 1/1 Running 0 2m36s
Check the target group binding in the game-2048 namespace:

kubectl get targetgroupbindings -n game-2048
✅ Output

NAME SERVICE-NAME SERVICE-PORT TARGET-TYPE AGE
k8s-game2048-service2-168626cccc service-2048 80 ip 3m20s
The listener receives requests on port 80 and forwards them to the target group according to its rules.
4. Check the Ingress configuration

kubectl describe ingress -n game-2048 ingress-2048
kubectl get ingress -n game-2048 ingress-2048 -o jsonpath="{.status.loadBalancer.ingress[*].hostname}{'\n'}"
✅ Output

Name: ingress-2048
Labels: <none>
Namespace: game-2048
Address: k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com
Ingress Class: alb
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ service-2048:80 (192.168.3.19:80,192.168.1.176:80)
Annotations: alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfullyReconciled 8m18s ingress Successfully reconciled
k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com
- All requests (*) are routed to service-2048:80
Every 2.0s: kubectl get pod,ingress,svc,ep,endpointslices -n game-2048 gram88: 10:46:22 PM
in 0.724s (0)
NAME READY STATUS RESTARTS AGE
pod/deployment-2048-7df5f9886b-lxtmt 1/1 Running 0 10m
pod/deployment-2048-7df5f9886b-zw8n9 1/1 Running 0 10m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/ingress-2048 alb * k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com 80 10m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/service-2048 NodePort 10.100.135.188 <none> 80:31872/TCP 10m
NAME ENDPOINTS AGE
endpoints/service-2048 192.168.1.176:80,192.168.3.19:80 10m
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
endpointslice.discovery.k8s.io/service-2048-2h29k IPv4 80 192.168.3.19,192.168.1.176 10m
- service-2048 is a NodePort service with ClusterIP 10.100.135.188
- Pod endpoint IPs: 192.168.1.176:80 and 192.168.3.19:80
- The ALB forwards traffic to these two Pods
5. Get the game URL

kubectl get ingress -n game-2048 ingress-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "Game URL = http://"$1 }'
✅ Output

Game URL = http://k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com
6. Check the Pod targets

kubectl get pod -n game-2048 -owide
✅ Output

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-2048-7df5f9886b-lxtmt 1/1 Running 0 13m 192.168.3.19 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
deployment-2048-7df5f9886b-zw8n9 1/1 Running 0 13m 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
- The ALB targets the Pod IPs directly (not NodePorts)
- Pod IPs: 192.168.3.19 (node ip-192-168-3-72) and 192.168.1.176 (node ip-192-168-1-193)

7. Target group growth as Pods scale
Scaling the deployment to 3 Pods adds one more target to the target group:
kubectl scale deployment -n game-2048 deployment-2048 --replicas 3
# Output
deployment.apps/deployment-2048 scaled
8. Delete the lab resources

kubectl delete ingress ingress-2048 -n game-2048
# Output
ingress.networking.k8s.io "ingress-2048" deleted
kubectl delete svc service-2048 -n game-2048 && kubectl delete deploy deployment-2048 -n game-2048 && kubectl delete ns game-2048
# Output
service "service-2048" deleted
deployment.apps "deployment-2048" deleted
namespace "game-2048" deleted
🌍 ExternalDNS Lab
1. Create the domain and check the zone ID
2. Look up the Route 53 hosted zone ID and set variables

MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text)
Check the variables:

echo $MyDomain, $MyDnzHostedZoneId
✅ Output

gagajin.com /hostedzone/EXAMPLEID123456789
3. Repeatedly query the domain's A records

while true; do aws route53 list-resource-record-sets --hosted-zone-id "${MyDnzHostedZoneId}" --query "ResourceRecordSets[?Type == 'A']" | jq ; date ; echo ; sleep 1; done
4. Deploy ExternalDNS

curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml
MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst < externaldns.yaml | kubectl apply -f -
# Output
serviceaccount/external-dns created
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created
5. Connect and monitor

kubectl get pod -l app.kubernetes.io/name=external-dns -n kube-system
✅ Output

NAME READY STATUS RESTARTS AGE
external-dns-dc4878f5f-98vcp 1/1 Running 0 25s
kubectl logs deploy/external-dns -n kube-system -f
6. Deploy the Tetris game
Deploy the Tetris Deployment and Service:

# Deploy the Tetris deployment and service
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tetris
  labels:
    app: tetris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tetris
  template:
    metadata:
      labels:
        app: tetris
    spec:
      containers:
      - name: tetris
        image: bsord/tetris
---
apiVersion: v1
kind: Service
metadata:
  name: tetris
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    #service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
spec:
  selector:
    app: tetris
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
EOF
deployment.apps/tetris created
service/tetris created
Check the deployment status:

kubectl get deploy,svc,ep tetris
✅ Output

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tetris 1/1 1 1 88s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tetris LoadBalancer 10.100.6.207 k8s-default-tetris-b179cd251d-f36283420d6a53f5.elb.ap-northeast-2.amazonaws.com 80:31999/TCP 88s
NAME ENDPOINTS AGE
endpoints/tetris 192.168.1.176:80 88s
7. Attach a domain to the NLB

kubectl annotate service tetris "external-dns.alpha.kubernetes.io/hostname=tetris.$MyDomain"
# Output
service/tetris annotated
8. Confirm the domain record in Route 53
In Route 53 > Hosted Zone > gagajin.com, confirm that tetris.gagajin.com is mapped to the NLB:
- Domain: tetris.gagajin.com
- Matched target: k8s-default-tetris-b179cd251d-f36283420d6a53f5.elb.ap-northeast-2.amazonaws.com (the NLB)
9. Query the domain

dig +short tetris.$MyDomain @8.8.8.8
✅ Output

43.200.102.104
3.39.61.119
43.202.39.97
10. Check domain propagation

echo -e "My Domain Checker Site1 = https://www.whatsmydns.net/#A/tetris.$MyDomain"
echo -e "My Domain Checker Site2 = https://dnschecker.org/#A/tetris.$MyDomain"
Open tetris.gagajin.com in a browser.
11. Delete the resources and confirm the A record is removed
ExternalDNS deletes the A record automatically:

kubectl delete deploy,svc tetris
🗺️ Topology Aware Routing Lab
1. Check the AZ placement of the nodes

kubectl get node --label-columns=topology.kubernetes.io/zone
✅ Output

NAME STATUS ROLES AGE VERSION ZONE
ip-192-168-1-193.ap-northeast-2.compute.internal Ready <none> 35h v1.31.4-eks-aeac579 ap-northeast-2a
ip-192-168-2-52.ap-northeast-2.compute.internal Ready <none> 35h v1.31.4-eks-aeac579 ap-northeast-2b
ip-192-168-3-72.ap-northeast-2.compute.internal Ready <none> 35h v1.31.4-eks-aeac579 ap-northeast-2c
2. Deploy a test deployment and service

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: websrv
        image: registry.k8s.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
spec:
  ports:
    - name: svc-webport
      port: 80
      targetPort: 8080
  selector:
    app: deploy-websrv
  type: ClusterIP
EOF
# Output
deployment.apps/deploy-echo created
service/svc-clusterip created
3. Verify

kubectl get deploy,svc,ep,endpointslices
kubectl get pod -owide
kubectl get svc,ep svc-clusterip
kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip
kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
✅ Output

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/deploy-echo 3/3 3 3 34s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 35h
service/svc-clusterip ClusterIP 10.100.101.191 <none> 80/TCP 34s
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.2.87:443,192.168.3.197:443 35h
endpoints/svc-clusterip 192.168.1.176:8080,192.168.2.91:8080,192.168.3.146:8080 34s
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
endpointslice.discovery.k8s.io/kubernetes IPv4 443 192.168.2.87,192.168.3.197 35h
endpointslice.discovery.k8s.io/svc-clusterip-8gc6m IPv4 8080 192.168.1.176,192.168.2.91,192.168.3.146 34s
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-75b7b9558c-8kxhk 1/1 Running 0 34s 192.168.2.91 ip-192-168-2-52.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-dwhmg 1/1 Running 0 34s 192.168.3.146 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-frsbd 1/1 Running 0 34s 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/svc-clusterip ClusterIP 10.100.101.191 <none> 80/TCP 35s
NAME ENDPOINTS AGE
endpoints/svc-clusterip 192.168.1.176:8080,192.168.2.91:8080,192.168.3.146:8080 35s
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
svc-clusterip-8gc6m IPv4 8080 192.168.1.176,192.168.2.91,192.168.3.146 36s
apiVersion: v1
items:
- addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
- 192.168.1.176
conditions:
ready: true
serving: true
terminating: false
nodeName: ip-192-168-1-193.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-frsbd
namespace: default
uid: c79e9a50-6fe7-4862-b235-f48410dfd4e7
zone: ap-northeast-2a
- addresses:
- 192.168.2.91
conditions:
ready: true
serving: true
terminating: false
nodeName: ip-192-168-2-52.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-8kxhk
namespace: default
uid: fc2d6025-50e4-488d-844d-b74e64656869
zone: ap-northeast-2b
- addresses:
- 192.168.3.146
conditions:
ready: true
serving: true
terminating: false
nodeName: ip-192-168-3-72.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-dwhmg
namespace: default
uid: 013991cc-6c6a-465a-be3c-2a7128ed4869
zone: ap-northeast-2c
kind: EndpointSlice
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2025-02-13T15:02:52Z"
creationTimestamp: "2025-02-13T15:02:50Z"
generateName: svc-clusterip-
generation: 4
labels:
endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
kubernetes.io/service-name: svc-clusterip
name: svc-clusterip-8gc6m
namespace: default
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: true
controller: true
kind: Service
name: svc-clusterip
uid: f97f9347-cd50-40d7-881e-253b2efe5825
resourceVersion: "437737"
uid: 2f964436-d4df-49bd-af2d-b11ec22702b2
ports:
- name: svc-webport
port: 8080
protocol: TCP
kind: List
metadata:
resourceVersion: ""
4. Deploy a client Pod for the access test

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: netshoot-pod
spec:
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
pod/netshoot-pod created
Verify:

✅ Output

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-75b7b9558c-8kxhk 1/1 Running 0 101s 192.168.2.91 ip-192-168-2-52.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-dwhmg 1/1 Running 0 101s 192.168.3.146 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-frsbd 1/1 Running 0 101s 192.168.1.176 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
netshoot-pod 1/1 Running 0 16s 192.168.1.98 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
- The netshoot Pod was placed in AZ 1 (ap-northeast-2a)
- Ideally the deploy-echo Pod in the same AZ would serve it, but plain load balancing also sends connections to Pods in other AZs

5. Check load balancing from the test Pod (netshoot-pod) to the ClusterIP

kubectl exec -it netshoot-pod -- curl svc-clusterip | grep Hostname
✅ Output

Hostname: deploy-echo-75b7b9558c-8kxhk
# or
Hostname: deploy-echo-75b7b9558c-dwhmg
# or
Hostname: deploy-echo-75b7b9558c-frsbd
- From netshoot-pod, curl svc-clusterip reaches the Pods through the service DNS name
- Creating the service allocates a virtual IP (ClusterIP) that serves as a fixed entry point

Repeat the request 100 times: traffic is load-balanced randomly across the 3 Pods regardless of AZ (zone):

kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
✅ Output

43 Hostname: deploy-echo-75b7b9558c-frsbd
36 Hostname: deploy-echo-75b7b9558c-dwhmg
21 Hostname: deploy-echo-75b7b9558c-8kxhk
6. Analyze iptables-based service routing

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES
✅ Output

Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-UAGC4PYEYZJJEW6D tcp -- * * 0.0.0.0/0 10.100.145.143 /* kube-system/aws-load-balancer-webhook-service:webhook-server cluster IP */ tcp dpt:443
110 6600 KUBE-SVC-KBDEBIL6IU6WL7RF tcp -- * * 0.0.0.0/0 10.100.101.191 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:80
0 0 KUBE-SVC-I7SKRZYQ7PWYV5X7 tcp -- * * 0.0.0.0/0 10.100.83.10 /* kube-system/eks-extension-metrics-api:metrics-api cluster IP */ tcp dpt:443
110 10560 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-Z4ANX4WAEWEBLCTM tcp -- * * 0.0.0.0/0 10.100.101.216 /* kube-system/metrics-server:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.100.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-7EJNTS7AENER2WX5 tcp -- * * 0.0.0.0/0 10.100.146.32 /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
302 18120 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
7. Analyze the service chain

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
✅ Output

Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
45 2700 KUBE-SEP-RSD5LYH4WSEXMEXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */ statistic mode random probability 0.33333333349
23 1380 KUBE-SEP-NLHUM4JSBL2UEFAX all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */ statistic mode random probability 0.50000000000
42 2520 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
- Traffic is distributed to the Pods using random probabilities
- The rules fire at 33%, then 50% of what remains, then take the rest, which works out to an even 1/3 per Pod (worked out below)
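Why sequential probabilities of 1/3 and 1/2 produce an even split: the second rule only sees traffic the first rule passed over, and the third takes the remainder. A quick check with bc (assuming bc is installed):

echo "scale=4; 1/3" | bc                  # rule 1 matches: .3333
echo "scale=4; (1 - 1/3) * 1/2" | bc      # rule 2 matches: .3333
echo "scale=4; 1 - 1/3 - (1 - 1/3)/2" | bc  # rule 3 takes the rest: .3333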
Confirm the iptables rules are identical on every worker node:

ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-RSD5LYH4WSEXMEXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */ statistic mode random probability 0.33333333349
0 0 KUBE-SEP-NLHUM4JSBL2UEFAX all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-RSD5LYH4WSEXMEXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */ statistic mode random probability 0.33333333349
0 0 KUBE-SEP-NLHUM4JSBL2UEFAX all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
|
Each of the three KUBE-SEP chains holds the DNAT rule for one individual pod:
| ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SEP-RSD5LYH4WSEXMEXJ
Chain KUBE-SEP-RSD5LYH4WSEXMEXJ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.1.176 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
45 2700 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.176:8080
|
| ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SEP-RSD5LYH4WSEXMEXJ
Chain KUBE-SEP-RSD5LYH4WSEXMEXJ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.1.176 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.176:8080
|
| ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SEP-RSD5LYH4WSEXMEXJ
Chain KUBE-SEP-RSD5LYH4WSEXMEXJ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.1.176 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.176:8080
|
8. Configuring Topology Aware Routing
| kubectl annotate service svc-clusterip "service.kubernetes.io/topology-mode=auto"
# Result
service/svc-clusterip annotated
|
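To revert to the default routing behavior later, the annotation can be removed with kubectl's trailing-dash syntax:
| kubectl annotate service svc-clusterip "service.kubernetes.io/topology-mode-"
|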
Check the EndpointSlices
Each endpoint now carries its AZ (zone) in a hints field
| kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
apiVersion: v1
items:
- addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
- 192.168.1.176
conditions:
ready: true
serving: true
terminating: false
hints:
forZones:
- name: ap-northeast-2a
nodeName: ip-192-168-1-193.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-frsbd
namespace: default
uid: c79e9a50-6fe7-4862-b235-f48410dfd4e7
zone: ap-northeast-2a
- addresses:
- 192.168.2.91
conditions:
ready: true
serving: true
terminating: false
hints:
forZones:
- name: ap-northeast-2b
nodeName: ip-192-168-2-52.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-8kxhk
namespace: default
uid: fc2d6025-50e4-488d-844d-b74e64656869
zone: ap-northeast-2b
- addresses:
- 192.168.3.146
conditions:
ready: true
serving: true
terminating: false
hints:
forZones:
- name: ap-northeast-2c
nodeName: ip-192-168-3-72.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-dwhmg
namespace: default
uid: 013991cc-6c6a-465a-be3c-2a7128ed4869
zone: ap-northeast-2c
kind: EndpointSlice
metadata:
creationTimestamp: "2025-02-13T15:02:50Z"
generateName: svc-clusterip-
generation: 5
labels:
endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
kubernetes.io/service-name: svc-clusterip
name: svc-clusterip-8gc6m
namespace: default
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: true
controller: true
kind: Service
name: svc-clusterip
uid: f97f9347-cd50-40d7-881e-253b2efe5825
resourceVersion: "444502"
uid: 2f964436-d4df-49bd-af2d-b11ec22702b2
ports:
- name: svc-webport
port: 8080
protocol: TCP
kind: List
metadata:
resourceVersion: ""
|
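For a compact view of just the address-to-zone mapping, a jsonpath query over the fields shown above can help (a sketch; paths follow the discovery.k8s.io/v1 schema):
| kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip \
  -o jsonpath='{range .items[0].endpoints[*]}{.addresses[0]}{" -> "}{.hints.forZones[0].name}{"\n"}{end}'
|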
9. Load-distribution test
| kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
|
✅ Output
| 100 Hostname: deploy-echo-75b7b9558c-frsbd
|
- All 100 requests were served by the pod in the same AZ as netshoot-pod
- No cross-AZ network transfer cost is incurred
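The zone of each node (and therefore of the pods scheduled on it) can be confirmed via the well-known topology label:
| kubectl get nodes -L topology.kubernetes.io/zone
|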
10. Checking the iptables policy
Inspect the iptables rules on each node over ssh
| ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES
|
✅ Output
| Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-UAGC4PYEYZJJEW6D tcp -- * * 0.0.0.0/0 10.100.145.143 /* kube-system/aws-load-balancer-webhook-service:webhook-server cluster IP */ tcp dpt:443
100 6000 KUBE-SVC-KBDEBIL6IU6WL7RF tcp -- * * 0.0.0.0/0 10.100.101.191 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:80
0 0 KUBE-SVC-I7SKRZYQ7PWYV5X7 tcp -- * * 0.0.0.0/0 10.100.83.10 /* kube-system/eks-extension-metrics-api:metrics-api cluster IP */ tcp dpt:443
100 9600 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-Z4ANX4WAEWEBLCTM tcp -- * * 0.0.0.0/0 10.100.101.216 /* kube-system/metrics-server:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.100.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-7EJNTS7AENER2WX5 tcp -- * * 0.0.0.0/0 10.100.146.32 /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
108 6480 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
|
In the KUBE-SVC-KBDEBIL6IU6WL7RF chain, traffic is now delivered only to the pod in the same AZ.
With topology mode hints in effect, kube-proxy rewrites its rules so that each node distributes traffic only to pods in its own AZ.
| ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
# Output
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
100 6000 KUBE-SEP-RSD5LYH4WSEXMEXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */
|
| ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
# Output
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-NLHUM4JSBL2UEFAX all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */
|
| ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
# Output
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
|
11. Scaling down to one pod: when the same AZ has no destination pod
(1) Scale the deployment down to one replica
| kubectl scale deployment deploy-echo --replicas 1
# Result
deployment.apps/deploy-echo scaled
|
(2) Check the pod's AZ
✅ Output (presumably from kubectl get pod -o wide, as in the earlier steps)
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-75b7b9558c-dwhmg 1/1 Running 0 42m 192.168.3.146 ip-192-168-3-72.ap-northeast-2.compute.internal <none> <none>
netshoot-pod 1/1 Running 0 41m 192.168.1.98 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
|
(3) Run the 100-request test again
| kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
|
✅ Output
All 100 requests land on the deploy-echo-75b7b9558c-dwhmg pod, even though it is in a different AZ from netshoot-pod
| 100 Hostname: deploy-echo-75b7b9558c-dwhmg
|
(4) Check the iptables policy
Inspecting the KUBE-SVC-KBDEBIL6IU6WL7RF chain on each node shows that every node now forwards all traffic to the single remaining pod (192.168.3.146), regardless of AZ.
| ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
100 6000 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
|
| ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
|
| ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-6CT57P6L6QUJZOMH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
|
(5) Check the EndpointSlices
The hint information is gone: with only one endpoint left, the controller cannot produce a balanced per-zone allocation, so it stops publishing hints.
| kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
|
✅ Output
| apiVersion: v1
items:
- addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
- 192.168.3.146
conditions:
ready: true
serving: true
terminating: false
nodeName: ip-192-168-3-72.ap-northeast-2.compute.internal
targetRef:
kind: Pod
name: deploy-echo-75b7b9558c-dwhmg
namespace: default
uid: 013991cc-6c6a-465a-be3c-2a7128ed4869
zone: ap-northeast-2c
kind: EndpointSlice
metadata:
creationTimestamp: "2025-02-13T15:02:50Z"
generateName: svc-clusterip-
generation: 7
labels:
endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
kubernetes.io/service-name: svc-clusterip
name: svc-clusterip-8gc6m
namespace: default
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: true
controller: true
kind: Service
name: svc-clusterip
uid: f97f9347-cd50-40d7-881e-253b2efe5825
resourceVersion: "447576"
uid: 2f964436-d4df-49bd-af2d-b11ec22702b2
ports:
- name: svc-webport
port: 8080
protocol: TCP
kind: List
metadata:
resourceVersion: ""
|
12. Delete the lab resources
| kubectl delete deploy deploy-echo; kubectl delete svc svc-clusterip
# Result
deployment.apps "deploy-echo" deleted
service "svc-clusterip" deleted
|
🧪 Blue/green deployment, canary deployment, and A/B testing lab with the AWS Load Balancer Controller
1. Clone the sample application
| (eks-user:N/A) [root@operator-host ~]# git clone https://github.com/paulbouwer/hello-kubernetes.git
# Result
Cloning into 'hello-kubernetes'...
remote: Enumerating objects: 294, done.
remote: Total 294 (delta 0), reused 0 (delta 0), pack-reused 294 (from 1)
Receiving objects: 100% (294/294), 168.42 KiB | 7.66 MiB/s, done.
Resolving deltas: 100% (120/120), done.
|
2. Install sample application v1
| (eks-user:N/A) [root@operator-host ~]# helm install --create-namespace --namespace hello-kubernetes v1 \
> ./hello-kubernetes/deploy/helm/hello-kubernetes \
> --set message="You are reaching hello-kubernetes version 1" \
> --set ingress.configured=true \
> --set service.type="ClusterIP"
# Result
NAME: v1
LAST DEPLOYED: Fri Feb 14 00:57:28 2025
NAMESPACE: hello-kubernetes
STATUS: deployed
REVISION: 1
TEST SUITE: None
|
3. Install sample application v2
| (eks-user:N/A) [root@operator-host ~]# helm install --create-namespace --namespace hello-kubernetes v2 \
> ./hello-kubernetes/deploy/helm/hello-kubernetes \
> --set message="You are reaching hello-kubernetes version 2" \
> --set ingress.configured=true \
> --set service.type="ClusterIP"
# Result
NAME: v2
LAST DEPLOYED: Fri Feb 14 00:57:59 2025
NAMESPACE: hello-kubernetes
STATUS: deployed
REVISION: 1
TEST SUITE: None
|
4. Verify the deployment
| (eks-user:N/A) [root@operator-host ~]# kubectl get pod,svc,ep -n hello-kubernetes
|
✅ Output
| NAME READY STATUS RESTARTS AGE
pod/hello-kubernetes-v1-7b546f6687-67w5x 1/1 Running 0 92s
pod/hello-kubernetes-v1-7b546f6687-nc877 1/1 Running 0 92s
pod/hello-kubernetes-v2-7b9df8f6c5-4qtd7 1/1 Running 0 60s
pod/hello-kubernetes-v2-7b9df8f6c5-g2jp9 1/1 Running 0 60s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-kubernetes-v1 ClusterIP 10.100.175.94 <none> 80/TCP 92s
service/hello-kubernetes-v2 ClusterIP 10.100.160.118 <none> 80/TCP 60s
NAME ENDPOINTS AGE
endpoints/hello-kubernetes-v1 192.168.1.195:8080,192.168.2.237:8080 92s
endpoints/hello-kubernetes-v2 192.168.1.176:8080,192.168.3.146:8080 60s
|
5. Create the Ingress
The alb.ingress.kubernetes.io/actions.blue-green annotation defines a weighted forward action across the two target groups; the Ingress backend references it by using the action name (blue-green) as the service name with port name use-annotation.
| (eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
> name: "hello-kubernetes"
> namespace: "hello-kubernetes"
> annotations:
> alb.ingress.kubernetes.io/scheme: internet-facing
> alb.ingress.kubernetes.io/target-type: ip
> alb.ingress.kubernetes.io/actions.blue-green: |
> {
> "type":"forward",
> "forwardConfig":{
> "targetGroups":[
> {
> "serviceName":"hello-kubernetes-v1",
> "servicePort":"80",
> "weight":100
> },
> {
> "serviceName":"hello-kubernetes-v2",
> "servicePort":"80",
> "weight":0
> }
> ]
> }
> }
> labels:
> app: hello-kubernetes
> spec:
> ingressClassName: alb
> rules:
> - http:
> paths:
> - path: /
> pathType: Prefix
> backend:
> service:
> name: blue-green
> port:
> name: use-annotation
> EOF
ingress.networking.k8s.io/hello-kubernetes created
|
6. Verify the Ingress and forwarding
Check the Ingress
| (eks-user:N/A) [root@operator-host ~]# kubectl describe ingress -n hello-kubernetes
|
✅ Output
| Name: hello-kubernetes
Labels: app=hello-kubernetes
Namespace: hello-kubernetes
Address: k8s-hellokub-hellokub-7e40b1a1ff-555575423.ap-northeast-2.elb.amazonaws.com
Ingress Class: alb
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ blue-green:use-annotation (<error: services "blue-green" not found>)
Annotations: alb.ingress.kubernetes.io/actions.blue-green:
{
"type":"forward",
"forwardConfig":{
"targetGroups":[
{
"serviceName":"hello-kubernetes-v1",
"servicePort":"80",
"weight":100
},
{
"serviceName":"hello-kubernetes-v2",
"servicePort":"80",
"weight":0
}
]
}
}
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfullyReconciled 3m4s ingress Successfully reconciled
|
Check the forwarding
At this point only version 1 responds
| (eks-user:N/A) [root@operator-host ~]# ELB_URL=$(kubectl get ingress -n hello-kubernetes -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
(eks-user:N/A) [root@operator-host ~]# while true; do curl -s $ELB_URL | grep version; sleep 1; done
You are reaching hello-kubernetes version 1
You are reaching hello-kubernetes version 1
You are reaching hello-kubernetes version 1
You are reaching hello-kubernetes version 1
You are reaching hello-kubernetes version 1
You are reaching hello-kubernetes version 1
....
|
The ALB forwards to two target groups:
- Existing version: k8s-hellokub-hellokub-bb26fff580 receives 100% of the traffic
- New version: k8s-hellokub-hellokub-f58a344fd6 receives 0% of the traffic
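The weights can also be read back from the ALB listener rules with the AWS CLI (a sketch; it assumes a single matching ALB and listener):
| ALB_ARN=$(aws elbv2 describe-load-balancers --query 'LoadBalancers[?contains(LoadBalancerName, `k8s-hellokub`)].LoadBalancerArn' --output text)
LISTENER_ARN=$(aws elbv2 describe-listeners --load-balancer-arn $ALB_ARN --query 'Listeners[0].ListenerArn' --output text)
aws elbv2 describe-rules --listener-arn $LISTENER_ARN \
  --query 'Rules[].Actions[].ForwardConfig.TargetGroups[].[TargetGroupArn,Weight]' --output table
|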
7. Blue/green deployment
Switch all traffic to the green version (v2)
| (eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
> name: "hello-kubernetes"
> namespace: "hello-kubernetes"
> annotations:
> alb.ingress.kubernetes.io/scheme: internet-facing
> alb.ingress.kubernetes.io/target-type: ip
> alb.ingress.kubernetes.io/actions.blue-green: |
> {
> "type":"forward",
> "forwardConfig":{
> "targetGroups":[
> {
> "serviceName":"hello-kubernetes-v1",
> "servicePort":"80",
> "weight":0
> },
> {
> "serviceName":"hello-kubernetes-v2",
> "servicePort":"80",
> "weight":100
> }
> ]
> }
> }
> labels:
> app: hello-kubernetes
> spec:
> ingressClassName: alb
> rules:
> - http:
> paths:
> - path: /
> pathType: Prefix
> backend:
> service:
> name: blue-green
> port:
> name: use-annotation
> EOF
# Result
ingress.networking.k8s.io/hello-kubernetes configured
|
The target group now forwards 100% of the traffic to version 2
| (eks-user:N/A) [root@operator-host ~]# while true; do curl -s $ELB_URL | grep version; sleep 1; done
You are reaching hello-kubernetes version 2
You are reaching hello-kubernetes version 2
You are reaching hello-kubernetes version 2
You are reaching hello-kubernetes version 2
You are reaching hello-kubernetes version 2
You are reaching hello-kubernetes version 2
.....
|
Check the ALB target group change
The weight for k8s-hellokub-hellokub-f58a344fd6 has changed to 100.
8. Canary deployment
Configure a 90/10 canary split
| (eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
> name: "hello-kubernetes"
> namespace: "hello-kubernetes"
> annotations:
> alb.ingress.kubernetes.io/scheme: internet-facing
> alb.ingress.kubernetes.io/target-type: ip
> alb.ingress.kubernetes.io/actions.blue-green: |
> {
> "type":"forward",
> "forwardConfig":{
> "targetGroups":[
> {
> "serviceName":"hello-kubernetes-v1",
> "servicePort":"80",
> "weight":90
> },
> {
> "serviceName":"hello-kubernetes-v2",
> "servicePort":"80",
> "weight":10
> }
> ]
> }
> }
> labels:
> app: hello-kubernetes
> spec:
> ingressClassName: alb
> rules:
> - http:
> paths:
> - path: /
> pathType: Prefix
> backend:
> service:
> name: blue-green
> port:
> name: use-annotation
> EOF
# Result
ingress.networking.k8s.io/hello-kubernetes configured
|
Check the access results
The split is roughly the configured 90/10; exact counts vary from run to run. A staged weight-promotion loop is sketched after the output.
| (eks-user:N/A) [root@operator-host ~]# for i in {1..100}; do curl -s $ELB_URL | grep version ; done | sort | uniq -c | sort -nr
86 You are reaching hello-kubernetes version 1
14 You are reaching hello-kubernetes version 2
|
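In practice the canary weight is raised in stages rather than jumping straight to 100. A minimal sketch (the 10/30/50/100 schedule is hypothetical; it rewrites the blue-green action annotation in place and pauses between steps so metrics can be checked):
| for W in 10 30 50 100; do
  kubectl annotate ingress hello-kubernetes -n hello-kubernetes --overwrite \
    alb.ingress.kubernetes.io/actions.blue-green="{\"type\":\"forward\",\"forwardConfig\":{\"targetGroups\":[{\"serviceName\":\"hello-kubernetes-v1\",\"servicePort\":\"80\",\"weight\":$((100-W))},{\"serviceName\":\"hello-kubernetes-v2\",\"servicePort\":\"80\",\"weight\":$W}]}}"
  sleep 60   # watch error rates/latency here before promoting further
done
|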
9. A/B testing
Configure the A/B test
| (eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
> name: "hello-kubernetes"
> namespace: "hello-kubernetes"
> annotations:
> alb.ingress.kubernetes.io/scheme: internet-facing
> alb.ingress.kubernetes.io/target-type: ip
> alb.ingress.kubernetes.io/conditions.ab-testing: >
> [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "HeaderName", "values":["aews-study"]}}]
> alb.ingress.kubernetes.io/actions.ab-testing: >
> {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"hello-kubernetes-v2","servicePort":80}]}}
> labels:
> app: hello-kubernetes
> spec:
> ingressClassName: alb
> rules:
> - http:
> paths:
> - path: /
> pathType: Prefix
> backend:
> service:
> name: ab-testing
> port:
> name: use-annotation
> - path: /
> pathType: Prefix
> backend:
> service:
> name: hello-kubernetes-v1
> port:
> name: http
> EOF
# Result
ingress.networking.k8s.io/hello-kubernetes configured
|
- Traffic is split on the HTTP header condition HeaderName: aews-study
Verify the A/B test
With the HeaderName: aews-study header present, 100% is forwarded to version 2
| (eks-user:N/A) [root@operator-host ~]# for i in {1..100}; do curl -s -H "HeaderName: aews-study" $ELB_URL | grep version ; done | sort | uniq -c | sort -nr
100 You are reaching hello-kubernetes version 2
|
Without the header, 100% is forwarded to version 1
| (eks-user:N/A) [root@operator-host ~]# for i in {1..100}; do curl -s $ELB_URL | grep version ; done | sort | uniq -c | sort -nr
100 You are reaching hello-kubernetes version 1
|
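Since the ALB condition matches on both the header name and its value, a request carrying the right name with a different value should also fall through to version 1 (quick check):
| for i in {1..10}; do curl -s -H "HeaderName: other-value" $ELB_URL | grep version ; done | sort | uniq -c
|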
10. Delete the lab resources
| (eks-user:N/A) [root@operator-host ~]# kubectl delete ingress -n hello-kubernetes hello-kubernetes && kubectl delete ns hello-kubernetes
# Result
ingress.networking.k8s.io "hello-kubernetes" deleted
namespace "hello-kubernetes" deleted
|
🕵️‍♂️ Network analysis tool
1. Install KubeSkoop
| (eks-user:N/A) [root@operator-host ~]# kubectl apply -f https://raw.githubusercontent.com/alibaba/kubeskoop/main/deploy/skoopbundle.yaml
# Result
namespace/kubeskoop created
daemonset.apps/kubeskoop-exporter created
configmap/kubeskoop-config created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-server-conf created
deployment.apps/prometheus-deployment created
service/prometheus-service created
service/loki-service created
configmap/grafana-datasources created
deployment.apps/grafana created
service/grafana created
deployment.apps/grafana-loki created
configmap/grafana-loki-config created
clusterrole.rbac.authorization.k8s.io/kubeskoop-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubeskoop-controller created
role.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/controller created
configmap/kubeskoop-controller-config created
deployment.apps/controller created
service/controller created
deployment.apps/webconsole created
service/webconsole created
(eks-user:N/A) [root@operator-host ~]# kubectl patch service webconsole -n kubeskoop -p '{"spec": {"type": "LoadBalancer"}}'
service/webconsole patched
(eks-user:N/A) [root@operator-host ~]# kubectl patch service prometheus-service -n kubeskoop -p '{"spec": {"type": "LoadBalancer"}}'
service/prometheus-service patched
(eks-user:N/A) [root@operator-host ~]# kubectl patch service grafana -n kubeskoop -p '{"spec": {"type": "LoadBalancer"}}'
service/grafana patched
|
2. Access the KubeSkoop web console
Log in with admin / kubeskoop
| (eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop webconsole
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webconsole LoadBalancer 10.100.14.77 ac2514347b0c84f0b84e1aa71fe7e0dc-996135232.ap-northeast-2.elb.amazonaws.com 80:32223/TCP 84s
|
| (eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop webconsole -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "KubeSkoop URL = http://"$1""}'
KubeSkoop URL = http://ac2514347b0c84f0b84e1aa71fe7e0dc-996135232.ap-northeast-2.elb.amazonaws.com
|
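Before opening the console, it is worth confirming that all KubeSkoop components came up:
| kubectl get pod -n kubeskoop
|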
3. Packet-capture test
Check the netshoot-pod details
| (eks-user:N/A) [root@operator-host ~]# k get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
netshoot-pod 1/1 Running 0 109m 192.168.1.98 ip-192-168-1-193.ap-northeast-2.compute.internal <none> <none>
|
Ping test against netshoot-pod from the operator host
| (eks-user:N/A) [root@operator-host ~]# ping 192.168.1.98
PING 192.168.1.98 (192.168.1.98) 56(84) bytes of data.
64 bytes from 192.168.1.98: icmp_seq=1 ttl=126 time=1.08 ms
64 bytes from 192.168.1.98: icmp_seq=2 ttl=126 time=0.644 ms
64 bytes from 192.168.1.98: icmp_seq=3 ttl=126 time=0.649 ms
64 bytes from 192.168.1.98: icmp_seq=4 ttl=126 time=0.680 ms
64 bytes from 192.168.1.98: icmp_seq=5 ttl=126 time=0.767 ms
|
4. Access the Prometheus web UI
| (eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop prometheus-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "prometheus URL = http://"$1""}'
prometheus URL = http://af6d238bde2c24a829f56fd1abdc6989-1339249886.ap-northeast-2.elb.amazonaws.com
|
5. Access the Grafana web UI
Log in with admin / kubeskoop
| (eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop grafana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "grafana URL = http://"$1""}'
grafana URL = http://a87ee7a3d119b4d0a9a742a9d992375d-114222539.ap-northeast-2.elb.amazonaws.com
|
🚀 Configuring IPVS mode on AWS EKS
IPVS mode
- IPVS is a software load balancer that runs in the Linux kernel. It uses Netfilter as its backend platform and can handle TCP/UDP requests.
- Rule-based iptables processing has performance limits at scale and offers no choice of load-balancing algorithm, so IPVS is now generally preferred.
1. Configure the IPVS kernel modules on each server
Load and persist the IPVS modules on servers 1, 2, and 3; the commands are identical on every node (a loop version is sketched after the three transcripts).
| [ec2-user@ip-192-168-1-193 ~]$ sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'
[ec2-user@ip-192-168-1-193 ~]$ sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
|
| [ec2-user@ip-192-168-2-52 ~]$ sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'
[ec2-user@ip-192-168-2-52 ~]$ sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
|
| [ec2-user@ip-192-168-3-72 ~]$ sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'
[ec2-user@ip-192-168-3-72 ~]$ sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
|
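Since the commands are identical on every node, the module load can also be driven from the operator host in one loop (a sketch reusing the $N1/$N2/$N3 variables from earlier steps; it loads a subset of the schedulers and verifies the result):
| for i in $N1 $N2 $N3; do
  echo ">> node $i <<"
  ssh ec2-user@$i 'for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do sudo modprobe $m; done; sudo lsmod | grep ^ip_vs'
done
|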
2. Verify the IPVS modules
On each server, confirm the modules are loaded with lsmod | grep ^ip_vs
| [ec2-user@ip-192-168-1-193 ~]$ sudo lsmod | grep ^ip_vs
# Output
ip_vs_nq 16384 0
ip_vs_sed 16384 0
ip_vs_dh 16384 0
ip_vs_sh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 192512 20 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed
|
| [ec2-user@ip-192-168-2-52 ~]$ sudo lsmod | grep ^ip_vs
# Output
ip_vs_nq 16384 0
ip_vs_sed 16384 0
ip_vs_dh 16384 0
ip_vs_sh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 192512 20 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed
|
| [ec2-user@ip-192-168-3-72 ~]$ sudo lsmod | grep ^ip_vs
# Output
ip_vs_nq 16384 0
ip_vs_sed 16384 0
ip_vs_dh 16384 0
ip_vs_sh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 192512 20 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed
|
3. Apply IPVS mode to the EKS kube-proxy add-on
| aws eks update-addon --cluster-name $CLUSTER_NAME --addon-name kube-proxy \
--configuration-values '{"ipvs": {"scheduler": "rr"}, "mode": "ipvs"}' \
--resolve-conflicts OVERWRITE
# Result
{
"update": {
"id": "3e57ed90-40b8-3ebb-98ac-73e497864986",
"status": "InProgress",
"type": "AddonUpdate",
"params": [
{
"type": "ResolveConflicts",
"value": "OVERWRITE"
},
{
"type": "ConfigurationValues",
"value": "{\"ipvs\": {\"scheduler\": \"rr\"}, \"mode\": \"ipvs\"}"
}
],
"createdAt": "2025-02-14T02:06:42.661000+09:00",
"errors": []
}
}
|
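The add-on update runs asynchronously; its progress can be polled with the update ID returned above (standard aws eks describe-update call):
| aws eks describe-update --name $CLUSTER_NAME --addon-name kube-proxy \
  --update-id 3e57ed90-40b8-3ebb-98ac-73e497864986 --query 'update.status'
|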
4. Restart the kube-proxy DaemonSet
| kubectl -n kube-system rollout restart ds kube-proxy
# Result
daemonset.apps/kube-proxy restarted
|
5. Verify the kube-proxy configuration
| kubectl get cm -n kube-system kube-proxy-config -o yaml
|
✅ Output
Confirm mode: "ipvs" and scheduler: "rr" in the rendered config
| apiVersion: v1
data:
config: |-
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /var/lib/kube-proxy/kubeconfig
qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -998
portRange: ""
kind: ConfigMap
metadata:
creationTimestamp: "2025-02-12T03:13:56Z"
labels:
eks.amazonaws.com/component: kube-proxy
k8s-app: kube-proxy
name: kube-proxy-config
namespace: kube-system
resourceVersion: "466743"
uid: 83ba2b60-5afb-4ea8-b037-84a02d9036ef
|
6. Verify the kube-ipvs0 interface
After switching to IPVS mode, a kube-ipvs0 interface is created on each node and every service's virtual IP is bound to it
| for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done
|
✅ Output
| >> node 43.202.57.204 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:23:93:a6:bc:61 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.1.193/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
valid_lft 2300sec preferred_lft 2300sec
inet6 fe80::23:93ff:fea6:bc61/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: eni01a4864c88a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 56:be:6a:bd:2d:2a brd ff:ff:ff:ff:ff:ff link-netns cni-a59ccd2b-5db2-0159-b78e-e0797b300a23
inet6 fe80::54be:6aff:febd:2d2a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: eni61c5a949744@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 52:41:42:c3:03:3f brd ff:ff:ff:ff:ff:ff link-netns cni-74caca86-36e2-a922-2920-c2c8c00e7b43
inet6 fe80::5041:42ff:fec3:33f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
14: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:d8:5d:80:2c:c9 brd ff:ff:ff:ff:ff:ff
altname enp0s7
inet 192.168.1.232/24 brd 192.168.1.255 scope global ens7
valid_lft forever preferred_lft forever
inet6 fe80::d8:5dff:fe80:2cc9/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
38: enica32516c01a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 92:2a:72:57:7d:03 brd ff:ff:ff:ff:ff:ff link-netns cni-f0cb87e6-1f26-e2c9-b622-9a99215f4cca
inet6 fe80::902a:72ff:fe57:7d03/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
41: enifc4d699b169@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether b6:46:07:f2:a2:65 brd ff:ff:ff:ff:ff:ff link-netns cni-2c0732df-1963-6937-d7ad-2ea9fe9f3481
inet6 fe80::b446:7ff:fef2:a265/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
42: enid72dce04775@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 5e:54:5b:e7:37:87 brd ff:ff:ff:ff:ff:ff link-netns cni-50b39a65-fb46-3e73-0831-6f9b159d1c56
inet6 fe80::5c54:5bff:fee7:3787/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
43: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether ce:47:f1:9e:78:ad brd ff:ff:ff:ff:ff:ff
inet 10.100.110.88/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.214.208/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.73.70/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.17.159/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.14.77/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.145.143/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.83.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.146.32/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.101.216/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
>> node 15.164.179.214 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:c5:5b:d1:77:57 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.2.52/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
valid_lft 2295sec preferred_lft 2295sec
inet6 fe80::4c5:5bff:fed1:7757/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enibce2df30e87@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 4a:c8:23:ab:85:39 brd ff:ff:ff:ff:ff:ff link-netns cni-5408bb28-ad79-a3f3-3a60-9442968852b1
inet6 fe80::48c8:23ff:feab:8539/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: enic99196c7a64@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 8a:00:6e:37:b1:71 brd ff:ff:ff:ff:ff:ff link-netns cni-9343fb30-bdc8-5fab-b46d-3a5db58f8007
inet6 fe80::8800:6eff:fe37:b171/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
28: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:0e:23:61:2c:f9 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.2.136/24 brd 192.168.2.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::40e:23ff:fe61:2cf9/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
34: eni79cb46fcdac@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether c6:d6:b9:a4:cf:99 brd ff:ff:ff:ff:ff:ff link-netns cni-188308ca-6087-e1e0-f4f3-7e6fd2da16e3
inet6 fe80::c4d6:b9ff:fea4:cf99/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
39: enib1a09dbf60e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 76:23:31:29:2f:9f brd ff:ff:ff:ff:ff:ff link-netns cni-cb0b81af-c51f-85de-47fa-5036f76d4745
inet6 fe80::7423:31ff:fe29:2f9f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
40: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 82:2e:0b:7a:92:81 brd ff:ff:ff:ff:ff:ff
inet 10.100.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.17.159/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.73.70/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.110.88/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.146.32/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.83.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.145.143/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.101.216/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.214.208/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.14.77/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
>> node 43.201.115.81 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 2304sec preferred_lft 2304sec
inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: enif2c43957bf8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether a2:e3:15:62:47:31 brd ff:ff:ff:ff:ff:ff link-netns cni-8acd7723-2d0b-690f-3d4b-d3e902287dd1
inet6 fe80::a0e3:15ff:fe62:4731/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
14: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:0c:3c:65:9c:27 brd ff:ff:ff:ff:ff:ff
altname enp0s7
inet 192.168.3.218/24 brd 192.168.3.255 scope global ens7
valid_lft forever preferred_lft forever
inet6 fe80::80c:3cff:fe65:9c27/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
35: eniaad872f8f96@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 5e:71:1d:0d:58:76 brd ff:ff:ff:ff:ff:ff link-netns cni-83401432-a5a6-e355-338b-5b0b2be54e2f
inet6 fe80::5c71:1dff:fe0d:5876/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
39: eni1a06225ba03@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 6e:99:bf:09:eb:4a brd ff:ff:ff:ff:ff:ff link-netns cni-76c37d54-4b57-554c-2033-1bd0903ea10a
inet6 fe80::6c99:bfff:fe09:eb4a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
42: eni00c25e070e3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 5a:70:f9:2d:fb:d6 brd ff:ff:ff:ff:ff:ff link-netns cni-ea99b434-526f-c3dc-8f3c-312ea67461e7
inet6 fe80::5870:f9ff:fe2d:fbd6/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
43: enif2b59b1644b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 7e:a5:ca:a5:92:ae brd ff:ff:ff:ff:ff:ff link-netns cni-d32736a6-d35d-b2bf-009d-446df94aa7b8
inet6 fe80::7ca5:caff:fea5:92ae/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
44: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 2a:28:b6:08:eb:09 brd ff:ff:ff:ff:ff:ff
inet 10.100.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.73.70/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.101.216/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.14.77/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.145.143/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.214.208/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.110.88/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.83.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.146.32/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.100.17.159/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
|
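To inspect the resulting IPVS table itself (virtual-service to real-server mappings and the rr scheduler), ipvsadm can be run on a node. It is not installed by default on the node AMI, so this sketch installs it first (assumes a yum-based AMI):
| ssh ec2-user@$N1 'sudo yum install -y -q ipvsadm && sudo ipvsadm -Ln'
|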
🗑️ (After the lab) Clean up
Delete the Amazon EKS cluster (takes about 10 minutes)
| eksctl delete cluster --name $CLUSTER_NAME
# Result
2025-02-14 02:20:39 [ℹ] deleting EKS cluster "myeks"
2025-02-14 02:20:39 [ℹ] will drain 0 unmanaged nodegroup(s) in cluster "myeks"
2025-02-14 02:20:39 [ℹ] starting parallel draining, max in-flight of 1
2025-02-14 02:20:40 [ℹ] deleted 0 Fargate profile(s)
2025-02-14 02:20:40 [✔] kubeconfig has been updated
2025-02-14 02:20:40 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2025-02-14 02:22:55 [ℹ] 4 sequential tasks: { delete nodegroup "ng1", delete IAM OIDC provider, delete addon IAM "eksctl-myeks-addon-vpc-cni", delete cluster control plane "myeks" [async] }
2025-02-14 02:22:55 [ℹ] will delete stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:22:55 [ℹ] waiting for stack "eksctl-myeks-nodegroup-ng1" to get deleted
2025-02-14 02:22:55 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:23:26 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:23:58 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:25:36 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:27:34 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:29:08 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:30:35 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:31:32 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:33:04 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:33:05 [ℹ] will delete stack "eksctl-myeks-addon-vpc-cni"
2025-02-14 02:33:05 [ℹ] will delete stack "eksctl-myeks-cluster"
2025-02-14 02:33:06 [✔] all cluster resources were deleted
|
Finally, delete the remaining myeks CloudFormation stack:
| aws cloudformation delete-stack --stack-name myeks
|
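delete-stack returns immediately; to block until the stack is actually gone, the standard CloudFormation waiter can be used:
| aws cloudformation wait stack-delete-complete --stack-name myeks
|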