AEWS Week 2 Notes

๐Ÿง AWS EKS ํ™˜๊ฒฝ ์„ค์น˜ ์ค€๋น„ (Arch Linux)

1. System Update

yay -Syu

2. Install AWS CLI

yay -S aws-cli-v2

# ์„ค์น˜ํ™•์ธ
aws --version

# ๊ฒฐ๊ณผ
aws-cli/2.24.0 Python/3.13.1 Linux/6.13.1-arch1-1 source/x86_64.arch

3. Install eksctl

The latest eksctl is required (supports Kubernetes version 1.31).

ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then ARCH="amd64"; fi
if [[ "$ARCH" == "aarch64" ]]; then ARCH="arm64"; fi

curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_${ARCH}.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
sudo chmod +x /usr/local/bin/eksctl

# ์„ค์น˜ํ™•์ธ
eksctl version

# ๊ฒฐ๊ณผ
0.203.0

4. Install kubectl

yay -S kubectl

# ์„ค์น˜ํ™•์ธ
kubectl version --client=true

# ๊ฒฐ๊ณผ
Client Version: v1.32.1
Kustomize Version: v5.5.0

5. Install Helm

yay -S helm

# ์„ค์น˜ํ™•์ธ
helm version

# ๊ฒฐ๊ณผ
version.BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}

6. Install krew and Add Plugins

yay -S krew

# ์„ค์น˜ํ™•์ธ
kubectl krew version

# ๊ฒฐ๊ณผ
OPTION            VALUE
GitTag            v0.4.4
GitCommit         343e657
IndexURI          https://github.com/kubernetes-sigs/krew-index.git
BasePath          /home/devshin/.krew
IndexPath         /home/devshin/.krew/index/default
InstallPath       /home/devshin/.krew/store
BinPath           /home/devshin/.krew/bin
DetectedPlatform  linux/amd64
echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.zshrc
echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc

source ~/.zshrc  # if using zsh
source ~/.bashrc # if using bash
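
To confirm the new PATH entry is actually in effect, a small sketch can check for the krew bin directory (the `path_has` helper is hypothetical, not part of krew):

```shell
# Succeeds when the given directory appears as a PATH entry.
path_has() {
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Replicate what the line added to ~/.bashrc / ~/.zshrc does, then verify.
PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
path_has "${KREW_ROOT:-$HOME/.krew}/bin" && echo "krew bin on PATH"
```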
kubectl krew install neat get-all df-pv stern
kubectl krew list

# Result
PLUGIN   VERSION
df-pv    v0.3.0
get-all  v1.3.8
krew     v0.4.4
neat     v2.0.4
stern    v1.32.0

7. Install kube-ps1

yay -S kube-ps1
echo "source /opt/kube-ps1/kube-ps1.sh" >> ~/.bashrc
echo 'PS1="[\u@\h \W $(kube_ps1)]$ "' >> ~/.bashrc
source ~/.bashrc

echo "source /opt/kube-ps1/kube-ps1.sh" >> ~/.zshrc
echo 'PROMPT="$(kube_ps1)$PROMPT"' >> ~/.zshrc
source ~/.zshrc

8. Install kubectx

yay -S kubectx

9. Install kubecolor and Set Aliases

yay -S kubecolor

echo "alias k=kubectl" >> ~/.bashrc
echo "alias kubectl=kubecolor" >> ~/.bashrc
echo 'complete -F __start_kubectl kubecolor' >> ~/.bashrc
source ~/.bashrc

echo "alias k=kubectl" >> ~/.zshrc
echo "alias kubectl=kubecolor" >> ~/.zshrc
echo 'autoload -U compinit && compinit' >> ~/.zshrc
echo "compdef kubecolor=kubectl" >> ~/.zshrc
source ~/.zshrc

10. (Optional) Install the AWS Session Manager Plugin

yay -S aws-session-manager-plugin

# ์„ค์น˜ํ™•์ธ
session-manager-plugin --version

# ๊ฒฐ๊ณผ
1.2.707.0

11. (Optional) Install sshpass

yay -S sshpass

# ์„ค์น˜ํ™•์ธ
sshpass -V

# ๊ฒฐ๊ณผ
sshpass 1.10
(C) 2006-2011 Lingnu Open Source Consulting Ltd.
(C) 2015-2016, 2021-2022 Shachar Shemesh
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.

Using "assword" as the default password prompt indicator.

12. (Optional) Install Wireshark

Capture packets and inspect the contents of captured files.

yay -S wireshark-qt

# Wireshark may not run with regular user privileges, so add your user to the wireshark group
sudo usermod -aG wireshark $USER
newgrp wireshark  # apply the change immediately

# ์„ค์น˜ ํ™•์ธ
wireshark --version

# ๊ฒฐ๊ณผ
Wireshark 4.4.3.

๐Ÿ” AWS Configure ์ž๊ฒฉ ์ฆ๋ช… ์„ค์ •

1
2
3
4
5
aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXX
Default region name [None]: ap-northeast-2
Default output format [None]: json
  • AWS Access Key ID: (enter the issued Access Key ID)
  • AWS Secret Access Key: (enter the Secret Access Key)
  • Default region name: ap-northeast-2 (Seoul region; choose any region you like)
  • Default output format: json or yaml (default: json)
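
For reference, `aws configure` persists these answers into two INI files under `~/.aws/` (the key values below are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXX

# ~/.aws/config
[default]
region = ap-northeast-2
output = json
```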

๐Ÿš€ Deploying the Base Lab Environment with AWS CloudFormation

Image

1. Download the CloudFormation Template

cd Downloads
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-2week.yaml

2. Deploy the CloudFormation Stack

aws cloudformation deploy --template-file ~/Downloads/myeks-2week.yaml \
    --stack-name myeks --parameter-overrides KeyName=kp-aews SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2

# Result
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - myeks

operator-host๋ผ๋Š” ์ด๋ฆ„์˜ t2.small EC2 ์ธ์Šคํ„ด์Šค๊ฐ€ ํ•˜๋‚˜ ๋ฐฐํฌ๋จ

Image

3. ๋ฐฐํฌ๋œ EC2 ์ธ์Šคํ„ด์Šค์— ์ ‘์†

(1) CloudFormation ์Šคํƒ ๋ฐฐํฌ ์™„๋ฃŒ ํ›„ ์šด์˜์„œ๋ฒ„ EC2 IP ์ถœ๋ ฅ

1
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text

โœ… Output

15.165.15.90

(2) ์šด์˜์„œ๋ฒ„ EC2์— SSH ์ ‘์†

1
2
# ssh -i kp-aews.pem ec2-user@15.165.15.90
ssh -i kp-aews.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)

4. Key Management with ssh-agent

(1) Start ssh-agent and add the key

eval "$(ssh-agent -s)"
ssh-add ~/Downloads/kp-aews.pem

(2) SSH access (without specifying the key file)

ssh ec2-user@15.165.15.90
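
As an alternative to ssh-agent, a host entry in `~/.ssh/config` also avoids typing the key path each time (the `operator` alias is an arbitrary name; the IP and key file are the ones used in this walkthrough):

```
Host operator
    HostName 15.165.15.90
    User ec2-user
    IdentityFile ~/Downloads/kp-aews.pem
```

With this entry, `ssh operator` is equivalent to the command above.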

Image

5. ์šด์˜ ์„œ๋ฒ„ EC2(operator-host)์—์„œ EKS ์ ‘๊ทผ ์„ค์ •

(1) AWS IAM ์ž๊ฒฉ์ฆ๋ช… ์„ค์ •

1
2
3
4
5
[root@operator-host ~]# aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXX
Default region name [None]: ap-northeast-2
Default output format [None]: json

(2) Verify the IAM User (get-caller-identity)

[root@operator-host ~]# aws sts get-caller-identity --query Arn
"arn:aws:iam::378102432899:user/eks-user"

6. Verify Peering Connections

Image

The myeks-VPC and operator-VPC are connected via a peering connection.

Image

myeks-VPC ๋ผ์šฐํŒ… ๊ฒฝ๋กœ

  • myeks-VPC ํผ๋ธ”๋ฆญ ์„œ๋ธŒ๋„ท์—์„  172.20.0.0/16 ๋Œ€์—ญ๊ณผ ํ†ต์‹  ์‹œ ์ธํ„ฐ๋„ท ๊ฒฝ์œ  ์—†์ด ํ”ผ์–ด๋ง(Peering)์„ ํ†ตํ•ด Operator VPC์™€ ์ง์ ‘ ํ†ต์‹ 

Image

operator-VPC ๋ผ์šฐํŒ… ๊ฒฝ๋กœ

  • operator-VPC ํผ๋ธ”๋ฆญ ์„œ๋ธŒ๋„ท์—์„  192.168.0.0/16 ๋Œ€์—ญ๊ณผ ํ†ต์‹  ์‹œ ์ธํ„ฐ๋„ท ๊ฒฝ์œ  ์—†์ด ํ”ผ์–ด๋ง(Peering)์„ ํ†ตํ•ด myeks-VPC์™€ ์ง์ ‘ ํ†ต์‹ 

Image


๐Ÿšข Deploying EKS with eksctl

1. Set the Cluster Name Variable

export CLUSTER_NAME=myeks

2. Set the myeks-VPC Variable

export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" --query 'Vpcs[*].VpcId' --output text)
echo $VPCID

โœ… Output

vpc-02725f328b257230c

Image

The myeks-VPC ID found in the AWS console under VPC > Your VPCs: vpc-02725f328b257230c

Image

3. Set the myeks Public Subnet Variables

Store the public subnet IDs for AZ1, AZ2, and AZ3 in variables.

export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet3" --query "Subnets[0].[SubnetId]" --output text)
echo $PubSubnet1 $PubSubnet2 $PubSubnet3

โœ… Output

subnet-0b53307d4d544b3bf subnet-019936bd535b68960 subnet-0d19099a6b73555f8

Subnet CIDR ranges

  • AZ1: 192.168.1.0/24, AZ2: 192.168.2.0/24, AZ3: 192.168.3.0/24
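
The /24-per-AZ layout above can be sketched as a simple lookup (the `az_for_ip` helper is hypothetical, just mirroring the table):

```shell
# Map a worker node's private IP to its AZ, following the
# 192.168.<n>.0/24 -> AZ<n> scheme used by this lab's subnets.
az_for_ip() {
  case "$1" in
    192.168.1.*) echo "ap-northeast-2a" ;;
    192.168.2.*) echo "ap-northeast-2b" ;;
    192.168.3.*) echo "ap-northeast-2c" ;;
    *)           echo "unknown" ;;
  esac
}

az_for_ip 192.168.2.52  # -> ap-northeast-2b
```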

Image

4. Write myeks.yaml

Items to change:

VPC ID, Public Subnet1 ID, Public Subnet2 ID, Public Subnet3 ID, SSH Public Key Name

echo $VPCID
vpc-02725f328b257230c
echo $PubSubnet1 $PubSubnet2 $PubSubnet3
subnet-0b53307d4d544b3bf subnet-019936bd535b68960 subnet-0d19099a6b73555f8
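
Rather than editing by hand, this walkthrough's example IDs can be swapped for your own with sed; a minimal sketch (the `stamp_ids` helper name is hypothetical) that rewrites the example values shown above:

```shell
# Replace this walkthrough's example VPC/subnet IDs in a myeks.yaml copy
# with the caller's own IDs. Usage: stamp_ids <file> <vpc> <sn1> <sn2> <sn3>
stamp_ids() {
  sed -i \
    -e "s/vpc-02725f328b257230c/$2/" \
    -e "s/subnet-0b53307d4d544b3bf/$3/" \
    -e "s/subnet-019936bd535b68960/$4/" \
    -e "s/subnet-0d19099a6b73555f8/$5/" \
    "$1"
}

# Example: stamp_ids myeks.yaml "$VPCID" "$PubSubnet1" "$PubSubnet2" "$PubSubnet3"
```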

myeks.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.31"

kubernetesNetworkConfig:
  ipFamily: IPv4

iam:
  vpcResourceControllerPolicy: true
  withOIDC: true

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP

vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true # if you only want to allow private access to the cluster
    publicAccess: true # if you want to allow public access to the cluster
  id: vpc-02725f328b257230c  # Replace with your own value
  manageSharedNodeSecurityGroupRules: true # if you want to manage the rules of the shared node security group
  nat:
    gateway: Disable
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: subnet-0b53307d4d544b3bf  # Replace with your own value
      ap-northeast-2b:
        az: ap-northeast-2b
        cidr: 192.168.2.0/24
        id: subnet-019936bd535b68960  # Replace with your own value
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.3.0/24
        id: subnet-0d19099a6b73555f8  # Replace with your own value

addons:
  - name: vpc-cni # no version is specified so it deploys the default version
    version: latest # auto discovers the latest available
    attachPolicyARNs: # attach IAM policies to the add-on's service account
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"

  - name: kube-proxy
    version: latest

  - name: coredns
    version: latest

  - name: metrics-server
    version: latest

privateCluster:
  enabled: false
  skipEndpointCreation: false

managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false # Disable ALB Ingress Controller
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: true # Enable AWS Load Balancer Controller
      certManager: true # Enable cert-manager
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: true # Enable ExternalDNS
      fsx: false
      imageBuilder: true
      xRay: false
  instanceSelector: {}
  instanceType: t3.medium
  preBootstrapCommands:
    # install additional packages
    - "dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y"
    # disable hyperthreading
    - "for n in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | cut -s -d, -f2- | tr ',' '\n' | sort -un); do echo 0 > /sys/devices/system/cpu/cpu${n}/online; done"
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: ng1
  maxSize: 3
  minSize: 3
  name: ng1
  privateNetworking: false
  releaseVersion: ""
  securityGroups:
    withLocal: null
    withShared: null
  ssh:
    allow: true
    publicKeyName: kp-aews  # Replace with your own value
  tags:
    alpha.eksctl.io/nodegroup-name: ng1
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 30
  volumeThroughput: 125
  volumeType: gp3

5. Deploy EKS with the Final YAML

(1) Set the kubeconfig file path

export KUBECONFIG=~/Downloads/kubeconfig

(2) ๋ฐฐํฌ

1
eksctl create cluster -f myeks.yaml --verbose 4

Image

๋ฐฐํฌ์™„๋ฃŒ

Image

6. ๋ฐฐํฌ ํ›„ ๊ธฐ๋ณธ ์ •๋ณด ํ™•์ธ

(1) API ์„œ๋ฒ„ ์—”๋“œํฌ์ธํŠธ ๋ฐ OIDC ์ •๋ณด

API Server Endpoint์™€ OpenID Connect Provider URL(oidc) ํฌํ•จ

Image

(2) Compute

Node groups์—์„œ AMI(AL2023) ํ™•์ธ ๊ฐ€๋Šฅ

Image

(3) Networking

Access is set to Public and Private.

Image

(4) Add-ons

VPC CNI์—์„œ Edit ํ›„ IRSA ๊ถŒํ•œ ์„ค์ • ํ™•์ธ โ†’ ํ•ด๋‹น IAM Role ํ™•์ธ

Image

Image

(5) Access

IAM Access Entries์—์„œ ์„ค์น˜ ์‹œ ์‚ฌ์šฉํ•œ ์ž๊ฒฉ์ฆ๋ช… ์‚ฌ์šฉ์ž ํ™•์ธ

AmazonEKSClusterAdminPolicy ๊ด€๋ฆฌ์ž ์ •์ฑ…์ด ํฌํ•จ๋˜์–ด ์žˆ์–ด EKS ๊ด€๋ฆฌ ์ฝ˜์†” ์ ‘๊ทผ ๋ฐ ๊ด€๋ฆฌ ๊ฐ€๋Šฅ

Image

7. Verify EKS Information

kubectl cluster-info

โœ… Output

Kubernetes control plane is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

8. ๋„ค์ž„์ŠคํŽ˜์ด์Šค default ๋ณ€๊ฒฝ ์ ์šฉ

1
2
3
4
kubens default

# Result
โœ” Active namespace is "default"

9. Check the Current Kubernetes Context

cat $KUBECONFIG | grep current-context

โœ… Output

current-context: eks-user@myeks.ap-northeast-2.eksctl.io

10. Rename the Kubernetes Context

kubectl config rename-context "eks-user@myeks.ap-northeast-2.eksctl.io" "eksworkshop"

# Result
Context "eks-user@myeks.ap-northeast-2.eksctl.io" renamed to "eksworkshop".
cat $KUBECONFIG | grep current-context

โœ… Output

current-context: eksworkshop

11. Check Node Information

kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone

โœ… Output

NAME                                               STATUS   ROLES    AGE    VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   169m   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2a
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   169m   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2b
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   169m   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2
kubectl get node -v=6

โœ… Output

I0212 15:08:46.728964  137397 loader.go:402] Config loaded from file:  /home/devshin/Downloads/kubeconfig
I0212 15:08:46.729339  137397 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0212 15:08:46.729350  137397 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0212 15:08:46.729355  137397 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0212 15:08:46.729359  137397 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0212 15:08:47.170830  137397 round_trippers.go:560] GET https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 436 milliseconds
NAME                                               STATUS   ROLES    AGE    VERSION
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   170m   v1.31.4-eks-aeac579
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   170m   v1.31.4-eks-aeac579
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   170m   v1.31.4-eks-aeac579

12. Check Pod Information

kubectl get pod -A

โœ… Output

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-6mctb                    2/2     Running   0          173m
kube-system   aws-node-b66dj                    2/2     Running   0          173m
kube-system   aws-node-rf79g                    2/2     Running   0          173m
kube-system   coredns-86f5954566-gqf97          1/1     Running   0          177m
kube-system   coredns-86f5954566-nntgz          1/1     Running   0          177m
kube-system   kube-proxy-jg5qj                  1/1     Running   0          173m
kube-system   kube-proxy-t2sqh                  1/1     Running   0          173m
kube-system   kube-proxy-w96mt                  1/1     Running   0          173m
kube-system   metrics-server-86bbfd75bb-j72mf   1/1     Running   0          177m
kube-system   metrics-server-86bbfd75bb-pbpkd   1/1     Running   0          177m
kubectl get pdb -n kube-system

โœ… Output

NAME             MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
coredns          N/A             1                 1                     178m
metrics-server   N/A             1                 1                     178m

13. Check the Managed Node Group

eksctl get nodegroup --cluster $CLUSTER_NAME

โœ… Output

CLUSTER	NODEGROUP	STATUS	CREATED			MIN SIZE	MAX SIZE	DESIRED CAPACITY	INSTANCE TYPE	IMAGE ID		ASG NAME					TYPE
myeks	ng1		ACTIVE	2025-02-12T03:17:03Z	3		3		3			t3.medium	AL2023_x86_64_STANDARD	eks-ng1-84ca7c14-790d-ef45-8256-776960f87794	managed
aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name ng1 | jq

โœ… Output

{
  "nodegroup": {
    "nodegroupName": "ng1",
    "nodegroupArn": "arn:aws:eks:ap-northeast-2:378102432899:nodegroup/myeks/ng1/84ca7c14-790d-ef45-8256-776960f87794",
    "clusterName": "myeks",
    "version": "1.31",
    "releaseVersion": "1.31.4-20250203",
    "createdAt": "2025-02-12T12:17:03.928000+09:00",
    "modifiedAt": "2025-02-12T15:08:14.235000+09:00",
    "status": "ACTIVE",
    "capacityType": "ON_DEMAND",
    "scalingConfig": {
      "minSize": 3,
      "maxSize": 3,
      "desiredSize": 3
    },
    "instanceTypes": [
      "t3.medium"
    ],
    "subnets": [
      "subnet-0b53307d4d544b3bf",
      "subnet-019936bd535b68960",
      "subnet-0d19099a6b73555f8"
    ],
    "amiType": "AL2023_x86_64_STANDARD",
    "nodeRole": "arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-aJPw1zdjbXYF",
    "labels": {
      "alpha.eksctl.io/cluster-name": "myeks",
      "alpha.eksctl.io/nodegroup-name": "ng1"
    },
    "resources": {
      "autoScalingGroups": [
        {
          "name": "eks-ng1-84ca7c14-790d-ef45-8256-776960f87794"
        }
      ]
    },
    "health": {
      "issues": []
    },
    "updateConfig": {
      "maxUnavailable": 1
    },
    "launchTemplate": {
      "name": "eksctl-myeks-nodegroup-ng1",
      "version": "1",
      "id": "lt-0ec600e4f000289da"
    },
    "tags": {
      "aws:cloudformation:stack-name": "eksctl-myeks-nodegroup-ng1",
      "alpha.eksctl.io/cluster-name": "myeks",
      "alpha.eksctl.io/nodegroup-name": "ng1",
      "aws:cloudformation:stack-id": "arn:aws:cloudformation:ap-northeast-2:378102432899:stack/eksctl-myeks-nodegroup-ng1/c502af50-e8ef-11ef-89e7-022a714bdf75",
      "eksctl.cluster.k8s.io/v1alpha1/cluster-name": "myeks",
      "aws:cloudformation:logical-id": "ManagedNodeGroup",
      "alpha.eksctl.io/nodegroup-type": "managed",
      "alpha.eksctl.io/eksctl-version": "0.203.0"
    }
  }
}
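
Individual fields can be plucked out of this JSON with jq; a small sketch (the `scaling_config` helper name is hypothetical) run against a saved copy of the output:

```shell
# Print only the scaling settings from a saved describe-nodegroup JSON.
scaling_config() {  # usage: scaling_config <saved-json-file>
  jq -c '.nodegroup.scalingConfig' "$1"
}

# Piping works too:
# aws eks describe-nodegroup --cluster-name myeks --nodegroup-name ng1 \
#   | jq '.nodegroup.scalingConfig'
```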

14. Check EKS Add-ons

eksctl get addon --cluster $CLUSTER_NAME

โœ… Output

2025-02-12 15:29:21 [โ„น]  Kubernetes version "1.31" in use by cluster "myeks"
2025-02-12 15:29:21 [โ„น]  getting all addons
2025-02-12 15:29:23 [โ„น]  to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME		VERSION			STATUS	ISSUES	IAMROLE										UPDATE AVAILABLE	CONFIGURATION VALUES		POD IDENTITY ASSOCIATION ROLES
coredns		v1.11.4-eksbuild.2	ACTIVE	0													
kube-proxy	v1.31.3-eksbuild.2	ACTIVE	0													
metrics-server	v0.7.2-eksbuild.1	ACTIVE	0													
vpc-cni		v1.19.2-eksbuild.5	ACTIVE	0	arn:aws:iam::378102432899:role/eksctl-myeks-addon-vpc-cni-Role1-fGF6qGwGjFyL			enableNetworkPolicy: "true"	

15. Access the Managed Node Group (EC2) and Check Node Information

(1) Check the EC2 Instance IAM Role

An IAM Role is mapped in the EC2 instance's security settings and is attached through the EC2 instance profile.

Image

managedNodeGroups์˜ IAM ์„ค์ •๊ณผ ๋งค์นญ๋˜์–ด awsLoadBalancerController ์‹คํ–‰ ์‹œ ํ•„์š”ํ•œ ๊ถŒํ•œ์„ ์ œ๊ณต

Image

awsLoadBalancerController, certManager, and externalDNS permissions are enabled:

managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false # Disable ALB Ingress Controller
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: true # Enable AWS Load Balancer Controller
      certManager: true # Enable cert-manager
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: true # Enable ExternalDNS
      fsx: false
      imageBuilder: true
      xRay: false

(2) Check Instance Public IPs

[root@operator-host ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{InstanceID:InstanceId, PublicIPAdd:PublicIpAddress, PrivateIPAdd:PrivateIpAddress, InstanceName:Tags[?Key=='Name']|[0].Value, Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

โœ… Output

-----------------------------------------------------------------------------------------
|                                   DescribeInstances                                   |
+----------------------+-----------------+----------------+-----------------+-----------+
|      InstanceID      |  InstanceName   | PrivateIPAdd   |   PublicIPAdd   |  Status   |
+----------------------+-----------------+----------------+-----------------+-----------+
|  i-0d14501e2c8353ae6 |  myeks-ng1-Node |  192.168.3.72  |  43.201.115.81  |  running  |
|  i-00db22b426cc7efb5 |  operator-host  |  172.20.1.100  |  15.165.15.90   |  running  |
|  i-090451779dfb774e9 |  myeks-ng1-Node |  192.168.1.193 |  43.202.57.204  |  running  |
|  i-0e8bab88cd0a40ae8 |  myeks-ng1-Node |  192.168.2.52  |  15.164.179.214 |  running  |
+----------------------+-----------------+----------------+-----------------+-----------+
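
The node public IPs can also be pulled out of a saved copy of this table with a short awk filter (the `node_public_ips` helper name is hypothetical):

```shell
# Print the PublicIPAdd column for worker-node rows of the saved table.
# Splitting on '|', $3 is InstanceName and $5 is PublicIPAdd.
node_public_ips() {  # usage: node_public_ips <saved-table-file>
  awk -F'|' '/myeks-ng1-Node/ { gsub(/ /, "", $5); print $5 }' "$1"
}
```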

(3) Set Instance Public IP Variables

export N1=43.202.57.204
export N2=15.164.179.214
export N3=43.201.115.81
echo $N1, $N2, $N3

โœ… Output

43.202.57.204, 15.164.179.214, 43.201.115.81

(4) ping Test

ping -c 2 $N1

โœ… Output

# Output: ping fails (100% packet loss)
PING 43.202.57.204 (43.202.57.204) 56(84) bytes of data.

--- 43.202.57.204 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1037ms

(5) Check the Managed Worker Node Security Groups

  • There are a node group Remote Access security group and an EKS cluster security group
  • Remote Access security group: sg-08385fad7996593f1
  • EKS cluster security group: sg-073b338fccd7776e7

Image

(6) Configure the Remote Access Security Group

  • Add your home public IP and the operator server's internal IP (172.20.1.100) as source IPs

Image

  • Set the Remote Access security group ID in a variable
export MNSGID=sg-08385fad7996593f1
  • ์ง‘ ๊ณต์ธ IP ์ธ๋ฐ”์šด๋“œ ๊ทœ์น™ ์ถ”๊ฐ€
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr $(curl -s ipinfo.io/ip)/32

# Result
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-0e2dcc79069298593",
            "GroupId": "sg-08385fad7996593f1",
            "GroupOwnerId": "378102432899",
            "IsEgress": false,
            "IpProtocol": "-1",
            "FromPort": -1,
            "ToPort": -1,
            "CidrIpv4": "182.230.60.93/32",
            "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-0e2dcc79069298593"
        }
    ]
}
  • Add an inbound rule for the operator server's internal IP
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr 172.20.1.100/32

# Result
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-062028d793058f3d1",
            "GroupId": "sg-08385fad7996593f1",
            "GroupOwnerId": "378102432899",
            "IsEgress": false,
            "IpProtocol": "-1",
            "FromPort": -1,
            "ToPort": -1,
            "CidrIpv4": "172.20.1.100/32",
            "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-062028d793058f3d1"
        }
    ]
}

(7) ping Test

ping -c 2 $N1

โœ… Output

# ping succeeds
PING 43.202.57.204 (43.202.57.204) 56(84) bytes of data.
64 bytes from 43.202.57.204: icmp_seq=1 ttl=115 time=6.78 ms
64 bytes from 43.202.57.204: icmp_seq=2 ttl=115 time=20.1 ms

--- 43.202.57.204 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 6.783/13.453/20.123/6.670 ms
# ping test from operator-host
[root@operator-host ~]# ping -c 2 192.168.1.193

โœ… Output

# ping succeeds
PING 192.168.1.193 (192.168.1.193) 56(84) bytes of data.
64 bytes from 192.168.1.193: icmp_seq=1 ttl=127 time=0.227 ms
64 bytes from 192.168.1.193: icmp_seq=2 ttl=127 time=0.498 ms

--- 192.168.1.193 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 0.227/0.362/0.498/0.136 ms
  • Operator-Host์—์„œ ์›Œ์ปค๋…ธ๋“œ ์„œ๋ธŒ๋„ท์œผ๋กœ ping ์„ฑ๊ณต
  • myeks-VPC ๋ณด์•ˆ๊ทธ๋ฃน์— Operator-VPC(172.20.1.100/32) ์ธ๋ฐ”์šด๋“œ ๊ทœ์น™ ์ถ”๊ฐ€๋กœ ์ ‘๊ทผ ๊ฐ€๋Šฅ
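
A /32 rule admits exactly one address; a pure-bash sketch of CIDR membership (the `ip_to_int` and `in_cidr` helpers are hypothetical) shows why the 172.20.1.100/32 rule matches only the operator host:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed when the address (arg 1) falls inside the CIDR block (arg 2).
in_cidr() {
  local ip base bits mask
  ip=$(ip_to_int "$1")
  base=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (4294967295 << (32 - bits)) & 4294967295 ))
  [ $(( ip & mask )) -eq $(( base & mask )) ]
}

in_cidr 172.20.1.100 172.20.1.100/32 && echo "operator host matches"
```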

(8) Verify SSH Access to the Worker Nodes

ssh ec2-user@$N1

A newer release of "Amazon Linux" is available.
  Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Feb  3 23:59:19 2025 from 52.94.123.246
[ec2-user@ip-192-168-1-193 ~]$
ssh ec2-user@$N2

A newer release of "Amazon Linux" is available.
  Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Feb  3 23:59:19 2025 from 52.94.123.246
[ec2-user@ip-192-168-2-52 ~]$
ssh ec2-user@$N3

A newer release of "Amazon Linux" is available.
  Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Mon Feb  3 23:59:19 2025 from 52.94.123.246
[ec2-user@ip-192-168-3-72 ~]$

(9) ์šด์˜์„œ๋ฒ„ EC2์—์„œ ์›Œ์ปค๋…ธ๋“œ EC2 ์ ‘์† ๋ฐ ์ •๋ณด ํ™•์ธ

์ธ์Šคํ„ด์Šค ๊ณต์ธ IP ๋ณ€์ˆ˜ ์ง€์ •

1
2
3
4
5
[root@operator-host ~]# export N1=192.168.1.193
[root@operator-host ~]# export N2=192.168.2.52
[root@operator-host ~]# export N3=192.168.3.72
[root@operator-host ~]# echo $N1, $N2, $N3
192.168.1.193, 192.168.2.52, 192.168.3.72

๊ฐ ์›Œ์ปค๋…ธ๋“œ์— ping ํ…Œ์ŠคํŠธ

1
[root@operator-host ~]# ping -c 2 $N1

โœ… Output

PING 192.168.1.193 (192.168.1.193) 56(84) bytes of data.
64 bytes from 192.168.1.193: icmp_seq=1 ttl=127 time=0.232 ms
64 bytes from 192.168.1.193: icmp_seq=2 ttl=127 time=0.242 ms

--- 192.168.1.193 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1007ms
rtt min/avg/max/mdev = 0.232/0.237/0.242/0.005 ms

16. Check Node Information

(1) Check each node's host information

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i hostnamectl; echo; done

โœ… Output

>> node 43.202.57.204 <<
 Static hostname: ip-192-168-1-193.ap-northeast-2.compute.internal
       Icon name: computer-vm
         Chassis: vm ๐Ÿ–ด
      Machine ID: ec2e13998d891ffbacc5d853155112da
         Boot ID: 3b6c5280edb84574b64c4edc1f352d1e
  Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250128
     CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
          Kernel: Linux 6.1.124-134.200.amzn2023.x86_64
    Architecture: x86-64
 Hardware Vendor: Amazon EC2
  Hardware Model: t3.medium
Firmware Version: 1.0

>> node 15.164.179.214 <<
 Static hostname: ip-192-168-2-52.ap-northeast-2.compute.internal
       Icon name: computer-vm
         Chassis: vm ๐Ÿ–ด
      Machine ID: ec2e325020be14105530b18b0e81710a
         Boot ID: 1fdda72d00cd465d8074e7c6238882a9
  Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250128
     CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
          Kernel: Linux 6.1.124-134.200.amzn2023.x86_64
    Architecture: x86-64
 Hardware Vendor: Amazon EC2
  Hardware Model: t3.medium
Firmware Version: 1.0

>> node 43.201.115.81 <<
 Static hostname: ip-192-168-3-72.ap-northeast-2.compute.internal
       Icon name: computer-vm
         Chassis: vm ๐Ÿ–ด
      Machine ID: ec2951238a61149817871c92befba51d
         Boot ID: 9a1cd0af9840421a9234e72a44777415
  Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250128
     CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
          Kernel: Linux 6.1.124-134.200.amzn2023.x86_64
    Architecture: x86-64
 Hardware Vendor: Amazon EC2
  Hardware Model: t3.medium
Firmware Version: 1.0

(2) ๊ฐ ๋…ธ๋“œ์˜ ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ •๋ณด ํ™•์ธ

1
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done

โœ… Output

>> node 43.202.57.204 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:23:93:a6:bc:61 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.1.193/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
       valid_lft 3137sec preferred_lft 3137sec
    inet6 fe80::23:93ff:fea6:bc61/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: eni01a4864c88a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 56:be:6a:bd:2d:2a brd ff:ff:ff:ff:ff:ff link-netns cni-a59ccd2b-5db2-0159-b78e-e0797b300a23
    inet6 fe80::54be:6aff:febd:2d2a/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:49:8e:e5:f1:05 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.1.237/24 brd 192.168.1.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::49:8eff:fee5:f105/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: eni61c5a949744@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 52:41:42:c3:03:3f brd ff:ff:ff:ff:ff:ff link-netns cni-74caca86-36e2-a922-2920-c2c8c00e7b43
    inet6 fe80::5041:42ff:fec3:33f/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

>> node 15.164.179.214 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:c5:5b:d1:77:57 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.2.52/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
       valid_lft 3138sec preferred_lft 3138sec
    inet6 fe80::4c5:5bff:fed1:7757/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: enibce2df30e87@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 4a:c8:23:ab:85:39 brd ff:ff:ff:ff:ff:ff link-netns cni-5408bb28-ad79-a3f3-3a60-9442968852b1
    inet6 fe80::48c8:23ff:feab:8539/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:91:4b:4c:9c:e9 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.2.42/24 brd 192.168.2.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::491:4bff:fe4c:9ce9/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: enic99196c7a64@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 8a:00:6e:37:b1:71 brd ff:ff:ff:ff:ff:ff link-netns cni-9343fb30-bdc8-5fab-b46d-3a5db58f8007
    inet6 fe80::8800:6eff:fe37:b171/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

>> node 43.201.115.81 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
       valid_lft 3144sec preferred_lft 3144sec
    inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

(3) ๊ฐ ๋…ธ๋“œ์˜ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ” ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024 
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024 
192.168.1.59 dev eni61c5a949744 scope link 
192.168.1.90 dev eni01a4864c88a scope link 

>> node 15.164.179.214 <<
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024 
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024 
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.52 metric 1024 
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.52 metric 1024 
192.168.2.72 dev enibce2df30e87 scope link 
192.168.2.248 dev enic99196c7a64 scope link 

>> node 43.201.115.81 <<
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024 
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024 

(4) ๊ฐ ๋…ธ๋“œ์˜ cgroup ๋ฒ„์ „ ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i stat -fc %T /sys/fs/cgroup/; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
cgroup2fs

>> node 15.164.179.214 <<
cgroup2fs

>> node 43.201.115.81 <<
cgroup2fs

(5) ๊ฐ ๋…ธ๋“œ์˜ Kubelet ์„ค์ • ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo cat /etc/kubernetes/kubelet/config.json.d/00-nodeadm.conf | jq; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "kind": "KubeletConfiguration",
  "maxPods": 17
}

>> node 15.164.179.214 <<
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "kind": "KubeletConfiguration",
  "maxPods": 17
}

>> node 43.201.115.81 <<
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "kind": "KubeletConfiguration",
  "maxPods": 17
}
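
maxPods๊ฐ€ 17์ธ ์ด์œ ๋Š” AWS VPC CNI์˜ ๊ณ„์‚ฐ์‹ `ENI ์ˆ˜ × (ENI๋‹น IPv4 ์ˆ˜ - 1) + 2` ๋•Œ๋ฌธ. t3.medium์€ ENI 3๊ฐœ, ENI๋‹น IPv4 6๊ฐœ๊นŒ์ง€ ์ง€์›ํ•˜๋ฏ€๋กœ ์•„๋ž˜์ฒ˜๋Ÿผ ๊ณ„์‚ฐ๋จ (ENI/IP ํ•œ๋„ ๊ฐ’์€ AWS ์ธ์Šคํ„ด์Šค ํƒ€์ž… ๋ฌธ์„œ ๊ธฐ์ค€).

```shell
# max-pods ๊ณ„์‚ฐ ์Šค์ผ€์น˜: t3.medium ๊ธฐ์ค€ (ENI 3๊ฐœ, ENI๋‹น IPv4 6๊ฐœ)
ENI=3          # ์ธ์Šคํ„ด์Šค๊ฐ€ ๋ถ™์ผ ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ENI ์ˆ˜
IP_PER_ENI=6   # ENI๋‹น ํ• ๋‹น ๊ฐ€๋Šฅํ•œ IPv4 ์ฃผ์†Œ ์ˆ˜ (๊ธฐ๋ณธ IP ํฌํ•จ)
# ๊ฐ ENI์˜ ๊ธฐ๋ณธ IP๋Š” ๋…ธ๋“œ ์ž์‹ ์ด ์“ฐ๋ฏ€๋กœ ์ œ์™ธ(-1), +2๋Š” hostNetwork ํŒŒ๋“œ(aws-node, kube-proxy) ๋ชซ
echo $(( ENI * (IP_PER_ENI - 1) + 2 ))   # 17
```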

17. ์šด์˜์„œ๋ฒ„ EC2์—์„œ EKS ์‚ฌ์šฉ ์„ค์ •

(1) IAM ์ž๊ฒฉ์ฆ๋ช… ์„ค์ •

[root@operator-host ~]# aws configure
AWS Access Key ID : XXXXXXXXXXXXXXXXXX
AWS Secret Access Key : XXXXXXXXXXXXXXXXXX
Default region name [ap-northeast-2]: ap-northeast-2
Default output format [json]: json

(2) IAM ์ž๊ฒฉ ํ™•์ธ

[root@operator-host ~]# aws sts get-caller-identity --query Arn

โœ…ย ์ถœ๋ ฅ

"arn:aws:iam::378102432899:user/eks-user"

(3) kubeconfig ์ƒ์„ฑ

[root@operator-host ~]# cat ~/.kube/config
cat: /root/.kube/config: No such file or directory
[root@operator-host ~]# aws eks update-kubeconfig --name myeks --user-alias eks-user
Added new context eks-user to /root/.kube/config

(4) kubeconfig ์ •๋ณด ํ™•์ธ

(eks-user:N/A) [root@operator-host ~]# cat ~/.kube/config

โœ…ย ์ถœ๋ ฅ

EKS ํด๋Ÿฌ์Šคํ„ฐ ์ •๋ณด, ์ธ์ฆ์„œ, ์‚ฌ์šฉ์ž ์ •๋ณด ํฌํ•จ

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJYnRmWjRVZ2Fwdkl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBeU1USXdNekF6TURSYUZ3MHpOVEF5TVRBd016QTRNRFJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM5UVd3L3U5UGhSOVpueEtYU3oxemZmR1p3OHlyNzJXS0lZVHozc1c4c2hXUlhpSzh6UHo4RmVaMHQKU2Q3Ym13RzdTK1JvbXltbTVreVQ3K2F1ZWt5SGZHOExhZzNKRUlMWnBQL2lOaFNGRUsvWWlSYkN0VGY3ZWp0NAovOFlwWG9Ic0pDbXpjZ3ZvQ0dwT0dnZ2dHUFh3dlNBM0lLWW5iRmVYNENTbjVUcmpVWXpURjZYVm0wSVZMT05PCjFIY1lJK0ZxQit2dDVGd3FoOVVFYUsrRHBVa3krWmswdi9FOVJ1ZDU1QzFuWUdsRFJBNnNVS1NoamZORzZ6bS8KcjZ0QUFyUmsvTDN1cGkvajMyVHE5amF2dW1ScHlHNTM2YlhIeUpULzFuU2Y5dTF4ZzNnbThYWWtLZUdXeko3TApQbzc2aHJoUnF1eEVnd0hHaysvNGV1ZzIzQi8xQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRMGpTUFNVN1pqcWFOTTJBa0NJN1FqK2RQV2xqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUtoSXd2cmNOTQpuNS9WS2RsdFBpTjdrcTBIZ0tHdFVMbS8zQnlSZ2daSHY3MkFKNVk3bUdsZUFabUUzOFEyNVV6M2NjT25DOEVsCjNOYnlSdzFGMFdCYnR6TU1iNmxnSDZMWW5TcHZlckROWUZ5SGNQQmZKQmRLd0hyTFhBTXNGUlNCczRUL0Nhdm0KdE4reEZaVkh5azRBbk81VlN6QjJRbmJ5dENqbmRxc3ZjajJOWS9WK3ZQbzJMQzZBckh6TXhodmF2eVRyVEtDLwpCK242c2tQM0FTNy9hT3JEeEhyaHlXQUFLUGpqSXhBeUhsZS9Db1FBQjVoNTFQQ0d3NlFZRlR0N3hyaHVMWjdSCjFMc3FIazlobGRXUDRSdjZ5Skx0a0ErOVRCdEM1Y251Z1d1VHFFUXBQZVExTWRLcTgyY2M1aFlaZXFHdXFTRjgKVU9tcHRXTFVsbVpHCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com
  name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
contexts:
- context:
    cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
    user: eks-user
  name: eks-user
current-context: eks-user
kind: Config
preferences: {}
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - myeks
      - --output
      - json
      command: aws
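
kubeconfig์˜ exec ํ”Œ๋Ÿฌ๊ทธ์ธ์ด aws eks get-token์œผ๋กœ ๋ฐ›์•„์˜ค๋Š” ํ† ํฐ์€ "k8s-aws-v1." ์ ‘๋‘์‚ฌ ๋’ค์— STS GetCallerIdentity presigned URL์„ base64url๋กœ ์ธ์ฝ”๋”ฉํ•œ ํ˜•ํƒœ. ์‹ค์ œ ํ† ํฐ ๋Œ€์‹  ๊ฐ€์ƒ์˜ URL๋กœ ํฌ๋งท๋งŒ ์žฌํ˜„ํ•ด ๋ณธ ์Šค์ผ€์น˜ (์‹ค์ œ ํ† ํฐ์€ ํŒจ๋”ฉ(=) ์—†๋Š” base64url์„ ์“ฐ์ง€๋งŒ ์—ฌ๊ธฐ์„œ๋Š” ๋””์ฝ”๋”ฉ ํŽธ์˜์ƒ ํŒจ๋”ฉ์„ ์œ ์ง€):

```shell
# ๊ฐ€์ƒ์˜ presigned URL๋กœ EKS ์ธ์ฆ ํ† ํฐ ํฌ๋งท์„ ์žฌํ˜„ (์‹ค์ œ๋กœ๋Š” aws eks get-token ์ด ์ƒ์„ฑ)
URL="https://sts.ap-northeast-2.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15"
TOKEN="k8s-aws-v1.$(printf '%s' "$URL" | base64 -w0 | tr '+/' '-_')"
echo "$TOKEN"
# ์ ‘๋‘์‚ฌ๋ฅผ ๋–ผ๊ณ  ๋””์ฝ”๋”ฉํ•˜๋ฉด ์›๋ž˜ URL์ด ๋ณต์›๋จ
printf '%s' "${TOKEN#k8s-aws-v1.}" | tr -- '-_' '+/' | base64 -d; echo
```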

(5) EKS API DNS ์กฐํšŒ

(eks-user:N/A) [root@operator-host ~]# APIDNS=$(aws eks describe-cluster --name myeks | jq -r .cluster.endpoint | cut -d '/' -f 3)
(eks-user:N/A) [root@operator-host ~]# dig +short $APIDNS

โœ…ย ์ถœ๋ ฅ

43.201.119.98
15.165.73.21

(6) ํด๋Ÿฌ์Šคํ„ฐ ์ •๋ณด ํ™•์ธ

(eks-user:N/A) [root@operator-host ~]# kubectl cluster-info

โœ…ย ์ถœ๋ ฅ

Kubernetes control plane is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Image

(7) ๋…ธ๋“œ ์ •๋ณด ํ™•์ธ

(eks-user:N/A) [root@operator-host ~]# kubectl get node -v6

โœ…ย ์ถœ๋ ฅ

I0212 17:07:06.500164    3826 loader.go:395] Config loaded from file:  /root/.kube/config
I0212 17:07:07.311634    3826 round_trippers.go:553] GET https://D25BC1FD83873C599EC920D5193DC864.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 805 milliseconds
NAME                                               STATUS   ROLES    AGE     VERSION
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   4h48m   v1.31.4-eks-aeac579
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   4h48m   v1.31.4-eks-aeac579
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   4h48m   v1.31.4-eks-aeac579

๐ŸŒ ๋„คํŠธ์›Œํฌ ๊ธฐ๋ณธ ์ •๋ณด ํ™•์ธ

1. CNI ์ •๋ณด ํ™•์ธ

kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2

โœ…ย ์ถœ๋ ฅ

amazon-k8s-cni-init:v1.19.2-eksbuild.5
amazon-k8s-cni:v1.19.2-eksbuild.5
amazon

2. ๋…ธ๋“œ IP ํ™•์ธ

aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

โœ…ย ์ถœ๋ ฅ

------------------------------------------------------------------
|                        DescribeInstances                       |
+----------------+-----------------+------------------+----------+
|  InstanceName  |  PrivateIPAdd   |   PublicIPAdd    | Status   |
+----------------+-----------------+------------------+----------+
|  myeks-ng1-Node|  192.168.3.72   |  43.201.115.81   |  running |
|  operator-host |  172.20.1.100   |  15.165.15.90    |  running |
|  myeks-ng1-Node|  192.168.1.193  |  43.202.57.204   |  running |
|  myeks-ng1-Node|  192.168.2.52   |  15.164.179.214  |  running |
+----------------+-----------------+------------------+----------+

3. Pod ์ •๋ณด ํ™•์ธ

k get pod -A

โœ…ย ์ถœ๋ ฅ

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-6mctb                    2/2     Running   0          5h8m
kube-system   aws-node-b66dj                    2/2     Running   0          5h8m
kube-system   aws-node-rf79g                    2/2     Running   0          5h8m
kube-system   coredns-86f5954566-gqf97          1/1     Running   0          5h12m
kube-system   coredns-86f5954566-nntgz          1/1     Running   0          5h12m
kube-system   kube-proxy-jg5qj                  1/1     Running   0          5h8m
kube-system   kube-proxy-t2sqh                  1/1     Running   0          5h8m
kube-system   kube-proxy-w96mt                  1/1     Running   0          5h8m
kube-system   metrics-server-86bbfd75bb-j72mf   1/1     Running   0          5h12m
kube-system   metrics-server-86bbfd75bb-pbpkd   1/1     Running   0          5h12m

4. ํŒŒ๋“œ IP ํ™•์ธ

kubectl get pod -n kube-system -o=custom-columns=NAME:.metadata.name,IP:.status.podIP,STATUS:.status.phase

โœ…ย ์ถœ๋ ฅ

aws-node-6mctb(192.168.1.193), kube-proxy-w96mt(192.168.1.193)๊ฐ€ ์„œ๋ฒ„์˜ IP์™€ ๊ฐ™์€ ์ด์œ ?

ํ˜ธ์ŠคํŠธ ์„œ๋ฒ„์˜ ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค๋ฅผ ๊ณต์œ ํ•˜๊ธฐ ๋•Œ๋ฌธ

NAME                              IP              STATUS
aws-node-6mctb                    192.168.1.193   Running
aws-node-b66dj                    192.168.2.52    Running
aws-node-rf79g                    192.168.3.72    Running
coredns-86f5954566-gqf97          192.168.1.59    Running
coredns-86f5954566-nntgz          192.168.2.248   Running
kube-proxy-jg5qj                  192.168.3.72    Running
kube-proxy-t2sqh                  192.168.2.52    Running
kube-proxy-w96mt                  192.168.1.193   Running
metrics-server-86bbfd75bb-j72mf   192.168.1.90    Running
metrics-server-86bbfd75bb-pbpkd   192.168.2.72    Running
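
aws-node์™€ kube-proxy๊ฐ€ ๋…ธ๋“œ IP๋ฅผ ๊ทธ๋Œ€๋กœ ์“ฐ๋Š” ๊ฒƒ์€ DaemonSet spec์— hostNetwork: true๊ฐ€ ์„ค์ •๋˜์–ด ์žˆ๊ธฐ ๋•Œ๋ฌธ. ํ•ด๋‹น ์„ค์ •์˜ ํ•ต์‹ฌ๋งŒ ๋ฐœ์ทŒํ•œ ์Šค์ผ€์น˜ (์‹ค์ œ ๋งค๋‹ˆํŽ˜์ŠคํŠธ๋Š” kubectl get ds aws-node -n kube-system -o yaml ๋กœ ํ™•์ธ ๊ฐ€๋Šฅ):

```yaml
# DaemonSet spec ๋ฐœ์ทŒ ์Šค์ผ€์น˜ (์ „์ฒด ๋งค๋‹ˆํŽ˜์ŠคํŠธ๊ฐ€ ์•„๋‹˜)
spec:
  template:
    spec:
      hostNetwork: true   # ํŒŒ๋“œ๊ฐ€ ๋…ธ๋“œ์˜ ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค๋ฅผ ๊ณต์œ  → podIP == nodeIP
```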

5. ๋…ธ๋“œ ๋„คํŠธ์›Œํฌ ์ •๋ณด ํ™•์ธ

(1) CNI ๋กœ๊ทธ ํŒŒ์ผ ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i tree /var/log/aws-routed-eni; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
/var/log/aws-routed-eni
โ”œโ”€โ”€ ebpf-sdk.log
โ”œโ”€โ”€ egress-v6-plugin.log
โ”œโ”€โ”€ ipamd.log
โ”œโ”€โ”€ network-policy-agent.log
โ””โ”€โ”€ plugin.log

0 directories, 5 files

>> node 15.164.179.214 <<
/var/log/aws-routed-eni
โ”œโ”€โ”€ ebpf-sdk.log
โ”œโ”€โ”€ egress-v6-plugin.log
โ”œโ”€โ”€ ipamd.log
โ”œโ”€โ”€ network-policy-agent.log
โ””โ”€โ”€ plugin.log

0 directories, 5 files

>> node 43.201.115.81 <<
/var/log/aws-routed-eni
โ”œโ”€โ”€ ebpf-sdk.log
โ”œโ”€โ”€ ipamd.log
โ””โ”€โ”€ network-policy-agent.log

0 directories, 3 files
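
ipamd.log๋Š” ํ•œ ์ค„์ด ํ•˜๋‚˜์˜ JSON ๋ ˆ์ฝ”๋“œ ํ˜•ํƒœ๋ผ jq๋กœ ํ•„ํ„ฐ๋งํ•˜๊ธฐ ์ข‹์Œ. ์•„๋ž˜๋Š” ๊ฐ€์ƒ์˜ ์ƒ˜ํ”Œ ๋ผ์ธ์œผ๋กœ level/msg๋งŒ ๋ฝ‘์•„๋ณธ ์˜ˆ์‹œ (์‹ค์ œ ๋กœ๊ทธ์˜ ํ•„๋“œ ๊ตฌ์„ฑ์€ VPC CNI ๋ฒ„์ „์— ๋”ฐ๋ผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Œ):

```shell
# ipamd.log ํ•œ ์ค„(JSON)์—์„œ level๊ณผ msg๋งŒ ์ถ”์ถœ (์ƒ˜ํ”Œ ๋ผ์ธ์€ ๊ฐ€์ƒ์˜ ๊ฐ’)
echo '{"level":"info","ts":"2025-02-12T17:00:00.000Z","caller":"ipamd/ipamd.go","msg":"Added ENI"}' \
  | jq -r '[.level, .msg] | join(" | ")'
# ์ถœ๋ ฅ: info | Added ENI
```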

(2) ๋…ธ๋“œ๋ณ„ IP ์ฃผ์†Œ ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -br -c addr; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.1.193/24 metric 1024 fe80::23:93ff:fea6:bc61/64 
eni01a4864c88a@if3 UP             fe80::54be:6aff:febd:2d2a/64 
ens6             UP             192.168.1.237/24 fe80::49:8eff:fee5:f105/64 
eni61c5a949744@if3 UP             fe80::5041:42ff:fec3:33f/64 

>> node 15.164.179.214 <<
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.2.52/24 metric 1024 fe80::4c5:5bff:fed1:7757/64 
enibce2df30e87@if3 UP             fe80::48c8:23ff:feab:8539/64 
ens6             UP             192.168.2.42/24 fe80::491:4bff:fe4c:9ce9/64 
enic99196c7a64@if3 UP             fe80::8800:6eff:fe37:b171/64 

>> node 43.201.115.81 <<
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.3.72/24 metric 1024 fe80::8d5:3dff:fe4a:54bd/64 

(3) ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ƒ์„ธ ์ •๋ณด ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:23:93:a6:bc:61 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.1.193/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
       valid_lft 2264sec preferred_lft 2264sec
    inet6 fe80::23:93ff:fea6:bc61/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: eni01a4864c88a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 56:be:6a:bd:2d:2a brd ff:ff:ff:ff:ff:ff link-netns cni-a59ccd2b-5db2-0159-b78e-e0797b300a23
    inet6 fe80::54be:6aff:febd:2d2a/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:49:8e:e5:f1:05 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.1.237/24 brd 192.168.1.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::49:8eff:fee5:f105/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: eni61c5a949744@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 52:41:42:c3:03:3f brd ff:ff:ff:ff:ff:ff link-netns cni-74caca86-36e2-a922-2920-c2c8c00e7b43
    inet6 fe80::5041:42ff:fec3:33f/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

>> node 15.164.179.214 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:c5:5b:d1:77:57 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.2.52/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
       valid_lft 2265sec preferred_lft 2265sec
    inet6 fe80::4c5:5bff:fed1:7757/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: enibce2df30e87@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 4a:c8:23:ab:85:39 brd ff:ff:ff:ff:ff:ff link-netns cni-5408bb28-ad79-a3f3-3a60-9442968852b1
    inet6 fe80::48c8:23ff:feab:8539/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:91:4b:4c:9c:e9 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.2.42/24 brd 192.168.2.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::491:4bff:fe4c:9ce9/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: enic99196c7a64@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 8a:00:6e:37:b1:71 brd ff:ff:ff:ff:ff:ff link-netns cni-9343fb30-bdc8-5fab-b46d-3a5db58f8007
    inet6 fe80::8800:6eff:fe37:b171/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

>> node 43.201.115.81 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
       valid_lft 2270sec preferred_lft 2270sec
    inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

๐Ÿ“ก ๋ณด์กฐ IPv4 ์ฃผ์†Œ๋ฅผ ํŒŒ๋“œ๊ฐ€ ์‚ฌ์šฉํ•˜๋Š”์ง€ ํ™•์ธ

1. CoreDNS ํŒŒ๋“œ IP ํ™•์ธ

kubectl get pod -n kube-system -l k8s-app=kube-dns -owide

โœ…ย ์ถœ๋ ฅ

CoreDNS ํŒŒ๋“œ๊ฐ€ ๋ณด์กฐ IPv4 ์ฃผ์†Œ(192.168.2.248)๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Œ

NAME                       READY   STATUS    RESTARTS   AGE     IP              NODE                                               NOMINATED NODE   READINESS GATES
coredns-86f5954566-gqf97   1/1     Running   0          5h39m   192.168.1.59    ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
coredns-86f5954566-nntgz   1/1     Running   0          5h39m   192.168.2.248   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>

2. ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ” ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done

โœ…ย ์ถœ๋ ฅ

>> node 43.202.57.204 <<
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024 
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024 
192.168.1.59 dev eni61c5a949744 scope link 
192.168.1.90 dev eni01a4864c88a scope link 

>> node 15.164.179.214 <<
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024 
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024 
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.52 metric 1024 
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.52 metric 1024 
192.168.2.72 dev enibce2df30e87 scope link 
192.168.2.248 dev enic99196c7a64 scope link 

>> node 43.201.115.81 <<
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024 
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024 

3. ENI ์ฆ๊ฐ€์™€ Pod IP ํ• ๋‹น ํ™•์ธ ์‹ค์Šต

(1) ๊ฐ ๋…ธ๋“œ์—์„œ ENI์™€ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์„ ๋ชจ๋‹ˆํ„ฐ๋ง

# ํ„ฐ๋ฏธ๋„ 1
ssh ec2-user@$N1
watch -d 'ip link | egrep "ens|eni"; echo; echo "[ROUTE TABLE]"; route -n | grep eni'

# ํ„ฐ๋ฏธ๋„ 2
ssh ec2-user@$N2
watch -d 'ip link | egrep "ens|eni"; echo; echo "[ROUTE TABLE]"; route -n | grep eni'

# ํ„ฐ๋ฏธ๋„ 3
ssh ec2-user@$N3
watch -d 'ip link | egrep "ens|eni"; echo; echo "[ROUTE TABLE]"; route -n | grep eni'

(2) netshoot-pod ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์ƒ์„ฑ

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: netshoot-pod
  template:
    metadata:
      labels:
        app: netshoot-pod
    spec:
      containers:
      - name: netshoot-pod
        image: nicolaka/netshoot
        command: ["tail"]
        args: ["-f", "/dev/null"]
        terminationGracePeriodSeconds: 0
EOF

netshot pod ์ƒ์„ฑ ์งํ›„

Image

netshot pod ์ƒ์„ฑ ์™„๋ฃŒ

Image

(3) ํŒŒ๋“œ ์ƒ์„ฑ ํ›„ IP ํ™•์ธ

k get pod -owide

โœ…ย ์ถœ๋ ฅ

NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          69s   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          69s   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          69s   192.168.2.226   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>

Image

4. ํŒŒ๋“œ ์ด๋ฆ„ ๋ณ€์ˆ˜ ์ง€์ •

PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[0].metadata.name}')
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[1].metadata.name}')
PODNAME3=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[2].metadata.name}')

5. ๋…ธ๋“œ ๋ผ์šฐํŒ… ์ •๋ณด ํ™•์ธ

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done

โœ…ย ์ถœ๋ ฅ

(์ฐธ๊ณ : ์ถœ๋ ฅ ์•ž๋ถ€๋ถ„์˜ ํŒŒ๋“œ ๋ชฉ๋ก์€ kubectl get pod ์กฐํšŒ ๊ฒฐ๊ณผ์ด๊ณ , >> node ... << ์ดํ•˜๊ฐ€ ๊ฐ ๋…ธ๋“œ์˜ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ”์ž„)
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          95m   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          95m   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          95m   192.168.2.226   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>
NAME                            IP
netshoot-pod-744bd84b46-4cfhr   192.168.3.146
netshoot-pod-744bd84b46-hkbd6   192.168.1.176
netshoot-pod-744bd84b46-pzv6d   192.168.2.226

>> node 43.202.57.204 <<
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024 
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024 
192.168.1.59 dev eni61c5a949744 scope link 
192.168.1.90 dev eni01a4864c88a scope link 
192.168.1.176 dev eni1d5278cfd98 scope link 

>> node 15.164.179.214 <<
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024 
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.52 metric 1024 
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.52 metric 1024 
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.52 metric 1024 
192.168.2.72 dev enibce2df30e87 scope link 
192.168.2.226 dev enia1aaba2012e scope link 
192.168.2.248 dev enic99196c7a64 scope link 

>> node 43.201.115.81 <<
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024 
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024 
192.168.3.146 dev enia32f5632a44 scope link 
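
์œ„์ฒ˜๋Ÿผ VPC CNI๋Š” ํŒŒ๋“œ IP๋งˆ๋‹ค "IP dev eniXXXX scope link" ํ˜•ํƒœ์˜ ํ˜ธ์ŠคํŠธ ๋ผ์šฐํŠธ๋ฅผ ์ถ”๊ฐ€ํ•จ. ์œ„ ์ถœ๋ ฅ์—์„œ ๋ฐœ์ทŒํ•œ ์ƒ˜ํ”Œ ๋ฐ์ดํ„ฐ๋กœ ํŒŒ๋“œ๋ณ„ ๋ผ์šฐํŠธ๋งŒ ์ถ”๋ ค๋‚ด๋Š” ์˜ˆ์‹œ (์‹ค์ œ๋กœ๋Š” ssh๋กœ ๋ฐ›์€ ip route ์ถœ๋ ฅ์— ๊ทธ๋Œ€๋กœ ์ ์šฉ ๊ฐ€๋Šฅ):

```shell
# ip route ์ถœ๋ ฅ์—์„œ ํŒŒ๋“œ๋ณ„ veth(eni*) ๋ผ์šฐํŠธ๋งŒ ์ถ”์ถœ
awk '$2 == "dev" && $3 ~ /^eni/ {print $1, "->", $3}' <<'EOF'
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024
192.168.3.146 dev enia32f5632a44 scope link
EOF
# ์ถœ๋ ฅ: 192.168.3.146 -> enia32f5632a44
```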

6. Pod IP ํ™•์ธ๊ณผ ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์ง„์ž…

(1) Pod IP ๋ฐ ENI ์—ฐ๊ฒฐ ํ™•์ธ

[ec2-user@ip-192-168-3-72 ~]$ ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
       valid_lft 3318sec preferred_lft 3318sec
    inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: enia32f5632a44@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 76:e0:80:d5:57:23 brd ff:ff:ff:ff:ff:ff link-netns cni-48fae262-f8c0-ab50-61f1-b90d6344980d
    inet6 fe80::74e0:80ff:fed5:5723/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:da:c7:72:d0:e1 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.3.120/24 brd 192.168.3.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::8da:c7ff:fe72:d0e1/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

ํ˜ธ์ŠคํŠธ ์ธก veth ์ธํ„ฐํŽ˜์ด์Šค(enia32f5632a44@if3)๊ฐ€ ์ถ”๊ฐ€๋˜์–ด ์žˆ์œผ๋ฉฐ, Pod ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค์™€ ์—ฐ๊ฒฐ๋จ (์ƒˆ๋กœ ๋ถ™์€ ENI๋Š” ens6)

3: enia32f5632a44@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 76:e0:80:d5:57:23 brd ff:ff:ff:ff:ff:ff link-netns cni-48fae262-f8c0-ab50-61f1-b90d6344980d
    inet6 fe80::74e0:80ff:fed5:5723/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

(2) lsns ๋ช…๋ น์–ด๋กœ ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค ํ™•์ธ

pause ์ปจํ…Œ์ด๋„ˆ์˜ PID(110946)๋ฅผ ํ™•์ธ

[ec2-user@ip-192-168-3-72 ~]$ sudo lsns -t net
        NS TYPE NPROCS    PID USER     NETNSID NSFS                                                COMMAND
4026531840 net     115      1 root  unassigned                                                     /usr/li
4026532216 net       2 110946 65535          0 /run/netns/cni-48fae262-f8c0-ab50-61f1-b90d6344980d /pause
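
์—ฌ๋Ÿฌ ํŒŒ๋“œ๊ฐ€ ๋– ์žˆ๋Š” ๋…ธ๋“œ์—์„œ๋Š” lsns ์ถœ๋ ฅ์—์„œ pause ํ”„๋กœ์„ธ์Šค์˜ PID(4๋ฒˆ์งธ ์—ด)๋ฅผ ์ง์ ‘ ๋ฝ‘์•„ ๋ณ€์ˆ˜์— ๋‹ด์•„๋„ ๋จ. ์œ„ ์ถœ๋ ฅ์„ ์ƒ˜ํ”Œ๋กœ ํ•œ ์Šค์ผ€์น˜:

```shell
# lsns -t net ์ถœ๋ ฅ์—์„œ /pause ํ”„๋กœ์„ธ์Šค์˜ PID(4๋ฒˆ์งธ ์—ด)๋งŒ ์ถ”์ถœ
# ์‹ค์ œ ์‚ฌ์šฉ ์˜ˆ: PID=$(sudo lsns -t net | awk '$NF == "/pause" {print $4}')
awk '$NF == "/pause" {print $4}' <<'EOF'
        NS TYPE NPROCS    PID USER     NETNSID NSFS COMMAND
4026531840 net     115      1 root  unassigned      /usr/li
4026532216 net       2 110946 65535          0 /run/netns/cni-48fae262 /pause
EOF
# ์ถœ๋ ฅ: 110946
```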

(3) nsenter ๋ช…๋ น์–ด๋กœ ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์ง„์ž…

[ec2-user@ip-192-168-3-72 ~]$ PID=110946
[ec2-user@ip-192-168-3-72 ~]$ sudo nsenter -t $PID -n ip -c addr

โœ…ย ์ถœ๋ ฅ

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo 
       valid_lft forever preferred_lft forever
3: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether e2:ab:55:84:93:dc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.3.146/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::e0ab:55ff:fe84:93dc/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

(4) Pod IP ํ™•์ธ ๊ฒฐ๊ณผ

  • nsenter๋กœ ์ง„์ž…ํ•œ ๋„ค์ž„์ŠคํŽ˜์ด์Šค์—์„œ๋Š” ๋…ธ๋“œ IP๊ฐ€ ์•„๋‹ˆ๋ผ Pod IP(192.168.3.146)๊ฐ€ ์ถœ๋ ฅ๋จ
  • Pod์˜ ์ธํ„ฐํŽ˜์ด์Šค eth0@if3๋Š” ํ˜ธ์ŠคํŠธ ์ธก veth ์ธํ„ฐํŽ˜์ด์Šค enia32f5632a44@if3์™€ ํ•œ ์Œ์œผ๋กœ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ์Œ
k get pod -owide

โœ…ย ์ถœ๋ ฅ

NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          114m   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          114m   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          114m   192.168.2.226   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>

7. Pod์— Zsh๋กœ ์ง„์ž…ํ•˜์—ฌ ๋„คํŠธ์›Œํฌ ์ •๋ณด ํ™•์ธ

(1) Pod ๋ชฉ๋ก ํ™•์ธ

k get pod

โœ…ย ์ถœ๋ ฅ

NAME                            READY   STATUS    RESTARTS   AGE
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          117m
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          117m
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          117m

(2) Zsh๋กœ Pod ๋‚ด๋ถ€ ์ง„์ž… ๋ฐ ๋„คํŠธ์›Œํฌ ํ™•์ธ

kubectl exec -it $PODNAME1 -- zsh

# ๊ฒฐ๊ณผ
                    dP            dP                           dP   
                    88            88                           88   
88d888b. .d8888b. d8888P .d8888b. 88d888b. .d8888b. .d8888b. d8888P 
88'  `88 88ooood8   88   Y8ooooo. 88'  `88 88'  `88 88'  `88   88   
88    88 88.  ...   88         88 88    88 88.  .88 88.  .88   88   
dP    dP `88888P'   dP   `88888P' dP    dP `88888P' `88888P'   dP   
                                                                    
Welcome to Netshoot! (github.com/nicolaka/netshoot)
Version: 0.13

Netshoot ์ปจํ…Œ์ด๋„ˆ์— ์ง„์ž… ํ›„, ip -c addr๋กœ ๋„คํŠธ์›Œํฌ ์ •๋ณด ํ™•์ธ

netshoot-pod-744bd84b46-4cfhr [ ~ ] ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo 
       valid_lft forever preferred_lft forever
3: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether e2:ab:55:84:93:dc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.3.146/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::e0ab:55ff:fe84:93dc/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

(3) Pod ๋‚ด๋ถ€ ๋ผ์šฐํŒ… ์ •๋ณด ํ™•์ธ

1
netshoot-pod-744bd84b46-4cfhr [ ~ ] cat /etc/resolv.conf

โœ…ย ์ถœ๋ ฅ

1
2
3
search default.svc.cluster.local svc.cluster.local cluster.local ap-northeast-2.compute.internal
nameserver 10.100.0.10
options ndots:5
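Because of ndots:5 in the resolv.conf above, a name with fewer than five dots is first tried against each search domain. A minimal sketch of that expansion (assuming standard resolver behavior; "kubernetes" is just an example name):

```shell
# Expand a short name against the search list from /etc/resolv.conf above.
# With ndots:5, these candidates are tried before the bare name itself.
name="kubernetes"
for domain in default.svc.cluster.local svc.cluster.local cluster.local; do
  echo "$name.$domain"
done
# kubernetes.default.svc.cluster.local
# kubernetes.svc.cluster.local
# kubernetes.cluster.local
```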

8. Pod ์ •๋ณด ํ™•์ธ

1
k get pod -owide

โœ…ย ์ถœ๋ ฅ

1
2
3
4
NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-4cfhr   1/1     Running   0          3h3m   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod-744bd84b46-hkbd6   1/1     Running   0          3h3m   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod-744bd84b46-pzv6d   1/1     Running   0          3h3m   192.168.2.226   ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>

9. ํŒŒ๋“œ๊ฐ„ ํ†ต์‹  ํ…Œ์ŠคํŠธ

(1) ํŒŒ๋“œ IP ๋ณ€์ˆ˜ ์ง€์ •

1
2
3
PODIP1=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[0].status.podIP}')
PODIP2=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[1].status.podIP}')
PODIP3=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[2].status.podIP}')

(2) ํŒŒ๋“œ1์—์„œ ํŒŒ๋“œ2๋กœ ping ํ…Œ์ŠคํŠธ

1
kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
PING 192.168.1.176 (192.168.1.176) 56(84) bytes of data.
64 bytes from 192.168.1.176: icmp_seq=1 ttl=125 time=1.38 ms
64 bytes from 192.168.1.176: icmp_seq=2 ttl=125 time=1.23 ms

--- 192.168.1.176 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.225/1.303/1.381/0.078 ms

(3) ํ†ต์‹  ํ™•์ธ์„ ์œ„ํ•œ ํŒจํ‚ท ์บก์ฒ˜

ICMP ํŒจํ‚ท๋งŒ ์บก์ฒ˜ํ•˜๋Š” ๋ช…๋ น ์‚ฌ์šฉ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
[ec2-user@ip-192-168-1-193 ~]$ sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes

[ec2-user@ip-192-168-2-52 ~]$ sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes

[ec2-user@ip-192-168-3-72 ~]$ sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes

Image

(4) Pod ๊ฐ„ Ping ํ…Œ์ŠคํŠธ

1
kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2

Image

1
2
3
4
5
6
7
8
13:00:13.795567 ens5  In  IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.795632 eni1d5278cfd98 Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.795646 eni1d5278cfd98 In  IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:13.795654 ens5  Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:14.796938 ens5  In  IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.796970 eni1d5278cfd98 Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.796989 eni1d5278cfd98 In  IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
13:00:14.796996 ens5  Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
13:00:13.794944 enia32f5632a44 In  IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.794990 ens5  Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 1, length 64
13:00:13.796201 ens5  In  IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:13.796236 enia32f5632a44 Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 1, length 64
13:00:14.796330 enia32f5632a44 In  IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.796363 ens5  Out IP 192.168.3.146 > 192.168.1.176: ICMP echo request, id 110, seq 2, length 64
13:00:14.797526 ens5  In  IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64
13:00:14.797542 enia32f5632a44 Out IP 192.168.1.176 > 192.168.3.146: ICMP echo reply, id 110, seq 2, length 64

tcpdump ๊ฒฐ๊ณผ ๋ถ„์„

  • ens5์™€ eni1d5278cfd98 ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ†ตํ•ด ICMP ์š”์ฒญ๊ณผ ์‘๋‹ต์ด ์˜ค๊ณ ๊ฐ
  • NAT ์—†์ด ์›๋ณธ IP๋กœ ํ†ต์‹ ๋จ

10. ๋„คํŠธ์›Œํฌ ๊ฒฝ๋กœ ํ™•์ธ

(1) ip rule ํ™•์ธ

1
[ec2-user@ip-192-168-1-193 ~]$ ip rule

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
0:	from all lookup local
512:	from all to 192.168.1.90 lookup main
512:	from all to 192.168.1.59 lookup main
512:	from all to 192.168.1.176 lookup main
1024:	from all fwmark 0x80/0x80 lookup main
32766:	from all lookup main
32767:	from all lookup default

(2) Check the ip route Tables

Check the local IP information for each interface:

[ec2-user@ip-192-168-1-193 ~]$ ip route show table local

✅ Output

local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 
local 192.168.1.193 dev ens5 proto kernel scope host src 192.168.1.193 
local 192.168.1.237 dev ens6 proto kernel scope host src 192.168.1.237 
broadcast 192.168.1.255 dev ens5 proto kernel scope link src 192.168.1.193 
broadcast 192.168.1.255 dev ens6 proto kernel scope link src 192.168.1.237

๊ธฐ๋ณธ ๊ฒŒ์ดํŠธ์›จ์ด ๋ฐ ENI๋ณ„ ์—ฐ๊ฒฐ ์ •๋ณด ํ™•์ธ

1
[ec2-user@ip-192-168-1-193 ~]$ ip route show table main

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024 
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.193 metric 1024 
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.193 metric 1024 
192.168.1.59 dev eni61c5a949744 scope link 
192.168.1.90 dev eni01a4864c88a scope link 
192.168.1.176 dev eni1d5278cfd98 scope link
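The last three entries in the main table above are per-Pod /32 host routes: each Pod IP points straight at its veth, which is how same-node traffic is delivered. A quick sketch that pulls them out of a saved route dump (sample lines copied from the output above):

```shell
# Extract the Pod host-routes (dev eni*) from an `ip route show table main` dump.
# Field 1 is the destination IP, field 3 the veth interface name.
cat <<'EOF' | awk '/dev eni/ {print $1, "->", $3}'
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.193 metric 1024
192.168.1.59 dev eni61c5a949744 scope link
192.168.1.90 dev eni01a4864c88a scope link
192.168.1.176 dev eni1d5278cfd98 scope link
EOF
# 192.168.1.59 -> eni61c5a949744
# 192.168.1.90 -> eni01a4864c88a
# 192.168.1.176 -> eni1d5278cfd98
```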

๐Ÿ“ถ ํŒŒ๋“œ์—์„œ ์™ธ๋ถ€ ํ†ต์‹  ํ…Œ์ŠคํŠธ ๋ฐ ํ™•์ธ

  • Pod์—์„œ ์™ธ๋ถ€๋กœ ping ํ…Œ์ŠคํŠธํ•˜๋ฉฐ NAT ๊ณผ์ •์„ ํ™•์ธ
  • Pod๋Š” ๋‚ด๋ถ€ IP๋กœ ํ†ต์‹ ํ•˜์ง€๋งŒ, ์™ธ๋ถ€๋กœ ๋‚˜๊ฐˆ ๋•Œ๋Š” ์„œ๋ฒ„์˜ ์œ ๋™ ๊ณต์ธ IP๋กœ NAT๋˜์–ด ํ†ต์‹ ๋จ

1. Pod์—์„œ ์™ธ๋ถ€๋กœ ping ํ…Œ์ŠคํŠธ

1
kubectl exec -it $PODNAME1 -- ping -c 1 www.google.com

Image

โœ… ์ถœ๋ ฅ

1
2
3
4
13:35:28.230157 enia32f5632a44 In  IP 192.168.3.146 > 172.217.161.228: ICMP echo request, id 116, seq 1, length 64
13:35:28.230179 ens5  Out IP 192.168.3.72 > 172.217.161.228: ICMP echo request, id 35777, seq 1, length 64
13:35:28.256496 ens5  In  IP 172.217.161.228 > 192.168.3.72: ICMP echo reply, id 35777, seq 1, length 64
13:35:28.256529 enia32f5632a44 Out IP 172.217.161.228 > 192.168.3.146: ICMP echo reply, id 116, seq 1, length 64

2. ์™ธ๋ถ€๋กœ ์ง€์†์  ping ํ…Œ์ŠคํŠธ

1
kubectl exec -it $PODNAME1 -- ping -i 0.1 www.google.com

Image

1
64 bytes from sin01s16-in-f4.1e100.net (172.217.25.164): icmp_seq=174 ttl=47 time=35.0 ms

veth(enia32f5632a44)์—์„œ๋Š” Pod IP๊ฐ€ ๋ณด์ด์ง€๋งŒ, ens5์—์„œ๋Š” ์„œ๋ฒ„ IP๊ฐ€ ๋ณด์ด๋ฉฐ NAT๊ฐ€ ์ ์šฉ๋จ

1
2
13:37:01.153999 ens5  In  IP 172.217.25.164 > 192.168.3.72: ICMP echo reply, id 48046, seq 174, length 64
13:37:01.154037 enia32f5632a44 Out IP 172.217.25.164 > 192.168.3.146: ICMP echo reply, id 122, seq 174, length 64

3. ์„œ๋ฒ„ ๊ณต์ธ IP ํ™•์ธ

1
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i curl -s ipinfo.io/ip; echo; echo; done

โœ…ย ์ถœ๋ ฅ

์ถœ๋ ฅ๋œ IP๋Š” ์„œ๋ฒ„์˜ ์œ ๋™ ๊ณต์ธ IP

1
2
3
4
5
6
7
8
>> node 43.202.57.204 <<
43.202.57.204

>> node 15.164.179.214 <<
15.164.179.214

>> node 43.201.115.81 <<
43.201.115.81

Pod๊ฐ€ ์™ธ๋ถ€ ํ†ต์‹ ํ•  ๋•Œ ์„œ๋ฒ„์˜ ๊ณต์ธ IP๋กœ NAT๋˜์–ด ๋‚˜๊ฐ

Image

4. Pod ์™ธ๋ถ€ ์ธํ„ฐ๋„ท ํ†ต์‹  ์‹œ NAT ํ™•์ธ

1
for i in $PODNAME1 $PODNAME2 $PODNAME3; do echo ">> Pod : $i <<"; kubectl exec -it $i -- curl -s ipinfo.io/ip; echo; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
>> Pod : netshoot-pod-744bd84b46-4cfhr <<
43.201.115.81

>> Pod : netshoot-pod-744bd84b46-hkbd6 <<
43.202.57.204

>> Pod : netshoot-pod-744bd84b46-pzv6d <<
15.164.179.214

5. ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ” ํ™•์ธ

์„œ๋ฒ„์˜ ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ๋ฐ ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ” ํ™•์ธ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
ssh ec2-user@$N3

A newer release of "Amazon Linux" is available.
  Version 2023.6.20250203:
Run "/usr/bin/dnf check-release-update" for full release and version update info
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Wed Feb 12 11:05:36 2025 from 182.230.60.93
[ec2-user@ip-192-168-3-72 ~]$ 

(1) ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ •๋ณด ํ™•์ธ

1
[ec2-user@ip-192-168-3-72 ~]$ ip -c addr

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
       valid_lft 2513sec preferred_lft 2513sec
    inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: enia32f5632a44@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 76:e0:80:d5:57:23 brd ff:ff:ff:ff:ff:ff link-netns cni-48fae262-f8c0-ab50-61f1-b90d6344980d
    inet6 fe80::74e0:80ff:fed5:5723/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:da:c7:72:d0:e1 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.3.120/24 brd 192.168.3.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::8da:c7ff:fe72:d0e1/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

(2) ๋ผ์šฐํŒ… ํ…Œ์ด๋ธ” ํ™•์ธ

1
[ec2-user@ip-192-168-3-72 ~]$ ip route show table main

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.72 metric 1024 
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.72 metric 1024 
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.72 metric 1024 
192.168.3.146 dev enia32f5632a44 scope link
  • ๊ธฐ๋ณธ ๊ฒฝ๋กœ(ens5)๋กœ ๋‚˜๊ฐˆ ๋•Œ iptables์˜ SNAT ๊ทœ์น™์ด ์ ์šฉ๋˜์–ด ์„œ๋ฒ„ IP(192.168.3.72)๋กœ ๋ณ€ํ™˜๋จ

(3) Check the iptables Rules

[ec2-user@ip-192-168-3-72 ~]$ sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 -d 192.168.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j RETURN
-A AWS-SNAT-CHAIN-0 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 192.168.3.72 --random-fully
  • The 192.168.0.0/16 range (inside the VPC) communicates without SNAT; the SNAT rule applies only to external traffic

(4) Pod ๊ฐ„ ํ†ต์‹  ์‹œ NAT ๋ฏธ์ ์šฉ ์ด์œ 

  • VPC ๋‚ด๋ถ€ ๋Œ€์—ญ์€ NAT ์—†์ด ํ†ต์‹ ํ•˜๋ฉฐ, ์™ธ๋ถ€ ์ธํ„ฐ๋„ท ๋Œ€์—ญ์€ SNAT ์ ์šฉ
  • ๊ฐ™์€ VPC์—์„œ NAT์„ ์ ์šฉํ•˜๋ฉด ํด๋ผ์ด์–ธํŠธ IP๊ฐ€ ๊ฐ์ถฐ์ง€๊ณ  ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ๋ฐœ์ƒํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋‚ด๋ถ€๋ง์—์„œ๋Š” NAT ์—†์ด ๋ฐ”๋กœ ํ†ต์‹ 
  • ์™ธ๋ถ€ ์ธํ„ฐ๋„ท ํ†ต์‹ ์€ SNAT --to-source 192.168.3.72๋กœ ๋ณ€ํ™˜๋˜์–ด ๋‚˜๊ฐ€๋„๋ก ์„ค์ •

6. AWS-SNAT-CHAIN-0์˜ SNAT ๊ทœ์น™๊ณผ Pod ํ†ต์‹  ํ™•์ธ

(1) iptables ์ดˆ๊ธฐํ™”

1
sudo iptables -t filter --zero; sudo iptables -t nat --zero; sudo iptables -t mangle --zero; sudo iptables -t raw --zero

(2) Monitor the AWS-SNAT-CHAIN-0, KUBE-POSTROUTING, and POSTROUTING Chains with watch

watch -d 'sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list KUBE-POSTROUTING; echo ; sudo iptables -v --numeric --table nat --list POSTROUTING'

Image

(3) AWS-SNAT-CHAIN-0 Rule Analysis

  • The 192.168.0.0/16 range communicates without SNAT (RETURN)
  • External traffic is SNATed, rewriting the source to the EC2 node IP 192.168.3.72

7. conntrack์„ ์ด์šฉํ•œ ์—ฐ๊ฒฐ ์ถ”์ 

conntrack ๋ช…๋ น์–ด๋กœ NAT๋œ ์—ฐ๊ฒฐ ์ƒํƒœ ํ™•์ธ

1
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo conntrack -L -n |grep -v '169.254.169'; echo; done

โœ…ย ์ถœ๋ ฅ

1
2
3
conntrack v1.4.5 (conntrack-tools): 
icmp     1 28 src=172.30.66.58 dst=8.8.8.8 type=8 code=0 id=34392 src=8.8.8.8 dst=172.30.85.242 type=0 code=0 id=50705 mark=128 use=1
tcp      6 23 TIME_WAIT src=172.30.66.58 dst=34.117.59.81 sport=58144 dport=80 src=34.117.59.81 dst=172.30.85.242 sport=80 dport=44768 [ASSURED] mark=128 use=1

8. Pod ํ†ต์‹  ํ…Œ์ŠคํŠธ

operator-host์—์„œ Pod๋กœ ping ํ…Œ์ŠคํŠธ

1
2
3
4
5
6
7
8
9
(eks-user:N/A) [root@operator-host ~]# POD1IP=192.168.1.176
(eks-user:N/A) [root@operator-host ~]# ping -c 1 $POD1IP
# ๊ฒฐ๊ณผ
PING 192.168.1.176 (192.168.1.176) 56(84) bytes of data.
64 bytes from 192.168.1.176: icmp_seq=1 ttl=126 time=0.674 ms

--- 192.168.1.176 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms

VPC ๋‚ด๋ถ€์—์„œ๋Š” NAT ์—†์ด ๋ฐ”๋กœ ํ†ต์‹ ๋จ

1
2
3
4
15:07:35.091537 ens5  In  IP 172.20.1.100 > 192.168.1.176: ICMP echo request, id 5445, seq 1, length 64
15:07:35.091600 eni1d5278cfd98 Out IP 172.20.1.100 > 192.168.1.176: ICMP echo request, id 5445, seq 1, length 64
15:07:35.091614 eni1d5278cfd98 In  IP 192.168.1.176 > 172.20.1.100: ICMP echo reply, id 5445, seq 1, length 64
15:07:35.091621 ens5  Out IP 192.168.1.176 > 172.20.1.100: ICMP echo reply, id 5445, seq 1, length 64

VPC ๋‚ด๋ถ€์—์„œ๋Š” NAT ์—†์ด ํ†ต์‹ ํ•˜๋ฉฐ ์™ธ๋ถ€ ํ†ต์‹  ์‹œ AWS-SNAT-CHAIN-0 ๊ทœ์น™์— ์˜ํ•ด EC2 ๋…ธ๋“œ IP๋กœ SNAT๋˜๋Š” ๊ณผ์ •์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Œ

Image


๐Ÿ›œ ํŒŒ๋“œ1 โ†’ ์šด์˜์„œ๋ฒ„ EC2 ํ†ต์‹ 

1. vpc cni env ์ •๋ณด ํ™•์ธ

1
(eks-user:N/A) [root@operator-host ~]# kubectl get ds aws-node -n kube-system -o json | jq '.spec.template.spec.containers[0].env'

2. ์šด์˜์„œ๋ฒ„ EC2 SSH ์ ‘์†

1
(eks-user:N/A) [root@operator-host ~]# kubectl exec -it $PODNAME1 -- ping 172.20.1.100

3. ํŒŒ๋“œ1 ๋ฐฐ์น˜ ์›Œ์ปค๋…ธ๋“œ์—์„œ tcpdump ํ™•์ธ

tcpdump๋กœ NAT ์ ์šฉ ํ™•์ธ

1
(eks-user:N/A) [root@operator-host ~]# sudo tcpdump -i any -nn icmp

Pod IP โ†’ ์„œ๋ฒ„ IP๋กœ SNAT๋˜์–ด ์™ธ๋ถ€ ํ†ต์‹ 

1
2
15:35:03.714838 enia32f5632a44 In  IP 192.168.3.146 > 172.20.1.100: ICMP echo request, id 140, seq 77, length 64
15:35:03.714869 ens5  Out IP 192.168.3.72 > 172.20.1.100: ICMP echo request, id 56175, seq 77, length 64

Image

4. ์‚ฌ๋‚ด ๋‚ด๋ถ€์— ์—ฐ๊ฒฐ ํ™•์žฅ๋œ ๋„คํŠธ์›Œํฌ ๋Œ€์—ญ๊ณผ SNAT ์—†์ด ํ†ต์‹  ๊ฐ€๋Šฅํ•˜๊ฒŒ ์„ค์ • ํ•ด๋ณด๊ธฐ

(1) ์›Œ์ปค๋…ธ๋“œ(192.168.1.193) ์ ‘์†

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
ssh ec2-user@$N1

Last login: Wed Feb 12 09:05:57 2025 from 182.230.60.93
[ec2-user@ip-192-168-1-193 ~]$ 

(2) Check the NAT Policy

[ec2-user@ip-192-168-1-193 ~]$ sudo iptables -t filter --zero; sudo iptables -t nat --zero; sudo iptables -t mangle --zero; sudo iptables -t raw --zero
[ec2-user@ip-192-168-1-193 ~]$ watch -d 'sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list KUBE-POSTROUTING; echo ; sudo iptables -v --numeric --table nat --list POSTROUTING'

Image

(3) Configure the SNAT-Excluded Network Range

kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS=172.20.0.0/16
# Result
daemonset.apps/aws-node env updated

(4) Verify the NAT Policy Change

Image

(5) Verify SNAT Exclusion with a Ping Test

(eks-user:N/A) [root@operator-host ~]# kubectl exec -it $PODNAME1 -- ping 172.20.1.100

Image

15:53:57.159915 ens5  In  IP 172.20.1.100 > 192.168.3.146: ICMP echo reply, id 146, seq 23, length 64
15:53:57.159929 enia32f5632a44 Out IP 172.20.1.100 > 192.168.3.146: ICMP echo reply, id 146, seq 23, length 64

์•„๋ž˜์˜ ๋‚ด์šฉ์ด ์ถ”๊ฐ€๋จ

1
2
3
4
5
6
7
8
9
kubectl get ds aws-node -n kube-system -o json | jq '.spec.template.spec.containers[0].env'
[
	...
  {
    "name": "AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS",
    "value": "172.20.0.0/16"
  }
  ...
]

ens5๋กœ NAT ๋˜๋˜ ์ด์ „ ์ƒํƒœ์—์„œ ์ด์ œ๋Š” 172.20.0.0/16 ๋Œ€์—ญ์œผ๋กœ๋Š” NAT ์—†์ด ๋ฐ”๋กœ ํ†ต์‹ ๋จ

Image

(6) Run the iptables monitoring command and confirm the counter increase in the AWS-SNAT-CHAIN-0 chain

ssh ec2-user@$N3

Last login: Wed Feb 12 14:21:22 2025 from 182.230.60.93
[ec2-user@ip-192-168-3-72 ~]$ watch -d 'sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list KUBE-POSTROUTING; echo ; sudo iptables -v --numeric --table nat --list POSTROUTING'

Image

5. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์‚ญ์ œ๋กœ ์‹ค์Šต ๋งˆ๋ฌด๋ฆฌ

kubectl delete deploy netshoot-pod


๐Ÿ“ ๋…ธ๋“œ์— ํŒŒ๋“œ ์ƒ์„ฑ ๊ฐฏ์ˆ˜ ์ œํ•œ

1. kube-ops-view ์„ค์น˜

  • kube-ops-view๋Š” ๋…ธ๋“œ์™€ ํŒŒ๋“œ ์ •๋ณด๋ฅผ ์‹œ๊ฐํ™”ํ•˜๋Š” ๋„๊ตฌ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=LoadBalancer --set env.TZ="Asia/Seoul" --namespace kube-system

# ์„ค์น˜ ์™„๋ฃŒ ์‹œ ์ถœ๋ ฅ
"geek-cookbook" already exists with the same configuration, skipping
NAME: kube-ops-view
LAST DEPLOYED: Thu Feb 13 01:05:07 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w kube-ops-view'
  export SERVICE_IP=$(kubectl get svc --namespace kube-system kube-ops-view -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:8080

2. Check the kube-ops-view Access URL (1.5x scale)

kubectl get svc -n kube-system kube-ops-view -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "KUBE-OPS-VIEW URL = http://"$1":8080/#scale=1.5"}'

โœ…ย ์ถœ๋ ฅ

1
KUBE-OPS-VIEW URL = http://a87c9a7b2db234e36a4a71827a7fbe54-363200146.ap-northeast-2.elb.amazonaws.com:8080/#scale=1.5

3. Check the kube-ops-view Service

kubectl get svc -n kube-system kube-ops-view

โœ…ย ์ถœ๋ ฅ

1
2
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                                                   PORT(S)          AGE
kube-ops-view   LoadBalancer   10.100.146.32   a87c9a7b2db234e36a4a71827a7fbe54-363200146.ap-northeast-2.elb.amazonaws.com   8080:32597/TCP   19h
  • ์„ค์น˜ ์‹œ LoadBalancer ์„œ๋น„์Šค๊ฐ€ ํ•จ๊ป˜ ๋ฐฐํฌ๋˜๋ฉฐ, ์ปจํŠธ๋กค๋Ÿฌ ๋งค๋‹ˆ์ €๊ฐ€ ํด๋ผ์šฐ๋“œ ์ œ๊ณต์ž์˜ ๋กœ๋“œ๋ฐธ๋Ÿฐ์„œ๋ฅผ ์ž๋™์œผ๋กœ ์ƒ์„ฑ
  • External IP๋กœ ํ• ๋‹น๋œ ๋กœ๋“œ๋ฐธ๋Ÿฐ์„œ ๋„๋ฉ”์ธ์„ ํ†ตํ•ด ์ ‘์† ๊ฐ€๋Šฅ
  • ํฌํŠธ๋Š” 8080์œผ๋กœ ์„ค์ •

Image

4. ์›Œ์ปค๋…ธ๋“œ์˜ ์ธ์Šคํ„ด์Šค ์ •๋ณด ํ™•์ธ

t3 ํƒ€์ž…์˜ ์ •๋ณด ํ™•์ธ

1
2
3
aws ec2 describe-instance-types --filters Name=instance-type,Values=t3.\* \
 --query "InstanceTypes[].{Type: InstanceType, MaxENI: NetworkInfo.MaximumNetworkInterfaces, IPv4addr: NetworkInfo.Ipv4AddressesPerInterface}" \ 
 --output table

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
--------------------------------------
|        DescribeInstanceTypes       |
+----------+----------+--------------+
| IPv4addr | MaxENI   |    Type      |
+----------+----------+--------------+
|  12      |  3       |  t3.large    |
|  6       |  3       |  t3.medium   |
|  15      |  4       |  t3.xlarge   |
|  15      |  4       |  t3.2xlarge  |
|  2       |  2       |  t3.nano     |
|  2       |  2       |  t3.micro    |
|  4       |  3       |  t3.small    |
+----------+----------+--------------+
  • EC2 ์ธ์Šคํ„ด์Šค ํƒ€์ž…์— ๋”ฐ๋ผ ์ตœ๋Œ€ Pod ๋ฐฐํฌ ๊ฐœ์ˆ˜๊ฐ€ ์ œํ•œ๋จ
  • ์žฅ์ฐฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค(ENI) ๊ฐœ์ˆ˜์™€ ๊ฐ ENI์— ํ• ๋‹นํ•  ์ˆ˜ ์žˆ๋Š” ๋ณด์กฐ IP ๊ฐœ์ˆ˜๊ฐ€ ์ด๋ฏธ ์ •ํ•ด์ ธ ์žˆ๊ธฐ ๋•Œ๋ฌธ

5. Calculating the Maximum Number of Pods on a t3.medium Instance

  • Max ENIs (network interfaces): 3
  • IPs per ENI: 6
  • Calculation: (6 - 1) * 3 = 15
    • The primary IP of each ENI is already in use, so it is excluded
    • A total of 15 IPs can be assigned to Pods
  • Additional consideration: the CNI DaemonSet and kube-proxy count for 2 more Pods (they share the node IP)
  • Result: a t3.medium instance can run up to 17 Pods

6. ์›Œ์ปค๋…ธ๋“œ ์ƒ์„ธ ์ •๋ณด ํ™•์ธ

๊ฐ ์›Œ์ปค๋…ธ๋“œ์—์„œ ์ตœ๋Œ€ 17๊ฐœ์˜ Pod์ด ๋ฐฐํฌ ๊ฐ€๋Šฅํ•จ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Œ

1
kubectl describe node | grep Allocatable: -A6

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
Allocatable:
  cpu:                1930m
  ephemeral-storage:  27845546346
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3364544Ki
  pods:               17
--
Allocatable:
  cpu:                1930m
  ephemeral-storage:  27845546346
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3364536Ki
  pods:               17
--
Allocatable:
  cpu:                1930m
  ephemeral-storage:  27845546346
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3364544Ki
  pods:               17

๐Ÿ“Š ์ตœ๋Œ€ ํŒŒ๋“œ ์ƒ์„ฑ ๋ฐ ํ™•์ธ

1. ์›Œ์ปค ๋…ธ๋“œ 3๋Œ€ EC2 ๋ชจ๋‹ˆํ„ฐ๋ง

1
[ec2-user@ip-192-168-1-193 ~]$ while true; do ip -br -c addr show && echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
1
[ec2-user@ip-192-168-2-52 ~]$ while true; do ip -br -c addr show && echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
1
[ec2-user@ip-192-168-3-72 ~]$ while true; do ip -br -c addr show && echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done

2. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์ƒ์„ฑ ์ „ ์ƒํƒœ ํ™•์ธ

Image

1
2
3
4
5
6
7
8
--------------
2025-02-13 12:20:21
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.2.52/24 metric 1024 fe80::4c5:5bff:fed1:7757/64 
enibce2df30e87@if3 UP             fe80::48c8:23ff:feab:8539/64 
enic99196c7a64@if3 UP             fe80::8800:6eff:fe37:b171/64 
ens7             UP             192.168.2.138/24 fe80::487:86ff:fe90:b33d/64 
--------------
--------------
2025-02-13 12:21:10
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.3.72/24 metric 1024 fe80::8d5:3dff:fe4a:54bd/64 
enif2c43957bf8@if3 UP             fe80::a0e3:15ff:fe62:4731/64 
ens7             UP             192.168.3.218/24 fe80::80c:3cff:fe65:9c27/64 
--------------
Every 2.0s: kubectl get pods -o wide                 gram88: 09:22:04 PM
                                                           in 0.538s (0)
No resources found in default namespace.

3. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์ƒ์„ฑ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF

4. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์ƒ์„ฑ ํ›„ ์ƒํƒœ ํ™•์ธ

Image

ENI ์ถ”๊ฐ€

  • eni189e26c0cf7@if3 UP ์ƒํƒœ๋กœ ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ถ”๊ฐ€ ํ™•์ธ
  • eni422a7c2b629@if3 UP ์ƒํƒœ๋กœ ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ถ”๊ฐ€ ํ™•์ธ
--------------
2025-02-13 12:24:00
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.1.193/24 metric 1024 fe80::23:93ff:fea6:bc61/64 
eni01a4864c88a@if3 UP             fe80::54be:6aff:febd:2d2a/64 
eni61c5a949744@if3 UP             fe80::5041:42ff:fec3:33f/64 
ens7             UP             192.168.1.232/24 fe80::d8:5dff:fe80:2cc9/64 
eni189e26c0cf7@if3 UP             fe80::b463:11ff:fec2:9e12/64 
--------------
--------------
2025-02-13 12:24:36
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens5             UP             192.168.3.72/24 metric 1024 fe80::8d5:3dff:fe4a:54bd/64 
enif2c43957bf8@if3 UP             fe80::a0e3:15ff:fe62:4731/64 
ens7             UP             192.168.3.218/24 fe80::80c:3cff:fe65:9c27/64 
eni422a7c2b629@if3 UP             fe80::b455:d1ff:fed1:d8f6/64 
--------------

Pod ์Šค์ผ€์ค„๋ง ์ถ”๊ฐ€

1
2
3
4
5
Every 2.0s: kubectl get pods -o wide                 gram88: 09:25:54 PM                                                                               in 0.487s (0)
NAME                                READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
nginx-deployment-6c8cb99bb9-77rg2   1/1     Running   0          3m5s   192.168.3.170   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
nginx-deployment-6c8cb99bb9-lcfx9   1/1     Running   0          3m5s   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>

5. Scale to 8 Pods

kubectl scale deployment nginx-deployment --replicas=8
# Result
deployment.apps/nginx-deployment scaled

Image

6. Scale to 30 Pods

  • The growing Pod count triggers attachment of another physical NIC (ens6) → 3 NICs in total
  • With 3 NICs attached, the private IPv4 addresses assigned per NIC also increase

kubectl scale deployment nginx-deployment --replicas=30
# Result
deployment.apps/nginx-deployment scaled

Image Image

7. Scale to 50 Pods

kubectl scale deployment nginx-deployment --replicas=50
# Result
deployment.apps/nginx-deployment scaled

Image

์ผ๋ถ€ Pod๊ฐ€ Pending ์ƒํƒœ๋กœ ๋‚จ์•„์žˆ์Œ

1
k get pod

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6c8cb99bb9-4mz8l   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-5lbxg   1/1     Running   0          4m10s
nginx-deployment-6c8cb99bb9-5m868   1/1     Running   0          4m11s
nginx-deployment-6c8cb99bb9-77rg2   1/1     Running   0          8m19s
nginx-deployment-6c8cb99bb9-7f5tw   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-7jvvw   1/1     Running   0          4m10s
nginx-deployment-6c8cb99bb9-7kzcw   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-7nzn6   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-7qchj   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-8s2z8   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-98tfm   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-9pvbx   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-b2qjf   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-bgxsj   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-g54wb   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-gkpcw   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-h8jtz   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-h9ltb   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-hdbx8   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-hh659   1/1     Running   0          4m10s
nginx-deployment-6c8cb99bb9-hmfz8   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-jnm66   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-kbmgh   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-kjkxd   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-kzfk2   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-lcfx9   1/1     Running   0          8m19s
nginx-deployment-6c8cb99bb9-lh6l6   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-lvq6b   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-lxqcr   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-mgqcd   1/1     Running   0          28s
nginx-deployment-6c8cb99bb9-pvhps   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-qbpmp   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-qwh72   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-r58vr   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-s7bsw   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-sdxw7   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-t2h2t   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-tb47g   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-tklxp   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-ttbzh   1/1     Running   0          4m10s
nginx-deployment-6c8cb99bb9-v5dfl   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-v888x   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-vstpp   1/1     Running   0          2m38s
nginx-deployment-6c8cb99bb9-w2dr8   1/1     Running   0          4m10s
nginx-deployment-6c8cb99bb9-ww4c9   0/1     Pending   0          27s
nginx-deployment-6c8cb99bb9-xk74m   1/1     Running   0          2m39s
nginx-deployment-6c8cb99bb9-xn6k7   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-xp5tt   1/1     Running   0          28s
nginx-deployment-6c8cb99bb9-z2jwk   1/1     Running   0          27s
nginx-deployment-6c8cb99bb9-z4649   0/1     Pending   0          27s

IP ๋ฏธํ• ๋‹น์œผ๋กœ ์ธํ•ด ๋„คํŠธ์›Œํฌ ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์ค€๋น„ ๋ถˆ๊ฐ€ ์ƒํƒœ

1
k describe pod nginx-deployment-6c8cb99bb9-7nzn6

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
Name:             nginx-deployment-6c8cb99bb9-7nzn6
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=nginx
                  pod-template-hash=6c8cb99bb9
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/nginx-deployment-6c8cb99bb9
Containers:
  nginx:
    Image:        nginx:alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cnct2 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-cnct2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  61s   default-scheduler  0/3 nodes are available: 3 Too many pods. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
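The "Too many pods" event comes from the VPC CNI's per-node IP limit. A minimal sketch of the documented EKS max-pods formula, assuming t3.medium instance limits (3 ENIs, 6 IPv4 addresses per ENI — an assumption for this sketch, not taken from the lab output):

```shell
# max-pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# Each ENI's primary IP is reserved for the node itself; the +2 accounts
# for host-network pods that do not consume an ENI secondary IP.
ENIS=3          # t3.medium: 3 ENIs (assumed)
IPS_PER_ENI=6   # t3.medium: 6 IPv4 addresses per ENI (assumed)
MAX_PODS=$(( ENIS * (IPS_PER_ENI - 1) + 2 ))
echo "max-pods: $MAX_PODS"   # prints: max-pods: 17
```

Once all schedulable pod slots are taken, any additional pod stays Pending with exactly this event.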

8. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์‚ญ์ œ

1
2
3
kubectl delete deploy nginx-deployment
# ๊ฒฐ๊ณผ
deployment.apps "nginx-deployment" deleted

โš–๏ธ AWS LoadBalancer Controller ๋ฐฐํฌ

1. ์„ค์น˜ ์ „ crd ํ™•์ธ

1
kubectl get crd

โœ…ย Output

NAME                                         CREATED AT
cninodes.vpcresources.k8s.aws                2025-02-12T03:09:25Z
eniconfigs.crd.k8s.amazonaws.com             2025-02-12T03:13:55Z
policyendpoints.networking.k8s.aws           2025-02-12T03:09:25Z
securitygrouppolicies.vpcresources.k8s.aws   2025-02-12T03:09:25Z

2. Install the Helm chart

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME

# Result
"eks" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "eks" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "geek-cookbook" chart repository
Update Complete. โŽˆHappy Helming!โŽˆ
NAME: aws-load-balancer-controller
LAST DEPLOYED: Thu Feb 13 21:53:34 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!

3. Verify the installation

kubectl get crd

โœ…ย Output

ingressclassparams.elbv2.k8s.aws and targetgroupbindings.elbv2.k8s.aws have been added

NAME                                         CREATED AT
cninodes.vpcresources.k8s.aws                2025-02-12T03:09:25Z
eniconfigs.crd.k8s.amazonaws.com             2025-02-12T03:13:55Z
ingressclassparams.elbv2.k8s.aws             2025-02-13T12:53:32Z
policyendpoints.networking.k8s.aws           2025-02-12T03:09:25Z
securitygrouppolicies.vpcresources.k8s.aws   2025-02-12T03:09:25Z
targetgroupbindings.elbv2.k8s.aws            2025-02-13T12:53:32Z
kubectl explain ingressclassparams.elbv2.k8s.aws
kubectl explain targetgroupbindings.elbv2.k8s.aws

โœ…ย Output

GROUP:      elbv2.k8s.aws
KIND:       IngressClassParams
VERSION:    v1beta1

DESCRIPTION:
    IngressClassParams is the Schema for the IngressClassParams API
    
FIELDS:
  apiVersion	<string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind	<string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata	<ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec	<Object>
    IngressClassParamsSpec defines the desired state of IngressClassParams

GROUP:      elbv2.k8s.aws
KIND:       TargetGroupBinding
VERSION:    v1beta1

DESCRIPTION:
    TargetGroupBinding is the Schema for the TargetGroupBinding API
    
FIELDS:
  apiVersion	<string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind	<string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata	<ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec	<Object>
    TargetGroupBindingSpec defines the desired state of TargetGroupBinding

  status	<Object>
    TargetGroupBindingStatus defines the observed state of TargetGroupBinding

๐Ÿ•ธ๏ธ Service/Pod Deployment Test with NLB

1. Monitoring

watch -d kubectl get pod,svc,ep,endpointslices

2. ๋””ํ”Œ๋กœ์ด๋จผํŠธ & ์„œ๋น„์Šค ์ƒ์„ฑ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
cat << EOF > echo-service-nlb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: aews-websrv
        image: k8s.gcr.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-ip-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing 
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8080"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
EOF

kubectl apply -f echo-service-nlb.yaml
# Result
deployment.apps/deploy-echo created
service/svc-nlb-ip-type created

3. Check the load balancer state

The Network Load Balancer is being provisioned

aws elbv2 describe-load-balancers --query 'LoadBalancers[*].State.Code' --output text

โœ…ย Output

provisioning

Image

4. Check deploy and pod information

kubectl get deploy,pod

โœ…ย Output

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deploy-echo   2/2     2            2           3m21s

NAME                              READY   STATUS    RESTARTS   AGE
pod/deploy-echo-bf9bdb8bc-qcq6k   1/1     Running   0          3m21s
pod/deploy-echo-bf9bdb8bc-xlpz4   1/1     Running   0          3m21s

5. Check the service, endpoints, IngressClassParams, and TargetGroupBindings

kubectl get svc,ep,ingressclassparams,targetgroupbindings

โœ…ย Output

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                                         PORT(S)        AGE
service/kubernetes        ClusterIP      10.100.0.1       <none>                                                                              443/TCP        33h
service/svc-nlb-ip-type   LoadBalancer   10.100.253.160   k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com   80:30138/TCP   3m47s

NAME                        ENDPOINTS                              AGE
endpoints/kubernetes        192.168.2.87:443,192.168.3.197:443     33h
endpoints/svc-nlb-ip-type   192.168.1.176:8080,192.168.3.77:8080   3m47s

NAME                                   GROUP-NAME   SCHEME   IP-ADDRESS-TYPE   AGE
ingressclassparams.elbv2.k8s.aws/alb                                           9m48s

NAME                                                               SERVICE-NAME      SERVICE-PORT   TARGET-TYPE   AGE
targetgroupbinding.elbv2.k8s.aws/k8s-default-svcnlbip-f4a394e732   svc-nlb-ip-type   80             ip            3m43s

targetgroupbinding.elbv2.k8s.aws/k8s-default-svcnlbip-f4a394e732 matches the information below

Image

In the Target Group Binding attributes, Deregistration delay is similar in concept to a graceful shutdown

  • This setting is the time the load balancer waits to drain connections before a target is deregistered

Image

6. Modify the Target Group Binding attributes

Open echo-service-nlb.yaml and add the following line:

service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=60

To speed up the lab, the deregistration delay (draining interval) is set to 60 seconds

Image

7. ๋ฐฐํฌ

1
2
3
4
5
kubectl apply -f echo-service-nlb.yaml

# ๊ฒฐ๊ณผ
deployment.apps/deploy-echo unchanged
service/svc-nlb-ip-type configured

The Deregistration delay is now 60 seconds

Image

๋„คํŠธ์›Œํฌ ๋กœ๋“œ๋ฐธ๋Ÿฐ์„œ๊ฐ€ Active ์ƒํƒœ๋กœ ์ „ํ™˜๋จ

Image

In Target Groups, the targets are shown as Healthy

Image

8. Check the web access URL

kubectl get svc svc-nlb-ip-type -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "Pod Web URL = http://"$1 }'

โœ…ย Output

Pod Web URL = http://k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com

Server access confirmed (http://k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com)

Image
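The same hostname can also be extracted with jq instead of kubectl's jsonpath; a self-contained sketch against a trimmed mock of the `kubectl get svc svc-nlb-ip-type -o json` output:

```shell
# Trimmed mock of the Service object's status (the real object would come
# from `kubectl get svc svc-nlb-ip-type -o json`).
svc_json='{"status":{"loadBalancer":{"ingress":[{"hostname":"k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com"}]}}}'

# Same path as the jsonpath expression .status.loadBalancer.ingress[0].hostname
echo "$svc_json" | jq -r '.status.loadBalancer.ingress[0].hostname'
```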

9. Pod log monitoring

Monitor the pod logs with Stern

kubectl stern -l app=deploy-websrv

Load distribution test

(eks-user:N/A) [root@operator-host ~]# NLB=$(kubectl get svc svc-nlb-ip-type -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
(eks-user:N/A) [root@operator-host ~]# curl -s $NLB

โœ…ย Output

Hostname: deploy-echo-bf9bdb8bc-xlpz4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.0 - lua: 10008

Request Information:
	client_address=192.168.2.29
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com:8080/

Request Headers:
	accept=*/*
	host=k8s-default-svcnlbip-b30242b032-f4b2a1499955f080.elb.ap-northeast-2.amazonaws.com
	user-agent=curl/8.3.0

Request Body:
	-no body in request-

Send 100 requests and check how they are distributed across the pods

(eks-user:N/A) [root@operator-host ~]# for i in {1..100}; do curl -s $NLB | grep Hostname ; done | sort | uniq -c | sort -nr

โœ…ย Output

     56 Hostname: deploy-echo-bf9bdb8bc-qcq6k
     44 Hostname: deploy-echo-bf9bdb8bc-xlpz4

100๋ฒˆ์˜ ์š”์ฒญ ์ค‘ 56๋ฒˆ์€ deploy-echo-bf9bdb8bc-qcq6k ํŒŒ๋“œ๋กœ, 44๋ฒˆ์€ deploy-echo-bf9bdb8bc-xlpz4 ํŒŒ๋“œ๋กœ ๋ถ„์‚ฐ๋˜์–ด ์ ‘์†๋จ

Image
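The tally pipeline is plain text processing and can be tried without a cluster; a minimal sketch with fabricated hostnames:

```shell
# sort groups identical lines together, uniq -c prefixes each group with
# its count, and the final sort -nr orders the counts descending
# (producing the 56/44-style breakdown shown above).
printf 'Hostname: pod-a\nHostname: pod-b\nHostname: pod-a\n' \
  | sort | uniq -c | sort -nr
```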

10. Check the NLB listener and targets in the AWS Console

Requests to the NLB domain are received by the listener on port 80 and, as the resource map shows, forwarded to port 8080 on 192.168.1.176 and 192.168.3.77

Image

192.168.1.176 and 192.168.3.77 are the IPs of the pods shown below

k get pod -owide
NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
deploy-echo-bf9bdb8bc-qcq6k   1/1     Running   0          27m   192.168.3.77    ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
deploy-echo-bf9bdb8bc-xlpz4   1/1     Running   0          27m   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>

11. Increase the target count by scaling replicas

Scale replicas from 2 to 3 to increase the targets to 3

kubectl scale deployment deploy-echo --replicas=3
# Result
deployment.apps/deploy-echo scaled

Image

12. Delete the lab resources

kubectl delete deploy deploy-echo; kubectl delete svc svc-nlb-ip-type
# Result
deployment.apps "deploy-echo" deleted
service "svc-nlb-ip-type" deleted

๐Ÿ›ค๏ธ Ingress Lab

An Ingress acts as a web proxy that exposes in-cluster services (ClusterIP, NodePort, LoadBalancer) to the outside over HTTP/HTTPS.

1. Deploy the game pods, Service, and Ingress

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-2048
              port:
                number: 80
EOF

# Result
namespace/game-2048 created
deployment.apps/deployment-2048 created
service/service-2048 created
ingress.networking.k8s.io/ingress-2048 created

2. Monitoring

watch -d kubectl get pod,ingress,svc,ep,endpointslices -n game-2048

โœ…ย Output

Every 2.0s: kubectl get pod,ingress,svc,ep,endpointslices -n game-2048                                                         gram88: 10:37:51 PM
                                                                                                                                     in 0.718s (0)
NAME                                   READY   STATUS    RESTARTS   AGE
pod/deployment-2048-7df5f9886b-lxtmt   1/1     Running   0          95s
pod/deployment-2048-7df5f9886b-zw8n9   1/1     Running   0          95s

NAME                                     CLASS   HOSTS   ADDRESS                                                                        PORTS   AGE
ingress.networking.k8s.io/ingress-2048   alb     *       k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com   80      95s

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/service-2048   NodePort   10.100.135.188   <none>        80:31872/TCP   95s

NAME                     ENDPOINTS                          AGE
endpoints/service-2048   192.168.1.176:80,192.168.3.19:80   95s

NAME                                                ADDRESSTYPE   PORTS   ENDPOINTS                    AGE
endpointslice.discovery.k8s.io/service-2048-2h29k   IPv4          80      192.168.3.19,192.168.1.176   95s

3. ์ƒ์„ฑ ํ™•์ธ

game-2048 ๋„ค์ž„์ŠคํŽ˜์ด์Šค์˜ ์ธ๊ทธ๋ ˆ์Šค, ์„œ๋น„์Šค, ์—”๋“œํฌ์ธํŠธ, ํŒŒ๋“œ ํ™•์ธ

1
kubectl get ingress,svc,ep,pod -n game-2048

โœ…ย Output

NAME                                     CLASS   HOSTS   ADDRESS                                                                        PORTS   AGE
ingress.networking.k8s.io/ingress-2048   alb     *       k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com   80      2m36s

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/service-2048   NodePort   10.100.135.188   <none>        80:31872/TCP   2m36s

NAME                     ENDPOINTS                          AGE
endpoints/service-2048   192.168.1.176:80,192.168.3.19:80   2m36s

NAME                                   READY   STATUS    RESTARTS   AGE
pod/deployment-2048-7df5f9886b-lxtmt   1/1     Running   0          2m36s
pod/deployment-2048-7df5f9886b-zw8n9   1/1     Running   0          2m36s

Check the target group binding in the game-2048 namespace

kubectl get targetgroupbindings -n game-2048

โœ…ย Output

NAME                               SERVICE-NAME   SERVICE-PORT   TARGET-TYPE   AGE
k8s-game2048-service2-168626cccc   service-2048   80             ip            3m20s

The listener receives requests on port 80 and forwards them to the target group according to its rules

Image

Image

Image

4. Check the Ingress configuration

kubectl describe ingress -n game-2048 ingress-2048
kubectl get ingress -n game-2048 ingress-2048 -o jsonpath="{.status.loadBalancer.ingress[*].hostname}{'\n'}"

โœ…ย Output

Name:             ingress-2048
Labels:           <none>
Namespace:        game-2048
Address:          k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com
Ingress Class:    alb
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   service-2048:80 (192.168.3.19:80,192.168.1.176:80)
Annotations:  alb.ingress.kubernetes.io/scheme: internet-facing
              alb.ingress.kubernetes.io/target-type: ip
Events:
  Type    Reason                  Age    From     Message
  ----    ------                  ----   ----     -------
  Normal  SuccessfullyReconciled  8m18s  ingress  Successfully reconciled
k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com

  • All requests (*) are forwarded to service-2048:80

Every 2.0s: kubectl get pod,ingress,svc,ep,endpointslices -n game-2048                                                         gram88: 10:46:22 PM
                                                                                                                                     in 0.724s (0)
NAME                                   READY   STATUS    RESTARTS   AGE
pod/deployment-2048-7df5f9886b-lxtmt   1/1     Running   0          10m  
pod/deployment-2048-7df5f9886b-zw8n9   1/1     Running   0          10m  

NAME                                     CLASS   HOSTS   ADDRESS                                                                        PORTS   AGE
ingress.networking.k8s.io/ingress-2048   alb     *       k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com   80      10m  

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/service-2048   NodePort   10.100.135.188   <none>        80:31872/TCP   10m  

NAME                     ENDPOINTS                          AGE
endpoints/service-2048   192.168.1.176:80,192.168.3.19:80   10m  

NAME                                                ADDRESSTYPE   PORTS   ENDPOINTS                    AGE
endpointslice.discovery.k8s.io/service-2048-2h29k   IPv4          80      192.168.3.19,192.168.1.176   10m  

  • service-2048 is a NodePort service with ClusterIP 10.100.135.188
  • Pod endpoint IPs: 192.168.1.176:80 and 192.168.3.19:80
  • The ALB forwards traffic to these two pods

5. Check the game URL

kubectl get ingress -n game-2048 ingress-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "Game URL = http://"$1 }'

โœ…ย Output

Game URL = http://k8s-game2048-ingress2-70d50ce3fd-1110214105.ap-northeast-2.elb.amazonaws.com

Image

6. Check the pod targets

kubectl get pod -n game-2048 -owide

โœ…ย Output

NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
deployment-2048-7df5f9886b-lxtmt   1/1     Running   0          13m   192.168.3.19    ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
deployment-2048-7df5f9886b-zw8n9   1/1     Running   0          13m   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>

  • The ALB registers the pod IPs directly as targets (not NodePorts)
  • Pod IPs: 192.168.3.19 (node ip-192-168-3-72) and 192.168.1.176 (node ip-192-168-1-193)
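For reference, the controller's `target-type` annotation decides what gets registered: `ip` (used in this lab) registers pod IPs directly, while `instance` would register the nodes' NodePorts. A minimal fragment:

```yaml
metadata:
  annotations:
    # ip       -> register pod IPs directly as ALB targets (this lab)
    # instance -> register each node's NodePort instead
    alb.ingress.kubernetes.io/target-type: ip
```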

7. Target group growth as the pod count increases

Scaling the pods to 3 adds one more target to the target group

kubectl scale deployment -n game-2048 deployment-2048 --replicas 3
# Result
deployment.apps/deployment-2048 scaled

Image

8. Delete the lab resources

kubectl delete ingress ingress-2048 -n game-2048
# Result
ingress.networking.k8s.io "ingress-2048" deleted
kubectl delete svc service-2048 -n game-2048 && kubectl delete deploy deployment-2048 -n game-2048 && kubectl delete ns game-2048
# Result
service "service-2048" deleted
deployment.apps "deployment-2048" deleted
namespace "game-2048" deleted

๐ŸŒ ExternalDNS Lab

1. Create a domain and check its Zone ID

Image

2. Look up the Route 53 hosted zone ID and set variables

MyDomain=gagajin.com
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text)

Check the variables

echo $MyDomain, $MyDnzHostedZoneId

โœ…ย Output

gagajin.com /hostedzone/EXAMPLEID123456789

3. ๋„๋ฉ”์ธ์˜ A ๋ ˆ์ฝ”๋“œ ๊ฐ’ ๋ฐ˜๋ณต ์กฐํšŒ

1
while true; do aws route53 list-resource-record-sets --hosted-zone-id "${MyDnzHostedZoneId}" --query "ResourceRecordSets[?Type == 'A']" | jq ; date ; echo ; sleep 1; done

Image

4. Deploy ExternalDNS

curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml

MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst < externaldns.yaml | kubectl apply -f -
# Result
serviceaccount/external-dns created
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created

5. Connect and monitor

kubectl get pod -l app.kubernetes.io/name=external-dns -n kube-system

โœ…ย Output

NAME                           READY   STATUS    RESTARTS   AGE
external-dns-dc4878f5f-98vcp   1/1     Running   0          25s
kubectl logs deploy/external-dns -n kube-system -f

Image

6. Deploy the Tetris game

Deploy the Tetris Deployment and Service

# Deploy the Tetris deployment and service
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tetris
  labels:
    app: tetris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tetris
  template:
    metadata:
      labels:
        app: tetris
    spec:
      containers:
      - name: tetris
        image: bsord/tetris
---
apiVersion: v1
kind: Service
metadata:
  name: tetris
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    #service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
spec:
  selector:
    app: tetris
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
EOF

# Result
deployment.apps/tetris created
service/tetris created

๋ฐฐํฌ ์ƒํƒœ ํ™•์ธ

1
kubectl get deploy,svc,ep tetris

โœ…ย Output

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tetris   1/1     1            1           88s

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP                                                                       PORT(S)        AGE
service/tetris   LoadBalancer   10.100.6.207   k8s-default-tetris-b179cd251d-f36283420d6a53f5.elb.ap-northeast-2.amazonaws.com   80:31999/TCP   88s

NAME               ENDPOINTS          AGE
endpoints/tetris   192.168.1.176:80   88s

7. Attach a domain to the NLB

kubectl annotate service tetris "external-dns.alpha.kubernetes.io/hostname=tetris.$MyDomain"
# Result
service/tetris annotated

Image

8. Verify the domain mapping in Route 53

In Route 53 > Hosted zones > gagajin.com, confirm that tetris.gagajin.com is mapped to the NLB

Image

๋„๋ฉ”์ธ ์—ฐ๊ฒฐ ํ™•์ธ

  • ๋„๋ฉ”์ธ ๊ฐ’: tetris.gagajin.com
  • ๋งค์นญ๋œ ์„œ๋น„์Šค: k8s-default-tetris-b179cd251d-f36283420d6a53f5.elb.ap-northeast-2.amazonaws.com (NLB)

Image

9. ๋„๋ฉ”์ธ ์กฐํšŒ

1
dig +short tetris.$MyDomain @8.8.8.8

โœ…ย Output

43.200.102.104
3.39.61.119
43.202.39.97

10. ๋„๋ฉ”์ธ ์ƒํƒœ ์ฒดํฌ

1
2
echo -e "My Domain Checker Site1 = https://www.whatsmydns.net/#A/tetris.$MyDomain"
echo -e "My Domain Checker Site2 = https://dnschecker.org/#A/tetris.$MyDomain"

Image

Access tetris.gagajin.com

Image

11. Delete the resources and confirm A record removal

ExternalDNS automatically deletes the A record as well

kubectl delete deploy,svc tetris

Image

Image


๐Ÿ—บ๏ธ Topology Aware Routing Lab

1. Check the current node AZ distribution

kubectl get node --label-columns=topology.kubernetes.io/zone

โœ…ย Output

NAME                                               STATUS   ROLES    AGE   VERSION               ZONE
ip-192-168-1-193.ap-northeast-2.compute.internal   Ready    <none>   35h   v1.31.4-eks-aeac579   ap-northeast-2a
ip-192-168-2-52.ap-northeast-2.compute.internal    Ready    <none>   35h   v1.31.4-eks-aeac579   ap-northeast-2b
ip-192-168-3-72.ap-northeast-2.compute.internal    Ready    <none>   35h   v1.31.4-eks-aeac579   ap-northeast-2c

2. Deploy a test deployment and service

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: websrv
        image: registry.k8s.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
spec:
  ports:
    - name: svc-webport
      port: 80
      targetPort: 8080
  selector:
    app: deploy-websrv
  type: ClusterIP
EOF

# Result
deployment.apps/deploy-echo created
service/svc-clusterip created

ํ™•์ธ

1
2
3
4
5
kubectl get deploy,svc,ep,endpointslices
kubectl get pod -owide
kubectl get svc,ep svc-clusterip
kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip
kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml

โœ…ย Output

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deploy-echo   3/3     3            3           34s

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes      ClusterIP   10.100.0.1       <none>        443/TCP   35h
service/svc-clusterip   ClusterIP   10.100.101.191   <none>        80/TCP    34s

NAME                      ENDPOINTS                                                 AGE
endpoints/kubernetes      192.168.2.87:443,192.168.3.197:443                        35h
endpoints/svc-clusterip   192.168.1.176:8080,192.168.2.91:8080,192.168.3.146:8080   34s

NAME                                                 ADDRESSTYPE   PORTS   ENDPOINTS                                  AGE
endpointslice.discovery.k8s.io/kubernetes            IPv4          443     192.168.2.87,192.168.3.197                 35h
endpointslice.discovery.k8s.io/svc-clusterip-8gc6m   IPv4          8080    192.168.1.176,192.168.2.91,192.168.3.146   34s
NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
deploy-echo-75b7b9558c-8kxhk   1/1     Running   0          34s   192.168.2.91    ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>
deploy-echo-75b7b9558c-dwhmg   1/1     Running   0          34s   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
deploy-echo-75b7b9558c-frsbd   1/1     Running   0          34s   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/svc-clusterip   ClusterIP   10.100.101.191   <none>        80/TCP    35s

NAME                      ENDPOINTS                                                 AGE
endpoints/svc-clusterip   192.168.1.176:8080,192.168.2.91:8080,192.168.3.146:8080   35s
NAME                  ADDRESSTYPE   PORTS   ENDPOINTS                                  AGE
svc-clusterip-8gc6m   IPv4          8080    192.168.1.176,192.168.2.91,192.168.3.146   36s
apiVersion: v1
items:
- addressType: IPv4
  apiVersion: discovery.k8s.io/v1
  endpoints:
  - addresses:
    - 192.168.1.176
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: ip-192-168-1-193.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-frsbd
      namespace: default
      uid: c79e9a50-6fe7-4862-b235-f48410dfd4e7
    zone: ap-northeast-2a
  - addresses:
    - 192.168.2.91
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: ip-192-168-2-52.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-8kxhk
      namespace: default
      uid: fc2d6025-50e4-488d-844d-b74e64656869
    zone: ap-northeast-2b
  - addresses:
    - 192.168.3.146
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: ip-192-168-3-72.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-dwhmg
      namespace: default
      uid: 013991cc-6c6a-465a-be3c-2a7128ed4869
    zone: ap-northeast-2c
  kind: EndpointSlice
  metadata:
    annotations:
      endpoints.kubernetes.io/last-change-trigger-time: "2025-02-13T15:02:52Z"
    creationTimestamp: "2025-02-13T15:02:50Z"
    generateName: svc-clusterip-
    generation: 4
    labels:
      endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
      kubernetes.io/service-name: svc-clusterip
    name: svc-clusterip-8gc6m
    namespace: default
    ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: svc-clusterip
      uid: f97f9347-cd50-40d7-881e-253b2efe5825
    resourceVersion: "437737"
    uid: 2f964436-d4df-49bd-af2d-b11ec22702b2
  ports:
  - name: svc-webport
    port: 8080
    protocol: TCP
kind: List
metadata:
  resourceVersion: ""

4. Deploy a client pod for access testing

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: netshoot-pod
spec:
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# Result
pod/netshoot-pod created

ํ™•์ธ

1
kubectl get pod -owide

โœ… Output

NAME                           READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
deploy-echo-75b7b9558c-8kxhk   1/1     Running   0          101s   192.168.2.91    ip-192-168-2-52.ap-northeast-2.compute.internal    <none>           <none>
deploy-echo-75b7b9558c-dwhmg   1/1     Running   0          101s   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
deploy-echo-75b7b9558c-frsbd   1/1     Running   0          101s   192.168.1.176   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
netshoot-pod                   1/1     Running   0          16s    192.168.1.98    ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>
  • The netshoot Pod landed in the first AZ (ap-northeast-2a)
  • Ideally the deploy-echo Pod in that same AZ would serve netshoot-pod, but load balancing also sends connections to Pods in other AZs

5. Verify load balancing when accessing the ClusterIP from the test Pod (netshoot-pod)

kubectl exec -it netshoot-pod -- curl svc-clusterip | grep Hostname

โœ… Output

Hostname: deploy-echo-75b7b9558c-8kxhk
# or
Hostname: deploy-echo-75b7b9558c-dwhmg
# or
Hostname: deploy-echo-75b7b9558c-frsbd
  • From netshoot-pod, curl svc-clusterip reaches the Pods through the Service's DNS name
  • Creating the Service allocates a virtual IP (the ClusterIP), which acts as a fixed entry point

100 repeated requests: traffic is load-balanced across the 3 Pods by random probability, regardless of AZ (zone)

kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"

โœ… Output

   43 Hostname: deploy-echo-75b7b9558c-frsbd
   36 Hostname: deploy-echo-75b7b9558c-dwhmg
   21 Hostname: deploy-echo-75b7b9558c-8kxhk

6. Analyzing iptables-based Service routing

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES

โœ… Output

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SVC-UAGC4PYEYZJJEW6D  tcp  --  *      *       0.0.0.0/0            10.100.145.143       /* kube-system/aws-load-balancer-webhook-service:webhook-server cluster IP */ tcp dpt:443
  110  6600 KUBE-SVC-KBDEBIL6IU6WL7RF  tcp  --  *      *       0.0.0.0/0            10.100.101.191       /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-I7SKRZYQ7PWYV5X7  tcp  --  *      *       0.0.0.0/0            10.100.83.10         /* kube-system/eks-extension-metrics-api:metrics-api cluster IP */ tcp dpt:443
  110 10560 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  *      *       0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
    0     0 KUBE-SVC-Z4ANX4WAEWEBLCTM  tcp  --  *      *       0.0.0.0/0            10.100.101.216       /* kube-system/metrics-server:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.100.0.1           /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-7EJNTS7AENER2WX5  tcp  --  *      *       0.0.0.0/0            10.100.146.32        /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
  302 18120 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

7. Analyzing the per-service KUBE-SVC chain

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF

โœ… Output

Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   45  2700 KUBE-SEP-RSD5LYH4WSEXMEXJ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */ statistic mode random probability 0.33333333349
   23  1380 KUBE-SEP-NLHUM4JSBL2UEFAX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */ statistic mode random probability 0.50000000000
   42  2520 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
  • ๋žœ๋ค ํ™•๋ฅ  ๊ธฐ๋ฐ˜์œผ๋กœ ํŠธ๋ž˜ํ”ฝ์„ ๊ฐ Pod๋กœ ๋ถ„์‚ฐ
  • ๊ฐ Pod๋กœ 33%, 50% ๋“ฑ์˜ ํ™•๋ฅ ๋กœ ํŠธ๋ž˜ํ”ฝ ์ „๋‹ฌ

Checking that the iptables rules are consistent across all worker nodes

ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-RSD5LYH4WSEXMEXJ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */ statistic mode random probability 0.33333333349
    0     0 KUBE-SEP-NLHUM4JSBL2UEFAX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-RSD5LYH4WSEXMEXJ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */ statistic mode random probability 0.33333333349
    0     0 KUBE-SEP-NLHUM4JSBL2UEFAX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */

3๊ฐœ์˜ SEP๋Š” ๊ฐ๊ฐ ๊ฐœ๋ณ„ ํŒŒ๋“œ ์ ‘์† ์ •๋ณด

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SEP-RSD5LYH4WSEXMEXJ
Chain KUBE-SEP-RSD5LYH4WSEXMEXJ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.1.176        0.0.0.0/0            /* default/svc-clusterip:svc-webport */
   45  2700 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.176:8080
ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SEP-RSD5LYH4WSEXMEXJ
Chain KUBE-SEP-RSD5LYH4WSEXMEXJ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.1.176        0.0.0.0/0            /* default/svc-clusterip:svc-webport */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.176:8080
ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SEP-RSD5LYH4WSEXMEXJ
Chain KUBE-SEP-RSD5LYH4WSEXMEXJ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.1.176        0.0.0.0/0            /* default/svc-clusterip:svc-webport */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.176:8080

8. Configuring Topology Aware Routing

kubectl annotate service svc-clusterip "service.kubernetes.io/topology-mode=auto"
# Result
service/svc-clusterip annotated

Check the EndpointSlices

Each endpoint now carries its AZ (zone) as a hint

kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
apiVersion: v1
items:
- addressType: IPv4
  apiVersion: discovery.k8s.io/v1
  endpoints:
  - addresses:
    - 192.168.1.176
    conditions:
      ready: true
      serving: true
      terminating: false
    hints:
      forZones:
      - name: ap-northeast-2a
    nodeName: ip-192-168-1-193.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-frsbd
      namespace: default
      uid: c79e9a50-6fe7-4862-b235-f48410dfd4e7
    zone: ap-northeast-2a
  - addresses:
    - 192.168.2.91
    conditions:
      ready: true
      serving: true
      terminating: false
    hints:
      forZones:
      - name: ap-northeast-2b
    nodeName: ip-192-168-2-52.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-8kxhk
      namespace: default
      uid: fc2d6025-50e4-488d-844d-b74e64656869
    zone: ap-northeast-2b
  - addresses:
    - 192.168.3.146
    conditions:
      ready: true
      serving: true
      terminating: false
    hints:
      forZones:
      - name: ap-northeast-2c
    nodeName: ip-192-168-3-72.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-dwhmg
      namespace: default
      uid: 013991cc-6c6a-465a-be3c-2a7128ed4869
    zone: ap-northeast-2c
  kind: EndpointSlice
  metadata:
    creationTimestamp: "2025-02-13T15:02:50Z"
    generateName: svc-clusterip-
    generation: 5
    labels:
      endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
      kubernetes.io/service-name: svc-clusterip
    name: svc-clusterip-8gc6m
    namespace: default
    ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: svc-clusterip
      uid: f97f9347-cd50-40d7-881e-253b2efe5825
    resourceVersion: "444502"
    uid: 2f964436-d4df-49bd-af2d-b11ec22702b2
  ports:
  - name: svc-webport
    port: 8080
    protocol: TCP
kind: List
metadata:
  resourceVersion: ""
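
To skim just the address-to-zone-hint mapping without reading the full dump, the YAML can be reduced with awk. A minimal sketch over an inlined sample (hypothetical data; in practice you would pipe `kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml` into the same awk program):

```shell
# Reduce an EndpointSlice YAML dump to "address -> zone hint" lines.
cat <<'EOF' | awk '
  /- addresses:/ { getline; sub(/^[ -]*/, ""); addr = $0 }                 # IP is on the next line
  /forZones:/    { getline; sub(/^.*name: /, ""); print addr " -> " $0 }  # hinted zone follows
'
- addresses:
  - 192.168.1.176
  hints:
    forZones:
    - name: ap-northeast-2a
- addresses:
  - 192.168.2.91
  hints:
    forZones:
    - name: ap-northeast-2b
EOF
```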

9. Load-balancing test

kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"

โœ… Output

  100 Hostname: deploy-echo-75b7b9558c-frsbd
  • All 100 requests connected only to the Pod in the same AZ
  • No cross-AZ traffic, so no cross-AZ network cost is incurred
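
The cost point can be made concrete with rough arithmetic. A hedged sketch, assuming the commonly cited $0.01/GB charged in each direction for inter-AZ traffic (an assumption — verify against current AWS data-transfer pricing for your region):

```shell
# Rough inter-AZ transfer cost per month.
# ASSUMPTION: $0.01/GB billed on both the sending and the receiving side,
# i.e. $0.02/GB effective.
awk 'BEGIN {
  rate = 0.01
  for (gb = 100; gb <= 10000; gb *= 10)
    printf "%6d GB/month cross-AZ ~ $%.2f\n", gb, gb * rate * 2
}'
```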

10. Check the iptables rules

Check iptables on each node over ssh

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES

โœ… Output

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  *      *       0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
    0     0 KUBE-SVC-UAGC4PYEYZJJEW6D  tcp  --  *      *       0.0.0.0/0            10.100.145.143       /* kube-system/aws-load-balancer-webhook-service:webhook-server cluster IP */ tcp dpt:443
  100  6000 KUBE-SVC-KBDEBIL6IU6WL7RF  tcp  --  *      *       0.0.0.0/0            10.100.101.191       /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-I7SKRZYQ7PWYV5X7  tcp  --  *      *       0.0.0.0/0            10.100.83.10         /* kube-system/eks-extension-metrics-api:metrics-api cluster IP */ tcp dpt:443
  100  9600 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.100.0.10          /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-Z4ANX4WAEWEBLCTM  tcp  --  *      *       0.0.0.0/0            10.100.101.216       /* kube-system/metrics-server:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.100.0.1           /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-7EJNTS7AENER2WX5  tcp  --  *      *       0.0.0.0/0            10.100.146.32        /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
  108  6480 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

The KUBE-SVC-KBDEBIL6IU6WL7RF chain confirms traffic is forwarded only to the Pod in the same AZ

With Topology Mode hints in place, kube-proxy rewrites its rules so that traffic is distributed only to Pods in the node's own AZ

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
# Output
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  100  6000 KUBE-SEP-RSD5LYH4WSEXMEXJ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.1.176:8080 */
ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
# Output
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-NLHUM4JSBL2UEFAX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.91:8080 */
ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
# Output
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */

11. Scale down to 1 Pod: when the same AZ has no destination Pod

(1) Scale down to a single Pod

kubectl scale deployment deploy-echo --replicas 1
# Result
deployment.apps/deploy-echo scaled

(2) Check the Pod's AZ

kubectl get pod -owide

โœ… Output

NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
deploy-echo-75b7b9558c-dwhmg   1/1     Running   0          42m   192.168.3.146   ip-192-168-3-72.ap-northeast-2.compute.internal    <none>           <none>
netshoot-pod                   1/1     Running   0          41m   192.168.1.98    ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>

(3) Repeat the test 100 times

kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"

โœ… Output

All 100 requests land on the deploy-echo-75b7b9558c-dwhmg Pod

  100 Hostname: deploy-echo-75b7b9558c-dwhmg

(4) Check the iptables rules

๊ฐ ๋…ธ๋“œ์—์„œ KUBE-SVC-KBDEBIL6IU6WL7RF ์ฒด์ธ ํ™•์ธ ์‹œ ๋ชจ๋‘ ๊ฐ™์€ AZ์˜ ํŒŒ๋“œ๋กœ๋งŒ ์—ฐ๊ฒฐ๋จ

ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  100  6000 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */
ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-6CT57P6L6QUJZOMH  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.146:8080 */

(5) Check the EndpointSlices

The hints are gone: with only one endpoint the controller cannot allocate an endpoint to every zone, so it drops the hints and every node falls back to routing to the one Pod

kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml

โœ… Output

apiVersion: v1
items:
- addressType: IPv4
  apiVersion: discovery.k8s.io/v1
  endpoints:
  - addresses:
    - 192.168.3.146
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: ip-192-168-3-72.ap-northeast-2.compute.internal
    targetRef:
      kind: Pod
      name: deploy-echo-75b7b9558c-dwhmg
      namespace: default
      uid: 013991cc-6c6a-465a-be3c-2a7128ed4869
    zone: ap-northeast-2c
  kind: EndpointSlice
  metadata:
    creationTimestamp: "2025-02-13T15:02:50Z"
    generateName: svc-clusterip-
    generation: 7
    labels:
      endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
      kubernetes.io/service-name: svc-clusterip
    name: svc-clusterip-8gc6m
    namespace: default
    ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: svc-clusterip
      uid: f97f9347-cd50-40d7-881e-253b2efe5825
    resourceVersion: "447576"
    uid: 2f964436-d4df-49bd-af2d-b11ec22702b2
  ports:
  - name: svc-webport
    port: 8080
    protocol: TCP
kind: List
metadata:
  resourceVersion: ""

12. Clean up the lab resources

kubectl delete deploy deploy-echo; kubectl delete svc svc-clusterip
# Result
deployment.apps "deploy-echo" deleted
service "svc-clusterip" deleted

๐Ÿงช Blue/green, canary, and A/B testing with the AWS Load Balancer Controller

1. Clone the sample application

(eks-user:N/A) [root@operator-host ~]# git clone https://github.com/paulbouwer/hello-kubernetes.git
# Result
Cloning into 'hello-kubernetes'...
remote: Enumerating objects: 294, done.
remote: Total 294 (delta 0), reused 0 (delta 0), pack-reused 294 (from 1)
Receiving objects: 100% (294/294), 168.42 KiB | 7.66 MiB/s, done.
Resolving deltas: 100% (120/120), done.

2. Install sample application v1

(eks-user:N/A) [root@operator-host ~]# helm install --create-namespace --namespace hello-kubernetes v1 \
>   ./hello-kubernetes/deploy/helm/hello-kubernetes \
>   --set message="You are reaching hello-kubernetes version 1" \
>   --set ingress.configured=true \
>   --set service.type="ClusterIP"
# Result
NAME: v1
LAST DEPLOYED: Fri Feb 14 00:57:28 2025
NAMESPACE: hello-kubernetes
STATUS: deployed
REVISION: 1
TEST SUITE: None

3. Install sample application v2

(eks-user:N/A) [root@operator-host ~]# helm install --create-namespace --namespace hello-kubernetes v2 \
>   ./hello-kubernetes/deploy/helm/hello-kubernetes \
>   --set message="You are reaching hello-kubernetes version 2" \
>   --set ingress.configured=true \
>   --set service.type="ClusterIP"
# Result
NAME: v2
LAST DEPLOYED: Fri Feb 14 00:57:59 2025
NAMESPACE: hello-kubernetes
STATUS: deployed
REVISION: 1
TEST SUITE: None

4. ๋ฐฐํฌ ํ™•์ธ

(eks-user:N/A) [root@operator-host ~]# kubectl get pod,svc,ep -n hello-kubernetes

โœ… Output

NAME                                       READY   STATUS    RESTARTS   AGE
pod/hello-kubernetes-v1-7b546f6687-67w5x   1/1     Running   0          92s
pod/hello-kubernetes-v1-7b546f6687-nc877   1/1     Running   0          92s
pod/hello-kubernetes-v2-7b9df8f6c5-4qtd7   1/1     Running   0          60s
pod/hello-kubernetes-v2-7b9df8f6c5-g2jp9   1/1     Running   0          60s

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/hello-kubernetes-v1   ClusterIP   10.100.175.94    <none>        80/TCP    92s
service/hello-kubernetes-v2   ClusterIP   10.100.160.118   <none>        80/TCP    60s

NAME                            ENDPOINTS                               AGE
endpoints/hello-kubernetes-v1   192.168.1.195:8080,192.168.2.237:8080   92s
endpoints/hello-kubernetes-v2   192.168.1.176:8080,192.168.3.146:8080   60s

5. Create the Ingress

(eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: "hello-kubernetes"
>   namespace: "hello-kubernetes"
>   annotations:
>     alb.ingress.kubernetes.io/scheme: internet-facing
>     alb.ingress.kubernetes.io/target-type: ip
>     alb.ingress.kubernetes.io/actions.blue-green: |
>       {
>         "type":"forward",
>         "forwardConfig":{
>           "targetGroups":[
>             {
>               "serviceName":"hello-kubernetes-v1",
>               "servicePort":"80",
>               "weight":100
>             },
>             {
>               "serviceName":"hello-kubernetes-v2",
>               "servicePort":"80",
>               "weight":0
>             }
>           ]
>         }
>       }
>   labels:
>     app: hello-kubernetes
> spec:
>   ingressClassName: alb
>   rules:
>     - http:
>         paths:
>           - path: /
>             pathType: Prefix
>             backend:
>               service:
>                 name: blue-green
>                 port:
>                   name: use-annotation
> EOF
ingress.networking.k8s.io/hello-kubernetes created

6. Verify the Ingress and forwarding

์ธ๊ทธ๋ ˆ์Šค ํ™•์ธ

(eks-user:N/A) [root@operator-host ~]# kubectl describe ingress -n hello-kubernetes

โœ… Output

Name:             hello-kubernetes
Labels:           app=hello-kubernetes
Namespace:        hello-kubernetes
Address:          k8s-hellokub-hellokub-7e40b1a1ff-555575423.ap-northeast-2.elb.amazonaws.com
Ingress Class:    alb
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   blue-green:use-annotation (<error: services "blue-green" not found>)
Annotations:  alb.ingress.kubernetes.io/actions.blue-green:
                {
                  "type":"forward",
                  "forwardConfig":{
                    "targetGroups":[
                      {
                        "serviceName":"hello-kubernetes-v1",
                        "servicePort":"80",
                        "weight":100
                      },
                      {
                        "serviceName":"hello-kubernetes-v2",
                        "servicePort":"80",
                        "weight":0
                      }
                    ]
                  }
                }
              alb.ingress.kubernetes.io/scheme: internet-facing
              alb.ingress.kubernetes.io/target-type: ip
Events:
  Type    Reason                  Age   From     Message
  ----    ------                  ----  ----     -------
  Normal  SuccessfullyReconciled  3m4s  ingress  Successfully reconciled

ํฌ์›Œ๋”ฉ ํ™•์ธ

Only version 1 responds for now

(eks-user:N/A) [root@operator-host ~]# ELB_URL=$(kubectl get ingress -n hello-kubernetes -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
(eks-user:N/A) [root@operator-host ~]# while true; do curl -s $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  ....

ALB์˜ ํฌ์›Œ๋”ฉ ๋Œ€์ƒ ํƒ€๊ฒŸ ๊ทธ๋ฃน์€ 2๊ฐ€์ง€๊ฐ€ ์žˆ์Œ


  • Existing version: k8s-hellokub-hellokub-bb26fff580 receives 100% of the traffic
  • New version: k8s-hellokub-hellokub-f58a344fd6 receives 0% of the traffic
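
Shifting traffic between the two target groups does not require re-applying the whole manifest; only the `actions.blue-green` annotation changes. A hedged sketch — `bg_weights` is a hypothetical helper, not part of the controller — that prints the annotation JSON and shows a one-line re-annotate:

```shell
# Hypothetical helper: emit the actions.blue-green JSON for a v1/v2 weight split.
bg_weights() {
  printf '{"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"hello-kubernetes-v1","servicePort":"80","weight":%d},{"serviceName":"hello-kubernetes-v2","servicePort":"80","weight":%d}]}}' "$1" "$2"
}

bg_weights 0 100   # full cutover to v2; the two weights should sum to 100

# Shift traffic in place by overwriting the annotation (requires a live cluster):
# kubectl -n hello-kubernetes annotate ingress hello-kubernetes --overwrite \
#   "alb.ingress.kubernetes.io/actions.blue-green=$(bg_weights 0 100)"
```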

7. Blue/green deployment

Configure the blue/green cutover

(eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: "hello-kubernetes"
>   namespace: "hello-kubernetes"
>   annotations:
>     alb.ingress.kubernetes.io/scheme: internet-facing
>     alb.ingress.kubernetes.io/target-type: ip
>     alb.ingress.kubernetes.io/actions.blue-green: |
>       {
>         "type":"forward",
>         "forwardConfig":{
>           "targetGroups":[
>             {
>               "serviceName":"hello-kubernetes-v1",
>               "servicePort":"80",
>               "weight":0
>             },
>             {
>               "serviceName":"hello-kubernetes-v2",
>               "servicePort":"80",
>               "weight":100
>             }
>           ]
>         }
>       }
>   labels:
>     app: hello-kubernetes
> spec:
>   ingressClassName: alb
>   rules:
>     - http:
>         paths:
>           - path: /
>             pathType: Prefix
>             backend:
>               service:
>                 name: blue-green
>                 port:
>                   name: use-annotation
> EOF

# Result
ingress.networking.k8s.io/hello-kubernetes configured

The target groups now forward 100% of the traffic to version 2

(eks-user:N/A) [root@operator-host ~]# while true; do curl -s $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  .....

Confirm the ALB target group change

The weight of k8s-hellokub-hellokub-f58a344fd6 changed to 100.


8. Canary deployment

Configure the canary split

(eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: "hello-kubernetes"
>   namespace: "hello-kubernetes"
>   annotations:
>     alb.ingress.kubernetes.io/scheme: internet-facing
>     alb.ingress.kubernetes.io/target-type: ip
>     alb.ingress.kubernetes.io/actions.blue-green: |
>       {
>         "type":"forward",
>         "forwardConfig":{
>           "targetGroups":[
>             {
>               "serviceName":"hello-kubernetes-v1",
>               "servicePort":"80",
>               "weight":90
>             },
>             {
>               "serviceName":"hello-kubernetes-v2",
>               "servicePort":"80",
>               "weight":10
>             }
>           ]
>         }
>       }
>   labels:
>     app: hello-kubernetes
> spec:
>   ingressClassName: alb
>   rules:
>     - http:
>         paths:
>           - path: /
>             pathType: Prefix
>             backend:
>               service:
>                 name: blue-green
>                 port:
>                   name: use-annotation
> EOF
# Result
ingress.networking.k8s.io/hello-kubernetes configured

Check the traffic split

(eks-user:N/A) [root@operator-host ~]# for i in {1..100};  do curl -s $ELB_URL | grep version ; done | sort | uniq -c | sort -nr
     86   You are reaching hello-kubernetes version 1
     14   You are reaching hello-kubernetes version 2

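
An 86/14 split from a 90/10 weighting is expected statistical noise: over 100 independent requests the v2 count follows a binomial distribution. A quick sketch of the spread:

```shell
# Std dev of the v2 request count over 100 requests at a 10% weight:
# sqrt(n * p * (1-p)) = sqrt(100 * 0.1 * 0.9) = 3, so the observed 14
# is within ~1.5 standard deviations of the expected 10.
awk 'BEGIN {
  n = 100; p = 0.10
  printf "expected v2 = %.0f +/- %.1f requests\n", n * p, sqrt(n * p * (1 - p))
}'
```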

9. A/B testing

Configure the A/B test

(eks-user:N/A) [root@operator-host ~]# cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: "hello-kubernetes"
>   namespace: "hello-kubernetes"
>   annotations:
>     alb.ingress.kubernetes.io/scheme: internet-facing
>     alb.ingress.kubernetes.io/target-type: ip
>     alb.ingress.kubernetes.io/conditions.ab-testing: >
>       [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "HeaderName", "values":["aews-study"]}}]
>     alb.ingress.kubernetes.io/actions.ab-testing: >
>       {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"hello-kubernetes-v2","servicePort":80}]}}
>   labels:
>     app: hello-kubernetes
> spec:
>   ingressClassName: alb
>   rules:
>     - http:
>         paths:
>           - path: /
>             pathType: Prefix
>             backend:
>               service:
>                 name: ab-testing
>                 port:
>                   name: use-annotation
>           - path: /
>             pathType: Prefix
>             backend:
>               service:
>                 name: hello-kubernetes-v1
>                 port:
>                   name: http
> EOF
# Result
ingress.networking.k8s.io/hello-kubernetes configured
  • Traffic is split on the HTTP header condition HeaderName: aews-study

A/B ํ…Œ์ŠคํŠธ ํ™•์ธ

With HeaderName: aews-study present, 100% of the traffic is forwarded to version 2

(eks-user:N/A) [root@operator-host ~]# for i in {1..100};  do curl -s -H "HeaderName: aews-study" $ELB_URL | grep version ; done | sort | uniq -c | sort -nr
    100   You are reaching hello-kubernetes version 2

ํฌํ•จ๋˜์ง€ ์•Š์„ ์‹œ version 1๋กœ 100% ํฌ์›Œ๋”ฉ

(eks-user:N/A) [root@operator-host ~]# for i in {1..100};  do curl -s $ELB_URL | grep version ; done | sort | uniq -c | sort -nr
    100   You are reaching hello-kubernetes version 1


10. Clean up the lab resources

(eks-user:N/A) [root@operator-host ~]# kubectl delete ingress -n hello-kubernetes hello-kubernetes && kubectl delete ns hello-kubernetes
# Result
ingress.networking.k8s.io "hello-kubernetes" deleted
namespace "hello-kubernetes" deleted

๐Ÿ•ต๏ธโ€โ™‚๏ธ Network analysis tools

1. Install KubeSkoop

(eks-user:N/A) [root@operator-host ~]# kubectl apply -f https://raw.githubusercontent.com/alibaba/kubeskoop/main/deploy/skoopbundle.yaml
# Result
namespace/kubeskoop created
daemonset.apps/kubeskoop-exporter created
configmap/kubeskoop-config created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-server-conf created
deployment.apps/prometheus-deployment created
service/prometheus-service created
service/loki-service created
configmap/grafana-datasources created
deployment.apps/grafana created
service/grafana created
deployment.apps/grafana-loki created
configmap/grafana-loki-config created
clusterrole.rbac.authorization.k8s.io/kubeskoop-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubeskoop-controller created
role.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/controller created
configmap/kubeskoop-controller-config created
deployment.apps/controller created
service/controller created
deployment.apps/webconsole created
service/webconsole created

(eks-user:N/A) [root@operator-host ~]# kubectl patch service webconsole -n kubeskoop -p '{"spec": {"type": "LoadBalancer"}}'
service/webconsole patched
(eks-user:N/A) [root@operator-host ~]# kubectl patch service prometheus-service -n kubeskoop -p '{"spec": {"type": "LoadBalancer"}}'
service/prometheus-service patched
(eks-user:N/A) [root@operator-host ~]# kubectl patch service grafana -n kubeskoop -p '{"spec": {"type": "LoadBalancer"}}'
service/grafana patched

2. Access the KubeSkoop web console

admin / kubeskoop

(eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop webconsole
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
webconsole   LoadBalancer   10.100.14.77   ac2514347b0c84f0b84e1aa71fe7e0dc-996135232.ap-northeast-2.elb.amazonaws.com   80:32223/TCP   84s
(eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop webconsole -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "KubeSkoop URL = http://"$1""}'
KubeSkoop URL = http://ac2514347b0c84f0b84e1aa71fe7e0dc-996135232.ap-northeast-2.elb.amazonaws.com
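The jsonpath expression emits only the bare ELB hostname; awk then prepends the scheme. The same formatting step with a placeholder hostname in place of the live kubectl call:

```shell
# Placeholder hostname standing in for the jsonpath output of the live Service.
echo "example-1234567890.ap-northeast-2.elb.amazonaws.com" \
  | awk '{ print "KubeSkoop URL = http://"$1 }'
# prints: KubeSkoop URL = http://example-1234567890.ap-northeast-2.elb.amazonaws.com
```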

Image

3. Packet capture test

Check the netshoot-pod details

(eks-user:N/A) [root@operator-host ~]# k get pod -owide
NAME           READY   STATUS    RESTARTS   AGE    IP             NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod   1/1     Running   0          109m   192.168.1.98   ip-192-168-1-193.ap-northeast-2.compute.internal   <none>           <none>

netshoot-pod ping test

(eks-user:N/A) [root@operator-host ~]# ping 192.168.1.98
PING 192.168.1.98 (192.168.1.98) 56(84) bytes of data.
64 bytes from 192.168.1.98: icmp_seq=1 ttl=126 time=1.08 ms
64 bytes from 192.168.1.98: icmp_seq=2 ttl=126 time=0.644 ms
64 bytes from 192.168.1.98: icmp_seq=3 ttl=126 time=0.649 ms
64 bytes from 192.168.1.98: icmp_seq=4 ttl=126 time=0.680 ms
64 bytes from 192.168.1.98: icmp_seq=5 ttl=126 time=0.767 ms

Image

4. Access the Prometheus web UI

(eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop prometheus-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "prometheus URL = http://"$1""}'
prometheus URL = http://af6d238bde2c24a829f56fd1abdc6989-1339249886.ap-northeast-2.elb.amazonaws.com

Image

5. Access the Grafana web UI

admin / kubeskoop

(eks-user:N/A) [root@operator-host ~]# kubectl get svc -n kubeskoop grafana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "grafana URL = http://"$1""}'
grafana URL = http://a87ee7a3d119b4d0a9a742a9d992375d-114222539.ap-northeast-2.elb.amazonaws.com

Image

๐Ÿ”— Configuring IPVS mode on AWS EKS

IPVS mode

  • IPVS is a software load balancer that runs inside the Linux kernel. It uses Netfilter as its backend framework and can handle TCP/UDP requests.
  • Because iptables' rule-based processing hits performance limits as the rule count grows and offers no choice of load-balancing algorithm, IPVS is commonly used in its place these days.
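As a rough illustration of the rr (round-robin) scheduler configured below, each new connection is handed to the next real server in turn. A toy sketch with made-up backend IPs:

```shell
# Toy round-robin picker over a fixed backend list (placeholder IPs, not real servers).
backends="10.0.0.1 10.0.0.2 10.0.0.3"
n=$(echo $backends | wc -w)
i=0
pick() {
  idx=$(( i % n + 1 ))            # 1-based field index, wraps around
  echo $backends | cut -d' ' -f$idx
  i=$(( i + 1 ))
}
pick; pick; pick; pick            # 10.0.0.1, 10.0.0.2, 10.0.0.3, then back to 10.0.0.1
```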

1. Configure the IPVS modules on each node

Configure the IPVS kernel modules on nodes 1, 2, and 3

[ec2-user@ip-192-168-1-193 ~]$ sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'

[ec2-user@ip-192-168-1-193 ~]$ sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
[ec2-user@ip-192-168-2-52 ~]$ sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'

[ec2-user@ip-192-168-2-52 ~]$ sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
[ec2-user@ip-192-168-3-72 ~]$ sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'

[ec2-user@ip-192-168-3-72 ~]$ sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
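Since all three nodes get an identical setup, the module list can also be written once and pushed out in a loop. The `ssh`/`scp` lines below are commented out because they assume the `$N1`..`$N3` node-IP variables used elsewhere in this post; the local part runs anywhere:

```shell
# Write the module list once locally.
cat > /tmp/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF

# Hypothetical distribution step (needs SSH access to the nodes):
# for i in $N1 $N2 $N3; do
#   scp /tmp/ipvs.conf ec2-user@$i:/tmp/
#   ssh ec2-user@$i 'sudo mv /tmp/ipvs.conf /etc/modules-load.d/ipvs.conf \
#     && xargs -n1 sudo modprobe < /etc/modules-load.d/ipvs.conf'
# done

wc -l < /tmp/ipvs.conf   # 12 module entries
```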

2. Verify the IPVS modules

On each node, confirm the IPVS modules are loaded with lsmod | grep ^ip_vs

[ec2-user@ip-192-168-1-193 ~]$ sudo lsmod | grep ^ip_vs
# Output
ip_vs_nq               16384  0
ip_vs_sed              16384  0
ip_vs_dh               16384  0
ip_vs_sh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 192512  20 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed
[ec2-user@ip-192-168-2-52 ~]$ sudo lsmod | grep ^ip_vs
# Output
ip_vs_nq               16384  0
ip_vs_sed              16384  0
ip_vs_dh               16384  0
ip_vs_sh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 192512  20 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed
[ec2-user@ip-192-168-3-72 ~]$ sudo lsmod | grep ^ip_vs
# Output
ip_vs_nq               16384  0
ip_vs_sed              16384  0
ip_vs_dh               16384  0
ip_vs_sh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 192512  20 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed

3. Apply IPVS mode to the AWS EKS cluster

aws eks update-addon --cluster-name $CLUSTER_NAME --addon-name kube-proxy \
  --configuration-values '{"ipvs": {"scheduler": "rr"}, "mode": "ipvs"}' \
  --resolve-conflicts OVERWRITE

# Result
{
    "update": {
        "id": "3e57ed90-40b8-3ebb-98ac-73e497864986",
        "status": "InProgress",
        "type": "AddonUpdate",
        "params": [
            {
                "type": "ResolveConflicts",
                "value": "OVERWRITE"
            },
            {
                "type": "ConfigurationValues",
                "value": "{\"ipvs\": {\"scheduler\": \"rr\"}, \"mode\": \"ipvs\"}"
            }
        ],
        "createdAt": "2025-02-14T02:06:42.661000+09:00",
        "errors": []
    }
}
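The add-on update is asynchronous ("status": "InProgress" above), so it is worth confirming the add-on has settled before restarting kube-proxy. A sketch: the live AWS call is commented out, and the status extraction is shown against a stubbed response:

```shell
# Live check (needs cluster access); expect ACTIVE once the update has finished:
# aws eks describe-addon --cluster-name $CLUSTER_NAME --addon-name kube-proxy \
#   --query 'addon.status' --output text

# Stubbed response demonstrating the same status extraction with plain tools:
cat > /tmp/addon.json <<'EOF'
{ "addon": { "addonName": "kube-proxy", "status": "ACTIVE" } }
EOF
grep -o '"status": "[A-Z]*"' /tmp/addon.json
# prints: "status": "ACTIVE"
```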

4. Restart the kube-proxy DaemonSet

kubectl -n kube-system rollout restart ds kube-proxy
# Result
daemonset.apps/kube-proxy restarted

5. Verify the kube-proxy configuration

kubectl get cm -n kube-system kube-proxy-config -o yaml

✅ Output

Confirm the mode: "ipvs" and scheduler: "rr" settings

apiVersion: v1
data:
  config: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: "rr"
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -998
    portRange: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2025-02-12T03:13:56Z"
  labels:
    eks.amazonaws.com/component: kube-proxy
    k8s-app: kube-proxy
  name: kube-proxy-config
  namespace: kube-system
  resourceVersion: "466743"
  uid: 83ba2b60-5afb-4ea8-b037-84a02d9036ef
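To check only the two fields that matter rather than reading the full YAML, the same output can be filtered. The kubectl call is stubbed here with a saved copy of the relevant lines so the filter itself runs without a cluster:

```shell
# Live version (needs cluster access):
# kubectl get cm -n kube-system kube-proxy-config -o yaml | grep -E 'mode:|scheduler:'

# Stubbed copy of the relevant ConfigMap lines:
cat > /tmp/kube-proxy-config.yaml <<'EOF'
    ipvs:
      scheduler: "rr"
    mode: "ipvs"
EOF
grep -E 'mode:|scheduler:' /tmp/kube-proxy-config.yaml
# prints the scheduler: "rr" and mode: "ipvs" lines
```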

6. Verify the kube-ipvs0 interface

When the cluster switches to IPVS mode, a kube-ipvs0 interface is created and the service virtual IPs are assigned to it

for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done

✅ Output

>> node 43.202.57.204 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:23:93:a6:bc:61 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.1.193/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
       valid_lft 2300sec preferred_lft 2300sec
    inet6 fe80::23:93ff:fea6:bc61/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: eni01a4864c88a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 56:be:6a:bd:2d:2a brd ff:ff:ff:ff:ff:ff link-netns cni-a59ccd2b-5db2-0159-b78e-e0797b300a23
    inet6 fe80::54be:6aff:febd:2d2a/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: eni61c5a949744@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 52:41:42:c3:03:3f brd ff:ff:ff:ff:ff:ff link-netns cni-74caca86-36e2-a922-2920-c2c8c00e7b43
    inet6 fe80::5041:42ff:fec3:33f/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
14: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 02:d8:5d:80:2c:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s7
    inet 192.168.1.232/24 brd 192.168.1.255 scope global ens7
       valid_lft forever preferred_lft forever
    inet6 fe80::d8:5dff:fe80:2cc9/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
38: enica32516c01a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 92:2a:72:57:7d:03 brd ff:ff:ff:ff:ff:ff link-netns cni-f0cb87e6-1f26-e2c9-b622-9a99215f4cca
    inet6 fe80::902a:72ff:fe57:7d03/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
41: enifc4d699b169@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether b6:46:07:f2:a2:65 brd ff:ff:ff:ff:ff:ff link-netns cni-2c0732df-1963-6937-d7ad-2ea9fe9f3481
    inet6 fe80::b446:7ff:fef2:a265/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
42: enid72dce04775@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 5e:54:5b:e7:37:87 brd ff:ff:ff:ff:ff:ff link-netns cni-50b39a65-fb46-3e73-0831-6f9b159d1c56
    inet6 fe80::5c54:5bff:fee7:3787/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
43: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether ce:47:f1:9e:78:ad brd ff:ff:ff:ff:ff:ff
    inet 10.100.110.88/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.214.208/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.73.70/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.17.159/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.14.77/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.145.143/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.83.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.146.32/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.101.216/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

>> node 15.164.179.214 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:c5:5b:d1:77:57 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.2.52/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
       valid_lft 2295sec preferred_lft 2295sec
    inet6 fe80::4c5:5bff:fed1:7757/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: enibce2df30e87@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 4a:c8:23:ab:85:39 brd ff:ff:ff:ff:ff:ff link-netns cni-5408bb28-ad79-a3f3-3a60-9442968852b1
    inet6 fe80::48c8:23ff:feab:8539/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: enic99196c7a64@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 8a:00:6e:37:b1:71 brd ff:ff:ff:ff:ff:ff link-netns cni-9343fb30-bdc8-5fab-b46d-3a5db58f8007
    inet6 fe80::8800:6eff:fe37:b171/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
28: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:0e:23:61:2c:f9 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet 192.168.2.136/24 brd 192.168.2.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::40e:23ff:fe61:2cf9/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
34: eni79cb46fcdac@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether c6:d6:b9:a4:cf:99 brd ff:ff:ff:ff:ff:ff link-netns cni-188308ca-6087-e1e0-f4f3-7e6fd2da16e3
    inet6 fe80::c4d6:b9ff:fea4:cf99/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
39: enib1a09dbf60e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 76:23:31:29:2f:9f brd ff:ff:ff:ff:ff:ff link-netns cni-cb0b81af-c51f-85de-47fa-5036f76d4745
    inet6 fe80::7423:31ff:fe29:2f9f/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
40: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 82:2e:0b:7a:92:81 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.17.159/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.73.70/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.110.88/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.146.32/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.83.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.145.143/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.101.216/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.214.208/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.14.77/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

>> node 43.201.115.81 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:d5:3d:4a:54:bd brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.3.72/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
       valid_lft 2304sec preferred_lft 2304sec
    inet6 fe80::8d5:3dff:fe4a:54bd/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: enif2c43957bf8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether a2:e3:15:62:47:31 brd ff:ff:ff:ff:ff:ff link-netns cni-8acd7723-2d0b-690f-3d4b-d3e902287dd1
    inet6 fe80::a0e3:15ff:fe62:4731/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
14: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:0c:3c:65:9c:27 brd ff:ff:ff:ff:ff:ff
    altname enp0s7
    inet 192.168.3.218/24 brd 192.168.3.255 scope global ens7
       valid_lft forever preferred_lft forever
    inet6 fe80::80c:3cff:fe65:9c27/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
35: eniaad872f8f96@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 5e:71:1d:0d:58:76 brd ff:ff:ff:ff:ff:ff link-netns cni-83401432-a5a6-e355-338b-5b0b2be54e2f
    inet6 fe80::5c71:1dff:fe0d:5876/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
39: eni1a06225ba03@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 6e:99:bf:09:eb:4a brd ff:ff:ff:ff:ff:ff link-netns cni-76c37d54-4b57-554c-2033-1bd0903ea10a
    inet6 fe80::6c99:bfff:fe09:eb4a/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
42: eni00c25e070e3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 5a:70:f9:2d:fb:d6 brd ff:ff:ff:ff:ff:ff link-netns cni-ea99b434-526f-c3dc-8f3c-312ea67461e7
    inet6 fe80::5870:f9ff:fe2d:fbd6/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
43: enif2b59b1644b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether 7e:a5:ca:a5:92:ae brd ff:ff:ff:ff:ff:ff link-netns cni-d32736a6-d35d-b2bf-009d-446df94aa7b8
    inet6 fe80::7ca5:caff:fea5:92ae/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
44: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 2a:28:b6:08:eb:09 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.73.70/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.101.216/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.14.77/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.145.143/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.214.208/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.110.88/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.83.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.146.32/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.17.159/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
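A quick sanity check on the output above is that every node should carry the same set of /32 service VIPs on kube-ipvs0, so counting them per node should give identical numbers. A sketch with a stubbed snippet in place of the ssh + ip addr call:

```shell
# Live version (needs SSH access to the nodes):
# for i in $N1 $N2 $N3; do ssh ec2-user@$i sudo ip addr show kube-ipvs0 | grep -c 'inet '; done

# Stubbed excerpt of one node's kube-ipvs0 block:
cat > /tmp/kube-ipvs0.txt <<'EOF'
43: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    inet 10.100.0.1/32 scope global kube-ipvs0
    inet 10.100.0.10/32 scope global kube-ipvs0
    inet 10.100.14.77/32 scope global kube-ipvs0
EOF
grep -c 'inet .*kube-ipvs0' /tmp/kube-ipvs0.txt   # 3 VIPs in this excerpt
```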

๐Ÿ—‘️ Clean up resources (after finishing the lab)

Delete the Amazon EKS cluster (takes about 10 minutes)

eksctl delete cluster --name $CLUSTER_NAME
# Result
2025-02-14 02:20:39 [ℹ]  deleting EKS cluster "myeks"
2025-02-14 02:20:39 [ℹ]  will drain 0 unmanaged nodegroup(s) in cluster "myeks"
2025-02-14 02:20:39 [ℹ]  starting parallel draining, max in-flight of 1
2025-02-14 02:20:40 [ℹ]  deleted 0 Fargate profile(s)
2025-02-14 02:20:40 [✔]  kubeconfig has been updated
2025-02-14 02:20:40 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2025-02-14 02:22:55 [ℹ]  4 sequential tasks: { delete nodegroup "ng1", delete IAM OIDC provider, delete addon IAM "eksctl-myeks-addon-vpc-cni", delete cluster control plane "myeks" [async] }
2025-02-14 02:22:55 [ℹ]  will delete stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:22:55 [ℹ]  waiting for stack "eksctl-myeks-nodegroup-ng1" to get deleted
2025-02-14 02:22:55 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:23:26 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:23:58 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:25:36 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:27:34 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:29:08 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:30:35 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:31:32 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:33:04 [ℹ]  waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng1"
2025-02-14 02:33:05 [ℹ]  will delete stack "eksctl-myeks-addon-vpc-cni"
2025-02-14 02:33:05 [ℹ]  will delete stack "eksctl-myeks-cluster"
2025-02-14 02:33:06 [✔]  all cluster resources were deleted

aws cloudformation delete-stack --stack-name myeks
This post is licensed under CC BY 4.0 by the author.