🚀 Deploying the Lab Environment
Two VPCs are deployed (one for EKS, one for operations), and EFS is added to the public subnets of myeks-vpc.
1. Download the YAML file
```bash
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-3week.yaml
# Result
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:--100 14933 100 14933 0 0 141k 0 --:--:-- --:--:-- --:--:-- 140k
```
2. Deploy
```bash
aws cloudformation deploy --template-file ~/Downloads/myeks-3week.yaml \
  --stack-name myeks --parameter-overrides KeyName=kp-aews SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2
# Result
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - myeks
```
3. SSH into the operations server EC2
```bash
ssh -i kp-aews.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2026-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
[root@operator-host ~]#
```
Subnet IDs and IPv4 addresses are assigned to the EFS through its network interfaces (ENIs).
The subnet IDs and IPv4 addresses can be checked under EFS > Network in the console ✅ (the CLI equivalent is sketched below).
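The same information can be pulled from the CLI instead of the console; a minimal sketch (it assumes a single EFS file system in this account/region, as in this lab):
```bash
# List the subnet ID, IPv4 address, and AZ of each EFS mount target
# (same data as the console's EFS > Network tab)
aws efs describe-mount-targets \
  --file-system-id $(aws efs describe-file-systems --query "FileSystems[0].FileSystemId" --output text) \
  --query "MountTargets[*].[SubnetId,IpAddress,AvailabilityZoneName]" --output table
```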
🛠️ Deploying EKS with eksctl
1. Set variables
Store the cluster name, VPC ID, public subnet IDs, and SSH key name as shell variables.
```bash
export CLUSTER_NAME=myeks
# Get the VPC ID
export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" --query 'Vpcs[*].VpcId' --output text)
echo $VPCID
vpc-0e32b5a6653acdcd9
# Get the public subnet IDs
export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet3" --query "Subnets[0].[SubnetId]" --output text)
echo $PubSubnet1 $PubSubnet2 $PubSubnet3
subnet-0fed28a1b3e108719 subnet-0e4fb63cb543698fe subnet-0861bd68771150000
# Your own SSH key pair name
SSHKEYNAME=kp-aews
```
2. Create the EKS cluster config file
```bash
cat << EOF > myeks.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.31"
iam:
  withOIDC: true # enables the IAM OIDC provider as well as IRSA for the Amazon CNI plugin
  serviceAccounts: # service accounts to create in the cluster. See IAM Service Accounts
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true # if you only want to allow private access to the cluster
    publicAccess: true # if you want to allow public access to the cluster
  id: $VPCID
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: $PubSubnet1
      ap-northeast-2b:
        az: ap-northeast-2b
        cidr: 192.168.2.0/24
        id: $PubSubnet2
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.3.0/24
        id: $PubSubnet3
addons:
  - name: vpc-cni # no version is specified so it deploys the default version
    version: latest # auto discovers the latest available
    attachPolicyARNs: # attach IAM policies to the add-on's service account
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: kube-proxy
    version: latest
  - name: coredns
    version: latest
  - name: metrics-server
    version: latest
managedNodeGroups:
  - amiFamily: AmazonLinux2023
    desiredCapacity: 3
    iam:
      withAddonPolicies:
        certManager: true # Enable cert-manager
        externalDNS: true # Enable ExternalDNS
    instanceType: t3.medium
    preBootstrapCommands:
      # install additional packages
      - "dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y"
    labels:
      alpha.eksctl.io/cluster-name: myeks
      alpha.eksctl.io/nodegroup-name: ng1
    maxPodsPerNode: 100
    maxSize: 3
    minSize: 3
    name: ng1
    ssh:
      allow: true
      publicKeyName: $SSHKEYNAME
    tags:
      alpha.eksctl.io/nodegroup-name: ng1
      alpha.eksctl.io/nodegroup-type: managed
    volumeIOPS: 3000
    volumeSize: 120
    volumeThroughput: 125
    volumeType: gp3
EOF
```
- The generated myeks.yaml file:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.31"
iam:
  withOIDC: true # enables the IAM OIDC provider as well as IRSA for the Amazon CNI plugin
  serviceAccounts: # service accounts to create in the cluster. See IAM Service Accounts
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true # if you only want to allow private access to the cluster
    publicAccess: true # if you want to allow public access to the cluster
  id: vpc-0e32b5a6653acdcd9
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: subnet-0fed28a1b3e108719
      ap-northeast-2b:
        az: ap-northeast-2b
        cidr: 192.168.2.0/24
        id: subnet-0e4fb63cb543698fe
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.3.0/24
        id: subnet-0861bd68771150000
addons:
  - name: vpc-cni # no version is specified so it deploys the default version
    version: latest # auto discovers the latest available
    attachPolicyARNs: # attach IAM policies to the add-on's service account
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: kube-proxy
    version: latest
  - name: coredns
    version: latest
  - name: metrics-server
    version: latest
managedNodeGroups:
  - amiFamily: AmazonLinux2023
    desiredCapacity: 3
    iam:
      withAddonPolicies:
        certManager: true # Enable cert-manager
        externalDNS: true # Enable ExternalDNS
    instanceType: t3.medium
    preBootstrapCommands:
      # install additional packages
      - "dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y"
    labels:
      alpha.eksctl.io/cluster-name: myeks
      alpha.eksctl.io/nodegroup-name: ng1
    maxPodsPerNode: 100
    maxSize: 3
    minSize: 3
    name: ng1
    ssh:
      allow: true
      publicKeyName: kp-aews
    tags:
      alpha.eksctl.io/nodegroup-name: ng1
      alpha.eksctl.io/nodegroup-type: managed
    volumeIOPS: 3000
    volumeSize: 120
    volumeThroughput: 125
    volumeType: gp3
```
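Before actually creating anything, the file can be validated; a minimal sketch using eksctl's dry-run mode, which prints the fully resolved ClusterConfig without touching AWS:
```bash
# Validate myeks.yaml and preview the resolved config without creating resources
eksctl create cluster -f myeks.yaml --dry-run
```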
3. Deploy EKS
```bash
eksctl create cluster -f myeks.yaml --verbose 4
# Result
2025-02-18 11:31:47 [▶] Setting credentials expiry window to 30 minutes
2025-02-18 11:31:48 [▶] role ARN for the current session is "arn:aws:iam::378102432899:user/eks-user"
2025-02-18 11:31:48 [ℹ] eksctl version 0.203.0
2025-02-18 11:31:48 [ℹ] using region ap-northeast-2
2025-02-18 11:31:48 [✔] using existing VPC (vpc-0e32b5a6653acdcd9) and subnets (private:map[] public:map[ap-northeast-2a:{subnet-0fed28a1b3e108719 ap-northeast-2a 192.168.1.0/24 0 } ap-northeast-2b:{subnet-0e4fb63cb543698fe ap-northeast-2b 192.168.2.0/24 0 } ap-northeast-2c:{subnet-0861bd68771150000 ap-northeast-2c 192.168.3.0/24 0 }])
2025-02-18 11:31:48 [!] custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2025-02-18 11:31:48 [ℹ] nodegroup "ng1" will use "" [AmazonLinux2023/1.31]
2025-02-18 11:31:48 [ℹ] using EC2 key pair "kp-aews"
2025-02-18 11:31:48 [ℹ] using Kubernetes version 1.31
2025-02-18 11:31:48 [ℹ] creating EKS cluster "myeks" in "ap-northeast-2" region with managed nodes
2025-02-18 11:31:48 [▶] cfg.json = \
.......
```
4. Change the EKS namespace and context settings
- Switch the active namespace to default
```bash
kubens default
# Result
✔ Active namespace is "default"
```
- Rename the EKS context (eks-user@myeks.ap-northeast-2.eksctl.io → eksworkshop)
```bash
kubectl ctx
# Result
Switched to context "eks-user@myeks.ap-northeast-2.eksctl.io".
kubectl config rename-context "eks-user@myeks.ap-northeast-2.eksctl.io" "eksworkshop"
# Result
Context "eks-user@myeks.ap-northeast-2.eksctl.io" renamed to "eksworkshop".
```
5. Check EKS cluster info
```bash
kubectl cluster-info
```
✅ Output
```
Kubernetes control plane is running at https://791BC5A9BB3716EA88C45304E0696F83.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://791BC5A9BB3716EA88C45304E0696F83.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
6. Query EKS node info
- Check node status, instance type, capacity type (On-Demand), and availability zone for the cluster's nodes
```bash
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
```
✅ Output
```
NAME STATUS ROLES AGE VERSION INSTANCE-TYPE CAPACITYTYPE ZONE
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 26m v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2a
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 26m v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2b
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 26m v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2c
```
7. Check the managed node group
```bash
eksctl get nodegroup --cluster $CLUSTER_NAME
```
✅ Output
```
CLUSTER NODEGROUP STATUS CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID ASG NAME TYPE
myeks ng1 ACTIVE 2025-02-18T02:45:19Z 3 3 3 t3.medium AL2023_x86_64_STANDARD eks-ng1-c8ca8b79-064a-1771-a08d-6ace8e163bd4 managed
```
```bash
aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name ng1 | jq
```
✅ Output
```json
{
"nodegroup": {
"nodegroupName": "ng1",
"nodegroupArn": "arn:aws:eks:ap-northeast-2:378102432899:nodegroup/myeks/ng1/c8ca8b79-064a-1771-a08d-6ace8e163bd4",
"clusterName": "myeks",
"version": "1.31",
"releaseVersion": "1.31.5-20250212",
"createdAt": "2025-02-18T11:45:19.813000+09:00",
"modifiedAt": "2025-02-18T12:06:04.747000+09:00",
"status": "ACTIVE",
"capacityType": "ON_DEMAND",
"scalingConfig": {
"minSize": 3,
"maxSize": 3,
"desiredSize": 3
},
"instanceTypes": [
"t3.medium"
],
"subnets": [
"subnet-0fed28a1b3e108719",
"subnet-0e4fb63cb543698fe",
"subnet-0861bd68771150000"
],
"amiType": "AL2023_x86_64_STANDARD",
"nodeRole": "arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-rGyQG9rZlOwl",
"labels": {
"alpha.eksctl.io/cluster-name": "myeks",
"alpha.eksctl.io/nodegroup-name": "ng1"
},
"resources": {
"autoScalingGroups": [
{
"name": "eks-ng1-c8ca8b79-064a-1771-a08d-6ace8e163bd4"
}
]
},
"health": {
"issues": []
},
"updateConfig": {
"maxUnavailable": 1
},
"launchTemplate": {
"name": "eksctl-myeks-nodegroup-ng1",
"version": "1",
"id": "lt-070d49ace29ca00a6"
},
"tags": {
"aws:cloudformation:stack-name": "eksctl-myeks-nodegroup-ng1",
"alpha.eksctl.io/cluster-name": "myeks",
"alpha.eksctl.io/nodegroup-name": "ng1",
"aws:cloudformation:stack-id": "arn:aws:cloudformation:ap-northeast-2:378102432899:stack/eksctl-myeks-nodegroup-ng1/55132e60-eda2-11ef-876d-06d9644ecb71",
"eksctl.cluster.k8s.io/v1alpha1/cluster-name": "myeks",
"aws:cloudformation:logical-id": "ManagedNodeGroup",
"alpha.eksctl.io/nodegroup-type": "managed",
"alpha.eksctl.io/eksctl-version": "0.203.0"
}
}
}
```
8. Check EKS add-on status
```bash
eksctl get addon --cluster $CLUSTER_NAME
```
✅ Output
```
2025-02-18 12:16:16 [ℹ] Kubernetes version "1.31" in use by cluster "myeks"
2025-02-18 12:16:16 [ℹ] getting all addons
2025-02-18 12:16:18 [ℹ] to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME VERSION STATUS ISSUES IAMROLE UPDATE AVAILABLE CONFIGURATION VALUES POD IDENTITY ASSOCIATION ROLES
coredns v1.11.4-eksbuild.2 ACTIVE 0
kube-proxy v1.31.3-eksbuild.2 ACTIVE 0
metrics-server v0.7.2-eksbuild.2 ACTIVE 0
vpc-cni v1.19.2-eksbuild.5 ACTIVE 0 arn:aws:iam::378102432899:role/eksctl-myeks-addon-vpc-cni-Role1-ZTYxtOMDwfFu enableNetworkPolicy: "true"
```
9. Check the IAM service account for the AWS Load Balancer Controller
(1) Check the IAM service account
Verify that the aws-load-balancer-controller service account is correctly bound to an IAM Role.
```bash
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
```
✅ Output
```
NAMESPACE NAME ROLE ARN
kube-system aws-load-balancer-controller arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-O6YEYsN7iVeQ
```
(2) Check the myeks.yaml file
certManager and externalDNS were enabled to add automated SSL certificate and external DNS record management.
```yaml
managedNodeGroups:
  - amiFamily: AmazonLinux2023
    desiredCapacity: 3
    iam:
      withAddonPolicies:
        certManager: true # Enable cert-manager
        externalDNS: true # Enable ExternalDNS
```
(3) Check the EC2 instances' IAM Role
- Check the IAM Role attached to the cluster's managed node group nodes (myeks-ng1-Node): the eksctl-myeks-nodegroup-ng1-NodeInstanceRole-rGyQG9rZlOwl role is attached
- Through this role, the IAM policies for externalDNS and certManager are applied (see the sketch below)
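This can be confirmed from the CLI as well; a minimal sketch (the role name is taken from the output above — the exact policy names eksctl generates may differ):
```bash
# List managed and inline policies eksctl attached to the node instance role
aws iam list-attached-role-policies --role-name eksctl-myeks-nodegroup-ng1-NodeInstanceRole-rGyQG9rZlOwl --output table
aws iam list-role-policies --role-name eksctl-myeks-nodegroup-ng1-NodeInstanceRole-rGyQG9rZlOwl --output table
```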
🖥️ Connecting to the Managed Node Group (EC2)
1. Check instance info
```bash
aws ec2 describe-instances --query "Reservations[*].Instances[*].{InstanceID:InstanceId, PublicIPAdd:PublicIpAddress, PrivateIPAdd:PrivateIpAddress, InstanceName:Tags[?Key=='Name']|[0].Value, Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
```
✅ Output
```
----------------------------------------------------------------------------------------
| DescribeInstances |
+----------------------+-----------------+----------------+----------------+-----------+
| InstanceID | InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+----------------------+-----------------+----------------+----------------+-----------+
| i-0484d2b724be33973 | myeks-ng1-Node | 192.168.3.80 | 15.164.49.232 | running |
| i-0b995a92db06f58d8 | operator-host | 172.20.1.100 | 3.35.230.35 | running |
| i-093ad32d5ff5a8770 | myeks-ng1-Node | 192.168.1.207 | 3.35.47.226 | running |
| i-0a80fdc36a856f394 | myeks-ng1-Node | 192.168.2.84 | 43.203.131.45 | running |
+----------------------+-----------------+----------------+----------------+-----------+
```
2. Query EC2 public IPs per availability zone (AZ)
- Public IP of the EC2 in AZ1
```bash
aws ec2 describe-instances \
--filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2a" \
--query 'Reservations[*].Instances[*].PublicIpAddress' \
--output text
# Result
3.35.47.226
```
- Public IP of the EC2 in AZ2
```bash
aws ec2 describe-instances \
--filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2b" \
--query 'Reservations[*].Instances[*].PublicIpAddress' \
--output text
# Result
43.203.131.45
```
- Public IP of the EC2 in AZ3
```bash
aws ec2 describe-instances \
--filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2c" \
--query 'Reservations[*].Instances[*].PublicIpAddress' \
--output text
# Result
15.164.49.232
```
3. Store the per-AZ EC2 public IPs in environment variables
- Save the public IP of the EC2 in each availability zone into variables (N1, N2, N3)
```bash
export N1=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2a" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N2=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2b" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N3=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2c" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
echo $N1, $N2, $N3
```
✅ Output
```
3.35.47.226, 43.203.131.45, 15.164.49.232
```
4. Confirm the automatically added SSH security group for EKS worker nodes
Setting ssh.allow: true in myeks.yaml makes eksctl create a security group for the EKS worker nodes automatically.
```yaml
ssh:
  allow: true
  publicKeyName: kp-aews
```
- Created security group: sg-0edd80591095505a9 (eksctl-myeks-nodegroup-ng1-remoteAccess)
- Default setting: port 22 (SSH) is open to Any (0.0.0.0/0)
- Security risk: anyone can attempt SSH access, which can lead to vulnerabilities
- Improvement: set ssh.allow: false and control source IPs directly in a security group instead (a sketch follows)
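As a middle ground, the default world-open rules can also be revoked on the existing group; a minimal sketch ($MNSGID is exported in step (1) of section 5 below):
```bash
# Revoke the default 0.0.0.0/0 and ::/0 SSH rules from the remoteAccess security group
aws ec2 revoke-security-group-ingress --group-id $MNSGID --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 revoke-security-group-ingress --group-id $MNSGID \
  --ip-permissions 'IpProtocol=tcp,FromPort=22,ToPort=22,Ipv6Ranges=[{CidrIpv6=::/0}]'
```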
5. Allow access only from specific IPs (security hardening)
- Modify the security group (remoteAccess) so that only the home IP and the operations server EC2 can connect, restricting access to specific IPs instead of allowing all traffic
(1) Find the security group ID containing remoteAccess
```bash
aws ec2 describe-security-groups --filters "Name=group-name,Values=*remoteAccess*" | jq
export MNSGID=$(aws ec2 describe-security-groups --filters "Name=group-name,Values=*remoteAccess*" --query 'SecurityGroups[*].GroupId' --output text)
```
✅ Output
```json
{
"SecurityGroups": [
{
"GroupId": "sg-0edd80591095505a9",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"UserIdGroupPairs": [],
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": []
}
],
"Tags": [
{
"Key": "alpha.eksctl.io/nodegroup-type",
"Value": "managed"
},
{
"Key": "aws:cloudformation:logical-id",
"Value": "SSH"
},
{
"Key": "alpha.eksctl.io/nodegroup-name",
"Value": "ng1"
},
{
"Key": "alpha.eksctl.io/cluster-name",
"Value": "myeks"
},
{
"Key": "Name",
"Value": "eksctl-myeks-nodegroup-ng1/SSH"
},
{
"Key": "aws:cloudformation:stack-name",
"Value": "eksctl-myeks-nodegroup-ng1"
},
{
"Key": "aws:cloudformation:stack-id",
"Value": "arn:aws:cloudformation:ap-northeast-2:378102432899:stack/eksctl-myeks-nodegroup-ng1/55132e60-eda2-11ef-876d-06d9644ecb71"
},
{
"Key": "alpha.eksctl.io/eksctl-version",
"Value": "0.203.0"
},
{
"Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
"Value": "myeks"
}
],
"VpcId": "vpc-0e32b5a6653acdcd9",
"SecurityGroupArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group/sg-0edd80591095505a9",
"OwnerId": "378102432899",
"GroupName": "eksctl-myeks-nodegroup-ng1-remoteAccess",
"Description": "Allow SSH access",
"IpPermissions": [
{
"IpProtocol": "tcp",
"FromPort": 22,
"ToPort": 22,
"UserIdGroupPairs": [],
"IpRanges": [
{
"Description": "Allow SSH access to managed worker nodes in group ng1",
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [
{
"Description": "Allow SSH access to managed worker nodes in group ng1",
"CidrIpv6": "::/0"
}
],
"PrefixListIds": []
}
]
}
]
}
```
(2) Add the home public IP to the security group's inbound rules
- Add the public IP you are currently connecting from to the inbound rules of the security group (remoteAccess)
```bash
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr $(curl -s ipinfo.io/ip)/32
{
"Return": true,
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-08fceda8b8811d00b",
"GroupId": "sg-0edd80591095505a9",
"GroupOwnerId": "378102432899",
"IsEgress": false,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "182.230.60.93/32",
"SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-08fceda8b8811d00b"
}
]
}
```
(3) Add the operations server's internal IP to the security group's inbound rules
- Also add the operations server's internal IP (172.20.1.100/32) to the inbound rules
```bash
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr 172.20.1.100/32
{
"Return": true,
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-02799adf73d669efa",
"GroupId": "sg-0edd80591095505a9",
"GroupOwnerId": "378102432899",
"IsEgress": false,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "172.20.1.100/32",
"SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-02799adf73d669efa"
}
]
}
```
(4) Ping test to a worker node
```bash
ping -c 2 $N1
```
✅ Output
```
PING 3.35.47.226 (3.35.47.226) 56(84) bytes of data.
64 bytes from 3.35.47.226: icmp_seq=1 ttl=115 time=13.7 ms
64 bytes from 3.35.47.226: icmp_seq=2 ttl=115 time=13.8 ms
--- 3.35.47.226 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 13.744/13.779/13.815/0.035 ms
```
(5) SSH connection test to the worker nodes
```bash
ssh -i kp-aews.pem -o StrictHostKeyChecking=no ec2-user@$N1 hostname
```
✅ Output
```
ip-192-168-1-207.ap-northeast-2.compute.internal
```
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh -o StrictHostKeyChecking=no ec2-user@$i hostname; echo; done
```
✅ Output
```
>> node 3.35.47.226 <<
ip-192-168-1-207.ap-northeast-2.compute.internal
>> node 43.203.131.45 <<
ip-192-168-2-84.ap-northeast-2.compute.internal
>> node 15.164.49.232 <<
ip-192-168-3-80.ap-northeast-2.compute.internal
```
```bash
ssh ec2-user@$N1
# Result
A newer release of "Amazon Linux" is available.
Version 2023.6.20250211:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Wed Feb 12 05:52:48 2025 from 52.94.123.236
[ec2-user@ip-192-168-1-207 ~]$
```
```bash
ssh ec2-user@$N2
# Result
A newer release of "Amazon Linux" is available.
Version 2023.6.20250211:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Wed Feb 12 05:52:48 2025 from 52.94.123.236
[ec2-user@ip-192-168-2-84 ~]$
```
```bash
ssh ec2-user@$N3
# Result
A newer release of "Amazon Linux" is available.
Version 2023.6.20250211:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Wed Feb 12 05:52:48 2025 from 52.94.123.236
[ec2-user@ip-192-168-3-80 ~]$
```
📊 Checking Node Info
1. Check basic EKS worker node info
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i hostnamectl; echo; done
```
✅ Output
```
>> node 3.35.47.226 <<
Static hostname: ip-192-168-1-207.ap-northeast-2.compute.internal
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: ec26a612d3fd1934a5b6b39b72fa9d18
Boot ID: 98d99077d96a424daa3ab81b196f38a0
Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250203
CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
Kernel: Linux 6.1.127-135.201.amzn2023.x86_64
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.medium
Firmware Version: 1.0
>> node 43.203.131.45 <<
Static hostname: ip-192-168-2-84.ap-northeast-2.compute.internal
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: ec2f16bc9e1f90914542402b6bd7e2db
Boot ID: b921993e9cd54993bed4e93693d89aab
Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250203
CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
Kernel: Linux 6.1.127-135.201.amzn2023.x86_64
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.medium
Firmware Version: 1.0
>> node 15.164.49.232 <<
Static hostname: ip-192-168-3-80.ap-northeast-2.compute.internal
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: ec2e2414fefb5afd3b0c3ec76d9735ce
Boot ID: 909e95d8c70f4b00a9cd6545fad4d33b
Virtualization: amazon
Operating System: Amazon Linux 2023.6.20250203
CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
Kernel: Linux 6.1.127-135.201.amzn2023.x86_64
Architecture: x86-64
Hardware Vendor: Amazon EC2
Hardware Model: t3.medium
Firmware Version: 1.0
```
2. Check EKS worker node network interfaces
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done
```
✅ Output
```
>> node 3.35.47.226 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:e7:74:d0:4e:63 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.1.207/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
valid_lft 1811sec preferred_lft 1811sec
inet6 fe80::e7:74ff:fed0:4e63/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
>> node 43.203.131.45 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:3b:f6:53:20:41 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.2.84/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
valid_lft 1812sec preferred_lft 1812sec
inet6 fe80::43b:f6ff:fe53:2041/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
>> node 15.164.49.232 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:2f:05:2e:be:c1 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.80/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 1805sec preferred_lft 1805sec
inet6 fe80::82f:5ff:fe2e:bec1/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: eni81d769258b0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether ae:50:23:e1:4f:0f brd ff:ff:ff:ff:ff:ff link-netns cni-fa2faa2f-0179-8831-ef6d-458723488300
inet6 fe80::ac50:23ff:fee1:4f0f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: enia025c0419e6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 4e:d3:3a:8d:e9:53 brd ff:ff:ff:ff:ff:ff link-netns cni-994c349e-66b0-685d-52c0-6358456a1ee1
inet6 fe80::4cd3:3aff:fe8d:e953/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: eni181f90d8a40@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether be:89:71:d7:fc:1c brd ff:ff:ff:ff:ff:ff link-netns cni-dca67549-f5c2-daf8-b13c-c641ff9b4191
inet6 fe80::bc89:71ff:fed7:fc1c/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
6: enifa068c4d7bd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 92:11:37:91:87:7f brd ff:ff:ff:ff:ff:ff link-netns cni-99f480e7-d3cc-12d0-c204-65a883192bdb
inet6 fe80::9011:37ff:fe91:877f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
7: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:75:29:ce:c9:4d brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.3.26/24 brd 192.168.3.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::875:29ff:fece:c94d/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```
3. Check EKS worker node storage info
- Check storage and root volume size
- Confirm the 120GB root volume (nvme0n1) and the partition layout including /boot/efi
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i lsblk; echo; done
```
✅ Output
```
>> node 3.35.47.226 <<
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 120G 0 disk
├─nvme0n1p1 259:1 0 120G 0 part /
├─nvme0n1p127 259:2 0 1M 0 part
└─nvme0n1p128 259:3 0 10M 0 part /boot/efi
>> node 43.203.131.45 <<
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 120G 0 disk
├─nvme0n1p1 259:1 0 120G 0 part /
├─nvme0n1p127 259:2 0 1M 0 part
└─nvme0n1p128 259:3 0 10M 0 part /boot/efi
>> node 15.164.49.232 <<
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 120G 0 disk
├─nvme0n1p1 259:1 0 120G 0 part /
├─nvme0n1p127 259:2 0 1M 0 part
└─nvme0n1p128 259:3 0 10M 0 part /boot/efi
```
4. Check EKS worker node disk usage
- Check disk usage of the root file system
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i df -hT /; echo; done
```
✅ Output
```
>> node 3.35.47.226 <<
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs 120G 3.4G 117G 3% /
>> node 43.203.131.45 <<
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs 120G 3.4G 117G 3% /
>> node 15.164.49.232 <<
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs 120G 3.5G 117G 3% /
```
5. Check the default storage class
- The gp2 (AWS EBS) storage class is present by default
- gp3 is more favorable in both performance and cost, so a gp3 storage class will be added later (a preview sketch follows this step)
```bash
kubectl get sc
```
✅ Output
```
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 102m
```
- Query details of the default storage class (gp2)
```bash
kubectl describe sc gp2
```
✅ Output
```
Name: gp2
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
```
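As a preview of that later step, a minimal gp3 StorageClass sketch — it assumes the AWS EBS CSI driver (provisioner ebs.csi.aws.com) has been installed, which is not yet the case at this point:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # make gp3 the default class
provisioner: ebs.csi.aws.com  # requires the AWS EBS CSI driver
parameters:
  type: gp3
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```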
6. Check currently installed CRDs
- Query the default CRD list
- More CRDs will be added after controllers are installed
```bash
kubectl get crd
```
✅ Output
```
NAME CREATED AT
cninodes.vpcresources.k8s.aws 2025-02-18T02:37:25Z
eniconfigs.crd.k8s.amazonaws.com 2025-02-18T02:40:58Z
policyendpoints.networking.k8s.aws 2025-02-18T02:37:25Z
securitygrouppolicies.vpcresources.k8s.aws 2025-02-18T02:37:25Z
```
7. Check CSI nodes
- No CSI (Container Storage Interface) drivers are present yet
- They will be added when storage-related controllers are installed later
```bash
kubectl get csinodes
```
✅ Output
```
NAME DRIVERS AGE
ip-192-168-1-207.ap-northeast-2.compute.internal 0 97m
ip-192-168-2-84.ap-northeast-2.compute.internal 0 97m
ip-192-168-3-80.ap-northeast-2.compute.internal 0 97m
```
8. Check the maximum Pod count per EKS node
```bash
kubectl describe node | grep Capacity: -A13
```
✅ Output
```
Capacity:
cpu: 2
ephemeral-storage: 125751276Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3919536Ki
pods: 100
Allocatable:
cpu: 1930m
ephemeral-storage: 114818633946
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3364528Ki
pods: 100
--
Capacity:
cpu: 2
ephemeral-storage: 125751276Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3919536Ki
pods: 100
Allocatable:
cpu: 1930m
ephemeral-storage: 114818633946
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3364528Ki
pods: 100
--
Capacity:
cpu: 2
ephemeral-storage: 125751276Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3919544Ki
pods: 100
Allocatable:
cpu: 1930m
ephemeral-storage: 114818633946
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3364536Ki
pods: 100
```
```bash
kubectl get nodes -o custom-columns="NAME:.metadata.name,MAXPODS:.status.capacity.pods"
```
✅ Output
```
NAME MAXPODS
ip-192-168-1-207.ap-northeast-2.compute.internal 100
ip-192-168-2-84.ap-northeast-2.compute.internal 100
ip-192-168-3-80.ap-northeast-2.compute.internal 100
```
9. Check the kubelet maxPods default in /etc/kubernetes/kubelet/config.json
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo cat /etc/kubernetes/kubelet/config.json | grep maxPods; echo; done
```
✅ Output
```
>> node 3.35.47.226 <<
"maxPods": 17,
>> node 43.203.131.45 <<
"maxPods": 17,
>> node 15.164.49.232 <<
"maxPods": 17,
```
10. Check the custom kubelet settings
/etc/kubernetes/kubelet/config.json.d/00-nodeadm.conf contains maxPods: 100, so the default (17) is overridden and 100 is applied. (A way to confirm the merged value is sketched after this block.)
```bash
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo cat /etc/kubernetes/kubelet/config.json.d/00-nodeadm.conf | grep maxPods; echo; done
>> node 3.35.47.226 <<
"maxPods": 100
>> node 43.203.131.45 <<
"maxPods": 100
>> node 15.164.49.232 <<
"maxPods": 100
```
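To see the value the kubelet actually runs with after the drop-in files are merged, the node's configz endpoint can be queried through the API server; a minimal sketch, assuming kubectl access and jq:
```bash
# Read the effective kubelet config of one node via the API server proxy
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get --raw "/api/v1/nodes/$NODE/proxy/configz" | jq '.kubeletconfig.maxPods'
# Expected: 100
```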
🔑 Operations Server EC2: EKS kubeconfig Setup and EFS Mount Test
1. SSH into the operations server
```bash
ssh -i kp-aews.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
Last login: Tue Feb 18 11:20:41 2025 from 182.230.60.93
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2026-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
Last login: Tue Feb 18 11:20:41 KST 2025 on pts/0
[root@operator-host ~]#
```
2. Configure AWS CLI credentials
```bash
[root@operator-host ~]# aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXX
Default region name [None]: ap-northeast-2
Default output format [None]: json
```
- AWS Access Key ID: (enter the issued Access Key ID)
- AWS Secret Access Key: (enter the Secret Access Key)
- Default region name: ap-northeast-2 (Seoul region; any region can be chosen)
- Default output format: json or yaml (default: json)
3. Check the current IAM user
```bash
[root@operator-host ~]# aws sts get-caller-identity --query Arn
# Result
"arn:aws:iam::378102432899:user/eks-user"
```
4. Generate kubeconfig and connect to EKS
- Generate a kubeconfig file on the operations server to connect to the EKS cluster
```bash
[root@operator-host ~]# aws eks update-kubeconfig --name myeks --user-alias eks-user
Added new context eks-user to /root/.kube/config
(eks-user:N/A) [root@operator-host ~]#
```
Check cluster info with kubectl:
```bash
(eks-user:N/A) [root@operator-host ~]# kubectl cluster-info
```
✅ Output
```
Kubernetes control plane is running at https://791BC5A9BB3716EA88C45304E0696F83.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://791BC5A9BB3716EA88C45304E0696F83.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
- Change the namespace
- Set the default namespace to default
```bash
(eks-user:N/A) [root@operator-host ~]# kubectl ns default
Context "eks-user" modified.
Active namespace is "default".
(eks-user:default) [root@operator-host ~]#
```
5. Check EKS cluster and node status
```bash
(eks-user:default) [root@operator-host ~]# kubectl get node -v6
```
✅ Output
```
I0218 13:47:55.441335 2910 loader.go:395] Config loaded from file: /root/.kube/config
I0218 13:47:56.203890 2910 round_trippers.go:553] GET https://791BC5A9BB3716EA88C45304E0696F83.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 755 milliseconds
NAME STATUS ROLES AGE VERSION
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 121m v1.31.5-eks-5d632ec
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 121m v1.31.5-eks-5d632ec
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 121m v1.31.5-eks-5d632ec
```
6. Prepare the EFS mount test
- The operations server (172.20.1.100/24) will remote-mount EFS over VPC Peering to use the storage
- Since the operations server and the EKS cluster are in different VPCs, access from the operations server to EFS goes through VPC Peering (a CLI check is sketched below)
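A quick way to confirm the peering path from the CLI — a minimal sketch (it assumes the single peering connection created by the lab's CloudFormation stack):
```bash
# Confirm the VPC peering connection between the operations VPC and the myeks VPC is active
aws ec2 describe-vpc-peering-connections \
  --query "VpcPeeringConnections[*].[VpcPeeringConnectionId,Status.Code,RequesterVpcInfo.VpcId,AccepterVpcInfo.VpcId]" \
  --output table
```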
7. Check the file system ID
- Check the ID of the EFS file system currently in use
```bash
(eks-user:default) [root@operator-host ~]# aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
```
✅ Output
```
fs-0aeb6f8c0c228b9d2
```
8. Check EFS mount target info
- Query the mount targets (subnets and IPs) reachable for this EFS
```bash
(eks-user:default) [root@operator-host ~]# aws efs describe-mount-targets --file-system-id $(aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text) | jq
```
✅ Output
```json
{
"MountTargets": [
{
"OwnerId": "378102432899",
"MountTargetId": "fsmt-0604d5d000fcd5bac",
"FileSystemId": "fs-0aeb6f8c0c228b9d2",
"SubnetId": "subnet-0e4fb63cb543698fe",
"LifeCycleState": "available",
"IpAddress": "192.168.2.121",
"NetworkInterfaceId": "eni-01a6638b93a0f3c69",
"AvailabilityZoneId": "apne2-az2",
"AvailabilityZoneName": "ap-northeast-2b",
"VpcId": "vpc-0e32b5a6653acdcd9"
},
{
"OwnerId": "378102432899",
"MountTargetId": "fsmt-06d0ec46b5d5eb2e7",
"FileSystemId": "fs-0aeb6f8c0c228b9d2",
"SubnetId": "subnet-0fed28a1b3e108719",
"LifeCycleState": "available",
"IpAddress": "192.168.1.145",
"NetworkInterfaceId": "eni-081306b696d34563b",
"AvailabilityZoneId": "apne2-az1",
"AvailabilityZoneName": "ap-northeast-2a",
"VpcId": "vpc-0e32b5a6653acdcd9"
},
{
"OwnerId": "378102432899",
"MountTargetId": "fsmt-079d4fd81a4d39374",
"FileSystemId": "fs-0aeb6f8c0c228b9d2",
"SubnetId": "subnet-0861bd68771150000",
"LifeCycleState": "available",
"IpAddress": "192.168.3.250",
"NetworkInterfaceId": "eni-05a1c8ad9d2d52f44",
"AvailabilityZoneId": "apne2-az3",
"AvailabilityZoneName": "ap-northeast-2c",
"VpcId": "vpc-0e32b5a6653acdcd9"
}
]
}
```
9. Print only the mount target IPs
- Query the list of EFS network interface IPs available for mounting
```bash
(eks-user:default) [root@operator-host ~]# aws efs describe-mount-targets --file-system-id $(aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text) --query "MountTargets[*].IpAddress" --output text
192.168.2.121 192.168.1.145 192.168.3.250
```
10. Decide the mount method in a VPC Peering setup
- Inside the same VPC, DNS-based mounting with the fs-xxxx.efs.ap-northeast-2.amazonaws.com name is possible
- When accessing from another VPC over VPC Peering → IP-based mounting is required (a sketch for capturing the IP follows the test below)
```bash
# DNS lookup test: fails because we are not in the same VPC
(eks-user:default) [root@operator-host ~]# dig +short $(aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text).efs.ap-northeast-2.amazonaws.com
```
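Rather than copying an IP by hand in the next step, the first mount-target IP could also be captured directly; a minimal sketch reusing the same CLI calls:
```bash
# Grab the first EFS mount-target IP into the variable used in step 11
export EFSIP1=$(aws efs describe-mount-targets \
  --file-system-id $(aws efs describe-file-systems --query "FileSystems[0].FileSystemId" --output text) \
  --query "MountTargets[0].IpAddress" --output text)
echo $EFSIP1
```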
11. Choose the EFS mount IP and set an environment variable
- Pick an IP address to mount (e.g., 192.168.1.145) and store it in a variable
```bash
(eks-user:default) [root@operator-host ~]# EFSIP1=192.168.1.145
```
12. Perform the EFS mount
- Create the mount directory on the operations server; 192.168.1.145:/ is mounted at /mnt/myefs
```bash
(eks-user:default) [root@operator-host ~]# mkdir /mnt/myefs
(eks-user:default) [root@operator-host ~]# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $EFSIP1:/ /mnt/myefs
```
- The AWS console → EFS → Attach option provides the same mount method and command:
```bash
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 192.168.1.145:/ efs
```
13. Check the mount status
```bash
(eks-user:default) [root@operator-host ~]# findmnt -t nfs4
```
✅ Output
```
TARGET SOURCE FSTYPE OPTIONS
/mnt/myefs 192.168.1.145:/ nfs4 rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sy
```
Show only NFS4-type file systems:
```bash
(eks-user:default) [root@operator-host ~]# df -hT --type nfs4
```
✅ Output
```
Filesystem Type Size Used Avail Use% Mounted on
192.168.1.145:/ nfs4 8.0E 0 8.0E 0% /mnt/myefs
```
14. Store a file on EFS and check NFS statistics
(1) Check NFS stats before saving a file
```bash
(eks-user:default) [root@operator-host ~]# nfsstat
```
✅ Output
```
Client rpc stats:
calls retrans authrefrsh
22 0 22
Client nfs v4:
null read write commit open open_conf
1 4% 0 0% 0 0% 0 0% 0 0% 0 0%
open_noat open_dgrd close setattr fsinfo renew
0 0% 0 0% 0 0% 0 0% 2 9% 0 0%
setclntid confirm lock lockt locku access
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
getattr lookup lookup_root remove rename link
2 9% 0 0% 1 4% 0 0% 0 0% 0 0%
symlink create pathconf statfs readlink readdir
0 0% 0 0% 1 4% 1 4% 0 0% 0 0%
server_caps delegreturn getacl setacl fs_locations rel_lkowner
3 13% 0 0% 0 0% 0 0% 0 0% 0 0%
secinfo exchange_id create_ses destroy_ses sequence get_lease_t
0 0% 0 0% 2 9% 1 4% 0 0% 6 27%
reclaim_comp layoutget getdevinfo layoutcommit layoutreturn getdevlist
0 0% 1 4% 0 0% 0 0% 0 0% 0 0%
(null)
1 4%
```
- Total request (call) count: 22
- No write requests yet
(2) Save a file
- Create a file in the EFS mount directory (/mnt/myefs); the file (memo.txt) is created on the remote NFS storage
```bash
(eks-user:default) [root@operator-host ~]# echo "EKS Workshop" > /mnt/myefs/memo.txt
```
(3) Check NFS stats after saving the file
```bash
(eks-user:default) [root@operator-host ~]# nfsstat
```
✅ Output
```
Client rpc stats:
calls retrans authrefrsh
27 0 27
Client nfs v4:
null read write commit open open_conf
1 3% 0 0% 1 3% 0 0% 1 3% 0 0%
open_noat open_dgrd close setattr fsinfo renew
0 0% 0 0% 1 3% 0 0% 2 7% 0 0%
setclntid confirm lock lockt locku access
0 0% 0 0% 0 0% 0 0% 0 0% 1 3%
getattr lookup lookup_root remove rename link
2 7% 0 0% 1 3% 0 0% 0 0% 0 0%
symlink create pathconf statfs readlink readdir
0 0% 0 0% 1 3% 1 3% 0 0% 0 0%
server_caps delegreturn getacl setacl fs_locations rel_lkowner
3 11% 0 0% 0 0% 0 0% 0 0% 0 0%
secinfo exchange_id create_ses destroy_ses sequence get_lease_t
0 0% 0 0% 2 7% 1 3% 0 0% 7 25%
reclaim_comp layoutget getdevinfo layoutcommit layoutreturn getdevlist
0 0% 1 3% 0 0% 0 0% 0 0% 0 0%
(null)
1 3%
```
- Total request (call) count: 27
- One write request was recorded
15. Check the saved file
(1) List the saved files
```bash
(eks-user:default) [root@operator-host ~]# ls -l /mnt/myefs
```
✅ Output
```
total 4
-rw-r--r-- 1 root root 13 Feb 18 14:09 memo.txt
```
(2) Check the file contents
```bash
(eks-user:default) [root@operator-host ~]# cat /mnt/myefs/memo.txt
```
✅ Output
```
EKS Workshop
```
16. Configure automatic EFS mounting (persists across EC2 reboots)
(1) Check the current /etc/fstab
- Review the existing mount entries
```bash
(eks-user:default) [root@operator-host ~]# cat /etc/fstab
```
✅ Output
```
UUID=43b4f483-987f-429f-ad61-9e2993518248 / xfs defaults,noatime 1 1
```
(2) Edit /etc/fstab (add the EFS auto-mount entry)
- Append the EFS mount entry at the bottom of the file
```
192.168.1.145:/ /mnt/myefs nfs4 defaults,_netdev,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0
```
(3) Apply the change
- Apply the edited /etc/fstab immediately
```bash
(eks-user:default) [root@operator-host ~]# sudo mount -a
```
(4) Verify the mount
- Confirm the mount was applied correctly
```bash
(eks-user:default) [root@operator-host ~]# df -hT
```
✅ Output
```
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 981M 0 981M 0% /dev
tmpfs tmpfs 990M 0 990M 0% /dev/shm
tmpfs tmpfs 990M 432K 989M 1% /run
tmpfs tmpfs 990M 0 990M 0% /sys/fs/cgroup
/dev/xvda1 xfs 30G 3.0G 28G 10% /
tmpfs tmpfs 198M 0 198M 0% /run/user/1000
192.168.1.145:/ nfs4 8.0E 0 8.0E 0% /mnt/myefs
```
🗒️ Convenience Settings After EKS Deployment
1. Background for automatic environment variables
- In earlier labs, environment variables had to be set manually every time, which was inconvenient
- Configure them to be set automatically whenever a new terminal window is opened
2. Set environment variables (append to ~/.bashrc)
- Get the Route 53 domain and hosted zone ID
```bash
MyDomain=gagajin.com
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "$MyDomain." --query "HostedZones[0].Id" --output text)
```
- Append the variables to ~/.bashrc so they persist in new terminals
```bash
cat << EOF >> ~/.bashrc
# eksworkshop
export CLUSTER_NAME=myeks
export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" --query 'Vpcs[*].VpcId' --output text)
export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet3" --query "Subnets[0].[SubnetId]" --output text)
export N1=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node" "Name=availability-zone,Values=ap-northeast-2a" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N2=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node" "Name=availability-zone,Values=ap-northeast-2b" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N3=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node" "Name=availability-zone,Values=ap-northeast-2c" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
MyDomain=gagajin.com # enter your own domain name
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "$MyDomain." --query "HostedZones[0].Id" --output text)
EOF
```
3. Verify the environment variables
- Check the variables in a new terminal
```bash
echo $CLUSTER_NAME $VPCID $PubSubnet1 $PubSubnet2 $PubSubnet3
echo $N1 $N2 $N3 $MyDomain $MyDnzHostedZoneId
```
✅ Output
```
myeks vpc-0e32b5a6653acdcd9 subnet-0fed28a1b3e108719 subnet-0e4fb63cb543698fe subnet-0861bd68771150000
3.35.47.226 43.203.131.45 15.164.49.232 gagajin.com /hostedzone/Z099663315X74TRCYB7J5
```
- Confirm the variables were appended to ~/.bashrc correctly
✅ Output
```
# eksworkshop
export CLUSTER_NAME=myeks
export VPCID=vpc-0e32b5a6653acdcd9
export PubSubnet1=subnet-0fed28a1b3e108719
export PubSubnet2=subnet-0e4fb63cb543698fe
export PubSubnet3=subnet-0861bd68771150000
export N1=3.35.47.226
export N2=43.203.131.45
export N3=15.164.49.232
MyDomain=gagajin.com # enter your own domain name
MyDnzHostedZoneId=/hostedzone/Z099663315X74TRCYB7J5
```
4. Remove the variables after the lab
- New terminal windows get the variables automatically, which is convenient
- When the lab is finished, delete these lines from ~/.bashrc
🌐 Installing AWS LoadBalancerController, ExternalDNS, and kube-ops-view
1. Add and update Helm repositories
```bash
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm repo add eks https://aws.github.io/eks-charts
helm repo update
```
✅ Output
```
"geek-cookbook" already exists with the same configuration, skipping
"eks" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "eks" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "geek-cookbook" chart repository
Update Complete. ⎈Happy Helming!⎈
```
2. Install kube-ops-view
```bash
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=ClusterIP --set env.TZ="Asia/Seoul" --namespace kube-system
```
✅ Output
```
NAME: kube-ops-view
LAST DEPLOYED: Tue Feb 18 14:43:54 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kube-system -l "app.kubernetes.io/name=kube-ops-view,app.kubernetes.io/instance=kube-ops-view" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:8080
```
3. Deploy the AWS LoadBalancerController
```bash
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
--set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
```
✅ Output
```
NAME: aws-load-balancer-controller
LAST DEPLOYED: Tue Feb 18 14:46:39 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
```
4. Install the ExternalDNS controller
Links the domain with the Route 53 hosted zone to manage DNS records automatically.
```bash
curl -s https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml | MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst | kubectl apply -f -
```
✅ Output
```
serviceaccount/external-dns created
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created
```
5. Issue a certificate for HTTPS (AWS Certificate Manager)
(1) Request a domain certificate
After requesting, click "Create records in Route 53" to add the CNAME record; the CNAME record is added to Route 53 automatically. The step is complete when the certificate status changes to "Issued". (A hedged CLI alternative is sketched below.)
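The console flow above can also be reproduced from the CLI; a minimal sketch — the wildcard domain name is an assumption, use whatever names the console request covered:
```bash
# Request a DNS-validated ACM certificate (wildcard domain is an assumption)
aws acm request-certificate \
  --domain-name "*.$MyDomain" \
  --validation-method DNS \
  --region ap-northeast-2
# The CNAME validation record must still exist in Route 53 before the status becomes "Issued"
```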
(2) Check the certificate ARN
```bash
CERT_ARN=$(aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text)
echo $CERT_ARN
```
✅ Output
```
arn:aws:acm:ap-northeast-2:378102432899:certificate/f967e8ca-f0b5-471d-bbe4-bee231aeb32b
```
6. Configure an ALB-based Ingress and expose kube-ops-view
(1) Deploy the Ingress
- Adding an ALB group name lets multiple Ingresses share the same ALB (see the sketch after this block)
- Deploy the Ingress to serve the domain (kubeopsview.$MyDomain) over HTTPS
```bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app.kubernetes.io/name: kubeopsview
  name: kubeopsview
  namespace: kube-system
spec:
  ingressClassName: alb
  rules:
    - host: kubeopsview.$MyDomain
      http:
        paths:
          - backend:
              service:
                name: kube-ops-view
                port:
                  number: 8080
            path: /
            pathType: Prefix
EOF
# Result
ingress.networking.k8s.io/kubeopsview created
```
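To illustrate the group.name annotation above: any other Ingress carrying the same group.name: study annotation is attached to the same myeks-ingress-alb instead of provisioning a new load balancer. A hypothetical second Ingress sketch (the app2 names and host are made up for illustration):
```yaml
# Hypothetical second Ingress that reuses the same ALB via group.name: study
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2
  annotations:
    alb.ingress.kubernetes.io/group.name: study   # same group -> same ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app2.example.com    # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2      # hypothetical service
                port:
                  number: 80
```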
(2) Check the deployed pods
```bash
kubectl get pods -n kube-system
```
✅ Output
```
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-554fbd9d-vk2p8 1/1 Running 0 74m
aws-load-balancer-controller-554fbd9d-xnx6r 1/1 Running 0 74m
aws-node-rf9bf 2/2 Running 0 4h14m
aws-node-tbbhl 2/2 Running 0 4h14m
aws-node-xb7dt 2/2 Running 0 4h14m
coredns-86f5954566-mskq6 1/1 Running 0 4h20m
coredns-86f5954566-wxwqw 1/1 Running 0 4h20m
external-dns-dc4878f5f-mvnt9 1/1 Running 0 72m
kube-ops-view-657dbc6cd8-fgbqc 1/1 Running 0 77m
kube-proxy-6bc4m 1/1 Running 0 4h14m
kube-proxy-qsd8t 1/1 Running 0 4h14m
kube-proxy-rvw86 1/1 Running 0 4h14m
metrics-server-6bf5998d9c-nt4ks 1/1 Running 0 4h20m
metrics-server-6bf5998d9c-prz6f 1/1 Running 0 4h20m
```
(3) Check Service, Endpoints, and Ingress info
```bash
kubectl get ingress,svc,ep -n kube-system
```
✅ Output
```
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/kubeopsview alb kubeopsview.gagajin.com myeks-ingress-alb-60898722.ap-northeast-2.elb.amazonaws.com 80 100s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/aws-load-balancer-webhook-service ClusterIP 10.100.118.159 <none> 443/TCP 75m
service/eks-extension-metrics-api ClusterIP 10.100.124.121 <none> 443/TCP 4h24m
service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP,9153/TCP 4h20m
service/kube-ops-view ClusterIP 10.100.136.90 <none> 8080/TCP 77m
service/metrics-server ClusterIP 10.100.110.47 <none> 443/TCP 4h20m
NAME ENDPOINTS AGE
endpoints/aws-load-balancer-webhook-service 192.168.1.218:9443,192.168.2.198:9443 75m
endpoints/eks-extension-metrics-api 172.0.32.0:10443 4h24m
endpoints/kube-dns 192.168.3.104:53,192.168.3.131:53,192.168.3.104:53 + 3 more... 4h20m
endpoints/kube-ops-view 192.168.1.204:8080 77m
endpoints/metrics-server 192.168.3.152:10251,192.168.3.77:10251 4h20m
```
(4) Check the kube-ops-view access URL
```bash
echo -e "Kube Ops View URL = https://kubeopsview.$MyDomain/#scale=1.5"
```
✅ Output
```
Kube Ops View URL = https://kubeopsview.gagajin.com/#scale=1.5
```
(5) Access result
📦 Data Persistence Test with the Pod's Default Storage
1. Check state before deploying the EBS controller
- Before the EBS controller is deployed, kubectl describe csinodes shows no Spec information
- After the EBS controller is deployed, the related info appears on each node
```bash
kubectl describe csinodes
```
✅ Output
```
Name: ip-192-168-1-207.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:46 +0900
Spec:
Events: <none>
Name: ip-192-168-2-84.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:49 +0900
Spec:
Events: <none>
Name: ip-192-168-3-80.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:42 +0900
Spec:
Events: <none>
```
2. Prepare the terminal environment
- Open terminal windows on the operations server and the local PC for monitoring
```bash
ssh -i kp-aews.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
# Result
Last login: Tue Feb 18 16:22:43 2025 from 182.230.60.93
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2026-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
Last login: Tue Feb 18 16:22:43 KST 2025 on pts/0
(eks-user:default) [root@operator-host ~]#
```
- Watch Pod status in real time
```bash
(eks-user:default) [root@operator-host ~]# kubectl get pod -w
```
3. Deploy a Redis Pod
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  terminationGracePeriodSeconds: 0
  containers:
    - name: redis
      image: redis
EOF
```
4. Store and check data inside the Redis Pod
(1) Check the Redis Pod's default working directory (/data)
```bash
kubectl exec -it redis -- pwd
# Result
/data
```
(2) Create a file in the Pod's /data directory
```bash
kubectl exec -it redis -- sh -c "echo hello > /data/hello.txt"
```
(3) Check the file contents
```bash
kubectl exec -it redis -- cat /data/hello.txt
# Result
hello
```
5. Pod restart and data-loss test
(1) Install ps
```bash
kubectl exec -it redis -- sh -c "apt update && apt install procps -y"
Get:1 http://deb.debian.org/debian bookworm InRelease [151 kB]
Get:2 http://deb.debian.org/debian bookworm-updates InRelease [55.4 kB]
Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8792 kB]
Get:5 http://deb.debian.org/debian bookworm-updates/main amd64 Packages [13.5 kB]
Get:6 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [246 kB]
Fetched 9306 kB in 1s (6437 kB/s)
Reading package lists... Done
Building dependency tree... Done
....
```
(2) Kill the Redis container process (kill 1)
- Check the processes running inside the Pod (redis-server runs as PID 1)
```bash
kubectl exec -it redis -- ps aux
```
✅ Output
```
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
redis 1 0.2 0.2 143876 10436 ? Ssl 07:27 0:01 redis-server
root 227 0.0 0.1 8088 4100 pts/0 Rs+ 07:34 0:00 ps aux
```
- Kill PID 1 (the main process)
```bash
kubectl exec -it redis -- kill 1
```
(3) Watch the Pod status change
```bash
(eks-user:default) [root@operator-host ~]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
redis 0/1 Pending 0 0s
redis 0/1 Pending 0 0s
redis 0/1 ContainerCreating 0 0s
redis 1/1 Running 0 9s
redis 0/1 Completed 0 9m2s
redis 1/1 Running 1 (3s ago) 9m4s
```
- The Pod is not deleted; only the container restarts
- The container's restart count increases (RESTARTS: 1); a quick check of why follows
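This behavior comes from the Pod's restart policy, which defaults to Always:
```bash
# The kubelet restarts the container because restartPolicy defaults to Always
kubectl get pod redis -o jsonpath='{.spec.restartPolicy}'
# Expected: Always
```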
(4) Check the Redis Pod's restart details
```bash
kubectl describe pod redis
```
✅ Output
```
Name: redis
Namespace: default
Priority: 0
Service Account: default
Node: ip-192-168-1-207.ap-northeast-2.compute.internal/192.168.1.207
Start Time: Tue, 18 Feb 2025 16:27:11 +0900
Labels: <none>
Annotations: <none>
Status: Running
IP: 192.168.1.234
IPs:
IP: 192.168.1.234
Containers:
redis:
Container ID: containerd://c76cf2a72506effed078046cd25d30bcfed857b594b228071541967ade2058c7
Image: redis
Image ID: docker.io/library/redis@sha256:93a8d83b707d0d6a1b9186edecca2e37f83722ae0e398aee4eea0ff17c2fad0e
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 18 Feb 2025 16:36:14 +0900
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 18 Feb 2025 16:27:19 +0900
Finished: Tue, 18 Feb 2025 16:36:12 +0900
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dr52g (ro)
...
|
- Redis ์ปจํ์ด๋๊ฐ โCompletedโ ์ํ๋ก ์ข๋ฃ๋จ โ Kubernetes๊ฐ ์๋์ผ๋ก ์ฌ์์
- Pod ์์ฒด๋ ์ ์ง๋์ง๋ง ์ปจํ์ด๋ ๋ด๋ถ ํ๋ก์ธ์ค๊ฐ ์ฌ์์๋จ
(5) Redis Pod ์ฌ์์ ํ ๋ฐ์ดํฐ ์ ์ค ํ์ธ
- ๊ธฐ์กด์ ์ ์ฅํ ํ์ผ ํ์ธ
| kubectl exec -it redis -- cat /data/redis/hello.txt
|
โ ์ถ๋ ฅ
| cat: /data/redis/hello.txt: No such file or directory
command terminated with exit code 1
|
- Pod๊ฐ ์ญ์ ๋์ง ์์์ง๋ง, ์ปจํ์ด๋ ์ฌ์์์ผ๋ก ์ธํด ๋ด๋ถ ์ ์ฅ ๋ฐ์ดํฐ๊ฐ ์ ์ค๋จ
- ๊ธฐ๋ณธ์ ์ผ๋ก Pod ๋ด๋ถ์ ํ์ผ ์์คํ์ ํ๋ฐ์ฑ์ด๋ฉฐ, ์ฌ์์ ํ์๋ ๋ฐ์ดํฐ๋ฅผ ์ ์งํ๋ ค๋ฉด ๋ณผ๋ฅจ ์ค์ ์ด ํ์
6. Redis Pod ์ญ์
| kubectl delete pod redis
# ๊ฒฐ๊ณผ
pod "redis" deleted
|
๐๏ธ emptyDir๋ฅผ ํ์ฉํ Pod ๋ฐ์ดํฐ ์ ์ง ํ์คํธ
1. emptyDir ๋ณผ๋ฅจ์ ํ์ฉํ Redis Pod ๋ฐฐํฌ
- Pod ๋ด์์ emptyDir ๋ณผ๋ฅจ์ ์์ฑ ํ /data/redis ๊ฒฝ๋ก์ ๋ง์ดํธ
- Pod๊ฐ ์ฌ์์๋๋๋ผ๋ emptyDir ๋ณผ๋ฅจ์ ์ ์ง๋จ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
terminationGracePeriodSeconds: 0
containers:
- name: redis
image: redis
volumeMounts:
- name: redis-storage
mountPath: /data/redis
volumes:
- name: redis-storage
emptyDir: {}
EOF
# ๊ฒฐ๊ณผ
pod/redis created
|
2. Pod ๋ด๋ถ ๋ฐ์ดํฐ ์ ์ฅ ๋ฐ ํ์ธ
(1) Pod ๋ด๋ถ์์ ํ์ผ ์์ฑ
| kubectl exec -it redis -- sh -c "echo hello > /data/redis/hello.txt"
|
(2) ํ์ผ ๋ด์ฉ ํ์ธ
| kubectl exec -it redis -- cat /data/redis/hello.txt
|
โ ์ถ๋ ฅ
3. Pod ์ฌ์์ ๋ฐ ๋ฐ์ดํฐ ์ ์ง ํ์ธ
(1) ps ์ค์น
| kubectl exec -it redis -- sh -c "apt update && apt install procps -y"
Get:1 http://deb.debian.org/debian bookworm InRelease [151 kB]
Get:2 http://deb.debian.org/debian bookworm-updates InRelease [55.4 kB]
Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8792 kB]
Get:5 http://deb.debian.org/debian bookworm-updates/main amd64 Packages [13.5 kB]
Get:6 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [246 kB]
Fetched 9306 kB in 2s (6071 kB/s)
Reading package lists... Done
Building dependency tree... Done
...
|
(2) Redis ์ปจํ์ด๋ ํ๋ก์ธ์ค ์ข๋ฃ (kill 1)
- Pod ๋ด์์ ์คํ ์ค์ธ ํ๋ก์ธ์ค ํ์ธ
| kubectl exec -it redis -- ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
redis 1 0.2 0.2 143876 10416 ? Ssl 07:50 0:00 redis
root 228 33.3 0.1 8088 4000 pts/0 Rs+ 07:53 0:00 ps au
|
- Redis ํ๋ก์ธ์ค ์ข๋ฃ (์ปจํ์ด๋ ์ฌ์์ ์ ๋)
| kubectl exec -it redis -- kill 1
|
(3) Pod ์ํ ํ์ธ
โ ์ถ๋ ฅ
| NAME READY STATUS RESTARTS AGE
redis 1/1 Running 1 (17s ago) 4m52s
|
(4) ํ์ผ ๋ฐ์ดํฐ ์ ์ง ์ฌ๋ถ ํ์ธ
| kubectl exec -it redis -- ls -l
|
โ ์ถ๋ ฅ
| total 0
drwxrwxrwx. 2 redis root 23 Feb 18 07:51 redis
|
| kubectl exec -it redis -- cat /data/redis/hello.txt
|
โ ์ถ๋ ฅ
- Pod๊ฐ ์ฌ์์๋๋๋ผ๋ emptyDir ๋ณผ๋ฅจ์ ์ ์ง๋จ
- ์ปจํ์ด๋๊ฐ ์ฌ์์๋๋๋ผ๋ ๋ฐ์ดํฐ๋ ๋ณด์กด๋จ
4. Pod ์ญ์  ๋ฐ ๋ฐ์ดํฐ ์ ์ค ํ์คํธ
(1) Redis Pod ์ญ์
| kubectl delete pod redis
# ๊ฒฐ๊ณผ
pod "redis" deleted
|
โ ์ถ๋ ฅ
| No resources found in default namespace.
|
(2) ์๋ก์ด Redis Pod ์ฌ๋ฐฐํฌ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
terminationGracePeriodSeconds: 0
containers:
- name: redis
image: redis
volumeMounts:
- name: redis-storage
mountPath: /data/redis
volumes:
- name: redis-storage
emptyDir: {}
EOF
# ๊ฒฐ๊ณผ
pod/redis created
|
(3) ํ์ผ ๋ฐ์ดํฐ ํ์ธ
| kubectl exec -it redis -- ls -l /data/redis
|
โ ์ถ๋ ฅ
| kubectl exec -it redis -- cat /data/redis/hello.txt
|
โ ์ถ๋ ฅ
| cat: /data/redis/hello.txt: No such file or directory
command terminated with exit code 1
|
- Pod๋ฅผ ์ญ์ ํ๋ฉด emptyDir ๋ณผ๋ฅจ๋ ํจ๊ป ์ญ์ ๋จ
- ์๋ก์ด Pod๊ฐ ์์ฑ๋๋ฉด ์๋ก์ด emptyDir ๋ณผ๋ฅจ์ด ์์ฑ๋๋ฏ๋ก ๊ธฐ์กด ๋ฐ์ดํฐ๋ ์ ์ง๋์ง ์์ (emptyDir ์ถ๊ฐ ์ค์ ์ ์๋ ์ค์ผ์น ์ฐธ๊ณ )
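์ฐธ๊ณ ๋ก emptyDir์๋ ํฌ๊ธฐ ์ ํ๊ณผ ๋ฉ๋ชจ๋ฆฌ(tmpfs) ๊ธฐ๋ฐ ์ ์ฅ์ ์ง์ ํ  ์ ์๋ค. ์๋๋ ํด๋น ์ค์ ๋ง ๋ฐ์ทํ ๊ฐ๋จํ ์ค์ผ์น์ด๋ฉฐ, ๋ณธ ์ค์ต์์ ์ฌ์ฉํ ๊ฐ์ ์๋๋ค.
| volumes:
- name: redis-storage
  emptyDir:
    medium: Memory    # ๋ธ๋ ๋ฉ๋ชจ๋ฆฌ(tmpfs)์ ์ ์ฅ
    sizeLimit: 128Mi  # ํ๋ ์ด๊ณผ ์ Pod๊ฐ ์ถ์ถ(evict)๋  ์ ์์
|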
5. Redis Pod ์ญ์
| kubectl delete pod redis
# ๊ฒฐ๊ณผ
pod "redis" deleted
|
๐ EKS์์ Local Path Provisioner๋ฅผ ํ์ฉํ ๋์ ๋ณผ๋ฅจ ๊ด๋ฆฌ
1. Pod ์ ์ฅ์์ ๋ณผ๋ฅจ ๋ผ์ดํ์ฌ์ดํด ๊ฐ๋
- Pod ์ญ์ ์ ๋ด๋ถ ๋ฐ์ดํฐ๋ ํจ๊ป ์ญ์ ๋จ
- ๋ฐ์ดํฐ ๋ณด์กด์ ์ํด Persistent Volume(PV) ์ฌ์ฉ ํ์
- PV๋ฅผ ํ์ฉํ์ฌ Pod์ ๋ผ์ดํ์ฌ์ดํด๊ณผ ๋ฐ์ดํฐ ์ ์ฅ์๋ฅผ ๋ถ๋ฆฌํด ์ง์์ ์ธ ๋ฐ์ดํฐ ์ ์ง ๊ฐ๋ฅ
- hostPath๋ฅผ ์ฌ์ฉํ์ฌ ๋ธ๋์ ํน์  ๋๋ ํ ๋ฆฌ์ ๋ฐ์ดํฐ๋ฅผ ์ ์ฅํ๋ ๋ฐฉ๋ฒ ์ค์ต (์๋ ์ค์ผ์น ์ฐธ๊ณ )
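์ฐธ๊ณ ๋ก hostPath ๋ฐฉ์์ ์ ์  PV๋ ๋๋ต ์๋์ ๊ฐ์ ํํ๋ค. ์ด๋ฆ๊ณผ ๊ฒฝ๋ก๋ ์์์ด๋ฉฐ, ์ค์ต์์๋ ์๋์ Local Path Provisioner๊ฐ ์ด ๊ณผ์ ์ ์๋ํํ๋ค.
| apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv        # ๊ฐ์์ ์ด๋ฆ
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/hostpath   # ๋ธ๋ ๋ก์ปฌ ๊ฒฝ๋ก(๊ฐ์ )
    type: DirectoryOrCreate
|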
2. Local Path Provisioner ์ค์น
- ๋ธ๋์ ํน์  ๋๋ ํ ๋ฆฌ(/opt/local-path-provisioner ๋ฑ)๋ฅผ ๋์ ์ผ๋ก ๋ง์ดํธํ์ฌ ์ ์ฅ์๋ก ํ์ฉ
- StorageClass๋ฅผ ์ฌ์ฉํด PVC ์์ฒญ ์ ์๋์ผ๋ก PV๋ฅผ ์์ฑ ๋ฐ ํ ๋น
| kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
# ๊ฒฐ๊ณผ
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
role.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
rolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
|
3. Local Path Provisioner ๊ตฌ์ฑ ํ์ธ
| kubectl get-all -n local-path-storage
|
โ ์ถ๋ ฅ
| NAME NAMESPACE AGE
configmap/kube-root-ca.crt local-path-storage 2m40s
configmap/local-path-config local-path-storage 2m40s
pod/local-path-provisioner-84967477f-g6xvh local-path-storage 2m40s
serviceaccount/default local-path-storage 2m40s
serviceaccount/local-path-provisioner-service-account local-path-storage 2m40s
deployment.apps/local-path-provisioner local-path-storage 2m40s
replicaset.apps/local-path-provisioner-84967477f local-path-storage 2m40s
rolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind local-path-storage 2m40s
role.rbac.authorization.k8s.io/local-path-provisioner-role local-path-storage 2m40s
|
| kubectl get pod -n local-path-storage -owide
|
โ ์ถ๋ ฅ
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
local-path-provisioner-84967477f-g6xvh 1/1 Running 0 3m14s 192.168.2.14 ip-192-168-2-84.ap-northeast-2.compute.internal <none> <none>
|
4. ConfigMap ์ค์ ํ์ธ
PV๊ฐ ๋์ ์ผ๋ก ์์ฑ๋ ๊ฒฝ๋ก๋ฅผ ์ง์ ํ๋ ConfigMap ํ์ธ
| kubectl describe cm -n local-path-storage local-path-config
|
โ ์ถ๋ ฅ (config.json)
| config.json:
----
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}
|
- /opt/local-path-provisioner ํ์์ ๋์  ๋๋ ํ ๋ฆฌ ์์ฑ ๋ฐ ๊ด๋ฆฌ
- PVC ์์ฒญ ์ ํด๋น ๊ฒฝ๋ก์ PV๊ฐ ์๋ ์์ฑ๋จ (๋ธ๋๋ณ ๊ฒฝ๋ก ์ง์  ์์๋ ์๋ ์ฐธ๊ณ )
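์ฐธ๊ณ ๋ก nodePathMap์๋ ๋ธ๋๋ณ๋ก ๋ค๋ฅธ ๊ฒฝ๋ก๋ฅผ ์ง์ ํ  ์ ์๋ค. ์๋๋ ํน์  ๋ธ๋์๋ง ๋ณ๋ ๊ฒฝ๋ก๋ฅผ ์ฃผ๋ ์ค์  ์ค์ผ์น๋ค(/data/local-path ๊ฒฝ๋ก๋ ์์).
| {
  "nodePathMap": [
    { "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES", "paths": ["/opt/local-path-provisioner"] },
    { "node": "ip-192-168-1-207.ap-northeast-2.compute.internal", "paths": ["/data/local-path"] }
  ]
}
|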
โ ์ถ๋ ฅ (helperPod.yaml)
๋๋ ํ ๋ฆฌ ์์ฑ ๋ฐ ์ญ์ ๋ฅผ ๊ด๋ฆฌํ๋ Pod ์ค์
| helperPod.yaml:
----
apiVersion: v1
kind: Pod
metadata:
name: helper-pod
spec:
priorityClassName: system-node-critical
tolerations:
- key: node.kubernetes.io/disk-pressure
operator: Exists
effect: NoSchedule
containers:
- name: helper-pod
image: busybox
imagePullPolicy: IfNotPresent
setup:
----
#!/bin/sh
set -eu
mkdir -m 0777 -p "$VOL_DIR"
teardown:
----
#!/bin/sh
set -eu
rm -rf "$VOL_DIR"
BinaryData
====
|
- helperPod๋ฅผ ํตํด ๋์  ๋๋ ํ ๋ฆฌ ์์ฑ (setup ์คํฌ๋ฆฝํธ)
- PV ์ญ์  ์ ํด๋น ๋๋ ํ ๋ฆฌ๋ ์ ๊ฑฐ (teardown ์คํฌ๋ฆฝํธ)
5. StorageClass ํ์ธ
| kubectl get sc local-path
|
โ ์ถ๋ ฅ
| NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path rancher.io/local-path Delete WaitForFirstConsumer false 5m28s
|
6. PVC(Persistent Volume Claim) ์์ฑ ๋ฐ ์ํ ํ์ธ
(1) PVC ์ํ ๋ชจ๋ํฐ๋ง
| (eks-user:default) [root@operator-host ~]# watch -d kubectl get pv,pvc,pod
|
โ ์ถ๋ ฅ
| Every 2.0s: kubectl get pv,pvc,pod Tue Feb 18 17:18:32 2025
No resources found
|
(2) PVC ์์ฑ์ ์ํ ํ๊ฒฝ ํ์ธ
โ ์ถ๋ ฅ
| NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-load-balancer-controller-554fbd9d-vk2p8 1/1 Running 0 158m
kube-system aws-load-balancer-controller-554fbd9d-xnx6r 1/1 Running 0 158m
kube-system aws-node-rf9bf 2/2 Running 0 5h38m
kube-system aws-node-tbbhl 2/2 Running 0 5h38m
kube-system aws-node-xb7dt 2/2 Running 0 5h38m
kube-system coredns-86f5954566-mskq6 1/1 Running 0 5h44m
kube-system coredns-86f5954566-wxwqw 1/1 Running 0 5h44m
kube-system external-dns-dc4878f5f-mvnt9 1/1 Running 0 156m
kube-system kube-ops-view-657dbc6cd8-fgbqc 1/1 Running 0 161m
kube-system kube-proxy-6bc4m 1/1 Running 0 5h38m
kube-system kube-proxy-qsd8t 1/1 Running 0 5h38m
kube-system kube-proxy-rvw86 1/1 Running 0 5h38m
kube-system metrics-server-6bf5998d9c-nt4ks 1/1 Running 0 5h44m
kube-system metrics-server-6bf5998d9c-prz6f 1/1 Running 0 5h44m
local-path-storage local-path-provisioner-84967477f-g6xvh 1/1 Running 0 16m
|
- local-path-provisioner Pod๊ฐ ๋ฐฐํฌ๋์ด ์์ด์ผ PVC๋ฅผ ๋์ ์ผ๋ก ๊ด๋ฆฌํ  ์ ์์
(3) PVC(Persistent Volume Claim) ์์ฑ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: localpath-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 1Gi
EOF
# ๊ฒฐ๊ณผ
persistentvolumeclaim/localpath-claim created
|
(4) PVC ์ํ ํ์ธ
- PVC๊ฐ Pending ์ํ
- PVC๊ฐ ์์ง ๋ฐ์ธ๋ฉ๋์ง ์์์ผ๋ฉฐ, Pod๊ฐ ์์ฑ๋๋ฉด ๋ฐ์ธ๋ฉ๋จ
| Every 2.0s: kubectl get pv,pvc,pod Tue Feb 18 17:28:31 2025
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/localpath-claim Pending local-path <unset> 31s
|
(5) StorageClass ํ์ธ
- StorageClass์ VOLUMEBINDINGMODE ๊ฐ์ด WaitForFirstConsumer ์ํ
- PVC๊ฐ Pod์ ์ํด ์์ฒญ๋  ๋๊น์ง PV๊ฐ ์๋ ์์ฑ๋์ง ์์
โ ์ถ๋ ฅ
| NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 5h52m
local-path rancher.io/local-path Delete WaitForFirstConsumer false 20m
|
(6) PVC ์์ธ ์ ๋ณด ํ์ธ
- PVC๊ฐ ๋ฐ์ธ๋ฉ๋์ง ์์ ์ด์ ๋ฐ ์ด๋ฒคํธ ํ์ธ
- Pod ์์ฑ ์ ๊น์ง PVC๋ WaitForFirstConsumer ์ํ
โ ์ถ๋ ฅ
| Name: localpath-claim
Namespace: default
StorageClass: local-path
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 6s (x15 over 3m25s) persistentvolume-controller waiting for first consumer to be created before binding
|
(7) Pod ์์ฑ ๋ฐ PVC ๋ฐ์ธ๋ฉ
- PVC๋ฅผ ์ฌ์ฉํ๋ Pod ์์ฑ
- Pod์์ localpath-claim PVC๋ฅผ /data ๊ฒฝ๋ก์ ๋ง์ดํธ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
terminationGracePeriodSeconds: 3
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: localpath-claim
EOF
# ๊ฒฐ๊ณผ
pod/app created
|
- Pod๊ฐ ์์ฑ๋๋ฉด PVC๊ฐ ์๋์ผ๋ก PV์ ๋ฐ์ธ๋ฉ๋จ
(8) PVC ๋ฐ PV ๋ฐ์ธ๋ฉ ํ์ธ
- PVC ์ํ๊ฐ Bound๋ก ๋ณ๊ฒฝ๋จ
- PV๊ฐ ์๋ ์์ฑ๋์ด PVC์ ์ฐ๊ฒฐ๋จ
โ ์ถ๋ ฅ
| NAME READY STATUS RESTARTS AGE
pod/app 1/1 Running 0 109s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018 1Gi RWO Delete Bound default/localpath-claim local-path <unset> 101s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/localpath-claim Bound pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018 1Gi RWO local-path <unset> 11m
|
(9) PV ์์ธ ์ ๋ณด ํ์ธ
- PV๋ ํน์  ๋ธ๋(192.168.1.207)์ ๋ฐ์ธ๋ฉ๋จ (Node Affinity ์ค์ )
- ์ ์ฅ์๋ HostPath๋ฅผ ์ฌ์ฉํ์ฌ ๋ธ๋์ ํน์  ๊ฒฝ๋ก(/opt/local-path-provisioner/...)์ ์ ์ฅ๋จ
- Pod๊ฐ ๋ค๋ฅธ ๋ธ๋๋ก ์ด๋ํ๋ฉด ๊ธฐ์กด PV๋ฅผ ์ฌ์ฉํ  ์ ์์
โ ์ถ๋ ฅ
| Name: pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018
Labels: <none>
Annotations: local.path.provisioner/selected-node: ip-192-168-1-207.ap-northeast-2.compute.internal
pv.kubernetes.io/provisioned-by: rancher.io/local-path
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-path
Status: Bound
Claim: default/localpath-claim
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [ip-192-168-1-207.ap-northeast-2.compute.internal]
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /opt/local-path-provisioner/pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018_default_localpath-claim
HostPathType: DirectoryOrCreate
Events: <none>
|
7. Pod ๋ด ๋ฐ์ดํฐ ์ ์ฅ ๋ฐ ์์์ฑ ํ์คํธ
(1) ๋ฐ์ดํฐ ์ ์ฅ ํ์ธ
- Pod๋ 5์ด๋ง๋ค /data/out.txt์ ํ์์คํฌํ ๊ธฐ๋ก
- ๋ฐ์ดํฐ๊ฐ ์ ์์ ์ผ๋ก ์ ์ฅ๋๋์ง ํ์ธ
| kubectl exec -it app -- tail -f /data/out.txt
|
โ ์ถ๋ ฅ
| Tue Feb 18 08:48:00 UTC 2025
Tue Feb 18 08:48:05 UTC 2025
Tue Feb 18 08:48:10 UTC 2025
Tue Feb 18 08:48:15 UTC 2025
Tue Feb 18 08:48:20 UTC 2025
Tue Feb 18 08:48:25 UTC 2025
Tue Feb 18 08:48:30 UTC 2025
Tue Feb 18 08:48:35 UTC 2025
Tue Feb 18 08:48:40 UTC 2025
Tue Feb 18 08:48:45 UTC 2025
...
|
(2) ๋ฐ์ดํฐ ์ ์ฅ ์์น ํ์ธ
- ํ์ฌ ํด๋ฌ์คํฐ์ 3๊ฐ์ ์์ปค ๋ธ๋ ์กด์ฌ
- ๊ฐ ๋ธ๋์ /opt/local-path-provisioner ๊ฒฝ๋ก๋ฅผ ํ์ธํ์ฌ ๋ฐ์ดํฐ ์ ์ฅ ์ฌ๋ถ ํ์ธ
| for node in $N1 $N2 $N3; do ssh ec2-user@$node tree /opt/local-path-provisioner; done
|
โ ์ถ๋ ฅ
| /opt/local-path-provisioner
โโโ pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018_default_localpath-claim
โโโ out.txt
1 directory, 1 file
/opt/local-path-provisioner [error opening dir]
0 directories, 0 files
/opt/local-path-provisioner [error opening dir]
0 directories, 0 files
|
- Pod๊ฐ ๋ฐฐํฌ๋ ๋ธ๋์์๋ง ๋ฐ์ดํฐ๊ฐ ์ ์ฅ๋จ
- ๋ค๋ฅธ ๋ธ๋์์๋ ๋ฐ์ดํฐ๊ฐ ์กด์ฌํ์ง ์์
(3) ๋ธ๋ ์ง์  ์ ๊ทผํ์ฌ ๋ฐ์ดํฐ ํ์ธ
- ์๋ฒ์ ์ง์  ์ ๊ทผํ์ฌ out.txt ํ์ผ ๋ด์ฉ ํ์ธ
- Pod ๋ด์์ ํ์ธํ๋ ๊ฒ๊ณผ ๋์ผํ ๊ฒฐ๊ณผ ํ์ธ ๊ฐ๋ฅ
| ssh ec2-user@$N1 tail -f /opt/local-path-provisioner/pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018_default_localpath-claim/out.txt
|
โ ์ถ๋ ฅ
| Tue Feb 18 08:53:46 UTC 2025
Tue Feb 18 08:53:51 UTC 2025
Tue Feb 18 08:53:56 UTC 2025
Tue Feb 18 08:54:01 UTC 2025
Tue Feb 18 08:54:06 UTC 2025
Tue Feb 18 08:54:11 UTC 2025
Tue Feb 18 08:54:16 UTC 2025
Tue Feb 18 08:54:21 UTC 2025
Tue Feb 18 08:54:26 UTC 2025
...
|
8. Pod ์ญ์ ํ ๋ฐ์ดํฐ ์ ์ง ํ์ธ
(1) pod ์ญ์
- Pod๋ฅผ ์ญ์ ํ์ฌ ๋ฐ์ดํฐ๊ฐ ์ ์ง๋๋์ง ํ์ธ
| kubectl delete pod app
# ๊ฒฐ๊ณผ
pod "app" deleted
|
(2) pod ์ญ์ ํ PVC ์ํ ํ์ธ
โ ์ถ๋ ฅ
| NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018 1Gi RWO Delete Bound default/localpath-claim local-path <unset> 21m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/localpath-claim Bound pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018 1Gi RWO local-path <unset> 31m
|
- PVC์ PV๋ ์ญ์ ๋์ง ์๊ณ ์ ์ง๋จ
- Pod๊ฐ ์์ด๋ ๋ฐ์ดํฐ๋ ๋ณด์กด๋จ
(3) ์๋ฒ์์ ๋ฐ์ดํฐ ์ ์ง ์ฌ๋ถ ํ์ธ
- Pod๊ฐ ์ญ์ ๋ ํ์๋ ๋ฐ์ดํฐ๊ฐ ์ ์ง๋๋์ง ์ง์ ํ์ธ
- ๊ฐ ๋ธ๋์ /opt/local-path-provisioner ๊ฒฝ๋ก๋ฅผ ํ์ธํ์ฌ ํ์ผ ์กด์ฌ ์ฌ๋ถ ํ์ธ
| for node in $N1 $N2 $N3; do ssh ec2-user@$node tree /opt/local-path-provisioner; done
|
โ ์ถ๋ ฅ
| /opt/local-path-provisioner
โโโ pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018_default_localpath-claim
โโโ out.txt
1 directory, 1 file
/opt/local-path-provisioner [error opening dir]
0 directories, 0 files
/opt/local-path-provisioner [error opening dir]
0 directories, 0 files
|
- Pod๊ฐ ๋ฐฐํฌ๋ ๋ธ๋(1๋ฒ ์๋ฒ)์ ๋ฐ์ดํฐ๊ฐ ์ ์ง๋จ
- Pod๊ฐ ์ญ์ ๋์์ด๋ ํ์ผ(out.txt)์ด ๋ณด์กด๋จ
(4) Pod ์ฌ๋ฐฐํฌ
- Pod๋ฅผ ๋ค์ ๋ฐฐํฌํ์ฌ ๊ธฐ์กด PVC์ ์ฐ๊ฒฐ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
terminationGracePeriodSeconds: 3
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: localpath-claim
EOF
# ๊ฒฐ๊ณผ
pod/app created
|
(5) ๊ธฐ์กด ๋ฐ์ดํฐ ์ ์ง ์ฌ๋ถ ํ์ธ
| kubectl exec -it app -- tail -f /data/out.txt
Tue Feb 18 08:38:35 UTC 2025
Tue Feb 18 08:38:40 UTC 2025
Tue Feb 18 09:02:27 UTC 2025
Tue Feb 18 09:02:32 UTC 2025
Tue Feb 18 09:02:37 UTC 2025
Tue Feb 18 09:02:42 UTC 2025
Tue Feb 18 09:02:47 UTC 2025
Tue Feb 18 09:02:52 UTC 2025
Tue Feb 18 09:02:57 UTC 2025
Tue Feb 18 09:03:02 UTC 2025
Tue Feb 18 09:03:07 UTC 2025
Tue Feb 18 09:03:12 UTC 2025
...
|
- Pod ์ญ์  ํ์๋ ๊ธฐ์กด ๋ฐ์ดํฐ(out.txt)๊ฐ ์ ์ง๋จ
- emptyDir์ ๋ฌ๋ฆฌ PVC๋ฅผ ์ฌ์ฉํ๋ฉด Pod๊ฐ ์ญ์ ๋์ด๋ ๋ฐ์ดํฐ๊ฐ ๋ณด์กด๋จ
9. PVC ์ญ์ ํ ๋ฐ์ดํฐ ์์ ์ญ์ ํ์ธ
(1) PVC ์ญ์
| kubectl delete pvc localpath-claim
|
โ ์ถ๋ ฅ
| persistentvolumeclaim "localpath-claim" deleted
|
(2) PV ์ํ ํ์ธ
โ ์ถ๋ ฅ
- PVC๋ฅผ ์ญ์ ํ๋ฉด PV๋ ์๋์ผ๋ก ์ญ์ ๋จ
- ๋ฐ์ดํฐ๋ ํจ๊ป ์ญ์ ๋จ
(3) ๋ธ๋์์ ๋ฐ์ดํฐ ์ญ์  ํ์ธ
| for node in $N1 $N2 $N3; do ssh ec2-user@$node tree /opt/local-path-provisioner; done
|
โ ์ถ๋ ฅ
| /opt/local-path-provisioner
0 directories, 0 files
/opt/local-path-provisioner [error opening dir]
0 directories, 0 files
/opt/local-path-provisioner [error opening dir]
0 directories, 0 files
|
- PVC ์ญ์ ์ PV๋ ์ญ์ ๋๋ฉฐ, ์ ์ฅ๋ ๋ฐ์ดํฐ๋ ์์ ํ ์ ๊ฑฐ๋จ
- PVC๋ฅผ ์ ์งํ๋ฉด ๋ฐ์ดํฐ ๋ณด์กด, ์ญ์ ํ๋ฉด ๋ฐ์ดํฐ ์์ ์ญ์ ๋จ
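PVC๋ฅผ ์ญ์ ํ๋๋ผ๋ ๋ฐ์ดํฐ๋ฅผ ๋จ๊ธฐ๊ณ  ์ถ๋ค๋ฉด, ์ญ์  ์ ์ PV์ ReclaimPolicy๋ฅผ Retain์ผ๋ก ๋ฐ๊ฟ ๋๋ ๋ฐฉ๋ฒ์ด ์๋ค. ์๋๋ ์ ์ค์ต์ PV ์ด๋ฆ์ ๊ทธ๋๋ก ์ด ์ฐธ๊ณ ์ฉ ์์๋ค.
| kubectl patch pv pvc-fc043ac6-1fdc-4ef1-b03b-cabf03df8018 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
|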
๐ ๋์คํฌ ์ฑ๋ฅ ์ธก์ ๋ฐ Kubestr ํ์ฉ
1. ๋์คํฌ ์ฑ๋ฅ ์ธก์ ๊ฐ์
(1) ๋์คํฌ ์ฑ๋ฅ ์ธก์ ์ ์ฃผ์ ์งํ
- IOPS (Input/Output Operations Per Second): ์ด๋น ์ฒ๋ฆฌํ  ์ ์๋ I/O ์์ ์
- Bandwidth (Throughput): ์ด๋น ๋ฐ์ดํฐ ์ ์ก๋
(2) ํ์ฌ ์์ปค ๋ธ๋์ ๋์คํฌ ์ ๋ณด
- ์ฌ์ฉ ์ค์ธ ๋์คํฌ: AWS EBS gp3 ํ์
- IOPS: 3000
- Throughput: 125 MB/s
2. Kubestr ์ค์น
- Kubestr: Kubernetes์์ ์คํ ๋ฆฌ์ง ํด๋์ค๋ณ ์ฑ๋ฅ ์ธก์ ์ ์ํ ๋๊ตฌ
- PV๋ฅผ ์์ฑํ์ฌ ํ์คํธํ  ์คํ ๋ฆฌ์ง ํด๋์ค์ ์๋๋ฅผ ์ธก์  ํ ์๋ ์ญ์ 
(1) Kubestr ๋ค์ด๋ก๋ ๋ฐ ์ค์น
| (eks-user:default) [root@operator-host ~]# wget https://github.com/kastenhq/kubestr/releases/download/v0.4.48/kubestr_0.4.48_Linux_amd64.tar.gz
|
โ ์ถ๋ ฅ
| --2025-02-18 18:36:53-- https://github.com/kastenhq/kubestr/releases/download/v0.4.48/kubestr_0.4.48_Linux_amd64.tar.gz
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/291834502/210e8359-9fb9-4740-afef-17a5b458ab0e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20250218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250218T093653Z&X-Amz-Expires=300&X-Amz-Signature=8fb0bacd16f3bcb0eec0ff2fc8939dd6fe2549fcea74b7edcb30e345e4bf026b&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dkubestr_0.4.48_Linux_amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2025-02-18 18:36:53-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/291834502/210e8359-9fb9-4740-afef-17a5b458ab0e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20250218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250218T093653Z&X-Amz-Expires=300&X-Amz-Signature=8fb0bacd16f3bcb0eec0ff2fc8939dd6fe2549fcea74b7edcb30e345e4bf026b&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dkubestr_0.4.48_Linux_amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14703952 (14M) [application/octet-stream]
Saving to: โkubestr_0.4.48_Linux_amd64.tar.gzโ
100%[========================================================================================================>] 14,703,952 38.6MB/s in 0.4s
2025-02-18 18:36:55 (38.6 MB/s) - โkubestr_0.4.48_Linux_amd64.tar.gzโ saved [14703952/14703952]
|
| (eks-user:default) [root@operator-host ~]# tar xvfz kubestr_0.4.48_Linux_amd64.tar.gz && mv kubestr /usr/local/bin/ && chmod +x /usr/local/bin/kubestr
|
โ ์ถ๋ ฅ
| LICENSE
README.md
kubestr
|
(2) ์ค์น ํ์ธ
| (eks-user:default) [root@operator-host ~]# kubestr -h
|
โ ์ถ๋ ฅ
| kubestr is a tool that will scan your k8s cluster
and validate that the storage systems in place as well as run
performance tests.
Usage:
kubestr [flags]
kubestr [command]
Available Commands:
blockmount Checks if a storage class supports block volumes
completion Generate the autocompletion script for the specified shell
csicheck Runs the CSI snapshot restore check
file-restore Restore file(s) from a Snapshot or PVC to it's source PVC
fio Runs an fio test
help Help about any command
Flags:
-h, --help help for kubestr
-e, --outfile string The file where test results will be written
-o, --output string Options(json)
Use "kubestr [command] --help" for more information about a command.
|
3. ํด๋ฌ์คํฐ ๋ด ์คํ ๋ฆฌ์ง ํด๋์ค ํ์ธ
| (eks-user:default) [root@operator-host ~]# kubestr
|
โ ์ถ๋ ฅ
| **************************************
_ ___ _ ___ ___ ___ _____ ___
| |/ / | | | _ ) __/ __|_ _| _ \
| ' <| |_| | _ \ _|\__ \ | | | /
|_|\_\\___/|___/___|___/ |_| |_|_\
Explore your Kubernetes storage options
**************************************
Kubernetes Version Check:
Valid kubernetes version (v1.31.5-eks-8cce635) - OK
RBAC Check:
Kubernetes RBAC is enabled - OK
Aggregated Layer Check:
The Kubernetes Aggregated Layer is enabled - OK
Available Storage Provisioners:
kubernetes.io/aws-ebs:
This is an in tree provisioner.
Storage Classes:
* gp2
To perform a FIO test, run-
./kubestr fio -s <storage class>
To perform a check for block device support, run-
./kubestr blockmount -s <storage class>
rancher.io/local-path:
Unknown driver type.
Storage Classes:
* local-path
To perform a FIO test, run-
./kubestr fio -s <storage class>
To perform a check for block device support, run-
./kubestr blockmount -s <storage class>
|
- ์คํ ๋ฆฌ์ง ํด๋์ค ๋ชฉ๋ก ํ์ธ (gp2, local-path)
- ํด๋น ์คํ ๋ฆฌ์ง ํด๋์ค๋ฅผ ๋์์ผ๋ก ์ฑ๋ฅ ํ์คํธ ๊ฐ๋ฅ
4. ๋ชจ๋ํฐ๋ง ์ค์
(1) ์คํ ๋ฆฌ์ง ์ฑ๋ฅ ๋ชจ๋ํฐ๋ง
| (eks-user:default) [root@operator-host ~]# watch 'kubectl get pod -owide;echo;kubectl get pv,pvc'
|
โ ์ถ๋ ฅ
| Every 2.0s: kubectl get pod -owide;echo;kubectl get pv,pvc Tue Feb 18 18:39:29 2025
No resources found in default namespace.
No resources found
|
(2) ๋ธ๋๋ณ ๋์คํฌ ์ฑ๋ฅ ๋ชจ๋ํฐ๋ง (iostat)
| ssh ec2-user@$N1
A newer release of "Amazon Linux" is available.
Version 2023.6.20250211:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Tue Feb 18 10:13:47 2025 from 182.230.60.93
[ec2-user@ip-192-168-1-207 ~]$ iostat -xmdz 1
|
| ssh ec2-user@$N2
A newer release of "Amazon Linux" is available.
Version 2023.6.20250211:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Tue Feb 18 10:13:58 2025 from 182.230.60.93
[ec2-user@ip-192-168-2-84 ~]$ iostat -xmdz 1
|
| ssh ec2-user@$N3
A newer release of "Amazon Linux" is available.
Version 2023.6.20250211:
Run "/usr/bin/dnf check-release-update" for full release and version update info
, #_
~\_ ####_ Amazon Linux 2023
~~ \_#####\
~~ \###|
~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023
~~ V~' '->
~~~ /
~~._. _/
_/ _/
_/m/'
Last login: Tue Feb 18 10:14:06 2025 from 182.230.60.93
[ec2-user@ip-192-168-3-80 ~]$ iostat -xmdz 1
|
- 1์ด ๋จ์๋ก ๋์คํฌ ๋ถํ ๋ชจ๋ํฐ๋ง (nvme0n1)
5. Kubestr๋ฅผ ํ์ฉํ ๋์คํฌ ์ฑ๋ฅ ์ธก์
๋๋ค ์ฝ๊ธฐ ์ฑ๋ฅ ํ์คํธ
(1) FIO ํ์คํธ ์ค์  ํ์ผ ์์ฑ
| (eks-user:default) [root@operator-host ~]# cat << EOF > fio-read.fio
> [global]
> ioengine=libaio
> direct=1
> bs=4k
> runtime=120
> time_based=1
> iodepth=16
> numjobs=4
> group_reporting
> size=1g
> rw=randread
> [read]
> EOF
|
(2) Kubestr๋ฅผ ์ด์ฉํ ์ฑ๋ฅ ํ์คํธ ์คํ
| (eks-user:default) [root@operator-host ~]# kubestr fio -f fio-read.fio -s local-path --size 10G
|
โ ์ถ๋ ฅ
| PVC created kubestr-fio-pvc-lsl6j
Pod created kubestr-fio-pod-xdnwx
Running FIO test (fio-read.fio) on StorageClass (local-path) with a PVC of Size (10G)
|
- PV, PVC, Pod๋ฅผ ์์ฑ ํ ์คํ ๋ฆฌ์ง ํด๋์ค(local-path)์ ์ฑ๋ฅ ํ์คํธ ์งํ
- ํ์คํธ๊ฐ 192.168.1.207 ๋ธ๋์์ ์คํ๋จ
(3) ์ฑ๋ฅ ์ธก์ ๊ฒฐ๊ณผ (IOPS ๋ฐ Bandwidth)
โ ์ถ๋ ฅ
| FIO test results:
FIO version - fio-3.36
Global options - ioengine=libaio verify= direct=1 gtod_reduce=
JobName:
blocksize= filesize= iodepth= rw=
read:
IOPS=3023.845947 BW(KiB/s)=12095
iops: min=2220 max=9001 avg=3025.832520
bw(KiB/s): min=8880 max=36007 avg=12103.798828
Disk stats (read/write):
nvme0n1: ios=362379/201 merge=0/30 ticks=6373605/3593 in_queue=6377198, util=95.415291%
- OK
|
- IOPS (์ด๋น ์์ถ๋ ฅ ์์ ์): ํ๊ท  3024 (์ต์ 2220, ์ต๋ 9001)
- Bandwidth (Throughput): ์ฝ 11.8 MB/s (์ต์ 8.7 MB/s, ์ต๋ 35.2 MB/s)
- AWS EBS gp3์ ๊ธฐ๋ณธ ์ฑ๋ฅ(3000 IOPS, 125 MB/s)๊ณผ ์ ์ฌ
- ๋์คํฌ ์ฌ์ฉ๋ฅ : 95.4% ํ์ฉ๋จ
๋๋ค ์ฐ๊ธฐ ์ฑ๋ฅ ํ์คํธ
(1) ๋๋ค ์ฐ๊ธฐ ํ์คํธ ์ค์ 
| (eks-user:default) [root@operator-host ~]# cat << EOF > fio-write.fio
> [global]
> ioengine=libaio
> numjobs=16
> iodepth=16
> direct=1
> bs=4k
> runtime=120
> time_based=1
> size=1g
> group_reporting
> rw=randrw
> rwmixread=0
> rwmixwrite=100
> [write]
> EOF
|
(2) ๋๋ค ์ฐ๊ธฐ ์ฑ๋ฅ ํ์คํธ ์คํ
- StorageClass local-path๋ฅผ ์ฌ์ฉํ์ฌ 20GB PVC ์์ฑ ํ ํ์คํธ ์งํ
| (eks-user:default) [root@operator-host ~]# kubestr fio -f fio-write.fio -s local-path --size 20G
|
โ ์ถ๋ ฅ
| PVC created kubestr-fio-pvc-ntvm4
Pod created kubestr-fio-pod-lhstx
|
- ํ์คํธ๊ฐ 192.168.1.207 ๋ธ๋์์ ์คํ๋จ
(3) ์ฑ๋ฅ ์ธก์ ๊ฒฐ๊ณผ (IOPS ๋ฐ Bandwidth)
โ ์ถ๋ ฅ
| Running FIO test (fio-write.fio) on StorageClass (local-path) with a PVC of Size (20G)
Elapsed time- 4m10.953340777s
FIO test results:
FIO version - fio-3.36
Global options - ioengine=libaio verify= direct=1 gtod_reduce=
JobName:
blocksize= filesize= iodepth= rw=
write:
IOPS=3024.511475 BW(KiB/s)=12098
iops: min=1456 max=8625 avg=3024.983154
bw(KiB/s): min=5824 max=34517 avg=12101.912109
Disk stats (read/write):
nvme0n1: ios=0/362366 merge=0/8 ticks=0/7063240 in_queue=7063240, util=94.799950%
- OK
|
- IOPS: ํ๊ท 3024.98, ์ต์ 1456, ์ต๋ 8625
- Bandwidth: ํ๊ท 12,101 KiB/s, ์ต์ 5824 KiB/s, ์ต๋ 34,517 KiB/s
- ๋์คํฌ ํ์ฉ๋ฅ 94.8%
๐พ EBS CSI ์ปจํธ๋กค๋ฌ ์ค์ ๋ฐ ๊ตฌ์ฑ
1. EBS CSI ์ปจํธ๋กค๋ฌ ๊ฐ์
- HostPath ๋ณผ๋ฅจ์ ํ๊ณ: ๋ธ๋์ ๋์คํฌ๊ฐ ๊ฝ ์ฐจ๋ฉด ์ฌ์ฉ ๋ถ๊ฐ
- AWS EBS์ ์ฅ์ : ๋ธ๋ก ์คํ ๋ฆฌ์ง ์ ๊ณต, Pod์์ ์ฝ๊ฒ ๋ณผ๋ฅจ์ ์์ฑ ๋ฐ ๋ถ์ฐฉ ๊ฐ๋ฅ
- EBS CSI ์ปจํธ๋กค๋ฌ ์ญํ
- EBS ๋ณผ๋ฅจ์ ์์ฑํ๊ณ Pod์ Attach
- Kubernetes API ์๋ฒ๋ฅผ ํตํด ์์ฒญ์ ์ฒ๋ฆฌํ๋ CSI ์ปจํธ๋กค๋ฌ ์ญํ ์ํ
2. EBS CSI ๋๋ผ์ด๋ฒ ๋ฒ์ ํ์ธ
- EKS์์ ์ฌ์ฉ ๊ฐ๋ฅํ EBS CSI ๋๋ผ์ด๋ฒ ๋ฒ์ ์กฐํ
| (eks-user:default) [root@operator-host ~]# aws eks describe-addon-versions \
> --addon-name aws-ebs-csi-driver \
> --kubernetes-version 1.31 \
> --query "addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]" \
> --output text
|
โ ์ถ๋ ฅ
| v1.39.0-eksbuild.1
True
v1.38.1-eksbuild.2
False
v1.38.1-eksbuild.1
False
v1.37.0-eksbuild.2
False
v1.37.0-eksbuild.1
False
...
|
3. IAM Role for Service Account (IRSA) ์์ฑ
- EBS CSI ๋๋ผ์ด๋ฒ๊ฐ AWS EBS์ ์ ๊ทผํ ์ ์๋๋ก IAM ์ญํ ์์ฑ
- AWS EKS ํด๋ฌ์คํฐ์ IAM ServiceAccount ์ถ๊ฐ
| eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster ${CLUSTER_NAME} \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve \
--role-only \
--role-name AmazonEKS_EBS_CSI_DriverRole
|
โ ์ถ๋ ฅ
| 2025-02-18 20:00:11 [โน] 1 existing iamserviceaccount(s) (kube-system/aws-load-balancer-controller) will be excluded
2025-02-18 20:00:11 [โน] 1 iamserviceaccount (kube-system/ebs-csi-controller-sa) was included (based on the include/exclude rules)
2025-02-18 20:00:11 [!] serviceaccounts in Kubernetes will not be created or modified, since the option --role-only is used
2025-02-18 20:00:11 [โน] 1 task: { create IAM role for serviceaccount "kube-system/ebs-csi-controller-sa" }
2025-02-18 20:00:11 [โน] building iamserviceaccount stack "eksctl-myeks-addon-iamserviceaccount-kube-system-ebs-csi-controller-sa"
2025-02-18 20:00:11 [โน] deploying stack "eksctl-myeks-addon-iamserviceaccount-kube-system-ebs-csi-controller-sa"
2025-02-18 20:00:11 [โน] waiting for CloudFormation stack "eksctl-myeks-addon-iamserviceaccount-kube-system-ebs-csi-controller-sa"
2025-02-18 20:00:42 [โน] waiting for CloudFormation stack "eksctl-myeks-addon-iamserviceaccount-kube-system-ebs-csi-controller-sa"
|
- CloudFormation์ ์๋ก์ด Stack์ด ์์ฑ๋จ
| eksctl get iamserviceaccount --cluster ${CLUSTER_NAME}
|
โ ์ถ๋ ฅ
| NAMESPACE NAME ROLE ARN
kube-system aws-load-balancer-controller arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-O6YEYsN7iVeQ
kube-system ebs-csi-controller-sa arn:aws:iam::378102432899:role/AmazonEKS_EBS_CSI_DriverRole
|
4. Amazon EBS CSI ๋๋ผ์ด๋ฒ ๋ฐฐํฌ
(1) EBS CSI ๋๋ผ์ด๋ฒ Add-on ์ค์น
| export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
eksctl create addon --name aws-ebs-csi-driver --cluster ${CLUSTER_NAME} --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole --force
|
โ ์ถ๋ ฅ
| 2025-02-18 20:05:22 [โน] Kubernetes version "1.31" in use by cluster "myeks"
2025-02-18 20:05:23 [โน] IRSA is set for "aws-ebs-csi-driver" addon; will use this to configure IAM permissions
2025-02-18 20:05:23 [!] the recommended way to provide IAM permissions for "aws-ebs-csi-driver" addon is via pod identity associations; after addon creation is completed, run `eksctl utils migrate-to-pod-identity`
2025-02-18 20:05:23 [โน] using provided ServiceAccountRoleARN "arn:aws:iam::378102432899:role/AmazonEKS_EBS_CSI_DriverRole"
2025-02-18 20:05:23 [โน] creating addon
|
(2) ์ค์น๋ Add-on ํ์ธ
| eksctl get addon --cluster ${CLUSTER_NAME}
|
โ ์ถ๋ ฅ
| 2025-02-18 20:09:13 [โน] Kubernetes version "1.31" in use by cluster "myeks"
2025-02-18 20:09:13 [โน] getting all addons
2025-02-18 20:09:15 [โน] to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME VERSION STATUS ISSUES IAMROLE UPDATE AVAILABLE CONFIGURATION VALUES POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver v1.39.0-eksbuild.1 ACTIVE 0 arn:aws:iam::378102432899:role/AmazonEKS_EBS_CSI_DriverRole
coredns v1.11.4-eksbuild.2 ACTIVE 0
kube-proxy v1.31.3-eksbuild.2 ACTIVE 0
metrics-server v0.7.2-eksbuild.2 ACTIVE 0
vpc-cni v1.19.2-eksbuild.5 ACTIVE 0 arn:aws:iam::378102432899:role/eksctl-myeks-addon-vpc-cni-Role1-ZTYxtOMDwfFu enableNetworkPolicy: "true"
|
5. EBS CSI ์ปจํธ๋กค๋ฌ ๋ฐ DaemonSet ํ์ธ
| kubectl get deploy,ds -l=app.kubernetes.io/name=aws-ebs-csi-driver -n kube-system
|
โ ์ถ๋ ฅ
| NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ebs-csi-controller 2/2 2 2 5m36s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/ebs-csi-node 3 3 3 3 3 kubernetes.io/os=linux 5m36s
daemonset.apps/ebs-csi-node-windows 0 0 0 0 0 kubernetes.io/os=windows 5m36s
|
| kubectl get pod -n kube-system -l app=ebs-csi-controller -o jsonpath='{.items[0].spec.containers[*].name}' ; echo
|
โ ์ถ๋ ฅ
| ebs-plugin csi-provisioner csi-attacher csi-snapshotter csi-resizer liveness-probe
|
โ ์ถ๋ ฅ
| NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-load-balancer-controller-554fbd9d-vk2p8 1/1 Running 0 5h26m
kube-system aws-load-balancer-controller-554fbd9d-xnx6r 1/1 Running 0 5h26m
kube-system aws-node-rf9bf 2/2 Running 0 8h
kube-system aws-node-tbbhl 2/2 Running 0 8h
kube-system aws-node-xb7dt 2/2 Running 0 8h
kube-system coredns-86f5954566-mskq6 1/1 Running 0 8h
kube-system coredns-86f5954566-wxwqw 1/1 Running 0 8h
kube-system ebs-csi-controller-7f8f8cb84-fd2bm 6/6 Running 0 7m35s
kube-system ebs-csi-controller-7f8f8cb84-tsvk8 6/6 Running 0 7m35s
kube-system ebs-csi-node-8d77m 3/3 Running 0 7m35s
kube-system ebs-csi-node-b2qcp 3/3 Running 0 7m35s
kube-system ebs-csi-node-rkk64 3/3 Running 0 7m35s
kube-system external-dns-dc4878f5f-mvnt9 1/1 Running 0 5h24m
kube-system kube-ops-view-657dbc6cd8-fgbqc 1/1 Running 0 5h29m
kube-system kube-proxy-6bc4m 1/1 Running 0 8h
kube-system kube-proxy-qsd8t 1/1 Running 0 8h
kube-system kube-proxy-rvw86 1/1 Running 0 8h
kube-system metrics-server-6bf5998d9c-nt4ks 1/1 Running 0 8h
kube-system metrics-server-6bf5998d9c-prz6f 1/1 Running 0 8h
local-path-storage local-path-provisioner-84967477f-g6xvh 1/1 Running 0 3h3m
|
6. CSI Node ๋ฐ Driver ํ์ธ
(1) CSI ๊ด๋ จ ๋ฆฌ์์ค ํ์ธ
| kubectl api-resources | grep -i csi
|
โ ์ถ๋ ฅ
| csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity
|
(2) CSI ๋ธ๋ ์ํ ํ์ธ
โ ์ถ๋ ฅ
| NAME DRIVERS AGE
ip-192-168-1-207.ap-northeast-2.compute.internal 1 8h
ip-192-168-2-84.ap-northeast-2.compute.internal 1 8h
ip-192-168-3-80.ap-northeast-2.compute.internal 1 8h
|
- ๊ฐ ๋ธ๋์ ebs.csi.aws.com ๋๋ผ์ด๋ฒ๊ฐ ์ค์น๋จ
(3) CSI Node ์์ธ ์ ๋ณด ํ์ธ
| kubectl describe csinodes
|
โ ์ถ๋ ฅ
| Name: ip-192-168-1-207.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:46 +0900
Spec:
Drivers:
ebs.csi.aws.com:
Node ID: i-093ad32d5ff5a8770
Allocatables:
Count: 25
Topology Keys: [kubernetes.io/os topology.ebs.csi.aws.com/zone topology.kubernetes.io/zone]
Events: <none>
Name: ip-192-168-2-84.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:49 +0900
Spec:
Drivers:
ebs.csi.aws.com:
Node ID: i-0a80fdc36a856f394
Allocatables:
Count: 25
Topology Keys: [kubernetes.io/os topology.ebs.csi.aws.com/zone topology.kubernetes.io/zone]
Events: <none>
Name: ip-192-168-3-80.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:42 +0900
Spec:
Drivers:
ebs.csi.aws.com:
Node ID: i-0484d2b724be33973
Allocatables:
Count: 25
Topology Keys: [kubernetes.io/os topology.ebs.csi.aws.com/zone topology.kubernetes.io/zone]
Events: <none>
|
- ๊ฐ ๋ธ๋์์ ์ต๋ 25๊ฐ์ EBS ๋ณผ๋ฅจ์ ๋ถ์ฐฉ ๊ฐ๋ฅ
(4) CSI ๋๋ผ์ด๋ฒ ๋ชฉ๋ก ์กฐํ
โ ์ถ๋ ฅ
| NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE
ebs.csi.aws.com true false false <unset> false Persistent 12m
efs.csi.aws.com false false false <unset> false Persistent 8h
|
- EBS CSI ๋๋ผ์ด๋ฒ(ebs.csi.aws.com)๊ฐ ์ ์์ ์ผ๋ก ๋ฑ๋ก๋จ
(5) CSI ๋๋ผ์ด๋ฒ ์์ธ ์ ๋ณด ํ์ธ
| kubectl describe csidrivers ebs.csi.aws.com
|
โ ์ถ๋ ฅ
| Name: ebs.csi.aws.com
Namespace:
Labels: app.kubernetes.io/component=csi-driver
app.kubernetes.io/managed-by=EKS
app.kubernetes.io/name=aws-ebs-csi-driver
app.kubernetes.io/version=1.39.0
Annotations: <none>
API Version: storage.k8s.io/v1
Kind: CSIDriver
Metadata:
Creation Timestamp: 2025-02-18T11:05:31Z
Resource Version: 112312
UID: 4db549f5-c7ef-42b5-8dbc-7611b898b6fc
Spec:
Attach Required: true
Fs Group Policy: ReadWriteOnceWithFSType
Pod Info On Mount: false
Requires Republish: false
Se Linux Mount: false
Storage Capacity: false
Volume Lifecycle Modes:
Persistent
Events: <none>
|
7. ๋ธ๋์ ์ต๋ EBS ๋ถ์ฐฉ ์๋ ๋ณ๊ฒฝ
- ๊ธฐ๋ณธ์ ์ผ๋ก ๋ธ๋๋น EBS ๋ณผ๋ฅจ ์ต๋ 25๊ฐ๊น์ง ๋ถ์ฐฉ ๊ฐ๋ฅ
- ์ต๋ ๋ถ์ฐฉ ์๋๋ฅผ 31๊ฐ๋ก ๋ณ๊ฒฝ
| aws eks update-addon --cluster-name ${CLUSTER_NAME} --addon-name aws-ebs-csi-driver \
--addon-version v1.39.0-eksbuild.1 --configuration-values '{
"node": {
"volumeAttachLimit": 31,
"enableMetrics": true
}
}'
|
โ ์ถ๋ ฅ
| {
"update": {
"id": "31bc4279-d50f-30e0-aebe-de2971095881",
"status": "InProgress",
"type": "AddonUpdate",
"params": [
{
"type": "AddonVersion",
"value": "v1.39.0-eksbuild.1"
},
{
"type": "ConfigurationValues",
"value": "{\n \"node\": {\n \"volumeAttachLimit\": 31,\n \"enableMetrics\": true\n }\n }"
}
],
"createdAt": "2025-02-18T20:19:34.938000+09:00",
"errors": []
}
|
- EKS Add-ons์์ aws-ebs-csi-driver์ ์ค์  ๋ณ๊ฒฝ ๊ฐ๋ฅ (์๋ ํ์ธ ์์ ์ฐธ๊ณ )
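์ ์ฉ๋ ์ค์ ๊ฐ์ describe-addon์ผ๋ก๋ ํ์ธํ  ์ ์๋ค(์ฐธ๊ณ ์ฉ ์์).
| aws eks describe-addon --cluster-name ${CLUSTER_NAME} --addon-name aws-ebs-csi-driver \
  --query 'addon.configurationValues' --output text
|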
8. EBS CSI Node ๋ฐ๋ชฌ์ ์ค์  ํ์ธ
| kubectl get ds -n kube-system ebs-csi-node -o yaml
|
โ ์ถ๋ ฅ (์ผ๋ถ)
| containers:
- args:
- node
- --endpoint=$(CSI_ENDPOINT)
- --http-endpoint=0.0.0.0:3302
- --csi-mount-point-prefix=/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/
- --volume-attach-limit=31
|
- EBS CSI Node ๋ฐ๋ชฌ์์ด ์ฌ๊ธฐ๋๋๋ฉด์ Argument ๊ฐ์ด ์ ์ฉ๋จ
- ๊ฐ ๋ธ๋์์ --volume-attach-limit=31 ์ค์ ์ด ๋ฐ์๋จ
9. CSI Node ๋ณผ๋ฅจ ๋ถ์ฐฉ ํ๋ ์ฆ๊ฐ ํ์ธ
| kubectl describe csinodes
|
โ ์ถ๋ ฅ
| Name: ip-192-168-1-207.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:46 +0900
Spec:
Drivers:
ebs.csi.aws.com:
Node ID: i-093ad32d5ff5a8770
Allocatables:
Count: 31
Topology Keys: [kubernetes.io/os topology.ebs.csi.aws.com/zone topology.kubernetes.io/zone]
Events: <none>
Name: ip-192-168-2-84.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:49 +0900
Spec:
Drivers:
ebs.csi.aws.com:
Node ID: i-0a80fdc36a856f394
Allocatables:
Count: 31
Topology Keys: [kubernetes.io/os topology.ebs.csi.aws.com/zone topology.kubernetes.io/zone]
Events: <none>
Name: ip-192-168-3-80.ap-northeast-2.compute.internal
Labels: <none>
Annotations: storage.alpha.kubernetes.io/migrated-plugins:
kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-v...
CreationTimestamp: Tue, 18 Feb 2025 11:46:42 +0900
Spec:
Drivers:
ebs.csi.aws.com:
Node ID: i-0484d2b724be33973
Allocatables:
Count: 31
Topology Keys: [kubernetes.io/os topology.ebs.csi.aws.com/zone topology.kubernetes.io/zone]
Events: <none>
|
10. ๊ธฐ์กด ์คํ ๋ฆฌ์ง ํด๋์ค ํ์ธ
โ ์ถ๋ ฅ
| NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 8h
local-path rancher.io/local-path Delete WaitForFirstConsumer false 3h19m
|
11. gp3 ์คํ ๋ฆฌ์ง ํด๋์ค ์์ฑ
| cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp3
annotations:
storageclass.kubernetes.io/is-default-class: "true"
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
type: gp3
#iops: "5000"
#throughput: "250"
allowAutoIOPSPerGBIncrease: 'true'
encrypted: 'true'
fsType: xfs # ๊ธฐ๋ณธ๊ฐ์ด ext4
EOF
# ๊ฒฐ๊ณผ
storageclass.storage.k8s.io/gp3 created
|
- ์คํ ๋ฆฌ์ง ํด๋์ค ์์ฑ ํ์ธ
โ ์ถ๋ ฅ
| NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 8h
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 30s
local-path rancher.io/local-path Delete WaitForFirstConsumer false 3h20m
|
12. gp3 ์คํ ๋ฆฌ์ง ํด๋์ค ์์ธ ์ ๋ณด ํ์ธ
| kubectl describe sc gp3
|
โ ์ถ๋ ฅ
| Name: gp3
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp3"},"parameters":{"allowAutoIOPSPerGBIncrease":"true","encrypted":"true","fsType":"xfs","type":"gp3"},"provisioner":"ebs.csi.aws.com","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: ebs.csi.aws.com
Parameters: allowAutoIOPSPerGBIncrease=true,encrypted=true,fsType=xfs,type=gp3
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
|
- ๋ณผ๋ฅจ ํ์ฅ ๊ฐ๋ฅ (AllowVolumeExpansion: True)
- ๊ธฐ๋ณธ ํ์ผ ์์คํ xfs ์ ์ฉ (iops/throughput ์ง์  ์์๋ ์๋ ์ฐธ๊ณ )
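์ฃผ์ ์ฒ๋ฆฌ๋ iops/throughput ํ๋ผ๋ฏธํฐ๋ฅผ ํ์ฑํํ๋ฉด gp3 ๊ธฐ๋ณธ๊ฐ(3000 IOPS, 125 MB/s)๋ณด๋ค ๋์ ์ฑ๋ฅ์ผ๋ก ํ๋ก๋น์ ๋ํ  ์ ์๋ค. ์๋๋ parameters ๋ถ๋ถ๋ง ๋ฐ์ทํด ์์ ํ ์ค์ผ์น๋ค(๊ฐ์ ์์).
| parameters:
  type: gp3
  iops: "5000"       # gp3๋ ์ต๋ 16,000๊น์ง ์ง์  ๊ฐ๋ฅ
  throughput: "250"  # MB/s ๋จ์, ์ต๋ 1,000
  encrypted: 'true'
  fsType: xfs
|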
13. EBS ๋ณผ๋ฅจ ์์ฑ ๋ชจ๋ํฐ๋ง
| while true; do aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --query "Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}" --output text; date; sleep 1; done
|
โ ์ถ๋ ฅ
| Tue Feb 18 08:36:24 PM KST 2025
Tue Feb 18 08:36:25 PM KST 2025
Tue Feb 18 08:36:27 PM KST 2025
Tue Feb 18 08:36:28 PM KST 2025
|
14. EBS PVC ์์ฑ
(1) StorageClass gp3๋ฅผ ์ฌ์ฉํ์ฌ PVC๋ฅผ ์์ฑ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
storageClassName: gp3
EOF
# ๊ฒฐ๊ณผ
persistentvolumeclaim/ebs-claim created
|
(2) ์์ฑ ํ์ธ
- ์์ง PVC๊ฐ ๋ฐ์ธ๋ฉ๋์ง ์์ ์ํ
| k get pv
No resources found
k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
ebs-claim Pending gp3 <unset> 96s
|
15. PVC๋ฅผ ์ฌ์ฉํ๋ Pod ์์ฑ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
terminationGracePeriodSeconds: 3
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-claim
EOF
# ๊ฒฐ๊ณผ
pod/app created
|
16. PVC์ PV ๋ฐ์ธ๋ฉ ์ํ ํ์ธ
โ ์ถ๋ ฅ
| NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ebs-claim Bound pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5 4Gi RWO gp3 <unset> 4m22s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5 4Gi RWO Delete Bound default/ebs-claim gp3 <unset> 26s
NAME READY STATUS RESTARTS AGE
pod/app 1/1 Running 0 29s
|
17. EBS ๋ณผ๋ฅจ์ด ํน์  ๋ธ๋์ ๋ถ์ฐฉ๋์๋์ง ํ์ธ
| kubectl get VolumeAttachment
|
โ ์ถ๋ ฅ
| NAME ATTACHER PV NODE ATTACHED AGE
csi-2c549400779c2b0340a9261869f4798e1239dc965d8a47cf3b3c29fe8b2b4fd4 ebs.csi.aws.com pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5 ip-192-168-1-207.ap-northeast-2.compute.internal true 7m45s
|
- EBS ๋ณผ๋ฅจ์ด ip-192-168-1-207 ๋ธ๋์ ๋ถ์ฐฉ๋จ (ATTACHED: true)
18. EBS ๋ณผ๋ฅจ ์ฌ์ฉ๋ ํ์ธ
โ ์ถ๋ ฅ
| PV NAME PVC NAME NAMESPACE NODE NAME POD NAME VOLUME MOUNT NAME SIZE USED AVAILABLE %USED IUSED IFREE %IUSED
pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5 ebs-claim default ip-192-168-1-207.ap-northeast-2.compute.internal app persistent-storage 3Gi 60Mi 3Gi 1.50 4 2097148 0.00
|
19. EBS ๋ณผ๋ฅจ์ Node Affinity ํ์ธ
โ ์ถ๋ ฅ (์ผ๋ถ)
| spec:
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- ap-northeast-2a
|
- ํ์ฌ ap-northeast-2a ๊ฐ์ฉ ์์ญ(AZ)์ ์๋ ๋ธ๋์์๋ง EBS ๋ถ์ฐฉ ๊ฐ๋ฅ
20. ํ์ฌ ๋ธ๋ ๋ชฉ๋ก ๋ฐ ๊ฐ์ฉ ์์ญ ํ์ธ
| kubectl get node --label-columns=topology.ebs.csi.aws.com/zone,topology.k8s.aws/zone-id
|
โ ์ถ๋ ฅ
| NAME STATUS ROLES AGE VERSION ZONE ZONE-ID
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 9h v1.31.5-eks-5d632ec ap-northeast-2a apne2-az1
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 9h v1.31.5-eks-5d632ec ap-northeast-2b apne2-az2
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 9h v1.31.5-eks-5d632ec ap-northeast-2c apne2-az3
|
- EBS ๋ณผ๋ฅจ์ ap-northeast-2a์ ์๋ ip-192-168-1-207 ๋ธ๋์์๋ง ์ฌ์ฉ ๊ฐ๋ฅ
21. EBS ๋ณผ๋ฅจ ๋ด ๋ฐ์ดํฐ ์ ์ ์ ์ฅ ํ์ธ
| kubectl exec app -- tail -f /data/out.txt
|
โ ์ถ๋ ฅ
| Tue Feb 18 12:05:17 UTC 2025
Tue Feb 18 12:05:22 UTC 2025
Tue Feb 18 12:05:27 UTC 2025
Tue Feb 18 12:05:32 UTC 2025
Tue Feb 18 12:05:37 UTC 2025
Tue Feb 18 12:05:42 UTC 2025
Tue Feb 18 12:05:47 UTC 2025
...
|
- EBS ๋ณผ๋ฅจ์ด ์ ์์ ์ผ๋ก /data/out.txt ํ์ผ์ ๋ฐ์ดํฐ ๊ธฐ๋ก ์ค
22. EBS ๋ณผ๋ฅจ ๋ง์ดํธ ํ์ธ
(1) Overlay ํ์ผ์์คํ ํ์ธ
| kubectl exec -it app -- sh -c 'df -hT --type=overlay'
|
โ ์ถ๋ ฅ
| Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 120G 4.5G 116G 4% /
|
(2) XFS ํ์ผ์์คํ ํ์ธ
| kubectl exec -it app -- sh -c 'df -hT --type=xfs'
|
โ ์ถ๋ ฅ
| Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme1n1 xfs 4.0G 61M 3.9G 2% /data
/dev/nvme0n1p1 xfs 120G 4.5G 116G 4% /etc/hosts
|
- EBS ๋ณผ๋ฅจ(/dev/nvme1n1)์ด /data์ ๋ง์ดํธ๋จ
- ํ์ฌ 4.0GiB ์ค 61MiB ์ฌ์ฉ๋จ (2%)
23. EBS ๋ณผ๋ฅจ ํฌ๊ธฐ ํ์ฅ ํ
์คํธ
(1) ํ์ฌ PVC ํฌ๊ธฐ ํ์ธ
| kubectl get pvc ebs-claim -o jsonpath={.spec.resources.requests.storage} ; echo
|
โ
์ถ๋ ฅ
(2) PVC ํฌ๊ธฐ ํ์ฅ ์์ฒญ (4GiB โ 10GiB)
| kubectl patch pvc ebs-claim -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
# ๊ฒฐ๊ณผ
persistentvolumeclaim/ebs-claim patched
|
(3) ๋ณ๊ฒฝ๋ ๋ณผ๋ฅจ ํฌ๊ธฐ ํ์ธ
| kubectl exec -it app -- sh -c 'df -hT --type=xfs'
|
โ ์ถ๋ ฅ
| Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme1n1 xfs 10G 105M 9.9G 2% /data
/dev/nvme0n1p1 xfs 120G 4.5G 116G 4% /etc/hosts
|
- EBS ๋ณผ๋ฅจ ํฌ๊ธฐ๊ฐ 4GiB โ 10GiB๋ก ์ ์ ํ์ฅ๋จ (์๋ ํ์ธ ์์ ์ฐธ๊ณ )
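ํ์ฅ ๊ฒฐ๊ณผ๋ PVC์ status ํ๋๋ก๋ ๋ฐ๋ก ํ์ธํ  ์ ์๋ค(์ฐธ๊ณ ์ฉ ์์).
| kubectl get pvc ebs-claim -o jsonpath='{.status.capacity.storage}' ; echo
# ์์ ์ถ๋ ฅ: 10Gi
|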
(4) PVC ์ํ ํ์ธ (df-pv ํ์ฉ)
โ ์ถ๋ ฅ
| PV NAME PVC NAME NAMESPACE NODE NAME POD NAME VOLUME MOUNT NAME SIZE USED AVAILABLE %USED IUSED IFREE %IUSED
pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5 ebs-claim default ip-192-168-1-207.ap-northeast-2.compute.internal app persistent-storage 9Gi 104Mi 9Gi 1.02 4 5242876 0.00
|
- ๋ณผ๋ฅจ ํฌ๊ธฐ 9GiB, ์ฌ์ฉ๋ 1.02% ํ์ธ๋จ
(5) AWS์์ ๋ณผ๋ฅจ ํฌ๊ธฐ ๋ณ๊ฒฝ ์ฌํญ ํ์ธ
| aws ec2 describe-volumes --volume-ids $(kubectl get pv -o jsonpath="{.items[0].spec.csi.volumeHandle}") | jq
|
โ ์ถ๋ ฅ
| {
"Volumes": [
{
"Iops": 3000,
"Tags": [
{
"Key": "kubernetes.io/created-for/pvc/name",
"Value": "ebs-claim"
},
{
"Key": "ebs.csi.aws.com/cluster",
"Value": "true"
},
{
"Key": "kubernetes.io/cluster/myeks",
"Value": "owned"
},
{
"Key": "kubernetes.io/created-for/pvc/namespace",
"Value": "default"
},
{
"Key": "KubernetesCluster",
"Value": "myeks"
},
{
"Key": "CSIVolumeName",
"Value": "pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5"
},
{
"Key": "Name",
"Value": "myeks-dynamic-pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5"
},
{
"Key": "kubernetes.io/created-for/pv/name",
"Value": "pvc-ef2fe3fe-7117-44f2-94d0-cdb253c47af5"
}
],
"VolumeType": "gp3",
"MultiAttachEnabled": false,
"Throughput": 125,
"Operator": {
"Managed": false
},
"VolumeId": "vol-0b12360fbeebb9580",
"Size": 10,
"SnapshotId": "",
"AvailabilityZone": "ap-northeast-2a",
"State": "in-use",
"CreateTime": "2025-02-18T11:46:58.192000+00:00",
"Attachments": [
{
"DeleteOnTermination": false,
"VolumeId": "vol-0b12360fbeebb9580",
"InstanceId": "i-093ad32d5ff5a8770",
"Device": "/dev/xvdaa",
"State": "attached",
"AttachTime": "2025-02-18T11:47:01+00:00"
}
],
"Encrypted": true,
"KmsKeyId": "arn:aws:kms:ap-northeast-2:378102432899:key/8c9984ef-c009-4d66-bb63-428b05a0ed1e"
}
]
}
|
- AWS ์ฝ์์์๋ EBS ๋ณผ๋ฅจ ํฌ๊ธฐ 10GiB๋ก ํ์ฅ๋ ๊ฒ ํ์ธ ๊ฐ๋ฅ
24. ๋ณผ๋ฅจ ํ์ฅ ํ Pod ๋ฐ PVC ์ญ์
| kubectl delete pod app & kubectl delete pvc ebs-claim
|
โ ์ถ๋ ฅ
| [1] 208891
pod "app" deleted
persistentvolumeclaim "ebs-claim" deleted
[1]+ Done kubecolor delete pod app
|
๐ธ EBS ๋ณผ๋ฅจ ์ค๋์ท ์์ฑ ๋ฐ ๋ณต์
1. EBS ๋ณผ๋ฅจ ์ค๋์ท ๊ธฐ๋ฅ ๊ฐ์
- AWS EBS ๋ณผ๋ฅจ์ ์ค๋์ท์ ์ฟ ๋ฒ๋คํฐ์ค ๋ค์ดํฐ๋ธ ๋ฐฉ์์ผ๋ก ํ์ฉ
- AWS Volume Snapshots Controller๋ฅผ ์ฌ์ฉํ์ฌ ์ค๋์ท ์์ฑ ๋ฐ ๋ณต์ ๊ฐ๋ฅ
2. Volume Snapshot CRD ์ค์น
| kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
|
โ ์ถ๋ ฅ
| customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
|
3. ์ค์น ํ์ธ
(1) CRD์์ ์ค๋์ท ๊ด๋ จ ๋ฆฌ์์ค ํ์ธ
| kubectl get crd | grep snapshot
|
โ ์ถ๋ ฅ
| volumesnapshotclasses.snapshot.storage.k8s.io 2025-02-18T12:47:37Z
volumesnapshotcontents.snapshot.storage.k8s.io 2025-02-18T12:47:38Z
volumesnapshots.snapshot.storage.k8s.io 2025-02-18T12:47:36Z
|
(2) API ๋ฆฌ์์ค์์ ์ค๋์ท ๊ด๋ จ ๋ฆฌ์์ค ํ์ธ
| kubectl api-resources | grep snapshot
|
โ ์ถ๋ ฅ
| volumesnapshotclasses vsclass,vsclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents vsc,vscs snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots vs snapshot.storage.k8s.io/v1 true VolumeSnapshot
|
4. ์ค๋์ท ์ปจํธ๋กค๋ฌ ๋ฐฐํฌ
| kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
|
โ ์ถ๋ ฅ
| serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
|
5. ๋ฐฐํฌ ํ์ธ
| kubectl get deploy -n kube-system snapshot-controller
|
โ ์ถ๋ ฅ
| NAME READY UP-TO-DATE AVAILABLE AGE
snapshot-controller 2/2 2 2 48s
|
6. AWS EBS ์ค๋์ท ํด๋์ค ์์ฑ
| kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/manifests/classes/snapshotclass.yaml
# ๊ฒฐ๊ณผ
volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc created
|
7. ์ค๋์ท ํด๋์ค ํ์ธ
(1) ๋ณผ๋ฅจ ์ค๋์ท ํด๋์ค(VolumeSnapshotClass) ๋ชฉ๋ก ํ์ธ
โ ์ถ๋ ฅ
| NAME DRIVER DELETIONPOLICY AGE
csi-aws-vsc ebs.csi.aws.com Delete 1s
|
(2) ๋ณผ๋ฅจ ์ค๋์ท ํด๋์ค(VolumeSnapshotClass) ์์ธ ์ ๋ณด ํ์ธ
| kubectl describe vsclass
|
โ ์ถ๋ ฅ
| Name: csi-aws-vsc
Namespace:
Labels: <none>
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Deletion Policy: Delete
Driver: ebs.csi.aws.com
Kind: VolumeSnapshotClass
Metadata:
Creation Timestamp: 2025-02-18T12:53:56Z
Generation: 1
Resource Version: 142886
UID: e310327c-0673-4a0d-bf3f-fa1c2791b063
Events: <none>
|
8. PVC ๋ฐ Pod ์์ฑ
(1) ๋ชจ๋ํฐ๋ง
| watch -d kubectl get pv,pvc,pod
Every 2.0s: kubectl get pv,pvc,pod gram88: 09:57:32 PM
in 0.813s (0)
No resources found
|
(2) PVC ์์ฑ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
storageClassName: gp3
EOF
# ๊ฒฐ๊ณผ
persistentvolumeclaim/ebs-claim created
|
(3) PVC, PV ์กฐํ
โ ์ถ๋ ฅ
| NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ebs-claim Pending gp3 <unset> 0s
|
(4) Pod ์์ฑ
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
terminationGracePeriodSeconds: 3
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-claim
EOF
# ๊ฒฐ๊ณผ
pod/app created
|
9. ํ์ผ ์ ์ฅ ํ์ธ
| kubectl exec app -- tail -f /data/out.txt
|
โ ์ถ๋ ฅ
| Tue Feb 18 13:00:45 UTC 2025
Tue Feb 18 13:00:50 UTC 2025
Tue Feb 18 13:00:55 UTC 2025
Tue Feb 18 13:01:00 UTC 2025
Tue Feb 18 13:01:05 UTC 2025
Tue Feb 18 13:01:10 UTC 2025
Tue Feb 18 13:01:15 UTC 2025
Tue Feb 18 13:01:20 UTC 2025
Tue Feb 18 13:01:25 UTC 2025
...
|
10. EBS ๋ณผ๋ฅจ ์ค๋์ท ์์ฑ
| cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: ebs-volume-snapshot
spec:
volumeSnapshotClassName: csi-aws-vsc
source:
persistentVolumeClaimName: ebs-claim
EOF
# ๊ฒฐ๊ณผ
volumesnapshot.snapshot.storage.k8s.io/ebs-volume-snapshot created
|
- AWS ์ฝ์์์ EBS ์ค๋์ท์ด ์ ์์ ์ผ๋ก ์์ฑ๋จ
- Kubernetes์ Pod๊ฐ ์ฌ์ฉ ์ค์ธ EBS ๋ณผ๋ฅจ์ ๊ทธ๋๋ก ์ค๋์ท์ผ๋ก ์ ์ฅ (์๋ readyToUse ํ์ธ ์์ ์ฐธ๊ณ )
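์ค๋์ท ์ฌ์ฉ ๊ฐ๋ฅ ์ฌ๋ถ๋ readyToUse ํ๋๋ก ๊ฐ๋จํ ํ์ธํ  ์ ์๋ค(์ฐธ๊ณ ์ฉ ์์).
| kubectl get volumesnapshot ebs-volume-snapshot -o jsonpath='{.status.readyToUse}' ; echo
# ์์ ์ถ๋ ฅ: true
|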
11. ์ค๋์ท ์ํ ํ์ธ
(1) ์์ฑ๋ VolumeSnapshot ํ์ธ
| kubectl get volumesnapshot
|
โ ์ถ๋ ฅ
| NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
ebs-volume-snapshot true ebs-claim 4Gi csi-aws-vsc snapcontent-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01 4m 4m1s
|
(2) ํน์ VolumeSnapshot์ SnapshotContent ๋ฐ์ธ๋ฉ ์ ๋ณด ์กฐํ
| kubectl get volumesnapshot ebs-volume-snapshot -o jsonpath={.status.boundVolumeSnapshotContentName} ; echo
|
โ ์ถ๋ ฅ
| snapcontent-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01
|
(3) ํน์ VolumeSnapshot ์์ธ ์ ๋ณด ์กฐํ
| kubectl describe volumesnapshot.snapshot.storage.k8s.io ebs-volume-snapshot
|
โ ์ถ๋ ฅ
| Name: ebs-volume-snapshot
Namespace: default
Labels: <none>
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshot
Metadata:
Creation Timestamp: 2025-02-18T13:03:27Z
Finalizers:
snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
Generation: 1
Resource Version: 146033
UID: 9bd1cc5d-21d5-47f4-97de-c4b6a871ae01
Spec:
Source:
Persistent Volume Claim Name: ebs-claim
Volume Snapshot Class Name: csi-aws-vsc
Status:
Bound Volume Snapshot Content Name: snapcontent-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01
Creation Time: 2025-02-18T13:03:28Z
Ready To Use: true
Restore Size: 4Gi
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreatingSnapshot 5m20s snapshot-controller Waiting for a snapshot default/ebs-volume-snapshot to be created by the CSI driver.
Normal SnapshotCreated 5m19s snapshot-controller Snapshot default/ebs-volume-snapshot was successfully created by the CSI driver.
Normal SnapshotReady 4m12s snapshot-controller Snapshot default/ebs-volume-snapshot is ready to use.
(4) List the VolumeSnapshotContent

kubectl get volumesnapshotcontents
Output:

NAME READYTOUSE RESTORESIZE DELETIONPOLICY DRIVER VOLUMESNAPSHOTCLASS VOLUMESNAPSHOT VOLUMESNAPSHOTNAMESPACE AGE
snapcontent-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01 true 4294967296 Delete ebs.csi.aws.com csi-aws-vsc ebs-volume-snapshot default 5m52s
(5) Check the snapshot handle (the EBS snapshot ID)

kubectl get volumesnapshotcontents -o jsonpath='{.items[*].status.snapshotHandle}' ; echo
Output:
(6) List AWS EBS snapshots (JSON output)

aws ec2 describe-snapshots --owner-ids self | jq
Output:

{
"Snapshots": [
{
"Tags": [
{
"Key": "Name",
"Value": "myeks-dynamic-snapshot-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01"
},
{
"Key": "CSIVolumeSnapshotName",
"Value": "snapshot-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01"
},
{
"Key": "kubernetes.io/cluster/myeks",
"Value": "owned"
},
{
"Key": "ebs.csi.aws.com/cluster",
"Value": "true"
}
],
"StorageTier": "standard",
"TransferType": "standard",
"CompletionTime": "2025-02-18T13:04:04.793000+00:00",
"SnapshotId": "snap-0f1c3fa51d2fc9d33",
"VolumeId": "vol-090da41ed97dae65a",
"State": "completed",
"StartTime": "2025-02-18T13:03:28.688000+00:00",
"Progress": "100%",
"OwnerId": "378102432899",
"Description": "Created by AWS EBS CSI driver for volume vol-090da41ed97dae65a",
"VolumeSize": 4,
"Encrypted": true,
"KmsKeyId": "arn:aws:kms:ap-northeast-2:378102432899:key/8c9984ef-c009-4d66-bb63-428b05a0ed1e"
}
]
}
(7) List AWS EBS snapshots (table output)

aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[]' --output table
Output:

--------------------------------------------------------------------------------------------------------
| DescribeSnapshots |
+----------------+-------------------------------------------------------------------------------------+
| CompletionTime| 2025-02-18T13:04:04.793000+00:00 |
| Description | Created by AWS EBS CSI driver for volume vol-090da41ed97dae65a |
| Encrypted | True |
| KmsKeyId | arn:aws:kms:ap-northeast-2:378102432899:key/8c9984ef-c009-4d66-bb63-428b05a0ed1e |
| OwnerId | 378102432899 |
| Progress | 100% |
| SnapshotId | snap-0f1c3fa51d2fc9d33 |
| StartTime | 2025-02-18T13:03:28.688000+00:00 |
| State | completed |
| StorageTier | standard |
| TransferType | standard |
| VolumeId | vol-090da41ed97dae65a |
| VolumeSize | 4 |
+----------------+-------------------------------------------------------------------------------------+
|| Tags ||
|+--------------------------------+-------------------------------------------------------------------+|
|| Key | Value ||
|+--------------------------------+-------------------------------------------------------------------+|
|| Name | myeks-dynamic-snapshot-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01 ||
|| CSIVolumeSnapshotName | snapshot-9bd1cc5d-21d5-47f4-97de-c4b6a871ae01 ||
|| kubernetes.io/cluster/myeks | owned ||
|| ebs.csi.aws.com/cluster | true ||
|+--------------------------------+-------------------------------------------------------------------+|
12. Delete the EBS volume (simulating an accident)
- Delete the pod and the PVC
- This simulates the volume being removed by mistake

kubectl delete pod app && kubectl delete pvc ebs-claim
# Result
pod "app" deleted
persistentvolumeclaim "ebs-claim" deleted
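Because the gp3 StorageClass uses the default Delete reclaim policy, deleting the PVC also removes the PV and the backing EBS volume, so the snapshot taken above is now the only copy of the data. A quick way to confirm (a sketch):

kubectl get pv,pvc
# expected: No resources found (the dynamically provisioned PV is gone)
aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[].SnapshotId'
# the EBS snapshot itself is unaffected by the volume deletion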
13. Restore a PVC from the snapshot

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-snapshot-restored-claim
spec:
storageClassName: gp3
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
dataSource:
name: ebs-volume-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
EOF
# Result
persistentvolumeclaim/ebs-snapshot-restored-claim created
14. Create a pod that uses the restored PVC

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
terminationGracePeriodSeconds: 3
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-snapshot-restored-claim
EOF
# Result
pod/app created
15. Verify the restored data

kubectl exec app -- cat /data/out.txt
Output:

Tue Feb 18 13:03:00 UTC 2025
Tue Feb 18 13:03:05 UTC 2025
Tue Feb 18 13:03:10 UTC 2025
Tue Feb 18 13:03:15 UTC 2025
Tue Feb 18 13:18:15 UTC 2025
Tue Feb 18 13:18:20 UTC 2025
Tue Feb 18 13:18:25 UTC 2025
Tue Feb 18 13:18:30 UTC 2025
Tue Feb 18 13:18:35 UTC 2025
- Data up to the snapshot point (13:03:15) is restored
- New data (from 13:18:15 onward) is written normally
16. Clean up

kubectl delete pod app && kubectl delete pvc ebs-snapshot-restored-claim && kubectl delete volumesnapshots ebs-volume-snapshot
Output:

pod "app" deleted
persistentvolumeclaim "ebs-snapshot-restored-claim" deleted
volumesnapshot.snapshot.storage.k8s.io "ebs-volume-snapshot" deleted
๐ AWS EFS Controller
1. EFS Controller overview
- EBS is block storage; EFS is file-system (NFS) based storage
- The Amazon EFS file system itself was created by the CloudFormation stack
- The CSI driver is set up next so the cluster can use EFS
2. Check the created EFS file system

aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
Output:
3. Check available EFS CSI driver versions

aws eks describe-addon-versions \
--addon-name aws-efs-csi-driver \
--kubernetes-version 1.31 \
--query "addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]" \
--output text
Output:

v2.1.4-eksbuild.1
True
v2.1.3-eksbuild.1
False
v2.1.2-eksbuild.1
False
v2.1.1-eksbuild.1
False
v2.1.0-eksbuild.1
False
...
4. Configure an IAM Role for Service Accounts (IRSA)

eksctl create iamserviceaccount \
--name efs-csi-controller-sa \
--namespace kube-system \
--cluster ${CLUSTER_NAME} \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
--approve \
--role-only \
--role-name AmazonEKS_EFS_CSI_DriverRole
Output:

2025-02-18 22:32:31 [โน] 2 existing iamserviceaccount(s) (kube-system/aws-load-balancer-controller,kube-system/ebs-csi-controller-sa) will be excluded
2025-02-18 22:32:31 [โน] 1 iamserviceaccount (kube-system/efs-csi-controller-sa) was included (based on the include/exclude rules)
2025-02-18 22:32:31 [!] serviceaccounts in Kubernetes will not be created or modified, since the option --role-only is used
2025-02-18 22:32:31 [โน] 1 task: { create IAM role for serviceaccount "kube-system/efs-csi-controller-sa" }
2025-02-18 22:32:31 [โน] building iamserviceaccount stack "eksctl-myeks-addon-iamserviceaccount-kube-system-efs-csi-controller-sa"
2025-02-18 22:32:31 [โน] deploying stack "eksctl-myeks-addon-iamserviceaccount-kube-system-efs-csi-controller-sa"
2025-02-18 22:32:31 [โน] waiting for CloudFormation stack "eksctl-myeks-addon-iamserviceaccount-kube-system-efs-csi-controller-sa"
2025-02-18 22:33:01 [โน] waiting for CloudFormation stack "eksctl-myeks-addon-iamserviceaccount-kube-system-efs-csi-controller-sa"
- Creates an IAM role so the EFS CSI driver can call the AWS EFS API
- Attaches the AWS-managed AmazonEFSCSIDriverPolicy IAM policy (see the trust-policy check below)
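To double-check the role just created, its trust policy should pin the OIDC sub claim to the efs-csi-controller-sa service account; a quick inspection (a sketch):

aws iam get-role --role-name AmazonEKS_EFS_CSI_DriverRole \
  --query 'Role.AssumeRolePolicyDocument' --output json
# the Condition should reference system:serviceaccount:kube-system:efs-csi-controller-sa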
5. Verify the IAM service account

eksctl get iamserviceaccount --cluster ${CLUSTER_NAME}
Output:

NAMESPACE NAME ROLE ARN
kube-system aws-load-balancer-controller arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-O6YEYsN7iVeQ
kube-system ebs-csi-controller-sa arn:aws:iam::378102432899:role/AmazonEKS_EBS_CSI_DriverRole
kube-system efs-csi-controller-sa arn:aws:iam::378102432899:role/AmazonEKS_EFS_CSI_DriverRole
The efs-csi-controller-sa service account is correctly associated with the IAM role.
6. Deploy the Amazon EFS CSI driver add-on

export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
eksctl create addon --name aws-efs-csi-driver --cluster ${CLUSTER_NAME} --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EFS_CSI_DriverRole --force
Output:

2025-02-18 22:35:34 [โน] Kubernetes version "1.31" in use by cluster "myeks"
2025-02-18 22:35:34 [โน] IRSA is set for "aws-efs-csi-driver" addon; will use this to configure IAM permissions
2025-02-18 22:35:34 [!] the recommended way to provide IAM permissions for "aws-efs-csi-driver" addon is via pod identity associations; after addon creation is completed, run `eksctl utils migrate-to-pod-identity`
2025-02-18 22:35:34 [โน] using provided ServiceAccountRoleARN "arn:aws:iam::378102432899:role/AmazonEKS_EFS_CSI_DriverRole"
2025-02-18 22:35:34 [โน] creating addon
- Installs the EFS CSI driver add-on into the EKS cluster
7. Check the add-on deployment status

eksctl get addon --cluster ${CLUSTER_NAME}
Output:

2025-02-18 22:36:24 [โน] Kubernetes version "1.31" in use by cluster "myeks"
2025-02-18 22:36:24 [โน] getting all addons
2025-02-18 22:36:26 [โน] to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME VERSION STATUS ISSUES IAMROLE UPDATE AVAILABLE CONFIGURATION VALUES POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver v1.39.0-eksbuild.1 ACTIVE 0 {
"node": {
"volumeAttachLimit": 31,
"enableMetrics": true
}
}
aws-efs-csi-driver v2.1.4-eksbuild.1 ACTIVE 0 arn:aws:iam::378102432899:role/AmazonEKS_EFS_CSI_DriverRole
coredns v1.11.4-eksbuild.2 ACTIVE 0
kube-proxy v1.31.3-eksbuild.2 ACTIVE 0
metrics-server v0.7.2-eksbuild.2 ACTIVE 0
vpc-cni v1.19.2-eksbuild.5 ACTIVE 0 arn:aws:iam::378102432899:role/eksctl-myeks-addon-vpc-cni-Role1-ZTYxtOMDwfFu enableNetworkPolicy: "true"
aws-efs-csi-driver is in ACTIVE state.
8. Verify the EFS CSI driver object

kubectl get csidrivers efs.csi.aws.com -o yaml
Output:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"storage.k8s.io/v1","kind":"CSIDriver","metadata":{"annotations":{},"name":"efs.csi.aws.com"},"spec":{"attachRequired":false}}
creationTimestamp: "2025-02-18T02:37:23Z"
name: efs.csi.aws.com
resourceVersion: "155201"
uid: 9b68ab1a-c2c2-40e2-84c8-b0f06f04b289
spec:
attachRequired: false
fsGroupPolicy: ReadWriteOnceWithFSType
podInfoOnMount: false
requiresRepublish: false
seLinuxMount: false
storageCapacity: false
volumeLifecycleModes:
- Persistent
The efs.csi.aws.com CSI driver is installed correctly.
9. Ways to use EFS: sharing the whole file system vs. Access Points
10. Clone the EFS CSI driver repo and inspect the example files

(eks-user:default) [root@operator-host ~]# git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git /root/efs-csi
(eks-user:default) [root@operator-host ~]# cd /root/efs-csi/examples/kubernetes/multiple_pods/specs && tree
Output:

Cloning into '/root/efs-csi'...
remote: Enumerating objects: 29682, done.
remote: Counting objects: 100% (5145/5145), done.
remote: Compressing objects: 100% (1142/1142), done.
remote: Total 29682 (delta 4275), reused 4015 (delta 3999), pack-reused 24537 (from 3)
Receiving objects: 100% (29682/29682), 27.11 MiB | 17.36 MiB/s, done.
Resolving deltas: 100% (16140/16140), done.
.
โโโ claim.yaml
โโโ pod1.yaml
โโโ pod2.yaml
โโโ pv.yaml
โโโ storageclass.yaml
0 directories, 5 files
- Clones the EFS CSI driver repository to get the sample manifests
- Contains StorageClass, PV, PVC, and pod YAML files
11. Create and verify the EFS StorageClass
(1) Inspect the StorageClass

(eks-user:default) [root@operator-host specs]# cat storageclass.yaml
Output:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: efs-sc
provisioner: efs.csi.aws.com
(2) Create the StorageClass

(eks-user:default) [root@operator-host specs]# kubectl apply -f storageclass.yaml
# Result
storageclass.storage.k8s.io/efs-sc created
(3) Verify the StorageClass

(eks-user:default) [root@operator-host specs]# kubectl get sc efs-sc
Output:

NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
efs-sc efs.csi.aws.com Delete Immediate false 54s
12. Configure the EFS PersistentVolume (PV)
(1) Inspect the PV manifest

(eks-user:default) [root@operator-host specs]# cat pv.yaml
Output:

apiVersion: v1
kind: PersistentVolume
metadata:
name: efs-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
csi:
driver: efs.csi.aws.com
volumeHandle: fs-4af69aab
- The sample ships with a placeholder volumeHandle: fs-4af69aab
(2) Update the EFS file system ID

(eks-user:default) [root@operator-host specs]# EfsFsId=$(aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text)
(eks-user:default) [root@operator-host specs]# sed -i "s/fs-4af69aab/$EfsFsId/g" pv.yaml
(eks-user:default) [root@operator-host specs]# cat pv.yaml
Output:

apiVersion: v1
kind: PersistentVolume
metadata:
name: efs-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
csi:
driver: efs.csi.aws.com
volumeHandle: fs-0aeb6f8c0c228b9d2
- The placeholder fs-4af69aab is replaced with the EFS file system actually in use (fs-0aeb6f8c0c228b9d2); see the mount-target check below
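Static provisioning only works if the file system has mount targets reachable from the worker nodes; a quick sanity check before creating the PV (a sketch):

aws efs describe-mount-targets --file-system-id $EfsFsId \
  --query 'MountTargets[].[SubnetId,IpAddress,LifeCycleState]' --output table
# every subnet hosting worker nodes should show a mount target in the "available" state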
13. Create and verify the EFS PV
(1) Create efs-pv

(eks-user:default) [root@operator-host specs]# kubectl apply -f pv.yaml
# Result
persistentvolume/efs-pv created
(2) Verify efs-pv

(eks-user:default) [root@operator-host specs]# kubectl get pv; kubectl describe pv
Output:

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
efs-pv 5Gi RWX Retain Available efs-sc <unset> 31s
Name: efs-pv
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: efs-sc
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: efs.csi.aws.com
FSType:
VolumeHandle: fs-0aeb6f8c0c228b9d2
ReadOnly: false
VolumeAttributes: <none>
Events: <none>
- ACCESS MODES: RWX
- Access is over NFS (by IP), so many pods can mount the volume at once
14. Create the EFS PersistentVolumeClaim (PVC)
(1) Create and apply the PVC

(eks-user:default) [root@operator-host specs]# cat claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: efs-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: efs-sc
resources:
requests:
storage: 5Gi
(eks-user:default) [root@operator-host specs]# kubectl apply -f claim.yaml
# Result
persistentvolumeclaim/efs-claim created
Creates the efs-claim PVC that will bind to efs-pv.
(2) Check the PVC status

(eks-user:default) [root@operator-host specs]# kubectl get pvc
Output:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
efs-claim Bound efs-pv 5Gi RWX efs-sc <unset> 32s
- STATUS Bound means the PV and PVC are linked correctly
- ACCESS MODES RWX allows simultaneous access from multiple pods
15. Create multiple pods backed by EFS
(1) Create and apply the pods
- Two pods (app1, app2) share the same EFS volume

(eks-user:default) [root@operator-host specs]# cat pod1.yaml pod2.yaml
apiVersion: v1
kind: Pod
metadata:
name: app1
spec:
containers:
- name: app1
image: busybox
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: efs-claim
apiVersion: v1
kind: Pod
metadata:
name: app2
spec:
containers:
- name: app2
image: busybox
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out2.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: efs-claim
(eks-user:default) [root@operator-host specs]# kubectl apply -f pod1.yaml,pod2.yaml
# Result
pod/app1 created
pod/app2 created
(2) Check the pods

(eks-user:default) [root@operator-host specs]# kubectl get pods
Output:

NAME READY STATUS RESTARTS AGE
app1 1/1 Running 0 57s
app2 1/1 Running 0 57s
16. Check the EFS volume size inside the pods
(1) Confirm the EFS volume is mounted
- Verify that /data inside app1 and app2 is mounted from the EFS volume

(eks-user:default) [root@operator-host specs]# kubectl exec -ti app1 -- sh -c "df -hT -t nfs4"
# Output
Filesystem Type Size Used Available Use% Mounted on
127.0.0.1:/ nfs4 8.0E 0 8.0E 0% /data
(eks-user:default) [root@operator-host specs]# kubectl exec -ti app2 -- sh -c "df -hT -t nfs4"
# Output
Filesystem Type Size Used Available Use% Mounted on
127.0.0.1:/ nfs4 8.0E 0 8.0E 0% /data
(2) Why the size differs
- kubectl get pv reports the capacity requested for the PV: 5GiB
- Inside the pods, however, the EFS mount shows 8.0E (exabytes)
- EFS is a file system, not block storage; it is not capped at a fixed size and grows on demand
- In other words, EFS is presented as a logically unbounded file system whose actual size follows real usage (see the metered-size check below)
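The size EFS actually meters (and bills) can be read from the API rather than from df; a quick check (a sketch):

aws efs describe-file-systems --file-system-id $EfsFsId \
  --query 'FileSystems[0].SizeInBytes.Value' --output text
# returns the currently metered size in bytes, not the 8.0E placeholder shown by df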
17. How the pods mount EFS
- app1 and app2 mount the same EFS volume at /data
- Each pod appends a timestamp to /data/out1.txt and /data/out2.txt every 5 seconds
(The pod1.yaml and pod2.yaml manifests are the same files shown in step 15 above.)
18. Check the EFS mount on the operator host

(eks-user:default) [root@operator-host specs]# df -hT
Output:

Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 981M 0 981M 0% /dev
tmpfs tmpfs 990M 0 990M 0% /dev/shm
tmpfs tmpfs 990M 432K 989M 1% /run
tmpfs tmpfs 990M 0 990M 0% /sys/fs/cgroup
/dev/xvda1 xfs 30G 3.2G 27G 11% /
192.168.1.145:/ nfs4 8.0E 0 8.0E 0% /mnt/myefs
tmpfs tmpfs 198M 0 198M 0% /run/user/1000
- The operator host mounts the same EFS volume at /mnt/myefs and can use it as shared storage
19. Check files in the shared storage

(eks-user:default) [root@operator-host specs]# tree /mnt/myefs
Output:

/mnt/myefs
โโโ memo.txt
โโโ out1.txt
โโโ out2.txt
0 directories, 3 files
- out1.txt and out2.txt were written by app1 and app2 respectively
- The operator host sees the same data (cross-pod check below)
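Because all writers share one file system, each pod can also read the file the other pod is writing; a quick cross-check (a sketch):

kubectl exec app2 -- tail -n 3 /data/out1.txt   # app2 reads app1's log
kubectl exec app1 -- tail -n 3 /data/out2.txt   # and vice versa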
20. Watch the shared storage from both sides
- The operator host and the pods read the same files
(1) On the operator host

(eks-user:default) [root@operator-host specs]# tail -f /mnt/myefs/out1.txt
Output:

Tue Feb 18 14:02:30 UTC 2025
Tue Feb 18 14:02:35 UTC 2025
Tue Feb 18 14:02:40 UTC 2025
Tue Feb 18 14:02:45 UTC 2025
Tue Feb 18 14:02:50 UTC 2025
...
(2) Inside the pod

(eks-user:default) [root@operator-host specs]# kubectl exec -ti app1 -- tail -f /data/out1.txt
Output:

Tue Feb 18 14:02:55 UTC 2025
Tue Feb 18 14:03:00 UTC 2025
Tue Feb 18 14:03:05 UTC 2025
Tue Feb 18 14:03:10 UTC 2025
Tue Feb 18 14:03:15 UTC 2025
...
- The operator host and the pods share the same files, with data written in real time
21. Unmount EFS and clean up
(1) Delete the pods and EFS-related resources

(eks-user:default) [root@operator-host specs]# kubectl delete pod app1 app2
# Result
pod "app1" deleted
pod "app2" deleted
(eks-user:default) [root@operator-host specs]# kubectl delete pvc efs-claim && kubectl delete pv efs-pv && kubectl delete sc efs-sc
# Result
persistentvolumeclaim "efs-claim" deleted
persistentvolume "efs-pv" deleted
storageclass.storage.k8s.io "efs-sc" deleted
๐ Dynamic provisioning with EFS Access Points
1. Create a StorageClass backed by EFS Access Points
Access Points let the StorageClass scope each volume to a specific directory with its own permissions.

(eks-user:default) [root@operator-host specs]# curl -s -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
(eks-user:default) [root@operator-host specs]# cat storageclass.yaml
Output:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: efs-sc
provisioner: efs.csi.aws.com
parameters:
provisioningMode: efs-ap
fileSystemId: fs-92107410
directoryPerms: "700"
gidRangeStart: "1000" # optional
gidRangeEnd: "2000" # optional
basePath: "/dynamic_provisioning" # optional
subPathPattern: "${.PVC.namespace}/${.PVC.name}" # optional
ensureUniqueDirectory: "true" # optional
reuseAccessPoint: "false" # optional
Key settings
- fileSystemId: replaced with the EFS ID actually in use
- basePath: the parent directory under which volumes are created
- subPathPattern: a per-PVC directory layout

(eks-user:default) [root@operator-host specs]# sed -i "s/fs-92107410/$EfsFsId/g" storageclass.yaml
(eks-user:default) [root@operator-host specs]# kubectl apply -f storageclass.yaml
# Result
storageclass.storage.k8s.io/efs-sc created
2. Verify the StorageClass

(eks-user:default) [root@operator-host specs]# kubectl get sc efs-sc
Output:

NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
efs-sc efs.csi.aws.com Delete Immediate false 5s
3. Create the PVC and pod
(1) Download the PVC and pod manifest

(eks-user:default) [root@operator-host specs]# curl -s -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml
(2) Inspect the manifest

(eks-user:default) [root@operator-host specs]# cat pod.yaml
Output:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: efs-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: efs-sc
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: efs-app
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: efs-claim
(3) Apply the manifest

(eks-user:default) [root@operator-host specs]# kubectl apply -f pod.yaml
# Result
persistentvolumeclaim/efs-claim created
pod/efs-app created
4. Check the PVC, PV, and pod

(eks-user:default) [root@operator-host specs]# kubectl get pvc,pv,pod
Output:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/efs-claim Bound pvc-4ea92c76-1bda-4425-b183-77f8c6ea11ef 5Gi RWX efs-sc <unset> 2s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/pvc-4ea92c76-1bda-4425-b183-77f8c6ea11ef 5Gi RWX Delete Bound default/efs-claim efs-sc <unset> 2s
NAME READY STATUS RESTARTS AGE
pod/efs-app 0/1 ContainerCreating 0 2s
5. Watch the provisioning logs

(eks-user:default) [root@operator-host specs]# kubectl krew install stern
Output:

Updated the local copy of plugin index.
Installing plugin: stern
Installed plugin: stern
\
| Use this plugin:
| kubectl stern
| Documentation:
| https://github.com/stern/stern
/
WARNING: You installed plugin "stern" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
Tail the EFS CSI driver logs:

kubectl stern -n kube-system -l app=efs-csi-controller -c csi-provisioner
6. Check the EFS mount inside the pod

(eks-user:default) [root@operator-host specs]# kubectl exec -it efs-app -- sh -c "df -hT -t nfs4"
Output:

Filesystem Type Size Used Avail Use% Mounted on
127.0.0.1:/ nfs4 8.0E 0 8.0E 0% /data
- EFS is mounted correctly
- The volume can grow dynamically
7. Managing storage through EFS Access Points
- Access Points can isolate storage per environment
8. Check the data in the shared storage

(eks-user:default) [root@operator-host specs]# tree /mnt/myefs
Output:

/mnt/myefs
โโโ dynamic_provisioning
โย ย โโโ default
โย ย โโโ efs-claim-1a0017bc-6165-44fb-ac74-5d623fd39925
โย ย โโโ out
โโโ memo.txt
โโโ out1.txt
โโโ out2.txt
3 directories, 4 files
- A per-PVC directory was created automatically
- Dynamic volume allocation works as expected (see the Access Point listing below)
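Each dynamically provisioned volume is backed by an EFS Access Point, which can be listed via the API (a sketch):

aws efs describe-access-points --file-system-id $EfsFsId \
  --query 'AccessPoints[].[AccessPointId,RootDirectory.Path]' --output table
# each RootDirectory.Path should match a /dynamic_provisioning/<namespace>/<pvc-name>-... directory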
9. Clean up

(eks-user:default) [root@operator-host specs]# kubectl delete -f pod.yaml
persistentvolumeclaim "efs-claim" deleted
pod "efs-app" deleted
(eks-user:default) [root@operator-host specs]# kubectl delete -f storageclass.yaml
storageclass.storage.k8s.io "efs-sc" deleted
(eks-user:default) [root@operator-host specs]# cd $HOME
๐๏ธ Instance-store-backed Persistent Volumes and adding a NodeGroup
1. What is an instance store?
- Local storage physically attached to the EC2 host
- Data can be lost when the instance stops or terminates
- Delivers outstanding IO performance at very high speed
2. List instance types with instance store support and their capacity

aws ec2 describe-instance-types \
--filters "Name=instance-type,Values=c5*" "Name=instance-storage-supported,Values=true" \
--query "InstanceTypes[].[InstanceType, InstanceStorageInfo.TotalSizeInGB]" \
--output table
Output:

--------------------------
| DescribeInstanceTypes |
+---------------+--------+
| c5d.large | 50 |
| c5d.12xlarge | 1800 |
| c5d.2xlarge | 200 |
| c5d.24xlarge | 3600 |
| c5d.4xlarge | 400 |
| c5d.18xlarge | 1800 |
| c5d.xlarge | 100 |
| c5d.metal | 3600 |
| c5d.9xlarge | 900 |
+---------------+--------+
3. Set the subnets and SSH key pair
- Variables for the subnets the node group will use and the SSH key pair

export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet3" --query "Subnets[0].[SubnetId]" --output text)
echo $PubSubnet1 $PubSubnet2 $PubSubnet3
SSHKEYNAME=kp-aews
4. Write the new NodeGroup config

cat << EOF > myng2.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: myeks
region: ap-northeast-2
version: "1.31"
managedNodeGroups:
- amiFamily: AmazonLinux2
desiredCapacity: 1
instanceType: c5d.large
labels:
alpha.eksctl.io/cluster-name: myeks
alpha.eksctl.io/nodegroup-name: ng2
disk: instancestore
maxPodsPerNode: 110
maxSize: 1
minSize: 1
name: ng2
ssh:
allow: true
publicKeyName: $SSHKEYNAME
subnets:
- $PubSubnet1
- $PubSubnet2
- $PubSubnet3
tags:
alpha.eksctl.io/nodegroup-name: ng2
alpha.eksctl.io/nodegroup-type: managed
volumeIOPS: 3000
volumeSize: 30
volumeThroughput: 125
volumeType: gp3
preBootstrapCommands:
- |
# Install Tools
yum install nvme-cli links tree jq tcpdump sysstat -y
# Filesystem & Mount
mkfs -t xfs /dev/nvme1n1
echo /dev/nvme1n1 /data xfs defaults,noatime 0 2 >> /etc/fstab
EOF
- Instance type: c5d.large (comes with a 50GB instance store)
- Disk mount: /dev/nvme1n1 mounted at /data
- Pre-installed packages: nvme-cli, jq, tree, tcpdump, sysstat
- Root volume size and performance: 30GB (gp3, 3000 IOPS, 125MB/s)
5. Deploy the new NodeGroup

eksctl create nodegroup -f myng2.yaml
Output:

2025-02-18 23:40:47 [โน] nodegroup "ng2" will use "" [AmazonLinux2/1.31]
2025-02-18 23:40:47 [โน] using EC2 key pair "kp-aews"
2025-02-18 23:40:48 [โน] 1 existing nodegroup(s) (ng1) will be excluded
2025-02-18 23:40:48 [โน] 1 nodegroup (ng2) was included (based on the include/exclude rules)
2025-02-18 23:40:48 [โน] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "myeks"
2025-02-18 23:40:49 [โน]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "ng2" } }
}
2025-02-18 23:40:49 [โน] checking cluster stack for missing resources
2025-02-18 23:40:49 [โน] cluster stack has all required resources
2025-02-18 23:40:49 [โน] building managed nodegroup stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:40:49 [โน] deploying stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:40:49 [โน] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:41:20 [โน] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:42:18 [โน] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:43:00 [โน] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:43:38 [โน] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng2"
2025-02-18 23:43:38 [โน] no tasks
2025-02-18 23:43:38 [โ] created 0 nodegroup(s) in cluster "myeks"
2025-02-18 23:43:38 [โน] nodegroup "ng2" has 1 node(s)
2025-02-18 23:43:38 [โน] node "ip-192-168-3-139.ap-northeast-2.compute.internal" is ready
2025-02-18 23:43:38 [โน] waiting for at least 1 node(s) to become ready in "ng2"
2025-02-18 23:43:38 [โน] nodegroup "ng2" has 1 node(s)
2025-02-18 23:43:38 [โน] node "ip-192-168-3-139.ap-northeast-2.compute.internal" is ready
2025-02-18 23:43:38 [โ] created 1 managed nodegroup(s) in cluster "myeks"
2025-02-18 23:43:38 [โน] checking security group configuration for all nodegroups
2025-02-18 23:43:38 [โน] all nodegroups have up-to-date cloudformation templates
6. Verify the deployed nodes

kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
Output:

NAME STATUS ROLES AGE VERSION INSTANCE-TYPE CAPACITYTYPE ZONE
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 11h v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2a
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 11h v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2b
ip-192-168-3-139.ap-northeast-2.compute.internal Ready <none> 2m59s v1.31.5-eks-5d632ec c5d.large ON_DEMAND ap-northeast-2c
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 11h v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2c
- The c5d.large node was added successfully

kubectl get node -l disk=instancestore
Output:

NAME STATUS ROLES AGE VERSION
ip-192-168-3-139.ap-northeast-2.compute.internal Ready <none> 3m29s v1.31.5-eks-5d632ec
- The node with the instance store is confirmed
7. Configure the security group (ng2-remoteAccess)

export NG2SGID=$(aws ec2 describe-security-groups --filters "Name=group-name,Values=*ng2-remoteAccess*" --query 'SecurityGroups[*].GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id $NG2SGID --protocol '-1' --cidr $(curl -s ipinfo.io/ip)/32
aws ec2 authorize-security-group-ingress --group-id $NG2SGID --protocol '-1' --cidr 172.20.1.100/32
Output:

{
"Return": true,
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-01263a4b3dd1797db",
"GroupId": "sg-0e186958bfe3d2895",
"GroupOwnerId": "378102432899",
"IsEgress": false,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "182.230.60.93/32",
"SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-01263a4b3dd1797db"
}
]
}
{
"Return": true,
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-004bfcccf4933bca4",
"GroupId": "sg-0e186958bfe3d2895",
"GroupOwnerId": "378102432899",
"IsEgress": false,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "172.20.1.100/32",
"SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-004bfcccf4933bca4"
}
]
}
- Security group rules added
8. SSH into the worker node

N4=3.34.90.173
ssh ec2-user@$N4 hostname
Output:

The authenticity of host '3.34.90.173 (3.34.90.173)' can't be established.
ED25519 key fingerprint is SHA256:8982ohpoaUv/4ImQwqxA8Ye4HDQZbziZ+n7vcW++NKw.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '3.34.90.173' (ED25519) to the list of known hosts.
ip-192-168-3-139.ap-northeast-2.compute.internal
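The node's public IP was copied manually above; it can also be resolved with the AWS CLI, assuming the eks:nodegroup-name tag that managed node groups place on their instances (a sketch):

N4=$(aws ec2 describe-instances \
  --filters "Name=tag:eks:nodegroup-name,Values=ng2" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text)
echo $N4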
9. Check the instance store disk
List the NVMe disks available on the instance:

ssh ec2-user@$N4 sudo nvme list
Output:

Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 vol0c28a9e99805af630 Amazon Elastic Block Store 1 32.21 GB / 32.21 GB 512 B + 0 B 1.0
/dev/nvme1n1 AWS3E4B64A6880B81A2C Amazon EC2 NVMe Instance Storage 1 50.00 GB / 50.00 GB 512 B + 0 B 0
- In addition to the 30GB EBS volume, the 50GB instance store is present
10. Check the instance store mount
Verify that the nvme1n1 disk is mounted at /data:

ssh ec2-user@$N4 sudo lsblk -e 7 -d
Output:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 46.6G 0 disk /data
nvme0n1 259:1 0 30G 0 disk
- The 50GB instance store disk is mounted at /data

ssh ec2-user@$N4 sudo df -hT -t xfs
Output:

Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs 30G 3.7G 27G 13% /
/dev/nvme1n1 xfs 47G 365M 47G 1% /data
- The disk is formatted as XFS and mounted at /data

ssh ec2-user@$N4 sudo tree /data
Output:

/data
0 directories, 0 files
- The data directory is still empty
11. Check fstab (auto-mount on boot)

ssh ec2-user@$N4 sudo cat /etc/fstab
Output:

UUID=1dfdfe0d-276a-4d52-8572-ceb3b011d9ea / xfs defaults,noatime 1 1
/dev/nvme1n1 /data xfs defaults,noatime 0 2
- /data is configured to mount automatically on reboot
12. Check node resources

kubectl describe node -l disk=instancestore | grep Allocatable: -A7
Output:

Allocatable:
cpu: 1930m
ephemeral-storage: 27905944324
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3097488Ki
pods: 110
System Info:
The ephemeral-storage and pods values can be checked here.
13. Check the kubelet parameters

ssh ec2-user@$N4 sudo ps -ef | grep kubelet
Output:

root 3014 1 0 14:42 ? 00:00:07 /usr/bin/kubelet --config /etc/kubernetes/kubelet/kubelet-config.json --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime-endpoint unix:///run/containerd/containerd.sock --image-credential-provider-config /etc/eks/image-credential-provider/config.json --image-credential-provider-bin-dir /etc/eks/image-credential-provider --node-ip=192.168.3.139 --pod-infra-container-image=602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/pause:3.5 --v=2 --hostname-override=ip-192-168-3-139.ap-northeast-2.compute.internal --cloud-provider=external --node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/cluster-name=myeks,alpha.eksctl.io/nodegroup-name=ng2,disk=instancestore,eks.amazonaws.com/nodegroup-image=ami-0fa05db9e3c145f63,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=ng2,eks.amazonaws.com/sourceLaunchTemplateId=lt-0d2cf44115bd914f2 --max-pods=29 --max-pods=110
root 3602 3110 0 14:42 ? 00:00:00 /csi-node-driver-registrar --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock --v=2
root 4015 3921 0 14:42 ? 00:00:00 /bin/aws-ebs-csi-driver node --endpoint=unix:/csi/csi.sock --http-endpoint=0.0.0.0:3302 --csi-mount-point-prefix=/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/ --volume-attach-limit=31 --logging-format=text --v=2
root 4063 3921 0 14:42 ? 00:00:00 /csi-node-driver-registrar --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock --v=2
Both --max-pods=29 and --max-pods=110 appear on the command line; the last flag wins, so --max-pods=110 is the effective value.
14. Delete the existing local-path storage class

kubectl get sc

Output:

NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 12h
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 3h32m
local-path rancher.io/local-path Delete WaitForFirstConsumer false 6h53m
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
# Result
namespace "local-path-storage" deleted
serviceaccount "local-path-provisioner-service-account" deleted
role.rbac.authorization.k8s.io "local-path-provisioner-role" deleted
clusterrole.rbac.authorization.k8s.io "local-path-provisioner-role" deleted
rolebinding.rbac.authorization.k8s.io "local-path-provisioner-bind" deleted
clusterrolebinding.rbac.authorization.k8s.io "local-path-provisioner-bind" deleted
deployment.apps "local-path-provisioner" deleted
storageclass.storage.k8s.io "local-path" deleted
configmap "local-path-config" deleted
15. Deploy a new local-path storage class

curl -sL https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml | sed 's/opt/data/g' | kubectl apply -f -
# Result
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
role.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
rolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
16. Verify the local-path configuration

kubectl describe cm -n local-path-storage local-path-config
Output:

Name: local-path-config
Namespace: local-path-storage
Labels: <none>
Annotations: <none>
Data
====
config.json:
----
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/data/local-path-provisioner"]
}
]
}
helperPod.yaml:
----
apiVersion: v1
kind: Pod
metadata:
name: helper-pod
spec:
priorityClassName: system-node-critical
tolerations:
- key: node.kubernetes.io/disk-pressure
operator: Exists
effect: NoSchedule
containers:
- name: helper-pod
image: busybox
imagePullPolicy: IfNotPresent
setup:
----
#!/bin/sh
set -eu
mkdir -m 0777 -p "$VOL_DIR"
teardown:
----
#!/bin/sh
set -eu
rm -rf "$VOL_DIR"
BinaryData
====
Events: <none>
- The provisioner's host path is now /data/local-path-provisioner (on the instance store)
17. Log in to the instance

ssh ec2-user@$N4
Last login: Tue Feb 18 15:07:08 2025 from 182.230.60.93
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2026-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
[ec2-user@ip-192-168-3-139 ~]$
18. Monitor disk performance (IOPS)

[ec2-user@ip-192-168-3-139 ~]$ iostat -xmdz 1 -p nvme1n1
Output:

Linux 5.10.233-224.894.amzn2.x86_64 (ip-192-168-3-139.ap-northeast-2.compute.internal) 02/18/2025 _x86_64_ (2 CPU)
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme1n1 0.00 0.00 0.19 0.16 0.00 0.02 118.97 0.00 0.23 0.08 0.41 0.28 0.01
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
- Real-time IO statistics for the nvme1n1 disk
19. Measure IOPS with an FIO test

(eks-user:default) [root@operator-host ~]# kubestr fio -f fio-read.fio -s local-path --size 10G --nodeselector disk=instancestore
- Runs the FIO benchmark through kubestr

Output:

PVC created kubestr-fio-pvc-cvqpl
Pod created kubestr-fio-pod-xzw7g
Running FIO test (fio-read.fio) on StorageClass (local-path) with a PVC of Size (10G)
Elapsed time- 3m42.67154584s
FIO test results:
FIO version - fio-3.36
Global options - ioengine=libaio verify= direct=1 gtod_reduce=
JobName:
blocksize= filesize= iodepth= rw=
read:
IOPS=20308.564453 BW(KiB/s)=81234
iops: min=15898 max=93734 avg=20316.953125
bw(KiB/s): min=63592 max=374940 avg=81267.796875
Disk stats (read/write):
nvme1n1: ios=2433492/10 merge=0/3 ticks=7648125/15 in_queue=7648141, util=99.949951%
- OK
- Roughly 7x the IOPS of the earlier gp3-backed test (IOPS=20308.56), with an average bandwidth of about 81MB/s; the fio job file is sketched below
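The fio-read.fio job file passed to kubestr is not shown above; a random-read job of the same flavor might look like this (a sketch under that assumption, not the exact file used):

cat > fio-read.fio <<EOF
[global]
ioengine=libaio
direct=1
time_based
runtime=120
[read-test]
rw=randread     # random reads, matching the read-oriented results above
bs=4k           # 4KiB blocks
iodepth=16
filesize=9G     # stays inside the 10G test PVC
EOF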
20. Clean up the deployed resources

# Delete the local-path storage class
kubectl delete -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
namespace "local-path-storage" deleted
serviceaccount "local-path-provisioner-service-account" deleted
role.rbac.authorization.k8s.io "local-path-provisioner-role" deleted
clusterrole.rbac.authorization.k8s.io "local-path-provisioner-role" deleted
rolebinding.rbac.authorization.k8s.io "local-path-provisioner-bind" deleted
clusterrolebinding.rbac.authorization.k8s.io "local-path-provisioner-bind" deleted
deployment.apps "local-path-provisioner" deleted
storageclass.storage.k8s.io "local-path" deleted
configmap "local-path-config" deleted
# Delete the ng2 nodegroup
eksctl delete nodegroup -c $CLUSTER_NAME -n ng2
2025-02-19 00:18:37 [โน] 1 nodegroup (ng2) was included (based on the include/exclude rules)
2025-02-19 00:18:37 [โน] will drain 1 nodegroup(s) in cluster "myeks"
2025-02-19 00:18:37 [โน] starting parallel draining, max in-flight of 1
2025-02-19 00:18:37 [โน] cordon node "ip-192-168-3-139.ap-northeast-2.compute.internal"
2025-02-19 00:18:37 [โ] drained all nodes: [ip-192-168-3-139.ap-northeast-2.compute.internal]
2025-02-19 00:18:37 [โน] will delete 1 nodegroups from cluster "myeks"
2025-02-19 00:18:38 [โน] 1 task: { 1 task: { delete nodegroup "ng2" [async] } }
2025-02-19 00:18:38 [โน] will delete stack "eksctl-myeks-nodegroup-ng2"
2025-02-19 00:18:38 [โ] deleted 1 nodegroup(s) from cluster "myeks"
๐ Multi-platform container builds and ECR deployment
1. Check the node group's CPU architecture

(eks-user:default) [root@operator-host ~]# arch
# Result
x86_64
Running a container built for another architecture (ARM, RISC-V) fails:

(eks-user:default) [root@operator-host ~]# docker run --rm -it riscv64/ubuntu bash
# Result
Unable to find image 'riscv64/ubuntu:latest' locally
latest: Pulling from riscv64/ubuntu
docker: no matching manifest for linux/amd64 in the manifest list entries.
See 'docker run --help'.
(eks-user:default) [root@operator-host ~]# docker run --rm -it arm64v8/ubuntu bash
# Result
Unable to find image 'arm64v8/ubuntu:latest' locally
latest: Pulling from arm64v8/ubuntu
docker: no matching manifest for linux/amd64 in the manifest list entries.
See 'docker run --help'
- Images for a different CPU architecture cannot run by default
2. Set up a multi-platform build environment
(1) Check the buildx builder status

(eks-user:default) [root@operator-host ~]# docker buildx ls
Output:

NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default * docker
default default running v0.12.5 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
- Only amd64 platforms are supported by default
(2) Enable other architectures with QEMU

(eks-user:default) [root@operator-host ~]# docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
Output:

Unable to find image 'multiarch/qemu-user-static:latest' locally
latest: Pulling from multiarch/qemu-user-static
205dae5015e7: Pull complete
816739e52091: Pull complete
30abb83a18eb: Pull complete
0657daef200b: Pull complete
30c9c93f40b9: Pull complete
Digest: sha256:fe60359c92e86a43cc87b3d906006245f77bfc0565676b80004cc666e4feb9f0
Status: Downloaded newer image for multiarch/qemu-user-static:latest
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc-static as binfmt interpreter for sparc
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-sparc64-static as binfmt interpreter for sparc64
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be
Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa
Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32
Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64
Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa
Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb
Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze
Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel
Setting /usr/bin/qemu-or1k-static as binfmt interpreter for or1k
Setting /usr/bin/qemu-hexagon-static as binfmt interpreter for hexagon
- QEMU registers binfmt interpreters for many architectures
(3) Verify the QEMU image

(eks-user:default) [root@operator-host ~]# docker images
Output:

REPOSITORY TAG IMAGE ID CREATED SIZE
multiarch/qemu-user-static latest 3539aaa87393 2 years ago 305MB
3. Create and verify the builder
(1) Create a buildx builder

(eks-user:default) [root@operator-host ~]# docker buildx create --use --name mybuilder
# Result
mybuilder
(2) List buildx builders

(eks-user:default) [root@operator-host ~]# docker buildx ls
Output:

NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
mybuilder * docker-container
mybuilder0 unix:///var/run/docker.sock inactive
default docker
default default running v0.12.5 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
- The multi-platform builder is now active
(3) Bootstrap the builder

(eks-user:default) [root@operator-host ~]# docker buildx inspect --bootstrap
Output:

[+] Building 8.6s (1/1) FINISHED
=> [internal] booting buildkit 8.6s
=> => pulling image moby/buildkit:buildx-stable-1 7.9s
=> => creating container buildx_buildkit_mybuilder0 0.7s
Name: mybuilder
Driver: docker-container
Last Activity: 2025-02-18 15:33:47 +0000 UTC
Nodes:
Name: mybuilder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.19.0
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
Labels:
org.mobyproject.buildkit.worker.executor: oci
org.mobyproject.buildkit.worker.hostname: 9dd2e75695e3
org.mobyproject.buildkit.worker.network: host
org.mobyproject.buildkit.worker.oci.process-mode: sandbox
org.mobyproject.buildkit.worker.selinux.enabled: false
org.mobyproject.buildkit.worker.snapshotter: overlayfs
GC Policy rule#0:
All: false
Filters: type==source.local,type==exec.cachemount,type==source.git.checkout
Keep Duration: 48h0m0s
GC Policy rule#1:
All: false
Keep Duration: 1440h0m0s
Keep Bytes: 2.794GiB
GC Policy rule#2:
All: false
Keep Bytes: 2.794GiB
GC Policy rule#3:
All: true
Keep Bytes: 2.794GiB
- Containers can now be built for multiple architectures
(4) List buildx builders again

(eks-user:default) [root@operator-host ~]# docker buildx ls
Output:

NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
mybuilder * docker-container
mybuilder0 unix:///var/run/docker.sock running v0.19.0 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
default docker
default default running v0.12.5 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
linux/amd64, linux/arm64, linux/riscv64, and more are now supported.
(5) Check the running builder container

(eks-user:default) [root@operator-host ~]# docker ps
Output:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9dd2e75695e3 moby/buildkit:buildx-stable-1 "buildkitd" 58 seconds ago Up 57 seconds buildx_buildkit_mybuilder0
- The moby/buildkit container is running; multi-platform builds are now possible
4. Write a sample container application
(1) Create the project directory

(eks-user:default) [root@operator-host ~]# mkdir myweb && cd myweb
(2) Write server.py

(eks-user:default) [root@operator-host myweb]# cat > server.py <<EOF
> from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
> from datetime import datetime
> import socket
>
> class RequestHandler(BaseHTTPRequestHandler):
> def do_GET(self):
> self.send_response(200)
> self.send_header('Content-type', 'text/plain')
> self.end_headers()
>
> now = datetime.now()
> hostname = socket.gethostname()
> response_string = now.strftime("The time is %-I:%M:%S %p, VERSION 0.0.1\n")
> response_string += f"Server hostname: {hostname}\n"
> self.wfile.write(bytes(response_string, "utf-8"))
>
> def startServer():
> try:
> server = ThreadingHTTPServer(('', 80), RequestHandler)
> print("Listening on " + ":".join(map(str, server.server_address)))
> server.serve_forever()
> except KeyboardInterrupt:
> server.shutdown()
>
> if __name__ == "__main__":
> startServer()
> EOF
(3) Write the Dockerfile

(eks-user:default) [root@operator-host myweb]# cat > Dockerfile <<EOF
> FROM python:3.12
> ENV PYTHONUNBUFFERED 1
> COPY . /app
> WORKDIR /app
> CMD python3 server.py
> EOF
5. Single-platform build and run
(1) Pull the Python 3.12 base image

(eks-user:default) [root@operator-host myweb]# docker pull python:3.12
# Result
3.12: Pulling from library/python
a492eee5e559: Pull complete
32b550be6cb6: Pull complete
35af2a7690f2: Pull complete
7576b00d9bb1: Pull complete
07612085660d: Pull complete
60fd44efca0f: Pull complete
4a12975a6131: Pull complete
Digest: sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0
Status: Downloaded newer image for python:3.12
docker.io/library/python:3.12
(2) Build the image

(eks-user:default) [root@operator-host myweb]# docker build -t myweb:1 -t myweb:latest .
# Result
[+] Building 0.3s (8/8) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 125B 0.0s
=> [internal] load metadata for docker.io/library/python:3.12 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 1.04kB 0.0s
=> [1/3] FROM docker.io/library/python:3.12 0.1s
=> [2/3] COPY . /app 0.2s
=> [3/3] WORKDIR /app 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:e8e1bb0cc5209d01fa642497dc930b06db1e6317d3eacc9b94c40033b7436efd 0.0s
=> => naming to docker.io/library/myweb:1 0.0s
=> => naming to docker.io/library/myweb:latest 0.0s
The image is built and tagged as myweb:1 and myweb:latest.
6. Check the container images

(eks-user:default) [root@operator-host myweb]# docker images
Output:

REPOSITORY TAG IMAGE ID CREATED SIZE
myweb 1 e8e1bb0cc520 27 seconds ago 1.02GB
myweb latest e8e1bb0cc520 27 seconds ago 1.02GB
python 3.12 149b9784258f 13 days ago 1.02GB
moby/buildkit buildx-stable-1 23b5a9d195cf 4 weeks ago 208MB
multiarch/qemu-user-static latest 3539aaa87393 2 years ago 305MB
- The built myweb:1 image is listed
7. Run the container and check the service

(eks-user:default) [root@operator-host myweb]# docker run -d -p 8080:80 --name=timeserver myweb
# Result
2651f49a0fe29182098c0c6e185ee81e63816f1ddf034fa2f32cc58383f7badb
- The timeserver container is running and serves HTTP on port 8080

(eks-user:default) [root@operator-host myweb]# curl http://localhost:8080
Output:

The time is 3:51:20 PM, VERSION 0.0.1
Server hostname: 2651f49a0fe2
- A simple web server returning the current time and the container hostname
8. Clean up the container

(eks-user:default) [root@operator-host myweb]# docker rm -f timeserver
# Result
timeserver
9. Log in to Docker Hub

(eks-user:default) [root@operator-host myweb]# docker login
Log in with your Docker ID or email address to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com/ to create one.
You can log in with your password or a Personal Access Token (PAT). Using a limited-scope PAT grants better security and is required for organizations using SSO. Learn more at https://docs.docker.com/go/access-tokens/
Username: shinminjin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
10. Set the Docker username variable

(eks-user:default) [root@operator-host myweb]# DOCKERNAME=shinminjin
11. Build and push the multi-architecture image

(eks-user:default) [root@operator-host myweb]# docker buildx build --platform linux/amd64,linux/arm64 --push --tag $DOCKERNAME/myweb:multi .
Output:

[+] Building 106.5s (14/14) FINISHED docker-container:mybuilder
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 125B 0.0s
=> [linux/arm64 internal] load metadata for docker.io/library/python:3.12 2.6s
=> [linux/amd64 internal] load metadata for docker.io/library/python:3.12 2.5s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/amd64 1/3] FROM docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 36.9s
=> => resolve docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 0.0s
=> => sha256:4a12975a6131fb8b18f6c80441a8533d18ec06d744cd9dd26431ae147a1b7552 248B / 248B 0.5s
=> => sha256:60fd44efca0fdc711987188c7b289674aea93ed94d2beff8a1e4d92a8e1fbe7f 25.66MB / 25.66MB 1.1s
=> => sha256:7576b00d9bb10cc967bb5bdeeb3d5fa078ac8800e112aa03ed15ec199662d4f7 211.33MB / 211.33MB 5.7s
=> => sha256:07612085660d86eae935f91c31ea91065995815b395c798b5c0f8df260c7e2a8 6.16MB / 6.16MB 0.7s
=> => sha256:35af2a7690f2b43e7237d1fae8e3f2350dfb25f3249e9cf65121866f9c56c772 64.39MB / 64.39MB 2.6s
=> => sha256:32b550be6cb62359a0f3a96bc0dc289f8b45d097eaad275887f163c6780b4108 24.06MB / 24.06MB 1.7s
=> => sha256:a492eee5e55976c7d3feecce4c564aaf6f14fb07fdc5019d06f4154eddc93fde 48.48MB / 48.48MB 1.9s
=> => extracting sha256:a492eee5e55976c7d3feecce4c564aaf6f14fb07fdc5019d06f4154eddc93fde 4.3s
=> => extracting sha256:32b550be6cb62359a0f3a96bc0dc289f8b45d097eaad275887f163c6780b4108 1.5s
=> => extracting sha256:35af2a7690f2b43e7237d1fae8e3f2350dfb25f3249e9cf65121866f9c56c772 5.4s
=> => extracting sha256:7576b00d9bb10cc967bb5bdeeb3d5fa078ac8800e112aa03ed15ec199662d4f7 15.7s
=> => extracting sha256:07612085660d86eae935f91c31ea91065995815b395c798b5c0f8df260c7e2a8 0.6s
=> => extracting sha256:60fd44efca0fdc711987188c7b289674aea93ed94d2beff8a1e4d92a8e1fbe7f 1.6s
=> => extracting sha256:4a12975a6131fb8b18f6c80441a8533d18ec06d744cd9dd26431ae147a1b7552 0.0s
=> [linux/arm64 1/3] FROM docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 36.1s
=> => resolve docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 0.0s
=> => sha256:305935ae2ee22ec786106269d7da84b1df25782c691db84256ebce68190e1c79 250B / 250B 0.5s
=> => sha256:d708adbcc8ee3a9b1354ffe945eb8875d5e843a6aceb8397559caf8ef2ffd214 24.91MB / 24.91MB 1.1s
=> => sha256:7d49e4574da2d8f2419ffdbd2cfc65c6be873a05dab52159fb02b63d8be816fe 6.24MB / 6.24MB 1.0s
=> => sha256:9611c2b713640ce0f9156445b244c4da5e621183b56c0901d97a8b6d54ce10d7 202.72MB / 202.72MB 6.2s
=> => sha256:c9d3572a68af0b860060b7ea84adfa8406fa20cfd1337c947dfb661aa965eee7 64.36MB / 64.36MB 2.8s
=> => sha256:193c44006e77abbadfdd7be72b4ab6d7a5c08640ef575970f722b798ee7800ac 23.60MB / 23.60MB 1.3s
=> => sha256:106abeaee908db66722312b3379ae398e2bcc5b2fdad0cc248509efa14a819ff 48.31MB / 48.31MB 2.2s
=> => extracting sha256:106abeaee908db66722312b3379ae398e2bcc5b2fdad0cc248509efa14a819ff 8.1s
=> => extracting sha256:193c44006e77abbadfdd7be72b4ab6d7a5c08640ef575970f722b798ee7800ac 1.6s
=> => extracting sha256:c9d3572a68af0b860060b7ea84adfa8406fa20cfd1337c947dfb661aa965eee7 5.4s
=> => extracting sha256:9611c2b713640ce0f9156445b244c4da5e621183b56c0901d97a8b6d54ce10d7 14.6s
=> => extracting sha256:7d49e4574da2d8f2419ffdbd2cfc65c6be873a05dab52159fb02b63d8be816fe 0.6s
=> => extracting sha256:d708adbcc8ee3a9b1354ffe945eb8875d5e843a6aceb8397559caf8ef2ffd214 2.2s
=> => extracting sha256:305935ae2ee22ec786106269d7da84b1df25782c691db84256ebce68190e1c79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 1.04kB 0.0s
=> [linux/arm64 2/3] COPY . /app 0.2s
=> [linux/arm64 3/3] WORKDIR /app 0.0s
=> [linux/amd64 2/3] COPY . /app 0.1s
=> [linux/amd64 3/3] WORKDIR /app 0.0s
=> exporting to image 66.8s
=> => exporting layers 0.1s
=> => exporting manifest sha256:b73a3a9ae85ac83166587bc398a9cda723b1296cd200e7bf61a2f8fdc03f22d6 0.0s
=> => exporting config sha256:2e766855d2c0dc9c8a005866ec139d251b4f3cabd38586fff3fa9df874b1b14f 0.0s
=> => exporting attestation manifest sha256:be4b65e573ddbae236084237c32bb3e61598865f2378464082b008a268349bd7 0.0s
=> => exporting manifest sha256:5d18951aa10d7db756c39875c1360499c3a6643a097f9ea81ed8fdc7479c9b49 0.0s
=> => exporting config sha256:c319f11eb4c8a52e492b75bd068715a6703f6d0afcf6d6240dbed9c0ad1011e8 0.0s
=> => exporting attestation manifest sha256:eaa3e8e820eb8ee42165d8d674562cc887cdf8b0e9ba7d26d24e05bde231ce07 0.0s
=> => exporting manifest list sha256:1c6d6bc4f4383a11c8d033222ba28e68073abd45e0de31272d0b94020916ee56 0.0s
=> => pushing layers 62.4s
=> => pushing manifest for docker.io/shinminjin/myweb:multi@sha256:1c6d6bc4f4383a11c8d033222ba28e68073abd45e0de31272d0b94020916ee56 4.2s
=> [auth] shinminjin/myweb:pull,push token for registry-1.docker.io 0.0s
2 warnings found (use --debug to expand):
- LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 2)
- JSONArgsRecommended: JSON arguments recommended for CMD to prevent unintended behavior related to OS signals (line 5)
|
- Builds a container image that supports the linux/amd64 and linux/arm64 architectures
- Pushes the built image to Docker Hub (see the inspect sketch below)
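For a quick double check, the pushed manifest list can also be inspected straight from the registry with buildx (exact output varies by Docker version):
| # Inspect the multi-arch manifest list that was just pushed
docker buildx imagetools inspect $DOCKERNAME/myweb:multi
|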
12. Verify the architecture of the built container image
- The OS and architectures of the image pushed to Docker Hub can be checked
- ✅ Operating system (OS): linux
- ✅ Supported architectures: amd64, arm64
- Supporting multiple platforms increases operational efficiency
| (eks-user:default) [root@operator-host myweb]# docker manifest inspect $DOCKERNAME/myweb:multi | jq
|
✅ Output
| {
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.index.v1+json",
"manifests": [
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 2010,
"digest": "sha256:b73a3a9ae85ac83166587bc398a9cda723b1296cd200e7bf61a2f8fdc03f22d6",
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 2010,
"digest": "sha256:5d18951aa10d7db756c39875c1360499c3a6643a097f9ea81ed8fdc7479c9b49",
"platform": {
"architecture": "arm64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 566,
"digest": "sha256:be4b65e573ddbae236084237c32bb3e61598865f2378464082b008a268349bd7",
"platform": {
"architecture": "unknown",
"os": "unknown"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 566,
"digest": "sha256:eaa3e8e820eb8ee42165d8d674562cc887cdf8b0e9ba7d26d24e05bde231ce07",
"platform": {
"architecture": "unknown",
"os": "unknown"
}
}
]
}
|
- Confirms the container image supports both the amd64 and arm64 architectures (a platform-specific pull sketch follows)
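Normally the runtime resolves the matching architecture automatically through the manifest list; to force a specific variant for testing, a pull along these lines should work (illustrative, not part of the original run):
| # Explicitly pull the arm64 variant, regardless of the host architecture
docker pull --platform linux/arm64 $DOCKERNAME/myweb:multi
|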
13. Delete the container
| docker rm -f timeserver
|
🟢 Using a private AWS ECR repository
1. Log in to ECR and set up authentication
(1) Store the AWS account ID in an environment variable
| (eks-user:default) [root@operator-host myweb]# export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
|
(2) Configure Docker authentication for ECR
| (eks-user:default) [root@operator-host myweb]# aws ecr get-login-password \
> --region ap-northeast-2 | docker login \
> --username AWS \
> --password-stdin ${ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com
|
✅ Output
| WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
|
2. Check the stored ECR credentials
| (eks-user:default) [root@operator-host myweb]# cat /root/.docker/config.json | jq
|
✅ Output
| {
"auths": {
"378102432899.dkr.ecr.ap-northeast-2.amazonaws.com": {
"auth": "xxxxxxxxxxxxxxxxxx"
},
"https://index.docker.io/v1/": {
"auth": "xxxxxxxxxxxxxxxxxx"
}
}
}
|
- Check that the auths entry contains the ECR registry URL (token-refresh sketch below)
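Note that the ECR authorization token is only valid for a limited time (about 12 hours), so the login has to be refreshed periodically; a small helper sketch (the function name ecr_relogin is hypothetical):
| # Hypothetical helper: refresh the Docker login for ECR when the token expires
ecr_relogin() {
  aws ecr get-login-password --region ap-northeast-2 |
    docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com
}
|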
3. Create a private ECR repository
- Creates a new container image repository in AWS ECR
| (eks-user:default) [root@operator-host myweb]# aws ecr create-repository --repository-name myweb
|
✅ Output
| {
"repository": {
"repositoryArn": "arn:aws:ecr:ap-northeast-2:378102432899:repository/myweb",
"registryId": "378102432899",
"repositoryName": "myweb",
"repositoryUri": "378102432899.dkr.ecr.ap-northeast-2.amazonaws.com/myweb",
"createdAt": "2025-02-19T01:17:53.137000+09:00",
"imageTagMutability": "MUTABLE",
"imageScanningConfiguration": {
"scanOnPush": false
},
"encryptionConfiguration": {
"encryptionType": "AES256"
}
}
}
|
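The output shows "scanOnPush": false; if automatic vulnerability scanning is wanted, it can be enabled afterwards with the standard ECR call:
| # Enable image scanning on every push to the myweb repository
aws ecr put-image-scanning-configuration \
  --repository-name myweb \
  --image-scanning-configuration scanOnPush=true
|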
4. Build a multi-architecture container image and push it to ECR
- Builds a container image supporting the linux/amd64 and linux/arm64 architectures
- Pushes the built image directly to AWS ECR
| (eks-user:default) [root@operator-host myweb]# docker buildx build --platform linux/amd64,linux/arm64 --push --tag ${ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com/myweb:multi .
|
✅ Output
| [+] Building 13.3s (14/14) FINISHED docker-container:mybuilder
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 125B 0.0s
=> [linux/arm64 internal] load metadata for docker.io/library/python:3.12 1.4s
=> [linux/amd64 internal] load metadata for docker.io/library/python:3.12 1.4s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/amd64 1/3] FROM docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 0.0s
=> => resolve docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 0.0s
=> [linux/arm64 1/3] FROM docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 0.0s
=> => resolve docker.io/library/python:3.12@sha256:f61c61fb2a8967599fb0874746c93530c3d2a4583478528eda06584abc736ea0 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 60B 0.0s
=> CACHED [linux/arm64 2/3] COPY . /app 0.0s
=> CACHED [linux/arm64 3/3] WORKDIR /app 0.0s
=> CACHED [linux/amd64 2/3] COPY . /app 0.0s
=> CACHED [linux/amd64 3/3] WORKDIR /app 0.0s
=> exporting to image 11.8s
=> => exporting layers 0.0s
=> => exporting manifest sha256:b73a3a9ae85ac83166587bc398a9cda723b1296cd200e7bf61a2f8fdc03f22d6 0.0s
=> => exporting config sha256:2e766855d2c0dc9c8a005866ec139d251b4f3cabd38586fff3fa9df874b1b14f 0.0s
=> => exporting attestation manifest sha256:75d8f32e66dee7b5268c1882a93f0107bf972a16c7972903f0fefbde843b0e08 0.0s
=> => exporting manifest sha256:5d18951aa10d7db756c39875c1360499c3a6643a097f9ea81ed8fdc7479c9b49 0.0s
=> => exporting config sha256:c319f11eb4c8a52e492b75bd068715a6703f6d0afcf6d6240dbed9c0ad1011e8 0.0s
=> => exporting attestation manifest sha256:388d86fbeda42c224fd181d385228fa7d7ff1d0ff2c3379c5b7f6f21e4e2f8f2 0.0s
=> => exporting manifest list sha256:2bfb7328872d6ac549cfdc136c70bca7f133b7d2da51da1a3f456ff256d774ba 0.0s
=> => pushing layers 10.4s
=> => pushing manifest for 378102432899.dkr.ecr.ap-northeast-2.amazonaws.com/myweb:multi@sha256:2bfb7328872d6ac549cfdc136c70bca7f133b7d2da 1.4s
=> [auth] sharing credentials for 378102432899.dkr.ecr.ap-northeast-2.amazonaws.com 0.0s
2 warnings found (use --debug to expand):
- LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 2)
- JSONArgsRecommended: JSON arguments recommended for CMD to prevent unintended behavior related to OS signals (line 5)
|
- The multi-architecture container image has been pushed to ECR (verification sketch below)
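The stored tags can also be confirmed from the CLI (the --query string is illustrative):
| # List the image tags now present in the myweb repository
aws ecr describe-images --repository-name myweb \
  --query 'imageDetails[].imageTags' --output json
|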
5. Run the container from AWS ECR
- Pulls the container image from ECR and runs it
- The same image can be used on a Windows PC (amd64) and on macOS (arm64)
| (eks-user:default) [root@operator-host myweb]# docker run -d -p 8080:80 --name=timeserver ${ACCOUNT_ID}.dkr.ecr.ap-northeast-2.amazonaws.com/myweb:multi
|
✅ Output
| Unable to find image '378102432899.dkr.ecr.ap-northeast-2.amazonaws.com/myweb:multi' locally
multi: Pulling from myweb
a492eee5e559: Already exists
32b550be6cb6: Already exists
35af2a7690f2: Already exists
7576b00d9bb1: Already exists
07612085660d: Already exists
60fd44efca0f: Already exists
4a12975a6131: Already exists
0a18c01708c3: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:2bfb7328872d6ac549cfdc136c70bca7f133b7d2da51da1a3f456ff256d774ba
Status: Downloaded newer image for 378102432899.dkr.ecr.ap-northeast-2.amazonaws.com/myweb:multi
c981958a36336d01c5ee98d23a9953b03261bbeceeab19e9c4b910598a9bad2e
|
- Check the container's running state
| (eks-user:default) [root@operator-host myweb]# curl http://localhost:8080
|
✅ Output
| The time is 4:21:54 PM, VERSION 0.0.1
Server hostname: c981958a3633
|
6. Clean up the ECR container and repository
(1) Delete the ECR repository (command sketched below)
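The deletion command itself is not shown in the original notes; a minimal sketch (--force also removes any images still stored in the repository):
| # Delete the myweb repository and any images it still contains
aws ecr delete-repository --repository-name myweb --force
|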
(2) Delete the container
| (eks-user:default) [root@operator-host myweb]# docker rm -f timeserver
# Result
timeserver
|
⚡ AWS Graviton (ARM) nodegroup and deployment hands-on
1. Check the architecture of the existing nodes
| kubectl get nodes -L kubernetes.io/arch
|
✅ Output
| NAME STATUS ROLES AGE VERSION ARCH
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 13h v1.31.5-eks-5d632ec amd64
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 13h v1.31.5-eks-5d632ec amd64
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 13h v1.31.5-eks-5d632ec amd64
|
- All nodes in the current EKS cluster are amd64
2. Create a Graviton (ARM) nodegroup
(1) Generate the YAML file (myng3.yaml)
| eksctl create nodegroup -c $CLUSTER_NAME -r ap-northeast-2 --subnet-ids "$PubSubnet1","$PubSubnet2","$PubSubnet3" \
-n ng3 -t t4g.medium -N 1 -m 1 -M 1 --node-volume-size=30 --node-labels family=graviton --dry-run > myng3.yaml
|
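Since --dry-run only writes the spec to myng3.yaml without creating anything, the file can be reviewed first; a quick spot check might look like this (field names follow the eksctl config schema):
| # Skim the key fields of the generated nodegroup spec
grep -nE 'instanceType|volumeSize|family' myng3.yaml
|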
(2) Create the nodegroup
| eksctl create nodegroup -f myng3.yaml
# Result
2025-02-19 01:32:09 [ℹ] nodegroup "ng3" will use "" [AmazonLinux2/1.31]
2025-02-19 01:32:10 [ℹ] 1 existing nodegroup(s) (ng1) will be excluded
2025-02-19 01:32:10 [ℹ] 1 nodegroup (ng3) was included (based on the include/exclude rules)
2025-02-19 01:32:10 [ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "myeks"
2025-02-19 01:32:11 [ℹ]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "ng3" } }
}
2025-02-19 01:32:11 [ℹ] checking cluster stack for missing resources
2025-02-19 01:32:11 [ℹ] cluster stack has all required resources
2025-02-19 01:32:12 [ℹ] building managed nodegroup stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:32:12 [ℹ] deploying stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:32:12 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:32:42 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:33:17 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:33:57 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:34:38 [ℹ] waiting for CloudFormation stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 01:34:38 [ℹ] no tasks
2025-02-19 01:34:38 [✔] created 0 nodegroup(s) in cluster "myeks"
2025-02-19 01:34:38 [ℹ] nodegroup "ng3" has 1 node(s)
2025-02-19 01:34:38 [ℹ] node "ip-192-168-3-99.ap-northeast-2.compute.internal" is ready
2025-02-19 01:34:38 [ℹ] waiting for at least 1 node(s) to become ready in "ng3"
2025-02-19 01:34:38 [ℹ] nodegroup "ng3" has 1 node(s)
2025-02-19 01:34:38 [ℹ] node "ip-192-168-3-99.ap-northeast-2.compute.internal" is ready
2025-02-19 01:34:38 [✔] created 1 managed nodegroup(s) in cluster "myeks"
2025-02-19 01:34:38 [ℹ] checking security group configuration for all nodegroups
2025-02-19 01:34:38 [ℹ] all nodegroups have up-to-date cloudformation templates
|
3. Check the deployed nodes
| kubectl get nodes --label-columns eks.amazonaws.com/nodegroup,kubernetes.io/arch,eks.amazonaws.com/capacityType
|
✅ Output
| NAME STATUS ROLES AGE VERSION NODEGROUP ARCH CAPACITYTYPE
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 13h v1.31.5-eks-5d632ec ng1 amd64 ON_DEMAND
ip-192-168-1-92.ap-northeast-2.compute.internal Ready <none> 7m41s v1.31.5-eks-5d632ec managed-spot amd64 SPOT
ip-192-168-2-145.ap-northeast-2.compute.internal Ready <none> 99s v1.31.4-eks-0f56d01 ng-bottlerocket amd64 ON_DEMAND
ip-192-168-2-164.ap-northeast-2.compute.internal Ready <none> 7m40s v1.31.5-eks-5d632ec managed-spot amd64 SPOT
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 13h v1.31.5-eks-5d632ec ng1 amd64 ON_DEMAND
ip-192-168-3-73.ap-northeast-2.compute.internal Ready <none> 73s v1.31.4-eks-0f56d01 ng-bottlerocket-ssh amd64 ON_DEMAND
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 13h v1.31.5-eks-5d632ec ng1 amd64 ON_DEMAND
ip-192-168-3-99.ap-northeast-2.compute.internal Ready <none> 12m v1.31.5-eks-5d632ec ng3 arm64 ON_DEMAND
|
- The ng3 node is deployed as arm64
4. Check the current Taints configuration
| aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name ng3 | jq .nodegroup.taints
|
✅ Output
5. Apply Taints to the Graviton node
- Applies the frontend=true:NoExecute taint to the ng3 nodegroup, so only Pods that tolerate it can be scheduled (or keep running) there
| aws eks update-nodegroup-config --cluster-name $CLUSTER_NAME --nodegroup-name ng3 --taints "addOrUpdateTaints=[{key=frontend, value=true, effect=NO_EXECUTE}]"
|
✅ Output
| {
"update": {
"id": "eabe8298-9d6b-3ef6-965f-00120771faa6",
"status": "InProgress",
"type": "ConfigUpdate",
"params": [
{
"type": "TaintsToAdd",
"value": "[{\"effect\":\"NO_EXECUTE\",\"value\":\"true\",\"key\":\"frontend\"}]"
},
{
"type": "TaintsToRemove",
"value": "[]"
}
],
"createdAt": "2025-02-19T01:52:40.946000+09:00",
"errors": []
}
}
|
| kubectl describe nodes --selector family=graviton | grep Taints
|
✅ Output
| Taints: frontend=true:NoExecute
|
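If the taint ever needs to be removed from the managed nodegroup, the same EKS API accepts a removeTaints list, mirroring the addOrUpdateTaints syntax used above:
| # Remove the frontend taint from the ng3 managed nodegroup
aws eks update-nodegroup-config --cluster-name $CLUSTER_NAME --nodegroup-name ng3 \
  --taints "removeTaints=[{key=frontend, value=true, effect=NO_EXECUTE}]"
|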
6. Deploy a Pod with tolerations to the Graviton node
- Sets tolerations so the Pod can be scheduled even though the taint is present
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
terminationGracePeriodSeconds: 3
containers:
- name: busybox
image: busybox
command:
- "/bin/sh"
- "-c"
- "while true; do date >> /home/pod-out.txt; cd /home; sync; sync; sleep 10; done"
tolerations:
- effect: NoExecute
key: frontend
operator: Exists
nodeSelector:
family: graviton
EOF
# Result
pod/busybox created
|
7. Check the Pod information
(1) Check the Pod details
| kubectl describe pod busybox
|
✅ Output
| Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: ip-192-168-3-99.ap-northeast-2.compute.internal/192.168.3.99
Start Time: Wed, 19 Feb 2025 01:59:49 +0900
Labels: <none>
Annotations: <none>
Status: Running
IP: 192.168.3.40
IPs:
IP: 192.168.3.40
Containers:
busybox:
Container ID: containerd://620e686393b9dee8876708ba15aac6f8523fd6fa9dd0507183dd230b9277bca9
Image: busybox
Image ID: docker.io/library/busybox@sha256:a5d0ce49aa801d475da48f8cb163c354ab95cab073cd3c138bd458fc8257fbf1
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
while true; do date >> /home/pod-out.txt; cd /home; sync; sync; sleep 10; done
State: Running
Started: Wed, 19 Feb 2025 01:59:54 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5sq9z (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5sq9z:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: family=graviton
Tolerations: frontend:NoExecute op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m27s default-scheduler Successfully assigned default/busybox to ip-192-168-3-99.ap-northeast-2.compute.internal
Normal Pulling 2m26s kubelet Pulling image "busybox"
Normal Pulled 2m22s kubelet Successfully pulled image "busybox" in 4.274s (4.274s including waiting). Image size: 1855561 bytes.
Normal Created 2m22s kubelet Created container busybox
Normal Started 2m22s kubelet Started container busybox
|
(2) Check the architecture from inside the Pod
| kubectl exec -it busybox -- arch
# Result
aarch64
|
(3) Check the log file from inside the Pod
| kubectl exec -it busybox -- tail -f /home/pod-out.txt
|
✅ Output
| Tue Feb 18 17:01:24 UTC 2025
Tue Feb 18 17:01:34 UTC 2025
Tue Feb 18 17:01:44 UTC 2025
Tue Feb 18 17:01:54 UTC 2025
Tue Feb 18 17:02:04 UTC 2025
Tue Feb 18 17:02:14 UTC 2025
Tue Feb 18 17:02:24 UTC 2025
Tue Feb 18 17:02:34 UTC 2025
Tue Feb 18 17:02:44 UTC 2025
...
|
- The busybox container is running normally
8. Delete the Pod
| kubectl delete pod busybox
# Result
pod "busybox" deleted
|
9. Deploy the multi-architecture container (x86_64 & ARM64)
- Deploys the myweb container built on the operator EC2 as Pods on both architectures
- Runs one Pod on an x86_64 (AMD) node and one on an ARM64 (Graviton) node
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: myweb-arm
spec:
terminationGracePeriodSeconds: 3
containers:
- name: myweb
image: shinminjin/myweb:multi
tolerations:
- effect: NoExecute
key: frontend
operator: Exists
nodeSelector:
family: graviton
---
apiVersion: v1
kind: Pod
metadata:
name: myweb-amd
spec:
terminationGracePeriodSeconds: 3
containers:
- name: myweb
image: shinminjin/myweb:multi
EOF
# Result
pod/myweb-arm created
pod/myweb-amd created
|
10. Check the deployed Pods and their nodes (command sketched below)
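The command that produced the output below is missing from the notes; judging by the columns, it was presumably:
| # Show each Pod together with the node it was scheduled on
kubectl get pod -o wide
|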
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myweb-amd 1/1 Running 0 77s 192.168.2.219 ip-192-168-2-145.ap-northeast-2.compute.internal <none> <none>
myweb-arm 1/1 Running 0 77s 192.168.3.40 ip-192-168-3-99.ap-northeast-2.compute.internal <none> <none>
|
11. Check each container's architecture
| kubectl exec -it myweb-arm -- arch
aarch64
kubectl exec -it myweb-amd -- arch
x86_64
|
12. Verify the web service works correctly
| kubectl exec -it myweb-arm -- curl localhost
# Result
The time is 5:09:21 PM, VERSION 0.0.1
Server hostname: myweb-arm
kubectl exec -it myweb-amd -- curl localhost
# Result
The time is 5:09:30 PM, VERSION 0.0.1
Server hostname: myweb-amd
|
13. Delete the Pods and the ng3 nodegroup
| kubectl delete pod myweb-arm myweb-amd
# Result
pod "myweb-arm" deleted
pod "myweb-amd" deleted
|
| eksctl delete nodegroup -c $CLUSTER_NAME -n ng3
# Result
2025-02-19 02:12:21 [ℹ] 1 nodegroup (ng3) was included (based on the include/exclude rules)
2025-02-19 02:12:21 [ℹ] will drain 1 nodegroup(s) in cluster "myeks"
2025-02-19 02:12:21 [ℹ] starting parallel draining, max in-flight of 1
2025-02-19 02:12:21 [ℹ] cordon node "ip-192-168-3-99.ap-northeast-2.compute.internal"
2025-02-19 02:12:22 [✔] drained all nodes: [ip-192-168-3-99.ap-northeast-2.compute.internal]
2025-02-19 02:12:22 [ℹ] will delete 1 nodegroups from cluster "myeks"
2025-02-19 02:12:22 [ℹ] 1 task: { 1 task: { delete nodegroup "ng3" [async] } }
2025-02-19 02:12:22 [ℹ] will delete stack "eksctl-myeks-nodegroup-ng3"
2025-02-19 02:12:22 [✔] deleted 1 nodegroup(s) from cluster "myeks"
|
💰 Spot nodegroup and deployment hands-on
1. Look up the node role
| NODEROLEARN=$(aws iam list-roles --query "Roles[?contains(RoleName, 'nodegroup-ng1')].Arn" --output text)
echo $NODEROLEARN
|
✅ Output
| arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-rGyQG9rZlOwl
|
2. Create a spot nodegroup
| aws eks create-nodegroup \
--cluster-name $CLUSTER_NAME \
--nodegroup-name managed-spot \
--subnets $PubSubnet1 $PubSubnet2 $PubSubnet3 \
--node-role $NODEROLEARN \
--instance-types c5.large c5d.large c5a.large \
--capacity-type SPOT \
--scaling-config minSize=2,maxSize=3,desiredSize=2 \
--disk-size 20
|
✅ Output
| {
"nodegroup": {
"nodegroupName": "managed-spot",
"nodegroupArn": "arn:aws:eks:ap-northeast-2:378102432899:nodegroup/myeks/managed-spot/56ca8cf5-e169-efa6-7b36-7922fce5434f",
"clusterName": "myeks",
"version": "1.31",
"releaseVersion": "1.31.5-20250212",
"createdAt": "2025-02-19T01:37:18.476000+09:00",
"modifiedAt": "2025-02-19T01:37:18.476000+09:00",
"status": "CREATING",
"capacityType": "SPOT",
"scalingConfig": {
"minSize": 2,
"maxSize": 3,
"desiredSize": 2
},
"instanceTypes": [
"c5.large",
"c5d.large",
"c5a.large"
],
"subnets": [
"subnet-0fed28a1b3e108719",
"subnet-0e4fb63cb543698fe",
"subnet-0861bd68771150000"
],
"amiType": "AL2023_x86_64_STANDARD",
"nodeRole": "arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-rGyQG9rZlOwl",
"diskSize": 20,
"health": {
"issues": []
},
"updateConfig": {
"maxUnavailable": 1
},
"tags": {}
}
}
|
- The managed-spot nodegroup is created with spot instances (a wait sketch follows)
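aws eks create-nodegroup returns while the nodegroup is still CREATING; to block until it is usable, the standard waiter can be used:
| # Wait until the managed-spot nodegroup reaches ACTIVE status
aws eks wait nodegroup-active --cluster-name $CLUSTER_NAME --nodegroup-name managed-spot
|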
3. Choose a suitable spot instance type
- Install ec2-instance-selector, which picks instance types matching the required CPU, memory, and architecture
| curl -Lo ec2-instance-selector https://github.com/aws/amazon-ec2-instance-selector/releases/download/v2.4.1/ec2-instance-selector-`uname | tr '[:upper:]' '[:lower:]'`-amd64 && chmod +x ec2-instance-selector
sudo mv ec2-instance-selector /usr/local/bin/
ec2-instance-selector --version
|
✅ Output
- ec2-instance-selector installed successfully
4. Query instance recommendations (2 vCPUs, 4 GB RAM, no GPU, x86_64 architecture)
| ec2-instance-selector --vcpus 2 --memory 4 --gpus 0 --current-generation -a x86_64 --deny-list 't.*' --output table-wide
|
✅ Output
| NOTE: Could not retrieve 30 day avg hourly spot price for instance type p2.16xlarge
Instance Type VCPUs Mem (GiB) Hypervisor Current Gen Hibernation Support CPU Arch Network Performance ENIs GPUs GPU Mem (GiB) GPU Info On-Demand Price/Hr Spot Price/Hr (30d avg)
------------- ----- --------- ---------- ----------- ------------------- -------- ------------------- ---- ---- ------------- -------- ------------------ -----------------------
c5.large 2 4 nitro true true x86_64 Up to 10 Gigabit 3 0 0 none $0.096 $0.02993
c5a.large 2 4 nitro true false x86_64 Up to 10 Gigabit 3 0 0 none $0.086 $0.0397
c5d.large 2 4 nitro true true x86_64 Up to 10 Gigabit 3 0 0 none $0.11 $0.03067
c6i.large 2 4 nitro true true x86_64 Up to 12.5 Gigabit 3 0 0 none $0.096 $0.03346
c6id.large 2 4 nitro true true x86_64 Up to 12.5 Gigabit 3 0 0 none $0.1155 $0.03071
c6in.large 2 4 nitro true true x86_64 Up to 25 Gigabit 3 0 0 none $0.1281 $0.05098
c7i-flex.large 2 4 nitro true true x86_64 Up to 12.5 Gigabit 3 0 0 none $0.09576 $0.02884
c7i.large 2 4 nitro true true x86_64 Up to 12.5 Gigabit 3 0 0 none $0.1008 $0.02964
|
- The c5.large family of instances offers inexpensive spot pricing (a recent-price sketch follows)
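The 30-day averages above come from ec2-instance-selector; recent spot quotes can also be pulled directly (the --query string is illustrative):
| # Show a few recent spot price points for c5.large Linux instances
aws ec2 describe-spot-price-history --instance-types c5.large \
  --product-descriptions "Linux/UNIX" --max-items 5 \
  --query 'SpotPriceHistory[].[AvailabilityZone,SpotPrice,Timestamp]' --output table
|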
5. Check spot and on-demand nodes
(1) List only on-demand nodes
| kubectl get nodes -l eks.amazonaws.com/capacityType=ON_DEMAND
|
✅ Output
| NAME STATUS ROLES AGE VERSION
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec
ip-192-168-2-145.ap-northeast-2.compute.internal Ready <none> 35m v1.31.4-eks-0f56d01
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec
ip-192-168-3-73.ap-northeast-2.compute.internal Ready <none> 34m v1.31.4-eks-0f56d01
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec
|
(2) List all nodes (on-demand vs spot)
| kubectl get nodes -L eks.amazonaws.com/capacityType
|
✅ Output
| NAME STATUS ROLES AGE VERSION CAPACITYTYPE
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec ON_DEMAND
ip-192-168-1-92.ap-northeast-2.compute.internal Ready <none> 41m v1.31.5-eks-5d632ec SPOT
ip-192-168-2-145.ap-northeast-2.compute.internal Ready <none> 35m v1.31.4-eks-0f56d01 ON_DEMAND
ip-192-168-2-164.ap-northeast-2.compute.internal Ready <none> 41m v1.31.5-eks-5d632ec SPOT
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec ON_DEMAND
ip-192-168-3-73.ap-northeast-2.compute.internal Ready <none> 35m v1.31.4-eks-0f56d01 ON_DEMAND
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec ON_DEMAND
|
6. Deploy a Pod to the spot nodes
- Deploys a busybox Pod that runs only on spot nodes
| cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
terminationGracePeriodSeconds: 3
containers:
- name: busybox
image: busybox
command:
- "/bin/sh"
- "-c"
- "while true; do date >> /home/pod-out.txt; cd /home; sync; sync; sleep 10; done"
nodeSelector:
eks.amazonaws.com/capacityType: SPOT
EOF
# Result
pod/busybox created
|
7. Check which node the Pod was deployed to (command sketched below)
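As in the earlier step, the command is not shown in the notes; presumably:
| # Confirm which node the busybox Pod was scheduled on
kubectl get pod busybox -o wide
|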
✅ Output
| NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 50s 192.168.2.75 ip-192-168-2-164.ap-northeast-2.compute.internal <none> <none>
|
| k get nodes -L eks.amazonaws.com/capacityType
|
✅ Output
| NAME STATUS ROLES AGE VERSION CAPACITYTYPE
ip-192-168-1-207.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec ON_DEMAND
ip-192-168-1-92.ap-northeast-2.compute.internal Ready <none> 47m v1.31.5-eks-5d632ec SPOT
ip-192-168-2-145.ap-northeast-2.compute.internal Ready <none> 41m v1.31.4-eks-0f56d01 ON_DEMAND
ip-192-168-2-164.ap-northeast-2.compute.internal Ready <none> 47m v1.31.5-eks-5d632ec SPOT
ip-192-168-2-84.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec ON_DEMAND
ip-192-168-3-73.ap-northeast-2.compute.internal Ready <none> 41m v1.31.4-eks-0f56d01 ON_DEMAND
ip-192-168-3-80.ap-northeast-2.compute.internal Ready <none> 14h v1.31.5-eks-5d632ec ON_DEMAND
|
- The busybox Pod runs on a spot instance
8. Delete the spot instance resources
(1) Delete the Pod
| kubectl delete pod busybox
# Result
pod "busybox" deleted
|
(2) Delete the managed-spot nodegroup
| eksctl delete nodegroup -c $CLUSTER_NAME -n managed-spot
# Result
2025-02-19 02:30:04 [ℹ] 1 nodegroup (managed-spot) was included (based on the include/exclude rules)
2025-02-19 02:30:04 [ℹ] will drain 1 nodegroup(s) in cluster "myeks"
2025-02-19 02:30:04 [ℹ] starting parallel draining, max in-flight of 1
2025-02-19 02:30:04 [ℹ] cordon node "ip-192-168-1-92.ap-northeast-2.compute.internal"
2025-02-19 02:30:04 [ℹ] cordon node "ip-192-168-2-164.ap-northeast-2.compute.internal"
2025-02-19 02:30:05 [✔] drained all nodes: [ip-192-168-1-92.ap-northeast-2.compute.internal ip-192-168-2-164.ap-northeast-2.compute.internal]
2025-02-19 02:30:05 [ℹ] will delete 1 nodegroups from cluster "myeks"
2025-02-19 02:30:05 [ℹ]
2 parallel tasks: { delete unowned nodegroup managed-spot, no tasks
}
2025-02-19 02:30:05 [✔] deleted 1 nodegroup(s) from cluster "myeks"
|
🗑️ (After completing the hands-on) Delete the resources
(1) Delete the Amazon EKS cluster
| eksctl delete cluster --name $CLUSTER_NAME
|
(2) Delete the AWS CloudFormation stack
| aws cloudformation delete-stack --stack-name myeks
|
(3) Remove the variable settings
- Remove the variables that were added for convenience after deploying EKS (a sketch follows)
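A minimal sketch, assuming the variables exported earlier in this post:
| # Clear the helper variables from the current shell session
unset CLUSTER_NAME VPCID PubSubnet1 PubSubnet2 PubSubnet3 SSHKEYNAME DOCKERNAME ACCOUNT_ID NODEROLEARN
|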