AEWS 6์ฃผ์ฐจ ์ ๋ฆฌ
๐ ์ค์ต ํ๊ฒฝ ๋ฐฐํฌ
1. Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-6week.yaml
# ๊ฒฐ๊ณผ
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 26663 100 26663 0 0 297k 0 --:--:-- --:--:-- --:--:-- 299k
2. ๋ณ์ ์ง์
1
2
3
4
CLUSTER_NAME=myeks
SSHKEYNAME=kp-aews # SSH key pair name
MYACCESSKEY=XXXXXXXXXXXXXXXXXX # IAM user access key
MYSECRETKEY=XXXXXXXXXXXXXXXXXX # IAM user secret key
3. Deploy the CloudFormation stack
aws cloudformation deploy --template-file myeks-6week.yaml --stack-name $CLUSTER_NAME --parameter-overrides KeyName=$SSHKEYNAME SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 MyIamUserAccessKeyID=$MYACCESSKEY MyIamUserSecretAccessKey=$MYSECRETKEY ClusterBaseName=$CLUSTER_NAME --region ap-northeast-2
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - myeks
4. Print the operator EC2 public IP after the stack deployment completes
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text
โ ย ์ถ๋ ฅ
1
13.125.235.88
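Indexing `Outputs[0]` depends on the ordering of the stack's outputs; selecting by key is more robust. A minimal sketch with `jq` against a sample response — the `OperatorHostIp` key name here is an assumption, check your template's Outputs section for the real key:

```shell
# Hypothetical describe-stacks response; only the fields we need are shown.
cat > /tmp/stacks.json <<'EOF'
{"Stacks":[{"Outputs":[{"OutputKey":"OperatorHostIp","OutputValue":"13.125.235.88"}]}]}
EOF

# Pick the output by key instead of by positional index
jq -r '.Stacks[0].Outputs[] | select(.OutputKey=="OperatorHostIp") | .OutputValue' /tmp/stacks.json
```

The same selection is possible with `--query 'Stacks[0].Outputs[?OutputKey==`OperatorHostIp`].OutputValue'` in JMESPath, if you prefer to stay inside the AWS CLI.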
โณ AWS EKS ์ค์น ํ์ธ (์คํ ์์ฑ ์์ ํ ์ฝ 20๋ถ ๊ฒฝ๊ณผ)
1. List the cluster with eksctl
eksctl get cluster
โ ย ์ถ๋ ฅ
1
2
NAME REGION EKSCTL CREATED
myeks ap-northeast-2 True
2. Generate kubeconfig
(1) ์๊ฒฉ์ฆ๋ช ์ฌ์ฉ์ ํ์ธ
1
aws sts get-caller-identity --query Arn
โ ย ์ถ๋ ฅ
1
"arn:aws:iam::378102432899:user/eks-user"
(2) Run the kubeconfig update command
aws eks update-kubeconfig --name myeks --user-alias eks-user
# ๊ฒฐ๊ณผ
Updated context eks-user in /home/devshin/.kube/config
3. Check the Kubernetes cluster and resource status
(1) ์ธ์คํด์ค ์ ํ, ์ฉ๋ ์ ํ, ๊ฐ์ฉ ์์ญ ๋ผ๋ฒจ ์ ๋ณด ์์ธ ์กฐํ
1
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
โ ย ์ถ๋ ฅ
1
2
3
4
NAME STATUS ROLES AGE VERSION INSTANCE-TYPE CAPACITYTYPE ZONE
ip-192-168-1-170.ap-northeast-2.compute.internal Ready <none> 26m v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2a
ip-192-168-2-112.ap-northeast-2.compute.internal Ready <none> 26m v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2b
ip-192-168-3-100.ap-northeast-2.compute.internal Ready <none> 26m v1.31.5-eks-5d632ec t3.medium ON_DEMAND ap-northeast-2c
(2) ํ๋ ์ ๋ณด ์กฐํ
1
kubectl get pod -A
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-7s9dd 2/2 Running 0 26m
kube-system aws-node-d6v7m 2/2 Running 0 26m
kube-system aws-node-wnd97 2/2 Running 0 26m
kube-system coredns-86f5954566-cc4m4 1/1 Running 0 32m
kube-system coredns-86f5954566-t7gfd 1/1 Running 0 32m
kube-system ebs-csi-controller-9c9c4d49f-5ns8n 6/6 Running 0 23m
kube-system ebs-csi-controller-9c9c4d49f-j6drv 6/6 Running 0 23m
kube-system ebs-csi-node-k5spq 3/3 Running 0 23m
kube-system ebs-csi-node-p4tcb 3/3 Running 0 23m
kube-system ebs-csi-node-rrkrv 3/3 Running 0 23m
kube-system kube-proxy-ddmmr 1/1 Running 0 26m
kube-system kube-proxy-szvqr 1/1 Running 0 26m
kube-system kube-proxy-z9fjt 1/1 Running 0 26m
kube-system metrics-server-6bf5998d9c-5b5r5 1/1 Running 0 32m
kube-system metrics-server-6bf5998d9c-bscbc 1/1 Running 0 32m
(3) ํ๋ ์ค๋จ ํ์ฉ(PDB) ์กฐํ
1
kubectl get pdb -n kube-system
โ ย ์ถ๋ ฅ
1
2
3
4
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
coredns N/A 1 1 32m
ebs-csi-controller N/A 1 1 24m
metrics-server N/A 1 1 32m
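Each of these PDBs caps disruption at one pod at a time, which is what the `MAX UNAVAILABLE 1` column reflects. A minimal sketch of an equivalent PDB manifest — the selector label is an assumption (EKS CoreDNS pods carry `k8s-app: kube-dns` by default, but verify with `kubectl get pod -n kube-system --show-labels`):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: coredns
  namespace: kube-system
spec:
  maxUnavailable: 1       # matches the MAX UNAVAILABLE column above
  selector:
    matchLabels:
      k8s-app: kube-dns   # label assumed for the CoreDNS pods
```

During a voluntary disruption (e.g. `kubectl drain`), the eviction API consults this budget and refuses to evict a second CoreDNS pod while one is already down.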
๐ป ๋ ธ๋ IP ์ ๋ณด ํ์ธ ๋ฐ SSH ์ ์
1. Set EC2 public IP variables
export N1=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2a" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N2=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2b" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N3=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=myeks-ng1-Node" "Name=availability-zone,Values=ap-northeast-2c" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
echo $N1, $N2, $N3
โ ย ์ถ๋ ฅ
1
13.209.14.121, 54.180.129.8, 13.209.6.163
2. Query the EC2 security group (remoteAccess filter)
aws ec2 describe-security-groups --filters "Name=group-name,Values=*remoteAccess*" | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
{
  "SecurityGroups": [
    {
      "GroupId": "sg-07b56e2f4106e4e71",
      "IpPermissionsEgress": [
        {
          "IpProtocol": "-1",
          "UserIdGroupPairs": [],
          "IpRanges": [
            {
              "CidrIp": "0.0.0.0/0"
            }
          ],
          "Ipv6Ranges": [],
          "PrefixListIds": []
        }
      ],
      "Tags": [
        {
          "Key": "alpha.eksctl.io/eksctl-version",
          "Value": "0.205.0"
        },
        {
          "Key": "alpha.eksctl.io/nodegroup-name",
          "Value": "ng1"
        },
        {
          "Key": "aws:cloudformation:logical-id",
          "Value": "SSH"
        },
        {
          "Key": "aws:cloudformation:stack-id",
          "Value": "arn:aws:cloudformation:ap-northeast-2:378102432899:stack/eksctl-myeks-nodegroup-ng1/a8323340-00dd-11f0-9a27-0a42503a199b"
        },
        {
          "Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
          "Value": "myeks"
        },
        {
          "Key": "aws:cloudformation:stack-name",
          "Value": "eksctl-myeks-nodegroup-ng1"
        },
        {
          "Key": "alpha.eksctl.io/nodegroup-type",
          "Value": "managed"
        },
        {
          "Key": "Name",
          "Value": "eksctl-myeks-nodegroup-ng1/SSH"
        },
        {
          "Key": "alpha.eksctl.io/cluster-name",
          "Value": "myeks"
        }
      ],
      "VpcId": "vpc-050ad5b5af470a60a",
      "SecurityGroupArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group/sg-07b56e2f4106e4e71",
      "OwnerId": "378102432899",
      "GroupName": "eksctl-myeks-nodegroup-ng1-remoteAccess",
      "Description": "Allow SSH access",
      "IpPermissions": [
        {
          "IpProtocol": "tcp",
          "FromPort": 22,
          "ToPort": 22,
          "UserIdGroupPairs": [],
          "IpRanges": [
            {
              "Description": "Allow SSH access to managed worker nodes in group ng1",
              "CidrIp": "0.0.0.0/0"
            }
          ],
          "Ipv6Ranges": [
            {
              "Description": "Allow SSH access to managed worker nodes in group ng1",
              "CidrIpv6": "::/0"
            }
          ],
          "PrefixListIds": []
        }
      ]
    }
  ]
}
3. ๋ณด์ ๊ทธ๋ฃน ID ํ๊ฒฝ ๋ณ์ ์ค์
1
export MNSGID=$(aws ec2 describe-security-groups --filters "Name=group-name,Values=*remoteAccess*" --query 'SecurityGroups[*].GroupId' --output text)
4. ํด๋น ๋ณด์๊ทธ๋ฃน ์ธ๋ฐ์ด๋ ๊ท์น์ ๋ณธ์ธ์ ์ง ๊ณต์ธ IP ์ถ๊ฐ
1
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr $(curl -s ipinfo.io/ip)/32
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
{
  "Return": true,
  "SecurityGroupRules": [
    {
      "SecurityGroupRuleId": "sgr-06d4e35eb0cfa2714",
      "GroupId": "sg-07b56e2f4106e4e71",
      "GroupOwnerId": "378102432899",
      "IsEgress": false,
      "IpProtocol": "-1",
      "FromPort": -1,
      "ToPort": -1,
      "CidrIpv4": "182.230.60.93/32",
      "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-06d4e35eb0cfa2714"
    }
  ]
}
5. ํด๋น ๋ณด์ ๊ทธ๋ฃน์ ์ธ๋ฐ์ด๋ ๊ท์น์ ์ด์ ์๋ฒ ๋ด๋ถ IP ์ถ๊ฐ
1
2
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr 172.20.1.100/32
aws ec2 authorize-security-group-ingress --group-id $MNSGID --protocol '-1' --cidr 172.20.1.200/32
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
{
  "Return": true,
  "SecurityGroupRules": [
    {
      "SecurityGroupRuleId": "sgr-08eb6b67e3ffbf287",
      "GroupId": "sg-07b56e2f4106e4e71",
      "GroupOwnerId": "378102432899",
      "IsEgress": false,
      "IpProtocol": "-1",
      "FromPort": -1,
      "ToPort": -1,
      "CidrIpv4": "172.20.1.100/32",
      "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-08eb6b67e3ffbf287"
    }
  ]
}
{
  "Return": true,
  "SecurityGroupRules": [
    {
      "SecurityGroupRuleId": "sgr-039e4f1deccdc8530",
      "GroupId": "sg-07b56e2f4106e4e71",
      "GroupOwnerId": "378102432899",
      "IsEgress": false,
      "IpProtocol": "-1",
      "FromPort": -1,
      "ToPort": -1,
      "CidrIpv4": "172.20.1.200/32",
      "SecurityGroupRuleArn": "arn:aws:ec2:ap-northeast-2:378102432899:security-group-rule/sgr-039e4f1deccdc8530"
    }
  ]
}
6. ์์ปค ๋ ธ๋ SSH ์ ์
1
for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh -o StrictHostKeyChecking=no ec2-user@$i hostname; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
>> node 13.209.14.121 <<
Warning: Permanently added '13.209.14.121' (ED25519) to the list of known hosts.
ip-192-168-1-170.ap-northeast-2.compute.internal
>> node 54.180.129.8 <<
Warning: Permanently added '54.180.129.8' (ED25519) to the list of known hosts.
ip-192-168-2-112.ap-northeast-2.compute.internal
>> node 13.209.6.163 <<
Warning: Permanently added '13.209.6.163' (ED25519) to the list of known hosts.
ip-192-168-3-100.ap-northeast-2.compute.internal
7. ์ด์ ์๋ฒ EC2 SSH ์๊ฒฉ ์ ์ ํ ๊ธฐ๋ณธ ์ ๋ณด ํ์ธ
(1) ์ด์์๋ฒ SSH ์ ์
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
ssh -i kp-aews.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
The authenticity of host '13.125.235.88 (13.125.235.88)' can't be established.
ED25519 key fingerprint is SHA256:WMoAIr6IspcFWn5x1mkmX5GFOAUW8cpRQ3H3Jd9kYTU.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '13.125.235.88' (ED25519) to the list of known hosts.
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2026-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
(eks-user@myeks:N/A) [root@operator-host ~]#
(2) Switch to the default namespace
(eks-user@myeks:N/A) [root@operator-host ~]# kubectl ns default
# ๊ฒฐ๊ณผ
Context "eks-user@myeks.ap-northeast-2.eksctl.io" modified.
Active namespace is "default".
(eks-user@myeks:default) [root@operator-host ~]#
(3) ํ๊ฒฝ๋ณ์ ์ ๋ณด ํ์ธ
1
(eks-user@myeks:default) [root@operator-host ~]# export | egrep 'ACCOUNT|AWS_|CLUSTER|KUBERNETES|VPC|Subnet' | egrep -v 'KEY'
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
declare -x ACCOUNT_ID="xxxxxxxxxxxx"
declare -x AWS_DEFAULT_REGION="ap-northeast-2"
declare -x AWS_PAGER=""
declare -x CLUSTER_NAME="myeks"
declare -x KUBERNETES_VERSION="1.31"
declare -x PubSubnet1="subnet-018486d88b18ed068"
declare -x PubSubnet2="subnet-0a6e9d1ac60fb9ce8"
declare -x PubSubnet3="subnet-08b145655560d7abc"
declare -x VPCID="vpc-050ad5b5af470a60a"
(4) Check krew plugins
(eks-user@myeks:default) [root@operator-host ~]# kubectl krew list
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
PLUGIN VERSION
access-matrix v0.5.0
ctx v0.9.5
df-pv v0.3.0
get-all v1.3.8
krew v0.4.4
neat v2.0.4
ns v0.9.5
oomd v0.0.7
rbac-tool v1.20.0
rbac-view v0.2.1
rolesum v1.5.5
stern v1.32.0
view-secret v0.13.0
whoami v0.0.46
8. ์ธ์คํด์ค ์ ๋ณด ํ์ธ
1
(eks-user@myeks:default) [root@operator-host ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{InstanceID:InstanceId, PublicIPAdd:PublicIpAddress, PrivateIPAdd:PrivateIpAddress, InstanceName:Tags[?Key=='Name']|[0].Value, Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
-----------------------------------------------------------------------------------------
| DescribeInstances |
+----------------------+------------------+----------------+----------------+-----------+
| InstanceID | InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+----------------------+------------------+----------------+----------------+-----------+
| i-0658adcdb6bceb939 | myeks-ng1-Node | 192.168.3.100 | 13.209.6.163 | running |
| i-024f66075a2000bb1 | myeks-ng1-Node | 192.168.1.170 | 13.209.14.121 | running |
| i-0fdd6206c5a41dd4a | operator-host | 172.20.1.100 | 13.125.235.88 | running |
| i-07419265553915ec2 | operator-host-2 | 172.20.1.200 | 52.78.171.168 | running |
| i-06cde703b1becff8b | myeks-ng1-Node | 192.168.2.112 | 54.180.129.8 | running |
+----------------------+------------------+----------------+----------------+-----------+
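The `Tags[?Key=='Name']|[0].Value` JMESPath expression above pulls the `Name` tag out of each instance's tag list. The same selection can be sketched with `jq` on a sample response shaped like `describe-instances` output (the values are illustrative):

```shell
# Sample shaped like a describe-instances response (one instance shown)
cat > /tmp/instances.json <<'EOF'
{"Reservations":[{"Instances":[{"InstanceId":"i-0fdd6206c5a41dd4a",
 "PrivateIpAddress":"172.20.1.100","PublicIpAddress":"13.125.235.88",
 "Tags":[{"Key":"Name","Value":"operator-host"}],"State":{"Name":"running"}}]}]}
EOF

# jq equivalent of the JMESPath Tags[?Key=='Name']|[0].Value expression
jq -r '.Reservations[].Instances[]
       | (.Tags[] | select(.Key=="Name") | .Value) + " " + .PrivateIpAddress' /tmp/instances.json
```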
9. Set private IP variables
(eks-user@myeks:default) [root@operator-host ~]# N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
(eks-user@myeks:default) [root@operator-host ~]# N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2b -o jsonpath={.items[0].status.addresses[0].address})
(eks-user@myeks:default) [root@operator-host ~]# N3=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
(eks-user@myeks:default) [root@operator-host ~]# echo "export N1=$N1" >> /etc/profile
(eks-user@myeks:default) [root@operator-host ~]# echo "export N2=$N2" >> /etc/profile
(eks-user@myeks:default) [root@operator-host ~]# echo "export N3=$N3" >> /etc/profile
(eks-user@myeks:default) [root@operator-host ~]# echo $N1, $N2, $N3
โ ย ์ถ๋ ฅ
1
192.168.1.170, 192.168.2.112, 192.168.3.100
10. ๋ ธ๋ IP ๋ก ping ํ ์คํธ
1
(eks-user@myeks:default) [root@operator-host ~]# for i in $N1 $N2 $N3; do echo ">> node $i <<"; ping -c 1 $i ; echo; done
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
>> node 192.168.1.170 <<
PING 192.168.1.170 (192.168.1.170) 56(84) bytes of data.
64 bytes from 192.168.1.170: icmp_seq=1 ttl=127 time=0.802 ms
--- 192.168.1.170 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.802/0.802/0.802/0.000 ms
>> node 192.168.2.112 <<
PING 192.168.2.112 (192.168.2.112) 56(84) bytes of data.
64 bytes from 192.168.2.112: icmp_seq=1 ttl=127 time=1.18 ms
--- 192.168.2.112 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.188/1.188/1.188/0.000 ms
>> node 192.168.3.100 <<
PING 192.168.3.100 (192.168.3.100) 56(84) bytes of data.
64 bytes from 192.168.3.100: icmp_seq=1 ttl=127 time=1.47 ms
--- 192.168.3.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.472/1.472/1.472/0.000 ms
11. Convenience settings for the lab after EKS deployment
cat << EOF >> ~/.bashrc
# eksworkshop
export CLUSTER_NAME=myeks
export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" --query 'Vpcs[*].VpcId' --output text)
export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-Vpc1PublicSubnet3" --query "Subnets[0].[SubnetId]" --output text)
export N1=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node" "Name=availability-zone,Values=ap-northeast-2a" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N2=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node" "Name=availability-zone,Values=ap-northeast-2b" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export N3=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node" "Name=availability-zone,Values=ap-northeast-2c" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
export CERT_ARN=$(aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text)
MyDomain=gagajin.com # enter your own domain name
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "$MyDomain." --query "HostedZones[0].Id" --output text)
EOF
source ~/.bashrc
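Note that the unquoted `EOF` delimiter makes the shell expand `$VAR` and `$(...)` when the here-document is written, so the resolved values are baked into `~/.bashrc` rather than re-evaluated at each login. A quoted delimiter would defer expansion:

```shell
GREETING=hello

# Unquoted delimiter: $GREETING is expanded now, at write time
cat <<EOF > /tmp/expanded.txt
$GREETING
EOF

# Quoted delimiter: the text is written literally
cat <<'EOF' > /tmp/literal.txt
$GREETING
EOF

cat /tmp/expanded.txt   # hello
cat /tmp/literal.txt    # $GREETING
```

This is why the block above captures the instance IPs and subnet IDs once; if a node is replaced later, the stale values stay in `~/.bashrc` until regenerated.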
12. Install the AWS Load Balancer Controller
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
--set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
NAME: aws-load-balancer-controller
LAST DEPLOYED: Fri Mar 14 23:45:03 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
13. Install ExternalDNS
curl -s https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml | MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst | kubectl apply -f -
โ ย ์ถ๋ ฅ
1
2
3
4
serviceaccount/external-dns created
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created
14. Install the gp3 StorageClass
(1) ์คํ ๋ฆฌ์ง ํด๋์ค ๋ฐฐํฌ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  allowAutoIOPSPerGBIncrease: 'true'
  encrypted: 'true'
  fsType: xfs # default is ext4
EOF
# ๊ฒฐ๊ณผ
storageclass.storage.k8s.io/gp3 created
(2) ํ์ธ
1
kubectl get sc
โ ย ์ถ๋ ฅ
1
2
3
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 47m
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 8s
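Because the class uses `WaitForFirstConsumer`, a PVC against gp3 stays `Pending` until a pod that mounts it is scheduled, so the EBS volume is created in that pod's availability zone. A minimal sketch of such a claim (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp3-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3   # may be omitted, since gp3 is now the default class
  resources:
    requests:
      storage: 4Gi
```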
15. Install kube-ops-view
(1) Deploy with Helm
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=ClusterIP --set env.TZ="Asia/Seoul" --namespace kube-system
# ๊ฒฐ๊ณผ
NAME: kube-ops-view
LAST DEPLOYED: Fri Mar 14 23:46:11 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kube-system -l "app.kubernetes.io/name=kube-ops-view,app.kubernetes.io/instance=kube-ops-view" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:8080
(2) Configure the Ingress
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/load-balancer-name: $CLUSTER_NAME-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app.kubernetes.io/name: kubeopsview
  name: kubeopsview
  namespace: kube-system
spec:
  ingressClassName: alb
  rules:
  - host: kubeopsview.$MyDomain
    http:
      paths:
      - backend:
          service:
            name: kube-ops-view
            port:
              number: 8080 # name: http
        path: /
        pathType: Prefix
EOF
# ๊ฒฐ๊ณผ
ingress.networking.k8s.io/kubeopsview created
(3) ์ค์น๋ ํ๋ ์ ๋ณด ํ์ธ
1
kubectl get pods -n kube-system
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-554fbd9d-8r96n 1/1 Running 0 107s
aws-load-balancer-controller-554fbd9d-dzk6c 1/1 Running 0 107s
aws-node-7s9dd 2/2 Running 0 37m
aws-node-d6v7m 2/2 Running 0 37m
aws-node-wnd97 2/2 Running 0 37m
coredns-86f5954566-cc4m4 1/1 Running 0 43m
coredns-86f5954566-t7gfd 1/1 Running 0 43m
ebs-csi-controller-9c9c4d49f-5ns8n 6/6 Running 0 34m
ebs-csi-controller-9c9c4d49f-j6drv 6/6 Running 0 34m
ebs-csi-node-k5spq 3/3 Running 0 34m
ebs-csi-node-p4tcb 3/3 Running 0 34m
ebs-csi-node-rrkrv 3/3 Running 0 34m
external-dns-dc4878f5f-6cg6l 1/1 Running 0 89s
kube-ops-view-657dbc6cd8-shbj2 1/1 Running 0 41s
kube-proxy-ddmmr 1/1 Running 0 37m
kube-proxy-szvqr 1/1 Running 0 37m
kube-proxy-z9fjt 1/1 Running 0 37m
metrics-server-6bf5998d9c-5b5r5 1/1 Running 0 43m
metrics-server-6bf5998d9c-bscbc 1/1 Running 0 43m
(4) Check service, endpoints, and ingress
kubectl get ingress,svc,ep -n kube-system
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/kubeopsview alb kubeopsview.gagajin.com myeks-ingress-alb-849073579.ap-northeast-2.elb.amazonaws.com 80 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/aws-load-balancer-webhook-service ClusterIP 10.100.175.162 <none> 443/TCP 2m2s
service/eks-extension-metrics-api ClusterIP 10.100.218.172 <none> 443/TCP 48m
service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP,9153/TCP 43m
service/kube-ops-view ClusterIP 10.100.171.227 <none> 8080/TCP 56s
service/metrics-server ClusterIP 10.100.169.227 <none> 443/TCP 43m
NAME ENDPOINTS AGE
endpoints/aws-load-balancer-webhook-service 192.168.1.154:9443,192.168.2.110:9443 2m2s
endpoints/eks-extension-metrics-api 172.0.32.0:10443 48m
endpoints/kube-dns 192.168.3.115:53,192.168.3.62:53,192.168.3.115:53 + 3 more... 43m
endpoints/kube-ops-view 192.168.1.97:8080 56s
endpoints/metrics-server 192.168.3.19:10251,192.168.3.20:10251 43m
(5) Get the Kube Ops View URL
echo -e "Kube Ops View URL = https://kubeopsview.$MyDomain/#scale=1.5"
โ ย ์ถ๋ ฅ
1
Kube Ops View URL = https://kubeopsview.gagajin.com/#scale=1.5
16. Install prometheus-stack
(1) Add the Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# ๊ฒฐ๊ณผ
"prometheus-community" already exists with the same configuration, skipping
(2) ํ๋ผ๋ฏธํฐ ํ์ผ ์์ฑ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: "15s"
    evaluationInterval: "15s"
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    retention: 5d
    retentionSize: "10GiB"
    # Enable vertical pod autoscaler support for prometheus-operator
    verticalPodAutoscaler:
      enabled: true
  ingress:
    enabled: true
    ingressClassName: alb
    hosts:
      - prometheus.$MyDomain
    paths:
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'
grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator
  defaultDashboardsEnabled: false
  ingress:
    enabled: true
    ingressClassName: alb
    hosts:
      - grafana.$MyDomain
    paths:
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'
alertmanager:
  enabled: false
defaultRules:
  create: false
kubeControllerManager:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
EOT
(3) Deploy with Helm
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 69.3.1 \
-f monitor-values.yaml --create-namespace --namespace monitoring
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
NAME: kube-prometheus-stack
LAST DEPLOYED: Fri Mar 14 23:49:00 2025
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"
Get Grafana 'admin' user password by running:
kubectl --namespace monitoring get secrets kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
Access Grafana local instance:
export POD_NAME=$(kubectl --namespace monitoring get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus-stack" -oname)
kubectl --namespace monitoring port-forward $POD_NAME 3000
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
(4) Inspect the deployed kube-prometheus-stack values
helm get values -n monitoring kube-prometheus-stack
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
USER-SUPPLIED VALUES:
alertmanager:
  enabled: false
defaultRules:
  create: false
grafana:
  adminPassword: prom-operator
  defaultDashboardsEnabled: false
  defaultDashboardsTimezone: Asia/Seoul
  ingress:
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:378102432899:certificate/f967e8ca-f0b5-471d-bbe4-bee231aeb32b
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/target-type: ip
    enabled: true
    hosts:
    - grafana.gagajin.com
    ingressClassName: alb
    paths:
    - /*
kubeControllerManager:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
prometheus:
  ingress:
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:378102432899:certificate/f967e8ca-f0b5-471d-bbe4-bee231aeb32b
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/target-type: ip
    enabled: true
    hosts:
    - prometheus.gagajin.com
    ingressClassName: alb
    paths:
    - /*
  prometheusSpec:
    evaluationInterval: 15s
    podMonitorSelectorNilUsesHelmValues: false
    retention: 5d
    retentionSize: 10GiB
    scrapeInterval: 15s
    serviceMonitorSelectorNilUsesHelmValues: false
    verticalPodAutoscaler:
      enabled: true
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
(5) ํ๋ก๋ฉํ ์ฐ์ค ์น ์ ์
1
echo -e "https://prometheus.$MyDomain"
โ ย ์ถ๋ ฅ
1
https://prometheus.gagajin.com
(6) ๊ทธ๋ผํ๋ ์น ์ ์ : admin / prom-operator
1
echo -e "https://grafana.$MyDomain"
โ ย ์ถ๋ ฅ
1
https://grafana.gagajin.com
(7) Import dashboard 17900
- Examples: CPU utilization, memory utilization, disk usage
- See https://shinminjin.github.io/posts/aews04/
๐ [์ด์์๋ฒ2 EC2] kind(k8s) x.509 ์ธ์ฆ์ ํ์ธ
1. ๋ ธ๋ IP ํ์ธ
์ด์์๋ฒ2 EC2 ๊ณต์ธ IP ํ์ธ
1
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
-------------------------------------------------
| DescribeInstances |
+------------------+-----------------+----------+
| InstanceName | PublicIPAdd | Status |
+------------------+-----------------+----------+
| myeks-ng1-Node | 13.209.6.163 | running |
| myeks-ng1-Node | 13.209.14.121 | running |
| operator-host | 13.125.235.88 | running |
| operator-host-2 | 52.78.171.168 | running |
| myeks-ng1-Node | 54.180.129.8 | running |
+------------------+-----------------+----------+
2. SSH in
ssh -i kp-aews.pem ec2-user@52.78.171.168
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
The authenticity of host '52.78.171.168 (52.78.171.168)' can't be established.
ED25519 key fingerprint is SHA256:w71v9Y7VpV1ZUAIa6PyWbe78ngCf+sMgbXfKm2JwFGI.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '52.78.171.168' (ED25519) to the list of known hosts.
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2026-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
[root@operator-host-2 ~]#
3. ์์คํ inotify ์ค์ ๋ณ๊ฒฝ
1
2
3
4
5
6
7
[root@operator-host-2 ~]# sudo sysctl fs.inotify.max_user_watches=524288
# ๊ฒฐ๊ณผ
fs.inotify.max_user_watches = 524288
[root@operator-host-2 ~]# sudo sysctl fs.inotify.max_user_instances=512
# ๊ฒฐ๊ณผ
fs.inotify.max_user_instances = 512
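These `sysctl` calls change the running kernel only and reset on reboot. To persist them, the same values can be dropped into a sysctl config file (the file name below is illustrative):

```shell
# /etc/sysctl.d/99-kind-inotify.conf  (illustrative path)
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512

# Re-apply all config files without rebooting:
#   sudo sysctl --system
```

kind needs the higher limits because every node container runs its own kubelet and container runtime, each holding many inotify watches.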
4. kind๋ก ์ฟ ๋ฒ๋คํฐ์ค ํด๋ฌ์คํฐ ์์ฑ
1
[root@operator-host-2 ~]# kind create cluster --name myk8s
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
Creating cluster "myk8s" ...
โ Ensuring node image (kindest/node:v1.32.2) ๐ผ
โ Preparing nodes ๐ฆ
โ Writing configuration ๐
โ Starting control-plane ๐น๏ธ
โ Installing CNI ๐
โ Installing StorageClass ๐พ
Set kubectl context to "kind-myk8s"
You can now use your cluster with:
kubectl cluster-info --context kind-myk8s
Not sure what to do next? ๐
Check out https://kind.sigs.k8s.io/docs/user/quick-start/
(kind-myk8s:N/A) [root@operator-host-2 ~]#
5. ํด๋ฌ์คํฐ PKI ๋๋ ํ ๋ฆฌ ํ์ธ
1
(kind-myk8s:N/A) [root@operator-host-2 ~]# docker exec -it myk8s-control-plane ls -l /etc/kubernetes/pki
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
total 56
-rw-r--r-- 1 root root 1123 Mar 14 14:59 apiserver-etcd-client.crt
-rw------- 1 root root 1679 Mar 14 14:59 apiserver-etcd-client.key
-rw-r--r-- 1 root root 1176 Mar 14 14:59 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Mar 14 14:59 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1326 Mar 14 14:59 apiserver.crt # leaf certificate issued by the root CA
-rw------- 1 root root 1679 Mar 14 14:59 apiserver.key
-rw-r--r-- 1 root root 1107 Mar 14 14:59 ca.crt # root CA certificate
-rw------- 1 root root 1679 Mar 14 14:59 ca.key # private key for the root CA certificate
drwxr-xr-x 2 root root 162 Mar 14 14:59 etcd
-rw-r--r-- 1 root root 1123 Mar 14 14:59 front-proxy-ca.crt
-rw------- 1 root root 1675 Mar 14 14:59 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Mar 14 14:59 front-proxy-client.crt
-rw------- 1 root root 1679 Mar 14 14:59 front-proxy-client.key
-rw------- 1 root root 1679 Mar 14 14:59 sa.key
-rw------- 1 root root 451 Mar 14 14:59 sa.pub
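The relationship between `ca.crt` and `apiserver.crt` can be reproduced with a throwaway CA: generate a self-signed root, sign a leaf certificate with it, and verify the chain. All file names below are illustrative scratch files, not the kind cluster's actual PKI:

```shell
# Self-signed root CA (stand-in for ca.crt / ca.key)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ca.key -out /tmp/ca.crt -subj "/CN=kubernetes"

# Leaf key and CSR (stand-in for the apiserver's key and signing request)
openssl req -newkey rsa:2048 -nodes \
  -keyout /tmp/leaf.key -out /tmp/leaf.csr -subj "/CN=kube-apiserver"

# The CA signs the leaf certificate, much as kubeadm signs apiserver.crt
openssl x509 -req -in /tmp/leaf.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -out /tmp/leaf.crt -days 1

# Chain verification succeeds only against the CA that signed the leaf
openssl verify -CAfile /tmp/ca.crt /tmp/leaf.crt
```

This is the same trust check the kubelet and kubectl perform against the API server: the presented certificate must chain to the CA in their kubeconfig.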
6. View the CA certificate
(kind-myk8s:N/A) [root@operator-host-2 ~]# docker exec -it myk8s-control-plane cat /etc/kubernetes/pki/ca.crt
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
-----BEGIN CERTIFICATE-----
MIIDBTCCAe2gAwIBAgIIDSnZZPMaQCswDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yNTAzMTQxNDU0MjJaFw0zNTAzMTIxNDU5MjJaMBUx
EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQCuL5WM5PSatpvoj+pYYY6MFpAEf7USPboNJ54WvH6uR1IzFTJXS/nT69y4
+oMhFd845Ubk99vGcu5RLc7cFb2mp6QZABaj5rIWkfdRSHUxG5mH9CeD5sBAgT+7
vEYHeXwqmY69ZCOkng5jKV3H8xyBsud8CoaJGKI6RopJpASDI+IVG4D11mqAJZBi
rsh+/oCW2MQIHq4By+CFbr6zSsq+suY0KoWzBf6srrYXlreuGKB/CwTuruuGe9Mm
xtf2+VapOmwq5DquYH/pQKnV88gi0fz9HD+EW+zxDwbELyvHLfzGg2aCQlvTIkdY
5TN6u/pLHjdhWlUJDBySw3kjUFsnAgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRYa7R4GizeU5xJgRAXowkmKVTBaDAV
BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQALcMxV5Y6c
1VkeXZHSRk1fwRT0wF2V3P3clbPP485TPlazTf93LkHO9NlWXrOA5NDfmY5F0Crt
hSYSuSYlyP6omh8FsofIfucEobixGagheQbirhOzJ6C/Xu6ZfMk7Sa5Cp+UBWoub
h7K/PrRDNDCnnLOik4+w4SBniwwSTctPX6pabMLbMb7AAomcNnUACV/HaP6zXnYp
f2XKvIYglpmI2IGcLNcXWgjfdQk/OTQSUTS8ntxdebDpsE9dVW/NB6V8Zhwk6ChR
YBRM7hzgJtmIdsO5M8LbXRHZuffRZLYIq/yLpyeEFtOnRSLCVaC3J9lPa6Ey4D1F
jl7Fk8Ook3t6
-----END CERTIFICATE-----
7. openssl๋ก CA ์ธ์ฆ์ ์์ธ ์ ๋ณด ํ์ธ
1
(kind-myk8s:N/A) [root@operator-host-2 ~]# docker exec -it myk8s-control-plane openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text
โ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 948528224136740907 (0xd29d964f31a402b)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Mar 14 14:54:22 2025 GMT
Not After : Mar 12 14:59:22 2035 GMT
Subject: CN = kubernetes
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ae:2f:95:8c:e4:f4:9a:b6:9b:e8:8f:ea:58:61:
8e:8c:16:90:04:7f:b5:12:3d:ba:0d:27:9e:16:bc:
7e:ae:47:52:33:15:32:57:4b:f9:d3:eb:dc:b8:fa:
83:21:15:df:38:e5:46:e4:f7:db:c6:72:ee:51:2d:
ce:dc:15:bd:a6:a7:a4:19:00:16:a3:e6:b2:16:91:
f7:51:48:75:31:1b:99:87:f4:27:83:e6:c0:40:81:
3f:bb:bc:46:07:79:7c:2a:99:8e:bd:64:23:a4:9e:
0e:63:29:5d:c7:f3:1c:81:b2:e7:7c:0a:86:89:18:
a2:3a:46:8a:49:a4:04:83:23:e2:15:1b:80:f5:d6:
6a:80:25:90:62:ae:c8:7e:fe:80:96:d8:c4:08:1e:
ae:01:cb:e0:85:6e:be:b3:4a:ca:be:b2:e6:34:2a:
85:b3:05:fe:ac:ae:b6:17:96:b7:ae:18:a0:7f:0b:
04:ee:ae:eb:86:7b:d3:26:c6:d7:f6:f9:56:a9:3a:
6c:2a:e4:3a:ae:60:7f:e9:40:a9:d5:f3:c8:22:d1:
fc:fd:1c:3f:84:5b:ec:f1:0f:06:c4:2f:2b:c7:2d:
fc:c6:83:66:82:42:5b:d3:22:47:58:e5:33:7a:bb:
fa:4b:1e:37:61:5a:55:09:0c:1c:92:c3:79:23:50:
5b:27
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment, Certificate Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
58:6B:B4:78:1A:2C:DE:53:9C:49:81:10:17:A3:09:26:29:54:C1:68
X509v3 Subject Alternative Name:
DNS:kubernetes
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
0b:70:cc:55:e5:8e:9c:d5:59:1e:5d:91:d2:46:4d:5f:c1:14:
f4:c0:5d:95:dc:fd:dc:95:b3:cf:e3:ce:53:3e:56:b3:4d:ff:
77:2e:41:ce:f4:d9:56:5e:b3:80:e4:d0:df:99:8e:45:d0:2a:
ed:85:26:12:b9:26:25:c8:fe:a8:9a:1f:05:b2:87:c8:7e:e7:
04:a1:b8:b1:19:a8:21:79:06:e2:ae:13:b3:27:a0:bf:5e:ee:
99:7c:c9:3b:49:ae:42:a7:e5:01:5a:8b:9b:87:b2:bf:3e:b4:
43:34:30:a7:9c:b3:a2:93:8f:b0:e1:20:67:8b:0c:12:4d:cb:
4f:5f:aa:5a:6c:c2:db:31:be:c0:02:89:9c:36:75:00:09:5f:
c7:68:fe:b3:5e:76:29:7f:65:ca:bc:86:20:96:99:88:d8:81:
9c:2c:d7:17:5a:08:df:75:09:3f:39:34:12:51:34:bc:9e:dc:
5d:79:b0:e9:b0:4f:5d:55:6f:cd:07:a5:7c:66:1c:24:e8:28:
51:60:14:4c:ee:1c:e0:26:d9:88:76:c3:b9:33:c2:db:5d:11:
d9:b9:f7:d1:64:b6:08:ab:fc:8b:a7:27:84:16:d3:a7:45:22:
c2:55:a0:b7:27:d9:4f:6b:a1:32:e0:3d:45:8e:5e:c5:93:c3:
a8:93:7b:7a
8. Check the apiserver-kubelet-client certificate
(1) Print the certificate file
(kind-myk8s:N/A) [root@operator-host-2 ~]# docker exec -it myk8s-control-plane cat /etc/kubernetes/pki/apiserver-kubelet-client.crt
โ Output
-----BEGIN CERTIFICATE-----
MIIDNjCCAh6gAwIBAgIITD+1nuWVI6UwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yNTAzMTQxNDU0MjJaFw0yNjAzMTQxNDU5MjJaMEkx
HzAdBgNVBAoTFmt1YmVhZG06Y2x1c3Rlci1hZG1pbnMxJjAkBgNVBAMTHWt1YmUt
YXBpc2VydmVyLWt1YmVsZXQtY2xpZW50MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAnMqW3pzHDH4k7kshAN5pQP0MyVOJoSaobTP4NhGxDab9+2mP+0zo
NmcPgtTdoBagLp55Xtwvc9dbvVqGvRs0An+lUxxWSCJH5dHn7zRV40BJUPoWqht8
4s/mHm1stLNAj0cbaCONYUUWkLU5yknPFgXU00ewETy7xoMtJj8X1EA7gTUa1ck5
vc6A5QIXU9xNbiJUJXooDbgmqpGeRKN8EbtZeQ+ZAr25cGvcDrJGvAUTg3FffpqN
T8gE/f2v/ByXRHyYkQco5abdn1CsbPCkzdnsF8oKHF+KhR4Qd3etMF1nv3ou3RM3
i0asJsPsD4SSTHFADV1RWweW/V0tMbpdpwIDAQABo1YwVDAOBgNVHQ8BAf8EBAMC
BaAwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAW
gBRYa7R4GizeU5xJgRAXowkmKVTBaDANBgkqhkiG9w0BAQsFAAOCAQEAn6ILGOmn
ABNWq8NlHgBNve8FehzgEUHQSVS7lHcMIQ5fG3B5LZTJ7dHfKb78C79Tocm1EJIc
eqxde5wpkZapyL749NcAoaZP9dnzYBY2gjRrccRhYEXQVgJdC1rFNLOgO3GKbBfn
GsnrlbeExOoj9i5DJ+IBiozK+smTIztWt9hARXqD8Yl+1KEq+fjRDjnoECxw8G05
RoKKJyhp2Iioxsd9PGUGcfSDz0Pvej7rdzEmLyj6cGY5pjIHJuBVqH8SMkMHwlz2
n73qbTKy6kBU10iVC4uLitUnGnU2IgMb3nJmGqHb7sQMFQGULHByvNSvHMllXMuf
88+A5/z1ubMMqw==
-----END CERTIFICATE-----
(2) Inspect the certificate details
(kind-myk8s:N/A) [root@operator-host-2 ~]# docker exec -it myk8s-control-plane openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text
โ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 5494309764476511141 (0x4c3fb59ee59523a5)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Mar 14 14:54:22 2025 GMT
Not After : Mar 14 14:59:22 2026 GMT
Subject: O = kubeadm:cluster-admins, CN = kube-apiserver-kubelet-client
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:9c:ca:96:de:9c:c7:0c:7e:24:ee:4b:21:00:de:
69:40:fd:0c:c9:53:89:a1:26:a8:6d:33:f8:36:11:
b1:0d:a6:fd:fb:69:8f:fb:4c:e8:36:67:0f:82:d4:
dd:a0:16:a0:2e:9e:79:5e:dc:2f:73:d7:5b:bd:5a:
86:bd:1b:34:02:7f:a5:53:1c:56:48:22:47:e5:d1:
e7:ef:34:55:e3:40:49:50:fa:16:aa:1b:7c:e2:cf:
e6:1e:6d:6c:b4:b3:40:8f:47:1b:68:23:8d:61:45:
16:90:b5:39:ca:49:cf:16:05:d4:d3:47:b0:11:3c:
bb:c6:83:2d:26:3f:17:d4:40:3b:81:35:1a:d5:c9:
39:bd:ce:80:e5:02:17:53:dc:4d:6e:22:54:25:7a:
28:0d:b8:26:aa:91:9e:44:a3:7c:11:bb:59:79:0f:
99:02:bd:b9:70:6b:dc:0e:b2:46:bc:05:13:83:71:
5f:7e:9a:8d:4f:c8:04:fd:fd:af:fc:1c:97:44:7c:
98:91:07:28:e5:a6:dd:9f:50:ac:6c:f0:a4:cd:d9:
ec:17:ca:0a:1c:5f:8a:85:1e:10:77:77:ad:30:5d:
67:bf:7a:2e:dd:13:37:8b:46:ac:26:c3:ec:0f:84:
92:4c:71:40:0d:5d:51:5b:07:96:fd:5d:2d:31:ba:
5d:a7
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
58:6B:B4:78:1A:2C:DE:53:9C:49:81:10:17:A3:09:26:29:54:C1:68
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
9f:a2:0b:18:e9:a7:00:13:56:ab:c3:65:1e:00:4d:bd:ef:05:
7a:1c:e0:11:41:d0:49:54:bb:94:77:0c:21:0e:5f:1b:70:79:
2d:94:c9:ed:d1:df:29:be:fc:0b:bf:53:a1:c9:b5:10:92:1c:
7a:ac:5d:7b:9c:29:91:96:a9:c8:be:f8:f4:d7:00:a1:a6:4f:
f5:d9:f3:60:16:36:82:34:6b:71:c4:61:60:45:d0:56:02:5d:
0b:5a:c5:34:b3:a0:3b:71:8a:6c:17:e7:1a:c9:eb:95:b7:84:
c4:ea:23:f6:2e:43:27:e2:01:8a:8c:ca:fa:c9:93:23:3b:56:
b7:d8:40:45:7a:83:f1:89:7e:d4:a1:2a:f9:f8:d1:0e:39:e8:
10:2c:70:f0:6d:39:46:82:8a:27:28:69:d8:88:a8:c6:c7:7d:
3c:65:06:71:f4:83:cf:43:ef:7a:3e:eb:77:31:26:2f:28:fa:
70:66:39:a6:32:07:26:e0:55:a8:7f:12:32:43:07:c2:5c:f6:
9f:bd:ea:6d:32:b2:ea:40:54:d7:48:95:0b:8b:8b:8a:d5:27:
1a:75:36:22:03:1b:de:72:66:1a:a1:db:ee:c4:0c:15:01:94:
2c:70:72:bc:d4:af:1c:c9:65:5c:cb:9f:f3:cf:80:e7:fc:f5:
b9:b3:0c:ab
- Confirms the certificate's details (e.g. subject, key usage, public key)
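The full `-text` dump is verbose; `openssl x509` can also print just the fields usually checked (subject, issuer, validity window). A local sketch using a throwaway self-signed cert (the demo filenames here are assumptions; inside the kind node the same flags work against /etc/kubernetes/pki/*.crt):

```shell
# Create a throwaway key and self-signed cert to demonstrate the flags
openssl genrsa -out ca-demo.key 2048 2>/dev/null
openssl req -x509 -new -key ca-demo.key -subj "/CN=kubernetes" -days 3650 -out ca-demo.crt

# Print only the commonly checked fields: subject, issuer, validity dates
openssl x509 -in ca-demo.crt -noout -subject -issuer -dates
```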
9. Check CertificateSigningRequest (CSR) status
(1) List CSRs
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl get certificatesigningrequests
โ Output
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-zm8wg 2m38s kubernetes.io/kube-apiserver-client-kubelet system:node:myk8s-control-plane <none> Approved,Issued
(2) Describe the CSR
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl describe certificatesigningrequests
โ Output
Name: csr-zm8wg
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 14 Mar 2025 23:59:32 +0900
Requesting User: system:node:myk8s-control-plane
Signer: kubernetes.io/kube-apiserver-client-kubelet
Status: Approved,Issued
Subject:
Common Name: system:node:myk8s-control-plane
Serial Number:
Organization: system:nodes
Events: <none>
10. Check the kubeconfig contents
(kind-myk8s:N/A) [root@operator-host-2 ~]# cat $HOME/.kube/config
โ Output
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJRFNuWlpQTWFRQ3N3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhORFUwTWpKYUZ3MHpOVEF6TVRJeE5EVTVNakphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUN1TDVXTTVQU2F0cHZvaitwWVlZNk1GcEFFZjdVU1Bib05KNTRXdkg2dVIxSXpGVEpYUy9uVDY5eTQKK29NaEZkODQ1VWJrOTl2R2N1NVJMYzdjRmIybXA2UVpBQmFqNXJJV2tmZFJTSFV4RzVtSDlDZUQ1c0JBZ1QrNwp2RVlIZVh3cW1ZNjlaQ09rbmc1aktWM0g4eHlCc3VkOENvYUpHS0k2Um9wSnBBU0RJK0lWRzREMTFtcUFKWkJpCnJzaCsvb0NXMk1RSUhxNEJ5K0NGYnI2elNzcStzdVkwS29XekJmNnNycllYbHJldUdLQi9Dd1R1cnV1R2U5TW0KeHRmMitWYXBPbXdxNURxdVlIL3BRS25WODhnaTBmejlIRCtFVyt6eER3YkVMeXZITGZ6R2cyYUNRbHZUSWtkWQo1VE42dS9wTEhqZGhXbFVKREJ5U3cza2pVRnNuQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSWWE3UjRHaXplVTV4SmdSQVhvd2ttS1ZUQmFEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUxjTXhWNVk2YwoxVmtlWFpIU1JrMWZ3UlQwd0YyVjNQM2NsYlBQNDg1VFBsYXpUZjkzTGtITzlObFdYck9BNU5EZm1ZNUYwQ3J0CmhTWVN1U1lseVA2b21oOEZzb2ZJZnVjRW9iaXhHYWdoZVFiaXJoT3pKNkMvWHU2WmZNazdTYTVDcCtVQldvdWIKaDdLL1ByUkRORENubkxPaWs0K3c0U0JuaXd3U1RjdFBYNnBhYk1MYk1iN0FBb21jTm5VQUNWL0hhUDZ6WG5ZcApmMlhLdklZZ2xwbUkySUdjTE5jWFdnamZkUWsvT1RRU1VUUzhudHhkZWJEcHNFOWRWVy9OQjZWOFpod2s2Q2hSCllCUk03aHpnSnRtSWRzTzVNOExiWFJIWnVmZlJaTFlJcS95THB5ZUVGdE9uUlNMQ1ZhQzNKOWxQYTZFeTREMUYKamw3Rms4T29rM3Q2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://127.0.0.1:44471
name: kind-myk8s
contexts:
- context:
cluster: kind-myk8s
user: kind-myk8s
name: kind-myk8s
current-context: kind-myk8s
kind: Config
preferences: {}
users:
- name: kind-myk8s
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJYnZzbUlPMGQ0RXd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhORFUwTWpKYUZ3MHlOakF6TVRReE5EVTVNakphTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFETGFoeXUKQ20xT0U4dmJWZHVIOWtiZkVoMlhjcWhUL0ZwUDVUdWp4QThBNEZrNys5Nmh0Vitwd0JZYktaNXBYbS84Rmk0YQpxWGlrN21hbWQ2WHhOZFBjZzh1R1BlTjYrNlFUdHRDMUYwNGhBa2JJQ2pwbWRuUUtDQ0R2ZzZFVTF4RkJlZlJWCnFLRS82VGJ6R2RBSDE4OFVqVjBmSXZHMFJ5czFPZS84MGZ0T0ZJSXFINnBzQnVVQ1FTWGdBM2RDbnBUaWJIVUkKMk81VUUyejVxZ0ViZ1JqcUJjNUQrMEJ5cncvQVpON1JSWnhaNnVaVXZMOXF6bmxlM3pzTVlTS3RIR05qREFrbQozOURRUXhiT3JoZU9UV0NrZEZDeXFXN0luQXdwTzRSMVc3aTkxSGRsWGs2OGdTeml6SjdmcVhzUGJqa2pmam81CkxzTmtGMFpRWnJFZXBRMjNBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkZocnRIZ2FMTjVUbkVtQgpFQmVqQ1NZcFZNRm9NQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUFwSFU3dDhER1NwUHZJSnQvdEdhNEMzL09uCkpxb1ZNRnJNaEpWdjVwNzFHRHNRSXgwUThGSk1GdzNlbEkzY2F0UlZXWjE4a1UxaFZ2d1VhNVl1YUhRWTVhQ3kKeVJKY2RMMUtrYzNxMHE2bmJ5dS9vT2p4YnJtRzgyckN1Qk5McWN1MlB6YXhYeVNFeVNqZEN6Yy9QMmxhZllJbAptTUdTN0hZbDF0dThWUTBkM0ZMemd0RW1SdjRjeHhvWU95bkFWcGNTV1pPN3k2RUVkSmM4TExDTWU3SlhyYTIzCkJwdkVlcXdsQkNwRTVGdjZ3UUZYMk1uTExIWk53Z1I0RGtzblN0Rm1rY2l3UHBiMk4ycDNNay9sbzZMR2w3dnkKUW0vM01qaHhKT0k3ejVibDRzSm8vaG12OVc1WXRJRURKWHJ2Z1hENjhwRVhjbFdOL0FBTVltcjdrallWCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeTJvY3JncHRUaFBMMjFYYmgvWkczeElkbDNLb1UveGFUK1U3bzhRUEFPQlpPL3ZlCm9iVmZxY0FXR3ltZWFWNXYvQll1R3FsNHBPNW1wbmVsOFRYVDNJUExoajNqZXZ1a0U3YlF0UmRPSVFKR3lBbzYKWm5aMENnZ2c3NE9oRk5jUlFYbjBWYWloUCtrMjh4blFCOWZQRkkxZEh5THh0RWNyTlRudi9OSDdUaFNDS2grcQpiQWJsQWtFbDRBTjNRcDZVNG14MUNOanVWQk5zK2FvQkc0RVk2Z1hPUS90QWNxOFB3R1RlMFVXY1dlcm1WTHkvCmFzNTVYdDg3REdFaXJSeGpZd3dKSnQvUTBFTVd6cTRYamsxZ3BIUlFzcWx1eUp3TUtUdUVkVnU0dmRSM1pWNU8KdklFczRzeWUzNmw3RDI0NUkzNDZPUzdEWkJkR1VHYXhIcVVOdHdJREFRQUJBb0lCQUVGYytKaElTM1ZTVVNoSwp2MzlCK09ZSFNURDRLL1RJMnpCZkpibnE3ek5GNUFhdFdZMjIzV1dManM3dG9iU1VId0h2RXFPSW4zYklFSDRmClpsaExCcWdPUmFEK1ZCR1p1TkNJNXltNXNtWlM5L0docjhCckFjQ1RlaG5jdnk4V0tMcFVlTm5wbE44WGpvdXgKV0xLY2V6Tk1kWWJpMEs0d1RFY1BOZm1VYzk5VGpqRVhCb1UvcDhTdi9yWDlOUGlBcUdhNkVjZ21mZ1QrWm9OdwpKWHk0bGVRQWNLVXpZYjhZbVNNeUtXTU9yNlhJMkwxRERiMnE0NTdBUC9rb091VGRKYmxDN1owcExnKzB0eVFQClU2dzNMNkhWRTRTM1RPSjFGTFdaSTIxTlI5ZWF1MjY0RDhENVArdnlHMDVFZktwREE4WnZsSGdxNHV1Wkl2bHkKTDYwQnBDRUNnWUVBL3JhdHh5SmtRbTB0OW1Ic0MxZWx4M1dXc29ld0VSaWttRWxxQ28rWG8wMkJjai8xa08zaApQRldteldlNUQ2TlJ4TldCV2tudUtSWHoxSnBJdml4Wm1ZMFV6T3Bydk41WGkwT09xN1BZZHBpeHpQQ2tRMVMxCnBoYzdYWmVKVGFtdldNOHdIckQ4c3pvMXFrbjhGbGVaY0FndmFEUlNRaDNBWmRmVSsxOHI1NUVDZ1lFQXpIRWIKdEVtMzN6bmVjbERvVGR2YW0zaGFmUTJEWU10TVE2UWxjM2R0ZTFlUDlmcmJJNTQwMzBXWjY1RW5SMmVENldUdApvWWxkUTlRNGJRM0VSU3pMOG5ncis2dysxN2xWeXhBZWo3dHltQi9QSkEyK2YydGQzbmh1ZnBUSE5WbGhZOXdyCnAwWVdLNGNKc2lWVXd6TTR4VmhLdG5PV3huWnc5UlM5VUMzMVRNY0NnWUVBc24ySEYwV0ZabnNsdTBMeGF4MVgKWVllSU84RVQ0MWNXZUZUeHgwYktaemczM3J6dE0wdFBDNzJscnNqaGlSRFVpdzltbnNPeDdmNmhLRG1aZ2hLSQpFeThuQlZXOGU5Ui9HbXNUL2tTQUN0T0R2TzVnM1lIdDdON1l6Z1FUeG1XREo4UEFuN0U4MDhlVnRhZzB5OTlFCitabnl4cDNyaXNOWWdNV1hUVE5yQzlFQ2dZQXM0VXUycVZRL0llSU9jR3ArNVJ1NWM1TlJ6b3lmekNGaTIvOEkKdVJnRXNyVTh4NlFoenBKR3pXMjd3L0srZngvN05aZmhGVm12RVVDTjJDN1ZETDk4N0JxanRpMVppQ3NvVjlLTgp0UlcwQlkrZ2w0L1JRdzJwVUFEWnN1bUVjYW1xbFdQVDVkUHFIRXZwbXI1ZjE3Zkh3dGtyOG5ZUC9XSlF1d3ZRCk5UYWJjd0tCZ1FDQ3Nma0hzajIweTNoU0p
jOE9kcHNwMHRJZUxaMTlxTU55eHlib1JwZ0tSbWVvdVR5TzNSeVEKbGFncms0YVY1WWE1L3V6T1h6Y2RpN3lBQVZsWGZGNXFwd1FvYVFCcUpuWWNHY2dWTFBTV2haS01sbHJzT042WApYYWZabXJRdzBwZVVQTThCdTg4TXU5VDNraGlLSzhIa0hiRDlDaG1PSGlPa3duUTlieUFHUGc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
11. Check certificate-authority-data
(1) Base64-decode certificate-authority-data
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJRFNuWlpQTWFRQ3N3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhORFUwTWpKYUZ3MHpOVEF6TVRJeE5EVTVNakphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUN1TDVXTTVQU2F0cHZvaitwWVlZNk1GcEFFZjdVU1Bib05KNTRXdkg2dVIxSXpGVEpYUy9uVDY5eTQKK29NaEZkODQ1VWJrOTl2R2N1NVJMYzdjRmIybXA2UVpBQmFqNXJJV2tmZFJTSFV4RzVtSDlDZUQ1c0JBZ1QrNwp2RVlIZVh3cW1ZNjlaQ09rbmc1aktWM0g4eHlCc3VkOENvYUpHS0k2Um9wSnBBU0RJK0lWRzREMTFtcUFKWkJpCnJzaCsvb0NXMk1RSUhxNEJ5K0NGYnI2elNzcStzdVkwS29XekJmNnNycllYbHJldUdLQi9Dd1R1cnV1R2U5TW0KeHRmMitWYXBPbXdxNURxdVlIL3BRS25WODhnaTBmejlIRCtFVyt6eER3YkVMeXZITGZ6R2cyYUNRbHZUSWtkWQo1VE42dS9wTEhqZGhXbFVKREJ5U3cza2pVRnNuQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSWWE3UjRHaXplVTV4SmdSQVhvd2ttS1ZUQmFEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUxjTXhWNVk2YwoxVmtlWFpIU1JrMWZ3UlQwd0YyVjNQM2NsYlBQNDg1VFBsYXpUZjkzTGtITzlObFdYck9BNU5EZm1ZNUYwQ3J0CmhTWVN1U1lseVA2b21oOEZzb2ZJZnVjRW9iaXhHYWdoZVFiaXJoT3pKNkMvWHU2WmZNazdTYTVDcCtVQldvdWIKaDdLL1ByUkRORENubkxPaWs0K3c0U0JuaXd3U1RjdFBYNnBhYk1MYk1iN0FBb21jTm5VQUNWL0hhUDZ6WG5ZcApmMlhLdklZZ2xwbUkySUdjTE5jWFdnamZkUWsvT1RRU1VUUzhudHhkZWJEcHNFOWRWVy9OQjZWOFpod2s2Q2hSCllCUk03aHpnSnRtSWRzTzVNOExiWFJIWnVmZlJaTFlJcS95THB5ZUVGdE9uUlNMQ1ZhQzNKOWxQYTZFeTREMUYKamw3Rms4T29rM3Q2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" | base64 -d
โ Output
-----BEGIN CERTIFICATE-----
MIIDBTCCAe2gAwIBAgIIDSnZZPMaQCswDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yNTAzMTQxNDU0MjJaFw0zNTAzMTIxNDU5MjJaMBUx
EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQCuL5WM5PSatpvoj+pYYY6MFpAEf7USPboNJ54WvH6uR1IzFTJXS/nT69y4
+oMhFd845Ubk99vGcu5RLc7cFb2mp6QZABaj5rIWkfdRSHUxG5mH9CeD5sBAgT+7
vEYHeXwqmY69ZCOkng5jKV3H8xyBsud8CoaJGKI6RopJpASDI+IVG4D11mqAJZBi
rsh+/oCW2MQIHq4By+CFbr6zSsq+suY0KoWzBf6srrYXlreuGKB/CwTuruuGe9Mm
xtf2+VapOmwq5DquYH/pQKnV88gi0fz9HD+EW+zxDwbELyvHLfzGg2aCQlvTIkdY
5TN6u/pLHjdhWlUJDBySw3kjUFsnAgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRYa7R4GizeU5xJgRAXowkmKVTBaDAV
BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQALcMxV5Y6c
1VkeXZHSRk1fwRT0wF2V3P3clbPP485TPlazTf93LkHO9NlWXrOA5NDfmY5F0Crt
hSYSuSYlyP6omh8FsofIfucEobixGagheQbirhOzJ6C/Xu6ZfMk7Sa5Cp+UBWoub
h7K/PrRDNDCnnLOik4+w4SBniwwSTctPX6pabMLbMb7AAomcNnUACV/HaP6zXnYp
f2XKvIYglpmI2IGcLNcXWgjfdQk/OTQSUTS8ntxdebDpsE9dVW/NB6V8Zhwk6ChR
YBRM7hzgJtmIdsO5M8LbXRHZuffRZLYIq/yLpyeEFtOnRSLCVaC3J9lPa6Ey4D1F
jl7Fk8Ook3t6
-----END CERTIFICATE-----
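The decode can be sanity-checked locally: the kubeconfig embeds the PEM as single-line base64 (effectively `base64 | tr -d '\n'`), and decoding recovers the original file byte-for-byte. A minimal round-trip sketch with a throwaway cert (filenames are assumptions):

```shell
# A throwaway self-signed cert stands in for ca.crt
openssl genrsa -out rt.key 2048 2>/dev/null
openssl req -x509 -new -key rt.key -subj "/CN=kubernetes" -out rt.crt

# Encode the way kubeconfig stores it, then decode and compare
base64 < rt.crt | tr -d '\n' | base64 -d > rt-decoded.crt
cmp -s rt.crt rt-decoded.crt && echo "round-trip ok"
```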
(2) Edit the myuser certificate file
(kind-myk8s:N/A) [root@operator-host-2 ~]# vi myuser.crt
(3) Inspect the myuser certificate details
(kind-myk8s:N/A) [root@operator-host-2 ~]# openssl x509 -in myuser.crt -noout -text
โ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 948528224136740907 (0xd29d964f31a402b)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Mar 14 14:54:22 2025 GMT
Not After : Mar 12 14:59:22 2035 GMT
Subject: CN=kubernetes
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ae:2f:95:8c:e4:f4:9a:b6:9b:e8:8f:ea:58:61:
8e:8c:16:90:04:7f:b5:12:3d:ba:0d:27:9e:16:bc:
7e:ae:47:52:33:15:32:57:4b:f9:d3:eb:dc:b8:fa:
83:21:15:df:38:e5:46:e4:f7:db:c6:72:ee:51:2d:
ce:dc:15:bd:a6:a7:a4:19:00:16:a3:e6:b2:16:91:
f7:51:48:75:31:1b:99:87:f4:27:83:e6:c0:40:81:
3f:bb:bc:46:07:79:7c:2a:99:8e:bd:64:23:a4:9e:
0e:63:29:5d:c7:f3:1c:81:b2:e7:7c:0a:86:89:18:
a2:3a:46:8a:49:a4:04:83:23:e2:15:1b:80:f5:d6:
6a:80:25:90:62:ae:c8:7e:fe:80:96:d8:c4:08:1e:
ae:01:cb:e0:85:6e:be:b3:4a:ca:be:b2:e6:34:2a:
85:b3:05:fe:ac:ae:b6:17:96:b7:ae:18:a0:7f:0b:
04:ee:ae:eb:86:7b:d3:26:c6:d7:f6:f9:56:a9:3a:
6c:2a:e4:3a:ae:60:7f:e9:40:a9:d5:f3:c8:22:d1:
fc:fd:1c:3f:84:5b:ec:f1:0f:06:c4:2f:2b:c7:2d:
fc:c6:83:66:82:42:5b:d3:22:47:58:e5:33:7a:bb:
fa:4b:1e:37:61:5a:55:09:0c:1c:92:c3:79:23:50:
5b:27
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment, Certificate Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
58:6B:B4:78:1A:2C:DE:53:9C:49:81:10:17:A3:09:26:29:54:C1:68
X509v3 Subject Alternative Name:
DNS:kubernetes
Signature Algorithm: sha256WithRSAEncryption
0b:70:cc:55:e5:8e:9c:d5:59:1e:5d:91:d2:46:4d:5f:c1:14:
f4:c0:5d:95:dc:fd:dc:95:b3:cf:e3:ce:53:3e:56:b3:4d:ff:
77:2e:41:ce:f4:d9:56:5e:b3:80:e4:d0:df:99:8e:45:d0:2a:
ed:85:26:12:b9:26:25:c8:fe:a8:9a:1f:05:b2:87:c8:7e:e7:
04:a1:b8:b1:19:a8:21:79:06:e2:ae:13:b3:27:a0:bf:5e:ee:
99:7c:c9:3b:49:ae:42:a7:e5:01:5a:8b:9b:87:b2:bf:3e:b4:
43:34:30:a7:9c:b3:a2:93:8f:b0:e1:20:67:8b:0c:12:4d:cb:
4f:5f:aa:5a:6c:c2:db:31:be:c0:02:89:9c:36:75:00:09:5f:
c7:68:fe:b3:5e:76:29:7f:65:ca:bc:86:20:96:99:88:d8:81:
9c:2c:d7:17:5a:08:df:75:09:3f:39:34:12:51:34:bc:9e:dc:
5d:79:b0:e9:b0:4f:5d:55:6f:cd:07:a5:7c:66:1c:24:e8:28:
51:60:14:4c:ee:1c:e0:26:d9:88:76:c3:b9:33:c2:db:5d:11:
d9:b9:f7:d1:64:b6:08:ab:fc:8b:a7:27:84:16:d3:a7:45:22:
c2:55:a0:b7:27:d9:4f:6b:a1:32:e0:3d:45:8e:5e:c5:93:c3:
a8:93:7b:7a
12. Check client-certificate-data
(kind-myk8s:N/A) [root@operator-host-2 ~]# echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJYnZzbUlPMGQ0RXd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhORFUwTWpKYUZ3MHlOakF6TVRReE5EVTVNakphTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFETGFoeXUKQ20xT0U4dmJWZHVIOWtiZkVoMlhjcWhUL0ZwUDVUdWp4QThBNEZrNys5Nmh0Vitwd0JZYktaNXBYbS84Rmk0YQpxWGlrN21hbWQ2WHhOZFBjZzh1R1BlTjYrNlFUdHRDMUYwNGhBa2JJQ2pwbWRuUUtDQ0R2ZzZFVTF4RkJlZlJWCnFLRS82VGJ6R2RBSDE4OFVqVjBmSXZHMFJ5czFPZS84MGZ0T0ZJSXFINnBzQnVVQ1FTWGdBM2RDbnBUaWJIVUkKMk81VUUyejVxZ0ViZ1JqcUJjNUQrMEJ5cncvQVpON1JSWnhaNnVaVXZMOXF6bmxlM3pzTVlTS3RIR05qREFrbQozOURRUXhiT3JoZU9UV0NrZEZDeXFXN0luQXdwTzRSMVc3aTkxSGRsWGs2OGdTeml6SjdmcVhzUGJqa2pmam81CkxzTmtGMFpRWnJFZXBRMjNBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkZocnRIZ2FMTjVUbkVtQgpFQmVqQ1NZcFZNRm9NQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUFwSFU3dDhER1NwUHZJSnQvdEdhNEMzL09uCkpxb1ZNRnJNaEpWdjVwNzFHRHNRSXgwUThGSk1GdzNlbEkzY2F0UlZXWjE4a1UxaFZ2d1VhNVl1YUhRWTVhQ3kKeVJKY2RMMUtrYzNxMHE2bmJ5dS9vT2p4YnJtRzgyckN1Qk5McWN1MlB6YXhYeVNFeVNqZEN6Yy9QMmxhZllJbAptTUdTN0hZbDF0dThWUTBkM0ZMemd0RW1SdjRjeHhvWU95bkFWcGNTV1pPN3k2RUVkSmM4TExDTWU3SlhyYTIzCkJwdkVlcXdsQkNwRTVGdjZ3UUZYMk1uTExIWk53Z1I0RGtzblN0Rm1rY2l3UHBiMk4ycDNNay9sbzZMR2w3dnkKUW0vM01qaHhKT0k3ejVibDRzSm8vaG12OVc1WXRJRURKWHJ2Z1hENjhwRVhjbFdOL0FBTVltcjdrallWCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" | \
> base64 -d | \
> openssl x509 -noout -text
โ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 7997027486185414732 (0x6efb2620ed1de04c)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Mar 14 14:54:22 2025 GMT
Not After : Mar 14 14:59:22 2026 GMT
Subject: O=kubeadm:cluster-admins, CN=kubernetes-admin
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:cb:6a:1c:ae:0a:6d:4e:13:cb:db:55:db:87:f6:
46:df:12:1d:97:72:a8:53:fc:5a:4f:e5:3b:a3:c4:
0f:00:e0:59:3b:fb:de:a1:b5:5f:a9:c0:16:1b:29:
9e:69:5e:6f:fc:16:2e:1a:a9:78:a4:ee:66:a6:77:
a5:f1:35:d3:dc:83:cb:86:3d:e3:7a:fb:a4:13:b6:
d0:b5:17:4e:21:02:46:c8:0a:3a:66:76:74:0a:08:
20:ef:83:a1:14:d7:11:41:79:f4:55:a8:a1:3f:e9:
36:f3:19:d0:07:d7:cf:14:8d:5d:1f:22:f1:b4:47:
2b:35:39:ef:fc:d1:fb:4e:14:82:2a:1f:aa:6c:06:
e5:02:41:25:e0:03:77:42:9e:94:e2:6c:75:08:d8:
ee:54:13:6c:f9:aa:01:1b:81:18:ea:05:ce:43:fb:
40:72:af:0f:c0:64:de:d1:45:9c:59:ea:e6:54:bc:
bf:6a:ce:79:5e:df:3b:0c:61:22:ad:1c:63:63:0c:
09:26:df:d0:d0:43:16:ce:ae:17:8e:4d:60:a4:74:
50:b2:a9:6e:c8:9c:0c:29:3b:84:75:5b:b8:bd:d4:
77:65:5e:4e:bc:81:2c:e2:cc:9e:df:a9:7b:0f:6e:
39:23:7e:3a:39:2e:c3:64:17:46:50:66:b1:1e:a5:
0d:b7
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
keyid:58:6B:B4:78:1A:2C:DE:53:9C:49:81:10:17:A3:09:26:29:54:C1:68
Signature Algorithm: sha256WithRSAEncryption
29:1d:4e:ed:f0:31:92:a4:fb:c8:26:df:ed:19:ae:02:df:f3:
a7:26:aa:15:30:5a:cc:84:95:6f:e6:9e:f5:18:3b:10:23:1d:
10:f0:52:4c:17:0d:de:94:8d:dc:6a:d4:55:59:9d:7c:91:4d:
61:56:fc:14:6b:96:2e:68:74:18:e5:a0:b2:c9:12:5c:74:bd:
4a:91:cd:ea:d2:ae:a7:6f:2b:bf:a0:e8:f1:6e:b9:86:f3:6a:
c2:b8:13:4b:a9:cb:b6:3f:36:b1:5f:24:84:c9:28:dd:0b:37:
3f:3f:69:5a:7d:82:25:98:c1:92:ec:76:25:d6:db:bc:55:0d:
1d:dc:52:f3:82:d1:26:46:fe:1c:c7:1a:18:3b:29:c0:56:97:
12:59:93:bb:cb:a1:04:74:97:3c:2c:b0:8c:7b:b2:57:ad:ad:
b7:06:9b:c4:7a:ac:25:04:2a:44:e4:5b:fa:c1:01:57:d8:c9:
cb:2c:76:4d:c2:04:78:0e:4b:27:4a:d1:66:91:c8:b0:3e:96:
f6:37:6a:77:32:4f:e5:a3:a2:c6:97:bb:f2:42:6f:f7:32:38:
71:24:e2:3b:cf:96:e5:e2:c2:68:fe:19:af:f5:6e:58:b4:81:
03:25:7a:ef:81:70:fa:f2:91:17:72:55:8d:fc:00:0c:62:6a:
fb:92:36:15
13. Check client-key-data
(kind-myk8s:N/A) [root@operator-host-2 ~]# echo "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeTJvY3JncHRUaFBMMjFYYmgvWkczeElkbDNLb1UveGFUK1U3bzhRUEFPQlpPL3ZlCm9iVmZxY0FXR3ltZWFWNXYvQll1R3FsNHBPNW1wbmVsOFRYVDNJUExoajNqZXZ1a0U3YlF0UmRPSVFKR3lBbzYKWm5aMENnZ2c3NE9oRk5jUlFYbjBWYWloUCtrMjh4blFCOWZQRkkxZEh5THh0RWNyTlRudi9OSDdUaFNDS2grcQpiQWJsQWtFbDRBTjNRcDZVNG14MUNOanVWQk5zK2FvQkc0RVk2Z1hPUS90QWNxOFB3R1RlMFVXY1dlcm1WTHkvCmFzNTVYdDg3REdFaXJSeGpZd3dKSnQvUTBFTVd6cTRYamsxZ3BIUlFzcWx1eUp3TUtUdUVkVnU0dmRSM1pWNU8KdklFczRzeWUzNmw3RDI0NUkzNDZPUzdEWkJkR1VHYXhIcVVOdHdJREFRQUJBb0lCQUVGYytKaElTM1ZTVVNoSwp2MzlCK09ZSFNURDRLL1RJMnpCZkpibnE3ek5GNUFhdFdZMjIzV1dManM3dG9iU1VId0h2RXFPSW4zYklFSDRmClpsaExCcWdPUmFEK1ZCR1p1TkNJNXltNXNtWlM5L0docjhCckFjQ1RlaG5jdnk4V0tMcFVlTm5wbE44WGpvdXgKV0xLY2V6Tk1kWWJpMEs0d1RFY1BOZm1VYzk5VGpqRVhCb1UvcDhTdi9yWDlOUGlBcUdhNkVjZ21mZ1QrWm9OdwpKWHk0bGVRQWNLVXpZYjhZbVNNeUtXTU9yNlhJMkwxRERiMnE0NTdBUC9rb091VGRKYmxDN1owcExnKzB0eVFQClU2dzNMNkhWRTRTM1RPSjFGTFdaSTIxTlI5ZWF1MjY0RDhENVArdnlHMDVFZktwREE4WnZsSGdxNHV1Wkl2bHkKTDYwQnBDRUNnWUVBL3JhdHh5SmtRbTB0OW1Ic0MxZWx4M1dXc29ld0VSaWttRWxxQ28rWG8wMkJjai8xa08zaApQRldteldlNUQ2TlJ4TldCV2tudUtSWHoxSnBJdml4Wm1ZMFV6T3Bydk41WGkwT09xN1BZZHBpeHpQQ2tRMVMxCnBoYzdYWmVKVGFtdldNOHdIckQ4c3pvMXFrbjhGbGVaY0FndmFEUlNRaDNBWmRmVSsxOHI1NUVDZ1lFQXpIRWIKdEVtMzN6bmVjbERvVGR2YW0zaGFmUTJEWU10TVE2UWxjM2R0ZTFlUDlmcmJJNTQwMzBXWjY1RW5SMmVENldUdApvWWxkUTlRNGJRM0VSU3pMOG5ncis2dysxN2xWeXhBZWo3dHltQi9QSkEyK2YydGQzbmh1ZnBUSE5WbGhZOXdyCnAwWVdLNGNKc2lWVXd6TTR4VmhLdG5PV3huWnc5UlM5VUMzMVRNY0NnWUVBc24ySEYwV0ZabnNsdTBMeGF4MVgKWVllSU84RVQ0MWNXZUZUeHgwYktaemczM3J6dE0wdFBDNzJscnNqaGlSRFVpdzltbnNPeDdmNmhLRG1aZ2hLSQpFeThuQlZXOGU5Ui9HbXNUL2tTQUN0T0R2TzVnM1lIdDdON1l6Z1FUeG1XREo4UEFuN0U4MDhlVnRhZzB5OTlFCitabnl4cDNyaXNOWWdNV1hUVE5yQzlFQ2dZQXM0VXUycVZRL0llSU9jR3ArNVJ1NWM1TlJ6b3lmekNGaTIvOEkKdVJnRXNyVTh4NlFoenBKR3pXMjd3L0srZngvN05aZmhGVm12RVVDTjJDN1ZETDk4N0JxanRpMVppQ3NvVjlLTgp0UlcwQlkrZ2w0L1JRdzJwVUFEWnN1bUVjYW1xbFdQVDVkUHFIRXZwbXI1ZjE3Zkh3dGtyOG5ZUC9XSlF1d3ZRCk5
UYWJjd0tCZ1FDQ3Nma0hzajIweTNoU0pjOE9kcHNwMHRJZUxaMTlxTU55eHlib1JwZ0tSbWVvdVR5TzNSeVEKbGFncms0YVY1WWE1L3V6T1h6Y2RpN3lBQVZsWGZGNXFwd1FvYVFCcUpuWWNHY2dWTFBTV2haS01sbHJzT042WApYYWZabXJRdzBwZVVQTThCdTg4TXU5VDNraGlLSzhIa0hiRDlDaG1PSGlPa3duUTlieUFHUGc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=" | \
> base64 -d | \
> openssl rsa -noout -text
โ Output
Private-Key: (2048 bit)
modulus:
00:cb:6a:1c:ae:0a:6d:4e:13:cb:db:55:db:87:f6:
46:df:12:1d:97:72:a8:53:fc:5a:4f:e5:3b:a3:c4:
0f:00:e0:59:3b:fb:de:a1:b5:5f:a9:c0:16:1b:29:
9e:69:5e:6f:fc:16:2e:1a:a9:78:a4:ee:66:a6:77:
a5:f1:35:d3:dc:83:cb:86:3d:e3:7a:fb:a4:13:b6:
d0:b5:17:4e:21:02:46:c8:0a:3a:66:76:74:0a:08:
20:ef:83:a1:14:d7:11:41:79:f4:55:a8:a1:3f:e9:
36:f3:19:d0:07:d7:cf:14:8d:5d:1f:22:f1:b4:47:
2b:35:39:ef:fc:d1:fb:4e:14:82:2a:1f:aa:6c:06:
e5:02:41:25:e0:03:77:42:9e:94:e2:6c:75:08:d8:
ee:54:13:6c:f9:aa:01:1b:81:18:ea:05:ce:43:fb:
40:72:af:0f:c0:64:de:d1:45:9c:59:ea:e6:54:bc:
bf:6a:ce:79:5e:df:3b:0c:61:22:ad:1c:63:63:0c:
09:26:df:d0:d0:43:16:ce:ae:17:8e:4d:60:a4:74:
50:b2:a9:6e:c8:9c:0c:29:3b:84:75:5b:b8:bd:d4:
77:65:5e:4e:bc:81:2c:e2:cc:9e:df:a9:7b:0f:6e:
39:23:7e:3a:39:2e:c3:64:17:46:50:66:b1:1e:a5:
0d:b7
publicExponent: 65537 (0x10001)
privateExponent:
41:5c:f8:98:48:4b:75:52:51:28:4a:bf:7f:41:f8:
e6:07:49:30:f8:2b:f4:c8:db:30:5f:25:b9:ea:ef:
33:45:e4:06:ad:59:8d:b6:dd:65:8b:8e:ce:ed:a1:
b4:94:1f:01:ef:12:a3:88:9f:76:c8:10:7e:1f:66:
58:4b:06:a8:0e:45:a0:fe:54:11:99:b8:d0:88:e7:
29:b9:b2:66:52:f7:f1:a1:af:c0:6b:01:c0:93:7a:
19:dc:bf:2f:16:28:ba:54:78:d9:e9:94:df:17:8e:
8b:b1:58:b2:9c:7b:33:4c:75:86:e2:d0:ae:30:4c:
47:0f:35:f9:94:73:df:53:8e:31:17:06:85:3f:a7:
c4:af:fe:b5:fd:34:f8:80:a8:66:ba:11:c8:26:7e:
04:fe:66:83:70:25:7c:b8:95:e4:00:70:a5:33:61:
bf:18:99:23:32:29:63:0e:af:a5:c8:d8:bd:43:0d:
bd:aa:e3:9e:c0:3f:f9:28:3a:e4:dd:25:b9:42:ed:
9d:29:2e:0f:b4:b7:24:0f:53:ac:37:2f:a1:d5:13:
84:b7:4c:e2:75:14:b5:99:23:6d:4d:47:d7:9a:bb:
6e:b8:0f:c0:f9:3f:eb:f2:1b:4e:44:7c:aa:43:03:
c6:6f:94:78:2a:e2:eb:99:22:f9:72:2f:ad:01:a4:
21
prime1:
00:fe:b6:ad:c7:22:64:42:6d:2d:f6:61:ec:0b:57:
a5:c7:75:96:b2:87:b0:11:18:a4:98:49:6a:0a:8f:
97:a3:4d:81:72:3f:f5:90:ed:e1:3c:55:a6:cd:67:
b9:0f:a3:51:c4:d5:81:5a:49:ee:29:15:f3:d4:9a:
48:be:2c:59:99:8d:14:cc:ea:6b:bc:de:57:8b:43:
8e:ab:b3:d8:76:98:b1:cc:f0:a4:43:54:b5:a6:17:
3b:5d:97:89:4d:a9:af:58:cf:30:1e:b0:fc:b3:3a:
35:aa:49:fc:16:57:99:70:08:2f:68:34:52:42:1d:
c0:65:d7:d4:fb:5f:2b:e7:91
prime2:
00:cc:71:1b:b4:49:b7:df:39:de:72:50:e8:4d:db:
da:9b:78:5a:7d:0d:83:60:cb:4c:43:a4:25:73:77:
6d:7b:57:8f:f5:fa:db:23:9e:34:df:45:99:eb:91:
27:47:67:83:e9:64:ed:a1:89:5d:43:d4:38:6d:0d:
c4:45:2c:cb:f2:78:2b:fb:ac:3e:d7:b9:55:cb:10:
1e:8f:bb:72:98:1f:cf:24:0d:be:7f:6b:5d:de:78:
6e:7e:94:c7:35:59:61:63:dc:2b:a7:46:16:2b:87:
09:b2:25:54:c3:33:38:c5:58:4a:b6:73:96:c6:76:
70:f5:14:bd:50:2d:f5:4c:c7
exponent1:
00:b2:7d:87:17:45:85:66:7b:25:bb:42:f1:6b:1d:
57:61:87:88:3b:c1:13:e3:57:16:78:54:f1:c7:46:
ca:67:38:37:de:bc:ed:33:4b:4f:0b:bd:a5:ae:c8:
e1:89:10:d4:8b:0f:66:9e:c3:b1:ed:fe:a1:28:39:
99:82:12:88:13:2f:27:05:55:bc:7b:d4:7f:1a:6b:
13:fe:44:80:0a:d3:83:bc:ee:60:dd:81:ed:ec:de:
d8:ce:04:13:c6:65:83:27:c3:c0:9f:b1:3c:d3:c7:
95:b5:a8:34:cb:df:44:f9:99:f2:c6:9d:eb:8a:c3:
58:80:c5:97:4d:33:6b:0b:d1
exponent2:
2c:e1:4b:b6:a9:54:3f:21:e2:0e:70:6a:7e:e5:1b:
b9:73:93:51:ce:8c:9f:cc:21:62:db:ff:08:b9:18:
04:b2:b5:3c:c7:a4:21:ce:92:46:cd:6d:bb:c3:f2:
be:7f:1f:fb:35:97:e1:15:59:af:11:40:8d:d8:2e:
d5:0c:bf:7c:ec:1a:a3:b6:2d:59:88:2b:28:57:d2:
8d:b5:15:b4:05:8f:a0:97:8f:d1:43:0d:a9:50:00:
d9:b2:e9:84:71:a9:aa:95:63:d3:e5:d3:ea:1c:4b:
e9:9a:be:5f:d7:b7:c7:c2:d9:2b:f2:76:0f:fd:62:
50:bb:0b:d0:35:36:9b:73
coefficient:
00:82:b1:f9:07:b2:3d:b4:cb:78:52:25:cf:0e:76:
9b:29:d2:d2:1e:2d:9d:7d:a8:c3:72:c7:26:e8:46:
98:0a:46:67:a8:b9:3c:8e:dd:1c:90:95:a8:2b:93:
86:95:e5:86:b9:fe:ec:ce:5f:37:1d:8b:bc:80:01:
59:57:7c:5e:6a:a7:04:28:69:00:6a:26:76:1c:19:
c8:15:2c:f4:96:85:92:8c:96:5a:ec:38:de:97:5d:
a7:d9:9a:b4:30:d2:97:94:3c:cf:01:bb:cf:0c:bb:
d4:f7:92:18:8a:2b:c1:e4:1d:b0:fd:0a:19:8e:1e:
23:a4:c2:74:3d:6f:20:06:3e
๐ Certificate setup for a new administrator
- https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
- https://kubernetes.io/docs/concepts/security/controlling-access/
1. Generate a private key for the subordinate certificate
(kind-myk8s:N/A) [root@operator-host-2 ~]# openssl genrsa -out gagajin.key 2048
# Result
Generating RSA private key, 2048 bit long modulus
....................................................................................................+++
.................................................+++
e is 65537 (0x10001)
2. Check the private key
(kind-myk8s:N/A) [root@operator-host-2 ~]# cat gagajin.key
โ Output
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAxXbx7097GUhsD00itgX1kjU0y6lqq1KGqL2jwrk0uUTiE7cV
FoXI5asm6bB5zTDEfGR11t+YbRai8+O6U4rJRGQi4dbCZ1nZigqBH9p8pmcGZMMv
AtSBPOsCZN67dY+TQUUbyckBVdWdHrE0PpgCmI0fWc3SAWjJKwjs8FgRj8pjUwOW
X6s8TX64bdMU9tS+Ba+iuwzRkFH+OZtII5khxxSoAWBSQhYHSGrSDmtr5TNtvJKt
KA3+GUMgFVxAEQrrW9FA1fwWvd7NoVlSlC0AVlH/g7Unwz9TFRZ2pcOSEjDbjbLo
aMzU90VYg8GhI9dYwe+IQ8QUNo9fOEPPwbh1MQIDAQABAoIBAQC/FgexibyaDtdj
R0Xb15B42DwrCdtLxyTAjiO2//rhfqM4aOdjUDvE5QdXBYwY4KSDq8PNF7stdcZj
NXDN/3QdVrnayjR+RxiY2Olzpb8SWIh7YdZQasxV4yYB7viBH1tkwjfN4VIFeh39
4YWpPPjmH0rDYMVkz748gvqm6tKzzXc5zcWhxeTWID0gmY3UNNlW0QI5wEX+nY0g
A53kcEqjBkLmVWuSzG2/JKo7QbvJ6Y0Uz/JiCuS/1VT2H67JGAy2QRxEckOh+phr
YHvbuECGlgPK6F4mWh+RDEHW4p+cCBcPlScdofwpMPp4UbnJaXTXOEXFe6SdpV/6
iIFEpP1xAoGBAOkOzFe2qEnozHq5w0Vp32k24IgYG0l8YzHR5wWpiJYFROzqe7Pj
b4E0R9zAXhSiUOdX4VzswOaJMmOu70sWQr0sshvGfuXnGIgw4Zcsm57h7pPTxMrH
66GFzKFVdHxl049o9rPyRZwfzLmJjxdcsyLPWC3x0/+z/vlcNXxMlQ9/AoGBANjn
LThdbraJ8QVdjhj1+rfJQua7JyNtGDshWgKaCqwiGsFJhtEiBWGW9eNmIF12f892
SbjJh2rd5f5PxQwx5OGb4D6hR7a9TYDRSye2mo0Bh3+C37ZK0G6vWxDdZMZ3AIY5
IIsgD2i4B5kcaSL2KaB2UxkpnV7DAxxpCzHFjdNPAoGAIAT0diiWPnFJhqL2/RZq
p13uw0Psm9AHINUh1FlSdqoKqjIdBL3+l9XC+cVEJ7mVO/OK9uVgK0w2LBPgtIQ+
bxcw8Tf4P0XczPlKRSbPyqhnys+RffqxmON1FcVT17N1uYJGQrrKbYTA78zCaAdI
ZUPvbYCIC92C7meIwacT46kCgYEAgXuH6DUGiZvRMQXHdSkqcZqJAJpK5AAVTf87
73+rzVRSqn5NJ/1qPvbSdNybh4/c/qk7mz9bQrWSvf06wWvrma7m8BxxZiqd4L+Q
YPXGT1TRYZJsIDOLN/ggofG4Xi3eN0JVJhiOelIZ3xIxxTg0Y2EffE72bgJ2kfg3
QZAQeUsCgYEAo/RiDAMbyvWHIva1gdFlBp+4jVH8vXsw9DqpgfnSOlk+1M00q8oH
92aNNpGOjqqgZD1eoRoYVg64KoV9MJtNawTVzhPmH9ZBupXrqhSlfTSel8QHvaVS
0vUvkTWLt7CcgPMHdK0Z94PeC7Q0NsR9mCaadAmpgb7KRQLZn7a2G2Q=
-----END RSA PRIVATE KEY-----
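Beyond dumping the PEM, openssl can also verify the key's internal consistency (primes, exponents). A sketch that regenerates a 2048-bit key locally, as in step 1:

```shell
# Generate a fresh 2048-bit RSA key and validate its structure
openssl genrsa -out gagajin-demo.key 2048 2>/dev/null
openssl rsa -in gagajin-demo.key -check -noout   # prints "RSA key ok" on success
```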
3. Create a certificate signing request (.csr) file
(kind-myk8s:N/A) [root@operator-host-2 ~]# openssl req -new -key gagajin.key -out gagajin.csr -subj "/O=kubeadm:cluster-admins/CN=gagajin-cert"
- Using the private key, the `openssl req` command generates the CSR file (gagajin.csr); the subject (`-subj`) is set to /O=kubeadm:cluster-admins/CN=gagajin-cert
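Before submitting, the subject embedded in the CSR can be read back to confirm the O/CN values. A local sketch that regenerates the pair with the same commands as above:

```shell
# Recreate the key and CSR exactly as in step 3
openssl genrsa -out gagajin.key 2048 2>/dev/null
openssl req -new -key gagajin.key -out gagajin.csr \
  -subj "/O=kubeadm:cluster-admins/CN=gagajin-cert"

# Read the subject back out of the CSR
openssl req -in gagajin.csr -noout -subject
```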
4. Check the CSR file
(kind-myk8s:N/A) [root@operator-host-2 ~]# cat gagajin.csr
โ Output
-----BEGIN CERTIFICATE REQUEST-----
MIICfTCCAWUCAQAwODEfMB0GA1UECgwWa3ViZWFkbTpjbHVzdGVyLWFkbWluczEV
MBMGA1UEAwwMZ2FnYWppbi1jZXJ0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAxXbx7097GUhsD00itgX1kjU0y6lqq1KGqL2jwrk0uUTiE7cVFoXI5asm
6bB5zTDEfGR11t+YbRai8+O6U4rJRGQi4dbCZ1nZigqBH9p8pmcGZMMvAtSBPOsC
ZN67dY+TQUUbyckBVdWdHrE0PpgCmI0fWc3SAWjJKwjs8FgRj8pjUwOWX6s8TX64
bdMU9tS+Ba+iuwzRkFH+OZtII5khxxSoAWBSQhYHSGrSDmtr5TNtvJKtKA3+GUMg
FVxAEQrrW9FA1fwWvd7NoVlSlC0AVlH/g7Unwz9TFRZ2pcOSEjDbjbLoaMzU90VY
g8GhI9dYwe+IQ8QUNo9fOEPPwbh1MQIDAQABoAAwDQYJKoZIhvcNAQELBQADggEB
AFwwLS9BRDptjyxC+0pRFV47JFT3PBY8y/Z0A6QmhakuXjiJHL4NulAG7eoH2i7+
bfK63Yx17Y1ickJ3CQetBHKXwx09HsWoWrnjMkyhESRdJT1A7CsJdOw6YVlDyaIl
ThzKNJ8DZEoTbV+iJGjlvtKCqO2+uLgjdbImuOMGGNB0oWoxZGmZ1r49Ev+RwOIi
zRIAPaQ5ElHqvazdEpEZ8nIa0ktKoV/GOL07RRi/BrsnMMOYwx+lvq2hVSJNRVsF
fS/8SmgpnZ/ZrCuMLLaC/U8jzXJzf1sjVCnpaWoHP3lkQnOnLERGSmf4Rtzuhprw
PGtpK74SumXtq1aixUsgv/Q=
-----END CERTIFICATE REQUEST-----
5. Base64-encode the CSR file
(kind-myk8s:N/A) [root@operator-host-2 ~]# cat gagajin.csr | base64 | tr -d '\n'
✅ Output
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd09ERWZNQjBHQTFVRUNnd1dhM1ZpWldGa2JUcGpiSFZ6ZEdWeUxXRmtiV2x1Y3pFVgpNQk1HQTFVRUF3d01aMkZuWVdwcGJpMWpaWEowTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCCkNnS0NBUUVBeFhieDcwOTdHVWhzRDAwaXRnWDFralUweTZscXExS0dxTDJqd3JrMHVVVGlFN2NWRm9YSTVhc20KNmJCNXpUREVmR1IxMXQrWWJSYWk4K082VTRySlJHUWk0ZGJDWjFuWmlncUJIOXA4cG1jR1pNTXZBdFNCUE9zQwpaTjY3ZFkrVFFVVWJ5Y2tCVmRXZEhyRTBQcGdDbUkwZldjM1NBV2pKS3dqczhGZ1JqOHBqVXdPV1g2czhUWDY0CmJkTVU5dFMrQmEraXV3elJrRkgrT1p0SUk1a2h4eFNvQVdCU1FoWUhTR3JTRG10cjVUTnR2Skt0S0EzK0dVTWcKRlZ4QUVRcnJXOUZBMWZ3V3ZkN05vVmxTbEMwQVZsSC9nN1Vud3o5VEZSWjJwY09TRWpEYmpiTG9hTXpVOTBWWQpnOEdoSTlkWXdlK0lROFFVTm85Zk9FUFB3YmgxTVFJREFRQUJvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFGd3dMUzlCUkRwdGp5eEMrMHBSRlY0N0pGVDNQQlk4eS9aMEE2UW1oYWt1WGppSkhMNE51bEFHN2VvSDJpNysKYmZLNjNZeDE3WTFpY2tKM0NRZXRCSEtYd3gwOUhzV29Xcm5qTWt5aEVTUmRKVDFBN0NzSmRPdzZZVmxEeWFJbApUaHpLTko4RFpFb1RiVitpSkdqbHZ0S0NxTzIrdUxnamRiSW11T01HR05CMG9Xb3haR21aMXI0OUV2K1J3T0lpCnpSSUFQYVE1RWxIcXZhemRFcEVaOG5JYTBrdEtvVi9HT0wwN1JSaS9CcnNuTU1PWXd4K2x2cTJoVlNKTlJWc0YKZlMvOFNtZ3BuWi9ackN1TUxMYUMvVThqelhKemYxc2pWQ25wYVdvSFAzbGtRbk9uTEVSR1NtZjRSdHp1aHBydwpQR3RwSzc0U3VtWHRxMWFpeFVzZ3YvUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
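The key, CSR, and encoding steps above can be collected into one script; a minimal openssl sketch reusing the file names from the walkthrough:

```shell
# Generate a 2048-bit RSA private key, as in the earlier key-generation step.
openssl genrsa -out gagajin.key 2048

# CSR whose subject carries the Kubernetes identity:
# O= becomes the user's group, CN= the username.
openssl req -new -key gagajin.key -out gagajin.csr \
  -subj "/O=kubeadm:cluster-admins/CN=gagajin-cert"

# Single-line base64 -- the value that goes into the CSR manifest's spec.request.
REQUEST=$(base64 < gagajin.csr | tr -d '\n')
```

After this, `$REQUEST` can be substituted into the CertificateSigningRequest manifest instead of pasting the blob by hand.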
6. Create and submit the CertificateSigningRequest
(kind-myk8s:N/A) [root@operator-host-2 ~]# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
>   name: gagajin-csr
> spec:
>   signerName: kubernetes.io/kube-apiserver-client
>   groups:
>   - system:masters
>   - system:authenticated
>   request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd09ERWZNQjBHQTFVRUNnd1dhM1ZpWldGa2JUcGpiSFZ6ZEdWeUxXRmtiV2x1Y3pFVgpNQk1HQTFVRUF3d01aMkZuWVdwcGJpMWpaWEowTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCCkNnS0NBUUVBeFhieDcwOTdHVWhzRDAwaXRnWDFralUweTZscXExS0dxTDJqd3JrMHVVVGlFN2NWRm9YSTVhc20KNmJCNXpUREVmR1IxMXQrWWJSYWk4K082VTRySlJHUWk0ZGJDWjFuWmlncUJIOXA4cG1jR1pNTXZBdFNCUE9zQwpaTjY3ZFkrVFFVVWJ5Y2tCVmRXZEhyRTBQcGdDbUkwZldjM1NBV2pKS3dqczhGZ1JqOHBqVXdPV1g2czhUWDY0CmJkTVU5dFMrQmEraXV3elJrRkgrT1p0SUk1a2h4eFNvQVdCU1FoWUhTR3JTRG10cjVUTnR2Skt0S0EzK0dVTWcKRlZ4QUVRcnJXOUZBMWZ3V3ZkN05vVmxTbEMwQVZsSC9nN1Vud3o5VEZSWjJwY09TRWpEYmpiTG9hTXpVOTBWWQpnOEdoSTlkWXdlK0lROFFVTm85Zk9FUFB3YmgxTVFJREFRQUJvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFGd3dMUzlCUkRwdGp5eEMrMHBSRlY0N0pGVDNQQlk4eS9aMEE2UW1oYWt1WGppSkhMNE51bEFHN2VvSDJpNysKYmZLNjNZeDE3WTFpY2tKM0NRZXRCSEtYd3gwOUhzV29Xcm5qTWt5aEVTUmRKVDFBN0NzSmRPdzZZVmxEeWFJbApUaHpLTko4RFpFb1RiVitpSkdqbHZ0S0NxTzIrdUxnamRiSW11T01HR05CMG9Xb3haR21aMXI0OUV2K1J3T0lpCnpSSUFQYVE1RWxIcXZhemRFcEVaOG5JYTBrdEtvVi9HT0wwN1JSaS9CcnNuTU1PWXd4K2x2cTJoVlNKTlJWc0YKZlMvOFNtZ3BuWi9ackN1TUxMYUMvVThqelhKemYxc2pWQ25wYVdvSFAzbGtRbk9uTEVSR1NtZjRSdHp1aHBydwpQR3RwSzc0U3VtWHRxMWFpeFVzZ3YvUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
>   usages:
>   - digital signature
>   - key encipherment
>   - client auth
> EOF
# Result
certificatesigningrequest.certificates.k8s.io/gagajin-csr created
7. Check the CSR status (Pending)
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl get csr
✅ Output
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
gagajin-csr 56s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Pending
8. Approve the CSR
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl certificate approve gagajin-csr
# Result
certificatesigningrequest.certificates.k8s.io/gagajin-csr approved
9. Check the CSR status (Approved, Issued)
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl get csr
✅ Output
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
gagajin-csr 2m44s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued
10. Extract the issued certificate from the CSR
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl get csr gagajin-csr -o jsonpath='{.status.certificate}' | base64 -d
✅ Output
-----BEGIN CERTIFICATE-----
MIIDLjCCAhagAwIBAgIRAKprCyvCDglrAh+AIEmPz1swDQYJKoZIhvcNAQELBQAw
FTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0yNTAzMTQyMjU4NTBaFw0yNjAzMTQy
MjU4NTBaMDgxHzAdBgNVBAoTFmt1YmVhZG06Y2x1c3Rlci1hZG1pbnMxFTATBgNV
BAMTDGdhZ2FqaW4tY2VydDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMV28e9PexlIbA9NIrYF9ZI1NMupaqtShqi9o8K5NLlE4hO3FRaFyOWrJumwec0w
xHxkddbfmG0WovPjulOKyURkIuHWwmdZ2YoKgR/afKZnBmTDLwLUgTzrAmTeu3WP
k0FFG8nJAVXVnR6xND6YApiNH1nN0gFoySsI7PBYEY/KY1MDll+rPE1+uG3TFPbU
vgWvorsM0ZBR/jmbSCOZIccUqAFgUkIWB0hq0g5ra+UzbbySrSgN/hlDIBVcQBEK
61vRQNX8Fr3ezaFZUpQtAFZR/4O1J8M/UxUWdqXDkhIw242y6GjM1PdFWIPBoSPX
WMHviEPEFDaPXzhDz8G4dTECAwEAAaNWMFQwDgYDVR0PAQH/BAQDAgWgMBMGA1Ud
JQQMMAoGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAUWGu0eBos
3lOcSYEQF6MJJilUwWgwDQYJKoZIhvcNAQELBQADggEBAI1n+3i7BdAMjF3rD0Qw
3w/RTyqSp+hR8kjsn/vbPMJ/e5g5F19sZvwyQIYyQRoD1EjgIT2AvkKBB9pIVLQ+
kAiI3D3rX5Zh2J2FBhiFHW6WRLbcAooefUp9XBOwRHcYncsigDXjScIYv/5dLGAb
rRxgoMdo8YH+apsjxALFa90WEQLIFPnVzYuXE0kfka/R0rL9VO01QdDYtSS31tgy
66EVVzpN33SmlQ5m1kEoPyIHpeos3rs2i0fhJdJTFk4MbST//Gb3wXlelcl34ZUq
YDjXkSuCxZh+2EBdZbNCiXeu22MyGAENUuJAX9oM+jfjA5EQaeaPnsiNxTRIXrY1
/nk=
-----END CERTIFICATE-----
11. Save the certificate as a .crt file
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl get csr gagajin-csr -o jsonpath='{.status.certificate}' | base64 -d > gagajin.crt
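What the kubernetes.io/kube-apiserver-client signer does to the approved CSR can be mimicked locally with openssl; a sketch using a throwaway CA in place of the cluster CA (all file names here are stand-ins, not cluster files):

```shell
# Throwaway CA standing in for the cluster CA (CN=kubernetes, matching the issuer above).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=kubernetes"

# User key + CSR with the same subject as the walkthrough.
openssl genrsa -out demo.key 2048
openssl req -new -key demo.key -out demo.csr \
  -subj "/O=kubeadm:cluster-admins/CN=gagajin-cert"

# Sign the CSR with the CA key -- the step the approved CertificateSigningRequest triggers.
openssl x509 -req -in demo.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out demo.crt

# The issued certificate chains back to the CA.
openssl verify -CAfile ca.crt demo.crt
# → demo.crt: OK
```

The same `openssl verify -CAfile` check works against a real cluster's CA bundle if you have it on disk.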
12. Inspect the issued certificate
The certificate contains the subject CN, the validity period (one year by default), and the usual X.509 fields.
(kind-myk8s:N/A) [root@operator-host-2 ~]# openssl x509 -in gagajin.crt -noout -text
✅ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
aa:6b:0b:2b:c2:0e:09:6b:02:1f:80:20:49:8f:cf:5b
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Mar 14 22:58:50 2025 GMT
Not After : Mar 14 22:58:50 2026 GMT
Subject: O=kubeadm:cluster-admins, CN=gagajin-cert
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:c5:76:f1:ef:4f:7b:19:48:6c:0f:4d:22:b6:05:
f5:92:35:34:cb:a9:6a:ab:52:86:a8:bd:a3:c2:b9:
34:b9:44:e2:13:b7:15:16:85:c8:e5:ab:26:e9:b0:
79:cd:30:c4:7c:64:75:d6:df:98:6d:16:a2:f3:e3:
ba:53:8a:c9:44:64:22:e1:d6:c2:67:59:d9:8a:0a:
81:1f:da:7c:a6:67:06:64:c3:2f:02:d4:81:3c:eb:
02:64:de:bb:75:8f:93:41:45:1b:c9:c9:01:55:d5:
9d:1e:b1:34:3e:98:02:98:8d:1f:59:cd:d2:01:68:
c9:2b:08:ec:f0:58:11:8f:ca:63:53:03:96:5f:ab:
3c:4d:7e:b8:6d:d3:14:f6:d4:be:05:af:a2:bb:0c:
d1:90:51:fe:39:9b:48:23:99:21:c7:14:a8:01:60:
52:42:16:07:48:6a:d2:0e:6b:6b:e5:33:6d:bc:92:
ad:28:0d:fe:19:43:20:15:5c:40:11:0a:eb:5b:d1:
40:d5:fc:16:bd:de:cd:a1:59:52:94:2d:00:56:51:
ff:83:b5:27:c3:3f:53:15:16:76:a5:c3:92:12:30:
db:8d:b2:e8:68:cc:d4:f7:45:58:83:c1:a1:23:d7:
58:c1:ef:88:43:c4:14:36:8f:5f:38:43:cf:c1:b8:
75:31
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
keyid:58:6B:B4:78:1A:2C:DE:53:9C:49:81:10:17:A3:09:26:29:54:C1:68
Signature Algorithm: sha256WithRSAEncryption
8d:67:fb:78:bb:05:d0:0c:8c:5d:eb:0f:44:30:df:0f:d1:4f:
2a:92:a7:e8:51:f2:48:ec:9f:fb:db:3c:c2:7f:7b:98:39:17:
5f:6c:66:fc:32:40:86:32:41:1a:03:d4:48:e0:21:3d:80:be:
42:81:07:da:48:54:b4:3e:90:08:88:dc:3d:eb:5f:96:61:d8:
9d:85:06:18:85:1d:6e:96:44:b6:dc:02:8a:1e:7d:4a:7d:5c:
13:b0:44:77:18:9d:cb:22:80:35:e3:49:c2:18:bf:fe:5d:2c:
60:1b:ad:1c:60:a0:c7:68:f1:81:fe:6a:9b:23:c4:02:c5:6b:
dd:16:11:02:c8:14:f9:d5:cd:8b:97:13:49:1f:91:af:d1:d2:
b2:fd:54:ed:35:41:d0:d8:b5:24:b7:d6:d8:32:eb:a1:15:57:
3a:4d:df:74:a6:95:0e:66:d6:41:28:3f:22:07:a5:ea:2c:de:
bb:36:8b:47:e1:25:d2:53:16:4e:0c:6d:24:ff:fc:66:f7:c1:
79:5e:95:c9:77:e1:95:2a:60:38:d7:91:2b:82:c5:98:7e:d8:
40:5d:65:b3:42:89:77:ae:db:63:32:18:01:0d:52:e2:40:5f:
da:0c:fa:37:e3:03:91:10:69:e6:8f:9e:c8:8d:c5:34:48:5e:
b6:35:fe:79
13. Register the user credentials
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl config set-credentials gagajin-user --client-certificate=gagajin.crt --client-key=gagajin.key
# Result
User "gagajin-user" set.
14. Create a new context
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl config set-context kind-gagajin --cluster=kind-myk8s --user=gagajin-user
# Result
Context "kind-gagajin" created.
15. Check the kubeconfig file
(kind-myk8s:N/A) [root@operator-host-2 ~]# cat ~/.kube/config
✅ Output
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJRFNuWlpQTWFRQ3N3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhORFUwTWpKYUZ3MHpOVEF6TVRJeE5EVTVNakphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUN1TDVXTTVQU2F0cHZvaitwWVlZNk1GcEFFZjdVU1Bib05KNTRXdkg2dVIxSXpGVEpYUy9uVDY5eTQKK29NaEZkODQ1VWJrOTl2R2N1NVJMYzdjRmIybXA2UVpBQmFqNXJJV2tmZFJTSFV4RzVtSDlDZUQ1c0JBZ1QrNwp2RVlIZVh3cW1ZNjlaQ09rbmc1aktWM0g4eHlCc3VkOENvYUpHS0k2Um9wSnBBU0RJK0lWRzREMTFtcUFKWkJpCnJzaCsvb0NXMk1RSUhxNEJ5K0NGYnI2elNzcStzdVkwS29XekJmNnNycllYbHJldUdLQi9Dd1R1cnV1R2U5TW0KeHRmMitWYXBPbXdxNURxdVlIL3BRS25WODhnaTBmejlIRCtFVyt6eER3YkVMeXZITGZ6R2cyYUNRbHZUSWtkWQo1VE42dS9wTEhqZGhXbFVKREJ5U3cza2pVRnNuQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSWWE3UjRHaXplVTV4SmdSQVhvd2ttS1ZUQmFEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUxjTXhWNVk2YwoxVmtlWFpIU1JrMWZ3UlQwd0YyVjNQM2NsYlBQNDg1VFBsYXpUZjkzTGtITzlObFdYck9BNU5EZm1ZNUYwQ3J0CmhTWVN1U1lseVA2b21oOEZzb2ZJZnVjRW9iaXhHYWdoZVFiaXJoT3pKNkMvWHU2WmZNazdTYTVDcCtVQldvdWIKaDdLL1ByUkRORENubkxPaWs0K3c0U0JuaXd3U1RjdFBYNnBhYk1MYk1iN0FBb21jTm5VQUNWL0hhUDZ6WG5ZcApmMlhLdklZZ2xwbUkySUdjTE5jWFdnamZkUWsvT1RRU1VUUzhudHhkZWJEcHNFOWRWVy9OQjZWOFpod2s2Q2hSCllCUk03aHpnSnRtSWRzTzVNOExiWFJIWnVmZlJaTFlJcS95THB5ZUVGdE9uUlNMQ1ZhQzNKOWxQYTZFeTREMUYKamw3Rms4T29rM3Q2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://127.0.0.1:44471
name: kind-myk8s
contexts:
- context:
cluster: kind-myk8s
user: gagajin-user
name: kind-gagajin
- context:
cluster: kind-myk8s
user: kind-myk8s
name: kind-myk8s
current-context: kind-myk8s
kind: Config
preferences: {}
users:
- name: gagajin-user
user:
client-certificate: /root/gagajin.crt
client-key: /root/gagajin.key
- name: kind-myk8s
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJYnZzbUlPMGQ0RXd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhORFUwTWpKYUZ3MHlOakF6TVRReE5EVTVNakphTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFETGFoeXUKQ20xT0U4dmJWZHVIOWtiZkVoMlhjcWhUL0ZwUDVUdWp4QThBNEZrNys5Nmh0Vitwd0JZYktaNXBYbS84Rmk0YQpxWGlrN21hbWQ2WHhOZFBjZzh1R1BlTjYrNlFUdHRDMUYwNGhBa2JJQ2pwbWRuUUtDQ0R2ZzZFVTF4RkJlZlJWCnFLRS82VGJ6R2RBSDE4OFVqVjBmSXZHMFJ5czFPZS84MGZ0T0ZJSXFINnBzQnVVQ1FTWGdBM2RDbnBUaWJIVUkKMk81VUUyejVxZ0ViZ1JqcUJjNUQrMEJ5cncvQVpON1JSWnhaNnVaVXZMOXF6bmxlM3pzTVlTS3RIR05qREFrbQozOURRUXhiT3JoZU9UV0NrZEZDeXFXN0luQXdwTzRSMVc3aTkxSGRsWGs2OGdTeml6SjdmcVhzUGJqa2pmam81CkxzTmtGMFpRWnJFZXBRMjNBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkZocnRIZ2FMTjVUbkVtQgpFQmVqQ1NZcFZNRm9NQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUFwSFU3dDhER1NwUHZJSnQvdEdhNEMzL09uCkpxb1ZNRnJNaEpWdjVwNzFHRHNRSXgwUThGSk1GdzNlbEkzY2F0UlZXWjE4a1UxaFZ2d1VhNVl1YUhRWTVhQ3kKeVJKY2RMMUtrYzNxMHE2bmJ5dS9vT2p4YnJtRzgyckN1Qk5McWN1MlB6YXhYeVNFeVNqZEN6Yy9QMmxhZllJbAptTUdTN0hZbDF0dThWUTBkM0ZMemd0RW1SdjRjeHhvWU95bkFWcGNTV1pPN3k2RUVkSmM4TExDTWU3SlhyYTIzCkJwdkVlcXdsQkNwRTVGdjZ3UUZYMk1uTExIWk53Z1I0RGtzblN0Rm1rY2l3UHBiMk4ycDNNay9sbzZMR2w3dnkKUW0vM01qaHhKT0k3ejVibDRzSm8vaG12OVc1WXRJRURKWHJ2Z1hENjhwRVhjbFdOL0FBTVltcjdrallWCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeTJvY3JncHRUaFBMMjFYYmgvWkczeElkbDNLb1UveGFUK1U3bzhRUEFPQlpPL3ZlCm9iVmZxY0FXR3ltZWFWNXYvQll1R3FsNHBPNW1wbmVsOFRYVDNJUExoajNqZXZ1a0U3YlF0UmRPSVFKR3lBbzYKWm5aMENnZ2c3NE9oRk5jUlFYbjBWYWloUCtrMjh4blFCOWZQRkkxZEh5THh0RWNyTlRudi9OSDdUaFNDS2grcQpiQWJsQWtFbDRBTjNRcDZVNG14MUNOanVWQk5zK2FvQkc0RVk2Z1hPUS90QWNxOFB3R1RlMFVXY1dlcm1WTHkvCmFzNTVYdDg3REdFaXJSeGpZd3dKSnQvUTBFTVd6cTRYamsxZ3BIUlFzcWx1eUp3TUtUdUVkVnU0dmRSM1pWNU8KdklFczRzeWUzNmw3RDI0NUkzNDZPUzdEWkJkR1VHYXhIcVVOdHdJREFRQUJBb0lCQUVGYytKaElTM1ZTVVNoSwp2MzlCK09ZSFNURDRLL1RJMnpCZkpibnE3ek5GNUFhdFdZMjIzV1dManM3dG9iU1VId0h2RXFPSW4zYklFSDRmClpsaExCcWdPUmFEK1ZCR1p1TkNJNXltNXNtWlM5L0docjhCckFjQ1RlaG5jdnk4V0tMcFVlTm5wbE44WGpvdXgKV0xLY2V6Tk1kWWJpMEs0d1RFY1BOZm1VYzk5VGpqRVhCb1UvcDhTdi9yWDlOUGlBcUdhNkVjZ21mZ1QrWm9OdwpKWHk0bGVRQWNLVXpZYjhZbVNNeUtXTU9yNlhJMkwxRERiMnE0NTdBUC9rb091VGRKYmxDN1owcExnKzB0eVFQClU2dzNMNkhWRTRTM1RPSjFGTFdaSTIxTlI5ZWF1MjY0RDhENVArdnlHMDVFZktwREE4WnZsSGdxNHV1Wkl2bHkKTDYwQnBDRUNnWUVBL3JhdHh5SmtRbTB0OW1Ic0MxZWx4M1dXc29ld0VSaWttRWxxQ28rWG8wMkJjai8xa08zaApQRldteldlNUQ2TlJ4TldCV2tudUtSWHoxSnBJdml4Wm1ZMFV6T3Bydk41WGkwT09xN1BZZHBpeHpQQ2tRMVMxCnBoYzdYWmVKVGFtdldNOHdIckQ4c3pvMXFrbjhGbGVaY0FndmFEUlNRaDNBWmRmVSsxOHI1NUVDZ1lFQXpIRWIKdEVtMzN6bmVjbERvVGR2YW0zaGFmUTJEWU10TVE2UWxjM2R0ZTFlUDlmcmJJNTQwMzBXWjY1RW5SMmVENldUdApvWWxkUTlRNGJRM0VSU3pMOG5ncis2dysxN2xWeXhBZWo3dHltQi9QSkEyK2YydGQzbmh1ZnBUSE5WbGhZOXdyCnAwWVdLNGNKc2lWVXd6TTR4VmhLdG5PV3huWnc5UlM5VUMzMVRNY0NnWUVBc24ySEYwV0ZabnNsdTBMeGF4MVgKWVllSU84RVQ0MWNXZUZUeHgwYktaemczM3J6dE0wdFBDNzJscnNqaGlSRFVpdzltbnNPeDdmNmhLRG1aZ2hLSQpFeThuQlZXOGU5Ui9HbXNUL2tTQUN0T0R2TzVnM1lIdDdON1l6Z1FUeG1XREo4UEFuN0U4MDhlVnRhZzB5OTlFCitabnl4cDNyaXNOWWdNV1hUVE5yQzlFQ2dZQXM0VXUycVZRL0llSU9jR3ArNVJ1NWM1TlJ6b3lmekNGaTIvOEkKdVJnRXNyVTh4NlFoenBKR3pXMjd3L0srZngvN05aZmhGVm12RVVDTjJDN1ZETDk4N0JxanRpMVppQ3NvVjlLTgp0UlcwQlkrZ2w0L1JRdzJwVUFEWnN1bUVjYW1xbFdQVDVkUHFIRXZwbXI1ZjE3Zkh3dGtyOG5ZUC9XSlF1d3ZRCk5UYWJjd0tCZ1FDQ3Nma0hzajIweTNoU0p
jOE9kcHNwMHRJZUxaMTlxTU55eHlib1JwZ0tSbWVvdVR5TzNSeVEKbGFncms0YVY1WWE1L3V6T1h6Y2RpN3lBQVZsWGZGNXFwd1FvYVFCcUpuWWNHY2dWTFBTV2haS01sbHJzT042WApYYWZabXJRdzBwZVVQTThCdTg4TXU5VDNraGlLSzhIa0hiRDlDaG1PSGlPa3duUTlieUFHUGc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
16. Switch the context
(kind-myk8s:N/A) [root@operator-host-2 ~]# kubectl config use-context kind-gagajin
# Result
Switched to context "kind-gagajin".
17. Query node info (verbose logging)
(kind-gagajin:N/A) [root@operator-host-2 ~]# k get node -v6
✅ Output
I0315 08:13:23.651401 14303 loader.go:395] Config loaded from file: /root/.kube/config
I0315 08:13:23.652086 14303 cert_rotation.go:140] Starting client certificate rotation controller
I0315 08:13:23.666839 14303 round_trippers.go:553] GET https://127.0.0.1:44471/api/v1/nodes?limit=500 200 OK in 8 milliseconds
NAME STATUS ROLES AGE VERSION
myk8s-control-plane Ready control-plane 8h v1.32.2
- Confirms that the kubeconfig setup and the certificate-based authentication flow work as expected
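Why the new certificate user is also authorized: on a kubeadm-built cluster (kind nodes are bootstrapped with kubeadm), the O=kubeadm:cluster-admins field places the user in a group that kubeadm binds to the cluster-admin ClusterRole. An equivalent binding, if it had to be created by hand, would look roughly like this sketch:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
# Any client certificate with O=kubeadm:cluster-admins falls into this group.
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: kubeadm:cluster-admins
```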
Kubernetes API
1. Create a Pod
(kind-gagajin:N/A) [root@operator-host-2 ~]# kubectl run nginx --image=nginx
# Result
pod/nginx created
2. List Pods in the default namespace (Raw API)
(kind-gagajin:N/A) [root@operator-host-2 ~]# kubectl get --raw /api/v1/namespaces/default/pods | jq
✅ Output
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "39898"
},
"items": [
{
"metadata": {
"name": "nginx",
"namespace": "default",
"uid": "aeb763b0-bcfc-449e-9b3c-01aeba0cc76e",
"resourceVersion": "39881",
"creationTimestamp": "2025-03-14T23:25:16Z",
"labels": {
"run": "nginx"
},
"managedFields": [
{
"manager": "kubectl-run",
"operation": "Update",
"apiVersion": "v1",
"time": "2025-03-14T23:25:16Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:labels": {
".": {},
"f:run": {}
}
},
"f:spec": {
"f:containers": {
"k:{\"name\":\"nginx\"}": {
".": {},
"f:image": {},
"f:imagePullPolicy": {},
"f:name": {},
"f:resources": {},
"f:terminationMessagePath": {},
"f:terminationMessagePolicy": {}
}
},
"f:dnsPolicy": {},
"f:enableServiceLinks": {},
"f:restartPolicy": {},
"f:schedulerName": {},
"f:securityContext": {},
"f:terminationGracePeriodSeconds": {}
}
}
},
{
"manager": "kubelet",
"operation": "Update",
"apiVersion": "v1",
"time": "2025-03-14T23:25:24Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:status": {
"f:conditions": {
"k:{\"type\":\"ContainersReady\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"Initialized\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"PodReadyToStartContainers\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"Ready\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
}
},
"f:containerStatuses": {},
"f:hostIP": {},
"f:hostIPs": {},
"f:phase": {},
"f:podIP": {},
"f:podIPs": {
".": {},
"k:{\"ip\":\"10.244.0.5\"}": {
".": {},
"f:ip": {}
}
},
"f:startTime": {}
}
},
"subresource": "status"
}
]
},
"spec": {
"volumes": [
{
"name": "kube-api-access-bqqxn",
"projected": {
"sources": [
{
"serviceAccountToken": {
"expirationSeconds": 3607,
"path": "token"
}
},
{
"configMap": {
"name": "kube-root-ca.crt",
"items": [
{
"key": "ca.crt",
"path": "ca.crt"
}
]
}
},
{
"downwardAPI": {
"items": [
{
"path": "namespace",
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
]
}
}
],
"defaultMode": 420
}
}
],
"containers": [
{
"name": "nginx",
"image": "nginx",
"resources": {},
"volumeMounts": [
{
"name": "kube-api-access-bqqxn",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "myk8s-control-plane",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0,
"enableServiceLinks": true,
"preemptionPolicy": "PreemptLowerPriority"
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "PodReadyToStartContainers",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:24Z"
},
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:16Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:24Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:24Z"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:16Z"
}
],
"hostIP": "172.18.0.2",
"hostIPs": [
{
"ip": "172.18.0.2"
}
],
"podIP": "10.244.0.5",
"podIPs": [
{
"ip": "10.244.0.5"
}
],
"startTime": "2025-03-14T23:25:16Z",
"containerStatuses": [
{
"name": "nginx",
"state": {
"running": {
"startedAt": "2025-03-14T23:25:23Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "docker.io/library/nginx:latest",
"imageID": "docker.io/library/nginx@sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496",
"containerID": "containerd://c23685346a11498759a790dbe54aff8e6c86d4ae8492aeadb4c495ae9d9a0722",
"started": true,
"volumeMounts": [
{
"name": "kube-api-access-bqqxn",
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"readOnly": true,
"recursiveReadOnly": "Disabled"
}
]
}
],
"qosClass": "BestEffort"
}
}
]
}
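Instead of scanning the full PodList by eye, the same raw response pipes cleanly into jq filters; a sketch over a trimmed-down sample document (inlined below, mirroring the field names in the output above):

```shell
# A trimmed PodList standing in for the raw API response.
cat > podlist.json <<'EOF'
{
  "kind": "PodList",
  "items": [
    {
      "metadata": { "name": "nginx", "namespace": "default" },
      "status": { "phase": "Running", "podIP": "10.244.0.5" }
    }
  ]
}
EOF

# One line per pod: name, phase, pod IP.
jq -r '.items[] | "\(.metadata.name)\t\(.status.phase)\t\(.status.podIP)"' podlist.json
```

Against a live cluster the same filter applies directly: `kubectl get --raw /api/v1/namespaces/default/pods | jq -r '...'`.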
3. Query Pod info (verbose logging)
(kind-gagajin:N/A) [root@operator-host-2 ~]# kubectl get pod -v6
✅ Output
I0315 08:27:03.283304 14813 loader.go:395] Config loaded from file: /root/.kube/config
I0315 08:27:03.284711 14813 cert_rotation.go:140] Starting client certificate rotation controller
I0315 08:27:03.305973 14813 round_trippers.go:553] GET https://127.0.0.1:44471/api/v1/namespaces/default/pods?limit=500 200 OK in 10 milliseconds
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 107s
4. Get a specific Pod (Raw API)
(kind-gagajin:N/A) [root@operator-host-2 ~]# kubectl get --raw /api/v1/namespaces/default/pods/nginx | jq
✅ Output
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nginx",
"namespace": "default",
"uid": "aeb763b0-bcfc-449e-9b3c-01aeba0cc76e",
"resourceVersion": "39881",
"creationTimestamp": "2025-03-14T23:25:16Z",
"labels": {
"run": "nginx"
},
"managedFields": [
{
"manager": "kubectl-run",
"operation": "Update",
"apiVersion": "v1",
"time": "2025-03-14T23:25:16Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:labels": {
".": {},
"f:run": {}
}
},
"f:spec": {
"f:containers": {
"k:{\"name\":\"nginx\"}": {
".": {},
"f:image": {},
"f:imagePullPolicy": {},
"f:name": {},
"f:resources": {},
"f:terminationMessagePath": {},
"f:terminationMessagePolicy": {}
}
},
"f:dnsPolicy": {},
"f:enableServiceLinks": {},
"f:restartPolicy": {},
"f:schedulerName": {},
"f:securityContext": {},
"f:terminationGracePeriodSeconds": {}
}
}
},
{
"manager": "kubelet",
"operation": "Update",
"apiVersion": "v1",
"time": "2025-03-14T23:25:24Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:status": {
"f:conditions": {
"k:{\"type\":\"ContainersReady\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"Initialized\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"PodReadyToStartContainers\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"Ready\"}": {
".": {},
"f:lastProbeTime": {},
"f:lastTransitionTime": {},
"f:status": {},
"f:type": {}
}
},
"f:containerStatuses": {},
"f:hostIP": {},
"f:hostIPs": {},
"f:phase": {},
"f:podIP": {},
"f:podIPs": {
".": {},
"k:{\"ip\":\"10.244.0.5\"}": {
".": {},
"f:ip": {}
}
},
"f:startTime": {}
}
},
"subresource": "status"
}
]
},
"spec": {
"volumes": [
{
"name": "kube-api-access-bqqxn",
"projected": {
"sources": [
{
"serviceAccountToken": {
"expirationSeconds": 3607,
"path": "token"
}
},
{
"configMap": {
"name": "kube-root-ca.crt",
"items": [
{
"key": "ca.crt",
"path": "ca.crt"
}
]
}
},
{
"downwardAPI": {
"items": [
{
"path": "namespace",
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
]
}
}
],
"defaultMode": 420
}
}
],
"containers": [
{
"name": "nginx",
"image": "nginx",
"resources": {},
"volumeMounts": [
{
"name": "kube-api-access-bqqxn",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "myk8s-control-plane",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0,
"enableServiceLinks": true,
"preemptionPolicy": "PreemptLowerPriority"
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "PodReadyToStartContainers",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:24Z"
},
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:16Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:24Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:24Z"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2025-03-14T23:25:16Z"
}
],
"hostIP": "172.18.0.2",
"hostIPs": [
{
"ip": "172.18.0.2"
}
],
"podIP": "10.244.0.5",
"podIPs": [
{
"ip": "10.244.0.5"
}
],
"startTime": "2025-03-14T23:25:16Z",
"containerStatuses": [
{
"name": "nginx",
"state": {
"running": {
"startedAt": "2025-03-14T23:25:23Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "docker.io/library/nginx:latest",
"imageID": "docker.io/library/nginx@sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496",
"containerID": "containerd://c23685346a11498759a790dbe54aff8e6c86d4ae8492aeadb4c495ae9d9a0722",
"started": true,
"volumeMounts": [
{
"name": "kube-api-access-bqqxn",
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"readOnly": true,
"recursiveReadOnly": "Disabled"
}
]
}
],
"qosClass": "BestEffort"
}
}
5. List API groups and resources
(kind-gagajin:N/A) [root@operator-host-2 ~]# kubectl api-resources
✅ Output
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingadmissionpolicies admissionregistration.k8s.io/v1 false ValidatingAdmissionPolicy
validatingadmissionpolicybindings admissionregistration.k8s.io/v1 false ValidatingAdmissionPolicyBinding
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
selfsubjectreviews authentication.k8s.io/v1 false SelfSubjectReview
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
flowschemas flowcontrol.apiserver.k8s.io/v1 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1 false PriorityLevelConfiguration
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
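The api-resources table is easy to filter in scripts; a sketch over a saved excerpt of the output (a few sample rows inlined below):

```shell
# Rows saved from `kubectl api-resources`
# (columns: NAME [SHORTNAMES] APIVERSION NAMESPACED KIND).
cat > api-resources.txt <<'EOF'
pods                po      v1                  true    Pod
nodes               no      v1                  false   Node
deployments         deploy  apps/v1             true    Deployment
storageclasses      sc      storage.k8s.io/v1   false   StorageClass
EOF

# Namespaced resources only: the NAMESPACED column is always second-to-last,
# whether or not a row has a SHORTNAMES entry.
awk '$(NF-1) == "true" { print $1 }' api-resources.txt
# → pods
# → deployments
```

Live equivalents exist too, e.g. `kubectl api-resources --namespaced=true -o name`.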
K8S Authentication/Authorization
1. Check the contents of .kube/config
(1) EKS
cat ~/.kube/config
✅ Output
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com
name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
- cluster:
certificate-authority-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUk9Ca1BpYnVRd013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1Ea3dOakE0TWpsYUZ3MHpOVEF6TURjd05qRXpNamxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURaWDVpSmZSbGIxbFZuZzZYTjZhZDRDN0I4bGdmamQ0RDFUVHR5MHFJYlpIWXZFa2tVeWlZSmphOEYKcUgzRS8rY2VQdk9vTVp1SnFoc29aTHNCcHJ0L1oySkMvS2UyYUdFcnpNNTZiZTRkZG55cjdHcCtxTjFWMjJwZgpnNWN0azlNaVVaNGUrbGp1NmpCaXBHQm84NFBqSDFqeUs0U29qcXVIR2NRek5LcG54VmN1dis3UXNHYVp2eG1MCkduL0owMmVOWjErZ0ltR0xjdVFyYTBKOVBnOWpGeHlQLzB1TFNGYmpKd1JuYmxYM3lUUnpjL3lyNHZlbldtenIKRWlGZVFDUTJWYmJ6Y3h0emc0NWIvSGxIK1pUbEJ2RlQ2V01NekZOU1VlYVBaUEpMM2RNQWJ2OXFWbTAxZWpmRAo0bGxxRTc0TUJxbytlK05PWStCOXpYRE92QlJaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRTnhGNktLT1N4cHo3NEIzd0VPUkdiNFE3UWJEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQStneHpZSlpmWAo5OWE3Sm9ITVdQRUFMUno3NXBWLzNLOXFaeXM4aHo3aFZ1cG9aSmRVdFFrcUJiMGNxNUloUmpYcWcyS3JCSzQzCkhVOGs4SmZYU2Z1OGtLbVFhL2xhUGUzY0lrSi82NjQ2ZUlCcDJqZTcvK2xMK0FJWkNZbEl5MG5YTG9wY3o2cXYKMnE0aG9sb3BRSENoYk5RMkVQNDREemRJZVZUZitpdEFIKzJqRU0xeTBUbm5MaUgrK1c2eGlPOUJGdzc4SFlKZApCTG5QZ0ltbUxnRjgwVC9Kb09QbkZudEw1bFVPM280TzN1WGgvYWdDeXNiWkptcnJFelY4YlZab2F5bzFHT3dsCjF4ZmtpUzlERzhDVVJDbmIrS0hNZlVTYlVFYk9ZbmNrOER4Y1VpSDRsbGdiRlBWaHpJRE53QXVyVU1EWi9LTE8KM3JXdTBSN1BGUEV4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://127.0.0.1:45881
name: kind-myk8s
contexts:
- context:
cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
user: eks-user
name: eks-user
- context:
cluster: kind-myk8s
user: kind-myk8s
name: kind-myk8s
current-context: eks-user
kind: Config
preferences: {}
users:
- name: eks-user
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- --region
- ap-northeast-2
- eks
- get-token
- --cluster-name
- myeks
- --output
- json
command: aws
- name: eks-user@myeks.ap-northeast-2.eksctl.io
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- eks
- get-token
- --output
- json
- --cluster-name
- myeks
- --region
- ap-northeast-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
interactiveMode: IfAvailable
provideClusterInfo: false
- name: kind-myk8s
user:
client-certificate-data:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJQ3ZjUXJwNk4wK2t3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1Ea3dOakE0TWpsYUZ3MHlOakF6TURrd05qRXpNamxhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEUGxpdTAKa3l6dmVFUVI1RjB2WTdDVDllUkkyWEluNDA5WGl4ZXFZUUVabXZROS84ZWVrNExPbkVmdElVMVQrQzJoWEw5ZApEaDhYREliNGRDRGN4MElyMkEzQVdLd1JQRkFveXpIUlZiam9neE5saVo1YmFLMWlKVjBiWVFzQ2ErWml6MlozCkQwK0labi9FMEtHdnE3cm1IcW94NWZSZ2ZNeCt3K1lpMHZKTW9IbHRuM3lZWXdxQnFyRFBaSk5tWGpKenFCQXUKWE9uV0lFV2xHNlIwcGdnNmM4YjMyRTlFaWR5MFdPallMdVhEaDN1d2F2TWxlY0prdmtJcnZpc041ZHhWb1NzVQpncm1qUEpCZW5jVmVSbUphWm1nVUU1blBna0J1dDRxK20vZW1tdW9XYmtsSjBmSUpiVXQ4bzdtWSswUmU0akdLCmNDc1JrMG5QR2taWlVOVHhBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkEzRVhvb281TEduUHZnSApmQVE1RVp2aER0QnNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJQRUxuWDdac3lHeVpZOGZRMDR6ZCtPU3U3CkFSNHZPbmMvc0xkdVR4T3FrcjFTcWRZWmRaT3g5Mlh3M1JIQk5HM3BMWTZRNUZWcHN1U0JucjloWU1OckxZOFcKV29pZGFLTGtjRVIrdlpFUFBWakFmaDFiekN2SitlZDRWNk1iVit5UjdMbkZYS1NyY3c4aGVWekpFYXZPYnpyeApmdjZBMFF5cmhuMGphOG80V1JTOHVzK1F1Qm5aU1k3ai9FbWNtRGdicmtUUXFpbXpVNTZHcnpzazlNL1grMGpBCnhPNTA4d3JjYUU1WXYxNk8wNjFndVUrb1BsUWtpd1VzcndCZnVQOVgxeUFpdHcwWW42NnRTNzlZbWVkMjdNZlEKZHREKzkvR3d1Ni9ORGphdFdMSUJRS2lTbFdvYitMbE4zY1lveklrTHdha3lwc3R5ekZ2ZlN2SFVMZW5ZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data:
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBejVZcnRKTXM3M2hFRWVSZEwyT3drL1hrU05seUorTlBWNHNYcW1FQkdacjBQZi9ICm5wT0N6cHhIN1NGTlUvZ3RvVnkvWFE0ZkZ3eUcrSFFnM01kQ0s5Z053RmlzRVR4UUtNc3gwVlc0NklNVFpZbWUKVzJpdFlpVmRHMkVMQW12bVlzOW1kdzlQaUdaL3hOQ2hyNnU2NWg2cU1lWDBZSHpNZnNQbUl0THlUS0I1Ylo5OAptR01LZ2Fxd3oyU1RabDR5YzZnUUxsenAxaUJGcFJ1a2RLWUlPblBHOTloUFJJbmN0RmpvMkM3bHc0ZDdzR3J6CkpYbkNaTDVDSzc0ckRlWGNWYUVyRklLNW96eVFYcDNGWGtaaVdtWm9GQk9aejRKQWJyZUt2cHYzcHBycUZtNUoKU2RIeUNXMUxmS081bVB0RVh1SXhpbkFyRVpOSnp4cEdXVkRVOFFJREFRQUJBb0lCQUV5YnhRRmRGMFpCQWc0QQpVd3Q4Sk54VjhLdVo0L0VvaUUxc2ZZMVpRMGlwME4xWW1kakI0NUpRUnBNU3FURUY5QUVLODJ4cUc3c3IybHV5CnI3WUtxemIzQXd3ckxVVW5Gd1lYQndtVU15dEx4RXJDb1BobkJ5V2pXRnNVVXpYUGtEU0RDZk9DYVAyVHBpK0MKTjBsWGZTQVR6UWVoTDdDMEhkdmllL242RTRXSFFKbzA5UUhYMmhpVHdLT1pYamRtUGFmUmlLMm1nWjlVTzY0Twpza001dEVNL0tSN1lsRWtEYzZWb1VRN2VxdXR6RVNtZS9JTFNpSWhRZ053dHlBSU01c2tSU0xvRkFlcU0vQk9OCmdudlViRktnSXpkWmo5bnBIdDh5eEFtbU9qRnFXRUNzbG9lOVl5WDc1QkZnc013V2RvOTAwODNESmtSOHBjK1AKZ1R3dkhFa0NnWUVBMFhiUEdwM3VuTWxPd3BNcnZPMnY0SDRvUGRWb0NrdnlpOEF4UGNPZHhieHFjT2JHODgvaAp5UXp3Q3NMbjVBWnhZSm1uZlBmK0xWZi8wbStpRDdRa2xrdnJaelRYQTJCNDRHWHdObzJoUkpBa3NMU2J5NVlZCmR4UGI0a1pWVHNIM2p6c2RwbnRMV3RNVHNNTHhXaFhiN0hud1Y5QUd4T0plTUNQbXFWazd3cHNDZ1lFQS9iU1UKZTJUbjdjUytwcllORlRXQmlUWE1pS3AzaERlZXQ1QUFiMW5BY0JTMFM1dm1LUE02L0FxR1NiSDBQTGNJTmxrQQpKd3lJUlBxSEFMeXlIRDNTdFUzUGtwbFphQkR4amZmRnBlcnRlS2lkSmVjeWRxY1BUUURVTnU5RUErZFNKekZoClhqeHFrd2NHay9WOTcxbGliQUdIbkJraVZXYXBSaW9UaUkzWmFXTUNnWUJXUlFpbmZjUjQ3ckJ4a3d2QWxHU0wKb1dvUmpZTjhPaXQ3UTMwRVl6em40K0l5L2RtVE1WdGM0dWM2aDJ2YWpvekRySVUvQXlTOHFESEZDaFZGUW55UApLbFdaL0RsU09ybU9NbTN0Q2dnUnBReDNldXR2dmpIMVdVaUd1VkVKVHZvWEU5SHliM1Zwd3VXcE42RVA2VkRhCjVKNElqTFU5QWI2cE5TQWJQNVZOWVFLQmdFZlRhZjROTVVRMVlTeGRlaEs1RlRVOVQreVpKa0QrWmliZDArR3kKYlRMT0NjVW1HK0VZQzJqenFkVVBWbkFoK1djNWh6dUc1c1Z3ait2N2dBbFN6MmFZNHQxRUlQVy9aa09sRkFYSApIdmY3OUpHWWhNYm13UVF4NmVLcmxudnNiMnU5SlMzQ3VRRnJDY2UxeHJPT2dMakhMaGRaWGtrRFNZVWR3RzMyCmlzaTFBb0dCQUkyTCtVTmUzOWJxbUNlNGZ1cGJYZmloODNOakJr
MzR1YURETGpNa1p1QUdBVWhUcC9xemd5MVIKcnp0R0dlMHFWL0Y3VmVzeGF1WGFNczU5TmJia2ozTll3OXU5YzVFcWNUN1o1RjIxUkJYdWZxSStOSC9GWlZ6KwppaThMSnhhSExySkthS1FxeXNkbEhoTzMrS2NOcytvdG1BWjV5dDU2NjNBQ1hZRFdQbHhzCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
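위 kubeconfig의 eks-user 사용자는 정적 인증서 대신 exec 플러그인 방식을 쓴다. kubectl은 command에 지정된 바이너리(여기서는 aws eks get-token)를 실행하고, 플러그인이 표준 출력으로 내보낸 ExecCredential JSON의 status.token 값을 Bearer 토큰으로 사용한다. 동작 이해용으로, 고정된 토큰을 내뱉는 가짜 플러그인을 만들어 구조만 확인해 보는 스케치다(파일 경로와 토큰 값 k8s-aws-v1.EXAMPLE은 임의의 가정):

```shell
# 가짜 credential plugin: 실제 `aws eks get-token`이 내뱉는 것과 같은
# 구조의 ExecCredential JSON을 stdout으로 출력 (토큰 값은 가짜)
cat > /tmp/fake-get-token.sh <<'EOF'
#!/bin/sh
cat <<'JSON'
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","status":{"token":"k8s-aws-v1.EXAMPLE"}}
JSON
EOF
chmod +x /tmp/fake-get-token.sh

# kubectl은 이 JSON의 status.token을 Authorization: Bearer 헤더로 사용한다
/tmp/fake-get-token.sh | grep -o '"token":"[^"]*"'
# 출력: "token":"k8s-aws-v1.EXAMPLE"
```

실제로는 aws eks get-token이 IAM 자격증명으로 서명된 토큰을 내보내며, 토큰이 만료되면 kubectl이 플러그인을 다시 호출한다.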
2. ๋ค์์คํ์ด์ค ์์ฑ
kubectl create namespace dev-team
kubectl create ns infra-team
# ๊ฒฐ๊ณผ
namespace/dev-team created
namespace/infra-team created
3. ๋ค์์คํ์ด์ค ํ์ธ
kubectl get ns
โ ย ์ถ๋ ฅ
NAME STATUS AGE
default Active 9h
dev-team Active 31s
infra-team Active 31s
kube-node-lease Active 9h
kube-public Active 9h
kube-system Active 9h
monitoring Active 8h
4. ์๋น์ค ์ด์นด์ดํธ ์์ฑ
kubectl create sa dev-k8s -n dev-team
kubectl create sa infra-k8s -n infra-team
# ๊ฒฐ๊ณผ
serviceaccount/dev-k8s created
serviceaccount/infra-k8s created
- ๊ฐ ๋ค์์คํ์ด์ค(
dev-team
,infra-team
)์ ์๋น์ค ์ด์นด์ดํธ(dev-k8s
,infra-k8s
)๊ฐ ์์ฑ๋จ
5. ์๋น์ค ์ด์นด์ดํธ ์ ๋ณด ํ์ธ
(1) dev-team 네임스페이스의 서비스 어카운트 목록 조회
kubectl get sa -n dev-team
โ ย ์ถ๋ ฅ
NAME SECRETS AGE
default 0 109s
dev-k8s 0 30s
(2) dev-team 네임스페이스의 dev-k8s 서비스 어카운트 상세 정보 조회
kubectl get sa dev-k8s -n dev-team -o yaml
โ ย ์ถ๋ ฅ
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2025-03-14T23:44:18Z"
name: dev-k8s
namespace: dev-team
resourceVersion: "162050"
uid: ac4ef7cb-f59e-47ed-8adc-ce4e11f7d78c
(3) infra-team 네임스페이스의 서비스 어카운트 목록 조회
kubectl get sa -n infra-team
โ ย ์ถ๋ ฅ
NAME SECRETS AGE
default 0 3m
infra-k8s 0 100s
(4) infra-team 네임스페이스의 infra-k8s 서비스 어카운트 상세 정보 조회
kubectl get sa infra-k8s -n infra-team -o yaml
โ ย ์ถ๋ ฅ
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2025-03-14T23:44:19Z"
name: infra-k8s
namespace: infra-team
resourceVersion: "162052"
uid: f70ee9c0-add4-4b14-8873-d89663e75e45
6. ์๋น์ค ์ด์นด์ดํธ๋ฅผ ์ง์ ํ ํ๋ ์์ฑ (๊ถํ ํ ์คํธ์ฉ)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: dev-kubectl
namespace: dev-team
spec:
serviceAccountName: dev-k8s
containers:
- name: kubectl-pod
image: bitnami/kubectl:1.31.4
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: infra-kubectl
namespace: infra-team
spec:
serviceAccountName: infra-k8s
containers:
- name: kubectl-pod
image: bitnami/kubectl:1.31.4
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# ๊ฒฐ๊ณผ
pod/dev-kubectl created
pod/infra-kubectl created
7. ์ ์ฒด ํ๋ ๋ชฉ๋ก ๋ฐ ์๋น์ค ์ด์นด์ดํธ ํ๋ ์ ๋ณด ํ์ธ
(1) ๋ชจ๋ ๋ค์์คํ์ด์ค์ ํ๋ ๋ชฉ๋ก ์กฐํ
kubectl get pod -A
โ ย ์ถ๋ ฅ
NAMESPACE NAME READY STATUS RESTARTS AGE
dev-team dev-kubectl 1/1 Running 0 35s
infra-team infra-kubectl 1/1 Running 0 34s
kube-system aws-load-balancer-controller-554fbd9d-8r96n 1/1 Running 0 9h
kube-system aws-load-balancer-controller-554fbd9d-dzk6c 1/1 Running 0 9h
kube-system aws-node-7s9dd 2/2 Running 0 9h
kube-system aws-node-d6v7m 2/2 Running 0 9h
kube-system aws-node-wnd97 2/2 Running 0 9h
kube-system coredns-86f5954566-cc4m4 1/1 Running 0 9h
kube-system coredns-86f5954566-t7gfd 1/1 Running 0 9h
kube-system ebs-csi-controller-9c9c4d49f-5ns8n 6/6 Running 0 9h
kube-system ebs-csi-controller-9c9c4d49f-j6drv 6/6 Running 0 9h
kube-system ebs-csi-node-k5spq 3/3 Running 0 9h
kube-system ebs-csi-node-p4tcb 3/3 Running 0 9h
kube-system ebs-csi-node-rrkrv 3/3 Running 0 9h
kube-system external-dns-dc4878f5f-6cg6l 1/1 Running 0 9h
kube-system kube-ops-view-657dbc6cd8-shbj2 1/1 Running 0 9h
kube-system kube-proxy-ddmmr 1/1 Running 0 9h
kube-system kube-proxy-szvqr 1/1 Running 0 9h
kube-system kube-proxy-z9fjt 1/1 Running 0 9h
kube-system metrics-server-6bf5998d9c-5b5r5 1/1 Running 0 9h
kube-system metrics-server-6bf5998d9c-bscbc 1/1 Running 0 9h
monitoring kube-prometheus-stack-grafana-78bc45ff97-whzh7 3/3 Running 0 9h
monitoring kube-prometheus-stack-kube-state-metrics-5dbfbd4b9-2zgg5 1/1 Running 0 9h
monitoring kube-prometheus-stack-operator-76bdd654bf-9j7dk 1/1 Running 0 9h
monitoring kube-prometheus-stack-prometheus-node-exporter-bxbs6 1/1 Running 0 9h
monitoring kube-prometheus-stack-prometheus-node-exporter-kb24c 1/1 Running 0 9h
monitoring kube-prometheus-stack-prometheus-node-exporter-n896q 1/1 Running 0 9h
monitoring prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 9h
(2) dev-team 네임스페이스의 dev-kubectl 파드 상세 정보 조회
kubectl get pod dev-kubectl -n dev-team -o yaml
โ ย ์ถ๋ ฅ
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2025-03-14T23:48:55Z"
name: dev-kubectl
namespace: dev-team
resourceVersion: "163389"
uid: 83183b00-37d5-48bb-ab1a-ed2c6c922ae3
spec:
containers:
- args:
- -f
- /dev/null
command:
- tail
image: bitnami/kubectl:1.31.4
imagePullPolicy: IfNotPresent
name: kubectl-pod
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-rqb92
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-192-168-3-100.ap-northeast-2.compute.internal
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: dev-k8s
serviceAccountName: dev-k8s
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-rqb92
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:49:04Z"
status: "True"
type: PodReadyToStartContainers
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:48:55Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:49:04Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:49:04Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:48:55Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://0bc762f8a8966015380207b1a3bbcb8e532adc34756605bbef638f3d0e40df71
image: docker.io/bitnami/kubectl:1.31.4
imageID: docker.io/bitnami/kubectl@sha256:64614ef8290f3fb27fed5164b338debeeb79a1e5e26c93eb920770b71abd7c48
lastState: {}
name: kubectl-pod
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2025-03-14T23:49:04Z"
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-rqb92
readOnly: true
recursiveReadOnly: Disabled
hostIP: 192.168.3.100
hostIPs:
- ip: 192.168.3.100
phase: Running
podIP: 192.168.3.220
podIPs:
- ip: 192.168.3.220
qosClass: BestEffort
startTime: "2025-03-14T23:48:55Z"
kind: List
metadata:
resourceVersion: ""
(3) infra-team 네임스페이스의 infra-kubectl 파드 상세 정보 조회
kubectl get pod infra-kubectl -n infra-team -o yaml
โ ย ์ถ๋ ฅ
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2025-03-14T23:48:56Z"
name: infra-kubectl
namespace: infra-team
resourceVersion: "163413"
uid: 55f43156-40a6-43b6-b8de-8eaa91a13913
spec:
containers:
- args:
- -f
- /dev/null
command:
- tail
image: bitnami/kubectl:1.31.4
imagePullPolicy: IfNotPresent
name: kubectl-pod
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-64n5t
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-192-168-1-170.ap-northeast-2.compute.internal
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: infra-k8s
serviceAccountName: infra-k8s
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-64n5t
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:49:08Z"
status: "True"
type: PodReadyToStartContainers
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:48:56Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:49:08Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:49:08Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2025-03-14T23:48:56Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://dc49543540ab2b1708369d9860bea09f00f741312939b6bfa9890ce50a6fc50c
image: docker.io/bitnami/kubectl:1.31.4
imageID: docker.io/bitnami/kubectl@sha256:64614ef8290f3fb27fed5164b338debeeb79a1e5e26c93eb920770b71abd7c48
lastState: {}
name: kubectl-pod
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2025-03-14T23:49:07Z"
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-64n5t
readOnly: true
recursiveReadOnly: Disabled
hostIP: 192.168.1.170
hostIPs:
- ip: 192.168.1.170
phase: Running
podIP: 192.168.1.194
podIPs:
- ip: 192.168.1.194
qosClass: BestEffort
startTime: "2025-03-14T23:48:56Z"
kind: List
metadata:
resourceVersion: ""
8. ํ๋ ๋ด ์๋น์ค ์ด์นด์ดํธ ํ ํฐ ๋ฐ ๊ด๋ จ ์ ๋ณด ํ์ธ
(1) ์๋น์ค ์ด์นด์ดํธ ๋๋ ํ ๋ฆฌ ๋ชฉ๋ก ํ์ธ
kubectl exec -it dev-kubectl -n dev-team -- ls /run/secrets/kubernetes.io/serviceaccount
โ ย ์ถ๋ ฅ
ca.crt namespace token
(2) ์๋น์ค ์ด์นด์ดํธ ํ ํฐ ๋ด์ฉ ์กฐํ
kubectl exec -it dev-kubectl -n dev-team -- cat /run/secrets/kubernetes.io/serviceaccount/token
โ ย ์ถ๋ ฅ
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdlOTJmOGQ3NmM5ODUzNzUzYTZjOWExYzlkOTU5NzBkMjFkN2UxY2IifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MzUzMjEzNSwiaWF0IjoxNzQxOTk2MTM1LCJpc3MiOiJodHRwczovL29pZGMuZWtzLmFwLW5vcnRoZWFzdC0yLmFtYXpvbmF3cy5jb20vaWQvOTExMjQzNTA2NEI4MjQ0OThBRjY4QzY5NkQwNTE4M0MiLCJqdGkiOiI5N2I5MTQyYi0yZDZkLTRhMjYtYTA1OS0xNDVjMjVkMzhiYzYiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRldi10ZWFtIiwibm9kZSI6eyJuYW1lIjoiaXAtMTkyLTE2OC0zLTEwMC5hcC1ub3J0aGVhc3QtMi5jb21wdXRlLmludGVybmFsIiwidWlkIjoiZjQ0ZDA1YjUtZTExZC00ZDllLThiN2YtODVmZDViNGI4MzY4In0sInBvZCI6eyJuYW1lIjoiZGV2LWt1YmVjdGwiLCJ1aWQiOiI4MzE4M2IwMC0zN2Q1LTQ4YmItYWIxYS1lZDJjNmM5MjJhZTMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRldi1rOHMiLCJ1aWQiOiJhYzRlZjdjYi1mNTllLTQ3ZWQtOGFkYy1jZTRlMTFmN2Q3OGMifSwid2FybmFmdGVyIjoxNzQxOTk5NzQyfSwibmJmIjoxNzQxOTk2MTM1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGV2LXRlYW06ZGV2LWs4cyJ9.Ic3N87qD7T6y67b5efwavYt9XbyTFmvMQOyCUl7Odhsg0OE5MFLjgfrcD4x69QHMgBey_ZS8vym_EcqEzsOpqpECLKSJmepy5dEHDG08oYiVe9zzxqXep9Z9mf4BRGKXvV7D0CRAzPYocLp7weWVB-IePWxcvdWTdKY9QCqmhfrOsAFwSLxKGMVfH2-vLo14TYgIU2og5ZscVqaV16urEotcgyjvLsu8fgcebplrEgS5P9FldP4_YXFqT2UG7wAMwKIyCS0AyVeiD4BhECHo4544h8Cn7dEckZv3N1wb8I5R3sCGU_zH8gnaAxUHMlxGwLDl7IpZozPhuCt2feThZQ
(3) ์๋น์ค ์ด์นด์ดํธ ๋ค์์คํ์ด์ค ํ์ผ ๋ด์ฉ ์กฐํ
kubectl exec -it dev-kubectl -n dev-team -- cat /run/secrets/kubernetes.io/serviceaccount/namespace
โ ย ์ถ๋ ฅ
dev-team
(4) ์๋น์ค ์ด์นด์ดํธ CA ์ธ์ฆ์ ๋ด์ฉ ์กฐํ
kubectl exec -it dev-kubectl -n dev-team -- cat /run/secrets/kubernetes.io/serviceaccount/ca.crt
โ ย ์ถ๋ ฅ
-----BEGIN CERTIFICATE-----
MIIDBTCCAe2gAwIBAgIIQxYdTlWd1KowDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yNTAzMTQxMzUzMDlaFw0zNTAzMTIxMzU4MDlaMBUx
EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQDvDMGgWXJPfQYk0DJY7FbNv8CwFAL1cH1d1VLqRjXzktDapMmqKdFWyfIs
lajlI2KQogEOcWGsGD9mIYQaMs4TqZoY9SWY4Ue1V3SF9aUwtR43eKO8yEHPzCTI
F7HrX0JVJq3oQANQV3OsbItOitt7x0hhUVgsPr/jO/Yjadhw3IUxALJbxJNb3Ink
GcOymmub44xIBt6kH3uyUcqBo7i+h/sIYaWEshpzwtbRzct6dLbWE5oVmOqcZGL3
Iq5ZcbU0PspwKlS+PppXfhI6v99Tk2WEQW7dBj5fC9qgNyOy7pEuMLTxr+Nqa/rT
uw3m/3ooRhbO6k4tTlTImY1pk2zZAgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSajzw79w3MR0H0BTi1nASa7bUKRzAV
BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQAeR+g9wAUC
nPID9KJ4i+0ibDW/s9qHzGLnNzXiBZlIr5ekTb7Zc8pOOYCkysMhI9KnrPQGONZZ
+cI34RtCZctyOPVdlKRlPS9GYMhjAxHP1mIrN3SayhjX3y3cx54n0Fk0GmzF+Jmr
aAl0y23NF2y6W2tYe1EmZdTYYW142KDsHfWOBXm2bcGvzBc8a8zAfHtICnER2W8e
gtDBWWi5Tn7SJ9jSLAUYCdAB3klCMBMhO4li/G0x0TNOEUEqtlT1cLzFoOMS89eQ
yip152OmsVH3/iTnyg/TW+TQzK5AwyYMorfsNJ685lSz2huKsDdOX1RjZ+7d29um
MtHWhp4uCj1A
-----END CERTIFICATE-----
9. ๋จ์ถ ๋ช ๋ น์ด(alias) ์ค์
alias k1='kubectl exec -it dev-kubectl -n dev-team -- kubectl'
alias k2='kubectl exec -it infra-kubectl -n infra-team -- kubectl'
10. ์๋น์ค ์ด์นด์ดํธ ๊ธฐ๋ณธ ๊ถํ ํ ์คํธ (์ด๊ธฐ, ๊ถํ ์์)
(1) ์๋น์ค ์ด์นด์ดํธ(dev-team
)๋ก ํ๋ ๋ชฉ๋ก ์กฐํ ์๋
k1 get pods
โ ย ์ถ๋ ฅ
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot list resource "pods" in API group "" in the namespace "dev-team"
command terminated with exit code 1
- ์ค๋ฅ: Forbidden (
dev-team
๋ค์์คํ์ด์ค ์ธ๋ถ ๋ฆฌ์์ค ์ ๊ทผ ๋ถ๊ฐ)
(2) ์๋น์ค ์ด์นด์ดํธ(dev-team)๋ก ํ๋ ์์ฑ ์๋
k1 run nginx --image nginx:1.20-alpine
โ ย ์ถ๋ ฅ
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot create resource "pods" in API group "" in the namespace "dev-team"
command terminated with exit code 1
- ์ค๋ฅ: Forbidden (ํ๋ ์์ฑ ๊ถํ ์์)
(3) ์๋น์ค ์ด์นด์ดํธ ๊ถํ ํ์ธ: ํ๋ ์กฐํ ๊ฐ๋ฅ ์ฌ๋ถ ํ ์คํธ
k1 auth can-i get pods
k2 auth can-i get pods
โ ย ์ถ๋ ฅ
no
no
command terminated with exit code 1
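출력이 모두 no인 것처럼, K8S RBAC는 기본 거부(default-deny) 모델이다: 어떤 Role/ClusterRole 규칙도 요청과 매칭되지 않으면 거부되고, 권한은 허용 규칙의 누적으로만 부여된다(거부 규칙은 없음). 이 평가 방식을 극도로 단순화한 장난감 스케치다(실제 구현이 아닌 개념 설명용 가정):

```shell
# "네임스페이스 동사 리소스" 허용 규칙 목록에 대한 단순 평가:
# 정확히 매칭되는 규칙이 있을 때만 yes, 없으면 기본 거부(no)
RULES="dev-team get pods
dev-team create pods"

can_i() {  # 인자: <namespace> <verb> <resource>
  printf '%s\n' "$RULES" | grep -qx "$1 $2 $3" && echo yes || echo no
}

can_i dev-team get pods        # yes  (허용 규칙 존재)
can_i kube-system get pods     # no   (규칙 없음 -> 기본 거부)
```

이후 단계에서 Role과 RoleBinding을 만들면 위 RULES에 허용 규칙을 추가하는 것과 같은 효과가 나서, 같은 질의가 yes로 바뀐다.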
11. ๋ค์์คํ์ด์ค ๋ด ๋ชจ๋ ๊ถํ ๋กค(Role) ์์ฑ
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: role-dev-team
namespace: dev-team
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
EOF
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: role-infra-team
namespace: infra-team
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
EOF
# ๊ฒฐ๊ณผ
role.rbac.authorization.k8s.io/role-dev-team created
role.rbac.authorization.k8s.io/role-infra-team created
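실습에서는 편의상 apiGroups/resources/verbs를 모두 *로 열어 네임스페이스 내 전부 권한을 부여했지만, 운영 환경이라면 최소 권한 원칙에 따라 필요한 리소스와 동사만 열거하는 편이 안전하다. 읽기 전용 롤을 예로 든 스케치다(롤 이름 role-dev-readonly와 리소스 목록은 임의의 가정):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-dev-readonly   # 임의의 이름
  namespace: dev-team
rules:
- apiGroups: [""]           # "" = core API 그룹(pods, services 등)
  resources: ["pods", "pods/log", "services", "configmaps"]
  verbs: ["get", "list", "watch"]
```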
12. ๋กค ์์ฑ ๊ฒฐ๊ณผ ํ์ธ
kubectl describe roles role-dev-team -n dev-team
โ ย ์ถ๋ ฅ
Name: role-dev-team
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
13. ๋กค๋ฐ์ธ๋ฉ ์์ฑ (์๋น์ค์ด์นด์ดํธ์ ๋กค ์ฐ๋)
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: roleB-dev-team
namespace: dev-team
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: role-dev-team
subjects:
- kind: ServiceAccount
name: dev-k8s
namespace: dev-team
EOF
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: roleB-infra-team
namespace: infra-team
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: role-infra-team
subjects:
- kind: ServiceAccount
name: infra-k8s
namespace: infra-team
EOF
# ๊ฒฐ๊ณผ
rolebinding.rbac.authorization.k8s.io/roleB-dev-team created
rolebinding.rbac.authorization.k8s.io/roleB-infra-team created
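참고로 RoleBinding의 roleRef는 Role뿐 아니라 ClusterRole도 가리킬 수 있다. 이 경우 ClusterRole에 정의된 권한이 바인딩이 있는 네임스페이스 범위로만 적용되므로, 여러 네임스페이스에 같은 권한 집합을 재사용할 때 유용하다. 내장 ClusterRole인 view를 dev-team 범위로만 부여하는 스케치다(바인딩 이름 view-dev-team은 임의의 가정):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-dev-team        # 임의의 이름
  namespace: dev-team        # 권한이 적용되는 범위
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole          # 내장 view ClusterRole 재사용
  name: view
subjects:
- kind: ServiceAccount
  name: dev-k8s
  namespace: dev-team
```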
14. ๋กค๋ฐ์ธ๋ฉ ๊ฒฐ๊ณผ ํ์ธ (dev-team
)
kubectl describe rolebindings roleB-dev-team -n dev-team
โ ย ์ถ๋ ฅ
Name: roleB-dev-team
Labels: <none>
Annotations: <none>
Role:
Kind: Role
Name: role-dev-team
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount dev-k8s dev-team
15. ์๋น์ค ์ด์นด์ดํธ๋ฅผ ์ง์ ํ ํ๋์์ ํ๋ ๋ชฉ๋ก ์กฐํ ํ ์คํธ
k1 get pods
โ ย ์ถ๋ ฅ
NAME READY STATUS RESTARTS AGE
dev-kubectl 1/1 Running 0 23m
16. ์์ธ ๋ก๊ทธ ํฌํจ ํ๋ ๋ชฉ๋ก ์กฐํ (๊ถํ ํ ์คํธ)
k1 get pods -v=6
โ ย ์ถ๋ ฅ
I0315 00:14:30.726587 81 merged_client_builder.go:163] Using in-cluster namespace
I0315 00:14:30.727119 81 merged_client_builder.go:121] Using in-cluster configuration
I0315 00:14:30.750810 81 merged_client_builder.go:121] Using in-cluster configuration
I0315 00:14:30.776694 81 round_trippers.go:553] GET https://10.100.0.1:443/api/v1/namespaces/dev-team/pods?limit=500 200 OK in 18 milliseconds
NAME READY STATUS RESTARTS AGE
dev-kubectl 1/1 Running 0 25m
17. ์๋น์ค ์ด์นด์ดํธ๋ก ํ๋ ์์ฑ ์๋ (๋กค ์ ์ฉ ํ)
k1 run nginx --image nginx:1.20-alpine
# ๊ฒฐ๊ณผ
pod/nginx created
18. ์์ฑ๋ ํ๋ ๋ชฉ๋ก ์ฌ์กฐํ
k1 get pods
โ ย ์ถ๋ ฅ
NAME READY STATUS RESTARTS AGE
dev-kubectl 1/1 Running 0 26m
nginx 1/1 Running 0 18s
19. ์์ฑ๋ nginx ํ๋ ์ญ์
k1 delete pods nginx
# ๊ฒฐ๊ณผ
pod "nginx" deleted
20. ๋ค๋ฅธ ๋ค์์คํ์ด์ค ํ๋ ์กฐํ ์๋
k1 get pods -n kube-system
โ ย ์ถ๋ ฅ
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
- kube-system 네임스페이스의 파드 조회를 시도했으나, 롤이 dev-team 범위로만 바인딩되어 있어 Forbidden 오류가 발생함
21. ์์ธ ๋ก๊ทธ ํฌํจ ๋ค๋ฅธ ๋ค์์คํ์ด์ค ํ๋ ์กฐํ ์๋
k1 get pods -n kube-system -v=6
โ ย ์ถ๋ ฅ
I0315 00:17:20.198691 131 merged_client_builder.go:121] Using in-cluster configuration
I0315 00:17:20.206009 131 merged_client_builder.go:121] Using in-cluster configuration
I0315 00:17:20.224312 131 round_trippers.go:553] GET https://10.100.0.1:443/api/v1/namespaces/kube-system/pods?limit=500 403 Forbidden in 17 milliseconds
I0315 00:17:20.224868 131 helpers.go:246] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"system:serviceaccount:dev-team:dev-k8s\" cannot list resource \"pods\" in API group \"\" in the namespace \"kube-system\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}]
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
- kube-system 네임스페이스 파드 조회 시도를 상세 로그와 함께 실행하여 403 Forbidden 응답을 확인함
22. ํด๋ฌ์คํฐ ๋ฒ์ ๋ฆฌ์์ค(nodes) ์กฐํ ์๋
k1 get nodes
โ ย ์ถ๋ ฅ
Error from server (Forbidden): nodes is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot list resource "nodes" in API group "" at the cluster scope
command terminated with exit code 1
23. ์์ธ ๋ก๊ทธ ํฌํจ ํด๋ฌ์คํฐ ๋ฒ์ ๋ฆฌ์์ค(nodes) ์กฐํ ์๋
k1 get nodes -v=6
โ ย ์ถ๋ ฅ
I0315 00:18:48.690181 151 merged_client_builder.go:163] Using in-cluster namespace
I0315 00:18:48.690736 151 merged_client_builder.go:121] Using in-cluster configuration
I0315 00:18:48.700904 151 merged_client_builder.go:121] Using in-cluster configuration
I0315 00:18:48.711491 151 round_trippers.go:553] GET https://10.100.0.1:443/api/v1/nodes?limit=500 403 Forbidden in 10 milliseconds
I0315 00:18:48.713212 151 helpers.go:246] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "nodes is forbidden: User \"system:serviceaccount:dev-team:dev-k8s\" cannot list resource \"nodes\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "nodes"
},
"code": 403
}]
Error from server (Forbidden): nodes is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot list resource "nodes" in API group "" at the cluster scope
command terminated with exit code 1
24. ๊ถํ ํ์ธ ํ ์คํธ (์๋น์ค ์ด์นด์ดํธ ๊ถํ ์กฐํ)
k1 auth can-i get pods
k2 auth can-i get pods
โ ย ์ถ๋ ฅ
yes
yes
- ๊ฐ๊ฐ์ ์๋น์ค ์ด์นด์ดํธ(
dev-team
,infra-team
)๊ฐ ํ๋ ์กฐํ ๊ถํ์ ๋ณด์ ํ๋์ง ํ์ธํจ
25. ๋ฆฌ์์ค ์ ๋ฆฌ
kubectl delete ns dev-team infra-team
# ๊ฒฐ๊ณผ
namespace "dev-team" deleted
namespace "infra-team" deleted
๐ EKS ์ธ์ฆ/์ธ๊ฐ
1. ํ๋ฌ๊ทธ์ธ ์ค์น
kubectl krew install access-matrix rbac-tool rbac-view rolesum whoami
โ ย ์ถ๋ ฅ
Updated the local copy of plugin index.
Installing plugin: access-matrix
Installed plugin: access-matrix
\
| Use this plugin:
| kubectl access-matrix
| Documentation:
| https://github.com/corneliusweig/rakkess
| Caveats:
| \
| | Usage:
| | kubectl access-matrix
| | kubectl access-matrix for pods
| /
/
WARNING: You installed plugin "access-matrix" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
Installing plugin: rbac-tool
Installed plugin: rbac-tool
\
| Use this plugin:
| kubectl rbac-tool
| Documentation:
| https://github.com/alcideio/rbac-tool
/
WARNING: You installed plugin "rbac-tool" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
Installing plugin: rbac-view
Installed plugin: rbac-view
\
| Use this plugin:
| kubectl rbac-view
| Documentation:
| https://github.com/jasonrichardsmith/rbac-view
| Caveats:
| \
| | Run "kubectl rbac-view" to open a browser with an html view of your permissions.
| /
/
WARNING: You installed plugin "rbac-view" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
Installing plugin: rolesum
Installed plugin: rolesum
\
| Use this plugin:
| kubectl rolesum
| Documentation:
| https://github.com/Ladicle/kubectl-rolesum
/
WARNING: You installed plugin "rolesum" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
Installing plugin: whoami
Installed plugin: whoami
\
| Use this plugin:
| kubectl whoami
| Documentation:
| https://github.com/rajatjindal/kubectl-whoami
/
WARNING: You installed plugin "whoami" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
2. ํ์ฌ ์ธ์ฆ๋ ์ฃผ์ฒด ํ์ธ
kubectl whoami
โ ย ์ถ๋ ฅ
arn:aws:iam::378102432899:user/eks-user
3. ์ก์ธ์ค ๋งคํธ๋ฆญ์ค ํ์ธ
kubectl access-matrix
โ ย ์ถ๋ ฅ
NAME LIST CREATE UPDATE DELETE
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
โ โ โ โ
alertmanagerconfigs.monitoring.coreos.com ✔ ✔ ✔ ✔
alertmanagers.monitoring.coreos.com ✔ ✔ ✔ ✔
apiservices.apiregistration.k8s.io ✔ ✔ ✔ ✔
bindings ✔
certificatesigningrequests.certificates.k8s.io ✔ ✔ ✔ ✔
clusterrolebindings.rbac.authorization.k8s.io ✔ ✔ ✔ ✔
clusterroles.rbac.authorization.k8s.io ✔ ✔ ✔ ✔
cninodes.vpcresources.k8s.aws ✔ ✔ ✔ ✔
componentstatuses ✔
configmaps ✔ ✔ ✔ ✔
controllerrevisions.apps ✔ ✔ ✔ ✔
cronjobs.batch ✔ ✔ ✔ ✔
csidrivers.storage.k8s.io ✔ ✔ ✔ ✔
csinodes.storage.k8s.io ✔ ✔ ✔ ✔
csistoragecapacities.storage.k8s.io ✔ ✔ ✔ ✔
customresourcedefinitions.apiextensions.k8s.io ✔ ✔ ✔ ✔
daemonsets.apps ✔ ✔ ✔ ✔
deployments.apps ✔ ✔ ✔ ✔
endpoints ✔ ✔ ✔ ✔
endpointslices.discovery.k8s.io ✔ ✔ ✔ ✔
eniconfigs.crd.k8s.amazonaws.com ✔ ✔ ✔ ✔
events ✔ ✔ ✔ ✔
events.events.k8s.io ✔ ✔ ✔ ✔
flowschemas.flowcontrol.apiserver.k8s.io ✔ ✔ ✔ ✔
horizontalpodautoscalers.autoscaling ✔ ✔ ✔ ✔
ingressclasses.networking.k8s.io ✔ ✔ ✔ ✔
ingressclassparams.elbv2.k8s.aws ✔ ✔ ✔ ✔
ingresses.networking.k8s.io ✔ ✔ ✔ ✔
jobs.batch ✔ ✔ ✔ ✔
leases.coordination.k8s.io ✔ ✔ ✔ ✔
limitranges ✔ ✔ ✔ ✔
localsubjectaccessreviews.authorization.k8s.io ✔
mutatingwebhookconfigurations.admissionregistration.k8s.io ✔ ✔ ✔ ✔
namespaces ✔ ✔ ✔ ✔
networkpolicies.networking.k8s.io ✔ ✔ ✔ ✔
nodes ✔ ✔ ✔ ✔
nodes.metrics.k8s.io ✔
persistentvolumeclaims ✔ ✔ ✔ ✔
persistentvolumes ✔ ✔ ✔ ✔
poddisruptionbudgets.policy ✔ ✔ ✔ ✔
podmonitors.monitoring.coreos.com ✔ ✔ ✔ ✔
pods ✔ ✔ ✔ ✔
pods.metrics.k8s.io ✔
podtemplates ✔ ✔ ✔ ✔
policyendpoints.networking.k8s.aws ✔ ✔ ✔ ✔
priorityclasses.scheduling.k8s.io ✔ ✔ ✔ ✔
prioritylevelconfigurations.flowcontrol.apiserver.k8s.io ✔ ✔ ✔ ✔
probes.monitoring.coreos.com ✔ ✔ ✔ ✔
prometheusagents.monitoring.coreos.com ✔ ✔ ✔ ✔
prometheuses.monitoring.coreos.com ✔ ✔ ✔ ✔
prometheusrules.monitoring.coreos.com ✔ ✔ ✔ ✔
replicasets.apps ✔ ✔ ✔ ✔
replicationcontrollers ✔ ✔ ✔ ✔
resourcequotas ✔ ✔ ✔ ✔
rolebindings.rbac.authorization.k8s.io ✔ ✔ ✔ ✔
roles.rbac.authorization.k8s.io ✔ ✔ ✔ ✔
runtimeclasses.node.k8s.io ✔ ✔ ✔ ✔
scrapeconfigs.monitoring.coreos.com ✔ ✔ ✔ ✔
secrets ✔ ✔ ✔ ✔
securitygrouppolicies.vpcresources.k8s.aws ✔ ✔ ✔ ✔
selfsubjectaccessreviews.authorization.k8s.io ✔
selfsubjectreviews.authentication.k8s.io ✔
selfsubjectrulesreviews.authorization.k8s.io ✔
serviceaccounts ✔ ✔ ✔ ✔
servicemonitors.monitoring.coreos.com ✔ ✔ ✔ ✔
services ✔ ✔ ✔ ✔
statefulsets.apps ✔ ✔ ✔ ✔
storageclasses.storage.k8s.io ✔ ✔ ✔ ✔
subjectaccessreviews.authorization.k8s.io ✔
targetgroupbindings.elbv2.k8s.aws ✔ ✔ ✔ ✔
thanosrulers.monitoring.coreos.com ✔ ✔ ✔ ✔
tokenreviews.authentication.k8s.io ✔
validatingadmissionpolicies.admissionregistration.k8s.io ✔ ✔ ✔ ✔
validatingadmissionpolicybindings.admissionregistration.k8s.io ✔ ✔ ✔ ✔
validatingwebhookconfigurations.admissionregistration.k8s.io ✔ ✔ ✔ ✔
volumeattachments.storage.k8s.io ✔ ✔ ✔ ✔
volumeattributesclasses.storage.k8s.io ✔ ✔ ✔ ✔
No namespace given, this implies cluster scope (try -n if this is not intended)
4. Policy lookup with the RBAC tool
(1) Look up all RBAC policies
kubectl rbac-tool lookup
✅ Output
SUBJECT | SUBJECT TYPE | SCOPE | NAMESPACE | ROLE | BINDING
----------------------------------------------+----------------+-------------+-------------+---------------------------------------------------------------+----------------------------------------------------------------
attachdetach-controller | ServiceAccount | ClusterRole | | system:controller:attachdetach-controller | system:controller:attachdetach-controller
aws-load-balancer-controller | ServiceAccount | ClusterRole | | aws-load-balancer-controller-role | aws-load-balancer-controller-rolebinding
aws-load-balancer-controller | ServiceAccount | Role | kube-system | aws-load-balancer-controller-leader-election-role | aws-load-balancer-controller-leader-election-rolebinding
aws-node | ServiceAccount | ClusterRole | | aws-node | aws-node
bootstrap-signer | ServiceAccount | Role | kube-public | system:controller:bootstrap-signer | system:controller:bootstrap-signer
bootstrap-signer | ServiceAccount | Role | kube-system | system:controller:bootstrap-signer | system:controller:bootstrap-signer
certificate-controller | ServiceAccount | ClusterRole | | system:controller:certificate-controller | system:controller:certificate-controller
cloud-provider | ServiceAccount | Role | kube-system | system:controller:cloud-provider | system:controller:cloud-provider
clusterrole-aggregation-controller | ServiceAccount | ClusterRole | | system:controller:clusterrole-aggregation-controller | system:controller:clusterrole-aggregation-controller
coredns | ServiceAccount | ClusterRole | | system:coredns | system:coredns
cronjob-controller | ServiceAccount | ClusterRole | | system:controller:cronjob-controller | system:controller:cronjob-controller
daemon-set-controller | ServiceAccount | ClusterRole | | system:controller:daemon-set-controller | system:controller:daemon-set-controller
deployment-controller | ServiceAccount | ClusterRole | | system:controller:deployment-controller | system:controller:deployment-controller
disruption-controller | ServiceAccount | ClusterRole | | system:controller:disruption-controller | system:controller:disruption-controller
ebs-csi-controller-sa | ServiceAccount | ClusterRole | | ebs-external-snapshotter-role | ebs-csi-snapshotter-binding
ebs-csi-controller-sa | ServiceAccount | ClusterRole | | ebs-external-attacher-role | ebs-csi-attacher-binding
ebs-csi-controller-sa | ServiceAccount | ClusterRole | | ebs-external-provisioner-role | ebs-csi-provisioner-binding
ebs-csi-controller-sa | ServiceAccount | ClusterRole | | ebs-external-resizer-role | ebs-csi-resizer-binding
ebs-csi-controller-sa | ServiceAccount | Role | kube-system | ebs-csi-leases-role | ebs-csi-leases-rolebinding
ebs-csi-node-sa | ServiceAccount | ClusterRole | | ebs-csi-node-role | ebs-csi-node-getter-binding
eks:addon-manager | User | ClusterRole | | eks:addon-manager | eks:addon-manager
eks:addon-manager | User | ClusterRole | | cluster-admin | eks:addon-cluster-admin
eks:addon-manager | User | Role | kube-system | eks:addon-manager | eks:addon-manager
eks:authenticator | User | Role | kube-system | eks:authenticator | eks:authenticator
eks:az-poller | User | ClusterRole | | eks:az-poller | eks:az-poller
eks:az-poller | User | Role | kube-system | eks:az-poller | eks:az-poller
eks:certificate-controller | User | ClusterRole | | eks:certificate-controller-approver | eks:certificate-controller-approver
eks:certificate-controller | User | ClusterRole | | eks:certificate-controller-signer | eks:certificate-controller-signer
eks:certificate-controller | User | ClusterRole | | eks:certificate-controller-manager | eks:certificate-controller-manager
eks:certificate-controller | User | ClusterRole | | system:controller:certificate-controller | eks:certificate-controller
eks:certificate-controller | User | Role | kube-system | eks:certificate-controller | eks:certificate-controller
eks:cloud-controller-manager | User | ClusterRole | | eks:cloud-controller-manager | eks:cloud-controller-manager
eks:cloud-controller-manager | User | Role | kube-system | extension-apiserver-authentication-reader | eks:cloud-controller-manager:apiserver-authentication-reader
eks:cluster-event-watcher | User | ClusterRole | | eks:cluster-event-watcher | eks:cluster-event-watcher
eks:coredns-autoscaler | User | ClusterRole | | eks:coredns-autoscaler | eks:coredns-autoscaler
eks:coredns-autoscaler | User | Role | kube-system | eks:coredns-autoscaler | eks:coredns-autoscaler
eks:fargate-manager | User | ClusterRole | | eks:fargate-manager | eks:fargate-manager
eks:fargate-manager | User | Role | kube-system | eks:fargate-manager | eks:fargate-manager
eks:fargate-scheduler | User | ClusterRole | | eks:fargate-scheduler | eks:fargate-scheduler
eks:k8s-metrics | User | ClusterRole | | eks:k8s-metrics | eks:k8s-metrics
eks:k8s-metrics | User | Role | kube-system | eks:k8s-metrics | eks:k8s-metrics
eks:kms-storage-migrator | User | ClusterRole | | eks:kms-storage-migrator | eks:kms-storage-migrator
eks:kube-proxy-windows | Group | ClusterRole | | system:node-proxier | eks:kube-proxy-windows
eks:network-policy-controller | User | ClusterRole | | eks:network-policy-controller | eks:network-policy-controller
eks:network-policy-controller | User | Role | kube-system | eks:network-policy-controller | eks:network-policy-controller
eks:network-webhooks | User | ClusterRole | | eks:network-webhooks | eks:network-webhooks
eks:node-manager | User | ClusterRole | | eks:node-manager | eks:node-manager
eks:node-manager | User | Role | kube-system | eks:node-manager | eks:node-manager
eks:nodewatcher | User | ClusterRole | | eks:nodewatcher | eks:nodewatcher
eks:pod-identity-mutating-webhook | User | ClusterRole | | eks:pod-identity-mutating-webhook | eks:pod-identity-mutating-webhook
eks:service-operations | Group | ClusterRole | | eks:service-operations | eks:service-operations
eks:service-operations | Group | Role | kube-system | eks:service-operations-configmaps | eks:service-operations
eks:vpc-resource-controller | User | ClusterRole | | vpc-resource-controller-role | vpc-resource-controller-rolebinding
eks:vpc-resource-controller | User | Role | kube-system | eks-vpc-resource-controller-role | eks-vpc-resource-controller-rolebinding
endpoint-controller | ServiceAccount | ClusterRole | | system:controller:endpoint-controller | system:controller:endpoint-controller
endpointslice-controller | ServiceAccount | ClusterRole | | system:controller:endpointslice-controller | system:controller:endpointslice-controller
endpointslicemirroring-controller | ServiceAccount | ClusterRole | | system:controller:endpointslicemirroring-controller | system:controller:endpointslicemirroring-controller
ephemeral-volume-controller | ServiceAccount | ClusterRole | | system:controller:ephemeral-volume-controller | system:controller:ephemeral-volume-controller
expand-controller | ServiceAccount | ClusterRole | | system:controller:expand-controller | system:controller:expand-controller
external-dns | ServiceAccount | ClusterRole | | external-dns | external-dns-viewer
generic-garbage-collector | ServiceAccount | ClusterRole | | system:controller:generic-garbage-collector | system:controller:generic-garbage-collector
horizontal-pod-autoscaler | ServiceAccount | ClusterRole | | system:controller:horizontal-pod-autoscaler | system:controller:horizontal-pod-autoscaler
job-controller | ServiceAccount | ClusterRole | | system:controller:job-controller | system:controller:job-controller
kube-controller-manager | ServiceAccount | Role | kube-system | system::leader-locking-kube-controller-manager | system::leader-locking-kube-controller-manager
kube-dns | ServiceAccount | ClusterRole | | system:kube-dns | system:kube-dns
kube-ops-view | ServiceAccount | ClusterRole | | kube-ops-view | kube-ops-view
kube-prometheus-stack-grafana | ServiceAccount | ClusterRole | | kube-prometheus-stack-grafana-clusterrole | kube-prometheus-stack-grafana-clusterrolebinding
kube-prometheus-stack-grafana | ServiceAccount | Role | monitoring | kube-prometheus-stack-grafana | kube-prometheus-stack-grafana
kube-prometheus-stack-kube-state-metrics | ServiceAccount | ClusterRole | | kube-prometheus-stack-kube-state-metrics | kube-prometheus-stack-kube-state-metrics
kube-prometheus-stack-operator | ServiceAccount | ClusterRole | | kube-prometheus-stack-operator | kube-prometheus-stack-operator
kube-prometheus-stack-prometheus | ServiceAccount | ClusterRole | | kube-prometheus-stack-prometheus | kube-prometheus-stack-prometheus
kube-proxy | ServiceAccount | ClusterRole | | system:node-proxier | eks:kube-proxy
kube-scheduler | ServiceAccount | Role | kube-system | system::leader-locking-kube-scheduler | system::leader-locking-kube-scheduler
leader-election-controller | ServiceAccount | Role | kube-system | system::leader-locking-kube-controller-manager | system::leader-locking-kube-controller-manager
legacy-service-account-token-cleaner | ServiceAccount | ClusterRole | | system:controller:legacy-service-account-token-cleaner | system:controller:legacy-service-account-token-cleaner
metrics-server | ServiceAccount | ClusterRole | | system:metrics-server | system:metrics-server
metrics-server | ServiceAccount | ClusterRole | | system:auth-delegator | metrics-server:system:auth-delegator
metrics-server | ServiceAccount | Role | kube-system | extension-apiserver-authentication-reader | metrics-server-auth-reader
namespace-controller | ServiceAccount | ClusterRole | | system:controller:namespace-controller | system:controller:namespace-controller
node-controller | ServiceAccount | ClusterRole | | system:controller:node-controller | system:controller:node-controller
persistent-volume-binder | ServiceAccount | ClusterRole | | system:controller:persistent-volume-binder | system:controller:persistent-volume-binder
pod-garbage-collector | ServiceAccount | ClusterRole | | system:controller:pod-garbage-collector | system:controller:pod-garbage-collector
pv-protection-controller | ServiceAccount | ClusterRole | | system:controller:pv-protection-controller | system:controller:pv-protection-controller
pvc-protection-controller | ServiceAccount | ClusterRole | | system:controller:pvc-protection-controller | system:controller:pvc-protection-controller
replicaset-controller | ServiceAccount | ClusterRole | | system:controller:replicaset-controller | system:controller:replicaset-controller
replication-controller | ServiceAccount | ClusterRole | | system:controller:replication-controller | system:controller:replication-controller
resourcequota-controller | ServiceAccount | ClusterRole | | system:controller:resourcequota-controller | system:controller:resourcequota-controller
root-ca-cert-publisher | ServiceAccount | ClusterRole | | system:controller:root-ca-cert-publisher | system:controller:root-ca-cert-publisher
route-controller | ServiceAccount | ClusterRole | | system:controller:route-controller | system:controller:route-controller
service-account-controller | ServiceAccount | ClusterRole | | system:controller:service-account-controller | system:controller:service-account-controller
service-controller | ServiceAccount | ClusterRole | | system:controller:service-controller | system:controller:service-controller
statefulset-controller | ServiceAccount | ClusterRole | | system:controller:statefulset-controller | system:controller:statefulset-controller
system:authenticated | Group | ClusterRole | | system:public-info-viewer | system:public-info-viewer
system:authenticated | Group | ClusterRole | | system:discovery | system:discovery
system:authenticated | Group | ClusterRole | | system:basic-user | system:basic-user
system:bootstrappers | Group | ClusterRole | | eks:node-bootstrapper | eks:node-bootstrapper
system:kube-controller-manager | User | ClusterRole | | system:kube-controller-manager | system:kube-controller-manager
system:kube-controller-manager | User | ClusterRole | | eks:cloud-provider-extraction-migration | eks:cloud-provider-extraction-migration
system:kube-controller-manager | User | Role | kube-system | system::leader-locking-kube-controller-manager | system::leader-locking-kube-controller-manager
system:kube-controller-manager | User | Role | kube-system | extension-apiserver-authentication-reader | system::extension-apiserver-authentication-reader
system:kube-proxy | User | ClusterRole | | system:node-proxier | system:node-proxier
system:kube-scheduler | User | ClusterRole | | system:volume-scheduler | system:volume-scheduler
system:kube-scheduler | User | ClusterRole | | system:kube-scheduler | system:kube-scheduler
system:kube-scheduler | User | Role | kube-system | extension-apiserver-authentication-reader | system::extension-apiserver-authentication-reader
system:kube-scheduler | User | Role | kube-system | system::leader-locking-kube-scheduler | system::leader-locking-kube-scheduler
system:masters | Group | ClusterRole | | cluster-admin | cluster-admin
system:monitoring | Group | ClusterRole | | system:monitoring | system:monitoring
system:node-proxier | Group | ClusterRole | | system:node-proxier | eks:kube-proxy-fargate
system:nodes | Group | ClusterRole | | eks:node-bootstrapper | eks:node-bootstrapper
system:serviceaccounts | Group | ClusterRole | | system:service-account-issuer-discovery | system:service-account-issuer-discovery
system:unauthenticated | Group | ClusterRole | | system:public-info-viewer | system:public-info-viewer
tagging-controller | ServiceAccount | ClusterRole | | eks:tagging-controller | eks:tagging-controller
token-cleaner | ServiceAccount | Role | kube-system | system:controller:token-cleaner | system:controller:token-cleaner
ttl-after-finished-controller | ServiceAccount | ClusterRole | | system:controller:ttl-after-finished-controller | system:controller:ttl-after-finished-controller
ttl-controller | ServiceAccount | ClusterRole | | system:controller:ttl-controller | system:controller:ttl-controller
validatingadmissionpolicy-status-controller | ServiceAccount | ClusterRole | | system:controller:validatingadmissionpolicy-status-controller | system:controller:validatingadmissionpolicy-status-controller
(2) RBAC policy lookup targeting system:masters
kubectl rbac-tool lookup system:masters
✅ Output
SUBJECT | SUBJECT TYPE | SCOPE | NAMESPACE | ROLE | BINDING
-----------------+--------------+-------------+-----------+---------------+----------------
system:masters | Group | ClusterRole | | cluster-admin | cluster-admin
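This single line corresponds to the bootstrap ClusterRoleBinding that upstream Kubernetes creates at cluster initialization. Reconstructed as a manifest for illustration (not dumped from this cluster), it looks roughly like:

```yaml
# Bootstrap binding: anyone in the system:masters group is cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
```

This is why putting an identity into system:masters grants unrestricted access: the binding itself cannot be meaningfully audited or narrowed, which is one motivation for EKS access entries over the legacy mapping.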
5. Inspect a specific ClusterRole (eks:node-bootstrapper)
kubectl describe ClusterRole eks:node-bootstrapper
✅ Output
Name: eks:node-bootstrapper
Labels: eks.amazonaws.com/component=node
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
certificatesigningrequests.certificates.k8s.io/selfnodeserver [] [] [create]
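Expressed as a manifest, the single rule above amounts to a ClusterRole roughly like the following sketch (reconstructed from the describe output, not dumped from the cluster):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:node-bootstrapper
  labels:
    eks.amazonaws.com/component: node
rules:
# Nodes joining the cluster may only create serving-certificate CSRs.
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/selfnodeserver
  verbs:
  - create
```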
6. Look up RBAC policy rules (system:authenticated)
kubectl rbac-tool policy-rules -e '^system:authenticated'
✅ Output
TYPE | SUBJECT | VERBS | NAMESPACE | API GROUP | KIND | NAMES | NONRESOURCEURI | ORIGINATED FROM
--------+----------------------+--------+-----------+-----------------------+--------------------------+-------+------------------------------------------------------------------------------------------+------------------------------------------
Group | system:authenticated | create | * | authentication.k8s.io | selfsubjectreviews | | | ClusterRoles>>system:basic-user
Group | system:authenticated | create | * | authorization.k8s.io | selfsubjectaccessreviews | | | ClusterRoles>>system:basic-user
Group | system:authenticated | create | * | authorization.k8s.io | selfsubjectrulesreviews | | | ClusterRoles>>system:basic-user
Group | system:authenticated | get | * | | | | /healthz,/livez,/readyz,/version,/version/ | ClusterRoles>>system:public-info-viewer
Group | system:authenticated | get | * | | | | /api,/api/*,/apis,/apis/*,/healthz,/livez,/openapi,/openapi/*,/readyz,/version,/version/ | ClusterRoles>>system:discovery
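The three create rules in the first rows all originate from the built-in system:basic-user ClusterRole; in manifest form it is roughly (a sketch of the standard upstream role):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:basic-user
rules:
# Any authenticated subject may ask "what can I do?" about itself.
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
# ...and "who am I?" via a SelfSubjectReview.
- apiGroups:
  - authentication.k8s.io
  resources:
  - selfsubjectreviews
  verbs:
  - create
```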
7. Check a summary of RBAC role information
kubectl rbac-tool show
✅ Output
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: custom-cluster-role
rules:
- apiGroups:
  - scheduling.k8s.io
  resources:
  - priorityclasses
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - volumeattributesclasses
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
...
8. Print the current authenticated subject (rbac-tool whoami)
kubectl rbac-tool whoami
✅ Output
{Username: "arn:aws:iam::378102432899:user/eks-user",
UID: "aws-iam-authenticator:378102432899:XXXXXXXXXXXXXXXXXXXXX",
Groups: ["system:authenticated"],
Extra: {accessKeyId: ["XXXXXXXXXXXXXXXXXXXX"],
arn: ["arn:aws:iam::378102432899:user/eks-user"],
canonicalArn: ["arn:aws:iam::378102432899:user/eks-user"],
principalId: ["XXXXXXXXXXXXXXXXXXXXX"],
sessionName: [""],
sigs.k8s.io/aws-iam-authenticator/principalId: ["XXXXXXXXXXXXXXXXXXXXX"]}}
9. Summarize RBAC roles for a service account (aws-node)
kubectl rolesum aws-node -n kube-system
✅ Output
ServiceAccount: kube-system/aws-node
Secrets:
Policies:
โข [CRB] */aws-node โถ [CR] */aws-node
Resource Name Exclude Verbs G L W C U P D DC
cninodes.vpcresources.k8s.aws [*] [-] [-] โ โ โ โ โ โ โ โ
eniconfigs.crd.k8s.amazonaws.com [*] [-] [-] โ โ โ โ โ โ โ โ
events.[,events.k8s.io] [*] [-] [-] โ โ โ โ โ โ โ โ
namespaces [*] [-] [-] โ โ โ โ โ โ โ โ
nodes [*] [-] [-] โ โ โ โ โ โ โ โ
pods [*] [-] [-] โ โ โ โ โ โ โ โ
policyendpoints.networking.k8s.aws [*] [-] [-] โ โ โ โ โ โ โ โ
policyendpoints.networking.k8s.aws/status [*] [-] [-] โ โ โ โ โ โ โ โ
10. RBAC role summary (User: system:kube-proxy)
kubectl rolesum -k User system:kube-proxy
✅ Output
User: system:kube-proxy
Policies:
โข [CRB] */system:node-proxier โถ [CR] */system:node-proxier
Resource Name Exclude Verbs G L W C U P D DC
endpoints [*] [-] [-] โ โ โ โ โ โ โ โ
endpointslices.discovery.k8s.io [*] [-] [-] โ โ โ โ โ โ โ โ
events.[,events.k8s.io] [*] [-] [-] โ โ โ โ โ โ โ โ
nodes [*] [-] [-] โ โ โ โ โ โ โ โ
services [*] [-] [-] โ โ โ โ โ โ โ โ
11. RBAC role summary (Group: system:authenticated)
kubectl rolesum -k Group system:authenticated
✅ Output
Group: system:authenticated
Policies:
โข [CRB] */system:basic-user โถ [CR] */system:basic-user
Resource Name Exclude Verbs G L W C U P D DC
selfsubjectaccessreviews.authorization.k8s.io [*] [-] [-] โ โ โ โ โ โ โ โ
selfsubjectreviews.authentication.k8s.io [*] [-] [-] โ โ โ โ โ โ โ โ
selfsubjectrulesreviews.authorization.k8s.io [*] [-] [-] โ โ โ โ โ โ โ โ
โข [CRB] */system:discovery โถ [CR] */system:discovery
โข [CRB] */system:public-info-viewer โถ [CR] */system:public-info-viewer
12. Run RBAC View and access the web interface
(1) Run RBAC View
(eks-user@myeks:default) [root@operator-host ~]# kubectl rbac-view
✅ Output
INFO[0000] Getting K8s client
INFO[0000] serving RBAC View and http://localhost:8800
(2) Access the web interface
echo -e "RBAC View Web http://$(curl -s ipinfo.io/ip):8800"
✅ Output
RBAC View Web http://13.125.235.88:8800
- Confirm the public IP of operator server 1 (13.125.235.88)
- Access screen (Cluster Roles)
- Access screen (Roles)
🔐 EKS Authentication/Authorization Check
1. Check the STS caller identity ARN
aws sts get-caller-identity --query Arn
✅ Output
"arn:aws:iam::378102432899:user/eks-user"
2. Inspect the kubeconfig file
cat ~/.kube/config
✅ Output
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com
  name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUk9Ca1BpYnVRd013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1Ea3dOakE0TWpsYUZ3MHpOVEF6TURjd05qRXpNamxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURaWDVpSmZSbGIxbFZuZzZYTjZhZDRDN0I4bGdmamQ0RDFUVHR5MHFJYlpIWXZFa2tVeWlZSmphOEYKcUgzRS8rY2VQdk9vTVp1SnFoc29aTHNCcHJ0L1oySkMvS2UyYUdFcnpNNTZiZTRkZG55cjdHcCtxTjFWMjJwZgpnNWN0azlNaVVaNGUrbGp1NmpCaXBHQm84NFBqSDFqeUs0U29qcXVIR2NRek5LcG54VmN1dis3UXNHYVp2eG1MCkduL0owMmVOWjErZ0ltR0xjdVFyYTBKOVBnOWpGeHlQLzB1TFNGYmpKd1JuYmxYM3lUUnpjL3lyNHZlbldtenIKRWlGZVFDUTJWYmJ6Y3h0emc0NWIvSGxIK1pUbEJ2RlQ2V01NekZOU1VlYVBaUEpMM2RNQWJ2OXFWbTAxZWpmRAo0bGxxRTc0TUJxbytlK05PWStCOXpYRE92QlJaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRTnhGNktLT1N4cHo3NEIzd0VPUkdiNFE3UWJEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQStneHpZSlpmWAo5OWE3Sm9ITVdQRUFMUno3NXBWLzNLOXFaeXM4aHo3aFZ1cG9aSmRVdFFrcUJiMGNxNUloUmpYcWcyS3JCSzQzCkhVOGs4SmZYU2Z1OGtLbVFhL2xhUGUzY0lrSi82NjQ2ZUlCcDJqZTcvK2xMK0FJWkNZbEl5MG5YTG9wY3o2cXYKMnE0aG9sb3BRSENoYk5RMkVQNDREemRJZVZUZitpdEFIKzJqRU0xeTBUbm5MaUgrK1c2eGlPOUJGdzc4SFlKZApCTG5QZ0ltbUxnRjgwVC9Kb09QbkZudEw1bFVPM280TzN1WGgvYWdDeXNiWkptcnJFelY4YlZab2F5bzFHT3dsCjF4ZmtpUzlERzhDVVJDbmIrS0hNZlVTYlVFYk9ZbmNrOER4Y1VpSDRsbGdiRlBWaHpJRE53QXVyVU1EWi9LTE8KM3JXdTBSN1BGUEV4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://127.0.0.1:45881
  name: kind-myk8s
contexts:
- context:
    cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
    user: eks-user
  name: eks-user
- context:
    cluster: kind-myk8s
    user: kind-myk8s
  name: kind-myk8s
current-context: eks-user
kind: Config
preferences: {}
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - myeks
      - --output
      - json
      command: aws
- name: eks-user@myeks.ap-northeast-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --output
      - json
      - --cluster-name
      - myeks
      - --region
      - ap-northeast-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: kind-myk8s
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJQ3ZjUXJwNk4wK2t3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1Ea3dOakE0TWpsYUZ3MHlOakF6TURrd05qRXpNamxhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEUGxpdTAKa3l6dmVFUVI1RjB2WTdDVDllUkkyWEluNDA5WGl4ZXFZUUVabXZROS84ZWVrNExPbkVmdElVMVQrQzJoWEw5ZApEaDhYREliNGRDRGN4MElyMkEzQVdLd1JQRkFveXpIUlZiam9neE5saVo1YmFLMWlKVjBiWVFzQ2ErWml6MlozCkQwK0labi9FMEtHdnE3cm1IcW94NWZSZ2ZNeCt3K1lpMHZKTW9IbHRuM3lZWXdxQnFyRFBaSk5tWGpKenFCQXUKWE9uV0lFV2xHNlIwcGdnNmM4YjMyRTlFaWR5MFdPallMdVhEaDN1d2F2TWxlY0prdmtJcnZpc041ZHhWb1NzVQpncm1qUEpCZW5jVmVSbUphWm1nVUU1blBna0J1dDRxK20vZW1tdW9XYmtsSjBmSUpiVXQ4bzdtWSswUmU0akdLCmNDc1JrMG5QR2taWlVOVHhBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkEzRVhvb281TEduUHZnSApmQVE1RVp2aER0QnNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJQRUxuWDdac3lHeVpZOGZRMDR6ZCtPU3U3CkFSNHZPbmMvc0xkdVR4T3FrcjFTcWRZWmRaT3g5Mlh3M1JIQk5HM3BMWTZRNUZWcHN1U0JucjloWU1OckxZOFcKV29pZGFLTGtjRVIrdlpFUFBWakFmaDFiekN2SitlZDRWNk1iVit5UjdMbkZYS1NyY3c4aGVWekpFYXZPYnpyeApmdjZBMFF5cmhuMGphOG80V1JTOHVzK1F1Qm5aU1k3ai9FbWNtRGdicmtUUXFpbXpVNTZHcnpzazlNL1grMGpBCnhPNTA4d3JjYUU1WXYxNk8wNjFndVUrb1BsUWtpd1VzcndCZnVQOVgxeUFpdHcwWW42NnRTNzlZbWVkMjdNZlEKZHREKzkvR3d1Ni9ORGphdFdMSUJRS2lTbFdvYitMbE4zY1lveklrTHdha3lwc3R5ekZ2ZlN2SFVMZW5ZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBejVZcnRKTXM3M2hFRWVSZEwyT3drL1hrU05seUorTlBWNHNYcW1FQkdacjBQZi9ICm5wT0N6cHhIN1NGTlUvZ3RvVnkvWFE0ZkZ3eUcrSFFnM01kQ0s5Z053RmlzRVR4UUtNc3gwVlc0NklNVFpZbWUKVzJpdFlpVmRHMkVMQW12bVlzOW1kdzlQaUdaL3hOQ2hyNnU2NWg2cU1lWDBZSHpNZnNQbUl0THlUS0I1Ylo5OAptR01LZ2Fxd3oyU1RabDR5YzZnUUxsenAxaUJGcFJ1a2RLWUlPblBHOTloUFJJbmN0RmpvMkM3bHc0ZDdzR3J6CkpYbkNaTDVDSzc0ckRlWGNWYUVyRklLNW96eVFYcDNGWGtaaVdtWm9GQk9aejRKQWJyZUt2cHYzcHBycUZtNUoKU2RIeUNXMUxmS081bVB0RVh1SXhpbkFyRVpOSnp4cEdXVkRVOFFJREFRQUJBb0lCQUV5YnhRRmRGMFpCQWc0QQpVd3Q4Sk54VjhLdVo0L0VvaUUxc2ZZMVpRMGlwME4xWW1kakI0NUpRUnBNU3FURUY5QUVLODJ4cUc3c3IybHV5CnI3WUtxemIzQXd3ckxVVW5Gd1lYQndtVU15dEx4RXJDb1BobkJ5V2pXRnNVVXpYUGtEU0RDZk9DYVAyVHBpK0MKTjBsWGZTQVR6UWVoTDdDMEhkdmllL242RTRXSFFKbzA5UUhYMmhpVHdLT1pYamRtUGFmUmlLMm1nWjlVTzY0Twpza001dEVNL0tSN1lsRWtEYzZWb1VRN2VxdXR6RVNtZS9JTFNpSWhRZ053dHlBSU01c2tSU0xvRkFlcU0vQk9OCmdudlViRktnSXpkWmo5bnBIdDh5eEFtbU9qRnFXRUNzbG9lOVl5WDc1QkZnc013V2RvOTAwODNESmtSOHBjK1AKZ1R3dkhFa0NnWUVBMFhiUEdwM3VuTWxPd3BNcnZPMnY0SDRvUGRWb0NrdnlpOEF4UGNPZHhieHFjT2JHODgvaAp5UXp3Q3NMbjVBWnhZSm1uZlBmK0xWZi8wbStpRDdRa2xrdnJaelRYQTJCNDRHWHdObzJoUkpBa3NMU2J5NVlZCmR4UGI0a1pWVHNIM2p6c2RwbnRMV3RNVHNNTHhXaFhiN0hud1Y5QUd4T0plTUNQbXFWazd3cHNDZ1lFQS9iU1UKZTJUbjdjUytwcllORlRXQmlUWE1pS3AzaERlZXQ1QUFiMW5BY0JTMFM1dm1LUE02L0FxR1NiSDBQTGNJTmxrQQpKd3lJUlBxSEFMeXlIRDNTdFUzUGtwbFphQkR4amZmRnBlcnRlS2lkSmVjeWRxY1BUUURVTnU5RUErZFNKekZoClhqeHFrd2NHay9WOTcxbGliQUdIbkJraVZXYXBSaW9UaUkzWmFXTUNnWUJXUlFpbmZjUjQ3ckJ4a3d2QWxHU0wKb1dvUmpZTjhPaXQ3UTMwRVl6em40K0l5L2RtVE1WdGM0dWM2aDJ2YWpvekRySVUvQXlTOHFESEZDaFZGUW55UApLbFdaL0RsU09ybU9NbTN0Q2dnUnBReDNldXR2dmpIMVdVaUd1VkVKVHZvWEU5SHliM1Zwd3VXcE42RVA2VkRhCjVKNElqTFU5QWI2cE5TQWJQNVZOWVFLQmdFZlRhZjROTVVRMVlTeGRlaEs1RlRVOVQreVpKa0QrWmliZDArR3kKYlRMT0NjVW1HK0VZQzJqenFkVVBWbkFoK1djNWh6dUc1c1Z3ait2N2dBbFN6MmFZNHQxRUlQVy9aa09sRkFYSApIdmY3OUpHWWhNYm13UVF4NmVLcmxudnNiMnU5SlMzQ3VRRnJDY2UxeHJPT2dMakhMaGRaWGtrRFNZVWR3RzMyCmlzaTFBb0dCQUkyTCtVTmUzOWJxbUNlNGZ1cGJYZmloODNOakJrMzR1YURETGpNa1p1QUdBVWhUcC9xemd5MVIKcnp0R0dlMHFWL0Y3VmVzeGF1WGFNczU5TmJia2ozTll3OXU5YzVFcWNUN1o1RjIxUkJYdWZxSStOSC9GWlZ6KwppaThMSnhhSExySkthS1FxeXNkbEhoTzMrS2NOcytvdG1BWjV5dDU2NjNBQ1hZRFdQbHhzCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
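The certificate-authority-data fields above are just base64-encoded PEM certificates. A minimal sketch for inspecting one (assumes the value sits on the same line as its key, as kubectl normally writes it, and that openssl is available; KUBECONFIG_FILE is a placeholder for your own path):

```shell
# Decode certificate-authority-data from a kubeconfig and print the CA
# certificate's subject and validity window.
KUBECONFIG_FILE="${KUBECONFIG_FILE:-$HOME/.kube/config}"
if [ -f "$KUBECONFIG_FILE" ]; then
  # First certificate-authority-data value in the file (base64-encoded PEM)
  ca_b64=$(grep -m1 'certificate-authority-data:' "$KUBECONFIG_FILE" | awk '{print $2}')
  printf '%s' "$ca_b64" | base64 -d | openssl x509 -noout -subject -dates
fi
```

The same trick works for client-certificate-data, which is how you can check which CN and O (Kubernetes username and group) a client certificate asserts.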
3. Request and verify an EKS authentication token
aws eks get-token help
✅ Output
GET-TOKEN() GET-TOKEN()
NAME
get-token -
DESCRIPTION
Get a token for authentication with an Amazon EKS cluster. This can be
used as an alternative to the aws-iam-authenticator.
...
4. Request temporary security credentials (a token)
aws eks get-token --cluster-name $CLUSTER_NAME | jq
✅ Output
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2025-03-15T01:22:30Z",
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYXAtbm9ydGhlYXN0LTIuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWUUNGSklTQjQ0RkFQU01QJTJGMjAyNTAzMTUlMkZhcC1ub3J0aGVhc3QtMiUyRnN0cyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMzE1VDAxMDgzMFomWC1BbXotRXhwaXJlcz02MCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QlM0J4LWs4cy1hd3MtaWQmWC1BbXotU2lnbmF0dXJlPThhOTY2YmUyMzQ4NjkwMDk3YjA2NWUwZDVjMmEzYWJkNTM1Yjk3OWY5NmYwN2QzZGViMjgzNGE5NDNlOGM3MTQ"
  }
}
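The token field is not opaque: it is the literal prefix k8s-aws-v1. followed by a base64url-encoded, unpadded presigned sts:GetCallerIdentity URL. The round trip can be sketched like this (the sample URL below is illustrative, not a captured credential):

```shell
# Encode a presigned-URL-shaped string the way `aws eks get-token` does:
# base64url without padding, prefixed with "k8s-aws-v1.".
url='https://sts.ap-northeast-2.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15'
TOKEN="k8s-aws-v1.$(printf '%s' "$url" | base64 | tr -d '\n=' | tr '/+' '_-')"

# Decode it back: strip the prefix, restore base64 padding, undo the
# URL-safe alphabet, then decode.
payload=${TOKEN#k8s-aws-v1.}
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
printf '%s' "$payload" | tr '_-' '/+' | base64 -d
echo
```

The server side (aws-iam-authenticator on the EKS control plane) does the reverse: it decodes the payload, calls the presigned GetCallerIdentity URL, and maps the IAM ARN that STS returns to a Kubernetes username and groups.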
5. Request a token in debug mode and trace the request flow
aws eks get-token --cluster-name $CLUSTER_NAME --debug | jq
✅ Output
2025-03-15 10:09:12,122 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.24.21 Python/3.13.2 Linux/6.13.5-arch1-1 source/x86_64.arch
2025-03-15 10:09:12,122 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['eks', 'get-token', '--cluster-name', 'myeks', '--debug']
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_s3 at 0x7dca2d956200>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_ddb at 0x7dca2db7c720>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.configure.configure.ConfigureCommand'>>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x7dca2dcd8400>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x7dca2dcd9620>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function alias_opsworks_cm at 0x7dca2d971d00>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_history_commands at 0x7dca2dbcc5e0>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.devcommands.CLIDevCommand'>>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_waiters at 0x7dca2d970d60>
2025-03-15 10:09:12,126 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x7dca2da11010>>
2025-03-15 10:09:12,126 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/data/cli.json
2025-03-15 10:09:12,127 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_types at 0x7dca2da8a700>
2025-03-15 10:09:12,127 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function no_sign_request at 0x7dca2da8aa20>
2025-03-15 10:09:12,127 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_verify_ssl at 0x7dca2da8a980>
2025-03-15 10:09:12,127 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_read_timeout at 0x7dca2da8ab60>
2025-03-15 10:09:12,127 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_connect_timeout at 0x7dca2da8aac0>
2025-03-15 10:09:12,127 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <built-in method update of dict object at 0x7dca2d83b500>
2025-03-15 10:09:12,128 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.24.21 Python/3.13.2 Linux/6.13.5-arch1-1 source/x86_64.arch
2025-03-15 10:09:12,128 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['eks', 'get-token', '--cluster-name', 'myeks', '--debug']
2025-03-15 10:09:12,128 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_timestamp_parser at 0x7dca2d956a20>
2025-03-15 10:09:12,128 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x7dca2ea9e8e0>
2025-03-15 10:09:12,128 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_binary_formatter at 0x7dca2d9fad40>
2025-03-15 10:09:12,128 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function no_pager_handler at 0x7dca2e604220>
2025-03-15 10:09:12,128 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x7dca2e52dbc0>
2025-03-15 10:09:12,129 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/
2025-03-15 10:09:12,130 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x7dca2dbb7d80>
2025-03-15 10:09:12,130 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_json_file_cache at 0x7dca2db747c0>
2025-03-15 10:09:12,134 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/eks/2017-11-01/service-2.json
2025-03-15 10:09:12,136 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/eks/2017-11-01/service-2.sdk-extras.json
2025-03-15 10:09:12,138 - MainThread - botocore.hooks - DEBUG - Event building-command-table.eks: calling handler <function inject_commands at 0x7dca2da79940>
2025-03-15 10:09:12,138 - MainThread - botocore.hooks - DEBUG - Event building-command-table.eks: calling handler <function add_waiters at 0x7dca2d970d60>
2025-03-15 10:09:12,142 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/eks/2017-11-01/waiters-2.json
2025-03-15 10:09:12,142 - MainThread - botocore.hooks - DEBUG - Event building-command-table.eks: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x7dca2da11010>>
2025-03-15 10:09:12,142 - MainThread - botocore.hooks - DEBUG - Event building-command-table.eks_get-token: calling handler <function add_waiters at 0x7dca2d970d60>
2025-03-15 10:09:12,142 - MainThread - botocore.hooks - DEBUG - Event building-command-table.eks_get-token: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x7dca2da11010>>
2025-03-15 10:09:12,143 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.get-token.cluster-name: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7dca2da11a90>
2025-03-15 10:09:12,143 - MainThread - botocore.hooks - DEBUG - Event process-cli-arg.custom.get-token: calling handler <awscli.argprocess.ParamShorthandParser object at 0x7dca2dac7770>
2025-03-15 10:09:12,143 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.get-token.role-arn: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7dca2da11a90>
2025-03-15 10:09:12,143 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.custom.get-token.cluster-id: calling handler <awscli.paramfile.URIArgumentHandler object at 0x7dca2da11a90>
2025-03-15 10:09:12,143 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env
2025-03-15 10:09:12,143 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role
2025-03-15 10:09:12,143 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role-with-web-identity
2025-03-15 10:09:12,143 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: sso
2025-03-15 10:09:12,143 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file
2025-03-15 10:09:12,143 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentials
2025-03-15 10:09:12,145 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/endpoints.json
2025-03-15 10:09:12,157 - MainThread - botocore.hooks - DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7dca2f02c0e0>
2025-03-15 10:09:12,157 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/sts/2011-06-15/service-2.json
2025-03-15 10:09:12,161 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/sts/2011-06-15/endpoint-rule-set-1.json
2025-03-15 10:09:12,162 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python3.13/site-packages/awscli/botocore/data/partitions.json
2025-03-15 10:09:12,162 - MainThread - botocore.hooks - DEBUG - Event creating-client-class.sts: calling handler <function add_generate_presigned_url at 0x7dca2f172de0>
2025-03-15 10:09:12,162 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for sts via: environment_service
2025-03-15 10:09:12,162 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for sts via: environment_global
2025-03-15 10:09:12,162 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for sts via: config_service
2025-03-15 10:09:12,162 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for sts via: config_global
2025-03-15 10:09:12,162 - MainThread - botocore.configprovider - DEBUG - No configured endpoint found.
2025-03-15 10:09:12,163 - MainThread - botocore.endpoint - DEBUG - Setting sts timeout as (60, 60)
2025-03-15 10:09:12,164 - MainThread - botocore.hooks - DEBUG - Event provide-client-params.sts.GetCallerIdentity: calling handler <bound method STSClientFactory._retrieve_k8s_aws_id of <awscli.customizations.eks.get_token.STSClientFactory object at 0x7dca2d877e00>>
2025-03-15 10:09:12,164 - MainThread - botocore.hooks - DEBUG - Event provide-client-params.sts.GetCallerIdentity: calling handler <function base64_decode_input_blobs at 0x7dca2d9fade0>
2025-03-15 10:09:12,164 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.sts.GetCallerIdentity: calling handler <function generate_idempotent_uuid at 0x7dca2f02c4a0>
2025-03-15 10:09:12,164 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.sts.GetCallerIdentity: calling handler <function _handle_request_validation_mode_member at 0x7dca2f02ee80>
2025-03-15 10:09:12,164 - MainThread - botocore.regions - DEBUG - Calling endpoint provider with parameters: {'Region': 'ap-northeast-2', 'UseDualStack': False, 'UseFIPS': False, 'UseGlobalEndpoint': False}
2025-03-15 10:09:12,165 - MainThread - botocore.regions - DEBUG - Endpoint provider result: https://sts.ap-northeast-2.amazonaws.com
2025-03-15 10:09:12,165 - MainThread - botocore.hooks - DEBUG - Event choose-signer.sts.GetCallerIdentity: calling handler <function set_operation_specific_signer at 0x7dca2f02c2c0>
2025-03-15 10:09:12,165 - MainThread - botocore.hooks - DEBUG - Event before-sign.sts.GetCallerIdentity: calling handler <bound method STSClientFactory._inject_k8s_aws_id_header of <awscli.customizations.eks.get_token.STSClientFactory object at 0x7dca2d877e00>>
2025-03-15 10:09:12,165 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth.
2025-03-15 10:09:12,165 - MainThread - botocore.auth - DEBUG - CanonicalRequest:
GET
/
Action=GetCallerIdentity&Version=2011-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVQCFJISB44FAPSMP%2F20250315%2Fap-northeast-2%2Fsts%2Faws4_request&X-Amz-Date=20250315T010912Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host%3Bx-k8s-aws-id
host:sts.ap-northeast-2.amazonaws.com
x-k8s-aws-id:myeks
host;x-k8s-aws-id
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2025-03-15 10:09:12,165 - MainThread - botocore.auth - DEBUG - StringToSign:
AWS4-HMAC-SHA256
20250315T010912Z
20250315/ap-northeast-2/sts/aws4_request
367074ba4e66203fdbbfea51b755364b82eb660ab0128ea27a38ffb6c42402f6
2025-03-15 10:09:12,165 - MainThread - botocore.auth - DEBUG - Signature:
1b93ecb31aee9ba6e4194af62ba01856b40e59aba75bf122e0a69c36f6257b50
{
"kind": "ExecCredential",
"apiVersion": "client.authentication.k8s.io/v1beta1",
"spec": {},
"status": {
"expirationTimestamp": "2025-03-15T01:23:12Z",
"token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYXAtbm9ydGhlYXN0LTIuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWUUNGSklTQjQ0RkFQU01QJTJGMjAyNTAzMTUlMkZhcC1ub3J0aGVhc3QtMiUyRnN0cyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMzE1VDAxMDkxMlomWC1BbXotRXhwaXJlcz02MCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QlM0J4LWs4cy1hd3MtaWQmWC1BbXotU2lnbmF0dXJlPTFiOTNlY2IzMWFlZTliYTZlNDE5NGFmNjJiYTAxODU2YjQwZTU5YWJhNzViZjEyMmUwYTY5YzM2ZjYyNTdiNTA"
}
}
- ํ ํฐ ์์ฑ ์์ฒญ ์, ํด๋ผ์ด์ธํธ๊ฐ AWS STS ์๋ํฌ์ธํธ(https://sts.ap-northeast-2.amazonaws.com)๋ก ์์ฒญ์ ๋ณด๋
- AWS4-HMAC-SHA256 ์๋ช ์ ๊ณ์ฐํ๋ ๊ณผ์ ์ ๋ณด์ฌ์ค
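The `k8s-aws-v1.` token is nothing more than a base64url-encoded presigned STS `GetCallerIdentity` URL. A minimal sketch of that encoding and decoding, using a simplified stand-in URL rather than a real presigned request:

```shell
# Build a sample token the same way `aws eks get-token` does: base64url-encode
# a presigned STS URL and prefix it with "k8s-aws-v1.". The URL below is a
# simplified stand-in; a real one also carries the SigV4 query parameters.
URL='https://sts.ap-northeast-2.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15'
TOKEN="k8s-aws-v1.$(printf '%s' "$URL" | base64 | tr -d '\n=' | tr '+/' '-_')"

# Decode: strip the prefix, restore base64 padding, undo the URL-safe alphabet.
PAYLOAD="${TOKEN#k8s-aws-v1.}"
PAD=$(printf '%*s' $(( (4 - ${#PAYLOAD} % 4) % 4 )) '' | tr ' ' '=')
printf '%s%s' "$PAYLOAD" "$PAD" | tr '_-' '/+' | base64 -d   # prints the URL
```

This is why the token in the output above decodes to an `sts.ap-northeast-2.amazonaws.com` URL: the API server (via aws-iam-authenticator) simply replays that presigned request to STS to learn who the caller is.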
6. Check the TokenReview API resources
kubectl api-resources | grep authentication
✅ Output
selfsubjectreviews authentication.k8s.io/v1 false SelfSubjectReview
tokenreviews authentication.k8s.io/v1 false TokenReview
7. View the TokenReview object field descriptions
kubectl explain tokenreviews
✅ Output
GROUP: authentication.k8s.io
KIND: TokenReview
VERSION: v1
DESCRIPTION:
TokenReview attempts to authenticate a token to a known user. Note:
TokenReview requests may be cached by the webhook token authenticator plugin
in the kube-apiserver.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <ObjectMeta>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <TokenReviewSpec> -required-
Spec holds information about the request being evaluated
status <TokenReviewStatus>
Status is filled in by the server and indicates whether the request can be
authenticated.
- The API server validates the bearer token via STS GetCallerIdentity
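For reference, token validation can also be exercised directly through the TokenReview API. A sketch, assuming a token obtained from `aws eks get-token` is substituted in (the value below is a placeholder, not a real token):

```yaml
# Submit the bearer token; the API server fills in status.authenticated and
# status.user. The token value here is a placeholder.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: eks-token-review
spec:
  token: k8s-aws-v1.REPLACE_WITH_OUTPUT_OF_aws_eks_get-token
```

Creating this with `kubectl create -f - -o yaml` returns the object with `status` populated by the API server, showing whether (and as whom) the token authenticates.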
8. Check the aws-auth ConfigMap
kubectl get cm -n kube-system aws-auth -o yaml
✅ Output
apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
username: system:node:
kind: ConfigMap
metadata:
creationTimestamp: "2025-03-14T14:08:44Z"
name: aws-auth
namespace: kube-system
resourceVersion: "2158"
uid: 945bd318-dcae-4fbe-8758-a382effa5959
9. ํ์ฌ ์ธ์ฆ๋ EKS IAM User ์ ๋ณด ํ์ธ
1
kubectl rbac-tool whoami
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
{Username: "arn:aws:iam::378102432899:user/eks-user",
UID: "aws-iam-authenticator:378102432899:XXXXXXXXXXXXXXXXXXXXX",
Groups: ["system:authenticated"],
Extra: {accessKeyId: ["XXXXXXXXXXXXXXXXXXXX"],
arn: ["arn:aws:iam::378102432899:user/eks-user"],
canonicalArn: ["arn:aws:iam::378102432899:user/eks-user"],
principalId: ["XXXXXXXXXXXXXXXXXXXXX"],
sessionName: [""],
sigs.k8s.io/aws-iam-authenticator/principalId: ["XXXXXXXXXXXXXXXXXXXXX"]}}
10. ์ ๊ท IAM ์ฌ์ฉ์(testuser) ์์ฑ
1
aws iam create-user --user-name testuser
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
{
"User": {
"Path": "/",
"UserName": "testuser",
"UserId": "AIDAVQCFJISBV2CJXXXXX",
"Arn": "arn:aws:iam::378102432899:user/testuser",
"CreateDate": "2025-03-15T01:27:40+00:00"
}
}
11. Grant testuser programmatic access
aws iam create-access-key --user-name testuser
{
"AccessKey": {
"UserName": "testuser",
"AccessKeyId": "AKIAVQCFJISB4Sxxxxxx",
"Status": "Active",
"SecretAccessKey": "xSVHPIYSp50e8rSojaBLh176cHwLru8DVvjxxxx",
"CreateDate": "2025-03-15T01:28:24+00:00"
}
}
12. Attach administrator privileges to testuser
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name testuser
13. ํด๋ฌ์คํฐ ์ญ์ ๋ฐ kubeconfig ๋ฐฑ์
(1) ํด๋ฌ์คํฐ ์ญ์
1
2
3
4
(kind-gagajin:N/A) [root@operator-host-2 ~]# kind delete cluster --name myk8s
# ๊ฒฐ๊ณผ
Deleting cluster "myk8s" ...
Deleted nodes: ["myk8s-control-plane"]
(2) ๊ธฐ์กด kubeconfig ํ์ผ ๋ฐฑ์
1
2
(kind-gagajin:N/A) [root@operator-host-2 ~]# mv ~/.kube/config ~/.kube/config.old
[root@operator-host-2 ~]#
14. Reset AWS credentials and confirm re-authentication
(1) Confirm no credentials are configured
[root@operator-host-2 ~]# aws sts get-caller-identity --query Arn
# ๊ฒฐ๊ณผ
Unable to locate credentials. You can configure credentials by running "aws configure".
(2) Configure testuser credentials
[root@operator-host-2 ~]# aws configure
AWS Access Key ID [None]: AKIAVQCFJISB4Sxxxxxx
AWS Secret Access Key [None]: xSVHPIYSp50e8rSojaBLh176cHwLru8DVvjxxxx
Default region name [None]: ap-northeast-2
Default output format [None]: json
(3) ์๊ฒฉ์ฆ๋ช ํ์ธ ๋ฐ ARN ์ถ๋ ฅ
1
[root@operator-host-2 ~]# aws sts get-caller-identity --query Arn
โ ย ์ถ๋ ฅ
1
"arn:aws:iam::378102432899:user/testuser"
15. Confirm kubectl commands now fail
[root@operator-host-2 ~]# kubectl get node -v6
✅ Output
I0315 10:41:00.827713 18045 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 1 milliseconds
E0315 10:41:00.828004 18045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0315 10:41:00.829505 18045 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0315 10:41:00.829977 18045 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
E0315 10:41:00.830105 18045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0315 10:41:00.831260 18045 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0315 10:41:00.831291 18045 shortcut.go:103] Error loading discovery information: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0315 10:41:00.831581 18045 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
E0315 10:41:00.831632 18045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0315 10:41:00.832814 18045 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0315 10:41:00.833280 18045 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
E0315 10:41:00.833336 18045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0315 10:41:00.834589 18045 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0315 10:41:00.835288 18045 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
E0315 10:41:00.835510 18045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0315 10:41:00.836761 18045 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0315 10:41:00.836824 18045 helpers.go:264] Connection error: Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
- ๋ก์ปฌ์ kubeconfig ํ์ผ์ด ๋ฐฑ์ (config.old)๋์ด ์์ด ํ์ฌ ์ฌ์ฉ ์ค์ธ ์๊ฒฉ์ฆ๋ช ์ด ์์ผ๋ฏ๋ก ์๋ฒ์ ์ฐ๊ฒฐํ ์ ์๋ค๋ ์ค๋ฅ ๋ฐ์
16. ๋ก์ปฌ kubeconfig ํ์ผ ํ์ธ
1
2
[root@operator-host-2 ~]# ls ~/.kube
cache config.old
๐บ๏ธ aws-auth configmap ์ฌ์ฉ
1. Check the IAM identity mappings (current state)
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
✅ Output
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 system:node: system:bootstrappers,system:nodes
2. Map the testuser IAM user into the cluster
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
eksctl create iamidentitymapping --cluster $CLUSTER_NAME --username testuser --group system:masters --arn arn:aws:iam::$ACCOUNT_ID:user/testuser
# Result
2025-03-15 10:46:55 [โน] checking arn arn:aws:iam::378102432899:user/testuser against entries in the auth ConfigMap
2025-03-15 10:46:55 [โน] adding identity "arn:aws:iam::378102432899:user/testuser" to auth ConfigMap
3. Verify the mapping in the aws-auth ConfigMap
kubectl get cm -n kube-system aws-auth -o yaml
✅ Output
apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
username: system:node:
mapUsers: |
- groups:
- system:masters
userarn: arn:aws:iam::378102432899:user/testuser
username: testuser
kind: ConfigMap
metadata:
creationTimestamp: "2025-03-14T14:08:44Z"
name: aws-auth
namespace: kube-system
resourceVersion: "196238"
uid: 945bd318-dcae-4fbe-8758-a382effa5959
4. ์ ์ฒด IAM Identity Mapping ๋ชฉ๋ก ํ์ธ
1
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
โ ย ์ถ๋ ฅ
1
2
3
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 system:node: system:bootstrappers,system:nodes
arn:aws:iam::378102432899:user/testuser testuser system:masters
5. Generate a testuser kubeconfig and add the context
[root@operator-host-2 ~]# CLUSTER_NAME=myeks
[root@operator-host-2 ~]# aws eks update-kubeconfig --name $CLUSTER_NAME --user-alias testuser
# Result
Added new context testuser to /root/.kube/config
6. ์์ฑ๋ kubeconfig ํ์ผ ํ์ธ
1
(testuser:N/A) [root@operator-host-2 ~]# cat ~/.kube/config
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com
name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
contexts:
- context:
cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
user: testuser
name: testuser
current-context: testuser
kind: Config
preferences: {}
users:
- name: testuser
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- --region
- ap-northeast-2
- eks
- get-token
- --cluster-name
- myeks
- --output
- json
command: aws
7. ๊ธฐ๋ณธ ๋ค์์คํ์ด์ค ์ค์ ๋ณ๊ฒฝ
1
2
3
4
(testuser:N/A) [root@operator-host-2 ~]# kubectl ns default
Context "testuser" modified.
Active namespace is "default".
(testuser:default) [root@operator-host-2 ~]#
8. ๋ ธ๋ ์กฐํ ์ฑ๊ณต ํ์ธ
1
(testuser:default) [root@operator-host-2 ~]# kubectl get node -v6
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
I0315 10:54:02.204404 18335 loader.go:395] Config loaded from file: /root/.kube/config
I0315 10:54:03.167181 18335 round_trippers.go:553] GET https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 953 milliseconds
NAME STATUS ROLES AGE VERSION
ip-192-168-1-170.ap-northeast-2.compute.internal Ready <none> 11h v1.31.5-eks-5d632ec
ip-192-168-2-112.ap-northeast-2.compute.internal Ready <none> 11h v1.31.5-eks-5d632ec
ip-192-168-3-100.ap-northeast-2.compute.internal Ready <none> 11h v1.31.5-eks-5d632ec
9. Check RBAC info (compare the current user identity)
(testuser:default) [root@operator-host-2 ~]# kubectl rbac-tool whoami
✅ Output
{Username: "testuser",
UID: "aws-iam-authenticator:378102432899:AIDAVQCFJISXXXXXXXXXX",
Groups: ["system:masters",
"system:authenticated"],
Extra: {accessKeyId: ["AKIAVQCFJIXXXXXXXXXX"],
arn: ["arn:aws:iam::378102432899:user/testuser"],
canonicalArn: ["arn:aws:iam::378102432899:user/testuser"],
principalId: ["AIDAVQCFJISXXXXXXXXXX"],
sessionName: [""],
sigs.k8s.io/aws-iam-authenticator/principalId: ["AIDAVQCFJISXXXXXXXXXX"]}}
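The `system:masters` group in the output above is powerful because upstream Kubernetes ships a built-in ClusterRoleBinding that grants it `cluster-admin`. Shown abridged for reference (this is the standard upstream object, not something specific to this cluster):

```yaml
# kubectl get clusterrolebinding cluster-admin -o yaml (abridged)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
```

Any identity that aws-auth maps into `system:masters` therefore gets full cluster-admin rights with no further RBAC objects needed.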
10. Edit the aws-auth ConfigMap (adjust RBAC permissions)
Change the testuser group from system:masters to system:authenticated:
(testuser:default) [root@operator-host-2 ~]# kubectl edit cm -n kube-system aws-auth
configmap/aws-auth edited
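The edit amounts to changing a single line in the `mapUsers` entry. A sketch of the before and after state of that fragment:

```yaml
# Before: full admin via the built-in cluster-admin binding
mapUsers: |
  - groups:
    - system:masters
    userarn: arn:aws:iam::378102432899:user/testuser
    username: testuser

# After: authenticated, but no permissions beyond system:authenticated defaults
mapUsers: |
  - groups:
    - system:authenticated
    userarn: arn:aws:iam::378102432899:user/testuser
    username: testuser
```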
11. List all IAM identity mappings again
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
✅ Output
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 system:node: system:bootstrappers,system:nodes
arn:aws:iam::378102432899:user/testuser testuser system:authenticated
12. ๋ ธ๋ ์กฐํ ์คํจ(๊ถํ ๋ถ์กฑ) ํ์ธ
1
(testuser:default) [root@operator-host-2 ~]# kubectl get node -v6
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
I0315 11:02:47.033211 18747 loader.go:395] Config loaded from file: /root/.kube/config
I0315 11:02:47.933271 18747 round_trippers.go:553] GET https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 403 Forbidden in 893 milliseconds
I0315 11:02:47.933895 18747 helpers.go:246] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "nodes is forbidden: User \"testuser\" cannot list resource \"nodes\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "nodes"
},
"code": 403
}]
Error from server (Forbidden): nodes is forbidden: User "testuser" cannot list resource "nodes" in API group "" at the cluster scope
13. Attempt to list API resources (discovery still succeeds for system:authenticated)
(testuser:default) [root@operator-host-2 ~]# kubectl api-resources -v5
✅ Output
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingadmissionpolicies admissionregistration.k8s.io/v1 false ValidatingAdmissionPolicy
validatingadmissionpolicybindings admissionregistration.k8s.io/v1 false ValidatingAdmissionPolicyBinding
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
...
14. Delete the IAM identity mapping (remove testuser)
eksctl delete iamidentitymapping --cluster $CLUSTER_NAME --arn arn:aws:iam::$ACCOUNT_ID:user/testuser
# Result
2025-03-15 11:06:36 [โน] removing identity "arn:aws:iam::378102432899:user/testuser" from auth ConfigMap (username = "testuser", groups = ["system:authenticated"])
15. ํด๋ฌ์คํฐ IAM Identity Mapping ์กฐํ
1
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
โ ย ์ถ๋ ฅ
1
2
3
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 system:node: system:bootstrappers,system:nodes
16. Check the aws-auth ConfigMap (after deleting the mapping)
kubectl get cm -n kube-system aws-auth -o yaml
✅ Output
apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
username: system:node:
mapUsers: |
[]
kind: ConfigMap
metadata:
creationTimestamp: "2025-03-14T14:08:44Z"
name: aws-auth
namespace: kube-system
resourceVersion: "201724"
uid: 945bd318-dcae-4fbe-8758-a382effa5959
17. Attempt to list nodes as testuser (confirm authentication failure)
(testuser:default) [root@operator-host-2 ~]# kubectl get node -v6
✅ Output
I0315 11:10:15.173287 18990 loader.go:395] Config loaded from file: /root/.kube/config
I0315 11:10:17.586766 18990 round_trippers.go:553] GET https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 401 Unauthorized in 2400 milliseconds
I0315 11:10:17.587181 18990 helpers.go:246] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}]
error: You must be logged in to the server (Unauthorized)
18. Attempt to list API resources (re-confirm the authentication error)
(testuser:default) [root@operator-host-2 ~]# kubectl api-resources -v5
✅ Output
E0315 11:10:56.043674 19044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
I0315 11:10:56.044843 19044 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
NAME SHORTNAMES APIVERSION NAMESPACED KIND
I0315 11:10:56.045065 19044 helpers.go:246] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server has asked for the client to provide credentials",
"reason": "Unauthorized",
"details": {
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 401
}]
error: You must be logged in to the server (the server has asked for the client to provide credentials)
๐ค EC2 Instance Profile
1. ๋ ธ๋์ STS ARN ์ ๋ณด ํ์ธ
1
for node in $N1 $N2 $N3; do ssh ec2-user@$node aws sts get-caller-identity --query Arn; done
โ ย ์ถ๋ ฅ
1
2
3
"arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-024f66075a2000bb1"
"arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-06cde703b1becff8b"
"arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-0658adcdb6bceb939"
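Each assumed-role ARN above encodes both the node's IAM role and a session name set to the EC2 instance ID. A quick shell sketch of pulling those apart (the sample ARN is copied from the first line of the output above):

```shell
# Sample assumed-role ARN from the output above.
ARN="arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-024f66075a2000bb1"

RESOURCE="${ARN#*:assumed-role/}"   # role-name/session-name
ROLE_NAME="${RESOURCE%%/*}"         # IAM role the node assumed (instance profile role)
SESSION="${RESOURCE#*/}"            # session name = the EC2 instance ID
echo "$ROLE_NAME"
echo "$SESSION"
```

This is why all three nodes share one role but show distinct ARNs: the role comes from the node group's instance profile, while the session name distinguishes each instance.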
2. Check the aws-auth ConfigMap
kubectl describe configmap -n kube-system aws-auth
✅ Output
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
username: system:node:
mapUsers:
----
[]
BinaryData
====
Events: <none>
3. List the IAM identity mappings
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
✅ Output
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 system:node: system:bootstrappers,system:nodes
4. Deploy AWS CLI pods and check node IAM credentials
(1) Create the AWS CLI pods
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: awscli-pod
spec:
replicas: 2
selector:
matchLabels:
app: awscli-pod
template:
metadata:
labels:
app: awscli-pod
spec:
containers:
- name: awscli-pod
image: amazon/aws-cli
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# ๊ฒฐ๊ณผ
deployment.apps/awscli-pod created
(2) ํ๋ ์ํ ํ์ธ
1
kubectl get pod -owide
โ ย ์ถ๋ ฅ
1
2
3
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
awscli-pod-59598f6ff8-nxfpw 1/1 Running 0 31s 192.168.3.26 ip-192-168-3-100.ap-northeast-2.compute.internal <none> <none>
awscli-pod-59598f6ff8-x959k 1/1 Running 0 31s 192.168.2.65 ip-192-168-2-112.ap-northeast-2.compute.internal <none> <none>
(3) ํ๋ ์ด๋ฆ ๋ณ์ ์ง์
1
2
3
APODNAME1=$(kubectl get pod -l app=awscli-pod -o jsonpath="{.items[0].metadata.name}")
APODNAME2=$(kubectl get pod -l app=awscli-pod -o jsonpath="{.items[1].metadata.name}")
echo $APODNAME1, $APODNAME2
โ ย ์ถ๋ ฅ
1
awscli-pod-59598f6ff8-nxfpw, awscli-pod-59598f6ff8-x959k
(4) ํ๋ ๋ด์์ ๋ ธ๋์ IAM Role ARN ํ์ธ
1
2
kubectl exec -it $APODNAME1 -- aws sts get-caller-identity --query Arn
kubectl exec -it $APODNAME2 -- aws sts get-caller-identity --query Arn
โ ย ์ถ๋ ฅ
1
2
"arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-0658adcdb6bceb939"
"arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-06cde703b1becff8b"
5. AWS ์๋น์ค ์ ๋ณด ์กฐํ ํ ์คํธ
(1) EC2 ์ธ์คํด์ค ์ ๋ณด ์กฐํ (์ถ๋ ฅ์ด ๋ฐฉ๋ํ์ฌ ์๋ต)
1
kubectl exec -it $APODNAME1 -- aws ec2 describe-instances --region ap-northeast-2 --output table --no-cli-pager
(2) Query VPC information
kubectl exec -it $APODNAME2 -- aws ec2 describe-vpcs --region ap-northeast-2 --output table --no-cli-pager
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
----------------------------------------------------------------------------------------------------------------------------------------------
| DescribeVpcs |
+--------------------------------------------------------------------------------------------------------------------------------------------+
|| Vpcs ||
|+---------------------------------------------------------+--------------------------------------------------------------------------------+|
|| CidrBlock | 192.168.0.0/16 ||
|| DhcpOptionsId | dopt-08e6acad9e813e9f4 ||
|| InstanceTenancy | default ||
|| IsDefault | False ||
|| OwnerId | 378102432899 ||
|| State | available ||
|| VpcId | vpc-050ad5b5af470a60a ||
|+---------------------------------------------------------+--------------------------------------------------------------------------------+|
||| BlockPublicAccessStates |||
||+------------------------------------------------------------------------------------------------------------+---------------------------+||
||| InternetGatewayBlockMode | off |||
||+------------------------------------------------------------------------------------------------------------+---------------------------+||
||| CidrBlockAssociationSet |||
||+------------------------------------------+---------------------------------------------------------------------------------------------+||
||| AssociationId | vpc-cidr-assoc-0601030d789823479 |||
||| CidrBlock | 192.168.0.0/16 |||
||+------------------------------------------+---------------------------------------------------------------------------------------------+||
|||| CidrBlockState ||||
|||+---------------------------------------------------+----------------------------------------------------------------------------------+|||
|||| State | associated ||||
|||+---------------------------------------------------+----------------------------------------------------------------------------------+|||
||| Tags |||
||+-------------------------------+--------------------------------------------------------------------------------------------------------+||
||| Key | Value |||
||+-------------------------------+--------------------------------------------------------------------------------------------------------+||
||| aws:cloudformation:stack-id | arn:aws:cloudformation:ap-northeast-2:378102432899:stack/myeks/7f456ee0-00db-11f0-b434-02bbcdad6607 |||
||| aws:cloudformation:stack-name| myeks |||
||| Name | myeks-VPC |||
||| aws:cloudformation:logical-id| EksVPC |||
||+-------------------------------+--------------------------------------------------------------------------------------------------------+||
|| Vpcs ||
|+---------------------------------------------------------+--------------------------------------------------------------------------------+|
|| CidrBlock | 172.20.0.0/16 ||
|| DhcpOptionsId | dopt-08e6acad9e813e9f4 ||
|| InstanceTenancy | default ||
|| IsDefault | False ||
|| OwnerId | 378102432899 ||
|| State | available ||
|| VpcId | vpc-0448287ca498c315f ||
|+---------------------------------------------------------+--------------------------------------------------------------------------------+|
||| BlockPublicAccessStates |||
||+------------------------------------------------------------------------------------------------------------+---------------------------+||
||| InternetGatewayBlockMode | off |||
||+------------------------------------------------------------------------------------------------------------+---------------------------+||
||| CidrBlockAssociationSet |||
||+------------------------------------------+---------------------------------------------------------------------------------------------+||
||| AssociationId | vpc-cidr-assoc-087ac189d496d4440 |||
||| CidrBlock | 172.20.0.0/16 |||
||+------------------------------------------+---------------------------------------------------------------------------------------------+||
|||| CidrBlockState ||||
|||+---------------------------------------------------+----------------------------------------------------------------------------------+|||
|||| State | associated ||||
|||+---------------------------------------------------+----------------------------------------------------------------------------------+|||
||| Tags |||
||+-------------------------------+--------------------------------------------------------------------------------------------------------+||
||| Key | Value |||
||+-------------------------------+--------------------------------------------------------------------------------------------------------+||
||| aws:cloudformation:stack-id | arn:aws:cloudformation:ap-northeast-2:378102432899:stack/myeks/7f456ee0-00db-11f0-b434-02bbcdad6607 |||
||| Name | operator-VPC |||
||| aws:cloudformation:logical-id| OpsVPC |||
||| aws:cloudformation:stack-name| myeks |||
||+-------------------------------+--------------------------------------------------------------------------------------------------------+||
- ๋ ธ๋์ ๋งคํ๋ IAM ์ญํ ๋๋ถ์ ํด๋น ๋ ธ๋์์ ์คํ๋๋ ๋ชจ๋ ํ๋๋ ๊ทธ ์ญํ ์ ํ ๋น๋ ๊ถํ์ ์ฌ์ฉํ ์ ์์
6. ๋ ธ๋ ๊ธฐ๋ฐ kubeconfig ์์ฑ ๋ฐ ํ์ธ
(1) ๋ ธ๋ ์ญํ ARN ์กฐํ
1
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
โ ย ์ถ๋ ฅ
1
2
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes
(2) ๋ ธ๋ ์ญํ ARN ๋ณ์ ์ค์
1
2
NODE_ROLE=<๊ฐ์ ์์ ์ ๋
ธ๋ Role ์ด๋ฆ>
NODE_ROLE=eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
(3) ๋ ธ๋ ๊ธฐ๋ฐ kubeconfig ์์ฑ
1
2
3
kubectl exec -it $APODNAME1 -- aws eks update-kubeconfig --name $CLUSTER_NAME --role-arn $NODE_ROLE
# ๊ฒฐ๊ณผ
Added new context arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks to /root/.kube/config
(4) ์์ฑ๋ kubeconfig ํ์ธ
1
kubectl exec -it $APODNAME1 -- cat /root/.kube/config
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com
name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
contexts:
- context:
cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
user: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
current-context: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- --region
- ap-northeast-2
- eks
- get-token
- --cluster-name
- myeks
- --output
- json
- --role
- eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
command: aws
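The users section above relies on the client-go exec credential plugin: each time kubectl needs credentials it runs `aws eks get-token`, and the token in the printed result is sent as a bearer token to the API server. The JSON that `get-token` emits has roughly this shape (illustrative values):

```json
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2025-03-15T12:00:00Z",
    "token": "k8s-aws-v1.<base64url-encoded presigned STS GetCallerIdentity URL>"
  }
}
```

Because the token embeds a presigned STS call, the cluster can verify which IAM principal (here, the node role passed via --role) is making the request.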
๐ [์ ๊ธฐ๋ฅ] ๊ฐ์ํ๋ Amazon EKS ์ ๊ทผ ๊ด๋ฆฌ ์ ์ด ๊ธฐ๋ฅ
1. EKS API ์ก์ธ์ค๋ชจ๋ ๋ณ๊ฒฝ
1
aws eks update-cluster-config --name $CLUSTER_NAME --access-config authenticationMode=API
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
{
"update": {
"id": "85a5fad2-60e5-3c7e-860a-80ee7ff3d04f",
"status": "InProgress",
"type": "AccessConfigUpdate",
"params": [
{
"type": "AuthenticationMode",
"value": "\"API\""
}
],
"createdAt": "2025-03-15T11:39:27.439000+09:00",
"errors": []
}
}
2. Verify the EKS API and ConfigMap update
3. Check the mapped ClusterRoles
kubectl get clusterroles -l 'kubernetes.io/bootstrapping=rbac-defaults' | grep -v 'system:'
โ ย ์ถ๋ ฅ
1
2
3
4
5
NAME CREATED AT
admin 2025-03-14T13:58:41Z
cluster-admin 2025-03-14T13:58:41Z
edit 2025-03-14T13:58:41Z
view 2025-03-14T13:58:41Z
4. ํด๋ฌ์คํฐ ์ก์ธ์ค ์ํธ๋ฆฌ ๋ชฉ๋ก ์กฐํ
1
aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
{
"accessEntries": [
"arn:aws:iam::378102432899:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS",
"arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02",
"arn:aws:iam::378102432899:user/eks-user"
]
}
5. Check the access policies associated with eks-user
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/eks-user | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
{
"associatedAccessPolicies": [
{
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2025-03-14T22:53:53.072000+09:00",
"modifiedAt": "2025-03-14T22:53:53.072000+09:00"
}
],
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:user/eks-user"
}
6. ๋ ธ๋ ๊ทธ๋ฃน ์ญํ ์ ์ฐ๊ด ์ก์ธ์ค ์ ์ฑ ์กฐํ
1
aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
{
"associatedAccessPolicies": [],
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02"
}
7. ์ก์ธ์ค ์ํธ๋ฆฌ ์์ธ ์ ๋ณด ์กฐํ (eks-user ๋ฐ ๋ ธ๋ ๊ทธ๋ฃน ์ญํ )
1
2
aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/eks-user | jq
aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02 | jq
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
{
"accessEntry": {
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:user/eks-user",
"kubernetesGroups": [],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:378102432899:access-entry/myeks/user/378102432899/eks-user/70caca77-6675-8d25-ce0a-ec6e85e27c5a",
"createdAt": "2025-03-14T22:53:52.927000+09:00",
"modifiedAt": "2025-03-14T22:53:52.927000+09:00",
"tags": {},
"username": "arn:aws:iam::378102432899:user/eks-user",
"type": "STANDARD"
}
}
{
"accessEntry": {
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02",
"kubernetesGroups": [
"system:nodes"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:378102432899:access-entry/myeks/role/378102432899/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/b6caca7e-3679-f72d-fcfd-d4fa1db93b92",
"createdAt": "2025-03-14T23:08:45.866000+09:00",
"modifiedAt": "2025-03-14T23:08:45.866000+09:00",
"tags": {},
"username": "system:node:{{EC2PrivateDNSName}}",
"type": "EC2_LINUX"
}
}
8. Create an access entry for testuser
aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
{
"accessEntry": {
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:user/testuser",
"kubernetesGroups": [],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:378102432899:access-entry/myeks/user/378102432899/testuser/e0cacbdd-5069-af75-550f-6192927ba723",
"createdAt": "2025-03-15T11:55:45.418000+09:00",
"modifiedAt": "2025-03-15T11:55:45.418000+09:00",
"tags": {},
"username": "arn:aws:iam::378102432899:user/testuser",
"type": "STANDARD"
}
}
9. Associate the cluster admin access policy with testuser
aws eks associate-access-policy --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser \
--policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
{
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:user/testuser",
"associatedAccessPolicy": {
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2025-03-15T11:57:48.204000+09:00",
"modifiedAt": "2025-03-15T11:57:48.204000+09:00"
}
}
10. Verify testuser authentication and kubectl usage
(1) Check testuser identity
(testuser:default) [root@operator-host-2 ~]# aws sts get-caller-identity --query Arn
(testuser:default) [root@operator-host-2 ~]# kubectl whoami
โ ย ์ถ๋ ฅ
1
2
"arn:aws:iam::378102432899:user/testuser"
arn:aws:iam::378102432899:user/testuser
(2) List nodes with kubectl
(testuser:default) [root@operator-host-2 ~]# kubectl get node -v6
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
I0315 12:01:23.364853 19395 loader.go:395] Config loaded from file: /root/.kube/config
I0315 12:01:24.412937 19395 round_trippers.go:553] GET https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 1039 milliseconds
NAME STATUS ROLES AGE VERSION
ip-192-168-1-170.ap-northeast-2.compute.internal Ready <none> 12h v1.31.5-eks-5d632ec
ip-192-168-2-112.ap-northeast-2.compute.internal Ready <none> 12h v1.31.5-eks-5d632ec
ip-192-168-3-100.ap-northeast-2.compute.internal Ready <none> 12h v1.31.5-eks-5d632ec
11. Check testuser's RBAC groups
(testuser:default) [root@operator-host-2 ~]# kubectl rbac-tool whoami
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
{Username: "arn:aws:iam::378102432899:user/testuser",
UID: "aws-iam-authenticator:378102432899:AIDAVQCFJISXXXXXXXXXX",
Groups: ["system:authenticated"],
Extra: {accessKeyId: ["AKIAVQCFJIXXXXXXXXXX"],
arn: ["arn:aws:iam::378102432899:user/testuser"],
canonicalArn: ["arn:aws:iam::378102432899:user/testuser"],
principalId: ["AIDAVQCFJISXXXXXXXXXX"],
sessionName: [""],
sigs.k8s.io/aws-iam-authenticator/principalId: ["AIDAVQCFJISXXXXXXXXXX"]}}
12. Check the aws-auth ConfigMap
(testuser:default) [root@operator-host-2 ~]# kubectl get cm -n kube-system aws-auth -o yaml
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
username: system:node:{{EC2PrivateDNSName}}
mapUsers: |
[]
kind: ConfigMap
metadata:
creationTimestamp: "2025-03-14T14:08:44Z"
name: aws-auth
namespace: kube-system
resourceVersion: "201724"
uid: 945bd318-dcae-4fbe-8758-a382effa5959
- EKS API๋ง ํ์ฑํ
13. Delete the testuser access entry
aws eks delete-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser
14. ์ญ์ ํ ์ก์ธ์ค ์ํธ๋ฆฌ ๋ชฉ๋ก ํ์ธ
1
aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq -r .accessEntries[]
โ ย ์ถ๋ ฅ
1
2
3
arn:aws:iam::378102432899:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS
arn:aws:iam::378102432899:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02
arn:aws:iam::378102432899:user/eks-user
15. Create pod-viewer-role and pod-admin-role
cat <<EoF> ~/pod-viewer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-viewer-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get", "watch"]
EoF
cat <<EoF> ~/pod-admin-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-admin-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["*"]
EoF
kubectl apply -f ~/pod-viewer-role.yaml
kubectl apply -f ~/pod-admin-role.yaml
# ๊ฒฐ๊ณผ
clusterrole.rbac.authorization.k8s.io/pod-viewer-role created
clusterrole.rbac.authorization.k8s.io/pod-admin-role created
16. Create viewer-role-binding and admin-role-binding
kubectl create clusterrolebinding viewer-role-binding --clusterrole=pod-viewer-role --group=pod-viewer
kubectl create clusterrolebinding admin-role-binding --clusterrole=pod-admin-role --group=pod-admin
# ๊ฒฐ๊ณผ
clusterrolebinding.rbac.authorization.k8s.io/viewer-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/admin-role-binding created
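For reference, the first imperative command above is equivalent to applying a manifest like this (a sketch of the viewer-role-binding that `kubectl create` generates):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewer-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-viewer-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: pod-viewer
```

The Group subject is what an access entry's --kubernetes-group option is matched against.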
17. Create an access entry with the pod-viewer Kubernetes group
aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --kubernetes-group pod-viewer
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
{
"accessEntry": {
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:user/testuser",
"kubernetesGroups": [
"pod-viewer"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:378102432899:access-entry/myeks/user/378102432899/testuser/90cacbe3-ed92-5977-f597-47857d3215d5",
"createdAt": "2025-03-15T12:10:12.297000+09:00",
"modifiedAt": "2025-03-15T12:10:12.297000+09:00",
"tags": {},
"username": "arn:aws:iam::378102432899:user/testuser",
"type": "STANDARD"
}
}
18. testuser๋ก ํ๋ ์กฐํ ๋ฐ ๊ถํ ํ์ธ
(1) ํ๋ ์กฐํ ๊ถํ โ o
1
(testuser:default) [root@operator-host-2 ~]# kubectl get pod -v6
โ ย ์ถ๋ ฅ
1
2
3
4
5
I0315 12:12:11.244919 19608 loader.go:395] Config loaded from file: /root/.kube/config
I0315 12:12:12.539044 19608 round_trippers.go:553] GET https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/default/pods?limit=500 200 OK in 1283 milliseconds
NAME READY STATUS RESTARTS AGE
awscli-pod-59598f6ff8-nxfpw 1/1 Running 0 51m
awscli-pod-59598f6ff8-x959k 1/1 Running 0 51m
(2) ํ๋ ์ญ์ ๊ถํ โ x
1
(testuser:default) [root@operator-host-2 ~]# kubectl auth can-i delete pods --all-namespaces
โ ย ์ถ๋ ฅ
1
no
19. Update the Kubernetes group (switch to pod-admin)
aws eks update-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --kubernetes-group pod-admin | jq -r .accessEntry
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
{
"clusterName": "myeks",
"principalArn": "arn:aws:iam::378102432899:user/testuser",
"kubernetesGroups": [
"pod-admin"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:378102432899:access-entry/myeks/user/378102432899/testuser/90cacbe3-ed92-5977-f597-47857d3215d5",
"createdAt": "2025-03-15T12:10:12.297000+09:00",
"modifiedAt": "2025-03-15T12:16:02.727000+09:00",
"tags": {},
"username": "arn:aws:iam::378102432899:user/testuser",
"type": "STANDARD"
}
20. ์ ๋ฐ์ดํธ ํ testuser ๊ถํ ์ฌํ์ธ
1
(testuser:default) [root@operator-host-2 ~]# kubectl auth can-i delete pods --all-namespaces
โ ย ์ถ๋ ฅ
1
yes
๐ EKS IRSA & Pod Identity
ํด๋ฌ์คํฐ ๊ตฌ์ฑ ํ์ผ(myeks.yaml) ํ์ธ
1
(eks-user@myeks:default) [root@operator-host ~]# cat myeks.yaml
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: myeks
region: ap-northeast-2
version: "1.31"
iam:
withOIDC: true
serviceAccounts:
- metadata:
name: aws-load-balancer-controller
namespace: kube-system
wellKnownPolicies:
awsLoadBalancerController: true
vpc:
cidr: 192.168.0.0/16
clusterEndpoints:
privateAccess: true
publicAccess: true
id: vpc-050ad5b5af470a60a
subnets:
public:
ap-northeast-2a:
az: ap-northeast-2a
cidr: 192.168.1.0/24
id: subnet-018486d88b18ed068
ap-northeast-2b:
az: ap-northeast-2b
cidr: 192.168.2.0/24
id: subnet-0a6e9d1ac60fb9ce8
ap-northeast-2c:
az: ap-northeast-2c
cidr: 192.168.3.0/24
id: subnet-08b145655560d7abc
addons:
- name: vpc-cni # no version is specified so it deploys the default version
version: latest # auto discovers the latest available
attachPolicyARNs: # attach IAM policies to the add-on's service account
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
configurationValues: |-
enableNetworkPolicy: "true"
- name: kube-proxy
version: latest
- name: coredns
version: latest
- name: metrics-server
version: latest
- name: aws-ebs-csi-driver
version: latest
wellKnownPolicies:
ebsCSIController: true
managedNodeGroups:
- amiFamily: AmazonLinux2023
desiredCapacity: 3
iam:
withAddonPolicies:
autoScaler: true
certManager: true
externalDNS: true
instanceType: t3.medium
preBootstrapCommands:
# install additional packages
- "dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y"
#- "curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.31.2/2024-11-15/bin/linux/amd64/kubectl"
#- "install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl"
labels:
alpha.eksctl.io/cluster-name: myeks
alpha.eksctl.io/nodegroup-name: ng1
maxPodsPerNode: 60
maxSize: 3
minSize: 3
name: ng1
ssh:
allow: true
publicKeyName: kp-aews
tags:
alpha.eksctl.io/nodegroup-name: ng1
alpha.eksctl.io/nodegroup-type: managed
volumeIOPS: 3000
volumeSize: 60
volumeThroughput: 125
volumeType: gp3
- ๋ ธ๋์ ๋ฐฐํฌ๋ Pod ํ๋๋ง ํ์ทจ๋๋๋ผ๋ ํด๋น ๋ ธ๋์ ๋งคํ๋ IAM Role์ ๊ถํ์ ๋ชจ๋ ์ฌ์ฉํ ์ ์์
- ๋ฐ๋ผ์ ๋ ธ๋์๋ ํ์ํ ์ต์ํ์ ๊ถํ๋ง ๋ถ์ฌํด์ผ ํจ
๐ฆ ํ๋ก์ ํฐ๋ ๋ณผ๋ฅจ์ ํตํ ํตํฉ ์คํ ๋ฆฌ์ง ๊ตฌ์ฑ
1. Secret ์์ฑ์ ํตํ ๋ฏผ๊ฐ ์ ๋ณด ๋ณดํธ
1
2
3
4
5
6
7
8
9
echo -n "admin" > ./username.txt
echo -n "1f2d1e2e67df" > ./password.txt
kubectl create secret generic user --from-file=./username.txt
kubectl create secret generic pass --from-file=./password.txt
# ๊ฒฐ๊ณผ
secret/user created
secret/pass created
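Note that Kubernetes stores Secret values base64-encoded, not encrypted; the encoding is trivially reversible and can be reproduced locally:

```shell
# Base64 is an encoding, not encryption: anyone who can read the Secret can decode it.
echo -n "admin" | base64          # YWRtaW4=
echo -n "1f2d1e2e67df" | base64   # MWYyZDFlMmU2N2Rm
echo -n "YWRtaW4=" | base64 -d    # admin
```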
2. Create a pod that uses a projected volume
kubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml
# ๊ฒฐ๊ณผ
pod/test-projected-volume created
3. Inspect the projected volume pod
kubectl get pod test-projected-volume -o yaml | kubectl neat
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
apiVersion: v1
kind: Pod
metadata:
name: test-projected-volume
namespace: default
spec:
containers:
- args:
- sleep
- "86400"
image: busybox:1.28
name: test-projected-volume
volumeMounts:
- mountPath: /projected-volume
name: all-in-one
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-kdlzq
readOnly: true
preemptionPolicy: PreemptLowerPriority
priority: 0
serviceAccountName: default
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: user
- secret:
name: pass
- name: kube-api-access-kdlzq
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.namespace
path: namespace
4. Verify the secret data inside the projected volume
(1) List the files in the projected volume
kubectl exec -it test-projected-volume -- ls /projected-volume/
โ ย ์ถ๋ ฅ
1
password.txt username.txt
(2) Check the contents of username.txt
kubectl exec -it test-projected-volume -- cat /projected-volume/username.txt ;echo
โ ย ์ถ๋ ฅ
1
admin
(3) Check the contents of password.txt
kubectl exec -it test-projected-volume -- cat /projected-volume/password.txt ;echo
โ ย ์ถ๋ ฅ
1
1f2d1e2e67df
5. ํ ์คํธ ํ ๋ฆฌ์์ค ์ ๋ฆฌ
1
2
3
4
5
6
kubectl delete pod test-projected-volume && kubectl delete secret user pass
# ๊ฒฐ๊ณผ
pod "test-projected-volume" deleted
secret "user" deleted
secret "pass" deleted
๐ซ ์ค์ต1: Service Accounts ๋ฏธ์ฌ์ฉ
1. IAM ๊ธฐ๋ฐ Pod ์์ฑ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: eks-iam-test1
spec:
containers:
- name: my-aws-cli
image: amazon/aws-cli:latest
args: ['s3', 'ls']
restartPolicy: Never
automountServiceAccountToken: false
terminationGracePeriodSeconds: 0
EOF
# ๊ฒฐ๊ณผ
pod/eks-iam-test1 created
2. Pod ์์ฑ ํ ์ํ ํ์ธ
1
kubectl get pod
โ ย ์ถ๋ ฅ
1
2
3
4
NAME READY STATUS RESTARTS AGE
awscli-pod-59598f6ff8-nxfpw 1/1 Running 0 83m
awscli-pod-59598f6ff8-x959k 1/1 Running 0 83m
eks-iam-test1 1/1 Running 0 3s
3. Pod ์์ธ ์ ๋ณด ํ์ธ
1
kubectl describe pod
โ ย ์ถ๋ ฅ
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
Name: awscli-pod-59598f6ff8-nxfpw
Namespace: default
Priority: 0
Service Account: default
Node: ip-192-168-3-100.ap-northeast-2.compute.internal/192.168.3.100
Start Time: Sat, 15 Mar 2025 11:20:23 +0900
Labels: app=awscli-pod
pod-template-hash=59598f6ff8
Annotations: <none>
Status: Running
IP: 192.168.3.26
IPs:
IP: 192.168.3.26
Controlled By: ReplicaSet/awscli-pod-59598f6ff8
Containers:
awscli-pod:
Container ID: containerd://a268e61dda2a2cc465942693970567de17b4cf6cc08642a39adfb816ffa7b63e
Image: amazon/aws-cli
Image ID: docker.io/amazon/aws-cli@sha256:904fd77c855c3999b9d13cce0c9ec11f2c0e9640881a41dee26340d45f63469b
Port: <none>
Host Port: <none>
Command:
tail
Args:
-f
/dev/null
State: Running
Started: Sat, 15 Mar 2025 11:20:36 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gc2sd (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-gc2sd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: awscli-pod-59598f6ff8-x959k
Namespace: default
Priority: 0
Service Account: default
Node: ip-192-168-2-112.ap-northeast-2.compute.internal/192.168.2.112
Start Time: Sat, 15 Mar 2025 11:20:23 +0900
Labels: app=awscli-pod
pod-template-hash=59598f6ff8
Annotations: <none>
Status: Running
IP: 192.168.2.65
IPs:
IP: 192.168.2.65
Controlled By: ReplicaSet/awscli-pod-59598f6ff8
Containers:
awscli-pod:
Container ID: containerd://26b074e90d99282681dbb506821219181614be4d3dd8e965308ea546e1f9f407
Image: amazon/aws-cli
Image ID: docker.io/amazon/aws-cli@sha256:904fd77c855c3999b9d13cce0c9ec11f2c0e9640881a41dee26340d45f63469b
Port: <none>
Host Port: <none>
Command:
tail
Args:
-f
/dev/null
State: Running
Started: Sat, 15 Mar 2025 11:20:38 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7jf6r (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-7jf6r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: eks-iam-test1
Namespace: default
Priority: 0
Service Account: default
Node: ip-192-168-1-170.ap-northeast-2.compute.internal/192.168.1.170
Start Time: Sat, 15 Mar 2025 12:43:29 +0900
Labels: <none>
Annotations: <none>
Status: Failed
IP: 192.168.1.194
IPs:
IP: 192.168.1.194
Containers:
my-aws-cli:
Container ID: containerd://4794b52cf8fb37772e3bf501f515aab7b1fd883a7dc0d6e9e8224c0b826f372f
Image: amazon/aws-cli:latest
Image ID: docker.io/amazon/aws-cli@sha256:904fd77c855c3999b9d13cce0c9ec11f2c0e9640881a41dee26340d45f63469b
Port: <none>
Host Port: <none>
Args:
s3
ls
State: Terminated
Reason: Error
Exit Code: 254
Started: Sat, 15 Mar 2025 12:43:31 +0900
Finished: Sat, 15 Mar 2025 12:43:33 +0900
Ready: False
Restart Count: 0
Environment: <none>
Mounts: <none>
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned default/eks-iam-test1 to ip-192-168-1-170.ap-northeast-2.compute.internal
Normal Pulling 49s kubelet Pulling image "amazon/aws-cli:latest"
Normal Pulled 48s kubelet Successfully pulled image "amazon/aws-cli:latest" in 1.363s (1.363s including waiting). Image size: 133092038 bytes.
Normal Created 48s kubelet Created container my-aws-cli
Normal Started 48s kubelet Started container my-aws-cli
4. IAM ๊ถํ ๋ฏธ๋ณด์ ๋ก ์ธํ S3 ์ ๊ทผ ๊ฑฐ๋ถ ํ์ธ
1
kubectl logs eks-iam-test1
โ ย ์ถ๋ ฅ
1
An error occurred (AccessDenied) when calling the ListBuckets operation: User: arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-024f66075a2000bb1 is not authorized to perform: s3:ListAllMyBuckets because no identity-based policy allows the s3:ListAllMyBuckets action
5. Delete the failed Pod
kubectl delete pod eks-iam-test1
# Result
pod "eks-iam-test1" deleted
Lab 2: Service Accounts
1. Create the eks-iam-test2 Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: eks-iam-test2
spec:
containers:
- name: my-aws-cli
image: amazon/aws-cli:latest
command: ['sleep', '36000']
restartPolicy: Never
terminationGracePeriodSeconds: 0
EOF
# Result
pod/eks-iam-test2 created
2. Check the Pod's ServiceAccount token
kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ;echo
✅ Output
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdlOTJmOGQ3NmM5ODUzNzUzYTZjOWExYzlkOTU5NzBkMjFkN2UxY2IifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MzU0NjU3NiwiaWF0IjoxNzQyMDEwNTc2LCJpc3MiOiJodHRwczovL29pZGMuZWtzLmFwLW5vcnRoZWFzdC0yLmFtYXpvbmF3cy5jb20vaWQvOTExMjQzNTA2NEI4MjQ0OThBRjY4QzY5NkQwNTE4M0MiLCJqdGkiOiIzN2Y2ZTEyZi02YWZkLTQ0NWUtODY2Yy0zNzNiNWQxNzQ4MWUiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJub2RlIjp7Im5hbWUiOiJpcC0xOTItMTY4LTMtMTAwLmFwLW5vcnRoZWFzdC0yLmNvbXB1dGUuaW50ZXJuYWwiLCJ1aWQiOiJmNDRkMDViNS1lMTFkLTRkOWUtOGI3Zi04NWZkNWI0YjgzNjgifSwicG9kIjp7Im5hbWUiOiJla3MtaWFtLXRlc3QyIiwidWlkIjoiMGRlN2E1MDgtMmFkYS00NGRjLWE5NTAtOWIxNjFjYzhiYzEwIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiOGQ3NTgxZDYtYWIxNi00MjMwLTkyNTMtZTIyYTU2ODE3OTgyIn0sIndhcm5hZnRlciI6MTc0MjAxNDE4M30sIm5iZiI6MTc0MjAxMDU3Niwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZGVmYXVsdCJ9.SAtgxfOIczYaSe9wEUbY833qslMQfOdS1V_JJjZrnDHTuC6dOZO4HW5RPvqxUu9cmYo5mJWk6MN25TnQbIsf22WPb5rMiHlPhuSFVpijACh0_EIxHMtmeoQO_-ufc6yZM-elTawuMcwKRgGM_JRCsMg_yrYfi-antPVw_ud7HYsODIRj8aCftsejWpGM1wJDVGrtLU6A4g9K09LiPKFqoYbQNaAXXmglMbO8ChGCqBADvwoCt4n8BIFvfU4W2cOex_L5r8x1_T34SB8MFtvykvLCJg36FICbJ3vyTo5ZHbfCEbB8Du9l51uydHIty8QtwdUVF4z6i3P7uchy9rWLbA
3. Attempt to access the AWS S3 service
kubectl exec -it eks-iam-test2 -- aws s3 ls
✅ Output
An error occurred (AccessDenied) when calling the ListBuckets operation: User: arn:aws:sts::378102432899:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-u2QV8pJrrx02/i-0658adcdb6bceb939 is not authorized to perform: s3:ListAllMyBuckets because no identity-based policy allows the s3:ListAllMyBuckets action
command terminated with exit code 254
4. Save the service account token to a variable and print it
SA_TOKEN=$(kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
echo $SA_TOKEN
✅ Output
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdlOTJmOGQ3NmM5ODUzNzUzYTZjOWExYzlkOTU5NzBkMjFkN2UxY2IifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MzU0NjU3NiwiaWF0IjoxNzQyMDEwNTc2LCJpc3MiOiJodHRwczovL29pZGMuZWtzLmFwLW5vcnRoZWFzdC0yLmFtYXpvbmF3cy5jb20vaWQvOTExMjQzNTA2NEI4MjQ0OThBRjY4QzY5NkQwNTE4M0MiLCJqdGkiOiIzN2Y2ZTEyZi02YWZkLTQ0NWUtODY2Yy0zNzNiNWQxNzQ4MWUiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJub2RlIjp7Im5hbWUiOiJpcC0xOTItMTY4LTMtMTAwLmFwLW5vcnRoZWFzdC0yLmNvbXB1dGUuaW50ZXJuYWwiLCJ1aWQiOiJmNDRkMDViNS1lMTFkLTRkOWUtOGI3Zi04NWZkNWI0YjgzNjgifSwicG9kIjp7Im5hbWUiOiJla3MtaWFtLXRlc3QyIiwidWlkIjoiMGRlN2E1MDgtMmFkYS00NGRjLWE5NTAtOWIxNjFjYzhiYzEwIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiOGQ3NTgxZDYtYWIxNi00MjMwLTkyNTMtZTIyYTU2ODE3OTgyIn0sIndhcm5hZnRlciI6MTc0MjAxNDE4M30sIm5iZiI6MTc0MjAxMDU3Niwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZGVmYXVsdCJ9.SAtgxfOIczYaSe9wEUbY833qslMQfOdS1V_JJjZrnDHTuC6dOZO4HW5RPvqxUu9cmYo5mJWk6MN25TnQbIsf22WPb5rMiHlPhuSFVpijACh0_EIxHMtmeoQO_-ufc6yZM-elTawuMcwKRgGM_JRCsMg_yrYfi-antPVw_ud7HYsODIRj8aCftsejWpGM1wJDVGrtLU6A4g9K09LiPKFqoYbQNaAXXmglMbO8ChGCqBADvwoCt4n8BIFvfU4W2cOex_L5r8x1_T34SB8MFtvykvLCJg36FICbJ3vyTo5ZHbfCEbB8Du9l51uydHIty8QtwdUVF4z6i3P7uchy9rWLbA
5. Analyze the JWT token (using a web tool)
- https://jwt.io/
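The token can also be decoded locally without a web tool. A minimal sketch (the claims below are illustrative placeholders mirroring the service-account token above, not the real token):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the unverified payload: the middle part of header.payload.signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical claims modeled on the default service-account token above.
claims = {
    "iss": "https://oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE",
    "sub": "system:serviceaccount:default:default",
    "aud": ["https://kubernetes.default.svc"],
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "e30." + payload + ".signature"
print(decode_jwt_payload(token)["sub"])  # system:serviceaccount:default:default
```

Note the `aud` claim is the Kubernetes API server, not STS, which is one reason this token alone grants no AWS permissions.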
6. Delete the eks-iam-test2 Pod
kubectl delete pod eks-iam-test2
# Result
pod "eks-iam-test2" deleted
Lab 3: IRSA & Pod Identity
1. Create an IAM service account (IRSA)
eksctl create iamserviceaccount \
--name my-sa \
--namespace default \
--cluster $CLUSTER_NAME \
--approve \
--attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn' --output text)
✅ Output
2025-03-15 12:56:47 [ℹ] 1 existing iamserviceaccount(s) (kube-system/aws-load-balancer-controller) will be excluded
2025-03-15 12:56:47 [ℹ] 1 iamserviceaccount (default/my-sa) was included (based on the include/exclude rules)
2025-03-15 12:56:47 [!] serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2025-03-15 12:56:47 [ℹ] 1 task: {
2 sequential sub-tasks: {
create IAM role for serviceaccount "default/my-sa",
create serviceaccount "default/my-sa",
} }
2025-03-15 12:56:47 [ℹ] building iamserviceaccount stack "eksctl-myeks-addon-iamserviceaccount-default-my-sa"
2025-03-15 12:56:48 [ℹ] deploying stack "eksctl-myeks-addon-iamserviceaccount-default-my-sa"
2025-03-15 12:56:48 [ℹ] waiting for CloudFormation stack "eksctl-myeks-addon-iamserviceaccount-default-my-sa"
2025-03-15 12:57:18 [ℹ] waiting for CloudFormation stack "eksctl-myeks-addon-iamserviceaccount-default-my-sa"
2025-03-15 12:57:18 [ℹ] created serviceaccount "default/my-sa"
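Behind the scenes, eksctl creates an IAM role whose trust policy lets the cluster's OIDC provider assume it only for this exact service account. A rough sketch of that trust policy (account ID and OIDC provider ID are placeholders, not values from this cluster):

```python
import json

# Placeholder account ID and OIDC provider ID; real values come from your cluster.
account_id = "111122223333"
oidc = "oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE1234567890"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc}"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Only tokens minted for default/my-sa with the STS audience qualify.
                    f"{oidc}:sub": "system:serviceaccount:default:my-sa",
                    f"{oidc}:aud": "sts.amazonaws.com",
                }
            },
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```

The `sub` condition is what scopes the role to the single Kubernetes service account rather than the whole cluster.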
2. Verify the Kubernetes service account details
(1) List the IAM service accounts
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
✅ Output
NAMESPACE NAME ROLE ARN
default my-sa arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-default--Role1-vEyynqVMXUxH
kube-system aws-load-balancer-controller arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-W51wMh6qKjFc
(2) Inspect the Kubernetes service account
kubectl get sa
kubectl describe sa my-sa
✅ Output
NAME SECRETS AGE
default 0 14h
my-sa 0 5m39s
Name: my-sa
Namespace: default
Labels: app.kubernetes.io/managed-by=eksctl
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-default--Role1-vEyynqVMXUxH
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
3. Create a Pod that uses IRSA
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: eks-iam-test3
spec:
serviceAccountName: my-sa
containers:
- name: my-aws-cli
image: amazon/aws-cli:latest
command: ['sleep', '36000']
restartPolicy: Never
terminationGracePeriodSeconds: 0
EOF
# Result
pod/eks-iam-test3 created
4. Check the Pod Identity Webhook configuration
kubectl get mutatingwebhookconfigurations pod-identity-webhook -o yaml
✅ Output
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"admissionregistration.k8s.io/v1","kind":"MutatingWebhookConfiguration","metadata":{"annotations":{},"name":"pod-identity-webhook"},"webhooks":[{"admissionReviewVersions":["v1beta1"],"clientConfig":{"caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","url":"https://127.0.0.1:23443/mutate"},"failurePolicy":"Ignore","name":"iam-for-pods.amazonaws.com","objectSelector":{"matchExpressions":[{"key":"eks.amazonaws.com/skip-pod-identity-webhook","operator":"DoesNotExist","values":[]}]},"reinvocationPolicy":"IfNeeded","rules":[{"apiGroups":[""],"apiV
ersions":["v1"],"operations":["CREATE"],"resources":["pods"]}],"sideEffects":"None"}]}
creationTimestamp: "2025-03-14T13:58:45Z"
generation: 1
name: pod-identity-webhook
resourceVersion: "278"
uid: 1a127f47-b944-4aae-bc7d-ad2cfc15c701
webhooks:
- admissionReviewVersions:
- v1beta1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
url: https://127.0.0.1:23443/mutate
failurePolicy: Ignore
matchPolicy: Equivalent
name: iam-for-pods.amazonaws.com
namespaceSelector: {}
objectSelector:
matchExpressions:
- key: eks.amazonaws.com/skip-pod-identity-webhook
operator: DoesNotExist
reinvocationPolicy: IfNeeded
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
resources:
- pods
scope: '*'
sideEffects: None
timeoutSeconds: 10
5. Check the assumed IAM role ARN from inside the Pod
kubectl exec -it eks-iam-test3 -- aws sts get-caller-identity --query Arn
✅ Output
"arn:aws:sts::378102432899:assumed-role/eksctl-myeks-addon-iamserviceaccount-default--Role1-vEyynqVMXUxH/botocore-session-1742011852"
6. Test S3 permissions (succeeds)
kubectl exec -it eks-iam-test3 -- aws s3 ls
# No error occurs
7. Confirm that describing EC2 instances fails
kubectl exec -it eks-iam-test3 -- aws ec2 describe-instances --region ap-northeast-2
✅ Output
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation. User: arn:aws:sts::378102432899:assumed-role/eksctl-myeks-addon-iamserviceaccount-default--Role1-vEyynqVMXUxH/botocore-session-1742011852 is not authorized to perform: ec2:DescribeInstances because no identity-based policy allows the ec2:DescribeInstances action
command terminated with exit code 254
8. Confirm that describing VPCs fails
kubectl exec -it eks-iam-test3 -- aws ec2 describe-vpcs --region ap-northeast-2
✅ Output
An error occurred (UnauthorizedOperation) when calling the DescribeVpcs operation: You are not authorized to perform this operation. User: arn:aws:sts::378102432899:assumed-role/eksctl-myeks-addon-iamserviceaccount-default--Role1-vEyynqVMXUxH/botocore-session-1742011852 is not authorized to perform: ec2:DescribeVpcs because no identity-based policy allows the ec2:DescribeVpcs action
command terminated with exit code 254
9. Check AssumeRoleWithWebIdentity events in CloudTrail
10. Check the Pod's volume mounts
kubectl get pod eks-iam-test3 -o json | jq -r '.spec.containers | .[].volumeMounts'
✅ Output
[
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "kube-api-access-lb79n",
"readOnly": true
},
{
"mountPath": "/var/run/secrets/eks.amazonaws.com/serviceaccount",
"name": "aws-iam-token",
"readOnly": true
}
]
11. Check the aws-iam-token volume details
kubectl get pod eks-iam-test3 -o json | jq -r '.spec.volumes[] | select(.name=="aws-iam-token")'
✅ Output
{
"name": "aws-iam-token",
"projected": {
"defaultMode": 420,
"sources": [
{
"serviceAccountToken": {
"audience": "sts.amazonaws.com",
"expirationSeconds": 86400,
"path": "token"
}
}
]
}
}
12. Check the pod-identity-webhook configuration
kubectl get MutatingWebhookConfiguration pod-identity-webhook -o yaml
✅ Output
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"admissionregistration.k8s.io/v1","kind":"MutatingWebhookConfiguration","metadata":{"annotations":{},"name":"pod-identity-webhook"},"webhooks":[{"admissionReviewVersions":["v1beta1"],"clientConfig":{"caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","url":"https://127.0.0.1:23443/mutate"},"failurePolicy":"Ignore","name":"iam-for-pods.amazonaws.com","objectSelector":{"matchExpressions":[{"key":"eks.amazonaws.com/skip-pod-identity-webhook","operator":"DoesNotExist","values":[]}]},"reinvocationPolicy":"IfNeeded","rules":[{"apiGroups":[""],"apiV
ersions":["v1"],"operations":["CREATE"],"resources":["pods"]}],"sideEffects":"None"}]}
creationTimestamp: "2025-03-14T13:58:45Z"
generation: 1
name: pod-identity-webhook
resourceVersion: "278"
uid: 1a127f47-b944-4aae-bc7d-ad2cfc15c701
webhooks:
- admissionReviewVersions:
- v1beta1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
url: https://127.0.0.1:23443/mutate
failurePolicy: Ignore
matchPolicy: Equivalent
name: iam-for-pods.amazonaws.com
namespaceSelector: {}
objectSelector:
matchExpressions:
- key: eks.amazonaws.com/skip-pod-identity-webhook
operator: DoesNotExist
reinvocationPolicy: IfNeeded
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
resources:
- pods
scope: '*'
sideEffects: None
timeoutSeconds: 10
13. Check the contents of AWS_WEB_IDENTITY_TOKEN_FILE
IAM_TOKEN=$(kubectl exec -it eks-iam-test3 -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token)
echo $IAM_TOKEN
✅ Output
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdlOTJmOGQ3NmM5ODUzNzUzYTZjOWExYzlkOTU5NzBkMjFkN2UxY2IifQ.eyJhdWQiOlsic3RzLmFtYXpvbmF3cy5jb20iXSwiZXhwIjoxNzQyMDk4MDk0LCJpYXQiOjE3NDIwMTE2OTQsImlzcyI6Imh0dHBzOi8vb2lkYy5la3MuYXAtbm9ydGhlYXN0LTIuYW1hem9uYXdzLmNvbS9pZC85MTEyNDM1MDY0QjgyNDQ5OEFGNjhDNjk2RDA1MTgzQyIsImp0aSI6IjU5N2M1ZTI3LTVmOTktNDk1Ni05NGQwLWM5MjdjMDRlMjFhNCIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoiZGVmYXVsdCIsIm5vZGUiOnsibmFtZSI6ImlwLTE5Mi0xNjgtMS0xNzAuYXAtbm9ydGhlYXN0LTIuY29tcHV0ZS5pbnRlcm5hbCIsInVpZCI6ImRiZTZlY2M3LTE4ZjctNDA3MS04Zjc5LWMxOWY1ZmIwMWVkMSJ9LCJwb2QiOnsibmFtZSI6ImVrcy1pYW0tdGVzdDMiLCJ1aWQiOiJhZTczNzNiZC03ZmI0LTRmYzQtODBhNi1lYTY4YWZkMzYyZjYifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6Im15LXNhIiwidWlkIjoiZmFkODM0ZWYtMzA4ZC00YjUzLWJmNDQtN2FhM2QwYmFjY2RkIn19LCJuYmYiOjE3NDIwMTE2OTQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Om15LXNhIn0.gqN7WHcwMgUqgPa5z7nqBZy_SoKKbYyslGg_zAWEH2Wn7I0w1Np0SS0T6ussquAWgY6fFo6hgFkjTQiu0xGuuyfNX8Q8Csu9ChoUQCZisuSCQCESuylc_CPj7VFjt7Pi67uHrsV2UL0wt9z9avo9s5EGudpnxObF5N8eu9kKnjaLZ9Y8YBcZkujzmEcbMrzl95fu21I7iei6rqDqVa_W9G0zqpYba47EdfWHg6zNRd0mXRfaYISwnoWuWS_6w56UXNqoybq7QbZE94RdK65TJJNHcmSnM2WVpp6eVi_dwJX4_DSmZjubon0rqfu9L3RhbOTAVcjscpGe-PkkUpZ1fQ
14. Check the Pod container's environment variables
kubectl get pod eks-iam-test3 -o json | jq -r '.spec.containers | .[].env'
✅ Output
[
{
"name": "AWS_STS_REGIONAL_ENDPOINTS",
"value": "regional"
},
{
"name": "AWS_DEFAULT_REGION",
"value": "ap-northeast-2"
},
{
"name": "AWS_REGION",
"value": "ap-northeast-2"
},
{
"name": "AWS_ROLE_ARN",
"value": "arn:aws:iam::378102432899:role/eksctl-myeks-addon-iamserviceaccount-default--Role1-vEyynqVMXUxH"
},
{
"name": "AWS_WEB_IDENTITY_TOKEN_FILE",
"value": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
}
]
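The AWS SDK's default credential chain sees AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE and exchanges the projected token for temporary credentials via sts:AssumeRoleWithWebIdentity. A minimal sketch of that exchange (the role ARN, token contents, and session name below are illustrative, not taken from this cluster):

```python
import os
import tempfile

# Simulate the two variables the pod-identity-webhook injects (values illustrative).
token_file = tempfile.NamedTemporaryFile(mode="w", suffix=".token", delete=False)
token_file.write("header.payload.signature")
token_file.close()
os.environ["AWS_ROLE_ARN"] = "arn:aws:iam::111122223333:role/my-irsa-role"
os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"] = token_file.name

def assume_role_params() -> dict:
    """Build the parameters an SDK would send to sts:AssumeRoleWithWebIdentity."""
    with open(os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"]) as f:
        web_identity_token = f.read()
    return {
        "Action": "AssumeRoleWithWebIdentity",
        "RoleArn": os.environ["AWS_ROLE_ARN"],
        "RoleSessionName": "example-session",
        "WebIdentityToken": web_identity_token,
    }

params = assume_role_params()
print(params["RoleArn"])
```

This is why no static access keys appear anywhere in the Pod: the short-lived token file is the only credential material mounted.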
EKS Pod Identity
1. Install the EKS Pod Identity agent add-on
aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name eks-pod-identity-agent
✅ Output
{
"addon": {
"addonName": "eks-pod-identity-agent",
"clusterName": "myeks",
"status": "CREATING",
"addonVersion": "v1.3.4-eksbuild.1",
"health": {
"issues": []
},
"addonArn": "arn:aws:eks:ap-northeast-2:378102432899:addon/myeks/eks-pod-identity-agent/02cacc09-2234-d05b-4402-3afd4324440e",
"createdAt": "2025-03-15T13:31:29.045000+09:00",
"modifiedAt": "2025-03-15T13:31:29.060000+09:00",
"tags": {}
}
}
2. Check the add-on status
eksctl get addon --cluster $CLUSTER_NAME
✅ Output
2025-03-15 13:32:26 [ℹ] Kubernetes version "1.31" in use by cluster "myeks"
2025-03-15 13:32:26 [ℹ] getting all addons
2025-03-15 13:32:28 [ℹ] to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME VERSION STATUS ISSUES IAMROLEUPDATE AVAILABLE CONFIGURATION VALUES POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver v1.40.1-eksbuild.1 ACTIVE 0 arn:aws:iam::378102432899:role/eksctl-myeks-addon-aws-ebs-csi-driver-Role1-m5BypBJRv4Oq
coredns v1.11.4-eksbuild.2 ACTIVE 0
eks-pod-identity-agent v1.3.4-eksbuild.1 ACTIVE 0 v1.3.5-eksbuild.2
kube-proxy v1.31.3-eksbuild.2 ACTIVE 0
metrics-server v0.7.2-eksbuild.2 ACTIVE 0
vpc-cni v1.19.3-eksbuild.1 ACTIVE 0 arn:aws:iam::378102432899:role/eksctl-myeks-addon-vpc-cni-Role1-hDIZNrSDJAlH enableNetworkPolicy: "true"
3. List all Pods in the cluster
k get pod -A
✅ Output
NAMESPACE NAME READY STATUS RESTARTS AGE
default awscli-pod-59598f6ff8-nxfpw 1/1 Running 0 133m
default awscli-pod-59598f6ff8-x959k 1/1 Running 0 133m
default eks-iam-test3 1/1 Running 0 25m
kube-system aws-load-balancer-controller-554fbd9d-8r96n 1/1 Running 0 13h
kube-system aws-load-balancer-controller-554fbd9d-dzk6c 1/1 Running 0 13h
kube-system aws-node-7s9dd 2/2 Running 0 14h
kube-system aws-node-d6v7m 2/2 Running 0 14h
kube-system aws-node-wnd97 2/2 Running 0 14h
kube-system coredns-86f5954566-cc4m4 1/1 Running 0 14h
kube-system coredns-86f5954566-t7gfd 1/1 Running 0 14h
kube-system ebs-csi-controller-9c9c4d49f-5ns8n 6/6 Running 0 14h
kube-system ebs-csi-controller-9c9c4d49f-j6drv 6/6 Running 0 14h
kube-system ebs-csi-node-k5spq 3/3 Running 0 14h
kube-system ebs-csi-node-p4tcb 3/3 Running 0 14h
kube-system ebs-csi-node-rrkrv 3/3 Running 0 14h
kube-system eks-pod-identity-agent-n6kgs 1/1 Running 0 112s
kube-system eks-pod-identity-agent-ql55v 1/1 Running 0 112s
kube-system eks-pod-identity-agent-zcsrl 1/1 Running 0 112s
kube-system external-dns-dc4878f5f-6cg6l 1/1 Running 0 13h
kube-system kube-ops-view-657dbc6cd8-shbj2 1/1 Running 0 13h
kube-system kube-proxy-ddmmr 1/1 Running 0 14h
kube-system kube-proxy-szvqr 1/1 Running 0 14h
kube-system kube-proxy-z9fjt 1/1 Running 0 14h
kube-system metrics-server-6bf5998d9c-5b5r5 1/1 Running 0 14h
kube-system metrics-server-6bf5998d9c-bscbc 1/1 Running 0 14h
monitoring kube-prometheus-stack-grafana-78bc45ff97-whzh7 3/3 Running 0 13h
monitoring kube-prometheus-stack-kube-state-metrics-5dbfbd4b9-2zgg5 1/1 Running 0 13h
monitoring kube-prometheus-stack-operator-76bdd654bf-9j7dk 1/1 Running 0 13h
monitoring kube-prometheus-stack-prometheus-node-exporter-bxbs6 1/1 Running 0 13h
monitoring kube-prometheus-stack-prometheus-node-exporter-kb24c 1/1 Running 0 13h
monitoring kube-prometheus-stack-prometheus-node-exporter-n896q 1/1 Running 0 13h
monitoring prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 13h
- Three eks-pod-identity-agent Pods are deployed in the kube-system namespace (one per node)
4. Inspect the DaemonSet configuration
kubectl get ds -n kube-system eks-pod-identity-agent -o yaml
✅ Output
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: "1"
creationTimestamp: "2025-03-15T04:31:35Z"
generation: 1
labels:
app.kubernetes.io/instance: eks-pod-identity-agent
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: eks-pod-identity-agent
app.kubernetes.io/version: 0.1.17
helm.sh/chart: eks-pod-identity-agent-1.3.4
name: eks-pod-identity-agent
namespace: kube-system
resourceVersion: "242246"
uid: b128b387-8066-4747-8b4c-a4efbbe067e4
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: eks-pod-identity-agent
app.kubernetes.io/name: eks-pod-identity-agent
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: eks-pod-identity-agent
app.kubernetes.io/name: eks-pod-identity-agent
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
- key: eks.amazonaws.com/compute-type
operator: NotIn
values:
- fargate
- hybrid
- auto
containers:
- args:
- --port
- "80"
- --cluster-name
- myeks
- --probe-port
- "2703"
command:
- /go-runner
- /eks-pod-identity-agent
- server
env:
- name: AWS_REGION
value: ap-northeast-2
image: 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/eks-pod-identity-agent:0.1.17
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
host: localhost
path: /healthz
port: probes-port
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: eks-pod-identity-agent
ports:
- containerPort: 80
name: proxy
protocol: TCP
- containerPort: 2703
name: probes-port
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
host: localhost
path: /readyz
port: probes-port
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
resources: {}
securityContext:
capabilities:
add:
- CAP_NET_BIND_SERVICE
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
hostNetwork: true
initContainers:
- command:
- /go-runner
- /eks-pod-identity-agent
- initialize
image: 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/eks-pod-identity-agent:0.1.17
imagePullPolicy: Always
name: eks-pod-identity-agent-init
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
priorityClassName: system-node-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 10%
type: RollingUpdate
status:
currentNumberScheduled: 3
desiredNumberScheduled: 3
numberAvailable: 3
numberMisscheduled: 0
numberReady: 3
observedGeneration: 1
updatedNumberScheduled: 3
5. Check the agent's listening sockets on each node
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ss -tnlp | grep eks-pod-identit; echo "-----";done
✅ Output
LISTEN 0 4096 169.254.170.23:80 0.0.0.0:* users:(("eks-pod-identit",pid=274822,fd=3))
LISTEN 0 4096 127.0.0.1:2703 0.0.0.0:* users:(("eks-pod-identit",pid=274822,fd=4))
LISTEN 0 4096 [fd00:ec2::23]:80 [::]:* users:(("eks-pod-identit",pid=274822,fd=8))
-----
LISTEN 0 4096 169.254.170.23:80 0.0.0.0:* users:(("eks-pod-identit",pid=273542,fd=8))
LISTEN 0 4096 127.0.0.1:2703 0.0.0.0:* users:(("eks-pod-identit",pid=273542,fd=7))
LISTEN 0 4096 [fd00:ec2::23]:80 [::]:* users:(("eks-pod-identit",pid=273542,fd=6))
-----
LISTEN 0 4096 127.0.0.1:2703 0.0.0.0:* users:(("eks-pod-identit",pid=274304,fd=6))
LISTEN 0 4096 169.254.170.23:80 0.0.0.0:* users:(("eks-pod-identit",pid=274304,fd=8))
LISTEN 0 4096 [fd00:ec2::23]:80 [::]:* users:(("eks-pod-identit",pid=274304,fd=7))
-----
6. Check the routing table on each node
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ip -c route; done
✅ Output
default via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.170 metric 1024
169.254.170.23 dev pod-id-link0
192.168.0.2 via 192.168.1.1 dev ens5 proto dhcp src 192.168.1.170 metric 1024
192.168.1.0/24 dev ens5 proto kernel scope link src 192.168.1.170 metric 1024
192.168.1.1 dev ens5 proto dhcp scope link src 192.168.1.170 metric 1024
192.168.1.96 dev eni1fb6ee8b3c0 scope link
192.168.1.97 dev enidef047f8c68 scope link
192.168.1.147 dev enie56f4c00502 scope link
192.168.1.154 dev enib4a035e6ce0 scope link
192.168.1.169 dev eniab10bd74de4 scope link
192.168.1.194 dev eniad482f944f9 scope link
192.168.1.205 dev eni1ef27223337 scope link
default via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.112 metric 1024
169.254.170.23 dev pod-id-link0
192.168.0.2 via 192.168.2.1 dev ens5 proto dhcp src 192.168.2.112 metric 1024
192.168.2.0/24 dev ens5 proto kernel scope link src 192.168.2.112 metric 1024
192.168.2.1 dev ens5 proto dhcp scope link src 192.168.2.112 metric 1024
192.168.2.65 dev eni438eb40c72d scope link
192.168.2.94 dev eni51a70c9f950 scope link
192.168.2.99 dev enicae80e9eb37 scope link
192.168.2.110 dev eni936c1913cda scope link
192.168.2.139 dev enic1f5e2e4b85 scope link
192.168.2.188 dev eniff64e513d5d scope link
default via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.100 metric 1024
169.254.170.23 dev pod-id-link0
192.168.0.2 via 192.168.3.1 dev ens5 proto dhcp src 192.168.3.100 metric 1024
192.168.3.0/24 dev ens5 proto kernel scope link src 192.168.3.100 metric 1024
192.168.3.1 dev ens5 proto dhcp scope link src 192.168.3.100 metric 1024
192.168.3.19 dev enibb020840ab8 scope link
192.168.3.20 dev enibb4818dd14c scope link
192.168.3.26 dev enie5f3ff3c235 scope link
192.168.3.62 dev eni04b21f34581 scope link
192.168.3.73 dev eni7afb87f22ac scope link
192.168.3.115 dev eni5a291dc8665 scope link
192.168.3.203 dev enia9e288fb883 scope link
- The 169.254.170.23 route is mapped to the pod-id-link0 interface
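With Pod Identity, SDKs fetch credentials from the node-local agent at that link-local address instead of calling STS directly: the webhook injects AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE, and the SDK sends the projected token in the Authorization header. A minimal sketch of that request (the token value is illustrative; the endpoint matches the route above):

```python
from urllib.request import Request

# Endpoint served by the eks-pod-identity-agent on every node (see the route above).
CRED_URI = "http://169.254.170.23/v1/credentials"

def build_credential_request(token: str) -> Request:
    """SDKs call the node-local agent, authorizing with the projected pod token."""
    return Request(CRED_URI, headers={"Authorization": token})

req = build_credential_request("example-projected-token")
print(req.full_url)  # http://169.254.170.23/v1/credentials
```

The agent then performs the actual AssumeRole call using the Pod Identity association, so the Pod itself never needs an OIDC trust policy.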
7. Check IP address information on each node
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ip -c addr; done
✅ Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:b3:60:13:a3:95 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.1.170/24 metric 1024 brd 192.168.1.255 scope global dynamic ens5
valid_lft 1846sec preferred_lft 1846sec
inet6 fe80::b3:60ff:fe13:a395/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: eniab10bd74de4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 12:8c:7e:be:1d:44 brd ff:ff:ff:ff:ff:ff link-netns cni-3e888494-b264-1414-1011-2496d2daacb2
inet6 fe80::108c:7eff:febe:1d44/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: eni1fb6ee8b3c0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 9e:e5:a8:10:24:8f brd ff:ff:ff:ff:ff:ff link-netns cni-20795375-baf8-5260-227a-777221b09b4e
inet6 fe80::9ce5:a8ff:fe10:248f/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:e1:63:39:55:2d brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.1.218/24 brd 192.168.1.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::e1:63ff:fe39:552d/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
6: enib4a035e6ce0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 3e:23:08:e3:d5:5b brd ff:ff:ff:ff:ff:ff link-netns cni-5302774d-cbdc-1171-da6f-ff221edf04c3
inet6 fe80::3c23:8ff:fee3:d55b/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
7: enidef047f8c68@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 2a:01:fc:e8:0d:3d brd ff:ff:ff:ff:ff:ff link-netns cni-ae19e7f4-a1bb-013b-52f7-727dcc72335f
inet6 fe80::2801:fcff:fee8:d3d/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
9: eni1ef27223337@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether ae:32:a6:de:cc:72 brd ff:ff:ff:ff:ff:ff link-netns cni-dc0a2b93-2617-6d44-88fa-a0c8b7f94e9f
inet6 fe80::ac32:a6ff:fede:cc72/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
11: enie56f4c00502@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 26:bf:59:b1:17:2e brd ff:ff:ff:ff:ff:ff link-netns cni-4a62e4d1-5662-7da7-8b51-6c2d036d1828
inet6 fe80::24bf:59ff:feb1:172e/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
12: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:1e:2b:c0:7a:17 brd ff:ff:ff:ff:ff:ff
altname enp0s7
inet 192.168.1.76/24 brd 192.168.1.255 scope global ens7
valid_lft forever preferred_lft forever
inet6 fe80::1e:2bff:fec0:7a17/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
16: eniad482f944f9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 86:a5:01:52:91:85 brd ff:ff:ff:ff:ff:ff link-netns cni-47688cf4-0c20-d87c-36ae-3d205f933bab
inet6 fe80::84a5:1ff:fe52:9185/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
17: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 6a:ba:d5:1f:67:77 brd ff:ff:ff:ff:ff:ff
inet 169.254.170.23/32 scope global pod-id-link0
valid_lft forever preferred_lft forever
inet6 fd00:ec2::23/128 scope global
valid_lft forever preferred_lft forever
inet6 fe80::68ba:d5ff:fe1f:6777/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:ce:3b:be:0e:69 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.2.112/24 metric 1024 brd 192.168.2.255 scope global dynamic ens5
valid_lft 1846sec preferred_lft 1846sec
inet6 fe80::4ce:3bff:febe:e69/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enic1f5e2e4b85@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 46:ec:45:9e:8d:69 brd ff:ff:ff:ff:ff:ff link-netns cni-35cd6f0b-8933-aeaf-3644-9f81cacc8848
inet6 fe80::44ec:45ff:fe9e:8d69/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: eniff64e513d5d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether f6:2e:20:f0:6c:55 brd ff:ff:ff:ff:ff:ff link-netns cni-77183b72-0768-b697-478f-2b908a4c48d1
inet6 fe80::f42e:20ff:fef0:6c55/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:83:3f:63:27:21 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.2.56/24 brd 192.168.2.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::483:3fff:fe63:2721/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
6: eni936c1913cda@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether b6:d9:cb:53:bd:65 brd ff:ff:ff:ff:ff:ff link-netns cni-b06ec2c9-713a-b18f-4389-633c1fbede97
inet6 fe80::b4d9:cbff:fe53:bd65/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
7: eni51a70c9f950@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 0a:37:6c:7a:82:6c brd ff:ff:ff:ff:ff:ff link-netns cni-38c1d528-751e-ba97-4234-104442207091
inet6 fe80::837:6cff:fe7a:826c/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
8: enicae80e9eb37@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 42:c4:dd:30:d2:8d brd ff:ff:ff:ff:ff:ff link-netns cni-4acc3516-1ec8-14b8-4ac9-fbd97f51bcb3
inet6 fe80::40c4:ddff:fe30:d28d/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
11: eni438eb40c72d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether ba:db:e4:62:eb:36 brd ff:ff:ff:ff:ff:ff link-netns cni-4d1f582e-4e0e-402a-53de-dbd3a6a099b2
inet6 fe80::b8db:e4ff:fe62:eb36/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
12: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 06:02:1a:ca:56:19 brd ff:ff:ff:ff:ff:ff
altname enp0s7
inet 192.168.2.9/24 brd 192.168.2.255 scope global ens7
valid_lft forever preferred_lft forever
inet6 fe80::402:1aff:feca:5619/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
13: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether c2:60:f0:87:d0:a4 brd ff:ff:ff:ff:ff:ff
inet 169.254.170.23/32 scope global pod-id-link0
valid_lft forever preferred_lft forever
inet6 fd00:ec2::23/128 scope global
valid_lft forever preferred_lft forever
inet6 fe80::c060:f0ff:fe87:d0a4/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:35:ae:c3:55:17 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet 192.168.3.100/24 metric 1024 brd 192.168.3.255 scope global dynamic ens5
valid_lft 1827sec preferred_lft 1827sec
inet6 fe80::835:aeff:fec3:5517/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: eni04b21f34581@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 1e:e2:80:6f:7b:a4 brd ff:ff:ff:ff:ff:ff link-netns cni-15eb3003-53f1-2aa6-f47c-c9f063d6f70f
inet6 fe80::1ce2:80ff:fe6f:7ba4/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: enibb020840ab8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 22:92:1c:91:fe:f8 brd ff:ff:ff:ff:ff:ff link-netns cni-0e92d427-a3d3-fd1f-465f-a8a40d4f8399
inet6 fe80::2092:1cff:fe91:fef8/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: enibb4818dd14c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 32:e6:b7:44:45:b5 brd ff:ff:ff:ff:ff:ff link-netns cni-ac072a33-faad-2ca9-222e-ae7661801d70
inet6 fe80::30e6:b7ff:fe44:45b5/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
6: eni5a291dc8665@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 6a:a8:03:6f:69:bd brd ff:ff:ff:ff:ff:ff link-netns cni-93729665-32f7-4d0d-a479-684e353934ff
inet6 fe80::68a8:3ff:fe6f:69bd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
7: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:22:af:33:73:97 brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet 192.168.3.104/24 brd 192.168.3.255 scope global ens6
valid_lft forever preferred_lft forever
inet6 fe80::822:afff:fe33:7397/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
8: enia9e288fb883@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 0a:36:26:ed:97:d7 brd ff:ff:ff:ff:ff:ff link-netns cni-42f16c85-11f5-bdf2-66f8-a830685567f8
inet6 fe80::836:26ff:feed:97d7/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
9: eni7afb87f22ac@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 56:6b:e3:b9:1e:70 brd ff:ff:ff:ff:ff:ff link-netns cni-e8ba877b-3e17-c9c1-83ce-3ed4984e59ef
inet6 fe80::546b:e3ff:feb9:1e70/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
10: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:b5:fd:f0:67:c5 brd ff:ff:ff:ff:ff:ff
altname enp0s7
inet 192.168.3.152/24 brd 192.168.3.255 scope global ens7
valid_lft forever preferred_lft forever
inet6 fe80::8b5:fdff:fef0:67c5/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
12: enie5f3ff3c235@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether e6:bc:d5:dc:d8:9e brd ff:ff:ff:ff:ff:ff link-netns cni-4c7e0f5d-4cba-6f10-9d09-4c2f21e9a1cf
inet6 fe80::e4bc:d5ff:fedc:d89e/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
15: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 32:3f:12:99:de:7a brd ff:ff:ff:ff:ff:ff
inet 169.254.170.23/32 scope global pod-id-link0
valid_lft forever preferred_lft forever
inet6 fd00:ec2::23/128 scope global
valid_lft forever preferred_lft forever
inet6 fe80::303f:12ff:fe99:de7a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
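In the output above, each `eni…@if3` device is the host side of a pod's veth pair created by the AWS VPC CNI, while `ens5`/`ens6`/`ens7` are the node's ENIs. A small local sketch (sample lines inlined from the first node's output; no SSH needed) to count the pod-side interfaces:

```shell
# Count pod veth interfaces (host-side names start with "eni" and carry an "@if" suffix).
count_pod_veths() { grep -cE '^[0-9]+: eni[0-9a-z]+@' ; }

count_pod_veths <<'EOF'
3: eniab10bd74de4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001
5: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001
6: enib4a035e6ce0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001
EOF
# prints 2
```

The `ens*` lines do not match because only pod veths have the `@if…` peer-index suffix.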
8. Create a podidentityassociation
eksctl create podidentityassociation \
--cluster $CLUSTER_NAME \
--namespace default \
--service-account-name s3-sa \
--role-name s3-eks-pod-identity-role \
--permission-policy-arns arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--region ap-northeast-2
# Result
2025-03-15 13:40:49 [โน] 1 task: {
2 sequential sub-tasks: {
create IAM role for pod identity association for service account "default/s3-sa",
create pod identity association for service account "default/s3-sa",
} }2025-03-15 13:40:49 [โน] deploying stack "eksctl-myeks-podidentityrole-default-s3-sa"
2025-03-15 13:40:49 [โน] waiting for CloudFormation stack "eksctl-myeks-podidentityrole-default-s3-sa"
2025-03-15 13:41:19 [โน] waiting for CloudFormation stack "eksctl-myeks-podidentityrole-default-s3-sa"
2025-03-15 13:41:21 [โน] created pod identity association for service account "s3-sa" in namespace "default"
2025-03-15 13:41:21 [โน] all tasks were completed successfully
- Only a pod using this service account can use the permissions of the mapped role
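Behind the scenes, eksctl attaches a trust policy to `s3-eks-pod-identity-role` that lets the EKS Pod Identity service assume it on behalf of pods. A sketch of that trust policy, per the EKS Pod Identity documentation (not captured from the lab):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```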
9. Create the service account
kubectl create sa s3-sa
# Result
serviceaccount/s3-sa created
10. Deploy a pod (EKS Pod Identity applied)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-pod-identity
spec:
  serviceAccountName: s3-sa
  containers:
  - name: my-aws-cli
    image: amazon/aws-cli:latest
    command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
EOF
# Result
pod/eks-pod-identity created
11. Verify the Pod Identity webhook's injection
kubectl get pod eks-pod-identity -o yaml | kubectl neat
โ Output
apiVersion: v1
kind: Pod
metadata:
  name: eks-pod-identity
  namespace: default
spec:
  containers:
  - command:
    - sleep
    - "36000"
    env:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
    - name: AWS_DEFAULT_REGION
      value: ap-northeast-2
    - name: AWS_REGION
      value: ap-northeast-2
    - name: AWS_CONTAINER_CREDENTIALS_FULL_URI
      value: http://169.254.170.23/v1/credentials
    - name: AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
      value: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
    image: amazon/aws-cli:latest
    name: my-aws-cli
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-b6g6k
      readOnly: true
    - mountPath: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount
      name: eks-pod-identity-token
      readOnly: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  serviceAccountName: s3-sa
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: eks-pod-identity-token
    projected:
      sources:
      - serviceAccountToken:
          audience: pods.eks.amazonaws.com
          expirationSeconds: 86400
          path: eks-pod-identity-token
  - name: kube-api-access-b6g6k
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.namespace
            path: namespace
- On pod creation, the webhook automatically injects the AWS IAM role-related environment variables and volumes
12. Verify IAM role usage inside the pod
kubectl exec -it eks-pod-identity -- aws sts get-caller-identity --query Arn
โ Output
"arn:aws:sts::378102432899:assumed-role/s3-eks-pod-identity-role/eks-myeks-eks-pod-id-bc09a975-2c8c-4e64-b239-dce2bc2b30f2"
13. Check environment variables inside the pod
kubectl exec -it eks-pod-identity -- env | grep AWS
โ Output
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=ap-northeast-2
AWS_REGION=ap-northeast-2
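These two container-credentials variables are what the AWS SDK's credential chain keys off: it reads the token file and presents it to the Pod Identity Agent listening on the link-local address. A conceptual sketch (the values are taken from the output above; the `curl` call itself only works inside the pod, so it is shown as a comment):

```shell
# Values injected by the Pod Identity webhook (from the env output above).
AWS_CONTAINER_CREDENTIALS_FULL_URI="http://169.254.170.23/v1/credentials"
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE="/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token"

# Inside the pod, an SDK effectively performs:
#   curl -H "Authorization: $(cat "$AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE")" \
#        "$AWS_CONTAINER_CREDENTIALS_FULL_URI"
# and receives temporary credentials for the associated IAM role.
echo "credential endpoint: $AWS_CONTAINER_CREDENTIALS_FULL_URI"
```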
14. Check the token inside the pod
kubectl exec -it eks-pod-identity -- cat /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
โ Output
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdlOTJmOGQ3NmM5ODUzNzUzYTZjOWExYzlkOTU5NzBkMjFkN2UxY2IifQ.eyJhdWQiOlsicG9kcy5la3MuYW1hem9uYXdzLmNvbSJdLCJleHAiOjE3NDIxMDA0NjYsImlhdCI6MTc0MjAxNDA2NiwiaXNzIjoiaHR0cHM6Ly9vaWRjLmVrcy5hcC1ub3J0aGVhc3QtMi5hbWF6b25hd3MuY29tL2lkLzkxMTI0MzUwNjRCODI0NDk4QUY2OEM2OTZEMDUxODNDIiwianRpIjoiNmMxNzZiNjYtYjkzYi00ZTk4LTkyMmItMjIyMjljNWRmYmFkIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoiaXAtMTkyLTE2OC0zLTEwMC5hcC1ub3J0aGVhc3QtMi5jb21wdXRlLmludGVybmFsIiwidWlkIjoiZjQ0ZDA1YjUtZTExZC00ZDllLThiN2YtODVmZDViNGI4MzY4In0sInBvZCI6eyJuYW1lIjoiZWtzLXBvZC1pZGVudGl0eSIsInVpZCI6IjczNWJkYTUwLWU1NjEtNDYwOS04NzY2LTMyYTdlMjM1ZmQ2ZSJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiczMtc2EiLCJ1aWQiOiJkNjFkNGNhMy0wNmU4LTQxMWYtYjIyMC1jZTkxMzRhYjVhYTkifX0sIm5iZiI6MTc0MjAxNDA2Niwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6czMtc2EifQ.W2bG2UgZbyVHR44If6jupkiCoIN7YnxnX7CK0-8hwinvG-y68uBBXr6ugNOznR62f2FxI5D7iISS7b8_uGvpraaV_H80s7wl-D1-zSmj-GtYytegCxhjYrUjllFrGS9qVHoAjd8_JyrXKg_sBse8RhqLfqsXgMcgiQiLG8WgMCLzXsyOdKwjEId1UDql3uot4yPDHcMEDshc_rTwRjSR6Y3uy1ojyumfL_94jIjkhSCo9n8V5-k-c4PPRpDG-jqTduGm_zJhF72BUZ3_UCnj9qPRje2aob0lfPD32Rc6i_lJy_4d6oKQQ67ugRTrA51ApN7-9Jl8wDI2uG12nPCGQA
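The token is a JWT whose audience is `pods.eks.amazonaws.com`. Its payload can be decoded locally with nothing but `base64` (sketch; the sample token built below is a made-up stand-in, not the real one above):

```shell
# Decode the payload (second dot-separated segment) of a JWT.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url omits padding; restore it before decoding
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Demo with a hypothetical header.payload.signature token:
payload=$(printf '%s' '{"aud":["pods.eks.amazonaws.com"]}' | base64 | tr -d '=\n' | tr '/+' '_-')
jwt_payload "header.${payload}.signature"
# prints {"aud":["pods.eks.amazonaws.com"]}
```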
15. Delete the lab resources
eksctl delete podidentityassociation --cluster $CLUSTER_NAME --namespace default --service-account-name s3-sa
# Result
2025-03-15 13:53:45 [โน] 1 task: {
2 sequential sub-tasks: {
delete pod identity association "default/s3-sa",
delete service account "default/s3-sa", if it exists and is managed by eksctl,
} }2025-03-15 13:53:45 [โน] deleting IAM resources stack "eksctl-myeks-podidentityrole-default-s3-sa" for pod identity association "default/s3-sa"
2025-03-15 13:53:46 [โน] will delete stack "eksctl-myeks-podidentityrole-default-s3-sa"
2025-03-15 13:53:46 [โน] waiting for stack "eksctl-myeks-podidentityrole-default-s3-sa" to get deleted
2025-03-15 13:53:46 [โน] waiting for CloudFormation stack "eksctl-myeks-podidentityrole-default-s3-sa"
2025-03-15 13:54:16 [โน] waiting for CloudFormation stack "eksctl-myeks-podidentityrole-default-s3-sa"
2025-03-15 13:54:16 [โน] serviceaccount "default/s3-sa" was not created by eksctl; will not be deleted
2025-03-15 13:54:16 [โน] all tasks were completed successfully
๐ Kyverno
1. Create the Kyverno values file and namespace
(eks-user@myeks:default) [root@operator-host ~]# cat << EOF > kyverno-value.yaml
> config:
>   resourceFiltersExcludeNamespaces: [ kube-system ]
>
> admissionController:
>   serviceMonitor:
>     enabled: true
>
> backgroundController:
>   serviceMonitor:
>     enabled: true
>
> cleanupController:
>   serviceMonitor:
>     enabled: true
>
> reportsController:
>   serviceMonitor:
>     enabled: true
> EOF
(eks-user@myeks:default) [root@operator-host ~]# kubectl create ns kyverno
namespace/kyverno created
2. Add the Kyverno Helm chart repository and install
(eks-user@myeks:default) [root@operator-host ~]# helm repo add kyverno https://kyverno.github.io/kyverno/
# Result
"kyverno" has been added to your repositories
(eks-user@myeks:default) [root@operator-host ~]# helm install kyverno kyverno/kyverno --version 3.3.7 -f kyverno-value.yaml -n kyverno
# Result
NAME: kyverno
LAST DEPLOYED: Sat Mar 15 14:08:39 2025
NAMESPACE: kyverno
STATUS: deployed
REVISION: 1
NOTES:
Chart version: 3.3.7
Kyverno version: v1.13.4
Thank you for installing kyverno! Your release is named kyverno.
The following components have been installed in your cluster:
- CRDs
- Admission controller
- Reports controller
- Cleanup controller
- Background controller
โ ๏ธ WARNING: Setting the admission controller replica count below 2 means Kyverno is not running in high availability mode.
โ ๏ธ WARNING: PolicyExceptions are disabled by default. To enable them, set '--enablePolicyException' to true.
๐ก Note: There is a trade-off when deciding which approach to take regarding Namespace exclusions. Please see the documentation at https://kyverno.io/docs/installation/#security-vs-operability to understand the risks.
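To address the HA warning above, the kyverno chart exposes per-controller replica counts in the values file. A values sketch (key name per the chart; in this single-replica lab setup it was intentionally left at the default):

```yaml
admissionController:
  replicas: 3   # Kyverno's HA guidance is 3 admission controller replicas
```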
3. Check Kyverno resources
(1) Check the status of all resources
(eks-user@myeks:default) [root@operator-host ~]# kubectl get all -n kyverno
โ Output
NAME READY STATUS RESTARTS AGE
pod/kyverno-admission-controller-df7b67cf-ngdmj 1/1 Running 0 2m8s
pod/kyverno-background-controller-8544847cf-lwp7v 1/1 Running 0 2m8s
pod/kyverno-cleanup-controller-5db46d8ddb-cbnwf 1/1 Running 0 2m8s
pod/kyverno-reports-controller-77f95686d4-ztdrk 1/1 Running 0 2m8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kyverno-background-controller-metrics ClusterIP 10.100.7.205 <none> 8000/TCP 2m10s
service/kyverno-cleanup-controller ClusterIP 10.100.51.207 <none> 443/TCP 2m10s
service/kyverno-cleanup-controller-metrics ClusterIP 10.100.61.29 <none> 8000/TCP 2m10s
service/kyverno-reports-controller-metrics ClusterIP 10.100.88.231 <none> 8000/TCP 2m10s
service/kyverno-svc ClusterIP 10.100.65.42 <none> 443/TCP 2m10s
service/kyverno-svc-metrics ClusterIP 10.100.212.30 <none> 8000/TCP 2m10s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kyverno-admission-controller 1/1 1 1 2m9s
deployment.apps/kyverno-background-controller 1/1 1 1 2m9s
deployment.apps/kyverno-cleanup-controller 1/1 1 1 2m9s
deployment.apps/kyverno-reports-controller 1/1 1 1 2m9s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kyverno-admission-controller-df7b67cf 1 1 1 2m9s
replicaset.apps/kyverno-background-controller-8544847cf 1 1 1 2m9s
replicaset.apps/kyverno-cleanup-controller-5db46d8ddb 1 1 1 2m9s
replicaset.apps/kyverno-reports-controller-77f95686d4 1 1 1 2m9s
(2) List the CRDs created by Kyverno
(eks-user@myeks:default) [root@operator-host ~]# kubectl get crd | grep kyverno
โ Output
cleanuppolicies.kyverno.io 2025-03-15T05:08:42Z
clustercleanuppolicies.kyverno.io 2025-03-15T05:08:43Z
clusterephemeralreports.reports.kyverno.io 2025-03-15T05:08:43Z
clusterpolicies.kyverno.io 2025-03-15T05:08:43Z
ephemeralreports.reports.kyverno.io 2025-03-15T05:08:43Z
globalcontextentries.kyverno.io 2025-03-15T05:08:42Z
policies.kyverno.io 2025-03-15T05:08:43Z
policyexceptions.kyverno.io 2025-03-15T05:08:43Z
updaterequests.kyverno.io 2025-03-15T05:08:42Z
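The CRDs fall into two API groups: `kyverno.io` (policies and supporting objects) and `reports.kyverno.io` (report objects). A quick pipeline to derive the groups from the listing above (sample lines inlined):

```shell
# Extract the API group from each CRD name (everything after the first dot).
cat <<'EOF' | cut -d' ' -f1 | sed 's/^[^.]*\.//' | sort -u
cleanuppolicies.kyverno.io 2025-03-15T05:08:42Z
clusterephemeralreports.reports.kyverno.io 2025-03-15T05:08:43Z
clusterpolicies.kyverno.io 2025-03-15T05:08:43Z
EOF
# prints:
# kyverno.io
# reports.kyverno.io
```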
4. Download the Step CLI
(eks-user@myeks:default) [root@operator-host ~]# wget https://dl.smallstep.com/cli/docs-cli-install/latest/step-cli_amd64.rpm
# Result
--2025-03-15 14:12:40-- https://dl.smallstep.com/cli/docs-cli-install/latest/step-cli_amd64.rpm
Resolving dl.smallstep.com (dl.smallstep.com)... 52.76.90.69, 54.179.124.27
Connecting to dl.smallstep.com (dl.smallstep.com)|52.76.90.69|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/smallstep/cli/releases/latest/download/step-cli_amd64.rpm [following]
--2025-03-15 14:12:41-- https://github.com/smallstep/cli/releases/latest/download/step-cli_amd64.rpm
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/smallstep/cli/releases/download/v0.28.5/step-cli_amd64.rpm [following]
--2025-03-15 14:12:41-- https://github.com/smallstep/cli/releases/download/v0.28.5/step-cli_amd64.rpm
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/141352703/b44a49f6-777a-4ca5-acfc-50da7b6122fc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20250315%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250315T051241Z&X-Amz-Expires=300&X-Amz-Signature=cf2e28c198b08308d4994c287b1a88e08348747132fd594f87642948ca3ab00b&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dstep-cli_amd64.rpm&response-content-type=application%2Foctet-stream [following]
--2025-03-15 14:12:41-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/141352703/b44a49f6-777a-4ca5-acfc-50da7b6122fc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20250315%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250315T051241Z&X-Amz-Expires=300&X-Amz-Signature=cf2e28c198b08308d4994c287b1a88e08348747132fd594f87642948ca3ab00b&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dstep-cli_amd64.rpm&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14407619 (14M) [application/octet-stream]
Saving to: โstep-cli_amd64.rpmโ
100%[==============================>] 14,407,619 55.0MB/s in 0.2s
2025-03-15 14:12:42 (55.0 MB/s) - โstep-cli_amd64.rpmโ saved [14407619/14407619]
5. Install the Step CLI package
(eks-user@myeks:default) [root@operator-host ~]# sudo rpm -i step-cli_amd64.rpm
# Result
warning: step-cli_amd64.rpm: Header V4 RSA/SHA256 Signature, key ID b855223c: NOKEY
6. Inspect the Kyverno TLS certificate (from the Secret)
(eks-user@myeks:default) [root@operator-host ~]# kubectl -n kyverno get secret kyverno-svc.kyverno.svc.kyverno-tls-ca -o jsonpath='{.data.tls\.crt}' | base64 -d | step certificate inspect --short
โ Output
X.509v3 Root CA Certificate (RSA 2048) [Serial: 0]
Subject: *.kyverno.svc
Issuer: *.kyverno.svc
Valid from: 2025-03-15T04:09:08Z
to: 2026-03-15T05:09:08Z
7. Inspect the Kyverno validating webhook certificate
(eks-user@myeks:default) [root@operator-host ~]# kubectl get validatingwebhookconfiguration kyverno-policy-validating-webhook-cfg -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | step certificate inspect --short
โ Output
X.509v3 Root CA Certificate (RSA 2048) [Serial: 0]
Subject: *.kyverno.svc
Issuer: *.kyverno.svc
Valid from: 2025-03-15T04:09:08Z
to: 2026-03-15T05:09:08Z
8. Import dashboard 15987
9. Create a ClusterPolicy to enforce pod label validation
(eks-user@myeks:default) [root@operator-host ~]# kubectl apply -f- << EOF
> apiVersion: kyverno.io/v1
> kind: ClusterPolicy
> metadata:
>   name: require-labels
> spec:
>   validationFailureAction: Enforce
>   rules:
>   - name: check-team
>     match:
>       any:
>       - resources:
>           kinds:
>           - Pod
>     validate:
>       message: "label 'team' is required"
>       pattern:
>         metadata:
>           labels:
>             team: "?*"
> EOF
clusterpolicy.kyverno.io/require-labels created
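The pattern `team: "?*"` means "any non-empty value": in Kyverno's wildcard syntax `?` matches exactly one character and `*` matches zero or more. Shell glob patterns use the same two wildcards, so the check can be sketched locally:

```shell
# Kyverno's "?*" wildcard: "?" = exactly one character, "*" = zero or more.
check_team() { case "$1" in ?*) echo allow ;; *) echo deny ;; esac; }

check_team backend   # prints allow
check_team ""        # prints deny (empty label value would be rejected)
```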
10. Check the validating webhook configurations
(eks-user@myeks:default) [root@operator-host ~]# kubectl get validatingwebhookconfigurations
โ Output
NAME WEBHOOKS AGE
aws-load-balancer-webhook 3 14h
kube-prometheus-stack-admission 1 14h
kyverno-cleanup-validating-webhook-cfg 1 10m
kyverno-exception-validating-webhook-cfg 1 10m
kyverno-global-context-validating-webhook-cfg 1 10m
kyverno-policy-validating-webhook-cfg 1 10m
kyverno-resource-validating-webhook-cfg 1 10m
kyverno-ttl-validating-webhook-cfg 1 10m
vpc-resource-validating-webhook 2 15h
11. Check ClusterPolicy status
(eks-user@myeks:default) [root@operator-host ~]# kubectl get ClusterPolicy
โ Output
NAME ADMISSION BACKGROUND READY AGE MESSAGE
require-labels true true True 39s Ready
12. Attempt to create a deployment without the label (rejected)
(eks-user@myeks:default) [root@operator-host ~]# kubectl create deployment nginx --image=nginx
โ Output
error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Deployment/default/nginx was blocked due to the following policies
require-labels:
autogen-check-team: 'validation error: label ''team'' is required. rule autogen-check-team
failed at path /spec/template/metadata/labels/team/'
13. Create a pod with the required label
(eks-user@myeks:default) [root@operator-host ~]# kubectl run nginx --image nginx --labels team=backend
# Result
pod/nginx created
14. Check the created pod's status
(eks-user@myeks:default) [root@operator-host ~]# kubectl get pod -l team=backend
โ Output
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 29s
15. Check the Kyverno policy reports
(eks-user@myeks:default) [root@operator-host ~]# kubectl get policyreport -o wide
โ Output
NAME KIND NAME PASS FAIL WARN ERROR SKIP AGE
0064e6f2-d60c-4ec9-86ed-68c4e61135c2 Pod awscli-pod-59598f6ff8-nxfpw 0 1 0 0 0 4m41s
0cd55544-769e-4624-a912-374ce20a2923 Deployment awscli-pod 0 1 0 0 0 4m42s
2b7c56dc-db43-4012-b918-d1fcadde9f1d Pod nginx 1 0 0 0 0 35s
735bda50-e561-4609-8766-32a7e235fd6e Pod eks-pod-identity 0 1 0 0 0 4m41s
ae7373bd-7fb4-4fc4-80a6-ea68afd362f6 Pod eks-iam-test3 0 1 0 0 0 4m41s
e1661352-fbfd-43f0-8bba-d72b22085538 ReplicaSet awscli-pod-59598f6ff8 0 1 0 0 0 4m42s
ebb2958c-4602-4940-99c6-e90f1746a9d8 Pod awscli-pod-59598f6ff8-x959k 0 1 0 0 0 4m41s
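Each report row counts rule results for one resource; only `nginx` (which carries the `team` label) passes. The PASS/FAIL columns of the listing above can be tallied with a short pipeline (three rows inlined as a sample):

```shell
# Tally PASS (column 4) and FAIL (column 5) across policy-report rows.
cat <<'EOF' | awk '{pass+=$4; fail+=$5} END {print "pass=" pass, "fail=" fail}'
0064e6f2 Pod awscli-pod-59598f6ff8-nxfpw 0 1 0 0 0 4m41s
2b7c56dc Pod nginx 1 0 0 0 0 35s
735bda50 Pod eks-pod-identity 0 1 0 0 0 4m41s
EOF
# prints pass=1 fail=2
```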
16. Delete the ClusterPolicy
(eks-user@myeks:default) [root@operator-host ~]# kubectl delete clusterpolicy require-labels
# Result
clusterpolicy.kyverno.io "require-labels" deleted
๐ท๏ธ Mutation: add a label to pods
1. Create the mutation policy
(eks-user@myeks:default) [root@operator-host ~]# kubectl apply -f- << EOF
> apiVersion: kyverno.io/v1
> kind: ClusterPolicy
> metadata:
>   name: add-labels
> spec:
>   rules:
>   - name: add-team
>     match:
>       any:
>       - resources:
>           kinds:
>           - Pod
>     mutate:
>       patchStrategicMerge:
>         metadata:
>           labels:
>             +(team): bravo
> EOF
clusterpolicy.kyverno.io/add-labels created
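The `+(team)` anchor is Kyverno's "add if not present" marker: the `team: bravo` label is set only when the pod has no `team` label already. The merge rule in miniature:

```shell
# "+(key): default" semantics: keep an existing value, otherwise add the default.
merge_team() { if [ -n "$1" ]; then echo "$1"; else echo "bravo"; fi; }

merge_team ""      # prints bravo  (label absent -> added)
merge_team alpha   # prints alpha  (label present -> unchanged)
```

This predicts exactly what the redis/newredis tests below demonstrate.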
2. Check the MutatingWebhookConfigurations
(eks-user@myeks:default) [root@operator-host ~]# kubectl get mutatingwebhookconfigurations
โ Output
NAME WEBHOOKS AGE
aws-load-balancer-webhook 3 14h
kube-prometheus-stack-admission 1 14h
kyverno-policy-mutating-webhook-cfg 1 17m
kyverno-resource-mutating-webhook-cfg 1 17m
kyverno-verify-mutating-webhook-cfg 1 17m
pod-identity-webhook 1 15h
vpc-resource-mutating-webhook 1 15h
3. Check ClusterPolicy status
(eks-user@myeks:default) [root@operator-host ~]# kubectl get ClusterPolicy
โ Output
NAME ADMISSION BACKGROUND READY AGE MESSAGE
add-labels true true True 25s Ready
4. Test automatic label addition: create a redis pod
(1) Create the redis pod
(eks-user@myeks:default) [root@operator-host ~]# kubectl run redis --image redis
pod/redis created
(2) Check the redis pod's labels
(eks-user@myeks:default) [root@operator-host ~]# kubectl get pod redis --show-labels
NAME READY STATUS RESTARTS AGE LABELS
redis 1/1 Running 0 24s run=redis,team=bravo
5. Existing labels are not changed: create the newredis pod
(1) Create the newredis pod (with a label specified)
(eks-user@myeks:default) [root@operator-host ~]# kubectl run newredis --image redis -l team=alpha
pod/newredis created
(2) Check the newredis pod's labels
(eks-user@myeks:default) [root@operator-host ~]# kubectl get pod newredis --show-labels
NAME READY STATUS RESTARTS AGE LABELS
newredis 1/1 Running 0 27s team=alpha
6. Delete the mutation policy
(eks-user@myeks:default) [root@operator-host ~]# kubectl delete clusterpolicy add-labels
clusterpolicy.kyverno.io "add-labels" deleted
๐๏ธ (After completing the lab) Delete resources
nohup sh -c "eksctl delete cluster --name $CLUSTER_NAME && aws cloudformation delete-stack --stack-name $CLUSTER_NAME" > /root/delete.log 2>&1 &