
AEWS Week 7 Summary

๐ŸŒฑ ํ…Œ๋ผํผ ํ™˜๊ฒฝ ๊ตฌ์„ฑ - Arch Linux

1. Install tfenv

yay -S tfenv

2. Add the current user to the tfenv group

sudo usermod -aG tfenv $USER

3. Refresh the session

newgrp tfenv

4. List available Terraform versions

tfenv list-remote

โœ… Output

...
1.5.7
1.5.6
1.5.5
1.5.4
1.5.3
1.5.2
...

5. ํ…Œ๋ผํผ 1.5.6 ๋ฒ„์ „ ์„ค์น˜

tfenv install 1.5.6

โœ… Output

Installing Terraform v1.5.6
Downloading release tarball from https://releases.hashicorp.com/terraform/1.5.6/terraform_1.5.6_linux_amd64.zip
########################################################################################################################################### 100.0%
Downloading SHA hash file from https://releases.hashicorp.com/terraform/1.5.6/terraform_1.5.6_SHA256SUMS
Not instructed to use Local PGP (/opt/tfenv/use-{gpgv,gnupg}) & No keybase install found, skipping OpenPGP signature verification
terraform_1.5.6_linux_amd64.zip: OK
Archive:  /tmp/tfenv_download.k8lBq6/terraform_1.5.6_linux_amd64.zip
  inflating: /var/lib/tfenv/versions/1.5.6/terraform  
Installation of terraform v1.5.6 successful. To make this your default version, run 'tfenv use 1.5.6'

6. ์„ค์น˜๋œ ํ…Œ๋ผํผ ๋ฒ„์ „ ํ™•์ธ (์„ค์ • ์ „)

tfenv list

โœ… Output

  1.5.6
No default set. Set with 'tfenv use <version>'

7. ํ…Œ๋ผํผ 1.5.6 ๋ฒ„์ „ ์‚ฌ์šฉ ์„ค์ •

tfenv use 1.5.6

โœ… Output

Switching default version to v1.5.6
Default version (when not overridden by .terraform-version or TFENV_TERRAFORM_VERSION) is now: 1.5.6
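
As the message above notes, the default can be overridden per project by a .terraform-version file, which tfenv reads automatically. A minimal sketch of pinning a project to 1.5.6:

```shell
# Optional: pin the Terraform version per project. tfenv looks for a
# .terraform-version file in the working directory (and its parents)
# and uses that version instead of the global default.
echo "1.5.6" > .terraform-version
cat .terraform-version
```

Commit this file alongside the Terraform code so everyone on the team runs the same version.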

8. ์„ค์น˜๋œ ํ…Œ๋ผํผ ๋ฒ„์ „ ํ™•์ธ (์„ค์ • ํ›„)

tfenv list

โœ… Output

* 1.5.6 (set by /var/lib/tfenv/version)

9. ํ…Œ๋ผํผ ๋ฒ„์ „ ์ •๋ณด ํ™•์ธ

terraform version

โœ… Output

Terraform v1.5.6
on linux_amd64
Your version of Terraform is out of date! The latest version
is 1.11.2. You can update by downloading from https://www.terraform.io/downloads.html

10. ์ž๋™์™„์„ฑ ๊ธฐ๋Šฅ ํ™œ์„ฑํ™”

terraform -install-autocomplete

# Check the .zshrc file
cat ~/.zshrc

โœ… Output

...
autoload -U +X bashcompinit && bashcompinit
complete -o nospace -C /var/lib/tfenv/versions/1.5.6/terraform terraform

๐Ÿš€ ํ…Œ๋ผํผ์œผ๋กœ ์‹ค์Šต ํ™˜๊ฒฝ ๋ฐฐํฌ

1. Clone the EKS Blueprints code from GitHub

git clone https://github.com/aws-ia/terraform-aws-eks-blueprints
cd terraform-aws-eks-blueprints/patterns/fargate-serverless

# Output
Cloning into 'terraform-aws-eks-blueprints'...
remote: Enumerating objects: 36038, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 36038 (delta 9), reused 8 (delta 5), pack-reused 36013 (from 2)
Receiving objects: 100% (36038/36038), 38.38 MiB | 14.88 MiB/s, done.
Resolving deltas: 100% (21354/21354), done.

2. Modify main.tf

  • Changed some settings (region, VPC CIDR, etc.) for convenience in the lab
  • Removed the sample app deployment section
provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}

data "aws_availability_zones" "available" {
  # Do not include local zones
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

locals {
  name     = basename(path.cwd)
  region   = "ap-northeast-2"

  vpc_cidr = "10.10.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
}

################################################################################
# Cluster
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.11"

  cluster_name                   = local.name
  cluster_version                = "1.30"
  cluster_endpoint_public_access = true

  # Give the Terraform identity admin access to the cluster
  # which will allow resources to be deployed into the cluster
  enable_cluster_creator_admin_permissions = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Fargate profiles use the cluster primary security group so these are not utilized
  create_cluster_security_group = false
  create_node_security_group    = false

  fargate_profiles = {
    study_wildcard = {
      selectors = [
        { namespace = "study-*" }
      ]
    }
    kube_system = {
      name = "kube-system"
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  }

  fargate_profile_defaults = {
    iam_role_additional_policies = {
      additional = module.eks_blueprints_addons.fargate_fluentbit.iam_policy[0].arn
    }
  }

  tags = local.tags
}

################################################################################
# EKS Blueprints Addons
################################################################################

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.16"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # We want to wait for the Fargate profiles to be deployed first
  create_delay_dependencies = [for prof in module.eks.fargate_profiles : prof.fargate_profile_arn]

  # EKS Add-ons
  eks_addons = {
    coredns = {
      configuration_values = jsonencode({
        computeType = "Fargate"
        # Ensure that we fully utilize the minimum amount of resources that are supplied by
        # Fargate https://docs.aws.amazon.com/eks/latest/userguide/fargate-pod-configuration.html
        # Fargate adds 256 MB to each pod's memory reservation for the required Kubernetes
        # components (kubelet, kube-proxy, and containerd). Fargate rounds up to the following
        # compute configuration that most closely matches the sum of vCPU and memory requests in
        # order to ensure pods always have the resources that they need to run.
        resources = {
          limits = {
            cpu = "0.25"
            # We are targeting the smallest Task size of 512Mb, so we subtract 256Mb from the
            # request/limit to ensure we can fit within that task
            memory = "256M"
          }
          requests = {
            cpu = "0.25"
            # We are targeting the smallest Task size of 512Mb, so we subtract 256Mb from the
            # request/limit to ensure we can fit within that task
            memory = "256M"
          }
        }
      })
    }
    vpc-cni    = {}
    kube-proxy = {}
  }

  # Enable Fargate logging; this may generate a large amount of logs, so disable it if not explicitly required
  enable_fargate_fluentbit = true
  fargate_fluentbit = {
    flb_log_cw = true
  }

  enable_aws_load_balancer_controller = true
  aws_load_balancer_controller = {
    set = [
      {
        name  = "vpcId"
        value = module.vpc.vpc_id
      },
      {
        name  = "podDisruptionBudget.maxUnavailable"
        value = 1
      },
    ]
  }

  tags = local.tags
}

################################################################################
# Supporting Resources
################################################################################

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}
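
The two cidrsubnet() calls in the vpc module carve the 10.10.0.0/16 block into three /20 private subnets and three /24 public subnets. A quick shell sketch of the resulting layout (the arithmetic below is hand-rolled for this specific base CIDR and these newbits values, not a general CIDR calculator):

```shell
# Subnet layout produced by the cidrsubnet() calls above:
#   private: cidrsubnet("10.10.0.0/16", 4, k)      -> /20 blocks, 16 apart in the 3rd octet
#   public:  cidrsubnet("10.10.0.0/16", 8, k + 48) -> /24 blocks starting at 10.10.48.0
for k in 0 1 2; do
  echo "private[$k] = 10.10.$(( k * 16 )).0/20"
done
for k in 0 1 2; do
  echo "public[$k]  = 10.10.$(( k + 48 )).0/24"
done
```

The same values can be checked interactively with terraform console after init.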

3. ํ…Œ๋ผํผ ์ดˆ๊ธฐํ™”

terraform init

โœ… Output

Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 20.34.0 for eks...
...
Initializing provider plugins...
- Finding hashicorp/random versions matching ">= 3.6.0"...
- Finding hashicorp/aws versions matching ">= 4.33.0, >= 4.36.0, >= 4.47.0, >= 5.0.0, >= 5.34.0, >= 5.79.0, >= 5.83.0"...
- Finding hashicorp/helm versions matching ">= 2.9.0"...
- Finding hashicorp/kubernetes versions matching ">= 2.20.0"...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/tls versions matching ">= 3.0.0"...
- Finding hashicorp/null versions matching ">= 3.0.0"...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Installing hashicorp/cloudinit v2.3.6...
- Installed hashicorp/cloudinit v2.3.6 (signed by HashiCorp)
- Installing hashicorp/random v3.7.1...
- Installed hashicorp/random v3.7.1 (signed by HashiCorp)
- Installing hashicorp/aws v5.92.0...
- Installed hashicorp/aws v5.92.0 (signed by HashiCorp)
- Installing hashicorp/helm v2.17.0...
- Installed hashicorp/helm v2.17.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.36.0...
- Installed hashicorp/kubernetes v2.36.0 (signed by HashiCorp)
- Installing hashicorp/time v0.13.0...
- Installed hashicorp/time v0.13.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.6...
- Installed hashicorp/tls v4.0.6 (signed by HashiCorp)
- Installing hashicorp/null v3.2.3...
- Installed hashicorp/null v3.2.3 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

4. Check the module composition (modules.json)

cat .terraform/modules/modules.json | jq

โœ… Output

{
  "Modules": [
    {
      "Key": "",
      "Source": "",
      "Dir": "."
    },
    {
      "Key": "eks",
      "Source": "registry.terraform.io/terraform-aws-modules/eks/aws",
      "Version": "20.34.0",
      "Dir": ".terraform/modules/eks"
    },
    {
      "Key": "eks.eks_managed_node_group",
      "Source": "./modules/eks-managed-node-group",
      "Dir": ".terraform/modules/eks/modules/eks-managed-node-group"
    },
    {
      "Key": "eks.eks_managed_node_group.user_data",
      "Source": "../_user_data",
      "Dir": ".terraform/modules/eks/modules/_user_data"
    },
    {
      "Key": "eks.fargate_profile",
      "Source": "./modules/fargate-profile",
      "Dir": ".terraform/modules/eks/modules/fargate-profile"
    },
    {
      "Key": "eks.kms",
      "Source": "registry.terraform.io/terraform-aws-modules/kms/aws",
      "Version": "2.1.0",
      "Dir": ".terraform/modules/eks.kms"
    },
    {
      "Key": "eks.self_managed_node_group",
      "Source": "./modules/self-managed-node-group",
      "Dir": ".terraform/modules/eks/modules/self-managed-node-group"
    },
    {
      "Key": "eks.self_managed_node_group.user_data",
      "Source": "../_user_data",
      "Dir": ".terraform/modules/eks/modules/_user_data"
    },
    {
      "Key": "eks_blueprints_addons",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addons/aws",
      "Version": "1.20.0",
      "Dir": ".terraform/modules/eks_blueprints_addons"
    },
    {
      "Key": "eks_blueprints_addons.argo_events",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.argo_events"
    },
    {
      "Key": "eks_blueprints_addons.argo_rollouts",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.argo_rollouts"
    },
    {
      "Key": "eks_blueprints_addons.argo_workflows",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.argo_workflows"
    },
    {
      "Key": "eks_blueprints_addons.argocd",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.argocd"
    },
    {
      "Key": "eks_blueprints_addons.aws_cloudwatch_metrics",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_cloudwatch_metrics"
    },
    {
      "Key": "eks_blueprints_addons.aws_efs_csi_driver",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_efs_csi_driver"
    },
    {
      "Key": "eks_blueprints_addons.aws_for_fluentbit",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_for_fluentbit"
    },
    {
      "Key": "eks_blueprints_addons.aws_fsx_csi_driver",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_fsx_csi_driver"
    },
    {
      "Key": "eks_blueprints_addons.aws_gateway_api_controller",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_gateway_api_controller"
    },
    {
      "Key": "eks_blueprints_addons.aws_load_balancer_controller",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_load_balancer_controller"
    },
    {
      "Key": "eks_blueprints_addons.aws_node_termination_handler",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_node_termination_handler"
    },
    {
      "Key": "eks_blueprints_addons.aws_node_termination_handler_sqs",
      "Source": "registry.terraform.io/terraform-aws-modules/sqs/aws",
      "Version": "4.0.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_node_termination_handler_sqs"
    },
    {
      "Key": "eks_blueprints_addons.aws_privateca_issuer",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.aws_privateca_issuer"
    },
    {
      "Key": "eks_blueprints_addons.bottlerocket_shadow",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.bottlerocket_shadow"
    },
    {
      "Key": "eks_blueprints_addons.bottlerocket_update_operator",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.bottlerocket_update_operator"
    },
    {
      "Key": "eks_blueprints_addons.cert_manager",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.cert_manager"
    },
    {
      "Key": "eks_blueprints_addons.cluster_autoscaler",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.cluster_autoscaler"
    },
    {
      "Key": "eks_blueprints_addons.cluster_proportional_autoscaler",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.cluster_proportional_autoscaler"
    },
    {
      "Key": "eks_blueprints_addons.external_dns",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.external_dns"
    },
    {
      "Key": "eks_blueprints_addons.external_secrets",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.external_secrets"
    },
    {
      "Key": "eks_blueprints_addons.gatekeeper",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.gatekeeper"
    },
    {
      "Key": "eks_blueprints_addons.ingress_nginx",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.ingress_nginx"
    },
    {
      "Key": "eks_blueprints_addons.karpenter",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.karpenter"
    },
    {
      "Key": "eks_blueprints_addons.karpenter_sqs",
      "Source": "registry.terraform.io/terraform-aws-modules/sqs/aws",
      "Version": "4.0.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.karpenter_sqs"
    },
    {
      "Key": "eks_blueprints_addons.kube_prometheus_stack",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.kube_prometheus_stack"
    },
    {
      "Key": "eks_blueprints_addons.metrics_server",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.metrics_server"
    },
    {
      "Key": "eks_blueprints_addons.secrets_store_csi_driver",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.secrets_store_csi_driver"
    },
    {
      "Key": "eks_blueprints_addons.secrets_store_csi_driver_provider_aws",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.secrets_store_csi_driver_provider_aws"
    },
    {
      "Key": "eks_blueprints_addons.velero",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.velero"
    },
    {
      "Key": "eks_blueprints_addons.vpa",
      "Source": "registry.terraform.io/aws-ia/eks-blueprints-addon/aws",
      "Version": "1.1.1",
      "Dir": ".terraform/modules/eks_blueprints_addons.vpa"
    },
    {
      "Key": "vpc",
      "Source": "registry.terraform.io/terraform-aws-modules/vpc/aws",
      "Version": "5.19.0",
      "Dir": ".terraform/modules/vpc"
    }
  ]
}
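
The long listing above can be reduced to just the pinned module versions with a jq filter. A sketch (the filter is my own, not part of the pattern; a two-entry sample file stands in for the real modules.json here):

```shell
# Sketch: extract only Key/Version pairs from a modules.json-shaped file.
# A tiny sample stands in for .terraform/modules/modules.json.
cat > /tmp/modules_sample.json <<'EOF'
{"Modules":[{"Key":"","Source":"","Dir":"."},
            {"Key":"eks","Source":"registry.terraform.io/terraform-aws-modules/eks/aws","Version":"20.34.0","Dir":".terraform/modules/eks"}]}
EOF
jq -r '.Modules[] | select(.Version != null) | "\(.Key) \(.Version)"' /tmp/modules_sample.json
```

Run the same filter against .terraform/modules/modules.json to get one line per registry module.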

5. ํ…Œ๋ผํผ ํ”„๋กœ๋ฐ”์ด๋” ํ™•์ธ

tree .terraform/providers/registry.terraform.io/hashicorp -L 2

โœ… Output

.terraform/providers/registry.terraform.io/hashicorp
โ”œโ”€โ”€ aws
โ”‚ย ย  โ””โ”€โ”€ 5.92.0
โ”œโ”€โ”€ cloudinit
โ”‚ย ย  โ””โ”€โ”€ 2.3.6
โ”œโ”€โ”€ helm
โ”‚ย ย  โ””โ”€โ”€ 2.17.0
โ”œโ”€โ”€ kubernetes
โ”‚ย ย  โ””โ”€โ”€ 2.36.0
โ”œโ”€โ”€ null
โ”‚ย ย  โ””โ”€โ”€ 3.2.3
โ”œโ”€โ”€ random
โ”‚ย ย  โ””โ”€โ”€ 3.7.1
โ”œโ”€โ”€ time
โ”‚ย ย  โ””โ”€โ”€ 0.13.0
โ””โ”€โ”€ tls
    โ””โ”€โ”€ 4.0.6
17 directories, 0 files

6. Deploy EKS

  • Time required: about 13 minutes
  • Resources created: EKS cluster, Fargate profiles, add-ons (coredns, vpc-cni, kube-proxy)
terraform apply -auto-approve

โœ… Output

...
module.eks_blueprints_addons.aws_eks_addon.this["vpc-cni"]: Creation complete after 14s [id=fargate-serverless:vpc-cni]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Still creating... [20s elapsed]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Still creating... [30s elapsed]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Still creating... [40s elapsed]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Still creating... [50s elapsed]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Still creating... [1m0s elapsed]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Still creating... [1m10s elapsed]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]: Creation complete after 1m15s [id=fargate-serverless:coredns]
Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
Outputs:
configure_kubectl = "aws eks --region ap-northeast-2 update-kubeconfig --name fargate-serverless"

7. ํ…Œ๋ผํผ state list ํ™•์ธ

terraform state list

โœ… Output

data.aws_availability_zones.available
module.eks.data.aws_caller_identity.current[0]
module.eks.data.aws_iam_policy_document.assume_role_policy[0]
module.eks.data.aws_iam_policy_document.custom[0]
module.eks.data.aws_iam_session_context.current[0]
module.eks.data.aws_partition.current[0]
module.eks.data.tls_certificate.this[0]
module.eks.aws_cloudwatch_log_group.this[0]
module.eks.aws_ec2_tag.cluster_primary_security_group["Blueprint"]
module.eks.aws_ec2_tag.cluster_primary_security_group["GithubRepo"]
module.eks.aws_eks_access_entry.this["cluster_creator"]
module.eks.aws_eks_access_policy_association.this["cluster_creator_admin"]
module.eks.aws_eks_cluster.this[0]
module.eks.aws_iam_openid_connect_provider.oidc_provider[0]
module.eks.aws_iam_policy.cluster_encryption[0]
module.eks.aws_iam_policy.custom[0]
module.eks.aws_iam_role.this[0]
module.eks.aws_iam_role_policy_attachment.cluster_encryption[0]
module.eks.aws_iam_role_policy_attachment.custom[0]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"]
module.eks.time_sleep.this[0]
module.eks_blueprints_addons.data.aws_caller_identity.current
module.eks_blueprints_addons.data.aws_eks_addon_version.this["coredns"]
module.eks_blueprints_addons.data.aws_eks_addon_version.this["kube-proxy"]
module.eks_blueprints_addons.data.aws_eks_addon_version.this["vpc-cni"]
module.eks_blueprints_addons.data.aws_iam_policy_document.aws_load_balancer_controller[0]
module.eks_blueprints_addons.data.aws_iam_policy_document.fargate_fluentbit[0]
module.eks_blueprints_addons.data.aws_partition.current
module.eks_blueprints_addons.data.aws_region.current
module.eks_blueprints_addons.aws_cloudformation_stack.usage_telemetry[0]
module.eks_blueprints_addons.aws_cloudwatch_log_group.fargate_fluentbit[0]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]
module.eks_blueprints_addons.aws_eks_addon.this["kube-proxy"]
module.eks_blueprints_addons.aws_eks_addon.this["vpc-cni"]
module.eks_blueprints_addons.aws_iam_policy.fargate_fluentbit[0]
module.eks_blueprints_addons.kubernetes_config_map_v1.aws_logging[0]
module.eks_blueprints_addons.kubernetes_namespace_v1.aws_observability[0]
module.eks_blueprints_addons.random_bytes.this
module.eks_blueprints_addons.time_sleep.this
module.vpc.aws_default_network_acl.this[0]
module.vpc.aws_default_route_table.default[0]
module.vpc.aws_default_security_group.this[0]
module.vpc.aws_eip.nat[0]
module.vpc.aws_internet_gateway.this[0]
module.vpc.aws_nat_gateway.this[0]
module.vpc.aws_route.private_nat_gateway[0]
module.vpc.aws_route.public_internet_gateway[0]
module.vpc.aws_route_table.private[0]
module.vpc.aws_route_table.public[0]
module.vpc.aws_route_table_association.private[0]
module.vpc.aws_route_table_association.private[1]
module.vpc.aws_route_table_association.private[2]
module.vpc.aws_route_table_association.public[0]
module.vpc.aws_route_table_association.public[1]
module.vpc.aws_route_table_association.public[2]
module.vpc.aws_subnet.private[0]
module.vpc.aws_subnet.private[1]
module.vpc.aws_subnet.private[2]
module.vpc.aws_subnet.public[0]
module.vpc.aws_subnet.public[1]
module.vpc.aws_subnet.public[2]
module.vpc.aws_vpc.this[0]
module.eks.module.fargate_profile["kube_system"].data.aws_caller_identity.current
module.eks.module.fargate_profile["kube_system"].data.aws_iam_policy_document.assume_role_policy[0]
module.eks.module.fargate_profile["kube_system"].data.aws_partition.current
module.eks.module.fargate_profile["kube_system"].data.aws_region.current
module.eks.module.fargate_profile["kube_system"].aws_eks_fargate_profile.this[0]
module.eks.module.fargate_profile["kube_system"].aws_iam_role.this[0]
module.eks.module.fargate_profile["kube_system"].aws_iam_role_policy_attachment.additional["additional"]
module.eks.module.fargate_profile["kube_system"].aws_iam_role_policy_attachment.this["AmazonEKSFargatePodExecutionRolePolicy"]
module.eks.module.fargate_profile["kube_system"].aws_iam_role_policy_attachment.this["AmazonEKS_CNI_Policy"]
module.eks.module.fargate_profile["study_wildcard"].data.aws_caller_identity.current
module.eks.module.fargate_profile["study_wildcard"].data.aws_iam_policy_document.assume_role_policy[0]
module.eks.module.fargate_profile["study_wildcard"].data.aws_partition.current
module.eks.module.fargate_profile["study_wildcard"].data.aws_region.current
module.eks.module.fargate_profile["study_wildcard"].aws_eks_fargate_profile.this[0]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role.this[0]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role_policy_attachment.additional["additional"]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role_policy_attachment.this["AmazonEKSFargatePodExecutionRolePolicy"]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role_policy_attachment.this["AmazonEKS_CNI_Policy"]
module.eks.module.kms.data.aws_caller_identity.current[0]
module.eks.module.kms.data.aws_iam_policy_document.this[0]
module.eks.module.kms.data.aws_partition.current[0]
module.eks.module.kms.aws_kms_alias.this["cluster"]
module.eks.module.kms.aws_kms_key.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_caller_identity.current[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_iam_policy_document.assume[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_iam_policy_document.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_partition.current[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.aws_iam_policy.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.aws_iam_role.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.aws_iam_role_policy_attachment.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.helm_release.this[0]

8. Check the kubeconfig setup command

terraform output

โœ… Output

configure_kubectl = "aws eks --region ap-northeast-2 update-kubeconfig --name fargate-serverless"

9. Apply the EKS credentials

$(terraform output -raw configure_kubectl)

โœ… Output

Added new context arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless to /home/devshin/.kube/config

10. Check the kubeconfig file contents

cat ~/.kube/config

โœ… Output

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
      LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUXhZZFRsV2QxS293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1UUXhNelV6TURsYUZ3MHpOVEF6TVRJeE16VTRNRGxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUR2RE1HZ1dYSlBmUVlrMERKWTdGYk52OEN3RkFMMWNIMWQxVkxxUmpYemt0RGFwTW1xS2RGV3lmSXMKbGFqbEkyS1FvZ0VPY1dHc0dEOW1JWVFhTXM0VHFab1k5U1dZNFVlMVYzU0Y5YVV3dFI0M2VLTzh5RUhQekNUSQpGN0hyWDBKVkpxM29RQU5RVjNPc2JJdE9pdHQ3eDBoaFVWZ3NQci9qTy9ZamFkaHczSVV4QUxKYnhKTmIzSW5rCkdjT3ltbXViNDR4SUJ0NmtIM3V5VWNxQm83aStoL3NJWWFXRXNocHp3dGJSemN0NmRMYldFNW9WbU9xY1pHTDMKSXE1WmNiVTBQc3B3S2xTK1BwcFhmaEk2djk5VGsyV0VRVzdkQmo1ZkM5cWdOeU95N3BFdU1MVHhyK05xYS9yVAp1dzNtLzNvb1JoYk82azR0VGxUSW1ZMXBrMnpaQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYWp6dzc5dzNNUjBIMEJUaTFuQVNhN2JVS1J6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWVSK2c5d0FVQwpuUElEOUtKNGkrMGliRFcvczlxSHpHTG5OelhpQlpsSXI1ZWtUYjdaYzhwT09ZQ2t5c01oSTlLbnJQUUdPTlpaCitjSTM0UnRDWmN0eU9QVmRsS1JsUFM5R1lNaGpBeEhQMW1Jck4zU2F5aGpYM3kzY3g1NG4wRmswR216RitKbXIKYUFsMHkyM05GMnk2VzJ0WWUxRW1aZFRZWVcxNDJLRHNIZldPQlhtMmJjR3Z6QmM4YTh6QWZIdElDbkVSMlc4ZQpndERCV1dpNVRuN1NKOWpTTEFVWUNkQUIza2xDTUJNaE80bGkvRzB4MFROT0VVRXF0bFQxY0x6Rm9PTVM4OWVRCnlpcDE1Mk9tc1ZIMy9pVG55Zy9UVytUUXpLNUF3eVlNb3Jmc05KNjg1bFN6Mmh1S3NEZE9YMVJqWis3ZDI5dW0KTXRIV2hwNHVDajFBCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://9112435064B824498AF68C696D05183C.gr7.ap-northeast-2.eks.amazonaws.com
  name: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
- cluster:
    certificate-authority-data: 
      LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUkMzNC8zV1AwUzh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1qSXdNVEF4TVROYUZ3MHpOVEF6TWpBd01UQTJNVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNzemsyN0NQYXFudDFlTXpTbXhidlBsQVByYXEyeFEvTks1UFFYSmtteWNFelprMm15aU56dVpROGUKam5aOHZ1ckt6R28xcW5zc0pMQ2xKRDhia0VyNjhTdlpaanRrLzlsYlQwc2lFd3FwR3JwZkVSaUxaT28wTU9JYgpoamZhQnJHWFYrdzFZb0I1QW5nRk8wVStmTmdsd3FQaUhwNnl4cTZnbXZKd2xEZUUydDliY2pRNG03Tm1jUGJ4CnBrV2c0dGdDN0hnYkYranlJSnQrZkZLcjk4eTJQbGk1MWJUOEp6N0N5MDBZNVQ2S1YzeFBOSmNFdVEvWUlYMW4KbzhaUHJGaERENkxTbElwRUpabkF5OEZsR3VlaytlclBwT0ZkK2Zzb3hsOTVtTm1WemN3c1ViL1pBaFBlNkNOWApaV3c1bENvUVA1UGs5VmVWaWNZMnNINVZFZGJ0QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYVZESHlVa0NqNHdMWElYZWxrZ29YeGhDb1pqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3Azd2gvV2VsRwpTVVRWbkUraGx2Rk4yTW1wTHlmaytQZjRwUmk0SEJkSEdXVFFoZ3BQTjdGMGRJVGRXeDV0MWxZc0N2czJzZWhpCnlRQVRqWE5CSzhGTXVCZWtXRWZGL0RFRkFUQ0FyVnN0Q2R6enpFN0VIZzMvOFMvM3ExRjJVVU1ZL0I5S2lURDAKeVZNSFdLZmE4SUhENnFZNzlVdFlpR2Z1bTBaMEp1VStvZ2R4amFmSWF0K2E0Q3AxRkt2ZGFncWpLWWhOdW9VOApXeTlNZHVlM2dJeWMzcHlnK1VtU0FzamcxbW5zejRCV0wwVCtKeDV0UDZwdStUVitOMkk1b09rV0ZZaVFvaEx6ClFJZ3RseXBTNldydkh0UENYUnF1WFFyVmpMUXZJZ3RpQkxTb3RRVkllWDJhQTFtUm5VRXAwbDJ2YkprK3g0dVkKckVWNzA3ek1DSzFYCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://67D0BAB00152B8102D46202C7560A0A5.gr7.ap-northeast-2.eks.amazonaws.com
  name: arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless
contexts:
- context:
    cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/myeks
    user: eks-user
  name: eks-user
- context:
    cluster: arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless
    user: arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless
  name: arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless
kind: Config
preferences: {}
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - myeks
      - --output
      - json
      command: aws
- name: eks-user@myeks.ap-northeast-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --output
      - json
      - --cluster-name
      - myeks
      - --region
      - ap-northeast-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - fargate-serverless
      - --output
      - json
      command: aws
current-context: arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless
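
The certificate-authority-data fields above are base64-encoded PEM certificates. A minimal Python sketch of how a client recovers the PEM text (the sample string here is only the well-known PEM header prefix, not the full certificate from this cluster):

```python
import base64

# certificate-authority-data in a kubeconfig is base64-encoded PEM.
# Decoding the prefix "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t" recovers
# the familiar PEM header line.
ca_b64_prefix = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t"
pem_header = base64.b64decode(ca_b64_prefix).decode()
print(pem_header)  # -----BEGIN CERTIFICATE-----
```

Decoding the full value would yield the cluster CA certificate that kubectl uses to verify the API server endpoint.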

11. Manage kubectl contexts

(1) Switch context

kubectl ctx

โœ… Output

Switched to context "arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless".

(2) Rename the context

kubectl config rename-context "arn:aws:eks:ap-northeast-2:$(aws sts get-caller-identity --query 'Account' --output text):cluster/fargate-serverless" "fargate-lab"

โœ… Output

Context "arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless" renamed to "fargate-lab".

12. Check cluster info

kubectl cluster-info

โœ… Output

Kubernetes control plane is running at https://67D0BAB00152B8102D46202C7560A0A5.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://67D0BAB00152B8102D46202C7560A0A5.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

13. ์ƒ์„ธ ์ •๋ณด ํ™•์ธ

(1) ํ…Œ๋ผํผ ์ƒํƒœ ๋ฆฌ์ŠคํŠธ ํ™•์ธ

1
terraform state list

โœ… Output

data.aws_availability_zones.available
module.eks.data.aws_caller_identity.current[0]
module.eks.data.aws_iam_policy_document.assume_role_policy[0]
module.eks.data.aws_iam_policy_document.custom[0]
module.eks.data.aws_iam_session_context.current[0]
module.eks.data.aws_partition.current[0]
module.eks.data.tls_certificate.this[0]
module.eks.aws_cloudwatch_log_group.this[0]
module.eks.aws_ec2_tag.cluster_primary_security_group["Blueprint"]
module.eks.aws_ec2_tag.cluster_primary_security_group["GithubRepo"]
module.eks.aws_eks_access_entry.this["cluster_creator"]
module.eks.aws_eks_access_policy_association.this["cluster_creator_admin"]
module.eks.aws_eks_cluster.this[0]
module.eks.aws_iam_openid_connect_provider.oidc_provider[0]
module.eks.aws_iam_policy.cluster_encryption[0]
module.eks.aws_iam_policy.custom[0]
module.eks.aws_iam_role.this[0]
module.eks.aws_iam_role_policy_attachment.cluster_encryption[0]
module.eks.aws_iam_role_policy_attachment.custom[0]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"]
module.eks.time_sleep.this[0]
module.eks_blueprints_addons.data.aws_caller_identity.current
module.eks_blueprints_addons.data.aws_eks_addon_version.this["coredns"]
module.eks_blueprints_addons.data.aws_eks_addon_version.this["kube-proxy"]
module.eks_blueprints_addons.data.aws_eks_addon_version.this["vpc-cni"]
module.eks_blueprints_addons.data.aws_iam_policy_document.aws_load_balancer_controller[0]
module.eks_blueprints_addons.data.aws_iam_policy_document.fargate_fluentbit[0]
module.eks_blueprints_addons.data.aws_partition.current
module.eks_blueprints_addons.data.aws_region.current
module.eks_blueprints_addons.aws_cloudformation_stack.usage_telemetry[0]
module.eks_blueprints_addons.aws_cloudwatch_log_group.fargate_fluentbit[0]
module.eks_blueprints_addons.aws_eks_addon.this["coredns"]
module.eks_blueprints_addons.aws_eks_addon.this["kube-proxy"]
module.eks_blueprints_addons.aws_eks_addon.this["vpc-cni"]
module.eks_blueprints_addons.aws_iam_policy.fargate_fluentbit[0]
module.eks_blueprints_addons.kubernetes_config_map_v1.aws_logging[0]
module.eks_blueprints_addons.kubernetes_namespace_v1.aws_observability[0]
module.eks_blueprints_addons.random_bytes.this
module.eks_blueprints_addons.time_sleep.this
module.vpc.aws_default_network_acl.this[0]
module.vpc.aws_default_route_table.default[0]
module.vpc.aws_default_security_group.this[0]
module.vpc.aws_eip.nat[0]
module.vpc.aws_internet_gateway.this[0]
module.vpc.aws_nat_gateway.this[0]
module.vpc.aws_route.private_nat_gateway[0]
module.vpc.aws_route.public_internet_gateway[0]
module.vpc.aws_route_table.private[0]
module.vpc.aws_route_table.public[0]
module.vpc.aws_route_table_association.private[0]
module.vpc.aws_route_table_association.private[1]
module.vpc.aws_route_table_association.private[2]
module.vpc.aws_route_table_association.public[0]
module.vpc.aws_route_table_association.public[1]
module.vpc.aws_route_table_association.public[2]
module.vpc.aws_subnet.private[0]
module.vpc.aws_subnet.private[1]
module.vpc.aws_subnet.private[2]
module.vpc.aws_subnet.public[0]
module.vpc.aws_subnet.public[1]
module.vpc.aws_subnet.public[2]
module.vpc.aws_vpc.this[0]
module.eks.module.fargate_profile["kube_system"].data.aws_caller_identity.current
module.eks.module.fargate_profile["kube_system"].data.aws_iam_policy_document.assume_role_policy[0]
module.eks.module.fargate_profile["kube_system"].data.aws_partition.current
module.eks.module.fargate_profile["kube_system"].data.aws_region.current
module.eks.module.fargate_profile["kube_system"].aws_eks_fargate_profile.this[0]
module.eks.module.fargate_profile["kube_system"].aws_iam_role.this[0]
module.eks.module.fargate_profile["kube_system"].aws_iam_role_policy_attachment.additional["additional"]
module.eks.module.fargate_profile["kube_system"].aws_iam_role_policy_attachment.this["AmazonEKSFargatePodExecutionRolePolicy"]
module.eks.module.fargate_profile["kube_system"].aws_iam_role_policy_attachment.this["AmazonEKS_CNI_Policy"]
module.eks.module.fargate_profile["study_wildcard"].data.aws_caller_identity.current
module.eks.module.fargate_profile["study_wildcard"].data.aws_iam_policy_document.assume_role_policy[0]
module.eks.module.fargate_profile["study_wildcard"].data.aws_partition.current
module.eks.module.fargate_profile["study_wildcard"].data.aws_region.current
module.eks.module.fargate_profile["study_wildcard"].aws_eks_fargate_profile.this[0]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role.this[0]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role_policy_attachment.additional["additional"]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role_policy_attachment.this["AmazonEKSFargatePodExecutionRolePolicy"]
module.eks.module.fargate_profile["study_wildcard"].aws_iam_role_policy_attachment.this["AmazonEKS_CNI_Policy"]
module.eks.module.kms.data.aws_caller_identity.current[0]
module.eks.module.kms.data.aws_iam_policy_document.this[0]
module.eks.module.kms.data.aws_partition.current[0]
module.eks.module.kms.aws_kms_alias.this["cluster"]
module.eks.module.kms.aws_kms_key.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_caller_identity.current[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_iam_policy_document.assume[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_iam_policy_document.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.data.aws_partition.current[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.aws_iam_policy.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.aws_iam_role.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.aws_iam_role_policy_attachment.this[0]
module.eks_blueprints_addons.module.aws_load_balancer_controller.helm_release.this[0]

(2) Show EKS cluster details

terraform state show 'module.eks.aws_eks_cluster.this[0]'

โœ… Output

# module.eks.aws_eks_cluster.this[0]:
resource "aws_eks_cluster" "this" {
    arn                           = "arn:aws:eks:ap-northeast-2:378102432899:cluster/fargate-serverless"
    bootstrap_self_managed_addons = true
    certificate_authority         = [
        {
            data = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJUkMzNC8zV1AwUzh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1qSXdNVEF4TVROYUZ3MHpOVEF6TWpBd01UQTJNVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNzemsyN0NQYXFudDFlTXpTbXhidlBsQVByYXEyeFEvTks1UFFYSmtteWNFelprMm15aU56dVpROGUKam5aOHZ1ckt6R28xcW5zc0pMQ2xKRDhia0VyNjhTdlpaanRrLzlsYlQwc2lFd3FwR3JwZkVSaUxaT28wTU9JYgpoamZhQnJHWFYrdzFZb0I1QW5nRk8wVStmTmdsd3FQaUhwNnl4cTZnbXZKd2xEZUUydDliY2pRNG03Tm1jUGJ4CnBrV2c0dGdDN0hnYkYranlJSnQrZkZLcjk4eTJQbGk1MWJUOEp6N0N5MDBZNVQ2S1YzeFBOSmNFdVEvWUlYMW4KbzhaUHJGaERENkxTbElwRUpabkF5OEZsR3VlaytlclBwT0ZkK2Zzb3hsOTVtTm1WemN3c1ViL1pBaFBlNkNOWApaV3c1bENvUVA1UGs5VmVWaWNZMnNINVZFZGJ0QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYVZESHlVa0NqNHdMWElYZWxrZ29YeGhDb1pqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3Azd2gvV2VsRwpTVVRWbkUraGx2Rk4yTW1wTHlmaytQZjRwUmk0SEJkSEdXVFFoZ3BQTjdGMGRJVGRXeDV0MWxZc0N2czJzZWhpCnlRQVRqWE5CSzhGTXVCZWtXRWZGL0RFRkFUQ0FyVnN0Q2R6enpFN0VIZzMvOFMvM3ExRjJVVU1ZL0I5S2lURDAKeVZNSFdLZmE4SUhENnFZNzlVdFlpR2Z1bTBaMEp1VStvZ2R4amFmSWF0K2E0Q3AxRkt2ZGFncWpLWWhOdW9VOApXeTlNZHVlM2dJeWMzcHlnK1VtU0FzamcxbW5zejRCV0wwVCtKeDV0UDZwdStUVitOMkk1b09rV0ZZaVFvaEx6ClFJZ3RseXBTNldydkh0UENYUnF1WFFyVmpMUXZJZ3RpQkxTb3RRVkllWDJhQTFtUm5VRXAwbDJ2YkprK3g0dVkKckVWNzA3ek1DSzFYCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
        },
    ]
    created_at                    = "2025-03-22T01:01:58Z"
    enabled_cluster_log_types     = [
        "api",
        "audit",
        "authenticator",
    ]
    endpoint                      = "https://67D0BAB00152B8102D46202C7560A0A5.gr7.ap-northeast-2.eks.amazonaws.com"
    id                            = "fargate-serverless"
    identity                      = [
        {
            oidc = [
                {
                    issuer = "https://oidc.eks.ap-northeast-2.amazonaws.com/id/67D0BAB00152B8102D46202C7560A0A5"
                },
            ]
        },
    ]
    name                          = "fargate-serverless"
    platform_version              = "eks.29"
    role_arn                      = "arn:aws:iam::378102432899:role/fargate-serverless-cluster-20250322010135020200000001"
    status                        = "ACTIVE"
    tags                          = {
        "Blueprint"             = "fargate-serverless"
        "GithubRepo"            = "github.com/aws-ia/terraform-aws-eks-blueprints"
        "terraform-aws-modules" = "eks"
    }
    tags_all                      = {
        "Blueprint"             = "fargate-serverless"
        "GithubRepo"            = "github.com/aws-ia/terraform-aws-eks-blueprints"
        "terraform-aws-modules" = "eks"
    }
    version                       = "1.30"
    access_config {
        authentication_mode                         = "API_AND_CONFIG_MAP"
        bootstrap_cluster_creator_admin_permissions = false
    }
    encryption_config {
        resources = [
            "secrets",
        ]
        provider {
            key_arn = "arn:aws:kms:ap-northeast-2:378102432899:key/6c1b1910-a38e-4d86-a3c2-26eaf093beb0"
        }
    }
    kubernetes_network_config {
        ip_family         = "ipv4"
        service_ipv4_cidr = "172.20.0.0/16"
        elastic_load_balancing {
            enabled = false
        }
    }
    timeouts {}
    upgrade_policy {
        support_type = "EXTENDED"
    }
    vpc_config {
        cluster_security_group_id = "sg-05567d53e0904e612"
        endpoint_private_access   = true
        endpoint_public_access    = true
        public_access_cidrs       = [
            "0.0.0.0/0",
        ]
        subnet_ids                = [
            "subnet-036ecdf4f681ecd01",
            "subnet-05cefaf3ec33b3428",
            "subnet-0e4d404db8bc45ab4",
        ]
        vpc_id                    = "vpc-075d7467603079482"
    }
}
  • The api, audit, and authenticator logging options are enabled
  • The enabled logging items can be confirmed on the Observability tab

Image

  • Check the API server endpoint, OIDC, IAM role ARN, and KMS key settings

Image

  • Check the Fargate profiles on the Compute tab

Image

ํ…Œ๋ผํผ ์ฝ”๋“œ ์˜ˆ์‹œ (main.tf)

  • study_wildcard ๋ฐ kube_system ๋„ค์ž„์ŠคํŽ˜์ด์Šค์— ์†ํ•œ ํŒŒ๋“œ๋“ค์ด Fargate์— ๋ฐฐํฌ๋จ
1
2
3
4
5
6
7
8
9
10
11
12
13
fargate_profiles = {
  study_wildcard = {
    selectors = [
      { namespace = "study-*" }
    ]
  }
  kube_system = {
    name = "kube-system"
    selectors = [
      { namespace = "kube-system" }
    ]
  }
}
  • The pod selectors include the “study-*” namespace pattern

Image

  • No EC2 instances exist; the workloads run in a serverless environment

Image
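
The “study-*” selector above uses shell-style wildcard matching on the namespace. A minimal sketch of that matching behavior, approximated with Python’s fnmatch (the namespace names here are hypothetical examples, not from the cluster):

```python
import fnmatch

# Namespace patterns taken from the two Fargate profile selectors in main.tf.
selectors = ["study-*", "kube-system"]

def scheduled_to_fargate(namespace: str) -> bool:
    """True if any profile selector's namespace pattern matches."""
    return any(fnmatch.fnmatchcase(namespace, pat) for pat in selectors)

print(scheduled_to_fargate("study-aews"))   # True
print(scheduled_to_fargate("kube-system"))  # True
print(scheduled_to_fargate("default"))      # False
```

So a pod created in a `study-aews` namespace would land on Fargate, while a pod in `default` would match no profile and stay Pending (there are no EC2 nodes to fall back to).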


โ„น๏ธ ๊ธฐ๋ณธ ์ •๋ณด ํ™•์ธ

1. ์„œ๋น„์Šค ๋ฐ ์—”๋“œํฌ์ธํŠธ ์ •๋ณด ์กฐํšŒ

1
kubectl get svc,ep

โœ… Output

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   57m

NAME                   ENDPOINTS                       AGE
endpoints/kubernetes   10.10.1.2:443,10.10.33.19:443   57m
  • 10.10.1.2, 10.10.33.19: EKS Owned-ENI

Image

2. Check node info (4 micro VMs)

(1) Query CSRs

kubectl get csr

โœ… Output

NAME        AGE   SIGNERNAME                      REQUESTOR                                                             REQUESTEDDURATION   CONDITION
csr-5z5v5   48m   kubernetes.io/kubelet-serving   system:node:fargate-ip-10-10-5-189.ap-northeast-2.compute.internal    <none>              Approved,Issued
csr-7rvf9   48m   kubernetes.io/kubelet-serving   system:node:fargate-ip-10-10-19-109.ap-northeast-2.compute.internal   <none>              Approved,Issued
csr-97ld8   48m   kubernetes.io/kubelet-serving   system:node:fargate-ip-10-10-22-59.ap-northeast-2.compute.internal    <none>              Approved,Issued
csr-fkzgd   48m   kubernetes.io/kubelet-serving   system:node:fargate-ip-10-10-32-64.ap-northeast-2.compute.internal    <none>              Approved,Issued

(2) Query node details

kubectl get node -owide

โœ… Output

NAME                                                      STATUS   ROLES    AGE   VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-10-10-19-109.ap-northeast-2.compute.internal   Ready    <none>   49m   v1.30.8-eks-2d5f260   10.10.19.109   <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-22-59.ap-northeast-2.compute.internal    Ready    <none>   49m   v1.30.8-eks-2d5f260   10.10.22.59    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-32-64.ap-northeast-2.compute.internal    Ready    <none>   49m   v1.30.8-eks-2d5f260   10.10.32.64    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-5-189.ap-northeast-2.compute.internal    Ready    <none>   49m   v1.30.8-eks-2d5f260   10.10.5.189    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
  • For the EKS-owned ENIs attached to Fargate, the requester ID and the instance owner differ

Image

(3) ๋…ธ๋“œ์˜ Compute Type ํ™•์ธ

1
kubectl describe node | grep eks.amazonaws.com/compute-type

โœ… Output

                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
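
Each Fargate node carries the `eks.amazonaws.com/compute-type=fargate:NoSchedule` taint, so only pods that tolerate it can run there. A simplified sketch of the NoSchedule matching rule (Kubernetes toleration semantics have more cases — `operator: Exists`, empty keys — which are omitted here):

```python
# Simplified Kubernetes NoSchedule semantics: a pod may run on a node
# only if every NoSchedule taint on the node is tolerated by the pod.
node_taints = [{"key": "eks.amazonaws.com/compute-type",
                "value": "fargate", "effect": "NoSchedule"}]

def tolerates(taint: dict, tolerations: list) -> bool:
    """True if some toleration matches the taint's key, value, and effect."""
    return any(t.get("key") == taint["key"] and
               t.get("value") == taint["value"] and
               t.get("effect") in (taint["effect"], None)
               for t in tolerations)

# Hypothetical pod tolerations, for illustration only.
fargate_pod_tolerations = [{"key": "eks.amazonaws.com/compute-type",
                            "value": "fargate", "effect": "NoSchedule"}]
print(all(tolerates(t, fargate_pod_tolerations) for t in node_taints))  # True
print(all(tolerates(t, []) for t in node_taints))                       # False
```

The taint keeps ordinary workloads off the Fargate nodes; pods admitted by a Fargate profile are handled by the Fargate scheduler, so regular pods without a matching toleration cannot be placed there.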

3. Check pod info

(1) Check the Pod Disruption Budgets

kubectl get pdb -n kube-system

โœ… Output

NAME                           MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
aws-load-balancer-controller   N/A             1                 1                     51m
coredns                        N/A             1                 1                     58m

(2) Confirm pod IPs match node IPs

kubectl get pod -A -owide

โœ… Output

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE   IP             NODE                                                      NOMINATED NODE   READINESS GATES
kube-system   aws-load-balancer-controller-5f8c95c4cd-827kg   1/1     Running   0          52m   10.10.32.64    fargate-ip-10-10-32-64.ap-northeast-2.compute.internal    <none>           <none>
kube-system   aws-load-balancer-controller-5f8c95c4cd-dz7k5   1/1     Running   0          52m   10.10.5.189    fargate-ip-10-10-5-189.ap-northeast-2.compute.internal    <none>           <none>
kube-system   coredns-64696d8b7f-mhdbl                        1/1     Running   0          53m   10.10.19.109   fargate-ip-10-10-19-109.ap-northeast-2.compute.internal   <none>           <none>
kube-system   coredns-64696d8b7f-w8xkl                        1/1     Running   0          53m   10.10.22.59    fargate-ip-10-10-22-59.ap-northeast-2.compute.internal    <none>           <none>

4. Check kube-system services and endpoints

kubectl get svc,ep -n kube-system

โœ… Output

NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/aws-load-balancer-webhook-service   ClusterIP   172.20.165.195   <none>        443/TCP                  53m
service/eks-extension-metrics-api           ClusterIP   172.20.143.212   <none>        443/TCP                  61m
service/kube-dns                            ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP,9153/TCP   60m

NAME                                          ENDPOINTS                                                    AGE
endpoints/aws-load-balancer-webhook-service   10.10.32.64:9443,10.10.5.189:9443                            53m
endpoints/eks-extension-metrics-api           172.0.32.0:10443                                             61m
endpoints/kube-dns                            10.10.19.109:53,10.10.22.59:53,10.10.19.109:53 + 3 more...   60m

5. Check the API service (eks-extension-metrics-api)

kubectl get --raw "/apis/metrics.eks.amazonaws.com" | jq

โœ… Output

{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "metrics.eks.amazonaws.com",
  "versions": [
    {
      "groupVersion": "metrics.eks.amazonaws.com/v1",
      "version": "v1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "metrics.eks.amazonaws.com/v1",
    "version": "v1"
  }
}

6. List the ConfigMaps

kubectl get cm -n kube-system

โœ… Output

NAME                                                   DATA   AGE
amazon-vpc-cni                                         7      61m
aws-auth                                               1      58m
aws-load-balancer-controller-leader                    0      54m
coredns                                                1      61m
extension-apiserver-authentication                     6      63m
kube-apiserver-legacy-service-account-token-tracking   1      63m
kube-proxy                                             1      61m
kube-proxy-config                                      1      61m
kube-root-ca.crt                                       1     

7. Inspect the aws-auth ConfigMap

kubectl get cm -n kube-system aws-auth -o yaml

โœ… Output

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      - system:node-proxier
      rolearn: arn:aws:iam::378102432899:role/study_wildcard-2025032201103892830000000f
      username: system:node:{{SessionName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      - system:node-proxier
      rolearn: arn:aws:iam::378102432899:role/kube-system-20250322011038928700000010
      username: system:node:{{SessionName}}
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:11:18Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "1537"
  uid: f76078ab-b6d9-4d4c-8e7c-4f71f72b6219

Image

8. Check RBAC info

(1) Look up RBAC bindings

kubectl rbac-tool lookup system:node-proxier

โœ… Output

  SUBJECT             | SUBJECT TYPE | SCOPE       | NAMESPACE | ROLE                | BINDING                 
----------------------+--------------+-------------+-----------+---------------------+-------------------------
  system:node-proxier | Group        | ClusterRole |           | system:node-proxier | eks:kube-proxy-fargate

(2) Summarize the RBAC policy

kubectl rolesum -k Group system:node-proxier

โœ… Output

Group: system:node-proxier
Policies:
โ€ข [CRB] */eks:kube-proxy-fargate โŸถ  [CR] */system:node-proxier
  Resource                         Name  Exclude  Verbs  G L W C U P D DC  
  endpoints                        [*]     [-]     [-]   โœ– โœ” โœ” โœ– โœ– โœ– โœ– โœ–   
  endpointslices.discovery.k8s.io  [*]     [-]     [-]   โœ– โœ” โœ” โœ– โœ– โœ– โœ– โœ–   
  events.[,events.k8s.io]          [*]     [-]     [-]   โœ– โœ– โœ– โœ” โœ” โœ” โœ– โœ–   
  nodes                            [*]     [-]     [-]   โœ” โœ” โœ” โœ– โœ– โœ– โœ– โœ–   
  services                         [*]     [-]     [-]   โœ– โœ” โœ” โœ– โœ– โœ– โœ– โœ–   

9. Inspect the amazon-vpc-cni ConfigMap

kubectl get cm -n kube-system amazon-vpc-cni -o yaml

โœ… Output

apiVersion: v1
data:
  branch-eni-cooldown: "60"
  enable-network-policy-controller: "false"
  enable-windows-ipam: "false"
  enable-windows-prefix-delegation: "false"
  minimum-ip-target: "3"
  warm-ip-target: "1"
  warm-prefix-target: "0"
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:08:13Z"
  labels:
    app.kubernetes.io/instance: aws-vpc-cni
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-node
    app.kubernetes.io/version: v1.19.3
    helm.sh/chart: aws-vpc-cni-1.19.3
    k8s-app: aws-node
  name: amazon-vpc-cni
  namespace: kube-system
  resourceVersion: "1757"
  uid: 5e22b867-cef8-465d-8810-49103f00fdb3

10. Check the coredns configuration

kubectl get cm -n kube-system coredns -o yaml

โœ… Output

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
          }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:08:13Z"
  labels:
    eks.amazonaws.com/component: coredns
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "562"
  uid: 3fbc2806-3317-4b39-a12c-5825ad48df8a

11. ์ธ์ฆ์„œ ๊ด€๋ จ ์„ค์ • ํ™•์ธ

  • client-ca-file , requestheader-client-ca-file
1
kubectl get cm -n kube-system extension-apiserver-authentication -o yaml

โœ… Output

apiVersion: v1
data:
  client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIDBTCCAe2gAwIBAgIINnOhkY/65XYwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
    AxMKa3ViZXJuZXRlczAeFw0yNTAzMjIwMTAxMThaFw0zNTAzMjAwMTA2MThaMBUx
    EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
    AoIBAQDGauX9vPIgUjwgNnDqNOlh6+Y8LhV1YdDsRfx0r2dP2G6OqaXaVT8m2e08
    jAGlqLu13Ajt98YJLPgw33+iXv5TecjPzcEM6rzbUECKpHMp/iPqusMIzDqe0SG+
    xAmWmIVG10N3xnvBZnqCQ7Rap8wjLJNdm9ZafJBH9Q19XSIf/3KRkcB7k9s/jxpa
    W34pMTsNzKm5G4Va88XfJQdePE4FST73zRFlHSQqG/+XR3Huzxwhc6Mal02F5Mx3
    /rTQINwAWOKCfiy4+QFbV9B2S6WpROtaxVKRpO4yYgvIqXdmvdz7eSRaJLlVG0Pc
    Khc2xdbgR9DJrgH7N5wPO/PVuo1/AgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
    BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRnP0S8E2fmUJT8NXHcSk99bLIQ9jAV
    BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQA9fVtVueGN
    jCc+Trhr7Cpa689pUekbQCWQyQRrDYGSuhZo6aOyWMakItZSWDHLyIJpkVUJJnG7
    l4HEQuEWFAuOzNnrncgI/4vR3XOtcd9m2Tac1i67hArRLgm2ryeOG5hWhSEfFaPX
    yJliJcP7bZe6IxMcYB69H4kZ14Zdm2dKv5qA42se45obOP5MxzHEQdNVjnwMV7bf
    ef0xQyoulTt5SmbkQ+oxG1mLxbKml0pf/ZlKZCzw4GSWgwo8z08XN1Bkgidi/7Zx
    Gv1fvt7UDMQimIuEF5V6G8RDViN5wjOxsVR/sOzWZs/ZVGzD+lVgaJnslSfPLb6M
    sHZNk1vC3DI/
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDBTCCAe2gAwIBAgIIXrQUP7AZMYQwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
    AxMKa3ViZXJuZXRlczAeFw0yNTAzMjIwMTA2MTlaFw0zNTAzMjAwMTExMTlaMBUx
    EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
    AoIBAQC7ZAvLT2YA5ooOIankpmT272lvloZOId6lt197x4VGpW9IGRYllTsVcGAC
    dDEAoF755AaSy++KYakG3jMVc0A79h1wOlmMOkNjJ7OcYYlk8daOtYfe1q7IvNIq
    2ig0M+X5rWr4YoHyHVQJonWKN1j+0yxVcU6F+Bz8qeRglEoMi7k5WkEdH+poRJjQ
    Q9Ci4X9+VQToQo9lPmwB9uTmZwtrEbw2H1DBZCwJbKeRrnauC177AaLw8GCFqNet
    WW1Im1mXKoVztaVKj5indWGrhShJ7nUZBM9+4zB+neL4EC25MEvAYuPBg6YwzHk1
    FzDT9p8N7xZI7jy7QkQIKL+Sjl6tAgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
    BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTIEk6wMEPtSNPGbTyuEBW1kPj1XTAV
    BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQCQzdVWHc7t
    pazwck57Xb5DptR/QFnMk6sCl7JC8YNacLwwQPSsEl27nrye3O1QfjEmLg9dbGo4
    FoZ+eW+WycCM9YJCV1ZgKLVj8ngqAv1T0hrF30F8G/zziIcQ8EEYF+s4QimxrVwG
    no0PaqUw6v87vhHUlA+VAF3CLGeeQ16ojThnapnNm/E+bNzSM3wbjJBDbpBf13XF
    VcII9WFHPGNXIwd8KTl/Kz/y3kbSGDBhN1nxDOJ24HxAb0OUYSZsIf4AZFKWluLg
    /3RE8+h4N2t45yMOQxo38ZJ7l6dowEyqaT4rVB51eXmoIriGrVno38m6orkjJ8x4
    9RRGlonudHwy
    -----END CERTIFICATE-----
  requestheader-allowed-names: '["front-proxy-client"]'
  requestheader-client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIDETCCAfmgAwIBAgIId3UE2cVR0HcwDQYJKoZIhvcNAQELBQAwGTEXMBUGA1UE
    AxMOZnJvbnQtcHJveHktY2EwHhcNMjUwMzIyMDEwMTE0WhcNMzUwMzIwMDEwNjE0
    WjAZMRcwFQYDVQQDEw5mcm9udC1wcm94eS1jYTCCASIwDQYJKoZIhvcNAQEBBQAD
    ggEPADCCAQoCggEBAMQkHIigiJmmo2MSbRfJHgZYCQLW6Lkf5zqqu41vxQOO73ap
    PDOcS0jbTbi6eUK8CxKQ09y/Oye4oe3/iZTCsDsSjtKdMQzQO7q0uRSfWmq9gW3U
    JL2yxUl0DmkM062Awyqy0vF93Bn3agseL2klcELKEqYtysWm0YX7JOcCtOrgD5vK
    fSMNQTJc0BNivC7+IQahmD2OfTq+8SwVQB/QX+PDocgNpvEs92Emp312m78D+rzR
    3o0yo/xttWL9n7c+agKCmG/3WlxftSlGuPFGfpVuQBgammU7j5KBHiVyDyas+dFI
    32O9mdVxECKLtjCqoF4QV/e+y/CXYCnuU+QgpLkCAwEAAaNdMFswDgYDVR0PAQH/
    BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFB0i0MVtIb8E/B7ZXYMV
    2jJQhhCmMBkGA1UdEQQSMBCCDmZyb250LXByb3h5LWNhMA0GCSqGSIb3DQEBCwUA
    A4IBAQAAqPY4cszO+a11y64QgOTGYCS96rwpHqB5ssnhOFASICSPAQGcYMPE6cjo
    k+TkyHUcLljudR29oc1UfrXeK2oLtuGlHUyqCCxFf1HIVl34D6cGpXRHzxiV7uMk
    9btYxyvngXC2nJG6YHwT276T4Bv3SPsQhmgiqhsezjsPP4pRXJTkiiEl2s5JzDPk
    dRWcMoo4ONawmGSiNYkAGFlVT42G+MQDF/K21sZh/lv9wFrXFdC1zwZl8HCjqOYG
    Wp8pxos5pSDR3d0jJfci2yKVaPwEE23VpX1A5t/fomHfNyl0mD6Ed56aBrJGnNHw
    clED1NfTTNGf/vyxOCfCIeffFJ/V
    -----END CERTIFICATE-----
  requestheader-extra-headers-prefix: '["X-Remote-Extra-"]'
  requestheader-group-headers: '["X-Remote-Group"]'
  requestheader-username-headers: '["X-Remote-User"]'
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:06:42Z"
  name: extension-apiserver-authentication
  namespace: kube-system
  resourceVersion: "1163"
  uid: 4f6f53c3-8a0c-43d6-8822-1fc61571fc46

12. Check the kube-proxy configuration

kubectl get cm -n kube-system kube-proxy -o yaml

โœ… Output

apiVersion: v1
data:
  kubeconfig: |-
    kind: Config
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://67d0bab00152b8102d46202c7560a0a5.gr7.ap-northeast-2.eks.amazonaws.com
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:08:12Z"
  labels:
    eks.amazonaws.com/component: kube-proxy
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "549"
  uid: 8cf1d7b3-61d8-4352-bbe6-13e123fe2cc4

13. Check the kube-proxy-config configuration

kubectl get cm -n kube-system kube-proxy-config -o yaml

โœ… Output

apiVersion: v1
data:
  config: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249
    mode: "iptables"
    nodePortAddresses: null
    oomScoreAdj: -998
    portRange: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:08:12Z"
  labels:
    eks.amazonaws.com/component: kube-proxy
    k8s-app: kube-proxy
  name: kube-proxy-config
  namespace: kube-system
  resourceVersion: "550"
  uid: 726431cc-fed2-4fc6-ba4e-32e28cf9a452

Private subnet layout (/20) and public subnet layout (/24)

1
2
3
azs             = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
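The two `cidrsubnet()` calls above can be worked through by hand. A minimal shell sketch, assuming the VPC CIDR is 10.10.0.0/16 (an assumption consistent with the node IPs seen later) and three AZs:

```shell
# cidrsubnet("10.10.0.0/16", 4, k)      adds 4 bits -> /20, third octet steps by 16
# cidrsubnet("10.10.0.0/16", 8, k + 48) adds 8 bits -> /24, third octet is 48 + k
for k in 0 1 2; do
  echo "private[$k] = 10.10.$(( k * 16 )).0/20    public[$k] = 10.10.$(( k + 48 )).0/24"
done
```

This matches the addresses observed in this cluster, e.g. the Fargate node IP 10.10.21.203 falls inside 10.10.16.0/20.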

Image


๐Ÿ”ง Installing kube-ops-view on Fargate

1. Add the Helm repository

1
2
3
4
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/

# Result
"geek-cookbook" already exists with the same configuration, skipping

2. Install kube-ops-view

1
2
3
4
5
6
7
8
9
10
11
12
13
14
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set env.TZ="Asia/Seoul" --namespace kube-system

# Result
NAME: kube-ops-view
LAST DEPLOYED: Sat Mar 22 12:01:51 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace kube-system -l "app.kubernetes.io/name=kube-ops-view,app.kubernetes.io/instance=kube-ops-view" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:8080

3. Automatic mutation by the Fargate profile

  • When a pod is deployed to the kube-system namespace, its spec is automatically mutated by the Fargate profile
  • The Fargate scheduler then places the pod on a micro VM
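Conceptually, the mutation is a patch applied to the submitted pod spec at admission time. A toy stand-in (not the real webhook, which is server-side admission logic in EKS) for the schedulerName rewrite:

```shell
# Toy sketch of what the Fargate mutating webhook does to matched pod specs:
# the schedulerName field is rewritten before the pod is ever scheduled.
submitted='schedulerName: default-scheduler'
mutated=$(printf '%s\n' "$submitted" | sed 's/default-scheduler/fargate-scheduler/')
echo "$mutated"
```

The actual webhook (`0500-amazon-eks-fargate-mutation.amazonaws.com`, listed later in this post) also injects annotations such as CapacityProvisioned.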

4. ๋ฐฐํฌ๋œ ํŒŒ๋“œ ํ™•์ธ

1
k get pod -A

โœ… Output

1
2
3
4
5
6
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   aws-load-balancer-controller-5f8c95c4cd-827kg   1/1     Running   0          107m
kube-system   aws-load-balancer-controller-5f8c95c4cd-dz7k5   1/1     Running   0          107m
kube-system   coredns-64696d8b7f-mhdbl                        1/1     Running   0          107m
kube-system   coredns-64696d8b7f-w8xkl                        1/1     Running   0          107m
kube-system   kube-ops-view-796947d6dc-vtv6c                  1/1     Running   0          62s

5. Port forwarding and access URL

(1) Port forwarding

1
kubectl port-forward deployment/kube-ops-view -n kube-system 8080:8080 &

โœ… Output

1
2
3
4
[1] 79226

Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

(2) Check the access URL

1
echo -e "KUBE-OPS-VIEW URL = http://localhost:8080/#scale=1.5"

โœ… Output

1
KUBE-OPS-VIEW URL = http://localhost:8080/#scale=1.5
  • ๊ฐ Micro VM๋งˆ๋‹ค ๋‹จ์ผ pod๊ฐ€ ํ• ๋‹น๋จ

Image

6. Check node and pod information

(1) Check CSR status

1
kubectl get csr

โœ… Output

1
2
NAME        AGE     SIGNERNAME                      REQUESTOR                                                            REQUESTEDDURATION   CONDITION
csr-dpjpw   7m56s   kubernetes.io/kubelet-serving   system:node:fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   <none>              Approved,Issued

(2) List nodes

1
kubectl get node -owide

โœ… Output

1
2
3
4
5
6
NAME                                                      STATUS   ROLES    AGE     VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-10-10-19-109.ap-northeast-2.compute.internal   Ready    <none>   115m    v1.30.8-eks-2d5f260   10.10.19.109   <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-22-59.ap-northeast-2.compute.internal    Ready    <none>   115m    v1.30.8-eks-2d5f260   10.10.22.59    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-32-64.ap-northeast-2.compute.internal    Ready    <none>   115m    v1.30.8-eks-2d5f260   10.10.32.64    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-5-126.ap-northeast-2.compute.internal    Ready    <none>   9m14s   v1.30.8-eks-2d5f260   10.10.5.126    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-10-5-189.ap-northeast-2.compute.internal    Ready    <none>   115m    v1.30.8-eks-2d5f260   10.10.5.189    <none>        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25

(3) Check compute type and taints

1
kubectl describe node | grep eks.amazonaws.com/compute-type

โœ… Output

1
2
3
4
5
6
7
8
9
10
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
                    eks.amazonaws.com/compute-type=fargate
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule

(4) Check CapacityProvisioned

1
kubectl get pod -n kube-system -o jsonpath='{.items[0].metadata.annotations.CapacityProvisioned}'

โœ… Output

1
0.25vCPU 0.5GB

7. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์ƒ์„ธ ์ •๋ณด ํ™•์ธ

1
kubectl get deploy -n kube-system kube-ops-view -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: kube-ops-view
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2025-03-22T03:01:52Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: kube-ops-view
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kube-ops-view
    app.kubernetes.io/version: 20.4.0
    helm.sh/chart: kube-ops-view-1.2.2
  name: kube-ops-view
  namespace: kube-system
  resourceVersion: "30486"
  uid: 21bd6d0f-4013-485c-b738-1ba7e4c429b7
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: kube-ops-view
      app.kubernetes.io/name: kube-ops-view
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: kube-ops-view
        app.kubernetes.io/name: kube-ops-view
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TZ
          value: Asia/Seoul
        image: hjacobs/kube-ops-view:20.4.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 1
        name: kube-ops-view
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 1
        resources: {}
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        startupProbe:
          failureThreshold: 30
          periodSeconds: 5
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-ops-view
      serviceAccountName: kube-ops-view
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2025-03-22T03:02:45Z"
    lastUpdateTime: "2025-03-22T03:02:45Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2025-03-22T03:01:52Z"
    lastUpdateTime: "2025-03-22T03:02:45Z"
    message: ReplicaSet "kube-ops-view-796947d6dc" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

8. Inspect the pod details

  • Admission control ultimately sets schedulerName to fargate-scheduler

1
kubectl get pod -n kube-system -l app.kubernetes.io/instance=kube-ops-view -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      CapacityProvisioned: 0.25vCPU 0.5GB
      Logging: LoggingEnabled
    creationTimestamp: "2025-03-22T03:01:52Z"
    generateName: kube-ops-view-796947d6dc-
    labels:
      app.kubernetes.io/instance: kube-ops-view
      app.kubernetes.io/name: kube-ops-view
      eks.amazonaws.com/fargate-profile: kube-system
      pod-template-hash: 796947d6dc
    name: kube-ops-view-796947d6dc-vtv6c
    namespace: kube-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: kube-ops-view-796947d6dc
      uid: 8b775fea-bd27-498e-862d-9fa03526e959
    resourceVersion: "30482"
    uid: 7ff1d6c3-c32e-473f-94b1-b3e397b4f297
  spec:
    automountServiceAccountToken: true
    containers:
    - env:
      - name: TZ
        value: Asia/Seoul
      image: hjacobs/kube-ops-view:20.4.0
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
        tcpSocket:
          port: 8080
        timeoutSeconds: 1
      name: kube-ops-view
      ports:
      - containerPort: 8080
        name: http
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
        tcpSocket:
          port: 8080
        timeoutSeconds: 1
      resources: {}
      securityContext:
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1000
      startupProbe:
        failureThreshold: 30
        periodSeconds: 5
        successThreshold: 1
        tcpSocket:
          port: 8080
        timeoutSeconds: 1
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-fssmb
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: fargate-ip-10-10-5-126.ap-northeast-2.compute.internal
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: fargate-scheduler
    securityContext: {}
    serviceAccount: kube-ops-view
    serviceAccountName: kube-ops-view
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: kube-api-access-fssmb
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:02:43Z"
      status: "True"
      type: PodReadyToStartContainers
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:02:32Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:02:45Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:02:45Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:02:32Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://83296b3547dee780557c57c1eecc3f2163e0347169315d430a69ecccb450914f
      image: docker.io/hjacobs/kube-ops-view:20.4.0
      imageID: docker.io/hjacobs/kube-ops-view@sha256:58221b57d4d23efe7558355c58ad7c66c8458db20b1f55ddd9f89cc9275bbc90
      lastState: {}
      name: kube-ops-view
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2025-03-22T03:02:43Z"
    hostIP: 10.10.5.126
    hostIPs:
    - ip: 10.10.5.126
    phase: Running
    podIP: 10.10.5.126
    podIPs:
    - ip: 10.10.5.126
    qosClass: BestEffort
    startTime: "2025-03-22T03:02:32Z"
kind: List
metadata:
  resourceVersion: ""

9. Check pod events

1
kubectl describe pod -n kube-system -l app.kubernetes.io/instance=kube-ops-view | grep Events: -A10

โœ… Output

1
2
3
4
5
6
7
8
9
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  LoggingEnabled  16m   fargate-scheduler  Successfully enabled logging for pod
  Normal  Scheduled       15m   fargate-scheduler  Successfully assigned kube-system/kube-ops-view-796947d6dc-vtv6c to fargate-ip-10-10-5-126.ap-northeast-2.compute.internal
  Normal  Pulling         15m   kubelet            Pulling image "hjacobs/kube-ops-view:20.4.0"
  Normal  Pulled          15m   kubelet            Successfully pulled image "hjacobs/kube-ops-view:20.4.0" in 9.982s (9.982s including waiting). Image size: 81086356 bytes.
  Normal  Created         15m   kubelet            Created container kube-ops-view
  Normal  Started         15m   kubelet            Started container kube-ops-view

๐Ÿ” fargate ์— netshoot ๋””ํ”Œ๋กœ์ด๋จผํŠธ(ํŒŒ๋“œ)

1. ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์ƒ์„ฑ

1
2
3
4
kubectl create ns study-aews

# Result
namespace/study-aews created

2. Create the netshoot Deployment

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot
  namespace: study-aews
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netshoot
  template:
    metadata:
      labels:
        app: netshoot
    spec:
      containers:
      - name: netshoot
        image: nicolaka/netshoot
        command: ["tail"]
        args: ["-f", "/dev/null"]
        resources: 
          requests:
            cpu: 500m
            memory: 500Mi
          limits:
            cpu: 2
            memory: 2Gi
      terminationGracePeriodSeconds: 0
EOF

# Result
deployment.apps/netshoot created

3. Monitor events

1
kubectl get events -w --sort-by '.lastTimestamp'

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
warning: --watch requested, --sort-by will be ignored for watch events received
LAST SEEN   TYPE      REASON                    OBJECT                                                        MESSAGE
21m         Normal    Starting                  node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   
21m         Normal    Starting                  node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Starting kubelet.
21m         Warning   InvalidDiskCapacity       node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   invalid capacity 0 on image filesystem
21m         Normal    NodeHasSufficientMemory   node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Node fargate-ip-10-10-5-126.ap-northeast-2.compute.internal status is now: NodeHasSufficientMemory
21m         Normal    NodeHasNoDiskPressure     node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Node fargate-ip-10-10-5-126.ap-northeast-2.compute.internal status is now: NodeHasNoDiskPressure
21m         Normal    NodeHasSufficientPID      node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Node fargate-ip-10-10-5-126.ap-northeast-2.compute.internal status is now: NodeHasSufficientPID
21m         Normal    NodeAllocatableEnforced   node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Updated Node Allocatable limit across pods
21m         Normal    Synced                    node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Node synced successfully
21m         Normal    NodeReady                 node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Node fargate-ip-10-10-5-126.ap-northeast-2.compute.internal status is now: NodeReady
21m         Normal    RegisteredNode            node/fargate-ip-10-10-5-126.ap-northeast-2.compute.internal   Node fargate-ip-10-10-5-126.ap-northeast-2.compute.internal event: Registered Node fargate-ip-10-10-5-126.ap-northeast-2.compute.internal in Controller
0s          Normal    Starting                  node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Starting kubelet.
0s          Warning   InvalidDiskCapacity       node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   invalid capacity 0 on image filesystem
0s          Normal    NodeHasSufficientMemory   node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeHasSufficientMemory
0s          Normal    NodeHasNoDiskPressure     node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeHasNoDiskPressure
0s          Normal    NodeHasSufficientPID      node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeHasSufficientPID
0s          Normal    NodeAllocatableEnforced   node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Updated Node Allocatable limit across pods
0s          Normal    NodeHasSufficientMemory   node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeHasSufficientMemory
0s          Normal    NodeHasNoDiskPressure     node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeHasNoDiskPressure
0s          Normal    NodeHasSufficientPID      node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeHasSufficientPID
0s          Normal    Synced                    node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node synced successfully
0s          Normal    NodeReady                 node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal status is now: NodeReady
0s          Normal    Starting                  node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   
0s          Normal    RegisteredNode            node/fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal event: Registered Node fargate-ip-10-10-21-203.ap-northeast-2.compute.internal in Controller

Image

4. Check CapacityProvisioned

1
kubectl get pod -n study-aews -o jsonpath='{.items[0].metadata.annotations.CapacityProvisioned}'

โœ… Output

1
0.5vCPU 1GB

5. ๋””ํ”Œ๋กœ์ด๋จผํŠธ ์ƒ์„ธ ์ •๋ณด ํ™•์ธ

1
kubectl get deploy -n study-aews netshoot -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"netshoot","namespace":"study-aews"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"netshoot"}},"template":{"metadata":{"labels":{"app":"netshoot"}},"spec":{"containers":[{"args":["-f","/dev/null"],"command":["tail"],"image":"nicolaka/netshoot","name":"netshoot","resources":{"limits":{"cpu":2,"memory":"2Gi"},"requests":{"cpu":"500m","memory":"500Mi"}}}],"terminationGracePeriodSeconds":0}}}}
  creationTimestamp: "2025-03-22T03:24:07Z"
  generation: 1
  name: netshoot
  namespace: study-aews
  resourceVersion: "36642"
  uid: 5ad8ba1b-5fed-44df-a32a-b8359d399764
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: netshoot
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: netshoot
    spec:
      containers:
      - args:
        - -f
        - /dev/null
        command:
        - tail
        image: nicolaka/netshoot
        imagePullPolicy: Always
        name: netshoot
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 500Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2025-03-22T03:25:14Z"
    lastUpdateTime: "2025-03-22T03:25:14Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2025-03-22T03:24:07Z"
    lastUpdateTime: "2025-03-22T03:25:14Z"
    message: ReplicaSet "netshoot-84558cd8d9" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

6. Inspect the pod details

1
kubectl get pod -n study-aews -l app=netshoot -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      CapacityProvisioned: 0.5vCPU 1GB
      Logging: LoggingEnabled
    creationTimestamp: "2025-03-22T03:24:07Z"
    generateName: netshoot-84558cd8d9-
    labels:
      app: netshoot
      eks.amazonaws.com/fargate-profile: study_wildcard
      pod-template-hash: 84558cd8d9
    name: netshoot-84558cd8d9-6nnnc
    namespace: study-aews
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: netshoot-84558cd8d9
      uid: 29b941bf-d875-4de9-940b-c80a6bc23c8c
    resourceVersion: "36640"
    uid: 8aa75100-ced7-46a2-ab95-26fe6a072d63
  spec:
    containers:
    - args:
      - -f
      - /dev/null
      command:
      - tail
      image: nicolaka/netshoot
      imagePullPolicy: Always
      name: netshoot
      resources:
        limits:
          cpu: "2"
          memory: 2Gi
        requests:
          cpu: 500m
          memory: 500Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6c6ls
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: fargate-ip-10-10-21-203.ap-northeast-2.compute.internal
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: fargate-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 0
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: kube-api-access-6c6ls
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:25:14Z"
      status: "True"
      type: PodReadyToStartContainers
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:24:56Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:25:14Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:25:14Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2025-03-22T03:24:55Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://6cfc2c5d870d3871d0b38db3c7ff6238e033037a7b8a86442cb7939e206d13ae
      image: docker.io/nicolaka/netshoot:latest
      imageID: docker.io/nicolaka/netshoot@sha256:a20c2531bf35436ed3766cd6cfe89d352b050ccc4d7005ce6400adf97503da1b
      lastState: {}
      name: netshoot
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2025-03-22T03:25:13Z"
    hostIP: 10.10.21.203
    hostIPs:
    - ip: 10.10.21.203
    phase: Running
    podIP: 10.10.21.203
    podIPs:
    - ip: 10.10.21.203
    qosClass: Burstable
    startTime: "2025-03-22T03:24:56Z"
kind: List
metadata:
  resourceVersion: ""
  • ์–ด๋…ธํ…Œ์ด์…˜์— CapacityProvisioned(0.5vCPU 1GB)์™€ LoggingEnabled ๋“ฑ์ด ํฌํ•จ๋จ
  • fargate ํ”„๋กœํŒŒ์ผ์— ์˜ํ•ด ์Šค์ผ€์ค„๋Ÿฌ๊ฐ€ fargate-scheduler๋กœ ์„ค์ •๋˜์–ด ์žˆ์Œ
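The 0.5vCPU 1GB figure can be reproduced by hand: Fargate adds roughly 256 MiB of memory overhead for Kubernetes components, then rounds the pod's requests up to the nearest supported configuration. A simplified sketch (the 256 MiB overhead comes from AWS documentation; the rounding here is deliberately naive):

```shell
# netshoot requests cpu=500m (-> 0.5 vCPU tier) and memory=500Mi
req_mem_mi=500
overhead_mi=256                               # ~256 MiB reserved for k8s components
total_mi=$(( req_mem_mi + overhead_mi ))      # 756 MiB
rounded_gb=$(( (total_mi + 1023) / 1024 ))    # round up to a whole GB
echo "0.5vCPU ${rounded_gb}GB"                # matches the CapacityProvisioned annotation
```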

7. Check pod event logs

1
kubectl describe pod -n study-aews -l app=netshoot | grep Events: -A10

โœ… Output

1
2
3
4
5
6
7
8
9
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  LoggingEnabled  5m57s  fargate-scheduler  Successfully enabled logging for pod
  Normal  Scheduled       5m9s   fargate-scheduler  Successfully assigned study-aews/netshoot-84558cd8d9-6nnnc to fargate-ip-10-10-21-203.ap-northeast-2.compute.internal
  Normal  Pulling         5m9s   kubelet            Pulling image "nicolaka/netshoot"
  Normal  Pulled          4m52s  kubelet            Successfully pulled image "nicolaka/netshoot" in 16.95s (16.95s including waiting). Image size: 183950747 bytes.
  Normal  Created         4m52s  kubelet            Created container netshoot
  Normal  Started         4m52s  kubelet            Started container netshoot

8. Check MutatingWebhook configurations

1
kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io

โœ… Output

1
2
3
4
5
NAME                                             WEBHOOKS   AGE
0500-amazon-eks-fargate-mutation.amazonaws.com   2          139m
aws-load-balancer-webhook                        3          137m
pod-identity-webhook                             1          145m
vpc-resource-mutating-webhook                    1          145m

9. Check networking inside the pod

1
2
3
4
5
6
7
8
9
10
11
kubectl exec -it deploy/netshoot -n study-aews -- zsh
                    dP            dP                           dP   
                    88            88                           88   
88d888b. .d8888b. d8888P .d8888b. 88d888b. .d8888b. .d8888b. d8888P 
88'  `88 88ooood8   88   Y8ooooo. 88'  `88 88'  `88 88'  `88   88   
88    88 88.  ...   88         88 88    88 88.  .88 88.  .88   88   
dP    dP `88888P'   dP   `88888P' dP    dP `88888P' `88888P'   dP   
                                                                    
Welcome to Netshoot! (github.com/nicolaka/netshoot)
Version: 0.13
netshoot-84558cd8d9-6nnnc ~ ip -c a

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
    link/ether 62:26:be:1f:2e:c1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.21.203/20 brd 10.10.31.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6026:beff:fe1f:2ec1/64 scope link 
       valid_lft forever preferred_lft forever
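The /20 mask on eth0 lines up with the private-subnet sizing from earlier. A quick sketch, again assuming the 10.10.0.0/16 VPC CIDR, of which /20 block contains the pod IP 10.10.21.203:

```shell
# /20 private subnets carve the third octet into blocks 16 wide
third_octet=21
block_start=$(( third_octet / 16 * 16 ))   # integer division floors to the block start
echo "10.10.${block_start}.0/20"           # the second private subnet
```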

10. Check DNS configuration

1
netshoot-84558cd8d9-6nnnc ~ cat /etc/resolv.conf

โœ… Output

1
2
3
search study-aews.svc.cluster.local svc.cluster.local cluster.local ap-northeast-2.compute.internal
nameserver 172.20.0.10
options ndots:5
  • ํŒŒ๋“œ์˜ DNS ์„ค์ • ํŒŒ์ผ์—์„œ nameserver๊ฐ€ 172.20.0.10์œผ๋กœ ์„ค์ •๋˜์–ด ์žˆ์Œ

11. External internet access and public IP check

1
netshoot-84558cd8d9-6nnnc ~ curl ipinfo.io/ip

โœ… Output

1
43.201.218.71
  • When the pod reaches the external internet, traffic egresses through the NAT Gateway, so the public IP (43.201.218.71) is what gets exposed

Image

12. ๋””์Šคํฌ ๋ฐ ํŒŒ์ผ ์‹œ์Šคํ…œ ์ •๋ณด ํ™•์ธ

(1) ๋””์Šคํฌ ์ •๋ณด ์กฐํšŒ

1
netshoot-84558cd8d9-6nnnc ~ lsblk

โœ… Output

1
2
3
4
5
6
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme1n1       259:0    0  30G  0 disk /etc/resolv.conf
                                      /etc/hostname
nvme0n1       259:1    0   5G  0 disk 
โ”œโ”€nvme0n1p1   259:2    0   5G  0 part 
โ””โ”€nvme0n1p128 259:3    0   1M  0 part

(2) ํŒŒ์ผ ์‹œ์Šคํ…œ ์‚ฌ์šฉ๋Ÿ‰ ๋ฐ ํƒ€์ž… ํ™•์ธ

1
netshoot-84558cd8d9-6nnnc ~ df -hT /

โœ… Output

1
2
Filesystem           Type            Size      Used Available Use% Mounted on
overlay              overlay        29.4G     11.8G     16.0G  43% /

13. Checking the fstab configuration

1
netshoot-84558cd8d9-6nnnc ~ cat /etc/fstab

โœ… Output

1
2
/dev/cdrom	/media/cdrom	iso9660	noauto,ro 0 0
/dev/usbdisk	/media/usb	vfat	noauto,ro 0 0

๐Ÿšจ Attempting a host takeover via pod privileges and shared host namespaces

1. Attempting to create an attack pod

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: root-shell
  namespace: study-aews
spec:
  containers:
  - command:
    - /bin/cat
    image: alpine:3
    name: root-shell
    securityContext:
      privileged: true
    tty: true
    stdin: true
    volumeMounts:
    - mountPath: /host
      name: hostroot
  hostNetwork: true
  hostPID: true
  hostIPC: true
  tolerations:
  - effect: NoSchedule
    operator: Exists
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:
      path: /
    name: hostroot
EOF
# Result
pod/root-shell created

2. Checking the pod status

1
kubectl get pod -n study-aews root-shell

โœ… Output

1
2
NAME         READY   STATUS    RESTARTS   AGE
root-shell   0/1     Pending   0          57s

3. Checking the scheduling-failure event

1
kubectl describe pod -n study-aews root-shell | grep Events: -A 10

โœ… Output

1
2
3
4
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  93s   fargate-scheduler  Pod not supported on Fargate: fields not supported: HostNetwork, HostPID, HostIPC, volumes not supported: hostroot is of an unsupported volume Type, invalid SecurityContext fields: Privileged
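The event lists exactly which spec fields the fargate-scheduler rejects. A small pre-flight check for those fields can be sketched as follows (a hypothetical helper based only on the fields named in the event above, not an official AWS API):

```python
# Pre-flight check for the pod-spec fields the fargate-scheduler rejected
# in the event above.  Hypothetical helper for illustration.
def fargate_violations(pod_spec):
    problems = []
    for field in ("hostNetwork", "hostPID", "hostIPC"):
        if pod_spec.get(field):
            problems.append(f"field not supported: {field}")
    for vol in pod_spec.get("volumes", []):
        if "hostPath" in vol:
            problems.append(f"unsupported volume type: {vol.get('name')} (hostPath)")
    for ctr in pod_spec.get("containers", []):
        if ctr.get("securityContext", {}).get("privileged"):
            problems.append(f"invalid securityContext: privileged ({ctr['name']})")
    return problems

# The root-shell pod spec from step 1, reduced to the offending fields.
root_shell_spec = {
    "hostNetwork": True, "hostPID": True, "hostIPC": True,
    "volumes": [{"name": "hostroot", "hostPath": {"path": "/"}}],
    "containers": [{"name": "root-shell", "securityContext": {"privileged": True}}],
}
for problem in fargate_violations(root_shell_spec):
    print(problem)
```

Every one of the five violations reported here corresponds to an item in the scheduler's event message, which is why the pod stays Pending: Fargate simply has no host to escape to.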

4. Deleting the pod

1
2
3
4
kubectl delete pod -n study-aews root-shell

# Result
pod "root-shell" deleted

๐ŸŒ AWS ALB(Ingress)

1. Deploying the game Deployment, Service, and Ingress

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: study-aews
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: study-aews
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: ClusterIP
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: study-aews
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-2048
              port:
                number: 80
EOF

# Result
deployment.apps/deployment-2048 created
service/service-2048 created
ingress.networking.k8s.io/ingress-2048 created

2. Checking the created resources

1
kubectl get pod,ingress,svc,ep,endpointslices -n study-aews

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
NAME                                  READY   STATUS    RESTARTS   AGE
pod/deployment-2048-85f8c7d69-pqqdv   1/1     Running   0          111s
pod/deployment-2048-85f8c7d69-zm2zf   1/1     Running   0          111s
pod/netshoot-84558cd8d9-6nnnc         1/1     Running   0          29m

NAME                                     CLASS   HOSTS   ADDRESS                                                                        PORTS   AGE
ingress.networking.k8s.io/ingress-2048   alb     *       k8s-studyaew-ingress2-08c53ee834-1560071651.ap-northeast-2.elb.amazonaws.com   80      110s

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/service-2048   ClusterIP   172.20.201.9   <none>        80/TCP    112s

NAME                     ENDPOINTS                        AGE
endpoints/service-2048   10.10.11.11:80,10.10.45.150:80   111s

NAME                                                ADDRESSTYPE   PORTS   ENDPOINTS                  AGE
endpointslice.discovery.k8s.io/service-2048-xc5sx   IPv4          80      10.10.11.11,10.10.45.150   111s

Image Image Image

3. Listing all resources

1
kubectl get-all -n study-aews

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
NAME                                                               NAMESPACE   AGE
configmap/kube-root-ca.crt                                         study-aews  35m    
endpoints/service-2048                                             study-aews  6m39s  
pod/deployment-2048-85f8c7d69-pqqdv                                study-aews  6m40s  
pod/deployment-2048-85f8c7d69-zm2zf                                study-aews  6m40s  
pod/netshoot-84558cd8d9-6nnnc                                      study-aews  34m    
serviceaccount/default                                             study-aews  35m    
service/service-2048                                               study-aews  6m40s  
deployment.apps/deployment-2048                                    study-aews  6m40s  
deployment.apps/netshoot                                           study-aews  34m    
replicaset.apps/deployment-2048-85f8c7d69                          study-aews  6m40s  
replicaset.apps/netshoot-84558cd8d9                                study-aews  34m    
endpointslice.discovery.k8s.io/service-2048-xc5sx                  study-aews  6m39s  
targetgroupbinding.elbv2.k8s.aws/k8s-studyaew-service2-96ed4830e0  study-aews  6m35s  
ingress.networking.k8s.io/ingress-2048                             study-aews  6m39s  

4. Checking the TargetGroupBinding

1
kubectl get targetgroupbindings -n study-aews

โœ… Output

1
2
NAME                               SERVICE-NAME   SERVICE-PORT   TARGET-TYPE   AGE
k8s-studyaew-service2-96ed4830e0   service-2048   80             ip            8m20s

5. Checking the Ingress details

1
kubectl describe ingress -n study-aews ingress-2048

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Name:             ingress-2048
Labels:           <none>
Namespace:        study-aews
Address:          k8s-studyaew-ingress2-08c53ee834-1560071651.ap-northeast-2.elb.amazonaws.com
Ingress Class:    alb
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   service-2048:80 (10.10.11.11:80,10.10.45.150:80)
Annotations:  alb.ingress.kubernetes.io/scheme: internet-facing
              alb.ingress.kubernetes.io/target-type: ip
Events:
  Type    Reason                  Age    From     Message
  ----    ------                  ----   ----     -------
  Normal  SuccessfullyReconciled  9m50s  ingress  Successfully reconciled

6. Checking the Ingress URL and accessing the game

1
kubectl get ingress -n study-aews ingress-2048 -o jsonpath="{.status.loadBalancer.ingress[*].hostname}{'\n'}"

โœ… Output

1
k8s-studyaew-ingress2-08c53ee834-1560071651.ap-northeast-2.elb.amazonaws.com

Image

7. Checking the pod IPs

1
kubectl get pod -n study-aews -owide

โœ… Output

1
2
3
4
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE                                                      NOMINATED NODE   READINESS GATES
deployment-2048-85f8c7d69-pqqdv   1/1     Running   0          13m   10.10.11.11    fargate-ip-10-10-11-11.ap-northeast-2.compute.internal    <none>           <none>
deployment-2048-85f8c7d69-zm2zf   1/1     Running   0          13m   10.10.45.150   fargate-ip-10-10-45-150.ap-northeast-2.compute.internal   <none>           <none>
netshoot-84558cd8d9-6nnnc         1/1     Running   0          40m   10.10.21.203   fargate-ip-10-10-21-203.ap-northeast-2.compute.internal   <none>           <none>
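Because the Ingress uses `target-type: ip`, the controller registers pod IPs directly in the target group, so the service endpoints seen in step 2 should match the IPs of the pods backing the service exactly. A quick offline sanity check with the values from the outputs above:

```python
# Values copied from the kubectl outputs above.
endpoint_ips = {"10.10.11.11", "10.10.45.150"}             # endpoints/service-2048
pod_ips = {"10.10.11.11", "10.10.45.150", "10.10.21.203"}  # all pods in study-aews

# Every registered endpoint must be the IP of a pod in the namespace.
assert endpoint_ips <= pod_ips
# Pods that do not match the service selector (netshoot) are not registered.
print(sorted(pod_ips - endpoint_ips))  # ['10.10.21.203']
```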

8. Scaling out the pods

1
2
kubectl scale deployment -n study-aews  deployment-2048 --replicas 4
deployment.apps/deployment-2048 scaled
  • In a Fargate environment, you can see that each node hosts exactly one pod
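That one-pod-per-node property can be checked mechanically by grouping pods by their `spec.nodeName`. A sketch that simulates the relevant slice of `kubectl get pod -o json`, using the node names shown in the output above:

```python
from collections import Counter

# Simulated slice of `kubectl get pod -n study-aews -o json`, using the
# node names from the output above.
pods = [
    {"name": "deployment-2048-85f8c7d69-pqqdv",
     "node": "fargate-ip-10-10-11-11.ap-northeast-2.compute.internal"},
    {"name": "deployment-2048-85f8c7d69-zm2zf",
     "node": "fargate-ip-10-10-45-150.ap-northeast-2.compute.internal"},
    {"name": "netshoot-84558cd8d9-6nnnc",
     "node": "fargate-ip-10-10-21-203.ap-northeast-2.compute.internal"},
]

per_node = Counter(p["node"] for p in pods)
# On Fargate every node (micro-VM) should host exactly one pod.
print(all(count == 1 for count in per_node.values()))  # True
```

Scaling the deployment to 4 replicas therefore creates two more Fargate nodes rather than packing pods onto existing ones.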

Image

9. Deleting the game resources

1
2
3
4
kubectl delete ingress ingress-2048 -n study-aews

# Result
ingress.networking.k8s.io "ingress-2048" deleted
1
2
3
4
5
kubectl delete svc service-2048 -n study-aews && kubectl delete deploy deployment-2048 -n study-aews

# Result
service "service-2048" deleted
deployment.apps "deployment-2048" deleted

โณ fargate job

1. Creating and deploying Jobs

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox1
  namespace: study-aews
spec:
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "sleep 10"]
      restartPolicy: Never
  ttlSecondsAfterFinished: 60 # <-- TTL controller
---
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox2
  namespace: study-aews
spec:
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "sleep 10"]
      restartPolicy: Never
EOF

# Result
job.batch/busybox1 created
job.batch/busybox2 created

2. Checking Job and Pod status

1
k get job,pod -A

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
NAMESPACE    NAME                 STATUS     COMPLETIONS   DURATION   AGE
study-aews   job.batch/busybox1   Complete   1/1           61s        62s
study-aews   job.batch/busybox2   Running    0/1           62s        62s
NAMESPACE     NAME                                                READY   STATUS      RESTARTS   AGE
kube-system   pod/aws-load-balancer-controller-5f8c95c4cd-827kg   1/1     Running     0          175m
kube-system   pod/aws-load-balancer-controller-5f8c95c4cd-dz7k5   1/1     Running     0          175m
kube-system   pod/coredns-64696d8b7f-mhdbl                        1/1     Running     0          175m
kube-system   pod/coredns-64696d8b7f-w8xkl                        1/1     Running     0          175m
kube-system   pod/kube-ops-view-796947d6dc-vtv6c                  1/1     Running     0          69m
study-aews    pod/busybox1-spxfg                                  0/1     Completed   0          62s
study-aews    pod/busybox2-gkhw2                                  0/1     Completed   0          62s
study-aews    pod/netshoot-84558cd8d9-6nnnc                       1/1     Running     0          46m
  • Thanks to the TTL setting, the Job is deleted automatically a fixed time (60 seconds) after it completes
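`ttlSecondsAfterFinished` is enforced by the TTL-after-finished controller: deletion is scheduled relative to the Job's completion time, and Jobs without the field (like busybox2 above) are kept until deleted manually. The timing rule in a nutshell:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def deletion_time(completion: datetime, ttl_seconds: Optional[int]) -> Optional[datetime]:
    """When the TTL-after-finished controller will delete a finished Job.

    Jobs without ttlSecondsAfterFinished (busybox2 above) return None:
    they stay until deleted manually.
    """
    if ttl_seconds is None:
        return None
    return completion + timedelta(seconds=ttl_seconds)

done = datetime(2025, 3, 22, 1, 30, tzinfo=timezone.utc)
print(deletion_time(done, 60))    # busybox1: gone 60s after completion
print(deletion_time(done, None))  # busybox2: never auto-deleted
```

This also explains the cleanup output below: by the time `kubectl delete job --all` runs, only busybox2 is left to delete.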

Image

3. ํ™•์ธ ํ›„ ๋ฆฌ์†Œ์Šค ์ •๋ฆฌ

  • Delete every Job in the study-aews namespace
1
2
kubectl delete job -n study-aews --all
job.batch "busybox2" deleted

๐Ÿ“œ fargate logging

1. Creating the Deployment and Service

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: study-aews
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
          name: http
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
          limits:
            cpu: 2
            memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
  namespace: study-aews
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
EOF

# Result
deployment.apps/sample-app created
service/sample-app created

2. Checking the pod status

1
kubectl get pod -n study-aews -l app=nginx

โœ… Output

1
2
3
NAME                          READY   STATUS    RESTARTS   AGE
sample-app-7596c66778-gcbpm   1/1     Running   0          4m47s
sample-app-7596c66778-m72wk   1/1     Running   0          4m47s

3. Checking the logging configuration

(1) ๋„ค์ž„์ŠคํŽ˜์ด์Šค ๋ผ๋ฒจ ํ™•์ธ

1
kubectl get ns --show-labels

โœ… Output

1
2
3
4
5
6
7
NAME                STATUS   AGE     LABELS
aws-observability   Active   3h10m   aws-observability=enabled,kubernetes.io/metadata.name=aws-observability
default             Active   3h13m   kubernetes.io/metadata.name=default
kube-node-lease     Active   3h13m   kubernetes.io/metadata.name=kube-node-lease
kube-public         Active   3h13m   kubernetes.io/metadata.name=kube-public
kube-system         Active   3h13m   kubernetes.io/metadata.name=kube-system
study-aews          Active   57m     kubernetes.io/metadata.name=study-aews

(2) Listing ConfigMaps in the aws-observability namespace

1
kubectl get cm -n aws-observability

โœ… Output

1
2
3
NAME               DATA   AGE
aws-logging        4      3h10m
kube-root-ca.crt   1      3h10m

(3) Inspecting the aws-logging ConfigMap

1
kubectl get cm -n aws-observability aws-logging -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
apiVersion: v1
data:
  filters.conf: |
    [FILTER]
      Name parser
      Match *
      Key_name log
      Parser crio
    [FILTER]
      Name kubernetes
      Match kube.*
      Merge_Log On
      Keep_Log Off
      Buffer_Size 0
      Kube_Meta_Cache_TTL 300s
  flb_log_cw: "true"
  output.conf: |+
    [OUTPUT]
          Name cloudwatch
          Match kube.*
          region ap-northeast-2
          log_group_name /fargate-serverless/fargate-fluentbit-logs2025032201101014020000000c
          log_stream_prefix fargate-logs-
          auto_create_group true
    [OUTPUT]
          Name cloudwatch_logs
          Match *
          region ap-northeast-2
          log_group_name /fargate-serverless/fargate-fluentbit-logs2025032201101014020000000c
          log_stream_prefix fargate-logs-fluent-bit-
          auto_create_group true
  parsers.conf: |
    [PARSER]
      Name crio
      Format Regex
      Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
      Time_Key    time
      Time_Format %Y-%m-%dT%H:%M:%S.%L%z
      Time_Keep On
immutable: false
kind: ConfigMap
metadata:
  creationTimestamp: "2025-03-22T01:10:10Z"
  name: aws-logging
  namespace: aws-observability
  resourceVersion: "900"
  uid: b16b5c6d-c791-4965-b359-dd9116e8268b
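The `crio` parser above splits each CRI-formatted log line into time/stream/logtag/log fields before the kubernetes filter enriches it and the cloudwatch output ships it. The same regex can be exercised directly (the sample log line here is made up for illustration; Fluent Bit's `?<name>` groups become Python's `?P<name>`):

```python
import re

# The parsers.conf regex above, with Fluent Bit's (?<name>) groups
# rewritten as Python's (?P<name>).
CRIO_RE = re.compile(
    r"^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>P|F) (?P<log>.*)$"
)

# Hypothetical CRI-formatted container log line.
sample = "2025-03-22T01:30:48.123456789+09:00 stdout F GET / HTTP/1.1 200"
m = CRIO_RE.match(sample)
print(m.group("stream"), m.group("logtag"))  # stdout F
print(m.group("log"))                        # GET / HTTP/1.1 200
```

Only the `log` field survives as the message body; the `time` field is re-parsed with the `Time_Format` given in the ConfigMap.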

4. Checking the service response from inside a pod

1
kubectl exec -it deploy/netshoot -n study-aews -- curl sample-app | grep title

โœ… Output

1
<title>Welcome to nginx!</title>

5. Checking all service resources

1
k get svc -A

โœ… Output

1
2
3
4
5
6
7
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                          ClusterIP   172.20.0.1       <none>        443/TCP                  3h19m
kube-system   aws-load-balancer-webhook-service   ClusterIP   172.20.165.195   <none>        443/TCP                  3h11m
kube-system   eks-extension-metrics-api           ClusterIP   172.20.143.212   <none>        443/TCP                  3h19m
kube-system   kube-dns                            ClusterIP   172.20.0.10      <none>        53/UDP,53/TCP,9153/TCP   3h18m
kube-system   kube-ops-view                       ClusterIP   172.20.94.181    <none>        8080/TCP                 84m
study-aews    sample-app                          ClusterIP   172.20.151.12    <none>        80/TCP                   8m27s

6. Accessing the service repeatedly

1
while true; do kubectl exec -it deploy/netshoot -n study-aews -- curl sample-app | grep title; sleep 1; echo ; date; done;

โœ… Output

1
2
3
4
5
6
7
8
9
Sat Mar 22 01:30:48 PM KST 2025
<title>Welcome to nginx!</title>
Sat Mar 22 01:30:49 PM KST 2025
<title>Welcome to nginx!</title>
Sat Mar 22 01:30:51 PM KST 2025
<title>Welcome to nginx!</title>
Sat Mar 22 01:30:53 PM KST 2025
<title>Welcome to nginx!</title>
...
  • The collected logs can then be viewed in AWS CloudWatch Log groups

Image

7. Deleting the lab resources

(1) Deleting the sample-app resources

1
2
3
4
5
kubectl delete deploy,svc -n study-aews sample-app

# Result
deployment.apps "sample-app" deleted
service "sample-app" deleted

(2) Deleting the netshoot deployment

1
2
3
kubectl delete deploy netshoot -n study-aews
# Result
deployment.apps "netshoot" deleted

(3) Uninstalling kube-ops-view

1
2
3
helm uninstall kube-ops-view -n kube-system
# Result
release "kube-ops-view" uninstalled

(4) ํ…Œ๋ผํผ ์‚ญ์ œ

1
terraform destroy -auto-approve

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
...
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]: Destroying... [id=fargate-serverless-cluster-20250322010135020200000001-20250322010137182200000005]
module.vpc.aws_subnet.private[1]: Destroying... [id=subnet-05cefaf3ec33b3428]
module.vpc.aws_subnet.private[0]: Destroying... [id=subnet-036ecdf4f681ecd01]
module.vpc.aws_subnet.private[2]: Destroying... [id=subnet-0e4d404db8bc45ab4]
module.eks.aws_cloudwatch_log_group.this[0]: Destroying... [id=/aws/eks/fargate-serverless/cluster]
module.eks.module.kms.aws_kms_key.this[0]: Destroying... [id=6c1b1910-a38e-4d86-a3c2-26eaf093beb0]
module.eks.aws_cloudwatch_log_group.this[0]: Destruction complete after 0s
module.eks.module.kms.aws_kms_key.this[0]: Destruction complete after 0s
module.vpc.aws_subnet.private[2]: Destruction complete after 0s
module.vpc.aws_subnet.private[1]: Destruction complete after 0s
module.vpc.aws_subnet.private[0]: Destruction complete after 0s
module.vpc.aws_vpc.this[0]: Destroying... [id=vpc-075d7467603079482]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]: Destruction complete after 1s
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"]: Destruction complete after 1s
module.eks.aws_iam_role.this[0]: Destroying... [id=fargate-serverless-cluster-20250322010135020200000001]
module.vpc.aws_vpc.this[0]: Destruction complete after 1s
module.eks.aws_iam_role.this[0]: Destruction complete after 1s
Destroy complete! Resources: 64 destroyed.

(5) Deleting the kubeconfig

1
rm -rf ~/.kube/config

๐Ÿค– Auto mode

1. Fetching the code

1
2
3
4
5
6
7
8
9
10
11
git clone https://github.com/aws-samples/sample-aws-eks-auto-mode.git
cd sample-aws-eks-auto-mode/terraform

# Result
Cloning into 'sample-aws-eks-auto-mode'...
remote: Enumerating objects: 43, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 43 (delta 10), reused 41 (delta 10), pack-reused 0 (from 0)
Receiving objects: 100% (43/43), 19.21 KiB | 9.60 MiB/s, done.
Resolving deltas: 100% (10/10), done.

2. Editing the variables file (variables.tf)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
variable "name" {
  description = "Name of the VPC and EKS Cluster"
  default     = "automode-cluster"
  type        = string
}

variable "region" {
  description = "region"
  default     = "ap-northeast-2" 
  type        = string
}

variable "eks_cluster_version" {
  description = "EKS Cluster version"
  default     = "1.31"
  type        = string
}

# VPC with 65536 IPs (10.0.0.0/16) for 3 AZs
variable "vpc_cidr" {
  description = "VPC CIDR. This should be a valid private (RFC 1918) CIDR range"
  default     = "10.20.0.0/16"
  type        = string
}
  • Changed the region to ap-northeast-2
  • Changed the VPC CIDR to 10.20.0.0/16

3. Configuring eks.tf

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
################################################################################
# Cluster
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.24"

  cluster_name    = var.name
  cluster_version = var.eks_cluster_version

  # Give the Terraform identity admin access to the cluster
  # which will allow it to deploy resources into the cluster
  enable_cluster_creator_admin_permissions = true
  cluster_endpoint_public_access           = true
  
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }
  tags = local.tags
}
  • In the EKS cluster module settings, the "system" node pool is not added, so no dedicated system instances are used

4. ํ…Œ๋ผํผ ์ดˆ๊ธฐํ™”

1
terraform init

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 20.34.0 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 2.1.0 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 5.19.0 for vpc...
- vpc in .terraform/modules/vpc
Initializing provider plugins...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding hashicorp/kubernetes versions matching ">= 2.10.0"...
- Finding hashicorp/helm versions matching ">= 2.9.0"...
- Finding hashicorp/null versions matching ">= 3.0.0, >= 3.1.0"...
- Finding hashicorp/aws versions matching ">= 4.33.0, >= 5.34.0, >= 5.79.0, >= 5.83.0"...
- Finding latest version of hashicorp/local...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/tls versions matching ">= 3.0.0"...
- Installing hashicorp/local v2.5.2...
- Installed hashicorp/local v2.5.2 (signed by HashiCorp)
- Installing hashicorp/time v0.13.0...
- Installed hashicorp/time v0.13.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.6...
- Installed hashicorp/tls v4.0.6 (signed by HashiCorp)
- Installing hashicorp/cloudinit v2.3.6...
- Installed hashicorp/cloudinit v2.3.6 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.36.0...
- Installed hashicorp/kubernetes v2.36.0 (signed by HashiCorp)
- Installing hashicorp/helm v2.17.0...
- Installed hashicorp/helm v2.17.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.3...
- Installed hashicorp/null v3.2.3 (signed by HashiCorp)
- Installing hashicorp/aws v5.92.0...
- Installed hashicorp/aws v5.92.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

5. ํ…Œ๋ผํผ ๋ฐฐํฌ

1
terraform apply -auto-approve

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
...
Changes to Outputs:
  + configure_kubectl = "aws eks --region ap-northeast-2 update-kubeconfig --name automode-cluster"
null_resource.create_nodepools_dir: Creating...
null_resource.create_nodepools_dir: Provisioning with 'local-exec'...
null_resource.create_nodepools_dir (local-exec): Executing: ["/bin/sh" "-c" "mkdir -p ./../nodepools"]
null_resource.create_nodepools_dir: Creation complete after 0s [id=415270948853617396]
...
...
module.eks.aws_eks_cluster.this[0]: Still creating... [10m0s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating... [10m10s elapsed]
module.eks.aws_eks_cluster.this[0]: Creation complete after 10m15s [id=automode-cluster]
module.eks.aws_ec2_tag.cluster_primary_security_group["Blueprint"]: Creating...
module.eks.data.tls_certificate.this[0]: Reading...
module.eks.aws_eks_access_entry.this["cluster_creator"]: Creating...
module.eks.time_sleep.this[0]: Creating...
module.eks.data.tls_certificate.this[0]: Read complete after 0s [id=380aae2c5231dddde8b28d3d72626bcf7a67b2d8]
module.eks.aws_iam_openid_connect_provider.oidc_provider[0]: Creating...
module.eks.aws_ec2_tag.cluster_primary_security_group["Blueprint"]: Creation complete after 0s [id=sg-08f172750d9947967,Blueprint]
module.eks.aws_eks_access_entry.this["cluster_creator"]: Creation complete after 0s [id=automode-cluster:arn:aws:iam::378102432899:user/eks-user]
module.eks.aws_eks_access_policy_association.this["cluster_creator_admin"]: Creating...
module.eks.aws_eks_access_policy_association.this["cluster_creator_admin"]: Creation complete after 1s [id=automode-cluster#arn:aws:iam::378102432899:user/eks-user#arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy]
module.eks.aws_iam_openid_connect_provider.oidc_provider[0]: Creation complete after 1s [id=arn:aws:iam::378102432899:oidc-provider/oidc.eks.ap-northeast-2.amazonaws.com/id/03DFBBF640E92C148A3EFF53888FF939]
module.eks.time_sleep.this[0]: Still creating... [10s elapsed]
module.eks.time_sleep.this[0]: Still creating... [20s elapsed]
module.eks.time_sleep.this[0]: Still creating... [30s elapsed]
module.eks.time_sleep.this[0]: Creation complete after 30s [id=2025-03-22T05:10:26Z]
Apply complete! Resources: 61 added, 0 changed, 0 destroyed.
Outputs:
configure_kubectl = "aws eks --region ap-northeast-2 update-kubeconfig --name automode-cluster"

6. Configuring kubeconfig

1
$(terraform output -raw configure_kubectl)

โœ… Output

1
Added new context arn:aws:eks:ap-northeast-2:378102432899:cluster/automode-cluster to /home/devshin/.kube/config

7. Renaming the kubectl context

1
kubectl config rename-context "arn:aws:eks:ap-northeast-2:$(aws sts get-caller-identity --query 'Account' --output text):cluster/automode-cluster" "automode-lab"

โœ… Output

1
Context "arn:aws:eks:ap-northeast-2:378102432899:cluster/automode-cluster" renamed to "automode-lab".

8. Checking node and pod status

1
2
k get node
k get pod -A

โœ… Output

1
2
No resources found
No resources found
  • In Auto Mode, no worker nodes exist until a workload is scheduled (the built-in node pools provision capacity on demand), so both lists are empty

9. Checking services and endpoints

1
k get svc,ep -A

โœ… Output

1
2
3
4
5
6
7
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
default       service/kubernetes                  ClusterIP   172.20.0.1       <none>        443/TCP   88m
kube-system   service/eks-extension-metrics-api   ClusterIP   172.20.237.178   <none>        443/TCP   88m

NAMESPACE     NAME                                  ENDPOINTS                        AGE
default       endpoints/kubernetes                  10.20.19.60:443,10.20.4.33:443   88m
kube-system   endpoints/eks-extension-metrics-api   172.0.32.0:10443                 88m

10. Inspecting the EKS cluster details

1
terraform state show 'module.eks.aws_eks_cluster.this[0]'

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
# module.eks.aws_eks_cluster.this[0]:
resource "aws_eks_cluster" "this" {
    arn                           = "arn:aws:eks:ap-northeast-2:378102432899:cluster/automode-cluster"
    bootstrap_self_managed_addons = false
    certificate_authority         = [
        {
            data = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJWCt4NloxYm5VaTR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1qSXdORFU1TURCYUZ3MHpOVEF6TWpBd05UQTBNREJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURQOFRjMjNueWNCOXZJenphVlZQQjg3NlhOVGJGRjJYOXVBZWYyT2VOeDhVTnZoZXFqeE0zWDZzZ2wKbm1ZU0tvMENSK0o0bWNDUDQ2L2Z2WGdOYnNyRDFFbUdJUFkyUlFmb2k0SXdZYkFYS1E3ZWRacElNMldCcktVNwovWjY5SktDanNiQVVjZCtuVU5EK1ZMaTIrUXUybDEreTZNbjRTaUtPMklXY1hMblVDR1BUSVBlTmQ0eDl0RUxaCkJSSkZxeE9JMnpTVUo1QmtkWXZSR0dQL1VjWml5RnJNNjB2ZWx2ZzE2emRnNU1XSVdZVjQvaHErOFR6ZUxjUHoKNE9vTktHbXl2SEI1bk9NTVBuQzZMMTUyUzZyMVNEM0dlNno0MExPV1MraUpxNmo3WE5CWVd4a0VaVnd2RzNQVQpudWtYWkh2OVU0cGdscEFPeFdoOVZMaUl2aUhKQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRVE1MVTRRUEZBWStGczQzN2xEZHdjRXJuaTNqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2J0M2dQS0VsKwplTm5qOEhTc0V4aGZIbVlqNkhYdkNpcVQ5OEpHWG5USWtQUnFuUmFVeHZmeS9Sak5FWExUTVZCeDhPSFc1ZFl6CjUxU2FUaERSVGtsTWZLVVRJeFgyQXpvUHo2OGVCUUxZUlJET1NHOXl3MGpDeHh2MlRSRDRQb1g5U3hmdDBzVlIKaCtWY3V2QjlVb0FuM2lzbkcwd1JzYTljbUVyWkdLQzViWmhETi9IeEhCMXpTWk90M3J6NkxVN3VnWGF1UHp0MgpkUEVkOElhVGlVRTNRbk9hK2JoL1ZEVzNVYWpjUWdzRkVBTStNOTE1ZkVueTlyb1RuOEtCZ1Ztc0c3R2hJbXhmClg3NXJpeUhDeUNDdzNUNldWM0ZabW5hejZLbDkvTTNyT2ZsZGRESWMvaXdQTjJ2SkVzNVpYdjBEYktxaDNZY0IKL2lDdjI5UUtaaTlwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
        },
    ]
    created_at                    = "2025-03-22T04:59:44Z"
    enabled_cluster_log_types     = [
        "api",
        "audit",
        "authenticator",
    ]
    endpoint                      = "https://03DFBBF640E92C148A3EFF53888FF939.gr7.ap-northeast-2.eks.amazonaws.com"
    id                            = "automode-cluster"
    identity                      = [
        {
            oidc = [
                {
                    issuer = "https://oidc.eks.ap-northeast-2.amazonaws.com/id/03DFBBF640E92C148A3EFF53888FF939"
                },
            ]
        },
    ]
    name                          = "automode-cluster"
    platform_version              = "eks.21"
    role_arn                      = "arn:aws:iam::378102432899:role/automode-cluster-cluster-20250322045920242000000001"
    status                        = "ACTIVE"
    tags                          = {
        "Blueprint"             = "automode-cluster"
        "terraform-aws-modules" = "eks"
    }
    tags_all                      = {
        "Blueprint"             = "automode-cluster"
        "terraform-aws-modules" = "eks"
    }
    version                       = "1.31"
    access_config {
        authentication_mode                         = "API_AND_CONFIG_MAP"
        bootstrap_cluster_creator_admin_permissions = false
    }
    compute_config {
        enabled       = true
        node_pools    = [
            "general-purpose",
        ]
        node_role_arn = "arn:aws:iam::378102432899:role/automode-cluster-eks-auto-20250322045920243000000002"
    }
    encryption_config {
        resources = [
            "secrets",
        ]
        provider {
            key_arn = "arn:aws:kms:ap-northeast-2:378102432899:key/7cee55fb-ae4e-43ba-9a7b-0eba2418d528"
        }
    }
    kubernetes_network_config {
        ip_family         = "ipv4"
        service_ipv4_cidr = "172.20.0.0/16"
        elastic_load_balancing {
            enabled = true
        }
    }
    storage_config {
        block_storage {
            enabled = true
        }
    }
    timeouts {}
    upgrade_policy {
        support_type = "EXTENDED"
    }
    vpc_config {
        cluster_security_group_id = "sg-08f172750d9947967"
        endpoint_private_access   = true
        endpoint_public_access    = true
        public_access_cidrs       = [
            "0.0.0.0/0",
        ]
        security_group_ids        = [
            "sg-022902e2839163258",
        ]
        subnet_ids                = [
            "subnet-02d00c8168ab5903e",
            "subnet-08dd817c3837e7eaa",
            "subnet-0a6b3e5b70aa63933",
        ]
        vpc_id                    = "vpc-084852690e9db1da8"
    }
}
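The `certificate_authority` block above carries the cluster CA as a base64-encoded PEM bundle, which is the same value a kubeconfig stores under `certificate-authority-data`. A minimal sketch of decoding it back to PEM; the sample string below is synthetic, not the full value from the state output:

```python
import base64

# Synthetic stand-in for the long certificate_authority "data" field above.
ca_data = base64.b64encode(
    b"-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----\n"
).decode()

# Decoding yields the PEM bundle kubectl uses to verify the API server's TLS cert.
pem = base64.b64decode(ca_data).decode()
assert pem.startswith("-----BEGIN CERTIFICATE-----")
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```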

11. Check in the AWS Management Console

(1) VPC - ENI check: EKS-owned ENI

Image

(2) EKS : Cluster IAM Role, Node IAM Role, Auto Mode

Image

(3) Compute : Built-in node pools

Image

(4) No add-ons installed

Image

(5) Access : IAM access entries

Image

๐Ÿ•ต๏ธ kubectl ํ™•์ธ

1. CRD ๋ชฉ๋ก ์กฐํšŒ

1
kubectl get crd

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
7
8
9
10
11
NAME                                         CREATED AT
cninodes.eks.amazonaws.com                   2025-03-22T05:08:11Z
cninodes.vpcresources.k8s.aws                2025-03-22T05:04:41Z
ingressclassparams.eks.amazonaws.com         2025-03-22T05:08:11Z
nodeclaims.karpenter.sh                      2025-03-22T05:08:21Z
nodeclasses.eks.amazonaws.com                2025-03-22T05:08:21Z
nodediagnostics.eks.amazonaws.com            2025-03-22T05:08:21Z
nodepools.karpenter.sh                       2025-03-22T05:08:21Z
policyendpoints.networking.k8s.aws           2025-03-22T05:04:41Z
securitygrouppolicies.vpcresources.k8s.aws   2025-03-22T05:04:40Z
targetgroupbindings.eks.amazonaws.com        2025-03-22T05:08:12Z

2. Query node-related API resources

1
kubectl api-resources | grep -i node

โœ… Output

1
2
3
4
5
6
7
8
9
nodes                               no           v1                                false        Node
cninodes                            cni,cnis     eks.amazonaws.com/v1alpha1        false        CNINode
nodeclasses                                      eks.amazonaws.com/v1              false        NodeClass
nodediagnostics                                  eks.amazonaws.com/v1alpha1        false        NodeDiagnostic
nodeclaims                                       karpenter.sh/v1                   false        NodeClaim
nodepools                                        karpenter.sh/v1                   false        NodePool
runtimeclasses                                   node.k8s.io/v1                    false        RuntimeClass
csinodes                                         storage.k8s.io/v1                 false        CSINode
cninodes                            cnd          vpcresources.k8s.aws/v1alpha1     false        CNINode

3. Explain the NodeDiagnostic resource

1
kubectl explain nodediagnostics

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
GROUP:      eks.amazonaws.com
KIND:       NodeDiagnostic
VERSION:    v1alpha1
DESCRIPTION:
    The name of the NodeDiagnostic resource is meant to match the name of the
    node which should perform the diagnostic tasks
    
FIELDS:
  apiVersion	<string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
  kind	<string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
  metadata	<ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
  spec	<Object>
    <no description>
  status	<Object>
    <no description>

4. Query NodeClass

1
kubectl get nodeclasses.eks.amazonaws.com

โœ… Output

1
2
NAME      ROLE                                                   READY   AGE
default   automode-cluster-eks-auto-20250322045920243000000002   True    79m

5. Inspect NodeClass details

1
kubectl get nodeclasses.eks.amazonaws.com -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
apiVersion: v1
items:
- apiVersion: eks.amazonaws.com/v1
  kind: NodeClass
  metadata:
    annotations:
      eks.amazonaws.com/nodeclass-hash: "13741888110944717817"
      eks.amazonaws.com/nodeclass-hash-version: v1
    creationTimestamp: "2025-03-22T05:08:23Z"
    finalizers:
    - eks.amazonaws.com/termination
    generation: 1
    labels:
      app.kubernetes.io/managed-by: eks
    name: default
    resourceVersion: "24231"
    uid: 52705d1b-c6b7-440f-8f19-ab10cc5bf9d9
  spec:
    ephemeralStorage:
      iops: 3000
      size: 80Gi
      throughput: 125
    networkPolicy: DefaultAllow
    networkPolicyEventLogs: Disabled
    role: automode-cluster-eks-auto-20250322045920243000000002
    securityGroupSelectorTerms:
    - id: sg-08f172750d9947967
    snatPolicy: Random
    subnetSelectorTerms:
    - id: subnet-08dd817c3837e7eaa
    - id: subnet-02d00c8168ab5903e
    - id: subnet-0a6b3e5b70aa63933
  status:
    conditions:
    - lastTransitionTime: "2025-03-22T05:08:38Z"
      message: ""
      observedGeneration: 1
      reason: SubnetsReady
      status: "True"
      type: SubnetsReady
    - lastTransitionTime: "2025-03-22T05:08:38Z"
      message: ""
      observedGeneration: 1
      reason: SecurityGroupsReady
      status: "True"
      type: SecurityGroupsReady
    - lastTransitionTime: "2025-03-22T05:08:38Z"
      message: ""
      observedGeneration: 1
      reason: InstanceProfileReady
      status: "True"
      type: InstanceProfileReady
    - lastTransitionTime: "2025-03-22T05:08:38Z"
      message: ""
      observedGeneration: 1
      reason: Ready
      status: "True"
      type: Ready
    instanceProfile: eks-ap-northeast-2-automode-cluster-5106780580677764626
    securityGroups:
    - id: sg-08f172750d9947967
      name: eks-cluster-sg-automode-cluster-2065126657
    subnets:
    - id: subnet-08dd817c3837e7eaa
      zone: ap-northeast-2c
      zoneID: apne2-az3
    - id: subnet-02d00c8168ab5903e
      zone: ap-northeast-2b
      zoneID: apne2-az2
    - id: subnet-0a6b3e5b70aa63933
      zone: ap-northeast-2a
      zoneID: apne2-az1
kind: List
metadata:
  resourceVersion: ""
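The NodeClass status above reports readiness through standard condition objects: `Ready` appears to turn `True` only once `SubnetsReady`, `SecurityGroupsReady`, and `InstanceProfileReady` are all `True`. A small sketch (not controller code) of how such a condition list can be summarized, using the values from the output above:

```python
# Condition list copied from the NodeClass status output above.
conditions = [
    {"type": "SubnetsReady", "status": "True"},
    {"type": "SecurityGroupsReady", "status": "True"},
    {"type": "InstanceProfileReady", "status": "True"},
    {"type": "Ready", "status": "True"},
]

def condition(conds, ctype):
    """Return the status string of the condition with the given type, or None."""
    return next((c["status"] for c in conds if c["type"] == ctype), None)

print("NodeClass ready:", condition(conditions, "Ready"))  # NodeClass ready: True
```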

6. Analyze the NodePool in detail

1
kubectl get nodepools -o yaml

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
apiVersion: v1
items:
- apiVersion: karpenter.sh/v1
  kind: NodePool
  metadata:
    annotations:
      karpenter.sh/nodepool-hash: "4012513481623584108"
      karpenter.sh/nodepool-hash-version: v3
    creationTimestamp: "2025-03-22T05:08:23Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: eks
    name: general-purpose
    resourceVersion: "977"
    uid: c3a77232-2527-4df5-aa68-43a5b97cb170
  spec:
    disruption:
      budgets:
      - nodes: 10%
      consolidateAfter: 30s
      consolidationPolicy: WhenEmptyOrUnderutilized
    template:
      metadata: {}
      spec:
        expireAfter: 336h
        nodeClassRef:
          group: eks.amazonaws.com
          kind: NodeClass
          name: default
        requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values:
          - on-demand
        - key: eks.amazonaws.com/instance-category
          operator: In
          values:
          - c
          - m
          - r
        - key: eks.amazonaws.com/instance-generation
          operator: Gt
          values:
          - "4"
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
        - key: kubernetes.io/os
          operator: In
          values:
          - linux
        terminationGracePeriod: 24h0m0s
  status:
    conditions:
    - lastTransitionTime: "2025-03-22T05:08:36Z"
      message: ""
      observedGeneration: 1
      reason: ValidationSucceeded
      status: "True"
      type: ValidationSucceeded
    - lastTransitionTime: "2025-03-22T05:08:38Z"
      message: ""
      observedGeneration: 1
      reason: NodeClassReady
      status: "True"
      type: NodeClassReady
    - lastTransitionTime: "2025-03-22T05:08:38Z"
      message: ""
      observedGeneration: 1
      reason: Ready
      status: "True"
      type: Ready
    resources:
      cpu: "0"
      ephemeral-storage: "0"
      memory: "0"
      nodes: "0"
      pods: "0"
kind: List
metadata:
  resourceVersion: ""

  โ€ข The node pool's disruption budget (10% of nodes at a time) can be checked here
  โ€ข expireAfter is set to 336h, i.e. 14 days
  โ€ข The requirements include on-demand capacity, instance categories (c/m/r), instance generation, architecture, and OS constraints

7. Check webhook configurations

(1) Query MutatingWebhookConfigurations

1
kubectl get mutatingwebhookconfiguration

โœ… Output

1
2
3
4
NAME                            WEBHOOKS   AGE
eks-load-balancing-webhook      2          82m
pod-identity-webhook            1          86m
vpc-resource-mutating-webhook   1          86m

(2) Query ValidatingWebhookConfigurations

1
kubectl get validatingwebhookconfiguration

โœ… Output

1
2
NAME                              WEBHOOKS   AGE
vpc-resource-validating-webhook   2          86m

๐Ÿ‘€ Install kube-ops-view and check its status

1. Install kube-ops-view

1
2
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set env.TZ="Asia/Seoul" --namespace kube-system

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
NAME: kube-ops-view
LAST DEPLOYED: Sat Mar 22 15:46:16 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace kube-system -l "app.kubernetes.io/name=kube-ops-view,app.kubernetes.io/instance=kube-ops-view" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:8080

2. Monitor the event log

1
kubectl get events -w --sort-by '.lastTimestamp'

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
warning: --watch requested, --sort-by will be ignored for watch events received
LAST SEEN   TYPE      REASON                    OBJECT                            MESSAGE
14s         Normal    Starting                  node/i-08a7de4e203ea03b7          
38m         Normal    Finalized                 nodeclass/default                 Finalized eks.amazonaws.com/termination
15m         Normal    Finalized                 nodeclass/default                 Finalized eks.amazonaws.com/termination
31s         Normal    Launched                  nodeclaim/general-purpose-6fn25   Status condition transitioned, Type: Launched, Status: Unknown -> True, Reason: Launched
25s         Normal    DisruptionBlocked         nodeclaim/general-purpose-6fn25   Nodeclaim does not have an associated node
15s         Normal    NodeAllocatableEnforced   node/i-08a7de4e203ea03b7          Updated Node Allocatable limit across pods
15s         Normal    NodeReady                 node/i-08a7de4e203ea03b7          Node i-08a7de4e203ea03b7 status is now: NodeReady
15s         Normal    Starting                  node/i-08a7de4e203ea03b7          Starting kubelet.
15s         Warning   InvalidDiskCapacity       node/i-08a7de4e203ea03b7          invalid capacity 0 on image filesystem
15s         Normal    NodeHasSufficientMemory   node/i-08a7de4e203ea03b7          Node i-08a7de4e203ea03b7 status is now: NodeHasSufficientMemory
15s         Normal    NodeHasNoDiskPressure     node/i-08a7de4e203ea03b7          Node i-08a7de4e203ea03b7 status is now: NodeHasNoDiskPressure
15s         Normal    NodeHasSufficientPID      node/i-08a7de4e203ea03b7          Node i-08a7de4e203ea03b7 status is now: NodeHasSufficientPID
14s         Normal    Ready                     node/i-08a7de4e203ea03b7          Status condition transitioned, Type: Ready, Status: False -> True, Reason: KubeletReady, Message: kubelet is posting ready status
14s         Normal    Registered                nodeclaim/general-purpose-6fn25   Status condition transitioned, Type: Registered, Status: Unknown -> True, Reason: Registered
13s         Normal    Initialized               nodeclaim/general-purpose-6fn25   Status condition transitioned, Type: Initialized, Status: Unknown -> True, Reason: Initialized
13s         Normal    Ready                     nodeclaim/general-purpose-6fn25   Status condition transitioned, Type: Ready, Status: Unknown -> True, Reason: Ready
13s         Normal    Synced                    node/i-08a7de4e203ea03b7          Node synced successfully
11s         Normal    RegisteredNode            node/i-08a7de4e203ea03b7          Node i-08a7de4e203ea03b7 event: Registered Node i-08a7de4e203ea03b7 in Controller
5s          Normal    DisruptionBlocked         node/i-08a7de4e203ea03b7          Node is nominated for a pending pod
0s          Normal    Unconsolidatable          node/i-08a7de4e203ea03b7          Can't replace with a cheaper node
0s          Normal    Unconsolidatable          nodeclaim/general-purpose-6fn25   Can't replace with a cheaper node
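As the warning at the top of the output notes, `--sort-by` is ignored in watch mode, so rows arrive unsorted. A small post-processing sketch: convert the LAST SEEN column ("14s", "38m", ...) into seconds so captured lines can be re-sorted offline. The helper name is illustrative:

```python
# Seconds per kubectl age-suffix unit.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def age_to_seconds(age: str) -> int:
    """Parse a simple kubectl age string like '14s' or '38m' into seconds."""
    return int(age[:-1]) * UNITS[age[-1]]

ages = ["14s", "38m", "15m", "31s"]
print(sorted(ages, key=age_to_seconds))  # ['14s', '31s', '15m', '38m']
```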

3. Query node claims and node information

(1) Query node claims

1
kubectl get nodeclaims

โœ… Output

1
2
NAME                    TYPE        CAPACITY    ZONE              NODE                  READY   AGE
general-purpose-6fn25   c5a.large   on-demand   ap-northeast-2b   i-08a7de4e203ea03b7   True    2m4s

(2) Query node details

1
kubectl get node -owide

โœ… Output

1
2
NAME                  STATUS   ROLES    AGE     VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                           KERNEL-VERSION   CONTAINER-RUNTIME
i-08a7de4e203ea03b7   Ready    <none>   2m23s   v1.31.4-eks-0f56d01   10.20.21.118   <none>        Bottlerocket (EKS Auto) 2025.3.14 (aws-k8s-1.31)   6.1.129          containerd://1.7.25+bottlerocket

4. Check CNI node information

1
kubectl get cninodes.eks.amazonaws.com

โœ… Output

1
2
NAME                  AGE
i-08a7de4e203ea03b7   3m16s

5. Port forwarding and web access

(1) Port forwarding

1
kubectl port-forward deployment/kube-ops-view -n kube-system 8080:8080 &

โœ… Output

1
2
3
4
[1] 118262

Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

(2) Print the access URL

1
echo -e "KUBE-OPS-VIEW URL = http://localhost:8080/#scale=1.5"

โœ… Output

1
KUBE-OPS-VIEW URL = http://localhost:8080/#scale=1.5

Image

6. Check the EC2 instance; direct connection fails

  โ€ข In the AWS Management Console, one EC2 instance created by EKS is visible

Image

  โ€ข However, attempts to connect to that instance fail (expected, as EKS Auto Mode restricts direct access to its managed nodes)

Image Image


๐Ÿ’พ [Storage] Deploying a Stateful Workload with a PV (EBS)

1. Query storage classes

1
2
3
kubectl get sc
NAME   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2    kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  171m

2. Create a new storage class

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com  # Uses EKS Auto Mode
volumeBindingMode: WaitForFirstConsumer  # Delays volume creation until a pod needs it
parameters:
  type: gp3
  encrypted: "true" 
EOF

# Result
storageclass.storage.k8s.io/auto-ebs-sc created

3. Create a PersistentVolumeClaim

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: auto-ebs-sc
  resources:
    requests:
      storage: 8Gi
EOF

# Result
persistentvolumeclaim/auto-ebs-claim created

4. Deploy the stateful workload

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate-stateful
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate-stateful
  template:
    metadata:
      labels:
        app: inflate-stateful
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
      containers:
        - name: bash
          image: public.ecr.aws/docker/library/bash:4.4
          command: ["/usr/local/bin/bash"]
          args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 60; done"]
          resources:
            requests:
              cpu: "1"
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: auto-ebs-claim
EOF
  • ํŒŒ๋“œ๋Š” bash ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, /data ๋””๋ ‰ํ† ๋ฆฌ์— PVC๋กœ ์ƒ์„ฑ๋œ EBS ๋ณผ๋ฅจ์„ ๋งˆ์šดํŠธํ•จ
  • ์ปจํ…Œ์ด๋„ˆ๋Š” ์ฃผ๊ธฐ์ ์œผ๋กœ ํ˜„์žฌ ์‹œ๊ฐ„์„ /data/out.txt ํŒŒ์ผ์— ๊ธฐ๋กํ•˜๋„๋ก ์„ค์ •๋˜์–ด ์žˆ์Œ

Image
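The workload's behavior, condensed: append a UTC timestamp to a file on the mounted volume once per minute, so the file survives pod restarts for as long as the EBS-backed PV exists. A local sketch using a temp file in place of /data/out.txt:

```python
import datetime
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.txt")
for _ in range(3):  # the real container loops forever with `sleep 60` between writes
    with open(path, "a") as f:
        # Same shape as the `date -u` output seen later: "Sat Mar 22 07:59:03 UTC 2025"
        now = datetime.datetime.now(datetime.timezone.utc)
        f.write(now.strftime("%a %b %d %H:%M:%S UTC %Y") + "\n")

with open(path) as f:
    lines = f.read().splitlines()
print(len(lines))  # 3
```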

5. Check pod status

1
kubectl get pods -l app=inflate-stateful

โœ… Output

1
2
NAME                               READY   STATUS    RESTARTS   AGE
inflate-stateful-59db4c8c9-lc5tt   1/1     Running   0          2m32s

6. Check PVC status

1
kubectl get pvc auto-ebs-claim

โœ… Output

1
2
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
auto-ebs-claim   Bound    pvc-0765eb36-ebc0-4d22-9b56-614f10f0af71   8Gi        RWO            auto-ebs-sc    <unset>                 4m10s

7. ๋ฐ์ดํ„ฐ ๊ธฐ๋ก ๊ฒ€์ฆ

1
2
3
kubectl exec "$(kubectl get pods -l app=inflate-stateful \
  -o=jsonpath='{.items[0].metadata.name}')" -- \
  tail -f /data/out.txt

โœ…ย ์ถœ๋ ฅ

1
2
3
4
5
6
Sat Mar 22 07:59:03 UTC 2025
Sat Mar 22 08:00:03 UTC 2025
Sat Mar 22 08:01:03 UTC 2025
Sat Mar 22 08:02:03 UTC 2025
Sat Mar 22 08:03:03 UTC 2025
...

8. Clean up resources

1
2
3
4
5
6
kubectl delete deployment/inflate-stateful pvc/auto-ebs-claim storageclass/auto-ebs-sc

# Result
deployment.apps "inflate-stateful" deleted
persistentvolumeclaim "auto-ebs-claim" deleted
storageclass.storage.k8s.io "auto-ebs-sc" deleted

๐Ÿ–ฅ๏ธ [์šด์˜] ๋…ธ๋“œ ์ฝ˜์†” ์ถœ๋ ฅ ์ •๋ณด ํ™•์ธ : get-console-output

1. ๋…ธ๋“œ ์ƒํƒœ ์กฐํšŒ

1
kubectl get node

โœ…ย ์ถœ๋ ฅ

1
2
NAME                  STATUS   ROLES    AGE   VERSION
i-08a7de4e203ea03b7   Ready    <none>   80m   v1.31.4-eks-0f56d01

2. Set the node ID

1
2
NODEID=<your node ID>
NODEID=i-08a7de4e203ea03b7
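Because EKS Auto Mode names nodes after their EC2 instance IDs, NODEID can also be extracted programmatically. A sketch that parses the sample `kubectl get node` text from the previous step; in practice something like `kubectl get node -o jsonpath='{.items[0].metadata.name}'` is simpler:

```python
# Sample output copied from step 1 above.
sample = """NAME                  STATUS   ROLES    AGE   VERSION
i-08a7de4e203ea03b7   Ready    <none>   80m   v1.31.4-eks-0f56d01"""

# First column of the first data row is the node name == EC2 instance ID.
node_id = sample.splitlines()[1].split()[0]
assert node_id.startswith("i-")
print(node_id)  # i-08a7de4e203ea03b7
```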

3. Query the EC2 console output

1
aws ec2 get-console-output --instance-id $NODEID --latest --output text

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
...
Mar 22 07:57:20 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:57:20.618331563Z" level=info msg="Finish port forwarding for \"e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67\" port 8080"
Mar 22 07:58:58 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:58:58.788488028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:inflate-stateful-59db4c8c9-lc5tt,Uid:a6cb66ca-3ed3-49f6-a3f1-7acd1cb3c6fe,Namespace:default,Attempt:0,}"
Mar 22 07:58:59 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:58:59.057588987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:inflate-stateful-59db4c8c9-lc5tt,Uid:a6cb66ca-3ed3-49f6-a3f1-7acd1cb3c6fe,Namespace:default,Attempt:0,} returns sandbox id \"cc85c99a49bb1ae096ad0ad55170b94ffe2ccb013b352a82744c22a0aa1bbc15\""
Mar 22 07:58:59 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:58:59.059355824Z" level=info msg="PullImage \"public.ecr.aws/docker/library/bash:4.4\""
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.167470796Z" level=info msg="ImageCreate event name:\"public.ecr.aws/docker/library/bash:4.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.169631379Z" level=info msg="stop pulling image public.ecr.aws/docker/library/bash:4.4: active requests=0, bytes read=4740157"
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.171968926Z" level=info msg="ImageCreate event name:\"sha256:4743cc53064fba07a4c5ad2eb5a75b972a739d4d11b48d5bdab6ffd36809400e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.177838334Z" level=info msg="ImageCreate event name:\"public.ecr.aws/docker/library/bash@sha256:f6781c68f88bfac5626f6d48b0ffb87939dcc96a6d7ee631514be9a5d1265b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.180060469Z" level=info msg="Pulled image \"public.ecr.aws/docker/library/bash:4.4\" with image id \"sha256:4743cc53064fba07a4c5ad2eb5a75b972a739d4d11b48d5bdab6ffd36809400e\", repo tag \"public.ecr.aws/docker/library/bash:4.4\", repo digest \"public.ecr.aws/docker/library/bash@sha256:f6781c68f88bfac5626f6d48b0ffb87939dcc96a6d7ee631514be9a5d1265b7f\", size \"4736987\" in 4.120667495s"
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.180101940Z" level=info msg="PullImage \"public.ecr.aws/docker/library/bash:4.4\" returns image reference \"sha256:4743cc53064fba07a4c5ad2eb5a75b972a739d4d11b48d5bdab6ffd36809400e\""
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.183076170Z" level=info msg="CreateContainer within sandbox \"cc85c99a49bb1ae096ad0ad55170b94ffe2ccb013b352a82744c22a0aa1bbc15\" for container &ContainerMetadata{Name:bash,Attempt:0,}"
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.205843317Z" level=info msg="CreateContainer within sandbox \"cc85c99a49bb1ae096ad0ad55170b94ffe2ccb013b352a82744c22a0aa1bbc15\" for &ContainerMetadata{Name:bash,Attempt:0,} returns container id \"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c\""
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.206408878Z" level=info msg="StartContainer for \"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c\""
Mar 22 07:59:03 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T07:59:03.299525109Z" level=info msg="StartContainer for \"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c\" returns successfully"
Mar 22 08:02:23 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T08:02:23.992222630Z" level=info msg="Executing port forwarding in network namespace \"/var/run/netns/cni-88e4b37e-199b-643a-f44c-027047f65acf\""
Mar 22 08:02:24 ip-10-20-21-118.ap-northeast-2.compute.internal containerd[1113]: time="2025-03-22T08:02:24.983954516Z" level=info msg="Finish port forwarding for \"e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67\" port 8080"

๐Ÿž [์šด์˜] ๋…ธ๋“œ ํŠน์ • ํ”„๋กœ์„ธ์Šค ๋กœ๊ทธ ์‹ค์‹œ๊ฐ„ ํ™•์ธ : debug container

1. ๋””๋ฒ„๊ทธ ์ปจํ…Œ์ด๋„ˆ ์‹คํ–‰

1
2
3
4
5
6
7
kubectl debug node/$NODEID -it --profile=sysadmin --image=public.ecr.aws/amazonlinux/amazonlinux:2023

# Result
Creating debugging pod node-debugger-i-08a7de4e203ea03b7-hvcbt with container debugger on node i-08a7de4e203ea03b7.
If you don't see a command prompt, try pressing enter.
bash-5.2# whoami
root

2. Install required packages in the debug container

1
bash-5.2# yum install -y util-linux-core htop

โœ… Output

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
Amazon Linux 2023 repository                                                                                       80 MB/s |  33 MB     00:00    
Last metadata expiration check: 0:00:07 ago on Sat Mar 22 08:14:37 2025.
Dependencies resolved.
==================================================================================================================================================
 Package                             Architecture               Version                                     Repository                       Size
==================================================================================================================================================
Installing:
 htop                                x86_64                     3.2.1-87.amzn2023.0.3                       amazonlinux                     183 k
 util-linux-core                     x86_64                     2.37.4-1.amzn2023.0.4                       amazonlinux                     432 k
Installing dependencies:
 systemd-libs                        x86_64                     252.23-2.amzn2023                           amazonlinux                     620 k
Transaction Summary
==================================================================================================================================================
Install  3 Packages
Total download size: 1.2 M
Installed size: 3.4 M
Downloading Packages:
(1/3): systemd-libs-252.23-2.amzn2023.x86_64.rpm                                                                   16 MB/s | 620 kB     00:00    
(2/3): util-linux-core-2.37.4-1.amzn2023.0.4.x86_64.rpm                                                            10 MB/s | 432 kB     00:00    
(3/3): htop-3.2.1-87.amzn2023.0.3.x86_64.rpm                                                                      3.9 MB/s | 183 kB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                             2.2 MB/s | 1.2 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                          1/1 
  Installing       : systemd-libs-252.23-2.amzn2023.x86_64                                                                                    1/3 
  Installing       : util-linux-core-2.37.4-1.amzn2023.0.4.x86_64                                                                             2/3 
  Running scriptlet: util-linux-core-2.37.4-1.amzn2023.0.4.x86_64                                                                             2/3 
  Installing       : htop-3.2.1-87.amzn2023.0.3.x86_64                                                                                        3/3 
  Running scriptlet: htop-3.2.1-87.amzn2023.0.3.x86_64                                                                                        3/3 
  Verifying        : htop-3.2.1-87.amzn2023.0.3.x86_64                                                                                        1/3 
  Verifying        : systemd-libs-252.23-2.amzn2023.x86_64                                                                                    2/3 
  Verifying        : util-linux-core-2.37.4-1.amzn2023.0.4.x86_64                                                                             3/3 
Installed:
  htop-3.2.1-87.amzn2023.0.3.x86_64          systemd-libs-252.23-2.amzn2023.x86_64          util-linux-core-2.37.4-1.amzn2023.0.4.x86_64         
Complete!

3. Monitor the kubelet daemon logs in real time

1
bash-5.2# nsenter -t 1 -m journalctl -f -u kubelet

โœ… Output

1
2
3
4
5
6
7
8
9
10
Mar 22 08:04:47 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:04:47.848264    1189 reconciler_common.go:288] "Volume detached for volume \"pvc-0765eb36-ebc0-4d22-9b56-614f10f0af71\" (UniqueName: \"kubernetes.io/csi/ebs.csi.eks.amazonaws.com^vol-087248902f2cb437c\") on node \"i-08a7de4e203ea03b7\" DevicePath \"csi-7fae8961b0a2f7cea0c58fc9466301f7f312e08b5769926a3ca6e104a8f3f200\""
Mar 22 08:04:48 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:04:48.042551    1189 scope.go:117] "RemoveContainer" containerID="8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c"
Mar 22 08:04:48 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:04:48.048148    1189 scope.go:117] "RemoveContainer" containerID="8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c"
Mar 22 08:04:48 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: E0322 08:04:48.048506    1189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c\": not found" containerID="8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c"
Mar 22 08:04:48 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:04:48.048531    1189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c"} err="failed to get container status \"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fb51b216e7cfbf36897aea2f91ce656bd6f5427de8cad8ca937984922012e7c\": not found"
Mar 22 08:04:49 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:04:49.029139    1189 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cb66ca-3ed3-49f6-a3f1-7acd1cb3c6fe" path="/var/lib/kubelet/pods/a6cb66ca-3ed3-49f6-a3f1-7acd1cb3c6fe/volumes"
Mar 22 08:13:11 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: E0322 08:13:11.825932    1189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6cb66ca-3ed3-49f6-a3f1-7acd1cb3c6fe" containerName="bash"
Mar 22 08:13:11 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:13:11.825976    1189 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cb66ca-3ed3-49f6-a3f1-7acd1cb3c6fe" containerName="bash"
Mar 22 08:13:11 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:13:11.965974    1189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xb2\" (UniqueName: \"kubernetes.io/projected/39c06495-7379-46aa-9063-012ef0744c2b-kube-api-access-h4xb2\") pod \"node-debugger-i-08a7de4e203ea03b7-hvcbt\" (UID: \"39c06495-7379-46aa-9063-012ef0744c2b\") " pod="default/node-debugger-i-08a7de4e203ea03b7-hvcbt"
Mar 22 08:13:11 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:13:11.966021    1189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-root\" (UniqueName: \"kubernetes.io/host-path/39c06495-7379-46aa-9063-012ef0744c2b-host-root\") pod \"node-debugger-i-08a7de4e203ea03b7-hvcbt\" (UID: \"39c06495-7379-46aa-9063-012ef0744c2b\") " pod="default/node-debugger-i-08a7de4e203ea03b7-hvcbt"

4. Checking host resource information

(1) Monitoring system resources

bash-5.2# htop

โœ…ย Output

(screenshot: htop output)

(2) ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ •๋ณด ์กฐํšŒ

1
bash-5.2# nsenter -t 1 -m ip addr

โœ…ย ์ถœ๋ ฅ

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:22:58:e9:2f:29 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname ens5
    inet 10.20.21.118/20 metric 1024 brd 10.20.31.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::422:58ff:fee9:2f29/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 # created when the pod identity agent is installed
    link/ether c2:bf:fe:b4:6c:4e brd ff:ff:ff:ff:ff:ff
    inet 169.254.170.23/32 scope global pod-id-link0
       valid_lft forever preferred_lft forever
    inet6 fd00:ec2::23/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::c0bf:feff:feb4:6c4e/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: coredns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 82:0a:7b:c8:3a:cf brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.10/32 scope global coredns
       valid_lft forever preferred_lft forever
    inet6 fe80::800a:7bff:fec8:3acf/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: eni74051a80866@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether a6:40:e1:66:72:11 brd ff:ff:ff:ff:ff:ff link-netns cni-88e4b37e-199b-643a-f44c-027047f65acf
    inet6 fe80::a440:e1ff:fe66:7211/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
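
The `nsenter -t 1 -m` pattern used throughout this section works by joining the namespace handles that PID 1 (the host's init) exposes under `/proc`. A minimal sketch, assuming a Linux host; the privileged nsenter invocations are the same ones used above and are shown commented:

```shell
# Every process publishes handles to its namespaces under /proc/<pid>/ns.
# 'nsenter -t 1 -m <cmd>' joins /proc/1/ns/mnt before exec'ing <cmd>, so the
# command sees the host's mounts and binaries instead of the container's.
ls -l /proc/self/ns            # mnt, net, pid, uts, ipc, cgroup, ...
# Privileged equivalents used in this section (commented; they need access to PID 1):
# nsenter -t 1 -m ip addr
# nsenter -t 1 -m ps -ef
```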

(3) Listing processes

bash-5.2# nsenter -t 1 -m ps -ef

โœ…ย Output

UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 07:03 ?        00:00:09 /sbin/init systemd.log_target=journal-or-kmsg systemd.log_color=0 systemd.show_status=true
root           2       0  0 07:03 ?        00:00:00 [kthreadd]
root           3       2  0 07:03 ?        00:00:00 [rcu_gp]
root           4       2  0 07:03 ?        00:00:00 [rcu_par_gp]
root           5       2  0 07:03 ?        00:00:00 [slub_flushwq]
root           6       2  0 07:03 ?        00:00:00 [netns]
root           8       2  0 07:03 ?        00:00:00 [kworker/0:0H-kblockd]
root          10       2  0 07:03 ?        00:00:00 [mm_percpu_wq]
root          11       2  0 07:03 ?        00:00:00 [rcu_tasks_kthread]
root          12       2  0 07:03 ?        00:00:00 [rcu_tasks_rude_kthread]
root          13       2  0 07:03 ?        00:00:00 [rcu_tasks_trace_kthread]
root          14       2  0 07:03 ?        00:00:00 [ksoftirqd/0]
root          15       2  0 07:03 ?        00:00:00 [rcu_preempt]
root          16       2  0 07:03 ?        00:00:00 [migration/0]
root          18       2  0 07:03 ?        00:00:00 [cpuhp/0]
root          19       2  0 07:03 ?        00:00:00 [cpuhp/1]
root          20       2  0 07:03 ?        00:00:00 [migration/1]
root          21       2  0 07:03 ?        00:00:01 [ksoftirqd/1]
root          23       2  0 07:03 ?        00:00:00 [kworker/1:0H-events_highpri]
root          26       2  0 07:03 ?        00:00:00 [kdevtmpfs]
root          27       2  0 07:03 ?        00:00:00 [inet_frag_wq]
root          28       2  0 07:03 ?        00:00:00 [kauditd]
root          29       2  0 07:03 ?        00:00:00 [khungtaskd]
root          30       2  0 07:03 ?        00:00:00 [oom_reaper]
root          32       2  0 07:03 ?        00:00:00 [writeback]
root          33       2  0 07:03 ?        00:00:00 [kcompactd0]
root          34       2  0 07:03 ?        00:00:00 [khugepaged]
root          35       2  0 07:03 ?        00:00:00 [cryptd]
root          36       2  0 07:03 ?        00:00:00 [kintegrityd]
root          37       2  0 07:03 ?        00:00:00 [kblockd]
root          38       2  0 07:03 ?        00:00:00 [blkcg_punt_bio]
root          40       2  0 07:03 ?        00:00:00 [tpm_dev_wq]
root          41       2  0 07:03 ?        00:00:00 [ata_sff]
root          42       2  0 07:03 ?        00:00:00 [md]
root          43       2  0 07:03 ?        00:00:00 [edac-poller]
root          44       2  0 07:03 ?        00:00:00 [watchdogd]
root          45       2  0 07:03 ?        00:00:00 [kworker/1:1H-kblockd]
root          68       2  0 07:03 ?        00:00:00 [kswapd0]
root          71       2  0 07:03 ?        00:00:00 [xfsalloc]
root          72       2  0 07:03 ?        00:00:00 [xfs_mru_cache]
root          73       2  0 07:03 ?        00:00:00 [kworker/u5:0-kverityd]
root          75       2  0 07:03 ?        00:00:00 [kthrotld]
root         138       2  0 07:03 ?        00:00:00 [nvme-wq]
root         140       2  0 07:03 ?        00:00:00 [nvme-reset-wq]
root         143       2  0 07:03 ?        00:00:00 [nvme-delete-wq]
root         149       2  0 07:03 ?        00:00:00 [dm_bufio_cache]
root         186       2  0 07:03 ?        00:00:00 [mld]
root         188       2  0 07:03 ?        00:00:00 [ipv6_addrconf]
root         199       2  0 07:03 ?        00:00:00 [kstrp]
root         219       2  0 07:03 ?        00:00:00 [zswap-shrink]
root         340       2  0 07:03 ?        00:00:00 [kdmflush/252:0]
root         341       2  0 07:03 ?        00:00:00 [kverityd]
root         346       2  0 07:03 ?        00:00:00 [ext4-rsv-conver]
root         348       2  0 07:03 ?        00:00:00 [kworker/0:1H-kblockd]
root         382       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-journald
root         407       2  0 07:03 ?        00:00:00 [ext4-rsv-conver]
root         418       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-udevd
root         867       2  0 07:03 ?        00:00:00 [ena]
root         913       2  0 07:03 ?        00:00:00 [xfs-buf/nvme1n1]
root         914       2  0 07:03 ?        00:00:00 [xfs-conv/nvme1n]
root         915       2  0 07:03 ?        00:00:00 [xfs-reclaim/nvm]
root         916       2  0 07:03 ?        00:00:00 [xfs-blockgc/nvm]
root         917       2  0 07:03 ?        00:00:00 [xfs-inodegc/nvm]
root         918       2  0 07:03 ?        00:00:00 [xfs-log/nvme1n1]
root         919       2  0 07:03 ?        00:00:00 [xfs-cil/nvme1n1]
root         920       2  0 07:03 ?        00:00:00 [xfsaild/nvme1n1p1]
root         931       2  0 07:03 ?        00:00:00 [jbd2/nvme0n1p7-8]
root         932       2  0 07:03 ?        00:00:00 [ext4-rsv-conver]
dbus         995       1  0 07:03 ?        00:00:00 /usr/bin/dbus-broker-launch --scope system
dbus         996     995  0 07:03 ?        00:00:00 dbus-broker --log 4 --controller 9 --machine-id ec28607a9117855261a9c8c1cef1cbf2 --max-bytes 5
root        1001       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-logind
systemd+    1004       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-networkd
root        1017       1  0 07:03 ?        00:00:00 /usr/bin/apiserver --datastore-path /var/lib/bottlerocket/datastore/current --socket-gid 274
systemd+    1094       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-resolved
chrony      1102       1  0 07:03 ?        00:00:00 /usr/sbin/chronyd -d -F -1
chrony      1105    1102  0 07:03 ?        00:00:00 /usr/sbin/chronyd -d -F -1
root        1112       1  0 07:03 ?        00:00:01 /usr/bin/aws-network-policy-agent --kubeconfig /etc/kubernetes/aws-network-policy-agent/kubeco
root        1113       1  0 07:03 ?        00:00:14 /usr/bin/containerd
root        1114       1  0 07:03 ?        00:00:17 /usr/bin/eks-node-monitoring-agent --kubeconfig /etc/kubernetes/eks-node-monitoring-agent/kube
root        1138       1  0 07:03 ?        00:00:02 /usr/bin/eks-healthchecker
root        1140       1  0 07:03 ?        00:00:00 /usr/bin/kube-proxy --hostname-override i-08a7de4e203ea03b7 --config=/usr/share/kube-proxy/kub
root        1181       1  0 07:03 ?        00:00:00 /usr/bin/eks-pod-identity-agent server --metrics-address 127.0.0.1 --cluster-name automode-clu
root        1189       1  0 07:03 ?        00:00:27 /usr/bin/kubelet --cloud-provider external --kubeconfig /etc/kubernetes/kubelet/kubeconfig --c
root        1254       1  0 07:04 ?        00:00:00 /usr/bin/csi-node-driver-registrar --csi-address=/var/lib/kubelet/plugins/ebs.csi.eks.amazonaw
root        1269       1  0 07:04 ?        00:00:00 /usr/bin/eks-ebs-csi-driver node --kubeconfig /etc/kubernetes/eks-ebs-csi-driver/kubeconfig --
root        1622       1  0 07:04 ?        00:00:02 /usr/bin/ipamd --kubeconfig /etc/kubernetes/ipamd/kubeconfig --metrics-bind-addr 127.0.0.1:817
coredns     1655       1  0 07:04 ?        00:00:07 /usr/bin/coredns -conf=/etc/coredns/Corefile
root        1801       1  0 07:04 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 
root        1828    1801  0 07:04 ?        00:00:00 /pause
1000        1878    1801  0 07:04 ?        00:00:11 python3 -m kube_ops_view
root        4600       2  0 07:58 ?        00:00:00 [kworker/u5:1-kverityd]
root        5084       2  0 08:04 ?        00:00:00 [kworker/u4:1-events_unbound]
root        5277       2  0 08:04 ?        00:00:00 [kworker/u4:3-events_unbound]
root        5577       2  0 08:10 ?        00:00:00 [kworker/u4:0-xfs-blockgc/nvme1n1p1]
root        5701       1  0 08:13 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 
root        5731    5701  0 08:13 ?        00:00:00 /pause
root        5785    5701  0 08:13 ?        00:00:00 /bin/bash
root        5825       2  0 08:13 ?        00:00:00 [kworker/0:2-events]
root        5832       2  0 08:13 ?        00:00:00 [kworker/1:5-events_power_efficient]
root        5834       2  0 08:13 ?        00:00:00 [kworker/1:7-xfs-conv/nvme1n1p1]
root        5845       2  0 08:13 ?        00:00:00 [kworker/0:15-events]
root        6163       2  0 08:20 ?        00:00:00 [kworker/u5:2-kverityd]
root        6191       2  0 08:21 ?        00:00:00 [kworker/0:0-events]
root        6237    5785  0 08:22 ?        00:00:00 ps -ef
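
Most of the entries above are kernel threads, which `ps` prints with bracketed names; the node daemons (kubelet, containerd, kube-proxy, ...) are ordinary userland processes. A quick awk split over a small sample copied from the listing:

```shell
# Kernel threads are children of PID 2 and show bracketed command names;
# counting them separates kernel housekeeping from node daemons.
sample='root 2 0 0 07:03 ? 00:00:00 [kthreadd]
root 14 2 0 07:03 ? 00:00:00 [ksoftirqd/0]
root 1113 1 0 07:03 ? 00:00:14 /usr/bin/containerd
root 1189 1 0 07:03 ? 00:00:27 /usr/bin/kubelet'
out=$(printf '%s\n' "$sample" | awk '$8 ~ /^\[/ {k++; next} {u++} END {printf "%d kernel, %d user", k, u}')
echo "$out"    # 2 kernel, 2 user
```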

(4) Inspecting the /proc directory

bash-5.2# nsenter -t 1 -m ls -l /proc

โœ…ย Output

total 0
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 10
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1001
dr-xr-xr-x.  9 systemd-network systemd-network               0 Mar 22 07:03 1004
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1017
dr-xr-xr-x.  9 systemd-resolve systemd-resolve               0 Mar 22 07:03 1094
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 11
dr-xr-xr-x.  9 chrony          chrony                        0 Mar 22 07:03 1102
dr-xr-xr-x.  9 chrony          chrony                        0 Mar 22 08:22 1105
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1112
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1113
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1114
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1138
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1140
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1181
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1189
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 12
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1254
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1269
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 13
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 138
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 14
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 140
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 143
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 149
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 15
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 16
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:04 1622
dr-xr-xr-x.  9 coredns         coredns                       0 Mar 22 07:04 1655
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 18
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:04 1801
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:04 1828
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 186
dr-xr-xr-x.  9            1000 root                          0 Mar 22 07:04 1878
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 188
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 19
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 199
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 2
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 20
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 21
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 219
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 23
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 26
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 27
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 28
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 29
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 3
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 30
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 32
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 33
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 34
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 340
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 341
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 346
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 348
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 35
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 36
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 37
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 38
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 382
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 4
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 40
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 407
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 41
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 418
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 42
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 43
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 44
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 45
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:59 4600
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 5
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:04 5084
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:04 5277
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5701
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5731
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5785
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5825
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5832
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5834
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5845
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 6
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:20 6163
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:21 6191
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:23 6293
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:24 6353
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 68
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 71
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 72
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 73
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 75
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 8
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 867
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 913
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 914
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 915
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 916
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 917
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 918
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 919
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 920
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 931
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 932
dr-xr-xr-x.  9 dbus            dbus                          0 Mar 22 07:03 995
dr-xr-xr-x.  9 dbus            dbus                          0 Mar 22 08:22 996
dr-xr-xr-x.  3 root            root                          0 Mar 22 08:24 acpi
-r--r--r--.  1 root            root                          0 Mar 22 07:03 bootconfig
-r--r--r--.  1 root            root                          0 Mar 22 08:24 buddyinfo
dr-xr-xr-x.  4 root            root                          0 Mar 22 08:24 bus
-r--r--r--.  1 root            root                          0 Mar 22 07:03 cgroups
-r--r--r--.  1 root            root                          0 Mar 22 07:03 cmdline
-r--r--r--.  1 root            root                      35525 Mar 22 08:24 config.gz
-r--r--r--.  1 root            root                          0 Mar 22 08:24 consoles
-r--r--r--.  1 root            root                          0 Mar 22 07:03 cpuinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 crypto
-r--r--r--.  1 root            root                          0 Mar 22 07:03 devices
-r--r--r--.  1 root            root                          0 Mar 22 07:03 diskstats
-r--r--r--.  1 root            root                          0 Mar 22 08:24 dma
dr-xr-xr-x.  3 root            root                          0 Mar 22 08:24 driver
dr-xr-xr-x.  3 root            root                          0 Mar 22 08:24 dynamic_debug
-r--r--r--.  1 root            root                          0 Mar 22 08:24 execdomains
-r--r--r--.  1 root            root                          0 Mar 22 08:24 fb
-r--r--r--.  1 root            root                          0 Mar 22 08:24 filesystems
dr-xr-xr-x.  6 root            root                          0 Mar 22 08:24 fs
-r--r--r--.  1 root            root                          0 Mar 22 08:24 interrupts
-r--r--r--.  1 root            root                          0 Mar 22 08:24 iomem
-r--r--r--.  1 root            root                          0 Mar 22 08:24 ioports
dr-xr-xr-x. 28 root            root                          0 Mar 22 08:24 irq
-r--r--r--.  1 root            root                          0 Mar 22 08:24 kallsyms
-r--------.  1 root            root            140737471594496 Mar 22 07:03 kcore
-r--r--r--.  1 root            root                          0 Mar 22 08:24 key-users
-r--r--r--.  1 root            root                          0 Mar 22 08:24 keys
-r--------.  1 root            root                          0 Mar 22 08:24 kmsg
-r--------.  1 root            root                          0 Mar 22 08:24 kpagecgroup
-r--------.  1 root            root                          0 Mar 22 08:24 kpagecount
-r--------.  1 root            root                          0 Mar 22 08:24 kpageflags
-rw-r--r--.  1 root            root                          0 Mar 22 08:24 latency_stats
-r--r--r--.  1 root            root                          0 Mar 22 07:03 loadavg
-r--r--r--.  1 root            root                          0 Mar 22 08:24 locks
-r--r--r--.  1 root            root                          0 Mar 22 08:24 mdstat
-r--r--r--.  1 root            root                          0 Mar 22 07:03 meminfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 misc
-r--r--r--.  1 root            root                          0 Mar 22 08:24 modules
lrwxrwxrwx.  1 root            root                         11 Mar 22 07:03 mounts -> self/mounts
-rw-r--r--.  1 root            root                          0 Mar 22 08:24 mtrr
lrwxrwxrwx.  1 root            root                          8 Mar 22 07:03 net -> self/net
-r--------.  1 root            root                          0 Mar 22 08:24 pagetypeinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 partitions
dr-xr-xr-x.  5 root            root                          0 Mar 22 08:24 pressure
-r--r--r--.  1 root            root                          0 Mar 22 08:24 schedstat
dr-xr-xr-x.  4 root            root                          0 Mar 22 08:24 scsi
lrwxrwxrwx.  1 root            root                          0 Mar 22 07:03 self -> 6353
-r--------.  1 root            root                          0 Mar 22 08:24 slabinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 softirqs
-r--r--r--.  1 root            root                          0 Mar 22 07:03 stat
-r--r--r--.  1 root            root                          0 Mar 22 07:03 swaps
dr-xr-xr-x.  1 root            root                          0 Mar 22 07:03 sys
--w-------.  1 root            root                          0 Mar 22 08:24 sysrq-trigger
dr-xr-xr-x.  5 root            root                          0 Mar 22 08:24 sysvipc
lrwxrwxrwx.  1 root            root                          0 Mar 22 07:03 thread-self -> 6353/task/6353
-r--------.  1 root            root                          0 Mar 22 08:24 timer_list
dr-xr-xr-x.  6 root            root                          0 Mar 22 08:22 tty
-r--r--r--.  1 root            root                          0 Mar 22 08:22 uptime
-r--r--r--.  1 root            root                          0 Mar 22 08:24 version
-r--------.  1 root            root                          0 Mar 22 08:24 vmallocinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 vmstat
-r--r--r--.  1 root            root                          0 Mar 22 08:24 zoneinfo
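
The `self -> 6353` entry in the listing is worth a note: `/proc/self` is a magic symlink that resolves to the PID of whichever process reads it, which is why it matched the `ls` process's own PID here. A small demonstration (Linux only):

```shell
# /proc/self resolves per-reader: here it names the readlink process itself.
pid=$(readlink /proc/self)
echo "readlink ran as PID $pid"
# The shell's own entry exists too, under its real PID:
[ -d "/proc/$$" ] && echo "shell PID dir present"
```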

(5) ํŒŒ์ผ ์‹œ์Šคํ…œ ์‚ฌ์šฉ๋Ÿ‰ ํ™•์ธ

1
bash-5.2# nsenter -t 1 -m df -hT

โœ…ย ์ถœ๋ ฅ

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/root      ext4      2.1G  1.1G  902M  56% /
devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     763M  700K  762M   1% /run
tmpfs          tmpfs     1.9G  496K  1.9G   1% /etc
tmpfs          tmpfs     1.9G  4.0K  1.9G   1% /etc/cni
tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp
tmpfs          tmpfs     1.9G     0  1.9G   0% /etc/kubernetes/pki/private
tmpfs          tmpfs     1.9G   12K  1.9G   1% /etc/containerd
tmpfs          tmpfs     1.9G  4.0K  1.9G   1% /root/.aws
tmpfs          tmpfs     1.9G     0  1.9G   0% /run/netdog
/dev/nvme0n1p3 ext4       44M   22M   20M  53% /boot
/dev/nvme1n1p1 xfs        80G  2.3G   78G   3% /local
/dev/nvme0n1p7 ext4       80M  1.3M   72M   2% /var/lib/bottlerocket
overlay        overlay    80G  2.3G   78G   3% /opt/cni
overlay        overlay    80G  2.3G   78G   3% /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/modules
overlay        overlay    80G  2.3G   78G   3% /opt/csi
/dev/loop0     squashfs   13M   13M     0 100% /var/lib/kernel-devel/.overlay/lower
/dev/loop1     squashfs  512K  512K     0 100% /x86_64-bottlerocket-linux-gnu/sys-root/usr/share/licenses
overlay        overlay    80G  2.3G   78G   3% /x86_64-bottlerocket-linux-gnu/sys-root/usr/src/kernels
tmpfs          tmpfs     3.1G   12K  3.1G   1% /var/lib/kubelet/pods/26d0b768-1309-4a72-9cbb-5bfb6c9e0353/volumes/kubernetes.io~projected/kube-api-access-7ftbv
shm            tmpfs      64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67/shm
overlay        overlay    80G  2.3G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67/rootfs
overlay        overlay    80G  2.3G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/fadf90102e0106f26fd91f6b2318d680169e2a071df0c58bb5d221aa572bbc81/rootfs
tmpfs          tmpfs     3.1G   12K  3.1G   1% /var/lib/kubelet/pods/39c06495-7379-46aa-9063-012ef0744c2b/volumes/kubernetes.io~projected/kube-api-access-h4xb2
overlay        overlay    80G  2.3G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/8d6d60ba4c6d5f6fd7381fd3dae85ccff8874ef06477bad3eb24990799ca6d4a/rootfs
overlay        overlay    80G  2.3G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3f12a5f649077a863415516ada264c023b4baefc48d1fa3897dd66cba77a33a/rootfs
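
The table reflects Bottlerocket's mostly read-only layout: a small ext4 root, an xfs `/local` volume for container data, overlay mounts for container root filesystems, and many tmpfs mounts for mutable config. Grouping mounts by filesystem type is a one-liner; the sample rows below are abbreviated from the output above:

```shell
# Count mounts per filesystem type from (abbreviated) df -hT rows.
sample='/dev/root ext4 2.1G
tmpfs tmpfs 1.9G
tmpfs tmpfs 763M
/dev/nvme1n1p1 xfs 80G'
out=$(printf '%s\n' "$sample" | awk '{n[$2]++} END {for (t in n) print t, n[t]}' | sort)
printf '%s\n' "$out"
```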

(6) Listing containerd namespaces

bash-5.2# nsenter -t 1 -m ctr ns ls

โœ…ย Output

NAME   LABELS 
k8s.io  

(7) Listing containerd containers

bash-5.2# nsenter -t 1 -m ctr -n k8s.io containers ls

โœ…ย Output

CONTAINER                                                           IMAGE                                          RUNTIME                  
05ddc3e2d73d2f6015a53db1fda28777568180726e409c845612a7e4e7563c62    localhost/kubernetes/pause:0.1.0               io.containerd.runc.v2    
8d6d60ba4c6d5f6fd7381fd3dae85ccff8874ef06477bad3eb24990799ca6d4a    localhost/kubernetes/pause:0.1.0               io.containerd.runc.v2    
ac87f2cdee369128c347da9bf763b15cbddedbdedc1a9c1125d0bd3bfb64fe5a    docker.io/hjacobs/kube-ops-view:20.4.0         io.containerd.runc.v2    
e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67    localhost/kubernetes/pause:0.1.0               io.containerd.runc.v2    
f3f12a5f649077a863415516ada264c023b4baefc48d1fa3897dd66cba77a33a    public.ecr.aws/amazonlinux/amazonlinux:2023    io.containerd.runc.v2    
fadf90102e0106f26fd91f6b2318d680169e2a071df0c58bb5d221aa572bbc81    docker.io/hjacobs/kube-ops-view:20.4.0         io.containerd.runc.v2  
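
Beyond `containers ls`, the same `ctr` CLI can list the images and running tasks in the `k8s.io` namespace. A hedged sketch (these are standard ctr subcommands, but they need the host's containerd socket, so they are shown commented):

```shell
ns="k8s.io"   # the only containerd namespace on this node, per the listing above
# nsenter -t 1 -m ctr -n "$ns" images ls   # images backing the containers
# nsenter -t 1 -m ctr -n "$ns" tasks ls    # running tasks (live processes)
echo "containerd namespace: $ns"
```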

5. Deleting the debug pod

kubectl delete pod node-debugger-i-08a7de4e203ea03b7-hvcbt

# Result
pod "node-debugger-i-08a7de4e203ea03b7-hvcbt" deleted

๐Ÿ”“ [Security] Attempting to obtain host information through a pod that shares the host namespaces

1. Granting host privileges by deploying a pod

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: root-shell
  namespace: kube-system
spec:
  containers:
  - command:
    - /bin/cat
    image: alpine:3
    name: root-shell
    securityContext:
      privileged: true
    tty: true
    stdin: true
    volumeMounts:
    - mountPath: /host
      name: hostroot
  hostNetwork: true
  hostPID: true
  hostIPC: true
  tolerations:
  - effect: NoSchedule
    operator: Exists
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:
      path: /
    name: hostroot
EOF
# Result
pod/root-shell created
  • Deploys a pod (root-shell) with hostNetwork, hostPID, and hostIPC enabled and the host's entire root directory (/) mounted
  • This configuration lets the pod reach resources in the host namespaces

2. Checking pod and node IPs

(1) Checking pod status

kubectl get pod -n kube-system root-shell

โœ…ย Output

NAME         READY   STATUS    RESTARTS   AGE
root-shell   1/1     Running   0          56s

(2) Comparing node and pod IPs

kubectl get node,pod -A -owide

โœ…ย Output

NAME                       STATUS   ROLES    AGE    VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                           KERNEL-VERSION   CONTAINER-RUNTIME
node/i-08a7de4e203ea03b7   Ready    <none>   105m   v1.31.4-eks-0f56d01   10.20.21.118   <none>        Bottlerocket (EKS Auto) 2025.3.14 (aws-k8s-1.31)   6.1.129          containerd://1.7.25+bottlerocket

NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE    IP             NODE                  NOMINATED NODE   READINESS GATES
kube-system   pod/kube-ops-view-657dbc6cd8-4q6qf   1/1     Running   1 (88m ago)   106m   10.20.28.128   i-08a7de4e203ea03b7   <none>           <none>
kube-system   pod/root-shell                       1/1     Running   0             74s    10.20.21.118   i-08a7de4e203ea03b7   <none>           <none>
  • hostNetwork: true 설정 때문에 root-shell 파드의 IP(10.20.21.118)가 노드의 INTERNAL-IP와 동일하게 표시됨

3. ๋งˆ์šดํŠธ ๋ฐ ๋ณผ๋ฅจ ์ •๋ณด ํ™•์ธ

kubectl describe pod -n kube-system root-shell

โœ…ย ์ถœ๋ ฅ

Name:             root-shell
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             i-08a7de4e203ea03b7/10.20.21.118
Start Time:       Sat, 22 Mar 2025 17:31:05 +0900
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.20.21.118
IPs:
  IP:  10.20.21.118
Containers:
  root-shell:
    Container ID:  containerd://5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3
    Image:         alpine:3
    Image ID:      docker.io/library/alpine@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/cat
    State:          Running
      Started:      Sat, 22 Mar 2025 17:31:09 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host from hostroot (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmjsp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  hostroot:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
  kube-api-access-bmjsp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m13s  default-scheduler  Successfully assigned kube-system/root-shell to i-08a7de4e203ea03b7
  Normal  Pulling    2m13s  kubelet            Pulling image "alpine:3"
  Normal  Pulled     2m9s   kubelet            Successfully pulled image "alpine:3" in 3.563s (3.563s including waiting). Image size: 3653068 bytes.
  Normal  Created    2m9s   kubelet            Created container root-shell
  Normal  Started    2m9s   kubelet            Started container root-shell
  • /host๊ฐ€ hostroot ๋ณผ๋ฅจ์„ ํ†ตํ•ด ์ฝ๊ธฐ-์“ฐ๊ธฐ ๋ชจ๋“œ๋กœ ๋งˆ์šดํŠธ๋˜์–ด ์žˆ์Œ
  • ํ˜ธ์ŠคํŠธ ๋””๋ ‰ํ† ๋ฆฌ ์ „์ฒด๊ฐ€ ํŒŒ๋“œ์— ๋…ธ์ถœ๋จ์„ ์˜๋ฏธํ•จ

4. chroot ์‹œ๋„ ๋ฐ ๊ฒฐ๊ณผ

kubectl -n kube-system exec -it root-shell -- chroot /host /bin/sh

โœ…ย ์ถœ๋ ฅ

chroot: can't execute '/bin/sh': No such file or directory
command terminated with exit code 127
  • 호스트 루트에 /bin/sh가 존재하지 않아 chroot가 실패함. Bottlerocket은 셸을 포함하지 않는 최소 구성의 컨테이너 전용 OS이기 때문

5. ํŒŒ๋“œ ๋‚ด ์‰˜ ์ ‘๊ทผ ๋ฐ ๊ธฐ๋ณธ ์ •๋ณด ํ™•์ธ

kubectl -n kube-system exec -it root-shell -- sh
/ # whoami
root
/ # pwd
/

6. ๋„คํŠธ์›Œํฌ ์ •๋ณด ํ™•์ธ

(1) ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ •๋ณด ์กฐํšŒ

/ # ip link

โœ…ย ์ถœ๋ ฅ

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP qlen 1000
    link/ether 06:22:58:e9:2f:29 brd ff:ff:ff:ff:ff:ff
3: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether c2:bf:fe:b4:6c:4e brd ff:ff:ff:ff:ff:ff
4: coredns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 82:0a:7b:c8:3a:cf brd ff:ff:ff:ff:ff:ff
5: eni74051a80866@pod-id-link0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP 
    link/ether a6:40:e1:66:72:11 brd ff:ff:ff:ff:ff:ff

(2) IP ์ฃผ์†Œ ์ •๋ณด ์กฐํšŒ

/ # ip addr

โœ…ย ์ถœ๋ ฅ

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP qlen 1000
    link/ether 06:22:58:e9:2f:29 brd ff:ff:ff:ff:ff:ff
    inet 10.20.21.118/20 brd 10.20.31.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::422:58ff:fee9:2f29/64 scope link 
       valid_lft forever preferred_lft forever
3: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether c2:bf:fe:b4:6c:4e brd ff:ff:ff:ff:ff:ff
    inet 169.254.170.23/32 scope global pod-id-link0
       valid_lft forever preferred_lft forever
    inet6 fd00:ec2::23/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::c0bf:feff:feb4:6c4e/64 scope link 
       valid_lft forever preferred_lft forever
4: coredns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 82:0a:7b:c8:3a:cf brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.10/32 scope global coredns
       valid_lft forever preferred_lft forever
    inet6 fe80::800a:7bff:fec8:3acf/64 scope link 
       valid_lft forever preferred_lft forever
5: eni74051a80866@pod-id-link0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP 
    link/ether a6:40:e1:66:72:11 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a440:e1ff:fe66:7211/64 scope link 
       valid_lft forever preferred_lft forever
  • ํ˜ธ์ŠคํŠธ์™€ ๋™์ผํ•œ ๋„คํŠธ์›Œํฌ ํ™˜๊ฒฝ์„ ๊ณต์œ ํ•˜๊ณ  ์žˆ์Œ

7. ํฌํŠธ ์—ฐ๊ฒฐ ์ •๋ณด ํ™•์ธ

(1) TCP ์—ฐ๊ฒฐ ์ƒํƒœ ํ™•์ธ

/ # netstat -tnlp

โœ…ย ์ถœ๋ ฅ

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:42319         0.0.0.0:*               LISTEN      1113/containerd
tcp        0      0 127.0.0.1:8181          0.0.0.0:*               LISTEN      1655/coredns
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      1655/coredns
tcp        0      0 169.254.170.23:80       0.0.0.0:*               LISTEN      1181/eks-pod-identi
tcp        0      0 127.0.0.54:53           0.0.0.0:*               LISTEN      1094/systemd-resolv
tcp        0      0 172.20.0.10:53          0.0.0.0:*               LISTEN      1655/coredns
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      1140/kube-proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      1140/kube-proxy
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1189/kubelet
tcp        0      0 127.0.0.1:61680         0.0.0.0:*               LISTEN      1112/aws-network-po
tcp        0      0 127.0.0.1:61679         0.0.0.0:*               LISTEN      1622/ipamd
tcp        0      0 127.0.0.1:61678         0.0.0.0:*               LISTEN      1622/ipamd
tcp        0      0 127.0.0.1:9153          0.0.0.0:*               LISTEN      1655/coredns
tcp        0      0 127.0.0.1:50052         0.0.0.0:*               LISTEN      1112/aws-network-po
tcp        0      0 127.0.0.1:50051         0.0.0.0:*               LISTEN      1622/ipamd
tcp        0      0 127.0.0.1:8801          0.0.0.0:*               LISTEN      1114/eks-node-monit
tcp        0      0 127.0.0.1:8800          0.0.0.0:*               LISTEN      1114/eks-node-monit
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      1094/systemd-resolv
tcp        0      0 127.0.0.1:8901          0.0.0.0:*               LISTEN      1112/aws-network-po
tcp        0      0 127.0.0.1:8900          0.0.0.0:*               LISTEN      1112/aws-network-po
tcp        0      0 127.0.0.1:2705          0.0.0.0:*               LISTEN      1181/eks-pod-identi
tcp        0      0 127.0.0.1:2703          0.0.0.0:*               LISTEN      1181/eks-pod-identi
tcp        0      0 :::10250                :::*                    LISTEN      1189/kubelet
tcp        0      0 fd00:ec2::23:80         :::*                    LISTEN      1181/eks-pod-identi
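위 결과에는 hostNetwork 공유 덕분에 노드의 모든 리스닝 소켓이 그대로 보인다. 이런 출력에서 루프백(127.x)이 아닌 주소에 바인딩되어 외부 접근 가능성이 있는 소켓만 걸러내고 싶다면, 아래처럼 awk로 간단히 필터링해 볼 수 있다(위 출력 중 일부를 샘플로 사용한 스케치).

```shell
# 예시: netstat -tnlp 형태의 출력에서 루프백(127.x)이 아닌 주소에
# 바인딩된 리스닝 소켓만 추출 (위 출력 일부를 샘플로 사용)
sample='tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1189/kubelet
tcp 0 0 169.254.170.23:80 0.0.0.0:* LISTEN 1181/eks-pod-identi
tcp 0 0 172.20.0.10:53 0.0.0.0:* LISTEN 1655/coredns'
# $4(Local Address)가 127.로 시작하지 않는 행의 주소와 프로세스명을 출력
nonloopback=$(printf '%s\n' "$sample" | awk '$4 !~ /^127\./ {print $4, $7}')
printf '%s\n' "$nonloopback"
```

실제로는 `netstat -tnlp | awk ...`처럼 파이프로 연결해 쓰면 된다.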

(2) UDP ์—ฐ๊ฒฐ ์ƒํƒœ ํ™•์ธ

/ # netstat -unlp

โœ…ย ์ถœ๋ ฅ

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
udp        0      0 127.0.0.1:323           0.0.0.0:*                           1102/chronyd
udp        0      0 172.20.0.10:53          0.0.0.0:*                           1655/coredns
udp        0      0 127.0.0.54:53           0.0.0.0:*                           1094/systemd-resolv
udp        0      0 127.0.0.53:53           0.0.0.0:*                           1094/systemd-resolv
udp        0      0 10.20.21.118:68         0.0.0.0:*                           1004/systemd-networ
udp        0      0 ::1:323                 :::*                                1102/chronyd
udp        0      0 fe80::422:58ff:fee9:2f29:546 :::*                                1004/systemd-networ

8. ํ”„๋กœ์„ธ์Šค ์ •๋ณด ํ™•์ธ

(1) ์‹ค์‹œ๊ฐ„ ๋ฆฌ์†Œ์Šค ๋ชจ๋‹ˆํ„ฐ๋ง

/ # top -d 1

โœ…ย ์ถœ๋ ฅ

(이미지: top 실시간 리소스 모니터링 화면 캡처)

(2) ์ „์ฒด ํ”„๋กœ์„ธ์Šค ์ƒํƒœ ์กฐํšŒ (ps aux)

/ # ps aux

โœ…ย ์ถœ๋ ฅ

PID   USER     TIME  COMMAND
    1 root      0:11 {systemd} /sbin/init systemd.log_target=journal-or-kmsg systemd.log_color=0 systemd.show_status=true
    2 root      0:00 [kthreadd]
    3 root      0:00 [rcu_gp]
    4 root      0:00 [rcu_par_gp]
    5 root      0:00 [slub_flushwq]
    6 root      0:00 [netns]
    8 root      0:00 [kworker/0:0H-kb]
   10 root      0:00 [mm_percpu_wq]
   11 root      0:00 [rcu_tasks_kthre]
   12 root      0:00 [rcu_tasks_rude_]
   13 root      0:00 [rcu_tasks_trace]
   14 root      0:00 [ksoftirqd/0]
   15 root      0:00 [rcu_preempt]
   16 root      0:00 [migration/0]
   18 root      0:00 [cpuhp/0]
   19 root      0:00 [cpuhp/1]
   20 root      0:00 [migration/1]
   21 root      0:01 [ksoftirqd/1]
   23 root      0:00 [kworker/1:0H-ev]
   26 root      0:00 [kdevtmpfs]
   27 root      0:00 [inet_frag_wq]
   28 root      0:00 [kauditd]
   29 root      0:00 [khungtaskd]
   30 root      0:00 [oom_reaper]
   32 root      0:00 [writeback]
   33 root      0:00 [kcompactd0]
   34 root      0:00 [khugepaged]
   35 root      0:00 [cryptd]
   36 root      0:00 [kintegrityd]
   37 root      0:00 [kblockd]
   38 root      0:00 [blkcg_punt_bio]
   40 root      0:00 [tpm_dev_wq]
   41 root      0:00 [ata_sff]
   42 root      0:00 [md]
   43 root      0:00 [edac-poller]
   44 root      0:00 [watchdogd]
   45 root      0:00 [kworker/1:1H-kb]
   68 root      0:00 [kswapd0]
   71 root      0:00 [xfsalloc]
   72 root      0:00 [xfs_mru_cache]
   75 root      0:00 [kthrotld]
  138 root      0:00 [nvme-wq]
  140 root      0:00 [nvme-reset-wq]
  143 root      0:00 [nvme-delete-wq]
  149 root      0:00 [dm_bufio_cache]
  186 root      0:00 [mld]
  188 root      0:00 [ipv6_addrconf]
  199 root      0:00 [kstrp]
  219 root      0:00 [zswap-shrink]
  340 root      0:00 [kdmflush/252:0]
  341 root      0:00 [kverityd]
  346 root      0:00 [ext4-rsv-conver]
  348 root      0:00 [kworker/0:1H-kb]
  382 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-journald
  407 root      0:00 [ext4-rsv-conver]
  418 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-udevd
  867 root      0:00 [ena]
  913 root      0:00 [xfs-buf/nvme1n1]
  914 root      0:00 [xfs-conv/nvme1n]
  915 root      0:00 [xfs-reclaim/nvm]
  916 root      0:00 [xfs-blockgc/nvm]
  917 root      0:00 [xfs-inodegc/nvm]
  918 root      0:00 [xfs-log/nvme1n1]
  919 root      0:00 [xfs-cil/nvme1n1]
  920 root      0:00 [xfsaild/nvme1n1]
  931 root      0:00 [jbd2/nvme0n1p7-]
  932 root      0:00 [ext4-rsv-conver]
  995 81        0:00 /usr/bin/dbus-broker-launch --scope system
  996 81        0:00 dbus-broker --log 4 --controller 9 --machine-id ec28607a9117855261a9c8c1cef1cbf2 --max-bytes 536870912 --max-fds 4096 --max-
 1001 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-logind
 1004 980       0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-networkd
 1017 root      0:00 /usr/bin/apiserver --datastore-path /var/lib/bottlerocket/datastore/current --socket-gid 274
 1094 979       0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-resolved
 1102 981       0:00 /usr/sbin/chronyd -d -F -1
 1105 981       0:00 /usr/sbin/chronyd -d -F -1
 1112 root      0:02 /usr/bin/aws-network-policy-agent --kubeconfig /etc/kubernetes/aws-network-policy-agent/kubeconfig --enable-network-policy t
 1113 root      0:18 /usr/bin/containerd
 1114 root      0:22 /usr/bin/eks-node-monitoring-agent --kubeconfig /etc/kubernetes/eks-node-monitoring-agent/kubeconfig --hostname-override i-0
 1138 root      0:03 /usr/bin/eks-healthchecker
 1140 root      0:01 /usr/bin/kube-proxy --hostname-override i-08a7de4e203ea03b7 --config=/usr/share/kube-proxy/kube-proxy-config --kubeconfig=/e
 1181 root      0:00 /usr/bin/eks-pod-identity-agent server --metrics-address 127.0.0.1 --cluster-name automode-cluster --port 80 --probe-port 27
 1189 root      0:35 /usr/bin/kubelet --cloud-provider external --kubeconfig /etc/kubernetes/kubelet/kubeconfig --config /etc/kubernetes/kubelet/
 1254 root      0:00 /usr/bin/csi-node-driver-registrar --csi-address=/var/lib/kubelet/plugins/ebs.csi.eks.amazonaws.com/csi.sock --kubelet-regis
 1269 root      0:00 /usr/bin/eks-ebs-csi-driver node --kubeconfig /etc/kubernetes/eks-ebs-csi-driver/kubeconfig --endpoint=unix:/var/lib/kubelet
 1622 root      0:02 /usr/bin/ipamd --kubeconfig /etc/kubernetes/ipamd/kubeconfig --metrics-bind-addr 127.0.0.1:8172 --health-probe-bind-addr 127
 1655 983       0:09 /usr/bin/coredns -conf=/etc/coredns/Corefile
 1801 root      0:01 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id e9f31e10f093d6fc3090a71654c426
 1828 root      0:00 /pause
 1878 1000      0:13 python3 -m kube_ops_view
 4600 root      0:00 [kworker/u5:1-kv]
 5832 root      0:00 [kworker/1:5-cgr]
 5845 root      0:00 [kworker/0:15-ev]
 6163 root      0:00 [kworker/u5:2-kv]
 6293 root      0:00 [kworker/u4:0-ev]
 6656 root      0:00 [kworker/1:0-eve]
 6791 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 94f119878d50c69e83d32cbfa55390
 6818 root      0:00 /pause
 6872 root      0:00 /bin/cat
 6986 root      0:00 [kworker/0:0-eve]
 7013 root      0:00 [kworker/u4:2-ev]
 7214 root      0:00 sh
 7284 root      0:00 [kworker/0:1-eve]
 7304 root      0:00 [kworker/u4:1-ev]
 7463 root      0:00 ps aux
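
hostPID 공유 덕분에 위처럼 노드의 전체 프로세스가 보인다. [kthreadd]처럼 대괄호로 표시되는 커널 스레드를 제외하고 사용자 공간 프로세스만 추리고 싶다면 아래처럼 걸러낼 수 있다(위 출력 일부를 샘플로 쓴 스케치).

```shell
# 예시: ps 출력에서 [kthreadd]처럼 대괄호로 시작하는 커널 스레드를 제외
ps_sample='    1 root      0:11 {systemd} /sbin/init
    2 root      0:00 [kthreadd]
 1189 root      0:35 /usr/bin/kubelet'
# $4(COMMAND 첫 토큰)가 [ 로 시작하지 않는 행의 PID와 명령만 출력
userland=$(printf '%s\n' "$ps_sample" | awk '$4 !~ /^\[/ {print $1, $4}')
printf '%s\n' "$userland"
```

실제로는 `ps aux | awk ...`로 파이프해 같은 필터를 적용하면 된다.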

(3) ์ „์ฒด ํ”„๋กœ์„ธ์Šค ์ƒํƒœ ์กฐํšŒ (ps -ef)

/ # ps -ef

โœ…ย ์ถœ๋ ฅ

PID   USER     TIME  COMMAND
    1 root      0:11 {systemd} /sbin/init systemd.log_target=journal-or-kmsg systemd.log_color=0 systemd.show_status=true
    2 root      0:00 [kthreadd]
    3 root      0:00 [rcu_gp]
    4 root      0:00 [rcu_par_gp]
    5 root      0:00 [slub_flushwq]
    6 root      0:00 [netns]
    8 root      0:00 [kworker/0:0H-kb]
   10 root      0:00 [mm_percpu_wq]
   11 root      0:00 [rcu_tasks_kthre]
   12 root      0:00 [rcu_tasks_rude_]
   13 root      0:00 [rcu_tasks_trace]
   14 root      0:00 [ksoftirqd/0]
   15 root      0:00 [rcu_preempt]
   16 root      0:00 [migration/0]
   18 root      0:00 [cpuhp/0]
   19 root      0:00 [cpuhp/1]
   20 root      0:00 [migration/1]
   21 root      0:01 [ksoftirqd/1]
   23 root      0:00 [kworker/1:0H-ev]
   26 root      0:00 [kdevtmpfs]
   27 root      0:00 [inet_frag_wq]
   28 root      0:00 [kauditd]
   29 root      0:00 [khungtaskd]
   30 root      0:00 [oom_reaper]
   32 root      0:00 [writeback]
   33 root      0:00 [kcompactd0]
   34 root      0:00 [khugepaged]
   35 root      0:00 [cryptd]
   36 root      0:00 [kintegrityd]
   37 root      0:00 [kblockd]
   38 root      0:00 [blkcg_punt_bio]
   40 root      0:00 [tpm_dev_wq]
   41 root      0:00 [ata_sff]
   42 root      0:00 [md]
   43 root      0:00 [edac-poller]
   44 root      0:00 [watchdogd]
   45 root      0:00 [kworker/1:1H-kb]
   68 root      0:00 [kswapd0]
   71 root      0:00 [xfsalloc]
   72 root      0:00 [xfs_mru_cache]
   75 root      0:00 [kthrotld]
  138 root      0:00 [nvme-wq]
  140 root      0:00 [nvme-reset-wq]
  143 root      0:00 [nvme-delete-wq]
  149 root      0:00 [dm_bufio_cache]
  186 root      0:00 [mld]
  188 root      0:00 [ipv6_addrconf]
  199 root      0:00 [kstrp]
  219 root      0:00 [zswap-shrink]
  340 root      0:00 [kdmflush/252:0]
  341 root      0:00 [kverityd]
  346 root      0:00 [ext4-rsv-conver]
  348 root      0:00 [kworker/0:1H-kb]
  382 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-journald
  407 root      0:00 [ext4-rsv-conver]
  418 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-udevd
  867 root      0:00 [ena]
  913 root      0:00 [xfs-buf/nvme1n1]
  914 root      0:00 [xfs-conv/nvme1n]
  915 root      0:00 [xfs-reclaim/nvm]
  916 root      0:00 [xfs-blockgc/nvm]
  917 root      0:00 [xfs-inodegc/nvm]
  918 root      0:00 [xfs-log/nvme1n1]
  919 root      0:00 [xfs-cil/nvme1n1]
  920 root      0:00 [xfsaild/nvme1n1]
  931 root      0:00 [jbd2/nvme0n1p7-]
  932 root      0:00 [ext4-rsv-conver]
  995 81        0:00 /usr/bin/dbus-broker-launch --scope system
  996 81        0:00 dbus-broker --log 4 --controller 9 --machine-id ec28607a9117855261a9c8c1cef1cbf2 --max-bytes 536870912 --max-fds 4096 --max-
 1001 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-logind
 1004 980       0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-networkd
 1017 root      0:00 /usr/bin/apiserver --datastore-path /var/lib/bottlerocket/datastore/current --socket-gid 274
 1094 979       0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-resolved
 1102 981       0:00 /usr/sbin/chronyd -d -F -1
 1105 981       0:00 /usr/sbin/chronyd -d -F -1
 1112 root      0:02 /usr/bin/aws-network-policy-agent --kubeconfig /etc/kubernetes/aws-network-policy-agent/kubeconfig --enable-network-policy t
 1113 root      0:18 /usr/bin/containerd
 1114 root      0:22 /usr/bin/eks-node-monitoring-agent --kubeconfig /etc/kubernetes/eks-node-monitoring-agent/kubeconfig --hostname-override i-0
 1138 root      0:03 /usr/bin/eks-healthchecker
 1140 root      0:01 /usr/bin/kube-proxy --hostname-override i-08a7de4e203ea03b7 --config=/usr/share/kube-proxy/kube-proxy-config --kubeconfig=/e
 1181 root      0:00 /usr/bin/eks-pod-identity-agent server --metrics-address 127.0.0.1 --cluster-name automode-cluster --port 80 --probe-port 27
 1189 root      0:35 /usr/bin/kubelet --cloud-provider external --kubeconfig /etc/kubernetes/kubelet/kubeconfig --config /etc/kubernetes/kubelet/
 1254 root      0:00 /usr/bin/csi-node-driver-registrar --csi-address=/var/lib/kubelet/plugins/ebs.csi.eks.amazonaws.com/csi.sock --kubelet-regis
 1269 root      0:00 /usr/bin/eks-ebs-csi-driver node --kubeconfig /etc/kubernetes/eks-ebs-csi-driver/kubeconfig --endpoint=unix:/var/lib/kubelet
 1622 root      0:02 /usr/bin/ipamd --kubeconfig /etc/kubernetes/ipamd/kubeconfig --metrics-bind-addr 127.0.0.1:8172 --health-probe-bind-addr 127
 1655 983       0:09 /usr/bin/coredns -conf=/etc/coredns/Corefile
 1801 root      0:01 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id e9f31e10f093d6fc3090a71654c426
 1828 root      0:00 /pause
 1878 1000      0:13 python3 -m kube_ops_view
 4600 root      0:00 [kworker/u5:1-kv]
 5832 root      0:00 [kworker/1:5-cgr]
 5845 root      0:00 [kworker/0:15-ev]
 6163 root      0:00 [kworker/u5:2-kv]
 6293 root      0:00 [kworker/u4:0-ev]
 6656 root      0:00 [kworker/1:0-eve]
 6791 root      0:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 94f119878d50c69e83d32cbfa55390
 6818 root      0:00 /pause
 6872 root      0:00 /bin/cat
 6986 root      0:00 [kworker/0:0-eve]
 7013 root      0:00 [kworker/u4:2-ev]
 7214 root      0:00 sh
 7284 root      0:00 [kworker/0:1-eve]
 7304 root      0:00 [kworker/u4:1-ev]
 7475 root      0:00 ps -ef

(4) ํ”„๋กœ์„ธ์Šค ํŠธ๋ฆฌ ๊ตฌ์กฐ ํ™•์ธ (pstree)

/ # pstree

โœ…ย ์ถœ๋ ฅ

systemd-+-apiserver-+-{actix-rt|system}
        |           `-{actix-server ac}
        |-aws-network-pol---7*[{aws-network-pol}]
        |-chronyd---chronyd
        |-containerd---11*[{containerd}]
        |-containerd-shim-+-pause
        |                 |-python3---2*[{python3}]
        |                 `-11*[{containerd-shim}]
        |-containerd-shim-+-cat
        |                 |-pause
        |                 |-sh---pstree
        |                 `-11*[{containerd-shim}]
        |-coredns---7*[{coredns}]
        |-csi-node-driver---6*[{csi-node-driver}]
        |-dbus-broker-lau---dbus-broker
        |-eks-ebs-csi-dri---5*[{eks-ebs-csi-dri}]
        |-eks-healthcheck---12*[{eks-healthcheck}]
        |-eks-node-monito---13*[{eks-node-monito}]
        |-eks-pod-identit---8*[{eks-pod-identit}]
        |-ipamd---5*[{ipamd}]
        |-kube-proxy---6*[{kube-proxy}]
        |-kubelet---11*[{kubelet}]
        |-systemd-journal
        |-systemd-logind
        |-systemd-network
        |-systemd-resolve
        `-systemd-udevd

(5) PID ํฌํ•จ ํ”„๋กœ์„ธ์Šค ํŠธ๋ฆฌ ํ™•์ธ (pstree -p)

/ # pstree -p

โœ…ย ์ถœ๋ ฅ

systemd(1)-+-apiserver(1017)-+-{actix-rt|system}(1019)
           |                 `-{actix-server ac}(1035)
           |-aws-network-pol(1112)-+-{aws-network-pol}(1125)
           |                       |-{aws-network-pol}(1126)
           |                       |-{aws-network-pol}(1127)
           |                       |-{aws-network-pol}(1128)
           |                       |-{aws-network-pol}(1129)
           |                       |-{aws-network-pol}(1712)
           |                       `-{aws-network-pol}(2863)
           |-chronyd(1102)---chronyd(1105)
           |-containerd(1113)-+-{containerd}(1116)
           |                  |-{containerd}(1118)
           |                  |-{containerd}(1119)
           |                  |-{containerd}(1120)
           |                  |-{containerd}(1134)
           |                  |-{containerd}(1760)
           |                  |-{containerd}(1993)
           |                  |-{containerd}(2200)
           |                  |-{containerd}(4767)
           |                  |-{containerd}(4768)
           |                  `-{containerd}(7443)
           |-containerd-shim(1801)-+-pause(1828)
           |                       |-python3(1878)-+-{python3}(1902)
           |                       |               `-{python3}(1903)
           |                       |-{containerd-shim}(1802)
           |                       |-{containerd-shim}(1803)
           |                       |-{containerd-shim}(1804)
           |                       |-{containerd-shim}(1805)
           |                       |-{containerd-shim}(1806)
           |                       |-{containerd-shim}(1807)
           |                       |-{containerd-shim}(1808)
           |                       |-{containerd-shim}(1809)
           |                       |-{containerd-shim}(1835)
           |                       |-{containerd-shim}(1839)
           |                       `-{containerd-shim}(2081)
           |-containerd-shim(6791)-+-cat(6872)
           |                       |-pause(6818)
           |                       |-sh(7214)---pstree(7522)
           |                       |-{containerd-shim}(6792)
           |                       |-{containerd-shim}(6793)
           |                       |-{containerd-shim}(6794)
           |                       |-{containerd-shim}(6795)
           |                       |-{containerd-shim}(6796)
           |                       |-{containerd-shim}(6797)
           |                       |-{containerd-shim}(6798)
           |                       |-{containerd-shim}(6799)
           |                       |-{containerd-shim}(6825)
           |                       |-{containerd-shim}(6826)
           |                       `-{containerd-shim}(6965)
           |-coredns(1655)-+-{coredns}(1661)
           |               |-{coredns}(1662)
           |               |-{coredns}(1663)
           |               |-{coredns}(1664)
           |               |-{coredns}(1665)
           |               |-{coredns}(1672)
           |               `-{coredns}(1934)
           |-csi-node-driver(1254)-+-{csi-node-driver}(1293)
           |                       |-{csi-node-driver}(1294)
           |                       |-{csi-node-driver}(1295)
           |                       |-{csi-node-driver}(1296)
           |                       |-{csi-node-driver}(1298)
           |                       `-{csi-node-driver}(1394)
           |-dbus-broker-lau(995)---dbus-broker(996)
           |-eks-ebs-csi-dri(1269)-+-{eks-ebs-csi-dri}(1310)
           |                       |-{eks-ebs-csi-dri}(1311)
           |                       |-{eks-ebs-csi-dri}(1312)
           |                       |-{eks-ebs-csi-dri}(1315)
           |                       `-{eks-ebs-csi-dri}(4604)
           |-eks-healthcheck(1138)-+-{eks-healthcheck}(1155)
           |                       |-{eks-healthcheck}(1156)
           |                       |-{eks-healthcheck}(1157)
           |                       |-{eks-healthcheck}(1158)
           |                       |-{eks-healthcheck}(1169)
           |                       |-{eks-healthcheck}(1562)
           |                       |-{eks-healthcheck}(1620)
           |                       |-{eks-healthcheck}(1721)
           |                       |-{eks-healthcheck}(1722)
           |                       |-{eks-healthcheck}(1912)
           |                       |-{eks-healthcheck}(2309)
           |                       `-{eks-healthcheck}(2877)
           |-eks-node-monito(1114)-+-{eks-node-monito}(1121)
           |                       |-{eks-node-monito}(1122)
           |                       |-{eks-node-monito}(1123)
           |                       |-{eks-node-monito}(1124)
           |                       |-{eks-node-monito}(1131)
           |                       |-{eks-node-monito}(1190)
           |                       |-{eks-node-monito}(1198)
           |                       |-{eks-node-monito}(1205)
           |                       |-{eks-node-monito}(1206)
           |                       |-{eks-node-monito}(2255)
           |                       |-{eks-node-monito}(2256)
           |                       |-{eks-node-monito}(2259)
           |                       `-{eks-node-monito}(6317)
           |-eks-pod-identit(1181)-+-{eks-pod-identit}(1191)
           |                       |-{eks-pod-identit}(1192)
           |                       |-{eks-pod-identit}(1193)
           |                       |-{eks-pod-identit}(1194)
           |                       |-{eks-pod-identit}(1195)
           |                       |-{eks-pod-identit}(1196)
           |                       |-{eks-pod-identit}(2204)
           |                       `-{eks-pod-identit}(2411)
           |-ipamd(1622)-+-{ipamd}(1656)
           |             |-{ipamd}(1657)
           |             |-{ipamd}(1658)
           |             |-{ipamd}(1659)
           |             `-{ipamd}(1660)
           |-kube-proxy(1140)-+-{kube-proxy}(1162)
           |                  |-{kube-proxy}(1164)
           |                  |-{kube-proxy}(1165)
           |                  |-{kube-proxy}(1166)
           |                  |-{kube-proxy}(1168)
           |                  `-{kube-proxy}(1240)
           |-kubelet(1189)-+-{kubelet}(1199)
           |               |-{kubelet}(1201)
           |               |-{kubelet}(1202)
           |               |-{kubelet}(1203)
           |               |-{kubelet}(1204)
           |               |-{kubelet}(1243)
           |               |-{kubelet}(1244)
           |               |-{kubelet}(1274)
           |               |-{kubelet}(1275)
           |               |-{kubelet}(1277)
           |               `-{kubelet}(1300)
           |-systemd-journal(382)
           |-systemd-logind(1001)
           |-systemd-network(1004)
           |-systemd-resolve(1094)
           `-systemd-udevd(418)

9. htop ์„ค์น˜ ๋ฐ ์‹คํ–‰

(1) htop ์„ค์น˜

/ # apk add htop

โœ…ย ์ถœ๋ ฅ

fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/community/x86_64/APKINDEX.tar.gz
(1/3) Installing ncurses-terminfo-base (6.5_p20241006-r3)
(2/3) Installing libncursesw (6.5_p20241006-r3)
(3/3) Installing htop (3.3.0-r0)
Executing busybox-1.37.0-r12.trigger
OK: 8 MiB in 18 packages

(2) ์‹œ์Šคํ…œ ๋ฆฌ์†Œ์Šค ์‚ฌ์šฉ๋Ÿ‰ ๋ชจ๋‹ˆํ„ฐ๋ง

/ # htop

โœ…ย ์ถœ๋ ฅ

(이미지: htop 시스템 리소스 사용량 모니터링 화면 캡처)

10. ๋งˆ์šดํŠธ ์ •๋ณด ํ™•์ธ

(1) ๊ธฐ๋ณธ ํŒŒ์ผ ์‹œ์Šคํ…œ ์‚ฌ์šฉ๋Ÿ‰ ์กฐํšŒ (ํ˜ธ์ŠคํŠธ ๋งˆ์šดํŠธ ์ œ์™ธ)

/ # df -hT | grep -v host

โœ…ย ์ถœ๋ ฅ

Filesystem           Type            Size      Used Available Use% Mounted on
overlay              overlay        79.9G      2.2G     77.8G   3% /
tmpfs                tmpfs          64.0M         0     64.0M   0% /dev
/dev/root            ext4            2.1G      1.1G    901.5M  55% /usr/local/sbin/modprobe
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /dev/termination-log
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /etc/resolv.conf
tmpfs                tmpfs           1.9G         0      1.9G   0% /dev/shm
tmpfs                tmpfs           3.1G     12.0K      3.1G   0% /run/secrets/kubernetes.io/serviceaccount

(2) ํ˜ธ์ŠคํŠธ ๊ด€๋ จ ๋งˆ์šดํŠธ ์ •๋ณด ์กฐํšŒ

/ # df -hT | grep host

โœ…ย ์ถœ๋ ฅ

/dev/root            ext4            2.1G      1.1G    901.5M  55% /host
devtmpfs             devtmpfs        1.9G         0      1.9G   0% /host/dev
tmpfs                tmpfs           1.9G         0      1.9G   0% /host/dev/shm
tmpfs                tmpfs         762.1M    704.0K    761.4M   0% /host/run
tmpfs                tmpfs           1.9G         0      1.9G   0% /host/run/netdog
shm                  tmpfs          64.0M         0     64.0M   0% /host/run/containerd/io.containerd.grpc.v1.cri/sandboxes/e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67/shm
overlay              overlay        79.9G      2.2G     77.8G   3% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67/rootfs
overlay              overlay        79.9G      2.2G     77.8G   3% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/fadf90102e0106f26fd91f6b2318d680169e2a071df0c58bb5d221aa572bbc81/rootfs
overlay              overlay        79.9G      2.2G     77.8G   3% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f119878d50c69e83d32cbfa553902653e488f7c46c13f3c133620ca67966b9/rootfs
overlay              overlay        79.9G      2.2G     77.8G   3% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3/rootfs
overlay              overlay        79.9G      2.2G     77.8G   3% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3/rootfs
tmpfs                tmpfs          64.0M         0     64.0M   0% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3/rootfs/dev
/dev/root            ext4            2.1G      1.1G    901.5M  55% /host/run/containerd/io.containerd.runtime.v2.task/k8s.io/5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3/rootfs/usr/local/sbin/modprobe
tmpfs                tmpfs           1.9G    496.0K      1.9G   0% /host/etc
tmpfs                tmpfs           1.9G      4.0K      1.9G   0% /host/etc/cni
tmpfs                tmpfs           1.9G         0      1.9G   0% /host/etc/kubernetes/pki/private
tmpfs                tmpfs           1.9G     12.0K      1.9G   0% /host/etc/containerd
tmpfs                tmpfs           1.9G         0      1.9G   0% /host/tmp
tmpfs                tmpfs           1.9G      4.0K      1.9G   0% /host/root/.aws
/dev/nvme0n1p3       ext4           43.5M     21.0M     19.2M  52% /host/boot
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /host/local
/dev/root            ext4            2.1G      1.1G    901.5M  55% /host/local/mnt
/dev/root            ext4            2.1G      1.1G    901.5M  55% /host/local/opt
/dev/root            ext4            2.1G      1.1G    901.5M  55% /host/local/var
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /host/mnt
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /host/opt
overlay              overlay        79.9G      2.2G     77.8G   3% /host/opt/cni
overlay              overlay        79.9G      2.2G     77.8G   3% /host/opt/csi
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /host/var
/dev/nvme0n1p7       ext4           79.4M      1.3M     71.9M   2% /host/var/lib/bottlerocket
/dev/loop0           squashfs       12.6M     12.6M         0 100% /host/var/lib/kernel-devel/.overlay/lower
tmpfs                tmpfs           3.1G     12.0K      3.1G   0% /host/var/lib/kubelet/pods/26d0b768-1309-4a72-9cbb-5bfb6c9e0353/volumes/kubernetes.io~projected/kube-api-access-7ftbv
tmpfs                tmpfs           3.1G     12.0K      3.1G   0% /host/var/lib/kubelet/pods/4963e6f9-73de-49fa-86df-1a9d3f982aad/volumes/kubernetes.io~projected/kube-api-access-bmjsp
overlay              overlay        79.9G      2.2G     77.8G   3% /host/x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/modules
/dev/loop1           squashfs      512.0K    512.0K         0 100% /host/x86_64-bottlerocket-linux-gnu/sys-root/usr/share/licenses
overlay              overlay        79.9G      2.2G     77.8G   3% /host/x86_64-bottlerocket-linux-gnu/sys-root/usr/src/kernels
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /etc/hosts
/dev/nvme1n1p1       xfs            79.9G      2.2G     77.8G   3% /etc/hostname
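
The long listing above repeats a handful of source devices many times. It can be condensed into a per-device count with a small awk pipeline; a minimal sketch with sample rows inlined (in the pod you would pipe `df -hT | grep host` in instead — `count_by_device` is an illustrative helper, not a tool on the node):

```shell
# Count how many mount points each source device backs in `df -hT`-style rows.
count_by_device() {
  awk '{n[$1]++} END {for (d in n) printf "%s %d\n", d, n[d]}' | sort
}

printf '%s\n' \
  '/dev/root ext4 2.1G 1.1G 901.5M 55% /host' \
  '/dev/root ext4 2.1G 1.1G 901.5M 55% /host/local/mnt' \
  '/dev/nvme1n1p1 xfs 79.9G 2.2G 77.8G 3% /host/var' \
  | count_by_device
# prints:
# /dev/nvme1n1p1 1
# /dev/root 2
```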

(3) Check the hostname

/ # cat /etc/hostname
ip-10-20-21-118.ap-northeast-2.compute.internal

(4) ํŒŒ์ผ ์‹œ์Šคํ…œ ๊ตฌ์กฐ ํ™•์ธ

/ # ls -l /host

โœ… Output

total 37
lrwxrwxrwx    1 root     root             9 Feb 26 00:15 bin -> ./usr/bin
drwxr-xr-x    5 root     root          1024 Mar 14 01:49 boot
drwxr-xr-x   14 root     root          3220 Mar 22 08:04 dev
drwxr-xr-x   20 root     root           920 Mar 22 07:03 etc
drwxr-xr-x    2 root     root          4096 Feb 26 00:15 home
lrwxrwxrwx    1 root     root            48 Feb 26 00:15 lib -> ./x86_64-bottlerocket-linux-gnu/sys-root/usr/lib
lrwxrwxrwx    1 root     root            48 Feb 26 00:15 lib64 -> ./x86_64-bottlerocket-linux-gnu/sys-root/usr/lib
drwxr-xr-x    5 root     root            39 Mar 22 06:46 local
drwxr-xr-x    2 root     root         16384 Mar 14 01:49 lost+found
drwxr-xr-x    3 root     root          4096 Mar 14 01:49 media
drwxr-xr-x    2 root     root             6 Mar 22 06:46 mnt
drwxr-xr-x    4 root     root            53 Mar 22 07:03 opt
dr-xr-xr-x  167 root     root             0 Mar 22 07:03 proc
drwxr-xr-x    3 root     root          4096 Mar 14 01:49 root
drwxr-xr-x   18 root     root           440 Mar 22 07:04 run
lrwxrwxrwx    1 root     root            10 Feb 26 00:15 sbin -> ./usr/sbin
drwxr-xr-x    2 root     root          4096 Feb 26 00:15 srv
dr-xr-xr-x   13 root     root             0 Mar 22 07:03 sys
drwxrwxrwt   11 root     root           220 Mar 22 08:38 tmp
lrwxrwxrwx    1 root     root            44 Feb 26 00:15 usr -> ./x86_64-bottlerocket-linux-gnu/sys-root/usr
drwxr-xr-x    7 root     root           104 Mar 22 07:03 var
drwxr-xr-x    3 root     root          4096 Mar 14 01:49 x86_64-bottlerocket-linux-gnu

11. Check the kube-proxy configuration file

/ # cat /host/usr/share/kube-proxy/kube-proxy-config

โœ… Output

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy/kubeconfig
  qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 127.0.0.1:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "iptables"
nodePortAddresses: null
oomScoreAdj: -998
portRange: ""
udpIdleTimeout: 250ms
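
If I read the `conntrack` fields above correctly, kube-proxy sizes the node's nf_conntrack table as `maxPerCore` scaled by the CPU count, floored at `min`. A minimal sketch of that arithmetic (the `conntrack_max` helper is illustrative, not part of kube-proxy):

```shell
# Effective conntrack table size, per my reading of the maxPerCore/min
# semantics in the config above: max(maxPerCore * cores, min).
conntrack_max() {
  cores=$1 max_per_core=$2 min=$3
  scaled=$((max_per_core * cores))
  if [ "$scaled" -gt "$min" ]; then echo "$scaled"; else echo "$min"; fi
}

conntrack_max 2 32768 131072   # 2 vCPUs: 65536 < 131072, so the floor wins; prints 131072
```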

12. Check the CoreDNS configuration file

/ # cat /host/etc/coredns/Corefile

โœ… Output

.:53 {
    bind 172.20.0.10
    hosts {
        172.20.0.10 kube-dns.kube-system.svc.cluster.local
        fallthrough
    }
    log
    errors
    health  localhost:8080
    ready   localhost:8181
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        kubeconfig /etc/kubernetes/coredns/kubeconfig
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus localhost:9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}

13. Check the ipamd kubeconfig

/ # cat /host/etc/kubernetes/ipamd/kubeconfig

โœ… Output

---
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: "/etc/kubernetes/pki/ca.crt"
    server: "https://03DFBBF640E92C148A3EFF53888FF939.gr7.ap-northeast-2.eks.amazonaws.com"
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    as: eks-auto:ipamd
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: "/usr/bin/aws-iam-authenticator"
      args:
      - token
      - "-i"
      - "automode-cluster"
      - "--region"
      - "ap-northeast-2"
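
The `exec` stanza above follows the Kubernetes client-go credential plugin protocol: the client runs aws-iam-authenticator, which prints an ExecCredential on stdout, and the token in its `status` is then sent as a bearer token. As I read it, the `as: eks-auto:ipamd` field additionally makes the client send an impersonation header, so it authenticates with the node's identity but acts as eks-auto:ipamd. The expected shape of the plugin's output (values elided, illustrative only):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "expirationTimestamp": "…",
    "token": "k8s-aws-v1.…"
  }
}
```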

14. Check the eks-node-monitoring-agent kubeconfig

/ # cat /host/etc/kubernetes/eks-node-monitoring-agent/kubeconfig

โœ… Output

---
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: "/etc/kubernetes/pki/ca.crt"
    server: "https://03DFBBF640E92C148A3EFF53888FF939.gr7.ap-northeast-2.eks.amazonaws.com"
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    as: eks-auto:node-monitoring-agent
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: "/usr/bin/aws-iam-authenticator"
      args:
      - token
      - "-i"
      - "automode-cluster"
      - "--region"
      - "ap-northeast-2"

15. Test writing to the host file system

/ # echo "hello" > /host/home/1.txt

โœ… Output

sh: can't create /host/home/1.txt: Read-only file system
  • Confirms that the host's root file system is mounted read-only
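
Rather than relying on the error text, writability of a mount can be probed directly. A minimal sketch, assuming Linux (`is_writable` is an illustrative helper; read-only mounts and pseudo-filesystems such as /proc refuse file creation even for root):

```shell
# Probe whether a directory accepts file creation, cleaning up after itself.
is_writable() {
  d=$1
  t="$d/.wtest.$$"
  if ( : > "$t" ) 2>/dev/null; then rm -f "$t"; echo yes; else echo no; fi
}

is_writable /proc   # prints: no
```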

16. Attempt chroot (fails)

/ # chroot /host /bin/sh

โœ… Output

chroot: can't execute '/bin/sh': No such file or directory
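
This fails because, as I understand Bottlerocket's design, the host image intentionally ships no shell: `/host/bin` is just a symlink into the sysroot (see the `ls -l /host` output above), so chroot has nothing to execute, and nsenter into PID 1's mount namespace is used instead. A sketch for checking a chroot target before trying (`has_chroot_shell` is an illustrative helper):

```shell
# Look for an executable shell under a prospective chroot root; print the
# first match, or fail with a message on stderr.
has_chroot_shell() {
  root=$1
  for s in bin/sh usr/bin/sh usr/bin/bash; do
    if [ -x "$root/$s" ]; then echo "$root/$s"; return 0; fi
  done
  echo "no shell under $root" >&2
  return 1
}
```

On this node, `has_chroot_shell /host` would fail, matching the chroot error above.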

17. Monitor kubelet logs in real time

/ # nsenter -t 1 -m journalctl -f -u kubelet

โœ… Output

Mar 22 08:28:49 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:28:49.363410    1189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c06495-7379-46aa-9063-012ef0744c2b-kube-api-access-h4xb2" (OuterVolumeSpecName: "kube-api-access-h4xb2") pod "39c06495-7379-46aa-9063-012ef0744c2b" (UID: "39c06495-7379-46aa-9063-012ef0744c2b"). InnerVolumeSpecName "kube-api-access-h4xb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 22 08:28:49 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:28:49.461640    1189 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h4xb2\" (UniqueName: \"kubernetes.io/projected/39c06495-7379-46aa-9063-012ef0744c2b-kube-api-access-h4xb2\") on node \"i-08a7de4e203ea03b7\" DevicePath \"\""
Mar 22 08:28:49 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:28:49.461671    1189 reconciler_common.go:288] "Volume detached for volume \"host-root\" (UniqueName: \"kubernetes.io/host-path/39c06495-7379-46aa-9063-012ef0744c2b-host-root\") on node \"i-08a7de4e203ea03b7\" DevicePath \"\""
Mar 22 08:28:50 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:28:50.075996    1189 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6d60ba4c6d5f6fd7381fd3dae85ccff8874ef06477bad3eb24990799ca6d4a"
Mar 22 08:29:21 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:29:21.028626    1189 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c06495-7379-46aa-9063-012ef0744c2b" path="/var/lib/kubelet/pods/39c06495-7379-46aa-9063-012ef0744c2b/volumes"
Mar 22 08:30:01 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:30:01.329829    1189 scope.go:117] "RemoveContainer" containerID="f3f12a5f649077a863415516ada264c023b4baefc48d1fa3897dd66cba77a33a"
Mar 22 08:31:05 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: E0322 08:31:05.065254    1189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39c06495-7379-46aa-9063-012ef0744c2b" containerName="debugger"
Mar 22 08:31:05 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:31:05.065294    1189 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c06495-7379-46aa-9063-012ef0744c2b" containerName="debugger"
Mar 22 08:31:05 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:31:05.125603    1189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4963e6f9-73de-49fa-86df-1a9d3f982aad-hostroot\") pod \"root-shell\" (UID: \"4963e6f9-73de-49fa-86df-1a9d3f982aad\") " pod="kube-system/root-shell"
Mar 22 08:31:05 ip-10-20-21-118.ap-northeast-2.compute.internal kubelet[1189]: I0322 08:31:05.125642    1189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjsp\" (UniqueName: \"kubernetes.io/projected/4963e6f9-73de-49fa-86df-1a9d3f982aad-kube-api-access-bmjsp\") pod \"root-shell\" (UID: \"4963e6f9-73de-49fa-86df-1a9d3f982aad\") " pod="kube-system/root-shell"
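
Journal lines like these can be narrowed to the pods involved with a grep/cut pipeline keyed on the `podUID="…"` fields visible in the output above; a small sketch with a sample line inlined (`pod_uids` is an illustrative helper):

```shell
# Extract the distinct pod UIDs mentioned in kubelet journal lines.
pod_uids() {
  grep -o 'podUID="[^"]*"' | cut -d'"' -f2 | sort -u
}

echo 'kubelet[1189]: ... removing state" podUID="39c06495-7379-46aa-9063-012ef0744c2b" containerName="debugger"' \
  | pod_uids
# prints: 39c06495-7379-46aa-9063-012ef0744c2b
```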

18. Check host network interface information

/ # nsenter -t 1 -m ip addr

โœ… Output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:22:58:e9:2f:29 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname ens5
    inet 10.20.21.118/20 metric 1024 brd 10.20.31.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::422:58ff:fee9:2f29/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
3: pod-id-link0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether c2:bf:fe:b4:6c:4e brd ff:ff:ff:ff:ff:ff
    inet 169.254.170.23/32 scope global pod-id-link0
       valid_lft forever preferred_lft forever
    inet6 fd00:ec2::23/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::c0bf:feff:feb4:6c4e/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
4: coredns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 82:0a:7b:c8:3a:cf brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.10/32 scope global coredns
       valid_lft forever preferred_lft forever
    inet6 fe80::800a:7bff:fec8:3acf/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
5: eni74051a80866@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    link/ether a6:40:e1:66:72:11 brd ff:ff:ff:ff:ff:ff link-netns cni-88e4b37e-199b-643a-f44c-027047f65acf
    inet6 fe80::a440:e1ff:fe66:7211/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
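
The same output can be reduced to an interface-to-IPv4 table: on `inet` lines, the address is the second field and the interface label is the last. An awk sketch with sample lines inlined (`inet_table` is an illustrative helper):

```shell
# Reduce `ip addr` output to "interface address" pairs.
inet_table() {
  awk '$1 == "inet" {print $NF, $2}'
}

printf '%s\n' \
  '    inet 127.0.0.1/8 scope host lo' \
  '    inet 10.20.21.118/20 metric 1024 brd 10.20.31.255 scope global eth0' \
  '    inet 172.20.0.10/32 scope global coredns' \
  | inet_table
```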

19. Check host process information

/ # nsenter -t 1 -m ps -ef

โœ… Output

UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 07:03 ?        00:00:13 /sbin/init systemd.log_target=journal-or-kmsg systemd.log_color=0 systemd.show_status=true
root           2       0  0 07:03 ?        00:00:00 [kthreadd]
root           3       2  0 07:03 ?        00:00:00 [rcu_gp]
root           4       2  0 07:03 ?        00:00:00 [rcu_par_gp]
root           5       2  0 07:03 ?        00:00:00 [slub_flushwq]
root           6       2  0 07:03 ?        00:00:00 [netns]
root           8       2  0 07:03 ?        00:00:00 [kworker/0:0H-kblockd]
root          10       2  0 07:03 ?        00:00:00 [mm_percpu_wq]
root          11       2  0 07:03 ?        00:00:00 [rcu_tasks_kthread]
root          12       2  0 07:03 ?        00:00:00 [rcu_tasks_rude_kthread]
root          13       2  0 07:03 ?        00:00:00 [rcu_tasks_trace_kthread]
root          14       2  0 07:03 ?        00:00:00 [ksoftirqd/0]
root          15       2  0 07:03 ?        00:00:01 [rcu_preempt]
root          16       2  0 07:03 ?        00:00:00 [migration/0]
root          18       2  0 07:03 ?        00:00:00 [cpuhp/0]
root          19       2  0 07:03 ?        00:00:00 [cpuhp/1]
root          20       2  0 07:03 ?        00:00:00 [migration/1]
root          21       2  0 07:03 ?        00:00:02 [ksoftirqd/1]
root          23       2  0 07:03 ?        00:00:00 [kworker/1:0H-events_highpri]
root          26       2  0 07:03 ?        00:00:00 [kdevtmpfs]
root          27       2  0 07:03 ?        00:00:00 [inet_frag_wq]
root          28       2  0 07:03 ?        00:00:00 [kauditd]
root          29       2  0 07:03 ?        00:00:00 [khungtaskd]
root          30       2  0 07:03 ?        00:00:00 [oom_reaper]
root          32       2  0 07:03 ?        00:00:00 [writeback]
root          33       2  0 07:03 ?        00:00:00 [kcompactd0]
root          34       2  0 07:03 ?        00:00:00 [khugepaged]
root          35       2  0 07:03 ?        00:00:00 [cryptd]
root          36       2  0 07:03 ?        00:00:00 [kintegrityd]
root          37       2  0 07:03 ?        00:00:00 [kblockd]
root          38       2  0 07:03 ?        00:00:00 [blkcg_punt_bio]
root          40       2  0 07:03 ?        00:00:00 [tpm_dev_wq]
root          41       2  0 07:03 ?        00:00:00 [ata_sff]
root          42       2  0 07:03 ?        00:00:00 [md]
root          43       2  0 07:03 ?        00:00:00 [edac-poller]
root          44       2  0 07:03 ?        00:00:00 [watchdogd]
root          45       2  0 07:03 ?        00:00:00 [kworker/1:1H-kblockd]
root          68       2  0 07:03 ?        00:00:00 [kswapd0]
root          71       2  0 07:03 ?        00:00:00 [xfsalloc]
root          72       2  0 07:03 ?        00:00:00 [xfs_mru_cache]
root          75       2  0 07:03 ?        00:00:00 [kthrotld]
root         138       2  0 07:03 ?        00:00:00 [nvme-wq]
root         140       2  0 07:03 ?        00:00:00 [nvme-reset-wq]
root         143       2  0 07:03 ?        00:00:00 [nvme-delete-wq]
root         149       2  0 07:03 ?        00:00:00 [dm_bufio_cache]
root         186       2  0 07:03 ?        00:00:00 [mld]
root         188       2  0 07:03 ?        00:00:00 [ipv6_addrconf]
root         199       2  0 07:03 ?        00:00:00 [kstrp]
root         219       2  0 07:03 ?        00:00:00 [zswap-shrink]
root         340       2  0 07:03 ?        00:00:00 [kdmflush/252:0]
root         341       2  0 07:03 ?        00:00:00 [kverityd]
root         346       2  0 07:03 ?        00:00:00 [ext4-rsv-conver]
root         348       2  0 07:03 ?        00:00:00 [kworker/0:1H-kblockd]
root         382       1  0 07:03 ?        00:00:01 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-journald
root         407       2  0 07:03 ?        00:00:00 [ext4-rsv-conver]
root         418       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-udevd
root         867       2  0 07:03 ?        00:00:00 [ena]
root         913       2  0 07:03 ?        00:00:00 [xfs-buf/nvme1n1]
root         914       2  0 07:03 ?        00:00:00 [xfs-conv/nvme1n]
root         915       2  0 07:03 ?        00:00:00 [xfs-reclaim/nvm]
root         916       2  0 07:03 ?        00:00:00 [xfs-blockgc/nvm]
root         917       2  0 07:03 ?        00:00:00 [xfs-inodegc/nvm]
root         918       2  0 07:03 ?        00:00:00 [xfs-log/nvme1n1]
root         919       2  0 07:03 ?        00:00:00 [xfs-cil/nvme1n1]
root         920       2  0 07:03 ?        00:00:00 [xfsaild/nvme1n1p1]
root         931       2  0 07:03 ?        00:00:00 [jbd2/nvme0n1p7-8]
root         932       2  0 07:03 ?        00:00:00 [ext4-rsv-conver]
dbus         995       1  0 07:03 ?        00:00:00 /usr/bin/dbus-broker-launch --scope system
dbus         996     995  0 07:03 ?        00:00:00 dbus-broker --log 4 --controller 9 --machine-id ec28607a9117855261a9c8c1cef1cbf2 --max-bytes 5
root        1001       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-logind
systemd+    1004       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-networkd
root        1017       1  0 07:03 ?        00:00:00 /usr/bin/apiserver --datastore-path /var/lib/bottlerocket/datastore/current --socket-gid 274
systemd+    1094       1  0 07:03 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/systemd-resolved
chrony      1102       1  0 07:03 ?        00:00:00 /usr/sbin/chronyd -d -F -1
chrony      1105    1102  0 07:03 ?        00:00:00 /usr/sbin/chronyd -d -F -1
root        1112       1  0 07:03 ?        00:00:02 /usr/bin/aws-network-policy-agent --kubeconfig /etc/kubernetes/aws-network-policy-agent/kubeco
root        1113       1  0 07:03 ?        00:00:22 /usr/bin/containerd
root        1114       1  0 07:03 ?        00:00:28 /usr/bin/eks-node-monitoring-agent --kubeconfig /etc/kubernetes/eks-node-monitoring-agent/kube
root        1138       1  0 07:03 ?        00:00:04 /usr/bin/eks-healthchecker
root        1140       1  0 07:03 ?        00:00:01 /usr/bin/kube-proxy --hostname-override i-08a7de4e203ea03b7 --config=/usr/share/kube-proxy/kub
root        1181       1  0 07:03 ?        00:00:00 /usr/bin/eks-pod-identity-agent server --metrics-address 127.0.0.1 --cluster-name automode-clu
root        1189       1  0 07:03 ?        00:00:46 /usr/bin/kubelet --cloud-provider external --kubeconfig /etc/kubernetes/kubelet/kubeconfig --c
root        1254       1  0 07:04 ?        00:00:00 /usr/bin/csi-node-driver-registrar --csi-address=/var/lib/kubelet/plugins/ebs.csi.eks.amazonaw
root        1269       1  0 07:04 ?        00:00:00 /usr/bin/eks-ebs-csi-driver node --kubeconfig /etc/kubernetes/eks-ebs-csi-driver/kubeconfig --
root        1622       1  0 07:04 ?        00:00:03 /usr/bin/ipamd --kubeconfig /etc/kubernetes/ipamd/kubeconfig --metrics-bind-addr 127.0.0.1:817
coredns     1655       1  0 07:04 ?        00:00:12 /usr/bin/coredns -conf=/etc/coredns/Corefile
root        1801       1  0 07:04 ?        00:00:01 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 
root        1828    1801  0 07:04 ?        00:00:00 /pause
1000        1878    1801  0 07:04 ?        00:00:16 python3 -m kube_ops_view
root        4600       2  0 07:58 ?        00:00:00 [kworker/u5:1-kverityd]
root        5832       2  0 08:13 ?        00:00:00 [kworker/1:5-cgroup_destroy]
root        6163       2  0 08:20 ?        00:00:00 [kworker/u5:2-kverityd]
root        6656       2  0 08:28 ?        00:00:00 [kworker/1:0-events_power_efficient]
root        6791       1  0 08:31 ?        00:00:00 /x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 
root        6818    6791  0 08:31 ?        00:00:00 /pause
root        6872    6791  0 08:31 ?        00:00:00 /bin/cat
root        6986       2  0 08:33 ?        00:00:00 [kworker/0:0-events]
root        7214    6791  0 08:38 ?        00:00:00 sh
root        7304       2  0 08:39 ?        00:00:00 [kworker/u4:1-events_unbound]
root        7563       2  0 08:45 ?        00:00:00 [tls-strp]
root        7583       2  0 08:45 ?        00:00:00 [kworker/u4:0-flush-259:0]
root        8053       2  0 08:56 ?        00:00:00 [kworker/0:2-events]
root        8231       2  0 09:00 ?        00:00:00 [kworker/u4:2-flush-259:0]
root        8366       2  0 09:03 ?        00:00:00 [kworker/u4:3-events_unbound]
root        8367       2  0 09:03 ?        00:00:00 [kworker/u4:4]
root        8431       2  0 09:05 ?        00:00:00 [kworker/0:1-events]
root        8565    7214  0 09:08 ?        00:00:00 ps -ef

20. List the host's /proc entries

/ # nsenter -t 1 -m ls -l /proc

โœ… Output

total 0
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 10
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1001
dr-xr-xr-x.  9 systemd-network systemd-network               0 Mar 22 07:03 1004
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1017
dr-xr-xr-x.  9 systemd-resolve systemd-resolve               0 Mar 22 07:03 1094
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 11
dr-xr-xr-x.  9 chrony          chrony                        0 Mar 22 07:03 1102
dr-xr-xr-x.  9 chrony          chrony                        0 Mar 22 08:22 1105
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1112
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1113
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1114
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1138
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1140
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1181
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1189
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 12
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1254
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 1269
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 13
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 138
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 14
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 140
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 143
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 149
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 15
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 16
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:04 1622
dr-xr-xr-x.  9 coredns         coredns                       0 Mar 22 07:04 1655
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 18
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:04 1801
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:04 1828
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 186
dr-xr-xr-x.  9            1000 root                          0 Mar 22 07:04 1878
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 188
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 19
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 199
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 2
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 20
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 21
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 219
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 23
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 26
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 27
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 28
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 29
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 3
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 30
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 32
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 33
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 34
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 340
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 341
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 346
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 348
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 35
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 36
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 37
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 38
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 382
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 4
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 40
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 407
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 41
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 418
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 42
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 43
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 44
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 45
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:59 4600
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 5
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:13 5832
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 6
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:20 6163
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:28 6656
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:31 6791
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 68
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:31 6818
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:31 6872
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:33 6986
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 71
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 72
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:38 7214
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:39 7304
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 75
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:45 7563
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:45 7583
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 8
dr-xr-xr-x.  9 root            root                          0 Mar 22 08:56 8053
dr-xr-xr-x.  9 root            root                          0 Mar 22 09:00 8231
dr-xr-xr-x.  9 root            root                          0 Mar 22 09:04 8366
dr-xr-xr-x.  9 root            root                          0 Mar 22 09:04 8367
dr-xr-xr-x.  9 root            root                          0 Mar 22 09:05 8431
dr-xr-xr-x.  9 root            root                          0 Mar 22 09:09 8608
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 867
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 913
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 914
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 915
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 916
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 917
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 918
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 919
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 920
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 931
dr-xr-xr-x.  9 root            root                          0 Mar 22 07:03 932
dr-xr-xr-x.  9 dbus            dbus                          0 Mar 22 07:03 995
dr-xr-xr-x.  9 dbus            dbus                          0 Mar 22 08:22 996
dr-xr-xr-x.  3 root            root                          0 Mar 22 08:24 acpi
-r--r--r--.  1 root            root                          0 Mar 22 07:03 bootconfig
-r--r--r--.  1 root            root                          0 Mar 22 08:24 buddyinfo
dr-xr-xr-x.  4 root            root                          0 Mar 22 08:24 bus
-r--r--r--.  1 root            root                          0 Mar 22 07:03 cgroups
-r--r--r--.  1 root            root                          0 Mar 22 07:03 cmdline
-r--r--r--.  1 root            root                      35525 Mar 22 08:24 config.gz
-r--r--r--.  1 root            root                          0 Mar 22 08:24 consoles
-r--r--r--.  1 root            root                          0 Mar 22 07:03 cpuinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 crypto
-r--r--r--.  1 root            root                          0 Mar 22 07:03 devices
-r--r--r--.  1 root            root                          0 Mar 22 07:03 diskstats
-r--r--r--.  1 root            root                          0 Mar 22 08:24 dma
dr-xr-xr-x.  3 root            root                          0 Mar 22 08:24 driver
dr-xr-xr-x.  3 root            root                          0 Mar 22 08:24 dynamic_debug
-r--r--r--.  1 root            root                          0 Mar 22 08:24 execdomains
-r--r--r--.  1 root            root                          0 Mar 22 08:24 fb
-r--r--r--.  1 root            root                          0 Mar 22 08:24 filesystems
dr-xr-xr-x.  6 root            root                          0 Mar 22 08:24 fs
-r--r--r--.  1 root            root                          0 Mar 22 08:24 interrupts
-r--r--r--.  1 root            root                          0 Mar 22 08:24 iomem
-r--r--r--.  1 root            root                          0 Mar 22 08:24 ioports
dr-xr-xr-x. 28 root            root                          0 Mar 22 08:24 irq
-r--r--r--.  1 root            root                          0 Mar 22 08:24 kallsyms
-r--------.  1 root            root            140737471594496 Mar 22 07:03 kcore
-r--r--r--.  1 root            root                          0 Mar 22 08:24 key-users
-r--r--r--.  1 root            root                          0 Mar 22 08:24 keys
-r--------.  1 root            root                          0 Mar 22 08:24 kmsg
-r--------.  1 root            root                          0 Mar 22 08:24 kpagecgroup
-r--------.  1 root            root                          0 Mar 22 08:24 kpagecount
-r--------.  1 root            root                          0 Mar 22 08:24 kpageflags
-rw-r--r--.  1 root            root                          0 Mar 22 08:24 latency_stats
-r--r--r--.  1 root            root                          0 Mar 22 07:03 loadavg
-r--r--r--.  1 root            root                          0 Mar 22 08:24 locks
-r--r--r--.  1 root            root                          0 Mar 22 08:24 mdstat
-r--r--r--.  1 root            root                          0 Mar 22 07:03 meminfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 misc
-r--r--r--.  1 root            root                          0 Mar 22 08:24 modules
lrwxrwxrwx.  1 root            root                         11 Mar 22 07:03 mounts -> self/mounts
-rw-r--r--.  1 root            root                          0 Mar 22 08:24 mtrr
lrwxrwxrwx.  1 root            root                          8 Mar 22 07:03 net -> self/net
-r--------.  1 root            root                          0 Mar 22 08:24 pagetypeinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 partitions
dr-xr-xr-x.  5 root            root                          0 Mar 22 08:24 pressure
-r--r--r--.  1 root            root                          0 Mar 22 08:24 schedstat
dr-xr-xr-x.  4 root            root                          0 Mar 22 08:24 scsi
lrwxrwxrwx.  1 root            root                          0 Mar 22 07:03 self -> 8608
-r--------.  1 root            root                          0 Mar 22 08:24 slabinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 softirqs
-r--r--r--.  1 root            root                          0 Mar 22 07:03 stat
-r--r--r--.  1 root            root                          0 Mar 22 07:03 swaps
dr-xr-xr-x.  1 root            root                          0 Mar 22 07:03 sys
--w-------.  1 root            root                          0 Mar 22 08:24 sysrq-trigger
dr-xr-xr-x.  5 root            root                          0 Mar 22 08:24 sysvipc
lrwxrwxrwx.  1 root            root                          0 Mar 22 07:03 thread-self -> 8608/task/8608
-r--------.  1 root            root                          0 Mar 22 08:24 timer_list
dr-xr-xr-x.  6 root            root                          0 Mar 22 08:22 tty
-r--r--r--.  1 root            root                          0 Mar 22 08:22 uptime
-r--r--r--.  1 root            root                          0 Mar 22 08:24 version
-r--------.  1 root            root                          0 Mar 22 08:24 vmallocinfo
-r--r--r--.  1 root            root                          0 Mar 22 08:24 vmstat
-r--r--r--.  1 root            root                          0 Mar 22 08:24 zoneinfo

21. ํŒŒ์ผ ์‹œ์Šคํ…œ ์‚ฌ์šฉ๋Ÿ‰ ํ™•์ธ

1
/ # nsenter -t 1 -m df -hT

โœ… Output

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/root      ext4      2.1G  1.1G  902M  56% /
devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     763M  704K  762M   1% /run
tmpfs          tmpfs     1.9G  496K  1.9G   1% /etc
tmpfs          tmpfs     1.9G  4.0K  1.9G   1% /etc/cni
tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp
tmpfs          tmpfs     1.9G     0  1.9G   0% /etc/kubernetes/pki/private
tmpfs          tmpfs     1.9G   12K  1.9G   1% /etc/containerd
tmpfs          tmpfs     1.9G  4.0K  1.9G   1% /root/.aws
tmpfs          tmpfs     1.9G     0  1.9G   0% /run/netdog
/dev/nvme0n1p3 ext4       44M   22M   20M  53% /boot
/dev/nvme1n1p1 xfs        80G  2.2G   78G   3% /local
/dev/nvme0n1p7 ext4       80M  1.3M   72M   2% /var/lib/bottlerocket
overlay        overlay    80G  2.2G   78G   3% /opt/cni
overlay        overlay    80G  2.2G   78G   3% /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/modules
overlay        overlay    80G  2.2G   78G   3% /opt/csi
/dev/loop0     squashfs   13M   13M     0 100% /var/lib/kernel-devel/.overlay/lower
/dev/loop1     squashfs  512K  512K     0 100% /x86_64-bottlerocket-linux-gnu/sys-root/usr/share/licenses
overlay        overlay    80G  2.2G   78G   3% /x86_64-bottlerocket-linux-gnu/sys-root/usr/src/kernels
tmpfs          tmpfs     3.1G   12K  3.1G   1% /var/lib/kubelet/pods/26d0b768-1309-4a72-9cbb-5bfb6c9e0353/volumes/kubernetes.io~projected/kube-api-access-7ftbv
shm            tmpfs      64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67/shm
overlay        overlay    80G  2.2G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67/rootfs
overlay        overlay    80G  2.2G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/fadf90102e0106f26fd91f6b2318d680169e2a071df0c58bb5d221aa572bbc81/rootfs
tmpfs          tmpfs     3.1G   12K  3.1G   1% /var/lib/kubelet/pods/4963e6f9-73de-49fa-86df-1a9d3f982aad/volumes/kubernetes.io~projected/kube-api-access-bmjsp
overlay        overlay    80G  2.2G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/94f119878d50c69e83d32cbfa553902653e488f7c46c13f3c133620ca67966b9/rootfs
overlay        overlay    80G  2.2G   78G   3% /run/containerd/io.containerd.runtime.v2.task/k8s.io/5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3/rootfs
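The `df -hT` capture above is long; a small awk filter makes it easier to spot filesystems nearing capacity. A minimal sketch — the 50% threshold and the embedded sample rows (copied from the output above) are illustrative; in practice pipe `nsenter -t 1 -m df -hT` straight into the filter:

```shell
# Flag filesystems at or above a usage threshold in `df -hT` output.
# NR > 1 skips the header row; column 6 is Use%, column 7 the mount point.
flag_high_usage() {
  awk -v limit="$1" 'NR > 1 { gsub(/%/, "", $6); if ($6 + 0 >= limit) print $1, $7, $6 "%" }'
}

# Sample rows copied from the capture above (illustrative subset).
df_sample='Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/root      ext4      2.1G  1.1G  902M  56% /
/dev/nvme0n1p3 ext4       44M   22M   20M  53% /boot
tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp'

printf '%s\n' "$df_sample" | flag_high_usage 50
```

On the sample rows this prints only `/dev/root` and `/boot`, the two entries at or above 50%.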

22. Check containerd namespace and containers

/ # nsenter -t 1 -m ctr -n k8s.io containers ls

โœ… Output

CONTAINER                                                           IMAGE                                     RUNTIME                  
05ddc3e2d73d2f6015a53db1fda28777568180726e409c845612a7e4e7563c62    localhost/kubernetes/pause:0.1.0          io.containerd.runc.v2    
5940e511977dc4c19603b22a41bcf3fb64a809eba3cfb048d8bb68a8edaf8ce3    docker.io/library/alpine:3                io.containerd.runc.v2    
94f119878d50c69e83d32cbfa553902653e488f7c46c13f3c133620ca67966b9    localhost/kubernetes/pause:0.1.0          io.containerd.runc.v2    
ac87f2cdee369128c347da9bf763b15cbddedbdedc1a9c1125d0bd3bfb64fe5a    docker.io/hjacobs/kube-ops-view:20.4.0    io.containerd.runc.v2    
e9f31e10f093d6fc3090a71654c426807637a9705a04de6e866d33e2fe8d9b67    localhost/kubernetes/pause:0.1.0          io.containerd.runc.v2    
fadf90102e0106f26fd91f6b2318d680169e2a071df0c58bb5d221aa572bbc81    docker.io/hjacobs/kube-ops-view:20.4.0    io.containerd.runc.v2   
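A quick way to read the listing above is to summarize the image column with awk. A sketch over sample rows copied from the output (in practice pipe `nsenter -t 1 -m ctr -n k8s.io containers ls` straight in):

```shell
# Count containers per image in `ctr containers ls` output.
# NR > 1 skips the header row; column 2 is the image reference.
count_by_image() {
  awk 'NR > 1 { n[$2]++ } END { for (img in n) print n[img], img }' | sort -rn
}

# Sample rows copied from the capture above (container IDs shortened).
ctr_sample='CONTAINER IMAGE RUNTIME
05ddc3e2 localhost/kubernetes/pause:0.1.0 io.containerd.runc.v2
5940e511 docker.io/library/alpine:3 io.containerd.runc.v2
94f11987 localhost/kubernetes/pause:0.1.0 io.containerd.runc.v2
ac87f2cd docker.io/hjacobs/kube-ops-view:20.4.0 io.containerd.runc.v2
e9f31e10 localhost/kubernetes/pause:0.1.0 io.containerd.runc.v2
fadf9010 docker.io/hjacobs/kube-ops-view:20.4.0 io.containerd.runc.v2'

printf '%s\n' "$ctr_sample" | count_by_image
```

This shows three pause sandboxes, two kube-ops-view containers, and one alpine container — matching the six rows above.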

23. Delete pod: remove root-shell

kubectl delete pod -n kube-system root-shell

# Result
pod "root-shell" deleted

๐Ÿ“ก [Networking] Trying out AWS NLB with a Service(LoadBalancer)

1. Deploy and create resources

kubectl apply -f https://github.com/aws-containers/retail-store-sample-app/releases/latest/download/kubernetes.yaml

# Result
serviceaccount/catalog created
secret/catalog-db created
configmap/catalog created
service/catalog-mysql created
service/catalog created
deployment.apps/catalog created
statefulset.apps/catalog-mysql created
serviceaccount/carts created
configmap/carts created
service/carts-dynamodb created
service/carts created
deployment.apps/carts created
deployment.apps/carts-dynamodb created
serviceaccount/orders created
secret/orders-rabbitmq created
secret/orders-db created
configmap/orders created
service/orders-postgresql created
service/orders-rabbitmq created
service/orders created
deployment.apps/orders created
statefulset.apps/orders-postgresql created
statefulset.apps/orders-rabbitmq created
serviceaccount/checkout created
configmap/checkout created
service/checkout-redis created
service/checkout created
deployment.apps/checkout created
deployment.apps/checkout-redis created
serviceaccount/ui created
configmap/ui created
service/ui created
deployment.apps/ui created

2. Check UI service details

kubectl get svc ui -o yaml | kubectl neat

โœ… Output

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: service
    app.kubernetes.io/instance: ui
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ui
    app.kubernetes.io/owner: retail-store-sample
    helm.sh/chart: ui-1.0.2
  name: ui
  namespace: default
spec:
  clusterIP: 172.20.20.37
  clusterIPs:
  - 172.20.20.37
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerClass: eks.amazonaws.com/nlb
  ports:
  - name: http
    nodePort: 30605
    port: 80
    targetPort: http
  selector:
    app.kubernetes.io/component: service
    app.kubernetes.io/instance: ui
    app.kubernetes.io/name: ui
    app.kubernetes.io/owner: retail-store-sample
  type: LoadBalancer

3. Check Target Group Binding information

(1) List Target Group Bindings

kubectl get targetgroupbindings.eks.amazonaws.com

โœ… Output

NAME                        SERVICE-NAME   SERVICE-PORT   TARGET-TYPE   AGE
k8s-default-ui-7d9dde25ef   ui             80             ip            89s

(2) Inspect Target Group Binding details (YAML)

kubectl get targetgroupbindings.eks.amazonaws.com -o yaml

โœ… Output

apiVersion: v1
items:
- apiVersion: eks.amazonaws.com/v1
  kind: TargetGroupBinding
  metadata:
    creationTimestamp: "2025-03-22T09:18:07Z"
    finalizers:
    - elbv2.eks.amazonaws.com/resources
    generation: 1
    labels:
      service.eks.amazonaws.com/stack-name: ui
      service.eks.amazonaws.com/stack-namespace: default
    name: k8s-default-ui-7d9dde25ef
    namespace: default
    resourceVersion: "78657"
    uid: 2d6f568e-f396-4314-9731-ef812402829e
  spec:
    networking:
      ingress:
      - from:
        - securityGroup:
            groupID: sg-0f23e35f4e9eb02b2
        ports:
        - port: http
          protocol: TCP
    serviceRef:
      name: ui
      port: 80
    targetGroupARN: arn:aws:elasticloadbalancing:ap-northeast-2:378102432899:targetgroup/k8s-default-ui-7d9dde25ef/8ce16a9702a8d2ec
    targetType: ip
  status:
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
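The `targetGroupARN` in the binding above can be fed straight to the AWS CLI to check target registration. A hedged sketch — the `describe-target-health` call needs live AWS credentials, so it is left commented out; the name extraction runs anywhere:

```shell
# Extract the target group name from the ARN shown in the binding above.
tg_arn='arn:aws:elasticloadbalancing:ap-northeast-2:378102432899:targetgroup/k8s-default-ui-7d9dde25ef/8ce16a9702a8d2ec'
tg_name=${tg_arn#*:targetgroup/}   # strip everything up to "targetgroup/"
tg_name=${tg_name%%/*}             # drop the trailing resource id
echo "$tg_name"                    # k8s-default-ui-7d9dde25ef

# With AWS credentials configured, this would show whether the pod IP
# is registered and healthy behind the NLB:
# aws elbv2 describe-target-health --target-group-arn "$tg_arn"
```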

4. Check service and endpoint status

kubectl get svc,ep ui

โœ… Output

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
service/ui   LoadBalancer   172.20.20.37   k8s-default-ui-92128123d8-950660d6af6a8993.elb.ap-northeast-2.amazonaws.com   80:30605/TCP   3m9s

NAME           ENDPOINTS           AGE
endpoints/ui   10.20.28.136:8080   3m10s
  • ์ดˆ๊ธฐ์—๋Š” ์ƒ˜ํ”Œ ์•ฑ์— ๋Œ€ํ•œ ์ ‘๊ทผ์ด ๋ถˆ๊ฐ€๋Šฅํ•จ
  • ์ด๋Š” ๋ฐฐํฌ๋œ ์ƒ˜ํ”Œ ์•ฑ์ด EKS ์šฉ์œผ๋กœ ์„ค๊ณ„๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ž„

Image

5. Create a new NLB by changing the service annotation

kubectl annotate svc ui service.beta.kubernetes.io/aws-load-balancer-scheme=internet-facing
# Result
service/ui annotated
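The same change can be made declaratively instead of with `kubectl annotate` — a sketch of the relevant part of the Service manifest (only the annotation is new; the rest matches the spec shown earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    # Switches the NLB from the default internal scheme to internet-facing
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  loadBalancerClass: eks.amazonaws.com/nlb
```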

6. ๋ณ€๊ฒฝ๋œ ์„œ๋น„์Šค ์ •๋ณด ๋ฐ ์ ‘๊ทผ ํ™•์ธ

(1) UI ์„œ๋น„์Šค ๋ฐ ์—”๋“œํฌ์ธํŠธ ์ •๋ณด ์กฐํšŒ

1
kubectl get svc,ep ui

โœ… Output

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
service/ui   LoadBalancer   172.20.20.37   k8s-default-ui-06a3bb282e-551c9922332817d9.elb.ap-northeast-2.amazonaws.com   80:30605/TCP   9m15s

NAME           ENDPOINTS           AGE
endpoints/ui   10.20.28.136:8080   9m15s

(2) Check UI service configuration details

kubectl get svc ui -o yaml | kubectl neat

โœ… Output

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
  labels:
    app.kubernetes.io/component: service
    app.kubernetes.io/instance: ui
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ui
    app.kubernetes.io/owner: retail-store-sample
    helm.sh/chart: ui-1.0.2
  name: ui
  namespace: default
spec:
  clusterIP: 172.20.20.37
  clusterIPs:
  - 172.20.20.37
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerClass: eks.amazonaws.com/nlb
  ports:
  - name: http
    nodePort: 30605
    port: 80
    targetPort: http
  selector:
    app.kubernetes.io/component: service
    app.kubernetes.io/instance: ui
    app.kubernetes.io/name: ui
    app.kubernetes.io/owner: retail-store-sample
  type: LoadBalancer
  • http://k8s-default-ui-06a3bb282e-551c9922332817d9.elb.ap-northeast-2.amazonaws.com/ ์ ‘์†

Image


๐Ÿ—‘๏ธ (After completing the lab) Delete resources

1. Delete sample application resources

kubectl delete -f https://github.com/aws-containers/retail-store-sample-app/releases/latest/download/kubernetes.yaml

# Result
serviceaccount "catalog" deleted
secret "catalog-db" deleted
configmap "catalog" deleted
service "catalog-mysql" deleted
service "catalog" deleted
deployment.apps "catalog" deleted
statefulset.apps "catalog-mysql" deleted
serviceaccount "carts" deleted
configmap "carts" deleted
service "carts-dynamodb" deleted
service "carts" deleted
deployment.apps "carts" deleted
deployment.apps "carts-dynamodb" deleted
serviceaccount "orders" deleted
secret "orders-rabbitmq" deleted
secret "orders-db" deleted
configmap "orders" deleted
service "orders-postgresql" deleted
service "orders-rabbitmq" deleted
service "orders" deleted
deployment.apps "orders" deleted
statefulset.apps "orders-postgresql" deleted
statefulset.apps "orders-rabbitmq" deleted
serviceaccount "checkout" deleted
configmap "checkout" deleted
service "checkout-redis" deleted
service "checkout" deleted
deployment.apps "checkout" deleted
deployment.apps "checkout-redis" deleted
serviceaccount "ui" deleted
configmap "ui" deleted
service "ui" deleted
deployment.apps "ui" deleted

2. Delete helm release: remove kube-ops-view

helm uninstall -n kube-system kube-ops-view
# Result
release "kube-ops-view" uninstalled

3. ํ…Œ๋ผํผ ์ธํ”„๋ผ ์‚ญ์ œ

1
terraform destroy --auto-approve

โœ… Output

...
module.eks.aws_security_group_rule.node["ingress_cluster_6443_webhook"]: Destroying... [id=sgrule-4181448357]
module.eks.aws_security_group_rule.node["ingress_nodes_ephemeral"]: Destruction complete after 0s
module.eks.aws_cloudwatch_log_group.this[0]: Destroying... [id=/aws/eks/automode-cluster/cluster]
module.eks.aws_security_group_rule.cluster["ingress_nodes_443"]: Destruction complete after 0s
module.vpc.aws_subnet.private[2]: Destroying... [id=subnet-08dd817c3837e7eaa]
module.eks.aws_cloudwatch_log_group.this[0]: Destruction complete after 0s
module.eks.aws_security_group_rule.node["ingress_self_coredns_udp"]: Destroying... [id=sgrule-3978308737]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSLoadBalancingPolicy"]: Destruction complete after 1s
module.eks.aws_security_group_rule.node["ingress_self_coredns_tcp"]: Destroying... [id=sgrule-1233096558]
module.eks.aws_security_group_rule.node["ingress_cluster_4443_webhook"]: Destruction complete after 1s
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSComputePolicy"]: Destruction complete after 1s
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSBlockStoragePolicy"]: Destroying... [id=automode-cluster-cluster-20250322045920242000000001-20250322045922399700000004]
module.eks.aws_security_group_rule.node["ingress_cluster_kubelet"]: Destroying... [id=sgrule-712351154]
module.vpc.aws_subnet.private[2]: Destruction complete after 1s
module.eks.aws_iam_role.eks_auto[0]: Destroying... [id=automode-cluster-eks-auto-20250322045920243000000002]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]: Destruction complete after 1s
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSNetworkingPolicy"]: Destroying... [id=automode-cluster-cluster-20250322045920242000000001-20250322045922671700000007]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSBlockStoragePolicy"]: Destruction complete after 0s
module.eks.aws_security_group_rule.node["ingress_cluster_443"]: Destruction complete after 1s
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSNetworkingPolicy"]: Destruction complete after 0s
module.eks.aws_iam_role.this[0]: Destroying... [id=automode-cluster-cluster-20250322045920242000000001]
module.eks.aws_security_group_rule.node["ingress_cluster_8443_webhook"]: Destruction complete after 1s
module.eks.aws_security_group_rule.node["egress_all"]: Destruction complete after 2s
module.eks.aws_security_group_rule.node["ingress_cluster_6443_webhook"]: Destruction complete after 2s
module.eks.aws_iam_role.eks_auto[0]: Destruction complete after 1s
module.eks.aws_security_group_rule.node["ingress_self_coredns_udp"]: Destruction complete after 2s
module.eks.aws_iam_role.this[0]: Destruction complete after 1s
module.eks.aws_security_group_rule.node["ingress_self_coredns_tcp"]: Destruction complete after 1s
module.eks.aws_security_group_rule.node["ingress_cluster_kubelet"]: Destruction complete after 2s
module.eks.aws_security_group.cluster[0]: Destroying... [id=sg-022902e2839163258]
module.eks.aws_security_group.node[0]: Destroying... [id=sg-06695cb3eface4125]
module.eks.aws_security_group.cluster[0]: Destruction complete after 0s
module.eks.aws_security_group.node[0]: Destruction complete after 0s
module.vpc.aws_vpc.this[0]: Destroying... [id=vpc-084852690e9db1da8]
module.vpc.aws_vpc.this[0]: Destruction complete after 0s
Destroy complete! Resources: 61 destroyed.
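When running destroy from a script, it can help to assert on the final summary line rather than eyeballing the log. A small sketch that extracts the destroyed-resource count from captured output (the sample line is the one above):

```shell
# Pull the resource count out of terraform's "Destroy complete!" summary line.
destroyed_count() {
  sed -n 's/^Destroy complete! Resources: \([0-9]*\) destroyed\.$/\1/p'
}

echo 'Destroy complete! Resources: 61 destroyed.' | destroyed_count   # 61
```

An empty result means the summary line never appeared, i.e. the destroy did not finish cleanly.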

4. Delete kubeconfig file

rm -rf ~/.kube/config

This post is licensed under CC BY 4.0 by the author.