CI/CD Week 6 Notes

🔧 Deploy the kind mgmt k8s cluster + ingress-nginx + Argo CD

1. Create the kind mgmt cluster

kind create cluster --name mgmt --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  labels:
    ingress-ready: true
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  - containerPort: 30000
    hostPort: 30000
EOF

Creating cluster "mgmt" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-mgmt"
You can now use your cluster with:

kubectl cluster-info --context kind-mgmt

Have a nice day! 👋

2. Deploy the NGINX Ingress Controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

3. Enable the Ingress SSL passthrough option

kubectl get deployment ingress-nginx-controller -n ingress-nginx -o yaml \
| sed '/- --publish-status-address=localhost/a\
        - --enable-ssl-passthrough' | kubectl apply -f -

deployment.apps/ingress-nginx-controller configured
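The same effect can be had without the sed pipeline by appending the flag to the container args with a JSON patch. A minimal sketch, assuming the controller container sits at index 0 of the Deployment (true for the stock kind manifest):

kubectl -n ingress-nginx patch deployment ingress-nginx-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough"}]'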

4. Generate a self-signed certificate for Argo CD

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout argocd.example.com.key \
  -out argocd.example.com.crt \
  -subj "/CN=argocd.example.com/O=argocd"

5. Create the argocd namespace and TLS Secret

kubectl create ns argocd

namespace/argocd created
kubectl -n argocd create secret tls argocd-server-tls \
  --cert=argocd.example.com.crt \
  --key=argocd.example.com.key

secret/argocd-server-tls created

6. Install Argo CD with Helm

cat <<EOF > argocd-values.yaml
global:
  domain: argocd.example.com

server:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    tls: true
EOF
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd --version 9.0.5 -f argocd-values.yaml --namespace argocd

NAME: argocd
LAST DEPLOYED: Mon Nov 17 20:00:25 2025
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
In order to access the server UI you have the following options:

1. kubectl port-forward service/argocd-server -n argocd 8080:443

    and then open the browser on http://localhost:8080 and accept the certificate

2. enable ingress in the values file `server.ingress.enabled` and either
      - Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
      - Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts

After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli)
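
As an assumed quick check (not part of the Helm NOTES above), the created resources can be listed before moving on:

kubectl get pod,svc,ingress -n argocd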

7. Register the Argo CD domain in /etc/hosts

echo "127.0.0.1 argocd.example.com" | sudo tee -a /etc/hosts
cat /etc/hosts

...
127.0.0.1 argocd.example.com
  • Adds an /etc/hosts entry so that the argocd.example.com domain resolves to 127.0.0.1 in the local environment
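
To double-check that the entry is picked up, host resolution can be queried with getent (an assumed verification step on Linux):

getent hosts argocd.example.com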

8. Verify HTTPS access to Argo CD with curl

curl -vk https://argocd.example.com/

* Host argocd.example.com:443 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
*   Trying 127.0.0.1:443...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* SSL Trust: peer verification disabled
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / X25519MLKEM768 / RSASSA-PSS
...

9. Retrieve the initial Argo CD admin password and store it in a variable

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ;echo
r0PAK1j7tcDasX7Q
ARGOPW=<initial admin password>
ARGOPW=r0PAK1j7tcDasX7Q
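
Equivalently, the password can be captured straight into the variable in one step:

ARGOPW=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo $ARGOPW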

10. Log in with the Argo CD CLI

argocd login argocd.example.com --insecure --username admin --password $ARGOPW

'admin:login' logged in successfully
Context 'argocd.example.com' updated

11. Change the admin account password

argocd account update-password --current-password $ARGOPW --new-password qwe12345

Password updated
Context 'argocd.example.com' updated
  • Account password: qwe12345

12. Argo CD web console address

https://argocd.example.com


🌱 Deploy kind dev/prd k8s clusters & fix the k8s credentials (kubeconfig)

1. Check the kubectl contexts before installation

kubectl config get-contexts

CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
*         kind-mgmt   kind-mgmt   kind-mgmt   
  • ์•„์ง์€ kind-mgmt ํด๋Ÿฌ์Šคํ„ฐ๋งŒ ์กด์žฌํ•˜๊ณ , ๊ธฐ๋ณธ ์ปจํ…์ŠคํŠธ๋„ kind-mgmt๋กœ ์„ค์ •๋˜์–ด ์žˆ์Œ

2. Check the kind Docker network

docker network ls

NETWORK ID     NAME      DRIVER    SCOPE
f5ad53882464   bridge    bridge    local
bec308f23ee5   host      host      local
1da18f85ffec   kind      bridge    local
225e867f21f9   none      null      local

3. Check the mgmt-control-plane container IP

docker network inspect kind | jq

[
  {
    "Name": "kind",
    "Id": "1da18f85ffecfc8a2170a57b9369bf4387586703d6c83f3accf900e9145b7772",
    "Created": "2025-10-18T15:06:51.081819223+09:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv4": true,
    "EnableIPv6": true,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.18.0.0/16",
          "Gateway": "172.18.0.1"
        },
        {
          "Subnet": "fc00:f853:ccd:e793::/64",
          "Gateway": "fc00:f853:ccd:e793::1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "4466ac122129f82aef62ae613cf1b1fa9025f61e3337f49bd9f810c5090054b3": {
        "Name": "mgmt-control-plane",
        "EndpointID": "b831183bd631a8adfdfe4f5659ebdc5192fa74b0c5e9ddada9b2e4b638190565",
        "MacAddress": "0e:cf:0f:10:0d:55",
        "IPv4Address": "172.18.0.2/16",
        "IPv6Address": "fc00:f853:ccd:e793::2/64"
      }
    },
    "Options": {
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
  }
]
  • Confirms that the mgmt-control-plane container's IPv4 address is 172.18.0.2
  • Kept as a reference for later communication with the dev/prd clusters and for the kubeconfig edits

4. Create the kind dev/prd clusters

kind create cluster --name dev --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000
    hostPort: 31000
EOF

Creating cluster "dev" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev

Have a nice day! 👋
kind create cluster --name prd --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32000
    hostPort: 32000
EOF

Creating cluster "prd" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-prd"
You can now use your cluster with:

kubectl cluster-info --context kind-prd

Have a nice day! 👋

5. Check the context list after creating the dev/prd clusters

kubectl config get-contexts

CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
          kind-dev    kind-dev    kind-dev    
          kind-mgmt   kind-mgmt   kind-mgmt   
*         kind-prd    kind-prd    kind-prd    
  • After the clusters are created, the kind-dev and kind-prd contexts are added to kubeconfig
  • The current context is set to kind-prd, the cluster created last

6. Switch the context to the mgmt cluster

kubectl config use-context kind-mgmt

Switched to context "kind-mgmt".
kubectl config get-contexts

CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
          kind-dev    kind-dev    kind-dev    
*         kind-mgmt   kind-mgmt   kind-mgmt   
          kind-prd    kind-prd    kind-prd    

7. Check node status in each cluster (mgmt/dev/prd)

kubectl get node -v=6 --context kind-mgmt
kubectl get node -v=6 --context kind-dev
kubectl get node -v=6 --context kind-prd

I1117 20:13:19.217800  121966 cmd.go:527] kubectl command headers turned on
I1117 20:13:19.224307  121966 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I1117 20:13:19.225823  121966 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 20:13:19.225838  121966 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 20:13:19.225843  121966 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 20:13:19.225847  121966 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 20:13:19.225852  121966 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 20:13:19.237944  121966 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:42415/api/v1/nodes?limit=500" status="200 OK" milliseconds=7
NAME                 STATUS   ROLES           AGE   VERSION
mgmt-control-plane   Ready    control-plane   17m   v1.32.8

I1117 20:13:19.286485  121989 cmd.go:527] kubectl command headers turned on
I1117 20:13:19.293572  121989 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I1117 20:13:19.295187  121989 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 20:13:19.295222  121989 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 20:13:19.295233  121989 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 20:13:19.295243  121989 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 20:13:19.295253  121989 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 20:13:19.305632  121989 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:35871/api?timeout=32s" status="200 OK" milliseconds=10
I1117 20:13:19.307735  121989 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:35871/apis?timeout=32s" status="200 OK" milliseconds=1
I1117 20:13:19.315440  121989 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:35871/api/v1/nodes?limit=500" status="200 OK" milliseconds=2
NAME                STATUS   ROLES           AGE     VERSION
dev-control-plane   Ready    control-plane   2m21s   v1.32.8

I1117 20:13:19.363680  122008 cmd.go:527] kubectl command headers turned on
I1117 20:13:19.369643  122008 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I1117 20:13:19.370592  122008 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 20:13:19.370614  122008 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 20:13:19.370620  122008 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 20:13:19.370625  122008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 20:13:19.370631  122008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 20:13:19.378161  122008 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:34469/api?timeout=32s" status="200 OK" milliseconds=7
I1117 20:13:19.380831  122008 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:34469/apis?timeout=32s" status="200 OK" milliseconds=1
I1117 20:13:19.388897  122008 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:34469/api/v1/nodes?limit=500" status="200 OK" milliseconds=2
NAME                STATUS   ROLES           AGE    VERSION
prd-control-plane   Ready    control-plane   106s   v1.32.8

8. Check Pod status in each cluster

kubectl get pod -A --context kind-mgmt
kubectl get pod -A --context kind-dev
kubectl get pod -A --context kind-prd

NAMESPACE            NAME                                               READY   STATUS    RESTARTS   AGE
argocd               argocd-application-controller-0                    1/1     Running   0          13m
argocd               argocd-applicationset-controller-bbff79c6f-9qcf8   1/1     Running   0          13m
argocd               argocd-dex-server-6877ddf4f8-fvfll                 1/1     Running   0          13m
argocd               argocd-notifications-controller-7b5658fc47-26p24   1/1     Running   0          13m
argocd               argocd-redis-7d948674-xnl9k                        1/1     Running   0          13m
argocd               argocd-repo-server-7679dc55f5-swj2g                1/1     Running   0          13m
argocd               argocd-server-7d769b6f48-2ts94                     1/1     Running   0          13m
ingress-nginx        ingress-nginx-controller-5b89cb54f9-5gvfh          1/1     Running   0          16m
kube-system          coredns-668d6bf9bc-d5bn7                           1/1     Running   0          17m
kube-system          coredns-668d6bf9bc-vb4p7                           1/1     Running   0          17m
kube-system          etcd-mgmt-control-plane                            1/1     Running   0          18m
kube-system          kindnet-jtm8t                                      1/1     Running   0          17m
kube-system          kube-apiserver-mgmt-control-plane                  1/1     Running   0          18m
kube-system          kube-controller-manager-mgmt-control-plane         1/1     Running   0          18m
kube-system          kube-proxy-b9pmh                                   1/1     Running   0          17m
kube-system          kube-scheduler-mgmt-control-plane                  1/1     Running   0          18m
local-path-storage   local-path-provisioner-7dc846544d-wltkn            1/1     Running   0          17m

NAMESPACE            NAME                                        READY   STATUS    RESTARTS   AGE
kube-system          coredns-668d6bf9bc-lddkn                    1/1     Running   0          3m17s
kube-system          coredns-668d6bf9bc-ltg98                    1/1     Running   0          3m17s
kube-system          etcd-dev-control-plane                      1/1     Running   0          3m23s
kube-system          kindnet-2gks8                               1/1     Running   0          3m18s
kube-system          kube-apiserver-dev-control-plane            1/1     Running   0          3m23s
kube-system          kube-controller-manager-dev-control-plane   1/1     Running   0          3m23s
kube-system          kube-proxy-zmdnk                            1/1     Running   0          3m18s
kube-system          kube-scheduler-dev-control-plane            1/1     Running   0          3m23s
local-path-storage   local-path-provisioner-7dc846544d-qtmxj     1/1     Running   0          3m17s

NAMESPACE            NAME                                        READY   STATUS    RESTARTS   AGE
kube-system          coredns-668d6bf9bc-2hm7g                    1/1     Running   0          2m42s
kube-system          coredns-668d6bf9bc-7kbbg                    1/1     Running   0          2m42s
kube-system          etcd-prd-control-plane                      1/1     Running   0          2m48s
kube-system          kindnet-lqc7k                               1/1     Running   0          2m42s
kube-system          kube-apiserver-prd-control-plane            1/1     Running   0          2m48s
kube-system          kube-controller-manager-prd-control-plane   1/1     Running   0          2m48s
kube-system          kube-proxy-kkhxb                            1/1     Running   0          2m42s
kube-system          kube-scheduler-prd-control-plane            1/1     Running   0          2m49s
local-path-storage   local-path-provisioner-7dc846544d-dj54f     1/1     Running   0          2m42s

9. Set kubectl aliases

alias k8s1='kubectl --context kind-mgmt'
alias k8s2='kubectl --context kind-dev'
alias k8s3='kubectl --context kind-prd'
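
These aliases only live in the current shell; to keep them across sessions they could be appended to the shell profile, for example (an assumed step, adjust to your shell):

cat <<EOF >> ~/.bashrc
alias k8s1='kubectl --context kind-mgmt'
alias k8s2='kubectl --context kind-dev'
alias k8s3='kubectl --context kind-prd'
EOF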
# Quick overview of each cluster's nodes
k8s1 get node -owide
k8s2 get node -owide
k8s3 get node -owide

NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
mgmt-control-plane   Ready    control-plane   19m   v1.32.8   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.17.8-arch1-1   containerd://2.1.3

NAME                STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
dev-control-plane   Ready    control-plane   4m16s   v1.32.8   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.17.8-arch1-1   containerd://2.1.3

NAME                STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
prd-control-plane   Ready    control-plane   3m41s   v1.32.8   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.17.8-arch1-1   containerd://2.1.3

10. Re-check each control-plane IP on the docker network

docker network inspect kind | grep -E 'Name|IPv4Address'

        "Name": "kind",
                "Name": "prd-control-plane",
                "IPv4Address": "172.18.0.4/16",
                "Name": "mgmt-control-plane",
                "IPv4Address": "172.18.0.2/16",
                "Name": "dev-control-plane",
                "IPv4Address": "172.18.0.3/16",

11. ๋„๋ฉ”์ธ ํ†ต์‹  ํ™•์ธ

docker exec -it mgmt-control-plane curl -sk https://dev-control-plane:6443/version

{
  "major": "1",
  "minor": "32",
  "gitVersion": "v1.32.8",
  "gitCommit": "2e83bc4bf31e88b7de81d5341939d5ce2460f46f",
  "gitTreeState": "clean",
  "buildDate": "2025-08-13T14:21:22Z",
  "goVersion": "go1.23.11",
  "compiler": "gc",
  "platform": "linux/amd64"
}
docker exec -it mgmt-control-plane curl -sk https://prd-control-plane:6443/version

{
  "major": "1",
  "minor": "32",
  "gitVersion": "v1.32.8",
  "gitCommit": "2e83bc4bf31e88b7de81d5341939d5ce2460f46f",
  "gitTreeState": "clean",
  "buildDate": "2025-08-13T14:21:22Z",
  "goVersion": "go1.23.11",
  "compiler": "gc",
  "platform": "linux/amd64"
}
docker exec -it dev-control-plane curl -sk https://prd-control-plane:6443/version

{
  "major": "1",
  "minor": "32",
  "gitVersion": "v1.32.8",
  "gitCommit": "2e83bc4bf31e88b7de81d5341939d5ce2460f46f",
  "gitTreeState": "clean",
  "buildDate": "2025-08-13T14:21:22Z",
  "goVersion": "go1.23.11",
  "compiler": "gc",
  "platform": "linux/amd64"
}
  • From mgmt-control-plane, the /version endpoint of the dev-control-plane and prd-control-plane API servers is called with curl
  • The same /version call also works from dev-control-plane to the prd-control-plane API server

12. Ping the kind network IPs from the host

ping -c 1 172.18.0.2

PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.129 ms

--- 172.18.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
ping -c 1 172.18.0.3

PING 172.18.0.3 (172.18.0.3) 56(84) bytes of data.
64 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.079 ms

--- 172.18.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
ping -c 1 172.18.0.4

PING 172.18.0.4 (172.18.0.4) 56(84) bytes of data.
64 bytes from 172.18.0.4: icmp_seq=1 ttl=64 time=0.172 ms

--- 172.18.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
  • Confirms that the host and the control-plane containers can reach each other normally

13. Change the dev/prd kubeconfig API server addresses to the container IPs

vi ~/.kube/config

...
    server: https://172.18.0.3:6443
  name: kind-dev
  ...
    server: https://172.18.0.4:6443
  name: kind-prd
...
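
Instead of editing the file by hand, the same change can be made with kubectl config set-cluster, using the container IPs confirmed above (an equivalent, scriptable form):

kubectl config set-cluster kind-dev --server=https://172.18.0.3:6443
kubectl config set-cluster kind-prd --server=https://172.18.0.4:6443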

14. Re-verify the dev/prd API server connections after editing kubeconfig

kubectl get node -v=6 --context kind-dev
kubectl get node -v=6 --context kind-prd

I1117 23:24:11.853566  160346 cmd.go:527] kubectl command headers turned on
I1117 23:24:11.859417  160346 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I1117 23:24:11.860050  160346 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 23:24:11.860066  160346 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 23:24:11.860073  160346 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 23:24:11.860080  160346 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 23:24:11.860086  160346 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 23:24:11.866956  160346 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.3:6443/api?timeout=32s" status="200 OK" milliseconds=6
I1117 23:24:11.868615  160346 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.3:6443/apis?timeout=32s" status="200 OK" milliseconds=0
I1117 23:24:11.876303  160346 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.3:6443/api/v1/nodes?limit=500" status="200 OK" milliseconds=1
NAME                STATUS   ROLES           AGE     VERSION
dev-control-plane   Ready    control-plane   3h13m   v1.32.8

I1117 23:24:11.918305  160370 cmd.go:527] kubectl command headers turned on
I1117 23:24:11.923330  160370 loader.go:402] Config loaded from file:  /home/devshin/.kube/config
I1117 23:24:11.924213  160370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 23:24:11.924278  160370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 23:24:11.924302  160370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 23:24:11.924337  160370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 23:24:11.924358  160370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 23:24:11.929299  160370 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.4:6443/api?timeout=32s" status="200 OK" milliseconds=4
I1117 23:24:11.930766  160370 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.4:6443/apis?timeout=32s" status="200 OK" milliseconds=0
I1117 23:24:11.938976  160370 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.4:6443/api/v1/nodes?limit=500" status="200 OK" milliseconds=1
NAME                STATUS   ROLES           AGE     VERSION
prd-control-plane   Ready    control-plane   3h12m   v1.32.8

๐ŸŒ Argo CD์— ๋‹ค๋ฅธ K8S Cluster ๋“ฑ๋ก

1. Check the default in-cluster entry in Argo CD

argocd cluster list

SERVER                          NAME        VERSION  STATUS   MESSAGE                                                  PROJECT
https://kubernetes.default.svc  in-cluster           Unknown  Cluster has no applications and is not being monitored.  
argocd cluster list -o json | jq

[
  {
    "server": "https://kubernetes.default.svc",
    "name": "in-cluster",
    "config": {
      "tlsClientConfig": {
        "insecure": false
      }
    },
    "connectionState": {
      "status": "Unknown",
      "message": "Cluster has no applications and is not being monitored.",
      "attemptedAt": "2025-11-17T14:25:59Z"
    },
    "info": {
      "connectionState": {
        "status": "Unknown",
        "message": "Cluster has no applications and is not being monitored.",
        "attemptedAt": "2025-11-17T14:25:59Z"
      },
      "cacheInfo": {},
      "applicationsCount": 0
    }
  }
]

2. List the Secrets in the argocd namespace

kubectl get secret -n argocd

NAME                           TYPE                 DATA   AGE
argocd-initial-admin-secret    Opaque               1      3h25m
argocd-notifications-secret    Opaque               0      3h25m
argocd-redis                   Opaque               1      3h25m
argocd-secret                  Opaque               3      3h25m
argocd-server-tls              kubernetes.io/tls    2      3h27m
sh.helm.release.v1.argocd.v1   helm.sh/release.v1   1      3h26m

3. List the kube-system ServiceAccounts in the dev/prd clusters

k8s2 get sa -n kube-system

NAME                                          SECRETS   AGE
attachdetach-controller                       0         3h16m
bootstrap-signer                              0         3h16m
certificate-controller                        0         3h16m
clusterrole-aggregation-controller            0         3h16m
coredns                                       0         3h16m
cronjob-controller                            0         3h16m
daemon-set-controller                         0         3h16m
default                                       0         3h16m
deployment-controller                         0         3h16m
disruption-controller                         0         3h16m
endpoint-controller                           0         3h16m
endpointslice-controller                      0         3h16m
endpointslicemirroring-controller             0         3h16m
ephemeral-volume-controller                   0         3h16m
expand-controller                             0         3h16m
generic-garbage-collector                     0         3h16m
horizontal-pod-autoscaler                     0         3h16m
job-controller                                0         3h16m
kindnet                                       0         3h16m
kube-proxy                                    0         3h16m
legacy-service-account-token-cleaner          0         3h16m
namespace-controller                          0         3h16m
node-controller                               0         3h16m
persistent-volume-binder                      0         3h16m
pod-garbage-collector                         0         3h16m
pv-protection-controller                      0         3h16m
pvc-protection-controller                     0         3h16m
replicaset-controller                         0         3h16m
replication-controller                        0         3h16m
resourcequota-controller                      0         3h16m
root-ca-cert-publisher                        0         3h16m
service-account-controller                    0         3h16m
statefulset-controller                        0         3h16m
token-cleaner                                 0         3h16m
ttl-after-finished-controller                 0         3h16m
ttl-controller                                0         3h16m
validatingadmissionpolicy-status-controller   0         3h16m
k8s3 get sa -n kube-system

NAME                                          SECRETS   AGE
attachdetach-controller                       0         3h15m
bootstrap-signer                              0         3h16m
certificate-controller                        0         3h16m
clusterrole-aggregation-controller            0         3h16m
coredns                                       0         3h16m
cronjob-controller                            0         3h16m
daemon-set-controller                         0         3h15m
default                                       0         3h15m
deployment-controller                         0         3h16m
disruption-controller                         0         3h15m
endpoint-controller                           0         3h15m
endpointslice-controller                      0         3h16m
endpointslicemirroring-controller             0         3h16m
ephemeral-volume-controller                   0         3h16m
expand-controller                             0         3h16m
generic-garbage-collector                     0         3h16m
horizontal-pod-autoscaler                     0         3h16m
job-controller                                0         3h16m
kindnet                                       0         3h16m
kube-proxy                                    0         3h16m
legacy-service-account-token-cleaner          0         3h16m
namespace-controller                          0         3h16m
node-controller                               0         3h16m
persistent-volume-binder                      0         3h16m
pod-garbage-collector                         0         3h16m
pv-protection-controller                      0         3h16m
pvc-protection-controller                     0         3h16m
replicaset-controller                         0         3h16m
replication-controller                        0         3h16m
resourcequota-controller                      0         3h15m
root-ca-cert-publisher                        0         3h16m
service-account-controller                    0         3h16m
statefulset-controller                        0         3h15m
token-cleaner                                 0         3h16m
ttl-after-finished-controller                 0         3h16m
ttl-controller                                0         3h16m
validatingadmissionpolicy-status-controller   0         3h16m

4. Register the dev cluster with Argo CD

argocd cluster add kind-dev --name dev-k8s
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kind-dev` with full cluster level privileges. Do you want to continue [y/N]? y

{"level":"info","msg":"ServiceAccount \"argocd-manager\" created in namespace \"kube-system\"","time":"2025-11-17T23:28:26+09:00"}
{"level":"info","msg":"ClusterRole \"argocd-manager-role\" created","time":"2025-11-17T23:28:26+09:00"}
{"level":"info","msg":"ClusterRoleBinding \"argocd-manager-role-binding\" created","time":"2025-11-17T23:28:26+09:00"}
{"level":"info","msg":"Created bearer token secret \"argocd-manager-long-lived-token\" for ServiceAccount \"argocd-manager\"","time":"2025-11-17T23:28:26+09:00"}
Cluster 'https://172.18.0.3:6443' added

5. Check the argocd-manager ServiceAccount created in the dev cluster

k8s2 get sa -n kube-system argocd-manager

NAME             SECRETS   AGE
argocd-manager   0         29s

6. Check the cluster Secret created in Argo CD for the dev cluster

kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

NAME                            TYPE     DATA   AGE
cluster-172.18.0.3-4100004299   Opaque   3      2m36s

7. Decode and inspect the dev cluster Secret with k9s

k9s -> :secret, argocd namespace -> on the cluster secret below, press d (Describe) -> x (Toggle Decode) to view the contents
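
Without k9s, the same information can be decoded with kubectl; the config key of the cluster Secret holds the bearer token and TLS settings Argo CD uses (secret name taken from the listing above):

kubectl -n argocd get secret cluster-172.18.0.3-4100004299 -o jsonpath='{.data.config}' | base64 -d | jq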

8. Check the dev-k8s registration result

argocd cluster list

SERVER                          NAME        VERSION  STATUS   MESSAGE                                                  PROJECT
https://172.18.0.3:6443         dev-k8s              Unknown  Cluster has no applications and is not being monitored.  
https://kubernetes.default.svc  in-cluster           Unknown  Cluster has no applications and is not being monitored.  
  • Confirms that the dev cluster has been registered with Argo CD

9. Register the prd cluster with Argo CD

argocd cluster add kind-prd --name prd-k8s --yes

{"level":"info","msg":"ServiceAccount \"argocd-manager\" created in namespace \"kube-system\"","time":"2025-11-17T23:36:08+09:00"}
{"level":"info","msg":"ClusterRole \"argocd-manager-role\" created","time":"2025-11-17T23:36:08+09:00"}
{"level":"info","msg":"ClusterRoleBinding \"argocd-manager-role-binding\" created","time":"2025-11-17T23:36:08+09:00"}
{"level":"info","msg":"Created bearer token secret \"argocd-manager-long-lived-token\" for ServiceAccount \"argocd-manager\"","time":"2025-11-17T23:36:08+09:00"}
Cluster 'https://172.18.0.4:6443' added

10. Check the argocd-manager ServiceAccount created in the prd cluster

k8s3 get sa -n kube-system argocd-manager

NAME             SECRETS   AGE
argocd-manager   0         15s

11. List the Argo CD cluster Secrets for dev/prd

kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

NAME                            TYPE     DATA   AGE
cluster-172.18.0.3-4100004299   Opaque   3      8m18s
cluster-172.18.0.4-568336172    Opaque   3      36s

12. Check the final Argo CD cluster list

argocd cluster list

SERVER                          NAME        VERSION  STATUS   MESSAGE                                                  PROJECT
https://172.18.0.3:6443         dev-k8s              Unknown  Cluster has no applications and is not being monitored.  
https://172.18.0.4:6443         prd-k8s              Unknown  Cluster has no applications and is not being monitored.  
https://kubernetes.default.svc  in-cluster           Unknown  Cluster has no applications and is not being monitored.  


🚀 Deploy Nginx to each of the three K8S clusters with Argo CD

1. Look up the dev/prd cluster IPs on the kind network and set environment variables

docker network inspect kind | grep -E 'Name|IPv4Address'

        "Name": "kind",
                "Name": "prd-control-plane",
                "IPv4Address": "172.18.0.4/16",
                "Name": "mgmt-control-plane",
                "IPv4Address": "172.18.0.2/16",
                "Name": "dev-control-plane",
                "IPv4Address": "172.18.0.3/16",
DEVK8SIP=172.18.0.3
PRDK8SIP=172.18.0.4
echo $DEVK8SIP $PRDK8SIP

172.18.0.3 172.18.0.4
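
The IPs could also be pulled straight from Docker instead of being copied by hand, for example (an assumed helper; each node container is attached only to the kind network):

DEVK8SIP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dev-control-plane)
PRDK8SIP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' prd-control-plane)
echo $DEVK8SIP $PRDK8SIP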

2. Create an Argo CD Application that deploys Nginx to the mgmt cluster

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mgmt-nginx
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values.yaml
    path: nginx-chart
    repoURL: https://github.com/Shinminjin/cicd-study
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: mgmt-nginx
    server: https://kubernetes.default.svc
EOF

Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/mgmt-nginx created

3. Create an Argo CD Application that deploys Nginx to the dev cluster

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-nginx
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values-dev.yaml
    path: nginx-chart
    repoURL: https://github.com/Shinminjin/cicd-study
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: dev-nginx
    server: https://$DEVK8SIP:6443
EOF

Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/dev-nginx created

4. Create an Argo CD Application that deploys Nginx to the prd cluster

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prd-nginx
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values-prd.yaml
    path: nginx-chart
    repoURL: https://github.com/Shinminjin/cicd-study
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: prd-nginx
    server: https://$PRDK8SIP:6443
EOF

Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/prd-nginx created

5. Check the three Nginx Application statuses in Argo CD

argocd app list

NAME               CLUSTER                         NAMESPACE   PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                      PATH         TARGET
argocd/dev-nginx   https://172.18.0.3:6443         dev-nginx   default  Synced  Healthy  Auto-Prune  <none>      https://github.com/Shinminjin/cicd-study  nginx-chart  HEAD
argocd/mgmt-nginx  https://kubernetes.default.svc  mgmt-nginx  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/Shinminjin/cicd-study  nginx-chart  HEAD
argocd/prd-nginx   https://172.18.0.4:6443         prd-nginx   default  Synced  Healthy  Auto-Prune  <none>      https://github.com/Shinminjin/cicd-study  nginx-chart  HEAD
  • ๊ฐ Application์ด ์˜ฌ๋ฐ”๋ฅธ ํด๋Ÿฌ์Šคํ„ฐ์™€ ๋„ค์ž„์ŠคํŽ˜์ด์Šค์— ๋งคํ•‘๋˜์–ด ์žˆ์Œ์„ ํ™•์ธ
kubectl get applications -n argocd

NAME         SYNC STATUS   HEALTH STATUS
dev-nginx    Synced        Healthy
mgmt-nginx   Synced        Healthy
prd-nginx    Synced        Healthy
  • The Application status is re-checked from the Kubernetes resource side as well

6. Check the Nginx resources and NodePort response on the mgmt cluster

kubectl get pod,svc,ep,cm -n mgmt-nginx

NAME                              READY   STATUS    RESTARTS   AGE
pod/mgmt-nginx-6fc86948bc-vh7k8   1/1     Running   0          96s

NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/mgmt-nginx   NodePort   10.96.10.62   <none>        80:30000/TCP   96s

NAME                   ENDPOINTS        AGE
endpoints/mgmt-nginx   10.244.0.17:80   96s

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      96s
configmap/mgmt-nginx         1      96s
curl -s http://127.0.0.1:30000

<!DOCTYPE html>
<html>
<head>
  <title>Welcome to Nginx!</title>
</head>
<body>
  <h1>Hello, Kubernetes!</h1>
  <p>Nginx version 1.26.1</p>
</body>
</html>
  • Checks the Pod, Service, Endpoints, and ConfigMap in the mgmt-nginx namespace
  • Accessing NodePort 30000 from the host returns the custom HTML instead of the default Nginx page

7. Check the Nginx deployment and NodePort response on the dev cluster

kubectl get pod,svc,ep,cm -n dev-nginx --context kind-dev

NAME                            READY   STATUS    RESTARTS   AGE
pod/dev-nginx-59f4c8899-9vpvf   1/1     Running   0          2m4s

NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/dev-nginx   NodePort   10.96.108.194   <none>        80:31000/TCP   2m4s

NAME                  ENDPOINTS       AGE
endpoints/dev-nginx   10.244.0.5:80   2m4s

NAME                         DATA   AGE
configmap/dev-nginx          1      2m4s
configmap/kube-root-ca.crt   1      2m4s
curl -s http://127.0.0.1:31000

<!DOCTYPE html>
<html>
<head>
  <title>Welcome to Nginx!</title>
</head>
<body>
  <h1>Hello, Dev - Kubernetes!</h1>
  <p>Nginx version 1.26.1</p>
</body>
</html>
  • Queries the dev-nginx namespace resources in the kind-dev context
  • Accessing port 31000 from the host returns the dev-specific message

8. Check the multi-Pod Nginx Endpoints and NodePort response on the prd cluster

kubectl get pod,svc,ep,cm -n prd-nginx --context kind-prd

NAME                             READY   STATUS    RESTARTS   AGE
pod/prd-nginx-86d9bc9f7f-bfgg7   1/1     Running   0          3m47s
pod/prd-nginx-86d9bc9f7f-g24p8   1/1     Running   0          3m47s

NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/prd-nginx   NodePort   10.96.193.167   <none>        80:32000/TCP   3m47s

NAME                  ENDPOINTS                     AGE
endpoints/prd-nginx   10.244.0.5:80,10.244.0.6:80   3m47s

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      3m47s
configmap/prd-nginx          1      3m47s
curl -s http://127.0.0.1:32000

<!DOCTYPE html>
<html>
<head>
  <title>Welcome to Nginx!</title>
</head>
<body>
  <h1>Hello, Prd - Kubernetes!</h1>
  <p>Nginx version 1.26.1</p>
</body>
</html>
  • Checks the prd-nginx namespace resources in the kind-prd context
    • Pods: two prd-nginx-... replicas (scaled up for the production environment)
    • Service: NodePort 80:32000/TCP
    • Endpoints: both Pod IPs are registered

9. Delete the Argo CD Applications

kubectl delete applications -n argocd mgmt-nginx dev-nginx prd-nginx

application.argoproj.io "mgmt-nginx" deleted from argocd namespace
application.argoproj.io "dev-nginx" deleted from argocd namespace
application.argoproj.io "prd-nginx" deleted from argocd namespace

🌳 Argo CD App of Apps pattern

1. Create the Root Application (apps)

argocd app create apps \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/Shinminjin/cicd-study.git \
    --path apps

application 'apps' created

  • Creates a Root Application named apps in the argocd namespace
  • Uses the apps directory of the https://github.com/Shinminjin/cicd-study.git repository as its source
  • Purpose: manage the multiple Application manifests defined under apps as a single unit (a reconstructed sketch of one such child manifest follows below)
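
For reference, one of the child Application manifests under the apps directory presumably looks roughly like the sketch below; the field values are reconstructed from the argocd app list output further down, not copied from the repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example.helm-guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/gasida/cicd-study
    path: helm-guestbook
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: helm-guestbook
  syncPolicy:
    automated:
      prune: true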

2. Sync the Root Application → child Applications are created automatically

argocd app sync apps

TIMESTAMP                  GROUP              KIND    NAMESPACE                  NAME           STATUS    HEALTH        HOOK  MESSAGE
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd  example.helm-guestbook       OutOfSync  Missing              
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd  example.kustomize-guestbook  OutOfSync  Missing              
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd    example.sync-waves         OutOfSync  Missing              
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd  example.helm-guestbook    Synced  Missing              
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd    example.sync-waves         OutOfSync  Missing              application.argoproj.io/example.sync-waves created
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd  example.kustomize-guestbook  OutOfSync  Missing              application.argoproj.io/example.kustomize-guestbook created
2025-11-17T23:58:44+09:00  argoproj.io  Application      argocd  example.helm-guestbook         Synced   Missing              application.argoproj.io/example.helm-guestbook created

Name:               argocd/apps
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          argocd
URL:                https://argocd.example.com/applications/apps
Source:
- Repo:             https://github.com/Shinminjin/cicd-study.git
  Target:           
  Path:             apps
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to  (b2894c6)
Health Status:      Healthy

Operation:          Sync
Sync Revision:      b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase:              Succeeded
Start:              2025-11-17 23:58:44 +0900 KST
Finished:           2025-11-17 23:58:44 +0900 KST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP        KIND         NAMESPACE  NAME                         STATUS  HEALTH  HOOK  MESSAGE
argoproj.io  Application  argocd     example.helm-guestbook       Synced                application.argoproj.io/example.helm-guestbook created
argoproj.io  Application  argocd     example.sync-waves           Synced                application.argoproj.io/example.sync-waves created
argoproj.io  Application  argocd     example.kustomize-guestbook  Synced                application.argoproj.io/example.kustomize-guestbook created

3. Check the Root Application (apps) and the child Application list

argocd app list

NAME                                CLUSTER                         NAMESPACE            PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                          PATH                 TARGET
argocd/apps                         https://kubernetes.default.svc  argocd               default  Synced  Healthy  Manual      <none>      https://github.com/Shinminjin/cicd-study.git  apps                 
argocd/example.helm-guestbook       https://kubernetes.default.svc  helm-guestbook       default  Synced  Healthy  Auto-Prune  <none>      https://github.com/gasida/cicd-study          helm-guestbook       main
argocd/example.kustomize-guestbook  https://kubernetes.default.svc  kustomize-guestbook  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/gasida/cicd-study          kustomize-guestbook  main
argocd/example.sync-waves           https://kubernetes.default.svc  sync-waves           default  Synced  Healthy  Auto-Prune  <none>      https://github.com/gasida/cicd-study          sync-waves           main

4. Check the actual Kubernetes resources that were created

kubectl get pod -A

NAMESPACE             NAME                                                 READY   STATUS      RESTARTS   AGE
argocd                argocd-application-controller-0                      1/1     Running     0          4h
argocd                argocd-applicationset-controller-bbff79c6f-9qcf8     1/1     Running     0          4h
argocd                argocd-dex-server-6877ddf4f8-fvfll                   1/1     Running     0          4h
argocd                argocd-notifications-controller-7b5658fc47-26p24     1/1     Running     0          4h
argocd                argocd-redis-7d948674-xnl9k                          1/1     Running     0          4h
argocd                argocd-repo-server-7679dc55f5-swj2g                  1/1     Running     0          4h
argocd                argocd-server-7d769b6f48-2ts94                       1/1     Running     0          4h
helm-guestbook        helm-guestbook-667dffd5cf-f24hj                      1/1     Running     0          2m40s
ingress-nginx         ingress-nginx-controller-5b89cb54f9-5gvfh            1/1     Running     0          4h3m
kube-system           coredns-668d6bf9bc-d5bn7                             1/1     Running     0          4h5m
kube-system           coredns-668d6bf9bc-vb4p7                             1/1     Running     0          4h5m
kube-system           etcd-mgmt-control-plane                              1/1     Running     0          4h5m
kube-system           kindnet-jtm8t                                        1/1     Running     0          4h5m
kube-system           kube-apiserver-mgmt-control-plane                    1/1     Running     0          4h5m
kube-system           kube-controller-manager-mgmt-control-plane           1/1     Running     0          4h5m
kube-system           kube-proxy-b9pmh                                     1/1     Running     0          4h5m
kube-system           kube-scheduler-mgmt-control-plane                    1/1     Running     0          4h5m
kustomize-guestbook   kustomize-guestbook-ui-85db984648-mzc87              1/1     Running     0          2m40s
local-path-storage    local-path-provisioner-7dc846544d-wltkn              1/1     Running     0          4h5m
sync-waves            backend-z4kpq                                        1/1     Running     0          2m28s
sync-waves            frontend-x8xjc                                       1/1     Running     0          108s
sync-waves            maint-page-down-scbbs                                0/1     Completed   0          105s
sync-waves            maint-page-up-tr6d2                                  0/1     Completed   0          115s
sync-waves            upgrade-sql-schemab2894c6-presync-1763391525-5qk25   0/1     Completed   0          2m41s

5. Clean up App of Apps by deleting the Root Application

argocd app delete argocd/apps --yes

application 'argocd/apps' deleted

📦 ApplicationSet List generator hands-on

1. Create an ApplicationSet with the List generator

echo $DEVK8SIP $PRDK8SIP
172.18.0.3 172.18.0.4
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - list:
      elements:
      - cluster: dev-k8s
        url: https://$DEVK8SIP:6443
      - cluster: prd-k8s
        url: https://$PRDK8SIP:6443
  template:
    metadata:
      name: '{{.cluster}}-guestbook'
      labels:
        environment: '{{.cluster}}'
        managed-by: applicationset
    spec:
      project: default
      source:
        repoURL: https://github.com/Shinminjin/cicd-study.git
        targetRevision: HEAD
        path: appset/list/{{.cluster}}
      destination:
        server: '{{.url}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
EOF

applicationset.argoproj.io/guestbook created
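
Each element of the list generator fills the Go template: the dev-k8s element renders an Application named dev-k8s-guestbook pointing at https://172.18.0.3:6443 with path appset/list/dev-k8s, and prd-k8s likewise. The rendered values can be checked on the generated Application CRs, for example (an assumed verification using kubectl custom columns):

kubectl get applications -n argocd -o custom-columns=NAME:.metadata.name,SERVER:.spec.destination.server,PATH:.spec.source.path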

2. Check the guestbook ApplicationSet definition

kubectl get applicationsets -n argocd guestbook -o yaml | kubectl neat | yq

{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "ApplicationSet",
  "metadata": {
    "name": "guestbook",
    "namespace": "argocd"
  },
  "spec": {
    "generators": [
      {
        "list": {
          "elements": [
            {
              "cluster": "dev-k8s",
              "url": "https://172.18.0.3:6443"
            },
            {
              "cluster": "prd-k8s",
              "url": "https://172.18.0.4:6443"
            }
          ]
        }
      }
    ],
    "goTemplate": true,
    "goTemplateOptions": [
      "missingkey=error"
    ],
    "template": {
      "metadata": {
        "labels": {
          "environment": "{{.cluster}}",
          "managed-by": "applicationset"
        },
        "name": "{{.cluster}}-guestbook"
      },
      "spec": {
        "destination": {
          "namespace": "guestbook",
          "server": "{{.url}}"
        },
        "project": "default",
        "source": {
          "path": "appset/list/{{.cluster}}",
          "repoURL": "https://github.com/Shinminjin/cicd-study.git",
          "targetRevision": "HEAD"
        },
        "syncPolicy": {
          "syncOptions": [
            "CreateNamespace=true"
          ]
        }
      }
    }
  }
}

3. Check the ApplicationSet resource and status

kubectl get applicationsets -n argocd

NAME        AGE
guestbook   108s
argocd appset list

NAME              PROJECT  SYNCPOLICY  CONDITIONS                                                                                                                                                                                                                                     REPO                                          PATH                      TARGET
argocd/guestbook  default  nil         [{ParametersGenerated Successfully generated parameters for all Applications 2025-11-18 00:09:45 +0900 KST True ParametersGenerated} {ResourcesUpToDate ApplicationSet up to date 2025-11-18 00:09:45 +0900 KST True ApplicationSetUpToDate}]  https://github.com/Shinminjin/cicd-study.git  appset/list/  HEAD

4. List the Applications created by the ApplicationSet

argocd app list

NAME                      CLUSTER                  NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                          PATH                 TARGET
argocd/dev-k8s-guestbook  https://172.18.0.3:6443  guestbook  default  OutOfSync  Missing  Manual      <none>      https://github.com/Shinminjin/cicd-study.git  appset/list/dev-k8s  HEAD
argocd/prd-k8s-guestbook  https://172.18.0.4:6443  guestbook  default  OutOfSync  Missing  Manual      <none>      https://github.com/Shinminjin/cicd-study.git  appset/list/prd-k8s  HEAD
  • The ApplicationSet generated two Applications (dev-k8s-guestbook, prd-k8s-guestbook)
  • They have not been synced yet, so their initial state is OutOfSync / Missing

5. Filter by the labels the ApplicationSet applied

argocd app list -l managed-by=applicationset

NAME                      CLUSTER                  NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                          PATH                 TARGET
argocd/dev-k8s-guestbook  https://172.18.0.3:6443  guestbook  default  OutOfSync  Missing  Manual      <none>      https://github.com/Shinminjin/cicd-study.git  appset/list/dev-k8s  HEAD
argocd/prd-k8s-guestbook  https://172.18.0.4:6443  guestbook  default  OutOfSync  Missing  Manual      <none>      https://github.com/Shinminjin/cicd-study.git  appset/list/prd-k8s  HEAD

  • Filter the Argo CD app list by the managed-by=applicationset label assigned in the Application template
1
2
3
4
5
kubectl get applications -n argocd

NAME                SYNC STATUS   HEALTH STATUS
dev-k8s-guestbook   OutOfSync     Missing
prd-k8s-guestbook   OutOfSync     Missing
1
2
3
4
5
kubectl get applications -n argocd --show-labels

NAME                SYNC STATUS   HEALTH STATUS   LABELS
dev-k8s-guestbook   OutOfSync     Missing         environment=dev-k8s,managed-by=applicationset
prd-k8s-guestbook   OutOfSync     Missing         environment=prd-k8s,managed-by=applicationset
  • On the Kubernetes side, confirm the labels landed on the Application CRs as well (they can also be used as selectors; see below)
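Those labels work as ordinary selectors, so a single cluster's Application can be picked out directly. A quick example using the labels shown above:

kubectl get applications -n argocd -l environment=dev-k8s
argocd app list -l environment=prd-k8s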

6. Bulk sync using a label selector

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
argocd app sync -l managed-by=applicationset

TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME    STATUS    HEALTH        HOOK  MESSAGE
2025-11-18T00:14:46+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-18T00:14:46+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-18T00:14:46+09:00          Namespace                         guestbook   Running   Synced              namespace/guestbook created

Name:               argocd/dev-k8s-guestbook
Project:            default
Server:             https://172.18.0.3:6443
Namespace:          guestbook
URL:                https://argocd.example.com/applications/argocd/dev-k8s-guestbook
Source:
- Repo:             https://github.com/Shinminjin/cicd-study.git
  Target:           HEAD
  Path:             appset/list/dev-k8s
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to HEAD (b2894c6)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase:              Succeeded
Start:              2025-11-18 00:14:46 +0900 KST
Finished:           2025-11-18 00:14:46 +0900 KST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME          STATUS   HEALTH       HOOK  MESSAGE
       Namespace              guestbook     Running  Synced             namespace/guestbook created
       Service     guestbook  guestbook-ui  Synced   Healthy            service/guestbook-ui created
apps   Deployment  guestbook  guestbook-ui  Synced   Progressing        deployment.apps/guestbook-ui created
TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME    STATUS    HEALTH        HOOK  MESSAGE
2025-11-18T00:14:47+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-18T00:14:47+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-18T00:14:47+09:00          Namespace                         guestbook   Running   Synced              namespace/guestbook created

Name:               argocd/prd-k8s-guestbook
Project:            default
Server:             https://172.18.0.4:6443
Namespace:          guestbook
URL:                https://argocd.example.com/applications/argocd/prd-k8s-guestbook
Source:
- Repo:             https://github.com/Shinminjin/cicd-study.git
  Target:           HEAD
  Path:             appset/list/prd-k8s
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to HEAD (b2894c6)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase:              Succeeded
Start:              2025-11-18 00:14:47 +0900 KST
Finished:           2025-11-18 00:14:47 +0900 KST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME          STATUS   HEALTH       HOOK  MESSAGE
       Namespace              guestbook     Running  Synced             namespace/guestbook created
       Service     guestbook  guestbook-ui  Synced   Healthy            service/guestbook-ui created
apps   Deployment  guestbook  guestbook-ui  Synced   Progressing        deployment.apps/guestbook-ui created

  • Sync both guestbook apps at once, selected by the managed-by=applicationset label

7. Sync ์ดํ›„ Application ์ƒํƒœ ๋ฐ ๋ฆฌ์†Œ์Šค ์ƒํƒœ ํ™•์ธ

1
2
3
4
k8s2 get pod -n guestbook

NAME                            READY   STATUS    RESTARTS   AGE
guestbook-ui-7cf4fd7cb9-m7l46   1/1     Running   0          118s
1
2
3
4
5
k8s3 get pod -n guestbook

NAME                            READY   STATUS    RESTARTS   AGE
guestbook-ui-7cf4fd7cb9-9gkql   1/1     Running   0          2m6s
guestbook-ui-7cf4fd7cb9-hbf9s   1/1     Running   0          2m6s
  • dev/prd ํด๋Ÿฌ์Šคํ„ฐ์— ์‹ค์ œ ๋ฐฐํฌ๋œ guestbook Pod ์ƒํƒœ ํ™•์ธ

8. ์ƒ์„ฑ๋œ Application ๋งค๋‹ˆํŽ˜์ŠคํŠธ ๋‚ด์šฉ ํ™•์ธ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
kubectl get applications -n argocd dev-k8s-guestbook -o yaml | kubectl neat | yq

{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "Application",
  "metadata": {
    "labels": {
      "environment": "dev-k8s",
      "managed-by": "applicationset"
    },
    "name": "dev-k8s-guestbook",
    "namespace": "argocd"
  },
  "spec": {
    "destination": {
      "namespace": "guestbook",
      "server": "https://172.18.0.3:6443"
    },
    "project": "default",
    "source": {
      "path": "appset/list/dev-k8s",
      "repoURL": "https://github.com/Shinminjin/cicd-study.git",
      "targetRevision": "HEAD"
    },
    "syncPolicy": {
      "syncOptions": [
        "CreateNamespace=true"
      ]
    }
  }
}

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
kubectl get applications -n argocd prd-k8s-guestbook -o yaml | kubectl neat | yq

{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "Application",
  "metadata": {
    "labels": {
      "environment": "prd-k8s",
      "managed-by": "applicationset"
    },
    "name": "prd-k8s-guestbook",
    "namespace": "argocd"
  },
  "spec": {
    "destination": {
      "namespace": "guestbook",
      "server": "https://172.18.0.4:6443"
    },
    "project": "default",
    "source": {
      "path": "appset/list/prd-k8s",
      "repoURL": "https://github.com/Shinminjin/cicd-study.git",
      "targetRevision": "HEAD"
    },
    "syncPolicy": {
      "syncOptions": [
        "CreateNamespace=true"
      ]
    }
  }
}

9. Clean up the managed resources by deleting the ApplicationSet

1
2
3
argocd appset delete guestbook --yes

applicationset 'guestbook' deleted

๐ŸŒ ApplicationSet Cluster ์ œ๋„ค๋ ˆ์ดํ„ฐ ์‹ค์Šต

1. Cluster ์ œ๋„ค๋ ˆ์ดํ„ฐ๋กœ 3๊ฐœ ํด๋Ÿฌ์Šคํ„ฐ์— ์ผ๊ด„ ๋ฐฐํฌ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{.name}}-guestbook'
      labels:
        managed-by: applicationset
    spec:
      project: "default"
      source:
        repoURL: https://github.com/Shinminjin/cicd-study
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.server}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
EOF

applicationset.argoproj.io/guestbook created

  • The clusters: {} generator deploys the guestbook application to every cluster registered with Argo CD (a quick way to list those clusters is shown below)
  • The template uses each cluster's name (.name) and API server address (.server) so that one Application is generated per cluster

2. Cluster ์ œ๋„ค๋ ˆ์ดํ„ฐ ApplicationSet ์ œ๊ฑฐ

1
2
3
argocd appset delete guestbook --yes

applicationset 'guestbook' deleted

3. dev ํด๋Ÿฌ์Šคํ„ฐ๋งŒ ๋Œ€์ƒ์œผ๋กœ ํ•„ํ„ฐ๋ง ์ค€๋น„ (cluster ์‹œํฌ๋ฆฟ์— ๋ผ๋ฒจ ์ถ”๊ฐ€)

1
2
3
4
5
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

NAME                            TYPE     DATA   AGE
cluster-172.18.0.3-4100004299   Opaque   3      51m
cluster-172.18.0.4-568336172    Opaque   3      44m
  • List the cluster secrets Argo CD manages and identify the dev/prd clusters
1
2
3
4
DEVK8S=cluster-172.18.0.3-4100004299
kubectl label secrets $DEVK8S -n argocd env=stg

secret/cluster-172.18.0.3-4100004299 labeled
  • dev ํด๋Ÿฌ์Šคํ„ฐ์— ํ•ด๋‹นํ•˜๋Š” ์‹œํฌ๋ฆฟ ์ด๋ฆ„์„ ๋ณ€์ˆ˜๋กœ ์ง€์ •ํ•˜๊ณ , ์—ฌ๊ธฐ์— env=stg ๋ผ๋ฒจ์„ ๋ถ€์—ฌํ•จ
1
2
3
4
kubectl get secret -n argocd -l env=stg

NAME                            TYPE     DATA   AGE
cluster-172.18.0.3-4100004299   Opaque   3      53m
  • env=stg ๋ผ๋ฒจ์ด ์ž˜ ๋ถ™์—ˆ๋Š”์ง€ ์žฌํ™•์ธํ•จ

4. Create a Cluster-generator ApplicationSet with a label selector

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters:
      selector:
        matchLabels:
          env: "stg"
  template:
    metadata:
      name: '{{.name}}-guestbook'
      labels:
        managed-by: applicationset
    spec:
      project: "default"
      source:
        repoURL: https://github.com/Shinminjin/cicd-study
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.server}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
        automated:
          prune: true
          selfHeal: true
EOF

applicationset.argoproj.io/guestbook created
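With the env=stg selector plus the automated prune/selfHeal policy, only the labeled dev cluster should receive a guestbook Application, and it should sync on its own. This can be confirmed the same way as before:

argocd app list -l managed-by=applicationset
kubectl get applications -n argocd --show-labels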

5. Clean up the label-based Cluster ApplicationSet exercise

1
2
3
argocd appset delete guestbook --yes

applicationset 'guestbook' deleted

๐Ÿ” keycloak ํŒŒ๋“œ๋กœ ๋ฐฐํฌ

1. Deploy Keycloak as a Deployment/Service/Ingress

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:26.4.0
          args: ["start-dev"]     # dev mode ์‹คํ–‰
          env:
            - name: KEYCLOAK_ADMIN
              value: admin
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: admin
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  selector:
    app: keycloak
  ports:
    - name: http
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - host: keycloak.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
EOF

deployment.apps/keycloak created
service/keycloak created
ingress.networking.k8s.io/keycloak created

2. Check the Keycloak resource status

1
2
3
4
5
6
7
8
9
10
kubectl get deploy,svc,ep keycloak

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/keycloak   1/1     1            1           40s

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/keycloak   ClusterIP   10.96.249.212   <none>        80/TCP    40s

NAME                 ENDPOINTS          AGE
endpoints/keycloak   10.244.0.25:8080   40s
1
2
3
4
kubectl get ingress keycloak

NAME       CLASS   HOSTS                  ADDRESS     PORTS   AGE
keycloak   nginx   keycloak.example.com   localhost   80      58s

3. Verify domain access to Keycloak via Ingress + /etc/hosts

1
2
3
4
5
6
7
8
9
curl -s -H "Host: keycloak.example.com" http://127.0.0.1 -I

HTTP/1.1 302 Found
Date: Tue, 18 Nov 2025 12:02:15 GMT
Connection: keep-alive
Location: http://keycloak.example.com/admin/
Referrer-Policy: no-referrer
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
1
2
3
echo "127.0.0.1 keycloak.example.com" | sudo tee -a /etc/hosts

127.0.0.1 keycloak.example.com
1
2
3
4
5
6
7
8
9
curl -s http://keycloak.example.com -I

HTTP/1.1 302 Found
Date: Tue, 18 Nov 2025 12:04:40 GMT
Connection: keep-alive
Location: http://keycloak.example.com/admin/
Referrer-Policy: no-referrer
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff

4. Open the Keycloak Admin console and log in

1
2
http://keycloak.example.com/admin
username, password: admin / admin

5. Create a new realm: myrealm

6. Create the test user alice in myrealm


๐Ÿงช Make the keycloak / argocd domains callable from inside the mgmt k8s cluster as well

1. Check basic networking with a curl test pod

1
2
3
4
5
6
7
8
9
10
11
12
13
14
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
  - name: curl
    image: curlimages/curl:latest
    command: ["sleep", "infinity"]
EOF

pod/curl created
  • Create a lightweight pod from the curl image for HTTP/DNS tests inside the cluster
1
2
3
4
kubectl get pod -l app=keycloak -owide

NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
keycloak-846cb4c68-lphrw   1/1     Running   0          12m   10.244.0.25   mgmt-control-plane   <none>           <none>
1
2
3
kubectl get pod -l app=keycloak -o jsonpath='{.items[0].status.podIP}'

10.244.0.25
1
2
3
4
KEYCLOAKIP=$(kubectl get pod -l app=keycloak -o jsonpath='{.items[0].status.podIP}')
echo $KEYCLOAKIP

10.244.0.25
  • Look up the keycloak pod IP and store it in a variable
1
2
3
4
5
6
7
8
kubectl exec -it curl -- ping -c 1 $KEYCLOAKIP

PING 10.244.0.25 (10.244.0.25): 56 data bytes
64 bytes from 10.244.0.25: seq=0 ttl=42 time=0.138 ms

--- 10.244.0.25 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.138/0.138/0.138 ms
  • curl ํŒŒ๋“œ์—์„œ ํ•ด๋‹น IP๋กœ ping์ด ์ž˜ ๋˜๋Š” ๊ฒƒ์„ ํ™•์ธํ•จ

2. Call the keycloak OIDC endpoint by its cluster DNS name

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
kubectl exec -it curl -- curl -s http://keycloak.default.svc.cluster.local/realms/myrealm/.well-known/openid-configuration | jq

{
  "issuer": "http://keycloak.default.svc.cluster.local/realms/myrealm",
  "authorization_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/auth",
  "token_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/token",
  "introspection_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/token/introspect",
  "userinfo_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/userinfo",
  "end_session_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/logout",
  "frontchannel_logout_session_supported": true,
  "frontchannel_logout_supported": true,
  "jwks_uri": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/certs",
  "check_session_iframe": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/login-status-iframe.html",
  ...
}

3. ๊ทธ๋Ÿฐ๋ฐ keycloak.example.com ์€ ์™œ ์•ˆ๋ ๊นŒ?

1
2
3
kubectl exec -it curl -- curl -s http://keycloak.example.com -I

command terminated with exit code 6
1
2
3
4
5
6
7
8
9
10
11
12
kubectl exec -it curl -- nslookup -debug keycloak.example.com

Server:		10.96.0.10
Address:	10.96.0.10:53

Query #0 completed in 145ms:
** server can't find keycloak.example.com: NXDOMAIN

Query #1 completed in 299ms:
** server can't find keycloak.example.com: NXDOMAIN

command terminated with exit code 1
  • Pod ๋‚ด๋ถ€์—์„œ์˜ DNS ์กฐํšŒ๋Š” ๋…ธ๋“œ /etc/hosts ๋ฅผ ์ง์ ‘ ๋ณด์ง€ ์•Š๊ณ  CoreDNS(์„œ๋น„์Šค IP 10.96.0.10)๋ฅผ ์‚ฌ์šฉํ•จ

4. Register the keycloak / argocd domains by editing the CoreDNS ConfigMap

(1) First, check the Service IPs

1
2
3
4
5
6
7
kubectl get svc keycloak
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
keycloak   ClusterIP   10.96.249.212   <none>        80/TCP    19m

kubectl get svc -n argocd argocd-server
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
argocd-server   ClusterIP   10.96.46.178   <none>        80/TCP,443/TCP   25h

(2) Edit the CoreDNS configuration

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
kubectl edit cm -n kube-system coredns

.:53 {
       ...
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        hosts {
           <CLUSTER IP> keycloak.example.com
           <CLUSTER IP> argocd.example.com
           fallthrough
        }
...

configmap/coredns edited
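For reference, with the Service IPs from step (1) the hosts stanza ends up looking roughly like this (the ClusterIPs differ in every cluster, so substitute your own values):

        hosts {
           10.96.249.212 keycloak.example.com
           10.96.46.178 argocd.example.com
           fallthrough
        }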

5. Verify in-cluster DNS / domain calls after the change

1
2
3
4
5
6
7
kubectl exec -it curl -- nslookup keycloak.example.com

Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	keycloak.example.com
Address: 10.96.249.212

๐Ÿ”‘ keycloak์— argocd๋ฅผ ์œ„ํ•œ client ์ƒ์„ฑ

1. Argo CD ์—ฐ๋™์šฉ Realm / Client ์ƒ์„ฑ

1
2
3
Realm : myrealm
Client id : argocd
name : argocd client

1
Client authentication : ON

1
2
3
4
5
Root URL : https://argocd.example.com/
Home URL : /applications
Valid redirect URIs : https://argocd.example.com/auth/callback
Valid post logout redirect URIs : https://argocd.example.com/applications
Web origins : +

1
2
# ์ƒ์„ฑ๋œ client์—์„œ โ†’ Credentials: ๋ฉ”๋ชจ ํ•ด๋‘๊ธฐ
bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN


๐Ÿ›ก๏ธ Configuring ArgoCD OIDC

1. Store the Keycloak client secret in the Argo CD Secret

1
2
3
kubectl -n argocd patch secret argocd-secret --patch='{"stringData": { "oidc.keycloak.clientSecret": "<REPLACE_WITH_CLIENT_SECRET>" }}'
kubectl -n argocd patch secret argocd-secret --patch='{"stringData": { "oidc.keycloak.clientSecret": "bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN" }}'
secret/argocd-secret patched
  • Keycloak ์—์„œ ๋ฐœ๊ธ‰๋ฐ›์€ clientSecret ๊ฐ’์„ argocd-secret ์‹œํฌ๋ฆฟ์— ์ถ”๊ฐ€ํ•จ
  • oidc.keycloak.clientSecret ํ‚ค๋กœ ์ €์žฅํ•˜์—ฌ OIDC ์„ค์ •์—์„œ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ๊ฒŒ ๊ตฌ์„ฑํ•จ
1
2
3
4
5
6
7
8
kubectl get secret -n argocd argocd-secret -o jsonpath='{.data}' | jq

{
  "admin.password": "JDJhJDEwJHRuTDZ3UUt1MzFZVUlIRW5hZkJVeXV6T0hLQWRrZ1hxTXQweTZHQ0taMzhHQTlzb2ZCM1ZP",
  "admin.passwordMtime": "MjAyNS0xMS0xN1QxMTowNDowNlo=",
  "oidc.keycloak.clientSecret": "YkpudU5xU2RIQWFXRFBuNGl4V0NBaWFjN3RPUDZuck4=",
  "server.secretkey": "bFdiZUpzZHhGNS9waGVldGwrcGdPbmUwaGJIRDFWd2E2bnE1TThnQkNRUT0="
}
  • ์ €์žฅ ๋‚ด์šฉ ํ™•์ธ

2. Add the OIDC settings to the Argo CD ConfigMap

1
2
3
4
5
6
7
8
9
10
11
kubectl patch cm argocd-cm -n argocd --type merge -p '
data:
  oidc.config: |
    name: Keycloak
    issuer: http://keycloak.example.com/realms/myrealm
    clientID: argocd
    clientSecret: bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN
    requestedScopes: ["openid", "profile", "email"]
'

configmap/argocd-cm patched
  • Add the OIDC settings to the argocd-cm ConfigMap
  • Register the OIDC provider based on Keycloak's myrealm and the argocd client (a variant that references the stored secret instead of embedding it is sketched below)
1
2
3
4
5
6
7
8
kubectl get cm -n argocd argocd-cm -o yaml | grep oidc.config: -A5

  oidc.config: |
    name: Keycloak
    issuer: http://keycloak.example.com/realms/myrealm
    clientID: argocd
    clientSecret: bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN
    requestedScopes: ["openid", "profile", "email"]
  • ์„ค์ •์ด ์ •์ƒ ๋ฐ˜์˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•จ

3. Restart the Argo CD server to pick up the OIDC settings

1
2
3
kubectl rollout restart deploy argocd-server -n argocd

deployment.apps/argocd-server restarted

๐Ÿ‘จ‍๐Ÿ’ป Argo CD login through keycloak: log out as admin, then log in as alice via keycloak

1. Argo CD ๋กœ๊ทธ์ธ ํ™”๋ฉด์—์„œ Keycloak ๋กœ๊ทธ์ธ ์ง„์ž…

  • ๋ธŒ๋ผ์šฐ์ €๊ฐ€ keycloak.example.com ์˜ myrealm ๋กœ๊ทธ์ธ ํŽ˜์ด์ง€๋กœ ๋ฆฌ๋‹ค์ด๋ ‰ํŠธ๋จ

2. Keycloak์—์„œ alice ๊ณ„์ •์œผ๋กœ ๋กœ๊ทธ์ธ

1
2
3
username: alice
password: alice123
# ์‚ฌ์šฉ์ž ์ •๋ณด ์—…๋ฐ์ดํŠธ

3. Keycloak ์„ธ์…˜์—์„œ Argo CD ํด๋ผ์ด์–ธํŠธ ๋กœ๊ทธ์ธ ์ƒํƒœ ํ™•์ธ

4. sign out


๐Ÿ“˜ jenkins ์ง์ ‘ ํŒŒ๋“œ๋กœ ๋ฐฐํฌ

1. Jenkins ๋„ค์ž„์ŠคํŽ˜์ด์Šค/PVC/Deployment/Service/Ingress ์ƒ์„ฑ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
kubectl create ns jenkins

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http
              containerPort: 8080
            - name: agent
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  namespace: jenkins
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
    - port: 50000
      targetPort: agent
      protocol: TCP
      name: agent
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  ingressClassName: nginx
  rules:
    - host: jenkins.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins-svc
                port:
                  number: 8080
EOF

namespace/jenkins created
persistentvolumeclaim/jenkins-pvc created
deployment.apps/jenkins created
service/jenkins-svc created
ingress.networking.k8s.io/jenkins-ingress created

2. Check the Jenkins resources and PVC binding

1
2
3
4
5
6
7
8
9
10
11
12
13
kubectl get deploy,svc,ep,pvc -n jenkins

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jenkins   1/1     1            1           40s

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
service/jenkins-svc   ClusterIP   10.96.167.130   <none>        8080/TCP,50000/TCP   40s

NAME                    ENDPOINTS                            AGE
endpoints/jenkins-svc   10.244.0.29:50000,10.244.0.29:8080   40s

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/jenkins-pvc   Bound    pvc-74ef6288-9211-470f-be55-a26f4628edcc   10Gi       RWO            standard       <unset>                 40s
1
2
3
4
kubectl get ingress -n jenkins jenkins-ingress

NAME              CLASS   HOSTS                 ADDRESS     PORTS   AGE
jenkins-ingress   nginx   jenkins.example.com   localhost   80      62s

3. Register the domain in /etc/hosts so a local browser can reach it

1
2
3
echo "127.0.0.1 jenkins.example.com" | sudo tee -a /etc/hosts

127.0.0.1 jenkins.example.com

4. Get the Jenkins initial password and run the first-time setup

1
2
3
kubectl exec -it -n jenkins deploy/jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword

1e92cb3d52a84da08d5ed47d564d2af1
1
http://jenkins.example.com

(1) Enter the initial password

(2) Install suggested plugins

(3) Create First Admin User

1
2
3
# ๊ด€๋ฆฌ์ž ๊ณ„์ •์„ค์ •
Username: admin
Password: qwe123

5. Jenkins domain setup for in-cluster traffic and CoreDNS integration

(1) Note the Jenkins Service ClusterIP

1
2
3
4
kubectl get svc -n jenkins

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
jenkins-svc   ClusterIP   10.96.167.130   <none>        8080/TCP,50000/TCP   23m

(2) Add the Jenkins domain to the CoreDNS hosts plugin

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
kubectl edit cm -n kube-system coredns

.:53 {
       ...
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        hosts {
           <CLUSTER IP> keycloak.example.com
           <CLUSTER IP> argocd.example.com
           <CLUSTER IP> jenkins.example.com
           fallthrough
        }
...


๐Ÿ”‘ keycloak์— jenkins๋ฅผ ์œ„ํ•œ client ์ƒ์„ฑ

1. Basic settings

1
2
Client ID : jenkins
Name : jenkins client

2. ์ธ์ฆ ๋ฐ ํ”Œ๋กœ์šฐ ์„ค์ •

1
2
Client authentication : Check
Authentication flow : Standard flow

3. ๋กœ๊ทธ์ธ ์…‹ํŒ…

1
2
3
4
5
Root URL : http://jenkins.example.com/
Home URL : http://jenkins.example.com/
Valid redirect URIs : http://jenkins.example.com/securityRealm/finishLogin
Valid post logout redirect URIs : http://jenkins.example.com
Web origins : +

4. Note the credentials

1
78sizpZAEGkf2MxdwyUkopdgA94ysahc


๐Ÿ›ฐ๏ธ Configuring Jenkins OIDC

1. Install the OpenID Connect Authentication plugin

2. Enable OIDC in the Jenkins security settings

1
2
3
4
5
6
7
8
9
10
Manage Jenkins โ†’ Security : Security Realm ์„ค์ • โ†’ ํ•˜๋‹จ Save
Login with Openid Connect
Client id : jenkins
Client secret : <keycloak ์—์„œ jenkins client ์—์„œ credentials>
Configuration mode : Discoveryโ€ฆ
- Well-know configuration endpoint http://keycloak.example.com/realms/myrealm/.well-known/openid-configuration
- Override scopes : openid email profile
Logout from OpenID Provider : Check
Security configuration
- Disable ssl verification : Check

3. Keycloak ์—ฐ๋™ ๋กœ๊ทธ์ธ ๋™์ž‘ ํ™•์ธ

  • ์ด๋ฏธ ๋ธŒ๋ผ์šฐ์ €์— alice ๋กœ Keycloak ์„ธ์…˜์ด ์‚ด์•„์žˆ๋Š” ์ƒํƒœ๋ผ์„œ login via keyclock ํด๋ฆญํ–ˆ๋”๋‹ˆ ๋ฐ”๋กœ ์ง„์ž…

4. Keycloak ์„ธ์…˜์—์„œ Jenkins/Argo CD ๋™์‹œ ์—ฐ๋™ ์ƒํƒœ ํ™•์ธ

5. Check the user info in Jenkins / Argo CD


๐Ÿ—‚๏ธ LDAP(Lightweight Directory Access Protocol)

1. LDAP ์„ ํšŒ์‚ฌ ์ฃผ์†Œ๋ก + ์ถœ์ž…์ฆ ์‹œ์Šคํ…œ์œผ๋กœ ๋น„์œ 

  • ์‰ฝ๊ฒŒ ๋งํ•ด, โ€œ์กฐ์ง ๋‚ด ์‚ฌ์šฉ์ž, ๊ทธ๋ฃน, ๊ถŒํ•œ ๋“ฑ์˜ ์ •๋ณด๋ฅผ ํŠธ๋ฆฌ ๊ตฌ์กฐ๋กœ ์ €์žฅํ•˜๊ณ  ์กฐํšŒํ•˜๋Š” ์‹œ์Šคํ…œโ€
1
2
3
4
5
6
7
dc=example,dc=org     # Base DN(Root DN)
 โ”œโ”€โ”€ ou=people
 โ”‚    โ”œโ”€โ”€ uid=alice
 โ”‚    โ””โ”€โ”€ uid=bob
 โ””โ”€โ”€ ou=groups
      โ”œโ”€โ”€ cn=devs
      โ””โ”€โ”€ cn=admins

2. Preparation: remove the alice user from Keycloak

3. Deploy OpenLDAP + phpLDAPadmin on K8s

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap
  namespace: openldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
        - name: openldap
          image: osixia/openldap:1.5.0
          ports:
            - containerPort: 389
              name: ldap
            - containerPort: 636
              name: ldaps
          env:
            - name: LDAP_ORGANISATION    # organisation name, used to seed the base LDAP entry
              value: "Example Org"
            - name: LDAP_DOMAIN          # the base DN is derived from this domain
              value: "example.org"
            - name: LDAP_ADMIN_PASSWORD  # LDAP admin password
              value: "admin"
            - name: LDAP_CONFIG_PASSWORD
              value: "admin"
        - name: phpldapadmin
          image: osixia/phpldapadmin:0.9.0
          ports:
            - containerPort: 80
              name: phpldapadmin
          env:
            - name: PHPLDAPADMIN_HTTPS
              value: "false"
            - name: PHPLDAPADMIN_LDAP_HOSTS
              value: "openldap"   # LDAP hostname inside cluster
---
apiVersion: v1
kind: Service
metadata:
  name: openldap
  namespace: openldap
spec:
  selector:
    app: openldap
  ports:
    - name: phpldapadmin
      port: 80
      targetPort: 80
      nodePort: 30000
    - name: ldap
      port: 389
      targetPort: 389
    - name: ldaps
      port: 636
      targetPort: 636
  type: NodePort
EOF

namespace/openldap created
deployment.apps/openldap created
service/openldap created
1
2
3
4
5
6
7
8
9
10
11
12
13
kubectl get deploy,pod,svc,ep -n openldap

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/openldap   1/1     1            1           33s

NAME                            READY   STATUS    RESTARTS   AGE
pod/openldap-54857b746c-rnwvf   2/2     Running   0          33s

NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                    AGE
service/openldap   NodePort   10.96.59.112   <none>        80:30000/TCP,389:32686/TCP,636:30808/TCP   33s

NAME                 ENDPOINTS                                        AGE
endpoints/openldap   10.244.0.30:80,10.244.0.30:389,10.244.0.30:636   33s
1
2
3
4
5
# Default LDAP info: log in with the Bind DN and password below
## Base DN: dc=example,dc=org
## Bind DN: cn=admin,dc=example,dc=org
## Password: admin
http://127.0.0.1:30000

4. OpenLDAP ์ปจํ…Œ์ด๋„ˆ ๋‚ด๋ถ€ ํ”„๋กœ์„ธ์Šค ํ™•์ธ

(1) openldap ์ปจํ…Œ์ด๋„ˆ ๋‚ด๋ถ€๋กœ ์ง„์ž…

1
2
kubectl -n openldap exec -it deploy/openldap -c openldap -- bash
root@openldap-54857b746c-rnwvf:/# 

(2) slapd ๋ฐ๋ชฌ ํ”„๋กœ์„ธ์Šค ํŠธ๋ฆฌ ํ™•์ธ

1
2
3
4
5
6
7
root@openldap-54857b746c-rnwvf:/# pstree -aplpst

run,1 -u /container/tool/run
  โ””โ”€slapd,440 -h ldap://openldap-54857b746c-rnwvf:389 ldaps://openldap-54857b746c-rnwvf:636 ldapi:/// -u openldap -g openldap -d 256
      โ”œโ”€{slapd},443
      โ”œโ”€{slapd},444
      โ””โ”€{slapd},445

5. Test LDAP admin authentication and query the base entries

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -H ldap://localhost:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w admin

# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.org
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Org
dc: example

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

6. Design the final LDAP tree for the lab

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
dc=example,dc=org
โ”œโ”€โ”€ ou=people
โ”‚   โ”œโ”€โ”€ uid=alice
โ”‚   โ”‚   โ”œโ”€โ”€ cn: Alice
โ”‚   โ”‚   โ”œโ”€โ”€ sn: Kim
โ”‚   โ”‚   โ”œโ”€โ”€ uid: alice
โ”‚   โ”‚   โ””โ”€โ”€ mail: alice@example.org
โ”‚   โ””โ”€โ”€ uid=bob
โ”‚       โ”œโ”€โ”€ cn: Bob
โ”‚       โ”œโ”€โ”€ sn: Lee
โ”‚       โ”œโ”€โ”€ uid: bob
โ”‚       โ””โ”€โ”€ mail: bob@example.org
โ””โ”€โ”€ ou=groups
    โ”œโ”€โ”€ cn=devs
    โ”‚   โ””โ”€โ”€ member: uid=bob,ou=people,dc=example,dc=org
    โ””โ”€โ”€ cn=admins
        โ””โ”€โ”€ member: uid=alice,ou=people,dc=example,dc=org

7. Create the OUs (organizationalUnit) with ldapadd

1
2
3
4
5
6
7
8
9
10
11
12
root@openldap-54857b746c-rnwvf:/# cat <<EOF | ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
> dn: ou=people,dc=example,dc=org
> objectClass: organizationalUnit
> ou: people
> 
> dn: ou=groups,dc=example,dc=org
> objectClass: organizationalUnit
> ou: groups
> EOF

adding new entry "ou=people,dc=example,dc=org"
adding new entry "ou=groups,dc=example,dc=org"

8. Add the users (alice, bob) with ldapadd (inetOrgPerson)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
root@openldap-54857b746c-rnwvf:/# cat <<EOF | ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
> dn: uid=alice,ou=people,dc=example,dc=org
> objectClass: inetOrgPerson
> cn: Alice
> sn: Kim
> uid: alice
> mail: alice@example.org
> userPassword: alice123
> 
> dn: uid=bob,ou=people,dc=example,dc=org
> objectClass: inetOrgPerson
> cn: Bob
> sn: Lee
> uid: bob
> mail: bob@example.org
> userPassword: bob123
> EOF

adding new entry "uid=alice,ou=people,dc=example,dc=org"
adding new entry "uid=bob,ou=people,dc=example,dc=org"

9. Add the groups (devs, admins) with ldapadd (groupOfNames)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
root@openldap-54857b746c-rnwvf:/# cat <<EOF | ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
> dn: cn=devs,ou=groups,dc=example,dc=org
> objectClass: groupOfNames
> cn: devs
> member: uid=bob,ou=people,dc=example,dc=org
> 
> dn: cn=admins,ou=groups,dc=example,dc=org
> objectClass: groupOfNames
> cn: admins
> member: uid=alice,ou=people,dc=example,dc=org
> EOF

adding new entry "cn=devs,ou=groups,dc=example,dc=org"
adding new entry "cn=admins,ou=groups,dc=example,dc=org"

10. Query OUs / users / groups with ldapsearch

(1) OU ๋ชฉ๋ก ์กฐํšŒ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -D "cn=admin,dc=example,dc=org" -w admin \
>   -b "dc=example,dc=org" "(objectClass=organizationalUnit)" ou

# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectClass=organizationalUnit)
# requesting: ou 
#

# people, example.org
dn: ou=people,dc=example,dc=org
ou: people

# groups, example.org
dn: ou=groups,dc=example,dc=org
ou: groups

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

(2) ์‚ฌ์šฉ์ž ๋ชฉ๋ก ์กฐํšŒ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -D "cn=admin,dc=example,dc=org" -w admin \
>   -b "ou=people,dc=example,dc=org" "(uid=*)" uid cn mail

# extended LDIF
#
# LDAPv3
# base <ou=people,dc=example,dc=org> with scope subtree
# filter: (uid=*)
# requesting: uid cn mail 
#

# bob, people, example.org
dn: uid=bob,ou=people,dc=example,dc=org
cn: Bob
uid: bob
mail: bob@example.org

# alice, people, example.org
dn: uid=alice,ou=people,dc=example,dc=org
cn: Alice
uid: alice
mail: alice@example.org

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

(3) ๊ทธ๋ฃน ๋ฐ ๋ฉค๋ฒ„ ๊ด€๊ณ„ ์กฐํšŒ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -D "cn=admin,dc=example,dc=org" -w admin \
>   -b "ou=groups,dc=example,dc=org" "(objectClass=groupOfNames)" cn member

# extended LDIF
#
# LDAPv3
# base <ou=groups,dc=example,dc=org> with scope subtree
# filter: (objectClass=groupOfNames)
# requesting: cn member 
#

# devs, groups, example.org
dn: cn=devs,ou=groups,dc=example,dc=org
cn: devs
member: uid=bob,ou=people,dc=example,dc=org

# admins, groups, example.org
dn: cn=admins,ou=groups,dc=example,dc=org
cn: admins
member: uid=alice,ou=people,dc=example,dc=org

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

11. Test LDAP user (alice) authentication

1
2
3
root@openldap-54857b746c-rnwvf:/# ldapwhoami -x -D "uid=alice,ou=people,dc=example,dc=org" -w alice123

dn:uid=alice,ou=people,dc=example,dc=org
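The same check works for bob, whose password was set to bob123 earlier; it should echo back dn:uid=bob,ou=people,dc=example,dc=org:

ldapwhoami -x -D "uid=bob,ou=people,dc=example,dc=org" -w bob123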

๐Ÿ”— KeyCloak์—์„œ LDAP ์„ค์ • ์‹ค์Šต

1. myrealm ์— LDAP Provider ์ถ”๊ฐ€

1
[Realm] myrealm โ†’ User Federation โ†’ Add LDAP providers

1
2
3
4
5
UI display name: ldap
Vendor: Other
Connection URL:  ldap://openldap.openldap.svc:389
Bind DN: (= Login DN) cn=admin,dc=example,dc=org
Bind Credential: admin โ‡’ Test authentication

1
2
3
4
5
6
7
Edit mode: WRITABLE
Users DN: ou=people,dc=example,dc=org
Username LDAP attribute: uid
RDN LDAP attribute: uid
UUID LDAP attribute: uid
User Object Classes: inetOrgPerson
Search scope: Subtree (search everything under the OU)

1
2
Import Users: On (LDAP → KeyCloak sync enabled)
Sync Registrations: Off (new KeyCloak users are not written back to LDAP)

2. LDAP Provider ์˜ Mappers ํ™•์ธ

1
2
User Federation โ†’ LDAP Provider ์„ ํƒ โ†’ Mappers
: user์— ๋Œ€ํ•œ Mappers๊ฐ€ ๊ธฐ๋ณธ ์กด์žฌ

3. LDAP ์—ฐ๋™ ํ›„ Argo CD / Jenkins ๋กœ๊ทธ์ธ ํ…Œ์ŠคํŠธ

(1) Argo CD ๋กœ๊ทธ์ธ ํ…Œ์ŠคํŠธ

1
2
Argo CD ์— bob(์•”ํ˜ธ bob123) ๋กœ๊ทธ์ธ ์‹œ๋„
โ†’ ์ดํ›„ alice(์•”ํ˜ธ alice123)์œผ๋กœ๋„ ๋กœ๊ทธ์ธ ์‹œ๋„

(2) Jenkins ๋กœ๊ทธ์ธ ํ…Œ์ŠคํŠธ

  • Jenkins ๋Š” ์ด๋ฏธ Keycloak OIDC ๋กœ ์—ฐ๋™๋œ ์ƒํƒœ๋ผ ํ™”๋ฉด์—์„œ ๋ณ„๋‹ค๋ฅธ ์ถ”๊ฐ€ ๋กœ๊ทธ์ธ ์ ˆ์ฐจ ์—†์ด ๋Œ€์‹œ๋ณด๋“œ๋กœ ์ง„์ž…๋˜๋Š” ๊ฒƒ ํ™•์ธํ•จ

(3) Check the Keycloak sessions

  • Both Argo CD and Jenkins now use the same identity (keyed on uid), completing the SSO setup

๐Ÿšง Trying Argo CD as a regular user: the bob account from the LDAP devs group

1. Deploy guestbook to multiple clusters with a sample ApplicationSet

(1) ApplicationSet ์ƒ์„ฑ

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{.name}}-guestbook'
      labels:
        managed-by: applicationset
    spec:
      project: "default"
      source:
        repoURL: https://github.com/gasida/cicd-study
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.server}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
EOF

applicationset.argoproj.io/guestbook created

(2) ApplicationSet ๋ผ๋ฒจ ๊ธฐ๋ฐ˜ ๋™๊ธฐํ™” ์‹คํ–‰

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
argocd app sync -l managed-by=applicationset

TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME    STATUS    HEALTH        HOOK  MESSAGE
2025-11-19T00:03:17+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-19T00:03:17+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-19T00:03:17+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              deployment.apps/guestbook-ui created
2025-11-19T00:03:17+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              service/guestbook-ui created

Name:               argocd/dev-k8s-guestbook
Project:            default
Server:             https://172.18.0.3:6443
Namespace:          guestbook
URL:                https://argocd.example.com/applications/argocd/dev-k8s-guestbook
Source:
- Repo:             https://github.com/gasida/cicd-study
  Target:           HEAD
  Path:             guestbook
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to HEAD (b2894c6)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase:              Succeeded
Start:              2025-11-19 00:03:17 +0900 KST
Finished:           2025-11-19 00:03:17 +0900 KST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME          STATUS  HEALTH       HOOK  MESSAGE
       Service     guestbook  guestbook-ui  Synced  Healthy            service/guestbook-ui created
apps   Deployment  guestbook  guestbook-ui  Synced  Progressing        deployment.apps/guestbook-ui created
TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME    STATUS    HEALTH        HOOK  MESSAGE
2025-11-19T00:03:18+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-19T00:03:18+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-19T00:03:18+09:00          Namespace                         guestbook   Running   Synced              namespace/guestbook created
2025-11-19T00:03:18+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              service/guestbook-ui created
2025-11-19T00:03:18+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              deployment.apps/guestbook-ui created

Name:               argocd/in-cluster-guestbook
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          guestbook
URL:                https://argocd.example.com/applications/argocd/in-cluster-guestbook
Source:
- Repo:             https://github.com/gasida/cicd-study
  Target:           HEAD
  Path:             guestbook
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to HEAD (b2894c6)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase:              Succeeded
Start:              2025-11-19 00:03:18 +0900 KST
Finished:           2025-11-19 00:03:18 +0900 KST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME          STATUS   HEALTH       HOOK  MESSAGE
       Namespace              guestbook     Running  Synced             namespace/guestbook created
       Service     guestbook  guestbook-ui  Synced   Healthy            service/guestbook-ui created
apps   Deployment  guestbook  guestbook-ui  Synced   Progressing        deployment.apps/guestbook-ui created
TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME    STATUS    HEALTH        HOOK  MESSAGE
2025-11-19T00:03:19+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-19T00:03:19+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              
2025-11-19T00:03:19+09:00            Service   guestbook          guestbook-ui  OutOfSync  Missing              service/guestbook-ui created
2025-11-19T00:03:19+09:00   apps  Deployment   guestbook          guestbook-ui  OutOfSync  Missing              deployment.apps/guestbook-ui created

Name:               argocd/prd-k8s-guestbook
Project:            default
Server:             https://172.18.0.4:6443
Namespace:          guestbook
URL:                https://argocd.example.com/applications/argocd/prd-k8s-guestbook
Source:
- Repo:             https://github.com/gasida/cicd-study
  Target:           HEAD
  Path:             guestbook
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to HEAD (b2894c6)
Health Status:      Progressing

Operation:          Sync
Sync Revision:      b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase:              Succeeded
Start:              2025-11-19 00:03:19 +0900 KST
Finished:           2025-11-19 00:03:19 +0900 KST
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP  KIND        NAMESPACE  NAME          STATUS  HEALTH       HOOK  MESSAGE
       Service     guestbook  guestbook-ui  Synced  Healthy            service/guestbook-ui created
apps   Deployment  guestbook  guestbook-ui  Synced  Progressing        deployment.apps/guestbook-ui created

2. ๊ฐ ํด๋Ÿฌ์Šคํ„ฐ์— ๋ฐฐํฌ๋œ guestbook ํŒŒ๋“œ ์ƒํƒœ ํ™•์ธ

1
2
3
4
5
6
7
8
9
10
11
12
k8s1 get pod -n guestbook # mgmt
k8s2 get pod -n guestbook # dev
k8s3 get pod -n guestbook # prd

NAME                            READY   STATUS    RESTARTS   AGE
guestbook-ui-85db984648-8rch8   1/1     Running   0          32s

NAME                            READY   STATUS    RESTARTS   AGE
guestbook-ui-85db984648-k7wms   1/1     Running   0          33s

NAME                            READY   STATUS    RESTARTS   AGE
guestbook-ui-85db984648-7bb75   1/1     Running   0          32s

3. What bob (LDAP devs group) sees in the Argo CD UI

(1) Applications view

  • dev-k8s-guestbook, in-cluster-guestbook, and prd-k8s-guestbook actually exist, but to the bob account the list appears empty

(2) Clusters view

  • The mgmt, dev, and prd clusters are all connected, but bob's permissions cannot list them

๐Ÿ‘ฅ Sync OpenLDAP group info into Keycloak

1. Open the LDAP provider

1
2
Keycloak myrealm โ†’ User Federation โ†’ LDAP Provider ์ง„์ž… ํ›„
Mappers ํƒญ์—์„œ Add mapper ํด๋ฆญ

2. Configure the mapper

1
2
3
4
5
6
7
8
name : ldap-groups
Mapper type: group-ldap-mapper
LDAP Groups DN : ou=groups,dc=example,dc=org
Group Name LDAP Attribute: cn
Group Object Classes: groupOfNames
Membership LDAP attribute: member
Membership attribute type: DN
Mode: READ_ONLY

3. Sync LDAP groups to Keycloak


๐ŸŒ€ Keycloak์—์„œ ํ† ํฐ์— Group ์ „๋‹ฌ์„ ์œ„ํ•œ ์„ค์ • : ArgoCD Client ์„ค์ •

1. Create the groups client scope

Name (groups), leave the rest at the defaults

2. Add a Group Membership mapper to the groups scope

In that client scope, click Mappers → click [Configure a new mapper]

โ€˜Group Membershipโ€™ ์„ ํƒ ํ›„ Name, Token Claim Name ์— groups ์ž…๋ ฅ

3. argocd ํด๋ผ์ด์–ธํŠธ์— groups Scope ์—ฐ๊ฒฐ

clients์—์„œ argocd ํด๋ฆญ

Go to the [Client scopes] tab

Add client scope ํด๋ฆญ ํ›„ ์ƒ์„ฑํ•œ groups ์„ ํƒ / [Add] ์„ ํƒ ํ›„ ๋“œ๋กญ๋‹ค์šด์˜ Default ์„ ํƒ

4. Add the groups scope to the Argo CD OIDC config

1
2
3
4
5
6
7
kubectl edit cm -n argocd argocd-cm

...
    requestedScopes: ["openid", "profile", "email" , "groups"]
-> ์ ์šฉ์„ ์œ„ํ•ด์„œ 15์ดˆ ์ •๋„ ํ›„์— ์•„๋ž˜ ๋กœ๊ทธ์ธ ์ง„ํ–‰

configmap/argocd-cm edited

5. ๊ฒฐ๊ณผ ํ™•์ธ

scope์— groups๊ฐ€ ์ถ”๊ฐ€๋จ

user info์— group์ด ์ถ”๊ฐ€๋จ

But the Argo CD UI still shows no Applications or Clusters


๐Ÿง‘‍⚖️ Assigning Argo CD RBAC

1. Edit the Argo CD RBAC ConfigMap

1
2
3
4
5
6
7
8
kubectl edit cm argocd-rbac-cm -n argocd
...
data:
  policy.csv: |
    g, devs, role:admin
...

configmap/argocd-rbac-cm edited

  • RBAC rules of the form g, <group>, <role>
  • Every user in the devs group gets role:admin (a more restrictive variant is sketched below)

2. Verify the result


๐Ÿชช Jenkins์˜ OIDC ์ธ์ฆ ์‹œ scope์— groups ์ถ”๊ฐ€ ์„ค์ •

1. jenkins ํด๋ผ์ด์–ธํŠธ์— groups scope ์ถ”๊ฐ€

jenkins client ์—์„œ add client scope ํด๋ฆญ ํ›„ groups๋ฅผ default ๋กœ ์ถ”๊ฐ€

2. Reflect the groups scope in the Jenkins OIDC settings

์„ค์ • โ†’ Security โ†’ ์•„๋ž˜ ์ฒ˜๋Ÿผ scopes์— groups์™€ groups fileld name์— groups ์ถ”๊ฐ€

๋‹ค์‹œ ๋กœ๊ทธ์ธํ•˜๋ฉด ์‚ฌ์šฉ์ž ์ƒ์„ธ ํ™”๋ฉด์— Groups ํ•ญ๋ชฉ์ด ํ‘œ์‹œ

3. Cleanup

(1) Delete the three kind clusters

1
2
3
4
5
6
7
8
kind delete cluster --name mgmt ; kind delete cluster --name dev ; kind delete cluster --name prd

Deleting cluster "mgmt" ...
Deleted nodes: ["mgmt-control-plane"]
Deleting cluster "dev" ...
Deleted nodes: ["dev-control-plane"]
Deleting cluster "prd" ...
Deleted nodes: ["prd-control-plane"]

(2) Remove the lab domains added to /etc/hosts (a hedged one-liner follows)
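A quick way to drop every lab entry matching example.com (on macOS, sed needs -i '' instead of -i):

sudo sed -i '/example\.com/d' /etc/hosts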

This post is licensed under CC BY 4.0 by the author.