Deploy a kind mgmt k8s cluster + ingress-nginx + Argo CD
1. Create the kind mgmt cluster
```sh
kind create cluster --name mgmt --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  labels:
    ingress-ready: true
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  - containerPort: 30000
    hostPort: 30000
EOF
Creating cluster "mgmt" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-mgmt"
You can now use your cluster with:
kubectl cluster-info --context kind-mgmt
Have a nice day! 👋
```
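As a quick sanity check (not in the original log), the host-port mappings declared under extraPortMappings can be listed straight from Docker:

```sh
# Ports 80, 443, and 30000 should each be published to the host
docker port mgmt-control-plane
```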
2. Deploy the NGINX Ingress Controller
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
```
3. Enable the Ingress SSL passthrough option
```sh
kubectl get deployment ingress-nginx-controller -n ingress-nginx -o yaml \
| sed '/- --publish-status-address=localhost/a\
        - --enable-ssl-passthrough' | kubectl apply -f -
deployment.apps/ingress-nginx-controller configured
```
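An equivalent, less brittle way to add the flag (a sketch, not from the original session) is a JSON patch that appends to the controller's args array:

```sh
# Append --enable-ssl-passthrough without round-tripping the YAML through sed
kubectl -n ingress-nginx patch deployment ingress-nginx-controller --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'
```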
4. Generate a self-signed certificate for Argo CD
```sh
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout argocd.example.com.key \
  -out argocd.example.com.crt \
  -subj "/CN=argocd.example.com/O=argocd"
```
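Optionally, the generated certificate can be inspected before loading it into a secret:

```sh
# Confirm the subject and validity window of the self-signed certificate
openssl x509 -in argocd.example.com.crt -noout -subject -dates
```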
5. Create the Argo CD namespace and TLS secret
```sh
kubectl create ns argocd
namespace/argocd created
```
```sh
kubectl -n argocd create secret tls argocd-server-tls \
  --cert=argocd.example.com.crt \
  --key=argocd.example.com.key
secret/argocd-server-tls created
```
6. Install Argo CD with Helm
```sh
cat <<EOF > argocd-values.yaml
global:
  domain: argocd.example.com
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    tls: true
EOF
```
```sh
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd --version 9.0.5 -f argocd-values.yaml --namespace argocd
NAME: argocd
LAST DEPLOYED: Mon Nov 17 20:00:25 2025
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli)
```
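Before logging in, it can help to wait for the server Deployment to finish rolling out (a small check, not part of the original log):

```sh
# Block until the Argo CD API server is available
kubectl -n argocd rollout status deployment argocd-server
```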
7. Register the Argo CD domain in /etc/hosts
```sh
echo "127.0.0.1 argocd.example.com" | sudo tee -a /etc/hosts
cat /etc/hosts
...
127.0.0.1 argocd.example.com
```
- Register argocd.example.com in /etc/hosts so that it resolves to 127.0.0.1 in the local environment
8. Verify HTTPS access to Argo CD with curl
```sh
curl -vk https://argocd.example.com/
* Host argocd.example.com:443 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:443...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* SSL Trust: peer verification disabled
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / X25519MLKEM768 / RSASSA-PSS
...
```
9. Retrieve the initial Argo CD admin password and set a variable
```sh
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ;echo
r0PAK1j7tcDasX7Q
```
```sh
ARGOPW=<initial admin password>
ARGOPW=r0PAK1j7tcDasX7Q
```
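As the chart NOTES suggest, the bootstrap secret can be deleted once the password has been recorded elsewhere:

```sh
# Optional cleanup recommended by the Argo CD getting-started guide
kubectl -n argocd delete secret argocd-initial-admin-secret
```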
10. Log in with the Argo CD CLI
```sh
argocd login argocd.example.com --insecure --username admin --password $ARGOPW
'admin:login' logged in successfully
Context 'argocd.example.com' updated
```
11. Change the admin account password
```sh
argocd account update-password --current-password $ARGOPW --new-password qwe12345
Password updated
Context 'argocd.example.com' updated
```
12. Check the Argo CD web console address
```
https://argocd.example.com
```
Deploy kind dev/prd k8s clusters & edit the k8s credentials
1. Check the kubectl contexts before installation
```sh
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kind-mgmt kind-mgmt kind-mgmt
```
- At this point only the kind-mgmt cluster exists, and the default context is set to kind-mgmt
2. Check the kind Docker network
```sh
docker network ls
NETWORK ID NAME DRIVER SCOPE
f5ad53882464 bridge bridge local
bec308f23ee5 host host local
1da18f85ffec kind bridge local
225e867f21f9 none null local
```
3. Check the mgmt-control-plane container IP
```sh
docker network inspect kind | jq
[
{
"Name": "kind",
"Id": "1da18f85ffecfc8a2170a57b9369bf4387586703d6c83f3accf900e9145b7772",
"Created": "2025-10-18T15:06:51.081819223+09:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv4": true,
"EnableIPv6": true,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
},
{
"Subnet": "fc00:f853:ccd:e793::/64",
"Gateway": "fc00:f853:ccd:e793::1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4466ac122129f82aef62ae613cf1b1fa9025f61e3337f49bd9f810c5090054b3": {
"Name": "mgmt-control-plane",
"EndpointID": "b831183bd631a8adfdfe4f5659ebdc5192fa74b0c5e9ddada9b2e4b638190565",
"MacAddress": "0e:cf:0f:10:0d:55",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": "fc00:f853:ccd:e793::2/64"
}
},
"Options": {
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
```
- The mgmt-control-plane container's IPv4 address is confirmed as 172.18.0.2
- Used later as a reference for dev/prd cluster communication and kubeconfig edits
4. Create the kind dev/prd clusters
```sh
kind create cluster --name dev --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000
    hostPort: 31000
EOF
Creating cluster "dev" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-dev"
You can now use your cluster with:
kubectl cluster-info --context kind-dev
Have a nice day! 👋
```
```sh
kind create cluster --name prd --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32000
    hostPort: 32000
EOF
Creating cluster "prd" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-prd"
You can now use your cluster with:
kubectl cluster-info --context kind-prd
Have a nice day! 👋
```
5. Check the context list after creating the dev/prd clusters
```sh
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
kind-dev kind-dev kind-dev
kind-mgmt kind-mgmt kind-mgmt
* kind-prd kind-prd kind-prd
```
- After creation, the kind-dev and kind-prd contexts are added to kubeconfig
- The default context points to the most recently created cluster, kind-prd
6. Switch the context back to the mgmt cluster
```sh
kubectl config use-context kind-mgmt
Switched to context "kind-mgmt".
```
```sh
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
kind-dev kind-dev kind-dev
* kind-mgmt kind-mgmt kind-mgmt
kind-prd kind-prd kind-prd
```
7. Check node status on each cluster (mgmt/dev/prd)
```sh
kubectl get node -v=6 --context kind-mgmt
kubectl get node -v=6 --context kind-dev
kubectl get node -v=6 --context kind-prd
I1117 20:13:19.217800 121966 cmd.go:527] kubectl command headers turned on
I1117 20:13:19.224307 121966 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I1117 20:13:19.225823 121966 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 20:13:19.225838 121966 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 20:13:19.225843 121966 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 20:13:19.225847 121966 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 20:13:19.225852 121966 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 20:13:19.237944 121966 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:42415/api/v1/nodes?limit=500" status="200 OK" milliseconds=7
NAME STATUS ROLES AGE VERSION
mgmt-control-plane Ready control-plane 17m v1.32.8
I1117 20:13:19.286485 121989 cmd.go:527] kubectl command headers turned on
I1117 20:13:19.293572 121989 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I1117 20:13:19.295187 121989 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 20:13:19.295222 121989 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 20:13:19.295233 121989 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 20:13:19.295243 121989 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 20:13:19.295253 121989 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 20:13:19.305632 121989 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:35871/api?timeout=32s" status="200 OK" milliseconds=10
I1117 20:13:19.307735 121989 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:35871/apis?timeout=32s" status="200 OK" milliseconds=1
I1117 20:13:19.315440 121989 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:35871/api/v1/nodes?limit=500" status="200 OK" milliseconds=2
NAME STATUS ROLES AGE VERSION
dev-control-plane Ready control-plane 2m21s v1.32.8
I1117 20:13:19.363680 122008 cmd.go:527] kubectl command headers turned on
I1117 20:13:19.369643 122008 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I1117 20:13:19.370592 122008 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 20:13:19.370614 122008 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 20:13:19.370620 122008 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 20:13:19.370625 122008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 20:13:19.370631 122008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 20:13:19.378161 122008 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:34469/api?timeout=32s" status="200 OK" milliseconds=7
I1117 20:13:19.380831 122008 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:34469/apis?timeout=32s" status="200 OK" milliseconds=1
I1117 20:13:19.388897 122008 round_trippers.go:632] "Response" verb="GET" url="https://127.0.0.1:34469/api/v1/nodes?limit=500" status="200 OK" milliseconds=2
NAME STATUS ROLES AGE VERSION
prd-control-plane Ready control-plane 106s v1.32.8
```
8. Check Pod status on each cluster
```sh
kubectl get pod -A --context kind-mgmt
kubectl get pod -A --context kind-dev
kubectl get pod -A --context kind-prd
NAMESPACE NAME READY STATUS RESTARTS AGE
argocd argocd-application-controller-0 1/1 Running 0 13m
argocd argocd-applicationset-controller-bbff79c6f-9qcf8 1/1 Running 0 13m
argocd argocd-dex-server-6877ddf4f8-fvfll 1/1 Running 0 13m
argocd argocd-notifications-controller-7b5658fc47-26p24 1/1 Running 0 13m
argocd argocd-redis-7d948674-xnl9k 1/1 Running 0 13m
argocd argocd-repo-server-7679dc55f5-swj2g 1/1 Running 0 13m
argocd argocd-server-7d769b6f48-2ts94 1/1 Running 0 13m
ingress-nginx ingress-nginx-controller-5b89cb54f9-5gvfh 1/1 Running 0 16m
kube-system coredns-668d6bf9bc-d5bn7 1/1 Running 0 17m
kube-system coredns-668d6bf9bc-vb4p7 1/1 Running 0 17m
kube-system etcd-mgmt-control-plane 1/1 Running 0 18m
kube-system kindnet-jtm8t 1/1 Running 0 17m
kube-system kube-apiserver-mgmt-control-plane 1/1 Running 0 18m
kube-system kube-controller-manager-mgmt-control-plane 1/1 Running 0 18m
kube-system kube-proxy-b9pmh 1/1 Running 0 17m
kube-system kube-scheduler-mgmt-control-plane 1/1 Running 0 18m
local-path-storage local-path-provisioner-7dc846544d-wltkn 1/1 Running 0 17m
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-668d6bf9bc-lddkn 1/1 Running 0 3m17s
kube-system coredns-668d6bf9bc-ltg98 1/1 Running 0 3m17s
kube-system etcd-dev-control-plane 1/1 Running 0 3m23s
kube-system kindnet-2gks8 1/1 Running 0 3m18s
kube-system kube-apiserver-dev-control-plane 1/1 Running 0 3m23s
kube-system kube-controller-manager-dev-control-plane 1/1 Running 0 3m23s
kube-system kube-proxy-zmdnk 1/1 Running 0 3m18s
kube-system kube-scheduler-dev-control-plane 1/1 Running 0 3m23s
local-path-storage local-path-provisioner-7dc846544d-qtmxj 1/1 Running 0 3m17s
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-668d6bf9bc-2hm7g 1/1 Running 0 2m42s
kube-system coredns-668d6bf9bc-7kbbg 1/1 Running 0 2m42s
kube-system etcd-prd-control-plane 1/1 Running 0 2m48s
kube-system kindnet-lqc7k 1/1 Running 0 2m42s
kube-system kube-apiserver-prd-control-plane 1/1 Running 0 2m48s
kube-system kube-controller-manager-prd-control-plane 1/1 Running 0 2m48s
kube-system kube-proxy-kkhxb 1/1 Running 0 2m42s
kube-system kube-scheduler-prd-control-plane 1/1 Running 0 2m49s
local-path-storage local-path-provisioner-7dc846544d-dj54f 1/1 Running 0 2m42s
```
9. Set up kubectl aliases
```sh
alias k8s1='kubectl --context kind-mgmt'
alias k8s2='kubectl --context kind-dev'
alias k8s3='kubectl --context kind-prd'
```
```sh
# Quick node overview for each cluster
k8s1 get node -owide
k8s2 get node -owide
k8s3 get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
mgmt-control-plane Ready control-plane 19m v1.32.8 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 6.17.8-arch1-1 containerd://2.1.3
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-control-plane Ready control-plane 4m16s v1.32.8 172.18.0.3 <none> Debian GNU/Linux 12 (bookworm) 6.17.8-arch1-1 containerd://2.1.3
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
prd-control-plane Ready control-plane 3m41s v1.32.8 172.18.0.4 <none> Debian GNU/Linux 12 (bookworm) 6.17.8-arch1-1 containerd://2.1.3
```
10. Re-check each control-plane IP on the Docker network
```sh
docker network inspect kind | grep -E 'Name|IPv4Address'
"Name": "kind",
"Name": "prd-control-plane",
"IPv4Address": "172.18.0.4/16",
"Name": "mgmt-control-plane",
"IPv4Address": "172.18.0.2/16",
"Name": "dev-control-plane",
"IPv4Address": "172.18.0.3/16",
```
11. Verify connectivity between the control-plane containers
```sh
docker exec -it mgmt-control-plane curl -sk https://dev-control-plane:6443/version
{
"major": "1",
"minor": "32",
"gitVersion": "v1.32.8",
"gitCommit": "2e83bc4bf31e88b7de81d5341939d5ce2460f46f",
"gitTreeState": "clean",
"buildDate": "2025-08-13T14:21:22Z",
"goVersion": "go1.23.11",
"compiler": "gc",
"platform": "linux/amd64"
}
```
```sh
docker exec -it mgmt-control-plane curl -sk https://prd-control-plane:6443/version
{
"major": "1",
"minor": "32",
"gitVersion": "v1.32.8",
"gitCommit": "2e83bc4bf31e88b7de81d5341939d5ce2460f46f",
"gitTreeState": "clean",
"buildDate": "2025-08-13T14:21:22Z",
"goVersion": "go1.23.11",
"compiler": "gc",
"platform": "linux/amd64"
}
```
```sh
docker exec -it dev-control-plane curl -sk https://prd-control-plane:6443/version
{
"major": "1",
"minor": "32",
"gitVersion": "v1.32.8",
"gitCommit": "2e83bc4bf31e88b7de81d5341939d5ce2460f46f",
"gitTreeState": "clean",
"buildDate": "2025-08-13T14:21:22Z",
"goVersion": "go1.23.11",
"compiler": "gc",
"platform": "linux/amd64"
}
```
- From mgmt-control-plane, curl the /version endpoint of the dev-control-plane and prd-control-plane API servers
- From dev-control-plane, curl the prd-control-plane API server's /version endpoint as well
12. Run ping tests from the host to the kind network IPs
```sh
ping -c 1 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.129 ms
--- 172.18.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
```
```sh
ping -c 1 172.18.0.3
PING 172.18.0.3 (172.18.0.3) 56(84) bytes of data.
64 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.079 ms
--- 172.18.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
```
```sh
ping -c 1 172.18.0.4
PING 172.18.0.4 (172.18.0.4) 56(84) bytes of data.
64 bytes from 172.18.0.4: icmp_seq=1 ttl=64 time=0.172 ms
--- 172.18.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
```
- Confirms the host can communicate with each control-plane container
13. Point the dev/prd kubeconfig API server addresses at the container IPs
```sh
vi ~/.kube/config
...
    server: https://172.18.0.3:6443
  name: kind-dev
...
    server: https://172.18.0.4:6443
  name: kind-prd
...
```
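The same change can be made non-interactively (a sketch using the container IPs observed above; it works because the kind API server certificate includes the node IP in its SANs, as the verified connections in the next step confirm):

```sh
# Rewrite the API server endpoints in kubeconfig without opening an editor
kubectl config set-cluster kind-dev --server=https://172.18.0.3:6443
kubectl config set-cluster kind-prd --server=https://172.18.0.4:6443
```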
14. Re-verify the dev/prd API server connections after the kubeconfig edit
```sh
kubectl get node -v=6 --context kind-dev
kubectl get node -v=6 --context kind-prd
I1117 23:24:11.853566 160346 cmd.go:527] kubectl command headers turned on
I1117 23:24:11.859417 160346 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I1117 23:24:11.860050 160346 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 23:24:11.860066 160346 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 23:24:11.860073 160346 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 23:24:11.860080 160346 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 23:24:11.860086 160346 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 23:24:11.866956 160346 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.3:6443/api?timeout=32s" status="200 OK" milliseconds=6
I1117 23:24:11.868615 160346 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.3:6443/apis?timeout=32s" status="200 OK" milliseconds=0
I1117 23:24:11.876303 160346 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.3:6443/api/v1/nodes?limit=500" status="200 OK" milliseconds=1
NAME STATUS ROLES AGE VERSION
dev-control-plane Ready control-plane 3h13m v1.32.8
I1117 23:24:11.918305 160370 cmd.go:527] kubectl command headers turned on
I1117 23:24:11.923330 160370 loader.go:402] Config loaded from file: /home/devshin/.kube/config
I1117 23:24:11.924213 160370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1117 23:24:11.924278 160370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1117 23:24:11.924302 160370 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1117 23:24:11.924337 160370 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1117 23:24:11.924358 160370 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1117 23:24:11.929299 160370 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.4:6443/api?timeout=32s" status="200 OK" milliseconds=4
I1117 23:24:11.930766 160370 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.4:6443/apis?timeout=32s" status="200 OK" milliseconds=0
I1117 23:24:11.938976 160370 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.4:6443/api/v1/nodes?limit=500" status="200 OK" milliseconds=1
NAME STATUS ROLES AGE VERSION
prd-control-plane Ready control-plane 3h12m v1.32.8
```
Register the other K8S clusters in Argo CD
1. Check the default in-cluster entry in Argo CD
```sh
argocd cluster list
SERVER NAME VERSION STATUS MESSAGE PROJECT
https://kubernetes.default.svc in-cluster Unknown Cluster has no applications and is not being monitored.
```
```sh
argocd cluster list -o json | jq
[
{
"server": "https://kubernetes.default.svc",
"name": "in-cluster",
"config": {
"tlsClientConfig": {
"insecure": false
}
},
"connectionState": {
"status": "Unknown",
"message": "Cluster has no applications and is not being monitored.",
"attemptedAt": "2025-11-17T14:25:59Z"
},
"info": {
"connectionState": {
"status": "Unknown",
"message": "Cluster has no applications and is not being monitored.",
"attemptedAt": "2025-11-17T14:25:59Z"
},
"cacheInfo": {},
"applicationsCount": 0
}
}
]
```
2. Check the secrets in the argocd namespace
```sh
kubectl get secret -n argocd
NAME TYPE DATA AGE
argocd-initial-admin-secret Opaque 1 3h25m
argocd-notifications-secret Opaque 0 3h25m
argocd-redis Opaque 1 3h25m
argocd-secret Opaque 3 3h25m
argocd-server-tls kubernetes.io/tls 2 3h27m
sh.helm.release.v1.argocd.v1 helm.sh/release.v1 1 3h26m
```
3. List the kube-system ServiceAccounts on the dev/prd clusters
```sh
k8s2 get sa -n kube-system
NAME SECRETS AGE
attachdetach-controller 0 3h16m
bootstrap-signer 0 3h16m
certificate-controller 0 3h16m
clusterrole-aggregation-controller 0 3h16m
coredns 0 3h16m
cronjob-controller 0 3h16m
daemon-set-controller 0 3h16m
default 0 3h16m
deployment-controller 0 3h16m
disruption-controller 0 3h16m
endpoint-controller 0 3h16m
endpointslice-controller 0 3h16m
endpointslicemirroring-controller 0 3h16m
ephemeral-volume-controller 0 3h16m
expand-controller 0 3h16m
generic-garbage-collector 0 3h16m
horizontal-pod-autoscaler 0 3h16m
job-controller 0 3h16m
kindnet 0 3h16m
kube-proxy 0 3h16m
legacy-service-account-token-cleaner 0 3h16m
namespace-controller 0 3h16m
node-controller 0 3h16m
persistent-volume-binder 0 3h16m
pod-garbage-collector 0 3h16m
pv-protection-controller 0 3h16m
pvc-protection-controller 0 3h16m
replicaset-controller 0 3h16m
replication-controller 0 3h16m
resourcequota-controller 0 3h16m
root-ca-cert-publisher 0 3h16m
service-account-controller 0 3h16m
statefulset-controller 0 3h16m
token-cleaner 0 3h16m
ttl-after-finished-controller 0 3h16m
ttl-controller 0 3h16m
validatingadmissionpolicy-status-controller 0 3h16m
```
```sh
k8s3 get sa -n kube-system
NAME SECRETS AGE
attachdetach-controller 0 3h15m
bootstrap-signer 0 3h16m
certificate-controller 0 3h16m
clusterrole-aggregation-controller 0 3h16m
coredns 0 3h16m
cronjob-controller 0 3h16m
daemon-set-controller 0 3h15m
default 0 3h15m
deployment-controller 0 3h16m
disruption-controller 0 3h15m
endpoint-controller 0 3h15m
endpointslice-controller 0 3h16m
endpointslicemirroring-controller 0 3h16m
ephemeral-volume-controller 0 3h16m
expand-controller 0 3h16m
generic-garbage-collector 0 3h16m
horizontal-pod-autoscaler 0 3h16m
job-controller 0 3h16m
kindnet 0 3h16m
kube-proxy 0 3h16m
legacy-service-account-token-cleaner 0 3h16m
namespace-controller 0 3h16m
node-controller 0 3h16m
persistent-volume-binder 0 3h16m
pod-garbage-collector 0 3h16m
pv-protection-controller 0 3h16m
pvc-protection-controller 0 3h16m
replicaset-controller 0 3h16m
replication-controller 0 3h16m
resourcequota-controller 0 3h15m
root-ca-cert-publisher 0 3h16m
service-account-controller 0 3h16m
statefulset-controller 0 3h15m
token-cleaner 0 3h16m
ttl-after-finished-controller 0 3h16m
ttl-controller 0 3h16m
validatingadmissionpolicy-status-controller 0 3h16m
```
4. Register the dev cluster with Argo CD
```sh
argocd cluster add kind-dev --name dev-k8s
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kind-dev` with full cluster level privileges. Do you want to continue [y/N]? y
{"level":"info","msg":"ServiceAccount \"argocd-manager\" created in namespace \"kube-system\"","time":"2025-11-17T23:28:26+09:00"}
{"level":"info","msg":"ClusterRole \"argocd-manager-role\" created","time":"2025-11-17T23:28:26+09:00"}
{"level":"info","msg":"ClusterRoleBinding \"argocd-manager-role-binding\" created","time":"2025-11-17T23:28:26+09:00"}
{"level":"info","msg":"Created bearer token secret \"argocd-manager-long-lived-token\" for ServiceAccount \"argocd-manager\"","time":"2025-11-17T23:28:26+09:00"}
Cluster 'https://172.18.0.3:6443' added
```
5. Confirm the argocd-manager ServiceAccount created on the dev cluster
```sh
k8s2 get sa -n kube-system argocd-manager
NAME SECRETS AGE
argocd-manager 0 29s
```
6. Confirm the cluster secret Argo CD created for the dev cluster
```sh
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster
NAME TYPE DATA AGE
cluster-172.18.0.3-4100004299 Opaque 3 2m36s
```
7. Decode and inspect the dev cluster secret with k9s
```
k9s -> :secret argocd -> select the secret, press d (Describe), then x (Toggle Decode)
```
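The same payload can be read without k9s (a sketch, using the secret name from the previous step):

```sh
# Decode the connection config stored in the Argo CD cluster secret
kubectl -n argocd get secret cluster-172.18.0.3-4100004299 \
  -o jsonpath='{.data.config}' | base64 -d | jq
```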
8. Check the dev-k8s registration result
```sh
argocd cluster list
SERVER NAME VERSION STATUS MESSAGE PROJECT
https://172.18.0.3:6443 dev-k8s Unknown Cluster has no applications and is not being monitored.
https://kubernetes.default.svc in-cluster Unknown Cluster has no applications and is not being monitored.
```
- Confirms the dev cluster is now registered in Argo CD
9. Register the prd cluster with Argo CD
```sh
argocd cluster add kind-prd --name prd-k8s --yes
{"level":"info","msg":"ServiceAccount \"argocd-manager\" created in namespace \"kube-system\"","time":"2025-11-17T23:36:08+09:00"}
{"level":"info","msg":"ClusterRole \"argocd-manager-role\" created","time":"2025-11-17T23:36:08+09:00"}
{"level":"info","msg":"ClusterRoleBinding \"argocd-manager-role-binding\" created","time":"2025-11-17T23:36:08+09:00"}
{"level":"info","msg":"Created bearer token secret \"argocd-manager-long-lived-token\" for ServiceAccount \"argocd-manager\"","time":"2025-11-17T23:36:08+09:00"}
Cluster 'https://172.18.0.4:6443' added
```
10. Confirm the argocd-manager ServiceAccount on the prd cluster
```sh
k8s3 get sa -n kube-system argocd-manager
NAME SECRETS AGE
argocd-manager 0 15s
```
11. List the Argo CD cluster secrets for dev/prd
```sh
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster
NAME TYPE DATA AGE
cluster-172.18.0.3-4100004299 Opaque 3 8m18s
cluster-172.18.0.4-568336172 Opaque 3 36s
```
12. Check the final Argo CD cluster list
```sh
argocd cluster list
SERVER NAME VERSION STATUS MESSAGE PROJECT
https://172.18.0.3:6443 dev-k8s Unknown Cluster has no applications and is not being monitored.
https://172.18.0.4:6443 prd-k8s Unknown Cluster has no applications and is not being monitored.
https://kubernetes.default.svc in-cluster Unknown Cluster has no applications and is not being monitored.
```
Deploy Nginx to each of the three K8S clusters with Argo CD
1. Check the dev/prd cluster IPs on the kind network and set environment variables
```sh
docker network inspect kind | grep -E 'Name|IPv4Address'
"Name": "kind",
"Name": "prd-control-plane",
"IPv4Address": "172.18.0.4/16",
"Name": "mgmt-control-plane",
"IPv4Address": "172.18.0.2/16",
"Name": "dev-control-plane",
"IPv4Address": "172.18.0.3/16",
```
```sh
DEVK8SIP=172.18.0.3
PRDK8SIP=172.18.0.4
echo $DEVK8SIP $PRDK8SIP
172.18.0.3 172.18.0.4
```
2. Create the Argo CD Application deploying Nginx to the mgmt cluster
```sh
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mgmt-nginx
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values.yaml
    path: nginx-chart
    repoURL: https://github.com/Shinminjin/cicd-study
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: mgmt-nginx
    server: https://kubernetes.default.svc
EOF
Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/mgmt-nginx created
```
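The nginx-chart values files themselves are not shown in this log. Judging from the Pods, Services, and HTML observed later in this section, the per-environment files plausibly look something like the following (a hypothetical reconstruction; replicaCount, nodePort, and message are inferred, not confirmed):

```sh
# Hypothetical sketch of nginx-chart/values-prd.yaml
cat <<'EOF' > values-prd.yaml.example
replicaCount: 2                      # prd runs two replicas
service:
  type: NodePort
  nodePort: 32000                    # 30000 (mgmt) / 31000 (dev) / 32000 (prd)
message: "Hello, Prd - Kubernetes!"  # rendered into the custom index page
EOF
```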
3. Create the Argo CD Application deploying Nginx to the dev cluster
```sh
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-nginx
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values-dev.yaml
    path: nginx-chart
    repoURL: https://github.com/Shinminjin/cicd-study
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: dev-nginx
    server: https://$DEVK8SIP:6443
EOF
Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/dev-nginx created
```
4. Create the Argo CD Application deploying Nginx to the prd cluster
```sh
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prd-nginx
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values-prd.yaml
    path: nginx-chart
    repoURL: https://github.com/Shinminjin/cicd-study
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: prd-nginx
    server: https://$PRDK8SIP:6443
EOF
Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/prd-nginx created
```
5. Check the three Nginx Applications in Argo CD
```sh
argocd app list
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
argocd/dev-nginx https://172.18.0.3:6443 dev-nginx default Synced Healthy Auto-Prune <none> https://github.com/Shinminjin/cicd-study nginx-chart HEAD
argocd/mgmt-nginx https://kubernetes.default.svc mgmt-nginx default Synced Healthy Auto-Prune <none> https://github.com/Shinminjin/cicd-study nginx-chart HEAD
argocd/prd-nginx https://172.18.0.4:6443 prd-nginx default Synced Healthy Auto-Prune <none> https://github.com/Shinminjin/cicd-study nginx-chart HEAD
```
- Confirms each Application is mapped to the correct cluster and namespace
```sh
kubectl get applications -n argocd
NAME SYNC STATUS HEALTH STATUS
dev-nginx Synced Healthy
mgmt-nginx Synced Healthy
prd-nginx Synced Healthy
```
- Re-checks Application status from the Kubernetes resource perspective
6. Check the Nginx resources and NodePort response on the mgmt cluster
```sh
kubectl get pod,svc,ep,cm -n mgmt-nginx
NAME READY STATUS RESTARTS AGE
pod/mgmt-nginx-6fc86948bc-vh7k8 1/1 Running 0 96s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mgmt-nginx NodePort 10.96.10.62 <none> 80:30000/TCP 96s
NAME ENDPOINTS AGE
endpoints/mgmt-nginx 10.244.0.17:80 96s
NAME DATA AGE
configmap/kube-root-ca.crt 1 96s
configmap/mgmt-nginx 1 96s
```
```sh
curl -s http://127.0.0.1:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx!</title>
</head>
<body>
<h1>Hello, Kubernetes!</h1>
<p>Nginx version 1.26.1</p>
</body>
</html>
```
- Check the Pod, Service, Endpoints, and ConfigMap in the mgmt-nginx namespace
- From the host, NodePort 30000 serves the custom HTML instead of the default Nginx page
7. Check the Nginx deployment and NodePort response on the dev cluster
```sh
kubectl get pod,svc,ep,cm -n dev-nginx --context kind-dev
NAME READY STATUS RESTARTS AGE
pod/dev-nginx-59f4c8899-9vpvf 1/1 Running 0 2m4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dev-nginx NodePort 10.96.108.194 <none> 80:31000/TCP 2m4s
NAME ENDPOINTS AGE
endpoints/dev-nginx 10.244.0.5:80 2m4s
NAME DATA AGE
configmap/dev-nginx 1 2m4s
configmap/kube-root-ca.crt 1 2m4s
```
```sh
curl -s http://127.0.0.1:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx!</title>
</head>
<body>
<h1>Hello, Dev - Kubernetes!</h1>
<p>Nginx version 1.26.1</p>
</body>
</html>
```
- Queries the dev-nginx namespace resources through the kind-dev context
- From the host, port 31000 returns the dev-specific message
8. Check the multiple Pods/Endpoints and NodePort response on the prd cluster
```sh
kubectl get pod,svc,ep,cm -n prd-nginx --context kind-prd
NAME READY STATUS RESTARTS AGE
pod/prd-nginx-86d9bc9f7f-bfgg7 1/1 Running 0 3m47s
pod/prd-nginx-86d9bc9f7f-g24p8 1/1 Running 0 3m47s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prd-nginx NodePort 10.96.193.167 <none> 80:32000/TCP 3m47s
NAME ENDPOINTS AGE
endpoints/prd-nginx 10.244.0.5:80,10.244.0.6:80 3m47s
NAME DATA AGE
configmap/kube-root-ca.crt 1 3m47s
configmap/prd-nginx 1 3m47s
```
```sh
curl -s http://127.0.0.1:32000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to Nginx!</title>
</head>
<body>
<h1>Hello, Prd - Kubernetes!</h1>
<p>Nginx version 1.26.1</p>
</body>
</html>
```
- Checks the prd-nginx namespace resources through the kind-prd context
- Pods: two prd-nginx replicas (scaled up for the production environment)
- Service: NodePort 80:32000/TCP
- Endpoints: both Pod IPs are registered
9. Delete the Argo CD Applications
```sh
kubectl delete applications -n argocd mgmt-nginx dev-nginx prd-nginx
application.argoproj.io "mgmt-nginx" deleted from argocd namespace
application.argoproj.io "dev-nginx" deleted from argocd namespace
application.argoproj.io "prd-nginx" deleted from argocd namespace
```
Argo CD App of Apps pattern walkthrough
1. Create the Root Application (apps)
```sh
argocd app create apps \
  --dest-namespace argocd \
  --dest-server https://kubernetes.default.svc \
  --repo https://github.com/Shinminjin/cicd-study.git \
  --path apps
application 'apps' created
```
- Creates a Root Application named apps in the argocd namespace, sourcing the apps directory of the https://github.com/Shinminjin/cicd-study.git repository
- Purpose: manage the multiple Application manifests defined under apps in one place; a hypothetical sketch of one such child manifest follows
2. Sync the Root Application → child Applications are created automatically
```sh
argocd app sync apps
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.helm-guestbook OutOfSync Missing
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.kustomize-guestbook OutOfSync Missing
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.sync-waves OutOfSync Missing
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.helm-guestbook Synced Missing
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.sync-waves OutOfSync Missing application.argoproj.io/example.sync-waves created
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.kustomize-guestbook OutOfSync Missing application.argoproj.io/example.kustomize-guestbook created
2025-11-17T23:58:44+09:00 argoproj.io Application argocd example.helm-guestbook Synced Missing application.argoproj.io/example.helm-guestbook created
Name: argocd/apps
Project: default
Server: https://kubernetes.default.svc
Namespace: argocd
URL: https://argocd.example.com/applications/apps
Source:
- Repo: https://github.com/Shinminjin/cicd-study.git
Target:
Path: apps
SyncWindow: Sync Allowed
Sync Policy: Manual
Sync Status: Synced to (b2894c6)
Health Status: Healthy
Operation: Sync
Sync Revision: b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase: Succeeded
Start: 2025-11-17 23:58:44 +0900 KST
Finished: 2025-11-17 23:58:44 +0900 KST
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
argoproj.io Application argocd example.helm-guestbook Synced application.argoproj.io/example.helm-guestbook created
argoproj.io Application argocd example.sync-waves Synced application.argoproj.io/example.sync-waves created
argoproj.io Application argocd example.kustomize-guestbook Synced application.argoproj.io/example.kustomize-guestbook created
```
3. Check the detailed status of the Root Application (apps)
```sh
argocd app list
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
argocd/apps https://kubernetes.default.svc argocd default Synced Healthy Manual <none> https://github.com/Shinminjin/cicd-study.git apps
argocd/example.helm-guestbook https://kubernetes.default.svc helm-guestbook default Synced Healthy Auto-Prune <none> https://github.com/gasida/cicd-study helm-guestbook main
argocd/example.kustomize-guestbook https://kubernetes.default.svc kustomize-guestbook default Synced Healthy Auto-Prune <none> https://github.com/gasida/cicd-study kustomize-guestbook main
argocd/example.sync-waves https://kubernetes.default.svc sync-waves default Synced Healthy Auto-Prune <none> https://github.com/gasida/cicd-study sync-waves main
```
4. Confirm the actual Kubernetes resources were created
```sh
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
argocd argocd-application-controller-0 1/1 Running 0 4h
argocd argocd-applicationset-controller-bbff79c6f-9qcf8 1/1 Running 0 4h
argocd argocd-dex-server-6877ddf4f8-fvfll 1/1 Running 0 4h
argocd argocd-notifications-controller-7b5658fc47-26p24 1/1 Running 0 4h
argocd argocd-redis-7d948674-xnl9k 1/1 Running 0 4h
argocd argocd-repo-server-7679dc55f5-swj2g 1/1 Running 0 4h
argocd argocd-server-7d769b6f48-2ts94 1/1 Running 0 4h
helm-guestbook helm-guestbook-667dffd5cf-f24hj 1/1 Running 0 2m40s
ingress-nginx ingress-nginx-controller-5b89cb54f9-5gvfh 1/1 Running 0 4h3m
kube-system coredns-668d6bf9bc-d5bn7 1/1 Running 0 4h5m
kube-system coredns-668d6bf9bc-vb4p7 1/1 Running 0 4h5m
kube-system etcd-mgmt-control-plane 1/1 Running 0 4h5m
kube-system kindnet-jtm8t 1/1 Running 0 4h5m
kube-system kube-apiserver-mgmt-control-plane 1/1 Running 0 4h5m
kube-system kube-controller-manager-mgmt-control-plane 1/1 Running 0 4h5m
kube-system kube-proxy-b9pmh 1/1 Running 0 4h5m
kube-system kube-scheduler-mgmt-control-plane 1/1 Running 0 4h5m
kustomize-guestbook kustomize-guestbook-ui-85db984648-mzc87 1/1 Running 0 2m40s
local-path-storage local-path-provisioner-7dc846544d-wltkn 1/1 Running 0 4h5m
sync-waves backend-z4kpq 1/1 Running 0 2m28s
sync-waves frontend-x8xjc 1/1 Running 0 108s
sync-waves maint-page-down-scbbs 0/1 Completed 0 105s
sync-waves maint-page-up-tr6d2 0/1 Completed 0 115s
sync-waves upgrade-sql-schemab2894c6-presync-1763391525-5qk25 0/1 Completed 0 2m41s
```
5. Clean up App of Apps by deleting the Root Application
```sh
argocd app delete argocd/apps --yes
application 'argocd/apps' deleted
```
ApplicationSet List generator hands-on
1. Create an ApplicationSet based on the List generator
```sh
echo $DEVK8SIP $PRDK8SIP
172.18.0.3 172.18.0.4
```
```sh
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - list:
      elements:
      - cluster: dev-k8s
        url: https://$DEVK8SIP:6443
      - cluster: prd-k8s
        url: https://$PRDK8SIP:6443
  template:
    metadata:
      name: '{{.cluster}}-guestbook'
      labels:
        environment: '{{.cluster}}'
        managed-by: applicationset
    spec:
      project: default
      source:
        repoURL: https://github.com/Shinminjin/cicd-study.git
        targetRevision: HEAD
        path: appset/list/{{.cluster}}
      destination:
        server: '{{.url}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
EOF
applicationset.argoproj.io/guestbook created
```
2. Inspect the guestbook ApplicationSet definition
```sh
kubectl get applicationsets -n argocd guestbook -o yaml | kubectl neat | yq
{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "ApplicationSet",
  "metadata": {
    "name": "guestbook",
    "namespace": "argocd"
  },
  "spec": {
    "generators": [
      {
        "list": {
          "elements": [
            {
              "cluster": "dev-k8s",
              "url": "https://172.18.0.3:6443"
            },
            {
              "cluster": "prd-k8s",
              "url": "https://172.18.0.4:6443"
            }
          ]
        }
      }
    ],
    "goTemplate": true,
    "goTemplateOptions": [
      "missingkey=error"
    ],
    "template": {
      "metadata": {
        "labels": {
          "environment": "{{.cluster}}",
          "managed-by": "applicationset"
        },
        "name": "{{.cluster}}-guestbook"
      },
      "spec": {
        "destination": {
          "namespace": "guestbook",
          "server": "{{.url}}"
        },
        "project": "default",
        "source": {
          "path": "appset/list/{{.cluster}}",
          "repoURL": "https://github.com/Shinminjin/cicd-study.git",
          "targetRevision": "HEAD"
        },
        "syncPolicy": {
          "syncOptions": [
            "CreateNamespace=true"
          ]
        }
      }
    }
  }
}
```
3. Check the ApplicationSet resource and its status
```sh
kubectl get applicationsets -n argocd
NAME AGE
guestbook 108s
```
```sh
argocd appset list
NAME PROJECT SYNCPOLICY CONDITIONS REPO PATH TARGET
argocd/guestbook default nil [{ParametersGenerated Successfully generated parameters for all Applications 2025-11-18 00:09:45 +0900 KST True ParametersGenerated} {ResourcesUpToDate ApplicationSet up to date 2025-11-18 00:09:45 +0900 KST True ApplicationSetUpToDate}] https://github.com/Shinminjin/cicd-study.git appset/list/ HEAD
```
4. Check the Applications generated by the ApplicationSet
```sh
argocd app list
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
argocd/dev-k8s-guestbook https://172.18.0.3:6443 guestbook default OutOfSync Missing Manual <none> https://github.com/Shinminjin/cicd-study.git appset/list/dev-k8s HEAD
argocd/prd-k8s-guestbook https://172.18.0.4:6443 guestbook default OutOfSync Missing Manual <none> https://github.com/Shinminjin/cicd-study.git appset/list/prd-k8s HEAD
```
- The two Applications generated by the ApplicationSet (dev-k8s-guestbook, prd-k8s-guestbook) are visible
- They show OutOfSync / Missing because they have not been synced yet
5. Filter by the label the ApplicationSet assigned
```sh
argocd app list -l managed-by=applicationset
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
argocd/dev-k8s-guestbook https://172.18.0.3:6443 guestbook default OutOfSync Missing Manual <none> https://github.com/Shinminjin/cicd-study.git appset/list/dev-k8s HEAD
argocd/prd-k8s-guestbook https://172.18.0.4:6443 guestbook default OutOfSync Missing Manual <none> https://github.com/Shinminjin/cicd-study.git appset/list/prd-k8s HEAD
```
- Filters the Argo CD app list by the managed-by=applicationset label assigned in the Application template
```sh
kubectl get applications -n argocd
NAME SYNC STATUS HEALTH STATUS
dev-k8s-guestbook OutOfSync Missing
prd-k8s-guestbook OutOfSync Missing
```
```sh
kubectl get applications -n argocd --show-labels
NAME SYNC STATUS HEALTH STATUS LABELS
dev-k8s-guestbook OutOfSync Missing environment=dev-k8s,managed-by=applicationset
prd-k8s-guestbook OutOfSync Missing environment=prd-k8s,managed-by=applicationset
```
- On the Kubernetes side, confirms the labels landed on the Application CRs
6. Bulk sync based on a label selector
```sh
argocd app sync -l managed-by=applicationset
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-11-18T00:14:46+09:00 Service guestbook guestbook-ui OutOfSync Missing
2025-11-18T00:14:46+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing
2025-11-18T00:14:46+09:00 Namespace guestbook Running Synced namespace/guestbook created
Name: argocd/dev-k8s-guestbook
Project: default
Server: https://172.18.0.3:6443
Namespace: guestbook
URL: https://argocd.example.com/applications/argocd/dev-k8s-guestbook
Source:
- Repo: https://github.com/Shinminjin/cicd-study.git
Target: HEAD
Path: appset/list/dev-k8s
SyncWindow: Sync Allowed
Sync Policy: Manual
Sync Status: Synced to HEAD (b2894c6)
Health Status: Progressing
Operation: Sync
Sync Revision: b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase: Succeeded
Start: 2025-11-18 00:14:46 +0900 KST
Finished: 2025-11-18 00:14:46 +0900 KST
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
Namespace guestbook Running Synced namespace/guestbook created
Service guestbook guestbook-ui Synced Healthy service/guestbook-ui created
apps Deployment guestbook guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-11-18T00:14:47+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing
2025-11-18T00:14:47+09:00 Service guestbook guestbook-ui OutOfSync Missing
2025-11-18T00:14:47+09:00 Namespace guestbook Running Synced namespace/guestbook created
Name: argocd/prd-k8s-guestbook
Project: default
Server: https://172.18.0.4:6443
Namespace: guestbook
URL: https://argocd.example.com/applications/argocd/prd-k8s-guestbook
Source:
- Repo: https://github.com/Shinminjin/cicd-study.git
Target: HEAD
Path: appset/list/prd-k8s
SyncWindow: Sync Allowed
Sync Policy: Manual
Sync Status: Synced to HEAD (b2894c6)
Health Status: Progressing
Operation: Sync
Sync Revision: b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase: Succeeded
Start: 2025-11-18 00:14:47 +0900 KST
Finished: 2025-11-18 00:14:47 +0900 KST
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
Namespace guestbook Running Synced namespace/guestbook created
Service guestbook guestbook-ui Synced Healthy service/guestbook-ui created
apps Deployment guestbook guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
```
- Syncs both guestbook apps in one command using the managed-by=applicationset label
7. Check Application and resource status after the sync
```sh
k8s2 get pod -n guestbook
NAME READY STATUS RESTARTS AGE
guestbook-ui-7cf4fd7cb9-m7l46 1/1 Running 0 118s
```
```sh
k8s3 get pod -n guestbook
NAME READY STATUS RESTARTS AGE
guestbook-ui-7cf4fd7cb9-9gkql 1/1 Running 0 2m6s
guestbook-ui-7cf4fd7cb9-hbf9s 1/1 Running 0 2m6s
```
- Checks the guestbook Pods actually deployed to the dev/prd clusters
8. Inspect the generated Application manifests
```sh
kubectl get applications -n argocd dev-k8s-guestbook -o yaml | kubectl neat | yq
{
"apiVersion": "argoproj.io/v1alpha1",
"kind": "Application",
"metadata": {
"labels": {
"environment": "dev-k8s",
"managed-by": "applicationset"
},
"name": "dev-k8s-guestbook",
"namespace": "argocd"
},
"spec": {
"destination": {
"namespace": "guestbook",
"server": "https://172.18.0.3:6443"
},
"project": "default",
"source": {
"path": "appset/list/dev-k8s",
"repoURL": "https://github.com/Shinminjin/cicd-study.git",
"targetRevision": "HEAD"
},
"syncPolicy": {
"syncOptions": [
"CreateNamespace=true"
]
}
}
}
```
```sh
kubectl get applications -n argocd prd-k8s-guestbook -o yaml | kubectl neat | yq
{
"apiVersion": "argoproj.io/v1alpha1",
"kind": "Application",
"metadata": {
"labels": {
"environment": "prd-k8s",
"managed-by": "applicationset"
},
"name": "prd-k8s-guestbook",
"namespace": "argocd"
},
"spec": {
"destination": {
"namespace": "guestbook",
"server": "https://172.18.0.4:6443"
},
"project": "default",
"source": {
"path": "appset/list/prd-k8s",
"repoURL": "https://github.com/Shinminjin/cicd-study.git",
"targetRevision": "HEAD"
},
"syncPolicy": {
"syncOptions": [
"CreateNamespace=true"
]
}
}
}
```
9. Clean up by deleting the ApplicationSet
```sh
argocd appset delete guestbook --yes
applicationset 'guestbook' deleted
```
ApplicationSet Cluster generator hands-on
1. Deploy to all three clusters at once with the Cluster generator
```sh
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{.name}}-guestbook'
      labels:
        managed-by: applicationset
    spec:
      project: "default"
      source:
        repoURL: https://github.com/Shinminjin/cicd-study
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.server}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
EOF
applicationset.argoproj.io/guestbook created
```
- Uses the clusters: {} generator to deploy the guestbook application to every cluster registered in Argo CD
- Each cluster's name ({{.name}}) and API server address ({{.server}}) are consumed by the template, so one Application is generated per cluster
2. Remove the Cluster generator ApplicationSet
```sh
argocd appset delete guestbook --yes
applicationset 'guestbook' deleted
```
3. Prepare to filter for the dev cluster only (add a label to its cluster secret)
```sh
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster
NAME TYPE DATA AGE
cluster-172.18.0.3-4100004299 Opaque 3 51m
cluster-172.18.0.4-568336172 Opaque 3 44m
```
- Lists the cluster secrets Argo CD manages to identify the dev/prd clusters
```sh
DEVK8S=cluster-172.18.0.3-4100004299
kubectl label secrets $DEVK8S -n argocd env=stg
secret/cluster-172.18.0.3-4100004299 labeled
```
- Points a variable at the dev cluster's secret and attaches the env=stg label to it
```sh
kubectl get secret -n argocd -l env=stg
NAME TYPE DATA AGE
cluster-172.18.0.3-4100004299 Opaque 3 53m
```
- Re-checks that the env=stg label was applied
4. Create a Cluster generator ApplicationSet based on a label selector
```sh
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters:
      selector:
        matchLabels:
          env: "stg"
  template:
    metadata:
      name: '{{.name}}-guestbook'
      labels:
        managed-by: applicationset
    spec:
      project: "default"
      source:
        repoURL: https://github.com/Shinminjin/cicd-study
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.server}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
        automated:
          prune: true
          selfHeal: true
EOF
applicationset.argoproj.io/guestbook created
```
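The log does not show the generated result at this point; if the selector works as intended, only the cluster secret labeled env=stg should produce an Application (a sketch of the check, expected output assumed):

```sh
# Only dev-k8s-guestbook is expected, since only the dev cluster secret carries env=stg
argocd app list -l managed-by=applicationset
```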
5. Wrap up the label-based Cluster ApplicationSet exercise
```sh
argocd appset delete guestbook --yes
applicationset 'guestbook' deleted
```
Deploy Keycloak as a Pod
1. Prepare the Keycloak Deployment/Service/Ingress
```sh
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:26.4.0
        args: ["start-dev"] # run in dev mode
        env:
        # Note: recent Keycloak releases favor KC_BOOTSTRAP_ADMIN_USERNAME /
        # KC_BOOTSTRAP_ADMIN_PASSWORD for the bootstrap admin account
        - name: KEYCLOAK_ADMIN
          value: admin
        - name: KEYCLOAK_ADMIN_PASSWORD
          value: admin
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  selector:
    app: keycloak
  ports:
  - name: http
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: keycloak.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port:
              number: 80 # the Service port (80), not the container port (8080)
EOF
deployment.apps/keycloak created
service/keycloak created
ingress.networking.k8s.io/keycloak created
```
2. Check the Keycloak resource status
```sh
kubectl get deploy,svc,ep keycloak
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/keycloak 1/1 1 1 40s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/keycloak ClusterIP 10.96.249.212 <none> 80/TCP 40s
NAME ENDPOINTS AGE
endpoints/keycloak 10.244.0.25:8080 40s
```
```sh
kubectl get ingress keycloak
NAME CLASS HOSTS ADDRESS PORTS AGE
keycloak nginx keycloak.example.com localhost 80 58s
```
3. Verify Keycloak domain access via Ingress + /etc/hosts
```sh
curl -s -H "Host: keycloak.example.com" http://127.0.0.1 -I
HTTP/1.1 302 Found
Date: Tue, 18 Nov 2025 12:02:15 GMT
Connection: keep-alive
Location: http://keycloak.example.com/admin/
Referrer-Policy: no-referrer
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
```
```sh
echo "127.0.0.1 keycloak.example.com" | sudo tee -a /etc/hosts
127.0.0.1 keycloak.example.com
```
```sh
curl -s http://keycloak.example.com -I
HTTP/1.1 302 Found
Date: Tue, 18 Nov 2025 12:04:40 GMT
Connection: keep-alive
Location: http://keycloak.example.com/admin/
Referrer-Policy: no-referrer
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
```
4. Access the Keycloak admin console and log in
```
http://keycloak.example.com/admin
username / password : admin / admin
```
5. Create a new realm → myrealm
6. Create a test user alice in myrealm
Make the keycloak / argocd domains resolvable from inside the mgmt k8s cluster as well
1. Check basic network behavior with a curl test Pod
```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
  - name: curl
    image: curlimages/curl:latest
    command: ["sleep", "infinity"]
EOF
pod/curl created
```
- Creates a lightweight Pod from the curl image for testing HTTP/DNS from inside the cluster
```sh
kubectl get pod -l app=keycloak -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
keycloak-846cb4c68-lphrw 1/1 Running 0 12m 10.244.0.25 mgmt-control-plane <none> <none>
```
1
2
3
| kubectl get pod -l app=keycloak -o jsonpath='{.items[0].status.podIP}'
10.244.0.25
KEYCLOAKIP=$(kubectl get pod -l app=keycloak -o jsonpath='{.items[0].status.podIP}')
echo $KEYCLOAKIP
10.244.0.25
- Look up the keycloak pod IP and store it in a variable
kubectl exec -it curl -- ping -c 1 $KEYCLOAKIP
PING 10.244.0.25 (10.244.0.25): 56 data bytes
64 bytes from 10.244.0.25: seq=0 ttl=42 time=0.138 ms
--- 10.244.0.25 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.138/0.138/0.138 ms
- Ping from the curl pod to that IP works
2. Call the keycloak OIDC endpoint by its cluster DNS name
kubectl exec -it curl -- curl -s http://keycloak.default.svc.cluster.local/realms/myrealm/.well-known/openid-configuration | jq
{
"issuer": "http://keycloak.default.svc.cluster.local/realms/myrealm",
"authorization_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/auth",
"token_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/token",
"introspection_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/token/introspect",
"userinfo_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/userinfo",
"end_session_endpoint": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/logout",
"frontchannel_logout_session_supported": true,
"frontchannel_logout_supported": true,
"jwks_uri": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/certs",
"check_session_iframe": "http://keycloak.default.svc.cluster.local/realms/myrealm/protocol/openid-connect/login-status-iframe.html",
...
}
kubectl exec -it curl -- curl -s http://keycloak.example.com -I
command terminated with exit code 6
- curl exit code 6 means the host name could not be resolved
3. Confirm the lookup failure against CoreDNS
kubectl exec -it curl -- nslookup -debug keycloak.example.com
Server: 10.96.0.10
Address: 10.96.0.10:53
Query #0 completed in 145ms:
** server can't find keycloak.example.com: NXDOMAIN
Query #1 completed in 299ms:
** server can't find keycloak.example.com: NXDOMAIN
command terminated with exit code 1
- DNS lookups from inside a pod do not consult the node's /etc/hosts; they go to CoreDNS (service IP 10.96.0.10)
4. Register the keycloak / argocd domains by editing the CoreDNS ConfigMap
(1) First, check the Service ClusterIPs
kubectl get svc keycloak
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keycloak ClusterIP 10.96.249.212 <none> 80/TCP 19m
kubectl get svc -n argocd argocd-server
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-server ClusterIP 10.96.46.178 <none> 80/TCP,443/TCP 25h
(2) Edit the CoreDNS config
kubectl edit cm -n kube-system coredns
.:53 {
    ...
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    hosts {
        <CLUSTER IP> keycloak.example.com   # ClusterIP of svc/keycloak from (1)
        <CLUSTER IP> argocd.example.com     # ClusterIP of svc/argocd-server from (1)
        fallthrough
    }
    ...
configmap/coredns edited
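The kubeadm-style Corefile that kind ships includes the reload plugin, so CoreDNS should pick up the edit on its own within about half a minute; restarting the deployment forces it immediately (standard kubectl, nothing assumed beyond the default kind setup):
kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system rollout status deployment coredns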
5. Verify in-cluster DNS / domain resolution after the edit
kubectl exec -it curl -- nslookup keycloak.example.com
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: keycloak.example.com
Address: 10.96.249.212
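The same check for the argocd domain, and a retry of the curl that failed with exit code 6 earlier, should both succeed now:
kubectl exec -it curl -- nslookup argocd.example.com
kubectl exec -it curl -- curl -s http://keycloak.example.com -I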
Creating a keycloak client for argocd
1. Create the Realm / Client for the Argo CD integration
Realm to use : myrealm
Client id : argocd
Name : argocd client
Client authentication : ON
Root URL : https://argocd.example.com/
Home URL : /applications
Valid redirect URIs : https://argocd.example.com/auth/callback
Valid post logout redirect URIs : https://argocd.example.com/applications
Web origins : +
# In the created client → Credentials tab: note down the client secret
bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN
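For repeatability, the same client can also be created with kcadm.sh instead of the UI; a sketch assuming the kcadm admin session configured earlier (the generated secret is then read from the Credentials tab, as above):
kubectl exec -it deploy/keycloak -- /opt/keycloak/bin/kcadm.sh create clients -r myrealm \
  -s clientId=argocd -s name="argocd client" \
  -s publicClient=false -s standardFlowEnabled=true \
  -s rootUrl=https://argocd.example.com/ \
  -s 'redirectUris=["https://argocd.example.com/auth/callback"]' \
  -s 'webOrigins=["+"]'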
Configuring ArgoCD OIDC
1. Store the Keycloak client secret in the Argo CD secret
kubectl -n argocd patch secret argocd-secret --patch='{"stringData": { "oidc.keycloak.clientSecret": "<REPLACE_WITH_CLIENT_SECRET>" }}'
kubectl -n argocd patch secret argocd-secret --patch='{"stringData": { "oidc.keycloak.clientSecret": "bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN" }}'
secret/argocd-secret patched
- The clientSecret issued by Keycloak is stored in the argocd-secret Secret under the oidc.keycloak.clientSecret key, so the OIDC config can reference it
kubectl get secret -n argocd argocd-secret -o jsonpath='{.data}' | jq
{
"admin.password": "JDJhJDEwJHRuTDZ3UUt1MzFZVUlIRW5hZkJVeXV6T0hLQWRrZ1hxTXQweTZHQ0taMzhHQTlzb2ZCM1ZP",
"admin.passwordMtime": "MjAyNS0xMS0xN1QxMTowNDowNlo=",
"oidc.keycloak.clientSecret": "YkpudU5xU2RIQWFXRFBuNGl4V0NBaWFjN3RPUDZuck4=",
"server.secretkey": "bFdiZUpzZHhGNS9waGVldGwrcGdPbmUwaGJIRDFWd2E2bnE1TThnQkNRUT0="
}
2. Add the OIDC config to the Argo CD ConfigMap
kubectl patch cm argocd-cm -n argocd --type merge -p '
data:
  oidc.config: |
    name: Keycloak
    issuer: http://keycloak.example.com/realms/myrealm
    clientID: argocd
    clientSecret: bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN
    requestedScopes: ["openid", "profile", "email"]
'
configmap/argocd-cm patched
- Adds the OIDC settings to argocd-cm, registering Keycloak's myrealm and the argocd client as the OIDC provider
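Note that this patch embeds the secret in plain text even though it was just stored in argocd-secret. Argo CD can dereference argocd-secret keys with its documented $<key> syntax, which is presumably why the secret was stored there; the indirect variant would look like:
kubectl patch cm argocd-cm -n argocd --type merge -p '
data:
  oidc.config: |
    name: Keycloak
    issuer: http://keycloak.example.com/realms/myrealm
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret   # resolved from argocd-secret at runtime
    requestedScopes: ["openid", "profile", "email"]
'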
kubectl get cm -n argocd argocd-cm -o yaml | grep oidc.config: -A5
  oidc.config: |
    name: Keycloak
    issuer: http://keycloak.example.com/realms/myrealm
    clientID: argocd
    clientSecret: bJnuNqSdHAaWDPn4ixWCAiac7tOP6nrN
    requestedScopes: ["openid", "profile", "email"]
- Confirm the settings were applied
3. Restart the Argo CD server to pick up the OIDC config
kubectl rollout restart deploy argocd-server -n argocd
deployment.apps/argocd-server restarted
Logging in to Argo CD via Keycloak: log out of admin, then log in as alice through Keycloak
1. Enter the Keycloak login from the Argo CD login screen
- The browser is redirected to the myrealm login page on keycloak.example.com
2. Log in as alice in Keycloak
username: alice
password: alice123
# first login prompts a one-time user profile update
3. Confirm the Argo CD client login in the Keycloak session view
4. Sign out
Deploying Jenkins directly as a pod
1. Create the Jenkins namespace/PVC/Deployment/Service/Ingress
kubectl create ns jenkins
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000   # jenkins runs as UID/GID 1000; lets it write to the PVC
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - name: http
          containerPort: 8080
        - name: agent
          containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  namespace: jenkins
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
  - port: 8080
    targetPort: http
    protocol: TCP
    name: http
  - port: 50000
    targetPort: agent
    protocol: TCP
    name: agent
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"       # no upload size limit
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  ingressClassName: nginx
  rules:
  - host: jenkins.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins-svc
            port:
              number: 8080
EOF
namespace/jenkins created
persistentvolumeclaim/jenkins-pvc created
deployment.apps/jenkins created
service/jenkins-svc created
ingress.networking.k8s.io/jenkins-ingress created
2. Check Jenkins resource status and PVC binding
kubectl get deploy,svc,ep,pvc -n jenkins
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 40s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins-svc ClusterIP 10.96.167.130 <none> 8080/TCP,50000/TCP 40s
NAME ENDPOINTS AGE
endpoints/jenkins-svc 10.244.0.29:50000,10.244.0.29:8080 40s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/jenkins-pvc Bound pvc-74ef6288-9211-470f-be55-a26f4628edcc 10Gi RWO standard <unset> 40s
kubectl get ingress -n jenkins jenkins-ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
jenkins-ingress nginx jenkins.example.com localhost 80 62s
3. Register the domain in /etc/hosts for local browser access
echo "127.0.0.1 jenkins.example.com" | sudo tee -a /etc/hosts
127.0.0.1 jenkins.example.com
4. Check the initial Jenkins password and run first-time setup
kubectl exec -it -n jenkins deploy/jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword
1e92cb3d52a84da08d5ed47d564d2af1
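If exec-ing into the pod is inconvenient, the jenkins/jenkins image also prints this password in its startup log, a few lines above the banner that mentions the file path:
kubectl logs -n jenkins deploy/jenkins | grep -B6 initialAdminPassword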
http://jenkins.example.com
(1) Enter the initial password
(2) Install suggested plugins
(3) Create First Admin User
# admin account setup
Username: admin
Password: qwe123
5. Jenkins domain setup for in-cluster traffic, wired into CoreDNS
(1) Note the Jenkins Service ClusterIP
kubectl get svc -n jenkins
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-svc ClusterIP 10.96.167.130 <none> 8080/TCP,50000/TCP 23m
(2) Add the Jenkins domain to the CoreDNS hosts plugin
kubectl edit cm -n kube-system coredns
.:53 {
    ...
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    hosts {
        <CLUSTER IP> keycloak.example.com
        <CLUSTER IP> argocd.example.com
        <CLUSTER IP> jenkins.example.com   # ClusterIP of svc/jenkins-svc from (1)
        fallthrough
    }
    ...
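Once CoreDNS reloads, the new entry can be sanity-checked from the curl pod; note that jenkins-svc exposes 8080 (not 80), so in-cluster calls must name the port explicitly:
kubectl exec -it curl -- curl -s http://jenkins.example.com:8080 -I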
Creating a keycloak client for jenkins
1. Basic settings
Client ID : jenkins
Name : jenkins client
2. Authentication and flow settings
Client authentication : Check
Authentication flow : Standard flow
3. Login settings
Root URL : http://jenkins.example.com/
Home URL : http://jenkins.example.com/
Valid redirect URIs : http://jenkins.example.com/securityRealm/finishLogin
Valid post logout redirect URIs : http://jenkins.example.com
Web origins : +
4. Note the client credentials
78sizpZAEGkf2MxdwyUkopdgA94ysahc
Configuring Jenkins OIDC
1. Install the OpenID Connect Authentication plugin
2. Enable OIDC in the Jenkins security settings
Manage Jenkins → Security: set the Security Realm as below, then Save
Login with Openid Connect
Client id : jenkins
Client secret : <the credentials from the jenkins client in keycloak>
Configuration mode : Discovery
- Well-known configuration endpoint: http://keycloak.example.com/realms/myrealm/.well-known/openid-configuration
- Override scopes : openid email profile
Logout from OpenID Provider : Check
Security configuration
- Disable ssl verification : Check
3. Verify the Keycloak-backed login
- The browser already had a Keycloak session as alice, so clicking "Login with Openid Connect" went straight to the dashboard
4. Confirm both the Jenkins and Argo CD logins in the Keycloak session view
5. Check the user info shown in Jenkins / Argo CD
LDAP (Lightweight Directory Access Protocol)
1. An analogy: LDAP as the company address book plus badge system
- Put simply, "a system that stores and queries users, groups, permissions, and other organizational data in a tree structure"
dc=example,dc=org          # Base DN (Root DN)
├── ou=people
│   ├── uid=alice
│   └── uid=bob
└── ou=groups
    ├── cn=devs
    └── cn=admins
2. Preliminary: clean up the manually created alice user in Keycloak
3. Deploy OpenLDAP + phpLDAPadmin on K8s
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap
  namespace: openldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
      - name: openldap
        image: osixia/openldap:1.5.0
        ports:
        - containerPort: 389
          name: ldap
        - containerPort: 636
          name: ldaps
        env:
        - name: LDAP_ORGANISATION     # organization name, used when seeding the directory
          value: "Example Org"
        - name: LDAP_DOMAIN           # defines the default Base DN (example.org -> dc=example,dc=org)
          value: "example.org"
        - name: LDAP_ADMIN_PASSWORD   # LDAP admin password
          value: "admin"
        - name: LDAP_CONFIG_PASSWORD
          value: "admin"
      - name: phpldapadmin
        image: osixia/phpldapadmin:0.9.0
        ports:
        - containerPort: 80
          name: phpldapadmin
        env:
        - name: PHPLDAPADMIN_HTTPS
          value: "false"
        - name: PHPLDAPADMIN_LDAP_HOSTS
          value: "openldap"           # LDAP hostname inside the cluster
---
apiVersion: v1
kind: Service
metadata:
  name: openldap
  namespace: openldap
spec:
  selector:
    app: openldap
  ports:
  - name: phpldapadmin
    port: 80
    targetPort: 80
    nodePort: 30000   # exposed on the host via kind's extraPortMappings
  - name: ldap
    port: 389
    targetPort: 389
  - name: ldaps
    port: 636
    targetPort: 636
  type: NodePort
EOF
namespace/openldap created
deployment.apps/openldap created
service/openldap created
kubectl get deploy,pod,svc,ep -n openldap
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/openldap 1/1 1 1 33s
NAME READY STATUS RESTARTS AGE
pod/openldap-54857b746c-rnwvf 2/2 Running 0 33s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/openldap NodePort 10.96.59.112 <none> 80:30000/TCP,389:32686/TCP,636:30808/TCP 33s
NAME ENDPOINTS AGE
endpoints/openldap 10.244.0.30:80,10.244.0.30:389,10.244.0.30:636 33s
# Default LDAP info: log in with the Bind DN and password below
## Base DN: dc=example,dc=org
## Bind DN: cn=admin,dc=example,dc=org
## Password: admin
http://127.0.0.1:30000
4. Inspect the processes inside the OpenLDAP container
(1) Exec into the openldap container
kubectl -n openldap exec -it deploy/openldap -c openldap -- bash
root@openldap-54857b746c-rnwvf:/#
(2) Check the slapd daemon process tree
root@openldap-54857b746c-rnwvf:/# pstree -aplpst
run,1 -u /container/tool/run
โโslapd,440 -h ldap://openldap-54857b746c-rnwvf:389 ldaps://openldap-54857b746c-rnwvf:636 ldapi:/// -u openldap -g openldap -d 256
โโ{slapd},443
โโ{slapd},444
โโ{slapd},445
5. Test the LDAP admin bind and query the base entry
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -H ldap://localhost:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w admin
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.org
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Org
dc: example
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
6. Final LDAP tree design for the exercise
dc=example,dc=org
├── ou=people
│   ├── uid=alice
│   │   ├── cn: Alice
│   │   ├── sn: Kim
│   │   ├── uid: alice
│   │   └── mail: alice@example.org
│   └── uid=bob
│       ├── cn: Bob
│       ├── sn: Lee
│       ├── uid: bob
│       └── mail: bob@example.org
└── ou=groups
    ├── cn=devs
    │   └── member: uid=bob,ou=people,dc=example,dc=org
    └── cn=admins
        └── member: uid=alice,ou=people,dc=example,dc=org
7. Create the OUs (organizationalUnit) with ldapadd
root@openldap-54857b746c-rnwvf:/# cat <<EOF | ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
> dn: ou=people,dc=example,dc=org
> objectClass: organizationalUnit
> ou: people
>
> dn: ou=groups,dc=example,dc=org
> objectClass: organizationalUnit
> ou: groups
> EOF
adding new entry "ou=people,dc=example,dc=org"
adding new entry "ou=groups,dc=example,dc=org"
8. Add users alice and bob with ldapadd (inetOrgPerson)
root@openldap-54857b746c-rnwvf:/# cat <<EOF | ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
> dn: uid=alice,ou=people,dc=example,dc=org
> objectClass: inetOrgPerson
> cn: Alice
> sn: Kim
> uid: alice
> mail: alice@example.org
> userPassword: alice123
>
> dn: uid=bob,ou=people,dc=example,dc=org
> objectClass: inetOrgPerson
> cn: Bob
> sn: Lee
> uid: bob
> mail: bob@example.org
> userPassword: bob123
> EOF
adding new entry "uid=alice,ou=people,dc=example,dc=org"
adding new entry "uid=bob,ou=people,dc=example,dc=org"
9. Add groups devs and admins with ldapadd (groupOfNames)
root@openldap-54857b746c-rnwvf:/# cat <<EOF | ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
> dn: cn=devs,ou=groups,dc=example,dc=org
> objectClass: groupOfNames
> cn: devs
> member: uid=bob,ou=people,dc=example,dc=org
>
> dn: cn=admins,ou=groups,dc=example,dc=org
> objectClass: groupOfNames
> cn: admins
> member: uid=alice,ou=people,dc=example,dc=org
> EOF
adding new entry "cn=devs,ou=groups,dc=example,dc=org"
adding new entry "cn=admins,ou=groups,dc=example,dc=org"
10. Query the OUs / users / groups with ldapsearch
(1) List the OUs
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -D "cn=admin,dc=example,dc=org" -w admin \
> -b "dc=example,dc=org" "(objectClass=organizationalUnit)" ou
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectClass=organizationalUnit)
# requesting: ou
#
# people, example.org
dn: ou=people,dc=example,dc=org
ou: people
# groups, example.org
dn: ou=groups,dc=example,dc=org
ou: groups
# search result
search: 2
result: 0 Success
# numResponses: 3
# numEntries: 2
(2) List the users
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -D "cn=admin,dc=example,dc=org" -w admin \
> -b "ou=people,dc=example,dc=org" "(uid=*)" uid cn mail
# extended LDIF
#
# LDAPv3
# base <ou=people,dc=example,dc=org> with scope subtree
# filter: (uid=*)
# requesting: uid cn mail
#
# bob, people, example.org
dn: uid=bob,ou=people,dc=example,dc=org
cn: Bob
uid: bob
mail: bob@example.org
# alice, people, example.org
dn: uid=alice,ou=people,dc=example,dc=org
cn: Alice
uid: alice
mail: alice@example.org
# search result
search: 2
result: 0 Success
# numResponses: 3
# numEntries: 2
(3) Query the groups and their members
root@openldap-54857b746c-rnwvf:/# ldapsearch -x -D "cn=admin,dc=example,dc=org" -w admin \
> -b "ou=groups,dc=example,dc=org" "(objectClass=groupOfNames)" cn member
# extended LDIF
#
# LDAPv3
# base <ou=groups,dc=example,dc=org> with scope subtree
# filter: (objectClass=groupOfNames)
# requesting: cn member
#
# devs, groups, example.org
dn: cn=devs,ou=groups,dc=example,dc=org
cn: devs
member: uid=bob,ou=people,dc=example,dc=org
# admins, groups, example.org
dn: cn=admins,ou=groups,dc=example,dc=org
cn: admins
member: uid=alice,ou=people,dc=example,dc=org
# search result
search: 2
result: 0 Success
# numResponses: 3
# numEntries: 2
11. Test authentication as LDAP user alice
root@openldap-54857b746c-rnwvf:/# ldapwhoami -x -D "uid=alice,ou=people,dc=example,dc=org" -w alice123
dn:uid=alice,ou=people,dc=example,dc=org
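The same bind check is worth running for bob, since that is the account used against Argo CD later; on success ldapwhoami echoes the bound DN just like the alice case:
ldapwhoami -x -D "uid=bob,ou=people,dc=example,dc=org" -w bob123
# expected: dn:uid=bob,ou=people,dc=example,dc=org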
Configuring LDAP federation in Keycloak
1. Add an LDAP Provider to myrealm
[Realm] myrealm → User Federation → Add LDAP providers
UI display name: ldap
Vendor: Other
Connection URL: ldap://openldap.openldap.svc:389
Bind DN: (= Login DN) cn=admin,dc=example,dc=org
Bind Credential: admin → Test authentication
Edit mode: WRITABLE
Users DN: ou=people,dc=example,dc=org
Username LDAP attribute: uid
RDN LDAP attribute: uid
UUID LDAP attribute: uid
User Object Classes: inetOrgPerson
Search scope: Subtree (search everything below the Users DN)
Import Users: On (LDAP → KeyCloak: users are synced into Keycloak)
Sync Registrations: Off (KeyCloak → LDAP: new Keycloak users are not written back)
2. Check the LDAP Provider's Mappers
User Federation → select the LDAP Provider → Mappers
: default mappers for the user attributes already exist
3. Test Argo CD / Jenkins logins after the LDAP federation
(1) Argo CD login test
Try logging in to Argo CD as bob (password bob123)
→ then also as alice (password alice123)
(2) Jenkins login test
- Jenkins is already wired to Keycloak OIDC, so it goes straight to the dashboard with no extra login step
(3) Check the Keycloak sessions
- Both Argo CD and Jenkins now authenticate the same identities (keyed by uid), completing the SSO setup
Trying Argo CD as a regular user: bob from the LDAP devs group
1. Deploy guestbook to multiple clusters with a sample ApplicationSet
(1) Create the ApplicationSet
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{.name}}-guestbook'
      labels:
        managed-by: applicationset
    spec:
      project: "default"
      source:
        repoURL: https://github.com/gasida/cicd-study
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.server}}'
        namespace: guestbook
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
EOF
applicationset.argoproj.io/guestbook created
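The cluster generator should have stamped out one Application per registered cluster; they can be listed by the label set in the template before syncing:
kubectl -n argocd get applications -l managed-by=applicationset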
(2) Sync the Applications by their ApplicationSet label
argocd app sync -l managed-by=applicationset
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-11-19T00:03:17+09:00 Service guestbook guestbook-ui OutOfSync Missing
2025-11-19T00:03:17+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing
2025-11-19T00:03:17+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
2025-11-19T00:03:17+09:00 Service guestbook guestbook-ui OutOfSync Missing service/guestbook-ui created
Name: argocd/dev-k8s-guestbook
Project: default
Server: https://172.18.0.3:6443
Namespace: guestbook
URL: https://argocd.example.com/applications/argocd/dev-k8s-guestbook
Source:
- Repo: https://github.com/gasida/cicd-study
Target: HEAD
Path: guestbook
SyncWindow: Sync Allowed
Sync Policy: Manual
Sync Status: Synced to HEAD (b2894c6)
Health Status: Progressing
Operation: Sync
Sync Revision: b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase: Succeeded
Start: 2025-11-19 00:03:17 +0900 KST
Finished: 2025-11-19 00:03:17 +0900 KST
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
Service guestbook guestbook-ui Synced Healthy service/guestbook-ui created
apps Deployment guestbook guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-11-19T00:03:18+09:00 Service guestbook guestbook-ui OutOfSync Missing
2025-11-19T00:03:18+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing
2025-11-19T00:03:18+09:00 Namespace guestbook Running Synced namespace/guestbook created
2025-11-19T00:03:18+09:00 Service guestbook guestbook-ui OutOfSync Missing service/guestbook-ui created
2025-11-19T00:03:18+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
Name: argocd/in-cluster-guestbook
Project: default
Server: https://kubernetes.default.svc
Namespace: guestbook
URL: https://argocd.example.com/applications/argocd/in-cluster-guestbook
Source:
- Repo: https://github.com/gasida/cicd-study
Target: HEAD
Path: guestbook
SyncWindow: Sync Allowed
Sync Policy: Manual
Sync Status: Synced to HEAD (b2894c6)
Health Status: Progressing
Operation: Sync
Sync Revision: b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase: Succeeded
Start: 2025-11-19 00:03:18 +0900 KST
Finished: 2025-11-19 00:03:18 +0900 KST
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
Namespace guestbook Running Synced namespace/guestbook created
Service guestbook guestbook-ui Synced Healthy service/guestbook-ui created
apps Deployment guestbook guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-11-19T00:03:19+09:00 Service guestbook guestbook-ui OutOfSync Missing
2025-11-19T00:03:19+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing
2025-11-19T00:03:19+09:00 Service guestbook guestbook-ui OutOfSync Missing service/guestbook-ui created
2025-11-19T00:03:19+09:00 apps Deployment guestbook guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
Name: argocd/prd-k8s-guestbook
Project: default
Server: https://172.18.0.4:6443
Namespace: guestbook
URL: https://argocd.example.com/applications/argocd/prd-k8s-guestbook
Source:
- Repo: https://github.com/gasida/cicd-study
Target: HEAD
Path: guestbook
SyncWindow: Sync Allowed
Sync Policy: Manual
Sync Status: Synced to HEAD (b2894c6)
Health Status: Progressing
Operation: Sync
Sync Revision: b2894c67f7a64e42b408da5825cb0b87ee306b04
Phase: Succeeded
Start: 2025-11-19 00:03:19 +0900 KST
Finished: 2025-11-19 00:03:19 +0900 KST
Duration: 0s
Message: successfully synced (all tasks run)
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
Service guestbook guestbook-ui Synced Healthy service/guestbook-ui created
apps Deployment guestbook guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
2. Check the guestbook pods deployed to each cluster
k8s1 get pod -n guestbook # mgmt
k8s2 get pod -n guestbook # dev
k8s3 get pod -n guestbook # prd
NAME READY STATUS RESTARTS AGE
guestbook-ui-85db984648-8rch8 1/1 Running 0 32s
NAME READY STATUS RESTARTS AGE
guestbook-ui-85db984648-k7wms 1/1 Running 0 33s
NAME READY STATUS RESTARTS AGE
guestbook-ui-85db984648-7bb75 1/1 Running 0 32s
3. Result of logging in to the Argo CD UI as bob (LDAP devs group)
(1) Applications screen
- dev-k8s-guestbook, in-cluster-guestbook, and prd-k8s-guestbook all exist, but the list appears empty to bob
(2) Clusters screen
- The mgmt, dev, and prd clusters are all connected, but bob has no permission to see them
Syncing OpenLDAP group information into Keycloak
1. Open the LDAP Provider
Keycloak myrealm → User Federation → open the LDAP Provider,
then click Add mapper on the Mappers tab
2. Configure the mapper
Name : ldap-groups
Mapper type: group-ldap-mapper
LDAP Groups DN : ou=groups,dc=example,dc=org
Group Name LDAP Attribute: cn
Group Object Classes: groupOfNames
Membership LDAP attribute: member
Membership attribute type: DN
Mode: READ_ONLY
3. Sync LDAP groups to Keycloak
Passing groups in the token: Keycloak settings for the ArgoCD client
1. Create a groups Client Scope
- Name: groups, everything else at its default
2. Add a Group Membership mapper to the groups scope
- In the client scope, open Mappers → [Configure a new mapper],
  choose "Group Membership", and enter groups for both Name and Token Claim Name
3. Attach the groups scope to the argocd client
- Clients → argocd → [Client scopes] tab
- Click Add client scope, select the groups scope created above, then choose Default from the [Add] dropdown
4. Add the groups scope to the Argo CD OIDC config
kubectl edit cm -n argocd argocd-cm
...
requestedScopes: ["openid", "profile", "email", "groups"]
# give it ~15 seconds to apply, then do the login below
configmap/argocd-cm edited
5. Check the result
- groups is now among the requested scopes and appears in the user info
- but the Argo CD UI still shows no Applications or Clusters
Assigning Argo CD RBAC
1. Edit the Argo CD RBAC ConfigMap
kubectl edit cm argocd-rbac-cm -n argocd
...
data:
  policy.csv: |
    g, devs, role:admin
...
configmap/argocd-rbac-cm edited
- An RBAC rule of the form g, <group>, <role>: every user in the devs group is granted role:admin
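role:admin is a blunt instrument; if devs should only view and sync applications, a narrower policy using the same documented p/g rule syntax could look like this (a sketch; the role name devs-ro is made up here):
p, role:devs-ro, applications, get, */*, allow
p, role:devs-ro, applications, sync, */*, allow
p, role:devs-ro, clusters, get, *, allow
g, devs, role:devs-ro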
2. Confirm the result
Adding the groups scope to the Jenkins OIDC login
1. Attach the groups scope to the jenkins client
- In the jenkins client, click Add client scope and add groups as Default
2. Reflect the groups scope in the Jenkins OIDC settings
- Manage Jenkins → Security: add groups to the scopes and set the groups field name to groups
- Logging in again shows a Groups entry on the user detail page
3. Cleanup
(1) Delete the three kind clusters
kind delete cluster --name mgmt ; kind delete cluster --name dev ; kind delete cluster --name prd
Deleting cluster "mgmt" ...
Deleted nodes: ["mgmt-control-plane"]
Deleting cluster "dev" ...
Deleted nodes: ["dev-control-plane"]
Deleting cluster "prd" ...
Deleted nodes: ["prd-control-plane"]
(2) Remove the exercise domains added to /etc/hosts
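The removal command isn't shown above; assuming the echo … | sudo tee -a lines earlier were the only edits, one way with GNU sed (on macOS, use sed -i ''):
sudo sed -i '/keycloak.example.com/d; /argocd.example.com/d; /jenkins.example.com/d' /etc/hosts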