[Add kube formation tsl]
This commit is contained in:

Présentation_Management_Automation.pptx (binary file, not shown)

kubernetes-formation/01-core-workloads/README.md
@@ -0,0 +1,289 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3 - Core - Workloads

## Namespaces

Namespaces are a logical cluster or environment. They are the primary method of partitioning a cluster or scoping access.

## Lab 1 : Namespaces

- Learn how to create and switch between Kubernetes Namespaces using kubectl

```
kubectl get namespaces
kubectl config get-contexts
kubectl create namespace dev
kubectl config set-context minikube --namespace=dev
kubectl config get-contexts
```
## Pods

A pod is the atomic unit of Kubernetes. It is the smallest “unit of work” or “management resource” within the system and is the foundational building block of all Kubernetes Workloads.

## Lab 2 : Pods

- Examine both single and multi-container Pods, including viewing their attributes through the CLI and their exposed services through the API server proxy
- Create a simple Pod called pod-example
- List and describe the Pod
- View its logs and exec into it
- Use kubectl proxy to verify the web server

```
kubectl create -f manifests/pod-example.yaml
kubectl get po
kubectl get pod pod-example
kubectl describe pod pod-example
kubectl logs pod-example
kubectl exec -it pod-example -- sh

kubectl proxy
```
Create a new SSH session with local port forwarding and open the URL

```
ssh -i XXX.pem -L 8001:localhost:8001 ubuntu@IP
```
- http://127.0.0.1:8001/api/v1/namespaces/dev/pods/pod-example/proxy/

or

```
kubectl port-forward --address 0.0.0.0 pod/pod-example 8888:80
```
- http://publicIP:8888

```
kubectl create -f manifests/pod-multi-container-example.yaml
kubectl get po
kubectl describe po multi-container-example
kubectl logs multi-container-example
kubectl logs -c nginx multi-container-example
kubectl logs -c content multi-container-example
kubectl exec -it multi-container-example -- sh
kubectl exec -it -c nginx multi-container-example -- sh

kubectl proxy
```

Create a new SSH session with local port forwarding and open the URL

```
ssh -i XXX.pem -L 8001:localhost:8001 ubuntu@IP
```
- http://127.0.0.1:8001/api/v1/namespaces/dev/pods/multi-container-example/proxy/
## Labels and Selectors

Labels are key-value pairs that are used to identify, describe and group together related sets of objects or resources.

Selectors use labels to filter or select objects, and are used throughout Kubernetes.

## Lab 3 : Labels and Selectors

- Explore the methods of labeling objects in addition to filtering them with both equality and set-based selectors
- Label Pods
- Select Pods using labels

```
kubectl label pod pod-example app=nginx environment=dev
kubectl label pod multi-container-example app=nginx environment=prod
kubectl get pods --show-labels
kubectl get pods --selector environment=prod
kubectl get pods -l app=nginx

kubectl delete pods pod-example multi-container-example
```
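The commands above use equality-based selectors only; the set-based form mentioned in the lab goal can be exercised with the same labels. A minimal sketch, assuming the two labeled Pods from this lab still exist in the current namespace:

```
kubectl get pods -l 'environment in (dev,prod)'
kubectl get pods -l 'environment notin (prod)'
kubectl get pods -l 'app,environment!=dev'
```

The quoted expressions are parsed by the API server, not the shell; `app` alone matches any Pod that carries the `app` label regardless of its value.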
## ReplicaSets

ReplicaSets are the primary method of managing Pod replicas and their lifecycle. This includes their scheduling, scaling, and deletion.

Their job is simple: always ensure the desired number of replicas that match the selector are running.

## Lab 4 : ReplicaSets

- Create and scale a ReplicaSet. Explore and gain an understanding of how the Pods are generated from the Pod template, and how they are targeted with selectors.
- Create a ReplicaSet called rs-example with 3 replicas
- Scale ReplicaSet rs-example up to 5 replicas
- Create an independent Pod manually with the same labels (the Pod is created and immediately terminated)
- Delete the ReplicaSet

```
kubectl create -f manifests/rs-example.yaml
kubectl get pods --watch --show-labels
kubectl describe rs rs-example
kubectl scale replicaset rs-example --replicas=5
kubectl describe rs rs-example
kubectl scale rs rs-example --replicas=3
kubectl get pods --show-labels --watch
kubectl create -f manifests/pod-rs-example.yaml
kubectl get pods --show-labels --watch
kubectl describe rs rs-example
kubectl delete rs rs-example
```
## Deployments

Deployments are a declarative method of managing Pods via ReplicaSets. They provide rollback functionality in addition to more granular update control mechanisms.

## Lab 5 : Deployments

- Create, update and scale a Deployment as well as explore the relationship of Deployment, ReplicaSet and Pod.
- Create a Deployment deploy-example
- Check the status of the Deployment
- Describe the generated ReplicaSet
- Update the deploy-example manifest and add a few additional labels to the Pod template
- Apply the change with the --record flag
- View the history of the Deployment
- View a specific revision with the summary of the Pod template
- Roll back to an older revision
- Delete the Deployment

```
kubectl create -f manifests/deploy-example.yaml --record
kubectl get deployments
kubectl get rs --show-labels
kubectl describe rs deploy-example-<pod-template-hash>
kubectl get pods --show-labels
kubectl describe pod deploy-example-<pod-template-hash>-<random>
```

Update the deploy-example.yaml (add version: 1.0.0 in the Pod template labels)

      template:
        metadata:
          labels:
            app: nginx
            version: 1.0.0

```
kubectl apply -f manifests/deploy-example-update.yaml --record
kubectl get pods --show-labels --watch
kubectl get rs --show-labels
kubectl scale deploy deploy-example --replicas=5
kubectl get rs --show-labels
kubectl describe deploy deploy-example
kubectl describe rs deploy-example-<pod-template-hash>
kubectl describe pod deploy-example-<pod-template-hash>-<random>

kubectl rollout history deployment deploy-example
kubectl rollout history deployment deploy-example --revision=1
kubectl rollout history deployment deploy-example --revision=2
kubectl rollout undo deployment deploy-example --to-revision=1
kubectl get pods --show-labels --watch
kubectl describe deployment deploy-example
kubectl delete deploy deploy-example
```
## DaemonSets

DaemonSets ensure that all nodes matching certain criteria will run an instance of the supplied Pod.

They bypass default scheduling mechanisms and restrictions, and are ideal for cluster-wide services such as log forwarding or health monitoring.

## Lab 6 : DaemonSets

- Experience creating, updating, and rolling back a DaemonSet. Additionally, delve into the process of how they are scheduled and how an update occurs.
- Create DaemonSet ds-example
- Describe the DaemonSet
- Label the node with nodeType=edge
- View the current Pods and display their labels with --show-labels

```
kubectl create -f manifests/ds-example.yaml --record
kubectl get daemonset
kubectl label node minikube nodeType=edge
kubectl get daemonsets
kubectl get pods --show-labels
kubectl delete ds ds-example
```
## StatefulSets

The StatefulSet controller is tailored to managing Pods that must persist or maintain state. Pod identity including hostname, network, and storage can be considered persistent.

They ensure persistence by making use of three things:

- The StatefulSet controller enforcing predictable naming and ordered provisioning/updating/deletion.
- A headless service to provide a unique network identity.
- A volume template to ensure stable per-instance storage.

## Lab 7 : StatefulSets

- Create, update, and delete a StatefulSet to gain an understanding of how the StatefulSet lifecycle differs from other workloads with regards to updating, deleting and the provisioning of storage
- Create StatefulSet sts-example
- View the Pods
- View the current Persistent Volume Claims
- Delete the sts-example-2 Pod
- Check the Pods
- Scale and check the Persistent Volume Claims
- Delete the StatefulSet

```
kubectl create -f manifests/sts-example.yaml
kubectl get pods --show-labels --watch
kubectl describe statefulset sts-example
kubectl get pvc
kubectl delete pod sts-example-2
kubectl get pods

kubectl scale sts sts-example --replicas=5
kubectl get pods
kubectl get pvc
kubectl scale sts sts-example --replicas=3
kubectl get pods
kubectl get pvc
kubectl delete sts sts-example
kubectl delete pvc --all
```
## Job

The Job controller ensures one or more Pods are executed and successfully terminate. It is essentially a task executor that can be run in parallel.

## Lab 8 : Job

- Create a Kubernetes Job and work to understand how the Pods are managed with completions and parallelism directives.
- Create Job job-example
- Watch the Pods
- Delete the Job

```
kubectl create -f manifests/job-example.yaml
kubectl get pods --show-labels --watch
kubectl describe job job-example
kubectl delete job job-example
kubectl get pods
```
## CronJob

CronJobs are an extension of the Job controller, and enable Jobs to be run on a schedule.

## Lab 9 : CronJob

- Create a CronJob based on a Job template. Understand how the Jobs are generated and how to suspend a job in the event of a problem.
- Create CronJob cronjob-example using the cron schedule "*/1 * * * *"
- Watch the Pods

```
kubectl create -f manifests/cronjob-example.yaml
kubectl get jobs
kubectl get jobs
kubectl describe cronjob cronjob-example
kubectl delete cronjob cronjob-example
```

## Clean up

```
kubectl delete -f manifests/
```
@@ -0,0 +1,21 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      completions: 4
      parallelism: 2
      template:
        spec:
          containers:
          - name: hello
            image: alpine:latest
            command: ["/bin/sh", "-c"]
            args: ["echo hello from $HOSTNAME!"]
          restartPolicy: Never
@@ -0,0 +1,29 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nginx
        version: 1.0.0
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
@@ -0,0 +1,28 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
@@ -0,0 +1,22 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-example
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        nodeType: edge
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
@@ -0,0 +1,17 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: batch/v1
kind: Job
metadata:
  name: job-example
spec:
  backoffLimit: 4
  completions: 4
  parallelism: 2
  template:
    spec:
      containers:
      - name: hello
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo hello from $HOSTNAME!"]
      restartPolicy: Never
@@ -0,0 +1,11 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
@@ -0,0 +1,28 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          echo $(date)"<br />" >> /html/index.html;
          sleep 5;
        done
    volumeMounts:
    - name: html
      mountPath: /html
  volumes:
  - name: html
    emptyDir: {}
@@ -0,0 +1,14 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
    env: prod
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
@@ -0,0 +1,22 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    metadata:
      labels:
        app: nginx
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
@@ -0,0 +1,12 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: clusterip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,14 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
    environment: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,15 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  selector:
    app: nginx
    environment: prod
  ports:
  - nodePort: 32410
    protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,13 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  clusterIP: None
  selector:
    app: stateful
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,36 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: stateful
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
kubernetes-formation/02-network/README.md
@@ -0,0 +1,42 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3 - Network

## Services

Services within Kubernetes are the unified method of accessing the exposed workloads of Pods. They are a durable resource (unlike Pods) that is given a static cluster-unique IP and provide simple load-balancing through kube-proxy.
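A Service is also reachable through cluster DNS under the name `<service>.<namespace>.svc.cluster.local`. A minimal sketch of checking this from inside the cluster, assuming the clusterip Service from the lab exists in the dev namespace (the test Pod name and busybox image are illustrative, not part of the course manifests):

```
kubectl run dns-test -n dev --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup clusterip.dev.svc.cluster.local
```

The Pod is removed automatically on exit because of --rm; the lookup should return the Service's cluster IP.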
## Lab 1 : Services

- Create single and multi-container Pods
- Label the Pods
- Create a ClusterIP service and view the different ways it is accessible within the cluster

```
kubectl create -f manifests/pod-example.yaml
kubectl create -f manifests/pod-multi-container-example.yaml
kubectl label pod pod-example app=nginx environment=dev
kubectl label pod multi-container-example app=nginx environment=prod
kubectl create -f manifests/service-clusterip.yaml
kubectl describe service clusterip

kubectl proxy
```
Create a new SSH session with local port forwarding and open the URL

```
ssh -i XXX.pem -L 8001:localhost:8001 ubuntu@IP
```
- http://127.0.0.1:8001/api/v1/namespaces/dev/services/clusterip/proxy/

## Clean up

```
kubectl delete -f manifests/
```
kubernetes-formation/02-network/manifests/pod-example.yaml
@@ -0,0 +1,11 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
@@ -0,0 +1,28 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          echo $(date)"<br />" >> /html/index.html;
          sleep 5;
        done
    volumeMounts:
    - name: html
      mountPath: /html
  volumes:
  - name: html
    emptyDir: {}
@@ -0,0 +1,12 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: clusterip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,14 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
    environment: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,15 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  selector:
    app: nginx
    environment: prod
  ports:
  - nodePort: 32410
    protocol: TCP
    port: 80
    targetPort: 80
kubernetes-formation/03-storage-configuration/README.md
@@ -0,0 +1,184 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3 - Storage - Configuration

Storage : Volumes within Kubernetes are storage that is tied to the Pod’s lifecycle.
A Pod can have one or more types of volumes attached to it. These volumes are consumable by any of the containers within the Pod.
They can survive Pod restarts; however, their durability beyond that is dependent on the Volume Type.

Persistent Volumes and Claims work in conjunction to serve as the direct method in which a Pod consumes persistent storage.
A PersistentVolume (PV) is a representation of a cluster-wide storage resource that is linked to a backing storage provider - NFS, GCEPersistentDisk, RBD etc.
A PersistentVolumeClaim (PVC) acts as a namespaced request for storage that satisfies a set of requirements instead of mapping to the storage resource directly.
This separation of PV and PVC ensures that an application’s ‘claim’ for storage is portable across numerous backends or providers.

Storage Classes are an abstraction on top of an external storage resource (PV). They work directly with the external storage system to enable dynamic provisioning and remove the need for the cluster admin to pre-provision Persistent Volumes.
Configuration: Kubernetes has an integrated pattern for decoupling configuration from application or container. This pattern makes use of two Kubernetes components: ConfigMaps and Secrets.

A ConfigMap is externalized data stored within Kubernetes that can be referenced through several different means:

- Environment variable
- A command line argument (via env var)
- Injected as a file into a volume mount

ConfigMaps can be created from a manifest, literals, a directory, or from files directly.

A Secret is externalized "private" base64-encoded data stored within Kubernetes that can be referenced through several different means:

- Environment variable
- A command line argument (via env var)
- Injected as a file into a volume mount

Like ConfigMaps, Secrets can be created from a manifest, literals, or from files directly.
## Lab 1 : Volumes

Pods may have multiple volumes using different Volume types. Those volumes in turn can be mounted to one or more containers within the Pod by adding them to the volumeMounts list. This is done by referencing their name and supplying their mountPath. Additionally, volumes may be mounted both read-write or read-only depending on the application, enabling a variety of use-cases.

Understand how to add and reference volumes in a Pod and its containers.
- Create a Pod with an emptyDir volume
- Check the read-only option

```
kubectl create -f manifests/volume-example.yaml
kubectl exec volume-example -c content -- /bin/sh -c "cat /html/index.html"
kubectl exec volume-example -c nginx -- /bin/sh -c "cat /usr/share/nginx/html/index.html"
kubectl exec volume-example -c nginx -- /bin/sh -c "echo nginx >> /usr/share/nginx/html/index.html"

kubectl delete -f manifests/
```
## Lab 2 : Persistent Volumes/Storage Classes

Storage Classes provide a method of dynamically provisioning Persistent Volumes from an external storage system. They have the same attributes as normal PVs, and have their own methods of being garbage collected. They may be targeted by name using the storageClassName within a Persistent Volume Claim request, or a Storage Class may be configured as default, ensuring that Claims may be fulfilled even when there is no valid selector target.

Understand how it's possible for a Persistent Volume Claim to consume dynamically provisioned storage via a Storage Class.
- Describe the Storage Class
- Create and describe the Persistent Volume Claim
- Describe the Persistent Volume

```
kubectl describe sc standard
kubectl create -f manifests/pvc-standard.yaml
kubectl describe pvc pvc-standard
kubectl get pv

kubectl delete pvc pvc-standard
```
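The manifests/pvc-standard.yaml file is not reproduced in this change. A minimal sketch of what such a claim could look like: the claim name and the standard Storage Class name follow the lab, while the access mode and requested size are assumptions for illustration only, not the course file:

```
# Illustrative sketch - not the actual course manifest
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-standard
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
```

Because the claim names a Storage Class, the provisioner creates a matching Persistent Volume on demand, which is what `kubectl get pv` surfaces in the lab.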
## Lab 3 : ConfigMap

There are four primary methods of creating ConfigMaps with kubectl: from a manifest, passing literals on the command line, supplying a path to a directory, or to the individual files themselves. These ConfigMaps are stored within etcd, and may be used in a multitude of ways.

Items within a ConfigMap can be injected into a Pod's environment variables at container creation. These items may be picked up by the application being run in the container directly, or referenced as a command-line argument. Both methods are commonly used and enable a wide variety of use-cases.

In addition to being injected as environment variables, it's possible to mount the contents of a ConfigMap as a volume. This same method may be augmented to mount specific items from a ConfigMap instead of the entire thing. These items can be renamed or made read-only to meet a variety of application needs, providing an easy-to-use avenue to further decouple application from configuration.

- Create ConfigMaps from manifest, literal, directory and file
- Use ConfigMaps with Environment Variables
- Use ConfigMaps with Volumes

```
kubectl create -f manifests/cm-manifest.yaml
kubectl get configmap manifest-example -o yaml

kubectl create cm literal-example --from-literal="city=Ann Arbor" --from-literal=state=Michigan
kubectl get cm literal-example -o yaml

kubectl create cm dir-example --from-file=manifests/cm/
kubectl get cm dir-example -o yaml

kubectl create cm file-example --from-file=manifests/cm/city --from-file=manifests/cm/state
kubectl get cm file-example -o yaml

kubectl create -f manifests/cm-env-example.yaml
kubectl get pods
kubectl logs cm-env-example-<pod-id>

kubectl create -f manifests/cm-cmd-example.yaml
kubectl get pods
kubectl logs cm-cmd-example-<pod-id>

kubectl delete job cm-env-example cm-cmd-example

kubectl create -f manifests/cm-vol-example.yaml
kubectl exec cm-vol-example -- ls /myconfig
kubectl exec cm-vol-example -- /bin/sh -c "cat /myconfig/*"
kubectl exec cm-vol-example -- ls /mycity
kubectl exec cm-vol-example -- cat /mycity/thisismycity

kubectl delete pod cm-vol-example
kubectl delete cm dir-example file-example literal-example manifest-example
```
## Lab 4 : Secrets

Secrets are created and used just like ConfigMaps. If you have completed the ConfigMap exercises, the Secrets section may be skimmed, glossing over a few of the minor syntax differences.

Note that a Secret has the additional attribute type when compared to a ConfigMap. The Opaque value simply means the data is unstructured. Additionally, the content referenced in data itself is base64 encoded. Decoded, the values are username=example and password=mypassword.
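The base64 round trip can be checked locally with standard shell tools; a minimal sketch for the two values above:

```
# Encode the plaintext values as they would appear under data: in the Secret.
# -n matters: without it, echo appends a newline that gets encoded too.
echo -n 'example' | base64       # username -> ZXhhbXBsZQ==
echo -n 'mypassword' | base64    # password -> bXlwYXNzd29yZA==

# Decode them back to verify
echo 'ZXhhbXBsZQ==' | base64 -d      # example
echo 'bXlwYXNzd29yZA==' | base64 -d  # mypassword
```

Remember that base64 is an encoding, not encryption; anyone who can read the Secret can decode it.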
- Create Secrets from manifest, literal, directory and file
- Use Secrets with Environment Variables
- Use Secrets with Volumes

```
kubectl create -f manifests/secret-manifest.yaml
kubectl get secret manifest-example -o yaml

kubectl create secret generic literal-example --from-literal=username=example --from-literal=password=mypassword
kubectl get secret literal-example -o yaml

kubectl create secret generic dir-example --from-file=manifests/secret/
kubectl get secret dir-example -o yaml

kubectl create secret generic file-example --from-file=manifests/secret/username --from-file=manifests/secret/password
kubectl get secret file-example -o yaml

kubectl create -f manifests/secret-env-example.yaml
kubectl get pods
kubectl logs secret-env-example-<pod-id>
kubectl create -f manifests/secret-cmd-example.yaml
kubectl get pods
kubectl logs secret-cmd-example-<pod-id>

kubectl delete job secret-env-example secret-cmd-example

kubectl create -f manifests/secret-vol-example.yaml
kubectl exec secret-vol-example -- ls /mysecret
kubectl exec secret-vol-example -- /bin/sh -c "cat /mysecret/*"
kubectl exec secret-vol-example -- ls /mypass
kubectl exec secret-vol-example -- cat /mypass/supersecretpass

kubectl delete pod secret-vol-example
kubectl delete secret dir-example file-example literal-example manifest-example
```

## Clean up

```
kubectl delete -f manifests/
```
@@ -0,0 +1,20 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-cmd-example
spec:
  template:
    spec:
      containers:
      - name: env
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo Hello from ${CITY}!"]
        env:
        - name: CITY
          valueFrom:
            configMapKeyRef:
              name: manifest-example
              key: city
      restartPolicy: Never
@@ -0,0 +1,20 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-env-example
spec:
  template:
    spec:
      containers:
      - name: env
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv CITY"]
        env:
        - name: CITY
          valueFrom:
            configMapKeyRef:
              name: manifest-example
              key: city
      restartPolicy: Never
@@ -0,0 +1,8 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: ConfigMap
metadata:
  name: manifest-example
data:
  city: Ann Arbor
  state: Michigan
@@ -0,0 +1,30 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-example
spec:
  containers:
  - name: mypod
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          sleep 10;
        done
    volumeMounts:
    - name: config-volume
      mountPath: /myconfig
    - name: city
      mountPath: /mycity
      readOnly: true
  volumes:
  - name: config-volume
    configMap:
      name: manifest-example
  - name: city
    configMap:
      name: manifest-example
      items:
      - key: city
        path: thisismycity
@@ -0,0 +1 @@
Ann Arbor
@@ -0,0 +1 @@
Michigan
@@ -0,0 +1,31 @@
## Kubernetes Fundamentals labs v1.4.5
kind: PersistentVolume
apiVersion: v1
metadata:
  name: html
  labels:
    type: hostpath
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: html
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    type: DirectoryOrCreate
    path: "/tmp/html"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: html
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: html
  resources:
    requests:
      storage: 1Gi
@@ -0,0 +1,17 @@
## Kubernetes Fundamentals labs v1.4.5
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-sc-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  storageClassName: mypvsc
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    type: DirectoryOrCreate
    path: "/tmp/mypvsc"
@@ -0,0 +1,16 @@
## Kubernetes Fundamentals labs v1.4.5
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-selector-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    type: DirectoryOrCreate
    path: "/tmp/mypvselector"
@@ -0,0 +1,12 @@
## Kubernetes Fundamentals labs v1.4.5
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sc-example
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: mypvsc
  resources:
    requests:
      storage: 1Gi
@@ -0,0 +1,14 @@
## Kubernetes Fundamentals labs v1.4.5
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-selector-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
@@ -0,0 +1,12 @@
## Kubernetes Fundamentals labs v1.4.5
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-standard
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
@@ -0,0 +1,42 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reader
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reader
  template:
    metadata:
      labels:
        app: reader
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: html

---

apiVersion: v1
kind: Service
metadata:
  name: reader
spec:
  selector:
    app: reader
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
@@ -0,0 +1,20 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-cmd-example
spec:
  template:
    spec:
      containers:
      - name: env
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo Hello there ${USERNAME}!"]
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: manifest-example
              key: username
      restartPolicy: Never
@@ -0,0 +1,20 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-env-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv USERNAME"]
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: manifest-example
              key: username
      restartPolicy: Never
@@ -0,0 +1,9 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Secret
metadata:
  name: manifest-example
type: Opaque
data:
  username: ZXhhbXBsZQ==
  password: bXlwYXNzd29yZA==
@@ -0,0 +1,30 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-example
spec:
  containers:
  - name: mypod
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
      sleep 10;
      done
    volumeMounts:
    - name: secret-volume
      mountPath: /mysecret
    - name: password
      mountPath: /mypass
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: manifest-example
  - name: password
    secret:
      secretName: manifest-example
      items:
      - key: password
        path: supersecretpass
@@ -0,0 +1 @@
mypassword
@@ -0,0 +1 @@
example
@@ -0,0 +1,30 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: v1
kind: Pod
metadata:
  name: volume-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content
    image: alpine:latest
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
      echo $(date)"<br />" >> /html/index.html;
      sleep 5;
      done
  volumes:
  - name: html
    emptyDir: {}
@@ -0,0 +1,31 @@
## Kubernetes Fundamentals labs v1.4.5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: writer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: writer
  template:
    metadata:
      labels:
        app: writer
    spec:
      containers:
      - name: content
        image: alpine:latest
        volumeMounts:
        - name: html
          mountPath: /html
        command: ["/bin/sh", "-c"]
        args:
        - while true; do
          date >> /html/index.html;
          sleep 5;
          done
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: html
125
kubernetes-formation/04-architecture/README.md
Normal file
@@ -0,0 +1,125 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)


# Kubernetes labs v1.7.3 - Architecture

## Lab1 : minikube

minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It focuses on helping application developers and new Kubernetes users.

- Install minikube
- Start a cluster
- Enable addons
- Access the Kubernetes dashboard

```
minikube start --driver=docker --cni=cilium
minikube status
kubectl get po -A
minikube addons list
minikube addons enable metrics-server
kubectl get pod,svc -n kube-system
kubectl top pods
minikube addons enable dashboard
minikube dashboard
minikube stop
minikube delete
```

## Lab2 : kubeadm

```
sudo su
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm
apt-mark hold kubelet kubeadm
systemctl enable --now kubelet
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.4.0/cri-dockerd_0.4.0.3-0.ubuntu-jammy_amd64.deb
chmod 644 /home/plb/cri-dockerd_0.4.0.3-0.ubuntu-jammy_amd64.deb
apt-get install ./cri-dockerd_0.4.0.3-0.ubuntu-jammy_amd64.deb
swapoff -a
```

Install the master node:
```
sudo su
swapon --show
swapoff -a
systemctl enable --now kubelet
kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --v 5 --node-name master

systemctl status kubelet
journalctl -xeu kubelet

export KUBECONFIG=/etc/kubernetes/admin.conf
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.17.3 --set operator.replicas=1 --namespace kube-system

+ Control plane node isolation
```

Install a worker node:
```
sudo su
swapon --show
swapoff -a
systemctl enable --now kubelet
kubeadm join ... --cri-socket unix:///var/run/cri-dockerd.sock --v 5 --node-name worker
```

Clean up the master node:
```
sudo su
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl drain worker --delete-emptydir-data --force --ignore-daemonsets
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
kubectl delete node worker
kubeadm reset -f
systemctl stop kubelet
rm -rf /etc/kubernetes/ /var/lib/kubelet/ /var/lib/etcd /etc/cni /opt/cni
docker container rm -f $(docker ps -aq)
systemctl restart kubelet
systemctl restart docker
unset KUBECONFIG
```

Clean up a worker node:

```
sudo su
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
kubeadm reset -f
systemctl stop kubelet
rm -rf /etc/kubernetes/ /var/lib/kubelet/ /var/lib/etcd /etc/cni /opt/cni
docker container rm -f $(docker ps -aq)
systemctl restart docker
unset KUBECONFIG
```

## Lab3 Scheduler Affinity and anti-affinity

Modify the manifest to make sure the Pods are scheduled on different Nodes.

Deploy the Pods:
```bash
kubectl apply -f manifests/pod-example.yaml
kubectl apply -f manifests/pod-backup-example.yaml
kubectl get pods
```

## Clean up

```
kubectl delete -f manifests/
```
@@ -0,0 +1,14 @@
kind: Pod
apiVersion: v1
metadata:
  name: pod-backup-example
  labels:
    app: myapp-backup
spec:
  containers:
  - name: myapp-backup
    image: busybox
    command:
    - sh
    - -c
    - "sleep 3600"
@@ -0,0 +1,12 @@
kind: Pod
apiVersion: v1
metadata:
  name: pod-custom-scheduler-example
  labels:
    name: shell
spec:
  schedulerName: my-scheduler # default-scheduler
  containers:
  - name: shell
    image: ubuntu
    command: ["sh", "-c", "sleep 3600"]
@@ -0,0 +1,14 @@
kind: Pod
apiVersion: v1
metadata:
  name: pod-example
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: busybox
    command:
    - sh
    - -c
    - "sleep 3600"
@@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
24
kubernetes-formation/04-architecture/manifests/pod.yaml
Normal file
@@ -0,0 +1,24 @@
kind: Pod
apiVersion: v1
metadata:
  name: pod-backup-example
  labels:
    app: myapp-backup
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname
  containers:
  - name: myapp-backup
    image: busybox
    command:
    - sh
    - -c
    - "sleep 3600"
@@ -0,0 +1,59 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-scheduler-config-map
  namespace: kube-system
  labels:
    app: my-scheduler
data:
  kube-scheduler-config.yaml: |-
    algorithmSource:
      provider: DefaultProvider
    apiVersion: kubescheduler.config.k8s.io/v1alpha1
    bindTimeoutSeconds: 600
    clientConnection:
      acceptContentTypes: ""
      burst: 100
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: "/etc/kubernetes/scheduler.conf"
      qps: 50
    disablePreemption: false
    enableContentionProfiling: false
    enableProfiling: false
    failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
    hardPodAffinitySymmetricWeight: 1
    healthzBindAddress: 0.0.0.0:10251
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: false
    metricsBindAddress: 0.0.0.0:10251
    percentageOfNodesToScore: 50
    schedulerName: my-scheduler
    policyConfigMap: my-scheduler-policy-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-scheduler-policy-cm
  namespace: kube-system
  labels:
    app: my-scheduler
data:
  scheduler-policy.yaml: |
    kind: Policy
    apiVersion: v1
    predicates:
    - name: PodFitsPorts
    - name: PodFitsResources
    - name: NoDiskConflict
    - name: MatchNodeSelector
    - name: HostName
    priorities:
    - name: LeastRequestedPriority
      weight: 1
    - name: BalancedResourceAllocation
      weight: 1
    - name: ServiceSpreadingPriority
      weight: 1
    - name: EqualPriority
      weight: 1
@@ -0,0 +1,49 @@
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
    app: my-scheduler
  name: my-kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --config=/etc/kubernetes/scheduler/kube-scheduler-config.yaml
    image: k8s.gcr.io/kube-scheduler:v1.17.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler
      name: kube-scheduler-configmap
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - name: kube-scheduler-configmap
    configMap:
      name: my-scheduler-config-map
status: {}
117
kubernetes-formation/05-administration/README.md
Normal file
@@ -0,0 +1,117 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3 - Administration

## Quotas

A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace.
It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.

## Lab1 : Quotas

- Apply the Quota manifest
- Get the list of Quotas
- Describe the Quota to get detailed information
- Apply the Deployment manifest
- Get the list of Pods
- Get the list of Quotas
- Describe the Quota to get detailed information
- Scale the Deployment to 4 replicas
- Get the details of the scaled Deployment
- Describe the ReplicaSet associated with the scaled Deployment

```
kubectl apply -f manifests/quotas.yaml
kubectl get quota
kubectl describe quota

kubectl apply -f manifests/deploy-quotas-example.yaml
kubectl get po
kubectl get quota
kubectl describe quota
kubectl scale deploy deploy-quotas-example --replicas=4
kubectl get deploy deploy-quotas-example
kubectl describe deploy deploy-quotas-example
kubectl describe rs

kubectl delete -f manifests/
```
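Why the scale to 4 replicas cannot fully succeed follows from simple quota arithmetic. Assuming the lab's quota (cpu=1, memory=1Gi, pods=3) and the Deployment's per-replica request of 500m CPU / 64Mi memory, the numbers work out like this:

```shell
# Quota arithmetic for the scale-to-4 step (values taken from the lab manifests).
replicas=4
pod_quota=3
cpu_request_m=500    # 500m CPU requested per replica
cpu_quota_m=1000     # quota "cpu: 1" = 1000m

total_cpu_m=$((replicas * cpu_request_m))
echo "total CPU requested: ${total_cpu_m}m (quota: ${cpu_quota_m}m)"

# Both checks fail, so the ReplicaSet cannot create all 4 pods:
[ "$total_cpu_m" -gt "$cpu_quota_m" ] && echo "cpu quota exceeded"
[ "$replicas" -gt "$pod_quota" ] && echo "pod quota exceeded"
```

This is why `kubectl describe rs` shows FailedCreate events rather than the Deployment erroring outright: the quota is enforced at pod-creation time.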

## Limit ranges

A LimitRange is a policy to constrain the resource allocations (limits and requests) that you can specify for each applicable object kind (such as Pod or PersistentVolumeClaim) in a namespace.

A LimitRange provides constraints that can:

- Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default requests/limits for compute resources in a namespace and automatically inject them into Containers at runtime.

A LimitRange is enforced in a particular namespace when there is a LimitRange object in that namespace.

## Lab 2 : Default Request & Default Limit

- Apply the LimitRange
- Describe the LimitRange to verify it has been created
- Apply a sample Pod
- Get the details of the Pod, including CPU and memory resource requests and limits
- Delete the LimitRange

```
kubectl apply -f manifests/container-limit-range-example1.yaml
kubectl describe limitrange/container-limit-range-example1
kubectl apply -f manifests/pod-example1.yaml
kubectl get pod pod-example1 --output=custom-columns=NAME:.metadata.name,CPU-R:.spec.containers[*].resources.requests.cpu,MEMORY-R:.spec.containers[*].resources.requests.memory,CPU-L:.spec.containers[*].resources.limits.cpu,MEMORY-L:.spec.containers[*].resources.limits.memory
kubectl delete limitrange/container-limit-range-example1
```

## Lab 3 : Min & Max & Default Request & Default Limit

- Apply the LimitRange
- Describe the LimitRange to verify it has been created
- Apply a sample Pod
- Get the details of the Pod, including CPU and memory resource requests and limits
- Delete the LimitRange

```
kubectl apply -f manifests/container-limit-range-example2.yaml
kubectl describe limitrange/container-limit-range-example2
kubectl apply -f manifests/pod-example2.yaml
kubectl get pod pod-example2 --output=custom-columns=NAME:.metadata.name,CPU-R:.spec.containers[*].resources.requests.cpu,MEMORY-R:.spec.containers[*].resources.requests.memory,CPU-L:.spec.containers[*].resources.limits.cpu,MEMORY-L:.spec.containers[*].resources.limits.memory
```

## Pod QoS

Kubernetes assigns every Pod a quality of service (QoS) class based on the resource requests and limits of its containers, and uses that classification to decide which Pods to evict from a Node experiencing node pressure. The possible QoS classes are Guaranteed, Burstable, and BestEffort. When a Node runs out of resources, Kubernetes first evicts BestEffort Pods running on that Node, followed by Burstable and finally Guaranteed Pods. When this eviction is due to resource pressure, only Pods exceeding their resource requests are candidates for eviction.
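The classification rules can be sketched as a toy single-container classifier. This is a simplification: a real cluster evaluates every container in the Pod and CPU and memory independently, and defaults requests to limits when only limits are set.

```shell
# Toy single-container QoS classifier following the rules described above.
# Arguments: request-cpu request-mem limit-cpu limit-mem (empty string = unset).
qos_class() {
  if [ -z "$1$2$3$4" ]; then
    echo BestEffort     # no requests or limits at all
  elif [ -n "$3" ] && [ -n "$4" ] && [ "$1" = "$3" ] && [ "$2" = "$4" ]; then
    echo Guaranteed     # limits set and equal to requests
  else
    echo Burstable      # anything in between
  fi
}

qos_class 700m 200Mi 700m 200Mi   # Guaranteed (matches pod-qos-example)
qos_class 100m 128Mi ""   128Mi   # Burstable  (matches pod-example2)
qos_class ""   ""    ""   ""      # BestEffort
```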

## Lab 4 : QoS classes Guaranteed, Burstable, and BestEffort

- Apply the Pod manifest
- Get the QoS class of the Pod
- Describe the Pod to get detailed information

```
kubectl apply -f manifests/pod-qos-example.yaml
kubectl get pod pod-qos-example --output=jsonpath='{.status.qosClass}'
kubectl describe po pod-qos-example
```

## Clean up

```
kubectl delete -f manifests/
```
@@ -0,0 +1,15 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limit-range-example1
spec:
  limits:
  - default:
      memory: 64Mi
      cpu: 500m
    defaultRequest:
      memory: 32Mi
      cpu: 500m
    type: Container
@@ -0,0 +1,22 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limit-range-example2
spec:
  limits:
  - default:
      memory: 64Mi
      cpu: 500m
    defaultRequest:
      memory: 32Mi
      cpu: 500m
    min:
      cpu: 500m
      memory: 16Mi
    max:
      cpu: 1
      memory: 64Mi
    type: Container
@@ -0,0 +1,24 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-quotas-example
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 500m
            memory: 64Mi
@@ -0,0 +1,13 @@
## Kubernetes Advanced labs v1.5.0
kind: Pod
apiVersion: v1
metadata:
  name: pod-example1
  labels:
    name: shell
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
@@ -0,0 +1,19 @@
## Kubernetes Advanced labs v1.5.0
kind: Pod
apiVersion: v1
metadata:
  name: pod-example2
  labels:
    name: shell
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        memory: 128Mi
@@ -0,0 +1,16 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: v1
kind: Pod
metadata:
  name: pod-qos-example
spec:
  containers:
  - name: pod-qos-example
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
11
kubernetes-formation/05-administration/manifests/quotas.yaml
Normal file
@@ -0,0 +1,11 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quotas-example
spec:
  hard:
    cpu: "1"
    memory: 1Gi
    pods: "3"
245
kubernetes-formation/06-security/README.md
Normal file
@@ -0,0 +1,245 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3 - Security

## Security : RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.

RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

To enable RBAC, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC.

## Lab1: RBAC

- Create
  - Namespace
  - ServiceAccount
  - Long-lived API token for the ServiceAccount
  - Role
  - RoleBinding
- Get Roles, RoleBinding
- Describe Roles
- Check access using auth can-i

```bash
kubectl create -f manifests/rbac.yaml
kubectl get ServiceAccount -n rbac
kubectl get Roles -n rbac
kubectl get RoleBinding -n rbac
kubectl describe Roles -n rbac
kubectl auth can-i get pods --as=system:serviceaccount:rbac:dev-service-account
kubectl auth can-i get cm --as=system:serviceaccount:rbac:dev-service-account
kubectl auth can-i get cm --as=system:serviceaccount:rbac:dev-service-account -n rbac
```

## Lab2: Kubeconfig

- Generate a kubeconfig

Usage: ./kubeconfig.sh (namespace) (service account name) (secret name)

```
dos2unix manifests/kubeconfig.sh
chmod +x manifests/kubeconfig.sh
./manifests/kubeconfig.sh rbac dev-service-account dev-secret
kubectl --kubeconfig=kubeconfig-rbac get po
kubectl --kubeconfig=kubeconfig-rbac get cm
kubectl --kubeconfig=kubeconfig-rbac get cm -n rbac

kubectl delete -f manifests/rbac.yaml
```

## Lab3: Token

The token is a JSON Web Token (JWT), encoded as base64url as specified in the JWT RFC.
jwt.io is a useful tool for decoding JWTs.

- Generate a token
- Manually create an API token for a ServiceAccount

```
kubectl run --restart=Never busybox -it --image=busybox --rm --quiet -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token
```
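A JWT is three dot-separated, base64url-encoded segments (header.payload.signature), so the payload can also be inspected with plain shell tools. A minimal sketch, using a toy locally-built token since the real one only exists inside the Pod:

```shell
# Build a toy JWT-shaped token locally (header.payload.signature),
# then decode its payload segment the same way you would for a real token.
payload='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:rbac:dev-service-account"}'
b64url=$(printf '%s' "$payload" | base64 | tr '+/' '-_' | tr -d '=\n')
token="eyJhbGciOiJSUzI1NiJ9.${b64url}.signature"

# Take the middle segment, undo the URL-safe alphabet, restore padding, decode.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
printf '%s' "$seg" | base64 -d
```

With a real ServiceAccount token, the same pipeline prints the claims (issuer, subject, audience, expiry) without sending the token anywhere.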

## Security : Network Policies

Network Policies are an application-centric construct that lets you specify how a pod is allowed to communicate with various network entities over the network.
Network Policies apply to a connection with a pod on one or both ends, and are not relevant to other connections.

By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace.
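The policy manifests themselves are not reproduced in this chunk. A deny-all policy matching the lab's web pod would plausibly look like the following (an assumption based on the manifest name `web-deny-all.yaml`, not the actual file): selecting the pod with an empty ingress list drops all inbound traffic.

```shell
# Print the hypothetical manifest content for inspection.
cat <<'EOF'
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
EOF
```

An empty `podSelector: {}` instead would apply the same deny to every pod in the namespace, which is the usual shape of a `default-deny-all` policy.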
|
||||
## Lab4: Network Policies
|
- DENY all traffic to an application
  - Run a pod
  - Test connectivity from another pod (Traffic is allowed)
  - Apply deny all policy
  - Test connectivity from another pod (Traffic dropped!)
- ALLOW all traffic to an application
  - Run a pod
  - Apply allow all policy
  - Test connectivity from another pod (Traffic is allowed)
- LIMIT traffic to an application
  - Run a pod with labels
  - Apply limit traffic policy
  - Test connectivity from another pod (Traffic dropped!)
  - Test connectivity from another pod with app=bookstore & role=frontend labels (Traffic is allowed)
- DENY all non-whitelisted traffic to a namespace
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in the same ns (Traffic dropped!)
  - Test connectivity from another pod in another ns (Traffic dropped!)
- DENY all traffic from other namespaces
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in another ns (Traffic dropped!)
  - Test connectivity from another pod in the same ns (Traffic is allowed)
- ALLOW traffic to an application from all namespaces
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in another ns (Traffic is allowed)
- ALLOW all traffic from a namespace
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in dev ns (Traffic dropped!)
  - Test connectivity from another pod in prod ns (Traffic is allowed)
- ALLOW traffic from some pods in another namespace
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in the same ns (Traffic dropped!)
  - Test connectivity from another pod with labels type=monitoring in the default ns (Traffic dropped!)
  - Test connectivity from another pod in the other ns (Traffic dropped!)
  - Test connectivity from another pod with labels type=monitoring in the other ns (Traffic is allowed)

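The first exercise hinges on a policy that selects the web pods and declares an empty ingress list. A minimal sketch of what a deny-all manifest such as `manifests/web-deny-all.yaml` contains:

```yaml
# Sketch of a deny-all ingress policy: an empty ingress list ([]) drops all
# inbound traffic to the selected pods, while the podSelector keeps the
# policy scoped to pods labelled app=web only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
```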
```
kubectl run web --image=nginx --labels app=web --expose --port 80
kubectl run --rm -i -t --image=alpine test-wget -- sh
# wget -qO- http://web

kubectl apply -f manifests/web-deny-all.yaml

kubectl run --rm -i -t --image=alpine test-wget -- sh
# wget -qO- --timeout=2 http://web

kubectl delete po,service web
kubectl delete networkpolicy web-deny-all
-----

kubectl run web --image=nginx --labels=app=web --expose --port 80
kubectl apply -f manifests/web-allow-all.yaml
kubectl run test-wget --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web

kubectl delete po,service web
kubectl delete networkpolicy web-allow-all
-----

kubectl run apiserver --image=nginx --labels app=bookstore,role=api --expose --port 80
kubectl apply -f manifests/api-allow.yaml

kubectl run test-wget --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://apiserver

kubectl run test-wget --rm -i -t --image=alpine --labels app=bookstore,role=frontend -- sh
# wget -qO- --timeout=2 http://apiserver

kubectl delete po,service apiserver
kubectl delete networkpolicy api-allow
-----

kubectl run apiserver --image=nginx --labels app=api --expose --port 80
kubectl apply -f manifests/default-deny-all.yaml

kubectl run test-wget --rm -i -t --image=alpine --labels app=bookstore -- sh
# wget -qO- --timeout=2 http://apiserver

kubectl create namespace other
kubectl run test-wget --rm -i -t --image=alpine --namespace=other -- sh
# wget -qO- --timeout=2 http://apiserver.default

kubectl delete po,service apiserver
kubectl delete networkpolicy default-deny-all
kubectl delete namespace other
-----

kubectl create namespace secondary
kubectl run web --namespace secondary --image=nginx --labels=app=web --expose --port 80
kubectl apply -f manifests/deny-from-other-namespaces.yaml
kubectl run test-wget --namespace=default --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.secondary

kubectl run test-wget --namespace=secondary --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.secondary

kubectl delete po,service web -n secondary
kubectl delete networkpolicy deny-from-other-namespaces -n secondary
kubectl delete namespace secondary
-----

kubectl create namespace secondary
kubectl run web --image=nginx --namespace secondary --labels=app=web --expose --port 80
kubectl apply -f manifests/web-allow-all-namespaces.yaml

kubectl run test-wget --namespace=default --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.secondary

kubectl delete po,service web -n secondary
kubectl delete networkpolicy web-allow-all-namespaces -n secondary
kubectl delete namespace secondary
-----

kubectl run web --image=nginx --labels=app=web --expose --port 80

kubectl create namespace dev
kubectl label namespace/dev purpose=testing

kubectl create namespace prod
kubectl label namespace/prod purpose=production

kubectl apply -f manifests/web-allow-prod.yaml

kubectl run test-wget --namespace=dev --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --namespace=prod --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl delete networkpolicy web-allow-prod
kubectl delete po,service web
kubectl delete namespace prod
kubectl delete namespace dev
-----

kubectl run web --image=nginx --labels=app=web --expose --port 80
kubectl create namespace other
kubectl label namespace/other team=operations
kubectl apply -f manifests/web-allow-all-ns-monitoring.yaml

kubectl run test-wget --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --labels type=monitoring --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --namespace=other --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --namespace=other --labels type=monitoring --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl delete networkpolicy web-allow-all-ns-monitoring
kubectl delete namespace other
kubectl delete po,service web
-----
```

## Clean up
```
kubectl delete -f ./manifests
```
65
kubernetes-formation/06-security/kyverno/README.md
Normal file
@@ -0,0 +1,65 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)


# Kubernetes labs v1.7.3
## Security : Kyverno

Kyverno is a policy engine designed for Kubernetes. Policies are managed as Kubernetes resources, and no new language is required to write them.

Kyverno policies can validate, mutate, generate, and clean up Kubernetes resources, and verify image signatures and artifacts to help secure the software supply chain.

The Kyverno CLI can be used to test policies and validate resources as part of a CI/CD pipeline.

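As a sketch of that CI/CD use, a Kyverno CLI test file can pin the expected results for this lab's policy against its two sample pods. Field names follow the `cli.kyverno.io/v1alpha1` schema and the file names are assumptions based on the lab's manifests; check `kyverno test --help` for your CLI version:

```yaml
# kyverno-test.yaml -- run with: kyverno test .
# (schema and file names are assumptions, adjust to your Kyverno CLI version)
apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: disallow-privileged-containers-test
policies:
  - disallow-privileged-containers.yaml
resources:
  - runasnonprivileged.yaml
  - runasprivileged.yaml
results:
  - policy: disallow-privileged-containers
    rule: privileged-containers
    resources:
      - runasnonprivileged
    result: pass
  - policy: disallow-privileged-containers
    rule: privileged-containers
    resources:
      - runasprivileged
    result: fail
```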
## Lab1 : Disallow Privileged Containers

- Add the Kyverno Helm repository
- Scan the new repository for charts
- Show all available chart versions for Kyverno
- Use Helm to create a Namespace and install Kyverno
- Apply Disallow Privileged Containers in audit mode
- List Policies, Cluster Policies, Policy reports, Cluster Policy reports
- Run as non-privileged container (OK)
- Run as privileged container (OK)
- Apply Disallow Privileged Containers in enforce mode
- Run as non-privileged container (OK)
- Run as privileged container (KO)

```
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm search repo kyverno -l
helm install kyverno kyverno/kyverno -n kyverno --create-namespace --set replicaCount=1
kubectl get pods -n kyverno
kubectl logs -l app.kubernetes.io/name=kyverno -n kyverno


kubectl apply -f manifests/disallow-privileged-containers.yaml

kubectl get policies -A
kubectl get clusterpolicies
kubectl get policyreport -A
kubectl get clusterpolicyreport

kubectl create -f manifests/runasnonprivileged.yaml
kubectl create -f manifests/runasprivileged.yaml
kubectl describe po runasnonprivileged
kubectl describe po runasprivileged
kubectl get ev

kubectl delete po runasprivileged runasnonprivileged
kubectl apply -f manifests/disallow-privileged-containers-enforced.yaml

kubectl create -f manifests/runasnonprivileged.yaml
kubectl create -f manifests/runasprivileged.yaml
kubectl describe po runasnonprivileged
kubectl get ev
```

## Clean up
```
kubectl delete -f ./manifests
```
@@ -0,0 +1,40 @@
## Kubernetes Fundamentals labs v1.5.0
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    policies.kyverno.io/title: Disallow Privileged Containers
    policies.kyverno.io/category: Pod Security Standards (Baseline)
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    kyverno.io/kyverno-version: 1.6.0
    kyverno.io/kubernetes-version: "1.22-1.23"
    policies.kyverno.io/description: >-
      Privileged mode disables most security mechanisms and must not be allowed. This policy
      ensures Pods do not call for privileged mode.
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: privileged-containers
      match:
        any:
        - resources:
            kinds:
              - Pod
      validate:
        message: >-
          Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged
          and spec.initContainers[*].securityContext.privileged must be unset or set to `false`.
        pattern:
          spec:
            =(ephemeralContainers):
              - =(securityContext):
                  =(privileged): "false"
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
@@ -0,0 +1,40 @@
## Kubernetes Fundamentals labs v1.5.0
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    policies.kyverno.io/title: Disallow Privileged Containers
    policies.kyverno.io/category: Pod Security Standards (Baseline)
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    kyverno.io/kyverno-version: 1.6.0
    kyverno.io/kubernetes-version: "1.22-1.23"
    policies.kyverno.io/description: >-
      Privileged mode disables most security mechanisms and must not be allowed. This policy
      ensures Pods do not call for privileged mode.
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: privileged-containers
      match:
        any:
        - resources:
            kinds:
              - Pod
      validate:
        message: >-
          Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged
          and spec.initContainers[*].securityContext.privileged must be unset or set to `false`.
        pattern:
          spec:
            =(ephemeralContainers):
              - =(securityContext):
                  =(privileged): "false"
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
@@ -0,0 +1,12 @@
## Kubernetes Fundamentals labs v1.5.0
apiVersion: v1
kind: Pod
metadata:
  name: runasnonprivileged
spec:
  containers:
  - name: runasroot
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: false
@@ -0,0 +1,12 @@
## Kubernetes Fundamentals labs v1.5.0
apiVersion: v1
kind: Pod
metadata:
  name: runasprivileged
spec:
  containers:
  - name: runasroot
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
174
kubernetes-formation/06-security/network-security/README.md
Normal file
@@ -0,0 +1,174 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Security : Network Policies

Network Policies are an application-centric construct which allows you to specify how a pod is allowed to communicate with various network "entities" over the network.
Network Policies apply to a connection with a pod on one or both ends, and are not relevant to other connections.

By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace.

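Two selectors recur throughout the lab manifests, and they behave very differently when empty; the contrast is worth fixing in mind before the exercises (annotated sketch, not one of the lab manifests):

```yaml
# How empty selectors read in a NetworkPolicy (annotated sketch):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: selector-semantics-demo   # illustrative name, not a lab manifest
spec:
  podSelector: {}        # {} selects EVERY pod in the policy's namespace
  ingress:
  - from:
    - podSelector: {}    # {} here allows traffic from all pods in the SAME namespace
  # By contrast:
  #   ingress: []   -> no rules at all: deny all ingress to the selected pods
  #   ingress: [{}] -> one empty rule: allow all ingress from everywhere
```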
## Lab1: Network Policies

- DENY all traffic to an application
  - Run a pod
  - Test connectivity from another pod (Traffic is allowed)
  - Apply deny all policy
  - Test connectivity from another pod (Traffic dropped!)
- ALLOW all traffic to an application
  - Run a pod
  - Apply allow all policy
  - Test connectivity from another pod (Traffic is allowed)
- LIMIT traffic to an application
  - Run a pod with labels
  - Apply limit traffic policy
  - Test connectivity from another pod (Traffic dropped!)
  - Test connectivity from another pod with app=bookstore & role=frontend labels (Traffic is allowed)
- DENY all non-whitelisted traffic to a namespace
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in the same ns (Traffic dropped!)
  - Test connectivity from another pod in another ns (Traffic dropped!)
- DENY all traffic from other namespaces
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in another ns (Traffic dropped!)
  - Test connectivity from another pod in the same ns (Traffic is allowed)
- ALLOW traffic to an application from all namespaces
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in another ns (Traffic is allowed)
- ALLOW all traffic from a namespace
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in dev ns (Traffic dropped!)
  - Test connectivity from another pod in prod ns (Traffic is allowed)
- ALLOW traffic from some pods in another namespace
  - Run a pod
  - Apply limit traffic policy
  - Test connectivity from another pod in the same ns (Traffic dropped!)
  - Test connectivity from another pod with labels type=monitoring in the default ns (Traffic dropped!)
  - Test connectivity from another pod in the other ns (Traffic dropped!)
  - Test connectivity from another pod with labels type=monitoring in the other ns (Traffic is allowed)

```
kubectl run web --image=nginx --labels app=web --expose --port 80
kubectl run --rm -i -t --image=alpine test-wget -- sh
# wget -qO- http://web

kubectl apply -f manifests/web-deny-all.yaml

kubectl run --rm -i -t --image=alpine test-wget -- sh
# wget -qO- --timeout=2 http://web

kubectl delete po,service web
kubectl delete networkpolicy web-deny-all
-----

kubectl run web --image=nginx --labels=app=web --expose --port 80
kubectl apply -f manifests/web-allow-all.yaml
kubectl run test-wget --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web

kubectl delete po,service web
kubectl delete networkpolicy web-allow-all
-----

kubectl run apiserver --image=nginx --labels app=bookstore,role=api --expose --port 80
kubectl apply -f manifests/api-allow.yaml

kubectl run test-wget --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://apiserver

kubectl run test-wget --rm -i -t --image=alpine --labels app=bookstore,role=frontend -- sh
# wget -qO- --timeout=2 http://apiserver

kubectl delete po,service apiserver
kubectl delete networkpolicy api-allow
-----

kubectl run apiserver --image=nginx --labels app=api --expose --port 80
kubectl apply -f manifests/default-deny-all.yaml

kubectl run test-wget --rm -i -t --image=alpine --labels app=bookstore -- sh
# wget -qO- --timeout=2 http://apiserver

kubectl create namespace other
kubectl run test-wget --rm -i -t --image=alpine --namespace=other -- sh
# wget -qO- --timeout=2 http://apiserver.default

kubectl delete po,service apiserver
kubectl delete networkpolicy default-deny-all
kubectl delete namespace other
-----

kubectl create namespace secondary
kubectl run web --namespace secondary --image=nginx --labels=app=web --expose --port 80
kubectl apply -f manifests/deny-from-other-namespaces.yaml
kubectl run test-wget --namespace=default --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.secondary

kubectl run test-wget --namespace=secondary --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.secondary

kubectl delete po,service web -n secondary
kubectl delete networkpolicy deny-from-other-namespaces -n secondary
kubectl delete namespace secondary
-----

kubectl create namespace secondary
kubectl run web --image=nginx --namespace secondary --labels=app=web --expose --port 80
kubectl apply -f manifests/web-allow-all-namespaces.yaml

kubectl run test-wget --namespace=default --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.secondary

kubectl delete po,service web -n secondary
kubectl delete networkpolicy web-allow-all-namespaces -n secondary
kubectl delete namespace secondary
-----

kubectl run web --image=nginx --labels=app=web --expose --port 80

kubectl create namespace dev
kubectl label namespace/dev purpose=testing

kubectl create namespace prod
kubectl label namespace/prod purpose=production

kubectl apply -f manifests/web-allow-prod.yaml

kubectl run test-wget --namespace=dev --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --namespace=prod --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl delete networkpolicy web-allow-prod
kubectl delete po,service web
kubectl delete namespace prod
kubectl delete namespace dev
-----

kubectl run web --image=nginx --labels=app=web --expose --port 80
kubectl create namespace other
kubectl label namespace/other team=operations
kubectl apply -f manifests/web-allow-all-ns-monitoring.yaml

kubectl run test-wget --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --labels type=monitoring --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --namespace=other --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl run test-wget --namespace=other --labels type=monitoring --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default

kubectl delete networkpolicy web-allow-all-ns-monitoring
kubectl delete namespace other
kubectl delete po,service web
-----
```
@@ -0,0 +1,16 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow-5000
spec:
  podSelector:
    matchLabels:
      app: apiserver
  ingress:
  - ports:
    - port: 5000
    from:
    - podSelector:
        matchLabels:
          role: monitoring
@@ -0,0 +1,15 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
@@ -0,0 +1,11 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-egress
  namespace: default
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress: []
@@ -0,0 +1,9 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  ingress: []
@@ -0,0 +1,12 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: secondary
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
@@ -0,0 +1,12 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress: []
@@ -0,0 +1,19 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-external-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    to:
    - namespaceSelector: {}
@@ -0,0 +1,25 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: redis-allow-services
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
          role: search
    - podSelector:
        matchLabels:
          app: bookstore
          role: api
    - podSelector:
        matchLabels:
          app: inventory
          role: web
@@ -0,0 +1,13 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: secondary
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector: {}
@@ -0,0 +1,18 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-all-ns-monitoring
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector:   # chooses all pods in namespaces labelled with team=operations
        matchLabels:
          team: operations
      podSelector:         # chooses pods with type=monitoring
        matchLabels:
          type: monitoring
@@ -0,0 +1,12 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-all
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}
@@ -0,0 +1,12 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from: []
@@ -0,0 +1,14 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production
@@ -0,0 +1,10 @@
## Kubernetes Advanced labs v1.5.0
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
65
kubernetes-formation/06-security/pod-security/README.md
Normal file
@@ -0,0 +1,65 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Security : Security Context

A security context defines privilege and access control settings for a Pod or Container.
PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext.
Field values of container.securityContext take precedence over field values of PodSecurityContext.

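That precedence can be seen in a minimal Pod spec (a sketch with illustrative names; the lab's own manifests exercise the same fields):

```yaml
# Pod-level securityContext sets defaults; container-level fields win.
apiVersion: v1
kind: Pod
metadata:
  name: precedence-demo          # illustrative name only
spec:
  securityContext:
    runAsUser: 1000              # default for every container in the Pod
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 2000            # overrides the Pod-level 1000 for this container
```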
## Lab1: Security context for a Pod

- Create the Pod:
- Use id to check
  - uid 1000, which is the value of `runAsUser`
  - gid 3000, which is the same as the `runAsGroup` field
  - group 2000, which is the value of `fsGroup`
  - If `runAsGroup` were omitted, the gid would remain 0 (root), and the process would be able to interact with files that are owned by the root (0) group and that have the required group permissions for the root (0) group.
- Use ps to check
  - The processes are running as user 1000, which is the value of `runAsUser`
- Use ls to check
  - The `/data/demo` directory has group ID 2000, which is the value of `fsGroup`
- Create a file and use ls to check
  - `testfile` has group ID 2000, which is the value of `fsGroup`

```
kubectl apply -f manifests/pod-security-context-example.yaml
kubectl get pod pod-security-context-example
kubectl exec -it pod-security-context-example -- sh

# id
# ps
# cd /data
# ls -l
# cd demo
# echo hello > testfile
# ls -l
# exit

kubectl delete po pod-security-context-example
```

## Lab2: Security context for a Container

- Create the Pod:
- Use ps to check:
  - The output shows that the processes are running as user 2000. This is the value of `runAsUser` specified for the Container. It overrides the value 1000 that is specified for the Pod.

```
kubectl apply -f manifests/container-security-context-example.yaml
kubectl get pod container-security-context-example
kubectl exec -it container-security-context-example -- sh

# ps aux
# exit

kubectl delete po container-security-context-example
```
@@ -0,0 +1,14 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: v1
kind: Pod
metadata:
  name: container-security-context-example
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: container-security-context-example
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 2000
      allowPrivilegeEscalation: false
@@ -0,0 +1,22 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: v1
kind: Pod
metadata:
  name: pod-security-context-example
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: pod-security-context-example
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
77
kubernetes-formation/06-security/rbac/README.md
Normal file
@@ -0,0 +1,77 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Security : RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.

RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

To enable RBAC, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC.

## Lab1: RBAC

- Create
  - Namespace
  - Service Account
  - Long-lived API token for a ServiceAccount
  - Role
  - RoleBinding
- Get Roles, RoleBinding
- Describe Roles
- Check access using auth can-i

```bash
kubectl create -f manifests/rbac.yaml
kubectl get ServiceAccount -n rbac
kubectl get Roles -n rbac
kubectl get RoleBinding -n rbac
kubectl describe Roles -n rbac
kubectl auth can-i get pods --as=system:serviceaccount:rbac:dev-service-account
kubectl auth can-i get cm --as=system:serviceaccount:rbac:dev-service-account
kubectl auth can-i get cm --as=system:serviceaccount:rbac:dev-service-account -n rbac
```

## Lab2: Kubeconfig

- Generate Kubeconfig

Usage: `./kubeconfig.sh ( namespace ) ( service account name ) ( secret name )`

```
dos2unix manifests/kubeconfig.sh
chmod +x manifests/kubeconfig.sh
./manifests/kubeconfig.sh rbac dev-service-account dev-secret
kubectl --kubeconfig=kubeconfig-rbac get po
kubectl --kubeconfig=kubeconfig-rbac get cm
kubectl --kubeconfig=kubeconfig-rbac get cm -n rbac
```
## Lab3: Token

The token is a JSON Web Token (JWT), encoded as base64 as specified in the JWT RFC.
jwt.io is a useful tool to decode JWTs.

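Decoding can also be done from the shell: a JWT is three dot-separated base64url segments, and the claims live in the middle one. A sketch with a hand-built stand-in token (the claims below are hypothetical, standing in for a real cluster token):

```shell
# Build a stand-in token: header.payload.signature, base64url-encoded.
# Real tokens strip the '=' padding; this sketch keeps it so base64 -d works directly.
PAYLOAD_JSON='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:dev2:demo"}'
PAYLOAD_B64=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '\n' | tr '+/' '-_')
TOKEN="eyJhbGciOiJSUzI1NiJ9.${PAYLOAD_B64}.signature"

# Extract and decode the claims segment, as you would for a token
# returned by `kubectl create token demo -n dev2`:
printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+' | base64 -d
# -> {"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:dev2:demo"}
```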
- Generate Token
- Manually create an API token for a ServiceAccount

```
kubectl run --restart=Never busybox -it --image=busybox --rm --quiet -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token


kubectl create ns dev2
kubectl create sa demo -n dev2
kubectl create token demo -n dev2
kubectl create token demo --duration=999999h -n dev2
```

## Clean up
```bash
kubectl delete -f manifests/rbac.yaml
kubectl delete ns dev2
```
@@ -0,0 +1,28 @@
#!/bin/bash -e
## Usage ./kubeconfig.sh ( namespace ) ( service account name ) ( secret name )

# Pull the bearer token from the service account secret.
BEARER_TOKEN=$( kubectl get secrets -n $1 $3 -o jsonpath='{.data.token}' | base64 -d )

CLUSTER_URL=$( kubectl config view -o jsonpath='{.clusters[0].cluster.server}' )

KUBECONFIG=kubeconfig-$1

kubectl config --kubeconfig=$KUBECONFIG \
  set-cluster \
  $CLUSTER_URL \
  --server=$CLUSTER_URL \
  --insecure-skip-tls-verify=true

kubectl config --kubeconfig=$KUBECONFIG \
  set-credentials $2 --token=$BEARER_TOKEN

kubectl config --kubeconfig=$KUBECONFIG \
  set-context $1 \
  --cluster=$CLUSTER_URL \
  --user=$2

kubectl config --kubeconfig=$KUBECONFIG \
  use-context $1

echo "kubeconfig written to file \"$KUBECONFIG\""
60
kubernetes-formation/06-security/rbac/manifests/rbac.yaml
Normal file
@@ -0,0 +1,60 @@
## Kubernetes Fundamentals labs v1.5.0 -- RBAC
apiVersion: v1
kind: Namespace
metadata:
  name: rbac

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-service-account
  namespace: rbac

---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dev-secret
  annotations:
    kubernetes.io/service-account.name: "dev-service-account"
  namespace: rbac

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-access-role
  namespace: rbac
rules:
  - apiGroups: ["apps"]
    resources:
      - statefulsets
    verbs: ["*"]
  - apiGroups: [""]
    resources:
      - services
      - configmaps
      - secrets
      - persistentvolumeclaims
      - persistentvolumes
    verbs: ["*"]
  - apiGroups: [""]
    resources:
      - resourcequotas
    verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-role-binding
  namespace: rbac
subjects:
  - kind: ServiceAccount
    name: dev-service-account
    namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-access-role
103
kubernetes-formation/06-security/trivy/README.md
Normal file
@@ -0,0 +1,103 @@
|
||||
[//]: # (Confidential document)
|
||||
[//]: # (01/05/2025)
|
||||
[//]: # (v 1.7.3)
|
||||
|
||||
|
||||
# Kubernetes labs v1.7.3
|
||||
## Security : Trivy
|
||||
|
||||
|
||||
Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues.
|
||||
|
||||
Targets (what Trivy can scan):
|
||||
- Container Image
|
||||
- Filesystem
|
||||
- Git Repository (remote)
|
||||
- Virtual Machine Image
|
||||
- Kubernetes
|
||||
- AWS
|
||||
|
||||
Scanners (what Trivy can find there):
|
||||
- OS packages and software dependencies in use (SBOM)
|
||||
- Known vulnerabilities (CVEs)
|
||||
- IaC issues and misconfigurations
|
||||
- Sensitive information and secrets
|
||||
- Software licenses
|
||||
|
||||
|
||||
Usage:
|
||||
|
||||
trivy k8s [flags] { cluster | all | specific resources like kubectl. eg: pods, pod/NAME }
|
||||
|
||||
--scanners (default "vuln,config,secret,rbac")
|
||||
|
||||
--compliance(k8s-nsa,k8s-cis, k8s-pss-baseline, k8s-pss-restricted)
|
||||
|
||||
--format(table, json, template, sarif, cyclonedx, spdx, spdx-json, github, cosign-vuln) (default "table")
|
||||
|
||||
--severity (default "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL")
|
||||
|
||||
--vuln-type(default "os,library")
|
||||
|
||||
--components(default [workload,infra])
|
||||
|
||||
--parallel (between 1-20, default 5)
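
These flags combine freely; as one illustrative (not exhaustive) invocation, assuming trivy is installed and a kubeconfig is present:

```shell
# Vulnerability and secret scans only, filtered to the highest severities,
# with machine-readable output.
trivy k8s --scanners vuln,secret --severity HIGH,CRITICAL --format json --report summary
```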
## Lab1: Cluster

- Scan the entire Kubernetes cluster for vulnerabilities and get a summary of the scan
- For additional details, use the `--report=all` flag

```bash
trivy k8s --report summary
trivy k8s --severity=CRITICAL --report=all
trivy k8s --scanners vuln --report all
trivy k8s --scanners=secret --report=summary
trivy k8s --scanners=misconfig --report=summary
trivy k8s --report summary --skip-images
trivy k8s --report summary --exclude-kinds node,pod
```

## Lab2: Namespaces

- Scan an entire Kubernetes namespace, with a filter for critical vulnerabilities, and get a summary of the scan
- Scan pods in a specific namespace
- Scan configmaps in a specific namespace

```
trivy k8s --include-namespaces kube-system --severity=CRITICAL --report summary
trivy k8s --include-kinds pod --include-namespaces kube-system --report summary
trivy k8s --include-kinds configmap --include-namespaces kube-system --report summary
```

## Lab3: Scanners

- Run specific scanners: vuln, misconfig, secret, rbac

```
trivy k8s --report summary --scanners=config
trivy k8s --report all --scanners=config
trivy k8s --report summary --scanners=rbac
```

## Lab4: Image

- Scan an image for vulnerabilities, misconfigurations, licenses
- Check image compliance
- Generate an image SBOM

```
trivy image python:3.4-alpine

trivy image --scanners vuln centos:7
trivy image --scanners misconfig centos:7
trivy image --scanners license centos:7
trivy image --compliance docker-cis centos:7
trivy image --format spdx-json --output result.json alpine:3.15
```
53
kubernetes-formation/07-operations/scalability/hpa/README.md
Normal file
@@ -0,0 +1,53 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Scalability : Horizontal Pod Autoscaler HPA

A HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.

Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.

If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.

Horizontal pod autoscaling does not apply to objects that can't be scaled (for example: a DaemonSet).

The HorizontalPodAutoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The horizontal pod autoscaling controller, running within the Kubernetes control plane, periodically adjusts the desired scale of its target (for example, a Deployment) to match observed metrics such as average CPU utilization, average memory utilization, or any other custom metric you specify.
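
The controller's core scaling rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A quick shell illustration of that arithmetic (the numbers are made up, and this is a sketch of the rule, not the controller's actual code):

```shell
current_replicas=3
current_cpu=90   # observed average utilization (%)
target_cpu=50    # averageUtilization from the HPA spec
# Integer ceiling division: ceil(a/b) == (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"   # → 6
```

So with Pods running at 90% CPU against a 50% target, the Deployment would be scaled from 3 to 6 replicas.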

## Lab 1 : HPA

- Enable the metrics-server addon
- Apply the Deployment and Service manifest
- List pods
- Create the Horizontal Pod Autoscaler
- Check the current status of the autoscaler
- Increase the load
- Within a minute or so, observe the higher CPU load
- List pods
- Stop the load

```
minikube addons enable metrics-server

kubectl create -f manifests/php-apache-deploy.yaml
kubectl get po

kubectl create -f manifests/php-apache-hpa.yaml
kubectl get hpa
kubectl run -it --rm load-generator --image=busybox -- /bin/sh
# while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

kubectl get hpa
kubectl get deployment php-apache

kubectl get hpa
kubectl get deployment php-apache
```

## Clean up

```shell
kubectl delete deploy php-apache
kubectl delete svc php-apache
kubectl delete hpa php-apache
kubectl delete pod load-generator
```
@@ -0,0 +1,3 @@
FROM php:5-apache
ADD index.php /var/www/html/index.php
RUN chmod a+rx index.php
@@ -0,0 +1,7 @@
<?php
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "OK!";
?>
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.k8s.io/hpa-example
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
@@ -0,0 +1,18 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
@@ -0,0 +1,49 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Scalability : Keda

KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.

KEDA is a single-purpose and lightweight component that can be added into any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplication. With KEDA you can explicitly map the apps you want to use event-driven scale, with other apps continuing to function. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.
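
The mapping from an app to its event source is expressed with a ScaledObject. In the lab below, the sample repo ships its own manifest; as a rough sketch of what such a resource looks like (the queue settings here are illustrative assumptions, not the repo's actual manifest):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer
spec:
  scaleTargetRef:
    name: rabbitmq-consumer   # the Deployment KEDA scales
  triggers:
  - type: rabbitmq
    metadata:
      queueName: hello        # illustrative queue name
      queueLength: "5"        # target messages per replica
```

KEDA then creates and drives an HPA for the target Deployment, which is why `kubectl get hpa` shows activity in the lab.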

## Lab1: RabbitMQ Queue

- Add the Keda Helm repo & install the Keda Helm chart
- Add the Bitnami Helm repo & install the RabbitMQ Helm chart
- Clone the sample Go RabbitMQ project
- Deploy a RabbitMQ consumer
- Publish messages to the queue

```
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install rabbitmq --set auth.username=user --set auth.password=PASSWORD bitnami/rabbitmq --wait
kubectl get po

git clone https://github.com/kedacore/sample-go-rabbitmq
cd sample-go-rabbitmq
kubectl apply -f deploy/deploy-consumer.yaml
kubectl get deploy

kubectl apply -f deploy/deploy-publisher-job.yaml
kubectl get hpa
```

## Clean up

```shell
kubectl delete job rabbitmq-publish
kubectl delete ScaledObject rabbitmq-consumer
kubectl delete deploy rabbitmq-consumer
helm delete rabbitmq
helm delete keda -n keda
```
@@ -0,0 +1,28 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Kubernetes extensions : Custom Resource Definitions

A custom resource is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation. It represents a customization of a particular Kubernetes installation.

## Lab1: Custom Resource Definitions

- Manage CRDs/CRs using kubectl
- Create the CRD (Custom Resource Definition)
- Create a custom object my-new-cron-object
- Get the crontab object

```
kubectl apply -f resourcedefinition.yaml
kubectl apply -f my-crontab.yaml
kubectl get crontab
kubectl get ct -o yaml
```
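
Once the CRD is registered, the new kind also shows up in API discovery. Assuming the crontab CRD from this lab, these commands (run against the cluster) confirm it:

```shell
kubectl api-resources | grep -i crontab
kubectl explain crontab.spec
```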

## Clean up
```
kubectl delete ct my-new-cron-object
kubectl delete -f resourcedefinition.yaml
```
@@ -0,0 +1,8 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
@@ -0,0 +1,41 @@
## Kubernetes Advanced labs v1.5.0
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: tslcrontabs.stable.transatel.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.transatel.com
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    # Each version can be enabled/disabled by the served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: tslcrontabs
    # singular name to be used as an alias on the CLI and for display
    singular: tslcrontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: TSLCronTab
    # shortNames allow a shorter string to match your resource on the CLI
    shortNames:
    - tct
@@ -0,0 +1,40 @@
[//]: # (Confidential document)
[//]: # (01/05/2025)
[//]: # (v 1.7.3)

# Kubernetes labs v1.7.3
## Kubernetes extensions : Scheduler

kube-scheduler is the default scheduler for Kubernetes and runs as part of the control plane.
kube-scheduler selects a node for the pod in a 2-step operation:
- Filtering
- Scoring

The filtering step finds the set of Nodes where it's feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resource to meet a Pod's specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn't (yet) schedulable.

In the scoring step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.

Finally, kube-scheduler assigns the Pod to the Node with the highest ranking. If there is more than one node with equal scores, kube-scheduler selects one of these at random.

You can change the default scheduling behavior by passing a KubeSchedulerConfiguration file to kube-scheduler via the --config flag (the legacy --policy-config-file mechanism has been deprecated).

## Lab1: Custom Scheduler

- Deploy a pod using a custom scheduler
- Describe the pod to identify events when using a custom scheduler
- Delete the pod, update the schedulerName and deploy the pod again

```bash
kubectl apply -f manifests/pod-custom-scheduler-example.yaml
kubectl get pods
kubectl describe po pod-custom-scheduler-example
```
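
The repo's pod-custom-scheduler-example.yaml is not reproduced here; a minimal sketch of such a Pod, assuming a custom scheduler deployed under the (hypothetical) name my-custom-scheduler, would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-custom-scheduler-example
spec:
  # Pods whose schedulerName is not "default-scheduler" are ignored by
  # kube-scheduler and stay Pending until the named scheduler binds them.
  schedulerName: my-custom-scheduler
  containers:
  - name: nginx
    image: nginx
```

This is why describing the pod is instructive: with no scheduler of that name running, the events show the pod waiting to be scheduled.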

## Clean up
```bash
kubectl delete -f manifests/pod-custom-scheduler-example.yaml
```