Chapter 3: Cluster Configuration

Authored by syscook.dev

What is Cluster Configuration?

Cluster configuration involves setting up and customizing various aspects of your Kubernetes cluster to meet specific requirements. This includes configuring networking, storage, security policies, resource quotas, and cluster-wide settings that affect how applications run and interact within the cluster.

Key Configuration Areas:

  • Network Configuration: CNI plugins, service mesh, ingress controllers
  • Storage Configuration: Persistent volumes, storage classes, CSI drivers
  • Security Configuration: RBAC, Pod Security Standards, network policies
  • Resource Management: Resource quotas, limit ranges, node affinity
  • Cluster Settings: API server configuration, admission controllers
  • Monitoring and Logging: Metrics collection, log aggregation

Why Configure Your Cluster?

1. Network Isolation and Security

Proper network configuration ensures secure communication between pods and services.

# Example: Network Policy for microservices
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-app-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
        - podSelector:
            matchLabels:
              app: load-balancer
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to: [] # Allow DNS resolution
      ports:
        - protocol: UDP
          port: 53

Configuration Explanation:

  • podSelector: Selects the pods the policy applies to
  • policyTypes: Specifies whether the policy governs ingress, egress, or both
  • ingress: Defines allowed incoming traffic
  • egress: Defines allowed outgoing traffic
  • namespaceSelector (under from): Allows traffic from pods in matching namespaces
  • podSelector (under from): Allows traffic from matching pods in the policy's own namespace
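The selectors only take effect on pods whose labels match. A minimal sketch of a pod this policy would govern (the pod name and image are illustrative, not from the original example):

```yaml
# Hypothetical pod selected by web-app-network-policy:
# its labels match spec.podSelector in the production namespace
apiVersion: v1
kind: Pod
metadata:
  name: web-app-demo          # illustrative name
  namespace: production
  labels:
    app: web-app              # matched by podSelector.matchLabels
spec:
  containers:
    - name: web
      image: nginx:1.25       # illustrative image
      ports:
        - containerPort: 8080 # the port the ingress rule allows
```

Pods in the same namespace without the app: web-app label remain unaffected by this policy.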

2. Resource Management and Quotas

Resource configuration ensures fair resource allocation and prevents resource exhaustion.

# Example: Resource quota for development namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "10"
    pods: "50"
    services: "10"
    secrets: "20"
    configmaps: "20"
    count/deployments.apps: "10"
    count/replicasets.apps: "20"
    count/statefulsets.apps: "5"
    count/jobs.batch: "5"
    count/cronjobs.batch: "5"

Quota Explanation:

  • requests.cpu: Total CPU requests allowed
  • requests.memory: Total memory requests allowed
  • limits.cpu: Total CPU limits allowed
  • limits.memory: Total memory limits allowed
  • persistentvolumeclaims: Maximum number of PVCs
  • pods: Maximum number of pods
  • services: Maximum number of services
  • secrets: Maximum number of secrets
  • configmaps: Maximum number of config maps
  • count/*: Maximum number of specific resource types
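Once a quota on requests and limits is active, every new pod in the namespace must declare them or admission is rejected. A sketch of a Deployment that fits within dev-quota (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-demo            # illustrative name
  namespace: development
spec:
  replicas: 2
  selector:
    matchLabels:
      app: quota-demo
  template:
    metadata:
      labels:
        app: quota-demo
    spec:
      containers:
        - name: app
          image: nginx:1.25   # illustrative image
          resources:
            requests:         # counts toward requests.cpu / requests.memory
              cpu: 250m
              memory: 256Mi
            limits:           # counts toward limits.cpu / limits.memory
              cpu: 500m
              memory: 512Mi
```

With two replicas, this consumes 500m of the 4-CPU request budget and 1 CPU of the 8-CPU limit budget.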

3. Storage Configuration

Storage configuration provides persistent storage for applications.

# Example: Storage class for AWS EBS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs-gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete

Storage Class Explanation:

  • provisioner: CSI driver that provisions volumes
  • parameters: Storage-specific configuration
    • type: EBS volume type (gp3, gp2, io1, io2)
    • iops: Input/output operations per second
    • throughput: Throughput in MiB/s
    • encrypted: Enable encryption
    • kmsKeyId: KMS key for encryption
  • volumeBindingMode: When volumes are bound and provisioned (WaitForFirstConsumer delays provisioning until a pod using the claim is scheduled)
  • allowVolumeExpansion: Whether volumes can be resized after creation
  • reclaimPolicy: What happens to the backing volume when the PVC is deleted
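With WaitForFirstConsumer, no EBS volume exists until a pod that uses the claim is scheduled. A sketch of a PVC that consumes the aws-ebs-gp3 class (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # illustrative name
  namespace: production
spec:
  storageClassName: aws-ebs-gp3   # the class defined above
  accessModes:
    - ReadWriteOnce           # EBS volumes attach to a single node
  resources:
    requests:
      storage: 50Gi
```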

How to Configure Your Cluster?

1. Network Configuration

CNI Plugin Configuration (Calico)

# Example: Calico CNI configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is not used, so disable it.
  typha_service_name: "none"
  # The CNI network configuration to install on the node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
            "type": "calico-ipam",
            "assign_ipv4": "true",
            "assign_ipv6": "false"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

Ingress Controller Configuration (NGINX)

# Example: NGINX Ingress Controller configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  # Configuration for NGINX Ingress Controller
  proxy-body-size: "50m"        # sets nginx client_max_body_size
  proxy-connect-timeout: "60"
  proxy-send-timeout: "60"
  proxy-read-timeout: "60"
  proxy-buffer-size: "8k"
  proxy-buffers-number: "8"
  proxy-busy-buffers-size: "16k"
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384"
  ssl-prefer-server-ciphers: "true"
  enable-ssl-passthrough: "true"
  enable-vts-status: "true"
  http-snippet: |
    # Custom HTTP configuration
    map $http_upgrade $connection_upgrade {
      default upgrade;
      '' close;
    }
  server-snippet: |
    # Custom server configuration
    location /health {
      access_log off;
      return 200 "healthy\n";
      add_header Content-Type text/plain;
    }
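An Ingress routed through this controller might look like the following sketch (the hostname, backend service name, and the nginx class name are assumptions about your setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress        # illustrative name
  namespace: production
spec:
  ingressClassName: nginx      # must match the controller's IngressClass
  rules:
    - host: app.example.com    # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app  # illustrative backend Service
                port:
                  number: 8080
```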

2. Security Configuration

Pod Security Standards

# Example: Pod Security Standards configuration
apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
---
apiVersion: v1
kind: LimitRange
metadata:
  name: secure-limit-range
  namespace: secure-namespace
spec:
  limits:
    - type: Container
      default:
        memory: "512Mi"
        cpu: "500m"
      defaultRequest:
        memory: "256Mi"
        cpu: "250m"
      max:
        memory: "1Gi"
        cpu: "1000m"
      min:
        memory: "128Mi"
        cpu: "100m"
    - type: PersistentVolumeClaim
      max:
        storage: "10Gi"
      min:
        storage: "1Gi"
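Under the restricted level, a pod must run as non-root, disable privilege escalation, drop all capabilities, and set a seccomp profile. A sketch of a pod that passes admission in secure-namespace (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo         # illustrative name
  namespace: secure-namespace
spec:
  containers:
    - name: app
      image: nginx:1.25-alpine  # illustrative image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
      # No resources block required: the LimitRange above injects
      # defaultRequest and default as this container's requests/limits.
```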

RBAC Configuration

# Example: RBAC configuration for application team
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-team-sa
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: app-team-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "networkpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-rolebinding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-team-sa
    namespace: production
  - kind: User
    name: app-team-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-team-role
  apiGroup: rbac.authorization.k8s.io

3. Storage Configuration

Persistent Volume Configuration

# Example: Persistent Volume for database
apiVersion: v1
kind: PersistentVolume
metadata:
  name: database-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/database"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
  namespace: production
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
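A pod binds to the claim by name. The sketch below mounts database-pvc into a container (the pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: database              # illustrative name
  namespace: production
  labels:
    app: database
spec:
  containers:
    - name: postgres
      image: postgres:16      # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: database-pvc   # the claim defined above
```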

Storage Class Configuration

# Example: Storage classes for different storage tiers
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com  # the in-tree kubernetes.io/aws-ebs provisioner is deprecated and lacks gp3 support
parameters:
  type: gp3
  iops: "10000"
  throughput: "250"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-hdd
provisioner: ebs.csi.aws.com
parameters:
  type: sc1
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete

4. Monitoring Configuration

Prometheus Configuration

# Example: Prometheus configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
      external_labels:
        cluster: 'production'
        environment: 'prod'

    rule_files:
      - "alert_rules.yml"

    alerting:
      alertmanagers:
        - static_configs:
            - targets:
                - alertmanager:9093

    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
          - role: node
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics

      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
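The kubernetes-pods job only keeps targets whose pods carry the prometheus.io/scrape annotation. A pod advertises its metrics endpoint like this (the pod name, image, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-demo          # illustrative name
  namespace: production
  annotations:
    prometheus.io/scrape: "true"   # matched by the keep relabel rule
    prometheus.io/path: "/metrics" # rewrites __metrics_path__
    prometheus.io/port: "9100"     # rewrites __address__ to <pod-ip>:9100
spec:
  containers:
    - name: exporter
      image: prom/node-exporter:v1.7.0   # illustrative image
      ports:
        - containerPort: 9100
```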

Grafana Configuration

# Example: Grafana data source configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus:9090
        isDefault: true
        editable: true
        jsonData:
          timeInterval: "5s"
          queryTimeout: "60s"
          httpMethod: "POST"
      - name: Loki
        type: loki
        access: proxy
        url: http://loki:3100
        editable: true
        jsonData:
          maxLines: 1000
          derivedFields:
            - datasourceUid: prometheus
              matcherRegex: "traceID=(\\w+)"
              name: TraceID
              url: "$${__value.raw}"

5. Cluster-wide Configuration

Admission Controllers Configuration

# Example: ValidatingWebhookConfiguration (cluster-scoped, so no namespace in metadata)
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-security-webhook
webhooks:
  - name: pod-security.example.com
    clientConfig:
      service:
        name: pod-security-webhook
        namespace: kube-system
        path: "/validate"
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]

Cluster Autoscaler Configuration

# Example: Cluster Autoscaler configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/production-cluster
            - --balance-similar-node-groups
            - --scale-down-enabled=true
            - --scale-down-delay-after-add=10m
            - --scale-down-unneeded-time=10m
          env:
            - name: AWS_REGION
              value: us-west-2

Practical Examples

1. Complete Production Cluster Setup

Step 1: Create Namespaces and Resource Quotas

#!/bin/bash
# setup-namespaces.sh

# Create namespaces
kubectl create namespace production
kubectl create namespace staging
kubectl create namespace development
kubectl create namespace monitoring
kubectl create namespace ingress-nginx

# Apply resource quotas
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "100"
    services: "20"
    persistentvolumeclaims: "20"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
    services: "10"
    persistentvolumeclaims: "10"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: development-quota
  namespace: development
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "25"
    services: "5"
    persistentvolumeclaims: "5"
EOF

echo "Namespaces and resource quotas created successfully!"

Step 2: Configure Network Policies

#!/bin/bash
# setup-network-policies.sh

# Apply network policies
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-nginx
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-database-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-app
      ports:
        - protocol: TCP
          port: 5432
EOF

echo "Network policies configured successfully!"

Step 3: Setup Monitoring Stack

#!/bin/bash
# setup-monitoring.sh

# Install Prometheus and Grafana using Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=admin123 \
  --set prometheus.prometheusSpec.retention=30d \
  --set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=50Gi

# Install Jaeger for tracing
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm install jaeger jaegertracing/jaeger \
  --namespace monitoring \
  --set storage.type=elasticsearch \
  --set storage.elasticsearch.nodeCount=1

echo "Monitoring stack installed successfully!"

2. Storage Configuration

Step 1: Create Storage Classes

#!/bin/bash
# setup-storage.sh

# Create storage classes
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-hdd
provisioner: ebs.csi.aws.com
parameters:
  type: sc1
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: high-iops
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "10000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
EOF

echo "Storage classes created successfully!"

Step 2: Create Persistent Volumes

#!/bin/bash
# create-persistent-volumes.sh

# Create persistent volumes for databases
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: fast-ssd
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/postgres"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  storageClassName: fast-ssd
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/redis"
EOF

echo "Persistent volumes created successfully!"

3. Security Configuration

Step 1: Setup RBAC

#!/bin/bash
# setup-rbac.sh

# Create service accounts
kubectl create serviceaccount app-team-sa -n production
kubectl create serviceaccount monitoring-sa -n monitoring

# Create roles
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: app-team-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "networkpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: monitoring
  name: monitoring-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
EOF

# Create role bindings
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-rolebinding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-team-sa
    namespace: production
roleRef:
  kind: Role
  name: app-team-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-rolebinding
  namespace: monitoring
subjects:
  - kind: ServiceAccount
    name: monitoring-sa
    namespace: monitoring
roleRef:
  kind: Role
  name: monitoring-role
  apiGroup: rbac.authorization.k8s.io
EOF

echo "RBAC configured successfully!"

Best Practices

1. Resource Planning

# Calculate resource requirements
# Control plane nodes: 2-4 CPU cores, 4-8GB RAM
# Worker nodes: 2-8 CPU cores, 4-16GB RAM
# Storage: 20-100GB per node

# Example resource calculation for a production cluster
# 3 worker nodes × 4 CPU cores = 12 total CPU cores
# 3 worker nodes × 8GB RAM = 24GB total RAM
# 3 worker nodes × 50GB storage = 150GB total storage

2. Security Configuration

# Enable Pod Security Standards
kubectl label namespace production pod-security.kubernetes.io/enforce=restricted
kubectl label namespace production pod-security.kubernetes.io/audit=restricted
kubectl label namespace production pod-security.kubernetes.io/warn=restricted

# Grant cluster-admin to a trusted administrator
# (RBAC itself is enabled by default on most modern clusters)
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=admin

3. Monitoring Setup

# Install Prometheus and Grafana
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack

# Install Jaeger for tracing
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm install jaeger jaegertracing/jaeger

Common Pitfalls and Solutions

1. Network Configuration Issues

# ❌ Problem: Pods cannot communicate with each other
# Error: network is unreachable

# ✅ Solution: Install and configure CNI plugin
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

2. Storage Configuration Issues

# ❌ Problem: Persistent volumes not binding
# Error: no persistent volumes available for this claim

# ✅ Solution: Check storage class and node affinity
kubectl get storageclass
kubectl get pv
kubectl describe pvc <pvc-name>

3. Resource Quota Issues

# ❌ Problem: Pod creation fails due to resource quota
# Error: exceeded quota: dev-quota

# ✅ Solution: Check resource quotas and adjust
kubectl describe quota -n development
kubectl get limitrange -n development

Conclusion

Cluster configuration is essential for creating a production-ready Kubernetes environment. By understanding:

  • What different configuration areas are and their purposes
  • Why proper configuration is crucial for security, performance, and reliability
  • How to configure networking, storage, security, and monitoring

you can create a well-configured Kubernetes cluster that meets your specific requirements. Proper configuration ensures that your cluster is secure, scalable, and maintainable in production environments.

Next Steps

  • Practice with different configuration scenarios
  • Set up monitoring and logging
  • Move on to Chapter 4: Pods & Deployments

This tutorial is part of the Kubernetes Mastery series by syscook.dev