Lightweight K8s in Practice: An Efficient Guide to Deploying Nginx on k3s
Summary: This article explains in detail how to deploy Nginx using k3s, a lightweight Kubernetes distribution, covering environment preparation, deployment configuration, service exposure, and high-availability optimization. It is intended as a reference for developers and operations engineers.
I. Why k3s Is a Good Fit for Nginx
k3s, the lightweight Kubernetes distribution from Rancher, is designed for edge computing, IoT devices, and resource-constrained environments. Its core advantages are a single binary of roughly 40MB and the removal of non-essential components (such as in-tree storage drivers and cloud-provider controllers) while retaining the core functionality of Kubernetes. This design makes it an ideal platform for running lightweight web services such as Nginx.
As a high-performance reverse proxy and load balancer, Nginx uses an event-driven worker model in which a small number of worker processes each handle many concurrent connections, and this pairs naturally with k3s's lightweight architecture. Deploying Nginx on k3s lets you take advantage of the following:
- Efficient resource usage: k3s's slimmed-down core lets Nginx containers run with lower CPU and memory overhead; in the author's tests, a Raspberry Pi 4B (4GB RAM) ran 10+ Nginx instances stably.
- Rapid deployment: k3s's automatic certificate management and one-line installer cut Nginx deployment time from roughly 15 minutes on a traditional Kubernetes cluster to under 3 minutes.
- Edge readiness: k3s supports offline (air-gapped) installation and ARM architectures, so Nginx can run on industrial gateways, retail terminals, and other edge devices.
II. Preparing the k3s Cluster Environment
1. Basic Environment Requirements
- Hardware: at least 2 CPU cores and 4GB RAM per node (4 cores / 8GB or more recommended for production)
- Operating system: RHEL 7+ / Ubuntu 18.04+ / CentOS 7+ (the latest LTS release is recommended)
- Network: nodes must be able to reach each other on ports such as 6443 (Kubernetes API) and 10250 (kubelet)
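If the nodes run a host firewall, these ports have to be opened explicitly. A minimal sketch using ufw on Ubuntu, assuming the default flannel VXLAN backend (which additionally uses 8472/udp):
# Allow the Kubernetes API server, kubelet, and flannel VXLAN traffic between nodes
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp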
2. Installing k3s Quickly
# Install on the server node (Traefik disabled, since we will use Nginx-based ingress instead)
curl -sfL https://get.k3s.io | sh -s -- --disable traefik
# Install on worker (agent) nodes (replace <TOKEN> and <SERVER_IP>; setting K3S_URL makes the installer join the node as an agent)
curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<TOKEN> sh -
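The <TOKEN> value is generated on the server during installation and can be read from the node-token file:
# Run on the server node to obtain the join token
sudo cat /var/lib/rancher/k3s/server/node-token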
Verify the installation:
sudo k3s kubectl get nodes
# The output should look similar to:
# NAME STATUS ROLES AGE VERSION
# k3s-node Ready control-plane 5m v1.28.4+k3s1
3. Configuring Required Components
- Storage class (if persistence is needed): k3s already bundles the local-path provisioner by default, so the manifest below is only required if that component has been disabled. A PVC example follows this list:
# /var/lib/rancher/k3s/server/manifests/local-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
- Ingress controller preparation: we disabled the bundled Traefik at install time and will deploy the NGINX Ingress Controller in Section III; for now, just make sure CoreDNS is running:
sudo k3s kubectl get pods -n kube-system | grep coredns
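A PersistentVolumeClaim that consumes the local-path class could look like the following sketch (the claim name nginx-data and the 1Gi size are placeholders for illustration):
# nginx-data-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi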
III. The Full Nginx Deployment Workflow
1. Basic Deployment Configuration
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-demo
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.25.3-alpine
ports:
- containerPort: 80
resources:
limits:
memory: "128Mi"
cpu: "500m"
Apply it with:
sudo k3s kubectl apply -f nginx-deployment.yaml
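A quick way to confirm the rollout succeeded and all three replicas are running:
sudo k3s kubectl rollout status deployment/nginx-demo
sudo k3s kubectl get pods -l app=nginx -o wide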
2. Service Exposure Options
Option A: NodePort (suitable for testing)
# nginx-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx-nodeport
spec:
type: NodePort
selector:
app: nginx
ports:
- port: 80
targetPort: 80
nodePort: 30080
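Once applied, the service is reachable on any node's IP at the fixed NodePort; for example (replace <NODE_IP> with a real node address):
sudo k3s kubectl apply -f nginx-nodeport.yaml
curl http://<NODE_IP>:30080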
Option B: LoadBalancer (using MetalLB; note that k3s ships a built-in ServiceLB, so when switching to MetalLB it is recommended to disable it with --disable servicelb so that two controllers do not claim the same Services)
- Install MetalLB:
sudo k3s kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
- Configure the IP address pool (MetalLB 0.13+ also requires an L2Advertisement; see the sketch after this list):
# metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- 192.168.1.240-192.168.1.250
- Create the LoadBalancer Service:
# nginx-lb.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx-lb
spec:
type: LoadBalancer
selector:
app: nginx
ports:
- port: 80
targetPort: 80
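As mentioned above, with MetalLB 0.13+ an IPAddressPool by itself is not announced to the network; in Layer-2 mode an L2Advertisement referencing the pool is also required, roughly as sketched below (the resource name first-pool-l2 is our own choice):
# metallb-l2.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool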
3. Ingress Configuration (recommended for production)
# nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
rules:
- host: "nginx.example.com"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-service
port:
number: 80
The NGINX Ingress Controller must be deployed first (it is not bundled with k3s):
sudo k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/baremetal/deploy.yaml
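The Ingress above routes traffic to a Service named nginx-service, which the earlier manifests do not create; a minimal ClusterIP Service matching the nginx-demo Pods would look like this:
# nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80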
IV. High Availability and Optimization
1. Multi-Node Architecture
A three-node layout is a sensible starting point:
- 1 control-plane (server) node running the API server and the datastore (SQLite by default, or embedded etcd)
- 2 worker (agent) nodes running the Nginx Pods
Note that with a single server node the control plane itself remains a single point of failure; for true control-plane HA, run three server nodes with embedded etcd (see the sketch after the config example below).
Example configuration:
# /etc/rancher/k3s/config.yaml (server node)
write-kubeconfig-mode: "0644"
tls-san:
- "k3s-master.example.com"
node-taint:
- "CriticalAddonsOnly=true:NoExecute"
2. Resource Reservation Tuning
Kubelet resource reservations can be adjusted with the --kubelet-arg flag, most conveniently via the k3s config file (restart k3s afterwards with sudo systemctl restart k3s):
# /etc/rancher/k3s/config.yaml
kubelet-arg:
  - "kube-reserved=cpu=200m,memory=256Mi"
  - "system-reserved=cpu=100m,memory=128Mi"
3. Monitoring Integration
The Prometheus Operator is recommended (use kubectl create rather than client-side apply here, because the bundled CRDs are too large for the last-applied-configuration annotation):
sudo k3s kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.67.1/bundle.yaml
Configure Nginx metrics collection with the nginx-prometheus-exporter (it scrapes Nginx's stub_status endpoint, which must be enabled; see the snippet after the manifest):
# nginx-prometheus-exporter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-exporter
  template:
    metadata:
      labels:
        app: nginx-exporter
    spec:
      containers:
      - name: exporter
        image: nginx/nginx-prometheus-exporter:0.11.0
        # Scrapes the stub_status endpoint exposed through the Nginx Service
        args: ["-nginx.scrape-uri=http://nginx-service:80/stub_status"]
        ports:
        - containerPort: 9113   # default exporter metrics port
V. Troubleshooting Guide
1. Common Issues
- Pods stuck in Pending:
sudo k3s kubectl describe pod <pod-name> | grep -i "events"
# Check whether node resources are exhausted or a required storage class is missing
- 502 Bad Gateway:
sudo k3s kubectl logs -n ingress-nginx <ingress-controller-pod>
# Check whether the backend Service and its endpoints are healthy
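A quick way to confirm the backend is actually wired up is to check that the Service has endpoints (the Service name follows the earlier manifests):
sudo k3s kubectl get endpoints nginx-service
# An empty ENDPOINTS column means the selector does not match any ready Pods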
2. Log Collection
Configure an EFK stack:
# Deploy the ECK operator (install the CRDs first, then the operator itself)
sudo k3s kubectl create -f https://download.elastic.co/downloads/eck/2.9.0/crds.yaml
sudo k3s kubectl apply -f https://download.elastic.co/downloads/eck/2.9.0/operator.yaml
# Filebeat configuration (collects Nginx container logs via the ECK Beat resource)
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.9.0   # example version; match it to your Elasticsearch version
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
A complete Beat spec also needs an elasticsearchRef and a daemonSet podTemplate that mounts /var/log/containers; the fragment above shows only the input configuration.
3. Backup and Restore
Use Velero for cluster backups:
# Install Velero (this example targets a MinIO/S3-compatible backend; credentials-velero contains the access keys)
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.4.0 \
  --bucket velero-backup \
  --secret-file ./credentials-velero \
  --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://minio.example.com
# Back up the namespace where Nginx runs (the default namespace in this guide)
velero backup create nginx-backup --include-namespaces default
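Restoring from that backup is a single command:
# Recreate the backed-up Nginx resources in the cluster
velero restore create --from-backup nginx-backup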
VI. Performance Testing and Tuning
1. Benchmarking
Use wrk for load testing:
wrk -t12 -c400 -d30s http://nginx.example.com
# Expected result: requests/sec should hold steady at 5000+ (measured on a Raspberry Pi 4B)
2. Tuning Recommendations
- Kernel parameters (apply them with sudo sysctl -p after editing):
# /etc/sysctl.conf
net.core.somaxconn = 65535
net.ipv4.ip_local_port_range = 1024 65535
- Nginx configuration tuning (see the ConfigMap sketch after this list for applying it to the Pods):
# /etc/nginx/nginx.conf
worker_processes auto;
worker_rlimit_nofile 65535;
events {
worker_connections 4096;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
keepalive_requests 1000;
}
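Since the Pods run the stock nginx image, the tuned nginx.conf has to be injected from the cluster side. A minimal sketch, assuming the tuned file is saved locally as nginx.conf (the ConfigMap name nginx-conf is our own choice):
# Create a ConfigMap from the tuned configuration file
sudo k3s kubectl create configmap nginx-conf --from-file=nginx.conf
Then mount it over the default configuration in the nginx-demo Deployment:
# Fragment to merge into the container spec of nginx-demo
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
# ...and into the Pod spec, alongside containers:
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf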
3. Horizontal Scaling
Autoscaling with an HPA (CPU metrics are available out of the box because k3s bundles metrics-server):
# nginx-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nginx-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx-demo
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
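Apply the HPA and watch it react while load is applied (for example, while the wrk test above is running):
sudo k3s kubectl apply -f nginx-hpa.yaml
# Watch current CPU utilization and the replica count adjust
sudo k3s kubectl get hpa nginx-hpa -w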
With the complete workflow above, developers can deploy and operate Nginx efficiently on k3s. For production environments, it is advisable to integrate the deployment into a CI/CD pipeline and to run periodic chaos-engineering tests to validate the system's resilience.