☸️ K8s Cluster Deployment & KubeSphere
Visual Management Platform Installation Guide

A complete deployment guide for an enterprise-grade Kubernetes container orchestration platform: from kubeadm cluster setup and KubeKey one-click installation to KubeSphere multi-tenancy, DevOps, monitoring, and logging, end to end.

📅 Updated: March 12, 2026 ☸️ Kubernetes version: v1.29.x 🎯 KubeSphere version: v3.4.x / v4.0 ⏱️ Estimated time: 90-120 minutes

1. Kubernetes Overview & Core Concepts

🎯 Core Advantages

  • Automated container orchestration and scheduling
  • Service discovery and load balancing
  • Autoscaling (HPA/VPA)
  • Self-healing and health checks
  • Rolling updates and rollbacks
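Autoscaling from the list above is driven by a HorizontalPodAutoscaler object. A minimal sketch (the Deployment name `myapp` is an illustrative placeholder, not from this guide):

```yaml
# Scale a hypothetical "myapp" Deployment between 2 and 10 replicas,
# targeting 80% average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Note that CPU-based HPA requires the metrics server, which KubeSphere can enable as a pluggable component (section 8.2).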

🏗️ Architecture Components

  • Control Plane
  • kube-apiserver (API server)
  • etcd (distributed store)
  • kube-scheduler (scheduler)
  • kube-controller-manager (controllers)

🖥️ Worker Nodes

  • kubelet (node agent)
  • kube-proxy (network proxy)
  • Container Runtime
  • CNI network plugin
  • CSI storage plugin

📦 Core Resources

  • Pod (smallest schedulable unit)
  • Deployment (stateless applications)
  • StatefulSet (stateful applications)
  • DaemonSet (per-node daemons)
  • Job/CronJob (batch workloads)
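To make these resource types concrete, here is a minimal Deployment manifest (image and names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3           # desired number of Pods
  selector:
    matchLabels:
      app: myapp
  template:             # the Pod template stamped out per replica
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25
        ports:
        - containerPort: 80
```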

1.1 Kubernetes Architecture Diagram

┌─────────────────────────────────────────────────────────────────┐
│                 Kubernetes Cluster Architecture                 │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │                    Control Plane                        │   │
│  │  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │   │
│  │  │  apiserver   │  │  scheduler   │  │  controller  │   │   │
│  │  │  (API)       │  │              │  │  manager     │   │   │
│  │  └──────┬───────┘  └──────────────┘  └──────────────┘   │   │
│  │         │                                               │   │
│  │  ┌──────▼───────┐                                       │   │
│  │  │    etcd      │  (distributed KV store: cluster state)│   │
│  │  └──────────────┘                                       │   │
│  └─────────────────────────────────────────────────────────┘   │
│                           │                                     │
│         ┌─────────────────┼─────────────────┐                  │
│         │                 │                 │                  │
│  ┌──────▼──────┐   ┌──────▼──────┐   ┌──────▼──────┐          │
│  │ Worker Node1│   │ Worker Node2│   │ Worker Node3│          │
│  │ ┌─────────┐ │   │ ┌─────────┐ │   │ ┌─────────┐ │          │
│  │ │ kubelet │ │   │ │ kubelet │ │   │ │ kubelet │ │          │
│  │ ├─────────┤ │   │ ├─────────┤ │   │ ├─────────┤ │          │
│  │ │  Pod A  │ │   │ │  Pod B  │ │   │ │  Pod C  │ │          │
│  │ │  Pod B  │ │   │ │  Pod C  │ │   │ │  Pod D  │ │          │
│  │ ├─────────┤ │   │ ├─────────┤ │   │ ├─────────┤ │          │
│  │ │  kube-  │ │   │ │  kube-  │ │   │ │  kube-  │ │          │
│  │ │  proxy  │ │   │ │  proxy  │ │   │ │  proxy  │ │          │
│  │ ├─────────┤ │   │ ├─────────┤ │   │ ├─────────┤ │          │
│  │ │Container│ │   │ │Container│ │   │ │Container│ │          │
│  │ │ Runtime │ │   │ │ Runtime │ │   │ │ Runtime │ │          │
│  │ └─────────┘ │   │ └─────────┘ │   │ └─────────┘ │          │
│  └─────────────┘   └─────────────┘   └─────────────┘          │
│                                                                 │
│  Add-ons: CNI (Calico/Flannel) | CSI (NFS/Ceph) | CoreDNS      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

2. System Requirements & Environment Preparation

2.1 Hardware Requirements

Role           CPU       Memory  Disk        Count
Control Plane  4 cores+  8 GB+   100 GB SSD  1-3 (HA)
Worker Node    2 cores+  4 GB+   50 GB+      N
All-in-One     4 cores+  8 GB+   100 GB SSD  1

2.2 Software Dependencies

Operating System

  • Ubuntu 20.04/22.04 LTS
  • CentOS 7.9/8.x/9.x
  • Rocky Linux 8.x/9.x
  • Debian 10/11
  • openSUSE Leap 15.x

Container Runtime

  • Docker 20.10+ (deprecated as a direct runtime, still usable via cri-dockerd)
  • containerd 1.6+ (recommended)
  • CRI-O 1.24+
  • cri-dockerd 0.3+ (Docker compatibility shim)

Network Requirements

  • Stable internal connectivity between all nodes
  • Outbound internet access (to pull images)
  • or a private image registry
  • Required ports open (6443, 10250, etc.)

2.3 Port Requirements

Note: the old insecure ports 10251/10252 were removed in Kubernetes 1.23; on v1.29 the scheduler and controller-manager listen on 10259 and 10257.

Port         Protocol  Purpose                                  Direction
6443         TCP       Kubernetes API server                    Inbound
2379-2380    TCP       etcd client/server API                   Inbound
10250        TCP       Kubelet API                              Inbound
10259        TCP       kube-scheduler                           Inbound
10257        TCP       kube-controller-manager                  Inbound
10255        TCP       Kubelet read-only API (deprecated)       Inbound
30000-32767  TCP       NodePort services                        Inbound
179          TCP       Calico BGP                               Bidirectional
4789         UDP       VXLAN (Flannel/Calico)                   Bidirectional
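Before initializing the cluster it is worth verifying these ports are actually reachable between nodes. A small Python sketch (a hypothetical helper, not part of kubeadm; it simply attempts a TCP connect):

```python
import socket

# Control-plane ports from the table above
K8S_PORTS = [6443, 2379, 2380, 10250, 10259, 10257]

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (against a hypothetical master at 192.168.1.100):
#   for p in K8S_PORTS:
#       print(p, "open" if port_open("192.168.1.100", p) else "closed/filtered")
```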

2.4 Node Initialization Script

```bash
#!/bin/bash
# init-k8s-node.sh - Kubernetes node initialization script
NODE_IP=$(hostname -I | awk '{print $1}')
HOSTNAME=$(hostname)

# 1. Set the hostname and map it in /etc/hosts
echo "Setting hostname..."
hostnamectl set-hostname ${HOSTNAME}
echo "${NODE_IP} ${HOSTNAME}" >> /etc/hosts

# 2. Disable the firewall
echo "Disabling firewall..."
systemctl stop firewalld
systemctl disable firewalld
ufw disable   # Ubuntu

# 3. Disable SELinux
echo "Disabling SELinux..."
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# 4. Disable swap
echo "Disabling swap..."
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# 5. Kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
kernel.panic = 10
EOF
sysctl --system

# 6. Kernel modules
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

# 7. Time synchronization
echo "Configuring time sync..."
yum install -y chrony       # CentOS/Rocky
apt-get install -y chrony   # Ubuntu
systemctl enable chronyd && systemctl start chronyd
timedatectl set-ntp true

# 8. ulimit
cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
EOF

echo "✅ Node initialization complete. Please reboot the system:"
echo "reboot"
```

3. Choosing a Deployment Method

Method         Use Case                    Pros                                                Cons                        Rating
kubeadm        Learning/production/custom  Official tool, flexible and controllable            Many manual steps, complex  ⭐⭐⭐⭐
KubeKey        Quick deploys/KubeSphere    One-click install, built-in HA, bundles KubeSphere  Less customizable           ⭐⭐⭐⭐⭐
kubespray      Large production clusters   Ansible automation, highly customizable             Steep learning curve        ⭐⭐⭐⭐
RKE/RKE2       Rancher ecosystem           Simple and reliable, binary deploy                  Tied to Rancher             ⭐⭐⭐⭐
minikube/kind  Local dev and testing       Lightweight and fast, easy to reset                 Not for production          ⭐⭐⭐ (dev only)

💡 Recommendations:
  • Quick trial / small-to-medium scale: KubeKey (one-click K8s + KubeSphere)
  • Learning / deep customization: kubeadm (the official standard tool)
  • Large-scale production: kubespray (Ansible automation)
  • Local development: kind/minikube

4. Deploying a Multi-Node Cluster with kubeadm

Step 1: Install the container runtime (containerd)

```bash
# ===== Run on ALL nodes =====

# Ubuntu/Debian
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y containerd.io

# CentOS/Rocky
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io

# Generate the default containerd configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# Use the systemd cgroup driver and the current pause image registry
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's|k8s.gcr.io/pause|registry.k8s.io/pause|g' /etc/containerd/config.toml

# Start containerd
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd

# Install crictl (container runtime CLI)
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar -xzf crictl-v1.29.0-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-v1.29.0-linux-amd64.tar.gz

# Configure crictl
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```

Step 2: Install kubeadm, kubelet, kubectl

```bash
# ===== Run on ALL nodes =====

# CentOS/Rocky: add the official Kubernetes YUM repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

# Ubuntu/Debian: add the official Kubernetes APT repo
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' > /etc/apt/sources.list.d/kubernetes.list

# Install the Kubernetes components (pinned version)
KUBE_VERSION=1.29.2
# CentOS (the exclude= line above blocks these packages, so lift it for this install)
yum install -y kubelet-${KUBE_VERSION} kubeadm-${KUBE_VERSION} kubectl-${KUBE_VERSION} --disableexcludes=kubernetes
# Ubuntu (pkgs.k8s.io package revisions look like 1.29.2-1.1)
apt-get update && apt-get install -y kubelet=${KUBE_VERSION}-1.1 kubeadm=${KUBE_VERSION}-1.1 kubectl=${KUBE_VERSION}-1.1

# Prevent accidental upgrades
apt-mark hold kubelet kubeadm kubectl   # Ubuntu (on CentOS the exclude= line already pins them)

# Configure kubelet to advertise the node IP
NODE_IP=$(hostname -I | awk '{print $1}')
cat > /etc/default/kubelet << EOF
KUBELET_EXTRA_ARGS="--node-ip=${NODE_IP}"
EOF

# Enable kubelet (it will crash-loop until the cluster is initialized; that is expected)
systemctl enable --now kubelet
systemctl status kubelet
```

Step 3: Initialize the Control Plane node

```bash
# ===== Run ONLY on the first master node =====
MASTER_IP="192.168.1.100"
POD_CIDR="10.244.0.0/16"
SERVICE_CIDR="10.96.0.0/12"
KUBE_VERSION=1.29.2

# Create the kubeadm configuration file
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: ${MASTER_IP}
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  name: $(hostname)
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v${KUBE_VERSION}
controlPlaneEndpoint: "${MASTER_IP}:6443"
networking:
  podSubnet: ${POD_CIDR}
  serviceSubnet: ${SERVICE_CIDR}
  dnsDomain: cluster.local
clusterName: kubernetes
certificatesDir: /etc/kubernetes/pki
imageRepository: registry.k8s.io
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# Pre-pull the required images (optional, speeds up init)
kubeadm config images pull --config kubeadm-config.yaml

# Initialize the cluster
kubeadm init --config kubeadm-config.yaml --upload-certs

# The output includes the join command; save it for the other nodes:
# kubeadm join 192.168.1.100:6443 --token xxxxx --discovery-token-ca-cert-hash sha256:xxxxx

# Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Verify the cluster
kubectl get nodes
kubectl get pods -n kube-system
```

Step 4: Join the Worker nodes

```bash
# ===== Run on EACH worker node =====
# Use the join command printed by kubeadm init on the master
kubeadm join 192.168.1.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdef...

# If the token has expired (default lifetime is 24h), regenerate it on the master:
# kubeadm token create --print-join-command

# For an HA control plane, add --control-plane and the certificate key
kubeadm join 192.168.1.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdef... \
  --control-plane \
  --certificate-key 1234567890abcdef...

# ===== Back on the master node, verify =====
kubectl get nodes
# NAME       STATUS     ROLES           AGE   VERSION
# master-1   Ready      control-plane   5m    v1.29.2
# worker-1   NotReady   <none>          1m    v1.29.2   (NotReady until the CNI plugin is installed)
# worker-2   NotReady   <none>          30s   v1.29.2
```

Step 5: Deploy a CNI network plugin

```bash
# ===== Run ONLY on the master node =====

# Option 1: Calico (recommended, feature-rich)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

cat > calico-installation.yaml << EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
EOF
kubectl apply -f calico-installation.yaml

# Option 2: Flannel (simple)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Option 3: Cilium (eBPF, high performance)
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.0 --namespace kube-system

# Verify the network plugin
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes   # all nodes should now report Ready
```
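A side note on `blockSize: 26` in the Calico Installation above: Calico's IPAM hands each node /26 blocks carved out of the pod CIDR. The arithmetic can be checked with Python's `ipaddress` module:

```python
import ipaddress

# podSubnet from the Installation above, and Calico's per-node block size
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
blocks = list(pod_cidr.subnets(new_prefix=26))

# A /16 pool split into /26 blocks: 2**(26-16) = 1024 blocks of
# 64 addresses each, so the pool supports up to 1024 block allocations.
print(len(blocks), blocks[0].num_addresses)   # 1024 64
print(blocks[0])                              # 10.244.0.0/26
```

A smaller blockSize (e.g. /24) means fewer, larger blocks; pick it so the number of blocks comfortably exceeds the expected node count.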

5. One-Click Cluster Installation with KubeKey

5.1 About KubeKey

KubeKey is the official installer from the KubeSphere project. It can deploy a highly available Kubernetes cluster in one step and optionally install the KubeSphere container platform at the same time.

Step 1: Download KubeKey

```bash
# Run on the deploy machine (any one of the nodes works)

# Users in China: use the CN mirror
export KKZONE=cn

# Download KubeKey
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -

# Make it executable
chmod +x kk

# Verify the version
./kk version
```

Step 2: Create the cluster configuration file

```bash
# Generate a sample configuration file
./kk create config --with-kubernetes v1.29.2 --with-kubesphere v3.4.1

# Edit config-sample.yaml
vi config-sample.yaml
```

Key fields of config-sample.yaml (the resource kind is `Cluster`; node roles are assigned via `roleGroups`, and the KubeSphere settings live in a second `ClusterConfiguration` document appended by `--with-kubesphere`):

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample-cluster
spec:
  hosts:
  - {name: master1, address: 192.168.1.100, internalAddress: 192.168.1.100, user: root, password: "YourPassword"}
  - {name: master2, address: 192.168.1.101, internalAddress: 192.168.1.101, user: root, password: "YourPassword"}
  - {name: master3, address: 192.168.1.102, internalAddress: 192.168.1.102, user: root, password: "YourPassword"}
  - {name: worker1, address: 192.168.1.103, internalAddress: 192.168.1.103, user: root, password: "YourPassword"}
  - {name: worker2, address: 192.168.1.104, internalAddress: 192.168.1.104, user: root, password: "YourPassword"}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:       # the masters also run workloads in this layout
    - master1
    - master2
    - master3
    - worker1
    - worker2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.29.2
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  registry:
    privateRegistry: ""
  addons: []
---
# KubeSphere settings (appended by --with-kubesphere)
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  multicluster:
    clusterRole: none   # host | member | none
  authentication:
    jwtSecret: ""
  console:
    port: 30880
```

Step 3: Run the installation

```bash
# Start the installation (system init, K8s deployment, and KubeSphere install are all automated)
./kk create cluster -f config-sample.yaml

# Useful flags:
#   --with-kubernetes v1.29.2    pin the K8s version
#   --with-kubesphere v3.4.1     pin the KubeSphere version
#   --with-local-storage         enable local storage (LocalPath)
#   --skip-preflight-checks      skip preflight checks (not recommended)

# Accelerated installation for users in China
export KKZONE=cn
./kk create cluster -f config-sample.yaml

# Installation takes roughly 10-20 minutes and finishes with:
# Congratulations! The installation is completed.
# KubeSphere is now available at http://<NodeIP>:30880
# username: admin
# password: P@88w0rd
```

Step 4: Verify the installation

```bash
# Cluster status
kubectl get nodes
kubectl get pods -A

# KubeSphere components
kubectl get pods -n kubesphere-system
kubectl get pods -n kubesphere-controls-system
kubectl get pods -n kubesphere-monitoring-system
kubectl get pods -n kubesphere-logging-system

# Services
kubectl get svc -n kubesphere-system

# Web console
echo "KubeSphere Console: http://<NodeIP>:30880"
echo "Username: admin"
echo "Password: P@88w0rd"
```

5.2 All-in-One Mode (single-node quick start)

```bash
# Single-node K8s + KubeSphere in one command
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk
./kk create cluster --with-kubernetes v1.29.2 --with-kubesphere v3.4.1

# When done, open http://localhost:30880 and log in with admin / P@88w0rd
```

6. CNI Network Plugin Configuration

6.1 CNI Plugin Comparison

Plugin   Performance         Features                   Complexity  Use Case
Calico   High (BGP routing)  NetworkPolicy, BGP         Medium      Production (recommended)
Flannel  Medium (VXLAN)      Basic networking only      Simple      Dev/test, small clusters
Cilium   Very high (eBPF)    L7 policy, observability   Higher      High performance/security needs
Weave    Medium              Encryption, multi-cluster  Simple      Encrypted cross-host traffic
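With a policy-capable plugin (Calico or Cilium, but not plain Flannel), the standard Kubernetes NetworkPolicy API is enforced. A minimal default-deny ingress policy for one namespace, as a sketch:

```yaml
# Deny all ingress traffic to Pods in the "default" namespace
# unless another NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}   # empty selector = every Pod in the namespace
  policyTypes:
  - Ingress
```

Additional NetworkPolicies are additive: each one you apply afterwards re-opens a specific allowed path.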

6.2 Advanced Calico Configuration

```bash
# Inspect the Calico Felix configuration
kubectl get felixconfiguration default -o yaml

# Switch encapsulation from IPIP to VXLAN
kubectl patch felixconfiguration default --type='merge' -p '{"spec": {"ipipEnabled": false, "vxlanEnabled": true}}'

# Configure a BGP peer (e.g. to interconnect clusters)
cat << EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-remote-cluster
spec:
  peerIP: 192.168.2.1
  asNumber: 64512
EOF

# GlobalNetworkPolicy: only allow traffic to/from labeled namespaces
# (note: Calico policy rules use action/source/destination, not the
# from/to syntax of the native NetworkPolicy API)
cat << EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-cross-namespace
spec:
  selector: all()
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      namespaceSelector: label == "allow-access"
  egress:
  - action: Allow
    destination:
      namespaceSelector: label == "allow-access"
EOF
```

7. StorageClass Configuration

7.1 NFS StorageClass

```bash
# 1. Install the NFS server (on the storage host)
yum install -y nfs-utils rpcbind        # CentOS
apt-get install -y nfs-kernel-server    # Ubuntu

# 2. Create the export directory
mkdir -p /data/nfs/k8s
chown -R nobody:nogroup /data/nfs/k8s
chmod 777 /data/nfs/k8s

# 3. Configure exports
echo "/data/nfs/k8s *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -arv
systemctl enable --now nfs-server rpcbind

# 4. Install the NFS subdir external provisioner in the cluster
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.1.200 \
  --set nfs.path=/data/nfs/k8s \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true

# 5. Verify the StorageClass
kubectl get sc
# NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
# nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           true                   1m

# 6. Test dynamic provisioning
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client
EOF
kubectl get pvc
```

7.2 LocalPath StorageClass (bundled with KubeKey)

```bash
# KubeKey installs the Local Path Provisioner by default
kubectl get sc
# NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
# local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m

# LocalPath characteristics:
# ✓ No external storage needed; uses each node's local disk
# ✓ Good for dev/test and edge deployments
# ✗ Volumes cannot move between nodes
# ✗ Not suitable for stateful production workloads
```
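Either StorageClass is consumed the same way: a PVC requests storage and a Pod mounts the claim. A sketch that mounts the `test-pvc` created in section 7.1:

```yaml
# A Pod mounting the dynamically provisioned test-pvc from section 7.1
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc   # the PVC created in section 7.1
```

With `WaitForFirstConsumer` binding (the LocalPath default), the PV is only provisioned once a Pod like this one actually schedules.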

8. Installing and Configuring KubeSphere

8.1 Installing KubeSphere on an Existing K8s Cluster

```bash
# Prerequisites:
# 1. Kubernetes 1.20.x - 1.29.x
# 2. A default StorageClass is configured
# 3. A CNI network plugin is deployed

# 1. Apply ks-installer
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml

# 2. Follow the installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

# 3. Wait for completion (about 10-15 minutes)
kubectl get pod --all-namespaces

# 4. Check the KubeSphere services
kubectl get svc -n kubesphere-system

# 5. Open the console
# http://<NodeIP>:30880  admin / P@88w0rd
```

8.2 Enabling Pluggable Components

```bash
# Edit the cluster configuration
kubectl edit cc ks-installer -n kubesphere-system
```

Under `spec`, enable the components you need:

```yaml
spec:
  alerting:
    enabled: true        # alerting
  auditing:
    enabled: true        # audit logs
  devops:
    enabled: true        # DevOps system
  events:
    enabled: true        # event center
  logging:
    enabled: true        # logging system
  metrics_server:
    enabled: true        # metrics server
  monitoring:
    enabled: true        # monitoring
  servicemesh:
    enabled: true        # service mesh (Istio)
  openpitrix:
    store:
      enabled: true      # app store
```

```bash
# After saving, ks-installer reconciles automatically; follow the log:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

9. Multi-Tenancy and Access Control

9.1 The KubeSphere Tenant Model

KubeSphere tenant hierarchy:

┌─────────────────────────────────────────┐
│         Platform                        │
│  ┌─────────────────────────────────┐   │
│  │  Cluster                        │   │
│  │  ┌──────────────────────────┐  │   │
│  │  │  Workspace               │  │   │
│  │  │  ┌────────────────────┐  │  │   │
│  │  │  │  Project           │  │  │   │
│  │  │  │  ┌──────────────┐  │  │  │   │
│  │  │  │  │  Namespace   │  │  │  │   │
│  │  │  │  │  (K8s)       │  │  │  │   │
│  │  │  │  └──────────────┘  │  │  │   │
│  │  │  └────────────────────┘  │  │   │
│  │  └──────────────────────────┘  │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘

Role system:
• Platform roles: platform-admin, platform-regular
• Cluster roles: cluster-admin, cluster-viewer
• Workspace roles: workspace-admin, workspace-viewer
• Project roles: admin, developer, viewer, operator

9.2 Creating Workspaces and Projects

```bash
# Create a workspace via kubectl
cat << EOF | kubectl apply -f -
apiVersion: tenant.kubesphere.io/v1alpha2
kind: Workspace
metadata:
  name: dev-team
spec:
  manager: admin
EOF

# Create a project (a namespace labeled into the workspace)
kubectl create namespace dev-project
kubectl label namespace dev-project kubesphere.io/workspace=dev-team

# Create a user
cat << EOF | kubectl apply -f -
apiVersion: iam.kubesphere.io/v1alpha2
kind: User
metadata:
  name: developer1
spec:
  email: developer1@example.com
  password: SecurePass123!
  lang: zh
EOF

# Grant the user a role in the project
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer1-dev-role
  namespace: dev-project
subjects:
- kind: User
  name: developer1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: role-developer
  apiGroup: rbac.authorization.k8s.io
EOF
```

10. DevOps and CI/CD Integration

10.1 Enabling the DevOps Component

```bash
# Enable DevOps in the cluster configuration
kubectl edit cc ks-installer -n kubesphere-system

# Add:
#   devops:
#     enabled: true
#     jenkinsMemoryLim: 2Gi
#     jenkinsMemoryReq: 1Gi
#     jenkinsVolumeSize: 8Gi
#   sonarqube:
#     enabled: true

# Jenkins access
kubectl get svc -n kubesphere-devops-system
# jenkins -> NodePort 30180: http://<NodeIP>:30180
```

10.2 Creating a CI/CD Pipeline

💡 Create a pipeline from the KubeSphere console:
  1. Go to your workspace → DevOps projects
  2. Create a DevOps project (e.g. demo-project)
  3. Add credentials (Git account, Docker registry account)
  4. Create the pipeline (graphical editor or Jenkinsfile)
  5. Configure triggers (webhook, scheduled builds)
  6. Run the pipeline and inspect the logs

10.3 Jenkins Pipeline Example

```groovy
pipeline {
    agent any
    tools {
        maven 'Maven 3.8'
        nodejs 'Node.js 18'
    }
    environment {
        DOCKER_REGISTRY = 'harbor.yourcompany.com'
        IMAGE_NAME = 'myapp'
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build & Test') {
            steps { sh 'mvn clean test' }
        }
        stage('Code Quality') {
            steps {
                script {
                    withSonarQubeEnv('SonarQube') {
                        sh 'mvn sonar:sonar'
                    }
                }
            }
        }
        stage('Build Image') {
            steps {
                script {
                    docker.build("${DOCKER_REGISTRY}/${IMAGE_NAME}:${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Push Image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'harbor-credentials', usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh '''
                        echo $PASS | docker login $DOCKER_REGISTRY -u $USER --password-stdin
                        docker push $DOCKER_REGISTRY/$IMAGE_NAME:$BUILD_NUMBER
                    '''
                }
            }
        }
        stage('Deploy to K8s') {
            steps {
                sh '''
                    kubectl set image deployment/myapp myapp=$DOCKER_REGISTRY/$IMAGE_NAME:$BUILD_NUMBER
                    kubectl rollout status deployment/myapp
                '''
            }
        }
    }
    post {
        success { echo 'Pipeline completed successfully!' }
        failure { echo 'Pipeline failed!' }
    }
}
```

11. Monitoring, Alerting, and Logging

11.1 Prometheus + Grafana Monitoring

```bash
# KubeSphere ships with a built-in Prometheus stack
kubectl get pods -n kubesphere-monitoring-system

# Grafana access
kubectl get svc -n kubesphere-monitoring-system grafana
# NodePort 30885: http://<NodeIP>:30885  admin / admin

# Key dashboards:
# • Kubernetes Cluster Status
# • Node Exporter Full
# • KubeSphere Components
# • Application Resources
```

11.2 Logging (Elasticsearch + Fluent Bit)

```bash
# Inspect the logging components
kubectl get pods -n kubesphere-logging-system

# Log query service
kubectl get svc -n kubesphere-logging-system logging-query
# NodePort: 30777

# Fluent Bit is managed by the fluentbit-operator through CRDs;
# inspect the collector and its input/output pipeline:
kubectl get fluentbit -n kubesphere-logging-system -o yaml
kubectl get inputs.logging.kubesphere.io -n kubesphere-logging-system
kubectl get outputs.logging.kubesphere.io -n kubesphere-logging-system
```

11.3 Configuring Alert Rules

```bash
# Configure alerts from the KubeSphere console:
# 1. Go to Monitoring & Alerting → Alerting Policies
# 2. Create a rule (e.g. CPU usage > 80%)
# 3. Configure notification channels (email, webhook, DingTalk, WeCom)
# 4. Bind the rule to resources (nodes, Pods, workloads)

# Example: a CPU alert as a PrometheusRule
cat << EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-alert
  namespace: kubesphere-monitoring-system
spec:
  groups:
  - name: k8s.rules
    rules:
    - alert: HighCPUUsage
      expr: sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod) > 0.8
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High CPU usage detected"
        description: "Pod {{ \$labels.pod }} CPU usage is above 80%"
EOF
```

12. App Store and Helm Management

12.1 Enabling the App Store

```bash
# Enable the app store in the cluster configuration
kubectl edit cc ks-installer -n kubesphere-system
#   openpitrix:
#     store:
#       enabled: true

# Open the app store
# http://<NodeIP>:30880/apps

# Bundled applications include:
# • Redis, MySQL, PostgreSQL
# • WordPress, Jenkins, GitLab
# • Prometheus, Grafana, ELK
# • Istio, Kiali
```

12.2 Helm Chart Management

```bash
# Add Helm repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Search for a chart
helm search repo redis

# Install an application
helm install my-redis bitnami/redis \
  --namespace default \
  --set auth.password=MyRedisPassword123

# List releases
helm list -n default

# Upgrade
helm upgrade my-redis bitnami/redis \
  --set image.tag=7.2

# Uninstall
helm uninstall my-redis -n default
```

12.3 Private Helm Repository (Harbor Integration)

```bash
# Harbor 2.0+ can host Helm charts (OCI push requires Helm 3.8+)

# Push a chart to Harbor
helm package my-app/
helm push my-app-1.0.0.tgz oci://harbor.yourcompany.com/my-project

# Install from Harbor
helm install my-app oci://harbor.yourcompany.com/my-project/my-app --version 1.0.0

# Add the private repository to the KubeSphere app store:
# 1. App Management → App Repositories → Add
# 2. Name: harbor-private
# 3. URL: https://harbor.yourcompany.com/chartrepo/my-project
# 4. Credentials: Harbor username/password
```