1. Kubernetes Overview and Core Concepts
🎯 Core Strengths
- Automated container orchestration and scheduling
- Service discovery and load balancing
- Autoscaling (HPA/VPA)
- Self-healing and health checks
- Rolling updates and rollbacks
🏗️ Architecture Components
- Control Plane
  - kube-apiserver (API server)
  - etcd (distributed key-value store)
  - kube-scheduler (scheduler)
  - kube-controller-manager (controllers)
🖥️ Worker Nodes
- kubelet (node agent)
- kube-proxy (network proxy)
- Container Runtime
- CNI network plugins
- CSI storage plugins
📦 Core Resources
- Pod (smallest schedulable unit)
- Deployment (stateless workloads)
- StatefulSet (stateful workloads)
- DaemonSet (one Pod per node)
- Job/CronJob (batch workloads)
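To make the resource list above concrete, here is a minimal Deployment manifest; the name `nginx-demo` and the image tag are illustrative, not from the original text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo            # illustrative name
spec:
  replicas: 2                 # the Deployment keeps two Pods running
  selector:
    matchLabels:
      app: nginx-demo
  template:                   # Pod template: the smallest schedulable unit
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f` creates a ReplicaSet, which in turn schedules the Pods; scaling and rolling updates then operate on this one object.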
1.1 Kubernetes Architecture Diagram
┌─────────────────────────────────────────────────────────────────┐
│ Kubernetes Cluster Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Control Plane │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ apiserver │ │ scheduler │ │ controller │ │ │
│ │ │ (API) │ │ (scheduler) │ │ manager │ │ │
│ │ └──────┬───────┘ └──────────────┘ └──────────────┘ │ │
│ │ │ │ │
│ │ ┌──────▼───────┐ │ │
│ │ │ etcd │ (distributed KV store holding cluster state) │ │
│ │ └──────────────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────┼─────────────────┐ │
│ │ │ │ │
│ ┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐ │
│ │ Worker Node1│ │ Worker Node2│ │ Worker Node3│ │
│ │ ┌─────────┐ │ │ ┌─────────┐ │ │ ┌─────────┐ │ │
│ │ │ kubelet │ │ │ │ kubelet │ │ │ │ kubelet │ │ │
│ │ ├─────────┤ │ │ ├─────────┤ │ │ ├─────────┤ │ │
│ │ │ Pod A │ │ │ │ Pod B │ │ │ │ Pod C │ │ │
│ │ │ Pod B │ │ │ │ Pod C │ │ │ │ Pod D │ │ │
│ │ ├─────────┤ │ │ ├─────────┤ │ │ ├─────────┤ │ │
│ │ │kube-proxy│ │ │ │kube-proxy│ │ │ │kube-proxy│ │ │
│ │ ├─────────┤ │ │ ├─────────┤ │ │ ├─────────┤ │ │
│ │ │ Container│ │ │ │ Container│ │ │ │ Container│ │ │
│ │ │ Runtime │ │ │ │ Runtime │ │ │ │ Runtime │ │ │
│ │ └─────────┘ │ │ └─────────┘ │ │ └─────────┘ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ Add-ons: CNI (Calico/Flannel) | CSI (NFS/Ceph) | CoreDNS │
│ │
└─────────────────────────────────────────────────────────────────┘
2. System Requirements and Environment Preparation
2.1 Hardware Requirements
| Role | CPU | Memory | Disk | Count |
|------|-----|--------|------|-------|
| Control Plane | 4 cores+ | 8 GB+ | 100 GB SSD | 1-3 nodes (HA) |
| Worker Node | 2 cores+ | 4 GB+ | 50 GB+ | N nodes |
| All-in-One | 4 cores+ | 8 GB+ | 100 GB SSD | 1 node |
2.2 Software Dependencies
Operating systems
- Ubuntu 20.04/22.04 LTS
- CentOS 7.9/8.x/9.x
- Rocky Linux 8.x/9.x
- Debian 10/11
- openSUSE Leap 15.x
Container runtimes
- containerd 1.6+ (recommended)
- CRI-O 1.24+
- Docker 20.10+ (dockershim was removed in K8s 1.24; requires cri-dockerd)
- cri-dockerd 0.3+ (Docker compatibility shim)
Network requirements
- Stable connectivity between all nodes
- Internet access for pulling images, or a private image registry
- Required ports open (6443, 10250, etc.)
2.3 Port Requirements
| Port | Protocol | Purpose | Direction |
|------|----------|---------|-----------|
| 6443 | TCP | Kubernetes API server | Inbound |
| 2379-2380 | TCP | etcd client/server API | Inbound |
| 10250 | TCP | kubelet API | Inbound |
| 10259 | TCP | kube-scheduler | Inbound |
| 10257 | TCP | kube-controller-manager | Inbound |
| 10255 | TCP | kubelet read-only API (deprecated, disabled by default) | Inbound |
| 30000-32767 | TCP | NodePort Services | Inbound |
| 179 | TCP | Calico BGP | Bidirectional |
| 4789 | UDP | VXLAN (Flannel/Calico) | Bidirectional |

Note: since Kubernetes 1.17 the kube-scheduler and kube-controller-manager secure ports are 10259 and 10257; the older 10251/10252 insecure ports have been removed.
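On hosts running firewalld, the table above translates into rules like the following sketch (adjust to your CNI; lab environments often simply disable the firewall instead):

```shell
# Control-plane node:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10257/tcp
firewall-cmd --permanent --add-port=10259/tcp
# Worker nodes:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
# Calico BGP / VXLAN overlay traffic (all nodes):
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
```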
2.4 Environment Initialization Script
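The original section did not include the script itself; a typical initialization script, run as root on every node before installing Kubernetes components, looks like this (a sketch of the standard prerequisites, not an exhaustive hardening script):

```shell
#!/usr/bin/env bash
# 1. Disable swap (kubelet refuses to start with swap enabled by default):
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# 2. Load the kernel modules required by container networking:
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay || true        # may already be built into the kernel
modprobe br_netfilter || true

# 3. Let iptables see bridged traffic and enable IP forwarding:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system > /dev/null 2>&1 || true

# 4. On CentOS/Rocky lab machines, SELinux and firewalld are often disabled too;
#    production clusters should open the ports from section 2.3 instead.
```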
3. Deployment Methods: Selection and Comparison
| Method | Use Case | Pros | Cons | Rating |
|--------|----------|------|------|--------|
| kubeadm | learning / production / custom setups | official tool, flexible and controllable | many manual steps, higher complexity | ⭐⭐⭐⭐ |
| KubeKey | quick deployment / KubeSphere | one-command install, built-in HA, bundles KubeSphere | less customizable | ⭐⭐⭐⭐⭐ |
| kubespray | large production clusters | Ansible automation, highly customizable | steep learning curve | ⭐⭐⭐⭐ |
| RKE/RKE2 | Rancher ecosystem | simple and reliable, binary deployment | tied to Rancher | ⭐⭐⭐⭐ |
| minikube/kind | local development and testing | lightweight, fast, easy to reset | not for production | ⭐⭐⭐ (dev only) |
💡 Recommendations:
- Quick start / small-to-medium scale: KubeKey (one-command K8s + KubeSphere)
- Learning / deep customization: kubeadm (the official standard tool)
- Large-scale production: kubespray (Ansible automation)
- Local development: kind/minikube
4. Deploying a Multi-Node Cluster with kubeadm
Step 1: Install the Container Runtime (containerd)
# Ubuntu/Debian:
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y containerd.io

# CentOS/Rocky:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io

# Generate the default config, switch to the systemd cgroup driver, and point the pause image at registry.k8s.io:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's|k8s.gcr.io/pause|registry.k8s.io/pause|g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd

# Install crictl (CLI for CRI-compatible runtimes):
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar -xzf crictl-v1.29.0-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-v1.29.0-linux-amd64.tar.gz
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Step 2: Install kubeadm, kubelet, and kubectl
# CentOS/Rocky — add the pkgs.k8s.io yum repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

# Ubuntu/Debian — add the pkgs.k8s.io apt repository:
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' > /etc/apt/sources.list.d/kubernetes.list

KUBE_VERSION=1.29.2
# CentOS/Rocky (--disableexcludes bypasses the exclude list above for this install):
yum install -y kubelet-${KUBE_VERSION} kubeadm-${KUBE_VERSION} kubectl-${KUBE_VERSION} --disableexcludes=kubernetes
# Ubuntu/Debian (pkgs.k8s.io deb packages carry a -1.1 revision suffix, not the old -00):
apt-get update && apt-get install -y kubelet=${KUBE_VERSION}-1.1 kubectl=${KUBE_VERSION}-1.1 kubeadm=${KUBE_VERSION}-1.1

# Pin the packages so routine upgrades do not move them:
apt-mark hold kubelet kubeadm kubectl

# Optional: pass the node's primary IP to kubelet (set NODE_IP per node):
cat > /etc/default/kubelet << EOF
KUBELET_EXTRA_ARGS="--node-ip=${NODE_IP}"
EOF
systemctl enable --now kubelet
systemctl status kubelet
Step 3: Initialize the Control Plane Node
MASTER_IP="192.168.1.100"
POD_CIDR="10.244.0.0/16"
SERVICE_CIDR="10.96.0.0/12"
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: ${MASTER_IP}
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  name: $(hostname)
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v${KUBE_VERSION}
controlPlaneEndpoint: "${MASTER_IP}:6443"
networking:
  podSubnet: ${POD_CIDR}
  serviceSubnet: ${SERVICE_CIDR}
  dnsDomain: cluster.local
clusterName: kubernetes
certificatesDir: /etc/kubernetes/pki
imageRepository: registry.k8s.io
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# Pre-pull images, then initialize (--upload-certs stores control-plane certs for HA joins):
kubeadm config images pull --config kubeadm-config.yaml
kubeadm init --config kubeadm-config.yaml --upload-certs

# Configure kubectl for the current user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
kubectl get pods -n kube-system
Step 4: Join Worker Nodes to the Cluster
# On each worker node (token and hash come from the kubeadm init output):
kubeadm join 192.168.1.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdef...

# To join an additional control-plane node instead:
kubeadm join 192.168.1.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdef... \
  --control-plane \
  --certificate-key 1234567890abcdef...

# Verify from the first control-plane node:
kubectl get nodes
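The token and hash shown above are placeholders. On a control-plane node you can print a fresh, ready-to-paste join command at any time (bootstrap tokens expire after 24 hours by default), and recompute the discovery hash from the cluster CA certificate:

```shell
# Print a complete worker join command:
if command -v kubeadm >/dev/null; then
  kubeadm token create --print-join-command
fi

# Recompute the discovery-token-ca-cert-hash from the CA public key:
CA_CRT=/etc/kubernetes/pki/ca.crt
if [ -f "$CA_CRT" ]; then
  openssl x509 -pubkey -in "$CA_CRT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
fi
```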
Step 5: Deploy a CNI Network Plugin
# Option A: Calico via the Tigera operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
cat > calico-installation.yaml << EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
EOF
kubectl apply -f calico-installation.yaml

# Option B: Flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Option C: Cilium (via Helm)
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.0 --namespace kube-system

# Verify (operator-managed Calico runs in the calico-system namespace, not kube-system):
kubectl get pods -n calico-system -l k8s-app=calico-node
kubectl get nodes
5. One-Command Cluster Install with KubeKey
5.1 About KubeKey
KubeKey is the official installer from the KubeSphere project. It deploys a highly available Kubernetes cluster in a single command and can optionally install the KubeSphere container platform at the same time.
Step 1: Download KubeKey
export KKZONE=cn    # use the CN download mirror; omit outside mainland China
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk
./kk version
Step 2: Create the Cluster Configuration File
./kk create config --with-kubernetes v1.29.2 --with-kubesphere v3.4.1
vi config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample-cluster
spec:
  hosts:
  - name: master1
    address: 192.168.1.100
    internalAddress: 192.168.1.100
    user: root
    password: YourPassword
  - name: master2
    address: 192.168.1.101
    internalAddress: 192.168.1.101
    user: root
    password: YourPassword
  - name: master3
    address: 192.168.1.102
    internalAddress: 192.168.1.102
    user: root
    password: YourPassword
  - name: worker1
    address: 192.168.1.103
    internalAddress: 192.168.1.103
    user: root
    password: YourPassword
  - name: worker2
    address: 192.168.1.104
    internalAddress: 192.168.1.104
    user: root
    password: YourPassword
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - master1
    - master2
    - master3
    - worker1
    - worker2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.29.2
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  registry:
    privateRegistry: ""
  addons: []
---
# KubeSphere settings live in a second document (generated by --with-kubesphere):
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  multicluster:
    clusterRole: none
  authentication:
    jwtSecret: ""
  common:
    core:
      console:
        port: 30880
        type: NodePort
Step 3: Run the Installation
./kk create cluster -f config-sample.yaml
# In mainland China, set the mirror zone first:
export KKZONE=cn
./kk create cluster -f config-sample.yaml
Step 4: Verify the Installation
kubectl get nodes
kubectl get pods -A
kubectl get pods -n kubesphere-system
kubectl get pods -n kubesphere-controls-system
kubectl get pods -n kubesphere-monitoring-system
kubectl get pods -n kubesphere-logging-system
kubectl get svc -n kubesphere-system
echo "KubeSphere Console: http://<NodeIP>:30880"
echo "Username: admin"
echo "Password: P@88w0rd"
5.2 All-in-One Mode (Single-Node Quick Start)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk
./kk create cluster --with-kubernetes v1.29.2 --with-kubesphere v3.4.1
# Console: http://localhost:30880
6. CNI Network Plugin Configuration
6.1 CNI Plugin Comparison
| Plugin | Performance | Features | Complexity | Use Case |
|--------|-------------|----------|------------|----------|
| Calico | high (BGP routing) | NetworkPolicy, BGP | medium | production (recommended) |
| Flannel | medium (VXLAN) | basic networking | low | dev/test, small clusters |
| Cilium | very high (eBPF) | L7 policy, observability | high | high performance / security needs |
| Weave | medium | encryption, multi-cluster | low | encrypted cross-host traffic |
6.2 Advanced Calico Configuration
# Inspect and switch encapsulation (projectcalico.org/v3 resources require calicoctl or the Calico API server):
kubectl get felixconfiguration default -o yaml
kubectl patch felixconfiguration default --type='merge' -p '{"spec": {"ipipEnabled": false, "vxlanEnabled": true}}'

# Peer with an external BGP router:
cat << EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-remote-cluster
spec:
  peerIP: 192.168.2.1
  asNumber: 64512
EOF

# Only allow traffic to/from namespaces labeled label=allow-access
# (Calico policy rules use action/source/destination, not the K8s NetworkPolicy from/to syntax):
cat << EOF | kubectl apply -f -
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-cross-namespace
spec:
  selector: all()
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      namespaceSelector: label == "allow-access"
  egress:
  - action: Allow
    destination:
      namespaceSelector: label == "allow-access"
EOF
7. StorageClass Configuration
7.1 NFS StorageClass
# Install the NFS server (CentOS/Rocky):
yum install -y nfs-utils rpcbind
# (Ubuntu/Debian):
apt-get install -y nfs-kernel-server

# Export a shared directory:
mkdir -p /data/nfs/k8s
chown -R nobody:nogroup /data/nfs/k8s   # use nobody:nobody on CentOS/Rocky
chmod 777 /data/nfs/k8s
echo "/data/nfs/k8s *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -arv
systemctl enable --now nfs-server rpcbind

# Install the NFS subdir external provisioner and make it the default StorageClass:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace kube-system \
  --set nfs.server=192.168.1.200 \
  --set nfs.path=/data/nfs/k8s \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true
kubectl get sc

# Test dynamic provisioning with a PVC:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client
EOF
kubectl get pvc
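To confirm the claim is actually usable, a throwaway Pod can mount it and write a file; the Pod name and mount path here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test              # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc     # the PVC created above
```

After `kubectl apply`, `kubectl exec pvc-test -- cat /data/hello.txt` should print `hello`, and the file should appear under the exported NFS directory on the server.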
7.2 Local-Path StorageClass (bundled with KubeKey)
# KubeKey installs a local hostpath provisioner (OpenEBS LocalPV) by default; verify with:
kubectl get sc
8. Installing and Configuring KubeSphere
8.1 Installing KubeSphere on an Existing K8s Cluster
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml

# Follow the installer logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
kubectl get pod --all-namespaces
kubectl get svc -n kubesphere-system

# Console: http://<NodeIP>:30880 (default credentials: admin / P@88w0rd)
8.2 Enabling Pluggable Components
kubectl edit cc ks-installer -n kubesphere-system
# Set the desired components to enabled: true under spec, e.g.:
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
  events:
    enabled: true
  logging:
    enabled: true
  metrics_server:
    enabled: true
  monitoring:
    enabled: true
  servicemesh:
    enabled: true
  openpitrix:
    store:
      enabled: true
# Watch the installer reconcile the changes:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
9. Multi-Tenancy and Access Control
9.1 The KubeSphere Tenant Model
KubeSphere tenancy hierarchy:
┌─────────────────────────────────────────┐
│ Platform │
│ ┌─────────────────────────────────┐ │
│ │ Cluster │ │
│ │ ┌──────────────────────────┐ │ │
│ │ │ Workspace │ │ │
│ │ │ ┌────────────────────┐ │ │ │
│ │ │ │ Project │ │ │ │
│ │ │ │ ┌──────────────┐ │ │ │ │
│ │ │ │ │ Namespace │ │ │ │ │
│ │ │ │ │ (K8s) │ │ │ │ │
│ │ │ │ └──────────────┘ │ │ │ │
│ │ │ └────────────────────┘ │ │ │
│ │ └──────────────────────────┘ │ │
│ └─────────────────────────────────┘ │
└─────────────────────────────────────────┘
Role hierarchy:
• Platform roles: platform-admin, platform-regular
• Cluster roles: cluster-admin, cluster-viewer
• Workspace roles: workspace-admin, workspace-viewer
• Project roles: admin, developer, viewer, operator
9.2 Creating Workspaces and Projects
# Create a workspace:
cat << EOF | kubectl apply -f -
apiVersion: tenant.kubesphere.io/v1alpha2
kind: Workspace
metadata:
  name: dev-team
spec:
  manager: admin
  template: dev-workspace
EOF

# Create a project (namespace) and attach it to the workspace:
kubectl create namespace dev-project
kubectl label namespace dev-project kubesphere.io/workspace=dev-team

# Create a user:
cat << EOF | kubectl apply -f -
apiVersion: iam.kubesphere.io/v1alpha2
kind: User
metadata:
  name: developer1
spec:
  email: developer1@example.com
  password: SecurePass123!
  lang: zh
EOF

# Grant the user the developer role in the project:
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer1-dev-role
  namespace: dev-project
subjects:
- kind: User
  name: developer1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: role-developer
  apiGroup: rbac.authorization.k8s.io
EOF
10. DevOps and CI/CD Integration
10.1 Enabling the DevOps Component
kubectl edit cc ks-installer -n kubesphere-system
# Under spec:
devops:
  enabled: true
  jenkinsMemoryLim: 2Gi
  jenkinsMemoryReq: 1Gi
  jenkinsVolumeSize: 8Gi
sonarqube:
  enabled: true
kubectl get svc -n kubesphere-devops-system
# Jenkins UI: http://<NodeIP>:30180
10.2 Creating a CI/CD Pipeline
💡 Create a pipeline through the KubeSphere console:
- Open a workspace → DevOps Projects
- Create a DevOps project (e.g. demo-project)
- Add credentials (Git account, Docker registry account)
- Create a pipeline (both the graphical editor and Jenkinsfile are supported)
- Configure triggers (webhooks, scheduled builds)
- Run the pipeline and inspect the logs
10.3 Jenkins Pipeline 示例
pipeline {
agent any
tools {
maven 'Maven 3.8'
nodejs 'Node.js 18'
}
environment {
DOCKER_REGISTRY = 'harbor.yourcompany.com'
IMAGE_NAME = 'myapp'
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Build & Test') {
steps {
sh 'mvn clean test'
}
}
stage('Code Quality') {
steps {
script {
withSonarQubeEnv('SonarQube') {
sh 'mvn sonar:sonar'
}
}
}
}
stage('Build Image') {
steps {
script {
docker.build("${DOCKER_REGISTRY}/${IMAGE_NAME}:${env.BUILD_NUMBER}")
}
}
}
stage('Push Image') {
steps {
withCredentials([usernamePassword(credentialsId: 'harbor-credentials',
usernameVariable: 'USER',
passwordVariable: 'PASS')]) {
sh '''
echo $PASS | docker login $DOCKER_REGISTRY -u $USER --password-stdin
docker push $DOCKER_REGISTRY/$IMAGE_NAME:$BUILD_NUMBER
'''
}
}
}
stage('Deploy to K8s') {
steps {
sh '''
kubectl set image deployment/myapp myapp=$DOCKER_REGISTRY/$IMAGE_NAME:$BUILD_NUMBER
kubectl rollout status deployment/myapp
'''
}
}
}
post {
success {
echo 'Pipeline completed successfully!'
}
failure {
echo 'Pipeline failed!'
}
}
}
11. Monitoring, Alerting, and Logging
11.1 Prometheus + Grafana Monitoring
kubectl get pods -n kubesphere-monitoring-system
kubectl get svc -n kubesphere-monitoring-system grafana
# Grafana UI: http://<NodeIP>:30885
11.2 Logging (Elasticsearch + Fluent Bit)
kubectl get pods -n kubesphere-logging-system
kubectl get svc -n kubesphere-logging-system logging-query
kubectl get loggingcollector -o yaml
cat << EOF | kubectl apply -f -
apiVersion: logging.kubesphere.io/v1alpha2
kind: LogCollector
metadata:
  name: fluent-bit
spec:
  logPath: /var/log/pods/*/*.log
  containerLogPath: /var/lib/docker/containers/*/*.log
EOF
11.3 Configuring Alert Rules
cat << EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-alert
  namespace: kubesphere-monitoring-system
spec:
  groups:
  - name: k8s.rules
    rules:
    - alert: HighCPUUsage
      expr: sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod) > 0.8
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High CPU usage detected"
        description: "Pod {{ \$labels.pod }} CPU usage is above 80%"
EOF
12. App Store and Helm Management
12.1 Enabling the App Store
kubectl edit cc ks-installer -n kubesphere-system
# Under spec:
openpitrix:
  store:
    enabled: true
# App Store UI: http://<NodeIP>:30880/apps
12.2 Managing Helm Charts
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm search repo redis
helm install my-redis bitnami/redis \
  --namespace default \
  --set auth.password=MyRedisPassword123
helm list -n default
helm upgrade my-redis bitnami/redis \
  --set image.tag=7.2
helm uninstall my-redis -n default
12.3 Private Helm Repository (Harbor Integration)
helm package my-app/
helm push my-app-1.0.0.tgz oci://harbor.yourcompany.com/my-project
helm install my-app oci://harbor.yourcompany.com/my-project/my-app --version 1.0.0
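Pushing to an OCI registry requires authenticating first, which the steps above omit; a typical flow against the document's example Harbor host looks like this (the username is a placeholder, and the command prompts for the password):

```shell
# Log in to the OCI registry before pushing:
helm registry login harbor.yourcompany.com -u admin

# Then package, push, and install as shown above:
helm package my-app/
helm push my-app-1.0.0.tgz oci://harbor.yourcompany.com/my-project
```

In Harbor, the chart then appears as an OCI artifact under the `my-project` project.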