Technical notes: Kubernetes

# yum install -y etcd
# vim /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

# yum install -y kubernetes
# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Wants=etcd.service
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Reference: Kubernetes installation and deployment
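The apiserver unit above only forwards shell variables; their values live in the `EnvironmentFile` paths it lists. For the yum-packaged kubernetes installed here, `/etc/kubernetes/apiserver` typically looks like the sketch below — the addresses, port, and service CIDR are placeholder assumptions, so adjust them to the environment:

```
# /etc/kubernetes/apiserver — hypothetical values for illustration
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
```

Any variable left empty simply contributes nothing to the `ExecStart` command line.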
[Unit]
Description=Etcd Server

[Service]
Type=notify
WorkingDirectory=/data/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \
  --name k8s-master \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls https://172.16.110.108:2380 \
  --listen-peer-urls https://172.16.110.108:2380 \
  --listen-client-urls https://172.16.110.108:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://172.16.110.108:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master=https://172.16.110.108:2380,k8s-node1=https://172.16.110.15:2380,k8s-node2=https://172.16.110.107:2380 \
  --initial-cluster-state new \
  --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Reference: kubernetes source installation and deployment 1.12
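After writing the unit file, etcd is started on every member and the cluster checked; the commands below are a sketch using the endpoint and certificate paths from the unit above (etcdctl v2 flag syntax — the v3 tool uses `--cacert/--cert/--key` and `endpoint health` instead):

```
systemctl daemon-reload
systemctl enable --now etcd
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    --endpoints=https://172.16.110.108:2379 cluster-health
```

All three members must be started before `cluster-health` reports the cluster as healthy, since etcd needs a quorum.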
# Re-import the earlier nginx-dm YAML file
[root@kubernetes-master-1 tmp]# kubectl delete -f deplyment.yaml 
deployment "nginx-dm" deleted
service "nginx-svc" deleted

[root@kubernetes-master-1 tmp]# kubectl create -f deplyment.yaml 
deployment "nginx-dm" created
service "nginx-svc" created

[root@kubernetes-master-1 tmp]# kubectl get svc nginx-svc
NAME         CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
nginx-svc    192.254.230.234   <none>        80/TCP    40s

# Create a pod to test the nameserver

apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - name: alpine
    image: alpine
    command:
    - sh
    - -c
    - while true; do sleep 1; done

# Check the pods
[root@kubernetes-master-1 tmp]# kubectl get pods 
NAME                        READY     STATUS    RESTARTS   AGE
alpine                      1/1       Running   0          1m
nginx-dm-2214564181-5fr75   1/1       Running   0          3m
nginx-dm-2214564181-jtqg0   1/1       Running   0          3m

# Test
[root@kubernetes-master-1 tmp]# kubectl exec -it alpine ping nginx-svc
PING nginx-svc (192.254.230.234): 56 data bytes
64 bytes from 192.254.230.234: seq=0 ttl=61 time=302.746 ms
64 bytes from 192.254.230.234: seq=1 ttl=61 time=327.175 ms

[root@kubernetes-master-1 tmp]# kubectl exec -it alpine nslookup nginx-svc
nslookup: can't resolve '(null)': Name does not resolve

Name:      nginx-svc
Address 1: 192.254.230.234 nginx-svc.default.svc.cluster.local
Reference: kubernetes 1.7.2 + Calico deployment
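The short name `nginx-svc` resolves because the pod's `/etc/resolv.conf` carries the cluster search domains. Inside a pod in the `default` namespace it typically looks like the sketch below — the nameserver IP is whatever ClusterIP kube-dns was assigned, which is an assumption here:

```
nameserver 192.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

So `nginx-svc`, `nginx-svc.default`, and the full `nginx-svc.default.svc.cluster.local` all resolve to the same ClusterIP; the `(null)` warning from BusyBox nslookup is cosmetic and the lookup itself succeeds.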
[root@k8s_Master package]# export KUBE_APISERVER="https://192.168.0.221:6443"
[root@k8s_Master package]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER}
Cluster "kubernetes" set.

[root@k8s_Master package]# kubectl config set-credentials admin \
>   --client-certificate=/etc/kubernetes/ssl/admin.pem \
>   --embed-certs=true \
>   --client-key=/etc/kubernetes/ssl/admin-key.pem
User "admin" set.

[root@k8s_Master package]# kubectl config set-context kubernetes \
>   --cluster=kubernetes \
>   --user=admin
Context "kubernetes" created.

[root@k8s_Master package]# kubectl config use-context kubernetes
Switched to context "kubernetes".
Reference: Kubernetes (K8s) installation and deployment, part 2: installing the kubectl command-line tool
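The result can be inspected without touching the cluster, since `kubectl config` only edits the local kubeconfig file (`~/.kube/config` by default):

```
kubectl config current-context   # should print: kubernetes
kubectl config view --minify     # the cluster/user/context actually in effect
```

`--embed-certs=true` above is what inlines the PEM files into the kubeconfig as base64, so the file is self-contained and can be copied to another machine.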
~$ kubectl get pods
NAME                                   READY     STATUS        RESTARTS   AGE
blogtest-7f9798c5b5-gwkm9              1/1       Running       0          3d
blogtest-7f9798c5b5-snppq              1/1       Running       0          48m
kubernetes-bootcamp-56cdd766d-7skbz    1/1       Terminating   0          1m
kubernetes-bootcamp-7799cbcb86-bpnpp   1/1       Running       0          22s

~$ curl galaxy-k8s-test-master-02:17812
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-7799cbcb86-bpnpp | v=2

~$ kubectl rollout undo deployments/kubernetes-bootcamp
deployment.apps "kubernetes-bootcamp" 

~$ kubectl get pods
NAME                                   READY     STATUS        RESTARTS   AGE
blogtest-7f9798c5b5-gwkm9              1/1       Running       0          3d
blogtest-7f9798c5b5-snppq              1/1       Running       0          49m
kubernetes-bootcamp-56cdd766d-jjzqm    1/1       Running       0          13s
kubernetes-bootcamp-7799cbcb86-bpnpp   1/1       Terminating   0          57s

~$ curl galaxy-k8s-test-master-02:17812
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-56cdd766d-jjzqm | v=1
Reference: kubernetes study notes, part 1: scaling applications & rolling updates
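Beyond a plain `undo`, the `rollout` subcommands can show what happened and roll back to a specific revision; a sketch against the same deployment (revision numbers will differ per cluster):

```
kubectl rollout status deployments/kubernetes-bootcamp    # block until the rollout finishes
kubectl rollout history deployments/kubernetes-bootcamp   # list recorded revisions
kubectl rollout undo deployments/kubernetes-bootcamp --to-revision=1
```

Note in the transcript that the old ReplicaSet hash (`56cdd766d`) comes back after the undo — a rollback just re-scales the previous ReplicaSet rather than creating a new one.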
[root@k8s_Master kubernetes]# export KUBE_APISERVER="https://192.168.0.221:6443"

# Set cluster parameters
[root@k8s_Master kubernetes]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

# Set client authentication parameters
[root@k8s_Master kubernetes]# kubectl config set-credentials kube-proxy \
>   --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
>   --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

# Set context parameters
[root@k8s_Master kubernetes]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kube-proxy \
>   --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

# Set the default context
[root@k8s_Master kubernetes]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
Reference: Kubernetes (K8s) installation and deployment, part 3: creating certificates and kubeconfig files
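`kube-proxy.kubeconfig` is generated on the master but consumed by kube-proxy on every node, so it has to be distributed; a typical copy step, with node hostnames assumed for illustration:

```
for node in k8s-node1 k8s-node2; do
    scp kube-proxy.kubeconfig ${node}:/etc/kubernetes/
done
```

Because `--embed-certs=true` inlined the certificates, this single file is all kube-proxy needs for authentication.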
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Reference: Kubernetes in detail (56): Dashboard installation and deployment
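(On clusters from v1.22 onward, `rbac.authorization.k8s.io/v1beta1` has been removed; use `rbac.authorization.k8s.io/v1` instead.) After applying the binding, the dashboard ServiceAccount's bearer token can be pulled for the login screen; a sketch, with the manifest filename assumed and using the auto-created secret that older clusters attach to the ServiceAccount:

```
kubectl apply -f dashboard-rbac.yaml   # hypothetical filename for the manifest above
kubectl -n kube-system get secret \
    $(kubectl -n kube-system get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d
```

Binding `cluster-admin` to the dashboard is convenient in a lab, but it grants full cluster control to anyone who reaches the UI — scope it down on anything shared.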
[root@k8s_Node2 kubernetes]# vim /usr/lib/systemd/system/kubelet.service
[root@k8s_Node2 kubernetes]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Reference: Kubernetes (K8s) installation and deployment, part 6: deploying the node
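With TLS bootstrapping (as in this guide series), the kubelet starts, submits a certificate signing request, and only appears in `kubectl get nodes` once that CSR is approved on the master; a sketch of the sequence:

```
# on the node
systemctl daemon-reload
systemctl enable --now kubelet

# on the master
kubectl get csr
kubectl certificate approve <csr-name>   # placeholder: use the name printed by 'get csr'
```

Until approval the node sits in a pending state, which is the most common reason a freshly deployed node never shows up.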
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token ea3n0e.t0c5hbeg8hb1dhx4 \
    --discovery-token-ca-cert-hash sha256:ab163a50fe4d7c6cc69aec706d14ad68bd9a0a0ccb22913d0baec1b91b274109 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token ea3n0e.t0c5hbeg8hb1dhx4 \
    --discovery-token-ca-cert-hash sha256:ab163a50fe4d7c6cc69aec706d14ad68bd9a0a0ccb22913d0baec1b91b274109
Reference: Kubernetes tutorial: building a Kubernetes 1.20.x cluster (Alibaba Cloud ECS)
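The `--discovery-token-ca-cert-hash` in the join command is just a SHA-256 digest of the DER-encoded public key of the cluster CA, so it can be recomputed on the control plane at any time if the init output is lost. The pipeline below demonstrates it against a throwaway self-signed certificate standing in for `/etc/kubernetes/pki/ca.crt`:

```shell
# Generate a stand-in CA cert (on a real cluster, point -in at /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
    -out /tmp/ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null

# Extract the public key, DER-encode it, and hash it -- this is kubeadm's discovery hash.
HASH=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | sed 's/^.* //')
echo "sha256:${HASH}"
```

The printed value (prefixed with `sha256:`) is exactly what `kubeadm join --discovery-token-ca-cert-hash` expects.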
[root@xtwj90 ~]# kubeadm init
W0507 10:03:38.623387    6624 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [xtwj90 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.90]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [xtwj90 localhost] and IPs [192.168.3.90 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [xtwj90 localhost] and IPs [192.168.3.90 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0507 10:03:45.462692    6624 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0507 10:03:45.464292    6624 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.517580 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node xtwj90 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node xtwj90 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 39pco8.qkv3fw99bdf03cs9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.90:6443 --token 39pco8.qkv3fw99bdf03cs9 \
    --discovery-token-ca-cert-hash sha256:b9468bc04aa282ef42e80e098051d057f43cfdfaa3230df961a3aa2f96a1cf42 
[root@xtwj90 ~]# ll /etc/kubernetes/admin.conf
-rw-------. 1 root root 5448 May  7 10:03 /etc/kubernetes/admin.conf
[root@xtwj90 ~]#
Reference: How to manage kubernetes on Centos 7, Part I
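Bootstrap tokens such as `39pco8.qkv3fw99bdf03cs9` expire after 24 hours by default, so nodes joined later need a fresh one; kubeadm can mint a token and print the full join command in one step:

```
kubeadm token create --print-join-command
```

The output includes both the new token and the (unchanging) CA certificate hash, ready to paste on the worker.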

代码交流 2021