k8s Installation and Deployment

Configuration Planning

Node Planning

A minimal local test environment:

192.168.8.34    k8s-master
192.168.8.35    k8s-worker1
192.168.8.36    k8s-worker2

Software Versions

kubelet=1.17.9-00
kubeadm=1.17.9-00
kubectl=1.17.9-00
calico=3.17

Base Environment Initialization

System Initialization

Install the required tools:

sudo apt update
sudo apt install -y gpg apt-transport-https ca-certificates curl gnupg2 software-properties-common

Disable Swap

sudo swapoff -a
# make the change persistent by commenting out the swap entry in /etc/fstab
sudo nano /etc/fstab
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
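To confirm swap is fully disabled, a quick check (a sketch; `/proc/swaps` lists one line per active swap device after its header row):

```shell
# zero data lines in /proc/swaps means no swap device is active
active_swaps=$(awk 'NR > 1' /proc/swaps | wc -l)
echo "active swap devices: ${active_swaps}"
```

kubelet refuses to start with swap enabled unless explicitly overridden, so this is worth verifying after every reboot.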

Edit /etc/hosts

# sudo nano /etc/hosts
192.168.8.34    k8s-master
192.168.8.35    k8s-worker1
192.168.8.36    k8s-worker2

Set the Hostname

Using the master node as an example:

# sudo nano /etc/hostname
k8s-master
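On systemd-based systems, `hostnamectl` is an alternative to editing the file by hand: it applies the change immediately and persists it to /etc/hostname. A guarded sketch, safe to paste on any host:

```shell
# hostnamectl sets both the transient and the persistent hostname;
# the guard skips the call where hostnamectl is unavailable
if command -v hostnamectl >/dev/null 2>&1; then
    sudo hostnamectl set-hostname k8s-master
else
    echo "hostnamectl not available; edit /etc/hostname manually"
fi
```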

Kernel Configuration

Load the required kernel modules:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

Configure kernel parameters so that iptables can see bridged (layer-2) traffic:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
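To verify that the modules are loaded and the parameters took effect (a sketch; the `net.bridge.*` keys only appear in /proc/sys once br_netfilter is loaded):

```shell
# ip_forward is always present; each of these should read "1"
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
    || echo "br_netfilter not loaded yet"
# confirm the modules are listed
lsmod | grep -E 'overlay|br_netfilter' || echo "modules not listed"
```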

Add GPG Keys

# k8s
curl -s https://mirrors.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# docker
curl -s https://mirrors.huaweicloud.com/docker-ce/linux/debian/gpg | sudo apt-key add -

Add Package Repositories

# kubernetes
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main
EOF
# Docker
cat <<EOF | sudo tee /etc/apt/sources.list.d/docker.list
deb https://mirrors.huaweicloud.com/docker-ce/linux/debian $(lsb_release -cs) stable
EOF
sudo apt update

Install Docker

Run as a non-root user:

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose
# add the current user to the docker group
sudo usermod -aG docker $USER
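The group change only takes effect on a new login session (`newgrp docker` starts one in place). A quick check, as a sketch:

```shell
# group membership as seen by the current session
id -nG | tr ' ' '\n' | grep -x docker \
    || echo "not in docker group yet; re-login or run: newgrp docker"
# can the daemon be reached without sudo?
docker info >/dev/null 2>&1 \
    && echo "docker reachable without sudo" \
    || echo "docker daemon not reachable from this session"
```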

Configure a Registry Mirror for Docker

# optional mirror: http://hub-mirror.c.163.com
# native.cgroupdriver=systemd is required; without it kubelet fails to start
cat <<EOF | sudo tee /etc/docker/daemon.json 
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors":["http://hub-mirror.c.163.com"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
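A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth validating the JSON before restarting. A sketch using Python's stdlib `json.tool`:

```shell
# json.tool exits non-zero on a parse error, so this catches typos
# (trailing commas, missing quotes) before they take the daemon down
cfg=/etc/docker/daemon.json
if [ -r "$cfg" ]; then
    python3 -m json.tool "$cfg" >/dev/null && echo "daemon.json: valid JSON"
else
    echo "no readable $cfg on this machine"
fi
```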

Install Kubernetes

Install as a non-root user:

# install a specific Kubernetes version
sudo apt update
sudo apt install -y kubelet=1.17.9-00 kubeadm=1.17.9-00 kubectl=1.17.9-00
# pin the versions so apt upgrade does not move them
sudo apt-mark hold kubelet kubeadm kubectl

Remove Kubernetes

sudo apt --purge remove kubelet kubeadm kubectl kubernetes-cni

Clean Up Exited Containers

sudo docker rm $(sudo docker ps -a | grep Exited | awk '{print $1}')

Kubernetes Configuration

Initialize the Master

Create the base configuration (using the master node as an example):

sudo mkdir -p /data/k8s
cat <<EOF | sudo tee /data/k8s/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.17.9
controlPlaneEndpoint: "k8s-master:6443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  podSubnet: 100.64.0.0/16
  serviceSubnet: 100.65.0.0/16
EOF
# optional public image mirror: registry.aliyuncs.com/google_containers
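Optionally, the control-plane images can be pulled ahead of time so `kubeadm init` itself is not blocked on downloads. A guarded sketch that only runs where kubeadm is installed:

```shell
# pre-pull every image referenced by the config (honors imageRepository)
if command -v kubeadm >/dev/null 2>&1; then
    sudo kubeadm config images pull --config /data/k8s/kubeadm-config.yaml
else
    echo "kubeadm not installed on this machine"
fi
```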

Run the initialization:

sudo kubeadm init --config /data/k8s/kubeadm-config.yaml --upload-certs --v=5

Output on successful initialization:

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-master:6443 --token 5fim49.vjcptmrka1jb2bpi \
    --discovery-token-ca-cert-hash sha256:bc6c70e463e8fa19bd9a0a8b17371afbfcb1e4ed4e5cae259688bdb3f3e129c6 \
    --control-plane --certificate-key 97c3a28f632ac8885881a964ddd0fb47eb3144a9b0a568250d3941f4019a944d

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 5fim49.vjcptmrka1jb2bpi \
    --discovery-token-ca-cert-hash sha256:bc6c70e463e8fa19bd9a0a8b17371afbfcb1e4ed4e5cae259688bdb3f3e129c6 

Save the kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
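With the kubeconfig in place, kubectl should be able to reach the API server. A guarded sketch:

```shell
# both commands read ~/.kube/config by default
if command -v kubectl >/dev/null 2>&1; then
    kubectl cluster-info
    kubectl get nodes
else
    echo "kubectl not installed on this machine"
fi
```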

Initialize Worker Nodes

Join the cluster directly:

sudo kubeadm join k8s-master:6443 --token 5fim49.vjcptmrka1jb2bpi \
    --discovery-token-ca-cert-hash sha256:bc6c70e463e8fa19bd9a0a8b17371afbfcb1e4ed4e5cae259688bdb3f3e129c6 
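The bootstrap token above expires after 24 hours by default; a fresh join command can be generated on the master at any time. A guarded sketch:

```shell
# prints a ready-to-run `kubeadm join ...` line with a new token
if command -v kubeadm >/dev/null 2>&1; then
    sudo kubeadm token create --print-join-command
else
    echo "run this on the master node where kubeadm is installed"
fi
```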

Check Cluster Status

Note: until a network plugin is installed, all nodes will report NotReady.

kubectl get node

Install a Network Plugin

Install and Deploy Calico

Check which Kubernetes versions each Calico release supports:

https://projectcalico.docs.tigera.io/archive/v3.17/getting-started/kubernetes/requirements

Download the Calico Manifest

Note: Calico releases are tied to specific Kubernetes versions.

cd /data/k8s && sudo curl https://docs.projectcalico.org/v3.17/manifests/calico.yaml -O

Apply Calico

kubectl apply -f /data/k8s/calico.yaml 

Check the pod status:

kubectl get pod -n kube-system -o wide | grep calico

Install calicoctl from a Binary

Note: the calicoctl version must match the installed Calico version.

sudo wget https://downloads.tigera.io/ee/binaries/v3.17.3/calicoctl -O /usr/local/bin/calicoctl
sudo chmod +x /usr/local/bin/calicoctl

Check node status:

calicoctl node status

Query the IP pool:

calicoctl get ipPool -o yaml
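calicoctl can also summarize how many addresses in the pool are in use. A guarded sketch:

```shell
# prints per-pool totals of allocated vs. available addresses
if command -v calicoctl >/dev/null 2>&1; then
    calicoctl ipam show
else
    echo "calicoctl not installed on this machine"
fi
```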

Verification

Deploy a Pod

Deploy an Nginx-based application through a Deployment to verify that the Kubernetes cluster was installed correctly:

kubectl create deployment nginx-app --image=nginx
kubectl expose deployment nginx-app --name=nginx-web-svc --type NodePort --port 80 --target-port 80
kubectl describe svc nginx-web-svc

Example output:

ops@k8s-master:~$ kubectl describe svc nginx-web-svc
Name:                     nginx-web-svc
Namespace:                default
Labels:                   app=nginx-app
Annotations:              <none>
Selector:                 app=nginx-app
Type:                     NodePort
IP:                       100.65.225.119
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31751/TCP
Endpoints:                100.64.126.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Requests to port 31751 on any node should return the Nginx welcome page:

curl http://k8s-master:31751/
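Once verified, the test resources can be removed. A guarded sketch:

```shell
# delete the Service and the Deployment created for the test
if command -v kubectl >/dev/null 2>&1; then
    kubectl delete service nginx-web-svc
    kubectl delete deployment nginx-app
else
    echo "kubectl not installed on this machine"
fi
```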