Manually Deploying a Kubernetes 1.14 Cluster

Part 1: Self-Signed TLS Certificate Configuration (cfssl)

Kubernetes cluster components

| Component | Version | Download |
| --- | --- | --- |
| etcd | 3.3.10 | https://github.com/etcd-io/etcd/releases |
| kube-apiserver | 1.14.2 | https://storage.googleapis.com/kubernetes-release/release/v1.14.2/kubernetes-server-linux-amd64.tar.gz |
| kube-controller-manager | 1.14.2 | (in the server tarball above) |
| kube-scheduler | 1.14.2 | (in the server tarball above) |
| kubelet | 1.14.2 | https://storage.googleapis.com/kubernetes-release/release/v1.14.2/kubernetes-node-linux-amd64.tar.gz |
| kube-proxy | 1.14.2 | (in the node tarball above) |
| calicoctl | 3.7.2 | https://github.com/projectcalico/calicoctl/releases |

Certificate overview

| Certificate | Config file | Purpose |
| --- | --- | --- |
| (none) | ca-config.json | signing policy, describes certificate validity |
| ca.pem | ca-csr.json | k8s root CA certificate |
| etcd.pem | etcd-csr.json | etcd cluster certificate |
| kubernetes.pem | kubernetes-csr.json | certificate used by kube-apiserver |
| admin.pem | admin-csr.json | certificate used by kubectl |
| kube-proxy.pem | kube-proxy-csr.json | certificate used by kube-proxy |

Installing the cfssl tools

[admin@haifly-bj-dev-k8s-master1 ~]$ mkdir -p kubernetes/cfssl
[admin@haifly-bj-dev-k8s-master1 ~]$ cd kubernetes/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 cfssl
mv cfssljson_linux-amd64 cfssljson
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
chmod +x ./*

Add a temporary environment variable so the cfssl tools can be found, then generate the certificate configs:
[admin@haifly-bj-dev-k8s-master1 cfssl]$ export PATH=/work/admin/kubernetes/cfssl/:$PATH

1. Configure the certificate signing policy
2. Create the JSON config file used to generate the CA certificate signing request (CSR)
3. Generate the CA certificate and private key (root certificate and key)
4. Distribute the certificates

Creating the certificate authority (CA)

CFSSL can run an internal certificate authority for obtaining and managing certificates.

Running a CA requires a CA certificate and the corresponding CA private key. Anyone who holds the private key can act as the CA and issue certificates, so protecting the private key is critical.

1. Configure the certificate signing policy

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat <<EOF>> ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

This policy has one default config and one profile. Multiple profiles can be defined; the profile here is kubernetes, but you could add others, e.g. one for etcd.

  • The default policy sets the certificate validity to ten years (87600h)

  • The kubernetes profile specifies the certificate usages

  • signing: the certificate can be used to sign other certificates; the generated ca.pem carries CA=TRUE

  • server auth: a client may use this CA to verify certificates presented by servers

  • client auth: a server may use this CA to verify certificates presented by clients
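
As mentioned, several profiles can coexist in one policy. A sketch of a ca-config.json carrying an additional, hypothetical etcd profile (this walkthrough signs everything with the single kubernetes profile):

```
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      },
      "etcd": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      }
    }
  }
}
```

A signing request would then select it with cfssl gencert ... -profile=etcd.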

2. Create the JSON config file used to generate the CA certificate signing request (CSR)

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat <<EOF>> ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • CN: Common Name. Browsers use this field to check whether a site is legitimate; it usually holds the domain name. Very important.

  • C: Country

  • L: Locality (city)

  • O: Organization Name (company)

  • OU: Organizational Unit Name (department)

  • ST: State or province

3. Generate the CA certificate and private key (root certificate and key)

CA certificate: ca.pem; private key: ca-key.pem. Initialize the CA with cfssl gencert -initca ca-csr.json | cfssljson -bare ca. This command produces the files the CA needs to run, ca-key.pem (private key) and ca.pem (certificate), plus ca.csr (a certificate signing request) for cross-signing or re-signing.

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/05/27 09:28:14 [INFO] generating a new CA key and certificate from CSR
2019/05/27 09:28:14 [INFO] generate received request
2019/05/27 09:28:14 [INFO] received CSR
2019/05/27 09:28:14 [INFO] generating key: rsa-2048
2019/05/27 09:28:14 [INFO] encoded CSR
2019/05/27 09:28:14 [INFO] signed certificate with serial number 10840760309635909896488288805780810828105360395
ll
total 18828
-rw-rw-r-- 1 admin admin 290 May 25 18:10 ca-config.json
-rw-r--r-- 1 admin admin 1001 May 27 09:28 ca.csr
-rw-rw-r-- 1 admin admin 208 May 25 18:10 ca-csr.json
-rw------- 1 admin admin 1679 May 27 09:28 ca-key.pem
-rw-rw-r-- 1 admin admin 1359 May 27 09:28 ca.pem
-rwxrwxr-x 1 admin admin 10376657 Mar 30 2016 cfssl
-rwxrwxr-x 1 admin admin 6595195 Mar 30 2016 cfssl-certinfo
-rwxrwxr-x 1 admin admin 2277873 Mar 30 2016 cfssljson

Note: to regenerate using an existing CA private key:

```
cfssl gencert -initca -ca-key key.pem ca-csr.json | cfssljson -bare ca
```
To regenerate using an existing CA private key and CA certificate:
```
cfssl gencert -renewca -ca cert.pem -ca-key key.pem
```

3.1 Inspect the certificate:

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cfssl certinfo -cert ca.pem

3.2 Inspect the CSR (certificate signing request):

cfssl certinfo -csr ca.csr

4. Distribute the certificates

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cp ca.pem ca-key.pem /work/admin/kubernetes/ssl/
scp the certificates to the master and node machines
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp ca.pem ca-key.pem admin@192.168.9.149:/etc/kubernetes/ssl
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp ca.pem ca-key.pem admin@192.168.9.150:/etc/kubernetes/ssl

Part 2: Building a three-node etcd cluster

Steps:

  1. Prepare the binaries
  2. Create the etcd signing request
  3. Generate the etcd certificate and private key
  4. Distribute the etcd certificates
  5. Write the etcd configuration
  6. Start etcd
  7. Check that the cluster is healthy

1. Prepare the binaries

[admin@haifly-bj-dev-k8s-master1 cfssl]$ ll ~/kubernetes/bin/
-rwxr-xr-x 1 admin admin 19237536 May 27 10:07 etcd
-rwxr-xr-x 1 admin admin 15817472 May 27 10:07 etcdctl
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp ~/kubernetes/bin/etcd* admin@192.168.9.149:/etc/kubernetes/bin
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp ~/kubernetes/bin/etcd* admin@192.168.10.177:/etc/kubernetes/bin

2. Create the etcd signing request

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat <<EOF>> etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.9.148",
    "192.168.9.149",
    "192.168.9.150"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

3. Generate the etcd certificate and private key

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cfssl gencert -ca=/work/admin/kubernetes/cfssl/ca.pem \
-ca-key=/work/admin/kubernetes/cfssl/ca-key.pem \
-config=/work/admin/kubernetes/cfssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2019/05/27 10:18:13 [INFO] generate received request
2019/05/27 10:18:13 [INFO] received CSR
2019/05/27 10:18:13 [INFO] generating key: rsa-2048
2019/05/27 10:18:13 [INFO] encoded CSR
2019/05/27 10:18:13 [INFO] signed certificate with serial number 581380436007405715369541007082781715569637270273
2019/05/27 10:18:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[admin@haifly-bj-dev-k8s-master1 cfssl]$ ll etcd*
-rw-r--r-- 1 admin admin 1062 May 27 10:18 etcd.csr
-rw-rw-r-- 1 admin admin 300 May 27 10:22 etcd-csr.json
-rw------- 1 admin admin 1675 May 27 10:18 etcd-key.pem
-rw-rw-r-- 1 admin admin 1436 May 27 10:18 etcd.pem

4. Distribute the etcd certificates

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cp etcd*.pem ~/kubernetes/ssl/
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp etcd*.pem admin@192.168.9.149:~/kubernetes/ssl/
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp etcd*.pem admin@192.168.9.150:~/kubernetes/ssl/

5. Write the etcd configuration

There are two ways to configure etcd:

  • Put all the settings directly in the systemd service file
  • Put them in a config file (here /work/admin/kubernetes/cfg/etcd.conf) and reference it from the service file

Both approaches are shown below.

5.1 Option 1

First create etcd.conf. Note: apart from ETCD_INITIAL_CLUSTER, every field that contains an IP address must be changed to the corresponding node's IP. Also make sure ETCD_NAME matches the name given to that node in ETCD_INITIAL_CLUSTER below.
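
A sketch of adapting the master1 file for etcd-node2 after copying it over (the sed expressions are illustrative; double-check the result):

```
# run on 192.168.9.149; rewrites ETCD_NAME and every IP except in ETCD_INITIAL_CLUSTER
sed -i \
  -e 's|^ETCD_NAME=.*|ETCD_NAME="etcd-node2"|' \
  -e '/^ETCD_INITIAL_CLUSTER=/!s|192.168.9.148|192.168.9.149|g' \
  /work/admin/kubernetes/cfg/etcd.conf
```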

[admin@haifly-bj-dev-k8s-master1 cfg]$ pwd
/work/admin/kubernetes/cfg
[admin@haifly-bj-dev-k8s-master1 cfg]$ cat <<EOF>> etcd.conf
#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.9.148:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.9.148:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.9.148:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node1=https://192.168.9.148:2380,etcd-node2=https://192.168.9.149:2380,etcd-node3=https://192.168.9.150:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.9.148:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/work/admin/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/work/admin/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/work/admin/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/work/admin/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/work/admin/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/work/admin/kubernetes/ssl/etcd-key.pem"
EOF

Then reference the config file from the systemd unit:

[admin@haifly-bj-dev-k8s-master1 cfg]$ cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/work/admin/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /work/admin/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target

5.2 Option 2

Note: apart from --initial-cluster, every field containing an IP address must be changed to the corresponding node's IP.

[admin@haifly-bj-dev-k8s-master1 ~]$ cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/work/admin/kubernetes/bin/etcd \
--name=etcd-node1 \
--cert-file=/work/admin/kubernetes/ssl/etcd.pem \
--key-file=/work/admin/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/work/admin/kubernetes/ssl/etcd.pem \
--peer-key-file=/work/admin/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/work/admin/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/work/admin/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://192.168.9.148:2380 \
--listen-peer-urls=https://192.168.9.148:2380 \
--listen-client-urls=https://192.168.9.148:2379,https://127.0.0.1:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.9.148:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster="etcd-node1=https://192.168.9.148:2380,etcd-node2=https://192.168.9.149:2380,etcd-node3=https://192.168.9.150:2380" \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

6. Start etcd

Note: after configuring the first node, don't start it yet. Wait until the other two nodes are configured, then start etcd on all three at the same time; otherwise the first node will hang waiting for its peers.
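
One way to start all three nodes at roughly the same time (a sketch, assuming the admin user has SSH access and sudo on each host):

```
for h in 192.168.9.148 192.168.9.149 192.168.9.150; do
  ssh admin@$h 'sudo systemctl start etcd' &   # start in parallel
done
wait
```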

[admin@haifly-bj-dev-k8s-master1 ~]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-master1 ~]$ sudo systemctl enable etcd

Copy the configuration to the other nodes. Remember: it must be edited for each node before use!
[admin@haifly-bj-dev-k8s-master1 ~]$ scp ~/kubernetes/cfg/etcd.conf admin@192.168.9.149:~/kubernetes/cfg/
[admin@haifly-bj-dev-k8s-master1 ~]$ scp /etc/systemd/system/etcd.service admin@192.168.9.149:/etc/systemd/system/etcd.service
[admin@haifly-bj-dev-k8s-master1 ~]$ scp ~/kubernetes/cfg/etcd.conf admin@192.168.9.150:~/kubernetes/cfg/
[admin@haifly-bj-dev-k8s-master1 ~]$ scp /etc/systemd/system/etcd.service admin@192.168.9.150:/etc/systemd/system/etcd.service

Create the etcd data directory on every node, then start etcd
[admin@haifly-bj-dev-k8s-master1 ~]$ sudo mkdir /var/lib/etcd
[admin@haifly-bj-dev-k8s-master1 ~]$ sudo systemctl start etcd

7. Check etcd status

Status on master1 (etcd-node1)

[admin@haifly-bj-dev-k8s-master1 ~]$ systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/etc/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 11:34:25 CST; 13s ago
Main PID: 1274 (etcd)
CGroup: /system.slice/etcd.service
└─1274 /work/admin/kubernetes/bin/etcd

Status on master2 (etcd-node2)

[admin@haifly-bj-dev-k8s-master2 ~]$ systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 11:34:09 CST; 1h 3min ago
Main PID: 1340 (etcd)
CGroup: /system.slice/etcd.service
└─1340 /work/admin/kubernetes/bin/etcd

Status on node1 (etcd-node3)

[admin@haifly-bj-dev-k8s-node1 ~]$ systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 11:34:09 CST; 1h 9min ago
Main PID: 1596 (etcd)
Memory: 27.6M
CGroup: /system.slice/etcd.service
└─1596 /work/admin/kubernetes/bin/etcd

8. Verify the cluster with etcdctl

[admin@haifly-bj-dev-k8s-master1 ~]$ ~/kubernetes/bin/etcdctl --endpoints=https://192.168.9.148:2379,https://192.168.9.149:2379,https://192.168.9.150:2379 \
--ca-file=/work/admin/kubernetes/ssl/ca.pem \
--cert-file=/work/admin/kubernetes/ssl/etcd.pem \
--key-file=/work/admin/kubernetes/ssl/etcd-key.pem cluster-health
member 9e08e3b367dcae6d is healthy: got healthy result from https://192.168.9.148:2379
member d474e742dc1ac302 is healthy: got healthy result from https://192.168.9.150:2379
member e01ade2c3426b65a is healthy: got healthy result from https://192.168.9.149:2379
cluster is healthy

[admin@haifly-bj-dev-k8s-master1 cfssl]$ ~/kubernetes/bin/etcdctl --endpoints=https://192.168.9.148:2379,https://192.168.9.149:2379,https://192.168.9.150:2379 \
--ca-file=/work/admin/kubernetes/ssl/ca.pem \
--cert-file=/work/admin/kubernetes/ssl/etcd.pem \
--key-file=/work/admin/kubernetes/ssl/etcd-key.pem member list
9e08e3b367dcae6d: name=etcd-node1 peerURLs=https://192.168.9.148:2380 clientURLs=https://192.168.9.148:2379 isLeader=false
e01ade2c3426b65a: name=etcd-node2 peerURLs=https://192.168.9.149:2380 clientURLs=https://192.168.9.149:2379 isLeader=false
e94f3a70f0d2730e: name=etcd-node3 peerURLs=https://192.168.9.150:2380 clientURLs=https://192.168.9.150:2379 isLeader=true

Create a test directory:
[admin@haifly-bj-dev-k8s-master1 cfssl]$ ~/kubernetes/bin/etcdctl --endpoints=https://192.168.9.148:2379,https://192.168.9.149:2379,https://192.168.9.150:2379 \
--ca-file=/work/admin/kubernetes/ssl/ca.pem \
--cert-file=/work/admin/kubernetes/ssl/etcd.pem \
--key-file=/work/admin/kubernetes/ssl/etcd-key.pem mkdir test

Check that the data replicated:
[admin@haifly-bj-dev-k8s-master1 cfssl]$ ~/kubernetes/bin/etcdctl --endpoints=https://192.168.9.148:2379,https://192.168.9.149:2379,https://192.168.9.150:2379 \
--ca-file=/work/admin/kubernetes/ssl/ca.pem \
--cert-file=/work/admin/kubernetes/ssl/etcd.pem \
--key-file=/work/admin/kubernetes/ssl/etcd-key.pem ls
/test

A small tuning tweak for the etcd heartbeat interval (default 100ms) and election timeout:

~/kubernetes/bin/etcd --heartbeat-interval=100 --election-timeout=500

Or via environment variables:

ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
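
With the etcd.conf approach from 5.1, the same settings can be made persistent by uncommenting/setting the corresponding keys (a sketch):

```
# in /work/admin/kubernetes/cfg/etcd.conf
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="500"
```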

Part 3: Deploying the master components

1. Prepare the binaries
2. Create the JSON config for the kubernetes CSR
3. Generate the kubernetes certificate and private key
4. Distribute the certificates
5. Configure kube-apiserver
5.1 Create the client token file used by kube-apiserver
5.2 Deploy the kube-apiserver service
5.3 Start kube-apiserver and check its status
6. Configure kube-controller-manager
6.1 Deploy the kube-controller-manager service
6.2 Start the kube-controller-manager service
6.3 Check kube-controller-manager status
7. Configure kube-scheduler
7.1 Deploy the kube-scheduler service
7.2 Start the kube-scheduler service
7.3 Check kube-scheduler status

1. Prepare the binaries

[admin@haifly-bj-dev-k8s-master1 ~]$ ll ~/kubernetes/bin/
total 552068
-rwxr-xr-x 1 admin admin 19237536 May 27 10:07 etcd
-rwxr-xr-x 1 admin admin 15817472 May 27 10:07 etcdctl
-rwxr-xr-x 1 admin admin 167595360 May 27 12:57 kube-apiserver
-rwxr-xr-x 1 admin admin 115612192 May 27 12:57 kube-controller-manager
-rwxr-xr-x 1 admin admin 39258304 May 27 12:57 kube-scheduler

2. Create the JSON config for the kubernetes CSR

[admin@haifly-bj-dev-k8s-master1 ~]$ cd kubernetes/cfssl/
[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat <<EOF>> kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.1.0.1",
    "192.168.9.148",
    "192.168.9.149",
    "192.168.9.150",
    "192.168.7.177",
    "192.168.7.178",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

3. Generate the kubernetes certificate and private key

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cfssl gencert -ca=/work/admin/kubernetes/cfssl/ca.pem \
-ca-key=/work/admin/kubernetes/cfssl/ca-key.pem \
-config=/work/admin/kubernetes/cfssl/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2019/05/27 13:53:24 [INFO] generate received request
2019/05/27 13:53:24 [INFO] received CSR
2019/05/27 13:53:24 [INFO] generating key: rsa-2048
2019/05/27 13:53:24 [INFO] encoded CSR
2019/05/27 13:53:24 [INFO] signed certificate with serial number 256951686520375414970420988042970133531805257836
2019/05/27 13:53:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[admin@haifly-bj-dev-k8s-master1 cfssl]$
[admin@haifly-bj-dev-k8s-master1 cfssl]$ ll kubernetes*
-rw-r--r-- 1 admin admin 1253 May 27 13:53 kubernetes.csr
-rw-rw-r-- 1 admin admin 461 May 27 13:52 kubernetes-csr.json
-rw------- 1 admin admin 1679 May 27 13:53 kubernetes-key.pem
-rw-rw-r-- 1 admin admin 1619 May 27 13:53 kubernetes.pem

4. Distribute the certificates

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cp kubernetes*.pem ~/kubernetes/ssl/
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp kubernetes*.pem admin@192.168.9.149:~/kubernetes/ssl/

5. Configure kube-apiserver

5.1 Create the client token file used by kube-apiserver

[admin@haifly-bj-dev-k8s-master1 cfssl]$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c25603db53aff8d0111b4e45c281ab32
[admin@haifly-bj-dev-k8s-master1 cfssl]$ vim ~/kubernetes/ssl/bootstrap-token.csv
[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat ~/kubernetes/ssl/bootstrap-token.csv
c25603db53aff8d0111b4e45c281ab32,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Copy bootstrap-token.csv to the other master:
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp ~/kubernetes/ssl/bootstrap-token.csv admin@192.168.9.149:~/kubernetes/ssl/
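
The file can also be written non-interactively instead of via vim (a sketch using the same format: token,user,uid,groups):

```
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" \
  > ~/kubernetes/ssl/bootstrap-token.csv
```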

5.2 Deploy the kube-apiserver service

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/work/admin/kubernetes/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--insecure-bind-address=127.0.0.1 \
--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1 \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/work/admin/kubernetes/ssl/bootstrap-token.csv \
--service-cluster-ip-range=10.1.0.0/16 \
--service-node-port-range=20000-40000 \
--tls-cert-file=/work/admin/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/work/admin/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/work/admin/kubernetes/ssl/ca.pem \
--service-account-key-file=/work/admin/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/work/admin/kubernetes/ssl/ca.pem \
--etcd-certfile=/work/admin/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/work/admin/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://192.168.9.148:2379,https://192.168.9.149:2379,https://192.168.9.150:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/work/admin/kubernetes/log/api-audit.log \
--event-ttl=1h \
--v=2 \
--logtostderr=false \
--log-dir=/work/admin/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.3 Start kube-apiserver and check its status

[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl enable kube-apiserver
[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl start kube-apiserver
[admin@haifly-bj-dev-k8s-master1 ssl]$ systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 14:17:59 CST; 2min 59s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 1741 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─1741 /work/admin/kubernetes/bin/kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger...
[admin@haifly-bj-dev-k8s-master1 ssl]$

6. Configure kube-controller-manager

6.1 Deploy kube-controller-manager

[admin@haifly-bj-dev-k8s-master1 ssl]$ cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/work/admin/kubernetes/bin/kube-controller-manager \
--bind-address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.1.0.0/16 \
--cluster-cidr=10.2.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/work/admin/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/work/admin/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/work/admin/kubernetes/ssl/ca-key.pem \
--root-ca-file=/work/admin/kubernetes/ssl/ca.pem \
--leader-elect=true \
--v=2 \
--logtostderr=false \
--log-dir=/work/admin/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • --master=http://{MASTER_IP}:8080: talk to kube-apiserver over the insecure 8080 port;
  • --cluster-cidr: the CIDR range for Pods in the cluster; it must be routable between nodes (the CNI plugin, calico in this deployment, takes care of that);
  • --service-cluster-ip-range: the CIDR range for Services in the cluster; it must NOT be routable between nodes and must match the value passed to kube-apiserver;
  • --cluster-signing-*: the certificate and private key used to sign the certificates created for TLS bootstrap;
  • --root-ca-file: used to verify the kube-apiserver certificate; only when this is set is the CA certificate placed into each Pod's ServiceAccount;
  • --leader-elect=true: with a multi-master setup, elect a single active kube-controller-manager process;

6.2 Start kube-controller-manager

[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl enable kube-controller-manager
[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl start kube-controller-manager

6.3 Check kube-controller-manager status

[admin@haifly-bj-dev-k8s-master1 cfssl]$ systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 14:25:42 CST; 7s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 1813 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─1813 /work/admin/kubernetes/bin/kube-controller-manager --address=192.168.9.148 --master=http://1...

7. Configure kube-scheduler

7.1 Deploy the kube-scheduler service

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/work/admin/kubernetes/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2 \
--logtostderr=false \
--log-dir=/work/admin/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

7.2 Start the kube-scheduler service

[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl enable kube-scheduler
[admin@haifly-bj-dev-k8s-master1 cfssl]$ sudo systemctl start kube-scheduler

7.3 Check kube-scheduler status

[admin@haifly-bj-dev-k8s-master1 ~]$ systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 14:29:53 CST; 2min 32s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 1832 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─1832 /work/admin/kubernetes/bin/kube-scheduler --address=192.168.9.148 --master=http://192.168.9....
[admin@haifly-bj-dev-k8s-master1 ~]$

Part 4: Deploying the kubectl tool

1. Prepare the binaries
2. Create the admin certificate signing request
3. Generate the admin certificate
4. Distribute the certificates
5. Configure kubectl's kubeconfig
5.1 Set cluster parameters
5.2 Set client credentials
5.3 Set the context
5.4 Set the default context
6. Use kubectl to inspect the current resources

1. Prepare the binaries

[admin@haifly-bj-dev-k8s-master1 cfssl]$ ll ~/kubernetes/bin/
total 552072
-rwxr-xr-x 1 admin admin 19237536 May 27 10:07 etcd
-rwxr-xr-x 1 admin admin 15817472 May 27 10:07 etcdctl
-rwxr-xr-x 1 admin admin 167595360 May 27 12:57 kube-apiserver
-rwxr-xr-x 1 admin admin 115612192 May 27 12:57 kube-controller-manager
-rwxr-xr-x 1 admin admin 43115328 May 27 12:57 kubectl
-rwxr-xr-x 1 admin admin 127981504 May 27 12:57 kubelet
-rwxr-xr-x 1 admin admin 36685440 May 27 12:57 kube-proxy
-rwxr-xr-x 1 admin admin 39258304 May 27 12:57 kube-scheduler

2. Create the admin certificate signing request

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat <<EOF>> admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

3. Generate the admin certificate

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cfssl gencert -ca=/work/admin/kubernetes/cfssl/ca.pem \
-ca-key=/work/admin/kubernetes/cfssl/ca-key.pem \
-config=/work/admin/kubernetes/cfssl/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
2019/05/27 14:56:55 [INFO] generate received request
2019/05/27 14:56:55 [INFO] received CSR
2019/05/27 14:56:55 [INFO] generating key: rsa-2048
2019/05/27 14:56:56 [INFO] encoded CSR
2019/05/27 14:56:56 [INFO] signed certificate with serial number 619469735564507398344826076638968754739291644427
2019/05/27 14:56:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[admin@haifly-bj-dev-k8s-master1 cfssl]$ ll admin*
-rw-r--r-- 1 admin admin 1009 May 27 14:56 admin.csr
-rw-rw-r-- 1 admin admin 229 May 27 14:55 admin-csr.json
-rw------- 1 admin admin 1675 May 27 14:56 admin-key.pem
-rw-rw-r-- 1 admin admin 1399 May 27 14:56 admin.pem

4. Distribute the certificates

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cp admin*.pem ~/kubernetes/ssl/
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp admin*.pem admin@192.168.9.149:~/kubernetes/ssl/

5. Configure kubectl's kubeconfig

Because of RBAC, certain bindings are built in by default; note "O": "system:masters" in admin-csr.json: members of the system:masters group are bound to the cluster-admin role.

5.1 Set cluster parameters

First put the kubectl binary on the system PATH.
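
For example (a sketch using this deployment's paths):

```
sudo cp /work/admin/kubernetes/bin/kubectl /usr/local/bin/
# or, for the current shell only:
export PATH=/work/admin/kubernetes/bin:$PATH
```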

[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config set-cluster kubernetes \
--certificate-authority=/work/admin/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.9.148:6443

5.2 Set client credentials

[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config set-credentials admin \
--client-certificate=/work/admin/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/work/admin/kubernetes/ssl/admin-key.pem

5.3 Set the context

[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin

5.4 Set the default context

[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config use-context kubernetes

The whole point of the steps above is to generate the ~/.kube/config file. If a node needs to run kubectl, copy the corresponding admin*.pem files into /etc/kubernetes/ssl on that node and copy ~/.kube/config over as well.
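
A sketch of doing that for node1 (the node IP is assumed; use sudo on the node if /etc/kubernetes/ssl is not writable):

```
scp ~/kubernetes/ssl/admin*.pem admin@192.168.10.177:/etc/kubernetes/ssl/
ssh admin@192.168.10.177 'mkdir -p ~/.kube'
scp ~/.kube/config admin@192.168.10.177:~/.kube/config
```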

6. Use kubectl to inspect the current resources

[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}

[admin@haifly-bj-dev-k8s-master2 ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}

Part 5: Deploying the nodes

1. Install Docker
1.1 Install prerequisite system tools
1.2 Add the Docker repository
1.3 Install the desired version
1.4 Add the current user to the docker group
1.5 Edit the daemon config
1.6 Start Docker
1.7 Check docker info for warnings
2. Preparation
2.1 Prepare the kubelet and kube-proxy binaries on the node
2.2 Create the role binding
2.3 Create the kubeconfig file for kubelet
3. Deploy kubelet
3.1 Create the kubelet working directory
3.2 Create the kubelet service
3.3 Start kubelet
3.4 Check kubelet status
3.5 View CSRs on the master
3.6 Approve the kubelet TLS certificate request on the master
4. Configure kube-proxy
4.1 Install the packages kube-proxy needs for LVS
4.2 Create the kube-proxy CSR JSON
4.3 Generate the kube-proxy certificate
4.4 Distribute the certificates
4.5 Create the kube-proxy kubeconfig
4.6 Create the kube-proxy working directory
4.7 Create the kube-proxy service config
4.8 Start kube-proxy
4.9 Check kube-proxy status
4.10 Check LVS status

1. Install Docker

1.1 Install prerequisite system tools

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2

1.2 Add the Docker repository

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.3 Install the desired version

[admin@haifly-bj-dev-k8s-node1 ~]$ yum list docker-ce  --showduplicates |sort -r
Loaded plugins: fastestmirror
Installed Packages
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos @docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
Determining fastest mirrors
Available Packages

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo yum -y install docker-ce-18.03.1.ce-1.el7.centos

1.4 Add the current user to the docker group

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo groupadd docker
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo usermod -aG docker $USER

# log out and back in for the group change to take effect

1.5 Edit the daemon config

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo mkdir /etc/docker
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo vim /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [ "overlay2.override_kernel_check=true" ],
  "graph": "/work/admin/docker",
  "insecure-registries": ["harbor.feiersmart.local"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}

# This sets the storage driver to overlay2, relocates the Docker data directory, trusts the private registry, and uses the json-file log driver (convenient for collecting pod stdout later); log files rotate at 100m with 3 kept

1.6 Start Docker

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl enable docker
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl start docker

1.7 Check docker info and make sure there are no warnings

[admin@haifly-bj-dev-k8s-node1 ~]$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.03.1-ce
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-693.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.639GiB
Name: haifly-bj-dev-k8s-node1
ID: 7PQT:SMIR:M4GQ:DEV7:CPYX:KXKC:VARL:FJKO:RVRT:NPYZ:SDMC:GLBJ
Docker Root Dir: /work/admin/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
harbor.feiersmart.local
127.0.0.0/8
Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

The WARNING lines need to be fixed:

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo vim /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo sysctl -p
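
If sysctl complains that these keys don't exist, the br_netfilter kernel module is probably not loaded yet (an assumption about the host; load it and retry):

```
sudo modprobe br_netfilter
sudo sysctl -p
```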

2. Preparation

2.1 Prepare the kubelet and kube-proxy binaries on the node

[admin@haifly-bj-dev-k8s-node1 ~]$ ll kubernetes/bin/
total 160812
-rwxr-xr-x 1 admin admin 127981504 May 27 15:54 kubelet
-rwxr-xr-x 1 admin admin 36685440 May 27 15:54 kube-proxy

2.2 Create the role binding

[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

2.3 Create the kubeconfig file for kubelet

2.3.1 Set cluster parameters
[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl config set-cluster kubernetes \
--certificate-authority=/work/admin/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.9.148:6443 \
--kubeconfig=bootstrap.kubeconfig
2.3.2 Set client credentials
[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl config set-credentials kubelet-bootstrap \
--token=c25603db53aff8d0111b4e45c281ab32 \
--kubeconfig=bootstrap.kubeconfig

This token is the one from the client token file configured for kube-apiserver earlier.

2.3.3 Set the context
[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
2.3.4 Select the default context
[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
2.3.5 Distribute the generated bootstrap.kubeconfig to the nodes
[admin@haifly-bj-dev-k8s-master1 ~]$ scp bootstrap.kubeconfig admin@192.168.10.177:~/kubernetes/ssl/

3. Deploy kubelet

3.1 Create the kubelet working directory

[admin@haifly-bj-dev-k8s-node1 ~]$ mkdir ~/kubernetes/kubelet/

3.2 Create the kubelet service

Some tuning is applied here: host resources are reserved on the node, and pods are evicted to other nodes when the node runs short on resources.

[admin@haifly-bj-dev-k8s-node1 ~]$ cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/work/admin/kubernetes/kubelet
ExecStart=/work/admin/kubernetes/bin/kubelet \
--address=192.168.10.177 \
--hostname-override=192.168.10.177 \
--pod-infra-container-image=harbor.feiersmart.local/public/pause-amd64:3.1 \
--register-node=true \
--experimental-bootstrap-kubeconfig=/work/admin/kubernetes/ssl/bootstrap.kubeconfig \
--kubeconfig=/work/admin/kubernetes/kubelet.kubeconfig \
--cert-dir=/work/admin/kubernetes/kubelet/ \
--cluster-dns=10.1.100.100 \
--cluster-domain=cluster.local. \
--hairpin-mode=promiscuous-bridge \
--max-pods=110 \
--network-plugin=cni \
--allow-privileged=true \
--serialize-image-pulls=false \
--logtostderr=true \
--cgroups-per-qos=true \
--cgroup-driver=cgroupfs \
--enforce-node-allocatable=pods,kube-reserved,system-reserved \
--kube-reserved-cgroup=/system.slice/kubelet.service \
--kube-reserved=cpu=200m,memory=250Mi,ephemeral-storage=1Gi \
--system-reserved-cgroup=/system.slice \
--system-reserved=cpu=200m,memory=250Mi,ephemeral-storage=1Gi \
--eviction-hard=memory.available<5%,nodefs.available<10%,imagefs.available<10% \
--eviction-soft=memory.available<10%,nodefs.available<15%,imagefs.available<15% \
--eviction-soft-grace-period=memory.available=2m,nodefs.available=2m,imagefs.available=2m \
--eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi,imagefs.available=500Mi \
--eviction-max-pod-grace-period=110 \
--v=2 \
--container-runtime=docker
ExecStartPost=/sbin/iptables -A INPUT -s 10.1.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 10.2.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -s 192.168.0.0/16 -p tcp --dport 4194 -j ACCEPT
ExecStartPost=/sbin/iptables -A INPUT -p tcp --dport 4194 -j DROP
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • --hostname-override: the host name this node shows up as in the cluster
  • --kubeconfig: where the kubeconfig file lives (generated automatically on bootstrap)
  • --bootstrap-kubeconfig: the bootstrap.kubeconfig file generated above
  • --cert-dir: where issued certificates are stored
  • --pod-infra-container-image: the image for the pod infrastructure (pause) container that holds the Pod network

3.3 Start kubelet

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl enable kubelet
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl start kubelet

3.4 Check kubelet status

[admin@haifly-bj-dev-k8s-node1 ~]$ systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 17:34:58 CST; 6s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 9237 ExecStartPost=/sbin/iptables -A INPUT -p tcp --dport 4194 -j DROP (code=exited, status=0/SUCCESS)
Process: 9233 ExecStartPost=/sbin/iptables -A INPUT -s 192.168.0.0/16 -p tcp --dport 4194 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 9230 ExecStartPost=/sbin/iptables -A INPUT -s 172.16.0.0/12 -p tcp --dport 4194 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 9229 ExecStartPost=/sbin/iptables -A INPUT -s 10.2.0.0/16 -p tcp --dport 4194 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 9223 ExecStartPost=/sbin/iptables -A INPUT -s 10.1.0.0/16 -p tcp --dport 4194 -j ACCEPT (code=exited, status=0/SUCCESS)
Main PID: 9222 (kubelet)
Memory: 13.5M
CGroup: /system.slice/kubelet.service
└─9222 /work/admin/kubernetes/bin/kubelet --address=192.168.10.177 --hostname-override=node1 --pod-...

3.5 View CSRs on the master

[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-QV2WFl5YQzZoV6P__d7rPPi5_PTh2Sx8I1pxEJm-nVE 14s kubelet-bootstrap Pending
[admin@haifly-bj-dev-k8s-master1 ~]$

3.6 Approve the kubelet TLS certificate request on the master

[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/node-csr-QV2WFl5YQzZoV6P__d7rPPi5_PTh2Sx8I1pxEJm-nVE approved

Querying again shows the request approved:
[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-QV2WFl5YQzZoV6P__d7rPPi5_PTh2Sx8I1pxEJm-nVE 5m27s kubelet-bootstrap Approved,Issued

List the nodes:
[admin@haifly-bj-dev-k8s-master1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 66s v1.14.2

4. Configure kube-proxy

4.1 Install the packages kube-proxy needs for LVS

Note: without LVS (IPVS), kube-proxy falls back to iptables; IPVS mode was introduced as a new feature around v1.10.

[admin@haifly-bj-dev-k8s-master1 ~]$ sudo yum install -y ipvsadm ipset conntrack
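
The IPVS kernel modules also need to be loaded (a sketch; on kernels 4.19 and newer, nf_conntrack_ipv4 is named nf_conntrack):

```
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  sudo modprobe $m
done
```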

4.2 Create the kube-proxy CSR JSON

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cat <<EOF>> kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

4.3 Generate the kube-proxy certificate

[admin@haifly-bj-dev-k8s-master1 cfssl]$ cfssl gencert -ca=/work/admin/kubernetes/cfssl/ca.pem \
-ca-key=/work/admin/kubernetes/cfssl/ca-key.pem \
-config=/work/admin/kubernetes/cfssl/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/05/27 17:47:19 [INFO] generate received request
2019/05/27 17:47:19 [INFO] received CSR
2019/05/27 17:47:19 [INFO] generating key: rsa-2048
2019/05/27 17:47:20 [INFO] encoded CSR
2019/05/27 17:47:20 [INFO] signed certificate with serial number 275412850767955755157758754957830816223105416047
2019/05/27 17:47:20 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[admin@haifly-bj-dev-k8s-master1 cfssl]$ ll kube-proxy*
-rw-r--r-- 1 admin admin 1009 May 27 17:47 kube-proxy.csr
-rw-rw-r-- 1 admin admin 230 May 27 17:45 kube-proxy-csr.json
-rw------- 1 admin admin 1679 May 27 17:47 kube-proxy-key.pem
-rw-rw-r-- 1 admin admin 1403 May 27 17:47 kube-proxy.pem

4.4 Distribute the certificates

[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp kube-proxy*.pem admin@192.168.10.177:~/kubernetes/ssl/

4.5 Create the kube-proxy kubeconfig

4.5.1 Set cluster info
[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config set-cluster kubernetes \
--certificate-authority=/work/admin/kubernetes/cfssl/ca.pem \
--embed-certs=true \
--server=https://192.168.9.148:6443 \
--kubeconfig=kube-proxy.kubeconfig
4.5.2 Set credentials
[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config set-credentials kube-proxy \
--client-certificate=/work/admin/kubernetes/cfssl/kube-proxy.pem \
--client-key=/work/admin/kubernetes/cfssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
4.5.3 Set the context
[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
4.5.4 Set the default context
[admin@haifly-bj-dev-k8s-master1 cfssl]$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
4.5.5 Distribute kube-proxy.kubeconfig to the nodes
[admin@haifly-bj-dev-k8s-master1 cfssl]$ scp kube-proxy.kubeconfig admin@192.168.10.177:~/kubernetes/ssl/

4.6 Create the kube-proxy working directory

[admin@haifly-bj-dev-k8s-node1 ~]$ mkdir ~/kubernetes/kube-proxy

4.7 Create the kube-proxy service config

[admin@haifly-bj-dev-k8s-node1 ~]$ cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/work/admin/kubernetes/kube-proxy
ExecStart=/work/admin/kubernetes/bin/kube-proxy \
--bind-address=192.168.10.177 \
--hostname-override=192.168.10.177 \
--cluster-cidr=10.2.0.0/16 \
--kubeconfig=/work/admin/kubernetes/ssl/kube-proxy.kubeconfig \
--masquerade-all \
--feature-gates=SupportIPVSProxyMode=true \
--proxy-mode=ipvs \
--ipvs-min-sync-period=5s \
--ipvs-sync-period=5s \
--ipvs-scheduler=rr \
--v=2 \
--logtostderr=false \
--log-dir=/work/admin/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • --hostname-override must match the value used by kubelet, otherwise kube-proxy won't find its Node after starting and will not create any iptables rules
  • --cluster-cidr must match kube-controller-manager's --cluster-cidr (10.2.0.0/16, the Pod network)
  • kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs
  • the file given to --kubeconfig embeds the kube-apiserver address, user name, certificate, and key used for requests and authentication
  • the predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs

4.8 Start kube-proxy

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl daemon-reload
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[admin@haifly-bj-dev-k8s-node1 ~]$ sudo systemctl start kube-proxy

4.9 Check kube-proxy status

[admin@haifly-bj-dev-k8s-node1 ~]$ systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-05-27 18:07:23 CST; 4min 31s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 10323 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
‣ 10323 /work/admin/kubernetes/bin/kube-proxy --bind-address=0.0.0.0 --hostname-override=node1 --cl...
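
4.10 Check LVS status

The outline above calls for an LVS check; with IPVS mode active, ipvsadm should list a virtual server for each Service IP (a quick check; exact output varies by cluster):

```
sudo ipvsadm -Ln
```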

Part 6: Deploying the Kubernetes network (calico, running as containers in k8s)

1. Preparation

1.1 Download the calicoctl tool
1.2 Download the install manifest
1.3 Pull the images calico needs
1.4 Edit calico.yaml

2. Deploy calico
2.1 Create the calico containers
2.2 Check the pods
2.3 Check calico network status
2.4 Update the kubelet config to enable the CNI network plugin

1. Preparation

1.1 Download the calicoctl tool

[admin@haifly-bj-dev-k8s-node1 ~]$ cd ~/kubernetes/bin/
[admin@haifly-bj-dev-k8s-node1 bin]$ curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.7.2/calicoctl
[admin@haifly-bj-dev-k8s-node1 bin]$ chmod +x calicoctl

1.2 Download the install manifest

[admin@haifly-bj-dev-k8s-master1 bin]$ mkdir ~/kubernetes/calico
[admin@haifly-bj-dev-k8s-master1 bin]$ cd ~/kubernetes/calico
[admin@haifly-bj-dev-k8s-master1 calico]$ wget https://docs.projectcalico.org/v3.7/getting-started/kubernetes/installation/hosted/calico.yaml

1.3 Pull the images calico needs

[admin@haifly-bj-dev-k8s-node1 ~]$ docker pull calico/cni:v3.7.2
[admin@haifly-bj-dev-k8s-node1 ~]$ docker pull calico/node:v3.7.2
[admin@haifly-bj-dev-k8s-node1 ~]$ docker pull calico/kube-controllers:v3.7.2

Push the images to your own private registry.
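
For example (a sketch; the harbor.feiersmart.local/public repository path is assumed from the earlier kubelet config):

```
for img in cni node kube-controllers; do
  docker tag calico/$img:v3.7.2 harbor.feiersmart.local/public/$img:v3.7.2
  docker push harbor.feiersmart.local/public/$img:v3.7.2
done
```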

1.4 Edit calico.yaml

The calico installed here is 3.7.2, the latest at the time of writing; the yaml is much like earlier versions. Change the following lines:

17   etcd-key: (contents of /work/admin/kubernetes/ssl/etcd-key.pem, base64-encoded)
18   etcd-cert: (contents of /work/admin/kubernetes/ssl/etcd.pem, base64-encoded)
19   etcd-ca: (contents of /work/admin/kubernetes/ssl/ca.pem, base64-encoded)
# put the etcd and CA certificates here after base64-encoding them

30 etcd_endpoints: "https://192.168.9.148:2379,https://192.168.9.149:2379,https://192.168.9.150:2379"
# the etcd cluster endpoints

42 veth_mtu: "1500"
# MTU (maximum transmission unit); must be less than or equal to the MTU of the host NIC (eth0)

311 value: "10.2.0.0/16"
# the Pod network CIDR; this must match --cluster-cidr in kube-controller-manager (not the Service range)
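
The base64 values can be produced like this (GNU base64; -w0 disables line wrapping, equivalent to the base64 | tr -d '\n' pipeline above):

```
base64 -w0 /work/admin/kubernetes/ssl/etcd-key.pem   # -> etcd-key
base64 -w0 /work/admin/kubernetes/ssl/etcd.pem       # -> etcd-cert
base64 -w0 /work/admin/kubernetes/ssl/ca.pem         # -> etcd-ca
```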

2. Deploy calico

2.1 Create the calico containers

[admin@haifly-bj-dev-k8s-master1 calico]$ kubectl create -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

2.2 Check the pods

[admin@haifly-bj-dev-k8s-master1 calico]$ kubectl get pod -o wide -n kube-system 
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-66f5c7bc8c-6c5gg 1/1 Running 0 15s 192.168.10.177 node1 <none> <none>
calico-node-gs4tb 1/1 Running 0 15s 192.168.10.177 node1 <none> <none>
calico-node-tjhxn 1/1 Running 0 15s 192.168.10.178 node2 <none> <none>

2.3 Check calico network status

[admin@haifly-bj-dev-k8s-node1 ~]$ sudo ~/kubernetes/bin/calicoctl node status
[sudo] password for admin:
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+-------------+
| 192.168.10.178 | node-to-node mesh | up | 06:19:27 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.