

樹(shù)莓派K8S集群搭建

2023-02-19 09:53 Author: 騎驢看數據

一、環(huán)境準(zhǔn)備

1、我的樹(shù)莓派配置清單

master:樹(shù)莓派4b 4G內(nèi)存、16G存儲(chǔ)。

node1/node2:樹(shù)莓派4b 8G內(nèi)存,32G存儲(chǔ)。

系統(tǒng):樹(shù)莓派 64位系統(tǒng)?GNU/Linux 11

Note: all of the following operations are performed as root.

2. Basic configuration

三臺(tái)設(shè)備均要操作

2.1、時(shí)間同步

三臺(tái)主機(jī)的時(shí)間要同步

2.2、關(guān)閉防火墻

樹(shù)莓派默認(rèn)防火墻規(guī)則是放開(kāi)所有,可以不用管。

2.3 Disable the swap partition

臨時(shí)禁用:swapoff -a? 或者?dphys-swapfile swapoff

Permanent: nano /etc/dphys-swapfile

Set CONF_SWAPSIZE to 0, then reload the swap configuration and verify that swap is disabled, as sketched below.
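With CONF_SWAPSIZE set to 0, one commonly used sequence to apply and verify the change is:

dphys-swapfile swapoff              # turn swap off immediately
dphys-swapfile uninstall            # remove the swap file itself
systemctl disable dphys-swapfile    # keep swap off across reboots
free -h                             # the Swap: line should now read 0B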

2.4、為三臺(tái)主機(jī)添加hosts文件

192.168.31.85 master

192.168.31.70 pinode1

192.168.31.252 pinode2
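If a host's name does not already match its /etc/hosts entry, it can be set with hostnamectl (names taken from the list above):

hostnamectl set-hostname master    # on the master
hostnamectl set-hostname pinode1   # on node1
hostnamectl set-hostname pinode2   # on node2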

2.5、開(kāi)啟ip_forword轉(zhuǎn)發(fā)

臨時(shí)生效:echo "1" > /proc/sys/net/ipv4/ip_forward

Permanent: edit /etc/sysctl.conf (the required line is shown below), then run sysctl -p to apply it.

2.6、讓樹(shù)莓派支持cgroup

https://www.cnblogs.com/zhangzhide/p/16414728.html    # reference documentation

Use method 2 from that document: edit /boot/cmdline.txt.

Note: this step is critical; without it the master node cannot be initialized and worker nodes cannot join the cluster. The commonly used kernel parameters are shown below.
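For reference, the parameters typically appended for Kubernetes on a Raspberry Pi are the following, added to the end of the single existing line in /boot/cmdline.txt (the file must remain one line), followed by a reboot:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1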

II. Software Installation

1. Configure the Kubernetes and Docker apt sources

https://mirrors.huaweicloud.com/    # use the Huawei Cloud mirrors

關(guān)于k8s和docker源的配置方法,華為云有詳細(xì)說(shuō)明,不做贅述。

如果出現(xiàn)公鑰不可用的情況,就需要想辦法驗(yàn)證公鑰,比如下面的示例:

root@pinode1:~# apt-get update

Get:1 https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease [8,993 B]

Hit:2 http://security.debian.org/debian-security bullseye-security InRelease
Hit:3 http://deb.debian.org/debian bullseye InRelease

Hit:4 http://deb.debian.org/debian bullseye-updates InRelease

Err:1 https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease

  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05

Hit:5 http://archive.raspberrypi.org/debian bullseye InRelease

Reading package lists... Done

W: GPG error: https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05

E: The repository 'https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease' is not signed.

N: Updating from such a repository can't be done securely, and is therefore disabled by default.

N: See apt-secure(8) manpage for repository creation and user configuration details.

Fix:

root@pinode1:~# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv B53DC80D13EDEF05

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

Executing: /tmp/apt-key-gpghome.dlnfR5rnS6/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv B53DC80D13EDEF05

gpg: key B53DC80D13EDEF05: 1 duplicate signature removed

gpg: key B53DC80D13EDEF05: public key "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)" imported

gpg: Total number processed: 1

gpg:               imported: 1

2. Install the packages on both the master and node hosts

apt-get install kubelet kubeadm kubectl containerd.io -y
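Optionally (an extra step, not part of the original walkthrough), pin the Kubernetes packages so a routine apt upgrade cannot move them to an incompatible version:

apt-mark hold kubelet kubeadm kubectl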

root@master:~ # kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:56:50Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/arm64"}

root@master:~ # containerd -version

containerd containerd.io 1.6.18 2456e983eb9e37e47538f59ea18f2043c9a73640

3. Generate the containerd configuration file

containerd config default > /etc/containerd/config.toml

Two settings then need to be changed:

Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "registry.k8s.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.6"

The first change enables the systemd cgroup driver; the second switches to a domestic mirror so the required image can actually be pulled, otherwise the pull will keep timing out.

Then restart the containerd service: systemctl restart containerd
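A sketch of making both edits non-interactively, assuming the default config.toml layout in which each string occurs exactly once:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/k8sxio/pause:3.6#' /etc/containerd/config.toml
systemctl restart containerd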

4、開(kāi)啟bridge-nf-call-iptables

nano /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

Apply the configuration:

root@pinode1:/etc/sysctl.d# sysctl -p /etc/sysctl.d/k8s.conf

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

net.ipv4.ip_forward = 1

root@pinode1:/etc/sysctl.d# modprobe br_netfilter   # run this if you hit the errors above

root@pinode1:/etc/sysctl.d# sysctl -p /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.ipv4.ip_forward = 1
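Note that modprobe only loads br_netfilter for the current boot. One common way to have it loaded automatically after a reboot (an addition, not in the original) is a modules-load.d entry:

echo br_netfilter > /etc/modules-load.d/k8s.conf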

5、啟動(dòng)服務(wù)

systemctl start kubelet

systemctl enable kubelet

systemctl start containerd

systemctl enable containerd

III. Initialize the k8s Control Plane

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.26.1 --apiserver-advertise-address 192.168.31.85 --apiserver-bind-port 6443 --pod-network-cidr 172.16.0.0/16

[init] Using Kubernetes version: v1.26.1

[preflight] Running pre-flight checks

[WARNING SystemVerification]: missing optional cgroups: hugetlb

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.31.85]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.31.85 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.31.85 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 23.504546 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Using token: joxngp.as9ns2ieyl257okk

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

? https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.85:6443 --token joxngp.as9ns2ieyl257okk \

--discovery-token-ca-cert-hash sha256:08381f2456b2a2a32bbdc93c932f87dd642e1d693509c5a0df1a9a141064da6a

If initialization fails with an error like:

error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: nodes "master" not found
To see the stack trace of this error execute with --v=5 or higher

it is most likely caused by a leftover, unclean kubelet environment. When re-initializing on top of an old environment, do not clean up by hand-deleting files under /etc/kubernetes/. Instead, run kubeadm reset, which clears all the old files in the relevant directories, then restart the kubelet with systemctl restart kubelet. After that, kubeadm init can be run again.

IV. Deploy the Calico Network

calico部署官網(wǎng)地址:https://docs.tigera.io/calico/3.25/getting-started/kubernetes/self-managed-onprem/onpremises

1. Download the calico.yaml file

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

2. Modify calico.yaml

默認(rèn)192.168。修改成初始化時(shí)定義的網(wǎng)段

3、應(yīng)用calico.yaml文件

root@master:~ # kubectl apply -f calico.yaml

V. Add Worker Nodes to the Cluster

root@pinode1:~# kubeadm join 192.168.31.85:6443 --token joxngp.as9ns2ieyl257okk --discovery-token-ca-cert-hash sha256:08381f2456b2a2a32bbdc93c932f87dd642e1d693509c5a0df1a9a141064da6a

[preflight] Running pre-flight checks

[WARNING SystemVerification]: missing optional cgroups: hugetlb

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

pinode2節(jié)點(diǎn)加入集群

六、查看集群節(jié)點(diǎn)信息

兩個(gè)node節(jié)點(diǎn)都Ready。




樹(shù)莓派K8S集群搭建的評(píng)論 (共 條)

分享到微博請(qǐng)遵守國(guó)家法律
德安县| 龙门县| 姜堰市| 清镇市| 鲁山县| 林芝县| 兰西县| 太康县| 手游| 应用必备| 丹东市| 老河口市| 平利县| 营口市| 卓尼县| 莒南县| 泽普县| 北票市| 宝山区| 醴陵市| 遂平县| 新平| 上思县| 甘泉县| 丹寨县| 汉川市| 凯里市| 板桥市| 铁力市| 遂溪县| 建水县| 永康市| 涡阳县| 获嘉县| 闸北区| 普定县| 都昌县| 晋城| 兴隆县| 汤原县| 革吉县|