Setup Kubernetes Cluster using Vagrant
Install Kubernetes on Ubuntu 22.04
The kubeadm tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. The kubeadm tool is good if you need:
- A simple way for you to try out Kubernetes, possibly for the first time.
- A way for existing users to automate setting up a cluster and test their application.
- A building block in other ecosystem and/or installer tools with a larger scope.
Before you begin
To follow this guide, you need:
- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
- 2 GiB or more of RAM per machine (any less leaves little room for your apps).
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a public or a private network.
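A quick way to confirm the CPU and RAM requirements on each machine (standard Linux commands, nothing Kubernetes-specific):
# Number of CPUs (the control-plane node needs at least 2)
nproc
# Total memory (at least 2 GiB per machine)
free -h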
Check required ports
Control-plane node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 6443* | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |
Worker node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
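To verify connectivity on these ports between machines, a simple check with netcat can help; the addresses below are placeholders for your own nodes:
# From a worker, confirm the API server port on the control-plane node is reachable
nc -vz <control-plane-ip> 6443
# From the control plane, confirm the kubelet port on a worker is reachable (once kubelet is running)
nc -vz <worker-ip> 10250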
Installing runtime
By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your chosen container runtime. If you don't specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of well-known Unix domain sockets.

| Runtime | Path to Unix domain socket |
| --- | --- |
| Docker | /var/run/docker.sock |
| containerd | /run/containerd/containerd.sock |
| CRI-O | /var/run/crio/crio.sock |

If both Docker and containerd are detected, Docker takes precedence. This is needed because Docker 18.09 ships with containerd and both are detectable even if you only installed Docker. If any other two or more runtimes are detected, kubeadm exits with an error.
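This guide assumes a container runtime is already installed on every node. As a rough sketch, containerd can be installed on Ubuntu from the distribution repositories as shown below; the SystemdCgroup tweak reflects the commonly documented setup for kubeadm on Ubuntu 22.04, so verify it against the containerd documentation for your version:
# Install containerd and write out its default configuration
sudo apt-get update
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Switch containerd to the systemd cgroup driver expected by the kubelet
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd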
Installing kubeadm, kubelet and kubectl
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
- kubectl: the command line utility to talk to your cluster.
Infrastructure
Let's create 3 virtual machines (VMs): 1 master node and 2 worker nodes. There must be network connectivity among these VMs; a minimal Vagrantfile sketch follows.
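The sketch below is one way to bring up these three VMs with Vagrant and VirtualBox. The box name, host-only IPs, hostnames, and resource sizes are assumptions; adjust them to your environment, and make sure the VM IPs do not overlap with the pod network CIDR you pass to kubeadm init later.
cat <<'EOF' > Vagrantfile
# Hypothetical layout: 1 master and 2 workers on a VirtualBox host-only network
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.define "master" do |node|
    node.vm.hostname = "master"
    node.vm.network "private_network", ip: "192.168.56.10"
    node.vm.provider "virtualbox" do |vb|
      vb.cpus = 2
      vb.memory = 2048
    end
  end
  (1..2).each do |i|
    config.vm.define "worker0#{i}" do |node|
      node.vm.hostname = "worker0#{i}"
      node.vm.network "private_network", ip: "192.168.56.1#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.cpus = 2
        vb.memory = 2048
      end
    end
  end
end
EOF
vagrant up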
Installation on Ubuntu (Both on Master and Worker Nodes)
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
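To confirm the packages installed correctly, a quick sanity check:
kubeadm version
kubelet --version
kubectl version --client
kubeadm also expects swap to be disabled on every node; sudo swapoff -a turns it off for the current boot (commenting out the swap entry in /etc/fstab makes it permanent).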
Create Master Server
On the master machine, run the below commands:
1. sudo kubeadm init --apiserver-advertise-address=<<Master Server IP>> --pod-network-cidr=192.168.0.0/16
2. mkdir -p $HOME/.kube
3. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
4. sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. Run the join command printed by kubeadm init on the worker nodes to connect them to the Kubernetes cluster (see the sketch below if you need to regenerate it).
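If the join command printed by kubeadm init is lost (for example, the terminal output is gone), it can be regenerated on the master:
kubeadm token create --print-join-command
Run the printed kubeadm join ... command with sudo on each worker node.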
Install Calico (run it only on master node)
# kubectl create -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml
kubectl get nodes
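It can take a few minutes for the Calico pods to start and for all nodes to report Ready; one way to watch the progress (the calico-system namespace is where the operator-based install places its pods):
watch kubectl get pods -n calico-system
Then re-run kubectl get nodes until every node shows Ready.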
Installation on RHEL/CentOS (Both on Master and Worker Nodes)
In case you are using CentOS/RHEL:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
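On RHEL/CentOS, firewalld is often enabled by default; if it is, the control-plane ports from the tables above may also need to be opened (a sketch for a control-plane node; adjust the port list for workers):
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250/tcp --add-port=10251/tcp --add-port=10252/tcp
sudo firewall-cmd --reload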
Hi Raman,
home/ubuntu# kubectl get nodes
home/ubuntu#
home/ubuntu# kubectl get pods
home/ubuntu#
home/ubuntu# kubeadm join 172.31.17.197:6443 --token 3j7va6.h325yawyg1mrcfva --discovery-token-ca-cert-hash sha256:74fcd00d34a89f340a9a8fb5d6de2562e7d33a83d6c92305f07e956aaa3b149a
home/ubuntu#
I did the setup as explained above. I could see the nodes created.
==============
root@myCPMaster
NAME STATUS ROLES AGE VERSION
mycpmaster Ready control-plane,master 14m v1.20.0
worker01 Ready <none> 10m v1.20.0
worker02 Ready <none> 12m v1.20.0
root@myCPMaster
The next day I stopped and restarted the AWS instances (master and worker nodes). Now when I go to the master and run kubectl get nodes, and then kubectl get pods, I get:
root@myCPMaster
No resources found in default namespace. ----------------> it is showing this
root@myCPMaster
I thought I'll try to join the worker nodes again.
So when I ran the join command on the worker nodes it gave me:
==========
On the worker nodes I thought I would join the master node again:
root@worker01
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@worker01
===============================
Can you help?
After removing /etc/kubernetes/kubelet.conf and /etc/kubernetes/pki/ca.crt, you can restart kubelet; this issue will be resolved.
sudo rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt
sudo systemctl restart kubelet
Also, you need to enable the docker service:
sudo systemctl enable docker
Good. Thanks for posting this.
Hi Raman,
# apt update
http://deb.debian.org/debian buster-updates InRelease
#
I am getting the below issue when going into the pod and executing the apt update command.
root@pod2
Err:1 http://security.debian.org/debian-security buster/updates InRelease
Temporary failure resolving 'security.debian.org'
Err:2 http://deb.debian.org/debian buster InRelease
Temporary failure resolving 'deb.debian.org'
Err
Temporary failure resolving 'deb.debian.org'
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://security.debian.org/debian-security/dists/buster/updates/InRelease Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@pod2
This could be related to some network issue. Please create the pod again, and try to disconnect if you are connected to a VPN.
root@ip-172-31-34-32
home/ubuntu# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2021-08-11 10:25:14 UTC; 4s ago
Docs: https://kubernetes.io/docs/home/
Process: 5421 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 5421 (code=exited, status=1/FAILURE)
Please run the kubeadm reset command first and try the kubeadm init command again.
Raman,
I'm getting this error continuously when I run the "vagrant up" command to create the master and worker nodes.
schannel: next InitializeSecurityContext failed (0x80090326)
Tried reinstalling Vagrant, VirtualBox and Git. Nothing is helping. Tried almost all solutions from Google. Still no way around.
Could you please help? I need to have the nodes to try and prepare for the Internal Certification exam.
Please reinstall VirtualBox and Vagrant.
Hi Raman, I get this error message:
* minikube v1.30.1 on Ubuntu 22.04
* Automatically selected the docker driver. Other choices: virtualbox, none, ssh
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
* https://minikube.sigs.k8s.io/docs/reference/drivers/none/
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
What is the command you have used? Is it minikube start...?