Kubernetes Multinode cluster on AWS !!!

Updated: Apr 16, 2021

Welcome back to my new technical blog! In this post, I am going to configure a Kubernetes multi-node cluster on AWS (EC2 instances) using Ansible roles. Through this blog, you will get to know how to create the roles that configure a K8s multi-node cluster.

Pre-requisites:-

  • Ansible basics

  • AWS EC2 instances

  • Kubernetes basics

  • AWS dynamic inventory configured


Steps we have to do:-

  1. Write a role for provisioning ec2 instances, eg:- provision_ec2.

  2. Create a playbook to run the role provision_ec2.

  3. Write a role for configuring the Kubernetes master node.

  4. Write a role for configuring the Kubernetes slave node.

  5. Create a playbook to run the above two roles.


Let's Jump to the practical part


1. Write a role for provisioning ec2 instances, eg:- provision_ec2


Now I am going to launch 3 EC2 instances using an Ansible role, in which one instance is for the master node and the remaining two are for the worker nodes.

But before that, make sure that you have updated your Ansible configuration file and provided the paths of the private key and the roles directory.
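For reference, a minimal ansible.cfg along these lines works; the paths and the remote user here are placeholders for your own setup:

[defaults]
inventory = /opt/ansible/inventory
remote_user = ec2-user
private_key_file = /opt/ansible/mykey.pem
roles_path = /opt/ansible/roles
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root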

Now let's create a role for launching EC2 instances. To create a role, we have to run this command:

ansible-galaxy init <name_of_role>
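Running this scaffolds the standard role layout:

provision_ec2/
├── defaults/
├── files/
├── handlers/
├── meta/
├── tasks/
├── templates/
├── tests/
└── vars/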

After creating the role, we have to change into the role's directory and put all the variables in the main.yml file of the vars directory and the tasks in the main.yml of the tasks directory, as sketched below.
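The original screenshots are not reproduced here, so this is only a sketch of what the two files can look like; every value in vars/main.yml is a placeholder you must replace with your own AMI, subnet, security group, key pair, and credentials.

vars/main.yml:

region: ap-south-1
instance_type: t2.micro
ami_id: "<your-ami-id>"
subnet_id: "<your-subnet-id>"
sg_id: "<your-security-group-id>"
key_name: "<your-key-pair>"
access_key: "<your-aws-access-key>"
secret_key: "<your-aws-secret-key>"

tasks/main.yml:

- name: Launching EC2 instances for the cluster
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    assign_public_ip: yes
    region: "{{ region }}"
    state: present
    wait: yes
    count: 1
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: "{{ item }}"
  loop:
    - master
    - slave1
    - slave2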


 

2. Write a playbook to run the role provision_ec2.


Now write a playbook for the above created role and run it.

Playbook:-
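The playbook screenshot is not shown here, but it is just a thin wrapper around the role, run against localhost since the ec2 module talks to the AWS API from the controller node:

- hosts: localhost
  roles:
    - provision_ec2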

Now you can see that we have launched 3 EC2 instances successfully.


Here you can see that our instances have been successfully configured as target nodes of Ansible, so now we are good to go for configuring the Kubernetes multi-node cluster.
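A quick way to confirm this, assuming the dynamic inventory is already in place, is an ad-hoc ping of all the hosts:

ansible all -m ping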

 

3. Write a role for configuring the Kubernetes master node.


Now let's create a role for the Kubernetes master node.


ansible-galaxy init <name_of_role>

In my case, the name of the role is "K8s_master".


Configure Kubernetes master node:-

  • Install docker:-

As we know, Kubernetes is a container management tool, so we need to install Docker.

- name: installing docker
  package:
      name: docker
      state: present
  • Start and enable docker service:-


- name: starting docker service
  service:
      name: docker
      state: started
      enabled: yes
      
  • Configure yum repository for Kubernetes:-

 
- name: configuring yum repo
  yum_repository:
      name: kubernetes
      description: Kubernetes YUM repo
      baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
      gpgcheck: yes
      repo_gpgcheck: yes
      enabled: yes
      gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      exclude: kubelet kubeadm kubectl
      
  • Install kubeadm, kubelet, kubectl:-

- name: Installing kubeadm kubelet kubectl
  # --disableexcludes bypasses the exclude line we set in the repo file above
  command: yum install kubeadm kubelet kubectl -y --disableexcludes=kubernetes
  • Start and enable kubelet service:-

- name: Starting kubelet service
  service:
      name: kubelet
      state: started
      enabled: yes
  • Change driver of docker:-

Kubernetes supports the systemd cgroup driver, whereas Docker by default uses the cgroupfs driver. So we need to change Docker's cgroup driver from cgroupfs to systemd.

For changing this Docker internal, go to the /etc/docker directory, create a file named daemon.json, and write:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

- name: changing driver of docker
  copy:
      src: daemon.json
      dest: /etc/docker/daemon.json
      

We have put the daemon.json file inside the files directory of the role, which is where the copy module looks for relative src paths.

  • Restart docker service:-

After changing a service's configuration, we have to restart the service.



- name: Restarting docker service
  service:
      name: docker
      state: restarted
      
  • Pull k8s configuration images:-

As we know, the Kubernetes control-plane programs run inside containers, so we need to pull all the images required for configuring the Kubernetes master node with the kubeadm program.


- name: Pulling K8s configuration images
  command: kubeadm config images pull
  
  • Install iproute-tc:-

We have to install iproute-tc because Kubernetes needs the tc utility for routing purposes; it controls and manages traffic.


- name: Installing iproute-tc for traffic control
  package:
      name: iproute-tc
      state: present
      
  • Change the bridge-nf-call-iptables:-

Sometimes we will face an error saying that bridge-nf-call-iptables is not set to 1, so we need to set it in the /etc/sysctl.d/iptables-1.conf file.


- name: change the bridge-nf-call-iptables
  copy:
      dest: /etc/sysctl.d/iptables-1.conf
      src: kube.conf
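For reference, the kube.conf kept in the role's files directory typically contains just this sysctl setting (the ip6tables line is a common companion):

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1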
 
  • Reload the sysctl settings:-

The new setting takes effect once we reload the sysctl configuration; sysctl --system re-reads all the sysctl configuration files, including the one we just copied.


- name: Reload sysctl settings
  command: sysctl --system
  

Now we have to initialize the Kubernetes master node using the command given below.


kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

As we know, every pod gets an IP address, and the master is the one responsible for allocating IPs to pods. So we pass a range of IP addresses through the --pod-network-cidr option, and the master assigns pod IPs from that range.

We use the --ignore-preflight-errors option to skip the compute-requirement checks. kubeadm expects a minimum of 2 CPUs and 2 GB of RAM, but we are using t2.micro instances, which have 1 CPU and 1 GB of RAM, so we can safely ignore these errors.



  - name: Initializing Master
    command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
    ignore_errors: yes
    
  • Configure Master as a Kubernetes client:-

- name: Creating .kube directory
  shell: mkdir -p $HOME/.kube
  ignore_errors: yes

- name: Config file copying to .kube directory
  shell: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  ignore_errors: yes

- name: Changing permission
  shell: sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ignore_errors: yes

Note that the command module does not process shell features such as the $HOME and $(...) expansions used above, which is why these tasks use the shell module instead.

  • Create the Flannel overlay network:-


- name: Creating Flannel for overlay network
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  
  • Generate Token and copy that token:-

Now, at last, we have to generate a token and copy it so that we can join the slaves to this master node.


- name: Generating token
  command: kubeadm token create --print-join-command
  register: token
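The registered token variable lives on the master host, so it has to be handed over to the slave plays somehow. One way (an assumption on my part, since the original code is in screenshots) is an in-memory helper host created with add_host; its variables stay reachable from later plays via hostvars:

- name: Sharing the join command with the slave plays
  add_host:
    name: token_holder
    master_token: "{{ token.stdout }}"

The slave play can then set master_token: "{{ hostvars['token_holder']['master_token'] }}", as shown in the final playbook.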
  

 

4. Write a role for configuring the Kubernetes slave node.


Now create a role for the Kubernetes slave node.


ansible-galaxy init <name_of_role>

In my case, the name of the role is "K8s_slave".


Configure Kubernetes slave node:-

Most of the steps are the same as we do in configuring the master node.

  • Install docker:-

As we know, Kubernetes is a container management tool, so we need to install Docker.

- name: installing docker
  package:
      name: docker
      state: present
  • Start and enable docker service


- name: starting docker service
  service:
      name: docker
      state: started
      enabled: yes
      
  • Configure yum repository for Kubernetes

 
- name: configuring yum repo
  yum_repository:
      name: kubernetes
      description: Kubernetes YUM repo
      baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
      gpgcheck: yes
      repo_gpgcheck: yes
      enabled: yes
      gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      exclude: kubelet kubeadm kubectl
      
  • Install kubeadm, kubelet, kubectl

- name: Installing kubeadm kubelet kubectl 
  command: yum install kubeadm kubelet kubectl -y --disableexcludes=kubernetes
  • Start and enable kubelet service

- name: Starting kubelet service
  service:
      name: kubelet
      state: started
      enabled: yes
  • Change driver of docker

Kubernetes supports the systemd cgroup driver, whereas Docker by default uses the cgroupfs driver. So we need to change Docker's cgroup driver from cgroupfs to systemd.

For changing this Docker internal, go to the /etc/docker directory, create a file named daemon.json, and write:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

- name: changing driver of docker
  copy:
      src: daemon.json
      dest: /etc/docker/daemon.json
      

We have put the daemon.json file inside the files directory of the role, which is where the copy module looks for relative src paths.

  • Restart docker service:-

After changing a service's configuration, we have to restart the service.



- name: Restarting docker service
  service:
      name: docker
      state: restarted
        
  • Install iproute-tc:-

We have to install iproute-tc because Kubernetes needs the tc utility for routing purposes; it controls and manages traffic.


- name: Installing iproute-tc for traffic control
  package:
      name: iproute-tc
      state: present
      
  • Change the bridge-nf-call-iptables:-

Sometimes we will face an error saying that bridge-nf-call-iptables is not set to 1, so we need to set it in the /etc/sysctl.d/iptables-1.conf file.


- name: change the bridge-nf-call-iptables
  copy:
      dest: /etc/sysctl.d/iptables-1.conf
      src: kube.conf
 
  • Reload the sysctl settings:-

As on the master, the new setting takes effect once we reload the sysctl configuration with sysctl --system.


- name: Reload sysctl settings
  command: sysctl --system
  
  • Join slave to the master node:-

Now, at last, we have to join the slave nodes to the master node. To join, each slave runs the join command with the token that we generated earlier.

So I stored that join command in a variable earlier and pass that variable here.

- name: join slave to master node
  shell: "{{ master_token }}"
  ignore_errors: yes
  register: masterToken

 

5. Create a playbook to run the above two roles


Now write a playbook to run the above two roles and set up the whole Kubernetes multi-node cluster.

Playbook:-
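The screenshot is not reproduced here; a minimal version, assuming the dynamic inventory groups the hosts by their Name tags (the tag_Name_* group names below follow a common convention and may differ in your setup), could look like this:

- hosts: tag_Name_master
  roles:
    - K8s_master

- hosts: tag_Name_slave1,tag_Name_slave2
  vars:
    master_token: "{{ hostvars['token_holder']['master_token'] }}"
  roles:
    - K8s_slave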

Now let's run the playbook




Let's check whether our Kubernetes multi-node cluster has been set up successfully.
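On the master node, listing the nodes should now show the master and both workers in the Ready state:

kubectl get nodes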





