# Kubernetes
| Created | |
|---|---|
| Type | Platform |
| Language | Shell |
| Last Edit | |
## Basics

### Definition

A container orchestration system/tool: it manages containerised applications across different deployment environments on a cluster of machines.
### Need for Kubernetes

- Increased use of containers due to the microservices trend.
- Demand for a proper way to manage the state of hundreds of containers.
- Move containers from one node to another (e.g. for maintenance).
- Schedule containers on a cluster of machines.
### Other Popular Docker Orchestrators
- Docker Swarm
- Mesos
### Features
- High availability (or no downtime)
- High modularity
- Scalability (or high performance)
- Disaster Recovery (backup and restore)
- Can start containers in specific nodes
- Open Source
### Basic Architecture

Master-Worker Cluster

- Each worker node runs a `kubelet` process.
- The `kubelet` makes it possible for the nodes in the cluster to talk to each other and to run application processes.
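Once a cluster is running, the master-worker layout can be inspected from the command line. A minimal sketch, assuming `kubectl` is configured against a cluster and the worker runs a systemd-based Linux:

```shell
# List the nodes in the cluster along with their roles and addresses.
kubectl get nodes -o wide

# On a worker node itself, the kubelet typically runs as a system service
# (service name and init system may vary by distribution).
systemctl status kubelet
```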
## Setup - Local

### Basics

You can set up Kubernetes in the following ways:

- A local cluster (on your machine): follow the minikube or Docker for Windows/Mac lectures
- A production cluster using Kops on AWS
- An on-prem or cloud-agnostic cluster using kubeadm (lecture is at the end of the course)
- A managed production cluster on AWS using EKS (lecture can also be found at the end of the course)

Of these, Kops is recommended for testing out all the features.
### Minikube

#### Basics

- Runs a single-node Kubernetes cluster
- Cannot be used for production
- Needs virtualisation software (e.g. VirtualBox)
#### Installation

https://minikube.sigs.k8s.io/docs/start/?arch=/macos/arm64/stable/binary+download

After installation, run:

```shell
minikube start
```

Minikube starts a VM on its own, so you cannot run Minikube inside another VM.

The "VM" driver can be a container runtime or a virtual machine manager:

- Docker - VM + container (preferred)
- HyperKit - VM
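The driver can be picked explicitly at start time. A sketch, assuming Docker (or HyperKit on macOS) is already installed:

```shell
# Start minikube using the Docker driver (the preferred option above).
minikube start --driver=docker

# Or, on macOS with HyperKit installed:
# minikube start --driver=hyperkit

# Check the profile and the driver the cluster is using.
minikube profile list
```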
#### Alternative 1 - Docker Client

You can also use Kubernetes via the Docker client.

Enable "Start a Kubernetes single-node cluster when starting Docker Desktop" in the Docker Desktop settings, then verify:

```shell
kubectl get nodes
kubectl config get-contexts
```

Proceed with the following after setting up either Minikube or Kubernetes in the Docker client.
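If both Minikube and Docker Desktop's cluster are installed, `kubectl` holds a context for each, and you can switch between them. A sketch, assuming the default context names `docker-desktop` and `minikube`:

```shell
# List every context kubectl knows about; the current one is starred.
kubectl config get-contexts

# Point kubectl at Docker Desktop's built-in cluster...
kubectl config use-context docker-desktop

# ...or back at the minikube cluster.
kubectl config use-context minikube
```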
### Kubectl

Check whether it is installed:

```shell
kubectl version
```

If not, go through:

https://kubernetes.io/docs/tasks/tools/
#### Config

```shell
cat ~/.kube/config
```

### Deploy
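Rather than reading the raw file, `kubectl` can print a cleaned-up view of the same config:

```shell
# Show the merged kubeconfig with certificate data redacted.
kubectl config view

# Show only the context kubectl is currently pointing at.
kubectl config current-context
```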
#### Start

Create a sample deployment and expose it on port 8080:

```shell
kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080
```

It may take a moment, but your deployment will soon show up when you run:

```shell
kubectl get services hello-minikube
```

The easiest way to access this service is to let minikube launch a web browser for you:

```shell
minikube service hello-minikube
```

Alternatively, use kubectl to forward the port:

```shell
kubectl port-forward service/hello-minikube 7080:8080
```

The application is now available at http://localhost:7080/
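To see the deployment in action beyond a single pod, you can inspect and scale it. A sketch, reusing the `hello-minikube` deployment created above:

```shell
# Check that the pod behind the deployment is running.
kubectl get pods

# Scale the sample deployment out to three replicas (illustrative).
kubectl scale deployment hello-minikube --replicas=3

# The pod list should now show three hello-minikube pods.
kubectl get pods
```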
#### Stop

```shell
minikube stop
```

#### Delete

```shell
minikube delete
```

## Setup - Prod
### Kops

- Recommended for AWS
- Stands for Kubernetes Operations
- Handles installation, upgrades and management
- Only works on Mac / Linux
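A typical Kops workflow looks roughly like the following. This is an illustrative sketch only: the cluster name, DNS domain, S3 state bucket, and availability zone are all placeholders, and a working AWS account with credentials is assumed:

```shell
# Kops keeps cluster state in an S3 bucket (placeholder name).
export KOPS_STATE_STORE=s3://example-kops-state

# Define the cluster configuration (placeholder domain and zone).
kops create cluster --name=k8s.example.com --zones=us-east-1a

# Apply the configuration and actually create the AWS resources.
kops update cluster k8s.example.com --yes
```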
### Vagrant

https://developer.hashicorp.com/vagrant/install?product_intent=vagrant
#### Initialize

```shell
mkdir ubuntu
cd ubuntu
vagrant init ubuntu/xenial64
```

#### Start

```shell
vagrant up
```

https://dev.to/mattdark/using-docker-as-provider-for-vagrant-10me
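Once the box is up, the usual Vagrant lifecycle commands apply:

```shell
# SSH into the running box.
vagrant ssh

# Stop the VM but keep its state on disk.
vagrant halt

# Remove the VM entirely.
vagrant destroy
```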
#### Vagrantfile for Docker

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  # config.vm.box = "ubuntu/xenial64"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Disable the default share of the current code directory. Doing this
  # provides improved isolation between the vagrant box and your host
  # by making sure your Vagrantfile isn't accessible to the vagrant box.
  # If you use this you may want to enable additional shared subfolders as
  # shown above.
  # config.vm.synced_folder ".", "/vagrant", disabled: true

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL

  config.vm.provider :docker do |d|
    d.build_dir = "."
    d.remains_running = true
    d.has_ssh = true
  end
end
```
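With a Docker provider configured in the Vagrantfile, the environment can be brought up against Docker explicitly rather than the default provider:

```shell
# Force Vagrant to use the Docker provider for this environment.
vagrant up --provider=docker

# Confirm the state of the environment.
vagrant status
```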
### Kubeadm