
Working for a client, I just realized… I never wrote myself a deploy script to quickly stand up Docker and Kubernetes on a Debian 10 installation.

Let’s get three things in place to ease future pain: first Ansible, then Docker, and lastly Kubernetes.

Ansible is quite straightforward.

Install Ansible:

# In case it's a brand new VM
apt update

# Let's add the Ansible repo
echo "deb http://ppa.launchpad.net/ansible/ansible/ubuntu bionic main" | tee /etc/apt/sources.list.d/ansible.list
apt -y install gnupg2
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367

# Let's fetch the new repo
apt update

# Let's install Ansible
apt install -y ansible

# Check that Ansible is installed
ansible --version
ansible 2.8.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/debian/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.16 (default, Apr  6 2019, 01:42:57) [GCC 8.3.0]

We’ll go over the details of Ansible later.
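
As a quick smoke test (a minimal sketch; a default install implicitly targets localhost), an ad-hoc ping confirms Ansible can actually execute modules:

# Run the ping module against the local machine
ansible localhost -m ping
# A healthy install answers with "ping": "pong"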

Install Docker:

# In case you're doing it wrong
apt update

# Install requirements, again, in case you're skipping ansible
apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common

# Let's add the key
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

# Let's add the repo
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

# Classic update to fetch new repo
apt update

# Install docker
apt -y install docker-ce docker-ce-cli containerd.io

# Add your user to the docker group
usermod -aG docker $USER
newgrp docker

# Check docker is good
docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:29:29 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:28:05 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

If that worked, you can run a quick container to make sure everything is in order:

docker run --rm -it  --name test alpine:latest /bin/sh

Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
cd784148e348: Pull complete 
Digest: sha256:46e71df1e5191ab8b8034c5189e325258ec44ea739bba1e5645cff83c9048ff1
Status: Downloaded newer image for alpine:latest

 / # cat /etc/os-release 
 NAME="Alpine Linux"
 ID=alpine
 VERSION_ID=3.9.2
 PRETTY_NAME="Alpine Linux v3.9"
 HOME_URL="http://alpinelinux.org"
 BUG_REPORT_URL="http://bugs.alpinelinux.org"
 / # exit

If your screen shows something like the above, you should be good.

Install NodeJS:

curl -sL https://deb.nodesource.com/setup_15.x | bash -
apt-get install -y nodejs
npm install npm --global
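
A quick check that Node and npm landed (the exact versions will vary with the NodeSource release):

node --version
npm --version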

Since we got this far, let’s make our lives easier and install Ansible AWX.

apt -y install python3-pip git pwgen vim python3-docker
pip3 install requests==2.14.2
update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
update-alternatives --install /usr/bin/python python /usr/bin/python3 2
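
With both alternatives registered, python should now resolve to Python 3 (the higher priority wins in auto mode); a quick check:

# Confirm which interpreter "python" now points at
update-alternatives --display python
python --version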

# MATCH DOCKER COMPOSE VERSIONS! Check the version in play, then pin the pip install to the same release.
docker-compose version
docker-compose version 1.24.1, build 4667896
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

pip3 install docker-compose==1.24.1

# Clone the repo
su -
git clone --depth 50 https://github.com/ansible/awx.git
cd awx/installer/
# Customize
vim inventory
# Let's generate a secret key for the inventory
pwgen -N 1 -s 30
KEY

Edit the inventory to match (substitute your own passwords and the KEY you just generated):

dockerhub_base=ansible
awx_task_hostname=awx
awx_web_hostname=awxweb
postgres_data_dir=/tmp/pgdocker
host_port=80
host_port_ssl=443
docker_compose_dir=/tmp/awxcompose
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432
rabbitmq_password=awxpass
rabbitmq_erlang_cookie=cookiemonster
admin_user=admin
admin_password=StrongAdminpassword
create_preload_data=True
secret_key=KEY
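
If you’d rather not paste the key by hand, a one-liner like this (a convenience sketch; it assumes you’re still in awx/installer/ and the inventory uses the secret_key= form above) writes a fresh value in place:

# Generate a key and drop it straight into the inventory
sed -i "s/^secret_key=.*/secret_key=$(pwgen -N 1 -s 30)/" inventory
grep '^secret_key=' inventory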

Let’s run that playbook!

ansible-playbook -i inventory install.yml

Now you can access the web UI on the host and port set in the inventory. The compose files live in this folder:

cd ~/.awx/awxcompose/

Stop the containers:

docker-compose stop
Stopping awx_task      ... done
Stopping awx_web       ... done
Stopping awx_rabbitmq  ... done
Stopping awx_postgres  ... done
Stopping awx_memcached ... done

Let’s pull new images:

docker-compose pull
Pulling rabbitmq  ... done
Pulling memcached ... done
Pulling postgres  ... done
Pulling web       ... done
Pulling task      ... done

Let’s restart it, and with that we’ve closed the loop!

docker-compose up --force-recreate -d
Recreating awx_postgres  ... done
Recreating awx_rabbitmq  ... done
Recreating awx_memcached ... done
Recreating awx_web       ... done
Recreating awx_task      ... done

Last step!

Kubernetes!

# We need to enable bridge netfilter
modprobe br_netfilter;
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/20-bridge-nf.conf;
sysctl --system;
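
To confirm the setting took effect:

# Should print: net.bridge.bridge-nf-call-iptables = 1
sysctl net.bridge.bridge-nf-call-iptables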

Let’s take a few prerequisite steps in case you’re skipping the earlier sections.

# install tools for adding apt sources
apt-get update;
apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg2;

If you already installed Docker, skip this step:

# install docker
mkdir /etc/docker;
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -;
echo 'deb [arch=amd64] https://download.docker.com/linux/debian buster stable' > /etc/apt/sources.list.d/docker.list;
apt-get update;
apt-get install -y --no-install-recommends docker-ce;

Finally, Kubernetes time!

# install kubernetes
# NOTE: "xenial" is correct here. Kubernetes publishes the Debian-based packages at kubernetes-xenial.
# reference: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management
# (Re)write Docker's daemon.json so Docker uses the systemd cgroup driver kubelet expects
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
service docker restart
swapoff -a
# swapoff only lasts until reboot; comment out the swap entry in /etc/fstab to make it stick
echo "REMEMBER TO DISABLE SWAP IN /etc/fstab!"

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -;
echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list;
apt-get update;
apt-get install -y kubelet kubeadm kubectl;

# initialize kubernetes with a Flannel compatible pod network CIDR
kubeadm init --pod-network-cidr=10.244.0.0/16;

# Let's enable transparent huge pages
echo always > /sys/kernel/mm/transparent_hugepage/enabled

# setup kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config;
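
At this point kubectl can talk to the API server; expect the node to show NotReady until a pod network is installed:

kubectl cluster-info
kubectl get nodes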

# install Flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml;
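
Once Flannel is applied, watch kube-system until the flannel and coredns pods go Running; the node should then flip to Ready:

# Ctrl-C once everything is Running
kubectl -n kube-system get pods -w
kubectl get nodes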

# install Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
cat > dashboard-admin.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
EOF
kubectl delete clusterrolebinding/kubernetes-dashboard;
kubectl apply -f dashboard-admin.yaml;

# Switch to legacy iptables (Debian 10 defaults to nftables, which trips up kube-proxy)
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
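
You can confirm the switch took:

# Should report the legacy backend, e.g. "iptables v1.8.2 (legacy)"
iptables --version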

# Accept all traffic first to avoid an SSH lockout via iptables firewall rules
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
 
# Flush all iptables chains/firewall rules
iptables -F

# Delete all iptables chains
iptables -X

# Zero all counters too
iptables -Z

# Flush and delete the nat, mangle, and raw tables
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -t raw -F
iptables -t raw -X

# get the dashboard secret and display it
kubectl get secret -n kubernetes-dashboard \
| grep kubernetes-dashboard-token- \
| awk '{print $1}' \
| xargs kubectl describe secret -n kubernetes-dashboard;

# Let's allow pods to run on the master (single-node cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-

# Let's create a default storage location
mkdir -p /mnt/pv1
chmod 777 /mnt/pv1
cat > /root/kubernetes-storage.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localstorage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume1
  labels:
    type: local
spec:
  storageClassName: localstorage  # tie the PV to the class defined above
  capacity:
    storage: 60Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv1"
EOF

# Let's apply it!
kubectl apply -f /root/kubernetes-storage.yaml
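
Check that the class is flagged as default and the volume shows up as Available:

kubectl get storageclass
kubectl get pv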

Perfect, now we have access to Kubernetes. Next, let’s expose the dashboard outside the cluster by editing its service:

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
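
In the editor, change spec.type from ClusterIP to NodePort. If you’d rather not edit interactively, a patch does the same job (a sketch against the same service and namespace):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'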

Let’s list the services in the dashboard’s namespace and note the NodePort in use:

kubectl -n kubernetes-dashboard get services
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   20d
kubernetes-dashboard   NodePort    10.107.194.201   <none>        443:32414/TCP   20d

Let’s check it’s listening.

lsof -i tcp:32414
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-prox 3440 root    7u  IPv6  32584      0t0  TCP *:32414 (LISTEN)

Let’s get the token if needed and other info.

kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token:

Let’s add some storage… Note that pv-volume1 already exists from the YAML above, so kubectl create will complain that it exists; rename it here if you want two fresh volumes.

mkdir -p /mnt/pv{1,2}
kubectl create -f - <<EOF
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume1
spec:
  storageClassName:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume2
spec:
  storageClassName:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv2"
EOF
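
To see a claim actually bind, here’s a minimal PVC sketch (the name test-claim is made up for illustration; storageClassName: "" targets the class-less PVs just created rather than the default class):

kubectl create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF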

If we want to see our storage:

kubectl get pvc

This should be enough to get started… we’ll add more in a new post!
