Project Dandelion

Easy 64-bit Ubuntu 20.04 microk8s on Raspberry Pi 4

Since I first learned Kubernetes, I've wanted my own cluster to understand the life-cycle, learn concepts, and do things my employers wouldn't be thrilled with, like practicing penetration testing. But running clusters of x86 hosts is expensive in power and kit, especially in the cloud. I want to build an appliance that houses the core highly available nodes together, then add worker nodes of various sorts over a gigabit network. The appliance is a fully functional Kubernetes 1.19 cluster on arm64 and consumes ~100 watts. Additional nodes can be x86.

Prince or pauper, beggar man or thing
Play the game with every flower you bring

Dandelion don't tell no lies
Dandelion will make you wise
Tell me if she laughs or cries
Blow away dandelion

( Rolling Stones, 1967 )

Ever since I first learned Kubernetes, I've wanted my own cluster that didn't bleed my hard-earned income into the cloud. My butt puckers a little every time I deploy in the cloud, wondering what that's going to cost. The important things to explore and manage are costly there: logs and observability. So runtime is the valuable quality for the Kubernetes student; containers are ephemeral, but your cluster shouldn't be. Churning a cluster configuration (tear-down and rebuild) is only good training for startup debugging. It doesn't teach you the lifecycle, or the value of knowing where the bodies are buried (logs and metrics).


Kubernetes is a rather expansive thing; it becomes more of a practice than a unit of knowledge due to the velocity of development. Project Dandelion is my go at learning Kubernetes wisdom in a contained environment. The actual point is a software project that needs a hardware bootstrap to run. This project will cover my approach to creating a core private-cloud appliance with Microk8s HA. I use Pi 4/8G nodes and NVMe storage over USB, as well as flash SATA and some spinning rust in the standby nodes.

My primary sources for instructions are the official docs:

  Microk8s Documentation

  Ubuntu Microk8s Tutorial

One thing they don't go deep on is the ARM environment. And the Pi is a toy platform, they say. So was the 8086, at one time. Microk8s is intended as a way to build edge server systems that integrate more naturally with the upstream services of the cloud. This is a project for folks who bore easily with IFTTT, or who want to attempt more complex integrations using container applications.

I've done some of the steps differently. This project documents those differences and, I hope, inspires others to explore Kubernetes in a hands-on way that is actually pretty rare. As a technology, Kubernetes is quite abstract, and it is usually deployed on a popular abstraction of hardware (the cloud). This project allows a certain satisfying physical manifestation and a safe place to play: inside your firewall and outside your wallet.

Prince or pauper, beggar man or thing
Play the game with every flower you bring


  • EFK stack install

    CarbonCycle, a day ago

    Kyle      - Pi 4b/8G, NVMe

    Stan      - Pi 4b/8G, NVMe

    Cartman   - Pi 4b/8G, NVMe

    Kenny     - Pi 4b/8G, 500G 7200rpm spinning rust

    Butters   - Pi 4b/8G, 500G Crucial flash SATA

    And a console node for serial ports:

    K8console - Pi 3B+, MicroSD

    Now there is a need for log aggregation. This design will use Fluentd on the OS and Fluent-bit on the microk8s cluster. The log destination will be an Elasticsearch node with Kibana installed.

    Installed another Pi 4/4G with the new Ubuntu 20.10 64-bit release (no desktop). This release does not require the MicroSD card to boot, so I've used a SATA flash disk here.

    This new node runs Elasticsearch and Kibana:

    Ike       - Pi 4b/4G, 240G Crucial flash SATA
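
    As a sketch of the OS-side half of this design: a minimal Fluentd (td-agent) config that tails syslog and forwards to the new Elasticsearch node might look like the following. The host address, tag, and file paths here are assumptions, and fluent-plugin-elasticsearch must be present (td-agent 4 bundles it).

    # /etc/td-agent/td-agent.conf - minimal sketch, not the final design
    <source>
      @type tail
      path /var/log/syslog
      pos_file /var/log/td-agent/syslog.pos
      tag node.syslog
      <parse>
        @type syslog
      </parse>
    </source>

    <match node.**>
      @type elasticsearch        # requires fluent-plugin-elasticsearch
      host 192.168.1.50          # Ike (address assumed)
      port 9200
      logstash_format true       # daily indices that Kibana picks up easily
    </match>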

  • What to do?

    CarbonCycle, 4 days ago

    One o'clock, two o'clock, three o'clock, four o'clock chimes

    Dandelions don't care about the time

    Most blogs that have you deploy microk8s end with just getting a dashboard up. There seems to be a tendency to avoid configuring ingress. And then we wonder why folks end up in the weeds.

    Microk8s installs with most components disabled. Once you enable a component on a node in a formed cluster, it becomes enabled on all the other nodes of the cluster.

    So let's not turn it all on at once.

    The first component to enable should be DNS. HA is on by default in 1.19 and activates once three nodes are present in the cluster; additional nodes become standbys to the master quorum of three. Master selection is automatic. If you have network connectivity issues (latency?), you may see nodes move from master to standby; I saw this in my cluster when the original lead node was relegated to standby duty. Surely the reason is in the logs, but I haven't set up logging yet to find it. I did install fluentd on all the nodes (easy), but I don't have an ELK stack yet. I'm thinking this might be an awesome use for a free LogDNA account; I've used their product before and liked what I could find with it, using rather naive search terms. So look for that as a probable instruction entry.
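
    To spot that kind of role movement, the voters and standbys can be read straight out of the status output; a quick check along these lines:

    # Which nodes currently hold the datastore quorum? (run on any node)
    microk8s status | grep -A 1 'datastore master nodes'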


    It is important to stabilize clusters EARLY in their deployment; it becomes harder to debug issues when layers of malfunction are present. I like to see 24-48 hours of runtime after a change before I have confidence that my applied idea wasn't a poor life choice.

  • All nodes up

    CarbonCycle, 5 days ago

    All 5 nodes are up now:

    ubuntu@kenny:~$ microk8s status
    microk8s is running
    high-availability: yes
      datastore master nodes: 192.168.1.40:19001 192.168.1.41:19001 192.168.1.43:19001
      datastore standby nodes: 192.168.1.30:19001 192.168.1.29:19001
    addons:
      enabled:
        dashboard            # The Kubernetes dashboard
        dns                  # CoreDNS
        ha-cluster           # Configure high availability on the current node
        ingress              # Ingress controller for external access
        metrics-server       # K8s Metrics Server for API access to service metrics
        storage              # Storage class; allocates storage from host directory
      disabled:
        helm                 # Helm 2 - the package manager for Kubernetes
        helm3                # Helm 3 - Kubernetes package manager
        host-access          # Allow Pods connecting to Host services smoothly
        metallb              # Loadbalancer for your Kubernetes cluster
        rbac                 # Role-Based Access Control for authorisation
        registry             # Private image registry exposed on localhost:32000
    ubuntu@kenny:~$ microk8s kubectl get nodes
    NAME      STATUS   ROLES    AGE     VERSION
    stan      Ready    <none>   4d10h   v1.19.0-34+ff9309c628eb68
    kyle      Ready    <none>   4d9h    v1.19.0-34+ff9309c628eb68
    cartman   Ready    <none>   4d9h    v1.19.0-34+ff9309c628eb68
    kenny     Ready    <none>   11h     v1.19.0-34+ff9309c628eb68
    butters   Ready    <none>   9h      v1.19.0-34+ff9309c628eb68

    One thing that seems odd is the AGE. This was the first boot after these nodes were powered down for some hardware assembly, and 9 hours is closer to the actual uptime of this cycle. The catch: AGE is measured from when the Node object was created (when the node joined the cluster), not from the last boot.
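
    A quick way to confirm this, comparing the API's idea of the node's age with the host's actual uptime (jsonpath and node name as used above):

    # AGE = time since the Node object was created, not time since boot
    microk8s kubectl get node kenny -o jsonpath='{.metadata.creationTimestamp}'
    uptime -p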

  • Putting the kit in an enclosure

    CarbonCycle, 10/19/2020 at 00:18

    I'm putting the core nodes and the console server in a stand-alone box.   Additional nodes will have separate enclosures and power.

  • Seeking Prometheus

    CarbonCycle, 10/17/2020 at 07:03

    Buglists are useful.

    For instance, next on my list was observability. That means deploying Prometheus. Then I saw this:

    Nothing to do for Prometheus #1576

    Fortunately, I'm late enough to the conversation that it already yields a workaround:

    As a workaround on 1.19 you can do as suggested in kubernetes/kube-state-metrics#1190 (comment) and modify https://github.com/prometheus-operator/kube-prometheus/blob/980e95de011319b88a3b9c0787a81dcdf338a898/manifests/kube-state-metrics-deployment.yaml#L26 from

    image: quay.io/coreos/kube-state-metrics:v1.9.7
    

    to

    image: gcr.io/k8s-staging-kube-state-metrics/kube-state-metrics-arm64:v1.9.7
    

    and everything comes up as expected.

    Yay! (I think. One finds out by giving it a go!)
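
    For anyone applying that workaround to a local checkout of kube-prometheus, a one-line edit does it; this is a sketch and assumes you are sitting in the repo root:

    # Swap the x86-only image for the arm64 staging build, then re-apply
    sed -i 's|quay.io/coreos/kube-state-metrics:v1.9.7|gcr.io/k8s-staging-kube-state-metrics/kube-state-metrics-arm64:v1.9.7|' \
        manifests/kube-state-metrics-deployment.yaml
    microk8s kubectl apply -f manifests/kube-state-metrics-deployment.yaml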

  • The value-add in implementing serial consoles

    CarbonCycle, 10/17/2020 at 05:47

    Usually I don't bother, or I drag whatever kit I'm wrangling over to a display and keyboard. Turns out those micro-HDMI connectors are not awesome and vary considerably in physical dimension until persuaded to meet the male connector spec. And now there are four of them to contend with, without disturbing the USB ports.

    I decided my sanity requires more. Cycling power is not calming to my personal being. I'm fine with fixing broken, as long as I know just how it broke. The console is quite special in Unix for this specific purpose. Just ask the kernel.

    To make this work in a more "appliance" way, I am using a Pi 3B to perform a very useful function, separate from the running cluster. It will provide the usual "lights out" function that defines a proper appliance.

    1. Serial console - an FT4232H quad HS USB-UART/FIFO provides simple connectivity to four node consoles over a single USB port. I'm using putty to create console logging files and to keep debug access when the network is unusable (see the sketch after this list).

    2. Fan control - the Pi 4B nodes need moving air and temperature monitoring. Not a lot of air, but some; it drives the temps down an easy 20 degrees C.

    3. Physical sensor data collection - I'm planning on using Mycodo on this node for the PID cooling functions: fan control and enclosure temperature.


    This is the way to go with multiple-node deployments: you get a log of the boot process, and all the messages from that eventual calamity will be captured in the putty logs. The other useful thing is catching the cloud-init config. The root certs are presented on the console, if that matters to you; with a putty console, I can log that output.
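
    For ad-hoc access from the console server itself (rather than putty logging), the FT4232H enumerates as four ordinary serial devices; a sketch, with the device numbering assumed:

    # The quad UART shows up as /dev/ttyUSB0..3, one per node
    ls -l /dev/ttyUSB*
    # Attach to the first node's console at the Pi's default baud rate
    screen /dev/ttyUSB0 115200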

  • MSD devices

    CarbonCycle, 10/17/2020 at 02:12

    I've been building Raspbian nodes with external storage for a while, and I've found the recent changes to the boot EEPROM work just fine for booting from USB. With Raspbian. With Ubuntu, not so much; at this point I consider it a bit of a rabbit hole. So I'm fine with booting initially from the MicroSD card. The card isn't mounted after boot, and that is perhaps a good thing: I can debug a boot problem separately from the state of the MSD device.

    The downside is that when a new kernel lands, it is not installed to the MicroSD boot partition automatically, so I'm a few revisions behind already. Resolving this with automation is currently deferred; once I figure out a reliable manual method, I'll document it here after some testing.

    Another note on the hardware used: the PCIe-to-USB bridge boards I'm using are generic adapters with the RTL9210B-CG chip. The good thing: no heat issues. At all. The problem? The device has a firmware setting that powers the disk down (IOERROR!) after 10 minutes of no access. The upside? Microk8s hammers that disk regularly, so as long as microk8s is running properly, the disk stays on.

    I disable all that power stuff (to no effect, apparently) with:

    Disable sleep:

    sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

    Re-enable sleep:

    sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target
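
    To see whether the bridge has actually powered the disk down despite those settings, hdparm can report the drive's power state, assuming the bridge passes the command through (device name assumed):

    # Report the current power mode of the USB-attached disk
    sudo hdparm -C /dev/sda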

  • Core HA nodes are up.

    CarbonCycle, 10/17/2020 at 01:38

    Following the instructions here:  Adding nodes


    ubuntu@Node-1:~$ microk8s kubectl get all --all-namespaces -o=wide
    NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
    ingress       pod/nginx-ingress-microk8s-controller-4tb5s   1/1     Running   1          6h33m   192.168.1.40   node-3   <none>           <none>
    kube-system   pod/calico-node-ggdrf                         1/1     Running   1          3d4h    192.168.1.40   node-3   <none>           <none>
    ingress       pod/nginx-ingress-microk8s-controller-kj8lt   1/1     Running   2          6h33m   192.168.1.41   node-2   <none>           <none>
    kube-system   pod/hostpath-provisioner-976f6d665-j7sgl      1/1     Running   1          3d19h   10.1.247.2     node-2   <none>           <none>
    kube-system   pod/calico-node-gh6jz                         1/1     Running   1          3d20h   192.168.1.41   node-2   <none>           <none>
    ingress       pod/nginx-ingress-microk8s-controller-hkxqd   1/1     Running   1          6h33m   192.168.1.30   node-1   <none>           <none>
    kube-system   pod/calico-node-449m7                         1/1     Running   1          3d20h   192.168.1.30   node-1   <none>           <none>
    kube-system   pod/calico-kube-controllers-847c8c99d-9nx7p   1/1     Running   1          3d20h   10.1.84.132    node-1   <none>           <none>
    kube-system   pod/coredns-86f78bb79c-ngbnx                  1/1     Running   1          3d20h   10.1.84.131    node-1   <none>           <none>
    
    NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
    default       service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP                  3d20h   <none>
    kube-system   service/kube-dns     ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   3d20h   k8s-app=kube-dns
    
    NAMESPACE     NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS               IMAGES                                                                  SELECTOR
    ingress       daemonset.apps/nginx-ingress-microk8s-controller   3         3         3       3            3           <none>                   6h33m   nginx-ingress-microk8s   quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0   name=nginx-ingress-microk8s
    kube-system   daemonset.apps/calico-node                         3         3         3       3            3           kubernetes.io/os=linux   3d20h   calico-node              calico/node:v3.13.2                                                     k8s-app=calico-node
    
    NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                IMAGES                                    SELECTOR
    kube-system   deployment.apps/hostpath-provisioner      1/1     1            1           3d19h   hostpath-provisioner      cdkbot/hostpath-provisioner-arm64:1.0.0   k8s-app=hostpath-provisioner
    kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           3d20h   calico-kube-controllers   calico/kube-controllers:v3.13.2           k8s-app=calico-kube-controllers
    kube-system   deployment.apps/coredns                   1/1     1            1           3d20h   coredns                   coredns/coredns:1.6.6                     k8s-app=kube-dns
    
    NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE     CONTAINERS                IMAGES                                    SELECTOR
    kube-system   replicaset.apps/hostpath-provisioner-976f6d665      1         1         1       3d19h   hostpath-provisioner      cdkbot/hostpath-provisioner-arm64:1.0.0   k8s-app=hostpath-provisioner,pod-template-hash=976f6d665
    kube-system   replicaset.apps/calico-kube-controllers-847c8c99d   1         1         1       3d20h   calico-kube-controllers   calico/kube-controllers:v3.13.2           k8s-app=calico-kube-controllers,pod-template-hash=847c8c99d
    kube-system   replicaset.apps/coredns-86f78bb79c                  1         1         1       3d20h   coredns                   coredns/coredns:1.6.6                     k8s-app=kube-dns,pod-template-hash=86f78bb79c
    ubuntu@Node-1:~

    I've enabled DNS, ingress and the (local) storage provider.
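
    With ingress enabled, a minimal Ingress object can be applied right away; a sketch against a hypothetical Service named "web" (Ingress is GA as networking.k8s.io/v1 in 1.19):

    microk8s kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # hypothetical backend Service
                port:
                  number: 80
    EOF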


  • 1
    Prepare media

    I found the Raspberry Pi Imager utility to work best. The first revision of the project boots the initial kernel from the MicroSD card and then pivots to the MSD device.

    1. Use the Ubuntu 20.04 64-bit version for both the MicroSD card and the MSD device.

    2. After writing the MicroSD card - mount the /boot partition of that card and modify the /boot/cmdline.txt file to:

    net.ifnames=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1

    Note that I removed the USB-OTG statement. Not gonna need that.

    3. It is helpful to remove or re-label the second (root) partition on the MicroSD card, to prevent booting root from there. You really want to use the MSD device for all the things.
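
    One way to do the re-label, assuming the MicroSD enumerates as mmcblk0 and both images carry the default "writable" label:

    # Rename the MicroSD root label so root=LABEL=writable resolves
    # to the MSD device instead (device name assumed)
    sudo e2label /dev/mmcblk0p2 sdroot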

  • 2
    Boot the first node.

    Building a cluster one node at a time in the initial stages is a good thing. Once you have a quorum in the control plane, and HA, adding nodes is less risky.

    Start by stabilizing the seed node. Run the installation:

    sudo snap install microk8s --classic --channel=1.19/stable
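
    It's worth letting the seed node settle before joining anything; microk8s can block until the local node reports ready:

    microk8s status --wait-ready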


    Then follow https://microk8s.io/docs/clustering

    microk8s add-node
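
    That command prints a join command containing a one-time token; run it on the node being added. The address and token below are illustrative:

    # On the joining node, run the command that add-node printed, e.g.
    microk8s join 192.168.1.40:25000/<token>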

    My nodes have been up for a while.

    ubuntu@Stan:~$ microk8s kubectl get nodes
    NAME      STATUS   ROLES    AGE     VERSION
    stan      Ready    <none>   8d      v1.19.0-34+ff9309c628eb68
    kenny     Ready    <none>   5d      v1.19.0-34+ff9309c628eb68
    kyle      Ready    <none>   8d      v1.19.0-34+ff9309c628eb68
    cartman   Ready    <none>   8d      v1.19.0-34+ff9309c628eb68
    butters   Ready    <none>   4d21h   v1.19.0-34+ff9309c628eb68
    ubuntu@Stan:~$ 
     
  • 3
    Enable DNS and add some tools

    Enable DNS.  Note that if you have all three nodes up, they will all have DNS enabled.

    microk8s enable dns

    The first three nodes are the minimal core requirement for HA: they form a voting quorum, so a majority of the datastore nodes can always agree on the cluster state.

    You may want to monitor CPU temp.  

    sudo apt install lm-sensors

    The command to get the CPU temp is "sensors":

    ubuntu@ike:~$ sensors
    cpu_thermal-virtual-0
    Adapter: Virtual device
    temp1:        +54.0°C  
    
    rpi_volt-isa-0000
    Adapter: ISA adapter
    in0:              N/A  
    
    ubuntu@ike:~$

    A great tool to use with your cluster is K9s. Install it on at least one of your cluster nodes; I install it on all of them.

    wget https://github.com/derailed/k9s/releases/download/v0.22.1/k9s_Linux_arm64.tar.gz

    The interface is helpful to those who are new to Kubernetes.
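
    To finish the install, unpack the binary and give it a kubeconfig to talk to; a sketch, with the install path assumed (microk8s config exports the cluster's kubeconfig):

    # Unpack the arm64 binary and put it on the PATH
    tar xzf k9s_Linux_arm64.tar.gz k9s
    sudo mv k9s /usr/local/bin/
    # Point k9s at the microk8s cluster, then launch it
    mkdir -p ~/.kube
    microk8s config > ~/.kube/config
    k9s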

