Project Dandelion

Easy 64bit Ubuntu 20.04 microk8s on Raspberry Pi 4


Dandelion don't tell no lies
Dandelion will make you wise
Tell me if she laughs or cries
Blow away dandelion

( Rolling Stones, 1967 )

  Ever since I first learned Kubernetes, I wanted my own cluster - one that didn't bleed my hard-earned income into the cloud.  My butt puckers a little every time I deploy in the cloud, wondering what that's gonna cost.  The important things to explore and manage - logs and observability - are costly there. So runtime is the valuable quality to the Kubernetes student: containers are ephemeral, but your cluster shouldn't be.  Churning a cluster configuration ( tear-down and rebuild ) is only good training for startup debugging. It doesn't teach you the lifecycle, or the value of knowing where the bodies are buried ( logs and metrics ).

  Kubernetes is a rather expansive thing - it becomes more of a practice than a unit of knowledge due to the velocity of development.  Project Dandelion is my go at learning Kubernetes wisdom in a contained environment.  The actual point is a software project - one that needs a hardware bootstrap to run.  This project covers my approach to building a core private-cloud appliance with Microk8s HA.  I use Pi 4/8G nodes with NVMe storage over USB, as well as SATA flash and some spinning rust in the standby nodes.

  My primary source for instructions comes from the official site.

  Microk8s Documentation

  Ubuntu Microk8s Tutorial

One thing they don't go deep on is the ARM environment.  And the Pi is a toy platform, they say.  So was the 8086, at one time.  Microk8s is intended as a way to build edge server systems that integrate more naturally with the upstream services of the cloud.  This is a project for folks who bore easily with IFTTT, or who want to attempt more complex integrations using container applications.

I've done some of the steps differently. This project documents those differences, and I hope it inspires others to explore Kubernetes in a hands-on way that is actually pretty rare.  As a technology, Kubernetes is quite abstract and is usually deployed on a popular abstraction for hardware ( the cloud ).  This project allows a certain satisfying physical manifestation and a safe place to play - inside your firewall and outside your wallet.

Prince or pauper, beggar man or thing
Play the game with every flower you bring


  • What to do?

    CarbonCycle - 4 days ago - 0 comments

    One o'clock, two o'clock, three o'clock, four o'clock chimes

    Dandelions don't care about the time

    Most of the blogs that walk you through deploying microk8s treat getting a dashboard up as the final step.  There seems to be a tendency to avoid configuring ingress.  And then we wonder why folks end up in the weeds.
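Since ingress so often gets skipped: once `microk8s enable ingress` is on, a minimal Ingress object is all it takes to route traffic into a service. A sketch - the service name `hello` and the file path are hypothetical, substitute your own:

```shell
#!/bin/sh
# Write a minimal Ingress manifest; 'hello' is a hypothetical Service
# listening on port 80 - replace it with one of your own.
cat <<'EOF' > /tmp/hello-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
EOF
# Then, on the cluster:  microk8s kubectl apply -f /tmp/hello-ingress.yaml
```

The `networking.k8s.io/v1` Ingress API is GA as of Kubernetes 1.19, which is the channel used in this project.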

    Microk8s installs with most components disabled.  Once you configure a component on a node in a formed cluster - it becomes enabled on all the other nodes of the cluster.

    So let's not turn it all on at once.

    The first component enabled should be DNS.  HA is enabled by default in 1.19 and activates once three nodes are present in the cluster. Additional nodes are backups to the master quorum of three.  Master selection is automatic.  If you have network connectivity issues ( latency? ) you may see nodes move from master to backup.  I saw this in my cluster when the original lead node was demoted to backup duty.  Surely the reason is in the logs, but I haven't set up logging yet to find it.  I did install fluentd on all the nodes ( easy ) but I don't have an ELK stack, yet.  I'm thinking this might be an awesome use for a free LogDNA account.  I've used their product before and liked what I could find with it, using rather naive search terms.  So look for that as a probable instruction entry.
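The HA state and datastore roles can be read straight out of `microk8s status`. A sketch of pulling those fields - the sample text (node addresses included) is invented here to match the shape of the real output; on a live cluster you would pipe `microk8s status` in instead:

```shell
#!/bin/sh
# Parse high-availability state and count the voting masters from a
# microk8s status dump. The sample below is hypothetical, shaped like
# the real 1.19 output; addresses are invented.
status_sample='microk8s is running
high-availability: yes
  datastore master nodes: 10.0.0.11:19001 10.0.0.12:19001 10.0.0.13:19001
  datastore standby nodes: 10.0.0.14:19001'

# Field-split on ": " so the addr:port tokens stay intact
ha=$(printf '%s\n' "$status_sample" | awk -F': ' '/^high-availability:/ {print $2}')
masters=$(printf '%s\n' "$status_sample" | awk -F': ' '/datastore master nodes:/ {print $2}' | wc -w)

echo "HA enabled: $ha"          # prints: HA enabled: yes
echo "voting masters: $masters" # prints: voting masters: 3
```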

    It is important to stabilize clusters EARLY in their deployment.  It becomes harder to debug issues when layers of malfunction are present.   I like to see 24-48 hours of runtime after a change to have confidence that my applied idea wasn't a poor life choice.

  • All nodes up

    CarbonCycle - 4 days ago - 0 comments

    All 5 nodes are up now:

    ubuntu@kenny:~$ microk8s status
    microk8s is running
    high-availability: yes
      datastore master nodes:
      datastore standby nodes:
        dashboard            # The Kubernetes dashboard
        dns                  # CoreDNS
        ha-cluster           # Configure high availability on the current node
        ingress              # Ingress controller for external access
        metrics-server       # K8s Metrics Server for API access to service metrics
        storage              # Storage class; allocates storage from host directory
        helm                 # Helm 2 - the package manager for Kubernetes
        helm3                # Helm 3 - Kubernetes package manager
        host-access          # Allow Pods connecting to Host services smoothly
        metallb              # Loadbalancer for your Kubernetes cluster
        rbac                 # Role-Based Access Control for authorisation
        registry             # Private image registry exposed on localhost:32000
    ubuntu@kenny:~$ microk8s kubectl get nodes
    stan      Ready    <none>   4d10h   v1.19.0-34+ff9309c628eb68
    kyle      Ready    <none>   4d9h    v1.19.0-34+ff9309c628eb68
    cartman   Ready    <none>   4d9h    v1.19.0-34+ff9309c628eb68
    kenny     Ready    <none>   11h     v1.19.0-34+ff9309c628eb68
    butters   Ready    <none>   9h      v1.19.0-34+ff9309c628eb68

    One thing that seems odd is the AGE.  It appears to be measured from first registration rather than from this boot, as these nodes have been powered down for some hardware assembly.  9 hours is closer to the actual age of this cycle. 

  • Putting the kit in an enclosure

    CarbonCycle - 7 days ago - 0 comments

    I'm putting the core nodes and the console server in a stand-alone box.   Additional nodes will have separate enclosures and power.

  • Seeking Prometheus

    CarbonCycle - 10/17/2020 at 07:03 - 0 comments

    Buglists are useful.

    For instance - next on my list was observability.  That meant deploying Prometheus. Then I saw this:

    Nothing to do for Prometheus #1576

    Fortunately, I'm late enough to the conversation that it already yields:

    As a workaround on 1.19 you can do as suggested in kubernetes/kube-state-metrics#1190 (comment) and modify from




    and everything comes up as expected.

    Yay!  ( I think. One finds out by giving it a go! )

  • The value-add in implementing serial consoles

    CarbonCycle - 10/17/2020 at 05:47 - 0 comments

    Usually I don't bother, or I drag whatever kit I'm wrangling over to a display and keyboard.  Turns out those micro-HDMI connectors are not awesome, and they vary considerably in physical dimension until persuaded to the male connector spec.  And now there are four of them to contend with, without disturbing the USB ports.

    I decided my sanity requires more.  Cycling power is not calming to my personal being.  I'm fine with fixing broken, as long as I know just how that works.  The console is quite special in Unix for this specific purpose.  Just ask the kernel.

    To make this work in a more "appliance" way - I am using a Pi3B to perform a very useful function - separate from the running cluster.  This will provide the usual "lights out" function that defines a proper appliance.

    1. Serial console - FT4232H Quad HS USB-UART/FIFO used to provide simple connectivity to four nodes over a single USB port.  I'm using putty to create console logging files and have debug access when the network is unusable.

    2. Fan control - The Pi 4B nodes need moving air and temperature monitoring. Not a lot of air, but some. It drives the temps down an easy 20 degrees C.

    3. Physical sensor data collection.  I'm planning on using Mycodo on this node for the PID cooling functions - Fan control and enclosure temp.  

    This is the way to go with multiple node deployments - you get a log of the boot process and all the messages for that eventual calamity will be captured in putty logs.   The other useful thing is catching the cloud-init config.   The root certs are presented on the console - if that matters to you.  With a putty console, I can log that output.
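Putty does the job; for reference, the same always-on console logging can be sketched with GNU screen, assuming the FT4232H enumerates as /dev/ttyUSB0 through /dev/ttyUSB3 ( device names vary, check `dmesg` ):

```shell
#!/bin/sh
# Start one detached, logging console session per FT4232H UART.
# Assumptions: the quad adapter shows up as ttyUSB0-3, consoles run
# at 115200, and GNU screen is installed. `-L` writes everything the
# node prints to screenlog.N in the current directory.
started=0
for port in /dev/ttyUSB0 /dev/ttyUSB1 /dev/ttyUSB2 /dev/ttyUSB3; do
  if [ -c "$port" ] && command -v screen >/dev/null 2>&1; then
    screen -d -m -L "$port" 115200
    started=$((started + 1))
  else
    echo "no console at $port"
  fi
done
echo "consoles started: $started"
```

Reattach to any of them later with `screen -r` to get interactive access when the network is down.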

  • MSD devices

    CarbonCycle - 10/17/2020 at 02:12 - 0 comments

    I've been building Raspbian nodes with external storage for a while.  I've found the recent changes to the boot EEPROM work just fine for booting from USB - with Raspbian.  With Ubuntu, not so much. At this point I consider it a bit of a rabbit hole.  So I'm fine with booting initially from the MicroSD card.  It doesn't get mounted - and that is perhaps a good thing, as I can still debug a boot problem separately from the state of the MSD device.

    The downside is that after a kernel upgrade, the new kernel is not installed to the boot media automatically.  So I'm a few revisions behind already.  Resolving this with automation is currently deferred.  Once I figure out a reliable manual method, I'll document it here after some testing.
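The manual sync would look something like this: since the Pi boots its kernel from the MicroSD boot partition while apt updates land on the MSD root filesystem, the freshly installed kernel bits get copied back to the SD card's boot partition. The paths and file list are assumptions ( mount points vary; Ubuntu 20.04 on the Pi keeps them under /boot/firmware ):

```shell
#!/bin/sh
# Sketch: copy newly installed kernel files from the running root fs
# back to the SD card's boot partition. Paths are assumptions.
sync_kernel() {
  src="$1"   # e.g. /boot/firmware on the running root filesystem
  dest="$2"  # e.g. the SD card's boot partition, mounted read-write
  for f in vmlinuz initrd.img config.txt cmdline.txt; do
    # Only copy files that actually exist on the source side
    [ -f "$src/$f" ] && cp "$src/$f" "$dest/$f"
  done
}

# Demo against throwaway directories so the sketch is safe to run anywhere
demo_src=$(mktemp -d); demo_dest=$(mktemp -d)
echo kernel > "$demo_src/vmlinuz"
sync_kernel "$demo_src" "$demo_dest"
ls "$demo_dest"   # shows: vmlinuz
```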

    Another note on the hardware used.  The NVMe ( PCIe ) to USB bridge boards I'm using are generic adapters built on the RTL9210B-CG chip.  The good thing: no heat issues. At all.  The problem?  The device has a firmware setting that powers down the disk ( IOERROR! ) after 10 minutes of no access.  The upside?  Microk8s hammers that disk regularly - so as long as Microk8s is running properly, the disk stays on.
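If the firmware setting can't be changed, a crude belt-and-braces keep-alive is to touch the disk more often than the 10-minute idle timer. A sketch, assuming a cron entry on each node - the filenames are hypothetical, and it's written to /tmp here for safety ( on a node the target would be /etc/cron.d/msd-keepalive ):

```shell
#!/bin/sh
# Sketch: generate a cron entry that touches the MSD-backed filesystem
# every 5 minutes, so the bridge never sees 10 idle minutes.
# /tmp target and the touched path are hypothetical.
cat <<'EOF' > /tmp/msd-keepalive
*/5 * * * * root /usr/bin/touch /var/lib/msd-keepalive && /bin/sync
EOF
cat /tmp/msd-keepalive
```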

    I disable all that power stuff ( to no effect, apparently ) with:

    Disable sleep:

    sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

    Re-enable sleep:

    sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target

  • Core HA nodes are up.

    CarbonCycle - 10/17/2020 at 01:38 - 0 comments

    Following the instructions here:  Adding nodes

    ubuntu@Node-1:~$ microk8s kubectl get all --all-namespaces -o=wide
    NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
    ingress       pod/nginx-ingress-microk8s-controller-4tb5s   1/1     Running   1          6h33m   node-3   <none>           <none>
    kube-system   pod/calico-node-ggdrf                         1/1     Running   1          3d4h   node-3   <none>           <none>
    ingress       pod/nginx-ingress-microk8s-controller-kj8lt   1/1     Running   2          6h33m   node-2   <none>           <none>
    kube-system   pod/hostpath-provisioner-976f6d665-j7sgl      1/1     Running   1          3d19h     node-2   <none>           <none>
    kube-system   pod/calico-node-gh6jz                         1/1     Running   1          3d20h   node-2   <none>           <none>
    ingress       pod/nginx-ingress-microk8s-controller-hkxqd   1/1     Running   1          6h33m   node-1   <none>           <none>
    kube-system   pod/calico-node-449m7                         1/1     Running   1          3d20h   node-1   <none>           <none>
    kube-system   pod/calico-kube-controllers-847c8c99d-9nx7p   1/1     Running   1          3d20h    node-1   <none>           <none>
    kube-system   pod/coredns-86f78bb79c-ngbnx                  1/1     Running   1          3d20h    node-1   <none>           <none>
    NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
    default       service/kubernetes   ClusterIP    <none>        443/TCP                  3d20h   <none>
    kube-system   service/kube-dns     ClusterIP   <none>        53/UDP,53/TCP,9153/TCP   3d20h   k8s-app=kube-dns
    NAMESPACE     NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS               IMAGES                                                                  SELECTOR
    ingress       daemonset.apps/nginx-ingress-microk8s-controller   3         3         3       3            3           <none>                   6h33m   nginx-ingress-microk8s   name=nginx-ingress-microk8s
    kube-system   daemonset.apps/calico-node                         3         3         3       3            3    3d20h   calico-node              calico/node:v3.13.2                                                     k8s-app=calico-node
    NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                IMAGES                                    SELECTOR
    kube-system   deployment.apps/hostpath-provisioner      1/1     1            1           3d19h   hostpath-provisioner      cdkbot/hostpath-provisioner-arm64:1.0.0   k8s-app=hostpath-provisioner
    kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           3d20h   calico-kube-controllers   calico/kube-controllers:v3.13.2           k8s-app=calico-kube-controllers
    kube-system   deployment.apps/coredns                   1/1     1            1           3d20h   coredns                   coredns/coredns:1.6.6                     k8s-app=kube-dns
    NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE     CONTAINERS                IMAGES                                    SELECTOR
    kube-system   replicaset.apps/hostpath-provisioner-976f6d665      1         1         1       3d19h   hostpath-provisioner      cdkbot/hostpath-provisioner-arm64:1.0.0   k8s-app=hostpath-provisioner,pod-template-hash=976f6d665
    kube-system   replicaset.apps/calico-kube-controllers-847c8c99d   1         1         1       3d20h   calico-kube-controllers   calico/kube-controllers:v3.13.2           k8s-app=calico-kube-controllers,pod-template-hash=847c8c99d
    kube-system   replicaset.apps/coredns-86f78bb79c                  1         1         1       3d20h   coredns                   coredns/coredns:1.6.6                     k8s-app=kube-dns,pod-template-hash=86f78bb79c

    I've enabled DNS, ingress and the (local) storage provider.
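The addon sequence used so far can be sketched as a script. `run` is a hypothetical helper that only echoes the plan when microk8s is not installed, so the sketch is safe to run anywhere; DNS goes first, per the earlier log:

```shell
#!/bin/sh
# Sketch: enable the core addons one at a time, DNS first.
# 'run' echoes instead of executing when microk8s is absent.
planned=0
run() {
  if command -v microk8s >/dev/null 2>&1; then
    "$@"
  else
    echo "would run: $*"
    planned=$((planned + 1))
  fi
}
run microk8s enable dns       # DNS first - other addons depend on it
run microk8s enable ingress
run microk8s enable storage
```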


  • 1
    Prepare media

    I found the Raspberry Pi Imager utility to work best.  The first revision of the project boots the initial kernel from the MicroSD card and then pivots to the MSD device. 

    1. Use the Ubuntu 20.04 64bit version for both the MicroSD and the MSD device.

    2. After writing the MicroSD card - mount the /boot partition of that card and modify the /boot/cmdline.txt file to:

    net.ifnames=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1

    Note that I removed the USB-OTG statement. Not gonna need that.

    3. It is helpful to remove or re-label the second partition on the MicroSD card - to prevent booting root there.  You really want to use the MSD device for all the things.
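A quick sanity check for the cmdline.txt from step 2 - a sketch that verifies the cgroup flags kubelet needs are all present. The sample string is the one shown above; on a node, point it at the real file instead ( the /boot/firmware mount point is an assumption ):

```shell
#!/bin/sh
# Verify a Pi cmdline contains the cgroup flags microk8s needs.
# The sample is the cmdline from step 2 of these instructions.
cmdline='net.ifnames=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1'

missing=0
for flag in cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1; do
  # Pad with spaces so we match whole tokens only
  case " $cmdline " in
    *" $flag "*) ;;                        # present, nothing to do
    *) echo "missing: $flag"; missing=1 ;;
  esac
done
[ "$missing" -eq 0 ] && echo "cmdline OK"   # prints: cmdline OK
```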

  • 2
    Boot the first node.

    Building a cluster one node at a time in the initial stages is a good thing.  Once you have a quorum in the control plane - and HA - adding nodes is less risky.

    Start with stabilizing the seed node.  Run the installation -

    sudo snap install microk8s --classic --channel=1.19/stable

    Then follow

    microk8s add-node
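`add-node` prints a join command that gets pasted on the node being added. A sketch of extracting it - the sample output below is hypothetical ( address and token invented ), but matches the general shape of what the real command prints:

```shell
#!/bin/sh
# Sketch: pull the join command out of `microk8s add-node` output.
# The sample is hypothetical; on the seed node you would capture
# the real output with:  microk8s add-node
add_node_output='From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.50:25000/abcdef1234567890'

join_cmd=$(printf '%s\n' "$add_node_output" | grep '^microk8s join')
echo "$join_cmd"   # this is what gets run on the joining node
```

Note the token is single-use; run `add-node` again on the seed for each additional node.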

    My nodes have been up for a while.  I just renamed the nodes ( and rebooted after ), and it seems the node data is a bit stale:

    ubuntu@Stan:~$ microk8s kubectl get nodes
    node-1    NotReady   <none>   4d1h   v1.19.0-34+ff9309c628eb68
    node-2    NotReady   <none>   4d     v1.19.0-34+ff9309c628eb68
    node-3    NotReady   <none>   3d9h   v1.19.0-34+ff9309c628eb68
    kyle      Ready      <none>   19m    v1.19.0-34+ff9309c628eb68
    cartman   Ready      <none>   16m    v1.19.0-34+ff9309c628eb68
    stan      Ready      <none>   25m    v1.19.0-34+ff9309c628eb68

    Looks like a bug. Hmm. It may take a while to wade through the buglist to find the issue. 


    Ok, seems those records are sticky here. Gotta manually remove them.

    ubuntu@Stan:~$ microk8s remove-node node-1
    ubuntu@Stan:~$ microk8s remove-node node-2
    Removal failed. Node node-2 is registered with dqlite. Please, run first 'microk8s leave' on the departing node. If the node is not available anymore and will never attempt to join the cluster in the future use the '--force' flag to unregister the node while removing it.
    ubuntu@Stan:~$ microk8s remove-node node-3
    Removal failed. Node node-3 is registered with dqlite. Please, run first 'microk8s leave' on the departing node. If the node is not available anymore and will never attempt to join the cluster in the future use the '--force' flag to unregister the node while removing it.
    ubuntu@Stan:~$ microk8s kubectl get nodes
    node-2    NotReady   <none>   4d2h    v1.19.0-34+ff9309c628eb68
    node-3    NotReady   <none>   3d10h   v1.19.0-34+ff9309c628eb68
    cartman   Ready      <none>   104m    v1.19.0-34+ff9309c628eb68
    stan      Ready      <none>   112m    v1.19.0-34+ff9309c628eb68
    kyle      Ready      <none>   106m    v1.19.0-34+ff9309c628eb68
    ubuntu@Stan:~$ microk8s remove-node node-2 --force
    ubuntu@Stan:~$ microk8s remove-node node-3 --force
    ubuntu@Stan:~$ microk8s kubectl get nodes
    stan      Ready    <none>   114m   v1.19.0-34+ff9309c628eb68
    kyle      Ready    <none>   107m   v1.19.0-34+ff9309c628eb68
    cartman   Ready    <none>   105m   v1.19.0-34+ff9309c628eb68

    Hmm. The node hostname is a bit more sticky than I expected.
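The cleanup above can be scripted: list the NotReady nodes and emit the forced removal for each. The sample listing is the stale output from this step ( echoed rather than executed, so the plan is visible ); on a live cluster, replace it with real `microk8s kubectl get nodes` output:

```shell
#!/bin/sh
# Sketch: find stale NotReady nodes and print the removal commands.
# Sample is the stale listing from this instruction step.
nodes='node-1    NotReady   <none>   4d1h   v1.19.0-34+ff9309c628eb68
node-2    NotReady   <none>   4d     v1.19.0-34+ff9309c628eb68
stan      Ready      <none>   25m    v1.19.0-34+ff9309c628eb68'

# Column 2 of `kubectl get nodes` is STATUS
stale=$(printf '%s\n' "$nodes" | awk '$2 == "NotReady" {print $1}')
for n in $stale; do
  echo "would run: microk8s remove-node $n --force"
done
```

As the error message says, `--force` is only for nodes that will never rejoin; a healthy departing node should run `microk8s leave` itself first.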

  • 3
    Add DNS

    Enable DNS.  Note that if you have all three nodes up, enabling it on one node enables it on all of them.

    microk8s enable dns

    The first three nodes are the core minimal requirement for HA.  Three voting members give the datastore a quorum, so the cluster can lose one node and still agree on a sane state.
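The quorum arithmetic is worth spelling out: with n voting members, a majority ( more than n/2 ) must stay alive, so the cluster tolerates (n - 1) / 2 failures. Three voters therefore survive the loss of exactly one - which is why three is the minimum for HA:

```shell
#!/bin/sh
# Quorum arithmetic: tolerable failures = (n - 1) / 2 for n voters.
for n in 3 5 7; do
  echo "voters=$n tolerates=$(( (n - 1) / 2 ))"
done
# prints:
# voters=3 tolerates=1
# voters=5 tolerates=2
# voters=7 tolerates=3
```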

