Raspberry Pi Ceph Cluster

A Raspberry Pi Ceph Cluster using 2TB USB drives.

I needed an easily expandable storage solution for warehousing my ever-growing hoard of data. I decided to go with Ceph since it's open source and I had a little experience with it from work. The most important benefit is that I can continuously expand the storage capacity as needed, simply by adding more nodes and drives.
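
For a sense of what that growth looks like from the Ceph side, the cluster-wide capacity and health come from a couple of stock commands (shown here as a generic sketch, not output captured from this cluster):

    # overall health, OSD count, and any ongoing recovery/rebalance activity
    ceph status

    # raw and per-pool usage; new drives show up here as soon as their OSDs join
    ceph df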

Current Hardware:

  • 1x RPi 4 w/ 2GB RAM
    • management machine
    • influxdb, prometheus, apcupsd, apt-cacher
  • 3x RPi 4 w/ 4GB RAM
    • 1x ceph mon/mgr/mds per RPi
  • 10x RPi 4 w/ 8GB RAM
    • 2x ceph osds per RPi
    • 2x Seagate 2TB USB 3.0 HDD per RPi

Current Total Raw Capacity: 36 TiB
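
(For reference, the arithmetic behind that figure: 20 drives × 2 TB = 40 TB, and 40 × 10¹² bytes ÷ 2⁴⁰ bytes per TiB ≈ 36.4 TiB, which Ceph reports as 36 TiB.)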

The RPi's are all housed in a nine drawer cabinet with rear exhaust fans.  Each drawer has an independent 5V 10A power supply.  There is a 48-port network switch in the rear of the cabinet to provide the necessary network fabric.

The HDDs are double-stacked five wide to fit 10 HDDs in each drawer along with five RPi 4's.  A 2" x 7" x 1/8" aluminum bar is sandwiched between the drives for heat dissipation.  Each drawer has a custom 5-port USB power fanout board to power the RPi's.  The RPi's have the USB PMIC bypassed with a jumper wire to power the HDDs since the 1.2A current limit is insufficient to spin up both drives.

  • 1 × Raspberry Pi 4 w/ 2GB RAM
  • 3 × Raspberry Pi 4 w/ 4GB RAM
  • 10 × Raspberry Pi 4 w/ 8GB RAM
  • 14 × Samsung PRO Endurance 64GB U1 MicroSDXC card with adapter (MB-MJ64GA/AM)
  • 14 × 1 ft braided USB-C fast-charge cable


  • Ceph PG Autoscaler

    Robert Rouquette • 2 days ago

    It looks like the PG autoscaler kicked in last night and made the following changes:

    • fs_ec_5_2: 64 pgs -> 256 pgs

    The autobalancer should kick in later tonight or tomorrow to pull the OSDs back into alignment.
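
    For anyone following along, the autoscaler's decisions are easy to inspect; roughly (a generic sketch, not a capture from this cluster):

      # each pool's current PG count, the autoscaler's target, and its mode (on/warn/off)
      ceph osd pool autoscale-status

      # the autoscaler is controlled per pool, e.g. for the cephfs EC data pool
      ceph osd pool set fs_ec_5_2 pg_autoscale_mode on

      # the balancer module's mode and whether it's actively generating plans
      ceph balancer status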

  • Rebalance Complete

    Robert Rouquette • 6 days ago

    The cluster rebalance is finally complete. The final distribution also appears to be tighter than before the rebalance.
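
    ("Tighter" is judged from the per-OSD utilization spread; the quick way to check it is shown below as a sketch rather than actual output from this cluster.)

      # per-OSD utilization; the summary includes MIN/MAX VAR and STDDEV
      ceph osd df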

  • Rebalance Progress

    Robert Rouquette • 10/13/2020 at 21:01

    The rebalance is still progressing.  It's going slower than it otherwise would because I also changed the failure domain on my cephfs EC pool from OSD to HOST, which increased the number of PGs that needed to be remapped.  Aside from that, the rate of progress appears fairly typical.  The two gaps in the plot are power outages caused by Hurricane Delta.
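
    For anyone wanting to make the same change, the usual approach (as I understand it; the profile and rule names below are made up for illustration) is to give the pool a new CRUSH rule rather than editing it in place:

      # a profile with the same k/m as the pool (5+2) but a host-level failure domain
      ceph osd erasure-code-profile set ec_5_2_host k=5 m=2 crush-failure-domain=host

      # build a CRUSH rule from that profile and point the existing pool at it
      ceph osd crush rule create-erasure fs_ec_5_2_host ec_5_2_host
      ceph osd pool set fs_ec_5_2 crush_rule fs_ec_5_2_host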

  • Added 8 more HDDs

    Robert Rouquette • 10/07/2020 at 22:51

    I added 4 more RPi 4's and eight more 2TB drives to bring the overall disk count to 20.  The cluster rebalance looks like it's going to take a few days, but at least the I/O performance isn't too severely impacted.
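
    Each new drive goes in roughly like this (a sketch of the standard ceph-volume route; it assumes the node already has ceph.conf and the bootstrap-osd keyring, and the device path is just an example):

      # on the new RPi, turn the raw USB drive into a BlueStore OSD
      sudo ceph-volume lvm create --data /dev/sda

      # back on a mon, confirm the new OSDs joined and watch the backfill
      ceph osd tree
      ceph -s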

  • Grafana Dashboard

    Robert Rouquette • 09/29/2020 at 20:52

    I added a dashboard to my grafana server to show the telegraf data from the ceph cluster.  You can see the disk I/O and CPU load from the cluster rebalancing from 10 OSDs to 12 OSDs.
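
    The per-node collection is just telegraf's stock ceph input feeding influxdb; the config amounts to something like the sketch below (option names are as I recall them from the inputs.ceph plugin docs, so double-check them against your telegraf version):

      # /etc/telegraf/telegraf.d/ceph.conf -- restart telegraf after adding it
      [[inputs.ceph]]
        ## per-daemon stats read from the admin sockets in /var/run/ceph
        gather_admin_socket_stats = true
        ## cluster-wide stats (status, df, pool usage); enable on one node to avoid duplicates
        gather_cluster_stats = true
        ceph_user = "client.admin"
        ceph_config = "/etc/ceph/ceph.conf"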

