
Solar-powered cloud computing

Building a private cloud from scratch using low-power equipment

At my workplace, we do a lot of software and SCADA-type development, which necessitates setting up virtual machines for testing projects. Then there are the needs of production workloads. So a private cloud was needed.
At my home, I have an aging Intel Atom D525-based webserver that has served me well for the past 5½ years but is now starting to run out of puff. It's also a base-load energy consumer, running 24×7. Besides having something that can run VMs, it'd be great to have some redundancy and be semi-independent of the mains grid.
The aim of this project is to come up with a hardware and software stack that can be scaled up to meet the needs of a small or medium business in a cost-effective manner and, as an engineering challenge, be green at the same time.

Cluster specifications

  • Storage and compute nodes (5× Supermicro A1SAi-2750F Mini-ITX, Intel Atom C2750):
    • Storage nodes (3×):
      • RAM: 16GB ECC DDR3
      • HDD: HGST HTS541010A9 1TB
      • Storage software: Ceph 10.2
    • Compute nodes (2×):
      • RAM: 32GB ECC DDR3
      • Virtualisation software: KVM managed through OpenNebula
  • Network fabric: Linksys LGS326-AU, modified to run off 12V.  (I am looking to upgrade this to a Netgear GS748T as I have filled up the Linksys' ports.  I have the relevant parts to do this.)
  • Solar input: 3×120W 12V panels
  • Battery bank: 2×105Ah 12V AGM
  • Monitoring node:
    • Technologic Systems TS-7670v2 revision D
    • CPU: Freescale (now NXP) i.MX286 at 454MHz, ARMv5 (single-core)
    • RAM: 128MB
    • SSD: On-board 2GB eMMC
    • OS: Debian Stretch for now, but I'm compiling a Gentoo/musl port which I'll release in due course.

What is this cluster doing?

Since late June 2016, it has been in production running a number of websites:

…among lots of ad-hoc projects.

It has also been used on a few occasions to run test instances at my workplace: in one case providing about 5 virtual machines to try out Kubernetes, and in another, spinning up a test instance of WideSky because our usual hosting provider (Vultr) was full (specifically, their Sydney data centre).  For that reason, this equipment appears on my tax records.

In March 2018, I officially decommissioned the old Intel Atom D525 server that had been running much of my infrastructure to date, doing a physical-to-virtual migration of the old server onto a VM.  The old box was re-configured to just power on at 9PM so that its cron jobs could back up the real instances, then shut down.  This machine has since been reloaded; it still performs the same function, but the OS is now stripped down to the bare essentials.  (Thank you, Gentoo/musl.)

I may yet convert it to run off 12V with the cluster too, as the PSU fan is making noises; we'll see.

Can it run entirely off solar?

In good weather, yes, provided there's good sunlight during the day.  An extra battery and another panel would help here, and I'm considering doing exactly that.

For now though, it runs on both mains and solar, which has already reduced our power bill.

If doing it again, what would I do differently?

  • The switch: the Linksys can't do LACP with more than 4 LAGs, whereas the Netgear one can do the required number of LAGs.
  • At the time, that Supermicro board was one of the best options available, but buying DDR3 ECC SO-DIMMs is a pain.  There are newer boards now that, aside from having more cores (up to 16!), take full-size DDR4 ECC DIMMs, which are easier to come by.
  • The rack could be a bit taller (not a show stopper though)
  • Getting ATX DC-DC PSUs that can tolerate up to 16V natively.  (I think Mini-Box were out of stock of the other models, hence I took the ones I have and used LDOs to work around the problem.)

battselect.circ

Battery selection circuit (Logisim)

circ - 9.82 kB - 03/22/2017 at 11:07


charger.zip

Revised charger design with PCB layout

Zip Archive - 60.48 kB - 09/30/2016 at 09:32


  • 5 × Supermicro A1SAi-2750F Mini-ITX Intel Atom C2750 motherboard
  • 5 × Mini-Box M350 Mini-ITX Case
  • 5 × Mini-Box PicoPSU 160-XT 12V ATX PSU
  • 5 × Samsung 850EVO 120GB Solid State Drive
  • 5 × Kingston 8GB Unbuffered ECC SO-DIMM Memory


  • Considering storage expansion

    Stuart Longland2 days ago 0 comments

    One problem I face with the cluster as it stands now is that 2.5″ HDDs are actually quite restrictive in terms of size options.

    Right now the whole shebang runs on 1TB 5400RPM Hitachi laptop drives, which so far has been fine, but now that I’ve put my old server on as a VM, that’s chewed up a big chunk of space. I can survive a single drive crash, but not two.

    I can buy 2TB HDDs, WD make some and Scorptec sell them. Seagate make some bigger capacity drives, however I have a policy of not buying Seagate.

    At work we built a Ceph cluster on 3TB SV35 HDDs… 6 of them to be exact. Within 9 months, the drives started failing one-by-one. At first it was just the odd drive being intermittent, then the problem got worse. They all got RMAed, all 6 of them. Since we obviously needed drives to store data on until the RMAed drives returned, we bought identically sized consumer 5400RPM Hitachi drives. Those same drives are running happily in the same cluster today, some 3 years later.

    We also had one SV35 in a 3.5″ external enclosure that formed my workplace’s “disaster recovery” back-up drive. The idea being that if the place was in great peril and it was safe enough to do so, someone could just yank this drive from the rack and run. (If we didn’t, we also had truly off-site back-up NAS boxes.) That wound up failing as well before its time was due. That got replaced with one of the RMAed disks and used until the 3TB no longer sufficed.

    Anyway, enough of that diversion.  Long story short: I don’t trust Seagate disks for 24/7 operation.  I don’t see the other manufacturers (WD, Samsung, Hitachi) making >2TB HDDs in the 2.5″ form factor; they all seem to be going SSD.

    I have a Samsung 850EVO 2TB in the laptop I’m writing this on, bought a couple of years ago now, and so far, it has been reliable. The cluster also uses 120GB 850EVOs as OS drives. There’s now a 4TB version as well.

    The performance would be wonderful and they’d reduce the power consumption of the cluster; however, three 4TB SSDs would cost $2700.  That’s a big investment!

    The other option is to bolt on a 3.5″ HDD somehow.  A DIN-rail mounted case would be ideal for this.  3.5″ high-capacity drives are much more common, use technology that has proven reliable, and are comparatively inexpensive.

    In addition, going to bigger external drives means I can potentially swap out those 2.5″ HDDs for SSDs at a later date.  A WD Purple (5400RPM) 4TB sells for $166.  I have one of these in my desktop at work, and so far its performance there has been fine.  For $3 more I can get one of the WD Red (7200RPM) 4TB drives, which are intended for NAS use.  $265 buys a 6TB Toshiba 7200RPM HDD.  In short, I have options.

    Now, mounting the drives in the rack is a problem. I could just make a shelf to sit the drive enclosures on, or I could buy a second rack and move the servers into that which would free up room for a second DIN rail for the HDDs to mount to. It’d be neat to DIN-rail mount the enclosures beside each Ceph node, but right now, there’s no room to do that.

    I’d also either need to modify or scratch-make a HDD enclosure that can be DIN-rail mounted.

    There’s then the thorny issue of interfacing. There are two options at my disposal: eSATA and USB3. (Thunderbolt and Firewire aren’t supported on these systems and adding a PCIe card would be tricky.)

    The Supermicro motherboards I’m using have 6 SATA ports. If you’re prepared to live with reduced cable lengths, you can use a passive SATA to eSATA adaptor bracket — and this works just fine for my use case since the drives will be quite close. I will have to power down a node and cut a hole in the case to mount the bracket, but this is doable.

    I haven’t tried this out yet, but I should be able to use the same type of adaptor inside the enclosure to connect the eSATA cable to the HDD. Trade-off will be further reduced cable...


  • Adventures in Ceph migration

    Stuart Longland01/28/2019 at 10:03 0 comments

    My cloud computing cluster, like all cloud computing clusters, of course needs a storage back-end.  There were a number of options I could have chosen, but the one I went with in the end was Ceph, and so far it’s run pretty well.

    Lately though, I was starting to get some odd crashes out of ceph-osd.  I was running release 10.2.3, which is quite dated now; it’s one of the earlier Jewel releases.  Adding to the fun, I’m running btrfs as my filesystem on both the OS and the OSD, and I’m running it all on Gentoo.  On top of this, my monitor nodes are my OSDs as well.

    Not exactly a “supported” configuration, never mind the hacks done at hardware level.

    There was also a nagging issue about too many placement groups in the Ceph cluster.  When I first established the cluster, I christened it by dragging a few of my lxc containers off the old server and making them VMs in the cluster.  This was done using libvirt and virt-manager.  These got thrown into a storage pool called transitional-inst, with a VLAN set aside for the VMs to use.  When I threw OpenNebula on, I created another Ceph pool for its images.  The configuration of these led to the “too many placement groups” warning, which until now I had just ignored.

    This weekend was a long weekend, for controversial reasons… and so I thought I’d take a snapshot of all my VMs, download those snapshots to a HDD as raw images, then see if I could fix these issues and migrate to Ceph Luminous (v12.2.10) at the same time.

    Backing up

    I was going to be doing some nasty things to the cluster, so I thought the first thing to do was to back up all images. This was done by using rbd snap create pool/image@date to create a snapshot of an image, then rbd export pool/image@date /path/to/storage/pool-image.img before blowing away the snapshot with rbd snap rm pool/image@date.

    This was done for all images on the Ceph cluster, stashing them on a 4TB hard drive I had bought for the purpose.
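The per-image cycle described above can be wrapped in a small shell loop.  This is just a sketch: the image names and the destination mount point are hypothetical, and `DRY_RUN=echo` prints each `rbd` command instead of running it, so the sequence can be sanity-checked first.

```shell
#!/bin/sh
# Sketch of the snapshot -> export -> cleanup cycle described above.
# Image names and DEST are placeholders; set DRY_RUN="" to run for real.
DRY_RUN=echo
DEST=/mnt/backup                # hypothetical mount point of the 4TB HDD
DATE=$(date +%Y%m%d)

for spec in transitional-inst/webserver transitional-inst/mailserver; do
    pool=${spec%/*}             # part before the slash
    image=${spec#*/}            # part after the slash
    $DRY_RUN rbd snap create "${spec}@${DATE}"
    $DRY_RUN rbd export "${spec}@${DATE}" "${DEST}/${pool}-${image}.img"
    $DRY_RUN rbd snap rm "${spec}@${DATE}"
done
```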

    Getting things ready

    My cluster is actually set up as a distcc cluster, with Apache HTTP server instances sharing out distfiles and binary package repositories, so if I build packages on one node, the others can fetch the binary packages it built.  I started with one node and got it to update all packages except Ceph, making sure everything was up-to-date.

    Then, I ran emerge -B =ceph-10.2.10-r2.  This was the first step in my migration: I’d move to the absolute latest Jewel release available in Gentoo.  Once it was built, I told all three storage nodes to install it (emerge -g =ceph-10.2.10-r2).  This was followed up by a restart of the mon daemons on each node (one at a time), then the mds daemons, and finally the osd daemons.
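The staged rollout above could be scripted along these lines.  The node names and the `ssh`/`rc-service` invocations are assumptions (the text doesn't say how the nodes are reached or how the daemons are restarted on Gentoo/OpenRC); `DRY_RUN=echo` keeps it a dry run.

```shell
#!/bin/sh
# Sketch of the staged Jewel point-release upgrade described above.
# Node names and restart mechanism are assumptions; DRY_RUN=echo prints.
DRY_RUN=echo
PKG="=ceph-10.2.10-r2"

$DRY_RUN emerge -B "$PKG"                    # build the binary package once
for node in storage1 storage2 storage3; do
    $DRY_RUN ssh "$node" emerge -g "$PKG"    # fetch the pre-built package
done

# Restart daemons in order: mons first, then mds, then osds, one node at
# a time so the cluster keeps quorum throughout.
for svc in ceph-mon ceph-mds ceph-osd; do
    for node in storage1 storage2 storage3; do
        $DRY_RUN ssh "$node" rc-service "$svc" restart
    done
done
```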

    Resolving the “too many placement groups” warning

    To resolve this, I first researched the problem.  An Internet search led me to this Stack Overflow post.  In it, it was suggested the problem could be alleviated by making a new pool with the correct settings, copying the images over to it, and blowing away the old one.

    As it happens, I had an easier solution… move the “transitional” images to OpenNebula. I created empty data blocks in OpenNebula for the three images, then used qemu-img convert -p /path/to/image.img rbd:pool/image to upload the images.
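The upload step could look something like the loop below.  The pool name and image names are hypothetical (the text doesn't name the OpenNebula pool), and `DRY_RUN=echo` prints the `qemu-img` commands rather than executing them.

```shell
#!/bin/sh
# Sketch of uploading the exported raw images into the new OpenNebula
# data blocks, as described above.  Names are hypothetical.
DRY_RUN=echo
SRC=/mnt/backup                 # where the raw exports were stashed
POOL=one                        # hypothetical OpenNebula image pool

for image in web1 web2 mail; do
    # -p shows progress; the target is a Ceph RBD volume, not a file
    $DRY_RUN qemu-img convert -p "${SRC}/${image}.img" "rbd:${POOL}/${image}"
done
```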

    It was then a case of creating a virtual machine template to boot them. I put them in a VLAN with the other servers, and when each one booted, edited the configuration with the new TCP/IP settings.

    Once all those were moved across, I blew away the old VMs and the old pool. The warning disappeared, and I was left with a HEALTH_OK message out of Ceph.

    The Luminous moment

    At this point I was ready to try migrating. I had a good read of the instructions beforehand. They seemed simple enough. I prepared as I did before by updating everything on the system except Ceph, then, telling Portage to build a binary package of Ceph itself....


  • Dusty solar panels?

    Stuart Longland12/14/2018 at 02:04 0 comments

    So recently, I had a melt-down with some of the monitor wiring on the cluster… to counteract that, I have some parts on order (RS Components annoyingly seem to have changed their shipping policies, so I suspect I'll get them Monday)… namely some thermocouple extension cable, some small 250mA fast-blow fuses and suitable in-line holders.

    In the meantime, I'm doing without the power controller, just turning the voltage down on the mains charger so the solar controller did most of the charging.

    This isn't terribly reliable… and for a few days now my battery voltage has just sat at a flat 12.9V, which is the "boost" voltage set on the mains charger.

    Last night we had a little rain, and today I see this:

    Something got up and boogied this morning, and it was nothing I did to make that happen.  I'll re-instate that charger, or maybe a control-only version of the #High-power DC-DC power supply which I have the parts for, but haven't yet built.

  • When things get hot

    Stuart Longland11/29/2018 at 22:50 0 comments

    It’s been a while since I posted about this project… I haven’t had time to do many changes, just maintaining the current system as it is keeps me busy.

    One thing I noticed is that I started getting poor performance out of the solar system late last week.  This was about the time that Sydney was getting the dust storms from Broken Hill.


    Last week’s battery voltages (40s moving average)

    Now, being in Brisbane, I didn’t think that this was the cause, and since the days were largely clear, I was a bit miffed as to why I was getting such poor performance.  When I checked on the solar system itself on Sunday, I was getting mixed messages looking at the LEDs on the Redarc BCDC-1225.

    I thought it was actually playing up, so I tried switching over to the other solar controller to see if that was better (even if I know it’s crap), but same thing.  Neither was charging, yet I had a full 20V available at the solar terminals.  It was a clear day, I couldn’t make sense of it.  On a whim, I checked the fuses on the panels.  All fuses were intact, but one fuse holder had melted!  The fuse holders are these ones from Jaycar.  10A fuses were installed, and they were connected to the terminal blocks using a ~20mm long length of stranded wire about 6mm thick!

    This should not have gotten hot.  I looked around on Mouser/RS/Element14, and came up with an order for 3 of these DIN-rail mounted fuse holders, some terminal blocks, and some 10A “midget” fuses.  I figured I’d install these one evening (when the solar was not live).

    These arrived yesterday afternoon.


    New fuse holders, terminal blocks, and fuses.

    However, yesterday morning whilst I was having breakfast, I heard a smoke alarm going off.  At first I didn’t twig to it being our smoke alarm.  I wandered downstairs and caught a whiff of something.  Not silicon, thankfully, but something had burned, and the smoke alarm above the cluster was going berserk.

    I took that alarm down off the wall and shoved it under a doonah to muffle it (it seems they don’t test the functionality of the “hush” button on these things), switched the mains off and yanked the solar power.  Checking the cluster, all nodes were up, both switches were on, and there didn’t seem to be anything wrong there.  The cluster itself was fine, running happily.

    My power controller was off, which at first I thought odd.  Maybe something burned out there, perhaps the 5V LDO?  A few wires had sprung out of the terminal blocks, a frequent annoyance, as the terminal blocks were not designed for CAT5e-sized wire.

    By chance, I happened to run my hand along the sense cable (the unsheathed green pair of a dissected CAT5e cable) to the solar input, and noticed it got hot near the solar socket on the wall.  High current was flowing where high current was not planned for or expected, and the wire’s insulation had melted!  How that happened, I’m not quite sure.  I got some side-cutters, cut the wires at the wall-end of the patch cable and disconnected the power controller.  I’ll investigate it later.


    Power controller with crispy wiring

    With that rendered safe, I disconnected the mains charger from the battery and wound its float voltage back to about 12.2V, then plugged everything back in and turned everything on.  Things went fine; the solar even behaved itself (in spite of the melty fuse holder on one panel).

    Last night, I tore down the old fuse box, hacked off a length of DIN rail, and set about mounting the new holders.  I had to do away with the backing plate due to clearance issues with the holders and re-locate my isolation switch, but things went okay.

    This is the installation of the fuses now:


    Fuse holders installed

    The re-located isolation switch has left some ugly holes, but we’ll plug those up with time (unless a friendly mud wasp does it for us).


    Solar...


  • Considering options for over-discharge protection

    Stuart Longland11/10/2018 at 07:39 0 comments

      [Heads up, I've been having problems reaching this site on occasion from my home Internet connection … something keeps terminating my browsers' connections during the TLS handshake phase.  This seems to be IP-related.  As such, my participation on Hackaday.io is under active review and may be terminated at any time.  You can find all the logs from this project mirrored on my blog.]

      Right now, the cluster is running happily with a Redarc BCDC-1225 solar controller, a Meanwell HEP-600C-12 acting as back-up supply, and a small custom-made ATTiny24A-based power controller which manages the Meanwell charger.

      The earlier-purchased controller, a Powertech MP-3735, is now relegated to the function of over-discharge protection relay.  The device is many times the physical size of a VSR, and isn’t a particularly attractive device for that purpose.  I had tried it recently as a solar controller, but it’s fair to say it’s rubbish at the job.  On a good day it struggles to keep the battery above “rock bottom”, and by about 2PM I’ll have Grafana pestering me about the battery slipping below the 12V minimum voltage threshold.

      Actually, I’d dearly love to rip that Powertech controller apart and see what makes it tick (or not, in this case).  It’d be an interesting study in what they did wrong to give such terrible results.

      So, if I pull that out, the question is, what will prevent an over-discharge event from taking place?  First, I wish to set some criteria, namely:

      1. it must be able to sustain a continuous load of 30A
      2. it should not induce back-EMF into either the upstream supply or the downstream load when activated or deactivated
      3. it must disconnect before the battery reaches 10.5V (ideally it should cut off somewhere around 11-11.5V)
      4. it must not draw excessive power whilst in operation at the full load

      With that in mind, I started looking at options.  One of the first places I looked was, of course, Redarc.  They do have a VSR product, the VS12, which has a small relay in it rated for 10A, so it fails on (1).  I asked on their forums though, and it was suggested that for this task a contactor, the SBI12, be used to do the actual load shedding.

      Now, deep inside the heart of the SBI12 is a big electromechanical contactor.  Many moons ago, working on an electric harvester platform out at Laidley for Mulgowie Farming Company, I recall we were using these to switch the 48V supply to the traction motors in the harvester platform.  The contactors there could switch 400A and the coils were driven from a 12V 7Ah battery, which in the initial phases, were connected using spade lugs.

      One day I was a little slow getting the spade lug on, so I was making-breaking-making-breaking contact.  *WHACK*… the contactor told me in no uncertain terms it was not happy with my hesitation and hit me with a nice big back-EMF spike!  I had a tingling arm for about 10 minutes.  Who knows how high that spike was… but it probably is higher than the 20V absolute maximum rating of the MIC29712s used for power regulation.  In fact, there’s a real risk they’ll happily let such a rapidly rising spike straight through to the motherboards, frying about $12000 worth of computers in the process!

      Hence why I’m keen to avoid a high back-EMF.  Supposedly the SBI12 “neutralises” this… I’m not sure how; maybe there’s a flywheel diode or MOV in there (like this), or maybe instead of removing power in a step function, they ramp the current down over a few seconds so that the back-EMF is reduced.  So this isn’t an issue for the SBI12, but it may be for other electromechanical contactors.

      The other concern is the power consumption needed to keep such a beast actuated.  There’s an initial spike as the magnetic field ramps up and starts drawing the armature of the contactor closed, then...


  • Reverting back to the Powertech MP-3735

    Stuart Longland10/27/2018 at 04:28 0 comments

    So, for the past few weeks I've been running a Redarc BCDC-1225 solar controller to keep the batteries charged.  I initially found I had to make my little power controller back off on the mains charger a bit, but was finally able to prove conclusively that the Redarc was able to operate in both boost and float modes.

    In the interests of science, I have plugged the Powertech back in.  I have changed nothing else.  What I'm interested to see, is if the Powertech in fact behaves itself, or whether it will go back to its usual tricks.

    The following is the last 6 hours.

    Next week, particularly Thursday and Friday, are predicted to have similar weather patterns to today.  Today's not a good test, since the battery started at a much higher voltage, so I expect that the solar controller will be doing little more than keeping the battery voltage up to the float set-point.

    For reference, the settings on the MP-3735 are: Boost voltage 14.6V, Float voltage 13.8V.  These are the recommended settings according to Century's datasheets for the batteries concerned.

    Interestingly, no sooner do I wire this up than the power controller reaches for the mains.  The MP-3735 definitely likes to flip-flop.  Here's a video of its behaviour shortly after connecting up the solar (and after I turned off the mains charger at the wall).

    Looking now, it's producing about 10A, much better than the 2A it was doing whilst filming.  So it can charge properly when it wants to, but it's intermittent, and inside you can sometimes hear a quiet clicking noise, as if it's switching a relay.  At 2A it's wasting its time, as the cluster draws nearly 5× that.

    The hesitation was so bad that the power controller kicked the mains charger in for about 30 minutes; after that, the MP-3735 seemed to be behaving itself.  I guess the answer is: see what it does tomorrow, and later this week, without me intervening.

    If it behaves itself, I'm happy to leave it there, otherwise I'll be ordering a VSR, pulling out the Powertech MP-3735 and re-instating the Redarc BCDC-1225 with the VSR to protect against over-discharge.


    Update 2018-10-28… okay, overcast for a few hours this morning, but by 11AM it had fined up.  The solar performance however was abysmal.

    Let's see how it goes this week… but I think I might be ordering that VSR and installing the Redarc permanently now.


    Today's effort:

    Each one of those vertical lines was accompanied by a warning email.

  • Further power bill reductions

    Stuart Longland10/06/2018 at 06:44 0 comments

    So, since the last power bill, our energy usage has gone down even further.

    No idea what the month-on-month usage is (I haven't spotted it), but this is a scan from our last bill:

    GreenPower?  We need no stinkin' GreenPower!

    This won't take into consideration my tweaks to the controller where I now just bring the mains power in to do top-ups of the battery.  These other changes should see yet further reductions in the power bill.

  • Making the BCDC1225 get up and boogie!

    Stuart Longland10/04/2018 at 00:46 0 comments

    So, I've been running the Redarc controller for a little while now, and we've had some good days of sunshine to really test it out.

    Recall in an earlier posting with the Powertech solar controller I was getting this in broad daylight:

    Note the high amount of "noise"; this is the Powertech solar controller PWMing its output.  I'm guessing output filtering is one of the corners they cut; I expect to see empty footprints for juicy big capacitors that would have been in the "gold" model sent for emissions testing.  It'll be interesting to tear that down some day.

    I've had to do some further tweaks to the power controller firmware, so this isn't an apples-to-apples comparison, maybe next week we'll try switching back and see what happens, but this was Tuesday, on the Redarc controller:

    You can see that overnight, the Meanwell 240V charger was active until a little after 5AM, when my power controller decided the sun should take over.  There's a bit of discharging, until the sun crept up over the roof of our back-fence-neighbour's house at about 8AM.  The Redarc basically started in "float" mode, because the Meanwell had done all the hard work overnight.  It remains so until the sun drops down over the horizon around 4PM, and the power controller kicks the mains back on around 6PM.

    I figured that, if the Redarc controller saw the battery get below the float voltage at around sunrise, it should boost the voltage.

    The SSR controlling the Meanwell was "powered" by the solar, meaning that by default the charge controller could not inhibit the mains charger at night, as there was nothing to power the SSR.  I changed that last night, powering it from the battery.  Now, the power controller only brings in the mains charger when the battery is below about 12.75V.  It'll remain on until the battery has been above 14.4V for 30 minutes, then turn off.
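The rule above amounts to a simple hysteresis with a hold timer.  As a sketch (not the actual ATTiny24A firmware), here it is simulated in plain shell, using integer millivolts and seconds so the arithmetic works:

```shell
#!/bin/sh
# Sketch of the mains-charger rule described above: bring in the mains
# below ~12.75V, drop it only after 30 minutes above 14.4V.
ON_MV=12750
OFF_MV=14400
HOLD_SEC=$((30 * 60))

charger=off
above_since=-1                  # -1 means "not currently above 14.4V"

# step <battery_mv> <now_seconds>: update the charger state
step() {
    mv=$1; now=$2
    if [ "$charger" = off ] && [ "$mv" -lt "$ON_MV" ]; then
        charger=on
    elif [ "$charger" = on ]; then
        if [ "$mv" -ge "$OFF_MV" ]; then
            [ "$above_since" -lt 0 ] && above_since=$now
            [ $((now - above_since)) -ge "$HOLD_SEC" ] && charger=off
        else
            above_since=-1      # dipped below 14.4V: restart the clock
        fi
    fi
}

step 12600 0        # below 12.75V: mains comes on
step 14500 60       # above 14.4V: 30-minute clock starts
step 14500 1860     # 30 minutes elapsed: mains drops out
echo "charger=$charger"
```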

    In the last 24 hours, this is what the battery voltage looks like.

    I made the change at around 8PM (can you tell?), and so the battery immediately started discharging, then the charge-discharge cycles began.  I'm gambling on the power being always available to give the battery a boost here, but I think the gamble is a safe one.  You can see what happened 12 hours later when the sun started hitting the panels: the Redarc sprang into action and is on a nice steady trend to a boost voltage of 14.6V.

    We're predicted to get rain and storms tomorrow and Saturday, but maybe Monday, I might try swapping back to the Powertech controller for a few days and we'll be able to compare the two side-by-side with the same set-up.


    It's switched to float mode now having reached a peak boost voltage of 14.46V.  As Con the fruiterer would say … BEEEAAUUTIFUUUL!

  • Jury still out on solar controller, thinking of PSU designs

    Stuart Longland09/26/2018 at 23:09 0 comments

    So, the last few days it's been overcast.  Monday I had a firmware glitch that caused the mains supply to be brought in almost constantly, so I'd disregard that result.

    Basically, the moment the battery dropped below ~12.8V for even a brief second, the mains got brought in.  We were just teetering on the edge of 12.8V all day.  I realised that I really did need a delay on firing off the timer, so I've re-worked the logic:

    • If battery drops below V_L, start a 1-hour timer
    • If battery rises above V_L, reset the 1-hour timer
    • If the battery drops below V_CL or the timer expires, turn on the mains charger

    That got me better results.  It means V_CL can be quite low without endangering the battery supply, and V_L can sit at 12.8V, where it basically ensures that the battery is at a good level for everything to operate.
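The three rules above can be sketched as a small state machine.  This is a shell simulation, not the ATTiny firmware; V_L = 12.8V is from the text, while the V_CL value here is only an assumption ("quite low"):

```shell
#!/bin/sh
# Sketch of the reworked delay logic above (integer millivolts, seconds).
V_L=12800        # low threshold: starts the 1-hour timer
V_CL=11800       # critical-low threshold: assumed value, not from the text
DELAY=3600       # 1-hour timer

timer_start=-1   # -1 means "timer not running"
mains=off

# sample <battery_mv> <now_seconds>: apply the three rules
sample() {
    mv=$1; now=$2
    if [ "$mv" -lt "$V_CL" ]; then
        mains=on                        # critically low: no waiting
    elif [ "$mv" -lt "$V_L" ]; then
        [ "$timer_start" -lt 0 ] && timer_start=$now
        [ $((now - timer_start)) -ge "$DELAY" ] && mains=on
    else
        timer_start=-1                  # recovered above V_L: reset timer
    fi
}

sample 12700 0      # brief dip below V_L: timer starts, mains stays off
sample 12900 300    # back above V_L: timer resets
sample 12700 600    # dips again: a fresh timer starts
sample 12700 4200   # still low an hour later: mains comes on
echo "mains=$mains"
```

This is why teetering on the edge of 12.8V no longer flip-flops the charger: a momentary dip only starts the clock.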

    I managed to get through most of Tuesday until about 4PM, there was a bit of a hump which I think was the solar controller trying to extract some power from the panels.  I really need a good sunny day like the previous week to test properly.

    This is leading me to consider my monitoring device.  At the moment, it just monitors voltage (crudely) and controls the logic-level enable input on the mains charger.  Nothing more.  It has done that well.

    A thought is that maybe I should re-build this as a Modbus-enabled energy meter with control.  This idea has evolved a bit, enough to be its own project actually.  The thought I have now is a more modular design.

    If I take the INA219B and a surface-mount current shunt, I have a means to accurately measure input voltage and current.  With two of these, I can measure the board's output too.  Stick a small microcontroller in between, plus some MOSFETs and other parts, and I can have a switchmode power supply module which can report on its input and output power and vary its PWM to achieve any desired input or output voltage or current.

    The MCU could be one of the ATTiny24As I'm using, or an ATTiny861.  The latter is attractive as it can do high-speed PWM, but I'm not sure that's necessary in this application, and I have loads of SOIC ATTiny24As.  (Then again, I also have loads of PDIP ATTiny861s.)

    The board would expose the ICSP pins plus two more for interrupt and chip select, allowing for a simple jig for reprogramming.  I haven't decided on a topology yet, but the split-pi is looking attractive.  I might start with a buck converter first though.

    This would talk to a "master" microcontroller which would provide the UI and Modbus interface.  If the brains of the PSU MCU aren't sufficient, this could do the more grunty calculations too.

    This would allow me to swap out the PSU boards to try out different designs.

  • Return of the Redarc BCDC1225

    Stuart Longland09/23/2018 at 10:38 0 comments

    Well, I've now had the controller working for a week or so… the solar output has never been quite what I'd call "great", but it seems it's really been on the underwhelming side.

    One of the problems I had earlier before moving to this particular charger was that the Redarc wouldn't reliably switch between boosting from 12V to MPPT from solar.  It would get "stuck" and not do anything.  Coupled with the fact that there's no discharge protection, and well, the results were not a delight to the olfactory nerves at 2AM on a Sunday morning!

    It did okay as an MPPT charger, but I needed both functions.  Since the thinking was I could put an SSR between the 12V PSU and the Redarc charger, we tried going the route of buying the Powertech MP3735 solar charge controller to handle the solar side.

    When it wants to work, it can put over 14A in.  The system can run on solar exclusively.  But it's as if the solar controller "hesitates".

    I thought maybe the other charger was confusing it, but having now set up a little controller to "turn off" the other charger, I think I can safely put that theory to bed.  This was the battery voltage yesterday, where there was pretty decent sunshine.

    There's an odd blip at about 5:40AM, I don't know what that is, but the mains charger drops its output by a fraction for about 50 seconds.  At 6:37AM, the solar voltage rises above 14V and the little ATTiny24A decides to turn off the mains charger.

    The spikes indicate that something is active, but it's intermittent.  Ultimately, the voltage winds up slipping below the low voltage threshold at 11:29AM and the mains charger is brought in to give the batteries a boost.  I actually made a decision to tweak the thresholds to make things a little less fussy and to reduce the boost time to 30 minutes.

    The charge controller re-booted and turned off the mains charger at that point, and left it off until sunset, but the solar controller really didn't get off its butt to keep the voltages up.
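The switching behaviour described above can be sketched as a tiny state machine. The thresholds and timing here are guesses reconstructed from this log (solar above ~14V drops the mains charger out, a low battery brings it back for a 30-minute boost); they are not the actual ATTiny24A firmware constants:

```python
# Hedged sketch of the charge-control logic as described in the log.
# Threshold values are illustrative guesses, not the real firmware's.
SOLAR_CUTOFF_V = 14.0    # solar above this: mains charger not needed
BATTERY_LOW_V = 12.0     # battery below this: bring mains in (guess)
BOOST_SECONDS = 30 * 60  # 30-minute boost, per the log

class ChargeController:
    def __init__(self):
        self.mains_on = True
        self.boost_until = None  # timestamp when a timed boost ends

    def step(self, now, battery_v, solar_v):
        """Evaluate one sample; return True if the mains charger is on."""
        if self.mains_on and self.boost_until is not None and now >= self.boost_until:
            # Boost period expired.
            self.mains_on = False
            self.boost_until = None
        if battery_v < BATTERY_LOW_V and not self.mains_on:
            # Battery sagging: start a timed mains boost.
            self.mains_on = True
            self.boost_until = now + BOOST_SECONDS
        elif solar_v > SOLAR_CUTOFF_V and self.mains_on and self.boost_until is None:
            # Solar is doing the job: drop the mains charger out.
            self.mains_on = False
        return self.mains_on
```

The timed boost is what makes the thresholds "less fussy": once a boost starts, brief voltage wobbles can't toggle the charger until the 30 minutes are up.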

    At the moment, the single 120W panel and 20A controller on my father's car is outperforming my 3-panel set-up by a big margin!

    Today, no changes to the hardware or firmware, but still a similar story:

    The battery must've been sitting just on the threshold, which tripped the charger for the 30 minutes I configured yesterday.  It was pretty much sunny all day, but just look at that moving average trend!  It's barely keeping up.

    A bit of searching suggests this is not a reliable piece of kit, with one thread in particular suggesting that this is not MPPT at all, and many people having problems.

    Now, I could roll the dice and buy another.

    I could throw another panel on the roof and see if that helps, we're considering doing that actually, and may do so regardless of whether I fix this problem or not.

There are several MPPT charger projects on this very site, so DIY is a real possibility.  A thought in the back of my mind is to rip the Powertech MP3735 apart and re-purpose its guts, and make it a real MPPT charger.

    Perhaps one with Modbus RTU/RS-485 reporting so that I can poll it from the battery monitor computer and plot graphs up like I'm doing now for the battery voltage itself.  There's a real empty spot for 12V DC energy meters that speak Modbus.

If I want a 240V mains energy meter, I only have to poke my head into the office of one of my colleagues (who works for the sister company selling this sort of kit) and I could pick up a little CET PMC-220 which, with the addition of some terminating resistors (or just running comms at 4800 baud), would work just fine.  As soon as you want DC though, sure, there's some for solar set-ups that do 300V DC, but not humble 12V DC.

    Mains energy meters often have extra features like digital inputs/outputs, so this could replace my little charge controller too.  This would be a separate project.
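On the polling side, building a Modbus RTU request is simple enough to do from the battery monitor computer directly. Here's a minimal sketch of a "read holding registers" request; the CRC-16/MODBUS routine is the standard one, but the register address polled is a hypothetical placeholder (a real meter's register map comes from its datasheet):

```python
# Hedged sketch: build a Modbus RTU "read holding registers" (0x03) request
# of the kind a DC energy meter could be polled with over RS-485.
import struct

def crc16_modbus(frame: bytes) -> int:
    """CRC-16/MODBUS over the frame (reflected poly 0xA001, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    """Build the request frame; the CRC is appended low byte first, per spec."""
    body = struct.pack(">BBHH", slave, 0x03, start, count)
    return body + struct.pack("<H", crc16_modbus(body))

# Poll register 0 of slave 1 -- e.g. a (hypothetical) battery-voltage register.
request = read_holding_registers(1, 0, 1)
print(request.hex())  # → "010300000001840a"
```

Writing that frame down a serial port and decoding the reply is all a basic poller needs; the same CRC routine validates responses.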

    But that would leave me without a solar controller, which is not ideal,...

    Read more »

View all 98 project logs


Discussions

RoGeorge wrote 01/28/2019 at 08:34 point

Seen your question about disk enclosures on the 'HaD stack', decided to answer it here.

Where I live, the best price/GB for storage this month was for 3.5'' HDDs, at about half the price of the 2.5'' you mentioned.  Price differences between the main brands Seagate/WD/Toshiba are not very big.  Cheapest here is Toshiba, most expensive is WD.  Between WD and Seagate, datasheet specifications are better for Seagate, e.g. MTBF 1.2 million hours vs 1 million for WD, error rate 10^(-15) for Seagate (I guess they have ECC RAM on the HDD's PCB to achieve this) vs. 10^(-14) for WD, same with other endurance parameters like max number of head landings (LLC), or the average GB written/day.  Yet, I just bought a big WD, not a Seagate (I was also looking for transfer speed, which I guess is not critical for your use case).

It's hard to comment without numbers estimating how much space/traffic/speed/power budget is needed.  In my experience with SCADA (mostly for power grids), the traffic and the storage space were almost negligible, trivial to achieve.  Yet, long-term reliability and 24/7 operation with minimal outage time were hard to achieve, even though everything in the power distribution stations was either redundant or had hot standby if not both, from computer nodes to fibre-optic paths and equipment in the field.  We also used to have brand redundancy, in the sense that the redundant equipment for a given function was from a different brand, in the hope that bugs or other hidden problems wouldn't hit two brands at the same time.  For 24/7, the hardest thing to achieve was to eliminate as many single points of failure as possible for a given budget.

About HDD power, 3.5'' drives can work with the spindle motor powered down (HDDs have big caches, e.g. 256MB; for SCADA the cache can fit the traffic for a very long time), so if you plan for regular maintenance/HDD replacements I guess you can go very aggressive with the power saving.  I never aimed for a tight power budget, so I'm only guessing here.  For a 3.5'' drive the power is about 10W for spindle+PCB, 5W for the PCB only, and less than 1W in standby.  Choosing low-RPM disks might help with power, too.  HDDs recommended for video surveillance might have lower RPMs than desktop HDDs.  Among the 24/7 disks (not surveillance) I've seen, a lot of Seagates are 5900RPM (sometimes improperly sold as 7200, yet the Seagate datasheet does not specify the RPM) while the equivalent-performance WDs are usually 7200RPM.

About the file system used, I don't have any practical experience with btrfs or ZFS, but it happens that I just looked them up lately for a project of mine, and I think ZFS might have some advantages if you can afford to mirror data only rarely, so as not to keep the mirror disks powered all the time.  Not very sure about this last one.

I didn't answer the HDD enclosure question yet.
:o)

Since you were considering a 3D-printed enclosure, I'll guess DIY solutions are acceptable, too.  I'd simply put a plain, thick, and very rigid board in the rack (nowadays magnetic recording density is very high, and the HDD heads struggle to stay on track; read/write performance can drop 1 or 2 orders of magnitude because of mechanical vibration of the disks), and fix the HDDs vertically.  Somewhere in the back, I'd put some low-RPM, temperature-controlled fans, in order to blow air between the vertically mounted HDDs only when needed.

Nice project you have here, this solar-powered cloud computing; I like it.
Congrats!


Stuart Longland wrote 01/28/2019 at 10:10 point

Cheers on the insights there… yeah I should mention the prices I quoted are in AUD… so AU$1 is ~US$0.60 at the moment.

I'll admit to being leery about Seagate.  We had a lot of Seagate SV35 3TB HDDs in a Ceph cluster at work, and they ran fine until one week they all pretty much failed.  We'd replace one, and the next one would fail.  We ended up putting in consumer grade Hitachis running at 5400 RPM and haven't looked back.

My cluster right now is running on Hitachi laptop drives, 5400RPM spindle speed.

I'm tossing up whether to go to 7200RPM; battery life isn't a big consideration because I can just get more batteries.  I'm looking at the WD Purple HDDs -- got one at work in my desktop there and so far it's been good.

A lot will come down to price.  I've spent a lot on this cluster (over $10000), but I'm not made of money. :-)


[deleted]

[this comment has been deleted]

Stuart Longland wrote 04/11/2016 at 17:31 point

Note: This is in reply to a comment that has since been deleted, from a user that's no longer with us, referencing a project that's been deleted.  The original comment read "check this out - it helps you with power […deleted project URL here…]".  The project was called the "MEDELIS battery", and there are still some videos on YouTube.

"Free Geen[sic] Energy" eh?

I did see the project but the page has almost no detail other than a link and a video.  The link offers little information either.  I was hoping the project page would have some sort of information into what chemical reaction is taking place or what power output is feasible for what volume of water.

It sounds a lot like the description here: https://en.wikipedia.org/wiki/Water-activated_battery in which, the magnesium electrode gets depleted.  So I'd be replacing magnesium electrodes as well as water.

http://peswiki.com/index.php/OS:Stephen_Dickens_Magnesium-Water-Copper_Battery suggests about 1.5V and a few milliamps current are possible.

This project draws a lot more power than the LED shown in the photo, or a phone charging for that matter.  Even "turned off", the computers in this cluster together draw 15W due to IPMI management.  On Saturday, I had them doing system updates (compiling from source), that drew about 70-80W, and the nodes were barely getting started.

Flat chat, I can expect each node to draw up to 2A; at 12V that's 24W, or 120W for the cluster, not including the network switch.  Overnight it'd draw nearly 1.5kWh.  I'd imagine it'd need to be scaled up quite considerably, and water is not a "free" resource.  The above numbers suggest something in the order of 64000 cells, and then we don't know what the internal resistance is.
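A quick back-of-envelope check on those numbers (the 1.25mA per-cell figure is my guess at "a few milliamps" that reproduces the 64000-cell estimate):

```python
# Sanity-check the figures above: 2 A per node at 12 V, 5 nodes,
# ~12 hours of darkness, and a water-activated cell delivering
# roughly 1.5 V at ~1.25 mA (an assumed value for "a few milliamps").
nodes = 5
node_power_w = 2 * 12                   # 2 A at 12 V = 24 W per node
cluster_w = nodes * node_power_w        # 120 W, excluding the network switch
overnight_kwh = cluster_w * 12 / 1000   # energy over a 12-hour night

cell_power_w = 1.5 * 0.00125            # per-cell power at 1.5 V, 1.25 mA
cells_needed = cluster_w / cell_power_w
print(overnight_kwh, round(cells_needed))  # → 1.44 64000
```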

In urban areas like where I am, the primary water supply is the mains feed, which is billed by the kilolitre.  Australia's climate is quite dry, so I'd imagine you'd need quite a big water tank to get you through between showers of rain.  Sun, on the other hand, seems plentiful, and lead-acid batteries, for all their faults and failings, are cheap.

I think on that basis until there's some more detail (videos are good for demonstration but not documentation), I'll stick to what's proven.

Added note: The copper-magnesium-water battery described in that deleted project however, probably has some useful applications where very low power is all that's needed.  So still worthy of consideration in those cases.

