09/07/2018 at 10:52 •
The Japan Aerospace Exploration Agency (JAXA) is reportedly set to launch a tiny space elevator experiment next week.
Just like TRILLSAT-1, it isn't a "real" space elevator, as that is still the domain of science fiction, but it serves as a miniature test platform. The Japanese spacecraft is reportedly the first test of a tethered "elevator movement" in actual space, which is pretty cool.
My TrillSat project will no longer be updated on this Hackaday.io project page, and any future updates will resume on its main site at http://greatfractal.com/TrillSat.html. I plan to take a long break before I resume, as this competition sucked most of the fun out of it for me.
TrillSat's latest subsystem #Tethered Haptics Using Morse Pulses was a project page I specifically created for the Human-Computer Interface Challenge (since the judges ignored TrillSat completely in the first 3 challenges), so this actually marks the 4th loss for TrillSat in a row.
As evident from my log entry back in July, the 2018 Hackaday Prize competition has been one of the most disappointing experiences in my life, something I will never repeat, as I rarely extend myself or personal projects in this way. I was overjoyed back in March 2018 when the competition was first announced to find out that TrillSat qualified for 4 of the 5 challenges and felt like the luckiest person in the competition. It is inconceivable to me that all 80 of the winning projects so far could have beaten TrillSat on merit--and this prevented it from even entering the finals.
The only reason that I decided to put in the final effort for the 4th challenge was mainly a matter of principle--it served to complete what I originally set out to do back in March. Suffice it to say, while TrillSat is an open-hardware, power-harvesting, robotic platform with numerous human-computer interfaces and thus directly qualified for 4 of the 5 challenges, it is definitely not a musical instrument and won't be entered into the 5th challenge. The judges can relax and don't have to look at it any longer.
Again, thanks followers/likers for recognizing this project--it was the only beacon of light in a dark place.
TRILLSAT-1, signing off.
08/28/2018 at 01:00 •
I started to look more closely at the centripetal force equation, since I've been wary of raising the Gyrofan speed higher with its full mass, as I didn't know exactly how much force was being generated. I'd like to ramp the thing up to get appreciable inertial stabilization, since my first test of the Gyrofan on the tether a few days ago had little effect at 360 RPM. The equation, though, shows that I was way too conservative and underestimated the non-linear effect of RPM on centripetal acceleration.
I also forgot that the high pitch of the sound isn't generated by the frequency of rotation but by the frequency of the electrical steps/pulse generation, which is much higher on a BLDC.
As the gyro speed increases (which increases gyroscopic stabilization), centripetal acceleration increases with the square of the RPM. And since centripetal force (and the related centrifugal force) is mass x acceleration, the force increases quadratically, too. If you double the RPM, for example, the force doesn't double, it quadruples. Doh!
The 24 steel spheres and their PETG sockets are somewhere around 5 grams each, plus or minus a few grams (heck, I forgot to even weigh them before I closed up the case!). At 360 RPM with a radius of around 5 cm, this creates a force of only around .08 lbf (.36 N).
But at 3 times that RPM (1080 RPM), the force would go up to around .72 lbf (3.2 N), 9 times the force, since 3 squared is 9.
And at 50 times that RPM (18,000 RPM), the force would go up to around 200 lbf (888 N), 2500 times the force, since 50 squared is 2500.
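The scaling above is easy to sanity-check. Here is a minimal Python sketch using the rough values estimated in this entry (a ~5 g sphere at a ~5 cm radius; both are my estimates from the log, not measured figures):

```python
import math

def centripetal_force_n(mass_kg, radius_m, rpm):
    """Centripetal force F = m * omega^2 * r, with omega in rad/s."""
    omega = rpm * 2.0 * math.pi / 60.0   # convert RPM to rad/s
    return mass_kg * omega ** 2 * radius_m

# One ~5 g sphere at a ~5 cm radius, per the estimates above:
for rpm in (360, 1080, 18000):
    f = centripetal_force_n(0.005, 0.05, rpm)
    print(f"{rpm:>6} RPM: {f:8.2f} N ({f / 4.448:7.2f} lbf)")
```

This reproduces the ~0.36 N, ~3.2 N, and ~888 N figures, and makes the square-law obvious: tripling the RPM multiplies the force by 9.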
So this presents an interesting situation, since the actual stabilization effect is related to angular momentum, which grows only linearly with RPM. In other words, I have to spin it fast to get the stabilization to occur, yet the unwanted force on my wheel goes up with the square of the RPM...
I am still down in the low numbers, though, and so I tried sending a gyro 33 command to ramp it up to 1320 RPM, which applied approximately 1 lbf of force (4.8 N) around the wheel. It runs for a few seconds, but then cuts off before I can evaluate it. For some reason, my stall detection code is randomly activating and is shutting down the BLDC. What is weird is that this bug occurs less often when the craft is tilted to the starboard side, and more often when tilted to the port side, but I've ruled out the accelerometer. And I don't think this is related to gyroscopic precession, either, since it also occurs at really low RPMs. Very odd. So I have to review all of my gyro code (and maybe even open up the case to check for loose connections) before I can resume testing.
My BLDC code is a nightmare, by the way, mainly because I couldn't use the 8-bit hardware timer to drive the pins directly. I'm using them for other things, and even so, I need 3 pins for the 3-phase motor, but there are only 2. I use the pin change interrupts to detect the 3 hall-effect sensors, but for PWM, I used the 8-bit overflow and CTC interrupts to chop the power to the coils, using software routines to change the pin states. And all kinds of bad things can happen: the pin states can change while I'm trying to read them, counter overflows can occur, the timer interrupts can occur outside of my functions before I get a chance to read them, my code might not complete before the next interrupt, etc. There are still a lot of bugs that I need to work out.
The new ATtiny 1616 that came out last year would solve some of my problems, but I just need to improve my ATtiny 1634 code and find creative ways to handle it. I've got the main drive motor and haptic morse detection running on the 16-bit timer, and I use the 8-bit timer for the gyro and miscellaneous functions.
When the BLDC runs at higher RPMs, every clock cycle counts, but unfortunately I do a lot of things inside my interrupts, which eats up many of the available cycles. With 36 interrupts per mechanical revolution at 3600 RPM, for example, one revolution takes 1/60th of a second (.016 seconds). The CPU clock speed that I use is 8 MHz, so it only has about 133,000 cycles for that revolution. Split that into 36 parts, and I'm left with around 3,700 cycles per interrupt, some of which the C compiler's overhead also consumes.
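That cycle budget can be computed directly; a quick Python sketch of the arithmetic above:

```python
CPU_HZ = 8_000_000           # ATtiny 1634 clock, per the log
INTERRUPTS_PER_REV = 36      # interrupts per mechanical revolution

def cycles_per_interrupt(rpm):
    """Clock cycles available between consecutive interrupts at a given RPM."""
    revs_per_sec = rpm / 60.0
    cycles_per_rev = CPU_HZ / revs_per_sec
    return cycles_per_rev / INTERRUPTS_PER_REV

print(round(cycles_per_interrupt(3600)))   # ~3704 cycles per interrupt
```

Note the budget halves every time the RPM doubles, which is why the higher modes need leaner interrupt handlers.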
One way I've dealt with this is to create 3 different Gyro modes that activate automatically depending on the RPM assigned, called BLDC_LOCK_MODE (0-1320 RPM), BLDC_PWM_MODE (1360-1480 RPM), and BLDC_FAST_MODE (1520 to all out). But even in the "all out" mode, it tops out at a fairly low RPM due to the inefficiency of my code. So I need to add a "step-skipping" mode as well.
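The automatic mode selection is just a threshold check. A sketch of the idea in Python (the 40 RPM command granularity is my assumption, inferred from "gyro 33" mapping to 1320 RPM earlier in this entry):

```python
# Assumption: the "gyro N" command maps to N * 40 RPM (gyro 33 -> 1320 RPM),
# so the mode boundaries fall on 40 RPM steps.
def gyro_mode(rpm):
    """Pick the BLDC drive mode from the assigned RPM."""
    if rpm <= 1320:
        return "BLDC_LOCK_MODE"
    elif rpm <= 1480:
        return "BLDC_PWM_MODE"
    return "BLDC_FAST_MODE"

print(gyro_mode(33 * 40))   # BLDC_LOCK_MODE
```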
Once I fix this primary bug, though, I'll retest at higher RPMs and document the results.
08/25/2018 at 03:28 •
Ok, I made the first update to the TrillSat source code since it was published almost 3 months ago, primarily to add the new #Tethered Haptics Using Morse Pulses subsystem, and added it to the Files section. The trillsat.c source is now dependent on (and performs a C include of) simtheo.h, my external Morse decoder module, and is now also dependent on eeprom.h (from the avr-libc project) for the haptic passcode storage. The trillsat.c source also adds the appropriate defines, global variables, and pin/interrupt/timer initialization; adds two ISRs for Sawyer and Huckleberry; adds a new function called Process_Haptic_Command to perform the actual command processing; adds the EEPROM haptic lock routine; and, along with trillsat_bot.py, adds the new XMPP and THUMP commands.
They also temporarily disable the haptic system and reset the correct 16-bit timer prescaler during motor operations, but they do not yet tell Huckleberry to turn on the LED (in case it was grounding the MISO line which prevents hall-effect feedback during a non-haptic motor operation). For now, I just control this manually using a new hucklight [1|0] command during testing. And the BASH programming scripts for the two ATtinys for in-circuit programming were updated to compensate for the new haptic system running over the same line.
The simtheo.h C module is statically linked, compiled at the same time as trillsat.c. In the future, I plan to clean up the module to allow it to be dynamically linked, but for now, I just wanted to make sure that all of the code was published to show how the THUMP system was implemented for a capstan cablebot. I also updated simtheo.h to remove a small delay that was causing problems and to correct a typo in the README file.
The THUMP system can, of course, be implemented differently in different situations, depending on the type of craft or function it performs, but this is how I implemented it for TRILLSAT-1.
08/24/2018 at 01:31 •
I spent the last three days making 4 new videos for TrillSat:
- XMPP Command System
- Tethered Haptic Morse Code System
- Inertial Gyrofan
I still need to make 3 more videos (the power regulation, motor drive, and packet radio systems), but these are more complicated for me to demonstrate, so I'll work on those videos at a later time. But I hope they now give you a better feel for the device, such as the scale of the craft, how it swings, and how the command systems work. It's buggy and rough, but it's a real thing.
I decided to go ahead and create a YouTube channel called TrillSat just for this project and moved my old motor control test video to the new channel. Anyway, here they are:
08/21/2018 at 05:33 •
Over at my new #Tethered Haptics Using Morse Pulses project, I have decided to leave the haptic system enabled full time on the TRILLSAT-1 prototype, instead of just during low-power situations, and have created a new EEPROM haptic passcode validation system which reduces the chances of nature (or a casual passerby) spontaneously entering commands over the tether. A two-capital-letter passcode is set via XMPP, which allows 676 combinations.
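The 676 figure is just 26 squared (two letters from the A-Z Morse subset). A hypothetical sketch of the validation logic in Python (the function name and check are mine, for illustration; the real routine lives in C on the ATtiny):

```python
from string import ascii_uppercase

# Two capital letters from the A-Z Morse subset: 26 * 26 = 676 combinations
assert len(ascii_uppercase) ** 2 == 676

def passcode_valid(stored, received):
    """Hypothetical check of a decoded two-letter haptic passcode."""
    return (len(received) == 2
            and all(c in ascii_uppercase for c in received)
            and received == stored)
```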
While simple to use in a "typical" system, implementing all of this on TRILLSAT-1 is more complex due to both its oblique angle (and another angle which changes depending on the sun), and the fact that it resource-shares a single overloaded MISO line using two ATtinys running in parallel. I'm essentially cramming a new, full-time system onto an already overloaded line, and all of these systems have to be carefully orchestrated. There are some interesting nuances in exactly how the circuits and software need to interact to allow successful operation.
The MISO Line
The SPI Master In Slave Out line is now used for five different purposes:
- Occasional bitbang SPI in-circuit programming of the two ATtiny microcontrollers
- On/Off control, by both the Pi and ESP8266, of the LED spotlight
- Motor position alignment via a hall-effect switch read by Sawyer (the motor-control ATtiny)
- Serial bus between Sawyer and Huckleberry (the power-regulation ATtiny) for repeating standard Morse code characters from haptic input
- On/Off control of the LED spotlight by Huckleberry
And the settings vary depending on whether or not the Pi and ESP are up or down, the LEDs are on or off, the position of the motor is aligned over one particular hall-effect switch, and whether any in-circuit programming or haptic communication is taking place.
The I2C Lines
Two I2C lines (which are also used as SPI lines) are shared by two masters (Pi Zero W and ESP8266) and three slaves (LIS3DH and two ATtiny 1634 microcontrollers), but the infamous hardware I2C clock-stretching bug on the Pi has caused many difficulties over the last two years, forcing me to both slow down the bus and spawn off jobs in parallel so as not to delay the line.
These lines are the only way to access the LIS3DH registers, so if the Pi or ESP go down, the LIS3DH can only generate interrupts to one of the ATtinys based on previous settings and the registers can no longer be set dynamically. So I had to make sure the Pi (or ESP) sets it correctly before any power failure occurs. Because the three slave devices cannot communicate with each other, the only options for the ATtinys would be to set variables which are polled by the Pi or ESP, if active, so they can then send master I2C commands as needed.
The LED Spotlight
The LED serves as both a spotlight for illumination and also an important diagnostic tool, allowing Morse code output to be visualized without turning on the haptic motor, and it can even flicker when performing SPI programming without affecting the data integrity.
If all systems are operational, the ESP should keep the LED spotlight ON (floating), but the Pi should keep the LED spotlight OFF. Because the LED is being controlled at the transistor base after the series resistor, it doesn't ground the entire MISO line and leaves it usable. So the Pi and ESP can keep the LED off, yet still allow the MISO line to operate for SPI programming, haptic Morse signals, and hall-effect motor position detection. But if Huckleberry needs to control that LED for testing when pulsemode is 0 (visual only; no haptic), the Pi and ESP must relinquish their hold on the LED spotlight, and then Huckleberry can control it. Sawyer can control it too, but it never needs to. It just needs to monitor that line at all times for its hall-effect switch, in-circuit programming, and incoming haptic signals.
So if Huckleberry is sending Morse to Sawyer, you'll see the LED spotlight flash, unless the Pi and ESP have turned it off. But if the Pi is down, it is no longer grounding that transistor base and the LED spotlight comes back on. And any haptic transmission from Sawyer to Huckleberry will light up the LED if the Pi and ESP are down.
Now, if just the Pi is down, the ESP should know this, as it controls the power to the Pi and has full communication with it over the UART. So the ESP can always take over to keep the LED spotlight off, too. It should default to ON, so haptic pulsemode 0 will work by default.
But if the ESP is also down, haptic pulses to Sawyer must always light up the LED.
Unless... the motor is in a position where a particular hall-effect switch has triggered, then the LED will remain off and neither Huckleberry, nor the Pi or ESP, can turn it back on. But in this case, haptic communication is no longer possible with Sawyer.
I've decided that I need to arrange the jumpers on the two hall-effect switches to make the one that uses the MISO line on the West, sunset-facing side. I'll explain why below.
The Sunset Position Hall-Effect Switch
When the Pi wants to move the motor, it must:
- turn off the LED spotlight (to keep the light from flashing at night)
- instruct Huckleberry to turn on the LED spotlight (it doesn't actually come on, due to the Pi override), releasing the ground on the MISO line to free up the sunset hall-effect switch needed by the motor for position sensing
- tell both Huckleberry and Sawyer to ignore haptic pulses by disabling the interrupt
- tell Sawyer to move the motor
- tell Huckleberry and Sawyer to receive haptic pulses again
- tell Huckleberry to set itself back to where it was
- finally, turn the LED spotlight back on again, to give Huckleberry a chance to control the light if it needs to
If the LED was on, it will turn off if the motor is moved to the sunset position. And when that hall-effect switch is triggered, Sawyer can no longer be programmed nor receive haptic Morse code messages either.
Huckleberry could set a flag which is polled by the Pi and ESP8266 to instruct Sawyer to drive the motor away from the hall-effect switch, allowing it to temporarily see those signals, but if the Pi and ESP are offline, this is not possible. However, if the hall-effect switch that uses the MISO line is the one at the sunset position, the most precarious physical position (where the planetary mass needs only a slight nudge to fall away from that sensor into the Night "tridant"), the user can simply "shake" the tether to free the sensor.
When programming Sawyer using a BASH shell, I added updates to my trillsat_burnsaw.sh script to first check whether the MISO line is sinking (which means that either Huckleberry is grounding the line to keep the LED spotlight off or the sunset hall-effect switch is sinking), then use i2ctransfer (one of the commands from the Linux i2c-tools package) to send an I2C command to Huckleberry to turn on the LED spotlight, and then check again. If the MISO line is still sinking, it informs the user that the motor position must be moved before programming can commence.
When the hall-effect switch is not sinking, 5v is sent through 1502 ohms of resistance which meets up with a 1k resistor to form a 1.5k/1k voltage divider at the LED spotlight's control transistor, sending about 2 volts to the Pi and MISO line, which doesn't interfere with the Pi's 3.3v logic bitbang SPI programming. When the sensor is sinking, Sawyer sees a direct ground at the MISO line, but Huckleberry now sees the 3.3v line under only a 334 ohm load (502 ohms and 1k in parallel), which draws only about 10 mA on the Pi GPIO pin, not enough to keep the SPI programming from working, and the 3.3v logic is still seen as a 5v logic high on Huckleberry. Interestingly, if Huckleberry grounds the MISO line, this does block Sawyer from programming, even if the hall-effect switch is not activated, since Sawyer sees a 1k/502 ohm voltage divider, which lowers the level to 1.1 volts, too low for a 5v logic high.
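The divider arithmetic above checks out; a quick Python verification of the three cases (resistor values taken from this entry):

```python
def divider_v(v_in, r_top, r_bottom):
    """Output voltage of a simple resistive divider."""
    return v_in * r_bottom / (r_top + r_bottom)

def parallel_r(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Switch open: 5 V through 1502 ohms over 1k at the transistor base
print(divider_v(5.0, 1502, 1000))   # ~2.0 V on the MISO line

# Sensor sinking: Huckleberry's load is 502 ohm and 1k in parallel
print(parallel_r(502, 1000))        # ~334 ohms (~10 mA at 3.3 V)

# Huckleberry grounding the line: Sawyer sees 3.3 V over a 1k/502 divider
print(divider_v(3.3, 1000, 502))    # ~1.1 V, below a 5 V logic-high threshold
```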
And when programming Huckleberry, I also added updates to my trillsat_burnhuck.sh script to send an I2C command to Sawyer telling it to disable its THUMP interrupt until programming is complete. This is necessary to ensure that, if the sunset hall-effect sensor is not blinding Sawyer to the MISO line, Sawyer doesn't try to interpret Huckleberry's SPI programming signals as haptic Morse code.
Whew! I told you TRILLSAT-1 was more complicated than a typical use case. But, wait, there is more!
Tilt Angle Changes THUMP Parameters
And finally, because TrillSat is an oblique-angled craft that changes its tilt angle for solar tracking, the tilt adds X-axis (or even Y-axis) acceleration due to gravity that must be subtracted out by the accelerometer to keep the sensitivity constant, instead of inversely proportional to the tilt angle. But the physical constraints of the tilt also limit the sensitivity, so both the LIS3DH high-pass filter REFERENCE and the threshold CLICK_THS registers would probably need to be read/set after every motor move.
Ideally, I need two axes (X and Z) in TrillSat for consistent, single-axis click detection at every tilt angle, where a typical, flat, vertical hoistbot, that didn't tilt, would need only one. The accelerometer can be configured to detect multiple axes (6D orientation), but it would still have to be configured for each tilt angle to know how to interpret the pulse for an interrupt, which brings me back to the same problem. I do like this feature and may use it in the future, as it can exclude pulses in the wrong direction. Without using the LIS3DH I2C line, which is not possible on the ATtiny 1634s in my situation, or the Autoreset mode (which I'm still experimenting with), the accelerometer system is not smart enough for dynamic calculations. So I have to rely on the Raspberry Pi Zero W (or ESP8266) to set the accelerometer over I2C beforehand, which would allow the THUMP system to operate normally using interrupts at that tilt angle, until it changes again.
But because Sawyer and Huckleberry will continue to track the sun, even if the Pi and ESP are offline, this is not an ideal solution. So before a "planned" low-power shutdown situation occurs, I'm going to either have to switch to the Autoreset mode or have the Pi turn off the high-pass filter and thus make the pragmatic, but sometimes incorrect, assumption that the system is always level. This would mean that the haptic passcode could be easily unlocked at noon, when solar power is at maximum, or at night when a "shake" can move the mass to solar midnight position and also level the craft. Then, when unlocked, any, non-haptic, motor locomotion operations will be told to stop at noon position, to ensure subsequent haptic commands are easy to enter, until the THUMP system times out and normal solar tracking resumes.
And there are many more orchestrated systems like this in TrillSat--this is only a small example.
So while things may seem fairly easy going over at my #Tethered Haptics Using Morse Pulses project, every new system I cram into the TRILLSAT-1 prototype is a logistical nightmare. But this is by design--I wouldn't have it any other way. It is exciting!
That's the main reason I build projects like this--to see what the maze-like constraints of Nature and Information allow me to do, how much I can overload, how much I can't, to find those outer boundaries, and then, go deeper and find those inner ones, too.
08/16/2018 at 09:04 •
The Tethered Haptics Using Morse Pulses (THUMP) subsystem is now working on the TRILLSAT-1 prototype which allows two-way haptic communication using the A-Z subset of International Morse Code. It operates in parallel on both ATtiny CPUs, and creates its own serial bus protocol for daisy-chained, asynchronous operation. I've published a separate project page for it called #Tethered Haptics Using Morse Pulses and entered that project into the 2018 Human Computer Interface Challenge. I also released my SIMTHEO Decoder module source code under LGPL 3.0. It can be useful for a variety of different types of tethered robots, such as hoistbots and winchbots (and oblique, capstan cablebots, like TrillSat). I will release the updates to the TrillSat source code when I get a chance, which will show exactly how the THUMP subsystem is applied to the TRILLSAT-1 prototype.
08/10/2018 at 02:14 •
Six days ago, NASA announced that one of my 1980s school classmates will be one of the first two US astronauts to fly on a commercial SpaceX Falcon 9 rocket, scheduled for next April, finally returning the US to human spaceflight after the shuttle program ended.
In the NASA photo, he is 3rd from the left, giving a big smile and thumbs up like All Might (for any My Hero Academia fans out there...)
I had mentioned him months ago in the Introduction section of this project, since his achievements over the years were one of my motivations for actually building TrillSat as a self-contained satellite/spacecraft analog and not just a traditional radio station (and like many astronauts, he even has a ham radio callsign--how cool).
Thanks, Bob, for setting the bar extremely high (literally) and reminding me that it is still worthwhile to do things "not because they are easy, but because they are hard".
The TrillSat project on Hackaday has been reactivated and work has resumed. More details to follow.
07/25/2018 at 02:43 •
The twenty 2018 Hackaday Prize Power Harvesting Challenge winners were announced today, and again I did not win a single thing, not even a non-monetary achievement. This is the 3rd loss for TrillSat in a row, and I see a trend here, unfortunately.
You'd think that its unique mass-tilt/catenary spiral-axis solar tracker (both elevating the craft for line-of-sight radio communication and approaching dual-axis efficiency) using a single $9 screwdriver motor (with a custom H-Bridge built into the handle) would win at least something, right? If so, you would be wrong.
You'd think that the only tether/winchbot in this year's Hackaday Prize competition might have a chance to win something. Heck, at the time of this writing, tether or winchbots are relatively rare, and TrillSat uses an onboard capstan winch for aerial locomotion, which is very rare (I've never even seen one used in this way before) and uses complex PWM drive mechanics. And it even has a gyrofan, using mass x velocity for experimental inertial stabilization built out of a repurposed DVD spindle motor, and the BLDC is driven via programming of a single ATtiny. But alas, it didn't win the robotics round--not even a non-monetary achievement. The capstan was kicked to the curb.
You'd think that a tiny audio/data/PTT interface custom-built to connect to a $26 ham radio (which was dismantled to eliminate weight and allow variable power from both series cells and boost converters) with a custom program that uses 16-byte chunks to quickly program the clone-mode flash to allow it to mimic the features of more expensive dual VFO's for both APRS/Packet functions during a single session would at least win something, even a non-monetary achievement, right? Sorry to disappoint you again.
You'd think that a custom Packet BBS in Python on a Pi Zero using Unix standard streams instead of the C library, connected to a custom XMPP server in Lua/NodeMCU on an ESP8266, things that had never been programmed on those architectures before (to the best of my knowledge) would at least earn the smallest of awards--again, nope. I even had to build a software simulation in order to test it, since I couldn't put it on the air. Heck, I even added waterproof, inductive Qi charging... Cool, right? The judges don't seem to think so.
And you'd think that showing proficiency with 4 interconnecting CPUs and 4 languages (C on the ATtinys, Lua on the ESP8266, and Python/BASH on the Pi Zero) on a single project might garner some cred from the judges, right? Nah! It doesn't appear they look at such trivial things...
Oh, and the care taken to design all parts in OpenSCAD so the majority of the unit could be printed on a Prusa i3 printbed in PETG (built from kit, no less), using pronsole on a Raspberry Pi 3 would garner at least a tiny bit of modern maker cred, right? That would be a big NO.
Oh yeah, and the wooden Ark and Test Frame support structure that I had to build to test and calibrate the unit indoors--it's another project in itself: it folds, uses counterbalanced tethers, collapses for storage, and assembles and operates in several different modes. Neat, right? Ha! No prize for you!
And finally, I built an actual weatherproof, temperature-controlled prototype (it's not just vaporware like some of the projects), strong enough for testing, and I published all of the source code, schematic, and video. I also documented it in great detail (almost 200 pages), including an itemized Bill of Materials. But I also went through the trouble of writing up separate project text for Hackaday (until I ran out of room). Did I get recognized for such meticulous work? Of course not.
I don't see this trend changing and have therefore decided not to enter my haptic, tethered Morse code system into the 4th challenge (Human-Computer Interface). The competition is over for me. All future updates to the TrillSat project will take place where it first began at http://greatfractal.com/TrillSat.html.
Thank you Hackaday followers/likers for taking the time to acknowledge this project. I often work on my projects in a social vacuum, and your simple gesture means a lot.
07/16/2018 at 06:14 •
I just published the first video of the two motors in operation, showing the tilting of the solar panel/capstan hoist mechanism on the tether using the wooden Ark and Test Frame and also showing the Gyrofan used for inertial stabilization. It's the first YouTube video that I've ever uploaded, and it's pretty crude. I was able to fully load the mass on the gyro wheel and ramped it up to a fixed RPM, which it holds fairly constant, based on hall-effect sensor feedback. The wagon-wheel effect is pretty mesmerizing.
I also made a pass at computing the solar position using just the two CdS LDRs at rest, instead of relying on sunrise/sunset tables, but the trigonometry is surprisingly tricky. It's easy to find the brightest part in the sky--you just move the craft until the two opposing LDR sensors are close in value to one another, but it's much more difficult to estimate solar position using just these values alone. In other words, to have the craft "see" the sun and report its exact position is difficult.
First you have to know the angle of the craft (which I already know via the accelerometer), and then you have to account for Lambert's cosine law, which is fairly easy. But if the angle of the craft is not horizontal (or even stationary), you don't really know which side of the sun each of your LDRs might be pointing toward. I can use sunrise/sunset tables for my location, but this requires an accurate clock, which is also a problem (the lack of RTCs means that I have to manually set my clocks, and auto-setting them from solar position isn't accurate to better than around 30 minutes, which brings me back to the same problem...). So rather than estimating "where" the sun is, for now I'm going to have to just go out and find it like traditional solar trackers do, by moving the motors to find the brightest point.
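The "find the brightest point" approach boils down to balancing the two opposing LDRs. A minimal Python sketch of the idea, assuming a hypothetical geometry where the two LDRs are mounted at +/- 45 degrees off the craft axis (the offset and the Lambertian sensor model are my illustrative assumptions, not measured values):

```python
import math

def ldr_reading(sun_deg, sensor_deg):
    """Relative illumination of one LDR under Lambert's cosine law."""
    return max(0.0, math.cos(math.radians(sun_deg - sensor_deg)))

def ldr_imbalance(craft_deg, sun_deg, offset_deg=45.0):
    """Difference between the opposing LDRs; zero when the craft faces the sun."""
    a = ldr_reading(sun_deg, craft_deg - offset_deg)  # one LDR
    b = ldr_reading(sun_deg, craft_deg + offset_deg)  # the opposing LDR
    return a - b
```

A tracker can drive the motor in the direction that reduces the imbalance toward zero, without ever needing to compute the sun's absolute position.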
I'm encountering some additional problems, too: the paracord sheath tends to slip over time, allowing the internal fibers to bunch up, making the mechanism less smooth as I test. Running it over the rough PETG snags the fibers and takes a toll on them. And the Sign-Magnitude/Lock-Antiphase orbital mechanics that work so well in testing on a table, which I added to my Park command, need to be heavily tweaked for the forces on the tether.
The capstan/motor is powerful enough to drive and tilt the craft, but the PWM slow-starting torque is low, and raising the torque to overcome the static friction causes jerks. These jerks were fairly low in early testing on the tether, but when I attached the heavy Planetary Mass, the jerking is heavily exaggerated. So I'll have to pulse the motor at higher current to overcome the static friction, then quickly slow it down. I might also experiment with PFM modulation in the future.
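The kick-then-settle idea can be sketched as a simple duty-cycle profile. All of the numbers here are illustrative placeholders, not tuned values from the craft:

```python
# Illustrative values only; real duty cycles depend on the motor and load.
def kickstart_duty(t_ms, run_duty, kick_duty=0.9, kick_ms=50, ramp_ms=100):
    """High-duty kick to break static friction, then ramp down to run duty."""
    if t_ms < kick_ms:
        return kick_duty                     # initial high-current pulse
    if t_ms < kick_ms + ramp_ms:
        frac = (t_ms - kick_ms) / ramp_ms    # linear taper back down
        return kick_duty + frac * (run_duty - kick_duty)
    return run_duty                          # steady-state PWM duty

print(kickstart_duty(0, 0.3), kickstart_duty(100, 0.3), kickstart_duty(200, 0.3))
```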
In the video, you can hear the high-pitched whine of the PWM (since I use audible frequencies), but I have to raise the duty cycle fairly high before the craft starts moving (unless it is on a "downhill", in which case it whips around really fast and has to be slowed down, which adds wear to the paracord). The ball-damper that I added inside the Planetary Mass helps to remove the stress of the jerks, though.
The orbital mechanics aren't going to be a graceful truck driving down a smooth, sinusoidal hill, as I illustrated in my diagram--that's just the idealized path. In reality, it's going to be like an old truck driving down a bumpy, rock-strewn road full of potholes. The driver will have to hit the gas hard at points, then let off and hit the brakes, etc.
06/02/2018 at 05:21 •
Ok, the first version of the source code for TrillSat was released a couple of hours ago under GPL 3.0, and a copy of the source tarball was uploaded to the Files section of this project. It includes the most important files, the OpenSCAD program that generates the STL files for 3D-printing (and animation), the electrical schematic, the C source files for the two ATtiny 1634 microcontrollers, the Lua source for the Huzzah ESP8266 running NodeMCU, and the Python 2/3 and BASH sources for the Raspberry Pi Zero W. The README file also details several smaller changes that need to be made to various configuration files.
The tarball doesn't contain the entire project, only the core craft design files, so I didn't include the 100+ pages of technical information which are available on my TrillSat home page at http://greatfractal.com/TrillSat.html, since I wanted to keep the tarball small. I also left out the STL and G-code files that I used to print the parts, since they consumed several megabytes (and are easy to generate using my OpenSCAD program and Slic3r), and I left out the KiCad source and library files as I only use it for the visual schematic and don't use any of its other features.
The "Three-Tridant Orbital Mechanic" PWM drive system is now working, but it is very crude at this stage. I created a new "Park" command which drives the Planetary Mass at different acceleration/deceleration values and speeds as it passes the Hall-effect switches, automatically switching between Sign-Magnitude and Lock-Antiphase on-the-fly to re-park the mass at sunrise position during the night (and lifting the entire weight of the craft). It works in testing on a table, but I have not yet tested this system on the tether (and I will probably need to tweak the values to match different friction/torque requirements).
I had to match the speeds of Sign-Magnitude and Lock-Antiphase to ensure smooth hand-offs, since Lock-Antiphase runs at twice the frequency of Sign-Magnitude, so I sped up Sign-Magnitude to compensate, rather than slow Lock-Antiphase down. Lock-Antiphase has half the resolution, so slowing it down won't allow a perfect match with Sign-Magnitude during a non-linear acceleration, but Sign-Magnitude can have its resolution cut in half to match, which preserves the congruence.
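The resolution-matching trick amounts to dropping the least-significant bit of the Sign-Magnitude duty so its steps land exactly on Lock-Antiphase steps. A sketch with assumed bit widths (8-bit Sign-Magnitude, 7-bit Lock-Antiphase; the actual register sizes in the firmware may differ):

```python
# Assumed widths for illustration: 8-bit Sign-Magnitude duty, 7-bit
# Lock-Antiphase duty (half the resolution at twice the PWM frequency).
def sm_to_la(sm_duty):
    """Drop the LSB so Sign-Magnitude steps align with Lock-Antiphase steps."""
    return sm_duty >> 1          # 0-255 -> 0-127

def la_to_sm(la_duty):
    """Map a Lock-Antiphase duty back onto the coarsened Sign-Magnitude scale."""
    return la_duty << 1          # 0-127 -> 0-254 (even steps only)
```

Going the other way (doubling Lock-Antiphase resolution) isn't possible, which is why the coarsening has to happen on the Sign-Magnitude side.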
I also had some spontaneous ATtiny reboots during testing, since I moved my bypass capacitor too far away from the ATtiny after I replaced the last BLDC, and the supply voltage dropped below my brown-out detection threshold of 4.3 volts, so I lowered it to 2.7 volts which helped until I can get around to fixing the problem.
The code is crude at this stage, but hopefully it will give others insight into solutions for several problems that I had to overcome: how to take control over the UV-5RA flash memory and minimize writes to the flash using 16-byte blocks (something Chirp wasn't designed to do), how to create a crude XMPP server on a memory-constrained ESP8266 running NodeMCU, how to program a PBBS using AX.25 Unix programming techniques instead of the C library, how to craft the APRS message, and how to wrangle a custom radio interface using a Virtual UART and Virtual PTT line.
It also shows the various ways to control the two very different types of motors under PWM (brushed and brushless) using a single microcontroller, maxing out both of its hardware timers and creatively using interrupts. And of course, there are a lot of different types of IPC, job processing, multiprocessing and concurrency going on with 4 CPUs that interoperate.
It is a fascinating experience for me, and I hope you find some of it useful.