• Going Off-grid, In Town

    04/29/2021 at 22:41 0 comments

    A scene in Barton Fink features a Hollywood screenwriter who is kept on the studio payroll but given no work, because the studio head was so disappointed with his wrestling screenplay.


    The studio head says, "I want you in town Fink, but out of my sight."  Such are the absurdities of contractual obligations. 
    This isn't easily relatable to going off-grid, except for one tangential aspect: when one is off-grid, one is not part of the "social network" that is the internet. But there are many other networks, just as there were in the pre-Internet screenwriting era of the 1940s. 

    I once had a power outage that lasted 5 days, caused by a snow storm: a flash precipitation event left many tree branches buckling under the weight of the snow and downing power lines. The event was otherwise unremarkable, except for the realization of how little I really needed electricity. I was fortunate to have a fully charged Nokia phone at the time, whose superior battery life lasted me through the entire outage, mainly because I powered it off and only checked messages every several hours. That said, a modern phone would not last as long, because that Nokia was from the pre-smartphone era. That era was markedly different in what was considered high-tech. Smartphones are ubiquitous today, but they appear to define high-tech merely out of convenience, rather than from any particular advantage a 6" AMOLED has over a more versatile notebook. 

    In those 5 days I was able to study for a mid-term exam and score much higher than on my previous exams that semester of graduate school. Involuntary offline living is a blessing in disguise for those who are unable to turn off or tune out. That said, environmental cues and peer pressure have a large effect on the pull of social networking. It is very possible that people in rural areas find it easier to turn off the phone and computer, since the external stimuli on a farm or in an arid desert are few and far between. Compared to a Blade Runner city where all matter is cybernetic, finding a balance in technology may require re-evaluating one's entire definition of home. 

    That power outage was many years ago, but recently I decided to try a mini-experiment for only 3 days. You could call this going off-grid, in town. I decided that before I moved into my new condo, I would not turn on the electricity until I absolutely needed to. I could have skipped the wait before signing up for electricity service, but I had some solar panels and non-perishable foods, so I was not in urgent need of refrigeration. I also had a rechargeable battery bank, so I was able to recharge my phone, make calls, and get internet access. Laundry was also a non-issue, since I had enough clean clothes. I have an LED lantern with a lithium battery, so midnight bathroom trips were not difficult, and even felt somewhat like an exploration.

    What I learned from having no electricity for 3 days is that I only really use a tiny fraction of it for truly essential things. I stopped using a full-size refrigerator more than 4 years ago, because I learned that a mini-fridge means much less wasted food. But even if I couldn't freeze anything or keep fresh produce, I would still be able to enjoy fresh foods if I chose to shop more often. I merely use a fridge to save on trips to the store, and only need to visit the grocery every 2 weeks on average. 

    The same applies to computers: a solar panel could recharge a very light laptop, if that were all I used for most of my productivity. But since I have 50- or 100-amp service, I can power several computers and several monitors. 
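    To put rough numbers on that claim (the panel wattage, sun hours, and laptop draw below are illustrative assumptions, not measurements from my setup), the daily energy budget might look like this:

    ```python
    # Back-of-the-envelope solar budget for a light laptop.
    # All figures are illustrative assumptions, not measurements.
    panel_watts = 50           # small portable solar panel
    peak_sun_hours = 5         # usable full-sun-equivalent hours per day
    harvest_wh = panel_watts * peak_sun_hours   # energy harvested per day (Wh)

    laptop_watts = 10          # efficient ultralight laptop under light use
    hours_of_use = 8
    laptop_wh = laptop_watts * hours_of_use     # energy consumed per day (Wh)

    print(f"Daily harvest: {harvest_wh} Wh, laptop draw: {laptop_wh} Wh")
    print(f"Surplus: {harvest_wh - laptop_wh} Wh")
    ```

    Under those assumptions the panel harvests a comfortable surplus; the same arithmetic makes it obvious why several desktops and monitors are out of reach for one small panel.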

    I also enjoy making coffee and using an electric oven, which wouldn't be possible without electricity.

    One thing I also learned to make without electricity is oatmeal- I would pour cold water into a cereal bowl, and a half cup of oatmeal would soak overnight. By morning the...

    Read more »

  • The many levels of newbieness, written by a newb

    04/09/2021 at 13:39 0 comments

    (Edit: I found a forum question that already asked this: https://softwareengineering.stackexchange.com/questions/256833/why-dont-developers-make-installation-wizards-on-linux )

    My favorite thing about the Raspberry Pi Zero is the cost. Since its release in 2015, the Pi Zero has allowed beginners and experts alike to harness computing power without needing a whole lot of other hardware.

    The purpose of the Raspberry Pi is, and always has been, education. The purposes of education and of enterprise-industrial applications are quite different, if not polar opposites. So when something isn't working as I'd like, I try to remind myself that its purpose is to teach how computers work; the purchase does not come with a warranty or some type of software support.

    To describe this "dichotomy," let's first understand that economies of scale are what made the Raspberry Pi so inexpensive. The intent of the Rpi developers is to lower the barrier to affording a computer, not to support all of the most advanced features.

    I think the meaning of basic computing should be explored. What is a basic computer? One that provides internet access and has a common but modern display connector, such as HDMI. These are the basic computing features used by a vast majority of users today, rather than the basic features of an early desktop computer from the 1980s. Back then, basic computing would have included office suites, printing, and intranet. 

    This is in no way a criticism of the Rpi. In fact, I am immensely grateful for their disruptive technology. I own 1 Raspberry Pi 3B+ and 2 Raspberry Pi Zeros (the second Zero I bought just because). However, I hope to find a use for all three; I use one of the Zeros and the 3B+ frequently to test and benchmark the performance of various operating systems, particularly ones that boot from RAM, such as Puppy Linux, DietPi, piCore, and other x86 ports.

    The short explanation is that I am interested in extending the mission of the Rpi by cataloguing the OSs other than Raspbian with the most Raspberry Pi support, and then determining which ones can boot from RAM. Of those, the ones that can run in RAM alongside a select number of applications could then be optimized to run with or without a traditional suite of operating system apps, so that performance utilizes the full amount of RAM, whether it is 512MB, 1GB, or 2GB. In this way, the educational mission of the Rpi can take advantage of a less often used function of the Rpi software: the initrd or initramfs that runs the entire OS in high-speed memory. With the proliferation of carrier boards that natively support NVMe, it is certainly encouraging to see the enhanced performance of the Raspberry Pi Compute Module 4 utilizing the hardware it already has. However, if the original mission of the Raspberry Pi was to educate, then a high-speed native boot drive or PCI Express capability could be considered goal displacement. Which, again, isn't necessarily a bad thing. 
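    As a concrete sketch of the RAM-boot idea (the image file name below is a placeholder, and the exact `rdinit` target depends on how the initramfs is built; the `initramfs` directive itself is standard Raspberry Pi firmware syntax), the firmware can be told to unpack a complete root filesystem into RAM and never mount the SD card as root:

    ```
    # /boot/config.txt -- load a compressed RAM root filesystem after the kernel
    initramfs rootfs.cpio.gz followkernel

    # /boot/cmdline.txt -- run init straight from the unpacked initramfs in RAM
    console=tty1 rdinit=/sbin/init
    ```

    After boot, the SD card is only touched if you mount it yourself, and every file access runs at DRAM speed.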

    The definition of the Raspberry Pi as an "educational tool" could be re-examined a little further by determining how many eras the tool is supposed to help with. Is it supposed to teach only modern operating systems, or obsolete/outdated ones as well? A podcast last month reflects on the software development of the 1990s with a much more critical eye. The segment from 11:30-16:30 discusses software efficiency: the amount of RAM required today is much greater than for the earlier OSes, and efficiency has been lost.

    After listening to this, I thought it would be environmentally responsible to research the Raspberry Pi's performance based on its included RAM, which is far greater than what many early operating systems needed, and whose performance would rival many of the NVMe carrier boards being developed. It is a long-established fact that the hierarchy of computer speed is L1 > L2 > L3 > DRAM > SSD > HDD....
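    A quick, unscientific way to feel that hierarchy on a Pi (paths here are assumptions: /dev/shm is the default tmpfs mount on most Linux systems, and the home directory is assumed to live on the SD card) is to time the same write against RAM-backed tmpfs and the card:

    ```shell
    #!/bin/sh
    # Crude throughput comparison: RAM-backed tmpfs vs. the SD card.
    # Numbers vary widely by card and Pi model -- this only demonstrates
    # the ordering of the hierarchy, not precise speeds.

    echo "tmpfs (RAM):"
    dd if=/dev/zero of=/dev/shm/ddtest.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

    echo "SD card:"
    dd if=/dev/zero of="$HOME/ddtest.bin" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

    # Clean up the test files
    rm -f /dev/shm/ddtest.bin "$HOME/ddtest.bin"
    ```

    The tmpfs number typically comes out an order of magnitude or more ahead, which is the whole appeal of running the OS from RAM.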

    Read more »

  • What is Innovation? A review of 3 common types

    02/27/2021 at 19:53 0 comments

    Innovation is a central part of hacker/maker culture. What does it mean to innovate? The reasons may be personal or entrepreneurial, but the meaning is the same. In this blog post, I will briefly examine two types of innovation, and then review a third type. There are certainly other types of innovation, but I will focus on these three at the nexus of science and technology.

    1. The discovery of a natural property, such as electricity or magnetism, and the development of a product, such as the lightbulb, or inductor.

    2. The modification/or redevelopment of an existing product, such as the incandescent bulb, to make a more efficient light, such as a CFL or LED.

    3. The transplantation of an existing product- such as a lightbulb, into another product, such as a car, to produce the headlight. 

    The third type of innovation interests me the most, because there are brilliant inventors in many different fields, yet it can seem roundabout or unnatural for some people to be receptive to an idea like powering a headlight from an internal combustion engine, even with a battery. I'm sure the idea eventually caught on, considering its practicality, but how long did it take to be adopted? This is the core struggle of innovation. It not only faces a struggle in its own right, that of developing something new or modifying something for a new enhancement, but also faces the struggle of public adoption. Broad adoption opens up many more applications, and is often the main reason a technology is promoted, as opposed to its niche features. 

    Open-source ideology is a great concept. If one looks back into the history of open source, one finds a very strong push to establish what I believe was the first Linux operating system: 

    from https://www.hipeac.net/vision/2021/ (111MB)

    smaller 6-page version here: https://cdn.hackaday.io/files/1777167603401344/192-197%20-%20Copy.pdf

    "The first point is to understand that there are fundamentally two separate families of open source licence. What we call permissive licences (Apache, MIT, BSD) basically allow your users to take what you have provided, use it, modify it and even sell it. They do not even have to tell you what they are doing with it. Most annoyingly, they can take what you have started and, when they make something better out of it, they do not have to share it with anyone else. Particularly at the beginning of the open source movement, this was seen as a major problem, and so called reciprocal licences were developed (GPL, LGPL). This second family of licence asks the user to make systems built using what they have received openly available under the very same licence."

    Why was it a "major" problem? To start, Ubuntu and the Linux kernel didn't exist then. Today, a free and easily downloadable ISO is almost taken for granted. I do not know how many developers wrote the first kernel; 3 at least? Since then there have been thousands of projects and forks of projects. That is normal, because there was no need to develop anything "centralized" anymore; once the software is developed, the hardware is a matter of aesthetics. Yet, as more developers seek to adopt open-hardware projects, I think of some islands of development that are stratified in their capability. There could perhaps be more "mega-projects" to develop some commonality in desired hardware components, such as a mini-ITX-like motherboard in a Raspberry Pi form factor. Of course I am plugging my own laptop project now, but it's more about suggesting any mega-project with a lot of features that a generation of developers would want to use. This isn't to say there is not already a lot of effort toward something like this. What would an open-hardware product look like? Maybe it would use an open RISC-V core like the Berkeley Out-of-Order Machine https://boom-core.org/  If someone wanted to develop an open-source...

    Read more »