04/29/2021 at 22:41 •
A scene in Barton Fink has a Hollywood scriptwriter kept on the payroll but given no work, because the studio head was so disappointed with his wrestling screenplay.
The studio head says, "I want you in town Fink, but out of my sight." Such are the absurdities of contractual obligations.
This isn't easily relatable to going off-grid, except for one tangential aspect: when one is off-grid, one is not part of the "social network" that is the internet. But there are many other networks, just as there were in the pre-internet screenwriting era of the 1940s.
I once had a five-day power outage, when a snowstorm with flash precipitation caused many tree branches to buckle under the weight and bring down power lines. The event was otherwise unremarkable, except for the realization of how little I really needed electricity. I was fortunate to have a fully charged Nokia phone at the time, whose superior battery life lasted me through the entire outage, mainly because I powered it off and only checked messages every several hours. That said, no modern phone would last as long, because that Nokia was from the pre-smartphone era, an era markedly different in what was considered high-tech. Smartphones are ubiquitous today, but they appear to define high-tech merely out of convenience, rather than through any particular advantage a 6" AMOLED has over a more versatile notebook.
In those 5 days I was able to study for a mid-term exam and score much higher than on my previous exams that semester of graduate school. Involuntary offline living is a blessing in disguise for those who are unable to turn off or tune out. That said, environmental cues and peer pressure have a large effect on the pull of social networking. It is very possible that people in rural areas find it easier to turn off the phone and computer, since external stimuli on a farm or in an arid desert are few and far between. Compared to a Blade Runner city where all matter is cybernetic, finding a balance in technology may require re-evaluating one's entire definition of home.
That power outage was many years ago, but recently I decided to try a mini-experiment for just 3 days. You could call it going off-grid, in town. I decided that before I moved into my new condo, I would not turn on the electricity until I absolutely needed to. I could have signed up for electric service right away, but I had some solar panels and non-perishable foods, so I was not in urgent need of refrigeration. I also had a rechargeable battery bank, so I was able to keep my phone charged, make calls, and stay online. Laundry was a non-issue, since I had enough clean clothes. And I have an LED lantern with a lithium battery, so midnight bathroom trips were not difficult; they even felt somewhat like an exploration.
What I learned from having no electricity for 3 days is that I only use a tiny fraction of it for truly essential things. I stopped using a full-size refrigerator more than 4 years ago, because I learned that a mini-fridge means much less wasted food. But even if I couldn't freeze or keep fresh produce at all, I would still be able to enjoy fresh foods if I chose to shop more often. I merely use a fridge to save on trips to the store, and even so I only need to visit the grocery every 2 weeks on average.
The same applies to computers: a solar panel could recharge a very light laptop if I used only that for most of my productivity, but since I have 50 or 100 amp service, I can power several computers and several monitors instead.
I also enjoy making coffee and using an electric oven, which wouldn't be possible without electricity.
One thing I also learned to make without electricity is oatmeal: I would pour cold water into a cereal bowl and let a half cup of oatmeal soak overnight. By morning the…
04/09/2021 at 13:39 •
(Edit: I found a forum question that already asked this: https://softwareengineering.stackexchange.com/questions/256833/why-dont-developers-make-installation-wizards-on-linux )
My favorite thing about the Raspberry Pi Zero is the cost. Since its release in 2015, the Pi Zero has allowed beginners and experts alike to harness computing power without needing a whole lot of other hardware.
The purpose of the Raspberry Pi is, and always has been, education. The purposes of education and of enterprise-industrial applications are quite different, if not polar opposites. So when something isn't working as I'd like, I try to remind myself that its purpose is learning how computers work, not a purchase that comes with a warranty or some type of software support.
To describe this "dichotomy," let's first understand that economies of scale are what made the Raspberry Pi so inexpensive. The intent of the Rpi developers is to lower the barrier to affording a computer, not to support all the most advanced features.
I think the meaning of basic computing should be explored. What is a basic computer? One that provides internet access and has a common but modern display connector, such as HDMI. These are the basic computing features used by the vast majority of users today, as opposed to the basic features of an early desktop computer from the 1980s. Back then, basic computing meant office suites, printing, and the intranet.
This is in no way a criticism of the Rpi. In fact, I am immensely grateful for their disruptive technology. I own 1 Raspberry Pi 3B+ and 2 Raspberry Pi Zeros (the second Zero I bought just because). However, I hope to find a use for all three. I use one of the Zeros and the 3B+ frequently to test and benchmark the performance of various operating systems, particularly ones that boot from RAM, such as Puppy Linux, DietPi, piCore, and other ports of x86 distributions.
The short explanation is that I am interested in extending the mission of the Rpi by cataloguing the operating systems other than Raspbian with the most Raspberry Pi support, and then determining which ones can boot from RAM. Of those that can run in RAM with a select number of applications, some could then be optimized to run with or without a traditional suite of operating system apps, so that performance utilizes the full amount of RAM, whether it is 512MB, 1GB, or 2GB. In this way, the educational mission of the Rpi can exercise a less often used function of its software: the initrd or initramfs that runs the entire OS in high-speed memory. With the proliferation of carrier boards that natively support NVMe, it is certainly encouraging to see the enhanced performance of the Raspberry Pi Compute Module 4 utilizing the hardware it already has. However, if the original mission of the Raspberry Pi was to educate, then a high-speed native boot drive or PCI Express capability could be considered goal displacement. Which, again, isn't necessarily a bad thing.
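As a concrete starting point for that cataloguing, here is a minimal sketch of the kind of check I run. It is my own helper, not part of any Rpi tooling, and it assumes a Linux system with procfs mounted; it reports whether the root filesystem is RAM-backed and how much memory is left for applications:

```python
#!/usr/bin/env python3
"""Report whether the running OS keeps its root filesystem in RAM.

A minimal sketch for cataloguing RAM-booting distributions: it reads
/proc/mounts for the root entry and /proc/meminfo for headroom.
Assumes a Linux system with procfs mounted.
"""

# Filesystem types that live entirely in RAM. Overlay/aufs setups
# (as in Puppy Linux) need a closer look at their upper layer.
RAM_BACKED = {"tmpfs", "ramfs", "rootfs"}

def root_fs_type():
    # Each /proc/mounts line: "device mountpoint fstype options ..."
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype = line.split()[:3]
            if mountpoint == "/":
                return fstype
    return "unknown"

def mem_available_mb():
    # MemAvailable: the kernel's estimate of memory free for new work.
    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024  # kB -> MB
    return 0

if __name__ == "__main__":
    fstype = root_fs_type()
    print(f"root filesystem type: {fstype}")
    print(f"running from RAM: {'yes' if fstype in RAM_BACKED else 'no'}")
    print(f"memory available for applications: {mem_available_mb()} MB")
```

On a distribution that copies itself into RAM, this should report tmpfs or rootfs; on stock Raspbian it reports ext4, which is exactly the difference I want to catalogue.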
The definition of the Raspberry Pi as an "educational tool" could be re-examined a little further by asking how many eras the tool is supposed to help with. Is it supposed to teach only modern operating systems, or obsolete and outdated ones too? A podcast last month reflected on the software development of the 1990s with a much more critical eye. The segment from 11:30-16:30 discusses software efficiency: the amount of RAM required today is much greater than for the earlier OSes, and efficiency has been lost.
After listening to this, I thought it would be environmentally responsible to research the Raspberry Pi's performance based on its included RAM, which is far greater than many early operating systems required, and whose performance would rival many of the NVMe carrier boards being developed. It is a long-established fact that the hierarchy of computer speed is L1 > L2 > L3 > DRAM > SSD > HDD…
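A crude way to see the bottom of that hierarchy on real hardware is to time a fresh read from storage against a second pass over the same data already held in RAM. The sketch below is illustrative only: the file path is a placeholder, and a careful benchmark would drop the page cache between runs.

```python
#!/usr/bin/env python3
"""Rough RAM-versus-storage read throughput comparison.

Illustrates the DRAM > SSD/SD-card end of the memory hierarchy.
TEST_FILE is a placeholder: on a Raspberry Pi, point it at a large
file on the SD card, and run on a fresh boot (or drop the page
cache) so the first pass really comes from storage.
"""
import time

TEST_FILE = "/tmp/testfile.bin"  # placeholder path to a large file
CHUNK = 1 << 20                  # 1 MiB chunks

# Pass 1: first touch, read from storage.
start = time.perf_counter()
total = 0
with open(TEST_FILE, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
storage_rate = total / (time.perf_counter() - start) / 1e6

# Pass 2: the same bytes, already resident in RAM.
data = open(TEST_FILE, "rb").read()
start = time.perf_counter()
pos = 0
while pos < len(data):
    pos += len(data[pos:pos + CHUNK])  # bytes slicing copies from RAM
ram_rate = pos / (time.perf_counter() - start) / 1e6

print(f"storage read: {storage_rate:8.1f} MB/s")
print(f"RAM read:     {ram_rate:8.1f} MB/s")
```

On an SD-card Pi the two numbers typically differ by an order of magnitude or more, which is the whole argument for initramfs-style booting.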
02/27/2021 at 19:53 •
Innovation is a central part of hacker/maker culture. What does it mean to innovate? The reasons may be personal or entrepreneurial, but the meaning is the same. In this blog post, I will briefly examine two types of innovation, and then review a third. There are certainly other types of innovation, but I will focus on these at the nexus of science and technology.
1. The discovery of a natural property, such as electricity or magnetism, and its development into a product, such as the lightbulb or the inductor.
2. The modification or redevelopment of an existing product, such as the incandescent bulb, to make a more efficient light, such as a CFL or LED.
3. The transplantation of an existing product, such as a lightbulb, into another product, such as a car, to produce the headlight.
The third type of innovation interests me the most, because there are brilliant inventors in many different fields. Yet it is a bit unnatural for some people to be receptive to an idea like powering a headlight from an internal combustion engine, even with a battery in between. I'm sure the idea eventually caught on, considering its practicality, but how long did it take to be adopted? This is the core struggle of innovation. It not only faces a struggle in its own right, that of developing something new or enhancing something that exists, but also a struggle for public adoption, where many more applications open up; broad adoption is often the main reason for promoting a technology, as opposed to its niche features.
Open-source ideology is a great concept. If one looks back into the history of open source, one finds a very strong push behind the establishment of what I believe was the first Linux operating system:
from https://www.hipeac.net/vision/2021/ (111MB)
smaller 6-page version here: https://cdn.hackaday.io/files/1777167603401344/192-197%20-%20Copy.pdf
"The first point is to understand that
there are fundamentally two separate
families of open source licence. What we call
permissive licences (Apache, MIT, BSD)
basically allow your users to take what you
have provided, use it, modify it and even
sell it. They do not even have to tell you
what they are doing with it. Most
annoyingly, they can take what you have
started and, when they make something better
out of it, they do not have to share it with
anyone else. Particularly at the beginning
of the open source movement, this was
seen as a major problem, and so called
reciprocal licences were developed (GPL,
LGPL). This second family of licence asks
the user to make systems built using what
they have received openly available under
the very same licence."
Why was it a "major" problem? To start, Ubuntu and the Linux kernel didn't exist then. Today, a free and easily downloadable ISO is almost taken for granted. I do not know how many developers wrote the first kernel; 3 at least? Since then there have been thousands of projects and forks of projects. That is normal, because once the software is developed there is no need for anything "centralized" anymore; the hardware becomes a matter of aesthetics. Yet, as more developers adopt open-hardware projects, I think of islands of development stratified in their capability. There could perhaps be more "mega-projects" to develop some commonality in desired hardware components, such as a mini-ITX-like motherboard in a Raspberry Pi form factor. Of course I am plugging my own laptop project now, but it's more about suggesting any mega-project with a feature set that a generation of developers would want to use. This isn't to say there is not already a lot of effort toward something like this. What would an open-hardware product look like? Maybe it would use an open RISC-V core like the Berkeley Out-of-Order Machine https://boom-core.org/ If someone wanted to develop an open-source…
02/27/2021 at 04:11 •
Why not just use a microprocessor?
Why not use a microcontroller?
These are the questions I anticipate. I think there is a mutually exclusive comfort zone for each preference, and a strong revulsion in between. Let's build a microcontroller that operates like a microprocessor. Or a microprocessor that resembles a microcontroller?
02/12/2021 at 15:55 •
November 12, 1955
Today, Marty showed me a portable television studio from my 2045 self, which I connected to my TV; it showed me how a "solar capacitor mainframe" works.
The mainframe machine looked very much like a television, although it had no power cord. It wasn't powered by anything apparent. I thought it might be powered by some type of fusion, but my 2045 self quickly dispelled any notion of that.
I had to rewind the tape because I misheard a part. When I replayed it, I heard: 1.21 milliwatts!! In 1955, the only thing that could supply 1.21 milliwatts indoors is a bulb of lighting!
02/03/2021 at 21:44 •
In 1968, the first computer mouse and hypertext were demonstrated by Douglas Engelbart.
"For van Dam - and others who witnessed it first hand - NLS went much deeper than the GUIs of today. "Everything inter-operated in this super rich environment. And if you look at the demo carefully, it's about modifying, it's about studying, it's about being really analytical, and reflecting about what's happening.""
..."When Kay thinks of NLS, he remembers Henry David Thoreau's response to the first transatlantic cable. "We are eager to tunnel under the Atlantic and bring the Old World some weeks closer to the New," Thoreau wrote in Walden. "But perchance the first news that will leak through into the broad, flapping American ear will be that Princess Adelaide has the whooping cough."
Kay calls this "one of the earliest examples of capturing the two sides of technology. Here's this incredibly difficult technical feat that could be used for expressing important information back and forth. But Thoreau understood exactly who human beings are and what they like to do with their technology."
"NLS wasn't meant to enhance the transatlantic cable, Kay says. It was meant to create an entirely new form of collaboration. "It showed that ideas could be organized in a different way, built in a different way. What we were looking at was not something that was trying to imitate what was already there.
"The jury is still out on how long - and whether - people are actually going to understand this." It took the world 150 years to realize the true power of the printing press, Kays says, citing Thomas Paine's Common Sense as the publication that finally did the trick. And he wonders if we will need another 150 years to embrace NLS."
"Prior to the demonstration, a significant portion of the computer science community thought Engelbart was "a crackpot."
I think about the eccentric people out there who are the next Douglas Engelbart. Not me! I am too normal. ;)
02/03/2021 at 04:33 •
"Any sufficiently advanced technology is indistinguishable from magic." - Clarke
"A robot may not injure a human being or, through inaction, allow a human being to come to harm."- Asimov
"One man's "magic" is another man's engineering" -Heinlein
Writers have writer's block. Do engineers have engineer's block? So much of today's technology is the product of an inspired sci-fi story: someone thought of a futuristic product and made it happen.
I like to think of a non-biblical interpretation of the Tower of Babel. Imagine a number of scientists and engineers working on different technologies - transistors, rockets, radios - but they do not speak the same language. One day they determine they want to fly to the moon, and decide they need to learn each other's languages to build a space shuttle with ground-station communications. But if all the different specialties decided not to learn each other's languages, then there would be no way to build the space shuttle. Unless there were translators who could understand the concepts between the fields, even without all the technical details, so long as each interface could be communicated in terms of its purpose and its connections: how the radios link up to the central processing unit, and how the rockets are programmed to switch on or off.
That is the barrier I see between science fiction and engineering. Engineering is the science of someone else's fiction.
01/03/2021 at 05:37 •
Building a solar-powered e-ink laptop means forging nearly contradictory ideas: powering a relatively power-hungry device from an energy source that is not available 24 hours a day, especially indoors. That said, it represents an ambitious goal of designing a laptop around minimalistic needs, rather than around the greatest number of transistors.
In selecting a processor, a display, and a solar panel, I am beginning this project with an open mind as to the type of chip. I have not even settled on whether to use a microprocessor or a microcontroller. I am not an engineer, nor much of a hacker. I am more a collector and experimentalist, one who explores what is possible and whether it is practical. It's more like a pre-engineering, or pre-hacking, warm-up. My approach to hacking is a social endeavor: inquiring across multiple software and hardware development communities to sample concepts and get an idea of what hasn't been tried but is ready for a project. Approximately 8 years ago I had this idea, but with the state of Dennard scaling, semiconductors were still too power-inefficient to even consider solar power.
While most people aren't focused on energy efficiency in terms of the cost to compute their individual needs, the long view of power consumption under Moore's law has opened up quite a few opportunities for self-powered devices like wearables. A fairly accurate prediction made by Intel 8 years ago is discussed in the ExtremeTech article: computing is nearly power-free. But it is not zero-power. Still, powering a single-core processor from a solar panel seems tantalizingly close to achievable in 2021. To start out, one can begin with a very simple gadget, like a microcontroller. There are already some solar-powered Internet of Things devices, like remote sensors with an ARM Cortex-M0+, but what if a laptop could be powered by a very lightweight microcontroller? One that might run only a couple of Linux-like processes: a word processor in static RAM, with additional applications retrieved from external storage? Multithreading isn't prohibitive on a microcontroller, and simplifying a laptop by archiving applications until they are needed could be a way to have a fast OS, even on a microcontroller running at a couple hundred MHz.
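To put rough numbers on "tantalizingly close," here is a back-of-envelope energy budget. Every figure below is an assumption of mine for illustration, not a measurement of any particular part:

```python
#!/usr/bin/env python3
"""Back-of-envelope solar budget for a microcontroller laptop.

Every figure is an assumption for illustration, not a measurement;
adjust for real parts. Power in watts, energy in watt-hours.
"""

# Assumed supply -------------------------------------------------------
panel_watts      = 5.0    # small 5 W panel, rated output
sun_hours        = 4.0    # usable full-sun-equivalent hours per day
harvest_eff      = 0.70   # charge controller + battery losses

daily_energy_wh = panel_watts * sun_hours * harvest_eff

# Assumed load ---------------------------------------------------------
mcu_active_w     = 0.15   # Cortex-M-class MCU at a few hundred MHz
eink_idle_w      = 0.0    # e-ink holds a static page for free
eink_refresh_wh  = 0.001  # guessed energy per full-page refresh
refreshes_per_hr = 120    # two page updates a minute while writing

load_w = mcu_active_w + eink_idle_w + eink_refresh_wh * refreshes_per_hr

runtime_hours = daily_energy_wh / load_w
print(f"harvested per day: {daily_energy_wh:.1f} Wh")
print(f"average draw:      {load_w * 1000:.0f} mW")
print(f"daily runtime:     {runtime_hours:.0f} hours of active use")
```

Under these guesses the panel harvests about 14 Wh a day against an average draw near 270 mW, so the budget closes with room to spare; the fragile assumptions are the sun-hours and the display refresh energy.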
When solid-state drives were first released, they were reserved for boot and OS files, while applications and documents stayed on the HDD due to the cost and limited size of early SSDs. In the same way, as microcontrollers become even more power-efficient with TSMC's 22nm ULL Cortex-M4F parts, I wonder whether microcontrollers today are like those power-efficient SSDs in the early days of solid-state speed gains. Microcontrollers are being powered by batteries for days, months, and years, so why not a laptop? A few real-time operating systems, like Zephyr and RIOT OS, have brought a few Linux-like features to microcontrollers.
Today's operating systems run much faster on less power, and the same could be said for microcontrollers with more RAM. So if a lightweight Linux-like OS can fit on a microcontroller, while accessing external storage for less-critical applications, then power efficiency could again be reached, making a solar laptop more feasible.
The Raspberry Pi Zero is an accessible testbed for a low-power laptop, although it uses an ARM11 core from the 45-90nm process era. Swapping it for a single-core 5nm Cortex-A78 (ARMv8.2) running at 3GHz would not be much more power-efficient. But having a processor underclocked to the point of significant power reduction, without noticeable performance loss, is necessary for building a solar laptop. The latest and…
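To make that underclocking idea concrete: on Linux boards, clock limits are typically exposed through the kernel's cpufreq sysfs interface, and a small script can report the available headroom. A minimal sketch, using the standard cpufreq paths (availability varies by board and kernel; on a Raspberry Pi the firmware's arm_freq setting in config.txt sets the ceiling):

```python
#!/usr/bin/env python3
"""Inspect cpufreq limits before underclocking.

Reads the standard sysfs cpufreq files for CPU 0. Which files exist
varies by board and kernel, so missing entries report "n/a".
"""
BASE = "/sys/devices/system/cpu/cpu0/cpufreq"

def read(name):
    try:
        with open(f"{BASE}/{name}") as f:
            return f.read().strip()
    except FileNotFoundError:
        return "n/a"

governor = read("scaling_governor")
current  = read("scaling_cur_freq")
fmin     = read("scaling_min_freq")
fmax     = read("scaling_max_freq")

print(f"governor: {governor}")
print(f"current:  {current} kHz, range {fmin}-{fmax} kHz")

# Lowering scaling_max_freq (as root) caps the clock, e.g.:
#   echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# Dynamic power scales roughly with f*V^2, so halving the clock at a
# lower voltage point can cut power by well over half.
```

The interesting experiment is stepping the cap down until the e-ink workflow (typing, page refreshes) first feels slower, then measuring the draw at that floor.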