The cellular allegory

A project log for NPM - New Programming Model

NPM defines a programming model for new languages and "application processors" that emphasises safety, security and modularity.

Yann Guidon / YGDES 05/30/2020 at 05:06

Here is one way to explain the "vision" of the programming model I am trying to define.

Initially I wanted to use robots in a factory, but a more natural example abounds on Earth: cells. More specifically: eukaryotes. They have a nucleus that contains, among other things, chromosomes, each made of genes, each written with codons of three nucleotides.

The analogy would then be: a chromosome corresponds to a program, a gene to a function that the program provides, and mRNA to a message, that is, a block of data handed from one program to the next.

Our "new computer" can perform parallel execution but requires of course synchronisation and semaphores. The number of execution threads is not specified and can vary wildly, as it is hypothetical so far.

Our computer has "threads": a thread is an active instance of a program. Any number of instances could be running at any time, depending on the implementation. A program starts from one "chromosome" and can then request services from other programs/chromosomes. The callee can accept or refuse, depending on its own policy. This is performed with the "IPC/IPE/IPR" mechanism.
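
A minimal sketch of what such a service call could look like, assuming a policy callback that lets the callee refuse a caller; all names (service, ipc_call, etc.) are invented for illustration and are not a defined NPM API.

```c
/* Hypothetical sketch of a cross-program service call: the callee's
 * policy decides whether to accept or refuse the request. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { REQ_ACCEPTED, REQ_REFUSED } req_status;

/* The callee publishes an entry point plus a policy check. */
typedef struct {
    const char *name;
    bool (*policy)(int caller_id);           /* may refuse the caller */
    int  (*handler)(int caller_id, int arg); /* the actual service */
} service;

static bool allow_all(int caller_id) { (void)caller_id; return true; }
static int  double_it(int caller_id, int arg) { (void)caller_id; return arg * 2; }

/* A stand-in for the IPC/IPE/IPR mechanism: check policy, then run. */
static req_status ipc_call(const service *s, int caller_id, int arg, int *out)
{
    if (!s->policy(caller_id))
        return REQ_REFUSED;
    *out = s->handler(caller_id, arg);
    return REQ_ACCEPTED;
}

int main(void)
{
    service s = { "double", allow_all, double_it };
    int result;
    if (ipc_call(&s, 42, 21, &result) == REQ_ACCEPTED)
        printf("service returned %d\n", result);
    return 0;
}
```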

Communication is essential and "zero-copy" is required. How is our mRNA implemented? Small chunks of data would fit in the registers, larger blocks require memory. The paging system ensures that a block of data has only one owner, and the sender "yields" ownership of a data block to the callee. To prevent dangling and zombie blocks in case the callee has an issue, the callee must "accept" the new block (otherwise the block is garbage-collected). This block could then be yielded again to another program, and passed along a string of "genes" to perform any required processing.
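
Here is a rough sketch of the yield/accept handshake on a block, with ownership tracked explicitly so that an unaccepted block can be reclaimed; the structure and function names are placeholders, not the actual mechanism.

```c
/* Sketch of the yield/accept handshake: a yielded block that is never
 * accepted stays in the YIELDED state and can be garbage-collected. */
#include <stdio.h>

typedef enum { OWNED, YIELDED, FREE } block_state;

typedef struct {
    int         owner;       /* current owning thread/program */
    int         next_owner;  /* candidate owner while YIELDED */
    block_state state;
} block;

/* Sender offers the block to 'callee'; it gives up ownership here. */
static void block_yield(block *b, int callee)
{
    b->next_owner = callee;
    b->state = YIELDED;
}

/* Callee must explicitly accept, otherwise the block stays collectable. */
static int block_accept(block *b, int callee)
{
    if (b->state != YIELDED || b->next_owner != callee)
        return -1;             /* nothing was offered to this callee */
    b->owner = callee;
    b->state = OWNED;
    return 0;
}

/* Garbage collector reclaims blocks that were yielded but never accepted. */
static void block_collect(block *b)
{
    if (b->state == YIELDED) {
        b->state = FREE;
        b->owner = -1;
    }
}

int main(void)
{
    block b = { .owner = 1, .state = OWNED };
    block_yield(&b, 2);          /* program 1 passes the block to program 2 */
    if (block_accept(&b, 2) == 0)
        printf("block now owned by %d\n", b.owner);
    block_collect(&b);           /* no-op here: the block was accepted */
    return 0;
}
```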

Paging ensures that any access outside of the message traps. It's not foolproof because data will rarely fill a whole page, so both smaller and larger page sizes are needed (with F-CPU we talked about 512, 4096, 32768 and 262144 bytes).
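
As a small illustration, the sketch below picks the smallest of those four page sizes that holds a given message, and shows how much of the page is left as unprotected slack; the helper names are made up.

```c
/* Pick the smallest of the four page sizes discussed for F-CPU
 * (512, 4096, 32768, 262144 bytes) that holds a message. */
#include <stdio.h>
#include <stddef.h>

static const size_t page_sizes[] = { 512, 4096, 32768, 262144 };

static size_t pick_page_size(size_t msg_len)
{
    for (size_t i = 0; i < sizeof page_sizes / sizeof page_sizes[0]; i++)
        if (msg_len <= page_sizes[i])
            return page_sizes[i];
    return 0;   /* too large for one page: needs several pages */
}

int main(void)
{
    size_t len = 3000;
    size_t page = pick_page_size(len);
    printf("message of %zu bytes -> %zu-byte page, %zu bytes of slack\n",
           len, page, page - len);
    return 0;
}
```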

This also means that there must be a fast and simple page allocator that can hand out pages at high rates, and also provide some garbage collection and TLB cleanup.
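
A free-list allocator is one plausible way to get that speed: allocation and release are constant-time. The sketch below is only an illustration of the idea, with the TLB flush reduced to a comment.

```c
/* Very simple free-list page allocator: constant-time alloc and free. */
#include <stdio.h>

#define PAGE_SIZE  4096
#define PAGE_COUNT 16

/* Each free page stores the link to the next free page in its first bytes. */
typedef union page { union page *next; unsigned char bytes[PAGE_SIZE]; } page;

static page pool[PAGE_COUNT];
static page *free_list;

static void pool_init(void)
{
    for (int i = 0; i < PAGE_COUNT; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

static void *page_alloc(void)
{
    page *p = free_list;
    if (p)
        free_list = p->next;
    return p;                    /* NULL when the pool is exhausted */
}

static void page_free(void *ptr)
{
    /* A real implementation would also flush the TLB entry for this page. */
    page *p = ptr;
    p->next = free_list;
    free_list = p;
}

int main(void)
{
    pool_init();
    void *a = page_alloc();
    printf("got page at %p\n", a);
    page_free(a);
    return 0;
}
```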

There is a shared space, and each thread has a private area. The private area holds the stack(s) and resides at negative addresses. Positive addresses are shared, though only one thread can "own" a given block at a time. This reduces the chances of stack disruption.
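
A trivial sketch of the split, treating the address as a signed number so the private half is simply the negative range; this is only meant to illustrate the sign test.

```c
/* Private (per-thread) space in the negative half of the address space,
 * shared blocks in the positive half. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static bool is_private(int64_t addr) { return addr < 0; }

int main(void)
{
    int64_t stack_slot   = -0x1000;   /* private: negative addresses */
    int64_t shared_block =  0x4000;   /* shared: positive addresses  */
    printf("stack slot private?   %d\n", is_private(stack_slot));
    printf("shared block private? %d\n", is_private(shared_block));
    return 0;
}
```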

As described before, the "yield" operation marks a block as a candidate for transfer to the callee. The "accept" operation can then validate and acknowledge the reception, and possibly remap the block into the callee's private space.

Processor support is required for the best performance, but these features could be implemented on existing systems through emulation. It would be slow (yes, the Hurd was slow too) but it would help refine the API.
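
For instance, on a POSIX system the yield/accept pair could be approximated with mprotect(): revoking the sender's access on yield and restoring access on accept. The sketch below stays inside one process for brevity, so it is only a crude stand-in for moving a page between address spaces.

```c
/* Rough emulation of yield/accept with POSIX memory protection. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE 4096

int main(void)
{
    /* Allocate one page to act as the message block. */
    char *block = mmap(NULL, PAGE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (block == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(block, "hello, callee");          /* sender fills the block */

    /* "yield": the sender loses all access; any further access traps. */
    mprotect(block, PAGE, PROT_NONE);

    /* "accept": the callee acknowledges the offer and regains access. */
    mprotect(block, PAGE, PROT_READ | PROT_WRITE);
    printf("callee reads: %s\n", block);

    munmap(block, PAGE);
    return 0;
}
```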

Similarly, there is no mention of a "kernel": all "programs" are structurally identical. Each of them provides a function of some sort, and the only difference is the access rights they hold. Each "thread" either inherits these rights from the thread that creates it, or can selectively "drop" some of them. This is not even a "microkernel" approach if there is no kernel!
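
A small sketch of how such rights could be handled, assuming a plain bit mask: a new thread gets at most its creator's rights, minus whatever it drops. The right names are invented for the example.

```c
/* Rights handling without a kernel: a thread inherits its creator's
 * rights and can only drop them, never add new ones. */
#include <stdio.h>
#include <stdint.h>

enum { RIGHT_IO = 1u << 0, RIGHT_SPAWN = 1u << 1, RIGHT_NET = 1u << 2 };

typedef struct { uint32_t rights; } thread_ctx;

/* A new thread starts with at most its parent's rights, minus what it drops. */
static thread_ctx spawn(const thread_ctx *parent, uint32_t dropped)
{
    thread_ctx child = { parent->rights & ~dropped };
    return child;
}

int main(void)
{
    thread_ctx parent = { RIGHT_IO | RIGHT_SPAWN | RIGHT_NET };
    thread_ctx child  = spawn(&parent, RIGHT_NET);   /* give up network access */
    printf("child rights: 0x%x\n", child.rights);
    return 0;
}
```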

However, in the first implementations no hardware provides those features, so a nanokernel is necessary to perform them while we develop the software stack.