In the month since my previous post, I've been busy reworking the entire Makernet software framework. Nearly every line of code has been rewritten and is now posted to GitHub here: https://github.com/jgilbert20/Makernet. The major advantage of the rewrite is pluggable support for datalink layers other than I2C.
The initial version of the Makernet code was long overdue for an overhaul. I had originally cobbled it together when I was still learning about I2C internals, and many of the concepts were based on EKT's code from the Modulo project. This was fine for a proof of concept, but I quickly realized the architecture would be impossible to maintain unless I created a proper networking stack.
The new version of Makernet is a proper layered network stack along the lines of the OSI model. The first layer is the datalink, which provides medium access control and frames input and output. In the case of I2C, these frames are I2C transactions, but the code is now general enough to support nearly any way a packet of information can get from A to B.
One nice advantage: the datalink layer handles all hardware concerns, making the upper layers of the stack fully hardware-neutral. In fact, I implemented a second datalink layer based on UNIX domain sockets, which allows Makernet to run over interprocess communication on macOS and Linux!
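To make the idea concrete, here is a minimal sketch of what such a pluggable datalink interface might look like. The class and method names here are illustrative, not the actual Makernet API; the loopback implementation stands in for a real transport in the same spirit as the UNIX-domain-socket datalink.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch of a pluggable datalink interface.
// The real Makernet class and method names may differ.
class Datalink {
public:
    virtual ~Datalink() {}
    // Bring up the underlying medium (I2C peripheral, socket, etc.)
    virtual void initialize() = 0;
    // Send one frame; returns the number of bytes actually written.
    virtual int sendFrame(const uint8_t *buffer, size_t len) = 0;
    // Poll for an inbound frame; returns bytes received, 0 if none.
    virtual int pollFrame(uint8_t *buffer, size_t maxLen) = 0;
};

// A loopback datalink: whatever is sent comes right back. Useful for
// exercising the upper layers with no hardware at all.
class LoopbackDatalink : public Datalink {
    uint8_t pending[64];
    size_t pendingLen = 0;
public:
    void initialize() override { pendingLen = 0; }
    int sendFrame(const uint8_t *buffer, size_t len) override {
        if (len > sizeof(pending)) len = sizeof(pending);
        for (size_t i = 0; i < len; i++) pending[i] = buffer[i];
        pendingLen = len;
        return (int)len;
    }
    int pollFrame(uint8_t *buffer, size_t maxLen) override {
        size_t n = pendingLen < maxLen ? pendingLen : maxLen;
        for (size_t i = 0; i < n; i++) buffer[i] = pending[i];
        pendingLen = 0;
        return (int)n;
    }
};
```

Because everything above the datalink only ever talks to the abstract interface, swapping I2C for sockets (or a loopback) is a matter of constructing a different subclass.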
Once frames leave the datalink layer, they move up to the network layer, where they are interpreted as packets and dispatched to registered handlers called Services. Because the packet format is constant across different datalinks, the rest of the stack works unchanged with new transports like RFM69HW radios or RS485.
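A sketch of how that dispatch might work, assuming a simple port-number field in the packet header (the field names, port scheme, and class names here are my own illustration, not the actual Makernet wire format):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical packet: a destination port selects the receiving service.
struct Packet {
    uint8_t destPort;
    const uint8_t *payload;
    size_t len;
};

// Services register with the network layer and receive matching packets.
class Service {
public:
    virtual ~Service() {}
    virtual void handlePacket(const Packet &p) = 0;
};

class Network {
    static const int MAX_SERVICES = 16;
    Service *services[MAX_SERVICES] = {nullptr};
public:
    void registerService(uint8_t port, Service *s) {
        if (port < MAX_SERVICES) services[port] = s;
    }
    // Called by whichever datalink parsed the frame -- by this point,
    // the transport in use is invisible.
    void dispatch(const Packet &p) {
        if (p.destPort < MAX_SERVICES && services[p.destPort])
            services[p.destPort]->handlePacket(p);
    }
};

// A trivial service that just counts packets, for illustration.
class CountingService : public Service {
public:
    int count = 0;
    void handlePacket(const Packet &p) override { (void)p; count++; }
};
```

The key point is that `dispatch` neither knows nor cares whether the packet arrived over I2C, a socket, or a radio.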
The code is now about 20% more compact than before, and all of the packet routing is fully object-oriented, making it much easier to understand how packets and information flow between layers and to extend the framework in the future.
The next innovation was removing command syntax entirely from the Makernet specification. In the early version, there was the idea of a command sent from a master device to a slave device that caused something to happen. This imperative style is very easy to implement but causes a number of problems. First, it does not handle irregular communication well. Say your command from A to B is lost on the network. Do you just sit there polling for an ACK while the rest of your code grinds to a halt?
Furthermore, commands make it entirely too easy to forget about state, which can be very pernicious in a truly asynchronous programming model. Consider the example of a neopixel strip N and a controller/master C. The code on C fires off a bunch of commands to draw some pixels. What happens if strip N disconnects and reconnects? Or experiences a software reset? Suddenly, the code no longer functions as intended, because each "command" to the strip was actually updating state on the remote node, and now all that state is lost.
To replace the imperative command syntax, I created a built-in mechanism called mailboxes. In this model, master and slave devices share a set of connected, automatically synchronized small buffers. When one buffer is updated, its counterpart on the remote end is synchronized automatically by the networking services. Now all of the state assumptions are clearly defined. Mailboxes are clean C++ object types that can be subclassed and adapted to nearly any communication need, including streams.
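Here is a minimal sketch of the mailbox idea: a small buffer that tracks whether it has changed since it was last synchronized, so the networking service knows when to emit a sync packet. All names are illustrative; the real Makernet mailbox classes will differ.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical mailbox: a small buffer with a dirty flag. The networking
// service polls needsSync() and, when true, sends the contents to the
// remote end; the remote side applies it via receiveSync().
class SmallMailbox {
    uint8_t contents[8];
    size_t size = 0;
    bool dirty = false;
public:
    void set(const uint8_t *data, size_t len) {
        if (len > sizeof(contents)) len = sizeof(contents);
        memcpy(contents, data, len);
        size = len;
        dirty = true;            // mark for synchronization
    }
    bool needsSync() const { return dirty; }
    size_t getForSync(uint8_t *out, size_t maxLen) const {
        size_t n = size < maxLen ? size : maxLen;
        memcpy(out, contents, n);
        return n;
    }
    void markSynced() { dirty = false; }
    // An incoming sync packet lands here on the remote node.
    void receiveSync(const uint8_t *data, size_t len) {
        if (len > sizeof(contents)) len = sizeof(contents);
        memcpy(contents, data, len);
        size = len;
        dirty = false;
    }
    const uint8_t *value() const { return contents; }
    size_t length() const { return size; }
};
```

Note how this solves the reset problem from the neopixel example: if the remote node resets, the master's mailbox still holds the authoritative state and can simply be marked dirty again to re-synchronize.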
You might think a heavy OO framework would be too much for a small embedded target. I certainly worried about this, especially about virtual functions, which add a hidden jump table (vtable) lookup to each call. However, my testing has shown that this new architecture is in fact faster for most operations. The gain comes from the fact that the new framework never generates packets it doesn't intend to send, because packet generation is polled by the framework rather than "pushed" by application code. Furthermore, a small set of shared, compact memory buffers lets packets pass from one layer to another as pointers, an O(1) operation.
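The polled ("pull") model above can be sketched as follows: the framework owns one shared buffer and asks each service to fill it only when the datalink is actually ready to transmit, so no packet is ever constructed and then dropped. Again, the names are my own illustration rather than the real Makernet API.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical polled-packet interface: return the packet length written
// into the shared buffer, or 0 if the service has nothing to say right
// now. Nothing is built unless it will actually be sent.
class PolledService {
public:
    virtual ~PolledService() {}
    virtual size_t pollPacket(uint8_t *sharedBuffer, size_t maxLen) = 0;
};

// Example: a heartbeat service that emits one byte every Nth poll.
class HeartbeatService : public PolledService {
    int interval;
    int counter = 0;
public:
    explicit HeartbeatService(int everyN) : interval(everyN) {}
    size_t pollPacket(uint8_t *buf, size_t maxLen) override {
        if (maxLen < 1 || ++counter < interval) return 0;
        counter = 0;
        buf[0] = 0x55;           // heartbeat marker byte
        return 1;
    }
};
```

In a main loop, the framework would hand the same buffer to each registered service in turn; the filled buffer then moves down through the layers as a pointer, never as a copy.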
The new Makernet framework will be much easier for others to contribute to. I've taken great care to maintain a clean object-oriented discipline around separation of concerns. Don't like part of the networking stack? It's now very easy to replace by subclassing your own objects. If you think mailbox synchronization is too inefficient, then it's easy to swap in modules and classes that work a different way, without necessarily disturbing existing code. As an example, I expanded the mailbox architecture to support large buffer types for a new GPIO device, and all of my existing firmware using the older architecture remained compatible.