So far I've added links to the mesh source code and to the board files. To validate the idea, I wrapped the OpenMesh implementation of the Trickle algorithm in Python and wrote a rudimentary event-queue-based network simulator. The simulator shows how values travel between nodes over time: they do indeed hop from node to node and eventually become synchronized everywhere.
The case I was most interested in understanding is what happens when the network is 'full', that is, when nodes cannot hold all the values currently being transmitted through the network. A 'full' node might rapidly retransmit every new item it receives, wasting power; it could instead choose to forget newer values, older values, or random values. The right choice is a tradeoff between reachability and power consumption. The best option I've found so far is to slow down the retransmission rate of every value on the node whenever a new value would take it over capacity. This might even make it possible to transmit more data than an individual node can hold at once, by spreading the load out over time.
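A minimal sketch of one reading of that overflow policy (the class, capacity constant, and interval values here are hypothetical, not taken from the real code):

```python
CAPACITY = 4  # hypothetical per-node storage limit


class Node:
    """Sketch of the overflow policy: when a new value would exceed
    capacity, back off the retransmission interval of every stored
    value instead of evicting one or rapidly retransmitting."""

    def __init__(self):
        self.intervals = {}  # key -> retransmission interval (ms)

    def receive(self, key, base_interval=100):
        if key in self.intervals:
            return
        if len(self.intervals) >= CAPACITY:
            # Over capacity: slow everything down so the extra
            # traffic is spread out over time. The new value is not
            # stored now; neighbors will retransmit it later.
            for k in self.intervals:
                self.intervals[k] *= 2
        else:
            self.intervals[key] = base_interval
```

Whether the overflowing value is dropped, admitted by eviction, or retried later is exactly the design question the simulator is meant to answer.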
It will be important to handle a full network gracefully, but recall that the network is unlikely to be full before you have a couple dozen nodes. At that point you would probably want to partition the network into multiple separate meshes anyway.
The second part of the source is a re-orderable priority queue plus a key/value lookup data structure. The mesh network is always checking for the next expired message (values are retransmitted on timeout) and must be able to change a message's priority when it hears the same value on the network. A priority queue makes it very efficient to find the next item to retransmit, but normally does not allow re-ordering. Ours maintains a sorted list of keys, for quick, memory-efficient binary search against incoming values, and maintains backlinks from the keys to their positions in the priority queue. Combined, these let us efficiently search for values by key, change their priority, and update their position in the queue.
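A minimal sketch of such a re-orderable priority queue. For brevity it substitutes a hash map from key to heap index for the sorted key list described above; the backlink idea is the same. Class and method names are my own, not from the repository:

```python
class IndexedHeap:
    """Min-heap of (priority, key) entries with a key -> heap-index
    map, so any entry's priority can be changed in O(log n)."""

    def __init__(self):
        self._heap = []  # list of [priority, key] pairs
        self._pos = {}   # backlinks: key -> index in self._heap

    def __contains__(self, key):
        return key in self._pos

    def push(self, key, priority):
        self._heap.append([priority, key])
        self._pos[key] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def pop(self):
        """Remove and return (key, priority) of the minimum entry,
        e.g. the next message due for retransmission."""
        self._swap(0, len(self._heap) - 1)
        priority, key = self._heap.pop()
        del self._pos[key]
        if self._heap:
            self._sift_down(0)
        return key, priority

    def reprioritize(self, key, priority):
        """Change an entry's priority in place, e.g. after hearing
        the same value on the network."""
        i = self._pos[key]
        old = self._heap[i][0]
        self._heap[i][0] = priority
        if priority < old:
            self._sift_up(i)
        else:
            self._sift_down(i)

    def _swap(self, i, j):
        h = self._heap
        h[i], h[j] = h[j], h[i]
        self._pos[h[i][1]] = i
        self._pos[h[j][1]] = j

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self._heap[i][0] < self._heap[parent][0]:
                self._swap(i, parent)
                i = parent
            else:
                break

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self._heap[child][0] < self._heap[smallest][0]:
                    smallest = child
            if smallest == i:
                break
            self._swap(i, smallest)
            i = smallest
```

The backlinks are what Python's standard heapq cannot provide: without them, changing a priority means scanning the whole heap for the entry first.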
I've also been able to make a memory improvement to the OpenMesh trickle structure. The timeouts are always (minimum_timeout * (2**n)). The original code uses 13 bytes to store an instance of the trickle algorithm, keeping the precomputed (2**n) in a uint32_t; we save 3 bytes per trickle by storing only n in a uint8_t and recomputing the factor when needed.
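The idea, sketched in Python for illustration (the real structure is C; the constant values and method names here are hypothetical):

```python
MIN_TIMEOUT_MS = 100  # hypothetical minimum_timeout
MAX_N = 12            # hypothetical cap so 2**n stays bounded


class Trickle:
    """Stores only the doubling exponent n (fits in a uint8_t)
    rather than the precomputed factor 2**n (a uint32_t)."""

    def __init__(self):
        self.n = 0

    def timeout_ms(self):
        # Recompute minimum_timeout * 2**n on demand; a shift is
        # cheap even on a small microcontroller.
        return MIN_TIMEOUT_MS * (1 << self.n)

    def on_timeout(self):
        # Interval elapsed without new data: back off.
        self.n = min(self.n + 1, MAX_N)

    def on_inconsistent(self):
        # Heard new data: reset to the fastest interval.
        self.n = 0
```

Since the factor is always an exact power of two, storing the exponent loses no information, and the shift keeps the hot path branch-free.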
The source repository also includes the KiCad board files for the pictured node.