PEAC: Pisano with End-Around Carry algorithm

Add X to Y and Y to X, says the song. And carry on.

A purely additive, non-cryptographic, fast and minimalistic checksum/PRNG/scrambler.

The sweet spot is a 32-bit checksum with a pair of 16-bit registers,
- very small, using only ADD,
- free of the flaws of Adler32 and Fletcher, yet with the same size and complexity,
- as good as CRC32, which it may replace when the memory footprint is critical,
- impossible to "0-crash",
- very easy to implement in digital circuits as well as microcontrollers,
- suitable for detecting trivial corruption in network packets or files,
- not patented,
- "as is", inappropriate for cryptography (though it could inspire someone).

If you wonder why, just click and read the log.

For the what, check Definitions and Draft as well as PEAC vs LFSR for a comparison.

The original algo/project name was "Fibonacci checksum", but it later appeared that this was not the most accurate choice, because Leonardo Pisano (Fibonacci's real name) is the name associated with the periodicity of the sequence under a modulus. All I did was add the tiny detail of the "end-around carry", as in ones'-complement addition, which changed everything.


As for how I came to this system...

In 2012, Piano Magic released their album "Life Has Not Finished with Me Yet". One song contains a weird repeating pattern...

Glen Johnson's lyrics are often cryptic and more evocative than objective, but any geek's mind would latch onto this mantra at the end:

"Add X to Y and Y to X"

This is really weird but... why? What's the point of this obscure "song" with no precise theme or clear subject? And what does it do? This last question is the most easily answered: just follow the damned algorithm.

Here we go...

X=1, Y=0
Y+=X => 0+1=1
X+=Y => 1+1=2
Y+=X => 1+2=3
X+=Y => 2+3=5
Y+=X => 3+5=8
X+=Y => 5+8=13
Y+=X => 8+13=21
X+=Y => 13+21=34
Y+=X => 21+34=55

No need to go further: most of you will have recognised the famous Fibonacci sequence.

This gave me a compelling idea: modify the old Fletcher & Adler algorithms while keeping their very low footprint and minimal computational complexity. Both of these well-known algos use a pair of values and share a similar structure. The trick is that rearranging the data dependency graph provides the equivalent of a minimalistic polynomial checksum, because the result is fed back on itself, in a more robust way than in Fletcher's algo.

At first glance, the new checksum's loop body becomes something like:

Y += ( X ^ datum[i  ] );
X += ( Y ^ datum[i+1] );

This loop body is totally trivial to unroll, and the corresponding digital circuit is just as trivial to design. This early version seemed to concentrate the whole checksum entropy in the last computed value, but now X and Y are both part of the signature. And the really important enhancement is the use of the carry!
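For illustration, the early carry-less loop body above could be wrapped and unrolled by two like this (a sketch: the zero seeds, the function name and the word-count argument are my assumptions, and len is assumed even):

```c
#include <stdint.h>

// Early, carry-less variant of the loop body, unrolled by two.
// Seeds, name and signature are illustrative; len counts 16-bit words.
uint32_t early_checksum(const uint16_t *datum, int len) {
  uint16_t X = 0, Y = 0;
  for (int i = 0; i < len; i += 2) {
    Y += X ^ datum[i];      // first word of the pair
    X += Y ^ datum[i + 1];  // second word, fed back on the fresh Y
  }
  return X | ((uint32_t)Y << 16);  // both halves form the signature
}
```

Even without the carry, flipping a single input bit already changes the result, which is the minimum one expects from a checksum.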

For superscalar 32-bit CPUs, the following code seems to work well, though the missing end-around-carry hardware (and/or lack of language support) requires more instructions to emulate it:

t = X + Y + C;    Y = X + data;
C = t >> 16;      X = t & 0xFFFF;

A more efficient variation exists which does not use a temporary variable:

C += X + Y;
Y = X + data;
X = C & MASK;
C = C >> WIDTH;

In this worst case, without support for a carry flag, that's 5 basic operations (not counting memory loads) fitting in 4 registers and 3 cycles to process 2 bytes. Not too bad. I'll let you deal with alignment. But is it really safe, or safer?

The following logs will show how the idea evolves and the performance increases, through discussions about carry wrap-around, register widths, scheduling, avalanche, parallelism, orbits, structure, and escaping black holes...


1. The variable
2. Datapath
3. Adler32 weakness
4. Easy circuit
5. Periodicity
6. A promising 32-bit checksum
7. Progress
8. Why
9. Orbital mechanics
10. Orbits, trajectories and that damned carry bit
11. Two hairy orbits
12. How to deal with black holes
13. Anteriority
14. Moonlighting as a PRNG
15. A new name
16. More orbits !
17. First image
18. Structure and extrapolations
19. Even more orbits !
20. Start and stop
21. Some theory
22. Some more theory
23. Even more theory.
24. A little enhancement
25. sdrawkcab gnioG
26. Further theorising
27. What is it ?
28. A bigger enhancement
29. Going both ways
30. Can you add states ?
31. Please, GCC !
32. _|_||_||_|______|______|____|_|___|_________|||_|__
33. Carry on with GCC
34. A good compromise
35. The new non-masking algorithm
36. Add or XOR ?
37. A 32/64-bit version
38. Closing the trajectories
39. Stats about missed crossings
40. Application...



List of maximal and perfect orbits, from 2 to 131540.

gzip - 101.30 kB - 04/05/2022 at 02:03



like moduli_01.c but can handle larger data, for m>65K

x-csrc - 1.17 kB - 01/29/2022 at 21:36



1880 first results with maximal or double-maximal orbit lengths

x-log - 40.00 kB - 01/29/2022 at 20:36



check orbit length for non-power-of-two moduli

x-csrc - 1.14 kB - 01/29/2022 at 20:36



contains dump7ck, pscan_10 and fusion, good enough to test up to w26.

x-compressed-tar - 7.13 kB - 01/09/2022 at 13:34


View all 55 files

  • The second (as originally intended) application

Yann Guidon / YGDES, 12/18/2022 at 01:18

Have a look at the N00N protocol: header and payload checksums with PEAC !

To the curious: the first actual use of PEAC in a protocol is as a scrambler for digital radio coms. But packet checksumming is the application for which PEAC was initially developed.

    There are 2 cases, both using 16-bit words :

1. Header : 5 words (80 bits) are checksummed into 1 word (16 bits), a 20% overhead that should still cover the great majority of cases since, normally, error probability is proportional to the size of the data to protect. Here the 96 bits should be quite safe, though we should run actual Hamming distance tests.
  Note 1 : a higher overhead is OK for such a small structure, particularly because it is part of "parsing" the continuous stream and of the resynchronisation process. Unlike MIDI (which uses the MSB of the bytes) or MPEG (which uses the value FF as a delimiter), N00N has no in-band/out-of-band signaling and must rely on pattern matching to detect the start of a valid packet. Adding 2 bytes of checksum increases (by 64Ki times) the chance of rejecting coincidences that could appear inside or outside of a packet. Further filtering (by ID, type, flags etc.) helps reject even more malformed or desynchronised packets.
  Note 2 : the checksum protects the payload size field.
  The fixed size allows a dedicated implementation that saves some instructions at the beginning and end of the routine by propagating invariants. Handy !
2. Payload : variable size, which could reach up to 256Ki bytes, so the full 32-bit output of the checksum is used. Messages should normally weigh around 200 bytes, or 50 pairs of 16-bit words: that's 2% overhead. Up to 4KiB is OK in theory, though pretty unlikely. The checksum and the protocol are optimised for pairs of 16-bit words so they are very easy to manage and handle, with the fewest corner cases possible. I'm pretty proud of this "clean" reference implementation, though some platforms can benefit from particular optimisations.

In both versions I use the 3-variable code. It spends 2 extra operations per 16-bit word to manage the carry. They may look like overkill and rarely change the result much, but this is a key difference between PEAC and other checksums (Fletcher in particular). Without these 2 operations (a shift and a mask that can execute simultaneously), the efficiency would drop, as some errors would become indistinguishable.

    Since this is a simple practical application, it's a good foundation to measure Hamming distances and trace graphs, compare with other methods... Stay tuned.

  • Parity with Fibonacci

Yann Guidon / YGDES, 11/23/2022 at 00:57

    One thing that sets PEAC apart from the Fibonacci and Pisano series is the parity, or mod 2 remainder.

    When you take the natural sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597...

    The mod 2 is 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, ...

But since PEAC uses all the state space, by definition there must be (almost) as many odd as even numbers, so the 0, 1, 1 sequence cannot repeat indefinitely.
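The period-3 parity of the plain Fibonacci sequence is easy to check with a few lines of C (a quick sketch; the function name is mine, and the 16-bit truncation stands in for any power-of-two modulus, since reduction mod 2^k preserves parity):

```c
// Check that the Fibonacci sequence mod 2 repeats 0, 1, 1 forever:
// F(i) is even exactly when i is a multiple of 3.
int fib_parity_is_periodic(int n) {
  unsigned a = 0, b = 1;           // F(0), F(1)
  for (int i = 0; i < n; i++) {
    if ((a & 1) != (unsigned)(i % 3 != 0)) return 0;
    unsigned t = (a + b) & 0xFFFF; // parity survives the truncation
    a = b;
    b = t;
  }
  return 1;                        // the 0, 1, 1 pattern held for n terms
}
```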

    That's something to measure soon...

  • Composition

Yann Guidon / YGDES, 10/17/2022 at 00:34

    One aspect that has not yet been covered is "composition", or combination of values to create a new value.

With LFSR-based CRCs, the logic is completely "linear", so you can combine one checksum with another: for example, combine the checksums of multiple blocks into a new checksum for the concatenated blocks, as long as you know their lengths.

OTOH, PEAC is not totally linear, because of the 1/2 chance of incrementing the sum, so such an operation is far from trivial, to the point where everything must be recomputed from scratch. This can be a problem because some people may want to do the same as with CRC32; however, in practice it is rarely used or beneficial, except for one application: tampering with data.

And here, PEAC is not as straightforward as CRCs. It's not impossible either, but if you don't know the start of the stream/file, you have no way to forge the appropriate data. After a few bytes or words, the complexity increases: for each 4 bytes the sums are incremented, but only statistically, and each increment gets blown out of proportion at the rate of φ, which further scrambles the result in non-linear ways.

Oh, it's a tiny disturbance at first, and you can still alter most of a PEAC16×2 checksum by inputting 2 words (4 bytes, plus another word if you want to alter the carry bit), but after a few more words the avalanche has already started through the carry chains. It takes about 24 words for the avalanche to amplify one bit into a full word. It starts slowly, so the only practical forgeries would be limited to a few altered words.

A pure "Pisano" checksum, where the carry is lost, would not have this "problem" since it is pretty linear: just compute the checksum of the block you want to replace, compute the checksum of the forged block, take some kind of difference and you're done (in theory). But PEAC is not that obvious, because of the avalanche of the carries.

So composition of blocks is not easy, though not impossible: it is better to just recompute everything. In practice, checksums should be spread regularly through a stream to avoid losing too much data when a block is altered, which in turn limits the effort of forgeries.

    Some serious math is needed at this point, stay tuned.

  • Currently in (French) kiosks

Yann Guidon / YGDES, 09/27/2022 at 16:42

    I wrote a 16-page article about the dark art of checksumming and it is still in kiosks in France ! Here is the link to the editor's site :

The article should become "free to read online" in mid-2023. It builds on many of the subjects and data covered in this project and expands from there; I dare say it's a "must read" if you're French and want to use or implement some checksum code. I'll see if I can make a short version in English somehow.

  • Draft

Yann Guidon / YGDES, 09/12/2022 at 17:37

    Quite a lot of my research used Wikipedia and it's likely that one day, PEAC will have its own page, so what should it contain? I've been collecting my thoughts lately and here is a first incomplete draft.

    PEAC (algorithm)

PEAC is a modified Pisano sequence generator that uses ones'-complement addition to handle overflows: unlike the original Pisano algorithm, each time a sum exceeds the modulus, the reduced result is also incremented by one.

It is the smallest possible configuration of Marsaglia's lagged-Fibonacci "add-with-carry" PRNG, and only certain moduli give a maximum-length sequence. It is also comparable to a Linear Congruential Generator with no multiplier and a conditional increment, but with two variables instead of one, following the Fibonacci additive structure.

    When the modulus is a power of two, it is an alternative to GF(2) Galois fields in some applications:

    • the 16-bit wide version is an alternative to CRC32,
    • the 26-bit wide version is a simple PRNG with more than 2^52 distinct states,
    • the 4 and 10-bit versions can be line scramblers/descramblers for serial communications, for example.

Non-power-of-two moduli, or generalised PEAC, have better mixing properties and could be used as part of a hashing algorithm, though they have some basic inherent biases.

The maximum possible period (or orbit) for a given modulus m is m^2 + m - 2, because the all-0 and all-1 states are degenerate and never reached (when properly initialised). Binary PEAC of widths 3, 4, 10 and 16 bits have 2 symmetrical orbits of half this length and are useful in practice: the checksum built from the 16-bit PEAC wraps around after 2147516415 iterations over 16-bit words, or more than 4Gi bytes.

    The naive iteration algorithm of Pisano sequences is:

    t = X
    X = X+Y
    if X >= modulus then
       X = X-modulus
    Y = t

The conditional subtraction of the modulus amounts to a modulo operation. The properties of the generated sequence are well studied and understood, but practically useless: when the modulus is a power of two, the orbit length is only 1.5 times the modulus.
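This short power-of-two orbit is easy to verify by brute force (a sketch; the function name is mine, and the expected values follow the "1.5 times the modulus" rule above, e.g. 384 for m = 256):

```c
// Brute-force Pisano period: number of steps before the Fibonacci
// pair (0, 1) mod m comes back around to itself.
unsigned pisano_period(unsigned m) {
  unsigned a = 0, b = 1, n = 0;
  do {
    unsigned t = (a + b) % m;   // naive Pisano iteration
    a = b;
    b = t;
    n++;
  } while (a != 0 || b != 1);   // stop when the initial pair recurs
  return n;
}
```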

Instead, when the sum exceeds the modulus, the PEAC algorithm performs

    X = X + 1 - modulus

The conditional +1 ideally comes from the carry flag, which is set in a processor's ALU when the sum in X overflows in a power-of-two configuration, making the process almost free.
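Put together, one iteration of the power-of-two PEAC (here w = 16, in the three-variable form shown earlier in this page) could look like this sketch; the struct and function names are illustrative:

```c
#include <stdint.h>

// One iteration of the w = 16 binary PEAC, three-variable form.
typedef struct { uint32_t X, Y, C; } peac16_t;

static void peac16_step(peac16_t *s, uint32_t data) {
  s->C += s->X + s->Y;    // accumulate the sum with the pending carry
  s->Y  = s->X + data;    // Fibonacci feedback plus the input word
  s->X  = s->C & 0xFFFF;  // keep the low 16 bits
  s->C >>= 16;            // the overflow becomes next iteration's +1
}
```

Until the first 16-bit overflow, the successive X values follow the plain Fibonacci sequence (1, 2, 3, 5, 8...), which makes a handy sanity check.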

PEAC shares only a few properties with Galois fields: mainly, only certain sizes provide useful characteristics, and maximal-length sequences can be considered permutations of a monotonic sequence (because almost all values appear, and only once).

The PEAC state is defined by two numbers, plus a carry flag in some configurations, which allows the generation of the value 0, unlike LFSRs based on Galois fields. The maximal length of a sequence follows a different rule, m^2+m-2, whereas an LFSR's period only reaches m-1.


As PEAC is not a field, it does not provide the operations that Galois fields offer, such as addition and multiplication of two values or states. The only available operations are next and previous, since the iteration is reversible (it has a single result made from reversible operations). This forces a complete enumeration of all intermediate states when one needs the distance (number of steps or iterations) between two states. This can be viewed as a feature or a handicap depending on the context; for example, PEAC can't directly replace Galois fields for Error Correcting Codes (as used by BCH and Reed-Solomon codes). But there are at least 3 other applications where PEAC is an interesting alternative to Galois fields:


PEAC can be considered the smallest possible lagged-Fibonacci PRNG in the "add with carry" configuration. As such it features much shorter orbits, but they are easier to study computationally and theoretically, as the only operation is one addition per word. This is barely more complex than a comparable LFSR, while the latter only provides one bit per cycle.

    A 26-bit PEAC can generate a sequence...

    Read more »

  • Do you speak (insert programming language name here) ?

Yann Guidon / YGDES, 09/12/2022 at 00:09

    My latest article is published and it's a killer !

It will be fully open around mid-2023. It packs a thorough analysis of existing checksums to deduce and develop the structure and theory of PEAC checksums. A must-read for anyone who understands French and wants to develop or implement their own checksum!

So I looked around and found that PEAC was picked up by [...]. I'd like to contribute a coding example, but so far I have only coded it in C, asm and JS. Who wants to make other versions of the checksum in Python, Perl, Pascal, ...Psomethingelse?

    DavidCary, if you read this, here's a first snippet to add to the PEAC page:

    // PEAC16x2 algorithm, not unrolled.
    // Good practice : pad the end of the buffer
    // with 2 words (can be 0) to increase mixing.
    #include <stdint.h>
    uint32_t PEAC16x2(int len, uint16_t *src) {  // len = number of 16-bit words
      uint16_t X = 0xABCD;
      uint32_t Y = 0x4567,
               C = len;  // add the size of the buffer to the checksum
      while (len > 0) {
        C += X;
        C += Y;
        Y  = X + *src++;
        X  = C & 0xFFFF;
        C >>= 16;
        len--;
      }
      // return 32 bits
      return X | (Y << 16);
    }

The JavaScript version is almost identical, once we adapt the declarations, the data types and the array indexing.

    // PEAC16x2 algorithm, not unrolled, in JS.
    function PEAC16x2(len, src) {  // len = number of 16-bit words in src
      var X = 0xABCD,
          Y = 0x4567,
          C = len,
          i = 0;
      while (len > 0) {
        C += X + Y;
        Y  = X + src[i++];
        X  = C & 0xFFFF;
        C >>= 16;
        len--;
      }
      return X | (Y << 16);
    }

    What's next ?

  • New scanner record format

Yann Guidon / YGDES, 08/18/2022 at 21:43

So lately I started tinkering with #YAMS, and things started to get weird, because the most important place where I need a good sorting algo is here: managing the merging of trajectories to rebuild the whole orbit(s). Previously I used the traditional sort utility, which implies a specific type of encoding. See
    94. New packing: p7k
    95. New packing at work

But with my own sorting infrastructure, not only can I simplify/compact the records, but I could also exploit some parallelism here and there, eventually avoiding sort completely.

    For now the goal is to reach w32 and this determines the format:

    1. uint32_t Xstart : range from 0 to 0xFFFFFFFF
    2. uint32_t Xend: range from 0 to 0xFFFFFFFF
    3. uint32_t Arcs: range from 1 to 0xFFFFFFFF
    4. uint64_t Len: range from 1 to 0xFFFFFFFFFFFFFFFD
    5. uint8_t Parity flag : 0 or 1

Total: 21 bytes per record, down from 26, so the final log for w32 would be about 90GB. The last byte has very little entropy, but it is required to check that an orbit is perfect, not just maximal.
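As a sketch, the fixed-size record could be declared like this in C (field names follow the list above; the packing pragma is my assumption to reach the advertised 21 bytes):

```c
#include <stdint.h>

// Fixed-size scanner record for the w32 run: 21 bytes when packed.
#pragma pack(push, 1)
typedef struct {
  uint32_t Xstart;  // range 0 to 0xFFFFFFFF
  uint32_t Xend;    // range 0 to 0xFFFFFFFF
  uint32_t Arcs;    // range 1 to 0xFFFFFFFF
  uint64_t Len;     // range 1 to 0xFFFFFFFFFFFFFFFD
  uint8_t  Parity;  // 0 or 1
} scan_record_t;
#pragma pack(pop)
```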

    Maybe the record could be reduced a bit by using variable-sized Arcs and Len fields, because they are pretty low at the start and increase in size with each pass of fusion. Xstart and Xend remain fixed (to ease sorting). I'm fine with variable-sized records because random access is not required here.

    Arcs ranges from 1 to 4 bytes (2 bits), Len from 1 to 8 bytes (3 bits) so there would be 1 bit left in a byte to signal the parity flag. So the minimum size of a record would be 11 bytes. The 2 remaining bits could help reduce the size further: with 3 bits, one can encode either the number of bytes, or a direct value from 1 to 4.

The fusion program should be able to perform the sorts only, so partial logs can be preprocessed for fusion. The input processing would likely be decoupled from the output processing (using pthreads?) to keep the whole program fast despite all the I/O stalls. The sum of all the lengths should always be computed, to catch any flaw during processing.

    Edit :

For w32, the Len field should be capable of holding more than 64 bits... so 4 bits are required to encode more than 8 bytes, because the orbit could be perfect...

  • Related ideas

Yann Guidon / YGDES, 07/02/2022 at 10:54

    Veritasium just uploaded a new video.

    And apart from the funny subject in itself, it contains a few related and pertinent ideas.

First: it calculates the probability for any permutation to contain a "loop", or "orbit", of length one half (or more) of the total number of items. So this is a first hint towards the distribution of "maximal" or "perfect" orbits, except that generalised PEAC are not random.

    Second : adding an offset to the items' coordinates changes the topology of the loops/orbits. And this is exactly what the carry does in PEAC.

  • Funnels and dead ends

Yann Guidon / YGDES, 06/15/2022 at 09:00

I have not progressed much lately on this front, but I just finished writing a new article, due for publication probably next month.

In it, I develop a little new theory derived from Bob Jenkins' funnel principle, applied to the dataflow graph of the checksums.

    a) is obviously the very flawed checksum algorithm.

    b) is the Fletcher algo, not quite there.

    c) is PEAC which differs from Fletcher by a very little rewiring of a feedback operand...

    How come the Fibonacci structure is not the de facto standard ?

  • Line scrambling with PEAC

Yann Guidon / YGDES, 04/24/2022 at 11:51

One significant application of binary PEAC is scrambling data over a serial link. LFSRs are the traditional candidates because, well, what else? But their shortcomings quickly become prominent.

    State of the art with LFSR

I'll let you browse Wikipedia for references and definitions, but here are the major purposes of a line scrambler:

    • Avoid long runs of 0s or 1s to ensure clock recovery
    • Spread the spectrum of the bitstream to ensure EMC compliance
    • Ensure "DC balance" (equal average number of 0s and 1s) to prevent saturation of magnetics coupling
    • Optionally allow the insertion of signaling symbols (such as in 8B10B) for line management or flow control
    • Optionally transform binary symbols to other bases (ternary, quinary) for PAM3, PAM4, PAM5 physical encoding...
    • Optionally add a tiny bit of redundancy to provide crude FEC

LFSR-based systems are used almost everywhere and are well characterised. Their ability to "crash", however, is not as well covered, but Wikipedia gives an example where an old transmission protocol used a 7-tap LFSR that was trivial to attack: just transmit all possible sequences of the LFSR output, so that one (or more) packets would be all (or mostly) 0 (or 1); the receiver then can't recover the clock, leading to a loss of synchronisation and a failure of the link.

  The run length statistics may get worse if the data consists of specifically chosen patterns, instead of being random. An earlier scrambler used in Packet over SONET/SDH (RFC 1619 (1994)) had a short polynomial with only 7 bits of internal state which allowed a malicious attacker to create a Denial-of-service attack by transmitting patterns in all 2^7−1 states, one of which was guaranteed to de-synchronize the clock recovery circuits. This vulnerability was kept secret until the scrambler length was increased to 43 bits (RFC 2615 (1999)) making it impossible for a malicious attacker to jam the system with a short sequence. (64b/66b_encoding#Run_length)

Modern links use 64B66B (that is: a raw copy of the data plus a couple of signaling bits) followed by a 58-tap LFSR. The "clock crash" attack becomes practically impossible, but then you have to deal with the possibility of runs of 57 identical bits. From what I have gathered, another smaller LFSR or encoder is used again to further ensure DC balance. At this point the designers bet on the statistical properties of the data and channel to push the probability of an "adverse condition" into the "extremely unlikely" domain, but it's still not impossible. Apparently, a low encoding overhead is more important, to push the bits-per-symbol ratio.

      "The scrambler cannot guarantee that output data will never have a long run-length of 0s or all 1s, or other undesirable properties in communications, but does allow strong statistical bounds to be put on the probability of such events. Practical designs will choose system parameters such that a bit-error due to long run-lengths is vanishingly unlikely." (64b/66b encoding)

Of course, LFSRs must run at the bit/line speed, so a 10Gbps link requires a 10GHz LFSR. Same if you want to reach 40 Gbps, and there the available technology requires a parallel LFSR implementation, meaning that the polynomial must be extremely sparse to allow operation at a low-enough clock ratio. With the IEEE proposal x^58 + x^19 + 1, there are 19 layers of XOR to implement before the circuit becomes too complex, so basically this limits the slowdown factor to 1/19: for 1GHz logic, the bit speed is at most about 20Gbps, and the scrambler size (and power draw) scales directly with the parallelisation.
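To make the discussion concrete, here is a bit-serial sketch of a multiplicative (self-synchronising) scrambler and its descrambler for the quoted polynomial x^58 + x^19 + 1; the tap positions (bits 57 and 18) and the function names are my assumptions, and a real multi-Gbps link would of course use a parallelised version:

```c
#include <stdint.h>

// Bit-serial multiplicative scrambler for x^58 + x^19 + 1 (a sketch;
// tap positions and names are illustrative, not from a standard).
#define MASK58 ((1ULL << 58) - 1)

uint64_t scramble_bit(uint64_t *state, uint64_t in) {
  uint64_t out = in ^ ((*state >> 18) & 1) ^ ((*state >> 57) & 1);
  *state = ((*state << 1) | out) & MASK58;  // shift in the scrambled bit
  return out;
}

uint64_t descramble_bit(uint64_t *state, uint64_t in) {
  uint64_t out = in ^ ((*state >> 18) & 1) ^ ((*state >> 57) & 1);
  *state = ((*state << 1) | in) & MASK58;   // shift in the received bit
  return out;
}
```

Because the descrambler shifts in the received (scrambled) bits, it tracks the scrambler's state on its own, which is exactly the self-synchronising property discussed above.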

One thing I was surprised not to find is a detection that the LFSR got stuck or "0-crashed", in order to restart it automatically. It would depend on whether the scrambler is additive or multiplicative, but apparently handwaving and reducing the risks down to statistics is the preferred engineering method.


    Read more »

View all 135 project logs




Yann Guidon / YGDES wrote 09/21/2022 at 17:14

[...] has some relevant ideas here and there :-)

Yann Guidon / YGDES wrote 07/28/2022 at 16:03

TODO : check the distribution and periodicity of each rank of bit, to show they don't exhibit the flaws of LCGs with c = 0.

Yann Guidon / YGDES wrote 04/08/2022 at 11:41

It seems that PEAC is a cyclic code (in a loose way, because it's not using Galois fields, but still interesting).

Ezra Wolf wrote 01/21/2022 at 23:18

Wow !

Yann Guidon / YGDES wrote 01/26/2022 at 05:27

Hi Ezra, feel free to elaborate :-)

Yann Guidon / YGDES wrote 01/17/2022 at 08:49

Correction to some of my statements here and there (and in my videos): φ is not transcendental, but an irrational number that is a solution of the quadratic equation x² − x − 1 = 0. Mea culpa, but at least I learned something from one of the Bogdanoff brothers' countless errors and confusions :-D

Yann Guidon / YGDES wrote 01/14/2022 at 16:55

Oh my !

Now, THAT's a hack :-)

And I can see how it could increase the speed of some algorithms, even this one...

Tim wrote 01/14/2022 at 19:44

On every other architecture, yes. On x86 it does not add notably to the entropy :)


Yann Guidon / YGDES wrote 10/23/2021 at 21:33

Interesting !


[this comment has been deleted]

Yann Guidon / YGDES wrote 10/13/2021 at 14:39

So far, half a year and £50.

Yann Guidon / YGDES wrote 09/14/2021 at 04:16

George Marsaglia and Arif Zaman got quite a lot figured out in 1991.

Why didn't this catch on? They focused on extremely large sequences with tens of registers of lag and didn't look closely enough at just a pair of registers. They also didn't look at the link with checksums.

Yann Guidon / YGDES wrote 09/08/2021 at 07:28

Funny !

"Toward a universal random number generator", G. Marsaglia, A. Zaman

Yann Guidon / YGDES wrote 08/05/2021 at 23:24

"In practice, TCP trailer sums outperform even Fletcher header sums."

Yann Guidon / YGDES wrote 08/08/2021 at 00:51

"Ones complement (with EAC) Fletcher, however, has a weakness that sometimes offsets its probabilistic advantage: since bytes containing either 255 or 0 are considered identical by the checksum, certain common pathological cases cause total failure of Fletcher-255."

Yann Guidon / YGDES wrote 08/05/2021 at 23:09

[...] and others say that special adders exist to compute mod 2^n − 1.

But this is not necessary here, because the carry is explicitly integrated in the next cycle.

Yann Guidon / YGDES wrote 08/05/2021 at 22:20

[...] is a nice beginner-level overview of error detection basics. Polynomial checks are mentioned, as well as ones' complement / end-around carry; Fletcher is well explained, but no sight of Fibonacci.

Yann Guidon / YGDES wrote 05/24/2021 at 03:52

Oh, nice !
