
Back to the jobs

A project log for PEAC Pisano with End-Around Carry algorithm

Add X to Y and Y to X, says the song. And carry on.

Yann Guidon / YGDES, 12/02/2021 at 07:02

The exploration and validation of w26 and w32 have been the focus of the last 6 months, and the progress is such that I consider w26 within my reach: I have better software (which is going to get even better) and better computers. I still have to put all the pieces together to complete the puzzle, though.

Let's look at the past logs on the subject:

41. I need a big cluster.
43. An individual job and all the necessary processing
44. Test jobs
46. The problem with SIMD
47. Protocol and format
48. Just a test
70. Another strategy to scan the statespace
71. Scanning with multiple processors

After the debacle of w26, it appears that the computations should be piecewise verifiable, and easy to start, restart, or distribute... So I stick with the first approach, where each computer receives an allocated range to compute. Each semi-arc in that range is computed sequentially and the results are stored in files.
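Here is a minimal sketch of what such a range worker could look like, assuming the PEAC step is "add X and Y plus the wrapped-around carry" and that a semi-arc runs from one Y == 0 crossing to the next; the width, the starting carry, the range bounds, the field names and the file name are placeholders, not the actual job code:

#include <stdint.h>
#include <stdio.h>

#define WIDTH 26                       /* hypothetical: a w26 run */
#define MASK  ((1u << WIDTH) - 1u)

/* One semi-arc: iterate the PEAC step (add X to Y and Y to X, with the carry
   wrapping around) until Y comes back to 0, then report where it ended and
   how many steps it took. The start/stop convention is an assumption. */
static uint64_t semi_arc(uint32_t Xinit, uint32_t *Xend)
{
    uint32_t X = Xinit, Y = 0, C = 1;  /* assumed starting state */
    uint64_t len = 0;
    do {
        uint32_t t = X + Y + C;
        C = t >> WIDTH;                /* end-around carry */
        Y = X;
        X = t & MASK;
        len++;
    } while (Y != 0);
    *Xend = X;
    return len;
}

/* Range worker: the computer is given [start, end) and appends one line per
   semi-arc to its own result file. */
int main(void)
{
    uint32_t start = 0, end = 1u << 16;           /* placeholder range bounds */
    FILE *out = fopen("range_00000000.txt", "w"); /* placeholder file name */
    if (!out)
        return 1;
    for (uint32_t Xinit = start; Xinit < end; Xinit++) {
        uint32_t Xend;
        uint64_t len = semi_arc(Xinit, &Xend);
        /* plain decimal here; the compact text format is discussed below */
        fprintf(out, "%u %llu %u\n", Xinit, (unsigned long long)len, Xend);
    }
    fclose(out);
    return 0;
}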

This reduces the need for a central scoreboard, with its own save/restore/synchronisation issues. OTOH the post-processing stage remains daunting: there is a trade-off to find between the compactness of the result files and the ease of processing or examination. A good compromise is possible by using a base higher than decimal or even hexadecimal. A look at the ASCII table shows that all 94 codes after Space (0x20) and before DEL (0x7F) are printable, so a text editor can examine them and standard tools like grep and sort help with the processing.
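For instance, a value can be packed into base-94 "digits" drawn from that printable range. The following sketch only illustrates the idea; the alphabet and digit order are not a settled format:

#include <stdint.h>

/* Pack a value into base-94 digits drawn from the printable ASCII range
   '!' (0x21) .. '~' (0x7E), most significant digit first. Returns the number
   of characters written (a trailing '\0' is added for convenience). */
static int encode94(uint64_t v, char *buf)
{
    char tmp[16];                      /* 94^10 > 2^64, so 10 digits suffice */
    int n = 0;
    do {
        tmp[n++] = (char)(0x21 + (v % 94));
        v /= 94;
    } while (v != 0);
    for (int i = 0; i < n; i++)        /* reverse into most-significant-first */
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
    return n;
}

With such a packing, a 32-bit value fits in at most 5 characters (94^5 ≈ 7.3 × 10^9 > 2^32), versus 10 decimal digits or 8 hexadecimal ones.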

Another desirable feature is that the result files can be seamlessly concatenated. So the format is simple: one line per semi-arc.

With 94 codes per byte, the density is much closer to a binary representation than to decimal, though a space is still required to separate the fields. Going from 3 fields to 2 is easily done with sort and sed: first sort on Xinit, then strip it. The newline can't be removed either, but bzip2 can squeeze out some more room when needed.
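To check that concatenated result files remain trivial to consume, here is a hedged sketch of a reader that walks such a stream line by line and undoes the base-94 packing; the field order (Xinit, length, Xend) is an assumption carried over from the sketches above:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Inverse of the base-94 packing sketched above (same illustrative alphabet). */
static uint64_t decode94(const char *s)
{
    uint64_t v = 0;
    while (*s >= 0x21 && *s <= 0x7E)
        v = v * 94 + (uint64_t)(*s++ - 0x21);
    return v;
}

/* Read a stream of concatenated result files from stdin, one line per
   semi-arc, and recover the numeric fields for further processing. */
int main(void)
{
    char line[64];
    while (fgets(line, sizeof line, stdin)) {
        char *f1 = strtok(line, " \n");
        char *f2 = f1 ? strtok(NULL, " \n") : NULL;
        char *f3 = f2 ? strtok(NULL, " \n") : NULL;
        if (!f3)
            continue;                  /* skip malformed or empty lines */
        printf("%llu %llu %llu\n",
               (unsigned long long)decode94(f1),   /* Xinit  (assumed) */
               (unsigned long long)decode94(f2),   /* length (assumed) */
               (unsigned long long)decode94(f3));  /* Xend   (assumed) */
    }
    return 0;
}

Since each file is just a sequence of such lines, several result files appended one after another form another valid input for this kind of reader.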

I have configured a system where I can store uncompressed logs easily and rather fast (400 MB/s read & write), so post-processing will remain practical even with 50 GB of logs, for example: a full sequential pass at that rate takes only about two minutes.

At this moment, it seems best to simply start the computations and define a simple output format, to get useful data that can later be processed and compared. I'll design the processing scripts/programs later...
