The Chocoholic package has been developed on and for a Linux system. However, it should work on any POSIX-compliant system with GNU libraries and utilities, mainly bash, make, sort, and gcc.
You can skip this if you have lots of spare space and time.
Otherwise....
Working on arenas of any size will involve large files, and time-consuming sort operations on them. The scripts provided here instruct sort to use temporary directories named tmp1 and tmp2 in the directory where you are running this software. It may or may not matter, depending on your setup, but it does offer you a way to speed things up quite a bit.
If you don't do anything, large temporary files will be built in the same directory where everything else is happening. You can improve things in several ways.
You can treat tmp1 and tmp2 as mount points and mount huge empty partitions of otherwise unused drives on them. This is what I do, using two USB 3.0 external 2TB drives, and it seems to make things run about 30% faster.
You can replace tmp1 and tmp2 with symbolic links to another place where you would like the temporaries to go. I would recommend huge empty partitions on unused drives, but your situation may well be different.
Wherever your temporaries go, make sure there will be enough room.
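For example, you can redirect the temporaries with symbolic links; the paths below are placeholders (a throwaway scratch directory stands in for a big empty partition), and the mount-point alternative is sketched in the comments:

```shell
# All paths here are placeholders; substitute your own spare partitions.
#
# Mount-point alternative (run as root, with tmp1/tmp2 as real directories):
#   mkdir -p tmp1 tmp2
#   mount /dev/sdb1 tmp1
#   mount /dev/sdc1 tmp2
#
# Symlink alternative:
scratch=$(mktemp -d)          # stand-in for a big empty partition
mkdir -p "$scratch/t1" "$scratch/t2"
ln -sfn "$scratch/t1" tmp1    # -n: replace an existing link, not descend into it
ln -sfn "$scratch/t2" tmp2
df -h tmp1/ tmp2/             # confirm there is enough room behind each link
```

The trailing slashes on the `df` arguments make it report the filesystem behind each link, which is a quick way to check the "enough room" requirement before starting a long run.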
The Chocoholic software works on a rectangular starting position (more formally called an arena), which you specify by giving the width in columns and the height in rows. Thus, to get solutions for a 6x5 arena:
:::bash
bash chomploop.sh 6 5
You will find this small arena is computed in seconds. The run creates a directory named for the shape of the arena, 6x5 in this case. It contains a number of files of positions, organized by how many cells the positions in each file contain.
You can process this information into a more finished form:
:::bash
bash chompsmall.sh 6 5
This creates three families of files, with members of each family distinguished by suffix:
| Suffix | Contents |
|---|---|
| (none) | The uncompressed file |
| .md5sum | Checksum of the uncompressed file |
| .gz | Compressed by gzip |
| .bz2 | Compressed by bzip2 |
| .xz | Compressed by xz |
The file contents are:
| Name | Contents |
|---|---|
| csort | All positions, sorted. |
| ppos | All P-positions ("losing" positions) with some related info. |
| pnam | All P-positions ("losing" positions) with just the raw position spec. |
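The suffixes behave the way the standard tools suggest. Here is a small self-contained round trip on a throwaway file standing in for a real csort, showing how the .md5sum file verifies whatever you later decompress:

```shell
set -e
work=$(mktemp -d) && cd "$work"
printf 'demo position data\n' > csort   # stand-in for a real csort file
md5sum csort > csort.md5sum             # checksum of the uncompressed file
gzip -k csort                           # keep the original, produce csort.gz
                                        # (bzip2 -k and xz -k work the same way)
rm csort                                # pretend only the .gz copy was kept
gunzip -k csort.gz                      # restore the uncompressed file
md5sum -c csort.md5sum                  # verify the restored copy: "csort: OK"
```

The `-k` flags (GNU gzip 1.6 and later) keep the input file instead of deleting it, which is convenient when producing all three compressed forms from one original.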
The csort file is much bigger than the ppos file extracted from it. The P-positions are singled out because there are so few of them (fewer than 1% in large arenas), and because the ppos file contains enough information to rebuild the entire csort file very quickly. This hardly matters at the 6x5 size, where the build is fast anyway. However, when I built the 16x16 solution it took about 8 days; rebuilding from the ppos file took about 2 hours. You can expect the rebuild to be at least 100 times faster on any large solution.
A large part of the reason for the speed difference is that the rebuild process does not have to rediscover the "winning" moves that go from an N-position to a P-position. The reduction in the size of data to be formed, stored, and sorted accounts for the 100-to-1 speedup.
You can rebuild the full solution quickly from just the ppos file. You do the rebuild thus:
:::bash
bash rebuild.sh 6 5
This will create several files with the moves that arrive at the P-positions in the ppos file, and a rebuilt file, rebuilt-6x5, that should be identical to the original csort file. In fact, if the csort file still exists, the script will compare the two and tell you whether there is any difference.
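If you want to check a rebuild by hand, the comparison is an ordinary byte-for-byte file compare. A self-contained sketch with stand-in files (the real ones would be csort and rebuilt-6x5 inside the arena directory):

```shell
set -e
work=$(mktemp -d) && cd "$work"
printf 'position data\n' > csort      # stand-in for the original csort file
cp csort rebuilt-6x5                  # stand-in for the rebuilt file
if cmp -s rebuilt-6x5 csort; then     # -s: silent, exit status only
    echo "rebuild matches original"
else
    echo "rebuild DIFFERS from original"
fi
```

If the original csort has already been deleted but its .md5sum file survives, `md5sum -c` against a copy of the rebuilt file renamed to csort serves the same purpose.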