libspectrum has lots of conversions between integer types that are commonly 32 and 64 bit on 64 bit platforms. For example, libspectrum_bzip2_inflate( const libspectrum_byte *bzptr, size_t bzlength, libspectrum_byte **outptr, size_t *outlength ) has an unsigned int length2 which is assigned via length2 = *outlength. On 64 bit Mac OS X, size_t is unsigned long, which is 64 bit, while length2 is unsigned int, so the assignment silently truncates.
These conversions should be located and removed.
The proper approach here is not completely obvious. Spectrum file formats are by definition 32 bit, as they embed 32 bit block and file lengths. However, libspectrum uses size_t for many of the values that end up being read from or written to those files, and the in-memory block compression/decompression functions would fail if more than 4 GB were required for any block or file length.
My inclination is to formally make libspectrum a 32 bit library for these purposes and not worry about the limits that will impose - any dissenting thoughts out there?
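One way to make the 32 bit policy explicit (a sketch only; the function and enum names here are hypothetical, not part of libspectrum's API) is to validate every size_t length at the boundary instead of narrowing it silently:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical error enum standing in for libspectrum_error */
typedef enum { SKETCH_OK, SKETCH_ERROR_TOO_LARGE } sketch_error;

/* Narrow a size_t length to 32 bits, failing loudly instead of
   truncating if the value cannot be represented */
sketch_error
checked_length32( size_t length, uint32_t *out )
{
  if( length > UINT32_MAX ) return SKETCH_ERROR_TOO_LARGE;
  *out = (uint32_t)length;
  return SKETCH_OK;
}
```

Callers that read or write the 32 bit on-disk length fields would go through this check once, and the rest of the code could then use uint32_t consistently.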
Isn't 'unsigned int' 64 bits long? (pun not intended) Building with standard C99 might mitigate these conversions.
Sounds reasonable.
OS X uses the LP64 model for 64 bit programming, so int is 32 bit while longs and pointers are 64 bit. C89 and C99 both behave the same way here.
I had the whole thing mixed up, then. Interesting reading.