From: Bruno H. <br...@cl...> - 2005-01-26 20:01:56
Sam wrote:
> > clarify what the function _should_ do? What
> > does the berkeley-db module expect from it?
>
> this is a platform-independent integer <--> bit sequence conversion.
> this should be the _inverse_ of fill_dbt() in bdb.c:
>
>  } else if (bignump(obj)) {
>    int need = sizeof(uintD)*Bignum_length(obj);
>    key->ulen = key->size = MAX(re_len,need);
>    key->data = my_malloc(key->size);
>    begin_system_call();
>    memset(key->data,0,key->size);
>    memcpy((char*)key->data + key->size - need,TheBignum(obj)->data,need);
>    end_system_call();
>    return DBT_INTEGER;
>  } else ...

I'm sorry, but this is not platform independent.

1) The size of uintD is platform dependent. It can currently be 32 or 16
   (or even 8) bits, and we might add a 64-bit type for some 64-bit
   machines in the future.

2) The result that you write is endianness dependent, and therefore will
   be different on SPARC (big endian) than on x86 (little endian).
   For example, if obj = #x123456789ABCDEF0, then
     TheBignum(obj)->data[0] = 0x12345678,
     TheBignum(obj)->data[1] = 0x9ABCDEF0,
   i.e. on a little-endian machine you will write out the bytes
     0x78, 0x56, 0x34, 0x12, 0xF0, 0xDE, 0xBC, 0x9A,
   and on a big-endian machine you will write out the bytes
     0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0.

Look at the functions bitbuff_iu_I and bitbuff_ixu_sub in stream.d, or
the functions READ-INTEGER and WRITE-INTEGER, to get an idea how to
convert an integer from/to a byte sequence.

Bruno
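
For illustration only, here is a minimal C sketch of the kind of
conversion Bruno is describing: serialize the integer byte by byte, most
significant byte first, so the output does not depend on the width of
uintD or on the host's byte order. This is not CLISP code; the names
encode_be/decode_be are invented for this example, and it handles only
values that fit in a uint64_t, unlike a real bignum encoder.

  /* Sketch, not CLISP's implementation: portable big-endian
   * serialization of a machine integer. */
  #include <stdio.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Write 'value' into out[0..len-1] in big-endian order,
   * padding with leading zero bytes if len is larger than needed. */
  static void encode_be(uint64_t value, unsigned char *out, size_t len)
  {
      for (size_t i = len; i > 0; i--) {
          out[i - 1] = (unsigned char)(value & 0xFF);
          value >>= 8;
      }
  }

  /* Inverse of encode_be: rebuild the integer from big-endian bytes. */
  static uint64_t decode_be(const unsigned char *in, size_t len)
  {
      uint64_t value = 0;
      for (size_t i = 0; i < len; i++)
          value = (value << 8) | in[i];
      return value;
  }

  int main(void)
  {
      unsigned char buf[8];
      encode_be(0x123456789ABCDEF0ULL, buf, sizeof buf);
      /* Prints 12 34 56 78 9A BC DE F0 on both big- and
       * little-endian hosts. */
      for (size_t i = 0; i < sizeof buf; i++)
          printf("%02X ", (unsigned)buf[i]);
      printf("\n");
      printf("round trip: %016llX\n",
             (unsigned long long)decode_be(buf, sizeof buf));
      return 0;
  }

With this scheme the example value #x123456789ABCDEF0 always comes out
as the bytes 0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0, which is
the fixed, platform-independent result Bruno asks for.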