From: Mathieu B. <ma...@ca...> - 2000-11-05 02:35:49
> which right now is so inefficient memory-wise that brain looks like
> a toy language.
> Right now the size of an object is about:
> 4 * sizeOfWord + (numberOfParents * 2 * sizeOfWord)
>   + (numberOfSlots * 4 * sizeOfWord)
> whereby: sizeOfWord = size of machine word (pointer or integer)

this is usually called wordsize; besides, since everything in your
expression is words, let's factor it out:

    wordsize * (4 + 2*numberOfParents + 4*numberOfSlots)

but: do you do any malloc()/free() to allocate the above bytes? if
yes, then you are using more than that amount, and, depending on the
number of allocations and the allocator used, maybe much more than
that. OTOH, if you are doing your stuff via fixed-size cons cells,
then you have a much slimmer overhead.

> I am thinking about a new implementation of objects where the size
> of an object will be:
> 5 * sizeOfWord + (numberOfSlots * sizeOfWord)

    wordsize * (5 + numberOfSlots)

> Also each cache object will have a hashtable which will map
> slot-names -> indexes. The slot values will then be stored in an
> ARRAY in the objects themselves rather than each object having its
> own hashtable.

I approve this idea :-) At that point, Brain will have a chance of
being more memory-efficient than Perl/Python/Ruby. AFAIK, the latter
three use one big hashtable per object, though they have inheritance
(so 1 Ruby object = N Brain objects).

matju
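
P.S. by "fixed-size cons cells" I mean a pool allocator along these
lines. This is only a sketch of the general technique; every name in
it (Cell, cell_alloc, CELLS_PER_BLOCK, ...) is made up, not anything
from Brain's source:

    #include <stdlib.h>

    enum { CELL_WORDS = 4, CELLS_PER_BLOCK = 1024 };

    /* a cell is a fixed-size chunk of CELL_WORDS machine words; while
       unused it doubles as a link in the free list */
    typedef union Cell {
        union Cell *next_free;
        void *words[CELL_WORDS];
    } Cell;

    static Cell *free_list = NULL;

    /* grab one big block from malloc() and thread its cells onto the
       free list, so malloc's per-call overhead is paid once per
       CELLS_PER_BLOCK cells instead of once per object */
    static int grow_pool(void) {
        Cell *block = malloc(CELLS_PER_BLOCK * sizeof(Cell));
        if (!block) return 0;
        for (size_t i = 0; i < CELLS_PER_BLOCK; i++) {
            block[i].next_free = free_list;
            free_list = &block[i];
        }
        return 1;
    }

    static Cell *cell_alloc(void) {
        if (!free_list && !grow_pool()) return NULL;
        Cell *c = free_list;
        free_list = c->next_free;
        return c;
    }

    static void cell_free(Cell *c) {
        c->next_free = free_list;  /* recycle; never returned to malloc */
        free_list = c;
    }

the tradeoff being that a pool like this never gives memory back to
malloc() until the process exits.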
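
P.P.S. and here, just as a sketch, is what the layout you describe
could look like in C (again, all names, such as Cache, Object and
object_slot, are made up for illustration):

    #include <stdlib.h>
    #include <string.h>

    typedef void *word;               /* one machine word */

    typedef struct SlotEntry {        /* hashtable entry: name -> index */
        const char *name;
        size_t index;
        struct SlotEntry *next;       /* chaining for collisions */
    } SlotEntry;

    typedef struct Cache {            /* shared by all objects of a shape */
        size_t nslots;
        size_t nbuckets;
        SlotEntry **buckets;
    } Cache;

    typedef struct Object {           /* 5-word header + inline values */
        Cache *cache;                 /* word 1: slot-name -> index map */
        word header[4];               /* words 2-5: e.g. class, flags, ... */
        word slots[];                 /* numberOfSlots values, no table */
    } Object;

    static size_t hash_name(const Cache *c, const char *name) {
        size_t h = 5381;
        while (*name) h = h * 33 + (unsigned char)*name++;
        return h % c->nbuckets;
    }

    /* look the slot's index up once in the shared cache, then read the
       value from the object's inline array */
    static word *object_slot(Object *o, const char *name) {
        SlotEntry *e = o->cache->buckets[hash_name(o->cache, name)];
        for (; e != NULL; e = e->next)
            if (strcmp(e->name, name) == 0)
                return &o->slots[e->index];
        return NULL;                  /* no such slot */
    }

    /* allocating an object costs wordsize * (5 + numberOfSlots), as in
       your formula (modulo padding and malloc overhead) */
    static Object *object_new(Cache *cache) {
        Object *o = malloc(sizeof(Object) + cache->nslots * sizeof(word));
        if (o) {
            o->cache = cache;
            memset(o->slots, 0, cache->nslots * sizeof(word));
        }
        return o;
    }

the point being that the name -> index hashtable exists once per
cache, so its cost is amortized over every object that shares it.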