From: Jim I. <ji...@ap...> - 2000-08-31 18:29:02
Eric,

On Thursday, August 31, 2000, at 11:15 AM, Eric Melski wrote:

> The same way that we pick any values like these: take something that
> seems reasonable and run with it. If in 5 years it isn't working, adjust
> it. As I consider these factors again, values of 4MB/4KB (respectively)
> seem more reasonable to me.

This seems more reasonable.

> Again, I urge everybody not to get hung up on the performance issue and
> consider the memory issue: without _some_ kind of change, Tcl simply
> cannot use all available memory. This modification is simple, it works,
> and it is already implemented. I have not yet seen any other proposal
> that can say the same.

Yes, BUT... I can see myself writing and reading 1-4 Meg of data in realistic circumstances. Not often, but I can see this. I can't right now see a use for reading 128 Meg of data into Tcl. So if you make operations on the 1-4 Meg range slow, just on the off chance someone will do the 128 Meg allocation, you are not making a good trade-off.

So you should make sure the cutoff where you switch from one algorithm to the other is well above the size most people will use; then it should be OK. I would argue for erring on the side of high rather than low, however, since if you can't get the very last 4-8 Meg of a 128 Meg string allocation, for example, that is not so bad. It will almost never be an issue, and the solution is just to add some more swap space (this is not hard to do on most modern systems) and try again...

Jim

--
Jim Ingham ji...@ap...
Developer Tools - gdb
Apple Computer

--
The TclCore mailing list is sponsored by Ajuba Solutions.
To unsubscribe: email tcl...@aj... with the word UNSUBSCRIBE as the subject.
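[Editor's note: the "switch from one algorithm to the other" under discussion can be sketched as follows. This is a minimal illustration only, not Tcl's actual implementation; the function name and the exact interpretation of the 4MB/4KB values (doubling cutoff and linear-growth granularity) are assumptions for the sake of the example.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical constants, loosely based on the 4MB/4KB values in the
 * thread: below the cutoff we double (fast for common sizes); above it
 * we grow linearly, rounding up to a step boundary (so a huge string
 * never over-allocates by more than one step). */
#define GROWTH_CUTOFF (4u * 1024u * 1024u)  /* 4 MB: switch algorithms */
#define GROWTH_STEP   (4u * 1024u)          /* 4 KB: linear granularity */

/* Return a new buffer size >= needed, given the current size.
 * Assumes needed > current (the caller is growing, not shrinking). */
size_t grow_size(size_t current, size_t needed)
{
    size_t n = current ? current : 16;
    if (needed <= GROWTH_CUTOFF) {
        /* Exponential growth: amortized O(1) appends for typical sizes. */
        while (n < needed) {
            n *= 2;
        }
    } else {
        /* Linear growth: round needed up to a multiple of GROWTH_STEP,
         * wasting at most GROWTH_STEP - 1 bytes on a huge allocation. */
        n = (needed + GROWTH_STEP - 1) / GROWTH_STEP * GROWTH_STEP;
    }
    return n;
}
```

Jim's point maps directly onto where GROWTH_CUTOFF sits: every request at or below it pays the cheap doubling path, so placing the cutoff well above common working sizes (1-4 Meg) keeps the slow-but-frugal path reserved for the rare 128 Meg case.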