Pater and I have discussed this quite a bit now, and we
have a ton of stats to back up some pretty substantial
changes.
Essentially, several bugs are working together to make
discussion pages blow up to hugely unwieldy sizes... it
gets worse as discussions grow, and it's already evident
on discussions with 500 comments...
We tested a discussion with 700 comments at score:1...
nested mode resulted in a 450k page, threaded in 300k.
A story with 4000 comments (2872 at threshold 1) was
tested and was 2.2 *megs* nested, and 700k threaded.
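Backing the per-comment cost out of those measurements is simple arithmetic; a quick sketch (the comment counts and page sizes are the ones measured above, nothing else is assumed):

```python
# Rough per-comment byte costs derived from the measurements above:
# (comments shown, nested page bytes, threaded page bytes)
measurements = [
    (700, 450_000, 300_000),
    (2872, 2_200_000, 700_000),
]

for shown, nested, threaded in measurements:
    print(f"{shown} comments: ~{nested // shown} B/comment nested, "
          f"~{threaded // shown} B/comment threaded")
```

So nested mode costs roughly three times as many bytes per comment as threaded on the big discussion, which is why it blows up first.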
THE BUG: Comment Limit is essentially ignored for nested
discussions. We've found pages displaying hundreds upon
hundreds of comments, in full text, with the default user
preferences, often exceeding a meg for a single page. The
server issues are many: huge pages, tons of extra DB
calls, to say nothing of the bandwidth. And this is with
all normal user preferences, save selecting nested mode
instead of threaded. Heaven forbid! What needs to happen
is a more realistic break point in discussions.
We started by dropping comment limit, but it turns out
that this number is largely ignored. Breaks need to be
chosen much more conservatively! We shouldn't be
serving pages that are more than a meg in size /ever/.
I think 500k is a good ceiling.
THE IDEAL SOLUTION: Ultimately I think that COMMENT
LIMIT should be replaced with a maximum discussion byte
count. This shouldn't be all that difficult to do, since
our text is all in some variable anyway... of course, the
problem is that we have been avoiding the transfer of
comments_text until the last minute. I assume we can at
least cache a byte count (don't we already, to handle
truncating large comments?), but we'll also need to
account for the size consumed by layout HTML and such.
We could probably use some sort of average for reply
lines and fully displayed comments, since the actual
variation between those is relatively small.
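The byte-count break could look something like this. A minimal sketch, assuming cached per-comment byte counts exist as suggested above; the layout-overhead average is a made-up placeholder, not a measured Slash value:

```python
# Assumed average HTML wrapper cost per fully displayed comment
# (headers, score line, reply links). A placeholder, not measured.
AVG_FULL_OVERHEAD = 300

def pick_break(comment_sizes, max_bytes=500_000):
    """Return how many comments fit before the page exceeds max_bytes.

    comment_sizes: cached byte counts of each comment's text, in
    display order (we assume such a cache exists, per the post).
    """
    total = 0
    for i, size in enumerate(comment_sizes):
        total += size + AVG_FULL_OVERHEAD
        if total > max_bytes:
            return i  # break the discussion before comment i
    return len(comment_sizes)
```

The point is that the break is chosen by rendered size, not by a comment count that ignores how big the comments actually are.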
Anonymous users would be given a relatively small
default max discussion size... perhaps a couple hundred
k. We should test a few discussions for byte counts and
see approximately how many comments equal how many
bytes, so we can make a fair assessment of how small a
page can be before paging gets annoying. There is a
potential subscriber plum here as well, but we'd want to
be careful to still have some sort of cap. I don't think
we should ever be serving 2 meg files ;)
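One way the per-user caps might shake out; only the anonymous figure ("a couple hundred k") and the 500k ceiling come from the discussion above, and the subscriber number is an illustrative guess:

```python
# Hypothetical per-user-class discussion size caps, in bytes.
SIZE_CAPS = {
    "anonymous": 200_000,    # "a couple hundred k", per the post
    "logged_in": 500_000,    # the 500k ceiling suggested above
    "subscriber": 1_000_000, # a perk, but still capped: never 2 megs
}

def max_discussion_bytes(user_class):
    # Fall back to the anonymous cap for unknown classes.
    return SIZE_CAPS.get(user_class, SIZE_CAPS["anonymous"])
```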
I think this would be MUCH clearer than the existing
system of comment limits and indexes and splits and
breaks, which is extremely confusing.