From: Grant T. <gt...@sw...> - 2002-06-05 00:08:44
>>>>> Erik Arjan Hendriks <er...@he...> writes:

> That's good.  I don't find it surprising though since most of the
> addresses involved in the flush aren't going to be in the icache
> anyway.  Flushing stuff out to main memory early shouldn't cause
> any problem either since you're going to end up forcing that to
> happen anyway.

Ah, this is true, I guess.  At these sizes it probably does stomp all
over the cache anyway.

> Hrm.  That's pretty slow.  I'm seeing 550+ mbits/sec on our alpha
> cluster here (IP over Myrinet).

Hmm.  We're using gigabit ethernet.  The ethernet is an on-chip MAC
with a series of mind-blowingly ugly hacks in the driver to work
around hardware bugs, so it is performance-limited to some extent.
It's not limited to 100mbit by any stretch of the imagination,
though; netperf runs at 500-800.

> Every new page is going to cause a page fault.

Yeah, this is what I figured.

> It could be that page faults are particularly slow or something.
> That certainly wouldn't surprise me.

Me neither, certainly not on this particular chip.  Is there any way
to "bulk allocate" the pages into existence before doing the reads?
It seems like there has to be some way to save time here.  I'll go
poke around.  Maybe there's some ugly way to speed up faults for
vmadump's special known-future case, or just for my platform.
Perhaps the profiler will suggest something.  (The sort of pre-touch
loop I have in mind is sketched below, after my sig.)

>> I don't suppose there would be a way to map the pages directly

> No, not with the current dump file format.  The basic problem is
> VMADump goes out of its way to send less than the whole process
> image.  Therefore the dump file is a bunch of pages that aren't
> necessarily contiguous.  There's a tiny bit of goop (page address)
> in front of every page in the file too.

Hmm.  For us the heap is 95% of the thing, and it's pretty much all
dirty.  Too bad you've got a per-page header ;(  (How I picture that
layout is also sketched below.)

>> In the end I make the sender wait in a loop for the TCP_INFO
>> ioctl to return a TCP state of TCP_CLOSE.  No way should this be

> Whoa.  That's messed up.  I've run into a number of TCP bugs,
> including one that results in a spurious connection reset, but
> nothing that matches that.  VMADump certainly should not cause any
> problems since the only I/O related calls it makes are read and
> write.

Oh yes, it happens pretty much the same way for all-userspace code
using temp files.  I'm quite sure it's specific to us; if things
generally worked like this, your average httpd wouldn't work at all.
(The wait loop itself looks roughly like the sketch below.)

> The reset bug I saw looks like this: if two machines are connected
> (A and B) and A calls shutdown(fd, 1) (i.e. no more sends from A,
> send TCP FIN), then B will sometimes get connection reset by peer
> on writes.

The whole Unix API is a little iffy about a lot of this stuff; often
it's impossible for just a return value and errno to express what's
going on.  I've been working with O(1) poll replacements lately, and
boy, do some of them interact oddly with signals/TCP/etc.  (A little
loopback skeleton of your shutdown scenario is below as well.)

--
Grant Taylor - x285 - http://pasta/~gtaylor/
Starent Networks - +1.978.851.1185
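
The pre-touch loop, as promised.  This is only a sketch of the idea
(names made up, region assumed to be already mapped and writable); it
doesn't reduce the number of faults, it just takes them in one tight
loop up front instead of inside each read():

#include <stddef.h>
#include <unistd.h>

/* Touch one byte per page so every page is faulted in before the
 * reads start.  A write is used because a read fault on anonymous
 * memory may just map the shared zero page. */
static void prefault(char *buf, size_t len)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t off;

    for (off = 0; off < len; off += page)
        buf[off] = 0;
}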
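
And how I picture the dump layout you describe: a little goop (the
page's address) and then the page itself.  The struct and names here
are my guesses for illustration, not VMADump's actual format:

#include <stdio.h>

#define PAGE_SIZE 4096          /* assumption: fixed 4k pages */

struct page_rec {
    unsigned long addr;         /* virtual address of this page */
    char data[PAGE_SIZE];       /* page contents follow the address */
};

/* Read one (address, page) record; 1 = got one, 0 = EOF, -1 = error. */
static int read_page_rec(FILE *f, struct page_rec *rec)
{
    if (fread(&rec->addr, sizeof(rec->addr), 1, f) != 1)
        return feof(f) ? 0 : -1;
    if (fread(rec->data, sizeof(rec->data), 1, f) != 1)
        return -1;
    return 1;
}

That per-page address is also exactly what rules out mmap()ing the
file: with a header in front of every page, the page data is never
page-aligned on disk.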
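
The wait loop, roughly.  On Linux the state actually comes back from
getsockopt(TCP_INFO) (struct tcp_info, tcpi_state field) rather than
a plain ioctl; depending on your libc you may need linux/tcp.h for
struct tcp_info.  A sketch, not our production code:

#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Poll the socket until the kernel's TCP state machine reaches
 * TCP_CLOSE.  Returns 0 on close, -1 if getsockopt fails. */
static int wait_for_tcp_close(int fd)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);

    for (;;) {
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) < 0)
            return -1;
        if (info.tcpi_state == TCP_CLOSE)
            return 0;
        usleep(10000);          /* back off instead of spinning */
    }
}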
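
Finally, a loopback skeleton of the shutdown scenario you describe
(error checking stripped for brevity).  On a correct stack it runs
clean, since A keeps draining; on a kernel with the bug you saw, B's
write would be the one that comes back ECONNRESET:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    struct sockaddr_in sa;
    socklen_t salen = sizeof(sa);
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    int fd, i;
    char buf[4096];

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = 0;                        /* let the kernel pick a port */
    bind(ls, (struct sockaddr *)&sa, sizeof(sa));
    listen(ls, 1);
    getsockname(ls, (struct sockaddr *)&sa, &salen);

    if (fork() == 0) {                      /* child plays "B" */
        fd = socket(AF_INET, SOCK_STREAM, 0);
        connect(fd, (struct sockaddr *)&sa, sizeof(sa));
        memset(buf, 0, sizeof(buf));
        sleep(1);                           /* give A's FIN time to land */
        for (i = 0; i < 1000; i++) {
            if (write(fd, buf, sizeof(buf)) < 0) {
                perror("B: write");         /* ECONNRESET here == the bug */
                exit(1);
            }
        }
        exit(0);
    }

    fd = accept(ls, NULL, NULL);            /* parent plays "A" */
    shutdown(fd, SHUT_WR);                  /* half-close: send FIN */
    while (read(fd, buf, sizeof(buf)) > 0)  /* keep draining B's data */
        ;
    wait(NULL);
    return 0;
}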