From: Tom H. <th...@cy...> - 2004-02-14 16:29:48
In message <Pin...@ye...>
Nicholas Nethercote <nj...@ca...> wrote:
> AIUI, the fdsets are bit arrays, with 1024 bits, each bit representing an
> fd; if bit N is set in an array, select will watch fd N appropriately.
> The first arg, n, is the highest N for which a bit is set, plus one.
> Therefore, select() will read at least n bits in each array. Thus, the
> a1/8 is close, but not quite right; it's not rounding up to allow for any
> unused bits at the end of the read bytes. For example, most of the time n
> will be less than 8, in which case the length passed to pre_mem_read will
> be zero! I think it needs to be (a1+7)/8.
First up, an fdset only has 1024 bits by default - you are free to
redefine the size (by defining FD_SETSIZE) before including
sys/select.h if your program needs larger sets. That doesn't actually
affect your argument though.
The problem with rounding up instead of down is that it may cause
false positives, because there is no guarantee that all the bits in
the final byte will be defined - if I pass 3 as the first argument
then I only have to fill in the first three bits of the first byte,
and the others may be undefined, as the kernel won't pay any attention
to them.
The problem is that although valgrind tracks definedness at the bit
level, the interface that the syscall wrappers use to check it only
works at byte granularity.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/