> Quantum is not, but timeslice is. One explanation for this all would be that
> you are testing on a quiet system, and Jon is testing on a busy one.
Relevant information: my program was an OpenGL app. MacMcMullen mentioned
(in a private off-list response) that Direct3D does some things that have the side effect
of changing the Sleep resolution to 1ms. I would guess
this is done globally, i.e. they don't set it and then unset it around every D3D
call. So probably if your process starts up D3D, you get 1ms resolution; otherwise you get
10ms. (In the same way that your floating-point precision gets whacked to single
if you don't specify the special "please don't screw me" flag.)
From what I saw while testing, Sleep(0) was actually equivalent to Sleep(10),
i.e. it was worse than Sleep(1). That was pretty strange -- not the behavior
one would expect.
Okay, here comes the algorithm part, to keep this on-topic:
> > No they don't, they just yield the current slice, something that could
> > happen to your thread at any time. You are not going to tell me that you get
> > random 10ms stops in your code !
Right, the problem only happens when you sleep. But the issue is that just
rounding to the end of the next slice is not good enough. If you draw the
situation out on a piece of paper, you will see that (for example) this
causes your app to oscillate nastily between 50fps and 100fps when you
are right on the boundary between the two -- instead of getting, say,
99 to 101. Also, it just generally doesn't give you control over how long
you will sleep when you enter Sleep, which is a bad thing -- remember that
the context this came up in was, "figure out how many milliseconds you want
to pad your frame time out to the target, then sleep for that long".