Thread: RE: [Algorithms] Game loop timings
From: Chuck W. <ch...@wi...> - 2005-03-31 06:37:53
|
Fixed-increment timing code is problematic on the PC no matter how clever you are about it. Since the performance characteristics of the system vary widely from system to system, and even on the same system from driver to driver or due to component upgrades, you are inherently dealing with variable timing. Fixed-increment timing code also breaks when you are doing debug vs. release builds, profiling, or have to cope with variability introduced by multi-threading or background processes that the user wants or needs to have active while playing your game.

Now, you can certainly set up your AI/physics to run at a lower rate than your rendering rate. You can probably get close to constant increments if your resolution is low enough via multimedia timers, but you will have to be stable enough to deal with the occasional glitch.

-Chuck Walbourn
SDE, Windows Gaming & Graphics |
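The variable timing Chuck describes is usually handled by measuring the elapsed wall-clock time each iteration and passing it to the update. A minimal sketch (Python for brevity; all names here are illustrative, not from any particular engine):

```python
import time

def run_frames(update, render, num_frames):
    """Variable-timestep loop: measure the real elapsed time each
    iteration and hand it to the update function."""
    prev = time.perf_counter()
    for _ in range(num_frames):
        now = time.perf_counter()
        dt = now - prev   # wall-clock delta; varies frame to frame
        prev = now
        update(dt)
        render()
```

The point of the sketch is only that `dt` is measured, never assumed: debug builds, driver changes, and background processes all just show up as a different `dt`.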
From: Stuart H. <stu...@ku...> - 2005-04-01 15:43:48
|
I was giving this some thought: is it worth considering the following...?

Build your update functions into two separate types:

a) Have to be updated in discrete steps
b) Can be updated in variable steps (multiples of the minimum discrete step)

Then, all your top-end loop needs to do is figure out the number of discrete steps needed (frame time / min step time, rounded down) and pass that number off to the update loop. The update loop will pass this down to the lower-level section updates (and sub-section updates, etc.), each of which can determine how it needs to process this time step: either it runs a series of iterations of fixed step ([a] above), or processes the whole step in one go ([b] above).

This breaks down your update from:

n * (update time) + (rendering time)

into:

n * (fixed-step update) + (variable update) + (rendering time)

where (fixed-step update) <<< (variable update), hence (hopefully) avoiding the death spiral / well of despair issue. In the case where the death spiral occurs when n >= 2, some optimisation is probably called for :)

-Dino.

> -----Original Message-----
> From: Stephen J Baker [mailto:sj...@li...]
> Sent: 01 April 2005 15:32
> Subject: Re: [Algorithms] Game loop timings
>
> Jon Watte wrote:
> >> I realise my questions are a little vague here, but I need general
> >> pointers as to how to achieve this.
> >
> > Pseudo-code with explanation:
> >
> > http://www.mindcontrol.org/~hplus/graphics/game_loop.html
> >
> > (Btw: if anyone has feedback on how to make that page clearer,
> > please shoot it my way. I find myself posting that link a lot in
> > various fora.)
>
> I think the one thing it should perhaps elaborate a little on is the
> scenario when the system goes into a 'death spiral' when the time
> taken to execute the fixed rate stuff is only marginally better than
> realtime.
>
> On the first time around, you run the fixed rate code once and the
> rendering code once. On the next iteration, you see that you took
> more than twice the fixed rate iteration time - so you run that code
> twice then re-render. Now things are taking longer than three times
> the fixed rate - so on every consecutive iteration you do more and
> more fixed rate iterations and take longer and longer to get around
> the main loop. You can get into this spiral where the fixed rate code
> gets run more and more times for each iteration of the graphics until
> the system essentially never does graphics updates.
>
> This can happen when some unusual game event (typically, collision
> detection or physics) takes an unexpectedly long amount of time to
> evaluate. The system can then spiral out of control with no way to
> ever recover - even if the event that caused it goes away. Under
> these circumstances, simply dropping a frame of the fixed rate stuff
> will bring the system back into frame rate.
>
> It's definitely worth adding a few lines of code to spot this 'death
> spiral' and to correct it before bad things happen. This hack can
> allow you to run acceptably on significantly slower CPUs than would
> otherwise be possible.
>
> There should at least be a couple of lines in the explanation to
> cover this situation, because I've seen amateur games that play
> perfectly well on an X Gigahertz CPU go into a death spiral on 0.9*X
> GHz machines, to the great surprise of the author.
>
> Steve Baker |
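Stuart's two update types and the top-level step count can be sketched like so (a hypothetical illustration in Python; `MIN_STEP` and the class names are invented for the example, not from the post):

```python
MIN_STEP = 1.0 / 60.0  # smallest discrete step (illustrative value)

class FixedStepSystem:
    """Type (a): must be advanced in discrete steps of MIN_STEP."""
    def __init__(self):
        self.steps_run = 0
    def update(self, n_steps):
        for _ in range(n_steps):
            self.steps_run += 1   # one fixed-size step per iteration

class VariableStepSystem:
    """Type (b): can consume a multiple of MIN_STEP in one go."""
    def __init__(self):
        self.time_consumed = 0.0
    def update(self, n_steps):
        self.time_consumed += n_steps * MIN_STEP  # whole interval at once

def top_level_update(systems, frame_time):
    """Figure out the number of whole discrete steps in this frame
    (frame time / min step time, rounded down) and pass that count
    to every subsystem."""
    n = int(frame_time // MIN_STEP)
    for s in systems:
        s.update(n)
    return n
```

Each subsystem decides for itself whether to iterate or to take the interval in one gulp, which is exactly the split between types (a) and (b) above.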
From: Jay S. <Ja...@va...> - 2005-04-01 17:49:01
|
> I think the one thing it should perhaps elaborate a little on is the
> scenario when the system goes into a 'death spiral' when the time
> taken to execute the fixed rate stuff is only marginally better than
> realtime.

This is only true for stuff that costs more per unit of realtime, but the simplest solution is to clamp realtime. We clamp realtime to 100ms to avoid these problems in our games. So you still get the progressively poor performance, but it hits a floor really quickly and then recovers. The artifact is, of course, that real time slows down in the game, but this is preferable to having your framerate drop even lower.

Jay |
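Jay's clamp is a one-liner at the top of the loop. A sketch, using the 100ms figure from the post (the helper name is illustrative):

```python
MAX_FRAME_TIME = 0.100  # clamp realtime to 100 ms, as described above

def clamped_delta(raw_dt):
    """Clamp the measured frame delta so a long stall slows game
    time down instead of producing one huge simulation step."""
    return min(raw_dt, MAX_FRAME_TIME)
```

A 16ms frame passes through untouched; a 500ms stall becomes a 100ms game-time step, so the death spiral hits its floor immediately.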
From: Gribb, G. <gg...@ra...> - 2005-04-01 19:11:47
|
> We clamp realtime to 100ms to avoid these problems in our games.

One other great reason to do this is to prevent a hiccup from causing the player to move 100 feet in a single frame. Much better to slow down time in this case.

-Gil |
From: Dave 'Z. K. <Zo...@Re...> - 2005-04-01 23:16:41
|
From: Tom Forsyth [mailto:tom...@ee...]
> Fixed-time-increment _rendering_ is hard for the reasons Chuck
> mentioned. But having your internal game state update at a fixed
> increment, or multiples thereof, is not. A lot of games do it, and it
> removes a lot of problems with reproducibility of results. Having your
> game change behaviour between debug and release, or just by moving the
> camera, is not a good thing.

The best example of this I can think of is the Quake engine games. If you had a low framerate, you could jump higher. The variable update quantization would cause rounding errors in the player movement code, and when the error was compounded over multiple frames, you ended up with a higher jump. This allowed people to essentially cheat in multiplayer, since they could just limit their framerate and jump on a box other people couldn't.

Every game I've worked on since Quake has used a fixed update rate, regardless of framerate. This isn't too bad on consoles, since you're locked at 60FPS for an NTSC signal anyway. But the PC space has had some resistance; for example, Doom3 is locked at a 60Hz update rate. People buying the latest GeForces don't see 200FPS in the corner and get confused.

I'm completely sold on a fixed update rate. It just makes debugging and development easier. The only fun part was actually changing it to 50Hz for our PAL version. That actually exposed bugs, since the player was moving a little further each frame.

--Zoid
zo...@re... |
From: Tom F. <tom...@ee...> - 2005-04-02 05:00:33
|
> People buying the latest GeForces don't see 200FPS in the corner and
> get confused.

They should still see huge framerates if you interpolate. In fact, if your rendering rate is anywhere near your update rate (i.e. within a factor of about five), you really need interpolation, or you're going to get bad aliasing between rendering and update - e.g. the first frame gets two updates, the next frame gets three, the next gets two, etc. - and stuff judders horribly when this happens. So they should still see 200fps (well, not that you can see 200fps on a monitor running at 85Hz, but the graphics card is rendering the scenes that fast).

The most extreme examples of this are RTS games, which frequently have update rates far lower than people expect. Because as long as you interpolate, people just can't tell. StarTopia had 6 updates per second, for example, and at one stage (to run on min-spec machines) we almost dropped it to 3.

TomF. |
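The interpolation Tom describes renders a blend of the previous and current simulation states, weighted by how far into the current update interval the frame falls. A sketch (names are illustrative):

```python
def interpolate_state(prev_pos, curr_pos, alpha):
    """Blend two successive simulation states for rendering.
    alpha in [0, 1] is the fraction of the update interval that has
    elapsed at render time."""
    return prev_pos + (curr_pos - prev_pos) * alpha

def render_alpha(time_since_last_update, update_dt):
    """How far we are into the current update interval, clamped so a
    late frame renders the newest state rather than extrapolating."""
    return min(time_since_last_update / update_dt, 1.0)
```

With 6 updates per second, `update_dt` is about 167ms, yet every rendered frame gets a fresh blended position, which is why players can't tell.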
From: Alen L. <ale...@cr...> - 2005-04-03 09:52:20
|
Actually, it is a bit more complicated than just "variable frame rate is evil". Most of the problem is in the fact that velocity and coordinates were truncated every frame. As described by Coriolis, and included in the official FAQ: http://ucguides.savagehelp.com/Quake3/FAQFPSJumps.html . Check the graphs there. It shows differences in path that are due to just framerate differences, and those that happen when you add truncation. It looks like step time doesn't make much difference. I believe that he was not using a simple Euler integrator there, though. I guess it was implicit Euler, but I didn't bother to check thoroughly yet.

Alen |
From: Jon W. <hp...@mi...> - 2005-04-04 17:48:34
|
> there. It shows differences in path that are due to just framerate
> differences, and those that happen when you add truncation. It looks
> like step time doesn't make much difference. I believe that he was not
> using a simple Euler integrator there, though. I guess it was implicit
> Euler, but I didn't bother to check thoroughly yet.

With non-fixed frame rates, you also have problems with penalty methods, because objects won't come to "rest" as easily. Consider a box lying on the ground, not yet disabled. Each frame, gravity pulls it down into the ground, and a penalty force based on penetration depth is added to the box. With a longer time step, a greater force will be added. If the time step jitters, then variable forces will be added to the box, leading to a jittery box. With a fixed time step, the system will work like a damped spring and come to stability.

Other positives of fixed step size include networking consistency and recording/replay, as well as consistent reproducibility of bugs.

I really can't think of any really good reason for variable frame time except that it's sometimes the simplest to implement up-front.

Cheers,
/ h+ |
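Jon's resting-box example can be reproduced numerically: with a penalty force proportional to penetration depth and a fixed step, the system settles like a damped spring. A toy 1-D sketch (all constants are arbitrary, chosen only for illustration; unit mass is assumed):

```python
def settle(steps, dt_fn, k=2000.0, damping=20.0, g=9.8):
    """1-D box of unit mass above the ground at y = 0. Gravity pulls
    it into the ground; a penalty force proportional to penetration
    depth pushes it back out. Returns the final (position, velocity)."""
    y, v = 0.0, 0.0
    for i in range(steps):
        dt = dt_fn(i)
        depth = max(0.0, -y)                  # penetration below ground
        force = -g + k * depth - damping * v  # gravity + penalty + damping
        v += force * dt                       # symplectic Euler step
        y += v * dt
    return y, v
```

With a fixed 10ms step the box converges to rest at a penetration depth of g/k. The larger the step, the larger the single-frame penalty impulse, which is exactly the source of the jitter Jon describes when dt varies.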
From: Alen L. <ale...@cr...> - 2005-04-05 11:13:35
|
> With non-fixed frame rates, you also have problems with penalty
> methods, because objects won't come to "rest" as easily. Consider a
> box lying on the ground, not yet disabled. Each frame, gravity pulls
> it down into the ground, and a penalty force based on penetration
> depth is added to the box. With a longer time step, a greater force
> will be added. If the time step jitters, then variable forces will be
> added to the box, leading to a jittery box. With a fixed time step,
> the system will work like a damped spring and come to stability.

IMO, such problems are caused by bad contact modeling, not by the variable timestep itself. A fixed timestep is just hiding the fact that the contact model allows the penalty force to generate too-high velocities. If the contact model is corrected, this should not happen even with variable timesteps.

> Other positives of fixed step size include networking consistency and
> recording/replay, as well as consistent reproducibility of bugs.

Yes, if I were to use a fixed timestep, the deciding argument would be network consistency. But as I will explain below, this is not a perfect solution. I worked on a few projects based on that idea, and there were problems with frame rates on consoles.

If you have networking working, recording/replay usually works as well, so this is not an issue. Consistent reproducibility of bugs is easy to solve even with variable time steps, as explained by John Carmack in one of his .plans: if you are recording a replay for bug tracking, just record the timestep size of each step. Playback won't be smooth, but it will reproduce the bug.

> I really can't think of any really good reason for variable frame time
> except that it's sometimes the simplest to implement up-front.

There is one big problem with a fixed time step. If you are running on a console, you will want to lock the frame rate to the TV refresh, so the timestep has to be fixed at 60Hz or 30Hz. If it is 20Hz, for example, 0, 1, or 2 steps will occur on different frames, meaning you are not using your CPU completely, because you have to limit the effects so everything fits within the worst-case rendering frame, the one with 2 steps. So you have to run at 60 or 30Hz instead. That's on NTSC; on PAL you have to run at 50 or 25Hz. But then you cannot network between PAL and NTSC versions, because your client and server run at different timesteps. You could implement a system that works when client and server use different timesteps, but that will essentially be something like a variable timestep system, because you lose the benefit of network consistency.

All in all, I wouldn't say that either variable or fixed timesteps are perfect. It depends on what your target platforms and framerates are going to be. If you can fix the framerate and don't have to network, then fixing the timestep will be great. But if you need to network between 50 and 60 Hz consoles, you should consider a variable timestep.

JM2C,
Alen |
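The Carmack trick Alen mentions, replayable variable timesteps, amounts to logging the step size used on every frame and feeding the identical sequence back. A sketch with a stand-in update function (illustrative, not from any real engine; real code would log inputs alongside the timesteps):

```python
def simulate(state, dt):
    """Stand-in for a deterministic per-step game update."""
    return state + dt * 3.0

def record_run(dts):
    """Run the game, logging the timestep of each step."""
    state, log = 0.0, []
    for dt in dts:
        log.append(dt)              # record the step size each frame
        state = simulate(state, dt)
    return state, log

def replay(log):
    """Re-run with the identical dt sequence -> identical result."""
    state = 0.0
    for dt in log:
        state = simulate(state, dt)
    return state
```

Because floating-point operations are deterministic for an identical sequence of steps, the replay reproduces the original run bit for bit, even though the timesteps were variable; only the playback smoothness is lost.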
From: Jon W. <hp...@mi...> - 2005-04-05 16:20:58
|
> There is one big problem with fixed time step. If you are running on a > console, you will want to lock the frame rate to the TV refresh. So the > timestep has to be fixed at 60 Hz, or 30Hz. If it is 20Hz e.g, 0, 1, or 2 First: if you pipeline your stepping and your rendering, it doesn't matter if you step N or N+1 times during a frame (you'll never step N-1 times); this assumes that rendering uses slight extrapolation (or slight latency with interpolation). I can see how, on a CPU-limited console, you'd want to run physics at a lower frame rate than graphics. However, for systems with fast-moving objects and massive, interactive physics, you really want physics stepped at rates of 100 Hz or more. Intel had a dual-core demo at GDC that used Novodex and stepped physics at 200 Hz; very smooth. Cheers, / h+ |
From: Alen L. <ale...@cr...> - 2005-04-07 05:36:01
|
> > There is one big problem with a fixed time step. If you are running
> > on a console, you will want to lock the frame rate to the TV
> > refresh, so the timestep has to be fixed at 60Hz or 30Hz.
>
> First: if you pipeline your stepping and your rendering, it doesn't
> matter if you step N or N+1 times during a frame (you'll never step
> N-1 times);

Perhaps I am having a blind spot here, but from intuition, and from some past experience, I would say that if the timestep is 20Hz and the frame rate is 30Hz, you _will_ be doing N-1 steps. Am I wrong, and why?

> this assumes that rendering uses slight extrapolation (or slight
> latency with interpolation).

Of course, as soon as the timestep does not match or divide the frame time, you need inter/extrapolation. Anyway, doing N and N+1 steps will cause your frames to last longer on a frame with N+1 steps, hence you are wasting CPU on the other frames. (This only shows up in worst-case scenes, but the worst case is all that matters: when the frame rate is fixed, you are wasting CPU anyway on an average scene, and that is ok. You must optimize for the worst case so it doesn't skip a frame.) The only way to alleviate this is to split physics and rendering into two threads and double-buffer the world's state, or to let rendering lag at least one frame behind, as some PC drivers allow. But due to memory limits on consoles, both are usually impractical. And both introduce lag to your input. Deathmatch players hate that.

> I can see how, on a CPU-limited console, you'd want to run physics at
> a lower frame rate than graphics. However, for systems with
> fast-moving objects and massive, interactive physics, you really want
> physics stepped at rates of 100 Hz or more. Intel had a dual-core demo
> at GDC that used Novodex and stepped physics at 200 Hz; very smooth.

You definitely need to step the solver and integrator at more than 30Hz. 100Hz is usually good enough, yes. But:

a) Do you really want to actually move the bodies at this rate, and step the collider? There is an issue with AI and collision callbacks. Usually, it is best if AI runs at the same rate as the bodies move; otherwise collision callbacks can get called more often than AI steps, and tests for rockets exploding on impact, jumping mechanics, etc. get a bit messy.

b) Does this have to be a _fixed_ timestep? It needs to be sufficiently small, so that quickly rotating bodies and articulated systems can be solved and integrated with less error. But there is no real requirement that it is _fixed_.

What we do is to have timestep == frame and split it into an integral number of substeps that are all of the same size and _at most_ 10ms each. Collision and AI run on each timestep, so the collision callbacks are synchronized with AI, and much time is saved by not running collision on each substep. Solving and integration are done per substep, but the bodies are not actually _moved_ until the end of the whole timestep. A sweep per object takes care that no tunneling occurs, and the contact model is good enough to move bodies out of any remaining interpenetration. This system runs at variable frame rates without noticeable glitches.

An additional bonus is that since solving and integration are done per object cluster (or group, or island, or whatever you call it), single objects that don't touch anything can be stepped without substeps, since a single object cannot produce enough numerical error to require substepping. This saves a lot of CPU with projectiles, characters, most bouncing objects, etc.

Cheers,
Alen |
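Alen's substep rule, an integral number of equal substeps of at most 10ms each, reduces to a couple of lines (the 10ms cap is from the post; the function name is illustrative):

```python
import math

MAX_SUBSTEP = 0.010  # at most 10 ms per substep, as described above

def split_substeps(frame_dt):
    """Split a frame's timestep into the smallest number of equal
    substeps that are each no longer than MAX_SUBSTEP.
    Returns (substep_count, substep_size)."""
    n = max(1, math.ceil(frame_dt / MAX_SUBSTEP))
    return n, frame_dt / n
```

A 33ms frame becomes four substeps of 8.25ms each; a frame already under 10ms runs as a single step, which is what lets isolated objects skip substepping entirely.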
From: Jon W. <hp...@mi...> - 2005-04-07 20:26:33
|
> > First: if you pipeline your stepping and your rendering, it doesn't
> > matter if you step N or N+1 times during a frame (you'll never step
> > N-1 times);
> Perhaps I am having a blind spot here, but from intuition, and from
> some past experience, I would say that if the timestep is 20Hz and the
> frame rate is 30Hz, you _will_ be doing N-1 steps. Am I wrong, and why?

The time-stepping and the frame-drawing interleave in a pattern similar to the Bresenham line-drawing algorithm, which goes N items over before it goes one item up (which would be the N+1 case). In the case of 20 Hz physics and 30 Hz drawing, though, that would be similar to trying to draw a line with a slope steeper than 45 degrees, because you can't, on average, even do one step of physics in the time needed for one frame of graphics. Thus, you'll do 0 or 1 frames of physics per frame of graphics. "N" in this case is 0.

> > this assumes that rendering uses slight extrapolation (or slight
> > latency with interpolation).
> Of course, as soon as the timestep does not match or divide the frame
> time, you need inter/extrapolation.

With a good enough physics timestep, you don't need to. Physics at 100 Hz and a "free running" frame rate around 25-35 (depending on machine/card/etc.) seems to work just fine in my tests.

> Anyway, doing N and N+1 steps will cause your frames to last longer on
> a frame with N+1 steps, hence you are wasting CPU on the other frames.

You're assuming that graphics and physics are not pipelined. I'm assuming that you triple-buffer your graphics, which will compensate for the jitter. If you're on a console, you might not have that option, which might make you take another approach.

> a) Do you really want to actually move the bodies at this rate, and
> step the collider? There is an issue with AI and collision callbacks.

I would absolutely want to run the collider as often as the physics -- else why run the physics that fast? I don't do a lot of AI, and the AI I've done is high-level, goal-setting kinds of things plus very low-level, force-producing kinds of things (which can integrate into physics constraints), so I haven't seen a problem with AI cycle consumption.

> it is best if AI runs at the same rate as the bodies move; otherwise
> collision callbacks can get called more often than AI steps, and tests
> for rockets exploding on impact, jumping mechanics, etc. get a bit
> messy.

Rockets exploding and jump behavior typically go into the "physics constraints" category, and need to be locked to the frame step to feel good.

> b) Does this have to be a _fixed_ timestep? It needs to be
> sufficiently small, so that quickly rotating bodies and articulated
> systems can be solved and integrated with less error. But there is no
> real requirement that it is _fixed_.

It is possible to engineer stable, consistent, replayable, networkable physics systems without jitter, using a variable time step. But life is too short :-)

> substeps, since a single object cannot produce enough numerical error
> to require substepping. This saves a lot of CPU with projectiles,
> characters, most bouncing objects, etc.

If it works well for you, it's obviously what you want to do :-) However, my experience has been that there's less pain and better feel with high step rates and collision/forces at the same step rate as physics, and significantly better behavior with a fixed time step. So if someone asks me "how do I do this", I'll probably keep saying "the easiest way I've found is ..."

Cheers,
/ h+ |
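The pipelined N / N+1 behavior Jon describes falls out of the standard accumulator loop: carry the unsimulated remainder into the next frame, and the per-frame step count only ever takes two adjacent values. A sketch (illustrative names; 100 Hz physics against roughly 30 fps frames):

```python
def steps_per_frame(frame_dts, step=0.01):
    """Fixed-timestep accumulator: for each rendered frame, run as
    many whole physics steps as the accumulated time allows, and
    carry the remainder forward. Returns the step count per frame."""
    acc, counts = 0.0, []
    for dt in frame_dts:
        acc += dt
        n = 0
        while acc >= step:   # consume whole physics steps
            acc -= step
            n += 1
        counts.append(n)
    return counts
```

At 100 Hz physics and 30 fps rendering, frames alternate between 3 and 4 steps; the leftover fraction of a step is exactly what the renderer inter/extrapolates across.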
From: Alen L. <ale...@cr...> - 2005-04-08 07:27:54
|
> frame of graphics. Thus, you'll do 0 or 1 frames of physics per frame > of graphics. "N" in this case is 0. Yes, you are right. Either N, N+1 or N, N-1, depending on how you look at N, but never N-1, N, N+1. Correction accepted. > > Of course, as soon as timestep does not match or divide the frame time, > you > > need inter/extrapolation. > > With a good enough physics timestep, you don't need to. Physics at 100 Hz > and "free running" frame rate around 25-35 (depending on machine/card/etc) > seems to work just fine in my tests. Hmmmmm.... have you tried locking the fps at e.g. 30 and forcing gfx to sync to refresh (so you don't have screen tearing)? You should be able to see stuttering on a steadily moving object in this situation (30fps vs 100Hz physics, without interpolation). Screen tearing gives you a kind of "free temporal antialiasing". > You're assuming that graphics and physics are not pipelined. I'm assuming > that you triple-buffer your graphics, which will compensate for the > jitter. If you're on a console, you might not have that option, which > might make you take another approach. Console is one of the reasons tripple buffering is not an option. Hardcore FPS gamers are another reason. Input-to-render latency and twitch games just don't go together. In a previous FPS project, I implemented fixed timestep for physics, AI, controls and networking. It was simple and it worked. Used 20Hz for timestep, interpolated the graphics. Everyone said the thing was very nice and fast, and that it looks very, very smooth. Then we brought in a Quake player who does competitions etc, to let him check it out. After a few seconds running around, jumping like crazy and twitching the mouse left-right, he said: "this is wrong". It took some time to figure out that he is not happy unless I'm reading the mouse and moving the look direction, once per _frame_. 
Seems like he is able to see (or rather feel) that there is one additional frame rendered between the time he changes the direction of mouse movement and the time the effect is seen on screen. After some training it was possible for others to discern this effect when paying particular attention. Even though most people don't notice, those that do seem to be particularly unsatisfied. :)

With this particular project, we went with fixed timestep, and just processed input and look direction per-frame. But it was a bit of a mess to handle.

Moral of the story: don't introduce latency where you don't have to.

> I would absolutely want to run the collider as often as the physics --
> else why run the physics that fast?

Perhaps it is not that obvious, but there are two reasons to run physics at high rates.

1) Tunneling. IMO, solving tunneling by increasing physics rate is a bad idea. To prevent tunneling, you need sweeping. Period.

2) Integration of articulated systems. For a simple example, rotation of a body connected to another body by a ball joint causes the ball joint to tear apart. An explicit Euler integrator introduces large errors where angular movement interacts with linear. So you need to keep the step size small enough to keep the linear error caused by rotation under control and allow the joint to correct it. This is the primary reason for small timesteps. And it is perfectly valid. But it has nothing to do with collision, and it can be separated from collision rate.

So, I see a perfectly good reason to run physics at high rates, but none to do so for collision.

> I don't do a lot of AI, and the AI
> I've done is high-level, goal-setting kind of things plus very low-level,
> force-producing kinds of things (which can integrate into physics
> constraints) so I haven't seen a problem with AI cycle consumption.

Well, I can only say you are lucky. :) But that doesn't make it a generally acceptable fact.
Some people spend a significant amount of cycles per-frame on AI. Duplicating that amount is not helping.

> > is best if AI runs at the same rate as bodies move, otherwise collision
> > callbacks can get called more often than AI steps. Some tests for rockets
> > exploding on impact, jumping mechanics etc get a bit messy then.
>
> Rockets exploding and jump behavior typically goes into the "physics
> constraints" category, and need to be locked to the frame step to feel
> good.

So, a rocket is going to explode at one timestep, and inflict damage to all objects in range, and the damaged AIs are going to handle the damage event a few timesteps later? A character is going to touch a platform for one timestep, after which the platform is going to separate, and the character only gets to handle the touch event once he is again in the air and has no contact anymore? What if there are several successive collisions and separations before the AI gets that?

Like all other problems, these are solvable. But if you don't have them, you don't need to solve them.

> It is possible to engineer stable, consistent, re-playable, networkable
> physics systems without jitter, using a variable time step. But life is
> too short :-)

Yes, of course. It requires some engineering to get around the problems. But the same stands for the separation of AI steps from physics, inter/extrapolation, NTSC/PAL networking, input latency, etc. that appear as problems when using a fixed time step. :)

> If it works well for you, it's obviously what you want to do :-)
> However, my experience has been that there's less pain and better
> feel in high step rates and collision/forces being at the same step
> rate as physics, and significantly better behavior with a fixed
> time step. So, if someone's asking me "how do I do this" then I'll
> probably keep saying that "the easiest way I've found is ..."

Of course. That really is The Easiest Way. But it has downsides.
That is good if you are doing a home project, if you are a newbie trying to make your first game, if you are just prototyping something, etc. And it can be good for a real game, depending on the game type, target platforms, etc. All I'm saying is that if you are going to do a real game with it, you need to weigh your requirements and see which is better for you, fixed or variable.

Cheers,

Alen
|
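Alen's point that tunneling is solved by sweeping rather than by a higher step rate can be illustrated with a toy 1-D sketch (the function names and numbers here are invented for illustration, not taken from anyone's engine): a sampled overlap test only looks at discrete positions and can step straight over a thin obstacle, while a swept test examines the whole segment of motion and reports the time of impact.

```python
def discrete_hit(x, wall, radius=0.5):
    """Sampled overlap test: checks only a single position per step."""
    return abs(x - wall) <= radius

def swept_hit(x0, x1, wall):
    """Swept test: checks the whole segment of motion, returning the
    fraction of the step at which the wall is crossed, or None."""
    if min(x0, x1) <= wall <= max(x0, x1):
        dx = x1 - x0
        return (wall - x0) / dx if dx else 0.0
    return None

# A bullet stepping from x=0 to x=50 in one tick "tunnels" through a
# wall at x=10 if we only sample the endpoints, but the swept test
# catches the crossing at 20% of the step.
print(discrete_hit(0.0, 10.0) or discrete_hit(50.0, 10.0))  # False: tunneled
print(swept_hit(0.0, 50.0, 10.0))                           # 0.2
```

Doubling the physics rate only halves the step length, so a fast enough bullet still tunnels; the swept test is correct at any rate, which is why the two concerns can be decoupled as Alen suggests.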
From: Jon W. <hp...@mi...> - 2005-04-08 16:39:34
|
> > With a good enough physics timestep, you don't need to. Physics at 100 Hz
> > and "free running" frame rate around 25-35 (depending on machine/card/etc)
> > seems to work just fine in my tests.
> Hmmmmm.... have you tried locking the fps at e.g. 30 and forcing gfx to sync
> to refresh (so you don't have screen tearing)? You should be able to see
> stuttering on a steadily moving object in this situation (30fps vs 100Hz

If you're a perfectionist, certainly interpolating will look better (this also depends on scene type).

> Console is one of the reasons triple buffering is not an option. Hardcore
> FPS gamers are another reason. Input-to-render latency and twitch games just
> don't go together.

Hard-core PC FPS players will turn vsync off entirely, right?

Do you find there's a market for hard-core FPS players these days? Back when I tried it, I came to the conclusion that mouse and FPS were made for each other.

> In a previous FPS project, I implemented fixed timestep for physics, AI,
> controls and networking. It was simple and it worked. Used 20Hz for
> timestep, interpolated the graphics. Everyone said the thing was very nice
> and fast, and that it looks very, very smooth. Then we brought in a Quake
> player who does competitions etc, to let him check it out. After a few
> seconds running around, jumping like crazy and twitching the mouse
> left-right, he said: "this is wrong". It took some time to figure out that

Do you think the 20 Hz command rate has something to do with it? I think that with a fast physics rate/command rate, that'd be solved.

> Moral of the story: don't introduce latency where you don't have to.

I'm with you there.

> 1) Tunneling. IMO, solving tunneling by increasing physics rate is a bad
> idea. To prevent tunneling, you need sweeping. Period.

I agree -- you need to sweep if you move further than your maximum acceptable penetration depth in a single frame.

> 2) Articulated systems.
> For a simple example, rotation of a body connected ...
> correct it. This is the primary reason for small timesteps. And it is
> perfectly valid. But it has nothing to do with collision, and it can be
> separated from collision rate.

I would argue that the interaction between those bodies may cause collisions at each of the time steps, so colliding at each time step is preferable. This might be one of those "horses for courses" things, though. (You know the absolute law of the universe: everything depends.)

> So, a rocket is going to explode at one timestep, and inflict damage to all
> objects in range, and the damaged AIs are going to handle the damage event a
> few timesteps later?

The damaged AI is going to make a new decision about what its goals are a few steps later, yes. Most humans don't have infinitely fast reaction times, after all :-) If you damage something enough to kill it, well, death doesn't need lots of cycles (mostly just queuing notification messages). Is yours different?

(I've kept our specific challenges out of this, like our opposing forces coming from the OneSAF Objective System, which delivers entity data to us once every three seconds or so. Frame latency just isn't the biggest of our problems at that point :-) )

> > time step. So, if someone's asking me "how do I do this" then I'll
> > probably keep saying that "the easiest way I've found is ..."
> I'm saying is that if you are going to do a real game with it, you need to
> weight your requirements and see which is better for you, fixed or variable.

Fair enough. Hopefully now "igrok" (the original poster of this thread) has enough data to support his decision one way or the other.

Cheers,

/ h+
|
From: Alen L. <ale...@cr...> - 2005-04-09 07:01:13
|
> Hard-core PC FPS players will turn vsync off entirely, right?

As I was told by one, they will turn vsync _on_, make sure the machine runs exactly at e.g. 60 fps, and then use a PS/2 mouse with a custom driver so they can tune the mouse sampling rate to be a multiple of 60Hz, like 240Hz, or something, IIRC. This gives them guaranteed fixed (and very low) latency.

> Do you find there's a market for hard-core FPS players these days?

Yes, their time is passing, but they are still able to make much noise. If you know what I mean.

> Do you think the 20 Hz command rate has something to do with it?
> I think that with a fast physics rate/command rate, that'd be solved.

Yes, of course. 20Hz was a Bad Idea. Just making a point of why input latency is bad. ("Moral of the story: don't introduce latency where you don't have to.") Triple buffering adds one frame of latency, so that's why triple buffering is not a good solution to smooth out CPU spikes caused by a different number of physics steps between two frames. The better solution is not to have CPU spikes, if possible. :)

> The damaged AI is going to make a new decision about what its goals
> are a few steps later, yes. Most humans don't have infinitely fast
> reaction times, after all :-) If you damage something enough to kill
> it, well, death doesn't need lots of cycles (mostly just queuing
> notification messages). Is yours different?

LOL, that's true alright. :) I was referring to the parts of AI that deal with animation and physical/input response (double jumping, rocket jumping, etc.), not decision making.

> Fair enough. Hopefully now "igrok" (the original poster of this thread)
> has enough data to support his decision one way or the other.

Correct.

Cheers,

Alen
|
From: Martin P. <mar...@re...> - 2005-04-09 09:44:35
|
> Yes, of course. 20Hz was a Bad Idea. Just making a point of why input > latency is bad. ("Moral of the story: don't introduce latency where you > don't have to.") Triple buffering adds one frame of latency, so that's why > triple buffering is not a good solution to smooth out CPU spikes caused by > different number of physics steps between two frames. Better solution is > not > to have CPU spikes, if possible. :) Don't forget triple/quadruple buffering can be useful for smoothing out spikes in non-interactive sections, like cut scenes. |
From: Mark W. <Mwa...@to...> - 2005-04-07 23:39:24
|
> > I can see how, on a CPU-limited console, you'd want to run physics at
> > a lower frame rate than graphics. However, for systems with
> > fast-moving objects and massive, interactive physics, you really want
> > physics stepped at rates of 100 Hz or more. Intel had a dual-core demo
> > at GDC that used Novodex and stepped physics at 200 Hz; very smooth.
>
> You definitely need to step the solver and integrator at more than 30Hz.
> 100Hz is usually good enough, yes. But:

Hmm, on Carmageddon TDR2000 we had cars and objects moving at up to 1000 km/h (probably faster) using swept collision and interpolating/extrapolating graphics (which was a requirement for variable speed replay). The physics and collision was run at a "constant" 25Hz using the discussed N, N+1 technique. Whilst it probably could've been better, it wasn't at all bad for 1999 and certainly didn't require 100Hz updates!

If you're using intersection based collision techniques then I'd agree that a faster update rate would be desirable.

Cheers,
Mark
Torus Games
|
From: Alen L. <ale...@cr...> - 2005-04-08 07:27:56
|
See a part of my reply to Jon Watte. You don't need high rates if you don't have articulated systems, or hard cases like large stacks of boxes, etc. And even with intersection based collision, you can run collision at low rates, if you additionally run tunneling tests.

So: solving and integration at high rate (but only for object clusters that require it), and collision and object movement at the same rate as rendering and AI.

Alen

----- Original Message -----
From: "Mark Wayland" <Mwa...@to...>
To: <gda...@li...>
Sent: Thursday, April 07, 2005 23:39
Subject: RE: [Algorithms] Game loop timings

> > I can see how, on a CPU-limited console, you'd want to run physics at
> > a lower frame rate than graphics. However, for systems with
> > fast-moving objects and massive, interactive physics, you really want
> > physics stepped at rates of 100 Hz or more. Intel had a dual-core demo
> > at GDC that used Novodex and stepped physics at 200 Hz; very smooth.
>
> You definitely need to step the solver and integrator at more than 30Hz.
> 100Hz is usually good enough, yes. But:

Hmm, on Carmageddon TDR2000 we had cars and objects moving at up to 1000 km/h (probably faster) using swept collision and interpolating/extrapolating graphics (which was a requirement for variable speed replay). The physics and collision was run at a "constant" 25Hz using the discussed N, N+1 technique. Whilst it probably could've been better, it wasn't at all bad for 1999 and certainly didn't require 100Hz updates!

If you're using intersection based collision techniques then I'd agree that a faster update rate would be desirable.

Cheers,
Mark
Torus Games
|
From: Mark W. <Mwa...@to...> - 2005-04-08 08:52:27
|
> See a part of my reply to Jon Watte. You don't need high
> rates if you don't have articulated systems, or hard cases
> like large stacks of boxes, etc.

Sorry, I didn't see that bit... but I recall we had both stacks of boxes (first level) and articulated bodies such as the "mutant tail thing", which is essentially a large multi-part mace. I guess even the doors and hoods could be considered articulated, as they were hinged. The main problem scenario for us was tall thin poles rotating about an axis perpendicular to the long axis of the pole, such that when one end of the pole was resolved, it'd just push the other end in, resulting in small jitter.

Mark
|
From: Alen L. <ale...@cr...> - 2005-04-08 13:09:08
|
> Sorry, I didn't see that bit...

I just meant to provide a reference. I posted that part after you posted your reply, so you couldn't possibly have seen it in advance.

> but I recall we had both stacks of boxes (first level) and articulated
> bodies such as "mutant tail thing" which is essentially a large multi-part
> mace. I guess even the doors and hoods could be considered articulated, as
> they were hinged. The main problem scenario for us was tall thin poles with
> rotation about perpendicular to the long axis of the pole such that when
> one end of the pole was resolved, it'd just push the other end in resulting
> in small jitter.

Yes, it does work, but it is more stable with a smaller timestep. We initially wanted to make our whole system run at frame time, and it worked even for stacks and for articulated systems. Mostly. But it jitters too much if the systems are a bit more complicated (like ragdolls), or if stacks include articulated systems, etc. So we now use substepping for all clusters with more than one body.

Cheers,
Alen
|
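Alen's per-cluster substepping can be sketched as follows. This is a hypothetical illustration (the names and the stability threshold are invented, not from the thread): a lone body can't accumulate joint error, so it takes the whole frame in one step, while multi-body clusters are advanced in enough equal fixed substeps that none exceeds a chosen stable step size.

```python
import math

def advance_cluster(bodies, frame_dt, step_fn, max_stable_dt=0.01):
    """Advance one cluster (contact/joint island) of bodies over frame_dt.

    Single bodies take the frame in one step; clusters with two or more
    bodies are substepped so no substep exceeds max_stable_dt.
    Returns the number of steps taken.
    """
    if len(bodies) <= 1:
        step_fn(bodies, frame_dt)           # projectiles, lone debris, etc.
        return 1
    n = max(1, math.ceil(frame_dt / max_stable_dt))
    sub_dt = frame_dt / n                   # equal substeps summing to frame_dt
    for _ in range(n):
        step_fn(bodies, sub_dt)
    return n
```

This keeps the expensive small-timestep integration confined to ragdolls, stacks, and other articulated clusters, while everything else runs at frame rate, which is the CPU saving Alen describes.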
From: Tom F. <tom...@ee...> - 2005-03-31 17:38:21
|
Fixed-time-increment _rendering_ is hard for the reasons Chuck mentioned. But having your internal game state update at a fixed increment, or multiples thereof, is not. A lot of games do it, and it removes a lot of problems with reproducibility of results. Having your game change behaviour between debug and release, or just by moving the camera, is not a good thing.

There's a huge thread about this from a while back (a year or so?). It's one of those semi-religious arguments, but I've done games with fixed increments and variable increments. Despite being a lot simpler, the ones with variable increments had tons of annoying bugs that "went away" in one build or the other, recording demos was hard, you'd get bugs that could only be reproduced on some machines, etc. Nightmare.

Not sure what it's got to do with the V-blank timer. Just use any old timer. All you want to know is how long your rendering took.

TomF.

> From: Chuck Walbourn
>
> Fixed-increment timing code is problematic on the PC no matter how
> clever you are about it. Since the performance characteristics of the
> system will vary widely from system-to-system, and even on the same
> system from driver-to-driver or due to component upgrades, you are
> inherently dealing with variable timing. Fixed-increment timing code
> also breaks when you are doing debug vs. release builds, profiling, or
> have to cope with variability introduced by multi-threading or
> background processes that the user wants/needs to have active while
> playing your game.
>
> Now, you can certainly set up your AI/physics to run at a lower rate than
> your rendering rate. You can probably get close to constant increments
> if your resolution is low-enough via multi-media timers, but you will
> have to be stable enough to deal with the occasional glitch.
>
> -Chuck Walbourn
> SDE, Windows Gaming & Graphics
|
From: igrok <ig...@bl...> - 2005-08-23 20:52:34
|
Can you explain how I take the rendering time and work this into the AI update time?

Currently, I have updates like this for the AI (all multiples of 20fps):

0.05 seconds
0.05
0.05
0.10
0.05
0.05
etc.

The 0.10 slips in because gradually the applied timer gets out of sync with the real amount of time passed, and that must be made up at some point. The thing is, it is very noticeable, as a character on screen will suddenly jump a further distance than normal. It certainly doesn't look as smooth as when I've just been using the basic 'time between frames' approach to update objects.

Maybe this is what you are suggesting:

Game update takes 0.0010 seconds
Render frame takes 0.0015 (total=0.0025)
Render frame takes 0.0015 (total=0.0040)
Sleep for 0.0010 (because no time for next render frame before 0.05)
Game update
etc.

So this would waste a certain amount of CPU time, but would only cause a slip if a render frame took

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Tom Forsyth
Sent: 31 March 2005 18:38
To: gda...@li...
Subject: RE: [Algorithms] Game loop timings

Fixed-time-increment _rendering_ is hard for the reasons Chuck mentioned. But having your internal game state update at a fixed increment, or multiples thereof, is not. A lot of games do it, and it removes a lot of problems with reproducibility of results. Having your game change behaviour between debug and release, or just by moving the camera, is not a good thing.

There's a huge thread about this from a while back (a year or so?). It's one of those semi-religious arguments, but I've done games with fixed increments and variable increments. Despite being a lot simpler, the ones with variable increments had tons of annoying bugs that "went away" in one build or the other, recording demos was hard, you'd get bugs that could only be reproduced on some machines, etc. Nightmare.

Not sure what it's got to do with the V-blank timer. Just use any old timer.
All you want to know is how long your rendering took.

TomF.

> From: Chuck Walbourn
>
> Fixed-increment timing code is problematic on the PC no matter how
> clever you are about it. Since the performance characteristics of the
> system will vary widely from system-to-system, and even on the same
> system from driver-to-driver or due to component upgrades, you are
> inherently dealing with variable timing. Fixed-increment timing code
> also breaks when you are doing debug vs. release builds, profiling, or
> have to cope with variability introduced by multi-threading or
> background processes that the user wants/needs to have active while
> playing your game.
>
> Now, you can certainly set up your AI/physics to run at a lower rate than
> your rendering rate. You can probably get close to constant increments
> if your resolution is low-enough via multi-media timers, but you will
> have to be stable enough to deal with the occasional glitch.
>
> -Chuck Walbourn
> SDE, Windows Gaming & Graphics
|
From: Tom P. <ga...@fa...> - 2005-08-23 21:09:13
|
> Can you explain how I take the rendering time and work this in
> to the AI update time?

I'll try something here.

> Currently, I have updates like this for the AI. (all multiples
> of 20fps)
> 0.05 seconds
> 0.05
> 0.05
> 0.10
> 0.05
> 0.05
> etc.
>
> The 0.10 slips in because gradually the applied timer gets our
> of sync with the real amount of time passed and that must be
> made up at some point.

My take on this, given the behavior as you describe it, is that you just need to separate yourself somewhat from the concept of "real" time. Time is what you decide it is, so there's no harm keeping it at 1/20th of a second if indeed your target speed is 20 fps. As long as all of your updates are consistent, and as long as everything uses the same value, it should be fine. What you want to avoid, and what you're doing, is accumulating error and then cutting out the error whenever it accumulates big enough, despite what the actual frame times might be. Instead, just don't worry about the error. You want the apparent time passed to be pretty consistent with the actual time passed, otherwise you'll get jerky motions.

I would disagree with TomF's assertion that variable framerates are easier. I think the fixed per-frame time is significantly easier to code up and put into play. Personally, I prefer variable time if you are truly running at arbitrary framerates, but on the platforms I work on the timestep is always going to be 1/60th of a second or some multiple thereof. When the framerate cuts from 60fps to 30, then all of the timesteps are magically larger. Note that this change may be significant, though, and it may then be desirable to run two 60Hz updates instead of worrying about missed collisions etc. that may start appearing when your timestep doubles.

One problem I thought of the other day is that typically I've implemented and used systems where the timestep that is used is the time that it took the last frame to render.
Unfortunately, that means if our framerate is really jittery, on a frame-to-frame level, we'll get really jerky motion. E.g. if we literally alternate 1/60 and 1/30 second frames, then we're rendering actions that should be separated by 1/60 of a second 1/30s apart, and also rendering things that should represent 1/30s only 1/60s apart. Is this fear rational? In practice probably not, since our framerate is going to be relatively consistent frame to frame, but still... Hmm... -tom! |
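One common mitigation for the frame-to-frame jitter Tom worries about (my suggestion, not something proposed in the thread) is to feed the simulation a smoothed and clamped frame time rather than the raw last-frame time, e.g. an exponential moving average:

```python
def make_dt_smoother(alpha=0.25, max_dt=0.1):
    """Return a function mapping raw frame times to smoothed ones.

    alpha controls responsiveness (1.0 = no smoothing at all); max_dt
    clamps huge spikes (alt-tab, debugger breaks) so the simulation
    never takes one giant step.
    """
    state = {"dt": None}
    def smooth(raw_dt):
        raw_dt = min(raw_dt, max_dt)
        if state["dt"] is None:
            state["dt"] = raw_dt            # first frame: no history yet
        else:
            state["dt"] += alpha * (raw_dt - state["dt"])
        return state["dt"]
    return smooth

smooth = make_dt_smoother()
# Frames that literally alternate 1/60s and 1/30s settle near their
# 0.025s average instead of the timestep flip-flopping by a factor of two.
dts = [smooth(dt) for dt in [1 / 60, 1 / 30] * 50]
```

The trade-off is the one this thread keeps circling: smoothing decouples apparent time from real time slightly, so long runs of fast or slow frames still track correctly, but a single odd frame no longer produces a visible jump.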
From: Tom F. <tom...@ee...> - 2005-08-24 02:58:50
|
> I would disagree with TomF's assertion that variable framerates
> are easier.

Wait - let's be clear about this. What I said was:

"Fixed-time-increment _rendering_ is hard for the reasons Chuck mentioned. But having your internal game state update at a fixed increment, or multiples thereof, is not. A lot of games do it, and it removes a lot of problems with reproducibility of results. Having your game change behaviour between debug and release, or just by moving the camera, is not a good thing."

So since we're talking about AI, then that's internal game state, or "gameplay" or whatever you want to call it. I 100% agree that should happen at fixed timesteps.

But the _rendering_ should just happen as fast as it can happen, and you interpolate between the current game state and the previous one. That way if you're simulating at 24Hz (which is surprisingly common, simply because a lot of animation systems run at that speed!), but rendering at 30Hz, the interpolation copes with the fact that you have slightly fewer game turns than frames.

Tom P is now going to say that on consoles, you should always be rendering at 60Hz, and then I'm going to call bullshit on that, and I'll point to 90% of the titles out there that ACTUALLY run at varying rates and tell him he's in fantasy land, and then he'll tell me that that's because those games suck, and I'll agree. Just trying to save a bit of time here :-)

TomF.

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...] On
> Behalf Of Tom Plunket
> Sent: 23 August 2005 14:09
> To: gda...@li...
> Subject: RE: [Algorithms] Game loop timings
>
> > Can you explain how I take the rendering time and work this in
> > to the AI update time?
>
> I'll try something here.
>
> > Currently, I have updates like this for the AI. (all multiples
> > of 20fps)
> > 0.05 seconds
> > 0.05
> > 0.05
> > 0.10
> > 0.05
> > 0.05
> > etc.
> >
> > The 0.10 slips in because gradually the applied timer gets our
> > of sync with the real amount of time passed and that must be
> > made up at some point.
>
> My take on this, given the behavior as you describe it, is that
> you just need to separate yourself somewhat from the concept of
> "real" time. Time is what you decide it is, so there's no harm
> keeping it at 1/20th of a second if indeed your target speed is
> 20 fps. As long as all of your updates are consistent, and as
> long as everything uses the same value, it should be fine.
> What you want to avoid, and what you're doing, is accumulating
> error and then cutting out the error whenever it accumulates
> big enough, despite what the actual frame times might be.
> Instead, just don't worry about the error. You want the
> apparent time passed to be pretty consistent with the actual
> time passed, otherwise you'll get jerky motions.
>
> I would disagree with TomF's assertion that variable framerates
> are easier. I think the fixed per-frame time is significantly
> easier to code up and put into play. Personally, I prefer
> variable time if you are truly running at arbitrary framerates,
> but on the platforms I work on the timestep is always going to
> be 1/60th of a second or some multiple thereof. When the
> framerate cuts from 60fps to 30, then all of the timesteps are
> magically larger. Note that this change may be significant,
> though, and it may then be desirable to run two 60Hz updates
> instead of worrying about missed collisions etc. that may start
> appearing when your timestep doubles.
>
> One problem I thought of the other day is that typically I've
> implemented and used systems where the timestep that is used is
> the time that it took the last frame to render. Unfortunately,
> that means if our framerate is really jittery, on a
> frame-to-frame level, we'll get really jerky motion. E.g.
> if we literally alternate 1/60 and 1/30 second frames, then we're
> rendering actions that should be separated by 1/60 of a second
> 1/30s apart, and also rendering things that should represent
> 1/30s only 1/60s apart. Is this fear rational? In practice
> probably not, since our framerate is going to be relatively
> consistent frame to frame, but still... Hmm...
>
> -tom!
|
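TomF's scheme (fixed simulation steps, rendering as fast as you can, interpolating between the previous and current game states) is often sketched as an accumulator loop like the one below. The names and the 1-D "state" are purely illustrative, not from any particular engine:

```python
SIM_DT = 1.0 / 24.0  # fixed simulation step (24 Hz, as in TomF's example)

def run_loop(frame_times, update, lerp):
    """Consume real frame times, stepping the simulation in fixed
    SIM_DT increments and interpolating the rendered state."""
    accumulator = 0.0
    prev_state = curr_state = 0.0        # toy 1-D position
    rendered = []
    for ft in frame_times:
        accumulator += ft
        while accumulator >= SIM_DT:     # run 0..n whole game turns
            prev_state = curr_state
            curr_state = update(curr_state, SIM_DT)
            accumulator -= SIM_DT
        alpha = accumulator / SIM_DT     # fraction into the next turn
        rendered.append(lerp(prev_state, curr_state, alpha))
    return rendered

# An object moving at a steady 100 units/s, rendered at 30 Hz: the
# rendered positions advance smoothly even though 0 or 1 simulation
# steps run per frame.
positions = run_loop([1.0 / 30.0] * 3,
                     update=lambda s, dt: s + 100.0 * dt,
                     lerp=lambda a, b, t: a + (b - a) * t)
```

Note that interpolating between the previous and current states renders the world one SIM_DT behind real time, which is exactly the latency trade-off Alen raises earlier in the thread; extrapolation avoids the latency at the cost of occasional mispredictions.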
From: Kent Q. <ken...@co...> - 2005-08-24 03:26:29
|
I agree with Tom-Tom. Fixed physics rate is definitely the way to go. Consider what happens when you pause the game, or run it in slow motion, or the user switches to a different app for a while and your frame rate goes wacky because another process needed attention. In all those cases, you accumulate error and things explode. It's quite amusing to switch away from the game, switch back, and find yourself in a completely different country because you just moved 13 kilometers in one time step.

I once spent quite a while implementing a variable rate clock that tried to redistribute lost error over a number of ticks, or throw out errors when they got too big. There were all sorts of interesting bugs for weird cases, and even when those were fixed, you still didn't have reproducible physics. Switching to a fixed step physics clock dealt with all of that very nicely.

What I didn't do, but should have, was what Tom suggested below, which is interpolating between physics clock ticks so that your physics doesn't have to be tied to your frame rate at all.

Kent

Tom Forsyth wrote:

>> I would disagree with TomF's assertion that variable framerates
>> are easier.
>
> Wait - let's be clear about this. What I said was:
>
> "Fixed-time-increment _rendering_ is hard for the reasons Chuck mentioned.
> But having your internal game state update at a fixed increment, or
> multiples thereof, is not. A lot of games do it, and it removes a lot of
> problems with reproducibility of results. Having your game change behaviour
> between debug and release, or just by moving the camera, is not a good
> thing."
>
> So since we're talking about AI, then that's internal game state, or
> "gameplay" or whatever you want to call it. I 100% agree that should happen
> at fixed timesteps.
>
> But the _rendering_ should just happen as fast as it can happen, and you
> interpolate between the current game state and the previous one.
> That way if
> you're simulating at 24Hz (which is surprisingly common, simply because a
> lot of animation systems run at that speed!), but rendering at 30Hz, the
> interpolation copes with the fact that you have slightly fewer game turns
> than frames.
>
>> My take on this, given the behavior as you describe it, is that
>> you just need to separate yourself somewhat from the concept of
>> "real" time. Time is what you decide it is, so there's no harm
>> keeping it at 1/20th of a second if indeed your target speed is
>> 20 fps. As long as all of your updates are consistent, and as
>> long as everything uses the same value, it should be fine.
>> What you want to avoid, and what you're doing, is accumulating
>> error and then cutting out the error whenever it accumulates
>> big enough, despite what the actual frame times might be.
>> Instead, just don't worry about the error. You want the
>> apparent time passed to be pretty consistent with the actual
>> time passed, otherwise you'll get jerky motions.
>>
>> I would disagree with TomF's assertion that variable framerates
>> are easier. I think the fixed per-frame time is significantly
>> easier to code up and put into play. Personally, I prefer
>> variable time if you are truly running at arbitrary framerates,
>> but on the platforms I work on the timestep is always going to
>> be 1/60th of a second or some multiple thereof. When the
>> framerate cuts from 60fps to 30, then all of the timesteps are
>> magically larger. Note that this change may be significant,
>> though, and it may then be desirable to run two 60Hz updates
>> instead of worrying about missed collisions etc. that may start
>> appearing when your timestep doubles.
>>
>> One problem I thought of the other day is that typically I've
>> implemented and used systems where the timestep that is used is
>> the time that it took the last frame to render. Unfortunately,
>> that means if our framerate is really jittery, on a
>> frame-to-frame level, we'll get really jerky motion. E.g.
>> if we literally alternate 1/60 and 1/30 second frames, then we're
>> rendering actions that should be separated by 1/60 of a second
>> 1/30s apart, and also rendering things that should represent
>> 1/30s only 1/60s apart. Is this fear rational? In practice
>> probably not, since our framerate is going to be relatively
>> consistent frame to frame, but still... Hmm...
|