gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1423)
From: gl <gl...@nt...> - 2000-08-01 21:51:32

> Good animation is 30 fps. So so... 20. 15 or less is blah.
> Any more than 30 is just game engine hype. :-)
>
> -DaveS

You do actually read some of the messages here, right? ;)

--
gl
From: gl <gl...@nt...> - 2000-08-01 21:50:40

> It's interesting that you mention persistence of the phosphor. We see an
> artifact when using shutter glasses for stereoscopic viewing that may be due
> to this effect. On a normal CRT screen, if you close one eye while wearing
> shutter glasses, you see a shadow of the object, a second image that appears
> to be the image for the opposite eye. When you are viewing the same scene on
> a projection display system, the shadow is not there. Since the shadow goes
> away with the projection system, I feel the shadow must not be leakage
> through the closed shutter of the open eye. Our theory is that residual
> light from the previous frame must be the cause.

Yeah, that's a well-known problem with LCD shutters - it is precisely because of the phosphor persistence that you get the ghosting in each eye. Another reason why consumer-based stereoscopic solutions are so disappointing.

--
gl (still looking for the Holy Grail in consumer-level stereoscopy).
From: gl <gl...@nt...> - 2000-08-01 21:45:27

> your higher level cognitive functions...but when (redraw rate != video
> rate) your mental interpolator gets confused because it evolved to
> throw rocks at rabbits running behind trees.

Are you sure about that? Awful way to spend your time ;)

--
gl
From: Bass, G. T. <gt...@ut...> - 2000-08-01 21:37:14

Algorithms folks,

I'm surprised no one has mentioned thus far the most significant difference between movie frames and real-time gfx frames. I know I drag you all through this each time this discussion comes up, but I'll rehash once more.

If you look at a single frame from a movie, you will notice that moving objects are blurred, since the film is exposed for a short period of time and records light reflected from moving objects continuously during that time. This is the well-known "motion blur", which, incidentally, looks nothing at all like what 3dfx achieves with their T-buffer. Motion blur has the effect of connecting the multiple discrete images into a more complete recreation of motion as the eye is sequentially exposed to each frame, essentially creating a near overlap between each frame.

Examination of a single frame rendered by a real-time gfx system will show something quite different. Objects in the single frame appear totally static, with no hint of motion. For this reason, higher framerates become important as 3D objects move more quickly across the screen, since fast objects will appear more disconnected if allowed to move very far (in screen space) before they are rendered again. A good example of this would be roadside objects in a driving game. As you pass lightpoles and other such goodies, they begin to cover screen space very quickly near the outside edges of the screen, and may even appear to move backwards at certain speeds, when the next object is closer to the location of the previous object from one frame to the next.

Regards,
Garett Bass
gt...@ut...
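The backwards-motion effect Garett describes is temporal aliasing, the same mechanism that makes wagon wheels appear to spin backwards on film. A minimal sketch (the function name and the numbers are illustrative, not from the thread): if a regularly spaced row of poles advances more than half a spacing per frame, the nearest-match interpretation of the motion flips direction.

```python
def perceived_pole_motion(speed, pole_spacing, fps):
    """Apparent per-frame displacement of evenly spaced roadside poles.

    The eye matches each pole to the *nearest* pole in the next frame.
    If the true per-frame step exceeds half the spacing, the nearest
    match is the previous pole, so motion appears reversed (negative).
    """
    step = (speed / fps) % pole_spacing
    if step > pole_spacing / 2:
        return step - pole_spacing  # negative: appears to move backwards
    return step
```

For example, poles 10 m apart passed at 270 m/s rendered at 30 fps move 9 m per frame, and the row appears to drift slowly backwards at 1 m per frame.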
From: Tom H. <to...@3d...> - 2000-08-01 21:02:14

At 03:09 PM 8/1/2000 +0200, you wrote:
> Good point, but I have another problem I need the maximum point for: I
> want to have different detail textures, which are not to be drawn when they
> are very far away.
>
> But having spent some background thinking on the problem now, is it true
> that the point (on the AABB) with maximum distance to the other point is
> always on one of the 8 vertices? Which means I only have to find out which
> of the 8 vertices is farthest away from the other point.

For the nearest point you can find places on the AABB that are closer than the vertices, but for the farthest point I can't come up with any conditions where that would be the case. It's possible that 2 or 4 points of a face of the AABB will be equidistant to the test point, but in that case it doesn't matter, since the distance to any one of them will give you the right answer.

Tom
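Tom's observation follows because the squared distance separates per axis: on each axis the farther of the two slab extents independently maximizes the distance, so the overall farthest point is always one of the 8 corners. A minimal sketch (names are mine, not from the thread):

```python
def farthest_aabb_point(lo, hi, p):
    """Farthest point of the axis-aligned box [lo, hi] from point p.

    Per axis, pick whichever extent (lo or hi) is farther from p's
    coordinate; the result is always a corner of the box. Ties (p
    centered on an axis) are broken toward lo -- either choice gives
    the same distance.
    """
    return tuple(l if abs(pi - l) >= abs(pi - h) else h
                 for l, h, pi in zip(lo, hi, p))
```

This is the mirror image of the usual nearest-point clamp: the nearest point clamps p into the box (and so can land on a face or edge), while the farthest point pushes each coordinate to an extreme.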
From: Pallister, K. <kim...@in...> - 2000-08-01 20:43:36

I too can boldly propagate the never-ending thread!

Tom's point is correct, but it's not as simple as that. The eye doesn't just have a 'response time' (time from when light hits the retina until the rods & cones on the retina do their thing and call up the brain to say they've seen something). The eye actually has a response curve, and it changes for different wavelengths (color) of light and amplitude.

After a similarly never-ending thread on this subject (eye response, maximum frame rates, etc.) about a year ago, I went out and looked for research on it. Turns out there has been quite a bit, most by NASA, the air force, and various optical/vision societies. A couple I did find were:

Window of Visibility: a psychophysical theory of fidelity in time-sampled visual motion displays (Watson et al., 1985)
The Optimal Motion Stimulus (Watson et al., 1994)

Anyhow, if I remember correctly, what it comes down to is that the frame rate at which people can no longer perceive discrete frames is somewhere between 40 and 90 frames per second (yeah, I know some of you disagree and will say it's higher), and this depends on:

- speed of objects moving
- size of objects on screen
- color of objects, background, contrast between them
- lighting
- etc.

And oh yeah, everyone's response curves are different.

Which leads me to a question: do you folks think that the eyes & brain can be trained to better perceive such errors? The movies never used to bother me until I became a graphics programmer. Now it bugs me. Also, PAL didn't used to bother me, but now I find the lower frame rate intolerable, which is why I make a point of not watching TV when I am in the UK, but spend the time in pubs instead!

Kim Pallister

We will find a way or we will make one.
- Hannibal

> -----Original Message-----
> From: Tom Forsyth [mailto:to...@mu...]
> Sent: Tuesday, August 01, 2000 8:38 AM
> To: gda...@li...
> Subject: RE: [Algorithms] FPS Questions
>
> ...except that eyes don't have a "shutter" or a "framerate" - they have a
> response time, but it's continuous, not discrete like a movie camera. So you
> can't say that the eyes are "out of sync" - they don't _have_ a sync.
>
> As for finding the optimum detail settings to get "n" Hz on a machine, with
> continuous LoD methods you can get quite close (+/- 5Hz is not too hard to
> achieve 95% of the time). You have some sensible defaults for rendering
> quality (i.e. how many passes) according to card type, allow the user to
> override them if they feel like it, and then CLOD up or down to get the
> right frame rate (within sane limits).
>
> Tom Forsyth - Muckyfoot bloke.
> Whizzing and pasting and pooting through the day.
>
> > -----Original Message-----
> > From: Jim Offerman [mailto:j.o...@in...]
> > Sent: 01 August 2000 15:58
> > To: Algorithms List
> > Subject: Re: [Algorithms] FPS Questions
> >
> > I gave my original query some more thought, and I think I have found a
> > plausible explanation as to why 60 fps might be perceived as being
> > smoother than 30 fps.
> >
> > The human eye in many ways works like a camera (note: actually, it is
> > the other way around): the retina is exposed to light for a small period
> > of time and then the accumulated light signal is transmitted to the
> > brain before the retina is exposed again.
> >
> > Let's assume for a while that the eyes record at a steady 30 fps. If our
> > game also runs at 30 fps, the eye sees one frame at a time. However, if
> > the game runs at 60 fps, the eye sees two frames at a time, which are
> > blurred together, resulting in a form of motion blur.
> >
> > Another important aspect is that your eyes will _never_ be in sync with
> > the frame rate of your game, so it is possible that there exists a
> > moment where your eye records a frame, but there is nothing to record
> > (since the monitor is doing a vblank). This will certainly be perceived
> > as a discontinuity of the ongoing motion on the screen. The higher the
> > framerate, the less likely that such situations occur.
> >
> > Finally, I must agree with the lower latency factor, since while our
> > eyes may be relatively slow, our responses (generally) are _really_
> > fast. Especially if someone is trained in some response, then it becomes
> > a reflex... An experienced FPS player might be using his brain as little
> > as 25% of the time; the rest of the time, his actions are merely
> > reflexes. Hence the phrase 'mindless killer' ;-).
> >
> > > Why not just ask the player what they want and then scale the engine
> > > to that speed. If they want 'liquidity' at high frame rates then scale
> > > back and use lower level of detail models. If they can't tell the
> > > difference then they'll pull the frame rate down to 30fps and get
> > > better looking visuals.
> >
> > We usually achieve this by offering the player some controls over
> > detail. Though it might be nice if your engine includes a little util
> > which finds the optimum detail settings to get n fps on a given
> > machine... but I can tell you that _won't_ be very easy, so better put
> > that on the 'things to do when I have time left' list ;-).
> >
> > Jim Offerman
> >
> > Innovade
> > - designing the designer
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
From: jason w. <jas...@po...> - 2000-08-01 19:05:41

Just go look at any books on classic animation skills... it's the same principles of squash and stretch, anticipation and visual illusion.

j
From: Graham S. R. <gr...@se...> - 2000-08-01 18:46:07

Steve Baker wrote,

> I suspect the limit would be the persistence of the phosphor rather than
> a limitation of the eye/brain. All those neurons are firing asynchronously
> so there will always be some that see the interruption to the light.

It's interesting that you mention persistence of the phosphor. We see an artifact when using shutter glasses for stereoscopic viewing that may be due to this effect. On a normal CRT screen, if you close one eye while wearing shutter glasses, you see a shadow of the object, a second image that appears to be the image for the opposite eye. When you are viewing the same scene on a projection display system, the shadow is not there. Since the shadow goes away with the projection system, I feel the shadow must not be leakage through the closed shutter of the open eye. Our theory is that residual light from the previous frame must be the cause.

> > [from my previous post - GSR]
> > What happens in 5 years when we all have monitors running at 2000 pixels
> > horizontally? Large immersive displays such as CAVEs already run at 96
> > frames per second at that resolution... What happens when we are all using
> > stereoscopic hardware all the time, so that each eye sees half the total
> > frame rate? 100fps becomes 50fps. We will then need 200fps to match today's
> > 100fps... Food for thought.
>
> No - that's not true. We've run 3500x2200 pixel screens and 60Hz
> still looks just as good as it does on a 1000 pixel screen.

Steve, I don't disagree with you here. Sure, 60Hz looks just as good on both screens. But think about stereoscopic mode (which is what I was talking about). At 60Hz total, each eye gets 30Hz. Well, that's not quite correct. Each eye gets frames displayed at 60Hz, but rendered frames are delivered at a rate of 30 fps per eye, with 30 frames of blackness in addition. The black frames interlace with rendered frames, causing opaque objects to seem transparent.

Even at 96 fps (the odd number used by the ImmersaDesk system that we have used on contract to NASA Langley), opaque objects appear a bit transparent in stereo mode. In the office, we try to run 120Hz on CRTs when running in stereo mode. If you want 60Hz *per eye* in stereo mode, then I believe you will need 120Hz total, unless you are using a display such as an HMD that has separate screens for each eye.

I was at the IRIS Performer meeting at SIGGRAPH 1999, and a demonstration was given of the new Hayden Planetarium projection display. This has multiple SGI InfiniteReality2 graphics pipes driving seven projector displays with more than 7 million total pixels (see http://www.trimension-inc.com/company/press_stories.html). As I recall, this is running at only 30fps (it has a database of some 2 billion stars), and it looks just fine. But objects are not moving that fast as the software navigates through the stars. This is not a stereo display, though. The actual display is probably running at 60Hz, and frames are duplicated as you describe in your original post to get 30fps. I wonder if 60fps and stereo mode would look good for this particular software and display system?

Graham Rhodes
From: Jim O. <j.o...@in...> - 2000-08-01 18:38:57

> In a movie theater, each image is only painted once - so a perfect
> straight line can be drawn through the points and no double-imaging
> is apparent - although the flicker is bad and the heavy motion blur
> can get ugly.

I tried, but I just can't resist... We have established earlier that a cinema projector displays every image twice, to effectively realize 48 fps. Essentially, you would be experiencing the same phenomenon in there, right?

Jim Offerman

Innovade
- designing the designer
From: Keith Z.L. <ke...@dr...> - 2000-08-01 18:22:42

Actually, if we are going to be nit-picky, the ~60Hz of TV is interlaced, so you are seeing half-frames, with an actual "frame" rate of 29.97Hz. Are movies also interlaced? I thought that I read somewhere that movies had moved to a 48Hz refresh.

Also, none of this really matters to the computer frame rate discussion; the end result is that people can see the difference between 30 and 60 fps, and the change is significant.

Keith
From: Jim O. <j.o...@in...> - 2000-08-01 18:21:23

> ...except that eyes don't have a "shutter" or a "framerate" - they have a
> response time, but it's continuous, not discrete like a movie camera. So you
> can't say that the eyes are "out of sync" - they don't _have_ a sync.

Not entirely true; the cells in your retina do need some time to 'discharge' (can't remember the proper term from my biology classes...) after they have received some light (and require some time to 'charge' before a new signal is sent to the brain). This is related to the fact that your eyes need time to adjust when you go from darkness into the light (though this is primarily caused by your pupils being wide open in the dark, so when you step into the light you experience a sort of overflow effect).

I am pretty convinced that the eyes send discrete images to the brain, though maybe not as discrete as the frames in a computer game. However, the brain fills in the gaps. In fact, your brain does this all the time, with all sensory input. Most of the time, when you see a certain object, you only see it because the brain _expects_ to see that object.

Jim Offerman

Innovade
- designing the designer
From: Thatcher U. <tu...@tu...> - 2000-08-01 18:19:34

> On Sun, 30 Jul 2000, Pai-Hung Chen wrote:
>
> > (1) Is there a universally agreed way to calculate
> > Frame-Per-Second information?

Lately I've been displaying four values:

1) The "frame rate" of the previous frame (i.e. the reciprocal of the frame time)
2) The "frame rate" of the slowest frame in the last second
3) The "frame rate" of the fastest frame in the last second
4) The average frame rate over the last second

You need to store recent frame times in a FIFO to be able to compute these stats, but I find them useful for diagnosing hitches and sync effects.

--
Thatcher Ulrich
http://tulrich.com
From: Thatcher U. <tu...@tu...> - 2000-08-01 18:19:33

From: Allan Bentham <A.B...@Re...>
To: <gda...@li...>
Sent: Tuesday, August 01, 2000 7:19 AM
Subject: Re: [Algorithms] Modifying the min and max altitude in a heightmap

> Why not have a lower resolution height map that simply contains the values
> of the max and min height 'hiexels' in that particular cell?
>
> That way you can more easily reject a large (depending of course on your
> chosen resolution) number of 'hiexels'.
>
> You'll still have to do a complete scan of your low res map but it will be
> faster than going through the individual 'hiexel' map.

If you take that idea to its logical extreme (i.e. give the lower-res bitmap its own even-lower-res min/max map, etc.), you end up with a hierarchy of maps. The mighty quadtree is an example of such a hierarchy, and it crushes this problem. A "linear" quadtree is probably what you want, and probably not to the full resolution of the original heightfield, to keep the memory costs down. Check the list archives and/or the web for copious info.

--
Thatcher Ulrich
http://tulrich.com

> > Sam McGrath wrote:
> >
> > > This may have a simple solution, but I haven't thought of one yet...
> > >
> > > I have a heightmap of altitude values. The heightmap is subject to
> > > modification during the course of my program. When I load the heightmap
> > > I compute the maximum and minimum values (altitudes). During the
> > > program, if a value in the heightmap that is modified goes beyond the
> > > maximum altitude, I assign that value to the maximum altitude (and
> > > similarly for the minimum altitude).
> > >
> > > However, I also want to be able to tell if the maximum value has
> > > _shrunk_ (or if the minimum value has increased). So far I can't think
> > > of a simple way to do this other than rescanning the whole heightmap
> > > whenever a value is modified, and clearly this is not a practical
> > > solution.
> > >
> > > Hopefully there's a nice simple method that I'm blind to, so please
> > > enlighten me if you can. (-:
From: Tom F. <to...@mu...> - 2000-08-01 18:12:50

I like the theory - shame movie theaters show each image twice, really, otherwise it'd be a really good one. Though to be fair I see double images when movies pan (really bad flickery double images too - ugh) - so your theory still holds.

"it evolved to throw rocks at rabbits running behind trees."

How true. How Quake :-)

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

> -----Original Message-----
> From: Stephen J Baker [mailto:sj...@li...]
> Sent: 01 August 2000 18:09
> To: gda...@li...
> Subject: RE: [Algorithms] FPS Questions
>
> In a movie theater, each image is only painted once - so a perfect
> straight line can be drawn through the points and no double-imaging
> is apparent - although the flicker is bad and the heavy motion blur
> can get ugly.
>
> ...your mental interpolator gets confused because it evolved to
> throw rocks at rabbits running behind trees.
From: Stephen J B. <sj...@li...> - 2000-08-01 17:33:40

On Mon, 31 Jul 2000, Graham S. Rhodes wrote:

> I like Jason's post. It is fairly consistent with my own observations, which
> I describe here. He may not agree with my whole discussion, though.

This stuff is well researched and understood. Flight simulator people have been aware of the issue for 20 years.

> The old "fact" that 25-30 fps is enough for animation that appears
> continuous to the human eye was discovered when viewing movies of fairly
> slow moving objects. Slow compared to FPS games during attack sequences, for
> example. It just happens that 60 fps or 100 fps or 25 or 30 fps just works
> out nicely for most current games.

There is also a BIG difference between 24Hz movies and (say) 30Hz video.

In a 24Hz movie, each image is only drawn once - so there are 24 separate still frames.

<apologies to people who've heard me explain this many times before>

In a 30Hz video animation on a 60Hz CRT, each image is drawn TWICE by the CRT - so there are 60 separate still frames - with two consecutive identical images being painted onto the phosphor between each swapbuffer call.

That distinction might not seem to matter - so you'd expect 30Hz graphics to look better than 24Hz movies - however, they don't - and here is why:

Our brains evolved for tasks like watching a small furry animal running around in a forest - then letting us throw a rock at it and stand a good chance of hitting it.

That means that when the cute bunny runs along and tree trunks, bushes, etc. get between it and us, we have to mentally interpolate its position in order to fill in the gaps in the imagery coming from our eyes. If your brain didn't do that, you'd think that you were seeing a set of separate disconnected events - and throwing a rock would be impossible.

That hardwired interpolation is what screws up our perception of 30Hz animation on 60Hz video.

Look at a graph of position against time for an object moving at a constant speed - displayed using 30Hz graphics on a 60Hz video screen:

         |
         |                    . .
    ^    |                . .
    |    |            . .
   posn  |        . .
         |    . .
         |  . .
         |____________________________
                    time ->

Linear motion - but two consecutive images at each position - right?

Well, when your brain tries to interpolate between those still images, it tries to make a straight line through the points, but it can't - it's a stair-step function.

However, one way to view this graph is as TWO parallel straight lines. You can easily draw two parallel lines through those points - and they fit the data perfectly.

That's what your brain does. So you don't see ONE object moving jerkily - you see TWO objects moving smoothly but flickering at 30Hz. This means that all fast moving objects double-image at 30Hz - and they flicker too.

When the graphics are only updated at 20Hz, you get triple-imaging, as you'd expect. But there comes a point at poor enough frame rates at which your brain accepts that this is jerky motion and not multiple moving objects.

For me, that sometimes happens at 20Hz and sometimes at 15. It seems to depend on the ambient lighting. Somewhere around that speed, I can sometimes 'flip' my brain between seeing multiple images and jerkiness by concentrating on that - just like you can make some optical illusions flip between two states by staring hard at them.

Different people hit this effect at different speeds. One of my co-workers can see quadruple-images at 15Hz - I've never seen that.

In a movie theater, each image is only painted once - so a perfect straight line can be drawn through the points and no double-imaging is apparent - although the flicker is bad and the heavy motion blur can get ugly.

> Does anyone have any reference that states that the human eye *cannot*
> resolve *much* better than, say, 100 fps?

No - to the contrary. There are people who can resolve much better than 100Hz. If you run a CRT at 120Hz but update the graphics at 60, the double imaging comes back. That proves that your eyes are still seeing a non-continuous image - even if your higher cognitive centers don't notice it. It would be interesting to do that with 200Hz video and 100Hz rendering - but I don't have a CRT that'll go that fast.

I suspect the limit would be the persistence of the phosphor rather than a limitation of the eye/brain. All those neurons are firing asynchronously, so there will always be some that see the interruption to the light.

The idea that the eye/brain somehow doesn't see the black periods between the redraws at over 20Hz is a fallacy. What happens is that the interrupted image is reconstructed into smooth interpolated motion for your higher level cognitive functions... but when (redraw rate != video rate), your mental interpolator gets confused because it evolved to throw rocks at rabbits running behind trees.

Obviously humanity may sometime evolve to be able to interpolate 30Hz images - but that presumes that people who play a lot of video games successfully will be more likely to pass on their genes to the next generation. The reality is probably the opposite of that! :-)

> What happens in 5 years when we all have monitors running at 2000 pixels
> horizontally? Large immersive displays such as CAVEs already run at 96
> frames per second at that resolution... What happens when we are all using
> stereoscopic hardware all the time, so that each eye sees half the total
> frame rate? 100fps becomes 50fps. We will then need 200fps to match today's
> 100fps... Food for thought.

No - that's not true. We've run 3500x2200 pixel screens and 60Hz still looks just as good as it does on a 1000 pixel screen.

Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li...  http://www.link.com
Home: sjb...@ai...  http://web2.airmail.net/sjbaker1
From: Tom F. <to...@mu...> - 2000-08-01 17:24:36
|
Saccade - that was it - ta. Except that if the graphics move fast, your eyes _will_ follow them (and then "flick" back to the other side of the screen, inducing saccade). If they don't, that's a sign of neurological problems - it's one of the low-level reflexes that keep your eyes tracking moving objects, and it's very hard to override without defocussing your eyes. The things you learn when both parents are medics... :-)

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

> -----Original Message-----
> From: Stephen J Baker [mailto:sj...@li...]
> Sent: 01 August 2000 17:17
> To: gda...@li...
> Subject: RE: [Algorithms] FPS Questions
>
>
> On Mon, 31 Jul 2000, Tom Forsyth wrote:
>
> > Lower latency - that's what FPS players get from higher fps (excuse the
> > pun). They also say that you can spin faster and not have any gaps in
> > your visual coverage of a room. I'll have to trust them on that - I have
> > no problems scanning a room quickly on my fairly average setup, which is
> > probably going at around 25-30Hz. If you spin too fast, your optical
> > shutters kick in (I used to know what they were called - anyone?
>
> Saccade.
>
> > - they stop you getting confused by rapid head movements).
>
> But I don't think that spinning the graphics around fast will
> induce a saccade - your eyes have to move to make that happen.
>
> Steve Baker (817)619-2657 (Vox/Vox-Mail)
> L3Com/Link Simulation & Training (817)619-2466 (Fax)
> Work: sj...@li... http://www.link.com
> Home: sjb...@ai... http://web2.airmail.net/sjbaker1
>
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
> |
From: Brian M. <bma...@ra...> - 2000-08-01 17:11:53
|
There are 2 cases for fans. The first is where the first vertex is on the edge of the fan. In this case the list of fan vertices is just a convex polygon. This is what I tend to think of as fans. The output of clipping one of these to a convex boundary will remain a fan.

The second type of fan is one where the first vertex is in the middle of the fan rather than being on the edge. I think you're right that in this case the clipping won't produce a single fan.

For generating fans offline, you could test whether the angle between the first and last edges from the common vertex is greater than 180 degrees, if the triangles are planar. If it is less than 180 degrees then it's the first type of fan, and normal polygon clipping against a convex boundary will take a fan and produce at most one fan.

I can't think of any algorithm I've come across offhand that handles clipping type 2 fans... I think it might have to be done using something similar to the clipping techniques for concave polygons - if you think of a type 1 vs type 2 fan, then type 1 is convex and type 2 is concave. (Trace the edges between successive vertices to get the idea.) I'm glad I don't have to deal with type 2!

-Brian.

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...]On Behalf Of Aaron Drew
> Sent: Wednesday, July 26, 2000 12:02 PM
> To: gda...@li...
> Subject: RE: [Algorithms] Polygon clipping
>
>
> I might be overlooking something here but I can imagine a case where a fan
> would not remain a fan after clipping that might complicate this method.
>
> What happens when the centre of the fan is out of the view frustum? I can't
> see how the fan can be preserved. It would be cut into a series of quads
> which would no longer share a common vertex. Unless I'm thinking of
> something wrongly, these can't be arranged in a fan-like fashion.
>
> - Aaron
>
> > -----Original Message-----
> > From: gda...@li...
> > [mailto:gda...@li...]On Behalf Of Tom Forsyth
> > Sent: Wednesday, July 26, 2000 7:50 PM
> > To: gda...@li...
> > Subject: RE: [Algorithms] Polygon clipping
> >
> >
> > Fans remain fans (though it may be gnarly to find the ordering), strips
> > don't - they can become multiple strips (think about a U-shaped strip
> > where the bottom of the U is off-screen).
> >
> > You might also want to try sending objects that intersect the frustum
> > edges through the non-VB path, so that D3D clips them for you - it's
> > actually pretty fast. Things that don't need clipping should still go
> > through the VB path of course.
> >
> > Oh, and I really recommend indexed lists instead of lots of little strips
> > and fans - you get far too many D3D calls that way, and if you're going to
> > do your own clipping, indexed lists are very easy to handle.
> >
> > Tom Forsyth - Muckyfoot bloke.
> > Whizzing and pasting and pooting through the day.
> >
> > > -----Original Message-----
> > > From: Klaus Hartmann [mailto:k_h...@os...]
> > > Sent: 25 July 2000 23:49
> > > To: gda...@li...
> > > Subject: [Algorithms] Polygon clipping
> > >
> > >
> > > Hi all,
> > >
> > > Even though this is not an API-specific question, I'd like to use an
> > > API-specific example to explain my problem.
> > >
> > > In Direct3D, when using *vertex buffers* with pre-transformed and
> > > pre-lit vertices, Direct3D does not perform any clipping. So I have to
> > > do this myself. I could probably go ahead and clip single triangles
> > > with a Sutherland-Hodgman algorithm, but...
> > >
> > > How do you people clip triangle strips and triangle fans? Is it
> > > possible to clip them so that strips remain strips, and fans remain
> > > fans, or is this impossible?
> > >
> > > Also, which is the preferred clipping algorithm? Is it
> > > Sutherland-Hodgman?
> > > > > > When you answer, please keep in mind that I also need to clip texture > > > coordinates and the diffuse/specular colors. > > > > > > Any help is greatly appreciated, > > > Niki > > > > > > > > > _______________________________________________ > > > GDAlgorithms-list mailing list > > > GDA...@li... > > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Allan B. <a.b...@re...> - 2000-08-01 16:57:27
|
(2nd attempt at posting..)

Why not have a lower resolution height map that represents small grids on your proper height map (e.g. 1 low-res cell maps to, say, a 4x4 grid on your full detail map)? The cells of your low-res height map will contain the maximum and minimum values of the heixels in the grid it represents on the full detail map. This way you can more easily reject a large number of heixels (depending, of course, on your chosen resolution). You'll still have to do a complete scan of this low-res map, but I think it will be faster than going through your full heixel map.

Al.

----- Original Message -----
From: "Robert Dibley" <RD...@ac...>
To: <gda...@li...>
Sent: Tuesday, August 01, 2000 12:00 PM
Subject: RE: [Algorithms] Modifying the min and max altitude in a heightmap

> He could however optimise the occasions when he does the full rescan, by
> checking to see if the value you are replacing was the maximum value, in
> which case it is _possible_ but by no means certain that your max value has
> decreased.
>
> Or even, when tracking the maximum value, keep a count of how many match the
> maximum (after all you will have to scan at the outset, so can get the
> initial count there) and then if you increase the max, it becomes 1 (ie just
> the new entry) and if you replace an entry with the current max height with
> a lower value, you can decrement the count. If the count of max height
> entries reaches zero, then do a rescan. Oh, and make sure if you replace an
> entry with a new one which matches the max value that you increment the
> count too.
>
> Still doesn't solve the basic problem that you have to do a rescan
> occasionally, but stops you doing it when you really don't need to.
>
> Rob
>
> -----Original Message-----
> From: Jamie Fowlston [mailto:j.f...@re...]
> Sent: 01 August 2000 11:35
> To: gda...@li...
> Subject: Re: [Algorithms] Modifying the min and max altitude in a heightmap
>
>
> Unless you maintain a list of heixels sorted by height, I'm pretty sure you
> can't do what you want. Sorry.
>
> Jamie
>
>
> Sam McGrath wrote:
>
> > This may have a simple solution, but I haven't thought of one yet...
> >
> > I have a heightmap of altitude values. The heightmap is subject to
> > modification during the course of my program. When I load the heightmap I
> > compute the maximum and minimum values (altitudes). During the program,
> > if a value in the heightmap that is modified goes beyond the maximum
> > altitude, I assign that value to the maximum altitude (and similarly for
> > the minimum altitude).
> >
> > However, I also want to be able to tell if the maximum value has _shrunk_
> > (or if the minimum value has increased). So far I can't think of a simple
> > way to do this other than rescanning the whole heightmap whenever a value
> > is modified, and clearly this is not a practical solution.
> >
> > Hopefully there's a nice simple method that I'm blind to, so please
> > enlighten me if you can. (-:
> >
> > -Sam
> > ______________________
> > Sam McGrath
> > sa...@dn...
> > http://www.dnai.com/~sammy
> > ICQ 5151160
> >
> > _______________________________________________
> > GDAlgorithms-list mailing list
> > GDA...@li...
> > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
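Rob's count-of-maxima scheme from the quoted message might look like this in C (a toy 1D heightmap for illustration; all names are made up):

```c
#define N 16

static float height[N];   /* the heightmap (1D here for brevity) */
static float max_h;
static int   max_count;   /* how many heixels currently equal max_h */

/* Full rescan - needed at startup and when the last maximal entry drops. */
static void rescan(void)
{
    max_h = height[0];
    for (int i = 1; i < N; ++i)
        if (height[i] > max_h) max_h = height[i];
    max_count = 0;
    for (int i = 0; i < N; ++i)
        if (height[i] == max_h) ++max_count;
}

/* Modify one heixel, keeping max_h/max_count valid; only rescans when
   the count of maximal entries reaches zero, as Rob suggests. */
static void set_height(int i, float h)
{
    float old = height[i];
    height[i] = h;
    if (h > max_h) {
        max_h = h;                        /* new maximum: count restarts */
        max_count = 1;
    } else if (h == max_h) {
        if (old != max_h) ++max_count;    /* one more entry at the max */
    } else if (old == max_h && --max_count == 0) {
        rescan();                         /* last maximal entry lowered */
    }
}
```

The minimum is tracked symmetrically with a second count, and the same bookkeeping works per cell of Al's low-res min/max grid.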
From: Stephen J B. <sj...@li...> - 2000-08-01 16:48:24
|
On Mon, 31 Jul 2000, jason watkins wrote:

> there's also the issue of aliasing if you're syncing to vsync. When you're
> running say 72Hz refresh.. if you don't have every frame ready in time, then
> that frame has to wait an entire refresh. suddenly you've dropped a frame..
> if you're in the middle of an intense fire fight, it's conceivable that you
> may even go past 2 72Hz periods, drop 2 frames, and end up with 18Hz for a
> frame or 2.

Not 18 - it would be 24Hz. Dropping two frames means each image is held for three 72Hz refreshes, and 72/3 = 24.

Steve Baker (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li... http://www.link.com
Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |
From: Stephen J B. <sj...@li...> - 2000-08-01 16:41:39
|
On Mon, 31 Jul 2000, Tom Forsyth wrote:

> Lower latency - that's what FPS players get from higher fps (excuse the pun).
> They also say that you can spin faster and not have any gaps in your visual
> coverage of a room. I'll have to trust them on that - I have no problems
> scanning a room quickly on my fairly average setup, which is probably going
> at around 25-30Hz. If you spin too fast, your optical shutters kick in (I
> used to know what they were called - anyone?

Saccade.

> - they stop you getting confused by rapid head movements).

But I don't think that spinning the graphics around fast will induce a saccade - your eyes have to move to make that happen.

Steve Baker (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li... http://www.link.com
Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |
From: Mark W. <mwi...@cy...> - 2000-08-01 16:01:37
|
Will and Jamie,

Thanks for the help. This was exactly what I needed. The info was right in front of me in one of my books. I just didn't realize what it was called.

-Mark

> Catmull-Rom splines interpolate all control points, so they would be worth
> checking out. Look at:
>
> http://graphics.cs.ucdavis.edu/CAGDNotes/Catmull-Rom-Spline/Catmull-Rom-Spline.html
>
> The standard graphics textbooks have better expositions than this web page
> if you have access to them.
>
> Will
>
> ----
> Will Portnoy

> Easiest spline to use for this is Catmull Rom, which if I remember
> correctly allows you to keyframe the way you want to.
>
> For a Java link, try
>
> http://www.media.mit.edu/~rich/research/java/docs/acg.stuttgart.rich.spline.CatmullRomSplineLoop3D.html
>
> I've no idea how good it is, I was only searching to remind myself of
> the name :)
>
> Jamie
>
> Mark Wilczynski wrote:
>
> > I'm trying to write a fly-by sequence (similar to the Unreal
> > intro) for my 3d engine. Can anyone tell me how to smoothly
> > move the camera along a set of predefined 3d points?
> > Ideally, I would like a system where I manually move the
> > camera around the environment and record keyframes at
> > certain positions/orientations. Then I would like the
> > system to automatically move the camera along a path
> > interpolated from these keyframes. I'm guessing I need some
> > sort of spline or other curve to do the job?
> >
> > -Mark |
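For reference, the uniform Catmull-Rom segment the links above describe can be evaluated per component like this (a minimal sketch, not tied to any particular engine):

```c
/* Uniform Catmull-Rom interpolation between keyframes p1 and p2 for
   t in [0,1], using p0 and p3 as the neighbouring keyframes.  Apply per
   component (x, y, z) to move a camera through recorded positions. */
double catmull_rom(double p0, double p1, double p2, double p3, double t)
{
    return 0.5 * ((2.0 * p1)
                + (-p0 + p2) * t
                + (2.0*p0 - 5.0*p1 + 4.0*p2 - p3) * t * t
                + (-p0 + 3.0*p1 - 3.0*p2 + p3) * t * t * t);
}
```

Note the curve passes through every control point - t=0 gives p1 and t=1 gives p2 - which is exactly the keyframing property Jamie mentions.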
From: Tom F. <to...@mu...> - 2000-08-01 15:42:40
|
...except that eyes don't have a "shutter" or a "framerate" - they have a response time, but it's continuous, not discrete like a movie camera. So you can't say that the eyes are "out of sync" - they don't _have_ a sync.

As for finding the optimum detail settings to get "n"Hz on a machine, with continuous LoD methods you can get quite close (+/- 5Hz is not too hard to achieve 95% of the time). You have some sensible defaults for rendering quality (i.e. how many passes) according to card type, allow the user to override them if they feel like it, and then CLOD up or down to get the right frame rate (within sane limits).

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

> -----Original Message-----
> From: Jim Offerman [mailto:j.o...@in...]
> Sent: 01 August 2000 15:58
> To: Algorithms List
> Subject: Re: [Algorithms] FPS Questions
>
>
> I gave my original query some more thought, and I think I have found a
> plausible explanation as to why 60 fps might be perceived as being smoother
> than 30 fps.
>
> The human eye in many ways works like a camera (note: actually, it is the
> other way around): the retina is exposed to light for a small period of
> time and then the accumulated light signal is transmitted to the brain
> before the retina is exposed again.
>
> Let's assume for a while that the eyes record at a steady 30 fps. If our
> game also runs at 30 fps, the eye sees one frame at a time. However, if
> the game runs at 60 fps, the eye sees two frames at a time, which are
> blurred together, resulting in a form of motion blur.
>
> Another important aspect is that your eyes will _never_ be in sync with the
> frame rate of your game, so it is possible that there exists a moment where
> your eye records a frame, but there is nothing to record (since the monitor
> is doing a vblank). This will certainly be perceived as a discontinuity of
> the ongoing motion on the screen. The higher the framerate, the less likely
> that such situations occur.
>
> Finally, I must agree with the lower latency factor, since while our eyes
> may be relatively slow, our responses (generally) are _really_ fast.
> Especially if someone is trained in some response, then it becomes a
> reflex... An experienced FPS player might be using his brain as little as
> 25% of the time; the rest of the time, his actions are merely reflexes.
> Hence the phrase 'mindless killer' ;-).
>
> > Why not just ask the player what they want and then scale the engine to
> > that speed. If they want 'liquidity' at high frame rates then scale back
> > and use lower level of detail models. If they can't tell the difference
> > then they'll pull the frame rate down to 30fps and get better looking
> > visuals.
>
> We usually achieve this by offering the player some controls over detail.
> Though it might be nice if your engine includes a little util which finds
> the optimum detail settings to get n fps on a given machine... but I can
> tell you that _won't_ be very easy, so better put that on the 'things to do
> when I have time left' list ;-).
>
> Jim Offerman
>
> Innovade
> - designing the designer |
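Tom's "CLOD up or down to get the right frame rate (within sane limits)" loop can be sketched as a damped proportional controller - purely illustrative; in a real engine the cost model is the renderer itself:

```c
/* One step of a continuous-LoD feedback loop: scale the detail knob
   toward whatever level makes the measured frame time hit the target.
   The 0.7/0.3 split damps the correction so it doesn't oscillate. */
double adjust_detail(double detail, double frame_ms, double target_ms)
{
    double ratio = target_ms / frame_ms;      /* >1: headroom, <1: too slow */
    double next  = detail * (0.7 + 0.3 * ratio);
    if (next < 0.1)  next = 0.1;              /* the "sane limits" */
    if (next > 10.0) next = 10.0;
    return next;
}
```

Run once per frame with the measured frame time, this settles within a few dozen frames, which matches the "+/- 5Hz most of the time" behaviour described above.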
From: Jim O. <j.o...@in...> - 2000-08-01 15:09:27
|
I gave my original query some more thought, and I think I have found a plausible explanation as to why 60 fps might be perceived as being smoother than 30 fps.

The human eye in many ways works like a camera (note: actually, it is the other way around): the retina is exposed to light for a small period of time, and then the accumulated light signal is transmitted to the brain before the retina is exposed again.

Let's assume for a while that the eyes record at a steady 30 fps. If our game also runs at 30 fps, the eye sees one frame at a time. However, if the game runs at 60 fps, the eye sees two frames at a time, which are blurred together, resulting in a form of motion blur.

Another important aspect is that your eyes will _never_ be in sync with the frame rate of your game, so it is possible that there exists a moment where your eye records a frame, but there is nothing to record (since the monitor is doing a vblank). This will certainly be perceived as a discontinuity of the ongoing motion on the screen. The higher the framerate, the less likely that such situations occur.

Finally, I must agree with the lower latency factor, since while our eyes may be relatively slow, our responses (generally) are _really_ fast. Especially if someone is trained in some response, then it becomes a reflex... An experienced FPS player might be using his brain as little as 25% of the time; the rest of the time, his actions are merely reflexes. Hence the phrase 'mindless killer' ;-).

> Why not just ask the player what they want and then scale the engine to that
> speed. If they want 'liquidity' at high frame rates then scale back and use
> lower level of detail models. If they can't tell the difference then
> they'll pull the frame rate down to 30fps and get better looking visuals.

We usually achieve this by offering the player some controls over detail. Though it might be nice if your engine includes a little util which finds the optimum detail settings to get n fps on a given machine... but I can tell you that _won't_ be very easy, so better put that on the 'things to do when I have time left' list ;-).

Jim Offerman

Innovade
- designing the designer |
From: Graham S. R. <gr...@se...> - 2000-08-01 14:33:37
|
> Also, consider motion blur. Without being too technically accurate about
> how cameras work, let's simplify and state that a camera is actually
> recording all the information available. Whilst it does chop up the 'real
> stream' into (eg.) 24 fps, during each interval it is capturing the
> remaining information by allowing light into its sensor (film etc) for the
> full timeslice, resulting in motion blur.
>
> ie. motion blur actually represents all the 'missing' information in slow
> fps media. That's why, without it, computer graphics need a much
> higher fps to get a similar feeling of smoothness.
> --
> gl

Yes, that's right. There is information in a real camera frame beyond just 24 fps of geometry positions that helps the eye extrapolate more detail. Such subtlety.

Perhaps we can do with 60-100fps, but with at *least* 10 samples into an accumulation or T-buffer to get a nice, smooth (unbanded) motion blur effect. If we're actually redrawing entire scenes to get those 10 motion-blur samples (e.g., objects *and* the camera are moving), then we're once again up to sort of needing 500-1000 fps drawing rate. At least we can avoid the issue of monitors not being able to do huge, huge refresh rates. A bit easier to achieve if only a few small objects are moving and the camera is not moving.

Graham Rhodes |
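The accumulation-buffer idea amounts to averaging several sub-frame renders across each display interval, mimicking a film camera's open shutter. A minimal sketch - the shade function standing in for a full scene render is hypothetical:

```c
/* Average n sub-frame samples across one display interval [t0, t1],
   approximating the open-shutter integration of a film camera. */
double motion_blur_sample(double (*shade)(double t),
                          double t0, double t1, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double t = t0 + (t1 - t0) * (i + 0.5) / n;  /* sub-frame midpoint */
        sum += shade(t);
    }
    return sum / n;
}

/* Hypothetical shade function: brightness proportional to time, standing
   in for "where the moving object is", purely for demonstration. */
double linear_shade(double t) { return t; }
```

With n around 10 this is exactly the "10 samples into an accumulation or T-buffer" case - and each sample is a full scene redraw, hence the 500-1000 fps drawing-rate arithmetic above.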
From: Stephen J B. <sj...@li...> - 2000-08-01 14:04:29
|
On Mon, 31 Jul 2000, Jim Offerman wrote:

> > 60fps is the ideal target.
>
> I blindly follow the masses here, but I can't help wondering why... Anything
> above 24-25 fps will not be noticed by the human eye, 30 fps animations look
> _really_ smooth.

AAAAARRRRRGGGGGHHHHHH!!!!!

30Hz (on a 60Hz monitor) looks *terrible* compared to 60Hz. Whenever anything moves quickly in the scene, you get double imaging! Yuk!

Try this:

  while ( 1 )
  {
    Clear screen to black ;
    Draw (say) a 10x10 pixel white square at coordinate (x, 100) ;
    x += 15 pixels ;
    if ( x > screen_width ) x = 0 ;
    swapbuffers
    delay for a while
  }

Set your card up to swapbuffers on a vertical retrace, then adjust the delay to run this at 60Hz and again at 30Hz (or at 72Hz and 36Hz if you have a 72Hz monitor). Now imagine you are trying to shoot at the white square. At 30Hz you are seeing double. If you see double at 60Hz, you probably still have a hangover from last night - the effects are pretty similar.

20Hz is even worse because you get TRIPLE-imaging - although a few people see only a very jerky image - it depends on the individual. At 24Hz, you are going to see that horizontal tear moving rapidly up and down the screen whenever the eye is in motion, because you can't possibly be locked to the vertical retrace as you should be at reasonable frame rates.

Steve Baker (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li... http://www.link.com
Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |