From: Florian A. Strauss <fstrauss@bl...> - 2008-06-30 05:23:33

I've done a system like this in the past for PS2/GC/Wii and it worked out well for us. To cope with the potential explosion of combinations of bone weightings, the palette of weightings required for the mesh was quantised. The default setting for the number of entries in the palette was 1.5 times the number of bones in the original model, and this got good results nearly all the time. The worst case for this (as with any sort of palettisation) was a gradient of weightings over a large area. This case doesn't occur on a human- or animal-type model, but can occur in other cases. To get around it, the quality could be increased (by changing the size of the palette), and the artists could also specify verts that should be given greater importance during the quantisation process.

The downsides are as have been mentioned in this thread: an increase in the number of meshes. For hardware that has proper vertex shaders, this technique is probably not as important, but I have been wondering if you could use a similar approach to create morph targets without requiring extra meshes, by creating a bone per vert and then quantising down the palette of bones (this would require a different quantiser from the one I mentioned above).

Florian

-----Original Message-----
From: gdalgorithms-list-bounces@... [mailto:gdalgorithms-list-bounces@...] On Behalf Of Jon Watte
Sent: Saturday, 28 June 2008 3:32 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] Representing Animation Key Frame with Quaternion + Translation Vector

Cedric Pinson wrote:
> Hi,
> I was just curious if it's used mostly or not, and why.

I vote for "not," because the CPU work to generate the matrices takes more time than the GPU shading work to combine the matrices per vertex. And I have better use for the CPU :) I don't write code for the more limited or esoteric platforms, though.

Also, NVIDIA apparently recommends using streamout with scatter to do skinning on G80 hardware and up. This means that you store each vertex in bone space per bone, and then loop over the bones, and multiply-add-accumulate into an output array. Once all bones are processed, you take that streamout and use it as a transformed vertex array. I believe this is also how Doom III did it, although on the CPU instead. The benefit is that you can have certain vertices influenced by 30 bones if you want, and other vertices influenced by only 1, and you get no redundant calculation. The drawbacks of this method are that you store vertices multiple times (so the size grows by your average bone influence count), and you require scatter-write, which means it doesn't work on most installed hardware (most current PCs, 360 or Wii).

Sincerely,

jw

-------------------------------------------------------------------------
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for just about anything Open Source.
http://sourceforge.net/services/buy/index.php
_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives:
http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list
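Florian's weight-palette idea can be sketched roughly like this. The greedy farthest-point seeding and the `locked` priority list are my own assumptions standing in for his quantiser (which he doesn't describe in detail), not his actual implementation:

```python
import math

def quantise_weight_palette(vertex_weights, palette_size, locked=()):
    """Build a small palette of bone-weight combinations and map every
    vertex onto its nearest entry, so the mesh only ever uses
    palette_size distinct weightings. Seeds the palette with 'locked'
    (artist-prioritised) vertices, then greedily adds the vertex whose
    weighting is farthest from the current palette."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    palette = []
    for i in locked:                       # exact entries for priority verts
        if tuple(vertex_weights[i]) not in palette:
            palette.append(tuple(vertex_weights[i]))
    while len(palette) < palette_size:
        gap = lambda w: min((dist(p, w) for p in palette), default=1.0)
        worst = max(vertex_weights, key=gap)
        if gap(worst) == 0.0:
            break                          # every weighting already exact
        palette.append(tuple(worst))
    mapping = [min(range(len(palette)), key=lambda k: dist(palette[k], w))
               for w in vertex_weights]
    return palette, mapping
```

With a budget like `int(1.5 * bone_count)` (the default Florian mentions), most meshes map with little error; a smooth gradient of weightings over a large area is exactly the case where every vertex wants its own entry, which is why the palette size and the priority list are the escape hatches.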
From: Jon Watte <jwatte@gm...> - 2008-06-27 17:32:23

Cedric Pinson wrote:
> Hi,
> I was just curious if it's used mostly or not, and why.

I vote for "not," because the CPU work to generate the matrices takes more time than the GPU shading work to combine the matrices per vertex. And I have better use for the CPU :) I don't write code for the more limited or esoteric platforms, though.

Also, NVIDIA apparently recommends using streamout with scatter to do skinning on G80 hardware and up. This means that you store each vertex in bone space per bone, and then loop over the bones, and multiply-add-accumulate into an output array. Once all bones are processed, you take that streamout and use it as a transformed vertex array. I believe this is also how Doom III did it, although on the CPU instead.

The benefit is that you can have certain vertices influenced by 30 bones if you want, and other vertices influenced by only 1, and you get no redundant calculation. The drawbacks of this method are that you store vertices multiple times (so the size grows by your average bone influence count), and you require scatter-write, which means it doesn't work on most installed hardware (most current PCs, 360 or Wii).

Sincerely,

jw
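A CPU sketch of the per-bone accumulate Jon describes. The function names and the 3x4 row-major matrix layout are my choices for illustration, not NVIDIA's or id's actual code:

```python
def xform(m, v):
    """Apply a 3x4 row-major affine matrix to a 3D point."""
    return tuple(m[r][0] * v[0] + m[r][1] * v[1] + m[r][2] * v[2] + m[r][3]
                 for r in range(3))

def skin_accumulate(bone_entries, bone_matrices, vertex_count):
    """Each bone owns (vertex_index, position_in_bone_space, weight)
    entries; loop bone by bone and multiply-add into the output array.
    The scatter into out[idx] is the part that needs scatter-write on a
    GPU; on the CPU it is just an indexed store."""
    out = [(0.0, 0.0, 0.0)] * vertex_count
    for bone, entries in enumerate(bone_entries):
        m = bone_matrices[bone]
        for idx, pos, w in entries:
            p = xform(m, pos)
            out[idx] = tuple(o + w * c for o, c in zip(out[idx], p))
    return out

identity = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
shift_x  = [[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]

# vertex 0: two influences (half each); vertex 1: one influence only
bones = [
    [(0, (1.0, 0.0, 0.0), 0.5), (1, (2.0, 0.0, 0.0), 1.0)],  # bone 0
    [(0, (1.0, 0.0, 0.0), 0.5)],                              # bone 1
]
skinned = skin_accumulate(bones, [identity, shift_x], 2)
# vertex 0 blends (1,0,0) and (2,0,0) -> (1.5,0,0); vertex 1 -> (2,0,0)
```

Note how the inner loop does no per-vertex branching on influence count: a 30-influence vertex simply appears in 30 bones' lists, which is the "no redundant calculation" property, paid for by storing the vertex once per influence.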
From: Jim Schuler <jschuler@sl...> - 2008-06-26 23:45:27

I would say no; the difficulty in creating all the matrices, especially for a system that supports 4 weights per bone, makes it not really feasible. You really have to limit the weighting, and even hold the artists to discrete 'steps' (like 0.0f to 1.0f weight in 8 steps), to limit the permutations. It's a lot more work on the CPU as well, and the whole point of hardware skinning is to unload the work onto the GPU.

On the Wii this is the only method you have available, and depending on how much animation you have to do, CPU-based skinning may be a better solution, especially as vertex counts on the Wii tend to be significantly lower than on other consoles. If only the Wii had multiple cores.

----- Original Message -----
From: "Cedric Pinson" <mornifle@...>
To: "Game Development Algorithms" <gdalgorithms-list@...>
Sent: Thursday, June 26, 2008 3:06 PM
Subject: Re: [Algorithms] Representing Animation Key Frame with Quaternion + Translation Vector

> Hi,
> I was just curious if it's used mostly or not, and why.
>
> Cedric
>
> Jason Hughes wrote:
>> Cedric,
>>
>> For what it's worth, it's the only way to do hardware skinning on one
>> of the current gen consoles. I'll give you three guesses which.
>>
>> Yes, it works. But it also means that if you want to shift skinning
>> over to the GPU, you have to do it in several times the number of
>> batches, because you explode the number of matrices required (because
>> they're a combination of ALL the weights and ALL the bone matrix
>> requirements for a whole vertex). For a last-gen game with relatively
>> few verts and relatively few matrices, it's probably a good trade-off.
>> If your game has 200+ bones and several thousand verts and allows for
>> >3 bone skinning per vertex, you may be looking at close to half as
>> many matrices as you have vertices, depending on how your art is
>> generated. The artist has almost no feel for how many matrices they're
>> effectively creating given standard tools at their disposal, as well.
>>
>> But there's nothing wrong with it, if it works for you.
>>
>> JH
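Jim's point about discrete steps can be made concrete: even for one fixed set of influencing bones, the number of distinct quantised weightings (each of which becomes its own collapsed matrix) is already large. A quick, purely illustrative count:

```python
from itertools import product

def count_weight_combos(steps, influences):
    """Count ordered weight tuples where each weight is a multiple of
    1/steps and the weights sum to exactly 1.0 (so each tuple is a
    distinct collapsed matrix for one fixed bone set)."""
    return sum(1 for c in product(range(steps + 1), repeat=influences)
               if sum(c) == steps)

# 8 steps, 2 influences per vertex -> 9 combinations per bone set;
# 8 steps, 4 influences per vertex -> 165 combinations per bone set.
```

Multiply that by the number of distinct bone sets in the mesh and the matrix palette explodes, which is why both quantising the weights into steps and limiting influences per vertex matter on this path.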
From: Cedric Pinson <mornifle@pl...> - 2008-06-26 22:06:46

Hi,
I was just curious if it's used mostly or not, and why.

Cedric

Jason Hughes wrote:
> Cedric,
>
> For what it's worth, it's the only way to do hardware skinning on one
> of the current gen consoles. I'll give you three guesses which.
>
> Yes, it works. But it also means that if you want to shift skinning
> over to the GPU, you have to do it in several times the number of
> batches, because you explode the number of matrices required (because
> they're a combination of ALL the weights and ALL the bone matrix
> requirements for a whole vertex). For a last-gen game with relatively
> few verts and relatively few matrices, it's probably a good trade-off.
> If your game has 200+ bones and several thousand verts and allows for
> >3 bone skinning per vertex, you may be looking at close to half as
> many matrices as you have vertices, depending on how your art is
> generated. The artist has almost no feel for how many matrices they're
> effectively creating given standard tools at their disposal, as well.
>
> But there's nothing wrong with it, if it works for you.
>
> JH

--
+33 (0) 6 63 20 03 56 Cedric Pinson
mailto:mornifle@...
http://www.plopbyte.net
From: Jason Hughes <jason_hughes@di...> - 2008-06-26 21:31:41

Cedric,

For what it's worth, it's the only way to do hardware skinning on one of the current gen consoles. I'll give you three guesses which.

Yes, it works. But it also means that if you want to shift skinning over to the GPU, you have to do it in several times the number of batches, because you explode the number of matrices required (because they're a combination of ALL the weights and ALL the bone matrix requirements for a whole vertex). For a last-gen game with relatively few verts and relatively few matrices, it's probably a good trade-off. If your game has 200+ bones and several thousand verts and allows for >3 bone skinning per vertex, you may be looking at close to half as many matrices as you have vertices, depending on how your art is generated. The artist has almost no feel for how many matrices they're effectively creating given standard tools at their disposal, as well.

But there's nothing wrong with it, if it works for you.

JH

Cedric Pinson wrote:
> Hi,
> Interesting topic, I am currently doing skinning and I would like to
> have some other points of view. I use a similar technique from
> http://www.intel.com/cd/ids/developer/asmona/eng/172124.htm
>
> 1 - In the preprocess I identify sets of vertices by unique transform
> set (e.g. v0, v1 and v2 are transformed by bone1 and bone2 with
> weight1 and weight2).
>
> 2 - Then I have a flattened list of transforms by vertex group to
> update each frame, e.g.: to get the final transform (in animation
> space) for the set of vertices (v0, v1, v2) I have to collapse
> (transform_bone1 * w1 + transform_bone2 * w2).
>
> 3 - Then on the CPU or GPU I have to do only one matrix * vertex.
> Because the previous work computed the concatenation of matrices, it
> reduces the number of matrices needed in the vertex shader.
>
> What do you think about that?
>
> Cedric
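A sketch of the collapse step Cedric describes and Jason is responding to. The 3x4 row-major layout and these helper names are my assumptions for illustration, not the Intel sample's code:

```python
def collapse(influences, bone_matrices):
    """Collapse one vertex group's (bone_index, weight) list into a single
    blended 3x4 matrix, sum(w_i * M_i), computed once per group per frame."""
    out = [[0.0] * 4 for _ in range(3)]
    for bone, w in influences:
        m = bone_matrices[bone]
        for r in range(3):
            for c in range(4):
                out[r][c] += w * m[r][c]
    return out

def xform(m, v):
    """One matrix * vertex: every vertex in the group reuses the same
    collapsed matrix, so the shader never sees per-vertex weights."""
    return tuple(m[r][0] * v[0] + m[r][1] * v[1] + m[r][2] * v[2] + m[r][3]
                 for r in range(3))

identity = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
shift_x  = [[1.0, 0.0, 0.0, 2.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]

# group (v0, v1, v2) weighted half/half between the two bones
blended = collapse([(0, 0.5), (1, 0.5)], [identity, shift_x])
```

Since matrix blending is linear, `xform(blended, v)` equals `0.5 * xform(identity, v) + 0.5 * xform(shift_x, v)`; the cost of the weighted sum is paid once per group instead of once per vertex, which is exactly the matrix-count trade-off Jason is quantifying.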
From: Cedric Pinson <mornifle@pl...> - 2008-06-26 21:16:20

Hi,
Interesting topic, I am currently doing skinning and I would like to have some other points of view. I use a similar technique from http://www.intel.com/cd/ids/developer/asmona/eng/172124.htm

1 - In the preprocess I identify sets of vertices by unique transform set (e.g. v0, v1 and v2 are transformed by bone1 and bone2 with weight1 and weight2).

2 - Then I have a flattened list of transforms by vertex group to update each frame, e.g.: to get the final transform (in animation space) for the set of vertices (v0, v1, v2) I have to collapse (transform_bone1 * w1 + transform_bone2 * w2).

3 - Then on the CPU or GPU I have to do only one matrix * vertex. Because the previous work computed the concatenation of matrices, it reduces the number of matrices needed in the vertex shader.

The method is described here: http://www.intel.com/cd/ids/developer/asmona/eng/172124.htm
What do you think about that?

Cedric

Jon Watte wrote:
> Lim Sin Chian wrote:
>> Just wondering if anyone has done this before and whether it is
>> really better in terms of performance and accuracy.
>
> I'm under the impression that everybody does that. Not only does it
> save space (assuming you don't need scale), but it also interpolates
> much better. Interpolating between two frames with a matrix looks
> pretty crufty. The only thing to watch out for is to make sure you go
> the "short way" around: dot product the two quaternions, and if the
> outcome is negative, negate all the values of the destination.
>
> To compose quaternions, you just multiply them. Because it's a
> rotation-translation pair, if it's parent relative, then you apply the
> parent rotation to the child translation, and then apply your own
> rotation around that point.
>
> Sincerely,
>
> jw

--
+33 (0) 6 63 20 03 56 Cedric Pinson
mailto:mornifle@...
http://www.plopbyte.net
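Jon's two quoted rules (shortest-path blending, and composing parent-relative rotation-translation pairs) can be sketched numerically. I'm assuming Hamilton's product, (w, x, y, z) storage, and the q v q* sandwich, which is one convention among the several debated elsewhere in this thread:

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q-conjugate."""
    w, x, y, z = q
    return q_mul(q_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))[1:]

def compose(parent, child):
    """Compose parent-relative (rotation, translation) key-frame pairs:
    rotate the child translation by the parent rotation, then translate."""
    (pq, pt), (cq, ct) = parent, child
    rt = q_rotate(pq, ct)
    return q_mul(pq, cq), tuple(a + b for a, b in zip(pt, rt))

def nlerp(a, b, t):
    """Blend two key-frame quaternions the 'short way' around: if the dot
    product is negative, negate one operand, then lerp and renormalise."""
    if sum(x * y for x, y in zip(a, b)) < 0.0:
        b = tuple(-x for x in b)
    m = tuple((1.0 - t) * x + t * y for x, y in zip(a, b))
    n = math.sqrt(sum(x * x for x in m))
    return tuple(x / n for x in m)
```

The sign flip in `nlerp` exists because q and -q encode the same rotation (the double coverage argued about later in the thread): without it, interpolation between nearly identical poses can swing the long way around the 4D sphere.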
From: Jon Watte <jwatte@gm...> - 2008-06-26 16:34:03

Marc B. Reynolds wrote:
>>> Here's some sample formulas that represent rotations in quaternions,
>>> with various different constraints:
>>>
>>>   Q P (1/Q)      (R.1)
>>>   (1/Q) P Q      (L.1)
>>
>> I see you are assuming column vectors on the right in a right-handed
>> space. You might want to state those assumptions to make your
>> formulas unambiguous. For examples of popular graphics APIs that
>> don't use those assumptions, look no further than to Direct3D.
>
> Now, here you have lost me. There are no matrices in the above
> equations. And if you were to convert them into matrices, there would
> be neither column nor row vectors. Or do you mean converting the
> entire function... and if that is the case it would depend on whether
> you inject the 3 results into rows or columns. Am I missing something
> here?

I was not unambiguous enough. This is why it's hard :) Also, I was confusing the vector multiplication order (qp vs pq) with the point transform functions, because I've recently had a discussion with some Microsofties about the insanity of using pq order for matrices but qp order for quaternions in the same API (XNA), especially when they use pq for both in another API (D3DX).

What I should have said regarding your point transform functions: you are assuming a right-handed coordinate space when you write those formulas. In a left-handed space, what you call (R.1) would actually result in a left-handed (real-world) rotation.

I think we're mostly in understanding at this point, though. And regarding the double-coverage pet peeve: as far as I can tell, Euler angles have double coverage too, if you let each of the angles range from -180 to 180. I'm wondering if that has some deeper meaning.

Sincerely,

jw
From: Marc B. Reynolds <marc.reynolds@or...> - 2008-06-26 13:30:04

Believe me, I understand pet peeves very well. I was attempting to quickly address a couple of them in my original post:

1) The notion that only unit quaternions may be used to represent a rotation. They are merely the most practical from a computational standpoint.
2) The big deal that is made out of double coverage.
3) That there is more than one way to formulate a rotation, and most formulations effect a uniform scale as well.

I chose the two forms from my first post because they are the only ones that seem to show up in modern literature.

>> What I'm calling 'right-handed' will return X, 'left-handed' will
>> return -X. (Or vice versa.) Flip any of your coordinate definitions
>> and the signs will flip.
>
> To be specific, what you're calling "right-handed" (rotations) will
> return X (in a right-handed coordinate space).
> My point is that the words in parentheses are important, because
> leaving them out leaves the text open for ambiguity. There's way too
> much ambiguity in geometrics and especially graphics papers, because
> there are two conventions for everything, and the authors usually
> assume that everyone is using the particular conventions that he (the
> author) is used to, without stating what they are.

Yes, you are correct... I acknowledge your point.

>> Here's some sample formulas that represent rotations in quaternions,
>> with various different constraints:
>>
>>   Q P (1/Q)      (R.1)
>>   (1/Q) P Q      (L.1)
>
> I see you are assuming column vectors on the right in a right-handed
> space. You might want to state those assumptions to make your formulas
> unambiguous. For examples of popular graphics APIs that don't use
> those assumptions, look no further than to Direct3D.

Now, here you have lost me. There are no matrices in the above equations. And if you were to convert them into matrices, there would be neither column nor row vectors. Or do you mean converting the entire function... and if that is the case it would depend on whether you inject the 3 results into rows or columns. Am I missing something here?
From: Jon Watte <jwatte@gm...> - 2008-06-25 17:45:22

Marc B. Reynolds wrote:
> What I'm calling 'right-handed' will return X, 'left-handed' will
> return -X. (Or vice versa.) Flip any of your coordinate definitions
> and the signs will flip.

To be specific, what you're calling "right-handed" (rotations) will return X (in a right-handed coordinate space). My point is that the words in parentheses are important, because leaving them out leaves the text open for ambiguity. There's way too much ambiguity in geometrics and especially graphics papers, because there are two conventions for everything, and the authors usually assume that everyone is using the particular conventions that he (the author) is used to, without stating what they are.

> We're obviously using the same term to refer to two different things,
> because the algebraic formulations differ. Here's some sample formulas
> that represent rotations in quaternions, with various different
> constraints:
>
>   Q P (1/Q)      (R.1)
>   (1/Q) P Q      (L.1)

I see you are assuming column vectors on the right in a right-handed space. You might want to state those assumptions to make your formulas unambiguous. For examples of popular graphics APIs that don't use those assumptions, look no further than to Direct3D.

> I personally find "forward" and "reverse" more ambiguous terms, since
> they require a given convention. Let's say 'counterclockwise' and
> 'clockwise' when the directed line is going into your eye, and then
> call them 'forward' and 'reverse' or 'inverse' respectively?

I guess my point is that ALL those words are ambiguous, because they require that you specify at least two pieces of data when writing down on paper: vector convention (left or right) and coordinate system interpretation (right-handed or left-handed). Using the same names as used for coordinate systems to also tell clockwise and counterclockwise rotations sans coordinate system apart leads to too much ambiguity. Just look at the number of quaternion resources on the web that are, at best, misleading, because they don't state their conventions, or even mix and match conventions, maybe without even understanding what they're doing.

>> Rotation math, in itself, has no handedness; it is only the
>> interpretation of the coordinates in the coordinate space that is
>> handed.
>
> It should be obvious that I disagree with this statement. And it might
> be more useful during algebraic manipulation to choose one form over
> the other, regardless of the chosen direction of a 'forward' rotation.

Because authors all over the planet have chosen one or the other at different times, you need to choose, you need to stay internally consistent, and you need to DOCUMENT YOUR CHOICE. Sorry, I'm more than a little frustrated by this. It took me many years to get comfortable enough with the literature to realize that this was one reason why it seemed so much more confusing than it should be.

Sincerely,

jw
From: Marc B. Reynolds <marc.reynolds@or...> - 2008-06-25 14:25:39

>> Cayley in [1,2] demonstrates general 3D rotations via quaternions
>> (left-handed and right-handed respectively).
>
> I'm sorry, but what separates a "left-handed" from a "right-handed"
> rotation? In what space interpretation?

The space is the field around the directed line (in the case of 3D), so it is independent of any coordinate frame convention and has the same meaning if you're "coordinate-free". So if your thumb (in "thumbs-up" position) points in the direction of the directed line, the curled fingers point in the direction of the surrounding field. (If you're very double-jointed, fingers toward your palm, please!) Obviously the only difference between the two is the direction of the surrounding field, and the two flavors are mutual inverses. Of course you can flip one into the other by either changing the direction of the directed line or negation of the angle.

> It should be obvious that a rotation in left-handed space
> interpretation is exactly equal to that same rotation in right-handed
> space interpretation, but, because of the differences in
> interpretation, if rendered into a fixed device space they will appear
> as mirror images. More directly: rotating the Z unit vector around the
> Y unit vector as axis by PI/2 radians always generates the X unit
> vector, no matter whether you choose to interpret your coordinates
> left-handed or right-handed.

What I'm calling 'right-handed' will return X, 'left-handed' will return -X. (Or vice versa.) Flip any of your coordinate definitions and the signs will flip. Stick your thumb pointing in 'Y', finger out straight on 'Z', turn pi/2 radians in the direction of your palm. Repeat with the other hand and you're rotating in the opposite direction... regardless of how you name your coordinate frame.

> In a left-handed coordinate space, you use the left hand to visualize
> the rotation; in a right-handed coordinate space, you use the right
> hand to visualize the rotation, but the actual math is exactly the
> same in both cases!

We're obviously using the same term to refer to two different things, because the algebraic formulations differ. Here's some sample formulas that represent rotations in quaternions, with various different constraints:

  Q P (1/Q)      (R.1)
  (1/Q) P Q      (L.1)
  Q P Q*         (R.2)  |Q| = 1
  Q* P Q         (L.2)  |Q| = 1
  Q P            (R.3)  P.Q = 0, |Q| = 1
  P Q            (L.3)  P.Q = 0, |Q| = 1

The differences between each pair are trivial once expanded (sign differences), but they are algebraically distinct and are valid representations of rotations.

> Maybe what you (or the original paper author) mean to say is "both
> forward and inverse rotations"?

Cayley came up with all kinds of cool stuff, so if he wants to call stuff right- and left-handed... I'm all for it. http://en.wikipedia.org/wiki/Arthur_Cayley (The preceding was "tongue-in-cheek".)

I personally find "forward" and "reverse" more ambiguous terms, since they require a given convention. Let's say 'counterclockwise' and 'clockwise' when the directed line is going into your eye, and then call them 'forward' and 'reverse' or 'inverse' respectively?

> Rotation math, in itself, has no handedness; it is only the
> interpretation of the coordinates in the coordinate space that is
> handed.

It should be obvious that I disagree with this statement. And it might be more useful during algebraic manipulation to choose one form over the other, regardless of the chosen direction of a 'forward' rotation.

> That should be obvious to anyone who does computer graphics, but time
> and again, I hear the misnomers "left/right-handed rotation" and it
> drives me nuts! It's as bad as talking about "row major matrices"
> without specifying whether you're using row vectors (on the left) or
> column vectors (on the right).
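The (R.1)/(L.1) pair can be illustrated numerically. Assuming Hamilton's product with quaternions stored as (w, x, y, z), the two sandwich orders really are mutually inverse rotations, landing on +X and -X in the thread's rotate-Z-about-Y example:

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def rot_r(q, v):
    """(R.1) form, specialised to unit Q where 1/Q = Q*: Q P Q*."""
    return q_mul(q_mul(q, (0.0,) + tuple(v)), q_conj(q))[1:]

def rot_l(q, v):
    """(L.1) form, specialised to unit Q: Q* P Q."""
    return q_mul(q_mul(q_conj(q), (0.0,) + tuple(v)), q)[1:]

s = math.sqrt(0.5)
q_y90 = (s, 0.0, s, 0.0)   # half-angle 45 degrees about the Y axis
# rot_r(q_y90, (0,0,1)) lands near (+1,0,0), rot_l near (-1,0,0):
# the same Q, applied in the two orders, rotates in opposite senses.
```

Which of the two deserves the name "the" rotation is precisely the convention being argued here; the algebra only guarantees they are inverses of each other.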
From: Jon Watte <jwatte@gm...>  2008-06-24 17:27:44

Marc B. Reynolds wrote:
> Cayley in [1,2] demonstrates general 3D rotations via quaternions,
> (left-handed and right-handed respectively).

I'm sorry, but what separates a "left-handed" from a "right-handed" rotation? In what space interpretation? It should be obvious that a rotation in left-handed space interpretation is exactly equal to that same rotation in right-handed space interpretation, but, because of the differences in interpretation, if rendered into a fixed device space they will appear as mirror images. More directly: rotating the Z unit vector around the Y unit vector as axis by PI/2 radians always generates the X unit vector, no matter whether you choose to interpret your coordinates left-handed or right-handed. In a left-handed coordinate space, you use the left hand to visualize the rotation; in a right-handed coordinate space, you use the right hand to visualize the rotation, but the actual math is exactly the same in both cases!

Maybe what you (or the original paper author) mean to say is "both forward and inverse rotations"?

Rotation math, in itself, has no handedness; it is only the interpretation of the coordinates in the coordinate space that is handed. That should be obvious to anyone who does computer graphics, but time and again, I hear the misnomers "left/right-handed rotation" and it drives me nuts! It's as bad as talking about "row major matrices" without specifying whether you're using row vectors (on the left) or column vectors (on the right).

Sincerely,

jw
From: Marc B. Reynolds <marc.reynolds@or...>  2008-06-24 11:14:36

Cayley in [1,2] demonstrates general 3D rotations via quaternions (left-handed and right-handed respectively). From [2]:

  P' = Q P (1/Q)    (e.1)

Notice that if you multiply Q by a nonzero scale factor 's', you get:

  P' = (sQ) P (1/(sQ)) = (s/s) Q P (1/Q) = Q P (1/Q)

So if we consider some quaternion Q to represent a rotation using (e.1), all scalar multiples of Q represent the same rotation. If we additionally consider Q to be a point in some 4D space, then the point is homogeneous and (since we've dropped one degree of freedom) is a 3D object embedded in the 4D space. The set of all quaternions which represent the same rotation may be described as 'sQ', again for all 's' except zero, and is a line through the space.

Unit quaternions come into play since the inverse can be replaced by the conjugate, which has nice algebraic, numeric and computational properties. So if U is a unit quaternion we can instead use the following:

  R' = U R U*    (e.2)

The set of all unit quaternions is a sphere, and the line of all quaternions which represent a given rotation intersects the sphere at two points (thus the double coverage). Continuing to use (e.2), let's plug in a non-unit quaternion Q = sU:

  R'' = Q R Q* = (sU) R (sU)* = (s^2) U R U* = (s^2) R'

So R'' is R' scaled by a factor of (s^2). Composition of rotations is achieved by the product:

  C = A B

Multiply 'A' and 'B' by nonzero scales 'a' and 'b' respectively:

  C = (aA)(bB) = (ab)AB

Combining all of the above: to effect a scaled rotation, use a quaternion whose magnitude is the square root of the desired scale factor in (e.2).

[1] "On certain results concerning quaternions", Arthur Cayley, 1845
[2] "On the Application of Quaternions to the Theory of Rotation", Arthur Cayley, 1848
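The scaled-rotation claim at the end is easy to verify numerically. Here is a small sketch (plain Python, helper names my own) showing that a quaternion of magnitude sqrt(s) used in the (e.2) sandwich both rotates and scales by s:

```python
import math

def qmul(a, b):
    # Hamilton product, quaternions as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def sandwich(q, v):
    # R'' = Q R Q*, with R the pure quaternion (0, v)
    w, x, y, z = q
    qc = (w, -x, -y, -z)
    return qmul(qmul(q, (0.0,) + v), qc)[1:]

# U: unit quaternion for 90 degrees about +Z
h = math.sqrt(0.5)
u = (h, 0.0, 0.0, h)

s = 3.0                        # desired scale factor
q = tuple(math.sqrt(s) * c for c in u)   # |Q| = sqrt(s)

print(sandwich(u, (1.0, 0.0, 0.0)))  # ~ (0, 1, 0): plain rotation
print(sandwich(q, (1.0, 0.0, 0.0)))  # ~ (0, 3, 0): rotated and scaled by s
```

Because the sandwich is quadratic in Q, the magnitude enters squared, which is exactly why the square root appears in the recipe above.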
From: pontus birgersson <herruppohoppa@ho...>  2008-06-24 06:28:12

Thanks! I think I might be seeing some interesting optimizations ahead.

Pontus

> Date: Mon, 23 Jun 2008 16:45:05 -0700
> From: jwatte@...
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Representing Animation Key Frame with Quaternion+Translation Vector
>
> pontus birgersson wrote:
> > I'm guessing it depends on the target platform. Early rather crude
> > tests on my old GeForce 6800 told me that it might be worth sending a
> > bit more data in order to relieve the vertex shader of the additional
> > reconstruction. Since we're mainly targeting the Xbox 360, which
> > I've heard has a beast of a vertex processor, the same will probably
> > not be true there.
>
> Why reconstruct a 4x4 for a bone matrix? You can blend the bones as
> three vector4s, and you can transform the vertex using three dot
> products. Only the projection matrix really needs a full 4x4.
> The point about "sending more data" does not have to do with the data
> throughput, it has to do with how many bones you can cram into a single
> pass, without splitting your mesh. With 4x4 matrices, you can do about
> 60; with 4x3 you can do about 80; with offset + quaternion you can do
> about 120!
>
> Sincerely,
>
> jw
>
> -------------------------------------------------------------------------
> Check out the new SourceForge.net Marketplace.
> It's the best place to buy or sell services for
> just about anything Open Source.
> http://sourceforge.net/services/buy/index.php
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list
From: Tony Cox <tonycox@mi...>  2008-06-24 02:20:29

Also, consider preprocessing your mesh to avoid long skinny triangles if possible. Depending on how your mesh was generated and what else you are using it for, there may be many different triangulations of your space which suffice - can you choose the one which minimizes edge lengths? For example, consider a given quad (pair of adjacent triangles). If it is legal in your system to arbitrarily retriangulate that quad by flipping the interior edge to the 'other' diagonal, then do that where it reduces the length of the diagonal.

I recall back in the mists of time on Dungeon Keeper, the AI navigation mesh was generated on the fly as the dungeon was being built. It was built in a quick-and-dirty way as edits were made, but in the background we'd run through the mesh looking for spots to make exactly that adjustment to reduce the number of skinny triangles, and thus get more pleasing navigation routes. We also used some of the other techniques mentioned in this thread - we got decent results by having two phases: first compute the A* path through the triangle centers, and then postprocess that path to produce a more natural-looking route. (Working on this second phase of code was actually one of my first ever jobs as a full-time game developer...ah, the memories...)

(These days, with a grid-based map I'm sure you'd just run A* on the raw map data, but we did a triangulation to drastically reduce the number of nodes in the graph that needed searching. When you had to run on a 486...)

-----Original Message-----
From: gdalgorithms-list-bounces@... [mailto:gdalgorithms-list-bounces@...] On Behalf Of Jon Watte
Sent: Monday, June 23, 2008 4:48 PM
To: Game Development Algorithms
Subject: Re: [Algorithms] A-Star Adjacency Measure for Navigation Mesh

Sam Yatchmenoff wrote:
> I have a few questions about this. First of all, can I safely alter my
> adjacency measurement so that it takes into consideration the path back
> to the starting position?
> If not, then what would be a good strategy to
> deal with this problem? The simplest solution I can think of would be
> to alter NavMesh so that it had more squarish nodes, but I'd like to
> hear any ideas that DON'T involve changing the nodes.

In the category of "quick fixes," create nodes along the edges of the triangles, as well as in the center. Allow any movement between nodes, not just through the center. I've found that having nodes at the center of each edge, as well as one avoidance radius away from each corner, plus in the center, will solve the most glaring problems with center-only.

Sincerely,

jw
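Tony's quad-retriangulation suggestion can be sketched in a few lines. This is an illustrative Python version (names are mine; a real system would also check that the flip is legal, e.g. that the quad is convex and the new triangles are still walkable):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def retriangulate_quad(a, b, c, d):
    """Given a quad a-b-c-d (in winding order) currently split along the
    diagonal a-c into triangles (a,b,c) and (a,c,d), flip to the other
    diagonal b-d whenever that diagonal is shorter.  Shorter diagonals
    mean fewer long skinny triangles in the navmesh."""
    if dist(b, d) < dist(a, c):
        return [(a, b, d), (b, c, d)]   # flipped: split along b-d
    return [(a, b, c), (a, c, d)]       # keep the original split

# a long skinny parallelogram: the a-c diagonal is much longer than b-d
quad = ((0.0, 0.0), (10.0, 0.0), (11.0, 1.0), (1.0, 1.0))
print(retriangulate_quad(*quad))
```

Run as a background pass over every pair of adjacent triangles, this converges toward the shorter-edged triangulation Tony describes.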
From: Jon Watte <jwatte@gm...>  2008-06-23 23:48:08

Sam Yatchmenoff wrote:
> I have a few questions about this. First of all, can I safely alter my
> adjacency measurement so that it takes into consideration the path back
> to the starting position? If not, then what would be a good strategy to
> deal with this problem? The simplest solution I can think of would be
> to alter NavMesh so that it had more squarish nodes, but I'd like to
> hear any ideas that DON'T involve changing the nodes.

In the category of "quick fixes," create nodes along the edges of the triangles, as well as in the center. Allow any movement between nodes, not just through the center. I've found that having nodes at the center of each edge, as well as one avoidance radius away from each corner, plus in the center, will solve the most glaring problems with center-only.

Sincerely,

jw
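As a sketch of that quick fix (plain Python; pushing the corner nodes toward the centroid is my reading of "one avoidance radius away from each corner" - the posts don't specify the offset direction):

```python
import math

def triangle_nodes(tri, avoid_radius):
    """Candidate path nodes for one navmesh triangle: the centroid, the
    midpoint of each edge, and a point one avoidance radius in from each
    corner.  Search edges would then connect any pair of nodes with an
    unobstructed segment, not just center-to-center."""
    cx = sum(p[0] for p in tri) / 3.0
    cy = sum(p[1] for p in tri) / 3.0
    nodes = [(cx, cy)]
    # midpoints of the three edges
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        nodes.append(((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0))
    # corners, offset toward the centroid by the avoidance radius
    for (px, py) in tri:
        dx, dy = cx - px, cy - py
        d = math.hypot(dx, dy) or 1.0
        nodes.append((px + dx / d * avoid_radius,
                      py + dy / d * avoid_radius))
    return nodes

tri = ((0.0, 0.0), (12.0, 0.0), (0.0, 9.0))
print(len(triangle_nodes(tri, 0.5)))   # 7 nodes: 1 center + 3 edges + 3 corners
```

Seven nodes per triangle instead of one makes the A* graph denser, which is exactly what rescues long skinny triangles without changing the mesh itself.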
From: Jon Watte <jwatte@gm...>  2008-06-23 23:45:10

pontus birgersson wrote:
> I'm guessing it depends on the target platform. Early rather crude
> tests on my old GeForce 6800 told me that it might be worth sending a
> bit more data in order to relieve the vertex shader of the additional
> reconstruction. Since we're mainly targeting the Xbox 360, which
> I've heard has a beast of a vertex processor, the same will probably
> not be true there.

Why reconstruct a 4x4 for a bone matrix? You can blend the bones as three vector4s, and you can transform the vertex using three dot products. Only the projection matrix really needs a full 4x4.

The point about "sending more data" does not have to do with the data throughput; it has to do with how many bones you can cram into a single pass without splitting your mesh. With 4x4 matrices, you can do about 60; with 4x3 you can do about 80; with offset + quaternion you can do about 120!

Sincerely,

jw
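Jon's point can be illustrated on the CPU side. A minimal sketch (plain Python, with a hypothetical data layout where each bone is three rows of four floats, i.e. three vector4s) of blending without ever building a 4x4:

```python
def blend_bones(bones, weights):
    """Blend 3x4 bone transforms row by row, treating each bone as three
    vector4s; no 4x4 reconstruction is needed."""
    rows = [[0.0] * 4 for _ in range(3)]
    for bone, w in zip(bones, weights):
        for r in range(3):
            for c in range(4):
                rows[r][c] += w * bone[r][c]
    return rows

def transform_point(rows, v):
    """Transform (x, y, z) by the blended 3x4: three dot products
    against the homogeneous vector (x, y, z, 1)."""
    h = (v[0], v[1], v[2], 1.0)
    return tuple(sum(row[i] * h[i] for i in range(4)) for row in rows)

identity = [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0]]
shifted  = [[1.0, 0.0, 0.0, 2.0],   # identity rotation, translated +2 in x
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0]]

blended = blend_bones([identity, shifted], [0.5, 0.5])
print(transform_point(blended, (0.0, 0.0, 0.0)))   # -> (1.0, 0.0, 0.0)
```

In a vertex shader the same idea becomes three weighted adds on constant registers followed by three dp4 instructions.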
From: Jason Hughes <jason_hughes@di...>  2008-06-23 22:57:36

Hi Sam,

I implemented a similar system for a company last year. The problem with polygon centers, as you've discovered, is that they make a lousy approximation for area. If you constrain the problem to them, you can still get good results; however, it takes a little more effort than adjusting the input.

Adjacency should be literal topological adjacency, though. If your system assumes physical proximity, I'm not sure how that factors in... do you reject any faces that are not significantly planar with the ground? Weaving them into a coherent graph is not a trivial problem. You could try to increase the density of your A* graph by connecting all close vertices AND centers, then find a metric to remove redundant edges. Quite a bit of work, and bound to give bad results somewhere.

In terms of pathing once you have a solution, I found that the triangle centers along a path are not the places you really want to walk towards. The constraints are really the two edges (incoming and outgoing) of the triangle, and you simply need to find a relaxed path that is as straight as possible and manages to hit all the edges of interest, with minimum angular deviation. Some pathing agents will behave differently (i.e. tanks versus birds), and might need a different relaxation technique, or parameterize it with a smoothness factor. Check out "convex points in A*" to see how the rigorous libraries do it.

Best of luck,
JH

Sam Yatchmenoff wrote:
> I'm developing a pathfinding system that procedurally generates a
> NavMesh from level geometry, then uses A* to find the paths from point A
> to point B. That's working fine except in cases where I have very
> elongated polygons in the mesh. The pathfinder will often give me an
> obviously suboptimal path because my adjacency measurement is the
> distance between the centers (averages of vertices) of the polygons, and
> this is sometimes a much greater distance than the agent will actually
> travel through these polygons.
>
> I have a few questions about this. First of all, can I safely alter my
> adjacency measurement so that it takes into consideration the path back
> to the starting position? If not, then what would be a good strategy to
> deal with this problem? The simplest solution I can think of would be
> to alter NavMesh so that it had more squarish nodes, but I'd like to
> hear any ideas that DON'T involve changing the nodes.
>
> Thanks in advance,
> Sam Yatchmenoff
From: Sam Yatchmenoff <sam.yatchmenoff@gm...>  2008-06-23 22:03:14

I'm developing a pathfinding system that procedurally generates a NavMesh from level geometry, then uses A* to find the paths from point A to point B. That's working fine except in cases where I have very elongated polygons in the mesh. The pathfinder will often give me an obviously suboptimal path because my adjacency measurement is the distance between the centers (averages of vertices) of the polygons, and this is sometimes a much greater distance than the agent will actually travel through these polygons.

I have a few questions about this. First of all, can I safely alter my adjacency measurement so that it takes into consideration the path back to the starting position? If not, then what would be a good strategy to deal with this problem? The simplest solution I can think of would be to alter the NavMesh so that it had more squarish nodes, but I'd like to hear any ideas that DON'T involve changing the nodes.

Thanks in advance,
Sam Yatchmenoff
From: pontus birgersson <herruppohoppa@ho...>  2008-06-23 21:44:50

> pontus birgersson wrote:
> > In the current solution I'm working with, we keep the animation data as
> > quaternions and vectors (all transforms relative to the parent) in order
> > to interpolate efficiently as well as perform blending operations. In
> > the end we still convert them into matrices, then do a full hierarchy
> > matrix mul before sending the matrices to the gpu.
>
> We do the same thing, except we do the hierarchy multiplication in
> quaternions/offsets, and only convert to matrices when sending to the
> card (as 3x4 mats). It's a decent compromise.

Yes, I'm also sending 3x4 matrices and then reconstructing them as 4x4 matrices in the vertex shader, but I've been thinking of discarding that solution in order to avoid the reconstruction. I'm not sure whether or not it's a good tradeoff. Have you measured any of this?

I'm guessing it depends on the target platform. Early rather crude tests on my old GeForce 6800 told me that it might be worth sending a bit more data in order to relieve the vertex shader of the additional reconstruction. Since we're mainly targeting the Xbox 360, which I've heard has a beast of a vertex processor, the same will probably not be true there.

Pontus
From: Jon Watte <jwatte@gm...>  2008-06-23 21:31:35

pontus birgersson wrote:
> Interesting topic: how would you go about skinning a mesh using this
> kind of data? Is hardware skinning still an option? I assume matrix
> and vector operations are heavily optimized on current gpus.

Yes, you can still hardware skin. In fact, you can do more bones per pass, because offset+quaternion is only 2 constant registers, whereas a matrix is at least 3 constant registers. The code to transform a vertex by a quaternion is slightly more expensive than the code to transform by a matrix, as is the code to blend quaternions, but it will still fit in a shader model 2 vertex shader.

> In the current solution I'm working with, we keep the animation data as
> quaternions and vectors (all transforms relative to the parent) in order
> to interpolate efficiently as well as perform blending operations. In
> the end we still convert them into matrices, then do a full hierarchy
> matrix mul before sending the matrices to the gpu.

We do the same thing, except we do the hierarchy multiplication in quaternions/offsets, and only convert to matrices when sending to the card (as 3x4 mats). It's a decent compromise.

Sincerely,

jw
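For reference, the per-vertex work for the offset+quaternion case looks roughly like this. A Python sketch (my own helper names) of the common cross-product expansion of q v q*, which in a real vertex shader compiles to a handful of mul/mad and cross instructions:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def quat_rotate(q, v):
    """Rotate v by unit quaternion q = (w, x, y, z) using the
    shader-friendly expansion v' = v + 2 * qv x (qv x v + w*v)."""
    w, qv = q[0], q[1:]
    t = cross(qv, v)
    t = (t[0] + w*v[0], t[1] + w*v[1], t[2] + w*v[2])
    u = cross(qv, t)
    return (v[0] + 2.0*u[0], v[1] + 2.0*u[1], v[2] + 2.0*u[2])

def skin_vertex(offset, q, v):
    """Offset + quaternion: two constant registers' worth of bone data
    instead of the three (or four) a matrix needs."""
    r = quat_rotate(q, v)
    return (r[0] + offset[0], r[1] + offset[1], r[2] + offset[2])

h = math.sqrt(0.5)   # 90 degrees about +Z
print(skin_vertex((1.0, 2.0, 3.0), (h, 0.0, 0.0, h), (1.0, 0.0, 0.0)))
```

The rotate itself is a few more ALU ops than a 3x4 transform, which is the "slightly more expensive" trade Jon mentions in exchange for fitting ~120 bones per pass.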
From: pontus birgersson <herruppohoppa@ho...>  2008-06-23 21:19:15

Interesting topic: how would you go about skinning a mesh using this kind of data? Is hardware skinning still an option? I assume matrix and vector operations are heavily optimized on current gpus.

In the current solution I'm working with, we keep the animation data as quaternions and vectors (all transforms relative to the parent) in order to interpolate efficiently as well as perform blending operations. In the end we still convert them into matrices, then do a full hierarchy matrix mul before sending the matrices to the gpu.

Pontus

> Date: Mon, 23 Jun 2008 13:01:52 -0500
> From: jason_hughes@...
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Representing Animation Key Frame with Quaternion+Translation Vector
>
> From what you're asking, if you want to transform a relative-space
> vector from the leaf node, you do something like this:
>
>   rv = relative to p1
>   wv = desired world vector
>   wv = p0 + q0 * (p1 + q1 * rv) = p(n-1) + (q(n-1) * v) ...
>
> You simply need to invert the quaternions to transform the opposite
> direction. Transforming a point is just loading a quaternion with XYZ0
> and doing a quaternion multiply, if I recall correctly. Since each node
> has translation relative to its parent, you add that after you
> transform the point into the parent space. Of course, you'll want to
> look over the Matrix and Quaternion FAQ for the exact math for it. It's
> sufficient for a layman to get the code working.
>
> Note that the above configuration is completely incompatible with
> scale, unless you allow non-unit quaternions and can wrangle that
> beast...
>
> Hope that helps,
> JH
>
> Lim Sin Chian wrote:
> > Thanks for the replies, guys.
> >
> > Suppose I have a hierarchy with 2 connected bones. One of them (bone0)
> > is the root, the other (bone1) is the child.
> >
> > Say bone0 has transformation T0 (q0, p0) and bone1 has T1 (q1, p1),
> > where q0, q1 are quaternions representing modelling space rotation,
> > and p0, p1 are vectors representing modelling space translations.
> >
> > To compute the world space orientation for bone1, I would do the
> > following:
> >   q1World = qWorld X q0 X q1;
> >
> > I am really interested to find out if there is a similar way to
> > compute the world space translation for bone1. Thanks!
> >
> > "Marc B. Reynolds" <marc.reynolds@...> wrote:
> > > > Is "study parameter" a typo there? I couldn't find any google
> > > > hits on that phrase.
> > >
> > > Try "study parameters", with an additional key, such as:
> > > kinematics, screw, wrench, twist, robotics, etc.
> > >
> > > Some works use "Study parameters" to replace the "dual", since dual
> > > is heavily overloaded in mathematics.
From: Jason Hughes <jason_hughes@di...>  2008-06-23 18:02:00

From what you're asking, if you want to transform a relative-space vector from the leaf node, you do something like this:

  rv = relative to p1
  wv = desired world vector
  wv = p0 + q0 * (p1 + q1 * rv) = p(n-1) + (q(n-1) * v) ...

You simply need to invert the quaternions (for unit quaternions, the conjugate: negate the vector part) to transform the opposite direction. Transforming a point is just loading a quaternion with XYZ0 and doing a quaternion multiply, if I recall correctly. Since each node has translation relative to its parent, you add that after you transform the point into the parent space. Of course, you'll want to look over the Matrix and Quaternion FAQ for the exact math for it. It's sufficient for a layman to get the code working.

Note that the above configuration is completely incompatible with scale, unless you allow non-unit quaternions and can wrangle that beast...

Hope that helps,
JH

Lim Sin Chian wrote:
> Thanks for the replies, guys.
>
> Suppose I have a hierarchy with 2 connected bones. One of them (bone0)
> is the root, the other (bone1) is the child.
>
> Say bone0 has transformation T0 (q0, p0) and bone1 has T1 (q1, p1),
> where q0, q1 are quaternions representing modelling space rotation,
> and p0, p1 are vectors representing modelling space translations.
>
> To compute the world space orientation for bone1, I would do the
> following:
>   q1World = qWorld X q0 X q1;
>
> I am really interested to find out if there is a similar way to
> compute the world space translation for bone1.
>
> Thanks!
>
> "Marc B. Reynolds" <marc.reynolds@...> wrote:
> > > Is "study parameter" a typo there? I couldn't find any google hits
> > > on that phrase.
> >
> > Try "study parameters", with an additional key, such as:
> > kinematics, screw, wrench, twist, robotics, etc.
> >
> > Some works use "Study parameters" to replace the "dual", since dual is
> > heavily overloaded in mathematics.
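Jason's formula can be checked end to end with a couple of toy bones. A sketch (plain Python; unit quaternions as (w, x, y, z), rotation done with the conjugate sandwich rather than any particular FAQ listing):

```python
import math

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qrot(q, v):
    # q * (0, v) * conj(q): the point is loaded as XYZ with zero scalar
    w, x, y, z = q
    return qmul(qmul(q, (0.0,) + v), (w, -x, -y, -z))[1:]

def to_world(chain, rv):
    """chain = [(q0, p0), (q1, p1), ...] from root to leaf.  Computes
    wv = p0 + q0 * (p1 + q1 * rv), generalized to arbitrary depth by
    folding from the leaf outward."""
    wv = rv
    for q, p in reversed(chain):
        r = qrot(q, wv)
        wv = (p[0] + r[0], p[1] + r[1], p[2] + r[2])
    return wv

h = math.sqrt(0.5)
root  = ((h, 0.0, 0.0, h), (1.0, 0.0, 0.0))       # 90 deg about Z, offset +x
child = ((1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0))   # identity rotation, offset +x

print(to_world([root, child], (1.0, 0.0, 0.0)))   # ~ (1, 2, 0)
```

Note the order: each level rotates the child-space result first, then adds its own translation, matching "add that after you transform the point into the parent space".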
From: Jon Watte <jwatte@gm...>  2008-06-23 17:59:18

Lim Sin Chian wrote:
> Just wondering if anyone has done this before and whether it is really
> better in terms of performance and accuracy.

I'm under the impression that everybody does that. Not only does it save space (assuming you don't need scale), but it also interpolates much better. Interpolating between two frames with a matrix looks pretty crufty. The only thing to watch out for is to make sure you go the "short way" around: dot product the two quaternions, and if the outcome is negative, negate all the values of the destination.

To compose quaternions, you just multiply them. Because it's a rotation-translation pair, if it's parent relative, then you apply the parent rotation to the child translation, and then apply your own rotation around that point.

Sincerely,

jw
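The "short way around" check looks like this in an nlerp-style blend (a Python sketch; the posts give no code, so the names are mine):

```python
import math

def qdot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nlerp_shortest(a, b, t):
    """Normalized lerp between unit quaternions (w, x, y, z).  Because q
    and -q encode the same rotation, negate the destination when the dot
    product is negative so the interpolation takes the short arc."""
    if qdot(a, b) < 0.0:
        b = tuple(-c for c in b)
    q = tuple((1.0 - t) * x + t * y for x, y in zip(a, b))
    n = math.sqrt(qdot(q, q))
    return tuple(c / n for c in q)

ident = (1.0, 0.0, 0.0, 0.0)
h = math.sqrt(0.5)
turn = (-h, 0.0, 0.0, -h)   # same rotation as (h, 0, 0, h), opposite sign

mid = nlerp_shortest(ident, turn, 0.5)
print(mid)   # stays near the identity's hemisphere: short way taken
```

Without the sign flip, this pair would interpolate through nearly 270 degrees of quaternion arc instead of 90, which is the visual popping the check prevents.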
From: jacob langford <jacob.langford@gm...>  2008-06-23 14:54:05

Hi,

We have a texel-density debug shader that uses the ddx instructions to determine what mip level will be chosen. We then set it to a target resolution, e.g. 512x512, and it displays green when the 512x512 mip would be used, red if it would prefer a larger texture, and blue if 512x512 is too big. It also displays gridlines so you can see UV distortion.

The other thing that has been helpful is we have a debug page in our game that allows all texture settings to be changed on the fly for any particular texture. If we change the min miplevel to 1.0, then that shows exactly what we would get by reducing the texture size.

The debug shader code is below.

jacob

struct Interp
{
    half4 position : POSITION;
    half2 uv       : TEX0;
};

void vp_uv512( in NvVertexStream0 vs0, out Interp interp )
{
    TransformedVertex vtx = TransformVertex( vs0 );
    interp.position = vtx.screen_position;
    interp.uv = vtx.uv0 * 512.0f;
}

void vp_uv1024( in NvVertexStream0 vs0, out Interp interp )
{
    TransformedVertex vtx = TransformVertex( vs0 );
    interp.position = vtx.screen_position;
    interp.uv = vtx.uv0 * 1024.0f;
}

void fp_texelDensity( in Interp interp, out half4 color : COLOR )
{
    half4 good          = half4( 0.0f,  1.0f, 0.0f, 1.0f );
    half4 goodToTooHigh = half4( 0.0f, -1.0f, 1.0f, 0.0f );
    half4 goodToTooLow  = half4( 1.0f, -1.0f, 0.0f, 0.0f );

    half maxDu  = max( abs(ddx( interp.uv.x )), abs(ddy( interp.uv.x )) );
    half maxDv  = max( abs(ddx( interp.uv.y )), abs(ddy( interp.uv.y )) );
    half maxDuv = max( maxDu, maxDv );

    half resTooHigh = smoothstep( 1.0f, 1.5f, maxDuv );
    half resTooLow  = 1.0f - smoothstep( 0.5f, 1.0f, maxDuv );

    color = good + goodToTooHigh * resTooHigh + goodToTooLow * resTooLow;

    // Draw gridlines corresponding to 8 texels at
    // incoming UV resolution
    half inGrid = step( 0.1f, frac( interp.uv.x * 0.125f ) )
                * step( 0.1f, frac( interp.uv.y * 0.125f ) );
    color *= inGrid;
}

On Mon, Jun 23, 2008 at 6:58 AM, Juhani Honkala <juhnu@...> wrote:
> Allowing them to reload content without restarting the game is essential
> for
> good quality and fine-tuned content, as it reduces turnaround time
> considerably. It might be tricky to implement if not planned early on,
> though.
>
> Juhani
>
> On Fri, Jun 20, 2008 at 9:07 PM, Jason Hughes <jason_hughes@...> wrote:
>> I've recently spent a little time helping our artists get better results
>> in the game by putting a few tools into the runtime for them to evaluate
>> their use of resources.
>>
>> Problem: Evaluating the use of texture resolution and UV mapping quality
>> (pixel/texel ratio) on a per-platform basis
>>
>> Solution: I created a 512x512 mipmapped image, where each mip surface is
>> a checkerboard of a different color, and compile it into the executable
>> during debug builds. When the artist hits a key while the game is
>> running, I switch all the diffuse textures to point to the mip-test
>> checkerboard, and force sampling to nearest mip, point sampling. Using
>> this mode, an artist can tell where the actual hardware will fetch
>> from. The colors they see are the resolution limits they are to use for
>> the final textures. Due to filtering, they bump up the resolution they
>> see by one if it's close.
>>
>> I'm curious what others have done. If you are willing to share simple
>> tips and tricks, please do.
>>
>> Thanks,
>> JH
From: Juhani Honkala <juhnu@al...>  2008-06-23 13:58:07

Allowing them to reload content without restarting the game is essential for good quality and fine-tuned content, as it reduces turnaround time considerably. It might be tricky to implement if not planned early on, though.

Juhani

On Fri, Jun 20, 2008 at 9:07 PM, Jason Hughes <jason_hughes@...> wrote:
> I've recently spent a little time helping our artists get better results
> in the game by putting a few tools into the runtime for them to evaluate
> their use of resources.
>
> Problem: Evaluating the use of texture resolution and UV mapping quality
> (pixel/texel ratio) on a per-platform basis
>
> Solution: I created a 512x512 mipmapped image, where each mip surface is
> a checkerboard of a different color, and compile it into the executable
> during debug builds. When the artist hits a key while the game is
> running, I switch all the diffuse textures to point to the mip-test
> checkerboard, and force sampling to nearest mip, point sampling. Using
> this mode, an artist can tell where the actual hardware will fetch
> from. The colors they see are the resolution limits they are to use for
> the final textures. Due to filtering, they bump up the resolution they
> see by one if it's close.
>
> I'm curious what others have done. If you are willing to share simple
> tips and tricks, please do.
>
> Thanks,
> JH
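A sketch of generating the debug texture Jason describes (plain Python; the palette and checker size are arbitrary choices of mine, and a real tool would pack the surfaces into the platform's texture format):

```python
# One distinctly colored checkerboard per mip surface, 512x512 down to 1x1,
# so the mip the hardware actually samples is visible at a glance in-game.
MIP_COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0),
              (255, 0, 255), (0, 255, 255), (255, 255, 255),
              (128, 128, 128), (255, 128, 0), (128, 0, 255)]

def make_miptest_chain(size=512, checker=8):
    """Build a full mip chain of checkerboards, each level in a
    different color from MIP_COLORS."""
    levels = []
    level = 0
    while size >= 1:
        color = MIP_COLORS[level % len(MIP_COLORS)]
        surface = [[color if ((x // checker) + (y // checker)) % 2 == 0
                    else (0, 0, 0)
                    for x in range(size)]
                   for y in range(size)]
        levels.append(surface)
        size //= 2
        level += 1
    return levels

chain = make_miptest_chain()
print(len(chain))   # 10 levels: 512, 256, ..., 2, 1
```

With nearest-mip point sampling forced, whatever color an artist sees on screen names the mip being fetched, which is the whole trick.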