gdalgorithms-list Mailing List for Game Dev Algorithms (Page 3)
From: Chris G. <cg...@va...> - 2012-12-19 20:09:14
I'd think you'd want to compare not just the static configuration of the pose but also the point/joint velocities. Your ideal choice will be one where not only the pose matches well, but the body parts are also moving in similar directions and at similar speeds to the ragdoll's current state.

From: Ben Sunshine-Hill [mailto:sn...@gm...]
Sent: Friday, December 14, 2012 2:35 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] Finding the best pose to re-enter animation graph from ragdoll
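Chris's velocity term drops straight into an additive score. A minimal C++ sketch, assuming both the ragdoll and each candidate recovery pose are sampled at the same small set of tracked points in a common canonical frame; every name here (poseScore, kVelWeight, etc.) is illustrative, not from the thread:

#include <cmath>

struct Vec3 { float x, y, z; };

static float distSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Lower is better. kVelWeight trades "looks similar" against "moves
// similarly"; zero reduces this to a purely static comparison.
float poseScore(const Vec3* ragdollPos, const Vec3* ragdollVel,
                const Vec3* posePos, const Vec3* poseVel,
                int count, float kVelWeight)
{
    float score = 0.0f;
    for (int i = 0; i < count; ++i)
    {
        score += distSq(ragdollPos[i], posePos[i]);              // static term
        score += kVelWeight * distSq(ragdollVel[i], poseVel[i]); // motion term
    }
    return score;
}

The velocity weight is the tunable part; it would likely need adjusting per character and per recovery animation.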
From: Ben Sunshine-H. <sn...@gm...> - 2012-12-14 10:35:48
Ironically, that paper actually compares joint orientations instead of point clouds. For point cloud-based pose similarity estimation, Kovar's original motion graph paper is probably a good reference. http://pages.cs.wisc.edu/~kovar/mographs.pdf

Ben
From: Michael De R. <mic...@gm...> - 2012-12-13 21:13:18
Also with the triangle normal idea, a notion of distance is missing, so you should add something like distance hips-hand, hips-foot, foot-foot, etc. to your comparison algorithm if you were to try this method.

Though since there is already a paper on the point cloud approach, it's probably safer to start with it.

Sent from my iPhone
From: Richard F. <rf...@tb...> - 2012-12-13 20:44:59
On 12/13/2012 8:09 PM, Michael De Ruyter wrote:

> - when comparing local quaternions for the joints, the comparison will take into account the twist around the limbs even though any amount of twist doesn't change the position of the limb. Therefore you could get drastic differences when visually there are barely any.

True in the general case, but I'm using a hierarchical setup, so a twist in an upper arm can drastically change the position/orientation of a forearm and so on. It matters less and less as you approach the leaf nodes, but this is something I intended to eliminate with weighting.

> - even if the joint orientations are different, the positions of the limbs, especially their endings like hands or feet, could still be very close, i.e. potentially closer than limbs with similar rotations but with their root joint (shoulder for instance) off by a bit. You mention a weighting system, but that is going to be a pain to tune.

Ah, yes, OK. I'd assumed that a pose with leaf-node discrepancies would be less visually different than one with trunk-node discrepancies, but that's not a sound assumption.

This sounds like it would be a problem for *any* hierarchy-based approach, so anything based on comparing local-space positions is probably a non-starter.

> Another approach would be to find a comparison algorithm that compares the overall position of the limbs. For instance you could consider modeling triangles based on significant body joints (hips/shoulder/hand, hips/hand/foot, hips/shoulder/shoulder), then use the normals of those triangles for your pose comparison. You would still need to make the normals relative to the hips and then use a hips orientation comparison process.

Right, OK. Sounds similar to the point cloud approach, maybe a little less prone to small discrepancies, as using the normal instead of the joint positions would equate similar triangles.

Cheers!

- Richard
From: Richard F. <rf...@tb...> - 2012-12-13 20:17:04
On 12/13/2012 7:54 PM, Ben Sunshine-Hill wrote:

> Joint angles suck for pose comparisons -- they just aren't the basis of our intuitive notion of similarity. IMHO, point clouds work much better. Transform a few bone-attached points -- say, pelvis, left shoulder, right shoulder, left elbow, right elbow, left knee, right knee -- canonicalize by putting the pelvis at zero and the shoulder midpoint at +X, and find the minimum squared distance to a recovery pose (with some per-point weighting, if you like).

Hm, right. I think my concern with this approach previously was that when the character is prone, the reference points would all be pretty much coplanar, making it hard to tell which way he's facing... but thinking about it now, that's silly, because his left and right sides always have to be a particular way around for a given faceup/facedown. Using a point cloud of just a few reference points is going to be a lot faster than calculating joint angles across all major skeleton bones, too. So I'll give this a shot first.

> For references, the one that immediately comes to mind is "Dynamic Response for Motion Capture Animation". They're blending into a response animation while the character's still fully ragdoll, so they have to look at multiple frames to get velocity effects in there -- if you're waiting until the guy's all fallen over, your task will be simpler.

Cool. Looks like they don't go into much detail in the paper, but it's a starting point if I want to look for related work.

Thanks!

- Richard
From: Michael De R. <mic...@gm...> - 2012-12-13 20:09:53
Hi Richard,

I believe you will face a couple of problems with the approach you describe in 1):

- When comparing local quaternions for the joints, the comparison will take into account the twist around the limbs, even though any amount of twist doesn't change the position of the limb. Therefore you could get drastic differences when visually there are barely any.

- Even if the joint orientations are different, the positions of the limbs, especially their endings like hands or feet, could still be very close, i.e. potentially closer than limbs with similar rotations but with their root joint (shoulder for instance) off by a bit. You mention a weighting system, but that is going to be a pain to tune.

Another approach would be to find a comparison algorithm that compares the overall position of the limbs. For instance you could consider modeling triangles based on significant body joints, for instance:

+ hips, shoulder, hand
+ hips, hand, foot
+ hips, shoulder, shoulder

Then use the normals of those triangles for your pose comparison. You would still need to make the normals relative to the hips and then use a hips orientation comparison process.

I haven't actually implemented this, but that's how I would go about it. I hope this gives you a different perspective or more ideas.

Michael
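A sketch of how Michael's triangle idea might look in C++, with the distance terms from his follow-up bolted on; the joint choices and the way the terms are summed are guesses for illustration (he says himself he hasn't implemented it):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }
static Vec3 normalize(Vec3 a)    { float l = len(a); return { a.x / l, a.y / l, a.z / l }; }

// Unit normal of the triangle (a, b, c). Joints are assumed to be in hips
// space already, so the normals are automatically hips-relative.
static Vec3 triNormal(Vec3 a, Vec3 b, Vec3 c)
{
    return normalize(cross(sub(b, a), sub(c, a)));
}

// One side of the body against the same side of the reference pose.
// 1 - dot is 0 for identical normals, 2 for opposite ones; the distance
// terms supply the scale information the normals alone are missing.
float triangleScore(Vec3 hips, Vec3 shoulder, Vec3 hand, Vec3 foot,
                    Vec3 rHips, Vec3 rShoulder, Vec3 rHand, Vec3 rFoot)
{
    float s = 1.0f - dot(triNormal(hips, shoulder, hand),
                         triNormal(rHips, rShoulder, rHand));
    s += 1.0f - dot(triNormal(hips, hand, foot),
                    triNormal(rHips, rHand, rFoot));
    s += std::fabs(len(sub(hand, hips)) - len(sub(rHand, rHips)));
    s += std::fabs(len(sub(foot, hips)) - len(sub(rFoot, rHips)));
    return s; // lower is better
}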
From: Alex L. <lin...@gm...> - 2012-12-13 19:55:26
I figure you've got 3 normalized vectors:

dollForward = stomach of ragdoll vector, pointing along Y, downwards (not sure if positive or negative)
dollUp = along spine towards head of ragdoll
recoveryForward = stomach of recovery pose

dollForward DOT recoveryForward will be near 1 if the doll is on its stomach. The same dot can be run against other categories of recovery pose for lying on side or back.

Camera look-at style cross products with dollUp and dollForward will get you 3 axes and, from them, a quat or matrix to apply to the recovery pose root, or take vectors to or from 'doll space' to 'recovery space' to match other limbs against your recovery-poses-on-stomach db.

Just writing aloud, hope it helps!
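The look-at-style construction Alex mentions is two cross products. A small C++ sketch under the same assumptions (dollForward/dollUp already extracted from the ragdoll; the 0.8 threshold is invented):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v)
{
    float l = std::sqrt(dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

struct Mat3 { Vec3 right, up, forward; }; // three orthonormal axes

// Look-at style: two cross products turn forward/up into a full basis that
// can map limb vectors between doll space and recovery space.
Mat3 basisFromForwardUp(Vec3 forward, Vec3 up)
{
    Vec3 f = normalize(forward);
    Vec3 r = normalize(cross(up, f)); // right = up x forward
    Vec3 u = cross(f, r);             // re-orthogonalized up
    return { r, u, f };
}

// dot(dollForward, recoveryForward) near 1 => the doll is on its stomach,
// so the stomach-down recovery poses are the candidates to score further.
bool onStomach(Vec3 dollForward, Vec3 recoveryForward)
{
    return dot(normalize(dollForward), normalize(recoveryForward)) > 0.8f;
}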
From: Ben Sunshine-H. <sn...@gm...> - 2012-12-13 19:55:11
Joint angles suck for pose comparisons -- they just aren't the basis of our intuitive notion of similarity. IMHO, point clouds work much better. Transform a few bone-attached points -- say, pelvis, left shoulder, right shoulder, left elbow, right elbow, left knee, right knee -- canonicalize by putting the pelvis at zero and the shoulder midpoint at +X, and find the minimum squared distance to a recovery pose (with some per-point weighting, if you like).

For references, the one that immediately comes to mind is "Dynamic Response for Motion Capture Animation". They're blending into a response animation while the character's still fully ragdoll, so they have to look at multiple frames to get velocity effects in there -- if you're waiting until the guy's all fallen over, your task will be simpler.

Ben
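Ben's canonicalization step, sketched in C++: translate so the pelvis sits at the origin, then yaw about the world up axis so the shoulder midpoint lands on +X, and compare clouds by weighted squared distance. The point layout and weights are assumptions, not from the thread:

#include <cmath>

struct Vec3 { float x, y, z; };

// points[0] is the pelvis; shoulderMid is the midpoint of the two shoulder
// points, in the same space as points. Canonicalizes in place.
void canonicalize(Vec3* points, int count, Vec3 shoulderMid)
{
    Vec3 pelvis = points[0];
    for (int i = 0; i < count; ++i)
    {
        points[i].x -= pelvis.x;
        points[i].y -= pelvis.y;
        points[i].z -= pelvis.z;
    }
    // Yaw of the shoulder midpoint's horizontal offset; rotating by -yaw
    // about the up axis puts it on +X without touching any other rotation.
    float yaw = std::atan2(shoulderMid.z - pelvis.z, shoulderMid.x - pelvis.x);
    float c = std::cos(yaw), s = std::sin(yaw);
    for (int i = 0; i < count; ++i)
    {
        float x = points[i].x, z = points[i].z;
        points[i].x = c * x + s * z;
        points[i].z = c * z - s * x;
    }
}

// Weighted squared distance between two canonicalized clouds; lower is better.
float cloudDistSq(const Vec3* a, const Vec3* b, const float* w, int count)
{
    float d = 0.0f;
    for (int i = 0; i < count; ++i)
    {
        float dx = a[i].x - b[i].x, dy = a[i].y - b[i].y, dz = a[i].z - b[i].z;
        d += w[i] * (dx * dx + dy * dy + dz * dz);
    }
    return d;
}

Because the yaw is removed during canonicalization, this comparison has exactly the ignoring-rotation-around-global-Y property the original question asks for.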
From: Jeff R. <je...@gm...> - 2012-12-13 19:40:13
There are probably a number of ways to do it. My first guess would be to compute the difference in rotation for the root bone (that is, what rotation takes you from your starting frame to the current ragdoll orientation), and then examine the "up" vector of the resulting transform. If it's too far from vertical, you don't have a very good match. You can compute a score perhaps based on the dot product between the "up" basis of this transform and the global up direction.

--
Jeff Russell
Engineer, Marmoset
www.marmoset.co
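A compact C++ sketch of Jeff's root-bone test, with the quaternion helpers written out so it stands alone; treating up.y as the score is one possible reading of his suggestion:

#include <cmath>

struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };

static Quat conjugate(Quat q) { return { -q.x, -q.y, -q.z, q.w }; }

static Quat mul(Quat a, Quat b) // Hamilton product
{
    return { a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y + a.y * b.w + a.z * b.x - a.x * b.z,
             a.w * b.z + a.z * b.w + a.x * b.y - a.y * b.x,
             a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z };
}

static Vec3 rotate(Quat q, Vec3 v) // q * v * q^-1 for unit q
{
    Quat p = { v.x, v.y, v.z, 0.0f };
    Quat r = mul(mul(q, p), conjugate(q));
    return { r.x, r.y, r.z };
}

// ~1 means the poses agree up to a yaw; near 0 or negative means the
// recovery pose would have to pitch/roll to match the ragdoll.
float uprightScore(Quat ragdollRoot, Quat recoveryRoot)
{
    Quat delta = mul(ragdollRoot, conjugate(recoveryRoot)); // recovery -> ragdoll
    Vec3 up = rotate(delta, Vec3{ 0.0f, 1.0f, 0.0f });
    return up.y; // dot with the global up axis
}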
From: Richard F. <rf...@tb...> - 2012-12-13 18:49:26
Hi all,

I've got a ragdolled character that I want to begin animating again. I've got a number of states in my animation graph marked as 'recovery points', i.e. animations that a ragdoll can reasonably be blended back to before having the animation graph take over fully. The problem is, I'm not sure how to identify which animation's first frame (the 'recovery pose') is closest to the ragdoll's current pose.

As I see it there are two components to computing a score for each potential recovery point:

1) For each non-root bone, sum the differences in parent-space rotation between current and recovery poses. This is simple enough to do; in addition I think I need to weight the values (e.g. by the physics mass of the bone), as a pose that is off by 30 degrees in the upper arm stands to look a lot less similar to the ragdoll's pose than one that is only off by 30 degrees in the wrist. The result of this step is some kind of score representing the object-space similarity of the poses.

2) Add to (1) some value representing how similar the root bones are. The problem I've got here is that I need to ignore rotation around the global Y axis, while still accounting for other rotations. (I can ignore position as well, as I can move the character's reference frame to account for it.)

Suppose I have a recovery pose animation that has been authored such that the character is lying stretched out prone, on his stomach, facing along +Z. If the ragdoll is also lying stretched out prone on his stomach, facing -X, then the recovery pose is still fine to use - I just need to rotate the character's reference frame around the Y axis to match, so the animation plays back facing the right direction. But if the ragdoll is lying on his back, or sitting up, then it's not usable, regardless of which direction the character's facing in. So, I've got the world-space rotation of the ragdoll's root bone as a quaternion, and a quaternion representing the rotation of the corresponding root bone in the recovery pose in *some* space (I think object-space, but I'm not sure?) as starting points. What can I compute from them that has this ignoring-rotation-around-global-Y property?

It's been suggested that there's some canonicalization step I can perform that would just eliminate any Y-rotation, but I don't know how to do that other than by decomposing to Euler angles, and I suspect that would have gimbal lock problems.

This is probably some pretty simple linear algebra at the end of the day, but between vague memories of eigenvectors and a general uncertainty as to whether I'm just overcomplicating this entire thing, I could use a pointer in the right direction. Any thoughts or references you could give me would be much appreciated.

Cheers!

- Richard
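For the ignoring-rotation-around-global-Y property Richard asks about, a swing-twist style factorization avoids Euler angles entirely: project the quaternion onto its Y twist, divide that out, and compare what remains. A hedged C++ sketch (not anyone's confirmed solution from the thread, and degenerate for 180-degree flips about a horizontal axis, as noted in the comments):

#include <cmath>

struct Quat { float x, y, z, w; };

static Quat conjugate(Quat q) { return { -q.x, -q.y, -q.z, q.w }; }

static Quat mul(Quat a, Quat b) // Hamilton product
{
    return { a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y + a.y * b.w + a.z * b.x - a.x * b.z,
             a.w * b.z + a.z * b.w + a.x * b.y - a.y * b.x,
             a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z };
}

// Factor q as (rotation about global Y) * (remainder) and return the
// remainder. Degenerate when q.y and q.w are both ~0, i.e. a 180-degree
// flip about a horizontal axis; a caller would need a fallback there.
static Quat removeYaw(Quat q)
{
    float n = std::sqrt(q.y * q.y + q.w * q.w);
    Quat yTwist = { 0.0f, q.y / n, 0.0f, q.w / n };
    return mul(conjugate(yTwist), q);
}

// |dot| of two unit quaternions is cos(half the angle) between them:
// ~1 means the root orientations match up to a rotation about global Y.
float rootSimilarity(Quat ragdollRoot, Quat recoveryRoot)
{
    Quat a = removeYaw(ragdollRoot);
    Quat b = removeYaw(recoveryRoot);
    return std::fabs(a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w);
}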
From: J. S. <cf...@cs...> - 2012-09-11 12:47:31
CALL FOR PAPERS - INTERNATIONAL JOURNAL OF EXPERIMENTAL ALGORITHMS (IJEA)
ISSN: 2180-1282 - Volume 3, Issue 1
Info at http://www.cscjournals.org/csc/journals/IJEA/journal_cfp.php?JCode=IJEA

Computer Science Journals (CSC Journals) invites researchers, editors, scientists and scholars to publish their scientific research papers in the International Journal of Experimental Algorithms (IJEA) Volume 3, Issue 1.

Experimental Algorithmics studies algorithms and data structures by joining experimental studies with the more traditional theoretical analyses. With this regard, the aim of IJEA is (1) to stimulate research in algorithms based upon implementation and experimentation, in particular to encourage testing, evaluation and reuse of complex theoretical algorithms and data structures; and (2) to distribute programs and testbeds throughout the research community and to provide a repository of useful programs and packages to both researchers and practitioners. IJEA is a high-quality, refereed, archival journal devoted to the study of algorithms and data structures through a combination of experimentation and classical analysis and design techniques. IJEA contributions are also in the area of test generation and result assessment as applied to algorithms.

CSC Journals anticipate and invite papers on any of the following topics: Algorithm Engineering, Algorithmic Code, Algorithmic Engineering, Algorithmic Network Analysis, Analysis of Algorithms, Approximation Techniques, Cache Oblivious Algorithms, Combinatorial Optimization, Combinatorial Structures and Graphs, Computational Biology, Computational Geometry, Computational Learning Theory, Computational Optimization, Data Structures, Distributed and Parallel Algorithms, Dynamic Graph Algorithms, Experimental Techniques and Statistics, Graph Manipulation, Graphics, Heuristics, Mathematical Programming for Algorithms, Metaheuristic Methodologies, Network Design, Parallel Processing, Randomized Techniques in Algorithms, Routing and Scheduling, Searching and Sorting, Topological Accuracy, Visualization Code, VLSI Design.

Important Dates - IJEA CFP - Volume 3, Issue 1:
Paper Submission: September 30, 2012
Author Notification: November 15, 2012
Issue Publication: December 2012

For complete details about IJEA archives, publications, abstracting/indexing, editorial board and other important information, please refer to the IJEA homepage. We look forward to receiving your valuable papers. If you have further questions please do not hesitate to contact us at csc...@cs.... Our team is committed to providing a quick and supportive service throughout the publication process. A complete list of journals can be found at http://www.cscjournals.org/csc/bysubject.php

Sincerely,
J. Stewart
Computer Science Journals (CSC Journals)
B-5-8 Plaza Mont Kiara, Mont Kiara, 50480 Kuala Lumpur, Malaysia
Tel: +603 6207 1607, +603 2782 6991 | Fax: +603 6207 1697
Url: http://www.cscjournals.org
From: Lorenzo P. <pas...@ul...> - 2012-06-27 23:23:15
This thing's a plague... received a couple today from unrelated sources... :/
From: Cory B. <cor...@ya...> - 2012-06-27 21:52:43
http://recoveringgrace.org/media/googlesave.html?efj=xss.jdg&wrg=ar.sus&yesol=vyqt
From: Cory B. <cor...@ya...> - 2012-06-27 21:47:56
http://ifos-formazione.com/modules/mod_related_items/googlesave.html?otv=ol.gio&ohsy=er.hkm&ghb=zqqd
From: Eric C. <er....@gm...> - 2012-04-28 13:59:14
http://www.eamobile.com/bejeweled

This looks like Terragen to me. http://www.planetside.co.uk/
From: Gino v. d. B. <gin...@gm...> - 2012-04-28 13:28:07
It could be Bryce: http://www.daz3d.com/i/products/bryce?

--
Gino van den Bergen
Dtecta - middleware solutions for real-time 4D collision detection
www.dtecta.com | Trade reg. 17135282
From: James R. <ja...@os...> - 2012-04-28 12:21:55
From a quick peek at screen shots on Google Images, I would say those backgrounds are hand drawn. So the answer is: Photoshop. Probably.
From: Jose M. <jos...@ya...> - 2012-04-28 10:37:03
Hi.

Sorry for this OT, but I need to know from the experts: do you know which software generates backgrounds like those in Bejeweled?

Thanks.

Jose
From: Jon W. <jw...@gm...> - 2012-02-21 21:26:33
Oh, look, yet another questionable "academic" for-profit posting on gd-algorithms! This "international" journal should not be confused with the ACM "Journal of Experimental Algorithms" (which has been going on for a long time). That being said -- while the quality of the ACM publication is much higher, the politics and governance of the ACM has gotten so contrary to anything reasonable that I stopped paying my dues...

Sincerely,

jw

--
Americans might object: there is no way we would sacrifice our living standards for the benefit of people in the rest of the world. Nevertheless, whether we get there willingly or not, we shall soon have lower consumption rates, because our present rates are unsustainable.
From: J. S. <cf...@cs...> - 2012-02-21 05:52:46
|
CALL FOR PAPERS - INTERNATIONAL JOURNAL OF EXPERIMENTAL ALGORITHMS (IJEA)
ISSN: 2180-1282
Volume 3, Issue 1
Info at http://www.cscjournals.org/csc/journals/IJEA/journal_cfp.php?JCode=IJEA

Computer Science Journals (CSC Journals) invites researchers, editors, scientists and scholars to publish their scientific research papers in the International Journal of Experimental Algorithms (IJEA), Volume 3, Issue 1.

Experimental algorithmics studies algorithms and data structures by joining experimental studies with the more traditional theoretical analyses. In this regard, the aim of The International Journal of Experimental Algorithms (IJEA) is (1) to stimulate research in algorithms based upon implementation and experimentation, in particular to encourage testing, evaluation and reuse of complex theoretical algorithms and data structures; and (2) to distribute programs and testbeds throughout the research community and to provide a repository of useful programs and packages to both researchers and practitioners. IJEA is a refereed, archival journal devoted to the study of algorithms and data structures through a combination of experimentation and classical analysis and design techniques. IJEA contributions also cover test generation and result assessment as applied to algorithms.

CSC Journals anticipates and invites papers on any of the following topics: Algorithm Engineering; Algorithmic Code; Algorithmic Engineering; Algorithmic Network Analysis; Analysis of Algorithms; Approximation Techniques; Cache-Oblivious Algorithms; Combinatorial Optimization; Combinatorial Structures and Graphs; Computational Biology; Computational Geometry; Computational Learning Theory; Computational Optimization; Data Structures; Distributed and Parallel Algorithms; Dynamic Graph Algorithms; Experimental Techniques and Statistics; Graph Manipulation; Graphics; Heuristics; Mathematical Programming for Algorithms; Metaheuristic Methodologies; Network Design; Parallel Processing; Randomized Techniques in Algorithms; Routing and Scheduling; Searching and Sorting; Topological Accuracy; Visualization Code; VLSI Design.

Important Dates - IJEA CFP - Volume 3, Issue 1
Paper Submission: March 31, 2012
Author Notification: May 15, 2012
Issue Publication: June 2012

For complete details about IJEA archives, publications, abstracting/indexing, the editorial board and other important information, please refer to the IJEA homepage.

We look forward to receiving your valuable papers. If you have further questions, please do not hesitate to contact us at csc...@cs.... Our team is committed to providing quick and supportive service throughout the publication process.

A complete list of journals can be found at http://www.cscjournals.org/csc/bysubject.php

Sincerely,

J. Stewart
Computer Science Journals (CSC Journals)
B-5-8 Plaza Mont Kiara, Mont Kiara
50480, Kuala Lumpur, Malaysia
Tel: +603 6207 1607, +603 2782 6991
Fax: +603 6207 1697
Url: http://www.cscjournals.org |
From: Dan T. <dan...@gm...> - 2012-02-03 01:28:16
|
So in layman's terms I am using these three coefficients (http://imgur.com/oOddh) to derive the direction of the strongest light. Makes sense now. That is much easier than I expected; thank you for taking the time to explain it.

On Fri, Feb 3, 2012 at 10:21 AM, Peter-Pike Sloan <pet...@ho...> wrote:
> This is actually quite straightforward.
>
> The optimal linear direction is (-L[3], -L[1], L[2]) - assuming the usual sign conventions, storage, etc. - where in your case L could be the luminance of the RGB SH vectors.
>
> If you are using the coefficients from appendix 10, it turns out to be even simpler - it is just the .xyz coefficients after doing a luminance weighting of cAr/cAg/cAb.
>
> That gives you the direction.
>
> To generate the color of the light, you can simply evaluate the outgoing radiance in the above direction (as in the shader code in the appendix), and that is the color for the light (if illuminating a white material, that is the light in the given direction that would give you the same diffuse response as the SH probe).
>
> See the section of the paper titled "Extracting Conventional Lights from SH" for a technique to solve for both a colored directional light and an ambient light given SH coefficients; but if you are using the storage in appendix 10 it is a bit trickier, since the DC term has been "polluted" by part of the quadratic ZH function to make evaluation faster. You would have to reconstruct the original vector to do the math...
>
> Peter-Pike Sloan
> [snip - original question quoted below]
|
From: Peter-Pike S. <pet...@ho...> - 2012-02-03 00:21:59
|
This is actually quite straightforward.

The optimal linear direction is (-L[3], -L[1], L[2]) - assuming the usual sign conventions, storage, etc. - where in your case L could be the luminance of the RGB SH vectors.

If you are using the coefficients from appendix 10, it turns out to be even simpler - it is just the .xyz coefficients after doing a luminance weighting of cAr/cAg/cAb.

That gives you the direction.

To generate the color of the light, you can simply evaluate the outgoing radiance in the above direction (as in the shader code in the appendix), and that is the color for the light (if illuminating a white material, that is the light in the given direction that would give you the same diffuse response as the SH probe).

See the section of the paper titled "Extracting Conventional Lights from SH" for a technique to solve for both a colored directional light and an ambient light given SH coefficients; but if you are using the storage in appendix 10 it is a bit trickier, since the DC term has been "polluted" by part of the quadratic ZH function to make evaluation faster. You would have to reconstruct the original vector to do the math...

Peter-Pike Sloan

Date: Fri, 3 Feb 2012 09:22:21 +1000
From: dan...@gm...
To: gda...@li...
Subject: [Algorithms] Pick dominant light from sh coeffs

I have 9 red, 9 green and 9 blue SH coefficients (packed using the method in appendix 10 of http://www.ppsloan.org/publications/StupidSH36.pdf). I want to pick a single dominant light to use for specular. How would I go about efficiently extracting the direction and color of that light from the coefficients? Looks like I need to calculate the "optimal linear direction", which is supposedly in this paper: http://research.microsoft.com/en-us/um/people/johnsny/papers/ldprt.pdf; however, I can't see it. Worse still, if it is in there, it is probably an integral that I will struggle to turn into code!

Thanks

Dan |
From: David N. <da...@re...> - 2012-02-03 00:06:36
|
Looks like you want to normalize the 3D vector made from the basis coefficients (-f[1,1], -f[1,-1], f[1,0]). See section 3.3, "ZH Error Analysis", paragraph 2.

-= Dave

From: Dan Treble [mailto:dan...@gm...]
Sent: Thursday, February 02, 2012 3:22 PM
To: gda...@li...
Subject: [Algorithms] Pick dominant light from sh coeffs

I have 9 red, 9 green and 9 blue SH coefficients (packed using the method in appendix 10 of http://www.ppsloan.org/publications/StupidSH36.pdf). I want to pick a single dominant light to use for specular. How would I go about efficiently extracting the direction and color of that light from the coefficients? Looks like I need to calculate the "optimal linear direction", which is supposedly in this paper: http://research.microsoft.com/en-us/um/people/johnsny/papers/ldprt.pdf; however, I can't see it. Worse still, if it is in there, it is probably an integral that I will struggle to turn into code!

Thanks

Dan |
From: Dan T. <dan...@gm...> - 2012-02-02 23:22:28
|
I have 9 red, 9 green and 9 blue SH coefficients (packed using the method in appendix 10 of http://www.ppsloan.org/publications/StupidSH36.pdf). I want to pick a single dominant light to use for specular. How would I go about efficiently extracting the direction and color of that light from the coefficients?

Looks like I need to calculate the "optimal linear direction", which is supposedly in this paper: http://research.microsoft.com/en-us/um/people/johnsny/papers/ldprt.pdf; however, I can't see it. Worse still, if it is in there, it is probably an integral that I will struggle to turn into code!

Thanks

Dan |