gdalgorithms-list Mailing List for Game Dev Algorithms
From: Paul F. <pa...@wi...> - 2011-02-24 11:56:47

Hi guys,

I know this list is mainly console developers, but I think there are a few people doing web games on here, so I'll put this article out there in the hope that it's useful for someone :) It's about a different way to approach collision detection in an isometric 2D game, called "2D Isometric land – breaking out of the grid". I hope you like it: http://www.wildbunny.co.uk/blog/?p=63 :)

Cheers, Paul.

--
Paul Firth
http://www.wildbunny.co.uk
http://www.facebook.com/WildbunnyLtd
email: pa...@wi...
From: Diogo de A. <dio...@sp...> - 2011-02-17 10:47:14

Hi all!

I've finally had the time to implement and use all your suggestions, and if you're interested you can see some of the results here: http://shadowcovenant.com/blog/2011/02/10/ambient-occlusion-maps/

Most of the issues I solved by using barycentric coordinates, instead of using just gradients calculated through interpolation...

Thanks for all the help everyone!

Best regards,
Diogo

-----Original Message-----
From: Olivier Galibert [mailto:gal...@po...]
Sent: sexta-feira, 4 de Fevereiro de 2011 19:32
To: gda...@li...
Subject: Re: [Algorithms] Texel area

On Fri, Feb 04, 2011 at 10:50:08AM -0000, Diogo de Andrade wrote:
> So, what I want to do is, given
>
> - triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have
> some properties (world space position, normal, diffuse color, texture
> coordinates 0 (in uniform space), texture coordinates 1 (in texture
> space, [0..texture size[), etc),
>
> - texture size
>
> - position inside the triangle (absolute value in texture space)
>
> is to find out the rectangle that bounds the texel in world space
> (origin+2 vectors)...

If that's your real problem, the solution is reasonably simple and has nothing to do with the rasterization. You have three points with space/texture coordinates:

(x0, y0, z0, u0, v0)
(x1, y1, z1, u1, v1)
(x2, y2, z2, u2, v2)

Given (u, v), the position in texture space, you first want to find the barycentric coordinates (t1, t2):

| u0 + t1*(u1-u0) + t2*(u2-u0) = u
| v0 + t1*(v1-v0) + t2*(v2-v0) = v

That's a rather usual linear system. Noting the inverse determinant di:

di = 1/((v2-v0)*(u1-u0) - (u2-u0)*(v1-v0))

then:

| t1 = di*((v2-v0)*(u-u0) - (u2-u0)*(v-v0))
| t2 = di*((u1-u0)*(v-v0) - (v1-v0)*(u-u0))

From these, you can easily find the world coordinates:

| x = x0 + t1*(x1-x0) + t2*(x2-x0)
| y = y0 + t1*(y1-y0) + t2*(y2-y0)
| z = z0 + t1*(z1-z0) + t2*(z2-z0)

You'll notice that all these are linear systems, which means a texel rectangle's size is independent of its position. So you can get your vectors once by computing the position differences between (0, 0), (0.5, 0) and (0, 0.5), for instance. You'll notice the equations simplify nicely when computing deltas if you do the math.

I suspect it's not your real problem though, so feel free to correct it :-)

OG.
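
Worked into code, the construction above looks like the following minimal C++ sketch (the vector helpers and the example triangle are illustrative, not from the thread):

#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// World-space position of a texture-space point (u, v), via the
// barycentric solve described above. p[i] are world positions, uv[i]
// the matching texture coordinates (in texel units).
static Vec3 uvToWorld(const Vec3 p[3], const float uv[3][2], float u, float v)
{
    float du1 = uv[1][0] - uv[0][0], dv1 = uv[1][1] - uv[0][1];
    float du2 = uv[2][0] - uv[0][0], dv2 = uv[2][1] - uv[0][1];
    // Inverse determinant; a degenerate UV mapping (det == 0) is not
    // handled in this sketch.
    float di = 1.0f / (dv2 * du1 - du2 * dv1);
    float t1 = di * (dv2 * (u - uv[0][0]) - du2 * (v - uv[0][1]));
    float t2 = di * (du1 * (v - uv[0][1]) - dv1 * (u - uv[0][0]));
    return add(p[0], add(mul(sub(p[1], p[0]), t1), mul(sub(p[2], p[0]), t2)));
}

int main()
{
    Vec3  p[3]     = {{0, 0, 0}, {4, 0, 0}, {0, 4, 4}};   // world space
    float uv[3][2] = {{0, 0}, {8, 0}, {0, 8}};            // texel units

    // The mapping is linear, so a texel's edge vectors are the same
    // everywhere: take position deltas for a one-texel step in u and v.
    Vec3 o  = uvToWorld(p, uv, 0, 0);
    Vec3 eu = sub(uvToWorld(p, uv, 1, 0), o);
    Vec3 ev = sub(uvToWorld(p, uv, 0, 1), o);
    std::printf("eu = (%g %g %g)  ev = (%g %g %g)\n",
                eu.x, eu.y, eu.z, ev.x, ev.y, ev.z);
    return 0;
}

Every texel then bounds the world-space parallelogram (origin, eu, ev); only the origin changes from texel to texel, which is exactly why the rectangle can be computed once per triangle.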
From: Jason H. <jh...@st...> - 2011-02-07 19:05:27

Actually, all this talk about triangles being flat is somewhat confusing. :-) The only way a triangle really represents a sliver of a plane is if the vertex normals are all parallel. Otherwise, it's really an approximation to a curved surface, and you're inducing faceting in your samples by assuming they are planar. Shouldn't the correct mapping be concave or convex, according to the flex in the curvature of the triangle? Just saying...

JH

On 2/7/2011 12:15 PM, David Neubelt wrote:
> The gradient will be constant across the triangle. If it's constant in dx
> and it's constant in dy, then it's constant everywhere. If you know the
> gradient in the two directions then you know it everywhere (dx, dy, 0).
>
> From: Manuel Massing [mailto:m.m...@wa...]
> Sent: Monday, February 07, 2011 2:47 AM
> To: Game Development Algorithms
> Subject: Re: [Algorithms] Texel area
>
> Hi Diogo,
>
> > This seems counter-intuitive... I understand your reasoning, and I can't
> > find a mistake in your logic, but it seems to me that the size shouldn't
> > be constant (from an intuitive standpoint)...
>
> you are probably thinking about the usual texture mapping setup, where
> a projective mapping (from screen space to texture space) is performed.
> In this canonical case, the texture footprint is position dependent (you
> describe the mapping of projective planes by a homography, which is
> linear in homogeneous coordinates, but non-linear in texture/screen
> coordinates due to the perspective division).
>
> Your setup is just a linear transform of a 2D subspace (you want to
> transform the UV plane into the world space triangle plane), i.e. you
> seek a mapping of the UV-basis vectors to their respective world-space
> counterparts.
>
> cheers,
>
> Manuel
From: David N. <da...@re...> - 2011-02-07 18:27:43

The gradient will be constant across the triangle. If it's constant in dx and it's constant in dy, then it's constant everywhere. If you know the gradient in the two directions then you know it everywhere (dx, dy, 0).

From: Manuel Massing [mailto:m.m...@wa...]
Sent: Monday, February 07, 2011 2:47 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] Texel area

Hi Diogo,

> This seems counter-intuitive... I understand your reasoning, and I can't
> find a mistake in your logic, but it seems to me that the size shouldn't be
> constant (from an intuitive standpoint)...

you are probably thinking about the usual texture mapping setup, where a projective mapping (from screen space to texture space) is performed. In this canonical case, the texture footprint is position dependent (you describe the mapping of projective planes by a homography, which is linear in homogeneous coordinates, but non-linear in texture/screen coordinates due to the perspective division).

Your setup is just a linear transform of a 2D subspace (you want to transform the UV plane into the world space triangle plane), i.e. you seek a mapping of the UV-basis vectors to their respective world-space counterparts.

cheers,
Manuel
From: Olivier G. <gal...@po...> - 2011-02-07 10:55:54

Hi,

On Mon, Feb 07, 2011 at 10:04:23AM -0000, Diogo de Andrade wrote:
> I'm just unclear on a couple of things:
>
> > That's a rather usual linear system. Noting the inverse determinant
>
> How does this matter? Not questioning it, just trying to understand what
> follows from here...

It matters just because equation systems of the form:

| ax + by = c
| dx + ey = f

have a well known solution.

> > You'll notice that all these are linear systems, which means a texel
> > rectangle size is independent of its position.
>
> This seems counter-intuitive... I understand your reasoning, and I can't
> find a mistake in your logic, but it seems to me that the size shouldn't be
> constant (from an intuitive standpoint)...

Your intuition is thinking "projected on the screen" while your question was "in world space". In world space, camera position doesn't matter. Mapping the texture is equivalent to cutting a triangle in the texture with points on the (u, v) positions, and stretching it like it was made of rubber to fit the world-space triangle. The squares that are the texels stretch into parallelograms, but they all end up the same size.

OG.
From: Manuel M. <m.m...@wa...> - 2011-02-07 10:46:59

Hi Diogo,

> This seems counter-intuitive... I understand your reasoning, and I can't
> find a mistake in your logic, but it seems to me that the size shouldn't be
> constant (from an intuitive standpoint)...

you are probably thinking about the usual texture mapping setup, where a projective mapping (from screen space to texture space) is performed. In this canonical case, the texture footprint is position dependent (you describe the mapping of projective planes by a homography, which is linear in homogeneous coordinates, but non-linear in texture/screen coordinates due to the perspective division).

Your setup is just a linear transform of a 2D subspace (you want to transform the UV plane into the world space triangle plane), i.e. you seek a mapping of the UV-basis vectors to their respective world-space counterparts.

cheers,
Manuel
From: Diogo de A. <dio...@sp...> - 2011-02-07 10:21:37

Hi Olivier,

I was already having similar thoughts on how to solve this (use the barycentric coordinates to solve a linear system), so it seems to me that you got the gist of it; I'm going to try implementing it... I'm just unclear on a couple of things:

> That's a rather usual linear system. Noting the inverse determinant

How does this matter? Not questioning it, just trying to understand what follows from here...

> You'll notice that all these are linear systems, which means a texel
> rectangle size is independent of its position.

This seems counter-intuitive... I understand your reasoning, and I can't find a mistake in your logic, but it seems to me that the size shouldn't be constant (from an intuitive standpoint)...

Best regards,
Diogo

-----Original Message-----
From: Olivier Galibert [mailto:gal...@po...]
Sent: sexta-feira, 4 de Fevereiro de 2011 19:32
To: gda...@li...
Subject: Re: [Algorithms] Texel area

On Fri, Feb 04, 2011 at 10:50:08AM -0000, Diogo de Andrade wrote:
> So, what I want to do is, given
>
> - triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have
> some properties (world space position, normal, diffuse color, texture
> coordinates 0 (in uniform space), texture coordinates 1 (in texture
> space, [0..texture size[), etc),
>
> - texture size
>
> - position inside the triangle (absolute value in texture space)
>
> is to find out the rectangle that bounds the texel in world space
> (origin+2 vectors)...

If that's your real problem, the solution is reasonably simple and has nothing to do with the rasterization. You have three points with space/texture coordinates:

(x0, y0, z0, u0, v0)
(x1, y1, z1, u1, v1)
(x2, y2, z2, u2, v2)

Given (u, v), the position in texture space, you first want to find the barycentric coordinates (t1, t2):

| u0 + t1*(u1-u0) + t2*(u2-u0) = u
| v0 + t1*(v1-v0) + t2*(v2-v0) = v

That's a rather usual linear system. Noting the inverse determinant di:

di = 1/((v2-v0)*(u1-u0) - (u2-u0)*(v1-v0))

then:

| t1 = di*((v2-v0)*(u-u0) - (u2-u0)*(v-v0))
| t2 = di*((u1-u0)*(v-v0) - (v1-v0)*(u-u0))

From these, you can easily find the world coordinates:

| x = x0 + t1*(x1-x0) + t2*(x2-x0)
| y = y0 + t1*(y1-y0) + t2*(y2-y0)
| z = z0 + t1*(z1-z0) + t2*(z2-z0)

You'll notice that all these are linear systems, which means a texel rectangle's size is independent of its position. So you can get your vectors once by computing the position differences between (0, 0), (0.5, 0) and (0, 0.5), for instance. You'll notice the equations simplify nicely when computing deltas if you do the math.

I suspect it's not your real problem though, so feel free to correct it :-)

OG.
From: Manuel M. <m.m...@wa...> - 2011-02-04 20:03:18

Hi Diogo,

> - triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have some
> properties (world space position, normal, diffuse color, texture
> coordinates 0 (in uniform space), texture coordinates 1 (in texture space,
> [0..texture size[), etc),
>
> - texture size
>
> - position inside the triangle (absolute value in texture space)
>
> is to find out the rectangle that bounds the texel in world space (origin+2
> vectors)...

if I understand your setup correctly, you're dealing with a linear transform (from a plane in UV-space to a plane in world space), so the partial derivatives with respect to u and v are constant across the triangle.

If this is not too much setup overhead, you could proceed by finding the world space directions U and V corresponding to the u/v axes for the triangle, by solving for a 3x2 matrix with the world-space directions U and V as columns:

| matrix(U V) * (A-B) = A.worldpos - B.worldpos
| matrix(U V) * (B-C) = B.worldpos - C.worldpos

where A, B, C are the vertices' 2D u/v coordinates (in pixel units).

cheers,
Manuel
From: Olivier G. <gal...@po...> - 2011-02-04 19:51:26

On Fri, Feb 04, 2011 at 10:50:08AM -0000, Diogo de Andrade wrote:
> So, what I want to do is, given
>
> - triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have some
> properties (world space position, normal, diffuse color, texture coordinates
> 0 (in uniform space), texture coordinates 1 (in texture space, [0..texture
> size[), etc),
>
> - texture size
>
> - position inside the triangle (absolute value in texture space)
>
> is to find out the rectangle that bounds the texel in world space (origin+2
> vectors)...

If that's your real problem, the solution is reasonably simple and has nothing to do with the rasterization. You have three points with space/texture coordinates:

(x0, y0, z0, u0, v0)
(x1, y1, z1, u1, v1)
(x2, y2, z2, u2, v2)

Given (u, v), the position in texture space, you first want to find the barycentric coordinates (t1, t2):

| u0 + t1*(u1-u0) + t2*(u2-u0) = u
| v0 + t1*(v1-v0) + t2*(v2-v0) = v

That's a rather usual linear system. Noting the inverse determinant di:

di = 1/((v2-v0)*(u1-u0) - (u2-u0)*(v1-v0))

then:

| t1 = di*((v2-v0)*(u-u0) - (u2-u0)*(v-v0))
| t2 = di*((u1-u0)*(v-v0) - (v1-v0)*(u-u0))

From these, you can easily find the world coordinates:

| x = x0 + t1*(x1-x0) + t2*(x2-x0)
| y = y0 + t1*(y1-y0) + t2*(y2-y0)
| z = z0 + t1*(z1-z0) + t2*(z2-z0)

You'll notice that all these are linear systems, which means a texel rectangle's size is independent of its position. So you can get your vectors once by computing the position differences between (0, 0), (0.5, 0) and (0, 0.5), for instance. You'll notice the equations simplify nicely when computing deltas if you do the math.

I suspect it's not your real problem though, so feel free to correct it :-)

OG.
From: Jason H. <jh...@st...> - 2011-02-04 18:19:14

Maybe this is an oddball suggestion, but... perhaps you could go about sampling the triangle in one pass first, using jittered barycentric coordinates, creating a bucketful of samples without regard to pixel stepping. Then, in a second pass, step through the triangle and find the nearest several samples, and interpolate them with respect to their barycentric coordinates to get the value at the pixel center.

This would give you some flexibility in sampling density (not requiring a sample per pixel), for faster runs with fewer samples, or higher quality runs with more samples and more added to the interpolation.

Then again, it could just be horribly slow. And certainly I haven't helped address your questions. ;-)

JH

On 2/4/2011 7:12 AM, Diogo de Andrade wrote:
> Hi Sebastian,
>
> Well, no, my rasterizer (which might be a piece of trash...) is a
> scanline rasterizer...
>
> So, for the following triangle:
>
> A
> x
> xx
> xxx
> xxxx
> xxxxx
> xxxxxx
> xxxxxxx
> B      C
>
> I know the left gradient, and the right gradient... Problem is that
> the position gradient of AC (for example) is something like
> scalar*(sqrt(2)/2, sqrt(2)/2) and not the "down" scalar*(0,1) gradient
> I'd like... The horizontal gradient works all right, since I'm
> interpolating between the spans...
>
> As far as I understand the problem, the issue is that for every pixel,
> I know how to reach the next horizontal lumel (which is one of the
> things I need), and on the edges, I know how to go to the next lumel
> on the edge (which most of the time doesn't match the actual "down"
> position I need).
>
> For communication purposes, I'm calling the "down" position the
> world-relative vector that takes me from the position of the current
> lumel to the one that's right below it in UV space... The "next
> horizontal lumel" is the one that's directly to the right of the
> current one in UV space.
>
> Don't know if this was clear enough, I'm starting to have a hard time
> getting my head around the problem (maybe too close to it by now)...
>
> Best regards,
> Diogo
>
> From: Sebastian Sylvan [mailto:seb...@gm...]
> Sent: sexta-feira, 4 de Fevereiro de 2011 11:48
> To: Game Development Algorithms
> Subject: Re: [Algorithms] Texel area
>
> On Fri, Feb 4, 2011 at 10:50 AM, Diogo de Andrade <dio...@sp...> wrote:
>
> > Hi all!
> >
> > I've been struggling with a problem and I hope somebody can point me
> > in the right direction...
> >
> > I'm making a lightmap/ambient occlusion map generator system, and for
> > that I built a rasterizer that calls a function that does the
> > computations and fills the target image. The rasterizer works in UV
> > space, but all parameters of the triangle are interpolated.
> >
> > Now, I wanted to add multisampling, so that I don't get just a single
> > sample for the ambient occlusion, and I want to "jitter" the source
> > point (not only the raycast direction), and for that I need to find
> > out what's the area of the target lumel. By area, I mean not only the
> > actual area value, but the "rectangle" that bounds the lumel, in
> > world space...
> >
> > So, what I want to do is, given
> >
> > - triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have
> > some properties (world space position, normal, diffuse color, texture
> > coordinates 0 (in uniform space), texture coordinates 1 (in texture
> > space, [0..texture size[), etc),
> >
> > - texture size
> >
> > - position inside the triangle (absolute value in texture space)
> >
> > is to find out the rectangle that bounds the texel in world space
> > (origin+2 vectors)...
> >
> > I've been thinking about this, but can't seem to find an approach
> > that works correctly... I'm expecting some error in this calculation
> > (due to the rasterizing process itself), but it's not a big deal
> > since it's for sampling purposes, but even so I can't seem to make
> > it work...
> >
> > So, if anyone got any links, keywords, or just a simple algorithm he
> > doesn't mind sharing, I'd appreciate it!
>
> The interpolator in the rasterizer should already know the gradient
> for the position w.r.t. the pixel (i.e. lumel, in this case) position,
> right? So once you know the center position for a given pixel (lumel),
> it should be easy to compute the world space positions of the corners
> of that pixel (just take the position gradient multiplied by +/- 0.5
> pixels and add it to the center position), which gives you the
> quadrilateral in world space.
>
> --
> Sebastian Sylvan
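
For the first pass Jason sketches, uniformly distributed jittered samples in barycentric form can be generated with the standard square-folding trick; a minimal C++ sketch (the struct and function names are illustrative, not from the thread):

#include <cstdlib>
#include <vector>

struct BarySample { float t1, t2; };   // V1's weight is 1 - t1 - t2

// Uniformly sample a triangle: draw a random point in the unit square
// and fold the upper-right half back across the diagonal, so the samples
// stay uniformly distributed over the lower-left triangle.
static std::vector<BarySample> jitteredTriangleSamples(int count)
{
    std::vector<BarySample> samples;
    samples.reserve(count);
    for (int i = 0; i < count; ++i) {
        float r1 = std::rand() / (float)RAND_MAX;
        float r2 = std::rand() / (float)RAND_MAX;
        if (r1 + r2 > 1.0f) { r1 = 1.0f - r1; r2 = 1.0f - r2; }
        samples.push_back({r1, r2});
    }
    return samples;
}

Each sample's attributes (position, normal, occlusion value) then follow from attr = a0 + t1*(a1 - a0) + t2*(a2 - a0), so the bucket of samples never touches the rasterizer's per-pixel gradients.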
From: Diogo de A. <dio...@sp...> - 2011-02-04 13:30:37

Hi Sebastian,

Well, no, my rasterizer (which might be a piece of trash...) is a scanline rasterizer. So, for the following triangle:

A
x
xx
xxx
xxxx
xxxxx
xxxxxx
xxxxxxx
B      C

I know the left gradient, and the right gradient. Problem is that the position gradient of AC (for example) is something like scalar*(sqrt(2)/2, sqrt(2)/2) and not the "down" scalar*(0,1) gradient I'd like. The horizontal gradient works all right, since I'm interpolating between the spans.

As far as I understand the problem, the issue is that for every pixel, I know how to reach the next horizontal lumel (which is one of the things I need), and on the edges, I know how to go to the next lumel on the edge (which most of the time doesn't match the actual "down" position I need).

For communication purposes, I'm calling the "down" position the world-relative vector that takes me from the position of the current lumel to the one that's right below it in UV space. The "next horizontal lumel" is the one that's directly to the right of the current one in UV space.

Don't know if this was clear enough, I'm starting to have a hard time getting my head around the problem (maybe too close to it by now).

Best regards,
Diogo

From: Sebastian Sylvan [mailto:seb...@gm...]
Sent: sexta-feira, 4 de Fevereiro de 2011 11:48
To: Game Development Algorithms
Subject: Re: [Algorithms] Texel area

On Fri, Feb 4, 2011 at 10:50 AM, Diogo de Andrade <dio...@sp...> wrote:

Hi all!

I've been struggling with a problem and I hope somebody can point me in the right direction...

I'm making a lightmap/ambient occlusion map generator system, and for that I built a rasterizer that calls a function that does the computations and fills the target image. The rasterizer works in UV space, but all parameters of the triangle are interpolated.

Now, I wanted to add multisampling, so that I don't get just a single sample for the ambient occlusion, and I want to "jitter" the source point (not only the raycast direction), and for that I need to find out what's the area of the target lumel. By area, I mean not only the actual area value, but the "rectangle" that bounds the lumel, in world space...

So, what I want to do is, given

- triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have some properties (world space position, normal, diffuse color, texture coordinates 0 (in uniform space), texture coordinates 1 (in texture space, [0..texture size[), etc),

- texture size

- position inside the triangle (absolute value in texture space)

is to find out the rectangle that bounds the texel in world space (origin+2 vectors)...

I've been thinking about this, but can't seem to find an approach that works correctly... I'm expecting some error in this calculation (due to the rasterizing process itself), but it's not a big deal since it's for sampling purposes, but even so I can't seem to make it work...

So, if anyone got any links, keywords, or just a simple algorithm he doesn't mind sharing, I'd appreciate it!

The interpolator in the rasterizer should already know the gradient for the position w.r.t. the pixel (i.e. lumel, in this case) position, right? So once you know the center position for a given pixel (lumel), it should be easy to compute the world space positions of the corners of that pixel (just take the position gradient multiplied by +/- 0.5 pixels and add it to the center position), which gives you the quadrilateral in world space.

--
Sebastian Sylvan
From: Sebastian S. <seb...@gm...> - 2011-02-04 11:47:58

On Fri, Feb 4, 2011 at 10:50 AM, Diogo de Andrade <dio...@sp...> wrote:
> Hi all!
>
> I've been struggling with a problem and I hope somebody can point me in the
> right direction...
>
> I'm making a lightmap/ambient occlusion map generator system, and for that
> I built a rasterizer that calls a function that does the computations and
> fills the target image. The rasterizer works in UV space, but all
> parameters of the triangle are interpolated.
>
> Now, I wanted to add multisampling, so that I don't get just a single
> sample for the ambient occlusion, and I want to "jitter" the source point
> (not only the raycast direction), and for that I need to find out what's
> the area of the target lumel. By area, I mean not only the actual area
> value, but the "rectangle" that bounds the lumel, in world space...
>
> So, what I want to do is, given
>
> - triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have some
> properties (world space position, normal, diffuse color, texture coordinates
> 0 (in uniform space), texture coordinates 1 (in texture space, [0..texture
> size[), etc),
>
> - texture size
>
> - position inside the triangle (absolute value in texture space)
>
> is to find out the rectangle that bounds the texel in world space (origin+2
> vectors)...
>
> I've been thinking about this, but can't seem to find an approach that
> works correctly... I'm expecting some error in this calculation (due to the
> rasterizing process itself), but it's not a big deal since it's for sampling
> purposes, but even so I can't seem to make it work...
>
> So, if anyone got any links, keywords, or just a simple algorithm he
> doesn't mind sharing, I'd appreciate it!

The interpolator in the rasterizer should already know the gradient for the position w.r.t. the pixel (i.e. lumel, in this case) position, right? So once you know the center position for a given pixel (lumel), it should be easy to compute the world space positions of the corners of that pixel (just take the position gradient multiplied by +/- 0.5 pixels and add it to the center position), which gives you the quadrilateral in world space.

--
Sebastian Sylvan
From: Diogo de A. <dio...@sp...> - 2011-02-04 11:21:12

Hi all!

I've been struggling with a problem and I hope somebody can point me in the right direction...

I'm making a lightmap/ambient occlusion map generator system, and for that I built a rasterizer that calls a function that does the computations and fills the target image. The rasterizer works in UV space, but all parameters of the triangle are interpolated.

Now, I wanted to add multisampling, so that I don't get just a single sample for the ambient occlusion, and I want to "jitter" the source point (not only the raycast direction), and for that I need to find out what's the area of the target lumel. By area, I mean not only the actual area value, but the "rectangle" that bounds the lumel, in world space...

So, what I want to do is, given

- triangle T=(V1,V2,V3), in which V1, V2, V3 are vertexes that have some properties (world space position, normal, diffuse color, texture coordinates 0 (in uniform space), texture coordinates 1 (in texture space, [0..texture size[), etc),

- texture size

- position inside the triangle (absolute value in texture space)

is to find out the rectangle that bounds the texel in world space (origin+2 vectors)...

I've been thinking about this, but can't seem to find an approach that works correctly... I'm expecting some error in this calculation (due to the rasterizing process itself), but it's not a big deal since it's for sampling purposes, but even so I can't seem to make it work...

So, if anyone got any links, keywords, or just a simple algorithm he doesn't mind sharing, I'd appreciate it!

Best regards,
Diogo de Andrade
From: luke h. <hut...@gm...> - 2011-01-14 15:14:17

Hi,

Isn't the main problem the fact that each channel is filtered separately, rather than the precision used for the filtering?

float extractValue(float2 pos)
{
    float4 temp = tex2D(buffer, pos);
    return (temp.x * 16711680.0 + temp.y * 65280.0 + temp.z * 255.0) * (1.0 / 16777215.0);
}

This looks very much like reconstructing a depth value, but the same issue applies to any 24-bit value stored in 3, 8-bit components.

Say the filtering was applied just between two texels, (0x00,0xff,0xff) and (0x01,0x00,0x00). When interpreted as 24-bit values, there is only a difference of 1 between them. But a 50/50 blend between the two will give something like (0x00,0x75,0x75) depending on the rounding; nowhere near the two 24-bit values.

- Luke
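
Luke's example is easy to reproduce on the CPU; a small C++ sketch of what per-channel filtering does to two packed 24-bit values (the pack/unpack helpers are illustrative):

#include <cstdio>

// A 24-bit value split across three 8-bit channels, as in the shader above.
struct RGB8 { unsigned char r, g, b; };

static unsigned unpack(RGB8 c) { return (c.r << 16) | (c.g << 8) | c.b; }

int main()
{
    RGB8 a = {0x00, 0xff, 0xff};   // packs the 24-bit value 65535
    RGB8 b = {0x01, 0x00, 0x00};   // packs 65536, only 1 greater

    // What the texture filter does: blend each 8-bit channel separately.
    RGB8 filtered = { (unsigned char)((a.r + b.r) / 2),
                      (unsigned char)((a.g + b.g) / 2),
                      (unsigned char)((a.b + b.b) / 2) };

    unsigned wanted = (unpack(a) + unpack(b)) / 2;   // the blend we meant
    std::printf("per-channel: %u  true 24-bit: %u\n",
                unpack(filtered), wanted);
    // Prints roughly 32639 vs 65535: nowhere near each other, exactly
    // as described above.
    return 0;
}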
From: Fabian G. <ry...@gm...> - 2010-11-22 19:34:27

On 22.11.2010 02:59, Richard Fabian wrote:
> Although theoretically impossible, it might be practically possible.
> Fabian and Niklas are right in that you cannot hash to similar values,
> and Niklas even has a good idea on how to circumvent the problem, the
> only issue is, you need 2^D where D is the dimensionality of your data
> set. This is probably why Niklas mentions using two hashes.
>
> for a 3D solution (which I think is what you're after), then
> quantisation is key. You could quantise to e (where e is your largest
> snap), and then inside that space, hash the 8 closest different
> positions around your point.

That's the algorithm I described earlier :) You still have a 2^D factor in there though, which is impractical for high dimensions (as are most conventional spatial partitioning methods).

A different construction based on hashing is Locality Sensitive Hashing (LSH), which solves this problem approximately (to a given error tolerance) and is significantly faster in practice for data with high dimensionality.

-Fabian
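
For readers chasing the LSH pointer, a minimal sketch of one common family (random-projection, p-stable hashing); the parameter choices are illustrative, not from the thread:

#include <cmath>
#include <cstdlib>
#include <vector>

// One locality-sensitive hash over R^d: project onto a random direction,
// add a random offset, and quantize with bucket width w. Nearby points
// collide with high probability; in practice several such hashes are
// concatenated and several tables queried to trade precision for recall.
struct LshHash {
    std::vector<float> a;   // random projection direction
    float b;                // random offset in [0, w)
    float w;                // bucket width, on the order of the query radius

    LshHash(int dim, float width) : a(dim), b(0), w(width) {
        for (int i = 0; i < dim; ++i) a[i] = gaussian();
        b = w * (std::rand() / (float)RAND_MAX);
    }

    long bucket(const std::vector<float>& x) const {
        float dot = 0.0f;
        for (size_t i = 0; i < a.size(); ++i) dot += a[i] * x[i];
        return (long)std::floor((dot + b) / w);
    }

    static float gaussian() {   // Box-Muller, standard normal
        float u1 = (std::rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
        float u2 = (std::rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
        return std::sqrt(-2.0f * std::log(u1)) * std::cos(6.28318530f * u2);
    }
};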
From: Richard F. <ra...@gm...> - 2010-11-22 10:59:49

Although theoretically impossible, it might be practically possible. Fabian and Niklas are right in that you cannot hash to similar values, and Niklas even has a good idea on how to circumvent the problem; the only issue is, you need 2^D hashes, where D is the dimensionality of your data set. This is probably why Niklas mentions using two hashes.

For a 3D solution (which I think is what you're after), quantisation is key. You could quantise to e (where e is your largest snap), and then inside that space, hash the 8 closest different positions around your point.

Assuming an e of 1:

v(5.2, 1.3, 2.2) -> q(5,1,2) & d(.2,.3,.2)

From this, produce a set of 8 ([2,2,2]) hashes from the floor and ceiling values, but store the hashes based on the evenness of the quantised axes. E.g. the hash for (5,1,2) goes into the [1,1,0] element of the 2x2x2 array: because 5 and 1 are odd, they go in the second element of their axis. Other hashes: (5,1,3) -> [1,1,1], (6,1,2) -> [0,1,0], etc... This means that the hashes for 5-6, 1-2, 2-3 are in the 2x2x2 array.

With this done, if any of the 8 parts of the array match you have a candidate within range (an exact distance test confirms it); if none match, the points are definitely more than e apart. E.g. testing against v(6.1, 1.3, 1.8) (hashes 6-7, 1-2, 1-2 in its 2x2x2 array), the hashes at [0,1,0] and [0,0,0] will match, as they will be the hashes for (6,1,2) and (6,2,2).

On 19 November 2010 23:18, Mat Noguchi <mat...@bu...> wrote:
> On a related note, is there a way to hash two points such that they map to
> the same value if they are within specified distance d of each other?
> Obviously you can hash down to their positions quantized to a cell of d, but
> if the two points happen to be in two different cells but less than d apart
> they hash to different values.
>
> MSN
>
> From: Jeff Russell [mailto:je...@8m...]
> Sent: Friday, November 19, 2010 2:43 PM
> To: Game Development Algorithms
> Subject: Re: [Algorithms] Acceleration Structures For Vertex Snapping
>
> It's not *really* O(1), since you can have an arbitrary number of vertices
> in a single hash cell (worst case would be the entire mesh). Only perfect
> hashing would allow for O(1) worst case behavior. It's really O(n) worst
> case, but since the n is divided by such a large constant in practice it is
> still a very fast approach.
>
> On Fri, Nov 19, 2010 at 4:31 PM, Fabian Giesen <ry...@gm...> wrote:
> > On 11/19/2010 2:15 PM, Oscar Forth wrote:
> > > O(1) implies hash lookup is direct array index ... surely best case
> > > lookup is O(n * log( n ))? Or care to explain where I am going wrong on
> > > my thinking? Ta
> >
> > Vertex snapping: Take your snapping distance d.
> > Imagine an infinite uniform grid with cell edge length e >= 2d.
> > Every vertex within d units of your query vertex must be either in the
> > same cell or in one of the directly adjacent cells (that's 2^n cells to
> > check where n is your dimension).
> > Hash on grid cell coordinates (x,y,z) to turn the theoretical infinite
> > grid into a very practical hash-table :)
> >
> > For vertex welding, the weld operation ensures that the number of points
> > in each infinite grid cell is <= some constant C(n,e/d). Size the hash
> > table according to the number of vertices and the expected number of
> > items in a cell is also a small constant. That makes the whole thing
> > expected O(1).
> >
> > -Fabian

--
fabs();
Just because the world is full of people that think just like you, doesn't mean the other ones can't be right.
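
A compact C++ sketch of the corner-set test Richard describes (leaving out the parity-indexed storage, which is an optimization for finding the matching slot directly); the helper names are illustrative:

#include <cmath>
#include <cstdio>

// For a point p and snap distance e, collect the 8 corner cells around
// p/e (floor and floor+1 on each axis). Two points within e of each
// other on every axis always share at least one corner; sharing one is
// necessary but not sufficient, so confirm with an exact distance test.
struct Corners { int c[8][3]; };

static Corners corners(const float p[3], float e)
{
    Corners out;
    for (int axis = 0; axis < 3; ++axis) {
        int base = (int)std::floor(p[axis] / e);
        for (int i = 0; i < 8; ++i)
            out.c[i][axis] = base + ((i >> axis) & 1);
    }
    return out;
}

static bool snapCandidates(const float a[3], const float b[3], float e)
{
    Corners ca = corners(a, e), cb = corners(b, e);
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 8; ++j)
            if (ca.c[i][0] == cb.c[j][0] && ca.c[i][1] == cb.c[j][1] &&
                ca.c[i][2] == cb.c[j][2])
                return true;
    return false;   // definitely farther than e apart on some axis
}

int main()
{
    // The example pair from the post above.
    float a[3] = {5.2f, 1.3f, 2.2f}, b[3] = {6.1f, 1.3f, 1.8f};
    std::printf("%s\n", snapCandidates(a, b, 1.0f) ? "candidate pair" : "no match");
    return 0;
}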
From: Vilya H. <vil...@gm...> - 2010-11-20 16:11:09

This is what I currently do. It is nice and fast, but breaks down when you've got vertices clustered tightly together in screen space. You may have multiple vertices projecting to the same pixel, but you can only store one ID. If that's not a problem for your app, it's a great choice.

It would be interesting to try this approach in conjunction with an A-buffer. Getting a list of the vertices at each pixel is exactly what you want here.

Vil

On 20 November 2010 12:56, Ola Olsson <ola...@gm...> wrote:
> Just thought I'd point out that there is, most probably, a device already
> in your target system that is pretty darn good at this sort of thing. That
> is, the GPU; chances are your vertex data is already uploaded as well. So
> you can draw the vertices as points with IDs as colors, then read back a
> little region around the point of interest and see what's there. If needed
> you can always read back Z as well to check which is closest.
>
> Actually, for snapping you'd get by fine with just a chunk of Z buffer,
> wouldn't you? Reconstruct the position from the Z buffer and away you go,
> right? (may have missed something here... Maybe not precise enough?)
>
> The render only needs doing whenever the viewpoint has changed, and reading
> back a handful of pixels over PCI Express is not too slow, though I dunno
> what the latencies actually are.
>
> Cheers.
> .ola
>
> ----- Original Message -----
> From: Chris Green <cg...@va...>
> To: Game Development Algorithms <gda...@li...>
> Sent: Friday, November 19, 2010 6:01 PM
> Subject: Re: [Algorithms] Acceleration Structures For Vertex Snapping
>
> If you are optimizing for development time, and you only do this once in
> response to a user action (meaning you don't need to be any faster than
> 18ms or so), don't discount a brute force sequential scan.
>
> If your point data is stored in a memory/SIMD friendly format, you can
> perform a ridiculous number of point comparison tests per second on a
> modern multicore processor. If you have easy threading primitives in your
> system, it's a trivial operation to thread as well.
>
> If you are going to use an auxiliary data structure to accelerate this
> operation, you'll have to keep it incrementally updated, since if you built
> the data structure every time you snapped, you'd have to examine all the
> points anyway to build it.
>
> If you do have such a large number of points, or need to do this operation
> for many points per mouse event, you want to optimize it for cache layout
> and access pattern - it takes much longer to fetch the point position from
> DRAM than it does to perform the distance^2 calculation, so you don't want
> to swallow up all your memory bandwidth with pointers. For example, if
> using a kd-tree, you don't want your leaf nodes to be individual points;
> you want them to be cache-aligned SIMD arrays of points.
>
> From: Alexander Shafranov [mailto:sha...@gm...]
> Sent: Friday, November 19, 2010 5:46 AM
> To: Game Development Algorithms
> Subject: [Algorithms] Acceleration Structures For Vertex Snapping
>
> Hi guys,
>
> I'm working on the vertex snapping feature for our level editor. So, in
> vertex snapping mode all translates are snapped to the nearest vertex on
> screen. Brute force approach would be something like --
>
> - project all vertices to screen-space
> - find closest vertex to object's projected pivot
>
> I know that nearest point query is an O(log N) problem with a kd-tree.
> But building a kd-tree for mesh vertices doesn't help here, because the
> query is 'nearest point in screen-space'. A kd-tree can be built for a set
> of projected (2D) vertices; it doesn't sound great though, since the tree
> has to be rebuilt on any camera move.
>
> What would you recommend? Or maybe I'm overengineering here and something
> more brute force will work good enough.
>
> Cheers,
> Alex.
From: Ola O. <ola...@gm...> - 2010-11-20 12:56:59

Just thought I'd point out that there is, most probably, a device already in your target system that is pretty darn good at this sort of thing. That is, the GPU; chances are your vertex data is already uploaded as well. So you can draw the vertices as points with IDs as colors, then read back a little region around the point of interest and see what's there. If needed you can always read back Z as well to check which is closest.

Actually, for snapping you'd get by fine with just a chunk of Z buffer, wouldn't you? Reconstruct the position from the Z buffer and away you go, right? (may have missed something here... Maybe not precise enough?)

The render only needs doing whenever the viewpoint has changed, and reading back a handful of pixels over PCI Express is not too slow, though I dunno what the latencies actually are.

Cheers.
.ola

----- Original Message -----
From: Chris Green <cg...@va...>
To: Game Development Algorithms <gda...@li...>
Sent: Friday, November 19, 2010 6:01 PM
Subject: Re: [Algorithms] Acceleration Structures For Vertex Snapping

If you are optimizing for development time, and you only do this once in response to a user action (meaning you don't need to be any faster than 18ms or so), don't discount a brute force sequential scan.

If your point data is stored in a memory/SIMD friendly format, you can perform a ridiculous number of point comparison tests per second on a modern multicore processor. If you have easy threading primitives in your system, it's a trivial operation to thread as well.

If you are going to use an auxiliary data structure to accelerate this operation, you'll have to keep it incrementally updated, since if you built the data structure every time you snapped, you'd have to examine all the points anyway to build it.

If you do have such a large number of points, or need to do this operation for many points per mouse event, you want to optimize it for cache layout and access pattern - it takes much longer to fetch the point position from DRAM than it does to perform the distance^2 calculation, so you don't want to swallow up all your memory bandwidth with pointers. For example, if using a kd-tree, you don't want your leaf nodes to be individual points; you want them to be cache-aligned SIMD arrays of points.

From: Alexander Shafranov [mailto:sha...@gm...]
Sent: Friday, November 19, 2010 5:46 AM
To: Game Development Algorithms
Subject: [Algorithms] Acceleration Structures For Vertex Snapping

Hi guys,

I'm working on the vertex snapping feature for our level editor. So, in vertex snapping mode all translates are snapped to the nearest vertex on screen. Brute force approach would be something like --

- project all vertices to screen-space
- find closest vertex to object's projected pivot

I know that nearest point query is an O(log N) problem with a kd-tree. But building a kd-tree for mesh vertices doesn't help here, because the query is 'nearest point in screen-space'. A kd-tree can be built for a set of projected (2D) vertices; it doesn't sound great though, since the tree has to be rebuilt on any camera move.

What would you recommend? Or maybe I'm overengineering here and something more brute force will work good enough.

Cheers,
Alex.
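
A minimal sketch of the brute-force scan Chris recommends: one flat pass over a packed position array with the projection inlined (the row-major matrix layout and NDC mouse coordinates are assumptions of this sketch):

#include <cmath>
#include <cstddef>

// Nearest vertex to the mouse in screen space, by linear scan.
// xyz holds packed (x, y, z) triples; m is a row-major 4x4 view-projection
// matrix; mouseX/mouseY are in normalized device coordinates.
static ptrdiff_t closestVertexOnScreen(const float* xyz, size_t count,
                                       const float m[16],
                                       float mouseX, float mouseY)
{
    ptrdiff_t best = -1;
    float bestD2 = INFINITY;
    for (size_t i = 0; i < count; ++i) {
        const float* p = xyz + 3 * i;
        float cx = m[0]*p[0]  + m[1]*p[1]  + m[2]*p[2]  + m[3];
        float cy = m[4]*p[0]  + m[5]*p[1]  + m[6]*p[2]  + m[7];
        float cw = m[12]*p[0] + m[13]*p[1] + m[14]*p[2] + m[15];
        if (cw <= 0.0f) continue;                  // behind the camera
        float dx = cx / cw - mouseX, dy = cy / cw - mouseY;
        float d2 = dx * dx + dy * dy;
        if (d2 < bestD2) { bestD2 = d2; best = (ptrdiff_t)i; }
    }
    return best;   // -1 if every vertex was behind the camera
}

The loop is branch-light and streams memory linearly, so it vectorizes and threads well, which is the point of Chris's suggestion.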
From: Jeff R. <je...@8m...> - 2010-11-20 00:39:18

Christer is right (as usual) about the weld operation; it would allow you to get expected O(1). In my posts I was talking about the snapping question the OP had, so maybe we were talking about different things, Fabian.

Jeff

On Fri, Nov 19, 2010 at 6:27 PM, <chr...@pl...> wrote:
> Fabian Giesen wrote:
> > On 11/19/2010 2:15 PM, Oscar Forth wrote:
> > > O(1) implies hash lookup is direct array index ... surely best case
> > > lookup is O(n * log( n ))? Or care to explain where I am going wrong on
> > > my thinking? Ta
> >
> > Vertex snapping: Take your snapping distance d.
> > Imagine an infinite uniform grid with cell edge length e >= 2d.
> > Every vertex within d units of your query vertex must be either in the
> > same cell or in one of the directly adjacent cells (that's 2^n cells to
> > check where n is your dimension).
> > Hash on grid cell coordinates (x,y,z) to turn the theoretical infinite
> > grid into a very practical hash-table :)
> >
> > For vertex welding, the weld operation ensures that the number of points
> > in each infinite grid cell is <= some constant C(n,e/d). Size the hash
> > table according to the number of vertices and the expected number of
> > items in a cell is also a small constant. That makes the whole thing
> > expected O(1).
>
> Exactly. I've described this in various internet posts over the
> years, and also describe it in detail in Section 12.1 of RTCD,
> along with code in case anyone needs further elaboration.
>
> For vertex welding/snapping within a threshold this approach
> cannot be beat.
>
> Christer Ericson, Director of Tools and Technology
> Sony Computer Entertainment, Santa Monica

--
Jeff Russell
Engineer, 8monkey Labs
www.8monkeylabs.com
From: <chr...@pl...> - 2010-11-20 00:27:38

Fabian Giesen wrote:
> On 11/19/2010 2:15 PM, Oscar Forth wrote:
> > O(1) implies hash lookup is direct array index ... surely best case
> > lookup is O(n * log( n ))? Or care to explain where I am going wrong on
> > my thinking? Ta
>
> Vertex snapping: Take your snapping distance d.
> Imagine an infinite uniform grid with cell edge length e >= 2d.
> Every vertex within d units of your query vertex must be either in the
> same cell or in one of the directly adjacent cells (that's 2^n cells to
> check where n is your dimension).
> Hash on grid cell coordinates (x,y,z) to turn the theoretical infinite
> grid into a very practical hash-table :)
>
> For vertex welding, the weld operation ensures that the number of points
> in each infinite grid cell is <= some constant C(n,e/d). Size the hash
> table according to the number of vertices and the expected number of
> items in a cell is also a small constant. That makes the whole thing
> expected O(1).

Exactly. I've described this in various internet posts over the years, and also describe it in detail in Section 12.1 of RTCD, along with code in case anyone needs further elaboration.

For vertex welding/snapping within a threshold this approach cannot be beat.

Christer Ericson, Director of Tools and Technology
Sony Computer Entertainment, Santa Monica
From: Jeff R. <je...@8m...> - 2010-11-20 00:00:27

Okay, worst case is different from average case, granted. But since the cell size in this case is fixed (related to snap distance), the number of mesh vertices that cell will contain varies with n. An example: if your mesh was evenly tessellated in space, the number of vertices in a single cell would be n / k, where k is the number of cells your mesh intersects. That makes finding the closest vertex an "n / k" operation, which is definitely not constant time. That is what I meant when I said it was O(n). It would only be O(1) average case if we had a "vertices per cell" limit we could somehow enforce, but we don't.

> No, that's not true. Hash tables are fast in practice because they do a
> small number of reasonably fast operations in the average case, not
> because they do a large number of operations in a magically ultra-fast
> fashion (which would be the O(N) with tiny constant that you describe).
> There's no magic, an O(N) algorithm does not necessarily do N operations.

Hash tables are fast because they run in O(n / k) time, where k is comparable to or larger than n. If you make k a function of n, as is often done, then it becomes constant time average case.

--
Jeff Russell
Engineer, 8monkey Labs
www.8monkeylabs.com
From: Sebastian S. <seb...@gm...> - 2010-11-20 00:00:09

On Fri, Nov 19, 2010 at 11:28 PM, Fabian Giesen <ry...@gm...> wrote:
> On 11/19/2010 2:43 PM, Jeff Russell wrote:
> > It's not *really* O(1), since you can have an arbitrary number of
> > vertices in a single hash cell (worst case would be the entire mesh).
> > Only perfect hashing would allow for O(1) worst case behavior.
>
> What do you mean by "not *really* O(1)"? Big O notation just means we're
> talking about asymptotic complexity, it doesn't have to be worst-case
> (hence "expected O(1)").

O(1) only means that "something" is bounded by some constant as n goes to infinity, so you're both right, but you're talking about different "somethings". It's a good idea to specify what that "something" is, e.g. "average case time", "worst case time", "best case time", or "amortized time".

It's a fuzzy convention, I guess, but generally people probably assume you're talking about worst case running times if you aren't explicit about meaning one of the others (probably because upper bounds are somewhat more interesting for the worst cases than for the others, where Big Theta or Big Omega are more used). Or maybe that's just me. I tend to slap a "more or less" on there to cover my bases :-).

--
Sebastian Sylvan
From: Niklas F. <ni...@fr...> - 2010-11-19 23:56:18

On Sat, Nov 20, 2010 at 12:18 AM, Mat Noguchi <mat...@bu...> wrote:
> On a related note, is there a way to hash two points such that they map to
> the same value if they are within specified distance d of each other?
> Obviously you can hash down to their positions quantized to a cell of d, but
> if the two points happen to be in two different cells but less than d apart
> they hash to different values.

No. If any points within distance e should hash to the same value, then p and p + e should hash to the same value. But p + e and p + 2e should also hash to the same value. By extension, p and p + ne should hash to the same value. You can only do that if all points hash to the same value.

But what you can do (that sometimes is interesting) is find two hash functions h & g such that if |p - q| < e then either h(p) == h(q) or g(p) == g(q). For 1-dimensional floats you could for instance use:

h(x) = floor(x / 2e)
g(x) = floor(x / 2e + 0.5)

// Niklas
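
Niklas's 1D construction can be sanity-checked directly; a quick C++ sketch (the snap distance and test spacing are arbitrary values):

#include <cassert>
#include <cmath>

// Two staggered quantizations of width 2e. Points closer than e cannot
// cross both an integer and a half-integer boundary of that grid, so at
// least one of the two hashes must agree.
static long h(double x, double e) { return (long)std::floor(x / (2 * e)); }
static long g(double x, double e) { return (long)std::floor(x / (2 * e) + 0.5); }

int main()
{
    const double e = 0.25;
    for (double x = -10.0; x < 10.0; x += 0.013) {
        double y = x + 0.99 * e;   // a point just under e away
        assert(h(x, e) == h(y, e) || g(x, e) == g(y, e));
    }
    return 0;
}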
From: Fabian G. <ry...@gm...> - 2010-11-19 23:53:26

On 11/19/2010 3:18 PM, Mat Noguchi wrote:
> On a related note, is there a way to hash two points such that they map
> to the same value if they are within specified distance d of each other?
> Obviously you can hash down to their positions quantized to a cell of d,
> but if the two points happen to be in two different cells but less than
> d apart they hash to different values.

There isn't (except for the trivial case where your hash function maps everything into the same bucket). Look at the set of points

H(v) := { x | h(x) = v }

where h is your hash function and v is some arbitrary hash bucket. H(v) is a set, and that set has a boundary. Pick a point on the boundary and there'll be "adjacent" points just outside the set (which by definition means they hash to a different cell).

-Fabian
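
Putting the thread together, a minimal C++ sketch of the hash-grid weld with cell edge e = 2d and the 2^3 candidate cells Fabian describes (the class shape and the mixing constants are illustrative; see also RTCD Section 12.1 mentioned above):

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Hash-grid vertex weld. With cell edge e >= 2d, any vertex within d of
// a query lies in the query's cell or in the adjacent cell on the nearer
// side along each axis, so only 2^3 = 8 cells need checking in 3D.
class VertexWelder {
public:
    explicit VertexWelder(float weldDist) : d(weldDist), e(2 * weldDist) {}

    // Returns the index of an existing vertex within d of v, or appends v.
    int weld(Vec3 v)
    {
        int c[3], s[3];
        const float p[3] = { v.x, v.y, v.z };
        for (int axis = 0; axis < 3; ++axis) {
            float t = p[axis] / e;
            c[axis] = (int)std::floor(t);
            s[axis] = (t - c[axis] < 0.5f) ? -1 : 1;   // nearer neighbor
        }
        for (int i = 0; i < 8; ++i) {
            uint64_t k = key(c[0] + ((i & 1) ? s[0] : 0),
                             c[1] + ((i & 2) ? s[1] : 0),
                             c[2] + ((i & 4) ? s[2] : 0));
            auto it = grid.find(k);
            if (it == grid.end()) continue;
            for (int idx : it->second) {
                Vec3 q = verts[idx];
                float dx = v.x - q.x, dy = v.y - q.y, dz = v.z - q.z;
                if (dx*dx + dy*dy + dz*dz <= d*d)
                    return idx;            // snap to the existing vertex
            }
        }
        verts.push_back(v);
        grid[key(c[0], c[1], c[2])].push_back((int)verts.size() - 1);
        return (int)verts.size() - 1;
    }

private:
    // Mix the signed cell coordinates into one key; hash collisions are
    // harmless here because of the exact distance test above.
    static uint64_t key(int x, int y, int z)
    {
        return (uint64_t)(uint32_t)x * 73856093u ^
               (uint64_t)(uint32_t)y * 19349663u ^
               (uint64_t)(uint32_t)z * 83492791u;
    }

    float d, e;
    std::vector<Vec3> verts;
    std::unordered_map<uint64_t, std::vector<int>> grid;
};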