From: Samuel Moll <samimoll@go...> - 2010-02-03 14:38:54

@Ben: Thanks for the link, but I wanted to avoid doing a full Delaunay triangulation, since anything remotely Delaunay-like suffices for me... but I can probably use some of the methods in this paper.

@Erwin: I thought I read somewhere that Box2D uses a hash... but the source code looks a lot like an AABB tree ;) But I think I found something better, because I realized that I don't really need a triangulation, only some partition, for example using polygons...

Okay, here's the outline of my approach:

Every object's center is a "node", and around the node is the object's bounding volume (I think I'll use circles). Then you build a graph that partitions the plane into (convex and concave) polygons (you could build the graph so that only triangles are used). On every corner of a polygon is an object; no object is on the inside of a polygon.

Now you test, for every object, which polygons its bounding volume intersects. Obviously it intersects at least all polygons adjacent to its node; if the bounding volumes are relatively tight, it should not intersect many more than these. I think the average number of bounding-volume tests per object should be around 6.

Now all the nodes move, and if the graph changes its topology, it needs to be repaired. This is the part where I have no concrete algorithm yet... but I guess I'll find one. I'm relatively sure that this step can be made fast ;)

Then the graph is optimized: a few edges are swapped, inserted, etc. to avoid slivers and such. This should be easy.

Greets,
Samuel

On Wed, Feb 3, 2010 at 6:39 AM, Erwin Coumans <erwin.coumans@...> wrote:
> Hi Samuel,
>
> Box2D uses a dynamic AABB tree for the broadphase, based on the original
> implementation from our Bullet library in 3D.
>
> This tree is incrementally updated, faster than sweep and prune.
>
> No spatial hash.
>
> I'm interested in your approach. How do you 'triangulate' the centers, and
> compute connectivity?
>
> Thanks,
> Erwin
>
> Feel free to forward this to the list, I cannot mail to the list because of
> mail server issues.
>
> Sent from my iPhone
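The per-object query in Samuel's outline boils down to testing whether an object's bounding circle overlaps a cell of the partition. If the cells are triangles, that test is small enough to sketch. The following illustrates just that building block; it is not code from Box2D, Bullet, or anything else mentioned in the thread, and all names are made up:

```python
def closest_point_on_segment(p, a, b):
    """Closest point to p on segment a-b (2D points as (x, y) tuples)."""
    ax, ay = a
    bx, by = b
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((p[0] - ax) * abx + (p[1] - ay) * aby) / denom))
    return (ax + t * abx, ay + t * aby)

def point_in_triangle(p, a, b, c):
    """Inclusive point-in-triangle test via the signs of three cross products."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def circle_overlaps_triangle(center, radius, a, b, c):
    """True if a bounding circle intersects triangle abc: either the center
    is inside the cell, or the nearest cell edge is within one radius."""
    if point_in_triangle(center, a, b, c):
        return True
    for u, v in ((a, b), (b, c), (c, a)):
        qx, qy = closest_point_on_segment(center, u, v)
        if (qx - center[0]) ** 2 + (qy - center[1]) ** 2 <= radius * radius:
            return True
    return False
```

The center-inside branch catches the cells around the object's own node; for every other cell only the distance from the center to the nearest edge matters, so for tight bounding circles only the handful of adjacent cells pass, as the post predicts.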
From: Andreas Brinck <andreas.brinck@gm...> - 2010-02-03 10:54:53

Hi,

I have a point inside a concave volume. Does anyone know of an algorithm to find the shortest vector from the point to the surface of the volume? The volume in this case is defined as the union of a number of overlapping capsules (line segments with a radius).

Thanks,
Andreas Brinck
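A common building block for this kind of query is the signed distance to one capsule: the distance from the point to the capsule's core segment, minus the radius. Taking the minimum over all capsules is exact when the point is outside the union, but where capsules overlap it only bounds the distance to the union's surface from below, so this sketch is a starting point rather than a full answer (function names are made up):

```python
import math

def capsule_signed_distance(p, a, b, r):
    """Signed distance from point p to a capsule: segment a-b with radius r.
    Negative means p is inside the capsule. Points are tuples of any dimension."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [ai + t * c for ai, c in zip(a, ab)]
    return math.dist(p, closest) - r

def union_signed_distance(p, capsules):
    """Minimum over the capsule SDFs: exact outside the union, but only a
    lower bound on the distance to the union's surface inside overlaps."""
    return min(capsule_signed_distance(p, a, b, r) for a, b, r in capsules)
```

The shortest vector itself points along the line from the closest point on the winning capsule's core segment to the query point (negated when the point is inside that capsule). Near seams where capsules overlap, though, the nearest point on the union's actual boundary can lie inside another capsule, so the per-capsule answer needs to be rejected or refined there.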
From: Sébastien Lagarde <sebastien.lagarde@do...> - 2010-02-03 10:10:11

> even if the compression gets a 4:1 ratio,
=> even if the compression gets a 6:1 ratio,

From: Sébastien Lagarde
Sent: Wednesday, February 3, 2010 11:01
To: Game Development Algorithms
Subject: Re: [Algorithms] Some question about "Lighting and Material of Halo3" and "Lightmap compression in Halo 3"

[quoted message trimmed; the full text appears in the following post]
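The storage estimate discussed here is easy to check against the DXT5 block size: DXT5 (BC3) packs each 4x4 block of texels into 16 bytes, i.e. one byte per texel, so a 1024x1024 DXT5 texture is 1 MiB, and the ten textures of the proposed encoding come to 10 MiB, matching the 10 MB figure quoted in the thread. A quick arithmetic sketch:

```python
def dxt5_bytes(width, height):
    """DXT5 (BC3): 16 bytes per 4x4 block of texels, i.e. 1 byte/texel."""
    return (width // 4) * (height // 4) * 16

# Two DXT5 textures (positive/negative halves) for each of the 5 "bands"
# of the 5 * 3 encoding discussed in the thread.
num_textures = 10
total = num_textures * dxt5_bytes(1024, 1024)
print(total // (1024 * 1024))  # 10 MiB for one 1024x1024 SH lightmap, no mips
```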
From: Sébastien Lagarde <sebastien.lagarde@do...> - 2010-02-03 09:44:15

Hi all,

I took some time to try to fully understand what was being said :). I have increased my level of understanding, thanks to both of you, Sam and David: I now see the difference between incoming lighting and diffuse irradiance, and better understand the convolution with the cosine lobe. Alright for this part.

Thanks to you, Peter-Pike, for pointing out the problem; in fact I hadn't noticed the rotation into the local frame, and now I think I have more questions than before :)

I am trying to understand what Halo 3 does because it is a real shipped Xbox 360 game with SH lighting in it, with all the console constraints on memory and shader performance. So I am trying to figure out what tricks they used :)

What I now think Halo 3 does for its SH lightmaps is:
- The incoming lighting generated by the global illumination solver is projected onto 9 SH coefficients per color channel.
- The SH coefficients are rotated into tangent space.
- Then they extract the dominant light (intensity and direction). I suppose they store the DC + linear SH terms, so 4 coefficients per channel (as stated by Peter-Pike), and they store the intensity (RGB). This gives the 5 * 3 float values.

At runtime, I suppose they use the linear SH terms to retrieve the dominant light direction; the dominant light is used for the analytic Cook-Torrance specular model. The dominant light should be used for the diffuse part too: I think they need to extract the dominant light intensity from the SH, so they light with one diffuse directional light plus the SH coefficients that remain after extracting the dominant light intensity. I think the extraction is done in the shader and not in the SH lightmap, in order to be able to recover the right dominant light direction (I am certainly wrong :) ).

But after that, I am lost with the rotation into the local frame: I don't get the thing with equation (5), which only requires the 3 ZH coefficients. As my SH coefficients are stored in tangent space, I am already in the case of equation (5), so I don't need to rotate the SH coefficients once again (maybe I am wrong here; do I need to rotate with the tangent-space normal extracted from the normal map?), and so I only require the SH coefficients for i = 0, 2, 6?

The convolution with the cosine lobe seems to be applied at this point, but two things are missing: the constant sqrt(4 pi / (2l + 1)), and the division by pi to turn irradiance into exit radiance. (But in the shader code they provide, they divide lightprobe_color by pi at the end of diffuse_reflectance, so maybe they just don't mention it.) Doing the convolution at this point, however, means that we store incoming lighting in the SH lightmap, not diffuse irradiance. So extracting the dominant light from the incoming lighting is maybe wrong?

Last thought: with their 15 floats to store, they said they used two DXT5 textures per SH band, which means 10 DXT5 textures to store a single lightmap. Even if the compression gets a 4:1 ratio, this requires 10 MB for an SH lightmap of 1024x1024 without mipmaps, which seems very large for console memory even with a good texture streaming system...

I am just curious about all these SH tricks. Anyway, thanks for the help you have already given me.

Lagarde Sébastien

From: Peter-Pike Sloan [mailto:peter_pike_sloan@...]
Sent: Monday, February 1, 2010 22:46
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Some question about "Lighting and Material of Halo3" and "Lightmap compression in Halo 3"

Hi,

I believe the source of confusion is that in equation (5) things are expressed in the local coordinate frame; the fact that they show 3 coefficients implies they are using quadratic SH (so when rotated into the local frame and integrated against a clamped cosine function you only need the 3 ZH coefficients). The general case (where you don't know the coordinate frame, or you want to evaluate for any normal) requires all 9 coefficients.

I think the "5*3" comes from the compression mentioned in Hao's slides (linear SH + RGB for a directional light in the "optimal linear" direction, which is 5*3 scalars). The compression work is Wang et al.; I think Yaohua's slides are more indicative of what was actually used...

Peter-Pike Sloan

________________________________
Date: Fri, 29 Jan 2010 16:28:57 +0100
From: sebastien.lagarde@...
To: gdalgorithms-list@...
Subject: [Algorithms] Some question about "Lighting and Material of Halo3" and "Lightmap compression in Halo 3"

Hello all,

I tried to contact the authors of these two (now old) papers, without success: "Lighting and Material of Halo 3", published at SIGGRAPH 2008 (http://ati.amd.com/developer/SIGGRAPH08/Chapter01ChenLighting_and_Material_of_Halo3.pdf), and the GDC 2008 talk "Lightmap compression in Halo 3" (http://toomuchlogic.com/Lightmap_Compression_2008_02_22.pdf). So I will ask some questions on the list, in case anyone is interested in the subject and has a better understanding of the math than me :)

1. "Lighting and Material of Halo 3"

About equation (5), the diffuse reflectance using the SH basis: the diffuse reflectance is k_d R_d Sum Lambda_i A_i, where A_i is the projection of the cosine lobe into SH; as the lobe is radially symmetric, all coefficients with m != 0 are 0. The authors then give the first three terms of A_i. I am wondering how many bands are used for this calculation (order 3, 4, or more?). I read in "Stupid Spherical Harmonics Tricks" by Peter-Pike Sloan (http://www.ppsloan.org/publications/StupidSH36.pdf) that order-3 SH is sufficient to approximate light sources, but that for HDR light sources he recommends order 5. As the order-4 band is 0 (I got the formula for A_i from the paper http://www.eecs.berkeley.edu/~ravir/lighting.pdf), this means 4 coefficients to store per color channel. As I am pretty sure the authors store HDR data, can someone enlighten me?

2. About texture storage (deduced from the statements above)

I am trying to figure out how the incident radiance is encoded in their SH lightmap. From the GDC 2008 talk "Lightmap compression in Halo 3" I read that they need to store, for each texel, a vector of 5 * 3 float values. I can't figure out exactly what the values are. My assumption is that the "3" is for the RGB color channels, but I can't figure out what the 5 is. Is it the first 5 SH bands, as I supposed above (but, as I said, we only need 4 coefficients in this case), or maybe order 6?

Later in the same talk I found:
A. Two DXT5 textures for each SH coefficient channel (HDR, positive/negative), and
B. Each band of the SH coefficients (RGB) is converted to Luvw space.

I suppose that Luvw space is what is described in the paper "Rendering from Compressed High Dynamic Range Textures on Programmable Graphics Hardware" by Peter-Pike Sloan et al. (ftp://ftp.research.microsoft.com/pub/tr/TR2006152.pdf). What I don't understand is that A and B seem different, and I can't understand what is stored. Are they storing, for each band, the RGB triplet of SH coefficients (meaning 3 float values in two DXT5 textures per order), or do they store 5 SH coefficients of color channel R in two DXT5 textures? So what is the total storage cost of all the 5 * 3 float values in terms of DXT5 textures? Because it looks pretty big.

Thanks to anyone interested in this post.

Best regards,
Lagarde Sébastien
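For reference, the constants under discussion are the standard clamped-cosine convolution coefficients from Ramamoorthi and Hanrahan's irradiance work: convolving an order-3 SH radiance vector with the cosine lobe scales band l by A_0 = pi, A_1 = 2*pi/3, A_2 = pi/4, and the next band (l = 3) is exactly zero, which is why only three ZH values appear in equation (5). Dividing the resulting irradiance by pi (times albedo) gives Lambertian exit radiance. A minimal single-channel sketch, assuming a plain 9-float SH vector (helper names are made up, not Halo 3's shader code):

```python
import math

# Per-band convolution constants for the clamped cosine lobe
# (Ramamoorthi & Hanrahan); the l = 3 band would be exactly 0.
A_HAT = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]

def radiance_to_irradiance(L):
    """Convolve one channel of order-3 SH radiance coefficients (9 floats,
    ordered l=0; l=1, m=-1..1; l=2, m=-2..2) with the clamped cosine lobe."""
    bands = [0] + [1] * 3 + [2] * 5
    return [A_HAT[bands[i]] * c for i, c in enumerate(L)]

def lambertian_exit_radiance(E, albedo):
    """Scale irradiance by albedo / pi to get outgoing (exit) radiance."""
    return [albedo * e / math.pi for e in E]
```

This also shows why storing incoming radiance versus irradiance in the lightmap differs only by these fixed per-band scales: the convolution can be folded in at bake time or applied in the shader.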
From: Ben Sunshine-Hill <sneftel@gm...> - 2010-02-03 02:01:32

On Tue, Feb 2, 2010 at 7:54 PM, Samuel Moll <samimoll@...> wrote:
> What I need now is an (efficient :D) algorithm for updating the
> triangulation when the points move. i.e. given a valid triangulation
> (no triangles overlap, triangulates the convex hull of all points) and
> movement vectors for all points, how can I "repair" the old
> triangulation?

Take a look at "Voronoi Diagrams of Moving Points", link below. Remember that the Voronoi tessellation is the dual of the Delaunay triangulation, which in general tends not to produce many slivers.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.8944

Ben
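Short of implementing the full kinetic structure from the paper Ben links, a cheap way to find where a triangulation has broken after a time step is to look for triangles whose signed area changes sign when the vertices move: those have been inverted and mark the regions that need repair. A sketch of just that detection step (it is not the paper's algorithm, and names are made up):

```python
def signed_area(a, b, c):
    """Twice the signed area of triangle abc (positive = counter-clockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def inverted_triangles(points_before, points_after, triangles):
    """Indices of triangles (index triples into the point lists) whose
    orientation flips, or collapses, when the points move; these are the
    triangles the repair pass has to fix."""
    bad = []
    for idx, (i, j, k) in enumerate(triangles):
        before = signed_area(points_before[i], points_before[j], points_before[k])
        after = signed_area(points_after[i], points_after[j], points_after[k])
        if before * after <= 0.0:  # sign change or degenerate
            bad.append(idx)
    return bad
```

For teleporting points this still works as a detector, since it compares only start and end configurations, but the repair itself then has to handle arbitrarily large local rearrangements.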
From: Samuel Moll <samimoll@go...> - 2010-02-03 00:54:12

Hello all,

I need a broadphase collision detection algorithm for 2D rigid body physics (continuous collision detection, but that shouldn't matter for the broadphase...).

I have objects of wildly varying sizes, lots of free space, and possibly lots of clustering, or even a single object very far away from all the others, so a quadtree (without some strange modifications; wrapping the world came to mind) is not the best option.

What I came up with is to take all the objects' positions and triangulate this point set. Then for each triangle I determine which objects it overlaps and test all these objects for collision.

This has a number of advantages, and I believe it could be competitive with sweep and prune or quadtrees (at least in 2D; in 3D it's probably not cool, but who knows?). I couldn't find anything about this method, but if somebody has already invented it I'd appreciate some pointers.

What I need now is an (efficient :D) algorithm for updating the triangulation when the points move, i.e. given a valid triangulation (no triangles overlap, and it triangulates the convex hull of all points) and movement vectors for all points, how can I "repair" the old triangulation?

The first part of the problem is how to maintain fat triangles (slivers lead to poor collision rejection). I'm sure there are already good algorithms to incrementally maintain a good triangulation, but I couldn't find them. Also, the union of all triangles must stay convex.

The second problem is how to detect and remove triangle overlaps that arise due to fast-moving objects. I have no idea how to do this robustly (theoretically, the points could teleport arbitrarily). I guess somebody has already solved this problem, but I could find no relevant paper...

Of course I'll release the finished implementation as open source, or try to incorporate it into Box2D if they want it and if it proves faster than their existing spatial hashing ;)

Sincerely,
Samuel Moll
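The standard tool for the "maintain fat triangles" half of this problem is Lawson's edge flipping: an interior edge is illegal if the far vertex of one adjacent triangle lies strictly inside the circumcircle of the other, and flipping illegal edges until none remain restores the Delaunay property, which maximizes the minimum angle and so suppresses slivers. The legality predicate is a 3x3 determinant; here is a sketch of the predicate alone, with the flip bookkeeping omitted:

```python
def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of triangle abc
    (a, b, c given in counter-clockwise order). When true, the edge shared
    by triangle abc and the neighboring triangle containing d is illegal
    and should be flipped."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    # 3x3 determinant of the rows (ax, ay, ax^2 + ay^2), etc.
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0
```

In floating point this predicate is the fragile part (the thread's "robustness" worry applies here too); production implementations typically pair it with exact or adaptive-precision arithmetic.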