From: <pierloic@fr...> - 2005-05-31 15:50:03

Thanks for your contributions. I think I did progress a bit since the initial post, as I managed to list two main pathological cases, and I *feel* like I have a solution for the first one :). I still can't find anything for the second case, and that is the bad news... I made a page to explain where I am; you might have a look if you wish: http://pierloic.free.fr/clip/clip.html

I don't have precision problems for the moment. I simply use fabs(a-b) < epsilon. That's all right for now and I don't get equality-test problems. It may become an issue later... I do some strong state checks with asserts, mainly on the even count of intersections and the alternation, but that is after the graph is built.

So:
1) If anyone can think of something about those cases, that's *cool*!!!
2) Reading your posts, it sounds like I should give up on this... Is there any other algorithm for the same specifications:
- concave polygon with holes clipped by a concave polygon with holes
- the algorithm outputs 2 lists: inside polygons and outside polygons
- ability to use results as inputs for several passes

Thanks,

Pierre Loic
From: Gino van den Bergen <gvandenbergen@pl...> - 2005-05-31 10:03:42

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Bretton Wade
> Sent: Tuesday, May 31, 2005 10:45 AM
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Weiler Atherton Implementation (2D polygon clipping)
<snip>
> 1) Be careful about testing for equality. This comes up in the collocation and collinear testing a lot. Epsilon testing is a generally accepted method for comparison of floats [abs(a-b) < epsilon], but breaks down if epsilon is meaningless in the range of numbers you are working in. A better test is looking for least significant bits of deviation (typecast the floats to ints, and compare the result to less than some small enough integer tolerance).

I would like to add that you should try to relate your epsilon to the actual input data. For instance, my (hyper)plane object maintains a noise value that is used to determine whether a point lies on the plane. Given a normal, an offset, and a noise value, a point p lies on the plane if |dot(normal, p) - offset| <= noise.

The noise value is not an absolute epsilon. Rather, it is computed from the vertices that are known to lie on the plane. Computed normals are usually not perfectly orthogonal to the support plane of a polygon, so for these vertices the signed distance dot(normal, p) - offset will not be exactly zero. The noise value is the maximum absolute signed distance of a vertex that is known to lie on the plane. Furthermore, I add an epsilon to this noise value that is related to the offset, in order to catch rounding errors. This epsilon is taken to be offset * std::numeric_limits<float>::epsilon() * C, where C is a small positive constant (e.g. 10). In this way, planes that are further away from the origin have a larger noise value.

You should be careful comparing floats by casting them to integers. You would like the float values 1.0 * 2^exp and 1.11111111111111111111111 * 2^(exp-1) (in base-2 representation) to be classified as equal. However, a direct cast to integer (through *reinterpret_cast<int*>(&value)) will show different values after masking away the least significant bits. It is better to get rid of the N least significant bits by adding a bias value of 1.0 * 2^(exp + N) (2^N times the absolute maximum of the two) to both values and comparing their 32-bit float values. For N = 8, adding the bias results in 1.00000001000000000000000 * 2^(exp + N) for both values after rounding. (To make sure that you're testing only 32 bits and not the 80-bit internal representation, you can cast them to integers using the pointer cast, or use the float-consistency compiler option -Op.)

Gino
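Gino's two ideas, the bias trick and the data-relative noise value for planes, can be sketched in C++ like this. This is an illustrative sketch, not his actual code: the function and member names, the defaults N = 8 and C = 10, and the plain-float plane representation are all assumptions based on the post.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Bias trick: ignore the N least significant mantissa bits by adding
// 2^N * max(|a|, |b|) to both values. The additions round away the
// low-order bits, so nearly-equal values collapse to the same float.
bool nearlyEqualBias(float a, float b, int n = 8) {
    float bias = std::ldexp(std::max(std::fabs(a), std::fabs(b)), n);
    volatile float ba = a + bias; // volatile: force rounding to 32-bit storage
    volatile float bb = b + bias;
    return ba == bb;
}

struct Plane {
    float nx, ny, nz; // normal
    float offset;     // dot(normal, p) == offset for points on the plane
    float noise;      // max |signed distance| over the defining vertices

    // Fold one defining vertex into the noise value, plus a relative
    // term that grows with the plane's distance from the origin.
    void addDefiningVertex(float px, float py, float pz) {
        const float C = 10.0f; // small positive constant, per the post
        float d = std::fabs(nx * px + ny * py + nz * pz - offset);
        float rel = std::fabs(offset) * std::numeric_limits<float>::epsilon() * C;
        noise = std::max(noise, d + rel);
    }

    bool contains(float px, float py, float pz) const {
        return std::fabs(nx * px + ny * py + nz * pz - offset) <= noise;
    }
};
```

Note that `nearlyEqualBias` intentionally uses a bias proportional to the magnitude of the inputs, so the tolerance scales with the range of the numbers, which is exactly the failure mode of a fixed absolute epsilon.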
From: Willem de Boer <wdeboer@pl...> - 2005-05-31 09:34:59

I recently had to implement a polygon scene-clipper algorithm. I used a beam tree to speed up poly clipping, and a BSP tree to feed the beam tree a front-to-back ordered list of polygons. My finding was that you _do not_ want to convert to triangles during your beam tree and BSP tree construction, as this will i) result in an exponential increase in tree nodes and ii) invite a lot of rounding-error creepiness.

Bretton Wade wrote: "As a general rule, clipping algorithms on complex geometry are not robust and accurate with floating point math."

I can also confirm this.

Cheers,
Willem
From: Bretton Wade <brettonw@mi...> - 2005-05-31 08:45:12

(Despite my experience in this area, I continue to believe that object-space algorithms like this can be made to work. It's my masochistic tendencies showing through.)

As a general rule, clipping algorithms on complex geometry are not robust and accurate with floating point math. In any large enough data set, you will find at least one unexpected special case that you must solve.

The last two assertions you claim to have trouble with (types of intersections will alternate, and the number will always be even) are implications of the Jordan curve theorem (any continuous simple closed curve in the plane separates the plane into two disjoint regions: the inside and the outside). You have to make these work in order to be sure the clipper is correct.

If this is for offline processing, use arbitrary-precision arithmetic to eliminate a major source of accuracy errors. It will be slow, but the alternative is manually examining thousands of failure cases. BSP-based clipping is no better than the Weiler-Atherton algorithm in the long run; it's just easier to code the operations. Unfortunately it will also give you more polygons, meaning more opportunities for error.

I implemented the Sutherland-Hodgman algorithm a few years back for offline processing of water polygons in Flight Simulator. Polys with holes were clipped only to quads, and I had a @$%$ of a time getting that robust across the entire world.

Here are a few suggestions:

1) Be careful about testing for equality. This comes up in the collocation and collinear testing a lot. Epsilon testing is a generally accepted method for comparison of floats [abs(a-b) < epsilon], but breaks down if epsilon is meaningless in the range of numbers you are working in. A better test is looking for least significant bits of deviation (typecast the floats to ints, and compare the result to less than some small enough integer tolerance).

2) Error-check your input religiously. You want polygon chains that are simple, have no collinear or collocated vertices, have area greater than 0, etc. Do all this conditioning before you feed the clipper, not while you feed it. Punt on any polygon that isn't well conditioned.

3) Skip the alternative-winding-order hole-polygon approach if you can. I inserted the hole polygon into the source polygons and updated the test for simpleness to allow collinear edges in the polygon. This removes one major source of complexity from the algorithm.

4) Isolate your intersection routines so that they always order line segments the same way. This ensures that you get the same intersection point whether you are considering AB intersects CD, or CD intersects AB (or DC intersects BA...).

5) To make the Jordan curve theorem hold, I found that I had to sort the intersections along the clipping plane and fudge it if I got multiple intersection points that appeared to be collocated. Basically, the sort routine had to enforce the alternating nature of the intersection points.

If you keep going down this road... good luck, and please report back on your results.

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Andrew Willmott
> Sent: Monday, May 30, 2005 12:51 PM
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Weiler Atherton Implementation (2D polygon clipping)
>
> The short version: WA is not a practical, robust algorithm. This is not a problem you can solve.
> <snip>
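Suggestion 4 (always ordering line segments the same way before intersecting them) can be sketched like this. This is an illustrative sketch, not Bretton's code; the lexicographic ordering and the helper names are assumptions. The point is that after canonicalization, every ordering of the same two segments performs the identical sequence of floating-point operations and therefore yields a bitwise-identical intersection point.

```cpp
#include <algorithm>

struct Pt { double x, y; };

static bool lexLess(const Pt& a, const Pt& b) {
    return a.x < b.x || (a.x == b.x && a.y < b.y);
}

// Intersect the infinite lines through (a,b) and (c,d).
// Assumes the lines are not parallel.
static Pt lineIntersect(Pt a, Pt b, Pt c, Pt d) {
    double r = b.x - a.x, s = b.y - a.y;
    double t = d.x - c.x, u = d.y - c.y;
    double denom = r * u - s * t;
    double k = ((c.x - a.x) * u - (c.y - a.y) * t) / denom;
    return { a.x + k * r, a.y + k * s };
}

// Canonical wrapper: sort each segment's endpoints, then sort the two
// segments, so intersect(AB, CD), intersect(CD, AB), intersect(DC, BA),
// etc. all compute exactly the same result.
Pt canonicalIntersect(Pt a, Pt b, Pt c, Pt d) {
    if (lexLess(b, a)) std::swap(a, b);
    if (lexLess(d, c)) std::swap(c, d);
    if (lexLess(c, a) || (!lexLess(a, c) && lexLess(d, b))) {
        std::swap(a, c);
        std::swap(b, d);
    }
    return lineIntersect(a, b, c, d);
}
```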
From: Willem de Boer <wdeboer@pl...> - 2005-05-31 06:56:46

Really, it should happen in texture space, as that's the closest you will get to integrating in the tangent plane. Of course, this won't work where there are texture discontinuities. In fact, it would only really be correct if the mesh can be nicely unfolded onto the sphere.

I got the best results when I compute an irradiance map from the light source (a la Dachsbacher et al.) and use that information to integrate against, as well as filtering in image space. That way, you get light shining through thin parts of the mesh. (You can also use this depth map to compute the single-scattering term, of course.)

--
Willem H. de Boer
Homepage: http://www.whdeboer.com

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Rowan Wyborn
> Sent: Tuesday, May 31, 2005 1:27 AM
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Subsurface scattering
>
> Cool! It's nice to finally see an analytical derivation for such a hacky 'just works' algorithm :)
>
> One thing that wasn't clear to me was whether the application of the kernel was happening in texture space (a la the traditional subsurface 'blur' approximation) or in image space or???
> <snip>
From: Rowan Wyborn <rowan@ir...> - 2005-05-30 23:25:01

Cool! It's nice to finally see an analytical derivation for such a hacky 'just works' algorithm :)

One thing that wasn't clear to me was whether the application of the kernel was happening in texture space (a la the traditional subsurface 'blur' approximation) or in image space or???

> -----Original Message-----
> From: Willem de Boer [mailto:wdeboer@...]
> Sent: Friday, 27 May 2005 4:44 PM
> To: gdalgorithms-list@...
> Subject: [Algorithms] Subsurface scattering
>
> Hi all.
>
> I've put up a draft of an article that describes how to simulate sufficiently local multiple-scattering (i.e., semi-translucent objects) effects in real time. Under some simplifying assumptions I show how the full BSSRDF integral can be put into a much simpler form that is simple enough to be evaluated quickly.
>
> The paper can be found here:
> http://www.whdeboer.com/writings.html
>
> Remember, it's a draft. There are bits missing, but the gist of the technique is there. The justification for the paper is that a lot of people simulate multiple scattering by blurring. This paper shows that blurring actually has theoretical grounding.
>
> Some screenshots can be found here:
> http://www.whdeboer.com/misc.html
>
> I hope you like it. Enjoy!
>
> Cheers,
> Willem
From: Andrew Willmott <awillmott@ma...> - 2005-05-30 19:50:21

The short version: WA is not a practical, robust algorithm. This is not a problem you can solve.

The long version: a friend of mine was working on 2D clipping algorithms for his Master's thesis. He ran into exactly the same problems you describe, and spent a lot of time trying to fix them. He eventually got in touch with one of the paper's authors, and established that their code had the same set of problems. It was a nice algorithm in theory, but it inevitably ran into robustness issues in practice. He went on to write his own, which was based on a BSP tree approach, I think. (I might be wrong about that; this was all over ten years ago, and I can't find his thesis online. It was "Object-precision methods for visible surface determination", J. Williams, University of Auckland.)

Andrew

pierloic@... wrote:

> Hello,
>
> I am implementing the Weiler-Atherton algorithm as it really matches my needs for some physics dev:
> - concave polygon with holes clipped by a concave polygon with holes
> - the algorithm outputs 2 lists: inside polygons and outside polygons
> - ability to use results as inputs for several passes
>
> You can find the original article here:
> http://www.cs.drexel.edu/~david/Classes/CS430/HWs/p214weiler.pdf
>
> The general cases work really well, but it gets harder with the pathological cases, when some intersection points happen to be the same as some polygon points:
> - 2 non-parallel edges intersect on one end of one edge
> - 2 non-parallel edges intersect on one end of one edge which is also an end of the other edge
> - 2 parallel edges intersect on two points which are ends of edges
>
> I tried different ways to treat them (insert intersection nodes or not), but nothing really works for the moment.
>
> The paper is quite "vague" about it:
> "If care is taken in placement of intersections where the subject and clip polygon contours are identical in the xy plane, no degenerate polygons will be produced by the clipping process"
>
> Then the text makes two assertions:
> "These two types of intersections will be found to alternate along any given contour and the number of intersections will always be even"
>
> I can't satisfy these two assertions with the different approaches I tried for the pathological cases...
>
> Has anyone experienced these issues? Any help is welcome.
>
> Thanks,
>
> Pierre Loic Herve
From: Chris Haarmeijer <c.haarmeijer@ke...> - 2005-05-30 18:34:57

Ah, sounds cool. I am going to test it tomorrow and see what it does to performance; it should be <a lot> better than the current performance.

The thing with demeaning is that I have an input frame from a video camera and want to remove the mean value of each row from each pixel to get a demeaned texture. This is the input for another filter pass.

Chris

--
Keep IT Simple Software
P: P.O. Box 548, 7500 AM Enschede, The Netherlands
W: http://www.keepitsimple.nl
E: mailto:info@...
T: +31 53 4356687

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Adrian Bentley
> Sent: maandag 30 mei 2005 19:53
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Computing row means of a texture
>
> You can actually make bilinear adding do all the adds for you if you combine the two above-mentioned algorithms:
>
> for ceil( log n ) passes
>     pixel[i] = 0.5*pixel[2i] + 0.5*pixel[2i+1] // the bilinear filter at coord 2i + 0.5
>
> And that's it. Each pass will use the previously averaged values from the previous ones.
> <snip>
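The demeaning step Chris describes (subtract each row's mean from every pixel of that row) can be sketched on the CPU as follows. This is an illustrative sketch only; the function name and the plain row-major float-buffer representation are assumptions, and a real-time version would run as a shader pass rather than a CPU loop.

```cpp
#include <cstddef>
#include <vector>

// Subtract each row's mean from every pixel in that row.
// 'img' is a row-major width*height single-channel float image.
void demeanRows(std::vector<float>& img, std::size_t width, std::size_t height) {
    for (std::size_t y = 0; y < height; ++y) {
        float* row = &img[y * width];
        float sum = 0.0f;
        for (std::size_t x = 0; x < width; ++x) sum += row[x];
        float mean = sum / static_cast<float>(width);
        for (std::size_t x = 0; x < width; ++x) row[x] -= mean;
    }
}
```

After this pass, each row of the output sums to (approximately) zero, which is easy to assert on as a sanity check before feeding the next filter.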
From: Chris Haarmeijer <c.haarmeijer@ke...> - 2005-05-30 18:32:53

Hi Tony,

Offline is not possible. The algorithm is going to be used in real-time video analysis... :( That's why I need it to be fast.

Chris

--
Keep IT Simple Software
P: P.O. Box 548, 7500 AM Enschede, The Netherlands
W: http://www.keepitsimple.nl
E: mailto:info@...
T: +31 53 4356687

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Tony Cox
> Sent: maandag 30 mei 2005 17:47
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Computing row means of a texture
>
> And you can coerce the texture sampler to do half of the adds for you by using bilinear filtering.
>
> A better question is: why wouldn't you just do this once, offline? Or is this for an offline tool and you're just trying to speed it up?
> <snip>
From: Adrian Bentley <adruab@gm...>  20050530 17:52:39

You can actually make bilinear adding do all the adds for you if you combine the two above mentioned algorithms: for ceil( log n ) passes pixel[i] =3D 0.5*pixel[2i] + 0.5*pixel[2i+1] //the bilinear filter at coord 2i + 0.5 And that's it. Each pass will use the previously averaged values from the previous ones. All you have to do is make sure that if you sample off the boundary of the texture you get no contribution (i.e. border color of black, 0 alpha). You could also optimize by utilizing things like alpha testing and vertex interpolation for 2i + 0.5 coordinate. I don't think there's a particularly good way to fix the "demeaning" alg. The question is why would you want to demean texture A if you already have texture D? Some numerical test? Adrian On 5/30/05, Tony Cox <tonycox@...> wrote: > And you can coerce the texture sampler to do half of the adds for you by > using bilinear filtering. >=20 > A better question is: why wouldn't you just do this once, offline? Or is > this for an offline tool and you're just trying to speed it up? >=20 >  Tony >=20 > Original Message > From: gdalgorithmslistadmin@... > [mailto:gdalgorithmslistadmin@...] On Behalf Of > Anders Nilsson > Sent: Monday, May 30, 2005 7:29 AM > To: gdalgorithmslist@... > Subject: Re: [Algorithms] Computing row means of a texture >=20 > Hm could this be faster: Add the right half of the texture to the left > half of the texture (so added[0]=3Dpixel[0]+pixel[128] for a pixel with > 256 in width). Then recurse for the left 128 pixels. Pretty soon you > should have the rowsums in one column (needs to be float or something > to not overflow though, but it seems your texture D should overflow > with your algorithm as well so I've guess you've got that covered!). > For a rowwidth of 2^k you need k divisions and they are dependant, ie > you need to add from what you are creating. They size of the added > texture however halfs each time. >=20 > Just a thought. 
> Anders Nilsson
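A CPU sketch of Adrian's combined idea (illustrative only, not code from the thread): repeated pair-averaging, which is exactly what a bilinear fetch at x = 2i + 0.5 gives for free on the GPU, reduces each row to its mean in log2(width) passes. This assumes a power-of-two row width so no border taps are ever needed.

```python
import math

def row_means_by_halving(texture):
    # Each pass replaces a row with the pairwise averages
    # 0.5*r[2i] + 0.5*r[2i+1]; after log2(width) passes a single
    # value per row remains: the row mean.
    rows = [list(r) for r in texture]
    for _ in range(int(math.log2(len(rows[0])))):
        rows = [[0.5 * r[2 * i] + 0.5 * r[2 * i + 1]
                 for i in range(len(r) // 2)]
                for r in rows]
    return [r[0] for r in rows]

print(row_means_by_halving([[1, 3, 5, 7], [2, 2, 2, 2]]))  # [4.0, 2.0]
```

On the GPU each pass would be one quad draw sampling the previous pass's render target with bilinear filtering at the half-texel offsets.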
From: Tony Cox <tonycox@mi...>  20050530 15:46:40

And you can coerce the texture sampler to do half of the adds for you by using bilinear filtering.

A better question is: why wouldn't you just do this once, offline? Or is this for an offline tool and you're just trying to speed it up?

- Tony

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Anders Nilsson
Sent: Monday, May 30, 2005 7:29 AM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Computing row means of a texture

Hm, could this be faster: Add the right half of the texture to the left half of the texture (so added[0] = pixel[0] + pixel[128] for a texture 256 pixels wide). Then recurse for the left 128 pixels. Pretty soon you should have the row sums in one column (they need to be float or something to not overflow, though it seems your texture D would overflow with your algorithm as well, so I guess you've got that covered!). For a row width of 2^k you need k passes and they are dependent, i.e. you need to add from what you are creating. The size of the added texture, however, halves each time.

Just a thought.

Anders Nilsson
From: Anders Nilsson <breakin.outbreak@gm...>  20050530 14:29:17

Hm, could this be faster: Add the right half of the texture to the left half of the texture (so added[0] = pixel[0] + pixel[128] for a texture 256 pixels wide). Then recurse for the left 128 pixels. Pretty soon you should have the row sums in one column (they need to be float or something to not overflow, though it seems your texture D would overflow with your algorithm as well, so I guess you've got that covered!). For a row width of 2^k you need k passes and they are dependent, i.e. you need to add from what you are creating. The size of the added texture, however, halves each time.

Just a thought.

Anders Nilsson
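The fold-in-half idea above, sketched on the CPU (a hypothetical illustration, not the poster's code): add the right half of each row onto the left half and halve the live width every pass; after k passes on a 2^k-wide row, column 0 holds the row sum. Floats sidestep the overflow the post worries about.

```python
def row_sums_by_folding(texture):
    # added[i] = pixel[i] + pixel[i + width/2], then recurse on
    # the left half; the working width halves each pass.
    rows = [[float(v) for v in r] for r in texture]
    width = len(rows[0])
    while width > 1:
        half = width // 2
        for r in rows:
            for i in range(half):
                r[i] += r[i + half]
        width = half
    return [r[0] for r in rows]

print(row_sums_by_folding([[1, 3, 5, 7]]))  # [16.0]
```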
From: Ville Miettinen <wili@hy...>  20050530 13:20:02

> Has anybody tried to implement a demeaning algorithm on the GPU before?

I've implemented a number of algorithms on the GPU, although none of them have been particularly demeaning :)

cheers,
wili

--
Ville Miettinen, CTO
Hybrid Graphics, Ltd.
http://www.hybrid.fi

> Kind regards,
>
> Chris
From: Chris Haarmeijer <c.haarmeijer@ke...>  20050530 11:40:49

Hi,

I need to compute the row means of a texture A. The resulting texture D is used to demean texture A. I've currently implemented it in a way that is really slow but works: walk over the columns of texture A, adding the values of each column with additive blending into destination texture D. Finally, when demeaning texture A, subtract from texture A the value of texture D divided by the number of columns in texture A.

Has anybody tried to implement a demeaning algorithm on the GPU before?

Kind regards,

Chris

--
Keep IT Simple Software
P: P.O. Box 548, 7500 AM Enschede, The Netherlands
W: http://www.keepitsimple.nl
E: mailto:info@...
T: +31 53 4356687
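For reference, a plain CPU version of the whole operation being asked about (a sketch, not the GPU implementation): accumulate each row, divide by the column count to get texture D's per-row value, then subtract that mean from every pixel of the row.

```python
def demean_rows(texture):
    # Texture D holds, per row, the accumulated column values
    # divided by the column count -- i.e. the row mean -- which
    # is then subtracted from every pixel of that row.
    out = []
    for row in texture:
        mean = sum(row) / len(row)
        out.append([v - mean for v in row])
    return out

print(demean_rows([[1, 2, 3], [4, 4, 4]]))
# [[-1.0, 0.0, 1.0], [0.0, 0.0, 0.0]]
```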
From: Joe Ante <joe@ot...>  20050529 13:15:08

> Thanks for all the insightful replies, guys. I think the ultimate answer
> (in my case at least) is that there are more important things to worry
> about at this point, texture-pipeline-wise, and that this is a relatively
> minor thing that I can add later.
>
> A related question: Do you just choose one filter for all your textures
> when you batch process them, or do you allow artists to select which
> filter to use on a per-texture basis and let them decide which looks
> better?

Our asset pipeline stores metadata for every file. This is where we store the settings for how to import the texture. You can set up, e.g.:
- different mipmap filters to use,
- conversion to a normal map using different filters,
- or generation of a cubemap from a texture using different mapping methods (e.g. sphere map or cylindrical map).

If the user doesn't care about the settings he will never see them. For that to work it is important that you pick good default values. Analyze the image a bit: e.g. use DXT1 compression if there is no alpha in the texture, DXT3 if there is.

http://www.otee.dk/joe/importsettings.jpg

It worked very well for us and artists love it.

Joachim Ante
http://www.otee.dk
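The "pick good defaults by analyzing the image" step might look like this (a hypothetical sketch; the function name and the format strings are illustrative, not Joachim's actual importer API):

```python
def default_compression(pixels):
    # pixels: list of (r, g, b, a) tuples with 8-bit channels.
    # A fully opaque image needs no alpha block -> DXT1;
    # anything with alpha gets DXT3 by default.
    has_alpha = any(a < 255 for (_r, _g, _b, a) in pixels)
    return "DXT3" if has_alpha else "DXT1"

print(default_compression([(255, 0, 0, 255)]))                   # DXT1
print(default_compression([(255, 0, 0, 255), (0, 0, 0, 128)]))   # DXT3
```

A real importer would also distinguish 1-bit alpha (still DXT1-friendly) from smooth alpha, but the point is only that sensible defaults can be derived from the data.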
From: <Brad_Byrd@pl...>  20050527 18:11:26

Thanks for all the insightful replies, guys. I think the ultimate answer (in my case at least) is that there are more important things to worry about at this point, texture-pipeline-wise, and that this is a relatively minor thing that I can add later.

A related question: Do you just choose one filter for all your textures when you batch process them, or do you allow artists to select which filter to use on a per-texture basis and let them decide which looks better? This is the usual more complexity/more control vs. less complexity/less control kind of question, but I'm curious to know, from a pipeline standpoint, which works better for you and your artists.

Thanks!

Brad...

gdalgorithms-list-admin@... wrote on 05/26/2005 08:07:05 PM:

> > I'm not sure that's true. The edges of something that's a gradient from
> > black on the left to white on the right will look pretty bad if you
> > assume wrapping. I'd rather assume clamping (!)
>
> I assumed wrapping because if it IS wrapping, then you probably actually
> use the wrap, so the edges of the texture will actually be on screen.
> Whereas if you're using something else, it's likely that the mesh doesn't
> actually use all the way to the edges, so if you get them slightly wrong,
> it's moderately likely that those texels might not even be used, and even
> if they are, it's likely there's not many of them.
>
> Obviously the ideal situation is you use the right one, but if you have
> to assume one, I'd say wrap.
>
> TomF.
>
> > -----Original Message-----
> > From: gdalgorithms-list-admin@... On Behalf Of Jon Watte
> > Sent: 26 May 2005 14:00
> > To: gdalgorithms-list@...
> > Subject: Re: [Algorithms] Wrap modes and MIP map generation.
> >
> > > filter kernel never needs to try to sample off the edge. If you're
> > > doing something fancier then maybe you do care, but I bet you don't
> > > actually care that much - the difference is probably going to be
> > > small, especially if you always assume wrapping.
> >
> > I'm not sure that's true. The edges of something that's a gradient
> > from black on the left to white on the right will look pretty bad if
> > you assume wrapping. I'd rather assume clamping (!)
> >
> > > And I believe we've already had a discussion a while back where some
> > > people moderately persuasively pointed out that a box filter is
> > > actually the best possible mipmap filter. Slightly counterintuitive
> > > though.
> >
> > I don't think it's the best possible, but it's pretty good. The reason
> > is that an image isn't really a point-sampled representation of a
> > band-limited signal. Instead, each sample can be construed as the area
> > average (or integration) over the pixel, or over a gaussian-shaped
> > "tap" concentric with the pixel. If you make the band-limited
> > point-sample assumption, then the source art that generated the output
> > image might actually have significant (invisible) ringing in it, or
> > the image may be highly aliased (in the signal-processing sense).
> >
> > If the pixel is really the integral of intensity over a square box of
> > area, then the box filter is the most accurate filter, because just
> > summing them up by definition gives you the intensity over the larger
> > area covered by the MIP-mapped pixel.
> >
> > If the pixel is more of a gaussian-shaped tap (sample area), then a
> > gaussian filter that samples the reconstructed image might be more
> > precise. It really comes down to how you capture (or create) your
> > source art, and what assumptions you're making about it in the
> > rendering stage.
> >
> > That being said, a lot of people prefer the slightly noisier image
> > with emphasis on the higher frequencies that you get when you either
> > bias your LOD, or use a ringing MIP map filter - "perfect" signal
> > processing may or may not be what your artists actually want.
> >
> > Cheers,
> >
> > / h+
From: Willem de Boer <wdeboer@pl...>  20050527 06:44:03

Hi all. I've put up a draft of an article that describes how to simulate sufficiently local multiple-scattering effects (i.e., semi-translucent objects) in real time. Under some simplifying assumptions I show how the full BSSRDF integral can be put into a much simpler form that is cheap enough to evaluate quickly. The paper can be found here:

http://www.whdeboer.com/writings.html

Remember, it's a draft. There are bits missing, but the gist of the technique is there. The justification for the paper is that a lot of people simulate multiple scattering by blurring; this paper shows that blurring actually has a theoretical grounding.

Some screenshots can be found here:

http://www.whdeboer.com/misc.html

I hope you like it. Enjoy!

Cheers,

Willem
From: Tom Forsyth <tom.forsyth@ee...>  20050527 03:07:10

> I'm not sure that's true. The edges of something that's a gradient from
> black on the left to white on the right will look pretty bad if you
> assume wrapping. I'd rather assume clamping (!)

I assumed wrapping because if it IS wrapping, then you probably actually use the wrap, so the edges of the texture will actually be on screen. Whereas if you're using something else, it's likely that the mesh doesn't actually use all the way to the edges, so if you get them slightly wrong, it's moderately likely that those texels might not even be used, and even if they are, it's likely there's not many of them.

Obviously the ideal situation is you use the right one, but if you have to assume one, I'd say wrap.

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@... On Behalf Of Jon Watte
> Sent: 26 May 2005 14:00
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Wrap modes and MIP map generation.
>
> > filter kernel never needs to try to sample off the edge. If you're
> > doing something fancier then maybe you do care, but I bet you don't
> > actually care that much - the difference is probably going to be
> > small, especially if you always assume wrapping.
>
> I'm not sure that's true. The edges of something that's a gradient from
> black on the left to white on the right will look pretty bad if you
> assume wrapping. I'd rather assume clamping (!)
>
> > And I believe we've already had a discussion a while back where some
> > people moderately persuasively pointed out that a box filter is
> > actually the best possible mipmap filter. Slightly counterintuitive
> > though.
>
> I don't think it's the best possible, but it's pretty good. The reason
> is that an image isn't really a point-sampled representation of a
> band-limited signal. Instead, each sample can be construed as the area
> average (or integration) over the pixel, or over a gaussian-shaped "tap"
> concentric with the pixel. If you make the band-limited point-sample
> assumption, then the source art that generated the output image might
> actually have significant (invisible) ringing in it, or the image may be
> highly aliased (in the signal-processing sense).
>
> If the pixel is really the integral of intensity over a square box of
> area, then the box filter is the most accurate filter, because just
> summing them up by definition gives you the intensity over the larger
> area covered by the MIP-mapped pixel.
>
> If the pixel is more of a gaussian-shaped tap (sample area), then a
> gaussian filter that samples the reconstructed image might be more
> precise. It really comes down to how you capture (or create) your source
> art, and what assumptions you're making about it in the rendering stage.
>
> That being said, a lot of people prefer the slightly noisier image with
> emphasis on the higher frequencies that you get when you either bias
> your LOD, or use a ringing MIP map filter - "perfect" signal processing
> may or may not be what your artists actually want.
>
> Cheers,
>
> / h+
From: PeterPike Sloan <ppsloan@wi...>  20050526 23:30:33

I don't buy that argument for using a box filter. Even if the finest pixels are area integrals, that doesn't mean you want the lower levels of the mip map to treat them that way. You need to take your reconstruction filter into account (bilinear), which makes things look sinc-ish if you do this in a least-squares sense (i.e.: project the constant functions at the most detailed representation into a bilinear basis...). Just look at the duals of the bilinear basis functions.

If your goal is to reconstruct the finest-level signal (and here you can only compare the integrals over whatever your neighborhood is), box filtering is also not the best answer (it's easy to validate this - again, just using least squares.)

Also, when filtering images it's common to have a degree of freedom that trades off between ringing and blurring: just use your windowed-sinc approximation with a larger inter-pixel step size.

Peter-Pike

(If your reconstruction filter were also a box filter (i.e.: point sampling), then box filtering would be the correct thing to do for mipmaps - but I don't think that is generally the case...)

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Jon Watte
Sent: Thursday, May 26, 2005 2:00 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Wrap modes and MIP map generation.

> filter kernel never needs to try to sample off the edge. If you're
> doing something fancier then maybe you do care, but I bet you don't
> actually care that much - the difference is probably going to be
> small, especially if you always assume wrapping.

I'm not sure that's true. The edges of something that's a gradient from black on the left to white on the right will look pretty bad if you assume wrapping. I'd rather assume clamping (!)

> And I believe we've already had a discussion a while back where some
> people moderately persuasively pointed out that a box filter is
> actually the best possible mipmap filter. Slightly counterintuitive
> though.

I don't think it's the best possible, but it's pretty good. The reason is that an image isn't really a point-sampled representation of a band-limited signal. Instead, each sample can be construed as the area average (or integration) over the pixel, or over a gaussian-shaped "tap" concentric with the pixel. If you make the band-limited point-sample assumption, then the source art that generated the output image might actually have significant (invisible) ringing in it, or the image may be highly aliased (in the signal-processing sense).

If the pixel is really the integral of intensity over a square box of area, then the box filter is the most accurate filter, because just summing them up by definition gives you the intensity over the larger area covered by the MIP-mapped pixel.

If the pixel is more of a gaussian-shaped tap (sample area), then a gaussian filter that samples the reconstructed image might be more precise. It really comes down to how you capture (or create) your source art, and what assumptions you're making about it in the rendering stage.

That being said, a lot of people prefer the slightly noisier image with emphasis on the higher frequencies that you get when you either bias your LOD, or use a ringing MIP map filter - "perfect" signal processing may or may not be what your artists actually want.

Cheers,

/ h+
From: Jon Watte <hplus@mi...>  20050526 20:58:57

> filter kernel never needs to try to sample off the edge. If you're doing
> something fancier then maybe you do care, but I bet you don't actually
> care that much - the difference is probably going to be small, especially
> if you always assume wrapping.

I'm not sure that's true. The edges of something that's a gradient from black on the left to white on the right will look pretty bad if you assume wrapping. I'd rather assume clamping (!)

> And I believe we've already had a discussion a while back where some
> people moderately persuasively pointed out that a box filter is actually
> the best possible mipmap filter. Slightly counterintuitive though.

I don't think it's the best possible, but it's pretty good. The reason is that an image isn't really a point-sampled representation of a band-limited signal. Instead, each sample can be construed as the area average (or integration) over the pixel, or over a gaussian-shaped "tap" concentric with the pixel. If you make the band-limited point-sample assumption, then the source art that generated the output image might actually have significant (invisible) ringing in it, or the image may be highly aliased (in the signal-processing sense).

If the pixel is really the integral of intensity over a square box of area, then the box filter is the most accurate filter, because just summing them up by definition gives you the intensity over the larger area covered by the MIP-mapped pixel.

If the pixel is more of a gaussian-shaped tap (sample area), then a gaussian filter that samples the reconstructed image might be more precise. It really comes down to how you capture (or create) your source art, and what assumptions you're making about it in the rendering stage.

That being said, a lot of people prefer the slightly noisier image with emphasis on the higher frequencies that you get when you either bias your LOD, or use a ringing MIP map filter - "perfect" signal processing may or may not be what your artists actually want.

Cheers,

/ h+
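Under the "pixel is an area integral" model described above, one mip level down via the 2x2 box filter is just the average of the four covered texels. A minimal sketch (assuming even dimensions):

```python
def box_filter_mip(level):
    # Each coarse texel is the average of the 2x2 block of fine
    # texels whose area it covers -- the sum of four integrals
    # over quarter-areas is the integral over the full area.
    h, w = len(level), len(level[0])
    return [[(level[y][x] + level[y][x + 1] +
              level[y + 1][x] + level[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

print(box_filter_mip([[0, 4], [8, 4]]))  # [[4.0]]
```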
From: Marco Thrush <skyphos@co...>  20050526 13:40:38

I agree on the box filter thing. You are probably already doing this, but make sure that when you filter your mipmaps you do it in linear color space, and you should be OK.

Marco

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Tom Forsyth
Sent: Wednesday, May 25, 2005 10:39 PM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Wrap modes and MIP map generation.

Well, if your mipmap filter is a standard dumb box filter that just takes each 2x2 block and finds the average colour (in some space - linear, gamma-corrected, YUV, etc.), then it doesn't matter either way, because your filter kernel never needs to try to sample off the edge. If you're doing something fancier then maybe you do care, but I bet you don't actually care that much - the difference is probably going to be small, especially if you always assume wrapping.

And I believe we've already had a discussion a while back where some people moderately persuasively pointed out that a box filter is actually the best possible mipmap filter. Slightly counterintuitive though.

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@... On Behalf Of Brad_Byrd@...
> Sent: 25 May 2005 19:02
> To: gdalgorithms-list@...
> Subject: [Algorithms] Wrap modes and MIP map generation.
>
> I'm working on our texture pipeline, and my lead mentioned to me that I
> should be sure to take the texture wrap mode (wrap vs. clamp) into
> account when generating MIP maps. In other words, when the mode is set
> to clamp, I should clamp my filter on the edges of the map, but when the
> mode is wrap, I should let the filter kernel wrap around and pick up
> pixels on the opposite edge of the map.
>
> My question is: is this correct? I've never heard of anyone doing this
> before, so it kind of caught me off guard. I don't know enough about
> image/signal processing to know whether it is more correct to do it this
> way or to do it the "standard" way and just always clamp the filter
> regardless of wrap mode. Is the visual difference even worth the slight
> increase in complexity?
>
> Thanks,
>
> Brad...
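Marco's point in miniature (a sketch; the 2.2 exponent is a common approximation of the sRGB transfer curve, not the exact piecewise function): averaging gamma-encoded 8-bit values directly darkens the result, so decode to linear light, average there, and re-encode.

```python
def average_gamma_correct(texels, gamma=2.2):
    # Decode 8-bit gamma-encoded values to linear light, average
    # in linear space, then re-encode.  Averaging the encoded
    # values directly would give a visibly darker result.
    linear = [(t / 255.0) ** gamma for t in texels]
    avg = sum(linear) / len(linear)
    return round(avg ** (1.0 / gamma) * 255.0)

print(average_gamma_correct([0, 255]))  # 186, vs the naive (0 + 255) / 2 = 127
```

The black/white case is the classic demonstration: half linear intensity encodes to roughly 186, not 128.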
From: <pierloic@fr...>  20050526 13:21:23

Hello,

I am implementing the Weiler-Atherton algorithm, as it really matches my needs for some physics dev:
- a concave-with-holes polygon clipped against a concave-with-holes polygon
- the algorithm outputs two lists: inside polygons and outside polygons
- the ability to use results as inputs for several passes

You can find the original article here:

http://www.cs.drexel.edu/~david/Classes/CS430/HWs/p214weiler.pdf

The general case works really well, but it gets harder with the pathological cases, when some intersection points happen to coincide with polygon vertices:
- two non-parallel edges intersect at one end of one edge
- two non-parallel edges intersect at one end of one edge which is also an end of the other edge
- two parallel edges intersect at two points which are ends of edges

I tried different ways to treat them (inserting intersection nodes or not), but nothing really works for the moment. The paper is quite "vague" about it:

"If care is taken in placement of intersections where the subject and clip polygon contours are identical in the xy plane, no degenerate polygons will be produced by the clipping process"

Then the text makes two assertions:

"These two types of intersections will be found to alternate along any given contour and the number of intersections will always be even"

I can't satisfy these two assertions with the different approaches I tried for the pathological cases... Has anyone run into these issues? Any help is welcome.

Thanks,

Pierre-Loic Herve
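One common mitigation for the degenerate cases listed above (my sketch, not something from the Weiler-Atherton paper): snap any intersection point that lands within an epsilon of an existing polygon vertex onto that vertex, and flag it as degenerate instead of inserting a fresh node, since the zero-length edges such near-coincident points create are exactly what breaks the alternation property.

```python
def snap_intersection(p, vertices, eps=1e-9):
    # If the intersection point p coincides (within eps) with an
    # existing vertex, return that vertex and flag the case as
    # degenerate so the clipper can special-case it; otherwise
    # return p unchanged as an ordinary interior intersection.
    for v in vertices:
        if abs(p[0] - v[0]) <= eps and abs(p[1] - v[1]) <= eps:
            return v, True
    return p, False

print(snap_intersection((1.0, 1e-12), [(0.0, 0.0), (1.0, 0.0)]))
# ((1.0, 0.0), True)
```

How the clipper then treats the flagged vertex (entry, exit, or pass-through) still needs the usual case analysis, but at least the point lists stay consistent between the subject and clip contours.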
From: Sylvester Hesp <s.hesp@xs...>  20050526 09:27:15

(I hope this works now; I've tried sending this mail tons of times, but it got rejected by my ISP because the mailing list seems blacklisted or something.)

> The totally accurate way to do it is to use either separating axis or
> Minkowski difference (the maths is the same) and use the axes of the
> frustum plane normals (pointing away from the centre of the frustum),
> and the cross products of each edge against every other edge...

Right, and keep in mind it's pretty pointless to test parallel planes and edges. The typical frustum has 5 unique plane normals (the near and far planes are parallel) and 6 unique edges (4 edges connecting the near plane with the far plane, and 2 perpendicular edges on both the near and the far plane).

- Sylvester

----- Original Message -----
From: <Paul_Firth@...>
To: <gdalgorithms-list@...>
Sent: Wednesday, 18 May, 2005 13:08
Subject: Re: [Algorithms] Frustum-Frustum intersection

> gdalgorithms-list-admin@... wrote on 18/05/2005 11:51:48:
>
>> Hi,
>>
>> I'm sorry if this question has already been asked before, but the
>> archives seem to be offline. :(
>>
>> I'm looking for a function which provides a reasonably efficient way of
>> detecting a frustum-frustum intersection. I'm not eager to use a basic
>> convex collision testing function, because of all the baggage coming
>> with these libraries.
>>
>> I would only be interested in a true/false collision. No contact
>> generation etc. is required. Any ideas?
>
> The totally accurate way to do it is to use either separating axis or
> Minkowski difference (the maths is the same) and use the axes of the
> frustum plane normals (pointing away from the centre of the frustum),
> and the cross products of each edge against every other edge...
>
> Start with a sep-axis routine for OBB vs OBB and add more axes (it will
> be optimised to assume each face has an exact opposite).
>
> Maybe someone else will be able to give a routine if you don't want
> total accuracy (and the associated slowness)....
>
> Cheers, Paul.
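The core of the separating-axis routine being discussed, sketched for arbitrary convex vertex sets (illustrative only; a real frustum-frustum test would feed it the 5 unique face normals of each frustum plus the edge-cross-edge axes, and report intersection only if no candidate axis separates):

```python
def separated_on_axis(axis, verts_a, verts_b):
    # Project both vertex sets onto the candidate axis; the two
    # convex shapes are separated on this axis iff their
    # projection intervals are disjoint.
    def interval(verts):
        d = [x * axis[0] + y * axis[1] + z * axis[2]
             for (x, y, z) in verts]
        return min(d), max(d)
    a_min, a_max = interval(verts_a)
    b_min, b_max = interval(verts_b)
    return a_max < b_min or b_max < a_min

cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
far_cube = [(x + 3, y, z) for (x, y, z) in cube]
print(separated_on_axis((1, 0, 0), cube, far_cube))  # True
```

A frustum is just such a convex vertex set (its 8 corners), so the same interval test covers it once the axis list is built.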
From: Tom Forsyth <tom.forsyth@ee...>  20050526 03:38:38

Well, if your mipmap filter is a standard dumb box filter that just takes each 2x2 block and finds the average colour (in some space - linear, gamma-corrected, YUV, etc.), then it doesn't matter either way, because your filter kernel never needs to try to sample off the edge. If you're doing something fancier then maybe you do care, but I bet you don't actually care that much - the difference is probably going to be small, especially if you always assume wrapping.

And I believe we've already had a discussion a while back where some people moderately persuasively pointed out that a box filter is actually the best possible mipmap filter. Slightly counterintuitive though.

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@... On Behalf Of Brad_Byrd@...
> Sent: 25 May 2005 19:02
> To: gdalgorithms-list@...
> Subject: [Algorithms] Wrap modes and MIP map generation.
>
> I'm working on our texture pipeline, and my lead mentioned to me that I
> should be sure to take the texture wrap mode (wrap vs. clamp) into
> account when generating MIP maps. In other words, when the mode is set
> to clamp, I should clamp my filter on the edges of the map, but when the
> mode is wrap, I should let the filter kernel wrap around and pick up
> pixels on the opposite edge of the map.
>
> My question is: is this correct? I've never heard of anyone doing this
> before, so it kind of caught me off guard. I don't know enough about
> image/signal processing to know whether it is more correct to do it this
> way or to do it the "standard" way and just always clamp the filter
> regardless of wrap mode. Is the visual difference even worth the slight
> increase in complexity?
>
> Thanks,
>
> Brad...
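The wrap-vs-clamp distinction for a filter tap, in one helper (a sketch; it only matters once the kernel is wide enough to step off the edge, which the plain 2x2 box filter never does):

```python
def resolve_tap(i, size, mode):
    # Where an out-of-range kernel tap reads from: "wrap" picks up
    # the texel on the opposite edge of the texture, "clamp"
    # repeats the edge texel.
    if mode == "wrap":
        return i % size
    return max(0, min(i, size - 1))

print(resolve_tap(-1, 8, "wrap"))   # 7
print(resolve_tap(-1, 8, "clamp"))  # 0
print(resolve_tap(8, 8, "wrap"))    # 0
```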