gdalgorithms-list Mailing List for Game Dev Algorithms (Page 11)
From: John R. <jra...@gm...> - 2010-06-18 17:40:35
Yes, I'm actually using Scrabble's score for the game itself; however, that only gives you an indication of letter frequency. The more I think about it, I believe the perfect solution is to generate a database of word frequencies against a number of books with a known 'reading level'. I'm pretty sure that is the best clean solution to the problem.

John

On Fri, Jun 18, 2010 at 12:33 PM, Jonathan Sauer <jon...@gm...> wrote:
> Maybe you could use Scrabble's scoring to rank words.
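A minimal sketch of what that frequency database could look like in C++; the book file names and reading-level assignments below are made-up placeholders, not real data:

// Sketch: count word frequencies across a set of plain-text books,
// one frequency table per assumed "reading level". File names and the
// level assignments are hypothetical.
#include <cctype>
#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    // Hypothetical inputs: book file -> assumed reading level (1 = easiest).
    const std::map<std::string, int> books = {
        {"peter_rabbit.txt", 1},
        {"tom_sawyer.txt", 2},
        {"moby_dick.txt", 3},
    };

    // One frequency table per reading level.
    std::map<int, std::map<std::string, long>> freq;

    for (const auto& [path, level] : books) {
        std::ifstream in(path);
        std::string word, token;
        while (in >> token) {
            // Keep only letters, lowercased, so "Whale," and "whale" match.
            word.clear();
            for (unsigned char c : token)
                if (std::isalpha(c)) word.push_back((char)std::tolower(c));
            if (!word.empty()) ++freq[level][word];
        }
    }

    // A word that only shows up at high levels is a candidate "difficult" word.
    for (const auto& [level, table] : freq)
        std::cout << "level " << level << ": " << table.size()
                  << " distinct words\n";
}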
From: John R. <jra...@gm...> - 2010-06-18 17:37:01
Great idea Jeff, I didn't think about that. I could download a number of open-source books at various reading levels, and then build a database against each. That actually sounds like the perfect solution! Project Gutenberg, here I come with my big-ass Perl script...

On Fri, Jun 18, 2010 at 12:26 PM, Jeff Russell <je...@8m...> wrote:
> Run a number of novels or something through some simple app that tracks
> word frequencies, and you might have the start of a database at least.
From: Jonathan S. <jon...@gm...> - 2010-06-18 17:34:03
Hello,

> Any thoughts on an algorithm which could more or less automatically
> score the entire English language by 'difficulty to spell' and
> 'difficulty to recognize'? Assuming you have as input all of the
> data in a standard dictionary and thesaurus?

Maybe you could use Scrabble's scoring to rank words.

Jonathan
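For reference, Jonathan's suggestion is easy to sketch; the tile values below are the standard English Scrabble letter scores:

// Sketch: rank a word by the sum of standard English Scrabble tile values.
#include <cctype>
#include <iostream>
#include <string>

int scrabbleScore(const std::string& word) {
    // Tile values for 'a'..'z' in standard English Scrabble.
    static const int value[26] = {1, 3, 3, 2, 1, 4, 2, 4, 1, 8, 5, 1, 3,
                                  1, 1, 3, 10, 1, 1, 1, 1, 4, 4, 8, 4, 10};
    int score = 0;
    for (unsigned char c : word)
        if (std::isalpha(c)) score += value[std::tolower(c) - 'a'];
    return score;
}

int main() {
    for (const std::string w : {"cat", "quixotic"})
        std::cout << w << " -> " << scrabbleScore(w) << "\n"; // 5, 26
}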
From: Jeff R. <je...@8m...> - 2010-06-18 17:27:12
My thought was the same as yours as far as the frequency of use entering into it. If you can't find any good data on that, it might not be hard to generate it if you have a good deal of text on hand. Run a number of novels or something through some simple app that tracks word frequencies, and you might have the start of a database at least.

Jeff

On Fri, Jun 18, 2010 at 12:18 PM, John Ratcliff <jra...@gm...> wrote:
> One thing that would help immensely is if I had access to a word list of
> the 'most common' words in the English language.

--
Jeff Russell
Engineer, 8monkey Labs
www.8monkeylabs.com
From: John R. <jra...@gm...> - 2010-06-18 17:18:21
I have an interesting little project I'm working on, and I thought I would solicit the list to see if anyone else has some ideas.

I'm creating an educational word game that focuses on spelling and vocabulary; it is designed to run on mobile devices (iPad, iPhone, Droid, etc.). This is just a fun little side project I'm doing so my son can learn more hands-on programming. My daughter is doing the artwork, so we are making it a little family project.

I first wrote this game for an Apple II in 1983, so it's kind of fun to be making a new version for today's devices. Back then, I didn't have enough memory to store a really large word list. Today I have the ability to store the entire English dictionary, and not just the words but also every component associated with each word (synonyms, etymology, definitions, etc.).

The algorithm I am looking for is how to automatically come up with a 'difficulty' metric for each word in the English language. My thoughts are that I could consider the following:

(1) Length of the word, though to be honest very short words can be difficult too if they are obscure.
(2) Number of definitions.
(3) Field of study of the word (biology, physics, etc.). The open-source English dictionary I have access to provides this data.
(4) Whether the word is a verb, noun, etc.
(5) Cross-reference each word against a thesaurus and consider the difficulty/obscurity based on how many synonyms and antonyms there are in total.

One thing that would help immensely is if I had access to a word list of the 'most common' words in the English language. Hopefully I can find such a list, and this would provide me an excellent first guess at whether or not a word is obscure.

When you play the game you get to choose the difficulty level you want to play at, and there really could be two metrics: difficulty to spell, and difficulty in terms of recognizing the word. (The game itself more or less works like Wheel of Fortune or hangman; you are just trying to guess a single word rather than a phrase.)

Any thoughts on an algorithm which could more or less automatically score the entire English language by 'difficulty to spell' and 'difficulty to recognize'? Assuming you have as input all of the data in a standard dictionary and thesaurus?

Thanks,

John
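One hedged sketch of how factors (1) through (5) could be folded into a single score; the weights, struct fields, and sample numbers here are arbitrary illustrations, not derived from any real dictionary data:

// Sketch: combine the factors from the post into one difficulty score.
// The struct fields and weights are illustrative guesses; real data would
// come from the dictionary/thesaurus and a corpus frequency table.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>

struct WordInfo {
    std::string word;
    int definitions;    // factor (2)
    int synonyms;       // factor (5)
    double corpusFreq;  // occurrences per million words, from a corpus
};

double difficulty(const WordInfo& w) {
    double score = 0.0;
    score += 0.5 * w.word.size();   // (1) longer tends to be harder
    score -= 0.3 * w.definitions;   // (2) many senses suggests a common word
    score -= 0.2 * w.synonyms;      // (5) well-connected words are common
    // Rarity dominates: -log10 of relative frequency grows as words get rarer.
    score += 2.0 * -std::log10(std::max(w.corpusFreq, 1e-3) / 1e6);
    return score;
}

int main() {
    WordInfo cat{"cat", 8, 12, 4000.0};
    WordInfo syzygy{"syzygy", 1, 0, 0.05};
    std::cout << "cat: " << difficulty(cat)
              << "\nsyzygy: " << difficulty(syzygy) << "\n";
}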
From: Jon W. <jw...@gm...> - 2010-06-18 17:01:34
> at least the variance of each contributing pixel would have to be stored
> for each pixel

Unless I miss something, the variance is possible to calculate incrementally. You store the sum of the elements, the sum of the squares of the elements, and the number of elements contributing, all of which can be incrementally added to. But maybe I'm missing something?

Sincerely,

jw
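Jon's suggestion in code form: a per-voxel accumulator that stores only the count, the sum, and the sum of squares, and recovers mean and variance on demand (a sketch; for very large counts Welford's update is numerically safer):

// Sketch: incremental mean/variance from count, sum, and sum of squares.
#include <iostream>

struct Accumulator {
    long   n  = 0;
    double s  = 0.0;  // running sum of samples
    double s2 = 0.0;  // running sum of squared samples

    void add(double x) { ++n; s += x; s2 += x * x; }

    double mean() const { return n ? s / n : 0.0; }

    // Population variance: E[x^2] - E[x]^2.
    double variance() const {
        if (n == 0) return 0.0;
        double m = mean();
        return s2 / n - m * m;
    }
};

int main() {
    Accumulator a;
    for (double x : {2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0}) a.add(x);
    std::cout << "mean " << a.mean() << " var " << a.variance() << "\n";
    // Prints: mean 5 var 4
}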
From: Stefan D. <ste...@gm...> - 2010-06-18 09:05:50
Jon,

I'm not sure how one would model this as a probability density function. If one wanted the voxel value to be the expected value of the probability density function, at least the variance of each contributing pixel would have to be stored for each pixel. Also, there needs to be a function which takes the distance and value of a pixel as parameters and maps them into the space of the probability density function of a voxel.

Am I missing something with the probability-density-function approach?

Stefan
From: Jon W. <jw...@gm...> - 2010-06-16 19:13:53
If the algorithm requires "rich" data, then no, you can't incrementally compute it.

If the algorithm ends up with a probability/density function for each target voxel after each image pass, then you could just adjust the probabilities based on the new data, which is possible to do incrementally.

Sincerely,

jw
From: Stefan D. <ste...@gm...> - 2010-06-16 11:00:33
Very interesting approaches here.

@Manuel: I think the 3D DDA (or the Bresenham 3D algorithm suggested by Jon Watte) might be the way to go.

@Jon Olick: The application is 3-D ultrasound (US) reconstruction from spatially positioned and oriented 2-D US images. We basically use an optical tracking system and spatially track the US probe. This way we can record 3D US volumes using a conventional 2D US machine. Your suggested rasterization algorithm, which projects the voxel center onto the image plane and calculates the distance of the projected point in the plane, also looks fine, but I'm not sure I can implement it more efficiently than a modified Bresenham 3D or 3D DDA algorithm.

@Jon Watte: If we end up weighting the contribution of each pixel in the proximity of a voxel inverse linearly to the distance to the voxel center, there might not be a way around using some extra storage for each voxel. We might end up storing the distance and color of a pixel for each voxel during distance calculation and (as Manuel also mentioned) calculating the color of each voxel as a post-processing step. I don't see how it would be possible to compose the value for each voxel incrementally. Am I missing something here?

Thanks for this excellent discussion so far,

Stefan
From: Jon W. <jw...@gm...> - 2010-06-15 16:52:32
You can incrementally build the 3D voxels by tracing the plane of each contributing image, using an algorithm similar to Bresenham or Marching Cubes. As long as the value for each voxel can be composed incrementally, you can do this for images sequentially without needing extra storage.

And yes, recursive tree-like data structures are almost always used for voxels, because > 90% of the voxel volume is either all-solid or all-open.

Btw: if you're interested in voxel terrain, you probably should check out the implementation in the C4 engine. It's pretty nice feature-wise, and high-quality code as well. The source license is $350, although you probably want to be careful with the license terms; you probably can't copy and paste code from the engine into your own code, for example.

Sincerely,

jw
From: Jon O. <jon...@gm...> - 2010-06-13 19:11:19
If it's a 2D-image-to-3D-voxel rasterization algorithm, definitely make an oct-tree of the voxel data and then intersect it with the 2D image plane (plus a depth if you want it water-tight), and further clip by the extents of the image. Then, for each voxel which is considered to intersect the 2D image volume, determine the color by projecting the voxel center onto the image plane and then taking a sample (bi-linear interpolation with mip-maps, perhaps?) of the image. You could then alpha the voxel based on the amount that the voxel intersects the volume. For example, 20% inside = 20% alpha.
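A sketch of the sampling step Jon describes, assuming the image plane is given by a corner point plus two in-plane unit axes and a pixel size (the names and layout are illustrative, not from any particular codebase):

// Sketch: project a voxel center onto an oriented image plane and
// bilinearly sample the image there.
#include <cmath>
#include <iostream>
#include <vector>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

struct Image {
    int w, h;
    std::vector<float> px;  // w*h grayscale samples, row-major
    Vec3 origin, u, v;      // plane corner + unit in-plane axes
    double pixelSize;       // world units per pixel

    float at(int x, int y) const { return px[y * w + x]; }

    // Returns false if the projected point falls outside the image.
    bool sample(Vec3 p, float& out) const {
        Vec3 d = sub(p, origin);
        double fx = dot(d, u) / pixelSize;  // continuous pixel coordinates
        double fy = dot(d, v) / pixelSize;
        int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
        if (x0 < 0 || y0 < 0 || x0 + 1 >= w || y0 + 1 >= h) return false;
        double tx = fx - x0, ty = fy - y0;  // bilinear blend weights
        out = (float)((1-tx)*(1-ty)*at(x0,   y0) + tx*(1-ty)*at(x0+1, y0) +
                      (1-tx)*ty    *at(x0, y0+1) + tx*ty    *at(x0+1, y0+1));
        return true;
    }
};

int main() {
    Image img{2, 2, {0.f, 1.f, 1.f, 2.f}, {0,0,0}, {1,0,0}, {0,1,0}, 1.0};
    float s;
    if (img.sample({0.5, 0.5, 0.0}, s)) std::cout << s << "\n";  // 1
}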
From: Jon O. <jon...@gm...> - 2010-06-13 18:56:54
Ah, so kind of like a rasterization of the 2D image into the 3D voxel space?
From: Manuel M. <m.m...@wa...> - 2010-06-13 18:31:09
Hi,

> @Manuel: I think he wants to find the nearest point on the image to the
> voxel. I'm not sure how a DDA tracing algorithm would help there.

I was thinking of the following: think of each 2D scanline as a 3D ray which passes through the voxel grid. Traverse the voxel grid along the ray direction using DDA, inserting the intersecting pixel spans (in fact a quadrilateral) into the appropriate voxel.

Additionally, one could use summed-area tables or some such to pre-filter/match the voxel and pixel footprints approximately.

bye,

Manuel
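For reference, the classic 3D DDA voxel walk (in the style of Amanatides & Woo) that this scanline-as-ray idea rests on, sketched for a grid of unit voxels with its corner at the origin:

// Sketch: Amanatides & Woo style 3D DDA - visit every voxel a ray passes
// through. Assumes unit voxels and a ray direction with nonzero components.
#include <cmath>
#include <cstdio>

void walk(double ox, double oy, double oz,  // ray origin
          double dx, double dy, double dz,  // ray direction
          int nx, int ny, int nz)           // grid dimensions
{
    int x = (int)std::floor(ox), y = (int)std::floor(oy), z = (int)std::floor(oz);
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1, stepZ = dz > 0 ? 1 : -1;
    // Ray parameter t at which the ray first crosses a boundary on each axis...
    double tMaxX = ((x + (stepX > 0)) - ox) / dx;
    double tMaxY = ((y + (stepY > 0)) - oy) / dy;
    double tMaxZ = ((z + (stepZ > 0)) - oz) / dz;
    // ...and how much t it costs to cross one whole voxel on each axis.
    double tDeltaX = stepX / dx, tDeltaY = stepY / dy, tDeltaZ = stepZ / dz;

    while (x >= 0 && x < nx && y >= 0 && y < ny && z >= 0 && z < nz) {
        std::printf("voxel (%d,%d,%d)\n", x, y, z);  // splat the pixel span here
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
        else                                { z += stepZ; tMaxZ += tDeltaZ; }
    }
}

int main() { walk(0.5, 0.5, 0.5, 1.0, 0.7, 0.3, 4, 4, 4); }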
From: Jon O. <jon...@gm...> - 2010-06-13 18:04:12
|
@Manuel: I think he wants to find the nearest point on the image to the voxel. I'm not sure how a DDA tracing algorithm would help there. Whats the application? That all sounds like good ideas. If you want nearest filtering, the first approach is probably best. I would bet though that you want a smooth output, like radiating heat. So option number 3 sounds best for that. Alternatively to the inverse distance you could do a 3D Gaussian on the data depending on what kind of output you want. On Sun, Jun 13, 2010 at 10:38 AM, Stefan Dänzer <ste...@gm...>wrote: > Dear List, > > I'm trying to implement an efficient splatting algorithm. The input of the > system are two-dimensional images which have position and orientation in > space. Using these input images I want to "fill" a three-dimensional voxel > grid which also has position and orientation in space (Represented by a 4x4 > transformation matrix: 3x3 rotation matrix and translation vector). > > In order to fill the voxel grid using the pixels on the two-dimensional > input images one needs to implement a splatting / interpolation method which > determines the value of a voxel depending on the position, size and color of > the pixels on the input. There are three splatting methods for my data which > seem to make sense to me. > > 1. Find the nearest pixel to a voxel and assign the color of the pixel to > the voxel (nearest Neighbor) > 2. Find the n-nearest pixels to a voxel and weight the color-contribution > of these pixels inverse linearly with the distance of the pixels to the > voxel > 3. Find the nearest pixels within a pre-defined distance to the voxel and > weight the color-contribution of these pixels inverse linearly with the > distance to the voxel > > In ether way one would iterate through all voxels in the output and find > some pixels on the input images near it. This clearly calls for some sort of > efficient bounding hierarchy creation before conducting the neighbor search. > I was thinking that putting the voxels into an octree data structure and > putting the pixels into a quadtree datastructure might do for a good enough > performance. > > Another method to speed up the computation further is to reduce the number > of output voxels (given a fixed voxel size) by aligning the voxel grid > efficiently before the search. I basically try to align the voxel grid in a > way that the two dimensional inputs fit into the object-aligned bounding box > with maximum space occupancy on the voxel grid (minimal number of voxels > given a fixed voxel side length). > > I thought of a statistical approach here. The problem boils down to placing > and aligning two of the local coordinate axes of the volume along the > directions with the most variance in the dataset (with the dataset being the > positions of the pixels on the input images). This could be done by > calculating the covariance matrix of those positions and performing a > principal component analysis (PCA) on the covariance matrix. This gives the > eigenvalues and eigenvectors along the main variance directions of the input > pixel positions. The two eigenvectors with the largest eigenvalues would > then be good axes for the coordinate system of the voxel grid. The third > axis can then be determined by the "right-hand-rule". > > Regards, > > Stefan > -- > -- > Stefan Daenzer > Körnerplatz 8 > 04107 Leipzig > > > "Work like you don't need the money, love like you've never been hurt and > dance like no one is watching." 
> - Randall G Leighton |
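For comparison, the gather approach Jon favors (Stefan's option 3, optionally with a Gaussian weight) might look like the sketch below; the pixel list format and `radius` are assumptions, not anything from the thread.

###
import math

def splat_voxel(voxel_center, pixels, radius):
    # pixels: iterable of (world_position, color) taken from the input images.
    # Weight every pixel within `radius` inverse-linearly with distance.
    total_w, accum = 0.0, [0.0, 0.0, 0.0]
    for pos, color in pixels:
        d = math.dist(voxel_center, pos)
        if d < radius:
            w = 1.0 - d / radius  # 1 at the voxel centre, 0 at the radius
            # Or a Gaussian falloff, with sigma a chosen smoothing width:
            # w = math.exp(-d * d / (2.0 * sigma * sigma))
            total_w += w
            accum = [a + w * c for a, c in zip(accum, color)]
    if total_w == 0.0:
        return None               # no pixel close enough to contribute
    return tuple(a / total_w for a in accum)
###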
From: Manuel M. <m.m...@wa...> - 2010-06-13 16:21:11
|
Hi Stefan, > 1. Find the nearest pixel to a voxel and assign the color of the pixel to the voxel (nearest neighbor) > 2. Find the n nearest pixels to a voxel and weight the color contribution of these pixels inverse-linearly with the distance of the pixels to the voxel > 3. Find the nearest pixels within a pre-defined distance to the voxel and weight the color contribution of these pixels inverse-linearly with the distance to the voxel Have you considered just using a 3D DDA which traces each 2d image through the voxel grid? This might be more efficient, although it will require a larger working set (because it involves collecting a number of contributing pixels and filtering them as a post-process). cheers, Manuel |
From: Stefan D. <ste...@gm...> - 2010-06-13 15:39:19
|
Dear List,

I'm trying to implement an efficient splatting algorithm. The inputs of the system are two-dimensional images which have position and orientation in space. Using these input images I want to "fill" a three-dimensional voxel grid which also has position and orientation in space (represented by a 4x4 transformation matrix: 3x3 rotation matrix and translation vector).

In order to fill the voxel grid using the pixels on the two-dimensional input images one needs to implement a splatting / interpolation method which determines the value of a voxel depending on the position, size and color of the pixels on the input. There are three splatting methods for my data which seem to make sense to me.

1. Find the nearest pixel to a voxel and assign the color of the pixel to the voxel (nearest neighbor)
2. Find the n nearest pixels to a voxel and weight the color contribution of these pixels inverse-linearly with the distance of the pixels to the voxel
3. Find the nearest pixels within a pre-defined distance to the voxel and weight the color contribution of these pixels inverse-linearly with the distance to the voxel

Either way one would iterate through all voxels in the output and find some pixels on the input images near each one. This clearly calls for some sort of efficient bounding hierarchy before conducting the neighbor search. I was thinking that putting the voxels into an octree and the pixels into a quadtree might give good enough performance.

Another way to speed up the computation further is to reduce the number of output voxels (given a fixed voxel size) by aligning the voxel grid efficiently before the search. I basically try to align the voxel grid so that the two-dimensional inputs fit into the object-aligned bounding box with maximum space occupancy on the voxel grid (minimal number of voxels given a fixed voxel side length).

I thought of a statistical approach here. The problem boils down to placing and aligning two of the local coordinate axes of the volume along the directions with the most variance in the dataset (the dataset being the positions of the pixels on the input images). This could be done by calculating the covariance matrix of those positions and performing a principal component analysis (PCA) on it. This gives the eigenvalues and eigenvectors along the main variance directions of the input pixel positions. The two eigenvectors with the largest eigenvalues would then be good axes for the coordinate system of the voxel grid. The third axis can then be determined by the right-hand rule.

Regards,

Stefan
--
Stefan Daenzer
Körnerplatz 8
04107 Leipzig

"Work like you don't need the money, love like you've never been hurt and dance like no one is watching." - Randall G Leighton |
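The PCA alignment Stefan outlines fits in a few lines of numpy; a sketch, assuming pixel_positions is an (N, 3) array of world-space pixel centres.

###
import numpy as np

def align_grid_axes(pixel_positions):
    pts = np.asarray(pixel_positions, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)    # 3x3 covariance of the positions
    evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    u = evecs[:, 2]                     # direction of largest variance
    v = evecs[:, 1]                     # second-largest variance
    w = np.cross(u, v)                  # third axis by the right-hand rule
    return np.column_stack((u, v, w)), centroid  # grid rotation + origin
###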
From: Jon W. <jw...@gm...> - 2010-06-08 18:09:47
|
> > 2) You definitely want to sum first, and then divide. In fact, if you want a value from 0 .. 127, and have 10-bit numbers (0 .. 1023) coming in, I would sample four times at evenly spread intervals into a ring buffer, and calculate the output values as the sum of the last four samples, shifted right five bits. (Your range will be 0 .. 4092, which divided by 32 ends up as 0 .. 127.)
>
> This was exactly the form that caused the problem around values with a lot of bits set. You're trying to discard the random bottom bits, but you include their influence by summing before removing them by truncation.

Signal theory says you can't possibly be more noisy after summing and truncating than you were before. There is no way for the low 2 bits of input to "leak" into higher values on the output. There *is* a way for the outcome to toggle between, say, 63 and 64, at your sample rate, if there is a signal component present at (or close to) your Nyquist frequency. There's basically nothing you can do against that. And it will "flip the bits" for all the bits -- but, seen as a value, that's not really a problem.

Basically, what I hear you saying is that, when you go from 10 bits input to 7 bits data using oversampling and summing, you will still get noise in the lowest bit of the 7-bit data, which means that you have at least 4 noisy bits in the input, not just two. The way to fix that is to make the input less noisy.

There are many things that can make the input noisy. The input to the ADC is one -- that's where you want a good filter if possible (the RC was one suggestion which is cheap; there are better :-). Perhaps your ADC already has an adjustable anti-aliasing filter, in which case you can just lower the cut-off of that. The reference voltage for the ADC is another source of noise. If you can't get that steady, you're in trouble. Again, some RC goodness on that input might help.

Another option is to apply real DSP. Summing values just implements a simple FIR filter with a very shallow slope. You can get better response by designing a four-pole IIR filter and feeding it the inputs (assuming you can get integer quantization to stay stable -- else do it as two biquad filters). Set the cut-off at something significantly lower than your sampling rate. For example, if you sample at 4 kHz (4 samples per tick for the USB 1 ms tick rate, say), then you can set the cut-off frequency (-3 dB point) to 200 Hz and nobody will feel any lag.

But, really -- 7 bits is such a low dynamic range (~42 dB), it should be easily addressable using cheap components.

Sincerely,

jw |
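One stage of the IIR approach Jon suggests, as a float sketch using the widely known RBJ-cookbook low-pass coefficients; cascading two instances gives the four-pole response he mentions, and a fixed-point port would be needed on a small microcontroller.

###
import math

class Biquad:
    # Direct Form I low-pass stage (RBJ audio-EQ-cookbook coefficients).
    def __init__(self, sample_rate, cutoff_hz, q=0.7071):
        w0 = 2.0 * math.pi * cutoff_hz / sample_rate
        alpha = math.sin(w0) / (2.0 * q)
        cw = math.cos(w0)
        a0 = 1.0 + alpha
        self.b0 = (1.0 - cw) / (2.0 * a0)
        self.b1 = (1.0 - cw) / a0
        self.b2 = self.b0
        self.a1 = -2.0 * cw / a0
        self.a2 = (1.0 - alpha) / a0
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def process(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y

# Jon's example numbers: 4 kHz sample rate, 200 Hz cut-off.
lpf = Biquad(4000.0, 200.0)
###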
From: <pad...@ob...> - 2010-06-08 13:43:29
|
Adding an RC (effective low-pass filter) is a trick that works because of the TTL (or CMOS) thresholds. Bounce can be seen as high-frequency contamination. Hysteresis is not a phenomenon that concerns us, nor is it a significant property of RC, which can be considered linear for all but the most pathological components. However, of course the lowpass adds a small delay to all operations, since the rise time is finite. Since bounce can be considered above a couple of hundred Hz, the slugging time of the switch will be at worst a few milliseconds. (Nothing that ever bothered us on old consoles.)

With regard to the LSB inaccuracy: I'm sorry that I didn't fully understand the OP's question, but the first thing that popped into my mind was that this is a dither problem. Dither only works when you add a _known_ noise source (predictable in spectral terms) to swamp the (unknown) LSB errors.

Andy

On 08 June 2010 at 07:38 Robin Green <rob...@gm...> wrote:
> On Mon, Jun 7, 2010 at 9:12 PM, Jon Watte <jw...@gm...> wrote:
> > I don't get the AND thing -- you can't get instant key response that way, because you need to wait for all the words to have the bit set before it registers as a "key down." Is there more state there than the description?
>
> The idea comes from Jack Ganssle's article on Software Debouncers:
> http://www.ganssle.com/debouncing-pt2.htm
>
> > When it comes to the ADC, there are a few things I would think about, FWIW:
> > 1) Put a capacitor and resistor on the analog input before the ADC. I don't care what they say; it's cheap, and very helpful!
>
> An R/C network on the input, isn't that just another form of hysteresis? And one that's subject to the astonishing inaccuracies of electrolytic caps?
>
> > 2) You definitely want to sum first, and then divide. In fact, if you want a value from 0 .. 127, and have 10-bit numbers (0 .. 1023) coming in, I would sample four times at evenly spread intervals into a ring buffer, and calculate the output values as the sum of the last four samples, shifted right five bits. (Your range will be 0 .. 4092, which divided by 32 ends up as 0 .. 127.)
>
> This was exactly the form that caused the problem around values with a lot of bits set. You're trying to discard the random bottom bits, but you include their influence by summing before removing them by truncation.
>
> The solution I ended up using records the previous ADC value at its full 10-bit resolution. A new ADC value has to exceed this old sample by more than the hysteresis amount before I allow it through to truncation. The insight was to keep the comparison values at full resolution, discard changes below the hysteresis threshold, and truncate the samples only just before they are used. Values are now rock solid. The problem was that the premature truncation was discarding important information.
>
> > 3) You probably also want to implement some dead space at both ends in the hardware. This means, if the summed range is 0 .. 4092, you might actually want to treat 0 .. 100 as 0 and 3992 .. 4092 as 127, and spread the rest of the interval between the two values, assuming you have efficient multiplication or division. If this is for a thumb stick (game controller style) you also want a dead space in the center, although that could potentially be applied in the software driver rather than firmware.
>
> Linear POTs seem to have dead space at either end built in, but the dead center is a good idea - but should it be circular or square?
>
> - Robin Green. |
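To make the dither idea concrete: TPDF dither sums two uniform random values, giving triangular noise about one output LSB wide, added before the truncation. A sketch, with drop_bits (the number of noisy bits discarded) as the only assumption:

###
import random

def dither_then_truncate(sample, drop_bits):
    # Known triangular (TPDF) noise, roughly +/- 1 LSB of the truncated
    # output, decorrelates the quantization error from the signal.
    lsb = 1 << drop_bits
    noise = random.randrange(lsb) + random.randrange(lsb) - lsb
    return max(0, sample + noise) >> drop_bits
###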
From: Robin G. <rob...@gm...> - 2010-06-08 05:38:52
|
On Mon, Jun 7, 2010 at 9:12 PM, Jon Watte <jw...@gm...> wrote:
> I don't get the AND thing -- you can't get instant key response that way, because you need to wait for all the words to have the bit set before it registers as a "key down." Is there more state there than the description?

The idea comes from Jack Ganssle's article on Software Debouncers:
http://www.ganssle.com/debouncing-pt2.htm

> When it comes to the ADC, there are a few things I would think about, FWIW:
> 1) Put a capacitor and resistor on the analog input before the ADC. I don't care what they say; it's cheap, and very helpful!

An R/C network on the input, isn't that just another form of hysteresis? And one that's subject to the astonishing inaccuracies of electrolytic caps?

> 2) You definitely want to sum first, and then divide. In fact, if you want a value from 0 .. 127, and have 10-bit numbers (0 .. 1023) coming in, I would sample four times at evenly spread intervals into a ring buffer, and calculate the output values as the sum of the last four samples, shifted right five bits. (Your range will be 0 .. 4092, which divided by 32 ends up as 0 .. 127.)

This was exactly the form that caused the problem around values with a lot of bits set. You're trying to discard the random bottom bits, but you include their influence by summing before removing them by truncation.

The solution I ended up using records the previous ADC value at its full 10-bit resolution. A new ADC value has to exceed this old sample by more than the hysteresis amount before I allow it through to truncation. The insight was to keep the comparison values at full resolution, discard changes below the hysteresis threshold, and truncate the samples only just before they are used. Values are now rock solid. The problem was that the premature truncation was discarding important information.

> 3) You probably also want to implement some dead space at both ends in the hardware. This means, if the summed range is 0 .. 4092, you might actually want to treat 0 .. 100 as 0 and 3992 .. 4092 as 127, and spread the rest of the interval between the two values, assuming you have efficient multiplication or division. If this is for a thumb stick (game controller style) you also want a dead space in the center, although that could potentially be applied in the software driver rather than firmware.

Linear POTs seem to have dead space at either end built in, but the dead center is a good idea - but should it be circular or square?

- Robin Green. |
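Robin's fix, restated as a sketch: keep the comparison at full 10-bit resolution and truncate only on output. The threshold and bit counts here are illustrative, not his actual values.

###
class AdcReader:
    def __init__(self, hysteresis=4, drop_bits=3):
        self.hysteresis = hysteresis  # band width, in full-resolution counts
        self.drop_bits = drop_bits
        self.last_raw = 0             # previous ADC value, all 10 bits kept

    def read(self, raw):
        # Accept a new value only once it escapes the hysteresis band.
        if abs(raw - self.last_raw) > self.hysteresis:
            self.last_raw = raw
        return self.last_raw >> self.drop_bits  # truncate last of all
###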
From: Jon W. <jw...@gm...> - 2010-06-08 04:13:02
|
I don't get the AND thing -- you can't get instant key response that way, because you need to wait for all the words to have the bit set before it registers as a "key down." Is there more state there than the description?

When it comes to the ADC, there are a few things I would think about, FWIW:

1) Put a capacitor and resistor on the analog input before the ADC. I don't care what they say; it's cheap, and very helpful!

2) You definitely want to sum first, and then divide. In fact, if you want a value from 0 .. 127, and have 10-bit numbers (0 .. 1023) coming in, I would sample four times at evenly spread intervals into a ring buffer, and calculate the output values as the sum of the last four samples, shifted right five bits. (Your range will be 0 .. 4092, which divided by 32 ends up as 0 .. 127.)

3) You probably also want to implement some dead space at both ends in the hardware. This means, if the summed range is 0 .. 4092, you might actually want to treat 0 .. 100 as 0 and 3992 .. 4092 as 127, and spread the rest of the interval between the two values, assuming you have efficient multiplication or division. If this is for a thumb stick (game controller style) you also want a dead space in the center, although that could potentially be applied in the software driver rather than firmware.

Sincerely,

jw

--
Americans might object: there is no way we would sacrifice our living standards for the benefit of people in the rest of the world. Nevertheless, whether we get there willingly or not, we shall soon have lower consumption rates, because our present rates are unsustainable.

On Fri, May 28, 2010 at 3:18 PM, Tom Forsyth <tom...@ee...> wrote:
> Not sure I understand what answer you want out of the last example. The value is 63.5 +/- 0.5. You can either get back 63 or 64, but no matter how you filter it, that value will still oscillate slightly at some ADC setting.
>
> If you literally want no change if there's only a noise variance, just do a threshold trick - do the filtering the way you're doing it, and then if the filtered result is within some epsilon of the previously-returned value, keep returning that previous value. Otherwise, send the new value. It's a standard trick for things like analogue joysticks. It works OKish, but has bizarre stair-step response to slow movement (whereas a properly-filtered answer will also move slowly).
>
> TomF.
>
> -----Original Message-----
> From: Robin Green [mailto:rob...@gm...]
> Sent: Friday, May 28, 2010 10:02 AM
> To: Game Development Algorithms
> Subject: Re: [Algorithms] Truncate then average? Or average then truncate?
>
> > OK, imagine this. We have two noisy low bits, so we're removing the bottom three bits of the value and averaging four samples. If the value is around 180 we get:
> >
> >      10110101b
> >    + 10110100b
> >    + 10110110b
> >    + 10110101b
> >    ---------------
> >    1011010100b >> 2
> >    ---------------
> >    =  10110101b
> >
> > But if the value is close to a power of two, e.g. 64, we get unstable values:
> >
> >      1000000b
> >    + 0111111b
> >    + 1000001b
> >    + 0111111b
> >    ------------
> >    11111111b >> 2
> >    ------------
> >    =  0111111b
> >
> > The noise in the LSBs ends up flipping bits high up in the number in a way that truncating doesn't solve. Mr Bunnell is right, I should probably look more into numerical filtering rather than bit tricks. Duh.
> >
> > - Robin Green.
> >
> > On Fri, May 28, 2010 at 9:06 AM, Jeremiah Zanin <Jer...@vo...> wrote:
> > > Broken divider?
> > > I don't see how the noise bits can bleed into the other bits after dividing unless it's rounding up. Have you tried taking a power-of-two number of samples and shifting instead of dividing? |
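Jon's dead-space remap from point 3 could be sketched as follows; the band edges and output range are the thread's example numbers, everything else is illustrative.

###
def apply_dead_zones(summed, lo=100, hi=3992, out_max=127):
    # Clamp the dead bands at each end of the 0 .. 4092 summed range,
    # then spread the remaining interval linearly over 0 .. out_max.
    if summed <= lo:
        return 0
    if summed >= hi:
        return out_max
    return (summed - lo) * out_max // (hi - lo)
###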
From: Yannick L. <yan...@ub...> - 2010-06-01 21:34:05
|
Yeah, actually I think combining your decomposition with Robin Green's inversion to reconstruct the permutation from the decomposition will do the trick. Thanks all for your input.

-----Original Message-----
From: Tim Ambrogi [mailto:tim...@gm...]
Sent: 1 juin 2010 16:59
To: Game Development Algorithms
Subject: Re: [Algorithms] Generating the ith permutation of N elements

If I understand the problem correctly, a multi-base number conversion algorithm is what you are looking for. That is, you have a number (the number of the permutation), and you have a number of sub-elements with a number of possible values each. This is the same problem as decimal-to-binary conversion, or extracting radices from a decimal number. I have written a wee-small python script (below) to demonstrate the algorithm I would use, which is O(n) over the number of bases. Let me know if this is helpful to you, or if this isn't the problem you are trying to solve.

Cheers,

--Tim A.
Co-Founder/Engineer, Final Form Games
http://www.finalformgames.com/

###
#some example bases:
#bases = [10, 10, 10, 10, 10, 10, 10, 10, 10] # number -> decimal radices
#bases = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] # decimal -> 16-bit binary
#bases = [365, 24, 60, 60] # seconds -> day/hour/minute/second
bases = [10, 10, 10, 10, 10, 10, 10, 10, 10]
which = 128 # number of the permutation you want

perm = []
accum = 1
x = which
for i in reversed(range(len(bases))):
    s = bases[i]
    n = (x // accum) % s    # integer math avoids float rounding
    perm.insert(0, n)
    x -= n * accum
    accum *= s

print("Permutation #" + str(which) + " is " + str(perm))
###

On Tue, Jun 1, 2010 at 3:34 PM, Yannick Letourneau <yan...@ub...> wrote:
> Yes any consistent order will do.
>
> Yannick
>
> From: Danny Kodicek [mailto:dr...@we...]
> Sent: 1 juin 2010 14:48
> To: Game Development Algorithms
> Subject: Re: [Algorithms] Generating the ith permutation of N elements
>
> > I'm looking for a way to generate the ith permutation of N elements, at approximately O(N) cost.
> >
> > All algorithms I've seen need the (i-1)th permutation in order to derive the ith permutation from it (e.g. they work incrementally).
>
> As far as I know there isn't a standard order for permutations and so there isn't such a thing as 'the' i'th permutation. Am I wrong? Or are you just looking for a consistent order rather than a standard one?
>
> Danny |
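Spelling out that combination: decomposing i in the factorial number system gives a Lehmer code, and picking the remaining elements by those digits yields the ith permutation in lexicographic order. A sketch (O(n^2) as written because of the list pops; an order-statistic tree over the pool would bring it near the O(N) Yannick asked for):

###
def ith_permutation(elements, i):
    pool = list(elements)
    digits = []
    for radix in range(1, len(pool) + 1):  # factorial-base digits of i,
        digits.append(i % radix)           # least significant first
        i //= radix
    digits.reverse()
    return [pool.pop(d) for d in digits]   # digit d picks the d-th survivor

# ith_permutation('abcd', 0)  -> ['a', 'b', 'c', 'd']
# ith_permutation('abcd', 23) -> ['d', 'c', 'b', 'a']
###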
From: <chr...@pl...> - 2010-06-01 21:25:54
|
Yannick Letourneau wrote: > I’m looking for a way to generate the ith permutation of N elements, > at approximately O(N) cost. [...] http://stackoverflow.com/questions/1506078/fast-permutation-number-permutation-mapping-algorithms Christer Ericson, Director of Tools and Technology Sony Computer Entertainment, Santa Monica http://realtimecollisiondetection.net/blog/ |
From: Tim A. <tim...@gm...> - 2010-06-01 20:58:58
|
If I understand the problem correctly, a multi-base number conversion algorithm is what you are looking for. That is, you have a number (the number of the permutation), and you have a number of sub-elements with a number of possible values each. This is the same problem as decimal-to-binary conversion, or extracting radices from a decimal number. I have written a wee-small python script (below) to demonstrate the algorithm I would use, which is O(n) over the number of bases. Let me know if this is helpful to you, or if this isn't the problem you are trying to solve.

Cheers,

--Tim A.
Co-Founder/Engineer, Final Form Games
http://www.finalformgames.com/

###
#some example bases:
#bases = [10, 10, 10, 10, 10, 10, 10, 10, 10] # number -> decimal radices
#bases = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] # decimal -> 16-bit binary
#bases = [365, 24, 60, 60] # seconds -> day/hour/minute/second
bases = [10, 10, 10, 10, 10, 10, 10, 10, 10]
which = 128 # number of the permutation you want

perm = []
accum = 1
x = which
for i in reversed(range(len(bases))):
    s = bases[i]
    n = (x // accum) % s    # integer math avoids float rounding
    perm.insert(0, n)
    x -= n * accum
    accum *= s

print("Permutation #" + str(which) + " is " + str(perm))
###

On Tue, Jun 1, 2010 at 3:34 PM, Yannick Letourneau <yan...@ub...> wrote:
> Yes any consistent order will do.
>
> Yannick
>
> From: Danny Kodicek [mailto:dr...@we...]
> Sent: 1 juin 2010 14:48
> To: Game Development Algorithms
> Subject: Re: [Algorithms] Generating the ith permutation of N elements
>
> > I’m looking for a way to generate the ith permutation of N elements, at approximately O(N) cost.
> >
> > All algorithms I’ve seen need the (i-1)th permutation in order to derive the ith permutation from it (e.g. they work incrementally).
>
> As far as I know there isn't a standard order for permutations and so there isn't such a thing as 'the' i'th permutation. Am I wrong? Or are you just looking for a consistent order rather than a standard one?
>
> Danny |
From: Yannick L. <yan...@ub...> - 2010-06-01 19:34:15
|
Yes, any consistent order will do.

Yannick

From: Danny Kodicek [mailto:dr...@we...]
Sent: 1 juin 2010 14:48
To: Game Development Algorithms
Subject: Re: [Algorithms] Generating the ith permutation of N elements

> I'm looking for a way to generate the ith permutation of N elements, at approximately O(N) cost.
>
> All algorithms I've seen need the (i-1)th permutation in order to derive the ith permutation from it (e.g. they work incrementally).

As far as I know there isn't a standard order for permutations and so there isn't such a thing as 'the' i'th permutation. Am I wrong? Or are you just looking for a consistent order rather than a standard one?

Danny |
From: Danny K. <dr...@we...> - 2010-06-01 19:11:18
|
> I’m looking for a way to generate the ith permutation of N elements, at approximately O(N) cost.
>
> All algorithms I’ve seen need the (i-1)th permutation in order to derive the ith permutation from it (e.g. they work incrementally).

As far as I know there isn't a standard order for permutations and so there isn't such a thing as 'the' i'th permutation. Am I wrong? Or are you just looking for a consistent order rather than a standard one?

Danny |