From: Daniel M. G. <dm...@uv...> - 2006-10-25 18:21:40
JD Smith twisted the bytes to say:

 JD> 0     ignore data (e.g. for flattening)
 JD> 255   process data
 JD> other process data, but exclude pixels from histogram correction estimate

Hi JD, everybody,

I just committed the change to support this feature in PTblender
(version 2.8.5pre7). JD, would you mind testing it?

----------------------------------------------------------------------
2006-10-25  dmg  <dm...@uv...>

	* version.h (VERSION), configure.ac: Upgraded to version 2.8.5pre7

	* ColourBrightness.c (ReadHistograms): Compute histograms only
	when mask == 255. Ignore otherwise.

-- 
Daniel M. German
  "Language alone protects us from the scariness of things with no
   names. Language alone is meditation."  -- Toni Morrison
http://turingmachine.org/
http://silvernegative.com/
dmg (at) uvic (dot) ca
replace (at) with @ and (dot) with .

From: Jim W. <jwa...@ph...> - 2006-10-25 18:40:27
Daniel M. German wrote:
> JD Smith twisted the bytes to say:
>
> JD> 0     ignore data (e.g. for flattening)
> JD> 255   process data
> JD> other process data, but exclude pixels from histogram correction estimate
>
> I just committed the change to support this feature in PTblender
> (version 2.8.5pre7):
>
> 2006-10-25  dmg  <dm...@uv...>
>
> * version.h (VERSION), configure.ac: Upgraded to version 2.8.5pre7
>
> * ColourBrightness.c (ReadHistograms): Compute histograms only
>   when mask == 255. Ignore otherwise.

Do we need another tool or option that would compare the overlap of the
two images and create a mask where the difference is greater than some
cutoff value? The purpose of this would be to eliminate objects that
have moved in one frame or are missing from the other; I am thinking
mostly of people in brightly colored clothing moving around. These
masks would only be used in calculating the histogram correction, not
for blending the final pan. Something similar could be used to
determine where the blending seams should be.

-- 
Jim Watters
jwatters @ photocreations . ca
http://photocreations.ca

From: Daniel M. G. <dm...@uv...> - 2006-10-25 20:36:58
Jim Watters twisted the bytes to say:

 Jim> Do we need another tool or option that would compare the overlap
 Jim> of the two images and create a mask where the difference is
 Jim> greater than some cutoff value. [...] These masks would only be
 Jim> used in calculating the histogram correction and not for blending
 Jim> the final pan.

Indeed, we need this, and I prefer a separate tool. That is in line
with the simplification of the panotools. Another tool that is needed
is one that feathers the image in a TIFF (taking the mask into
account).

-- 
Daniel M. German

From: JD S. <jd...@as...> - 2006-10-25 22:27:45
On Wed, 2006-10-25 at 11:21 -0700, Daniel M. German wrote:
> I just committed the change to support this feature in PTblender
> (version 2.8.5pre7).
>
> JD, would you mind testing it?

I'll have a look. FYI, I've added two new options, -i and -u, which
take percentage values (I also rearranged the usage text a bit):

Options:
  -o <prefix>    Prefix for output filename
  -k <index>     Index of image to use as a reference (0-based)
  -t [0,1,2]     Type of colour correction: 0 full (default),
                 1 brightness only, 2 colour only
  -f <filename>  Flatten images to a single TIFF file
  -i <percent>   Omit pixels with intensity differences larger than <percent>%
  -u <percent>   Omit pixels with hue differences larger than <percent>%
  -q             Quiet run
  -h             Show this message
  -c             Output Photoshop smooth curves (one per corrected file)
  -m             Output Photoshop arbitrary-map curves (one per corrected file)

I'll test these out on my perverse colour-shift cases. The theory is
that if there are significant unmasked differences between pixels at
the same location, omitting pixels with large hue or intensity
differences before matching the histograms should alleviate them. -u
in particular might be stable enough to be a "set and forget" option,
unless the white balance is flaky, etc. If things work out I'll send
the patches tomorrow.

JD

From: Daniel M. G. <dm...@uv...> - 2006-10-26 03:30:39
Hi JD,

JD Smith twisted the bytes to say:

 JD> I'll have a look. FYI, I've added two new options: -i and -u,
 JD> which are percentage numbers (I also rearranged the usage text a
 JD> bit):

Technically speaking, it is not that easy. Most images are not
perfectly aligned, so a per-pixel comparison threshold will probably
result in very few matches in most cases, and PTblender requires
regions of at least 1000 pixels to proceed with matching.

A better solution would be to smooth the images first; this can be done
with a Gaussian blur. Then the pixels are compared. Another option is
to consider not only the pixel but also its neighborhood.

That is why I think it would be better to implement these features
externally. The tool would take an image and create another image with
a mask that tells PTblender how to proceed. This has the benefit of
allowing easy experimentation with new methods for computing these
masks, without having to modify PTblender. Also, different algorithms
can be implemented without bloating PTblender.

 JD> I'll test these out on my perverse color shift cases. [...] If
 JD> things work out I'll send the patches tomorrow.

I am curious how it goes. Make sure you save the debug.txt files in
both cases, to see how many matches you get with the new threshold
values.

-- 
Daniel M. German

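Daniel's smooth-before-comparing suggestion can be sketched with a cheap 3x3 box blur standing in for the Gaussian he mentions; the function name and the single-channel row-major layout are assumptions for illustration:

```c
/* Each output pixel is the mean of its 3x3 neighbourhood (clipped at
 * the borders), so isolated one-pixel misalignments no longer dominate
 * a subsequent difference test. */
void box_blur3(const unsigned char *src, unsigned char *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sum = 0, n = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < w && yy >= 0 && yy < h) {
                        sum += src[yy * w + xx];
                        n++;
                    }
                }
            dst[y * w + x] = (unsigned char)(sum / n);   /* mean */
        }
    }
}
```

An external mask tool could blur both images this way, threshold the blurred difference, and write the result back as the gray level of the mask.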
From: JD S. <jd...@as...> - 2006-10-26 06:33:00
Attachments:
jds_ptblender.patch
On Oct 25, 2006, at 7:53 PM, Daniel M. German wrote:
> Technically speaking it is not that easy. Most images are not
> perfectly aligned. Having a comparison threshold will probably result
> in very few matches in most cases. PTblender requires regions of at
> least 1000 pixels to proceed with matching.

Right, but nonalignment in the sky or a field of grass is a non-issue.
The real issues are moving clouds, a swaying tree limb against sky,
etc., sending the histogram into oblivion. These are the outlier
points I'm trying to pre-trim, to give the histogram match a fighting
chance.

> A better solution will be to smooth the images before. This can be
> done with a gaussian blur. Then the pixels are compared.

That's a good idea. Actually, since enblend already does this (and
more), it might be good to talk Andrew into implementing at least a
simple scalar (or gamma) brightness correction prior to blending, based
on one of the smoothed image-compare sets. He must have all the logic
in enblend anyway (it needs to know the overlap to compute seams,
etc.).

My main issue with PTblender now is that, since it operates at 8 bits,
you can end up with severe banding at some colors. All that conversion
back and forth to HSV really does take a toll. It would have to be
much more careful to keep the mapping curves smooth and not too steep
to really avoid this. I admit I couldn't really follow the actual
mapping function construction: was this source decompiled from binary?

> I am curious how it goes. Make sure you save the debug.txt files in
> both cases, to see how many matches you get with the new threshold
> values.

I've implemented the trim thresholds, and they work reasonably well.
For instance, I specify -u 5 (only 5% hue variation allowed) and reject
only up to 20% of the overlapping pixels, simultaneously removing
(some) of the color cast. This doesn't seem to be a total magic
bullet, but then again I don't have a test image set with an obvious,
badly moving target (person, etc.); those you usually want to mask out
anyway. I've experimented with combining this with brightness-only
correction, and it also seems to work well. Hue-delta trimming in
particular seems very useful (intensity trimming is a bit more of an
issue if you have intrinsic strong brightness differences, as I often
do when clouds move through). With half a million pixels typical in my
overlap area, losing even 50% of them is not horrible.

A better method would be to compute the median intensity and hue deltas
and the variance about that median, then reject pixels which deviate
from the median by a significant multiple of the stddev. This would
require two passes through the data, and isn't nearly as easy to code,
but it would make intensity trimming much more useful (and hue trimming
more useful if you forgot to fix the white balance).

See my patches attached. They also clean up the calling syntax, add a
comment or two (to help me remember what's happening), and do away with
the final read through the TIFF files done simply to write a post-match
report to Debug.txt (this seems to cost about 30% of the runtime for
little value).

I would guess PTblender could be sped up a fair bit by:

- Standardizing on cropped TIFFs, and only comparing images if their
  borders overlap (now all (n choose 2) comparisons are made, for every
  pixel in the entire image -- probably 99% of all pixel checks are
  wasted).

- Only computing the histograms which are actually needed (not all 6 of
  RGBHSV).

- Not recalculating the return results of RGB -> HSV, etc.

JD

From: JD S. <jd...@as...> - 2006-10-26 06:53:15
On Oct 25, 2006, at 11:21 AM, Daniel M. German wrote:
> I just committed the change to support this feature in PTblender
> (version 2.8.5pre7).
>
> JD, would you mind testing it?

I tested it (in combination with my hue and intensity delta trimming)
and found it works well. Just for the record: I used multi-layer TIFF
output from Hugin, dragged a large selection rectangle across the
entire top of my image (where the clouds move and the trees bend in the
breeze), and then added a layer mask to each image layer from its alpha
channel. Then I used a brightness adjustment, moving the slider all
the way down, to turn the selected part of the "white" mask area to
gray, for all of my images. If I had to do a lot of these, I'd
probably write a Gimp script-fu and come up with a better way to turn
white -> gray repeatably.

I then used my multi-layer TIFF output script to send these layers,
with the new black/gray/white alpha applied, to individual (full-size)
TIFFs (modified to use deflate compression, since PTblender chokes on
LZW). Then I ran PTblender on these with -o, and ran enblend on the
results (curiously, I find enblend is actually faster than PTblender
-f). Luckily, enblend uses the convention that any non-zero alpha is
kept (I'm not sure if -f does?), so it all just works (a rare
experience with libpano/enblend/tiff/etc. ;). It's definitely a bit
cumbersome, but it gets the job done (then again, I find a tight hue
trim and a brightness-only correction achieves about the same results).

JD

From: Daniel M. G. <dm...@uv...> - 2006-10-26 18:39:23
JD Smith twisted the bytes to say:

 JD> I tested it (in combination with my hue and intensity delta
 JD> trimming), and found it works well. Just for the record, I used
 JD> multi-layer TIFF output from Hugin, dragged a large selection
 JD> rectangle across the entire top of my image (where the clouds move
 JD> and the trees bend in the breeze), and then added a layer mask to
 JD> each image layer from its alpha channel.

Hi JD,

I totally agree that the colour blending code requires major cleanup. I
wrote it by manually inspecting the assembly of PTstitcher, and in many
cases I did not have any idea what I was doing until everything fell
into place. That is why the data structures and some functions have
"funny" names.

Now, with respect to your patch: I consider this feature still
experimental. What I would really like is not to bloat PTblender (at
least yet) and instead to work at the mask level, externally. This is
my proposal:

* Add the ability to get the alpha channel from a second file, per
  image.

* Create a command line tool that creates the mask by analyzing the
  pixels (exactly the way you have done it). I really favor this
  approach because it is easy to implement and, more important, to
  extend. You would not need to dive into the code of PTmender. This
  will allow many different approaches to be implemented and compared.

* Generalize the code to do 16-bit processing.

* In the meantime, distribute your patch in a "contributions" section.

This would not be for 3.0.0, but it is something that needs to be done.
Comments?

I fully agree that its performance sucks. But you need to remember it
is an O(n^2) algorithm, so no matter what we do, it will always be
slower than enblend.

BTW, hopefully today I'll take care of the LZW issue.

dmg

From: JD S. <jd...@as...> - 2006-10-26 19:34:06
On Thu, 2006-10-26 at 11:37 -0700, Daniel M. German wrote:
> I totally agree that the colour blending code requires major
> cleanup. I wrote it by manually inspecting the assembly of PTstitcher
> and in many cases I did not have any idea what I was doing until
> everything fell in place.

Sounds pretty unpleasant!

> That is why the data structures and some functions have "funny"
> names.
>
> Now, with respect to your patch. I consider this feature still
> experimental. What I would really like is not to bloat PTblender (at
> least yet) and instead working at the mask level, externally.

I think it's quite simple (adding only a simple pair of tests for hue
and/or intensity diffs before computing the histogram; no new
calculations required), and entirely separable from the tri-level alpha
stuff. In fact, my best results came from "gray"-masking coupled with
hue trimming. It would appear more than half of the ColourBrightness.c
file could be removed (#if 0 blocks, uncalled functions, etc.), if
bloat is the concern.

> * Create a command line tool that creates the mask by analyzing the
>   pixels (exactly the way you have done it). I really favor this
>   approach, because it is easy to implement, and more important, to
>   extend. You would not need to dive into the code of PTmender. This
>   will allow many different approaches to be implementable and
>   comparable.

For generic trimming of pixels which differ too much, this is a
heavy-handed approach IMO, which in the end will require multiple
unnecessary passes through the data. Just to throw out the 10 or 20%
of pixels which differ in hue too much, I'd have to:

1. Read all images in.
2. Find where they intersect.
3. Compute the hue from RGB of all intersecting pixels.
4. Zero out the mask(s) where the hue differs too much.
5. Write out the mask data.

This essentially replicates more than half of the code already in
ColourBrightness, which to my way of thinking is inefficient from both
a coding and a runtime viewpoint.

> * Generalize the code to do 16 bit processing

Absolutely. We could also take care to keep the mapping functions
well-behaved, to avoid the kind of banding and color gaps which 8-bit
processing reveals. To do this right would probably require a rewrite.
Since I always use enblend in the end and fix white balance during
shooting, it may be easier to concentrate my efforts there.

JD

From: Daniel M. G. <dm...@uv...> - 2006-10-26 22:28:43
 JD> I think it's quite simple (adding only a simple pair of tests for
 JD> hue and/or intensity diffs before computing the histogram, no new
 JD> calculations required), and entirely separable from the tri-level
 JD> alpha stuff. In fact, my best results came from "gray"-masking
 JD> coupled with hue trimming. It would appear more than half of the
 JD> ColourBrightness.c file could be removed (#if 0, uncalled
 JD> functions, etc.), if bloat is the concern.

I totally agree. That is why it is #if-ed out: I refactor, but I don't
remove the original code until later. It still needs major changes to
make it ready for 16 bit, though.

 JD> For generic trimming of pixels which differ too much, this is a
 JD> heavy handed approach IMO, which in the end will require multiple
 JD> unnecessary passes through the data.

I don't see performance as a major issue (at least not now). The
external pass is O(n) when using cropped TIFFs, while PTblender itself
is O(n^2). It is not a major cost, IMHO.

 JD> This essentially replicates more than half of the code already in
 JD> ColourBrightness, which to my way of thinking is inefficient both
 JD> from a coding and runtime viewpoint.

As I said in my other message, let us add the functionality, but in a
more generic way. I hope this is a good compromise.

 >> * Generalize the code to do 16 bit processing

 JD> Absolutely. We could also take care to keep the mapping functions
 JD> well-behaved to avoid the kind of banding and color gaps which
 JD> 8-bit processing reveals. To do this right would probably require
 JD> a rewrite.

Probably. Let us see what we can rescue.

 JD> Since I always use enblend in the end and fix white balance during
 JD> shooting, it may be easier to concentrate my efforts there.

In the 16-bit processing?

-- 
Daniel M. German

From: Daniel M. G. <dm...@uv...> - 2006-10-26 22:14:44
Ok, let us put the code in, but in a more generalizable way.

Instead of adding two parameters, let us add only one: a pointer to a
struct. The struct will contain:

* Any info needed (in this case the two thresholds).

* A pointer to a function to call to determine whether one should use
  the points. This function should accept:

What do you think? This will make it flexible enough to add other
methods easily, without making the code difficult to maintain.

 JD> See my patches attached. They also clean up the calling syntax,
 JD> add a comment or two (to help me remember what's happening), and
 JD> do away with the final read through the TIFF files simply to write
 JD> a post-match report to Debug.txt (this seems to cost about 30% of
 JD> the runtime for little value).

 JD> I would guess PTblender could be sped up a fair bit by:
 JD> - Standardizing on cropped TIFFs, and only comparing images if
 JD>   their borders overlap.
 JD> - Only computing the histograms which are actually needed (not all
 JD>   6 of RGBHSV).
 JD> - Not recalculating the return results of RGB -> HSV, etc.

-- 
Daniel M. German

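Daniel's struct-plus-callback proposal might look roughly like this in C; all names here are hypothetical, not code from PTblender:

```c
/* A method struct bundles the parameters a trimming method needs with
 * a predicate deciding whether a pixel pair enters the histograms.
 * Adding a new method then means adding a new predicate, not touching
 * ReadHistograms. */

typedef struct pixel_pair {
    unsigned char rgb1[3];   /* pixel from image 1 */
    unsigned char rgb2[3];   /* same location, image 2 */
} pixel_pair;

typedef struct trim_method {
    double intensity_thresh;                   /* -i, percent */
    double hue_thresh;                         /* -u, percent */
    int (*use_pixel)(const struct trim_method *m,
                     const pixel_pair *p);     /* nonzero = keep */
} trim_method;

/* Trivial predicate: keep everything (the current behaviour). */
static int keep_all(const trim_method *m, const pixel_pair *p)
{
    (void)m; (void)p;
    return 1;
}
```

Inside the histogram loop, the hard-coded condition would then become a call through the pointer, e.g. `if (m->use_pixel(m, &pair)) ...`, so the loop itself never needs to change.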
From: JD S. <jd...@as...> - 2006-10-26 22:34:18
On Thu, 2006-10-26 at 15:14 -0700, Daniel M. German wrote:
> Ok, let us put the code in, but in a more generalizable way.
>
> Instead of adding 2 parms, let us add only one (a pointer to a
> struct). The struct will contain:
>
> * Any info needed (in this case the 2 thresholds)
>
> * A pointer to a function to call to determine if one should use the
>   points.
>
> What do you think?

Sounds reasonable. The problem is: what does that function take as
input, RGBHSV for both pixels? Pre-compute them all before deciding
whether to proceed? That would slow things down a bit.

JD

From: Daniel M. G. <dm...@uv...> - 2006-10-26 22:54:32
JD Smith twisted the bytes to say:

 JD> Sounds reasonable. The problem is, what does that function take as
 JD> input, RGBHSV for both pixels? Pre-compute them all before
 JD> deciding whether to proceed? Would slow things down a bit.

This is an important design decision, I think. This is where C++ would
help. Why don't we discuss it until we are all happy?

Proposal A:

* Factor out a function for the computation of the histograms for a
  given line of the images (it is currently embedded in the middle).

* Create one function to call for each method.

* Add a switch statement that determines which method to call.

Proposal B:

* Replace the predicate of the if that determines whether a pixel is to
  be used with a function call. This function call will take two
  parameters:

  1. A struct with information about what type of method to use (this
     is the one passed to ReadHistograms).

  2. A struct with info about the pixels (RGB, or HSV, and perhaps more
     data, such as the neighborhood).

  This might also require a switch to decide what type of information
  needs to be computed (if one wants it optimized for speed).

I don't think A and B are mutually exclusive. I think we can start with
B. I'd like to hear what you think.

-- 
Daniel M. German

From: Daniel M. G. <dm...@uv...> - 2006-10-26 23:27:58
Hi JD,

JD Smith twisted the bytes to say:

 JD> For generic trimming of pixels which differ too much, this is a
 JD> heavy handed approach IMO, which in the end will require multiple
 JD> unnecessary passes through the data.

 >> I don't see performance as a major issue (at least not now).

 JD> Well, from my point of view, PTblender takes almost as long as
 JD> enblend, and almost as long as nona to remap the images in the
 JD> first place. Ideally, it could be more useful for experimentation,
 JD> twiddling parameters, etc. Maybe I just have a slower machine than
 JD> you ;).

You can always make the images smaller, and experiment with them that
way :)

Now, seriously: maintenance eats a lot of time in the development
cycle. That is why I am more keen on getting clean, easy-to-maintain
code than complex code. I think you would also prefer an application
that does what it is supposed to do over one that runs fast but does
not always work. Making the code run faster is, for the time being, a
secondary objective of panotools. Updating the code for the sake of
speed will be done as long as it means easier-to-maintain code. Better
algorithms, with better running times (in terms of complexity -- big O,
not just the coefficients of the polynomials), are something that I am
very willing to accept.

Max comments that I like to "beautify code" while maintaining (and
sometimes reducing) functionality. It is true, because at the end of
the day I like to work with good, professional code. Panotools is not
there yet (and I am also responsible for that), but I don't want to
move backwards. That is why I have been working on improving the code
base.

If you look at the past of panotools, it was on life support. Most
changes in the previous years were minor, designed to keep it afloat.
Part of the reason is that there is very little interest among its
users in improving it.

dmg

-- 
Daniel M. German

From: JD S. <jd...@as...> - 2006-10-27 00:05:15
> Max comments that I like to "beautify code" while maintaining (and
> sometimes reducing) functionality. It is true. Because at the end of
> the day I like to work with good, professional code. Panotools is not
> there yet (and I am also responsible for that), but I don't want to
> move backwards. That is why I have been working on improving the code
> base.

Total agreement from this quarter. The knock-on effect of beautiful,
well-documented, well-factored code is that the barrier to entry for
new programming talent is much lower, and thus the code improves
faster, lives longer, and has a better chance of being kept up to date.

> if you look at the past of panotools, it was in life support. Most
> changes in the previous years were minor and designed to keep it
> afloat. Part of the reason is there is very little interest on its
> users to improve it.

Well, in reality PanoTools may have become a liability for some
developers; witness, e.g., how Joost has moved most pano functionality
into the closed part of PTGui. Hugin has also reimplemented much
PanoTools functionality in a modern OOP generic-programming framework,
which has real advantages (though a low barrier to entry for the
average programmer isn't one of them). If this code is to remain
relevant, it needs people such as yourself who think long term and big
picture. Thanks for your work.

JD

From: JD S. <jd...@as...> - 2006-10-27 01:20:09
On Wed, 25 Oct 2006 14:40:18 -0400, Jim Watters wrote:
> Do we need another tool or option that would compare the overlap of
> the two images and create an mask where the difference is greater
> than some cutoff value. The purpose of this would be to eliminate
> objects in the frame that have moved or missing in the other. [...]
> These mask would only be used in the calculating of the histogram
> correction and not for blending the final pan. Something similar
> could be used to determine where the blending seams should be.

That's essentially what my -u and -i options do for you (see the
contributed patch), and the "gray-masking" capability Daniel added does
as well, for hand-painting such areas: values that are neither 0 nor
255 constitute such a secondary mask, and luckily enblend considers any
value >= 1 to be "on" for blending. I'd be happy to hear of people's
experience with hue trimming (in particular) for cases where color
shifts are a problem.

JD

From: JD S. <jd...@as...> - 2006-10-27 20:13:59
|
On Thu, 26 Oct 2006 15:54:21 -0700, Daniel M. German wrote: > > JD Smith twisted the bytes to say: > > JD> On Thu, 2006-10-26 at 15:14 -0700, Daniel M. German wrote: > >> > >> > >> Ok, let us put the code in, but in a more generalizable way. > >> > >> Instead of adding 2 parms, let us add only one (a pointer to a > >> struct). > >> > >> The struct will contain: > >> > >> * Any info needed (in this case the 2 thresholds) > >> > >> * A pointer to a function to call to determine if one should use the > >> points. This function should accept: > >> > >> What do you think? > > JD> Sounds reasonable. The problem is, what does that function take as > JD> input, RGBHSV for both pixels? Pre-compute them all before deciding > JD> whether to proceed? Would slow things down a bit. > > This is an important design decision, I think. This is where C++ would > help. > > Why dont' we discuss it until we are all happy: > > proposal A: > > * Refactor a function for the computation of the histograms for a > given line of the images. (it is currently embedded in the middle). > > * Create one function to call for each method. > > * Add a switch statement that determines which method to call. > > proposal B: > > * replace the predicate of the if that determines if the method is to > be used with a function call. This function call will take 2 > parameters: > > 1. A struct with information about what type of method to use (this > is the one passed to ReadHistograms) > > 2. A struct with info about the pixels (RGV, or HSV, and perhaps > more data, such as the neighborhood). > > This might also require a switch to decide what type of information > needs to be computed (if one wants it optimized for speed) > > I don't think A and B are mutually exclusive. I think we can start > with B. > > I'd like to hear what you think. Option B sounds good. 
If I were to completely gut PTblender, I'd:

- Pre-compute which images could actually overlap from their TIFF
  offsets, adding only these to a linked list of pairs. Might as well
  support cropped TIFFs where possible. This will really help people
  who do >20 image multi-row sphericals (since the current algorithm
  loops over all pixels in the image N^2 times). For such panos, it may
  even be worth calling PTcrop (when it exists) first on the uncropped
  images.

- Replace the two inner nested loops in ReadHistograms with one loop
  over the linked list of "possible match" images, and invert the
  order of the loops:

    for (each row) {
      read_row_from_images(row,&row_buffer);  // careful with crop
      for (each match in matching_images_list) {
        if (row intersects both image boundaries) {
          for (each pix in row) {
            if pixel_include(row,pix,im1,im2,trim)
              add_to_histogram(pix,match);
          }
        }
      }
    }

- Factor out the code which decides whether to use a given pixel in
  the histogram into a separate function (pixel_include() above), and
  pass it an options structure which gives it what it needs to know
  (the optional trim factors, etc., called 'trim' above). This is also
  where separate mask data could be used, but the "graymask" method
  currently employed may obviate that.

- Simplify the actual histogram remapping and subsequent color
  correction code:

  1. Always match all three histograms, RGB. Impose "brightness only"
     or other constraints on the mapping functions at the very end
     (see below). No HSV computations are ever performed.

  2. Use a single routine to compute a mapping function (table) from
     histogram 1 (source) to histogram 2 (target). This routine will
     simply:

     a. Form cumulative totals of both histograms.

     b. Create the 256-element floating point mapping function z which
        maps between them (one for each of RGB). This function will be
        called many times, so it needs to be short and sweet.

  3. Build a ragged array of length n_images, with each element
     holding a linked list of all other images to which it matches,
     keeping track of pixel overlap count, and omitting matches
     without enough pixels in overlap.

  4. Compute the floating point mapping functions z for all pairs in
     the ragged array. There is one z per pair for each of RGB.

  5. "Anneal" the (potentially long list of) mapping functions z over
     the entire image set:

     a. For each image, compute a master mapping function m for the
        image, from the overlapping-pixel-count-weighted average of
        all the modified sub-functions to all neighbors.

     b. The modified sub-function z' to a neighbor will depend on
        i) the mapping function between the two, z, and ii) the master
        function m of the neighbor, as: z' = m^-1 z. The inverse of a
        mapping function m is that function which, when m is run
        through it, produces the unit vector (0..255). In the first
        round, all master functions are set to the unit vector
        (0..255), and z' = z.

     c. Repeatedly iterate over all images in this way until all
        master mapping functions converge. Convergence can progress
        non-uniformly (image by image); each image is marked as
        converged once its master function converges.

     Note that a reference image is no longer needed... the average
     best mapping to make all images compatible is automatically
     developed (e.g. for a range of brightnesses, the "average"
     brightness will be targeted). If a reference image is desired, it
     is marked "converged" before the first round of annealing, and
     everything proceeds in the same manner.

  6. Normalize each image's annealed master mapping function, subject
     to the (optional) user constraints:

     -t 1 (brightness only): m(r) = m(g) = m(b) (one average table).
     -t 2 (color only):      m(r) + m(g) + m(b) = I (unit vector)

  7. Convert all normalized, annealed master mapping tables to byte,
     by rounding, perhaps with some care taken to avoid banding caused
     by large gaps.

  8. Run each image's data through its master mapping table and write
     out to the output image.

- No flattening (separate tool).

- Add an optional debug switch to enable all that Debug.txt output
  (just to stdout).

JD

|
From: Daniel M. G. <dm...@uv...> - 2006-10-27 21:21:22
|
JD> Option B sounds good. If I were to completely gut PTblender, I'd:

Great ideas. JD, we have a plan. I am going to add your description to
the TODO.

I suggest the following. Let us get the first 3 points done first (I
am duplicating them below). They can be done incrementally with the
current code base, so I suggest writing this functionality first as
the goal. It will not be required for 3.0.0.

Please check the Apache coding standards:

http://httpd.apache.org/dev/styleguide.html

Function names are camelCase, with pano always as a prefix (so we know
they are our routines) and next the module they belong to (Colour).
For instance, ReadHistograms should be renamed:

   panoColourHistogramsRead

Local identifiers are also camelCase, but there is no real restriction
except that they be self-describing.

Types are pano_whatever.

The current code does not obey these rules, but new code should.

dmg

----------------------------------------------------------------------

- Pre-compute which images could actually overlap from their TIFF
  offsets, adding only these to a linked list of pairs. Might as well
  support cropped TIFFs where possible. This will really help people
  who do >20 image multi-row sphericals (since the current algorithm
  loops over all pixels in the image N^2 times). For such panos, it may
  even be worth calling PTcrop (when it exists) first on the uncropped
  images.

- Replace the two inner nested loops in ReadHistograms with one loop
  over the linked list of "possible match" images, and invert the
  order of the loops:

    for (each row) {
      read_row_from_images(row,&row_buffer);  // careful with crop
      for (each match in matching_images_list) {
        if (row intersects both image boundaries) {
          for (each pix in row) {
            if pixel_include(row,pix,im1,im2,trim)
              add_to_histogram(pix,match);
          }
        }
      }
    }

- Factor out the code which decides whether to use a given pixel in
  the histogram into a separate function (pixel_include() above), and
  pass it an options structure which gives it what it needs to know
  (the optional trim factors, etc., called 'trim' above). This is also
  where separate mask data could be used, but the "graymask" method
  currently employed may obviate that.

----------------------------------------------------------------------

--
Daniel M. German                  "Science can be esoteric
  The Economist ->                 technology has to be pragmatic"
http://turingmachine.org/
http://silvernegative.com/
dmg (at) uvic (dot) ca
replace (at) with @ and (dot) with .

|