From: Geoffrey F. <fu...@li...> - 2002-04-04 19:11:08
Alan W. Irwin writes:
 > Geoffrey said:
 >
 > > Bottom line, imo, the 16 bit virtual coordinate thing in PLplot is
 > > probably the most serious "technical" deficiency in PLplot at this
 > > time.
 > >
 > > Building a scalable rendering engine might be one way to relieve the
 > > symptoms, but unless we convert to 32 bit virtual coords, I won't
 > > regard the true underlying issue as having really been solved.
 >
 > I had no idea there was any 16-bit code in PLplot, but I have noticed
 > some minor character distortions for ordinary example plots that vary
 > from machine to machine which may be a symptom of the same problem. In
 > any case if we can increase the precision of how characters are drawn
 > by eliminating the 16-bit code I am all for it. So let's get started!
 >
 > If there is some low-level but tedious part of the task (i.e.,
 > janitorial work) that needs to be done, I would be glad to help.

Well, I'm glad for your enthusiasm, but I would encourage a cautious
appraisal. There is a /lot/ of work involved with this PLplot Improvement
Proposal. That's why I requoted the prior text, and I'll do it again now,
just for emphasis, with some interspersed comments for amplification.

 > Distortion:
 >     highly zoomed plots (Tk driver) show obvious coordinate
 >     distortion. Believed traceable to 16 bit (short) ints in
 >     core->driver interface. Fatten virtual coord rep to 32 bits.

PLplot seems to plot on a grid of "virtual pixels", of which there are
2^16 (or is it 2^15?) across the viewport (or is it the physical page?).
End points of drawing segments are, I believe, constrained to this grid.
Some truncation effects probably come into play here at the point of
selecting exactly which of two neighboring pixels contains an end point.
I suppose, for educational enrichment, it would be good to understand
clearly how this works out in practice (see the first sketch below).

 > Tasks:
 >     core->driver dispatch funcs line, polyline, short->int

Because the addressable graphics coordinate space is a square 2^16
(virtual) pixel grid, there are numerous places where shorts are passed
around in the code. Specifically and significantly, in the interface to
the driver drawing routines, called from the core. Those should all be
fattened to 32 bits, and all the threads followed backward from that
point to find the origin of all 16 bit quantities, and fatten them all to
32 bits (see the second sketch below). I think this is a fairly major
undertaking all by itself.

 >     plmeta, dump 4 bytes instead of 2

Eradication of 16 bit virtual pixel addressing and promotion to 32 bits
means all plmeta output will have to be fattened as well, thus
instantaneously doubling the size of all metafiles...

 >     plrender, bump metafile version, backward handling for old
 >     metafiles.

...which will break old metafiles for plrender, unless we do a proper
version increment and add code to read old 16 bit metafiles and promote
them to 32 bits on the way into plrender (the third sketch below shows
the flavor of both sides).
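First sketch: to make the truncation concrete, here is a toy program (all
names are mine, not PLplot's, and I have not verified this against the
actual core code) showing how snapping world coordinates to the virtual
grid collapses nearby points:

    /* Toy sketch, not actual PLplot code: how snapping world coords to
     * a virtual pixel grid loses information.  Whether the real grid is
     * 2^16 or 2^15 across is exactly the open question above; 2^15 is
     * used here so the result fits in a signed short. */
    #include <stdio.h>

    #define VIRT_CELLS 32768L   /* 2^15 virtual pixels across the page */

    /* Map a world coordinate in [wmin, wmax] onto the virtual grid. */
    static short world_to_virtual(double w, double wmin, double wmax)
    {
        double frac = (w - wmin) / (wmax - wmin);
        /* The cast truncates: world points closer together than one
         * grid cell collapse onto the same virtual pixel, which is
         * presumably where the zoomed-in distortion comes from. */
        return (short) (frac * (VIRT_CELLS - 1));
    }

    int main(void)
    {
        /* Two distinct world points less than one grid cell apart:
         * both print virtual pixel 13106, and are indistinguishable
         * from each other downstream of this mapping. */
        printf("%d\n", world_to_virtual(0.400001, 0.0, 1.0));
        printf("%d\n", world_to_virtual(0.400005, 0.0, 1.0));
        return 0;
    }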
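Second sketch: the flavor of the core->driver dispatch change. Treat the
names and argument lists as approximate, from memory:

    /* Sketch of the dispatch widening, not our actual headers.
     * PLStream is left opaque; PLINT stands in for a guaranteed
     * 32-bit integer. */
    typedef struct PLStream PLStream;
    typedef int PLINT;        /* 32 bits on the platforms we care about */

    /* Before: drivers receive 16-bit virtual coordinates from the core. */
    void plD_line_old(PLStream *pls, short x1, short y1, short x2, short y2);
    void plD_polyline_old(PLStream *pls, short *xa, short *ya, PLINT npts);

    /* After: fattened to 32 bits.  Every driver, every core call site,
     * and every intermediate short[] coordinate buffer has to be chased
     * down and widened to match. */
    void plD_line_new(PLStream *pls, PLINT x1, PLINT y1, PLINT x2, PLINT y2);
    void plD_polyline_new(PLStream *pls, PLINT *xa, PLINT *ya, PLINT npts);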
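Third sketch: the plmeta/plrender side. Plain stdio here, not our actual
metafile i/o layer, and the byte layout is purely illustrative:

    /* Sketch only: the flavor of fattening metafile coordinate records
     * from 2 bytes to 4, with version-keyed reading on the plrender
     * side.  Not the real plmeta record layout. */
    #include <stdio.h>

    /* Old plmeta style: a coordinate goes out as 2 big-endian bytes. */
    static void wr_coord16(FILE *f, unsigned int v)
    {
        putc((v >> 8) & 0xff, f);
        putc(v & 0xff, f);
    }

    /* New style: 4 bytes -- hence the instant doubling of metafiles. */
    static void wr_coord32(FILE *f, unsigned long v)
    {
        putc((v >> 24) & 0xff, f);
        putc((v >> 16) & 0xff, f);
        putc((v >> 8) & 0xff, f);
        putc(v & 0xff, f);
    }

    /* plrender side: key off the format version read from the metafile
     * header, and promote old 16-bit coords on the way in.  Depending
     * on how the new virtual space is defined, old values may also need
     * scaling (e.g. v << 16) so they span the fattened grid. */
    static unsigned long rd_coord(FILE *f, int new_format)
    {
        int i, nbytes = new_format ? 4 : 2;
        unsigned long v = 0;
        for (i = 0; i < nbytes; i++)
            v = (v << 8) | (unsigned long) getc(f);
        return v;
    }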
 >     dashing: currently walks virtual pixels one-by-one, horrific
 >     if expand coord space by 2^16. Need to revamp dashing to
 >     /calculate/ (in world coords) dash starts and stops,
 >     including bracketing so polyline can turn corners
 >     correctly.

This is probably the hairiest task. Dashing actually walks virtual
pixels. I have not studied this code in sufficient detail to know what to
do about it, but we will effectively ruin PLplot for dashed/dotted line
drawing if we are not extremely careful.

It seems the dashing specification is based on physical coords (mm), and
so the dasher somehow walks virtual pixels one by one, looking to see
when it has covered a specific physical distance. Expanding the address
space by a factor of 2^16 will slow the current algorithm down by the
same factor, which will be horrifically slow. I think all the dashing
code needs to be gutted and converted to somehow /calculate/ the ending
coordinates of each dash, rather than sampling the virtual pixel grid in
a death march to oblivion. Note that for this to really be done right,
dashes need to be able to "turn corners" when used in polyline drawing
and so forth (see the sketch below). This is a major undertaking, I am
sure.
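Here is a toy of the kind of /calculating/ dasher I have in mind: walk
the pattern analytically along each polyline leg, carrying the pattern
position across vertices so a dash can turn a corner. This is only a
sketch of the idea, not the existing PLplot dash code; the real thing
would have to work through the physical (mm) units and device resolution:

    /* Toy sketch of calculated dashing -- not the existing PLplot code.
     * Dash endpoints are computed parametrically instead of stepping
     * the virtual pixel grid, and the pattern position persists across
     * legs so dashes turn corners in a polyline. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double on_len, off_len; } DashPat;  /* in mm, say */

    /* Stand-in for the real driver line call. */
    static void emit(double x0, double y0, double x1, double y1)
    {
        printf("dash (%g,%g) -> (%g,%g)\n", x0, y0, x1, y1);
    }

    /* Dash one polyline leg.  *pos = distance already travelled into
     * the current on/off cycle; it carries over from the prior leg. */
    static void dash_leg(double x0, double y0, double x1, double y1,
                         const DashPat *p, double *pos)
    {
        double len = hypot(x1 - x0, y1 - y0);
        double cyc = p->on_len + p->off_len;
        double t = 0.0;     /* distance consumed along this leg */

        while (len - t > 1e-12) {
            int on = *pos < p->on_len;
            /* Distance to the next on/off transition, or to the end
             * of the leg, whichever is nearer -- computed, not walked. */
            double step = on ? p->on_len - *pos : cyc - *pos;
            if (step > len - t) step = len - t;

            if (on)
                emit(x0 + (x1 - x0) * t / len,
                     y0 + (y1 - y0) * t / len,
                     x0 + (x1 - x0) * (t + step) / len,
                     y0 + (y1 - y0) * (t + step) / len);

            t += step;
            *pos = fmod(*pos + step, cyc);
        }
    }

    int main(void)
    {
        DashPat p = { 2.0, 1.0 };   /* 2 on, 1 off */
        double pos = 0.0;
        dash_leg(0, 0, 10, 0, &p, &pos);   /* first leg */
        dash_leg(10, 0, 10, 5, &p, &pos);  /* dash turns the corner */
        return 0;
    }

The carried-over *pos is, as I read it, exactly the "bracketing" the task
list refers to: the dash in progress at the end of one leg continues onto
the next.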
 >     Tk driver, like plmeta/plrender, bump transmission to use 4
 >     bytes instead of 2.

As with the plmeta/plrender interface issues.

 > One point you should consider; if we are going to this effort in any
 > case, is there any reason not to go to long long integers (which would
 > translate to 64-bit integers on 32-bit machines)? I don't want to be
 > dealing with 32-bit to 64-bit issues 10 years from now because we
 > didn't look ahead. Also, before you say there is no way we would ever
 > require 64-bit integers consider Bill Gates' infamous statement that
 > "64k of RAM is all anyone should ever need."....;-)

I think this is a very important issue. There are important things which
depend directly on the bit width of the plotting representation. These
things are fundamentally: speed and space. Fatter bit width in the
plotting representation implies higher fidelity plots, slower graphics
performance, and larger transmission buffers (whether metafiles or
sockets to Tk, etc).

Now, there is no real way to accommodate all conceivable future plotting
fidelity requirements, even by going to 64 bits. 16 bits seems to be
enough for many types of scientific plots where you're just trying to
represent a publication quality histogram, line, or contour plot, with
human legible labelling.

What I am doing right now is drawing integrated circuits, which have
vastly more feature detail than the histograms and line plots and contour
plots currently exhibited in our demo programs, for example. These plots
have orders of magnitude more total detail, graphics elements, etc. 16
bits ain't enough for this application. I doubt this is really unique to
my current application. I suspect there are lots of CAD-type potential
users who are not using PLplot because it doesn't support the level of
detail they need.

But note, right now I am drawing pictures of chips with on the order of a
million gates. Silicon integration marches ever forward: current
generation microprocessors already have on the order of 100 million
gates, and we all know where this trend is headed. So I doubt moving to
even 64 bits would forever stem the tide.

My feeling is that the only long term solution for scientific and
engineering application domains involving "telescoping" plotting
capabilities is to implement a sophisticated multi-scale rendering engine
which draws only the features that can actually be resolved at a given
"depth" (for lack of a better word). And if this is the eventual
solution, then I suppose it calls into question the advisability of even
doing the 16->32 bit transition in the first place. I think 16 bits is
plenty for human resolvable detail; the problem comes in when you start
zooming with plframe.

So the real question here is: do we want to change the default fidelity
so that it supports a significantly greater level of "zoomability", fully
realizing that this undertaking has several inherent properties:

1) It won't fully solve the problem, because there will still be a zoom
   depth beyond which more bits of fidelity would be required anyway.

2) It will make for fatter metafiles, or data representation buffers in
   whatever form they take (like socket transmissions, whatever).

3) A truly scalable rendering engine will ultimately have to be developed
   anyway.

I would very much like to see contributions to this thread from all
concerned. My personal viewpoint is that I think 16 bits is too few, and
we would be better off to incur the size penalty of moving to 32 bits.
Zooming fidelity will be improved enough at a cost that I think is
reasonable. Maybe there could be an API to set the granularity of plmeta
recording?

-- 
Geoffrey Furnish                    Lightspeed Semiconductor Corp
voice: 408-616-3244                 209 North Fair Oaks
fax: 408-616-3201                   Sunnyvale, CA 94085