From: Michael D. <md...@st...> - 2013-03-07 15:14:06
Ah, yes. I forgot that my issues with trying to do rendering in the browser shared some similarities here. When I did the last major backend refactoring (which is coming on 6 years ago now), I considered exactly this, which is why draw_path takes both a path object (which is unlikely to change) and a transform. It turned out to not be terribly useful for most of the backends we have, since PDF, PS, SVG (at least in the versions of the specs we're using) all scale the stroke width along with the vertices, so you couldn't, for example, zoom in on a line plot without the line width exploding. The Agg backend is able to perform the transform itself since we wrote it to not scale the stroke using the path's transform, but there's little advantage since it's all software anyway. But the API still works this way even if nothing takes advantage of it.

Though technically paths are mutable, since they store Numpy arrays which are mutable, in practice they are rarely if ever mutated (I'll have to verify this, but certainly in the course of panning and zooming, they wouldn't be), so it should be possible to know whether a path is already on the GPU by checking its id against a table of what's on the GPU.

The other wrinkle is that the transform is a combination of affine transforms (which I'm sure OpenGL handles just fine) with arbitrary transforms such as log scaling or polar transformations. I assume this would all have to be implemented as a vertex shader to get the performance benefits of OpenGL. We could probably implement all of the transforms built into matplotlib (which cover a lot of cases), and then just provide some slow "escape route" to handle arbitrary transforms written in Numpy/Python, and for bonus points provide a hook for users to add their own OpenGL transforms (just as it would have been necessary to do with Javascript in the case of web browser rendering). But I bet a lot of people would be happy as a start to just have the built-in transforms working.

The last thing to raise is the case of very large data. When I was investigating rendering in a web browser, there were pretty low limits to the number of vertices I could keep hold of in the browser. matplotlib has a "path simplification" algorithm to reduce the number of vertices without changing the appearance of the plot, but of course that's dependent on the scale, so it has to be rerun every time the scale changes. While path simplification may not be as necessary for speed on the GPU, it may be necessary due to memory constraints.

Mike

On 03/07/2013 09:50 AM, Nicolas Rougier wrote: > > Indeed, you analyzed/explained the overall situation very well in your blog post: http://mdboom.github.com/blog/2012/08/06/matplotlib-client-side/ > > > OpenGL can make things very fast as long as there is not too many transfer between CPU/GPU memory. Once the data is within the GPU, most transformations (offset/translate/scale/rotate/interpolation/etc) can be made entirely on the GPU side. In the template backend however, the "draw_path" function (for example) receives a path to be rendered and I would need to ensure it is build only once and only applying transforms for subsequent calls. > > At least it is my understanding of the matplotlib machinery but I may be wrong. > > > Nicolas > > > > > > > On Mar 7, 2013, at 15:36 , Michael Droettboom wrote: > >> I'm not aware of any discussion about participating in GSoC this year, >> though I am open to the idea. 
I was involved as a mentor a few years >> ago, but I wasn't terribly involved in the administrative side, so I >> don't know what's involved. I think then we did it under the umbrella >> of the PSF. >> >> I'd be interested in exploring a GL backend further. What, from 10,000 >> m, is the main impedence mismatch between the current matplotlib backend >> design and OpenGL rendering? >> >> Mike >> >> On 03/07/2013 03:39 AM, Nicolas Rougier wrote: >>> Hi all, >>> >>> Are there any ongoing project for GSOC 2013 ? I would like to propose something around a GL backend but I'm not still sure OpenGL "philosophy" is compatible with current matplotlib design and any project would require co-mentoring with a matplotlib devel guru. There is a lot of experienced people around (Luke Campagnola, Cyrille Rossant, Almar Klein to name a few) who can help on the GL part and we're also currently trying to design together a common low-level API that may definitely help for the GL backend. What do you think ? Do we need first to make sure a GL backend make any sense at all before going further ? >>> >>> >>> Here is a small set of GL experiments I did recently that make me thinks it should be possible to come up with something nice and fast: https://github.com/rougier/gl-agg >>> >>> >>> Some movies for those who don't want to test: >>> >>> http://www.youtube.com/watch?v=T010zMtorAk >>> http://www.youtube.com/watch?v=iFwEzV9Pw-4 >>> >>> >>> >>> >>> Nicolas >>> ------------------------------------------------------------------------------ >>> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >>> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >>> endpoint security space. For insight on selecting the right partner to >>> tackle endpoint security challenges, access the full report. >>> http://p.sf.net/sfu/symantec-dev2dev >>> _______________________________________________ >>> Matplotlib-devel mailing list >>> Mat...@li... >>> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel >> >> ------------------------------------------------------------------------------ >> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >> endpoint security space. For insight on selecting the right partner to >> tackle endpoint security challenges, access the full report. >> http://p.sf.net/sfu/symantec-dev2dev >> _______________________________________________ >> Matplotlib-devel mailing list >> Mat...@li... >> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel |
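To make the "escape route" idea above concrete, here is a rough sketch of how a hypothetical GL backend might split a matplotlib transform: the arbitrary (log/polar) part is applied on the CPU in Numpy, and only the remaining affine part, as a plain 3x3 matrix, would be handed to the GPU. The split uses matplotlib's existing transform API; everything else (how the matrix and vertices reach OpenGL) is left as an assumption.

    import numpy as np

    def split_for_gpu(path, transform):
        """Apply the non-affine part on the CPU; return vertices plus the
        remaining 3x3 affine matrix that a GL backend could push to a shader."""
        verts = transform.transform_non_affine(path.vertices)  # slow Numpy/Python path
        affine = transform.get_affine().get_matrix()           # plain 3x3 ndarray
        return np.asarray(verts), affine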
From: Michael D. <md...@st...> - 2013-03-07 14:57:50
On 03/06/2013 08:28 PM, Jae-Joon Lee wrote: > While I cannot say much about the "compound renderering" in Agg as I > know little about it, but the upsampling method that Mike mentioned > seems quite robust. > Just a quick question, will it affect the rendered quality of AA'ed > artists, like texts? What I mean is, drawing texts w/ AA on (in the > upsampled canvas) and downsampling the result may give poor quality? > But maybe there is a way to adjust how AA is done in the first place. In my quick hack, I render the text at the final output resolution, it gets upsampled onto the "larger" canvas, and then downsampled back. (All this just because it was the easiest hack). The end result is definitely fuzzier. So we'd have to explore how we do text rendering in the new paradigm, particularly wrt hinting (which already contains a lot of cleverness). > > Maybe what we can try is to upsample only selected artists, similar to > rasterization in vector backends or agg_filter. This will likely > consume more memory and maybe more CPU, but may give minimal side > effects. That's an interesting idea. > > While it is not directly related to the current issue, I thought it > would be good to have, within a backend, multiple layers to render and > the final result is a composite of those layers. > And we may upsample some layers or apply effects selectively. And do > the alpha compositing of layers in the end. This will be also useful > for animations as we can only update selected layers. Yes -- I've thought we might need this for a while. Cairo has a very useful API -- "push_group"/"pop_group" and friends -- that push layers on and off a stack. It would solve some of the alpha blending problems we have because, for example, an entire contour set could be rendered on a single layer and then alpha composited at once on the layer below. Another side effect of that API is that it makes "stamping" things (as we do in draw_marker) possible without each of the backends having to understand so much about what a marker is as we have now. Cheers, Mike > > Back to the original issue, I am inclined to play with Mike's > upsamping method to see if it solves the problem without significant > side effects. > > Regards, > > -JJ > > > > On Thu, Mar 7, 2013 at 6:54 AM, Michael Droettboom <md...@st... > <mailto:md...@st...>> wrote: > > Thanks for bringing this up -- I was just looking at the test > images the other day and was reminded how filled contouring > doesn't look as good as it could be. > > I had played with the "compound renderer" example in Agg some > years ago, but couldn't really understand how it worked, so > ultimately gave up on it. I appreciate the research you've done > here, because it illustrates pretty clearly what is required. I > guess in the actual Agg draw_path_collection renderer, we would > have to build a style_handler table dynamically based on the > collection and then iterate through it as we draw... At least > it's clear now how that could be accomplished. > > I wonder, though, and the SVG article you link to hints at this, > if we wouldn't be better off just upsampling our Agg rendering > instead. The advantage of such an approach would be that the > structure or ordering of the drawing wouldn't matter at all -- > allaying most of Eric's concerns. It seems like it just overall > be "easier" to do the right thing. And it should benefit all > things, including other tricky things like quadmeshes, > "automatically". > > Of course, the downside is a performance hit. 
Assuming 2x > oversampling, it means the screen buffer will be 4x the size, > rendering will take "up to" four times as long, and then you have > to downsample again (which in the best case would happen in > hardware). Compound rendering has a memory penalty, too, of > course, since the scanlines for all of the paths have to be stored > until finally rendered out simultaneously. Estimating what that > overhead is much harder, of course, and is more content dependent, > whereas the performance hit upsampling is straightforward and has > an obvious upper bound. > > I put together a very quick hack on my agg-upsampling branch, > which hardcodes the upsampling to 2x in each direction. It > currently only works with the GtkAgg backend (even file saving > won't work correctly), and I only fixed things up for contour > plotting, so images (and probably other things) will be > misplaced. But it should give an idea for the performance hit and > quality benefit enough to play around with. > > Mike > > > On 03/06/2013 03:27 PM, Michael Droettboom wrote: >> I'm trying to compile your examples, but it seems perhaps you >> forget to include a file -- pixel_formats.hpp? It's not in the >> agg24 source tree. >> >> Mike >> >> On 03/06/2013 12:06 PM, Phil Elson wrote: >>> Smart rendering of adjacent, anti-aliased patches is a question >>> which has come up a couple of times in various guises in the past. >>> It is my understanding that the lack of this functionality led >>> us to disable anti-aliasing for contouring and is the reason the >>> following image has a white stripe around the circle where there >>> should be just a nice blend of the two colors: >>> >>> >>> import matplotlib.pyplot as plt >>> import numpy as np >>> import matplotlib.patches as mpatches >>> import matplotlib.path as mpath >>> import matplotlib.collections as mcol >>> >>> >>> # create two paths. One a circle, the other >>> # a square with the same circle cut out. >>> x = np.linspace(0, np.pi * 2, 1000) >>> >>> circle_coords = np.array(zip(*[np.sin(x) * 0.8, np.cos(x) * 0.8])) >>> pth_circle = mpath.Path(circle_coords) >>> >>> sqr_codes = np.repeat(mpath.Path.MOVETO, len(circle_coords) + 5) >>> sqr_codes[1:5] = mpath.Path.LINETO >>> sqr_codes[6:] = mpath.Path.LINETO >>> sqr_coords = np.concatenate([[[-1, -1], [-1, 1], [1, 1], [1, >>> -1], [-1, -1]], >>> circle_coords[::-1]], axis=0) >>> sqr_path = mpath.Path(sqr_coords, sqr_codes) >>> >>> >>> ax = plt.axes() >>> patches = [mpatches.PathPatch(pth_circle), >>> mpatches.PathPatch(sqr_path)] >>> col = mcol.PatchCollection(patches, >>> antialiaseds=True, >>> edgecolors='none', >>> facecolors=[(0, 0.0, 0.0, 0.9), (0.1, 0.1, 0.02, 0.9)]) >>> ax.add_collection(col) >>> ax.set_xlim([-1, 1]) >>> ax.set_ylim([-1, 1]) >>> plt.show() >>> >>> >>> >>> I know of lots of the workarounds for this (turn off AA, turn on >>> lines, extend the path slightly, set a dark background color) >>> all of which have down-sides, so I'm keen to find a final >>> solution to the problem. >>> >>> When the two patches marry up perfectly with full anti-aliasing, >>> the antigrain (AGG) community call this "flash" or compound >>> rendering, and this capability was added to Agg 2.4 (which we >>> already ship with mpl). >>> >>> In order to make full use of the compound rendering technique I >>> believe the drawing pipeline in "_backend_agg.cpp" would need to >>> change, which could be problematic. 
A less wide-impacting >>> alternative would be to draw all "patches" of a single >>> Collection in the same rasterization step (i.e. just change >>> _draw_path_collection_generic), though this does mean that, as >>> it stands, the result of plt.contourf would not be able to make >>> use of this new functionality - a MEP which changes the return >>> type of plt.contourf to a single Collection might be able to fix >>> that. >>> >>> I've put together a simple example similar to this in C++ using >>> agg (no mpl changes yet), showing the differences in the code >>> needed between the old technique vs the "new" compound renderer >>> (attached). >>> >>> >>> Ok, so the question to those that have knowledge of the >>> _backend_agg.cpp code (Mike, Eric, JJ + others?): >>> >>> * Have you already looked at doing this and determined that >>> this is a non-starter? >>> * Do you support adding the ability for the agg backend to >>> draw compound artists (i.e. Collections) in this way rather >>> than treating them as individual primitives in the canvas? >>> * Since many of the other backends can't do flash rendering, >>> would we even want to make this change? >>> o SVG in Firefox 10.0.2 has the same problem, it is >>> discussed slightly more in >>> http://www.svgopen.org/2002/papers/sorotokin__svg_secrets/ >>> o Acroread has the same problem with PDFs, only to a much >>> lesser extent than in the PNG attached >>> >>> >>> Thoughts? >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >>> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >>> endpoint security space. For insight on selecting the right partner to >>> tackle endpoint security challenges, access the full report. >>> http://p.sf.net/sfu/symantec-dev2dev >>> >>> >>> _______________________________________________ >>> Matplotlib-devel mailing list >>> Mat...@li... <mailto:Mat...@li...> >>> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel >> >> >> >> ------------------------------------------------------------------------------ >> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >> endpoint security space. For insight on selecting the right partner to >> tackle endpoint security challenges, access the full report. >> http://p.sf.net/sfu/symantec-dev2dev >> >> >> _______________________________________________ >> Matplotlib-devel mailing list >> Mat...@li... <mailto:Mat...@li...> >> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel > > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The > Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" > in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > <mailto:Mat...@li...> > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel > > |
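For reference, the cairo layering Mike mentions looks roughly like this in pycairo: a group of shapes is drawn opaquely into an off-screen layer with push_group(), then the whole layer is alpha-composited once onto the surface below. This is only an illustration of the API; matplotlib's cairo backend does not currently do this.

    import math
    import cairo

    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 200)
    ctx = cairo.Context(surface)

    ctx.push_group()                    # start an off-screen layer
    ctx.set_source_rgb(0.1, 0.3, 0.8)
    ctx.rectangle(20, 20, 120, 120)     # draw the whole "contour set" opaquely
    ctx.fill()
    ctx.set_source_rgb(0.8, 0.3, 0.1)
    ctx.arc(100, 100, 50, 0, 2 * math.pi)
    ctx.fill()
    ctx.pop_group_to_source()           # the finished layer becomes the source
    ctx.paint_with_alpha(0.5)           # composite it once onto the layer below

    surface.write_to_png("layers.png")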
From: Nicolas R. <Nic...@in...> - 2013-03-07 14:51:05
Indeed, you analyzed/explained the overall situation very well in your blog post: http://mdboom.github.com/blog/2012/08/06/matplotlib-client-side/

OpenGL can make things very fast as long as there are not too many transfers between CPU/GPU memory. Once the data is within the GPU, most transformations (offset/translate/scale/rotate/interpolation/etc.) can be made entirely on the GPU side. In the template backend, however, the "draw_path" function (for example) receives a path to be rendered, and I would need to ensure it is built only once, with only transforms applied for subsequent calls.

At least that is my understanding of the matplotlib machinery, but I may be wrong.

Nicolas

On Mar 7, 2013, at 15:36 , Michael Droettboom wrote: > I'm not aware of any discussion about participating in GSoC this year, > though I am open to the idea. I was involved as a mentor a few years > ago, but I wasn't terribly involved in the administrative side, so I > don't know what's involved. I think then we did it under the umbrella > of the PSF. > > I'd be interested in exploring a GL backend further. What, from 10,000 > m, is the main impedence mismatch between the current matplotlib backend > design and OpenGL rendering? > > Mike > > On 03/07/2013 03:39 AM, Nicolas Rougier wrote: >> >> Hi all, >> >> Are there any ongoing project for GSOC 2013 ? I would like to propose something around a GL backend but I'm not still sure OpenGL "philosophy" is compatible with current matplotlib design and any project would require co-mentoring with a matplotlib devel guru. There is a lot of experienced people around (Luke Campagnola, Cyrille Rossant, Almar Klein to name a few) who can help on the GL part and we're also currently trying to design together a common low-level API that may definitely help for the GL backend. What do you think ? Do we need first to make sure a GL backend make any sense at all before going further ? >> >> >> Here is a small set of GL experiments I did recently that make me thinks it should be possible to come up with something nice and fast: https://github.com/rougier/gl-agg >> >> >> Some movies for those who don't want to test: >> >> http://www.youtube.com/watch?v=T010zMtorAk >> http://www.youtube.com/watch?v=iFwEzV9Pw-4 >> >> >> >> >> Nicolas >> ------------------------------------------------------------------------------ >> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >> endpoint security space. For insight on selecting the right partner to >> tackle endpoint security challenges, access the full report. >> http://p.sf.net/sfu/symantec-dev2dev >> _______________________________________________ >> Matplotlib-devel mailing list >> Mat...@li... >> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel > > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel
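A toy sketch of the "built only once" bookkeeping described above: cache a GPU buffer per Path object, keyed by id(), so the vertex data is uploaded a single time and later draws only change the transform. upload_vertices is a placeholder for whatever call actually copies data to the GPU; it is an assumption, not an existing matplotlib or OpenGL API, and a real backend would also need some eviction policy.

    class PathBufferCache:
        """Map a matplotlib Path to a GPU buffer, uploading its vertices once."""

        def __init__(self, upload_vertices):
            self._upload = upload_vertices   # placeholder for the CPU -> GPU copy
            self._buffers = {}               # id(path) -> (buffer, path)

        def get(self, path):
            entry = self._buffers.get(id(path))
            if entry is None or entry[1] is not path:
                buf = self._upload(path.vertices)   # expensive, done only once
                entry = (buf, path)                 # keep the path alive so its
                self._buffers[id(path)] = entry     # id() is never recycled
            return entry[0]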
From: Michael D. <md...@st...> - 2013-03-07 14:50:06
On 03/07/2013 04:41 AM, Phil Elson wrote: > > Would this greatly slow down the rendering? > > That is a million dollar question. Without trying it to see, we could > ask the agg mailing list to see if they have any insight. > Personally I'd be surprised if anything we did at the Agg level was a > bottleneck given how much python code comes before it - though I admit > that doesn't mean anything in practice... At first glance, I would be concerned about how this scales with the number of paths (well, the number of vertices, really). With regular path rendering in Agg, the time grows exponentially with the number of vertices in the path, which is why we've put so much effort into path simplification etc. If we use this to render a contour with many levels, does it start to break down at some point? > > > > I wonder, though, and the SVG article you link to hints at this, if > we wouldn't be better off just upsampling our Agg rendering instead. > > ... > > I put together a very quick hack on my agg-upsampling branch, which > hardcodes the upsampling to 2x in each direction. > > I'm not a big fan of supersampling > (http://en.wikipedia.org/wiki/Supersample_anti-aliasing) - that is > essentially implementing a global anti-aliasing scheme which one > wouldn't be able to control on an artist by artist basis (such as > fonts) with the overhead cost of increasing the memory highwater by a > factor of 5 (as you say for an image of size (n, m) at a minimum the > drawing buffer needs to be (2n x 2m) + then downsampled to (n x m)). The last buffer can in many cases be done in the GPU, so there isn't necessarily an additional main memory buffer... But that's a minor point. > We might even want to upscale to a factor of 3 or 4 to really get rid > of most of the white in this example. Maybe it would be useful to quantify the amount of error between the two approaches. At a certain point, it gets below the level of perception and we can probably say it's "good enough". While I understand the performance implications, I like the fact that it is bounded. > > > From the linked SVG article: > > Perfect antialiasing can be done by rendering complete artwork at a > much higher resolution and subsequent downsampling. > > I also dispute this claim. Whilst good anti-aliasing can be achieved > using supersampling, I believe /no matter the size of your bigger draw > buffer /you will always get a component of background color (white in > our case) which goes into the averaging operation and therefore will > impact the final pixel color. It is then a compromise between > performance vs "good enough" rendering. I agree. It's perhaps worth determining how much worse it is and how good is "good enough". > > > Having said all of the above, supersampling would be a nice feature to > add to mpl's Agg backends (and maybe the Cairo backend too) - though I > do not believe it is a full solution to this problem. I think we agree there is a tradeoff here. On the one hand, there is a mathematically perfect solution that requires some rather pervasive refactoring of the code and permanent care in how plots are constructed to ensure it's running optimally, and an exponentially growing performance curve as the complexity of the plot increases. On the other, is a suboptimal end result that requires minimal changes, and a well-defined and upper bound performance penalty. I think to really determine where on the tradeoff we want to be, some more experiments comparing the quality of the end result of the two approaches would be helpful. 
Mike > > Cheers, > > > > > > > > > > > > > On 7 March 2013 09:15, Phil Elson <pel...@gm... > <mailto:pel...@gm...>> wrote: > > > I'm trying to compile your examples, but it seems perhaps you > forget to include a file > > Sorry, you can find pixel_formats.h in the examples directory. I > have a mirror of agg here: > https://github.com/pelson/antigrain/tree/master/agg-2.4/examples > > My compile steps are: > > export AGG_DIR=<path to antigrain checkout> > > g++ -c -O3 -I${AGG_DIR}/include -I${AGG_DIR}/examples > -L${AGG_DIR}/src basic_path.cpp -o basic_path.o > g++ -O3 -I${AGG_DIR}/include -L${AGG_DIR}/src basic_path.o -o > basic_path -lagg > > > > I had played with the "compound renderer" example in Agg some > years ago, but couldn't really understand how it worked > > Agreed. It took the best part of a day to figure out how to do it, > given no prior knowledge of Agg. I intend to submit a couple of > simpler (GUI-less) examples to be included with the agg repo - and > perhaps even write some documentation for this stuff! > > > > Does it work with alpha < 1? > > Yes. I've attached the two images (converted from ppm to png using > gimp + default settings) so that you can see the results. Each > shape has an alpha value of 200. > > Inline images 1Inline images 2 > > HTH > > > > > > On 6 March 2013 20:27, Michael Droettboom <md...@st... > <mailto:md...@st...>> wrote: > > I'm trying to compile your examples, but it seems perhaps you > forget to include a file -- pixel_formats.hpp? It's not in > the agg24 source tree. > > Mike > > > On 03/06/2013 12:06 PM, Phil Elson wrote: >> Smart rendering of adjacent, anti-aliased patches is a >> question which has come up a couple of times in various >> guises in the past. >> It is my understanding that the lack of this functionality >> led us to disable anti-aliasing for contouring and is the >> reason the following image has a white stripe around the >> circle where there should be just a nice blend of the two colors: >> >> >> import matplotlib.pyplot as plt >> import numpy as np >> import matplotlib.patches as mpatches >> import matplotlib.path as mpath >> import matplotlib.collections as mcol >> >> >> # create two paths. One a circle, the other >> # a square with the same circle cut out. >> x = np.linspace(0, np.pi * 2, 1000) >> >> circle_coords = np.array(zip(*[np.sin(x) * 0.8, np.cos(x) * >> 0.8])) >> pth_circle = mpath.Path(circle_coords) >> >> sqr_codes = np.repeat(mpath.Path.MOVETO, len(circle_coords) + 5) >> sqr_codes[1:5] = mpath.Path.LINETO >> sqr_codes[6:] = mpath.Path.LINETO >> sqr_coords = np.concatenate([[[-1, -1], [-1, 1], [1, 1], [1, >> -1], [-1, -1]], >> circle_coords[::-1]], axis=0) >> sqr_path = mpath.Path(sqr_coords, sqr_codes) >> >> >> ax = plt.axes() >> patches = [mpatches.PathPatch(pth_circle), >> mpatches.PathPatch(sqr_path)] >> col = mcol.PatchCollection(patches, >> antialiaseds=True, >> edgecolors='none', >> facecolors=[(0, 0.0, 0.0, 0.9), (0.1, 0.1, 0.02, 0.9)]) >> ax.add_collection(col) >> ax.set_xlim([-1, 1]) >> ax.set_ylim([-1, 1]) >> plt.show() >> >> >> >> I know of lots of the workarounds for this (turn off AA, turn >> on lines, extend the path slightly, set a dark background >> color) all of which have down-sides, so I'm keen to find a >> final solution to the problem. >> >> When the two patches marry up perfectly with full >> anti-aliasing, the antigrain (AGG) community call this >> "flash" or compound rendering, and this capability was added >> to Agg 2.4 (which we already ship with mpl). 
>> >> In order to make full use of the compound rendering technique >> I believe the drawing pipeline in "_backend_agg.cpp" would >> need to change, which could be problematic. A less >> wide-impacting alternative would be to draw all "patches" of >> a single Collection in the same rasterization step (i.e. just >> change _draw_path_collection_generic), though this does mean >> that, as it stands, the result of plt.contourf would not be >> able to make use of this new functionality - a MEP which >> changes the return type of plt.contourf to a single >> Collection might be able to fix that. >> >> I've put together a simple example similar to this in C++ >> using agg (no mpl changes yet), showing the differences in >> the code needed between the old technique vs the "new" >> compound renderer (attached). >> >> >> Ok, so the question to those that have knowledge of the >> _backend_agg.cpp code (Mike, Eric, JJ + others?): >> >> * Have you already looked at doing this and determined that >> this is a non-starter? >> * Do you support adding the ability for the agg backend to >> draw compound artists (i.e. Collections) in this way >> rather than treating them as individual primitives in the >> canvas? >> * Since many of the other backends can't do flash >> rendering, would we even want to make this change? >> o SVG in Firefox 10.0.2 has the same problem, it is >> discussed slightly more in >> http://www.svgopen.org/2002/papers/sorotokin__svg_secrets/ >> o Acroread has the same problem with PDFs, only to a >> much lesser extent than in the PNG attached >> >> >> Thoughts? >> >> >> >> >> >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >> endpoint security space. For insight on selecting the right partner to >> tackle endpoint security challenges, access the full report. >> http://p.sf.net/sfu/symantec-dev2dev >> >> >> _______________________________________________ >> Matplotlib-devel mailing list >> Mat...@li... <mailto:Mat...@li...> >> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel > > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The > Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good > choice" in the > endpoint security space. For insight on selecting the right > partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > <mailto:Mat...@li...> > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel > > > |
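As a point of reference for the supersampling discussion, the downsampling step itself is simple to express in numpy: average each 2x2 block of a buffer rendered at twice the target resolution. render_at_2x below stands in for the actual Agg draw and is not a real function; the sketch only shows the reduction.

    import numpy as np

    def downsample_2x(big):
        """Average 2x2 blocks of a (2H, 2W, 4) RGBA buffer down to (H, W, 4)."""
        h, w = big.shape[0] // 2, big.shape[1] // 2
        blocks = big.reshape(h, 2, w, 2, big.shape[2]).astype(float)
        return blocks.mean(axis=(1, 3)).astype(big.dtype)

    # buf = render_at_2x(width, height)   # hypothetical 2x-resolution Agg render
    # final = downsample_2x(buf)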
From: Michael D. <md...@st...> - 2013-03-07 14:37:47
I'm not aware of any discussion about participating in GSoC this year, though I am open to the idea. I was involved as a mentor a few years ago, but I wasn't terribly involved in the administrative side, so I don't know what's involved. I think then we did it under the umbrella of the PSF. I'd be interested in exploring a GL backend further. What, from 10,000 m, is the main impedence mismatch between the current matplotlib backend design and OpenGL rendering? Mike On 03/07/2013 03:39 AM, Nicolas Rougier wrote: > > Hi all, > > Are there any ongoing project for GSOC 2013 ? I would like to propose something around a GL backend but I'm not still sure OpenGL "philosophy" is compatible with current matplotlib design and any project would require co-mentoring with a matplotlib devel guru. There is a lot of experienced people around (Luke Campagnola, Cyrille Rossant, Almar Klein to name a few) who can help on the GL part and we're also currently trying to design together a common low-level API that may definitely help for the GL backend. What do you think ? Do we need first to make sure a GL backend make any sense at all before going further ? > > > Here is a small set of GL experiments I did recently that make me thinks it should be possible to come up with something nice and fast: https://github.com/rougier/gl-agg > > > Some movies for those who don't want to test: > > http://www.youtube.com/watch?v=T010zMtorAk > http://www.youtube.com/watch?v=iFwEzV9Pw-4 > > > > > Nicolas > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel |
From: Ian T. <ian...@gm...> - 2013-03-07 09:34:48
Amit, On 6 March 2013 20:20, Amit Aronovitch <aro...@gm...> wrote: > So, "working"/"not working" test (possibly including some time > measurements) I can do on a fairly short notice. > Producing some more examples that fail with the current code might require > several hours of work, so would probably get delayed for a few weeks. > The quick working/not working tests without time measurements would be ample, thanks. I will be writing a set of small tests within the mpl testing framework for specific cases that currently fail, but it will be useful to check on some of the big datasets that you use. Ian |
From: Ian T. <ian...@gm...> - 2013-03-07 09:29:54
On 6 March 2013 19:10, Chris Barker - NOAA Federal <chr...@no...>wrote: > """ > Qhull does not support triangulation of non-convex surfaces, mesh > generation of non-convex objects, ... or constrained Delaunay > triangulations > """ > these are key to my needs, but are they for MPL? > No, all we need is a drop-in replacement for the existing code, i.e. 2D unconstrained Delaunay. Further functionality is outside our remit of "plotting library". I've also got a colleague with some nice code based on Donald Knuth's > Axioms and Hulls monograph. If someone it interested in adding some > tests and python wrappers, I'll bet he'd be glad to release it with a > suitable license. (C++). We've talked about this before and previously I have been interested. But now I think it is unnecessary to maintain our own code when we have a drop-in replacement that has worked so well for scipy. Ian |
From: Nicolas R. <Nic...@in...> - 2013-03-07 08:39:20
Hi all,

Are there any ongoing projects for GSoC 2013? I would like to propose something around a GL backend, but I'm still not sure the OpenGL "philosophy" is compatible with the current matplotlib design, and any project would require co-mentoring with a matplotlib devel guru. There are a lot of experienced people around (Luke Campagnola, Cyrille Rossant, Almar Klein, to name a few) who can help on the GL part, and we're also currently trying to design together a common low-level API that may definitely help for the GL backend. What do you think? Do we first need to make sure a GL backend makes any sense at all before going further?

Here is a small set of GL experiments I did recently that make me think it should be possible to come up with something nice and fast: https://github.com/rougier/gl-agg

Some movies for those who don't want to test:

http://www.youtube.com/watch?v=T010zMtorAk
http://www.youtube.com/watch?v=iFwEzV9Pw-4

Nicolas
From: Michael D. <md...@st...> - 2013-03-06 20:45:03
I'm trying to compile your examples, but it seems perhaps you forget to include a file -- pixel_formats.hpp? It's not in the agg24 source tree. Mike On 03/06/2013 12:06 PM, Phil Elson wrote: > Smart rendering of adjacent, anti-aliased patches is a question which > has come up a couple of times in various guises in the past. > It is my understanding that the lack of this functionality led us to > disable anti-aliasing for contouring and is the reason the following > image has a white stripe around the circle where there should be just > a nice blend of the two colors: > > > import matplotlib.pyplot as plt > import numpy as np > import matplotlib.patches as mpatches > import matplotlib.path as mpath > import matplotlib.collections as mcol > > > # create two paths. One a circle, the other > # a square with the same circle cut out. > x = np.linspace(0, np.pi * 2, 1000) > > circle_coords = np.array(zip(*[np.sin(x) * 0.8, np.cos(x) * 0.8])) > pth_circle = mpath.Path(circle_coords) > > sqr_codes = np.repeat(mpath.Path.MOVETO, len(circle_coords) + 5) > sqr_codes[1:5] = mpath.Path.LINETO > sqr_codes[6:] = mpath.Path.LINETO > sqr_coords = np.concatenate([[[-1, -1], [-1, 1], [1, 1], [1, -1], [-1, > -1]], > circle_coords[::-1]], axis=0) > sqr_path = mpath.Path(sqr_coords, sqr_codes) > > > ax = plt.axes() > patches = [mpatches.PathPatch(pth_circle), mpatches.PathPatch(sqr_path)] > col = mcol.PatchCollection(patches, > antialiaseds=True, > edgecolors='none', > facecolors=[(0, 0.0, 0.0, 0.9), (0.1, 0.1, 0.02, 0.9)]) > ax.add_collection(col) > ax.set_xlim([-1, 1]) > ax.set_ylim([-1, 1]) > plt.show() > > > > I know of lots of the workarounds for this (turn off AA, turn on > lines, extend the path slightly, set a dark background color) all of > which have down-sides, so I'm keen to find a final solution to the > problem. > > When the two patches marry up perfectly with full anti-aliasing, the > antigrain (AGG) community call this "flash" or compound rendering, and > this capability was added to Agg 2.4 (which we already ship with mpl). > > In order to make full use of the compound rendering technique I > believe the drawing pipeline in "_backend_agg.cpp" would need to > change, which could be problematic. A less wide-impacting alternative > would be to draw all "patches" of a single Collection in the same > rasterization step (i.e. just change _draw_path_collection_generic), > though this does mean that, as it stands, the result of plt.contourf > would not be able to make use of this new functionality - a MEP which > changes the return type of plt.contourf to a single Collection might > be able to fix that. > > I've put together a simple example similar to this in C++ using agg > (no mpl changes yet), showing the differences in the code needed > between the old technique vs the "new" compound renderer (attached). > > > Ok, so the question to those that have knowledge of the > _backend_agg.cpp code (Mike, Eric, JJ + others?): > > * Have you already looked at doing this and determined that this is > a non-starter? > * Do you support adding the ability for the agg backend to draw > compound artists (i.e. Collections) in this way rather than > treating them as individual primitives in the canvas? > * Since many of the other backends can't do flash rendering, would > we even want to make this change? 
> o SVG in Firefox 10.0.2 has the same problem, it is discussed > slightly more in > http://www.svgopen.org/2002/papers/sorotokin__svg_secrets/ > o Acroread has the same problem with PDFs, only to a much lesser > extent than in the PNG attached > > > Thoughts? > > > > > > > > > > > > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > > > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel |
From: Amit A. <aro...@gm...> - 2013-03-06 20:21:07
Thanks Ian. These examples occured when I processed large propriatary datasets. So far, scipy's triangulation worked whenever matplotlib failed. When we have a new implementation, it should be quite simple to check if it works where it had previously failed. Certainly easier than slicing the data to small chunks and trying to distill a failing example of reasonable size as I did in this case. So, "working"/"not working" test (possibly including some time measurements) I can do on a fairly short notice. Producing some more examples that fail with the current code might require several hours of work, so would probably get delayed for a few weeks. Amit On Wed, Mar 6, 2013 at 10:53 AM, Ian Thomas <ian...@gm...> wrote: > Hi Amit, > > I am with you 100% of the way. We should use an existing open source > Delaunay triangulator, and my preference is for QHull as well. > > "Improved Delaunay triangulator" is on my matplotlib todo list, albeit it > quite a long way from the top. I don't tend to use the existing code as I > usually specify my own triangulations, so I have never seen anything quite > as embarrassing as issue #1809. Perhaps I need to bump it up my priority > list. > > If I come up with a possible solution as a PR, would you be prepared to > help test it? You seem to have quite a few examples that don't work under > the existing code and would be very useful for demonstrating if the > improved code is indeed an improvement. > > Ian > > > On 5 March 2013 23:08, Amit Aronovitch <aro...@gm...> wrote: > >> Dear MPL-devs, >> >> Currently, matplotlib does Delaunay triangulation using a special >> purpose module written in C++ (if I'm not mistaken, it was originally >> forked off from some SciKit and wrapped into a Python module). >> Some people (here and on github issues) had suggested it might need some >> rewrites/modification. >> In particular I was wondering if we should continue maintaining it here >> or maybe switch to using some external library. >> >> Since triangulation is not a plotting-specific problem, and some free >> libraries are available for solving it, we might actually benefit in >> terms of efficiency and robustness. >> >> Specifically, I had suggested QHull, which is used by scipy (note that >> now there is also a stand-alone python interface: >> https://pypi.python.org/pypi/pyhull - I did not check that out yet). >> @dmcdougall had suggested Jonathan Shewchuk's triangle library (we >> should check the license though - I think it is "for non-commercial >> use", unlike mpl). There are also other alternatives. >> >> On the other hand, there's the issue of minimizing external >> dependencies. I think @ianthomas23 had once mentioned that he is happy >> with having Delaunay code reside in mpl (and, of course, "maintainable" >> is whatever is most convenient for the maintainers). >> >> I apologize for suggesting more tasks without contributing time to work >> on them. Just thought that since I finally sat down to report issue >> #1809 (which seems to be a particularly slippery bug in the code >> mentioned above), it might be a good time to discuss this topic again. >> >> thanks, >> >> Amit Aronovitch >> > |
From: Eric F. <ef...@ha...> - 2013-03-06 18:43:08
On 2013/03/06 7:06 AM, Phil Elson wrote: > Smart rendering of adjacent, anti-aliased patches is a question which > has come up a couple of times in various guises in the past. > It is my understanding that the lack of this functionality led us to > disable anti-aliasing for contouring and is the reason the following > image has a white stripe around the circle where there should be just a > nice blend of the two colors: > > > import matplotlib.pyplot as plt > import numpy as np > import matplotlib.patches as mpatches > import matplotlib.path as mpath > import matplotlib.collections as mcol > > > # create two paths. One a circle, the other > # a square with the same circle cut out. > x = np.linspace(0, np.pi * 2, 1000) > > circle_coords = np.array(zip(*[np.sin(x) * 0.8, np.cos(x) * 0.8])) > pth_circle = mpath.Path(circle_coords) > > sqr_codes = np.repeat(mpath.Path.MOVETO, len(circle_coords) + 5) > sqr_codes[1:5] = mpath.Path.LINETO > sqr_codes[6:] = mpath.Path.LINETO > sqr_coords = np.concatenate([[[-1, -1], [-1, 1], [1, 1], [1, -1], [-1, > -1]], > circle_coords[::-1]], axis=0) > sqr_path = mpath.Path(sqr_coords, sqr_codes) > > > ax = plt.axes() > patches = [mpatches.PathPatch(pth_circle), mpatches.PathPatch(sqr_path)] > col = mcol.PatchCollection(patches, > antialiaseds=True, > edgecolors='none', > facecolors=[(0, 0.0, 0.0, 0.9), (0.1, 0.1, > 0.02, 0.9)]) > ax.add_collection(col) > ax.set_xlim([-1, 1]) > ax.set_ylim([-1, 1]) > plt.show() > > > > I know of lots of the workarounds for this (turn off AA, turn on lines, > extend the path slightly, set a dark background color) all of which have > down-sides, so I'm keen to find a final solution to the problem. > > When the two patches marry up perfectly with full anti-aliasing, the > antigrain (AGG) community call this "flash" or compound rendering, and > this capability was added to Agg 2.4 (which we already ship with mpl). > > In order to make full use of the compound rendering technique I believe > the drawing pipeline in "_backend_agg.cpp" would need to change, which > could be problematic. A less wide-impacting alternative would be to draw > all "patches" of a single Collection in the same rasterization step > (i.e. just change _draw_path_collection_generic), though this does mean > that, as it stands, the result of plt.contourf would not be able to make > use of this new functionality - a MEP which changes the return type of > plt.contourf to a single Collection might be able to fix that. > > I've put together a simple example similar to this in C++ using agg (no > mpl changes yet), showing the differences in the code needed between the > old technique vs the "new" compound renderer (attached). > > > Ok, so the question to those that have knowledge of the _backend_agg.cpp > code (Mike, Eric, JJ + others?): > > * Have you already looked at doing this and determined that this is a > non-starter? > * Do you support adding the ability for the agg backend to draw > compound artists (i.e. Collections) in this way rather than treating > them as individual primitives in the canvas? > * Since many of the other backends can't do flash rendering, would we > even want to make this change? > o SVG in Firefox 10.0.2 has the same problem, it is discussed > slightly more in > http://www.svgopen.org/2002/papers/sorotokin__svg_secrets/ > o Acroread has the same problem with PDFs, only to a much lesser > extent than in the PNG attached > > > Thoughts? Phil, Would this greatly slow down the rendering? Does it work with alpha < 1? 
I'm initially not enthusiastic about having contourf return a single Collection, but maybe in practice it would not make much difference. The drawback, apart from code breakage, is that it would remove the ability to pick out a level for additional customization. Could this be handled at a subsequent level, by having the renderer be able to treat an arbitrary collection of artists as a group? It seems that contourf is where this "flash" capability would be most important; if it can't be made to work there, I think it might not be worth the trouble to add.

Eric
From: Phil E. <pel...@gm...> - 2013-03-06 17:06:19
Smart rendering of adjacent, anti-aliased patches is a question which has come up a couple of times in various guises in the past. It is my understanding that the lack of this functionality led us to disable anti-aliasing for contouring and is the reason the following image has a white stripe around the circle where there should be just a nice blend of the two colors: import matplotlib.pyplot as plt import numpy as np import matplotlib.patches as mpatches import matplotlib.path as mpath import matplotlib.collections as mcol # create two paths. One a circle, the other # a square with the same circle cut out. x = np.linspace(0, np.pi * 2, 1000) circle_coords = np.array(zip(*[np.sin(x) * 0.8, np.cos(x) * 0.8])) pth_circle = mpath.Path(circle_coords) sqr_codes = np.repeat(mpath.Path.MOVETO, len(circle_coords) + 5) sqr_codes[1:5] = mpath.Path.LINETO sqr_codes[6:] = mpath.Path.LINETO sqr_coords = np.concatenate([[[-1, -1], [-1, 1], [1, 1], [1, -1], [-1, -1]], circle_coords[::-1]], axis=0) sqr_path = mpath.Path(sqr_coords, sqr_codes) ax = plt.axes() patches = [mpatches.PathPatch(pth_circle), mpatches.PathPatch(sqr_path)] col = mcol.PatchCollection(patches, antialiaseds=True, edgecolors='none', facecolors=[(0, 0.0, 0.0, 0.9), (0.1, 0.1, 0.02, 0.9)]) ax.add_collection(col) ax.set_xlim([-1, 1]) ax.set_ylim([-1, 1]) plt.show() I know of lots of the workarounds for this (turn off AA, turn on lines, extend the path slightly, set a dark background color) all of which have down-sides, so I'm keen to find a final solution to the problem. When the two patches marry up perfectly with full anti-aliasing, the antigrain (AGG) community call this "flash" or compound rendering, and this capability was added to Agg 2.4 (which we already ship with mpl). In order to make full use of the compound rendering technique I believe the drawing pipeline in "_backend_agg.cpp" would need to change, which could be problematic. A less wide-impacting alternative would be to draw all "patches" of a single Collection in the same rasterization step (i.e. just change _draw_path_collection_generic), though this does mean that, as it stands, the result of plt.contourf would not be able to make use of this new functionality - a MEP which changes the return type of plt.contourf to a single Collection might be able to fix that. I've put together a simple example similar to this in C++ using agg (no mpl changes yet), showing the differences in the code needed between the old technique vs the "new" compound renderer (attached). Ok, so the question to those that have knowledge of the _backend_agg.cpp code (Mike, Eric, JJ + others?): - Have you already looked at doing this and determined that this is a non-starter? - Do you support adding the ability for the agg backend to draw compound artists (i.e. Collections) in this way rather than treating them as individual primitives in the canvas? - Since many of the other backends can't do flash rendering, would we even want to make this change? - SVG in Firefox 10.0.2 has the same problem, it is discussed slightly more in http://www.svgopen.org/2002/papers/sorotokin__svg_secrets/ - Acroread has the same problem with PDFs, only to a much lesser extent than in the PNG attached Thoughts? |
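As a point of comparison, the "turn on lines" workaround listed above can be applied to the example by reusing its patches and mcol names: stroke each patch in its own face colour so the background never shows through the antialiased seam. The obvious downsides are that each region gets slightly fatter and that, with alpha < 1, the stroked seam renders darker than the fill.

    facecolors = [(0, 0.0, 0.0, 0.9), (0.1, 0.1, 0.02, 0.9)]
    col = mcol.PatchCollection(patches,
                               antialiaseds=True,
                               facecolors=facecolors,
                               edgecolors=facecolors,  # stroke in the fill colour
                               linewidths=0.5)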
From: Ian T. <ian...@gm...> - 2013-03-06 08:53:11
Hi Amit, I am with you 100% of the way. We should use an existing open source Delaunay triangulator, and my preference is for QHull as well. "Improved Delaunay triangulator" is on my matplotlib todo list, albeit it quite a long way from the top. I don't tend to use the existing code as I usually specify my own triangulations, so I have never seen anything quite as embarrassing as issue #1809. Perhaps I need to bump it up my priority list. If I come up with a possible solution as a PR, would you be prepared to help test it? You seem to have quite a few examples that don't work under the existing code and would be very useful for demonstrating if the improved code is indeed an improvement. Ian On 5 March 2013 23:08, Amit Aronovitch <aro...@gm...> wrote: > Dear MPL-devs, > > Currently, matplotlib does Delaunay triangulation using a special > purpose module written in C++ (if I'm not mistaken, it was originally > forked off from some SciKit and wrapped into a Python module). > Some people (here and on github issues) had suggested it might need some > rewrites/modification. > In particular I was wondering if we should continue maintaining it here > or maybe switch to using some external library. > > Since triangulation is not a plotting-specific problem, and some free > libraries are available for solving it, we might actually benefit in > terms of efficiency and robustness. > > Specifically, I had suggested QHull, which is used by scipy (note that > now there is also a stand-alone python interface: > https://pypi.python.org/pypi/pyhull - I did not check that out yet). > @dmcdougall had suggested Jonathan Shewchuk's triangle library (we > should check the license though - I think it is "for non-commercial > use", unlike mpl). There are also other alternatives. > > On the other hand, there's the issue of minimizing external > dependencies. I think @ianthomas23 had once mentioned that he is happy > with having Delaunay code reside in mpl (and, of course, "maintainable" > is whatever is most convenient for the maintainers). > > I apologize for suggesting more tasks without contributing time to work > on them. Just thought that since I finally sat down to report issue > #1809 (which seems to be a particularly slippery bug in the code > mentioned above), it might be a good time to discuss this topic again. > > thanks, > > Amit Aronovitch > |
From: Amit A. <aro...@gm...> - 2013-03-05 23:08:53
Dear MPL-devs, Currently, matplotlib does Delaunay triangulation using a special purpose module written in C++ (if I'm not mistaken, it was originally forked off from some SciKit and wrapped into a Python module). Some people (here and on github issues) had suggested it might need some rewrites/modification. In particular I was wondering if we should continue maintaining it here or maybe switch to using some external library. Since triangulation is not a plotting-specific problem, and some free libraries are available for solving it, we might actually benefit in terms of efficiency and robustness. Specifically, I had suggested QHull, which is used by scipy (note that now there is also a stand-alone python interface: https://pypi.python.org/pypi/pyhull - I did not check that out yet). @dmcdougall had suggested Jonathan Shewchuk's triangle library (we should check the license though - I think it is "for non-commercial use", unlike mpl). There are also other alternatives. On the other hand, there's the issue of minimizing external dependencies. I think @ianthomas23 had once mentioned that he is happy with having Delaunay code reside in mpl (and, of course, "maintainable" is whatever is most convenient for the maintainers). I apologize for suggesting more tasks without contributing time to work on them. Just thought that since I finally sat down to report issue #1809 (which seems to be a particularly slippery bug in the code mentioned above), it might be a good time to discuss this topic again. thanks, Amit Aronovitch |
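For a sense of what switching to Qhull would mean in practice, scipy already exposes a Qhull-based 2D Delaunay triangulation whose triangle indices can be fed straight into matplotlib.tri (assuming scipy is installed; this is just an illustration, not a proposed matplotlib API):

    import numpy as np
    from scipy.spatial import Delaunay          # Qhull under the hood
    import matplotlib.tri as mtri
    import matplotlib.pyplot as plt

    pts = np.random.rand(200, 2)
    tri = Delaunay(pts)                         # robust 2D Delaunay via Qhull
    triang = mtri.Triangulation(pts[:, 0], pts[:, 1], triangles=tri.simplices)
    plt.triplot(triang, lw=0.5)
    plt.show()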
From: Damon M. <dam...@gm...> - 2013-03-04 21:33:07
On Mon, Mar 4, 2013 at 6:44 AM, David Verelst <dav...@gm...> wrote:
>
> sorry...forgot to tag the subject as [matplotlib-devel]

The server does it for you.

> On 4 March 2013 13:35, David Verelst <dav...@gm...> wrote:
>>
>> Hi,
>>
>> I am running Arch Linux, Matplotlib 1.2, Python 2.7, and today I realized
>> that generating *.eps figures when matplotlib is using the latex output
>> rc('text', usetex=True) results in a corrupted eps figure in combination
>> with Ghostscript 9.07. The png variant of the same figure works fine, eps
>> works fine if usetex=False. Everything works fine when I downgrade back to
>> Ghostscript 9.06.
>>
>> [...]

--
Damon McDougall
http://www.damon-is-a-geek.com
Institute for Computational Engineering Sciences
201 E. 24th St.
Stop C0200
The University of Texas at Austin
Austin, TX 78712-1229
|
From: David V. <dav...@gm...> - 2013-03-04 12:45:09
|
sorry...forgot to tag the subject as [matplotlib-devel]

On 4 March 2013 13:35, David Verelst <dav...@gm...> wrote:
> Hi,
>
> I am running Arch Linux, Matplotlib 1.2, Python 2.7, and today I realized
> that generating *.eps figures when matplotlib is using the latex output
> rc('text', usetex=True) results in a corrupted eps figure in combination
> with Ghostscript 9.07. The png variant of the same figure works fine, eps
> works fine if usetex=False. Everything works fine when I downgrade back to
> Ghostscript 9.06.
>
> [...]
|
From: Thomas K. <th...@kl...> - 2013-03-04 12:37:36
|
On 2 March 2013 23:19, Thomas Kluyver <th...@kl...> wrote:
> Not directly - it runs on a completely automated build server. I've just
> pushed a commit to the packaging rules which will try this the next time it
> does the build. But it's not exactly instant feedback for debugging ;-).
> Hopefully once we see the ImportError, it will be clear what we need to
> change.

The build environment was missing pyparsing. That doesn't affect compiling,
but matplotlib can't be imported without it. I've added pyparsing to the
build dependencies, so hopefully the next build will complete.

Thanks,
Thomas
|
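A quick way to confirm that a build environment has matplotlib's import-time Python dependencies (a sketch; the module list of numpy, pyparsing and dateutil is an assumption drawn from this thread, not an exhaustive requirements list):

    from __future__ import print_function
    import importlib

    # Modules matplotlib needs at import time, per this thread; extend as needed.
    for name in ("numpy", "pyparsing", "dateutil"):
        try:
            importlib.import_module(name)
            print("%-10s OK" % name)
        except ImportError as exc:
            print("%-10s MISSING (%s)" % (name, exc))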
From: David V. <dav...@gm...> - 2013-03-04 12:35:53
|
Hi,

I am running Arch Linux, Matplotlib 1.2, Python 2.7, and today I realized
that generating *.eps figures when matplotlib is using the latex output
rc('text', usetex=True) results in a corrupted eps figure in combination with
Ghostscript 9.07. The png variant of the same figure works fine, and eps
works fine if usetex=False. Everything works fine when I downgrade back to
Ghostscript 9.06.

Is this related to these?
https://github.com/matplotlib/matplotlib/issues/1693
https://github.com/matplotlib/matplotlib/pull/1694
If it is, I guess the problem is already solved. I haven't tested that yet
(need to build matplotlib from git first...).

I have no idea if this is an Arch Linux packaging, Ghostscript or Matplotlib
issue... hence this email.

Regards,
David


An example, from: http://matplotlib.org/users/usetex.html

#!/usr/bin/env python
"""
You can use TeX to render all of your matplotlib text if the rc
parameter text.usetex is set. This works currently on the agg and ps
backends, and requires that you have tex and the other dependencies
described at http://matplotlib.sf.net/matplotlib.texmanager.html
properly installed on your system. The first time you run a script
you will see a lot of output from tex and associated tools. The next
time, the run may be silent, as a lot of the information is cached in
~/.tex.cache

"""
from matplotlib import rc
from numpy import arange, cos, pi
from matplotlib.pyplot import figure, axes, plot, xlabel, ylabel, title, \
    grid, savefig, show


rc('text', usetex=True)
rc('font', family='serif')
figure(1, figsize=(6,4))
ax = axes([0.1, 0.1, 0.8, 0.7])
t = arange(0.0, 1.0+0.01, 0.01)
s = cos(2*2*pi*t)+2
plot(t, s)

xlabel(r'\textbf{time (s)}')
ylabel(r'\textit{voltage (mV)}',fontsize=16)
title(r"\TeX\ is Number $\displaystyle\sum_{n=1}^\infty\frac{-e^{i\pi}}{2^n}$!",
      fontsize=16, color='r')
grid(True)
savefig('tex_demo.eps')
savefig('tex_demo.png')

show()


When converting the eps figure with imagemagick (just to check the file),
the following error is given:

$ convert Desktop/tex_demo.eps ddd.eps
Error: /dictstackunderflow in --end--
Operand stack:

Execution stack:
   %interp_exit   .runexec2   --nostringval--   --nostringval--
   --nostringval--   2   %stopped_push   --nostringval--   --nostringval--
   --nostringval--   false   1   %stopped_push   1900   1   3   %oparray_pop
   1899   1   3   %oparray_pop   --nostringval--   1883   1   3   %oparray_pop
   1771   1   3   %oparray_pop   --nostringval--   %errorexec_pop   .runexec2
   --nostringval--   --nostringval--   --nostringval--   2   %stopped_push
   --nostringval--
Dictionary stack:
   --dict:1169/1684(ro)(G)--   --dict:0/20(G)--   --dict:82/200(L)--
Current allocation mode is local
Last OS error: No such file or directory
Current file position is 102614
GPL Ghostscript 9.07: Unrecoverable error, exit code 1
[... the same /dictstackunderflow error block is printed a second time ...]
convert: Postscript delegate failed `Desktop/tex_demo.eps': No such file
or directory @ error/ps.c/ReadPSImage/836.
convert: no images defined `ddd.eps' @
error/convert.c/ConvertImageCommand/3068.
|
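For anyone reproducing this, a small check of which Ghostscript release is being picked up (a sketch; the bare `gs` executable name is an assumption, and the 9.07 bad / 9.06 good behaviour is as reported above):

    from __future__ import print_function
    import subprocess

    # Ask the Ghostscript binary on the PATH for its version string.
    version = subprocess.check_output(["gs", "--version"]).decode().strip()
    print("Ghostscript " + version)
    if version.startswith("9.07"):
        print("9.07 is the release reported above to corrupt usetex EPS "
              "output; 9.06 was reported to work.")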
From: Phil E. <pel...@gm...> - 2013-03-02 19:44:42
|
Nothing springs to mind. Perhaps the install is failing due to
https://github.com/matplotlib/matplotlib/pull/1454?

Sounds like you don't have access to the machine to check that

    $> python -c "import matplotlib"

works?

On 2 March 2013 17:25, Thomas Kluyver <th...@kl...> wrote:
> The Launchpad daily builds have recently started failing on the docs
> build. I get the error message "Error: matplotlib must be installed before
> building the documentation", which means that there's an ImportError raised
> when it tries to import matplotlib. Unfortunately, it hides any other
> information about the ImportError, and I can't currently replicate it on my
> own computer.
>
> [...]
|
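A sketch of how to surface the hidden ImportError by hand, on the assumption that the build tree path quoted in this thread (../build/lib.linux-i686-2.7) is what the docs build imports from:

    from __future__ import print_function
    import sys
    import traceback

    # Put the freshly built tree first on the path, as the docs build does.
    sys.path.insert(0, "../build/lib.linux-i686-2.7")

    try:
        import matplotlib
        print("imported matplotlib %s from %s"
              % (matplotlib.__version__, matplotlib.__file__))
    except ImportError:
        # Show the full traceback instead of the generic
        # "matplotlib must be installed" message.
        traceback.print_exc()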
From: Thomas K. <th...@kl...> - 2013-03-02 17:54:11
|
The Launchpad daily builds have recently started failing on the docs build. I
get the error message "Error: matplotlib must be installed before building
the documentation", which means that there's an ImportError raised when it
tries to import matplotlib. Unfortunately, it hides any other information
about the ImportError, and I can't currently replicate it on my own computer.

It builds the Python library, then attempts to build the docs with the
command:

    cd doc ; MATPLOTLIBDATA=../lib/matplotlib/mpl-data/ \
        PYTHONPATH=../build/lib.linux-i686-2.7 ./make.py --small all

Does anyone know what might have changed a few days ago to cause this? It
built successfully on the 26th of February, there was a different problem on
the 27th, and when I resolved that, this failure started on the 28th. I've
had a look at the commit log, but nothing jumps out.

Thanks,
Thomas
|
From: Damon M. <dam...@gm...> - 2013-03-01 20:36:30
|
On Thu, Feb 28, 2013 at 11:24 AM, Nelle Varoquaux <nel...@gm...> wrote:
> Hello,
>
> Since I've updated my master branch, I have a new error when installing
> matplotlib. It seems it doesn't find a header file from numpy:
>
> I've reinstalled numpy (development version), and I tried to reinstall
> everything (on python 2.6) but I still get the error. Am I the only one
> having installation problems ?
>
> Here is the (partial) traceback:
>
> [...]
>
> error: command 'gcc' failed with exit status 1

That's weird. Did you try installing a stable numpy version instead? I tried
compiling mpl against numpy 1.6.2 and everything worked out fine.

--
Damon McDougall
http://www.damon-is-a-geek.com
Institute for Computational Engineering Sciences
201 E. 24th St.
Stop C0200
The University of Texas at Austin
Austin, TX 78712-1229
|
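A small check of whether the numpy being imported is a development snapshot or a release (a sketch; the 'dev' substring test is only a heuristic for numpy's development version strings):

    from __future__ import print_function
    import numpy

    print("numpy %s from %s" % (numpy.__version__, numpy.__file__))
    if "dev" in numpy.__version__:
        # A pinned release such as 1.6.2, as suggested above, helps rule the
        # numpy install out as the culprit.
        print("this is a development build of numpy")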
From: Michael D. <md...@st...> - 2013-03-01 19:36:47
|
I often find that a `git clean -fxd` is enough, rather than blitzing my whole
virtualenv.

On 03/01/2013 10:16 AM, Nelle Varoquaux wrote:
>> That's weird. Did you try installing a stable numpy version instead?
>> I tried compiling mpl against numpy 1.6.2 and everything worked out
>> fine.
>
> I've reinstalled everything, and it works fine. I still have problems
> when I switch from an old branch (before the merge of the packaging
> changes) and master, but I've resigned myself to destroying my virtualenv
> each time.
>
> Thanks for the input!
> N
|
From: Nelle V. <nel...@gm...> - 2013-03-01 19:28:54
|
> That's weird. Did you try installing a stable numpy version instead?
> I tried compiling mpl against numpy 1.6.2 and everything worked out
> fine.

I've reinstalled everything, and it works fine. I still have problems when I
switch from an old branch (before the merge of the packaging changes) and
master, but I've resigned myself to destroying my virtualenv each time.

Thanks for the input!
N
|
From: Michael D. <md...@st...> - 2013-03-01 18:50:18
|
It thinks the Numpy header files are here:

    /usr/lib/pymodules/python2.6/numpy/core/include

Are they there, and from the right version of Numpy?

When you run python (the same copy you're building with) and import numpy,
does printing "numpy.__version__" give the expected version? What do
"numpy.__file__" and "numpy.get_version()" give?

Mike

On 02/28/2013 11:24 AM, Nelle Varoquaux wrote:
> Hello,
>
> Since I've updated my master branch, I have a new error when
> installing matplotlib. It seems it doesn't find a header file from numpy:
>
> I've reinstalled numpy (development version), and I tried to reinstall
> everything (on python 2.6) but I still get the error. Am I the only
> one having installation problems ?
>
> [...]
|
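A compact version of the checks Mike asks for above (a sketch; numpy.get_include() is used here to report the header directory the build should be compiling against, which is where npy_3kcompat.h lives):

    from __future__ import print_function
    import os
    import numpy

    print("numpy version : " + numpy.__version__)
    print("numpy module  : " + numpy.__file__)
    print("include dir   : " + numpy.get_include())
    header = os.path.join(numpy.get_include(), "numpy", "npy_3kcompat.h")
    print("npy_3kcompat.h: " + ("present" if os.path.exists(header) else "missing"))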
From: Nelle V. <nel...@gm...> - 2013-03-01 11:16:39
|
Hello,

Since I've updated my master branch, I have a new error when installing
matplotlib. It seems it doesn't find a header file from numpy.

I've reinstalled numpy (development version), and I tried to reinstall
everything (on python 2.6) but I still get the error. Am I the only one
having installation problems?

Here is the (partial) traceback:

building 'matplotlib._png' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -fPIC
-DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1
-I/usr/local/include -I/usr/include -I. -I/usr/include/libpng12
-I/usr/lib/pymodules/python2.6/numpy/core/include -I/usr/include/python2.6
-c src/_png.cpp -o build/temp.linux-x86_64-2.6/src/_png.o
In file included from src/_png.cpp:31:
src/file_compat.h:4:32: error: numpy/npy_3kcompat.h: No such file or directory
In file included from src/_png.cpp:31:
src/file_compat.h: In function ‘int npy_PyFile_CloseFile(PyObject*)’:
src/file_compat.h:125: warning: deprecated conversion from string constant to ‘char*’
src/_png.cpp: In member function ‘Py::Object _png_module::write_png(const Py::Tuple&)’:
src/_png.cpp:137: error: ‘npy_PyFile_OpenFile’ was not declared in this scope
src/_png.cpp:147: error: ‘npy_PyFile_Dup’ was not declared in this scope
src/_png.cpp:243: error: ‘npy_PyFile_DupClose’ was not declared in this scope
src/_png.cpp:264: error: ‘npy_PyFile_DupClose’ was not declared in this scope
src/_png.cpp: In member function ‘PyObject* _png_module::_read_png(const Py::Object&, bool, int)’:
src/_png.cpp:321: error: ‘npy_PyFile_OpenFile’ was not declared in this scope
src/_png.cpp:329: error: ‘npy_PyFile_Dup’ was not declared in this scope
src/_png.cpp:577: error: ‘npy_PyFile_DupClose’ was not declared in this scope
error: command 'gcc' failed with exit status 1

Thanks,
N
|
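One way to check the failing include path directly (a sketch; the directory is copied verbatim from the gcc command above, so adjust it to the local setup):

    from __future__ import print_function
    import os

    # The -I directory gcc was given for the numpy headers in the failing build.
    include_dir = "/usr/lib/pymodules/python2.6/numpy/core/include"
    header = os.path.join(include_dir, "numpy", "npy_3kcompat.h")
    print(header + (" exists" if os.path.exists(header) else " is missing"))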