From: David k. <dav...@ir...> - 2012-02-17 17:30:33
Hi, I will be on vacation with limited email from February 18-25, 2012. (The same notice in French: Hello, I will be on leave from February 18 to 25, 2012.) Thanks, David
From: Benjamin R. <ben...@ou...> - 2012-02-17 17:27:47
On Fri, Feb 17, 2012 at 10:54 AM, Phil Elson <phi...@ho...> wrote:

> I think this feature was originally intended to work (since TransformedPath exists) but it wasn't working [in the way that I was expecting it to]. I made a change which now only invalidates non-affine transformations if it is really necessary. This change required a modification to the way invalidation was passed through the transform stack, since certain transform subclasses need to override the mechanism. I will try to explain the reason why this is the case:
>
> Suppose a TransformNode is told by its child node that it can no longer store the affine-transformed path; it must then pass this message up to its parent nodes, until eventually a TransformedPath instance is invalidated (triggering a re-computation). With Transforms this recursion can simply pass the same invalidation message up, but for the more complex case of a CompositeTransform, which represents the combination of two Transforms, things get harder. I will devise a notation to help me explain:
>
> Let a composite transform, A, represent an affine transformation (a1) followed by a non-affine transformation (vc2) [vc stands for very complicated]; we can write this in the form (a1, vc2). Since non-affine Transform instances are composed of a non-affine transformation followed by an affine one, we can write (vc2) as (c2, a2), and the composite can now be written as (a1, c2, a2).
>
> As a bit of background knowledge, computing the non-affine transformation of A involves computing (a1, c2) and leaves the term (a2) as the affine component. Additionally, a CompositeTransform which looks like (c1, a1, a2) can be optimised such that its affine part is (a1, a2).
>
> There are four permutations of CompositeTransforms:
>
>     A = (a1, c2, a2)
>     B = (c1, a1, a2)
>     C = (c1, a1, c2, a2)
>     D = (a1, a2)
>
> When a child of a CompositeTransform tells us that its affine part is invalid, we need to know which child it is that has told us. This is best demonstrated in transform A: if the invalid part is a1, then it follows that the non-affine part (a1, c2) is also invalid, hence A must inform its parent that its entire transform is invalid. Conversely, if the invalid part is a2, then the non-affine part (a1, c2) is unchanged, and A can pass on the message that only its affine part is invalid.
>
> The changes can be found at https://github.com/PhilipElson/matplotlib/compare/path_transform_cache and I would really appreciate your feedback. I can make a pull request of this if that makes in-line discussion easier.
>
> Many thanks,

Chances are, you have just now become the resident expert on Transforms. A few very important questions: Does this change any existing API? If so, the changes will have to be handled very carefully. Do all the current tests pass? Can you think of any additional tests to add (both for your changes and for the current behavior)? How does this impact the performance of existing code? Maybe some demo code to help us evaluate your use case?

Ben Root
From: Benjamin R. <ben...@ou...> - 2012-02-17 17:17:56
On Fri, Feb 17, 2012 at 11:06 AM, Ryan May <rm...@gm...> wrote:

> On Fri, Feb 17, 2012 at 10:14 AM, Benjamin Root <ben...@ou...> wrote:
> > Hello all,
> >
> > I tracked down an annoying problem in one of my applications relating to the Lasso widget I was using. The widget constructor lets you specify a function to call when the lasso operation is complete. So, when I create a Lasso, I set the canvas's widget lock to the new lasso, and the release function will unlock it when it is done. What would occasionally happen is that the canvas wouldn't get unlocked and I wouldn't be able to use any other widget tools.
> >
> > It turns out that the release function is not called if the number of vertices collected is not more than 2. So, accidental clicks that activate the lasso never get cleaned up. Because of this design, it would be impossible to guarantee a proper cleanup. One could add another button_release callback to clean up if the canvas is still locked, but there is no guarantee that that callback is not called before the lasso's callback, thereby creating a race condition.
> >
> > The only solution I see is to guarantee that the release callback will be called regardless of the length of the vertices array. Does anybody see a problem with that?
>
> Not having looked at the Lasso code, wouldn't it be possible to use one internal callback for the button_release event, and have this callback call the user's callback if points > 2 and always handle the unlocking of the canvas?
>
> Ryan

The problem is that the constructor does not establish the lock. It is the user's responsibility to establish and release the locks for these widgets. Plus, if the user's callback has cleanup code (as mine did), not guaranteeing that the callback is run can leave behind a mess. Now, if we were to change the paradigm so that the Widget class establishes and releases the lock, and the user never handles it, that might be a partial solution, but it still leaves the user's cleanup needs unsolved.

Ben Root
From: Ryan M. <rm...@gm...> - 2012-02-17 17:07:14
On Fri, Feb 17, 2012 at 10:14 AM, Benjamin Root <ben...@ou...> wrote:

> Hello all,
>
> I tracked down an annoying problem in one of my applications relating to the Lasso widget I was using. The widget constructor lets you specify a function to call when the lasso operation is complete. So, when I create a Lasso, I set the canvas's widget lock to the new lasso, and the release function will unlock it when it is done. What would occasionally happen is that the canvas wouldn't get unlocked and I wouldn't be able to use any other widget tools.
>
> It turns out that the release function is not called if the number of vertices collected is not more than 2. So, accidental clicks that activate the lasso never get cleaned up. Because of this design, it would be impossible to guarantee a proper cleanup. One could add another button_release callback to clean up if the canvas is still locked, but there is no guarantee that that callback is not called before the lasso's callback, thereby creating a race condition.
>
> The only solution I see is to guarantee that the release callback will be called regardless of the length of the vertices array. Does anybody see a problem with that?

Not having looked at the Lasso code, wouldn't it be possible to use one internal callback for the button_release event, and have this callback call the user's callback if points > 2 and always handle the unlocking of the canvas?

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
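For illustration, a minimal sketch of the arrangement Ryan suggests might look like the following. The class and method names are hypothetical (this is not the actual Lasso implementation), though widgetlock.isowner() and widgetlock.release() are the real LockDraw methods:

    class LassoSketch:
        def __init__(self, ax, xy, callback):
            self.axes = ax
            self.canvas = ax.figure.canvas
            self.callback = callback
            self.verts = [xy]
            self.cid = self.canvas.mpl_connect('button_release_event',
                                               self._onrelease)

        def _onrelease(self, event):
            try:
                # Only forward "real" lassos to the user's callback.
                if len(self.verts) > 2:
                    self.callback(self.verts)
            finally:
                # Always runs, even for an accidental click or if the
                # user's callback raises, so the canvas never stays locked.
                self.canvas.mpl_disconnect(self.cid)
                if self.canvas.widgetlock.isowner(self):
                    self.canvas.widgetlock.release(self)

Note that this variant assumes the widget itself owns the lock, which, as Ben points out in his reply, is not how the current widgets behave.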
From: Phil E. <phi...@ho...> - 2012-02-17 16:54:22
I think this feature was originally intended to work (since TransformedPath exists) but it wasn't working [in the way that I was expecting it to]. I made a change which now only invalidates non-affine transformations if it is really necessary. This change required a modification to the way invalidation was passed through the transform stack, since certain transform subclasses need to override the mechanism. I will try to explain the reason why this is the case:

Suppose a TransformNode is told by its child node that it can no longer store the affine-transformed path; it must then pass this message up to its parent nodes, until eventually a TransformedPath instance is invalidated (triggering a re-computation). With Transforms this recursion can simply pass the same invalidation message up, but for the more complex case of a CompositeTransform, which represents the combination of two Transforms, things get harder. I will devise a notation to help me explain:

Let a composite transform, A, represent an affine transformation (a1) followed by a non-affine transformation (vc2) [vc stands for very complicated]; we can write this in the form (a1, vc2). Since non-affine Transform instances are composed of a non-affine transformation followed by an affine one, we can write (vc2) as (c2, a2), and the composite can now be written as (a1, c2, a2).

As a bit of background knowledge, computing the non-affine transformation of A involves computing (a1, c2) and leaves the term (a2) as the affine component. Additionally, a CompositeTransform which looks like (c1, a1, a2) can be optimised such that its affine part is (a1, a2).

There are four permutations of CompositeTransforms:

    A = (a1, c2, a2)
    B = (c1, a1, a2)
    C = (c1, a1, c2, a2)
    D = (a1, a2)

When a child of a CompositeTransform tells us that its affine part is invalid, we need to know which child it is that has told us. This is best demonstrated in transform A: if the invalid part is a1, then it follows that the non-affine part (a1, c2) is also invalid, hence A must inform its parent that its entire transform is invalid. Conversely, if the invalid part is a2, then the non-affine part (a1, c2) is unchanged, and A can pass on the message that only its affine part is invalid.

The changes can be found at https://github.com/PhilipElson/matplotlib/compare/path_transform_cache and I would really appreciate your feedback. I can make a pull request of this if that makes in-line discussion easier.

Many thanks,
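To make the "which child told us" point concrete, here is a minimal, self-contained sketch of the propagation logic. The Node and Composite classes are purely illustrative; this is not matplotlib's actual TransformNode API:

    class Node:
        def __init__(self):
            self.parents = []

        def invalidate(self, child=None, affine_only=True):
            # Plain transforms just forward the message unchanged.
            for parent in self.parents:
                parent.invalidate(child=self, affine_only=affine_only)

    class Composite(Node):
        """Represents (a1, c2, a2): child_a = a1, child_b = vc2 = (c2, a2)."""
        def __init__(self, child_a, child_b):
            Node.__init__(self)
            self.child_a, self.child_b = child_a, child_b
            child_a.parents.append(self)
            child_b.parents.append(self)

        def invalidate(self, child=None, affine_only=True):
            # a1 feeds the cached non-affine part (a1, c2), so any change
            # to a1 invalidates everything; a change confined to the
            # trailing affine a2 leaves (a1, c2) intact and only the
            # affine part needs recomputing upstream.
            if child is self.child_a:
                affine_only = False
            for parent in self.parents:
                parent.invalidate(child=self, affine_only=affine_only)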
From: Benjamin R. <ben...@ou...> - 2012-02-17 16:14:46
Hello all,

I tracked down an annoying problem in one of my applications relating to the Lasso widget I was using. The widget constructor lets you specify a function to call when the lasso operation is complete. So, when I create a Lasso, I set the canvas's widget lock to the new lasso, and the release function will unlock it when it is done. What would occasionally happen is that the canvas wouldn't get unlocked and I wouldn't be able to use any other widget tools.

It turns out that the release function is not called if the number of vertices collected is not more than 2. So, accidental clicks that activate the lasso never get cleaned up. Because of this design, it would be impossible to guarantee a proper cleanup. One could add another button_release callback to clean up if the canvas is still locked, but there is no guarantee that that callback is not called before the lasso's callback, thereby creating a race condition.

The only solution I see is to guarantee that the release callback will be called regardless of the length of the vertices array. Does anybody see a problem with that?

Cheers!
Ben Root
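For context, the usage pattern being described looks roughly like this (a sketch modeled on matplotlib's lasso_demo example; Lasso, widgetlock and mpl_connect are the real APIs, while the surrounding class is illustrative):

    from matplotlib.widgets import Lasso

    class LassoTool:
        def __init__(self, ax):
            self.ax = ax
            self.canvas = ax.figure.canvas
            self.canvas.mpl_connect('button_press_event', self.onpress)

        def onpress(self, event):
            if self.canvas.widgetlock.locked() or event.inaxes is not self.ax:
                return
            self.lasso = Lasso(self.ax, (event.xdata, event.ydata),
                               self.onrelease)
            self.canvas.widgetlock(self.lasso)  # lock out other widgets

        def onrelease(self, verts):
            # ... process the lassoed vertices ...
            # With two or fewer vertices this callback is never invoked,
            # so the release below never runs -- the bug in question.
            self.canvas.widgetlock.release(self.lasso)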
From: Pavel M. <pma...@bl...> - 2012-02-16 22:10:26
Is this possible? How difficult is it? Is anyone interested in consulting work porting Matplotlib to IronPython? Appreciate any help/suggestions. Thanks
From: Benjamin R. <ben...@ou...> - 2012-02-14 04:47:35
On Monday, February 13, 2012, Tony Yu <ts...@gm...> wrote:

> The title is a bit misleading: the problem is that the last font-related rc setting seems to override all previous settings. To clarify, if I save a figure with certain font settings and *after that* change the rc setting, the older figure appears to have the newer setting. Note that this only appears to happen with fonts---the linewidth setting, for example, shows up as expected. (See the script below.)
>
> -Tony
>
> import matplotlib.pyplot as plt
>
> def test_simple_plot():
>     fig, ax = plt.subplots()
>     ax.plot([0, 1])
>     ax.set_xlabel('x-label')
>     ax.set_ylabel('y-label')
>     ax.set_title('title')
>     return fig
>
> plt.rcParams['lines.linewidth'] = 10
> plt.rcParams['font.family'] = 'serif'
> plt.rcParams['font.size'] = 20
> fig1 = test_simple_plot()
>
> plt.rcParams['lines.linewidth'] = 1
> plt.rcParams['font.family'] = 'sans-serif'
> plt.rcParams['font.size'] = 10
> fig2 = test_simple_plot()
>
> plt.show()

Looks like we have an inconsistency here with how we process our Nones. For most artists, properties defined as None in the constructor are then given defaults from the rcParams. I would guess that text objects are doing it on draw() instead. At first glance, I would guess that would be a bug, but I would welcome other comments on this.

Ben Root
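A minimal sketch of the two lookup patterns Ben contrasts (the LineLike and TextLike classes are hypothetical, purely for illustration -- this is not matplotlib's actual artist code); only the draw-time lookup "sees" rcParams changes made after the artist was created:

    import matplotlib as mpl

    class LineLike:
        def __init__(self, linewidth=None):
            # Resolved once, at construction time: later rcParams
            # edits do not affect this artist.
            if linewidth is None:
                linewidth = mpl.rcParams['lines.linewidth']
            self.linewidth = linewidth

    class TextLike:
        def __init__(self, size=None):
            self.size = size  # None is kept around...

        def draw(self):
            # ...and only resolved here, so the *current* rcParams win,
            # which would explain old figures picking up new font settings.
            return self.size if self.size is not None else mpl.rcParams['font.size']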
From: Tony Yu <ts...@gm...> - 2012-02-14 04:31:15
The title is a bit misleading: the problem is that the last font-related rc setting seems to override all previous settings. To clarify, if I save a figure with certain font settings and *after that* change the rc setting, the older figure appears to have the newer setting. Note that this only appears to happen with fonts---the linewidth setting, for example, shows up as expected. (See the script below.)

-Tony

    import matplotlib.pyplot as plt

    def test_simple_plot():
        fig, ax = plt.subplots()
        ax.plot([0, 1])
        ax.set_xlabel('x-label')
        ax.set_ylabel('y-label')
        ax.set_title('title')
        return fig

    plt.rcParams['lines.linewidth'] = 10
    plt.rcParams['font.family'] = 'serif'
    plt.rcParams['font.size'] = 20
    fig1 = test_simple_plot()

    plt.rcParams['lines.linewidth'] = 1
    plt.rcParams['font.family'] = 'sans-serif'
    plt.rcParams['font.size'] = 10
    fig2 = test_simple_plot()

    plt.show()
From: Fernando P. <fpe...@gm...> - 2012-02-13 21:56:12
Hi folks,

[I'm broadcasting this widely for maximum reach, but I'd appreciate it if replies can be kept to the *numpy* list, which is sort of the 'base' list for scientific/numerical work. It will make it much easier to organize a coherent set of notes later on. Apologies if you're subscribed to all and get it 10 times.]

As part of the PyData workshop (http://pydataworkshop.eventbrite.com), to be held March 2 and 3 at the Mountain View Google offices, we have scheduled a session for an open discussion with Guido van Rossum and hopefully as many core python-dev members as can make it. We wanted to seize the combined opportunity of the PyData workshop bringing a number of 'scipy people' to Google, with the timeline for Python 3.3, the first release after the Python language moratorium, being within sight: http://www.python.org/dev/peps/pep-0398.

While a number of scientific Python packages are already available for Python 3 (either in released form or in their master git branches), it's fair to say that there hasn't been a major transition of the scientific community to Python 3. Since there is no more development being done on the Python 2 series, eventually we will all want to find ways to make this transition, and we think that this is an excellent time to engage the core Python development team and consider ideas that would make Python 3 generally a more appealing language for scientific work.

Guido has made it clear that he doesn't speak for the day-to-day development of Python anymore, so we should all be aware that any ideas that come out of this panel will still need to be discussed with python-dev itself via standard mechanisms before anything is implemented. Nonetheless, the opportunity for a solid face-to-face dialog for brainstorming was too good to pass up.

The purpose of this email is then to solicit, from all of our community, ideas for this discussion. In a week or so we'll need to summarize the main points brought up here and make a more concrete agenda out of it; I will also post a summary of the meeting afterwards here.

Anything is a valid topic; some points just to get the conversation started:

- Extra operators/PEP 225. Here's a summary from the last time we went over this, years ago at SciPy 2008: http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html, and the current status of the document we wrote about it is here: file:///home/fperez/www/site/_build/html/py4science/numpy-pep225/numpy-pep225.html.

- Improved syntax/support for rationals or decimal literals? While Python now has both decimals (http://docs.python.org/library/decimal.html) and rationals (http://docs.python.org/library/fractions.html), they're quite clunky to use because they require full constructor calls. Guido has mentioned in previous discussions toying with ideas about support for different kinds of numeric literals...

- Using the numpy docstring standard Python-wide, and thus having Python improve the pathetic state of the stdlib's docstrings? This is an area where our community is light years ahead of the standard library, but we'd all benefit from Python itself improving on this front. I'm toying with the idea of giving a lightning talk at PyCon about this, comparing the great, robust culture and tools of good docstrings across the SciPy ecosystem with the sad, sad state of docstrings in the stdlib. It might spur some movement on that front from the stdlib authors, especially if the core python-dev team realizes the value and benefit it can bring (at relatively low cost, given how most of the information does exist; it's just in the wrong places). But more importantly for us, if there were truly a universal standard for high-quality docstrings across Python projects, building good documentation/help machinery would be a lot easier, as we'd know what to expect and search for (such as rendering them nicely in the IPython notebook, providing high-quality cross-project help search, etc).

- Literal syntax for arrays? Sage has been floating a discussion about a literal matrix syntax (https://groups.google.com/forum/#!topic/sage-devel/mzwepqZBHnA). For something like this to go into Python in any meaningful way there would have to be core multidimensional arrays in the language, but perhaps it's time to think about moving a piece of the numpy array itself into Python? This is one of the more 'out there' ideas, but after all, that's the point of a discussion like this, especially considering we'll have both Travis and Guido in one room.

- Other syntactic sugar? Sage has "a..b" <=> range(a, b+1), which I actually think is both nice and useful... There's also the question of allowing "a:b:c" notation outside of [], which has come up a few times in conversation over the last few years. Others?

- The packaging quagmire? This continues to be a problem, though Python 3 does have new improvements to distutils. I'm not really up to speed on the situation, to be frank. If we want to bring this up, someone will have to provide a solid reference or volunteer to do it in person.

- etc...

I'm putting the above just to *start* the discussion, but the real point is for the rest of the community to contribute ideas, so don't be shy.

Final note: while I am here committing to organizing and presenting this at the discussion with Guido (as well as contacting python-dev), I would greatly appreciate help with the task of summarizing this prior to the meeting, as I'm pretty badly swamped in the run-in to PyData/PyCon. So if anyone is willing to help draft the summary as the date draws closer (we can put it up on a github wiki, gist, whatever), I will be very grateful. I'm sure it will be better than what I'll otherwise do the last night at 2am :)

Cheers,
f

ps - To the obvious question about webcasting the discussion live for remote participation: yes, we looked into it already; no, unfortunately it appears it won't be possible. We'll try to at least have the audio recorded (and possibly video) for posting later on.

pps - If you are close to Mountain View and are interested in attending this panel in person, drop me a line at fer...@be.... We have a few spots available *for this discussion only* on top of the PyData regular attendance (which is long closed, I'm afraid). But we'll need to provide Google with a list of those attendees in advance. Please indicate if you are a core Python committer in your email, as we'll give priority for this overflow pool to core Python developers (but will otherwise accommodate as many people as Google lets us).
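As a concrete illustration of the "clunky constructor" point above (stdlib-only code, nothing matplotlib-specific), the exact types exist but only behind explicit constructor calls, while a plain literal silently gives binary floating point:

    from fractions import Fraction
    from decimal import Decimal

    x = 1.0 / 3          # float arithmetic: binary rounding error
    y = Fraction(1, 3)   # exact rational, but only via a constructor call
    z = Decimal('0.1')   # exact decimal, but only via a string constructor

    print(0.1 + 0.2 == 0.3)                                   # False
    print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True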
From: Benjamin R. <ben...@ou...> - 2012-02-10 18:33:05
On Fri, Feb 10, 2012 at 11:53 AM, Phil Elson <phi...@ho...> wrote:

> In much the same way Basemap can take an image in a Plate Carree map projection (e.g. Blue Marble) and transform it onto another projection in a non-affine way, I would like to be able to apply a non-affine transformation to an image, using only the proper matplotlib Transform framework. To me, this means that I should not have to pre-compute the projected image before adding it to the axes; instead I should be able to pass the source image, and the Transform stack should take care of transforming (warping) it for me (just like I can with a Path).
>
> As far as I can tell, there is no current matplotlib functionality to do this (as I understand it, some backends can cope with affine image transformations, but this has not been plumbed in in the same style as the Transform handling of paths, and is done in the Image classes themselves). (Note: I am aware that there is some code to do affine transforms in certain backends - http://matplotlib.sourceforge.net/examples/api/demo_affine_image.html - which is currently broken [I have a fix for this], but it doesn't fit into the Transform framework at present.)
>
> I have code which will do the actual warping for my particular case, and all I need to do is hook it in nicely...
>
> I was thinking of adding a method to the Transform class which implements this functionality; pseudo-code stubs are included:
>
>     class Transform:
>         ...
>         def transform_image(self, image):
>             return self.transform_image_affine(
>                 self.transform_image_non_affine(image))
>
>         def transform_image_non_affine(self, image):
>             if not self.is_affine:
>                 raise NotImplementedError('This is the hard part.')
>             return image
>         ...
>         def transform_image_affine(self, image):
>             # could easily handle scale & translations (by changing
>             # the extent), but not rotations...
>             raise NotImplementedError("Need to do this. But rule out "
>                                       "rotations completely.")
>
> This could then be used by the Image artist to do something like:
>
>     class Image(Artist, ...):
>         ...
>         def draw(self, renderer, *args, **kwargs):
>             transform = self.get_transform()
>             timg = transform.transform_image_non_affine(self)
>             affine = transform.get_affine()
>             ...
>             renderer.draw_image(timg, ..., affine)
>
> And the backends could implement:
>
>     class Renderer*...
>         def draw_image(..., img, ..., transform=None):
>             # transform must be an affine transform
>             if transform.is_affine and i_can_handle_affines:
>                 ...  # convert the Transform into the backend's form
>             else:
>                 timage = transform.transform_image(img)
>
> The warping mechanism itself would be fairly simple, in that it assigns coordinate values to each pixel in the source cs (coordinate system), transforms those points into the target cs, from which a bounding box can be identified. The bbox is then treated as the bbox of the target (warped) image, which is given an arbitrary resolution. Finally, the target image pixel coordinates are computed and their associated pixel values are calculated by interpolating from the source image (using target-cs pixel values).
>
> As mentioned, I have already written the image warping code successfully (for my higher-dimensional coordinate system case, using scipy.interpolate.NearestNDInterpolator), so the main motivations for this mail are:
>
> * To get a feel for whether anyone else would find this functionality useful? Where else can it be used and in what ways?
> * To get feedback on the proposed change to the Transform class, whether such a change would be acceptable, and what pitfalls lie ahead.
> * To hear alternative approaches to solving the same problem.
> * To make sure I haven't missed a concept that already exists in the Image module (there are 6 different "image" classes in there, 4 of which are undocumented).
> * To find out if anyone else wants to collaborate in making the required change.
>
> Thanks in advance for your time,

Could this mean that we could support imshow() for polar axes? That would be nice!

Ben Root
From: Phil E. <phi...@ho...> - 2012-02-10 17:53:32
In much the same way Basemap can take an image in a Plate Carree map projection (e.g. Blue Marble) and transform it onto another projection in a non-affine way, I would like to be able to apply a non-affine transformation to an image, using only the proper matplotlib Transform framework. To me, this means that I should not have to pre-compute the projected image before adding it to the axes; instead I should be able to pass the source image, and the Transform stack should take care of transforming (warping) it for me (just like I can with a Path).

As far as I can tell, there is no current matplotlib functionality to do this (as I understand it, some backends can cope with affine image transformations, but this has not been plumbed in in the same style as the Transform handling of paths, and is done in the Image classes themselves). (Note: I am aware that there is some code to do affine transforms in certain backends - http://matplotlib.sourceforge.net/examples/api/demo_affine_image.html - which is currently broken [I have a fix for this], but it doesn't fit into the Transform framework at present.)

I have code which will do the actual warping for my particular case, and all I need to do is hook it in nicely...

I was thinking of adding a method to the Transform class which implements this functionality; pseudo-code stubs are included:

    class Transform:
        ...
        def transform_image(self, image):
            return self.transform_image_affine(
                self.transform_image_non_affine(image))

        def transform_image_non_affine(self, image):
            if not self.is_affine:
                raise NotImplementedError('This is the hard part.')
            return image
        ...
        def transform_image_affine(self, image):
            # could easily handle scale & translations (by changing
            # the extent), but not rotations...
            raise NotImplementedError("Need to do this. But rule out "
                                      "rotations completely.")

This could then be used by the Image artist to do something like:

    class Image(Artist, ...):
        ...
        def draw(self, renderer, *args, **kwargs):
            transform = self.get_transform()
            timg = transform.transform_image_non_affine(self)
            affine = transform.get_affine()
            ...
            renderer.draw_image(timg, ..., affine)

And the backends could implement:

    class Renderer*...
        def draw_image(..., img, ..., transform=None):
            # transform must be an affine transform
            if transform.is_affine and i_can_handle_affines:
                ...  # convert the Transform into the backend's form
            else:
                timage = transform.transform_image(img)

The warping mechanism itself would be fairly simple, in that it assigns coordinate values to each pixel in the source cs (coordinate system), transforms those points into the target cs, from which a bounding box can be identified. The bbox is then treated as the bbox of the target (warped) image, which is given an arbitrary resolution. Finally, the target image pixel coordinates are computed and their associated pixel values are calculated by interpolating from the source image (using target-cs pixel values).

As mentioned, I have already written the image warping code successfully (for my higher-dimensional coordinate system case, using scipy.interpolate.NearestNDInterpolator), so the main motivations for this mail are:

* To get a feel for whether anyone else would find this functionality useful? Where else can it be used and in what ways?
* To get feedback on the proposed change to the Transform class, whether such a change would be acceptable, and what pitfalls lie ahead.
* To hear alternative approaches to solving the same problem.
* To make sure I haven't missed a concept that already exists in the Image module (there are 6 different "image" classes in there, 4 of which are undocumented).
* To find out if anyone else wants to collaborate in making the required change.

Thanks in advance for your time,
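For readers who want the warping scheme spelled out, here is a minimal sketch of the algorithm described above. It assumes a grayscale image, a `transform` callable mapping (N, 2) source coordinates to target coordinates, nearest-neighbour interpolation via scipy, and an arbitrary output resolution; it is not the code from the proposal itself:

    import numpy as np
    from scipy.interpolate import griddata

    def warp_image(img, transform, out_shape=(500, 500)):
        h, w = img.shape
        # 1. assign a coordinate to every source pixel
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        src = np.column_stack([xs.ravel(), ys.ravel()])
        # 2. transform the pixel coordinates into the target cs
        tgt = transform(src)
        # 3. the bbox of the transformed points is the warped image's bbox
        (x0, y0), (x1, y1) = tgt.min(axis=0), tgt.max(axis=0)
        # 4. build a target pixel grid at an arbitrary resolution
        gx, gy = np.meshgrid(np.linspace(x0, x1, out_shape[1]),
                             np.linspace(y0, y1, out_shape[0]))
        # 5. interpolate source pixel values at the target coordinates
        warped = griddata(tgt, img.ravel(), (gx, gy), method='nearest')
        return warped, (x0, x1, y0, y1)  # image plus its extent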
From: Phil E. <phi...@ho...> - 2012-02-04 17:13:58
Thanks Mike.

> I'm not quite sure what the above lines are meant to do. matplotlib.transforms doesn't have a Polar member -- matplotlib.projections.polar.Polar does not have a PolarTransform member (on master or your polar_fun branch). Even given that, I think the user should be specifying a projection, not a transformation, to create a new axes. There is potential for confusion that some transformations will allow getting a projection out and some won't (for some it doesn't even really make sense).

That was meant to be matplotlib.projections.polar.PolarAxes.PolarTransform, but you're right: defining the "projection" in the transform could lead to confusion, yet initialising an Axes as a projection seems like unnecessary complexity. This suggests that defining a "projection" class which is neither Transform nor Axes might make the most sense (note: what follows is pseudo-code and does not exist in the branch):

    >>> polar_proj = Polar(theta0=np.pi/2)
    >>> ax = plt.axes(projection=polar_proj)
    >>> print ax.projection
    Polar(theta0=1.57)

The PolarAxes would be initialised with the Projection instance, and the PolarAxes can initialise the PolarTransform with a reference to that projection. Thus changing the theta0 of the projection in the Axes would also change the projection used in the Transform instance, i.e.:

    ax.projection.theta0 = 3*np.pi/2

would change the way the overall axes looked.

Interestingly, the work I have been doing which requires the aforementioned pull request does precisely this: I have projection classes, one for each type of projection, where each projection is parameterised. When passed through plt.axes(projection=<my projection class>), they instantiate a generic "GenericProjectionAxes", which itself instantiates a generic "GenericProjectionTransform" (names for illustration purposes only), all the while keeping the original projection mutable via the MultiProjectionAxes.projection attribute.

Did you have any feelings on the pull request?

Thanks again for your time,
Phil
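A bare-bones sketch of the reference-sharing idea described above (entirely hypothetical classes -- this API exists in neither matplotlib nor the branch); the point is that the transform keeps a reference to the projection rather than a copy, so mutating theta0 after the fact propagates:

    import numpy as np

    class Polar:
        """A mutable projection definition, shared by axes and transform."""
        def __init__(self, theta0=0.0):
            self.theta0 = theta0

    class PolarTransformSketch:
        def __init__(self, projection):
            # Keep a reference, not a copy: mutating projection.theta0
            # later changes what this transform computes.
            self.projection = projection

        def transform(self, points):
            theta, r = points[:, 0], points[:, 1]
            theta = theta + self.projection.theta0  # offset applied lazily
            return np.column_stack([r * np.cos(theta), r * np.sin(theta)])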
From: Michael D. <md...@st...> - 2012-02-03 18:09:15
Thanks for doing this work.

On 02/03/2012 11:40 AM, Phil Elson wrote:

> Currently, one can set the theta_0 of a polar plot with:
>
>     ax = plt.axes(projection='polar')
>     ax.set_theta_offset(np.pi/2)
>     ax.plot(np.arange(100)*0.15, np.arange(100))
>
> But internally there are some nasties going on (theta_0 is an attribute on the axes, the transform is instantiated from within the axes and is given the axes that is instantiating it, which is all a bit circular). I have made a branch (https://github.com/PhilipElson/matplotlib/compare/master...polar_fun) which alleviates the axes-attribute issue and would allow something like:
>
>     polar_trans = mpl.transforms.Polar.PolarTransform(theta_offset=np.pi/2)
>     ax = plt.axes(projection=polar_trans)
>     ax.plot(np.arange(100)*0.15, np.arange(100))

I agree that the canonical copy of theta_offset should probably live in the transform and not the PolarAxes. However, an important feature of the current system that seems to be lost in your branch is that the user deals with Projections (Axes subclasses), which bring together not only the transformation of points from one space to another, but also the axes shape and tick placement etc., and they allow for changing everything after the fact. The Transform classes, as they stand now, are intended to be an implementation detail hidden from the user.

I'm not quite sure what the above lines are meant to do. matplotlib.transforms doesn't have a Polar member -- matplotlib.projections.polar.Polar does not have a PolarTransform member (on master or your polar_fun branch). Even given that, I think the user should be specifying a projection, not a transformation, to create a new axes. There is potential for confusion in that some transformations will allow getting a projection out and some won't (for some it doesn't even really make sense).

> Or, I have added a helper class which also demonstrates the proposed non-string change:
>
>     ax = plt.axes(projection=Polar(theta0=90))
>     ax.plot(np.arange(100)*0.15, np.arange(100))
>
> As I said, I am not proposing these changes to the way Polar works at this stage, but thought it was worth sharing to show what can be done once something similar to the proposed change gets onto mpl master.

This makes more sense to me. It doesn't appear to allow for setting theta0 after the fact, since Polar doesn't propagate changes along to the PolarAxes object that it created, and set_theta_offset has been removed from PolarAxes.

Cheers, Mike
From: Phil E. <phi...@ho...> - 2012-02-03 16:40:23
Some time back I asked about initialising a projection in MPL using generic objects rather than by class name. I created a pull request associated with this, which was responded to fantastically by leejjoon, and which (after several months) I have finally got around to implementing. My changes have been added to the original pull request, which will eventually be obsoleted, but that doesn't seem to have notified the devel mailing list; therefore I would like to draw the list's attention to https://github.com/matplotlib/matplotlib/pull/470#issuecomment-3743543, on which I would greatly appreciate feedback, to ultimately get it onto the mpl master.

The pull request in question would pave the way for non-string projections, so I thought I would play with how one might go about specifying the location of theta_0 in a polar plot (i.e. should it be due east or due north, etc.). I have branched my changeset mentioned in the pull request above and implemented a couple of ideas, although I am not proposing that these changes go any further at this stage (I would be happy if someone wants to run with them, though).

Currently, one can set the theta_0 of a polar plot with:

    ax = plt.axes(projection='polar')
    ax.set_theta_offset(np.pi/2)
    ax.plot(np.arange(100)*0.15, np.arange(100))

But internally there are some nasties going on (theta_0 is an attribute on the axes; the transform is instantiated from within the axes and is given the axes that is instantiating it, which is all a bit circular). I have made a branch (https://github.com/PhilipElson/matplotlib/compare/master...polar_fun) which alleviates the axes-attribute issue and would allow something like:

    polar_trans = mpl.transforms.Polar.PolarTransform(theta_offset=np.pi/2)
    ax = plt.axes(projection=polar_trans)
    ax.plot(np.arange(100)*0.15, np.arange(100))

Or, I have added a helper class which also demonstrates the proposed non-string change:

    ax = plt.axes(projection=Polar(theta0=90))
    ax.plot(np.arange(100)*0.15, np.arange(100))

As I said, I am not proposing these changes to the way Polar works at this stage, but thought it was worth sharing to show what can be done once something similar to the proposed change gets onto mpl master.

Hope that makes sense. Many thanks,
From: Phil E. <phi...@ho...> - 2012-02-03 13:40:47
I'm trying to understand how the TransformedPath mechanism works, with only limited success, and was hoping someone could help. I have a non-affine transformation defined (a subclass of matplotlib.transforms.Transform) which takes a path and applies an intensive transformation (path curving & cutting) that can take a little while, but I am able to guarantee that this transformation is a one-off and will never change for this transform instance, so there are obvious caching opportunities. I am aware that TransformedPath is doing some caching, and I would really like to hook into this rather than rolling my own caching mechanism, but can't quite figure out (the probably obvious!) way to do it. To see this problem for yourself, I have attached a dummy example of what I am working on:

    import matplotlib.transforms

    class SlowNonAffineTransform(matplotlib.transforms.Transform):
        input_dims = 2
        output_dims = 2
        is_separable = False
        has_inverse = True

        def transform(self, points):
            return matplotlib.transforms.IdentityTransform().transform(points)

        def transform_path(self, path):
            # pretends that it is doing something clever & time consuming,
            # but really is just sleeping
            import time
            time.sleep(3)
            # return the original path
            return matplotlib.transforms.IdentityTransform().transform_path(path)

    if __name__ == '__main__':
        import matplotlib.pyplot as plt
        ax = plt.axes()
        ax.plot([0, 10, 20], [1, 3, 2],
                transform=SlowNonAffineTransform() + ax.transData)
        plt.show()

When this code is run, the initial "show" is slow, which is fine, but a simple resize/zoom-rect/pan/zoom will also take a long time. How can I tell mpl that my level of the transform stack is never invalidated?

Many thanks,
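One stopgap, pending a proper answer, would be for the transform itself to memoize its expensive work. This is an illustrative workaround, not an official matplotlib mechanism, and it assumes the input Path objects are never mutated:

    class CachedSlowTransform(SlowNonAffineTransform):
        def __init__(self, *args, **kwargs):
            SlowNonAffineTransform.__init__(self, *args, **kwargs)
            self._path_cache = {}

        def transform_path(self, path):
            # Key on object identity; safe only because the transform
            # is a one-off and the paths are assumed immutable.
            key = id(path)
            if key not in self._path_cache:
                self._path_cache[key] = \
                    SlowNonAffineTransform.transform_path(self, path)
            return self._path_cache[key]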
From: Jae-Joon L. <lee...@gm...> - 2012-01-30 04:49:50
Please see if this PR works: https://github.com/matplotlib/matplotlib/pull/689

Regards, -JJ

On Mon, Jan 30, 2012 at 1:03 PM, Jae-Joon Lee <lee...@gm...> wrote:

> On Mon, Jan 30, 2012 at 5:26 AM, Jeff Whitaker <js...@fa...> wrote:
> > unless matplotlib takes into account all the artist objects associated with a figure.
>
> My primary reason behind not accounting for all the artists was that, in general, Matplotlib does not know the exact bounding box of artists when the artists are clipped.
>
> In the current implementation, only the axes title, axis labels and tick labels are accounted for, and we have an optional parameter, *bbox_extra_artists*, so that users can manually change this behavior.
>
> By the way, for now I'm more inclined to change the behavior to account for all the text artists (at least), although we may see white space sometimes.
>
> Regards,
>
> -JJ
From: Jae-Joon L. <lee...@gm...> - 2012-01-30 04:03:23
On Mon, Jan 30, 2012 at 5:26 AM, Jeff Whitaker <js...@fa...> wrote:

> unless matplotlib takes into account all the artist objects associated with a figure.

My primary reason behind not accounting for all the artists was that, in general, Matplotlib does not know the exact bounding box of artists when the artists are clipped.

In the current implementation, only the axes title, axis labels and tick labels are accounted for, and we have an optional parameter, *bbox_extra_artists*, so that users can manually change this behavior.

By the way, for now I'm more inclined to change the behavior to account for all the text artists (at least), although we may see white space sometimes.

Regards,

-JJ
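For reference, the escape hatch mentioned above is used like this (bbox_inches and bbox_extra_artists are real savefig parameters; the rest is a minimal example):

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1])
    st = fig.suptitle('A suptitle that would otherwise be clipped')
    # Ask the tight-bbox computation to include the suptitle explicitly.
    fig.savefig('out.png', bbox_inches='tight', bbox_extra_artists=[st])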
From: Fernando P. <fpe...@gm...> - 2012-01-29 23:29:16
Hi Eric,

On Sun, Jan 29, 2012 at 1:35 PM, Eric Firing <ef...@ha...> wrote:

> Yes, this became evident right away after the transition; in addition, there was a coordination glitch such that quite a few bugs that I had closed on SF, trying to clear out some junk before the transition, ended up getting resurrected on github, complete with dead links.

Got it, I missed that previous discussion.

> This is a triage situation; we have to consider the cost/benefit tradeoff of various ways of dealing with the mess, and with the glut of bug and other reports. The fact is that we were way behind in dealing with the SF bugs, and we are falling behind in dealing with github bugs.
>
> I think the best approach is to be fairly brutal in closing bug reports. We don't have the developer time to deal with very many. Those that accumulate faster than we can deal with them merely cost us time whenever one of us scans the set of reports in an attempt to get the list under control by finding ways to close a few.

It's unfortunate that github doesn't offer a 'priority' field so that one can threshold on it. We use 'prio-X' labels with X in {low, medium, high, blocker} as a substitute, but there's no way to see 'all with priority >= high', for example. But we've been trying to aggressively label everything with a priority, and in practice only high/blocker ones really get worked on, unless someone external shows up with a pull request for anything below that.

My take right now is that even bugs are almost all lower priority than pull requests: if someone took the time to actually contribute code, I think it's critically important to get back to them with feedback. Not having a timely review and response to a PR is the best way to discourage potential new contributors. My hope is that by being pretty aggressive on that, we'll grow our developer pool enough to be able to make some headway into the bug backlog. One can dream... :)

> So, the dead SF links are the least of our problems; not that big a deal. We would lose little by simply closing all of the transferred reports, or at least closing all of those older than some threshold.

You're probably right, though I sort of prefer to keep an open bug if the problem really isn't resolved. But marking all SF bugs with priority low (if you decide to create a similar set of priority labels) would at least indicate this intent, and let you focus only on the real problems more easily. Actually closing them raises the risk that they will get re-reported, and then it's even more work to start linking bugs or closing dupes.

Ultimately, though, we're all (mpl, ipython, scipy, etc.) suffering from the same problem: despite the enormous growth in the user base of the scientific Python tools in the last few years, the developer pool has not grown at the same rate. Our projects have dangerously small core teams. I wish I knew how to change this...

Cheers,
f
From: Eric F. <ef...@ha...> - 2012-01-29 21:35:37
On 01/29/2012 10:53 AM, Fernando Perez wrote:

> Hi all,
>
> I don't know if you guys were aware of this, and if there's anything that can be done, but I just realized that all the bugs tagged SF:

Fernando,

Yes, this became evident right away after the transition; in addition, there was a coordination glitch such that quite a few bugs that I had closed on SF, trying to clear out some junk before the transition, ended up getting resurrected on github, complete with dead links.

This is a triage situation; we have to consider the cost/benefit tradeoff of various ways of dealing with the mess, and with the glut of bug and other reports. The fact is that we were way behind in dealing with the SF bugs, and we are falling behind in dealing with github bugs.

I think the best approach is to be fairly brutal in closing bug reports. We don't have the developer time to deal with very many. Those that accumulate faster than we can deal with them merely cost us time whenever one of us scans the set of reports in an attempt to get the list under control by finding ways to close a few.

So, the dead SF links are the least of our problems; not that big a deal. We would lose little by simply closing all of the transferred reports, or at least closing all of those older than some threshold.

Eric

> https://github.com/matplotlib/matplotlib/issues?labels=SF&sort=created&direction=desc&state=open&page=1
>
> have useless links to their SF original pages, b/c SF has completely closed access to the old tracker, e.g.:
>
> http://sourceforge.net/tracker/?func=detail&aid=3044267&group_id=80706&atid=560723
>
> I don't know if in the migration, the github issue has all the information that was in the old bug. In ipython, when we migrated from launchpad, we kept links to the old issues as well:
>
> https://github.com/ipython/ipython/issues/13 ==> https://bugs.launchpad.net/ipython/+bug/508971
>
> but fortunately launchpad continues to show the original bug in full. This is useful for a number of reasons. Launchpad is nice enough to let you disable the bug tracker for new bug submissions while leaving all existing bug pages still available. This is much more sensible than what SF seems to be doing.
>
> I don't know that there's much we can do, since this is really a SourceForge issue. But I wanted to mention it at least; if there's important information buried in there, it might be possible to reopen the SF tracker temporarily, scrape the bug pages for everything and close it again. I just don't know if the original bug transfer process managed to move everything or not...
>
> Cheers,
>
> f
From: Fernando P. <fpe...@gm...> - 2012-01-29 20:53:31
Hi all,

I don't know if you guys were aware of this, and if there's anything that can be done, but I just realized that all the bugs tagged SF:

https://github.com/matplotlib/matplotlib/issues?labels=SF&sort=created&direction=desc&state=open&page=1

have useless links to their SF original pages, b/c SF has completely closed access to the old tracker, e.g.:

http://sourceforge.net/tracker/?func=detail&aid=3044267&group_id=80706&atid=560723

I don't know if in the migration, the github issue has all the information that was in the old bug. In ipython, when we migrated from launchpad, we kept links to the old issues as well:

https://github.com/ipython/ipython/issues/13 ==> https://bugs.launchpad.net/ipython/+bug/508971

but fortunately launchpad continues to show the original bug in full. This is useful for a number of reasons. Launchpad is nice enough to let you disable the bug tracker for new bug submissions while leaving all existing bug pages still available. This is much more sensible than what SF seems to be doing.

I don't know that there's much we can do, since this is really a SourceForge issue. But I wanted to mention it at least; if there's important information buried in there, it might be possible to reopen the SF tracker temporarily, scrape the bug pages for everything and close it again. I just don't know if the original bug transfer process managed to move everything or not...

Cheers,

f
From: Fernando P. <fpe...@gm...> - 2012-01-29 20:33:30
On Sun, Jan 29, 2012 at 12:26 PM, Jeff Whitaker <js...@fa...> wrote:

> From matplotlib's perspective, the lat/lon labels in Basemap are randomly located text objects - so it's not likely to ever work for Basemap plots unless matplotlib takes into account all the artist objects associated with a figure.

Ah, OK. That's good to know, because it means that the automatic use of 'tight' we make in ipython is probably over-aggressive. I'll add your note above to the bug report, and we'll think about how best to deal with it in ipython.

Cheers,

f
From: Jeff W. <js...@fa...> - 2012-01-29 20:26:37
On 1/29/12 11:30 AM, Fernando Perez wrote:

> On Sun, Jan 29, 2012 at 10:22 AM, Nathaniel Smith <nj...@po...> wrote:
> >> Not a bug. There are only so many artist objects we assume for determining the tight bbox. Suptitle is not one of them.
> >
> > Why is this the desired behavior?
>
> I was just going to ask the same. And as Jeff Whitaker points out, all x and y (longitude/latitude) labels also get clipped.
>
> I can understand not considering the position of arbitrarily laid out text that a user could have put in a random location.

From matplotlib's perspective, the lat/lon labels in Basemap are randomly located text objects - so it's not likely to ever work for Basemap plots unless matplotlib takes into account all the artist objects associated with a figure.

-Jeff

> But clipping something that is provided by a standard api call, such as a figure title, does seem much more like a bug than a feature to me, I'm afraid.
>
> Cheers,
>
> f
From: Fernando P. <fpe...@gm...> - 2012-01-29 20:20:40
On Sun, Jan 29, 2012 at 11:25 AM, Benjamin Root <ben...@ou...> wrote:

> I certainly have no objections. Most likely it was an oversight.

OK, thanks. Filed, so at least there's a record of it:

https://github.com/matplotlib/matplotlib/issues/688

We'll find a workaround in ipython in the meantime.

Cheers,

f
From: Benjamin R. <ben...@ou...> - 2012-01-29 19:25:46
On Sunday, January 29, 2012, Fernando Perez <fpe...@gm...> wrote:

> On Sun, Jan 29, 2012 at 10:22 AM, Nathaniel Smith <nj...@po...> wrote:
> >> Not a bug. There are only so many artist objects we assume for determining the tight bbox. Suptitle is not one of them.
> >
> > Why is this the desired behavior?
>
> I was just going to ask the same. And as Jeff Whitaker points out, all x and y (longitude/latitude) labels also get clipped.
>
> I can understand not considering the position of arbitrarily laid out text that a user could have put in a random location. But clipping something that is provided by a standard api call, such as a figure title, does seem much more like a bug than a feature to me, I'm afraid.
>
> Cheers,
>
> f

I certainly have no objections. Most likely it was an oversight.

Ben Root