pygef-develop Mailing List for pyGEF
Status: Pre-Alpha
Brought to you by: hrgerber
2007: May (2) | Jun (29) | Jul (15) | Aug (3)
From: <pyg...@li...> - 2007-08-04 17:34:53
On 04 Aug 2007, at 1:44 AM, <pyg...@li...> wrote:

> pyg...@li... wrote:
> > Hi there
> >
> > Just to let you know that we have one new member, Louis Jordaan. He is one of my students and will be doing his final year engineering project for the next five to six months. A large portion of his work is to make pyGEF usable
>
> Great news -- I'm looking forward to the results!
>
> > I have been looking at Kiva for the last day or so. Have any of you ever used it?
>
> No
>
> > Read about it?
>
> Yes.
>
> > As far as I could figure out it uses agg as a renderer/back-end.
>
> Well, it's supposed to be a multi-back-end thing, like many, but Agg is the most-used one. I think the wx back-end, for instance, renders with Agg, then blits to wx -- like Matplotlib's wxAgg.
>
> It's modeled after DisplayPDF, so it should work on OS X, and I'd think the PDF back-end would be pretty simple! Having a PDF option sure would be nice.
>
> > Which you have previously suggested agg.
>
> That may have been me -- it does have very nice output, and it's reported to be very fast.
>
> > It looks like its rendering is much, MUCH faster than using the GC directly
>
> I'm getting more disappointed with GC's performance. But does it support everything you need?
>
> > 1. Can or should pyGEF use Kiva as its primary backend?
> > 2. In the current version of pyGEF I use a Plotter class to draw my graphic objects. This Plotter Interface class (abstraction layer) allows for the use of multiple rendering back-ends. The problem is that an API for this Plotter Interface needs to be defined. I would appreciate suggestions, with special consideration for the fact that pyGEF must be usable as a base for an SVG-type graphic drawing tool.
>
> Have you seen any of Chris Mellon's work with a wxPython SVG renderer? Search the wxPython mailing list for more info.

I will have a look.

> > I'm trying to teach +-480 first year students to program
>
> yow! Using Python, I hope!

Good old-fashioned C.

> > Bill, I saw that enthought has a very basic affine transform module in kiva (all python),
>
> Are you sure? I see this on the Wiki:
>
> """
> AffineMatrix:
>
> (kiva_affine_matrix.h and affine_matrix.i) All of the following member functions modify the instance on which they are called:
>
> * __init__(v0, v1, v2, v3, v4, v5) or __init__()
> * reset()                 # sets this matrix to the identity
> * multiply(AffineMatrix)  # multiplies this matrix by another
> * invert()                # sets this matrix to the inverse of itself
> * flip_x()                # mirrors around X
> * flip_y()                # mirrors around Y
> * scale() -> float        # returns the average scale of this matrix
> * determinant() -> float  # returns the determinant
>
> The following factory methods are available in the top-level "agg" namespace to create specific kinds of AffineMatrix instances:
>
> * translation_matrix(float X, float Y)
> * rotation_matrix(float angle_in_radians)
> * scaling_matrix(float x_scale, float y_scale)
> * skewing_matrix(float x_shear, float y_shear)
>
> FontType
> """
>
> This makes it look like there is a C/C++ implementation of some sort.

I found affine.py in the kiva/agg sources; it's all Python, while also using some numpy (which I assume uses some Pyrex). However, the API is slightly different.

--retief
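The AffineMatrix factories quoted in the message above compose by matrix multiplication. A rough pure-Python sketch of the idea, using hypothetical helper names and the common 6-element (a, b, c, d, e, f) convention -- an illustration only, not Kiva's actual code:

```python
import math

def translation_matrix(tx, ty):
    # (a, b, c, d, e, f) maps (x, y) -> (a*x + c*y + e, b*x + d*y + f)
    return (1.0, 0.0, 0.0, 1.0, tx, ty)

def rotation_matrix(angle):
    c, s = math.cos(angle), math.sin(angle)
    return (c, s, -s, c, 0.0, 0.0)

def scaling_matrix(sx, sy):
    return (sx, 0.0, 0.0, sy, 0.0, 0.0)

def compose(m2, m1):
    # Matrix product m2 * m1: applies m1 first, then m2.
    a1, b1, c1, d1, e1, f1 = m1
    a2, b2, c2, d2, e2, f2 = m2
    return (a2 * a1 + c2 * b1, b2 * a1 + d2 * b1,
            a2 * c1 + c2 * d1, b2 * c1 + d2 * d1,
            a2 * e1 + c2 * f1 + e2, b2 * e1 + d2 * f1 + f2)

def transform_point(m, x, y):
    a, b, c, d, e, f = m
    return (a * x + c * y + e, b * x + d * y + f)
```

For example, rotating (1, 0) by 90 degrees and then translating by (1, 0) lands on (1, 1).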
From: <pyg...@li...> - 2007-08-03 23:43:41
pyg...@li... wrote:
> Hi there
>
> Just to let you know that we have one new member, Louis Jordaan. He is one of my students and will be doing his final year engineering project for the next five to six months. A large portion of his work is to make pyGEF usable

Great news -- I'm looking forward to the results!

> I have been looking at Kiva for the last day or so. Have any of you ever used it?

No

> Read about it?

Yes.

> As far as I could figure out it uses agg as a renderer/back-end.

Well, it's supposed to be a multi-back-end thing, like many, but Agg is the most-used one. I think the wx back-end, for instance, renders with Agg, then blits to wx -- like Matplotlib's wxAgg.

It's modeled after DisplayPDF, so it should work on OS X, and I'd think the PDF back-end would be pretty simple! Having a PDF option sure would be nice.

> Which you have previously suggested agg.

That may have been me -- it does have very nice output, and it's reported to be very fast.

> It looks like its rendering is much, MUCH faster than using the GC directly

I'm getting more disappointed with GC's performance. But does it support everything you need?

> 1. Can or should pyGEF use Kiva as its primary backend?
> 2. In the current version of pyGEF I use a Plotter class to draw my graphic objects. This Plotter Interface class (abstraction layer) allows for the use of multiple rendering back-ends. The problem is that an API for this Plotter Interface needs to be defined. I would appreciate suggestions, with special consideration for the fact that pyGEF must be usable as a base for an SVG-type graphic drawing tool.

Have you seen any of Chris Mellon's work with a wxPython SVG renderer? Search the wxPython mailing list for more info.

> I'm trying to teach +-480 first year students to program

yow! Using Python, I hope!

> Bill, I saw that enthought has a very basic affine transform module in kiva (all python),

Are you sure? I see this on the Wiki:

"""
AffineMatrix:

(kiva_affine_matrix.h and affine_matrix.i) All of the following member functions modify the instance on which they are called:

* __init__(v0, v1, v2, v3, v4, v5) or __init__()
* reset()                 # sets this matrix to the identity
* multiply(AffineMatrix)  # multiplies this matrix by another
* invert()                # sets this matrix to the inverse of itself
* flip_x()                # mirrors around X
* flip_y()                # mirrors around Y
* scale() -> float        # returns the average scale of this matrix
* determinant() -> float  # returns the determinant

The following factory methods are available in the top-level "agg" namespace to create specific kinds of AffineMatrix instances:

* translation_matrix(float X, float Y)
* rotation_matrix(float angle_in_radians)
* scaling_matrix(float x_scale, float y_scale)
* skewing_matrix(float x_shear, float y_shear)

FontType
"""

This makes it look like there is a C/C++ implementation of some sort.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no...
From: <pyg...@li...> - 2007-08-03 20:20:11
Hi there

Just to let you know that we have one new member, Louis Jordaan. He is one of my students and will be doing his final year engineering project for the next five to six months. A large portion of his work is to make pyGEF usable and hopefully release the first packaged version. Any help that can be provided to him will be much appreciated.

On a different note (as the subject states): I have been looking at Kiva for the last day or so. Have any of you ever used it? Read about it? As far as I could figure out it uses agg as a renderer/back-end, which you have previously suggested. It looks like its rendering is much, MUCH faster than using the GC directly (well, the way that I have been using it anyway). The graphics look great and it has a lot of features that the GC doesn't have. But as I have said before, I have little experience with the low-level graphics/rendering aspects, so I will need some help with this.

1. Can or should pyGEF use Kiva as its primary backend?
2. In the current version of pyGEF I use a Plotter class to draw my graphic objects. This Plotter Interface class (abstraction layer) allows for the use of multiple rendering back-ends. The problem is that an API for this Plotter Interface needs to be defined. I would appreciate suggestions, with special consideration for the fact that pyGEF must be usable as a base for an SVG-type graphic drawing tool.

Most of pyGEF's design has now been sorted out; some parts have been implemented already but not yet committed to SVN, as they are not finished and will break a lot of things. At the moment I have very little time to code (I'm trying to teach +-480 first year students to program). Hopefully Louis can get things to move along a bit quicker again.

Bill, I saw that enthought has a very basic affine transform module in kiva (all python); I would like to suggest to them to convert to your geom library.

--retief
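To make the Plotter idea above concrete, here is one possible shape for such an abstraction layer -- a minimal sketch with hypothetical method names, not pyGEF's actual API. The SVG subclass illustrates why the abstraction helps: the same primitive calls could target a GC, Kiva, or a plain text format:

```python
class Plotter(object):
    """Abstract rendering back-end interface (hypothetical sketch).

    Concrete subclasses wrap a specific renderer (wx GraphicsContext,
    Kiva, an SVG writer, ...) and implement the primitives below."""

    def begin(self, width, height):
        raise NotImplementedError

    def draw_line(self, x1, y1, x2, y2):
        raise NotImplementedError

    def draw_path(self, points, closed=False):
        raise NotImplementedError

    def end(self):
        raise NotImplementedError


class SVGPlotter(Plotter):
    """Minimal SVG back-end: collects primitives as SVG elements."""

    def begin(self, width, height):
        self.parts = ['<svg width="%d" height="%d">' % (width, height)]

    def draw_line(self, x1, y1, x2, y2):
        self.parts.append(
            '<line x1="%g" y1="%g" x2="%g" y2="%g"/>' % (x1, y1, x2, y2))

    def draw_path(self, points, closed=False):
        d = "M " + " L ".join("%g %g" % p for p in points)
        if closed:
            d += " Z"
        self.parts.append('<path d="%s"/>' % d)

    def end(self):
        self.parts.append('</svg>')
        return "\n".join(self.parts)
```

A framework object would then call only `Plotter` methods, never a renderer directly, so swapping back-ends means swapping one subclass.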
From: <pyg...@li...> - 2007-07-19 21:04:48
On 19 Jul 2007, at 9:52 PM, pyg...@li... wrote:

> pyg...@li... wrote:
> >> If you want to capture the mouse-leave and mouse-enter events, then hit testing has to be fast; it has to be done for every motion event -- which is a lot!
> >
> > I know, that's why I really liked what you did in floatcanvas with the hitcanvas. Overlapping objects is just a problem,
>
> yes. I decided to simply not support that feature, but it has come up.
>
> > but I suppose one can use some form of color mixing to create a new color when objects overlap and store lists of objects associated with a hit color.
>
> hmmm -- maybe if you have alpha blending, you could do something with that. This would take a bit of thought.
>
> The other downside of the FC approach is that it needs to render every hit-able object twice, so hit-testing is blindingly fast, but overall rendering is slowed down. As it's usually not useful to have 1000s of hitable objects shown at once, it's not a big deal.
>
> > Having 10 layers, each with a hit canvas could pose a problem.
>
> Some memory issues -- I only have one or two now. Of course, maybe you don't need to have all the layers hitable? What are layers for?

Something similar to Photoshop, or CAD. My main focus is EDA tools (schematic capture, physical IC design). For IC the layers represent the actual metal layers.

> Early in FC development, I started with an n-layer (each with an off-screen bitmap) approach, but that required that the layers be transparent, which for the old DC meant a mask, and I found rendering was really slow -- so I dropped it.
>
> if GraphicsContext can render transparent bitmaps fast, then I guess it would be nice, but in practice, I find you're usually only really changing one layer at a time, so the FC approach of Background+Foreground works OK.

I must still get round to trying this on the GC.

> From a user's perspective, it's nice to have more layers, so I've thought of having layers, but not drawing them to separate bitmaps.
>
> > I'm aware of this, but currently the actual drawing is the biggest bottleneck with the GC.
>
> premature optimization is the root of all evil...

:)

> > As far as I can remember fc also redrew the entire visible canvas at a time.
>
> Yes.
>
> > I want to be able to only redraw the dirty region of a dirty layer.
>
> That wouldn't be hard to add to FC -- I've just never bothered.
>
> > After this I think hitting will be the next biggest bottleneck, especially if enter and exit events must also be handled (At this stage I have quietly ignored this feature. :))
>
> How often is it used? I had a use case in mind, but honestly, I'm not sure I've used it for anything real after all.

Probably for some tool tips or something, the new office-type popup boxes.

> >> It works with bounding boxes, and can quickly respond to the question: what BBs is this point in? and what BBs intersect this BB?
> >
> > Bill wrote a small geom library in pyrex, that I use for my hittest, so it should be really fast.
>
> yup -- will that replace your BoundingBox.py code?

Yip, but I have been having problems with our proxy. I have a rather complicated setup to connect to the internet from home. So I will update the code as soon as I can get it to upload again.

> However, it is still order N to search all the BBs, so if you really do have 1000s of objects, then an order log(N) search would be better -- but wait until there's a problem.
>
> > not everything is rectangular. This is something that has bothered me, but I found that many "vector" drawing apps only use the bb.
>
> And those that do drive me crazy!
>
> > Thus boundingbox is not the best way, but it is much faster than using path inclusion tests.
>
> Well, the compromise is to test BBs first, then only do path inclusion for those that hit -- and I suppose it could be an option -- it would be more important for some apps than others to have accurate hit testing.

I have also been thinking of something along these lines.

> > Again here is where your hitcanvas shines. :)
>
> That's why I did it that way. Another option other than path inclusion is to create a very small bitmap, centered around the mouse position, then draw the object to it and see if any pixels change. That's what I'd do with a DC, but maybe that would be even slower than a path inclusion test -- if GraphicsContext was well written!

Must run some tests.

> > This would make interchangeability very difficult or would limit the functionality.
>
> Exactly why I think it's maybe not worth trying. Matplotlib has done it, but it's a lot of extra work -- and they are moving toward the *Agg back-ends -- i.e. rendering with Agg, then transferring the bitmap to whatever GUI toolkit is used. That way all the drawing can be done with Agg. They are also starting to use Cairo -- but they have licensing issues with it unfortunately -- it would save them a lot of pain.

I really need to look at Agg now.

> > I do need to look at these other back-ends, but again, to get the higher level stuff stable and well structured is more important now.
>
> Agreed.
>
> > In this kind of design, it is the GraphicPrimitives that define the l.c.d., not entirely the canvas/context.
>
> well, sort of. In a simple case, if you're using a wxDC -- you can't have transparency -- at least not easily! So the renderer is limiting what you can do -- but maybe having the same framework for different uses and capabilities still makes sense.
>
> Indeed, with wx -- maybe we can mix GraphicsContext and DCs -- for simple drawing, use DCs for performance, and use GraphicsContext for added features when required.
>
> > Then one also gets to the question: What should be part of the framework, and what should be part of the app.
>
> yup -- that's hard -- I've struggled a lot with that with FloatCanvas.
>
> > As an example: It only took me about a day to move my demo app from using fc to using GC.
>
> Wow, that's pretty quick!
>
> > Obviously I first had to figure out how GC worked. In the meantime though a lot of the APIs have changed quite a bit and AffineTransforms are heavily used (something that fc does not support) so moving back to fc might be a little bit more difficult, but not impossible.
>
> hmm. doing affine transforms on coordinate based objects (polygons, lines) would be easy -- but Circles and whatnot may be harder...

Why?

> > The main reason for wanting to use fc as backend, is that it is much faster than the GC.
>
> That's probably the DC vs. GraphicsContext issue, so maybe you can make a DC renderer for pyGEF instead -- you'd have to add some of the affine transform stuff though.
>
> > I would also not say that pyGEF overlaps that much with fc, I think it is the new GC that provides a lot of the same functionality as fc.
>
> Maybe, but the point is the same -- much of FloatCanvas is no longer necessary.
>
> > Chris, have you looked at the traits and traits.ui libraries by enthought. I am also trying to fit pyGEF into their design structure, because I think they did some really good work.
>
> See my other note, but yes I've had my eye on traits for FC too -- and it looks like it's starting to get some wider use outside Enthought.
>
> -Chris

Retief Gerber
Lektor / Lecturer
Departement Elektriese en Elektroniese Ingenieurswese / Department of Electrical and Electronic Engineering
Tel: +27 21 808 4011 | Faks/Fax: +27 21 808 4981
E-pos/E-mail: hrg...@su...
Universiteit Stellenbosch University
Privaat Sak/Private Bag X1, Matieland 7602, Suid-Afrika/South Africa
www.eng.sun.ac.za
From: <pyg...@li...> - 2007-07-19 19:51:36
pyg...@li... wrote:
> > If you want to capture the mouse-leave and mouse-enter events, then hit testing has to be fast; it has to be done for every motion event -- which is a lot!
>
> I know, that's why I really liked what you did in floatcanvas with the hitcanvas. Overlapping objects is just a problem,

yes. I decided to simply not support that feature, but it has come up.

> but I suppose one can use some form of color mixing to create a new color when objects overlap and store lists of objects associated with a hit color.

hmmm -- maybe if you have alpha blending, you could do something with that. This would take a bit of thought.

The other downside of the FC approach is that it needs to render every hit-able object twice, so hit-testing is blindingly fast, but overall rendering is slowed down. As it's usually not useful to have 1000s of hitable objects shown at once, it's not a big deal.

> Having 10 layers, each with a hit canvas could pose a problem.

Some memory issues -- I only have one or two now. Of course, maybe you don't need to have all the layers hitable? What are layers for?

Early in FC development, I started with an n-layer (each with an off-screen bitmap) approach, but that required that the layers be transparent, which for the old DC meant a mask, and I found rendering was really slow -- so I dropped it.

if GraphicsContext can render transparent bitmaps fast, then I guess it would be nice, but in practice, I find you're usually only really changing one layer at a time, so the FC approach of Background+Foreground works OK.

From a user's perspective, it's nice to have more layers, so I've thought of having layers, but not drawing them to separate bitmaps.

> I'm aware of this, but currently the actual drawing is the biggest bottleneck with the GC.

premature optimization is the root of all evil...

> As far as I can remember fc also redrew the entire visible canvas at a time.

Yes.

> I want to be able to only redraw the dirty region of a dirty layer.

That wouldn't be hard to add to FC -- I've just never bothered.

> After this I think hitting will be the next biggest bottleneck, especially if enter and exit events must also be handled (At this stage I have quietly ignored this feature. :))

How often is it used? I had a use case in mind, but honestly, I'm not sure I've used it for anything real after all.

> > It works with bounding boxes, and can quickly respond to the question: what BBs is this point in? and what BBs intersect this BB?
>
> Bill wrote a small geom library in pyrex, that I use for my hittest, so it should be really fast.

yup -- will that replace your BoundingBox.py code?

However, it is still order N to search all the BBs, so if you really do have 1000s of objects, then an order log(N) search would be better -- but wait until there's a problem.

> not everything is rectangular. This is something that has bothered me, but I found that many "vector" drawing apps only use the bb.

And those that do drive me crazy!

> Thus boundingbox is not the best way, but it is much faster than using path inclusion tests.

Well, the compromise is to test BBs first, then only do path inclusion for those that hit -- and I suppose it could be an option -- it would be more important for some apps than others to have accurate hit testing.

> Again here is where your hitcanvas shines. :)

That's why I did it that way. Another option other than path inclusion is to create a very small bitmap, centered around the mouse position, then draw the object to it and see if any pixels change. That's what I'd do with a DC, but maybe that would be even slower than a path inclusion test -- if GraphicsContext was well written!

> This would make interchangeability very difficult or would limit the functionality.

Exactly why I think it's maybe not worth trying.
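The compromise described above -- cheap bounding-box rejection first, exact path-inclusion only for the survivors -- might be sketched like this (an illustrative stand-alone version, not FloatCanvas or pyGEF code):

```python
def bb_contains(bb, x, y):
    # bb = (xmin, ymin, xmax, ymax): cheap O(1) reject test
    xmin, ymin, xmax, ymax = bb
    return xmin <= x <= xmax and ymin <= y <= ymax

def point_in_polygon(points, x, y):
    # Ray-casting test: cast a ray in the -x direction and count
    # how many polygon edges it crosses; odd count means inside.
    inside = False
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def hit_test(objects, x, y):
    """objects: list of (bb, polygon) pairs. BB reject first, exact
    path-inclusion only for BB hits. Returns indices of hit objects."""
    hits = []
    for i, (bb, poly) in enumerate(objects):
        if bb_contains(bb, x, y) and point_in_polygon(poly, x, y):
            hits.append(i)
    return hits
```

For mostly-rectangular scenes the polygon test almost never runs, so this costs little more than pure BB testing while staying accurate for odd shapes.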
Matplotlib has done it, but it's a lot of extra work -- and they are moving toward the *Agg back-ends -- i.e. rendering with Agg, then transferring the bitmap to whatever GUI toolkit is used. That way all the drawing can be done with Agg. They are also starting to use Cairo -- but they have licensing issues with it unfortunately -- it would save them a lot of pain.

> I do need to look at these other back-ends, but again, to get the higher level stuff stable and well structured is more important now.

Agreed.

> In this kind of design, it is the GraphicPrimitives that define the l.c.d., not entirely the canvas/context.

well, sort of. In a simple case, if you're using a wxDC -- you can't have transparency -- at least not easily! So the renderer is limiting what you can do -- but maybe having the same framework for different uses and capabilities still makes sense.

Indeed, with wx -- maybe we can mix GraphicsContext and DCs -- for simple drawing, use DCs for performance, and use GraphicsContext for added features when required.

> Then one also gets to the question: What should be part of the framework, and what should be part of the app.

yup -- that's hard -- I've struggled a lot with that with FloatCanvas.

> As an example: It only took me about a day to move my demo app from using fc to using GC.

Wow, that's pretty quick!

> Obviously I first had to figure out how GC worked. In the meantime though a lot of the APIs have changed quite a bit and AffineTransforms are heavily used (something that fc does not support) so moving back to fc might be a little bit more difficult, but not impossible.

hmm. doing affine transforms on coordinate based objects (polygons, lines) would be easy -- but Circles and whatnot may be harder...

> The main reason for wanting to use fc as backend, is that it is much faster than the GC.

That's probably the DC vs. GraphicsContext issue, so maybe you can make a DC renderer for pyGEF instead -- you'd have to add some of the affine transform stuff though.

> I would also not say that pyGEF overlaps that much with fc, I think it is the new GC that provides a lot of the same functionality as fc.

Maybe, but the point is the same -- much of FloatCanvas is no longer necessary.

> Chris, have you looked at the traits and traits.ui libraries by enthought. I am also trying to fit pyGEF into their design structure, because I think they did some really good work.

See my other note, but yes I've had my eye on traits for FC too -- and it looks like it's starting to get some wider use outside Enthought.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no...
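On the order-N bounding-box search mentioned in this thread: an R-tree is one option, but even a simple uniform grid gets point queries below O(N) in the common case. A sketch of the idea (an assumed design for illustration, not any existing library's API):

```python
class GridIndex(object):
    """Uniform-grid spatial index: each bounding box is binned into the
    grid cells it overlaps, so a point query inspects only one cell's
    candidates instead of all N boxes."""

    def __init__(self, cell_size=32.0):
        self.cell_size = cell_size
        self.cells = {}  # (cx, cy) -> list of (bb, obj)

    def _cells_for(self, bb):
        # Yield every cell key the box (xmin, ymin, xmax, ymax) touches.
        xmin, ymin, xmax, ymax = bb
        s = self.cell_size
        for cx in range(int(xmin // s), int(xmax // s) + 1):
            for cy in range(int(ymin // s), int(ymax // s) + 1):
                yield (cx, cy)

    def insert(self, bb, obj):
        for key in self._cells_for(bb):
            self.cells.setdefault(key, []).append((bb, obj))

    def query_point(self, x, y):
        # One cell lookup, then exact BB checks on its few candidates.
        key = (int(x // self.cell_size), int(y // self.cell_size))
        hits = []
        for (xmin, ymin, xmax, ymax), obj in self.cells.get(key, ()):
            if xmin <= x <= xmax and ymin <= y <= ymax:
                hits.append(obj)
        return hits
```

A grid degrades when objects cluster in one cell; that is where a hierarchy such as an R-tree earns its O(log N) behavior.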
From: <pyg...@li...> - 2007-07-19 08:00:57
On 19 Jul 2007, at 2:54 AM, pyg...@li... wrote:

> Howdy Chris,
>
> On 7/19/07, pyg...@li... <pygef-de...@li...> wrote:
> > Hi Chris
> >
> > On 18 Jul 2007, at 11:42 PM, pyg...@li... wrote:
> > > On hit testing code:
> > >
> > > If you want to capture the mouse-leave and mouse-enter events, then hit testing has to be fast; it has to be done for every motion event -- which is a lot!
>
> Hit testing doesn't really scare me. I come from 3D graphics where we have decent algorithms for real-time N-body object-object collision detection. In comparison, point-object collision detection in 2D is pretty benign. When push comes to shove you can always resort to rasterization for any object you can't reject in some other way.
>
> > I know, that's why I really liked what you did in floatcanvas with the hitcanvas.
>
> How does that work? I'm not familiar enough with fc. Do you automate the rasterization for hit testing somehow in a second buffer?

It is probably better if Chris explains this. The idea is to draw all objects to a second memory canvas; on this hitcanvas each object is drawn with a fill in a unique color. A hit then gets the color at the selected pixel O(1) and then gets the object associated with the color from a hitcolor dictionary O(1). So hit tests take O(1). Or something along those lines.

> > Overlapping objects is just a problem, but I suppose one can use some form of color mixing to create a new color when objects overlap and store lists of objects associated with a hit color.
>
> Hmm, but that'll give you 24 objects at most. I think sequential testing makes the most sense. You do know their z-order after all, so you can test a stack of things you can't reject any other way by rasterizing them one by one into a hit buffer till you hit one.
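The color-keyed scheme described above can be sketched with a plain 2D array standing in for the hidden bitmap -- an illustration of the idea, not FloatCanvas's actual implementation:

```python
class HitCanvas(object):
    """Color-keyed hit testing sketch: each object is rasterized into a
    hidden buffer with a unique color id; a hit test is then a single
    pixel lookup plus a dictionary lookup -- O(1) regardless of the
    number of objects."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.buffer = [[0] * width for _ in range(height)]  # 0 = background
        self.color_to_object = {}
        self.next_color = 1

    def add_rect(self, obj, x, y, w, h):
        # Stand-in for real rasterization: fill an axis-aligned rect
        # with the object's unique color id.
        color = self.next_color
        self.next_color += 1
        self.color_to_object[color] = obj
        for row in range(y, min(y + h, self.height)):
            for col in range(x, min(x + w, self.width)):
                self.buffer[row][col] = color

    def hit_test(self, x, y):
        color = self.buffer[y][x]
        return self.color_to_object.get(color)  # None = background
```

The cost, as noted in the thread, is that every hitable object is rendered twice (once to the screen, once to the hit buffer), and overlap resolves to whichever object was drawn last.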
> > > bb wrote:
> > > > building a hierarchy on the document bounding boxes so you avoid having to do lots of transforms into individual object coordinates is probably the better way to go. That'll give O(lg N) performance in hit testing, whereas improving individual hit tests will still give you something that's O(N).
> > >
> > > Are you talking about some sort of spatial index? If so, that would be great, and you might want to check out rtree. I've had it in mind for FloatCanvas for a while:
> > >
> > > http://zcologia.com/news/433/rtree-0-1-0/
>
> Cool. Thanks for the link. I've heard of those, but hadn't really spent any time looking into what makes the most sense as a spatial index for a canvas (since it doesn't really seem to be a bottleneck at all right now).
>
> > > Retief wrote:
> > > > A developer's specific application using pyGEF could replace Plotter with any implementation of their choice, as long as it supplies the minimum functionality required by the rest of the framework.
> > >
> > > I've always been pretty skeptical of "backend-independent" systems. You're kind of stuck with the lowest common denominator. As it is, GraphicsContext is a backend independent wrapper, as is Cairo (though at least Cairo does its own in-memory bitmap rendering).
> >
> > Maybe I can clarify this a bit. You are correct that l.c.d. always rules when trying to create an interchangeable back-end. The problem is that the APIs of the different possible back-ends differ quite substantially. This would make interchangeability very difficult or would limit the functionality. pyGEF as an MVC framework must provide high-level framework and structure to a graphically oriented application.
>
> I took a look at how Inkscape is organized. There each component is responsible for generating an Inkscape path object that represents it.
Basically a GraphicsContext path type thing, but =20 > not really tied to any back-end. It's their own internal path =20 > object. Then there's a rasterizer component that takes all the =20 > paths and is responsible for actually rendering them. Anyway, I =20 > think having components generate path objects rather than actually =20 > issuing graphics commands makes for a nice separation of objects =20 > from GUIs. Inkscape currently has two back-ends, the second one is =20= > used for rendering a fast outlines-only mode. > > > Chris, have you looked at the traits and traits.ui libraries by > enthought. I am also trying to fit pyGEF into their design structure, > because I think they did some really good work. > > I'm not yet convinced that traits.ui is apropriate for more than =20 > just rapid prototyping, and a few odds and ends parts of an app's =20 > gui. =46rom screenshots I've seen, my impression is that it often =20 > generates pretty ugly GUIs when you just let it do its thing. But =20 > Traits itself seems like pure winner all the way. > Just as soon as they get it untangled from the rest of enthought =20 > and installable via PyPi I think it's going to take over the world. > I agree traits.ui can be pretty ugly if left to do its own thing, but =20= the idea is not to use it for the entire ui. Traits.ui uses a very =20 comprehensive MVC pattern, that is very useful when using traits. =20 However the views will have to be customized for it to look good. =20 While trying out traits.ui, I wrote a few test apps, where the end =20 result looks like any normal wx app, in only a few lines of code. In any case, even though the pyGEF uses/can use traits.ui, it does =20 not mean that one would have to use it in the actual application ui. Traits.ui is also a relatively new lib, Dave started traits way back, =20= ui only came later. So it will improve. Have you seen the newest themeing capabilities of traits.ui? 
http://www.dmorrill.com/david/other/traits_ui_ss Here is a little app I wrote to test some more "advanced" (at the =20 time) features of traits.ui. See the interaction between the =20 controls, validation of the property name and value, the updating of =20 the preview and the correct OK/Cancel behavior. Just the ui portion =20 of this app previously used many many hundreds of line of code and =20 still did not work as well. If you get this app to work, pyGEF should also work. from enthought.traits.api import HasTraits, Interface, implements, =20 Trait, Float, Int, List, Long, Str, CFloat, Instance from enthought.traits.api import Delegate, Bool, Any, TraitString from enthought.traits.ui.api import View, Item, Group, Include, Handler from enthought.traits.ui.menu import OKButton, CancelButton, Menu, =20 MenuBar, Action ########################################################################=20= #### ########################################################################=20= #### class IPropertyValue(Interface): """ """ def eval(self): """ """ def get_value(self): """ """ def set_value(self, value): """ """ ########################################################################=20= #### ########################################################################=20= #### class AbstractPropertyValue(HasTraits): """ """ implements(IPropertyValue) value =3D Any def eval(self): """ """ raise NotImplementedError('Method must be overridden in =20 derived class') def get_value(self): """ """ raise NotImplementedError('Method must be overridden in =20 derived class') def set_value(self, value): """ """ raise NotImplementedError('Method must be overridden in =20 derived class') ########################################################################=20= #### ########################################################################=20= #### class PropertyValue(AbstractPropertyValue): value =3D Float def eval(self): """ """ return self.value def get_value(self): """ """ return 
self.value def set_value(self, value): """ """ self.value =3D value class PropertyData(HasTraits): """ """ name =3D Trait('', TraitString(maxlen=3D50, regex=3Dr'^[A-Za-z][A-Za-= =20 z_0-9]*$')) #name =3D Str(maxlen=3D50, regex=3Dr'^[A-Za-z][A-Za-z_0-9]*$') value_object =3D Instance(IPropertyValue) value =3D Delegate('value_object', modify=3DFalse) include_name =3D Bool renameable =3D Bool preview =3D Str def __str__(self): """ """ propstr =3D'' if self.include_name: propstr +=3D str(self.name) + '=3D' propstr +=3D str(self.value_object.eval()) return propstr def _value_changed(self, old, new): """ """ print('Value changed in PropertyData') self.value_object.value =3D new self.preview =3D str(self) def _anytrait_changed(self, name, old, new): """ """ print(str(name) + ' thrait changed from ' + str(old) + \ ' to ' + str(new) + ' in PropertyData') self.preview =3D str(self) value_group =3D Group(Item('name', enabled_when=3D'renameable'), \ Item('value')) option_group =3D Group(Item('renameable'), \ Item('include_name'), \ orientation=3D'horizontal') preview_group =3D Group(Item('preview', style=3D'readonly'), \ label=3D'Preview:', \ show_labels=3DFalse, show_border=3DTrue) readonly_view =3D View(Item('name', style=3D'readonly'), \ Item('value', style=3D'readonly')) simple_view =3D View(Item('name', enabled_when=3D'renameable'), Item('value'), buttons=3D[OKButton, CancelButton], kind=3D'modal') detailed_view =3D View(Group(value_group, option_group, =20 preview_group), \ buttons=3D[OKButton, CancelButton], \ kind=3D'modal') view =3D detailed_view def testPropertyDataModel(): value_object=3DPropertyValue(value =3D 10) p =3D PropertyData(name=3D'value', value_object=3Dvalue_object) print '0', p p.configure_traits(view=3D'detailed_view') print '1 ', p > --bb > ----------------------------------------------------------------------=20= > --- > This SF.net email is sponsored by DB2 Express > Download DB2 Express C - the FREE version of DB2 express and take > control of your 
XML. No limits. Just data. Click to get it now.
> http://sourceforge.net/powerbar/db2/
> _______________________________________________
> pyGEF-develop mailing list
> pyG...@li...
> https://lists.sourceforge.net/lists/listinfo/pygef-develop

Retief Gerber
Lektor | Lecturer
Departement Elektriese en Elektroniese Ingenieurswese
Department of Electrical and Electronic Engineering
Tel: +27 21 808 4011 | Faks/Fax: +27 21 808 4981
E-pos/E-mail: hrg...@su...
Universiteit Stellenbosch University
Privaat Sak/Private Bag X1, Matieland 7602
Suid-Afrika/South Africa
www.eng.sun.ac.za
|
From: <pyg...@li...> - 2007-07-19 07:08:44
|
Hi

On 19 Jul 2007, at 1:36 AM, pyg...@li... wrote:

> Hi,
>
> I'm trying to give pyGEF a try, but I'm getting:
>
> Traceback (most recent call last):
>   File "pyGEFDemo.py", line 45, in <module>
>     from enthought.traits.ui.api import View, Item, ListEditor
> ImportError: No module named api

It looks like you are missing traits.ui. At the moment you need to install at least traits and traits.ui, and all their dependencies. I know that it is a real pain to install traits at the moment, but it has gotten a lot better in the last month or two that I have been using it. I gave a lot of feedback to Rick, who is writing the enstaller. The problem is that the installation instructions on their wiki are a bit out of date. I will set up a document with the requirements and installation instructions. Dave said that everything at enthought should be sorted out in about a month or two.

> I've just installed traits 1.1 from the enthought site, but it does not,
> indeed, have an enthought.traits.ui.api module.
>
> What version should I be using?

traits 3 -- the unstable branch on their svn.

> Note that there has been discussion on the matplotlib list recently
> about using Traits with matplotlib. From that discussion:
>
> """
> Well, I do suggest that matplotlib start using Traits 3 rather than
> Traits 2. Otherwise, they will need to switch later.
> """
>
> Which probably applies to this project too.

Yep.

> -Chris
|
From: <pyg...@li...> - 2007-07-19 01:17:25
|
> I've always been pretty skeptical of "backend-independent" systems.
> You're kind of stuck with the lowest common denominator. As it is,
> GraphicsContext is a backend-independent wrapper, as is Cairo (though at
> least Cairo does its own in-memory bitmap rendering).
>
> Actually, Cairo has a lot going for it -- it's powerful, flexible, under
> active development, and is being used by major projects (GTK and
> Mozilla). It also gives you other output options: PDF, PostScript,
> Printer?

My main problem with Cairo is just that it seems to be overly entangled with GTK/Linux/GCC. I don't think it's really a problem inherent in Cairo; it just seems that's what the current user base is. It seems to be getting better slowly, though.

AGG has minimal dependencies and compiles pretty easily. In fact, I think Maxim just recommends adding the AGG library files directly into your project, whatever platform/tool you use. Maybe from a Linux user's perspective AGG has the opposite problem of seeming too Windows-y. But I think AGG comes with makefiles and such for GCC/Linux right out of the box. Anyway, I know Cairo *can* be compiled and used on Windows; it just seems more like it's at the stage of "Hey, look, some guy managed to compile it on Windows too!" as opposed to intentionally and proactively supporting non-Linux, non-GTK usage.

> I understand Agg is faster - I wish Cairo used Agg as its in-memory
> renderer, but as it started with that part, it probably never will, and
> at this point, I don't know that Agg really is any better than Cairo.
> And performance improvements will come.

Yeah. Probably the most interesting thing about AGG is that it is so pervasively templatized, allowing you to do things like create a dedicated pipeline for rendering directly to some odd image format just by changing one parameter and implementing a few low-level rasterization routines. In theory, anyway :-). Anyway, AGG is really more a construction kit for building vertex-processing pipelines.
Cairo, I think, is a concrete implementation of one particular pipeline.

> I wonder if it would be possible to use the Xara engine somehow (GPL)....

I think the current plan is a BSD-like license, but I guess that could change.

--bb
|
From: <pyg...@li...> - 2007-07-19 00:54:54
|
Howdy Chris,

On 7/19/07, pyg...@li... <pyg...@li...> wrote:
>
> Hi Chris
>
> On 18 Jul 2007, at 11:42 PM, pyg...@li... wrote:
>
> > On hit testing code:
> >
> > If you want to capture the mouse-leave and mouse-enter events, then hit
> > testing has to be fast: it has to be done for every motion event -- which
> > is a lot!

Hit testing doesn't really scare me. I come from 3D graphics, where we have decent algorithms for real-time N-body object-object collision detection. In comparison, point-object collision detection in 2D is pretty benign. When push comes to shove you can always resort to rasterization for any object you can't reject in some other way.

> I know, that's why I really liked what you did in FloatCanvas with the
> hitcanvas.

How does that work? I'm not familiar enough with fc. Do you automate the rasterization for hit testing somehow in a second buffer?

> Overlapping objects are just a problem, but I suppose one can use some
> form of color mixing to create a new color when objects overlap and
> store lists of objects associated with a hit color.

Hmm, but that'll give you 24 objects at most. I think sequential testing makes the most sense. You do know their z-order after all, so you can test a stack of things you can't reject any other way by rasterizing them one by one into a hit buffer till you hit one.

> > bb wrote:
> >> building a hierarchy on the
> >> document bounding boxes so you avoid having to do lots of
> >> transforms into
> >> individual object coordinates is probably the better way to go.
> >> That'll
> >> give O(lg N) performance in hit testing, whereas improving
> >> individual hit
> >> tests will still give you something that's O(N).
> >
> > Are you talking about some sort of spatial index? If so, that would be
> > great, and you might want to check out rtree. I've had it in mind for
> > FloatCanvas for a while:
> >
> > http://zcologia.com/news/433/rtree-0-1-0/

Cool. Thanks for the link.
I've heard of those, but hadn't really spent any time looking into what makes the most sense as a spatial index for a canvas (since it doesn't really seem to be a bottleneck at all right now).

> Retief wrote:
> >> A developer's specific
> >> application using pyGEF could replace Plotter with any implementation
> >> of their choice, as long as it supplies the minimum functionality
> >> required by the rest of the framework.
> >
> > I've always been pretty skeptical of "backend-independent" systems.
> > You're kind of stuck with the lowest common denominator. As it is,
> > GraphicsContext is a backend-independent wrapper, as is Cairo
> > (though at
> > least Cairo does its own in-memory bitmap rendering).
>
> Maybe I can clarify this a bit. You are correct that l.c.d. always
> rules when trying to create an interchangeable back-end. The problem
> is that the APIs of the different possible back-ends differ quite
> substantially. This would make interchangeability very difficult or
> would limit the functionality. pyGEF as an MVC framework must provide
> high-level framework and structure to a graphically oriented
> application.

I took a look at how Inkscape is organized. There, each component is responsible for generating an Inkscape path object that represents it. Basically a GraphicsContext path type thing, but not really tied to any back-end. It's their own internal path object. Then there's a rasterizer component that takes all the paths and is responsible for actually rendering them. Anyway, I think having components generate path objects rather than actually issuing graphics commands makes for a nice separation of objects from GUIs. Inkscape currently has two back-ends; the second one is used for rendering a fast outlines-only mode.

> Chris, have you looked at the traits and traits.ui libraries by
> enthought? I am also trying to fit pyGEF into their design structure,
> because I think they did some really good work.
I'm not yet convinced that traits.ui is appropriate for more than just rapid prototyping and a few odds-and-ends parts of an app's GUI. From screenshots I've seen, my impression is that it often generates pretty ugly GUIs when you just let it do its thing. But Traits itself seems like a pure winner all the way. Just as soon as they get it untangled from the rest of enthought and installable via PyPI, I think it's going to take over the world.

--bb
|
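The colour-keyed hit buffer discussed earlier in the thread can be sketched independently of any toolkit: every object is rasterized into an off-screen buffer with a unique colour key, and a mouse hit becomes a single pixel read plus a dictionary lookup. The class and method names below are hypothetical illustrations, not FloatCanvas's or pyGEF's actual API:

```python
class HitBuffer:
    """Off-screen buffer mapping pixel colours back to objects.

    Each registered object gets a unique colour key; drawing code
    rasterizes the object into `pixels` using that key, and hit
    testing is then O(1) regardless of how many objects exist.
    """

    def __init__(self, width, height):
        self.width, self.height = width, height
        # colour 0 means "no object" (the background)
        self.pixels = [[0] * width for _ in range(height)]
        self._colour_to_obj = {}
        self._next_colour = 1

    def register(self, obj):
        """Assign a unique colour key to obj (up to 2**24 - 1 keys in a 24-bit buffer)."""
        colour = self._next_colour
        self._next_colour += 1
        self._colour_to_obj[colour] = obj
        return colour

    def fill_rect(self, colour, x, y, w, h):
        """Rasterize an axis-aligned rectangle with the object's colour key."""
        for row in range(max(0, y), min(self.height, y + h)):
            for col in range(max(0, x), min(self.width, x + w)):
                self.pixels[row][col] = colour

    def hit(self, x, y):
        """Return the object under pixel (x, y), or None."""
        if not (0 <= x < self.width and 0 <= y < self.height):
            return None
        return self._colour_to_obj.get(self.pixels[y][x])
```

Note that this resolves only the topmost object per pixel, which is exactly the overlapping-objects limitation the thread debates; redrawing the stack one object at a time into the buffer is the sequential workaround Bill describes.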
From: <pyg...@li...> - 2007-07-19 00:10:13
|
It is using svn traits 3.0 right now. It's going to use Traits 3.0's 'interfaces'. That said, I haven't tried to run it since Retief started going to town with the Traits-ification. I was sort of waiting for that 3.0 release, which I'm sure must be just around the corner.

--bb

On 7/19/07, pyg...@li... <pyg...@li...> wrote:
>
> Hi,
>
> I'm trying to give pyGEF a try, but I'm getting:
>
> Traceback (most recent call last):
>   File "pyGEFDemo.py", line 45, in <module>
>     from enthought.traits.ui.api import View, Item, ListEditor
> ImportError: No module named api
>
> I've just installed traits 1.1 from the enthought site, but it does not,
> indeed, have an enthought.traits.ui.api module.
>
> What version should I be using?
>
> Note that there has been discussion on the matplotlib list recently
> about using Traits with matplotlib. From that discussion:
>
> """
> Well, I do suggest that matplotlib start using Traits 3 rather than
> Traits 2. Otherwise, they will need to switch later.
> """
>
> Which probably applies to this project too.
>
> -Chris
|
From: <pyg...@li...> - 2007-07-18 23:35:42
|
Hi,

I'm trying to give pyGEF a try, but I'm getting:

Traceback (most recent call last):
  File "pyGEFDemo.py", line 45, in <module>
    from enthought.traits.ui.api import View, Item, ListEditor
ImportError: No module named api

I've just installed traits 1.1 from the enthought site, but it does not, indeed, have an enthought.traits.ui.api module.

What version should I be using?

Note that there has been discussion on the matplotlib list recently about using Traits with matplotlib. From that discussion:

"""
Well, I do suggest that matplotlib start using Traits 3 rather than Traits 2. Otherwise, they will need to switch later.
"""

Which probably applies to this project too.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no...
|
From: <pyg...@li...> - 2007-07-18 23:29:01
|
Hi Chris

On 18 Jul 2007, at 11:42 PM, pyg...@li... wrote:

> Hi Retief, Bill, and ???,
>
> I figured it was time to keep an eye on what you've been up to, it
> certainly looks like some good stuff, so I've joined the list.
>
> I scanned through the mailing list a bit, and have a few thoughts
> to share:
>
> On hit testing code:
>
> If you want to capture the mouse-leave and mouse-enter events, then hit
> testing has to be fast: it has to be done for every motion event -- which
> is a lot!

I know, that's why I really liked what you did in FloatCanvas with the hitcanvas. Overlapping objects are just a problem, but I suppose one can use some form of color mixing to create a new color when objects overlap and store lists of objects associated with a hit color. Having 10 layers, each with a hit canvas, could pose a problem.

> bb wrote:
>> building a hierarchy on the
>> document bounding boxes so you avoid having to do lots of
>> transforms into
>> individual object coordinates is probably the better way to go.
>> That'll
>> give O(lg N) performance in hit testing, whereas improving
>> individual hit
>> tests will still give you something that's O(N).
>
> Are you talking about some sort of spatial index? If so, that would be
> great, and you might want to check out rtree. I've had it in mind for
> FloatCanvas for a while:
>
> http://zcologia.com/news/433/rtree-0-1-0/

I'm aware of this, but currently the actual drawing is the biggest bottleneck with the GC. So after stabilizing the higher-level editing framework stuff, I will move over to optimizing the drawing. As far as I can remember, fc also redrew the entire visible canvas each time. I want to be able to only redraw the dirty region of a dirty layer. After this I think hitting will be the next biggest bottleneck, especially if enter and exit events must also be handled (at this stage I have quietly ignored this feature. :))
> It works with bounding boxes, and can quickly respond to the question:
> what BBs is this point in? and what BBs intersect this BB?

Bill wrote a small geom library in pyrex, which I use for my hit test, so it should be really fast.

> Another note:
>
> It sounds like you're defining hitting as a hit within the Bounding Box
> of the object, which is really not that good -- there are times when you
> really want the hit to be a hit on the object itself -- or have I
> missed something?

Yes, currently the hit is tested against the bounding box around an object, as it is a simple less-than/greater-than test on x, y, w, h. BUT not everything is rectangular. This is something that has bothered me, but I found that many "vector" drawing apps only use the bb. Thus the bounding box is not the best way, but it is much faster than using path-inclusion tests. Again, here is where your hitcanvas shines. :)

> Retief wrote:
>> A developer's specific
>> application using pyGEF could replace Plotter with any implementation
>> of their choice, as long as it supplies the minimum functionality
>> required by the rest of the framework.
>
> I've always been pretty skeptical of "backend-independent" systems.
> You're kind of stuck with the lowest common denominator. As it is,
> GraphicsContext is a backend-independent wrapper, as is Cairo
> (though at
> least Cairo does its own in-memory bitmap rendering).

Maybe I can clarify this a bit. You are correct that l.c.d. always rules when trying to create an interchangeable back-end. The problem is that the APIs of the different possible back-ends differ quite substantially. This would make interchangeability very difficult or would limit the functionality. pyGEF as an MVC framework must provide high-level framework and structure to a graphically oriented application.

> Actually, Cairo has a lot going for it -- it's powerful, flexible,
> under
> active development, and is being used by major projects (GTK and
> Mozilla).
> It also gives you other output options: PDF, PostScript,
> Printer?
>
> I understand Agg is faster - I wish Cairo used Agg as its in-memory
> renderer, but as it started with that part, it probably never will,
> and
> at this point, I don't know that Agg really is any better than Cairo.
> And performance improvements will come.
>
> I wonder if it would be possible to use the Xara engine somehow
> (GPL)....
>
> http://www.xaraxtreme.org/

I do need to look at these other back-ends, but again, getting the higher-level stuff stable and well structured is more important now. The current framework does not really depend that much on the actual drawing back-end. The most important dependency is that the canvas/context must support AffineTransforms or provide a means of "simulating" them. The GraphicPrimitives (rect, cir, DrawObject(fc), etc.) depend on the drawing back-end, so each drawing back-end will provide its own set of graphic primitives. These primitives will probably not be usable on other back-ends. Then the Views will depend on the GraphicPrimitives, i.e. Views will be defined for a specific back-end. One can then define multiple Views on multiple back-ends for the same "Document". In this kind of design, it is the GraphicPrimitives that define the l.c.d., not entirely the canvas/context.

Then one also gets to the question: what should be part of the framework, and what should be part of the app?

As an example: it only took me about a day to move my demo app from using fc to using GC. Obviously I first had to figure out how GC worked. In the meantime, though, a lot of the APIs have changed quite a bit and AffineTransforms are heavily used (something that fc does not support), so moving back to fc might be a little bit more difficult, but not impossible.

> Oh, and about using FloatCanvas as a back-end -- it seems it overlaps so
> much with pyGEF that there really wouldn't be much point.
> Indeed, I'm
> here to see if it's time to drop FloatCanvas and move to pyGEF!

The main reason for wanting to use fc as a back-end is that it is much faster than the GC. I would also not say that pyGEF overlaps that much with fc; I think it is the new GC that provides a lot of the same functionality as fc. I, and I think many other people, needed an editing framework that would...

> I'm very interested to see where this all leads...

Chris, have you looked at the traits and traits.ui libraries by enthought? I am also trying to fit pyGEF into their design structure, because I think they did some really good work.

--retief
|
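The hit test described in this exchange (inverse-transform the document-space point into the object's own frame, then do a cheap axis-aligned rectangle check) can be sketched in a few lines. `Affine` and `bbox_hit` here are illustrative stand-ins, not the actual geom-library API:

```python
import math


class Affine:
    """Maps (x, y) -> (a*x + c*y + e, b*x + d*y + f): a 2x2 linear
    part plus a translation, matching the usual 2D affine convention."""

    def __init__(self, a=1.0, b=0.0, c=0.0, d=1.0, e=0.0, f=0.0):
        self.a, self.b, self.c, self.d, self.e, self.f = a, b, c, d, e, f

    @classmethod
    def rotation(cls, theta):
        """Counter-clockwise rotation about the origin."""
        return cls(math.cos(theta), math.sin(theta),
                   -math.sin(theta), math.cos(theta))

    def transform(self, x, y):
        return (self.a * x + self.c * y + self.e,
                self.b * x + self.d * y + self.f)

    def inverse(self):
        """Invert the linear part, then back out the translation."""
        det = self.a * self.d - self.b * self.c
        ia, ib = self.d / det, -self.b / det
        ic, id_ = -self.c / det, self.a / det
        ie = -(ia * self.e + ic * self.f)
        if_ = -(ib * self.e + id_ * self.f)
        return Affine(ia, ib, ic, id_, ie, if_)


def bbox_hit(px, py, rect, transform):
    """True if document point (px, py) lies inside `rect` = (x, y, w, h),
    where rect is given in the object's own (untransformed) coordinates
    and `transform` maps object coordinates to document coordinates."""
    lx, ly = transform.inverse().transform(px, py)
    x, y, w, h = rect
    return x <= lx <= x + w and y <= ly <= y + h
```

This is exactly the pattern the later messages in the archive debate optimizing: the inverse here is recomputed on every test, which is what caching the inverse transform alongside the forward one would avoid.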
From: <pyg...@li...> - 2007-07-18 21:40:52
|
Hi Retief, Bill, and ???,

I figured it was time to keep an eye on what you've been up to, it certainly looks like some good stuff, so I've joined the list.

I scanned through the mailing list a bit, and have a few thoughts to share:

On hit testing code:

If you want to capture the mouse-leave and mouse-enter events, then hit testing has to be fast: it has to be done for every motion event -- which is a lot!

bb wrote:
> building a hierarchy on the
> document bounding boxes so you avoid having to do lots of transforms into
> individual object coordinates is probably the better way to go. That'll
> give O(lg N) performance in hit testing, whereas improving individual hit
> tests will still give you something that's O(N).

Are you talking about some sort of spatial index? If so, that would be great, and you might want to check out rtree. I've had it in mind for FloatCanvas for a while:

http://zcologia.com/news/433/rtree-0-1-0/

It works with bounding boxes, and can quickly respond to the question: what BBs is this point in? and what BBs intersect this BB?

Another note:

It sounds like you're defining hitting as a hit within the Bounding Box of the object, which is really not that good -- there are times when you really want the hit to be a hit on the object itself -- or have I missed something?

Retief wrote:
> A developer's specific
> application using pyGEF could replace Plotter with any implementation
> of their choice, as long as it supplies the minimum functionality
> required by the rest of the framework.

I've always been pretty skeptical of "backend-independent" systems. You're kind of stuck with the lowest common denominator. As it is, GraphicsContext is a backend-independent wrapper, as is Cairo (though at least Cairo does its own in-memory bitmap rendering).

Actually, Cairo has a lot going for it -- it's powerful, flexible, under active development, and is being used by major projects (GTK and Mozilla).
It also gives you other output options: PDF, PostScript, Printer?

I understand Agg is faster - I wish Cairo used Agg as its in-memory renderer, but as it started with that part, it probably never will, and at this point, I don't know that Agg really is any better than Cairo. And performance improvements will come.

I wonder if it would be possible to use the Xara engine somehow (GPL)....

http://www.xaraxtreme.org/

Oh, and about using FloatCanvas as a back-end -- it seems it overlaps so much with pyGEF that there really wouldn't be much point. Indeed, I'm here to see if it's time to drop FloatCanvas and move to pyGEF!

I'm very interested to see where this all leads...

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no...
|
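rtree (the package linked above) answers exactly the "what BBs is this point in?" query. As an illustration of what any spatial index buys over scanning every object, here is a toy uniform-grid index; this is my own sketch and bears no resemblance to rtree's actual API:

```python
from collections import defaultdict


class GridIndex:
    """Toy uniform-grid spatial index over axis-aligned bounding boxes.

    Objects are bucketed into fixed-size cells; a point query then only
    scans the one cell the point falls in, instead of every object.
    An R-tree does the same job adaptively, without a fixed cell size.
    """

    def __init__(self, cell=32):
        self.cell = cell
        self._cells = defaultdict(list)  # (cx, cy) -> [(bbox, obj), ...]

    def _cells_for(self, bbox):
        """Yield every grid cell a bbox (x0, y0, x1, y1) touches."""
        x0, y0, x1, y1 = bbox
        for cx in range(int(x0 // self.cell), int(x1 // self.cell) + 1):
            for cy in range(int(y0 // self.cell), int(y1 // self.cell) + 1):
                yield (cx, cy)

    def insert(self, bbox, obj):
        for key in self._cells_for(bbox):
            self._cells[key].append((bbox, obj))

    def query_point(self, x, y):
        """Return objects whose bbox contains (x, y); one cell is scanned."""
        hits = []
        key = (int(x // self.cell), int(y // self.cell))
        for (x0, y0, x1, y1), obj in self._cells[key]:
            if x0 <= x <= x1 and y0 <= y <= y1:
                hits.append(obj)
        return hits
```

This only does the cheap early-reject stage; the objects it returns would still need the exact per-object hit test (inverse transform plus contains) discussed elsewhere in the thread.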
From: <pyg...@li...> - 2007-07-12 08:28:36
|
Hi

I have also completed my new Graphics module, which now uses traits and your geom lib.

I also added a GraphicsPrimitives module that contains some graphics primitives. This is basically just a rework of my previous primitives, but with a cleaner API; it now also uses traits and the Graphic module. There is also a very simple test app, BasicGraphicDemo. You can view the code to see which keys are bound to events (no mouse interaction at this stage, as I am not using Tools, Controls or the Editor yet; it still needs refactoring). I will upload these files later today.

I have also been spending the last couple of days looking at other drawing tools and their capabilities. I think something like Inkscape is a good benchmark.

I saw that they have the linewidth problem when scaling is applied to a Graphic. This could be overcome by scaling the pen size, depending on the overall scale of the Graphic. This however only looks good if the scaling is sx = ±sy. The other option is to override the scale method in each of the GraphicPrimitives, so that the scale factor is used to resize the Graphic and the scales remain at 1. This is probably the best option but will require the most work. Any thoughts?

I am also looking at custom Editors and Handlers in traits, so that I can make pyGEF compatible with traits UI. In the BasicGraphicDemo, one can already use ctrl-E to configure the traits of the Graphic (after making any changes one must just press any key for the screen to update). At this stage this produces the default traits view, which must still be customized, but it's a start.

--retief
|
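The pen-size compensation idea mentioned above can be sketched as a small helper. `scaled_pen_width` and the `(a, b, c, d)` tuple convention for the linear part of the affine are my own hypothetical choices, not pyGEF's API. It uses the lengths of the transformed unit axis vectors, so it only behaves well when the two scale factors are close, which is the caveat raised in the message:

```python
import math


def scaled_pen_width(base_width, linear):
    """Compensate a pen width for an object's overall scale so strokes
    keep roughly consistent visual weight under the transform.

    `linear` is the linear part (a, b, c, d) of an affine that maps
    x' = a*x + c*y, y' = b*x + d*y. The geometric mean of the two axis
    scale factors is used, which only looks right when they are close
    (non-uniform scaling would really need per-direction widths).
    """
    sx = math.hypot(linear[0], linear[1])  # length of the transformed unit x vector
    sy = math.hypot(linear[2], linear[3])  # length of the transformed unit y vector
    return base_width * math.sqrt(sx * sy)
```

The alternative the message prefers, baking the scale into the primitive's geometry and keeping the transform's scale at 1, avoids this problem entirely at the cost of overriding scale in every primitive.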
From: <pyg...@li...> - 2007-07-07 23:33:55
|
On 7/8/07, pyg...@li... <pyg...@li...> wrote:
>
> Hi
>
> On 07 Jul 2007, at 11:32 PM, pyg...@li... wrote:
>
> > On 7/8/07, pyg...@li... <pyg...@li...> wrote:
> >
> > Hi
> >
> > I'm trying to optimize the hitting and selection of graphical objects.
> > I can't quite make up my mind, maybe you can help.
> >
> > In the previous design a Graphic object can define a boundingbox; this
> > boundingbox is stored in the untransformed form, just like the Graphic. So
> > to draw the Graphic, I concatenate their Affine with the GC's and draw them
> > in their own coordinate system. I can therefore use their boundingbox to draw
> > a selection region around the selected object.
> >
> > When I added the SelectionManager, things had to change slightly,
> > because when multiple Graphics are selected the bounding region must be
> > square with the axis; this will also be used to identify dirty regions when
> > the view needs repainting.
> >
> > To simplify things I made the boundingbox always be square with the
> > axis. However, when only one is selected and when a hit is tested, the
> > original transformed boundingbox should be used.
> >
> > Do you actually have a performance problem with hit testing? Or are you
> > just worried that you will?
> > If you do an early reject using bounding boxes you can avoid the most
> > expensive part of the testing. Then the only case to worry about is the one
> > where thousands of objects are all piled on top of one another. But a)
> > that's not that likely a case and b) it still may be fast enough for GUI
> > speeds without doing anything special.
> >
> > Ok, but I'll assume for now that you really do have a performance problem
> > with hit testing --
>
> I don't have any performance problems per se, but it is the operation
> that probably will be performed the most, so it should be done as
> efficiently as possible.

It may be performed a lot, but it doesn't really need to be able to return an answer that quickly for interactive use.
Meaning you don't generally have situations where you have to do hit testing 1000 times per second or anything like that. But even if you do, building a hierarchy on the document bounding boxes so you avoid having to do lots of transforms into individual object coordinates is probably the better way to go. That'll give O(lg N) performance in hit testing, whereas improving individual hit tests will still give you something that's O(N).

> > Another performance issue is that a lot of inverse transforms are needed to
> > identify a hit, as shown in the example:
> > 1. Mouse is pressed at XY
> > 2. XY gets transformed to DOCUMENT coordinates
> > 3. Loop over all hittable Graphics (as I make a list of all the Graphics
> > that were hit, this way a hit on the same point will select the next one in
> > the list)
> > 4. Inverse transform the DOCXY to the coordinates of the Graphic
> > 5. Use contains to see if the Graphic was hit
> >
> > Any other clever way of avoiding this constant inverse transformation,
> > other than making boundingboxes square with the axis?
> > Would caching the inverse transform provide any performance increases?
>
> Yes, I think always storing the inverse transform along with the forward
> transform would probably help.
> But I also think sticking in a bbox check at step 3 will help a lot too.
> You do maintain a bounding box in doc coordinates for each object already,
> right?
>
> You just gave me an idea that I did not think of before. Thanks. I will
> just have to update my dirty region slightly, as it fits around the objects
> inside the graphic, not the actual bbox. Currently some of the corners of a
> rotated object could get missed. This is probably also the better way of
> handling the dirty region.

Yeah, I think so too. Having a "bounding box" that doesn't actually bound the object seems odd.

> Could you consider adding the following functionality to your geom library.
> > def HitTest(Point p, Rect rect, Affine transform):
> >     """ must return True if the rectangle is hit """
> >
> > If this is done with pyrex it could really help with performance. This
> > also defines a "standard" interface and we can later play with the
> > implementation.
> >
> > So that would do an inverse transform of the point and then test it
> > against the rect? I don't think that's going to be much faster than
> >
> >     rect.contains(transform.inv_transform(p))
> >
> > because the contains() and inv_transform() are already in pyrex.
>
> So the method calls do not add that much overhead?

Well, it would just make it one method call instead of two. I don't think that's going to be that significant compared to the overall work involved in calling the two, but benchmarking is the only way to be sure.

> I think I have found some bugs in your geom lib:
> 1. rect.intersects returns None if there is an intersection; a return True
> is probably all that is missing from the end of the method.

Yes, probably a bug. Thanks.

> 2. affine.shearing is missing; affine.scaling is there.

I guess I could add that. I couldn't think of much use for it. So just set the shx and shy parts of the transform and leave sx, sy as 1.0?

> Other comments on geom:
> 1. If affine.scaling returns a point then AffineScale should probably also
> be allowed to accept a point as constructor argument; even though this might
> not make any sense, it will make the API uniform, as most others accept
> points. Just a thought.
> 2. The same applies to AffineShear.

Yeah, maybe so.

--bb
|
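The caching idea agreed on above (store the inverse alongside the forward transform, and invalidate it when the transform changes) can be sketched as follows. `CachedAffine` is a hypothetical helper, not the geom library's actual class:

```python
class CachedAffine:
    """Affine transform that lazily caches its inverse.

    The forward transform maps (x, y) -> (a*x + c*y + e, b*x + d*y + f).
    The cached inverse coefficients are recomputed only after set() is
    called, so repeated inv_transform() calls during hit testing avoid
    redoing the 2x2 inversion every time.
    """

    def __init__(self, a=1.0, b=0.0, c=0.0, d=1.0, e=0.0, f=0.0):
        self._m = (a, b, c, d, e, f)
        self._inv = None

    def set(self, a, b, c, d, e, f):
        """Replace the forward transform; the cached inverse goes stale."""
        self._m = (a, b, c, d, e, f)
        self._inv = None

    def inv_transform(self, x, y):
        """Map a document-space point back into object coordinates."""
        if self._inv is None:
            a, b, c, d, e, f = self._m
            det = a * d - b * c  # assumes a non-degenerate transform
            ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
            self._inv = (ia, ib, ic, id_,
                         -(ia * e + ic * f), -(ib * e + id_ * f))
        ia, ib, ic, id_, ie, if_ = self._inv
        return (ia * x + ic * y + ie, ib * x + id_ * y + if_)
```

In a setting like Traits, the invalidation in `set()` would naturally hang off a change-notification handler on the transform's traits.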
From: <pyg...@li...> - 2007-07-07 21:55:30
|
Hi

On 07 Jul 2007, at 11:32 PM, pyg...@li... wrote:

> On 7/8/07, pyg...@li... <pyg...@li...> wrote:
> Hi
>
> I'm trying to optimize the hitting and selection of graphical objects.
> I can't quite make up my mind, maybe you can help.
>
> In the previous design a Graphic object can define a boundingbox;
> this boundingbox is stored in the untransformed form, just like the
> Graphic. So to draw the Graphic, I concatenate their Affine with
> the GC's and draw them in their own coordinate system. I can
> therefore use their boundingbox to draw a selection region around
> the selected object.
>
> When I added the SelectionManager, things had to change slightly,
> because when multiple Graphics are selected the bounding region
> must be square with the axis; this will also be used to identify
> dirty regions when the view needs repainting.
>
> To simplify things I made the boundingbox always be square with
> the axis. However, when only one is selected and when a hit is
> tested, the original transformed boundingbox should be used.
>
> Do you actually have a performance problem with hit testing? Or
> are you just worried that you will?
> If you do an early reject using bounding boxes you can avoid the
> most expensive part of the testing. Then the only case to worry
> about is the one where thousands of objects are all piled on top of
> one another. But a) that's not that likely a case and b) it still
> may be fast enough for GUI speeds without doing anything special.
>
> Ok, but I'll assume for now that you really do have a performance
> problem with hit testing --

I don't have any performance problems per se, but it is the operation that probably will be performed the most, so it should be done as efficiently as possible.

> Another performance issue is that a lot of inverse transforms are
> needed to identify a hit, as shown in the example:
> 1.
Mouse is pressed at XY
> 2. XY gets transformed to DOCUMENT coordinates
> 3. Loop over all hittable Graphics (as I make a list of all the
> Graphics that were hit, this way a hit on the same point will
> select the next one in the list)
> 4. Inverse transform the DOCXY to the coordinates of the Graphic
> 5. Use contains to see if the Graphic was hit
>
> Any other clever way of avoiding this constant inverse
> transformation, other than making boundingboxes square with the axis?
> Would caching the inverse transform provide any performance increases?
>
> Yes, I think always storing the inverse transform along with the
> forward transform would probably help.
> But I also think sticking in a bbox check at step 3 will help a lot
> too. You do maintain a bounding box in doc coordinates for each
> object already, right?

You just gave me an idea that I did not think of before. Thanks. I will just have to update my dirty region slightly, as it fits around the objects inside the graphic, not the actual bbox. Currently some of the corners of a rotated object could get missed. This is probably also the better way of handling the dirty region.

> Could you consider adding the following functionality to your geom
> library.
>
> def HitTest(Point p, Rect rect, Affine transform):
>     """ must return True if the rectangle is hit """
>
> If this is done with pyrex it could really help with performance.
> This also defines a "standard" interface and we can later play with
> the implementation.
>
> So that would do an inverse transform of the point and then test it
> against the rect? I don't think that's going to be much faster than
>
>     rect.contains(transform.inv_transform(p))
>
> because the contains() and inv_transform() are already in pyrex.

So the method calls do not add that much overhead?

> --bb

I think I have found some bugs in your geom lib:
1.
rect.intersects returns None if there is an intersection a = return =20 True is probably all that is missing from the end of the method. 2. affine.shearing is missing, affine.scaling is there Other comments on geom: 1. If affine.scaling returns a point then AffineScale should =20 probably also be allowed to accept a point as constructor argument, =20 even though this might not make any sense it will make the API =20 uniform as most other accepts points. Just a thought. 2. the some applies to AffineShear --retief > ----------------------------------------------------------------------=20= > --- > This SF.net email is sponsored by DB2 Express > Download DB2 Express C - the FREE version of DB2 express and take > control of your XML. No limits. Just data. Click to get it now. > http://sourceforge.net/powerbar/db2/=20 > _______________________________________________ > pyGEF-develop mailing list > pyG...@li... > https://lists.sourceforge.net/lists/listinfo/pygef-develop Retief Gerber Lektor Lecturer Departement Elektriese en Elektroniese Ingenieurswese Department of Electrical and Electronic Engineering Tel: +27 21 808 4011 I Faks/Fax: +27 21 808 4981 E-pos/E-mail: hrg...@su... Universiteit Stellenbosch University Privaat Sak/Private Bag X1 Matieland 7602 Suid-Afrika/South Africa www.eng.sun.ac.za =EF=BF=BC |
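[Editor's note] The hit-testing flow discussed in this thread (an axis-aligned doc-space bounding-box early reject, then an inverse transform into the Graphic's local frame) can be sketched roughly as below. The Point/Rect/Affine/Graphic classes here are simplified stand-ins for illustration, not the actual API of the geom library.

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float
    def contains(self, p):
        return (self.x <= p.x <= self.x + self.w and
                self.y <= p.y <= self.y + self.h)

class Affine:
    """Rotation + translation only, for brevity."""
    def __init__(self, angle=0.0, tx=0.0, ty=0.0):
        self.c, self.s = math.cos(angle), math.sin(angle)
        self.tx, self.ty = tx, ty
    def inv_transform(self, p):
        # inverse of rotate-then-translate: untranslate, then rotate back
        x, y = p.x - self.tx, p.y - self.ty
        return Point(self.c * x + self.s * y, -self.s * x + self.c * y)

class Graphic:
    def __init__(self, local_bbox, transform, doc_bbox):
        self.bbox = local_bbox       # untransformed, in the Graphic's own frame
        self.transform = transform   # local -> document
        self.doc_bbox = doc_bbox     # axis-aligned box in document coordinates

def hit_test(doc_xy, graphics):
    """Return all Graphics under doc_xy, cheapest test first."""
    hits = []
    for g in graphics:
        if not g.doc_bbox.contains(doc_xy):
            continue                 # early reject: no inverse transform needed
        if g.bbox.contains(g.transform.inv_transform(doc_xy)):
            hits.append(g)
    return hits
```

Returning the full list of hits matches the "same point selects the next one in the list" behaviour described above; the caller can cycle through it on repeated clicks.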
From: <pyg...@li...> - 2007-07-07 21:32:16
|
On 7/8/07, pyg...@li... <pyg...@li...> wrote:
>
> Hi
> I'm trying to optimize the hitting and selection of graphical objects.
> I can't quite make up my mind, maybe you can help.
>
> In the previous design a Graphic object can define a boundingbox; this boundingbox is stored in the untransformed form, just like the Graphic. So to draw the Graphic, I concatenate their Affine with the GC's and draw them in their own coordinate system. I can therefore use their boundingbox to draw a selection region around the selected object.
>
> When I added the SelectionManager, things had to change slightly, because when multiple Graphics are selected the bounding region must be square with the axis; this will also be used to identify dirty regions when the view needs repainting.
>
> To simplify things I made the boundingbox always square with the axis. However, when only one is selected and when a hit is tested, the original transformed boundingbox should be used.

Do you actually have a performance problem with hit testing? Or are you just worried that you will?

If you do an early reject using bounding boxes you can avoid the most expensive part of the testing. Then the only case to worry about is the one where thousands of objects are all piled on top of one another. But a) that's not that likely a case and b) it still may be fast enough for GUI speeds without doing anything special.

Ok, but I'll assume for now that you really do have a performance problem with hit testing --

> Another performance issue is that a lot of inverse transforms are needed to identify a hit, as shown in the example:
> 1. Mouse is pressed at XY
> 2. XY gets transformed to DOCUMENT coordinates
> 3. Loop over all hittable Graphics (I make a list of all the Graphics that were hit; this way a hit on the same point will select the next one in the list)
> 4. Inverse transform the DOCXY to the coordinates of the Graphic
> 5. Use contains to see if the Graphic was hit
>
> Any other clever way of avoiding this constant inverse transformation, other than making boundingboxes square with the axis?
> Would caching the inverse transform provide any performance increases?

Yes, I think always storing the inverse transform along with the forward transform would probably help. But I also think sticking in a bbox check at step 3 will help a lot too. You do maintain a bounding box in doc coordinates for each object already, right?

> Could you consider adding the following functionality to your geom library?
>
>     def HitTest(Point p, Rect rect, Affine transform):
>         """ must return True if the rectangle is hit """
>
> If this is done with pyrex it could really help with performance. This also defines a "standard" interface and we can later play with the implementation.

So that would do an inverse transform of the point and then test it against the rect? I don't think that's going to be much faster than

    rect.contains(transform.inv_transform(p))

because the contains() and inv_transform() are already in pyrex.

--bb
|
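[Editor's note] "Storing the inverse transform along with the forward transform" is cheap to sketch in pure Python as a lazily cached inverse that is invalidated on mutation. The matrix layout below (row-major 2x2 plus translation) is an assumption for illustration, not the geom library's actual representation.

```python
class CachedAffine:
    """2x2 matrix + translation; point maps as (a*x + b*y + tx, c*x + d*y + ty).
    The inverse is computed on demand and reused until the matrix changes."""
    def __init__(self, a, b, c, d, tx, ty):
        self._m = (a, b, c, d, tx, ty)
        self._inv = None                 # cached inverse, built lazily

    def set(self, a, b, c, d, tx, ty):
        self._m = (a, b, c, d, tx, ty)
        self._inv = None                 # any mutation invalidates the cache

    def transform(self, x, y):
        a, b, c, d, tx, ty = self._m
        return (a * x + b * y + tx, c * x + d * y + ty)

    def inv_transform(self, x, y):
        if self._inv is None:
            a, b, c, d, tx, ty = self._m
            det = a * d - b * c
            # inverse of [[a, b], [c, d]] is [[d, -b], [-c, a]] / det
            self._inv = (d / det, -b / det, -c / det, a / det)
        ia, ib, ic, id_ = self._inv
        x, y = x - self._m[4], y - self._m[5]   # undo translation first
        return (ia * x + ib * y, ic * x + id_ * y)
```

This is the "properties / update only on demand" idea mentioned later in the thread, applied to the inverse rather than to the angle/scale representation.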
From: <pyg...@li...> - 2007-07-07 17:46:03
|
Hi

I'm trying to optimize the hitting and selection of graphical objects. I can't quite make up my mind, maybe you can help.

In the previous design a Graphic object can define a boundingbox; this boundingbox is stored in the untransformed form, just like the Graphic. So to draw the Graphic, I concatenate their Affine with the GC's and draw them in their own coordinate system. I can therefore use their boundingbox to draw a selection region around the selected object.

When I added the SelectionManager, things had to change slightly, because when multiple Graphics are selected the bounding region must be square with the axis; this will also be used to identify dirty regions when the view needs repainting.

To simplify things I made the boundingbox always square with the axis. However, when only one is selected and when a hit is tested, the original transformed boundingbox should be used.

Another performance issue is that a lot of inverse transforms are needed to identify a hit, as shown in the example:
1. Mouse is pressed at XY
2. XY gets transformed to DOCUMENT coordinates
3. Loop over all hittable Graphics (I make a list of all the Graphics that were hit; this way a hit on the same point will select the next one in the list)
4. Inverse transform the DOCXY to the coordinates of the Graphic
5. Use contains to see if the Graphic was hit

Any other clever way of avoiding this constant inverse transformation, other than making boundingboxes square with the axis?
Would caching the inverse transform provide any performance increases?

Could you consider adding the following functionality to your geom library?

    def HitTest(Point p, Rect rect, Affine transform):
        """ must return True if the rectangle is hit """

If this is done with pyrex it could really help with performance. This also defines a "standard" interface and we can later play with the implementation.

--retief
|
From: <pyg...@li...> - 2007-06-29 09:00:15
|
Hi On 29 Jun 2007, at 6:33 AM, <pyg...@li...> wrote: > You were saying you noticed some low level differences in positioning > / rendering in wx.GraphicsContext on different platforms. I'm finding > that too. Also font scaling seems odd, and it seems difficult to > consistently render a bitmap at pixel-for-pixel on the screen (for > rendering grab handles regardless of view scale). Scaling of stroke > widths seems also to be very odd. There are big jumps in the width > when you pass a certain scale. > > I asked about GraphicsContext on the wxWidgets dev list and it seems > like the response was that no one was really working on it right now, > and there aren't really any improvements in the works, though they > generally agreed with my comments on things that need to be done. So > it may be going in the right direction, but it's not going there any > time soon. > > This is making me think maybe it would be better to implement our own > GraphicsContext-like class on top of a platform-independent renderer > like Agg or Cairo. That way all rendering will be guaranteed platform > independent, and additionally it would give us better access to the > innards. I.e. if a feature is missing or a bug is found it could be > fixed in the local rendering lib rather than requiring getting the fix > into wxWidgets itself and waiting for it to percolate through to > wxPython. > > - In Cairo's favor it's pretty full featured, and will probably > eventually be available as an official implementation of > wx.GraphicsContext on all platforms (there's already a wxCairoContext > in the "generic" directory of wxWidgets' source, just not compiled for > Windows usually). On the down side, the only way I've ever seen > Cairo compiled on Windows is using GNU tools, so I suspect it doesn't > have a very portable build system. There's a pyCairo project I found > but it appears to be Unix only. > > - Agg is not as feature-ful in many respects, and has a very unusual > API. 
> But it is quite fast and generates nice-looking output. The most promising thing for making a GraphicsContext subclass out of it seemed to me to be the Graphin project, which provides a GC-like API in C implemented using AGG that could easily be wrapped by a Python class. I used Agg on my last project and was happy with everything but the peculiar API. But with this Graphin wrapper it should be less painful. Graphin will probably require a little work to get compiling on platforms besides Windows, though.
>
> That said, really I'd like to avoid the above if at all possible. It's doable and would have some benefits, but would end up being a lot of work. Really, mostly what I'm seeing as odd with GraphicsContext seems to be things related to the use of GraphicsContext's transform. So it might be easier just to redo things to handle the view transform on my own and let GraphicsContext think it's drawing at 1-to-1 all the time.

I would also really prefer to avoid this (at this stage at least), as doing the extra work NOW might not be worth it in the end. As I see it, it is going to take quite a while for the pyGEF framework to become "complete" and stable. Hopefully by the time this happens wx.GraphicsContext will have pulled up its socks, or ...

My idea with the pyGEF framework is to support any developer-supplied GC. In my current implementation I use a Plotter class that in turn makes use of the wx.GraphicsContext. A developer's specific application using pyGEF could replace Plotter with any implementation of their choice, as long as it supplies the minimum functionality required by the rest of the framework. At this stage it is a bit early to define or fix this interface. At some later stage I want to be able to support floatcanvas again.

As I see it, the requirements of the GC are heavily dependent on the application, not the framework. So trying to create an all-singing, all-dancing GC for the framework to satisfy everybody could in the end satisfy no one.

So my priorities are as follows:
1. Stabilize the pyGEF design, framework and API using traits and traits UI.
2. Implement a complete but basic application to demonstrate all the features of pyGEF.
3. Do work on the GCs (hopefully some 3D stuff as well).
4. Add more GCs so the user can simply select one for their application.

> Really in the end, if you want to give a vector drawing program fancier features, you need to do your own stroking, as no standard API can handle things like variable width strokes automatically. In that regard the work that the lib2geom (http://lib2geom.sourceforge.net/) folks are doing seems interesting.

I will have a look at this.

> But most of this stuff is really only of interest to a vector drawing app. I suspect it's less of an issue for a Simulink type of thing where you can probably get by with the simple stroke/fill options the API gives you.
>
> --bb

--retief
|
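[Editor's note] The developer-replaceable Plotter layer described above could be pinned down as an abstract base class once the interface stabilizes. The method set below is an illustrative guess at the "minimum functionality", not pyGEF's actual Plotter API; concrete subclasses would wrap wx.GraphicsContext, Cairo, Agg, and so on.

```python
from abc import ABC, abstractmethod

class Plotter(ABC):
    """Minimal rendering interface the framework draws through.
    Backends (wx.GraphicsContext, Cairo, Agg, ...) subclass this."""

    @abstractmethod
    def draw_line(self, x1, y1, x2, y2): ...

    @abstractmethod
    def draw_rect(self, x, y, w, h): ...

    @abstractmethod
    def draw_text(self, text, x, y): ...

class RecordingPlotter(Plotter):
    """Test double that records calls instead of rendering; handy
    for testing the framework without any GUI backend at all."""
    def __init__(self):
        self.calls = []
    def draw_line(self, x1, y1, x2, y2):
        self.calls.append(("line", x1, y1, x2, y2))
    def draw_rect(self, x, y, w, h):
        self.calls.append(("rect", x, y, w, h))
    def draw_text(self, text, x, y):
        self.calls.append(("text", text, x, y))
```

An ABC makes the contract explicit: a backend that forgets a required method fails at construction time rather than mid-paint.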
From: <pyg...@li...> - 2007-06-29 04:33:19
|
You were saying you noticed some low level differences in positioning / rendering in wx.GraphicsContext on different platforms. I'm finding that too. Also font scaling seems odd, and it seems difficult to consistently render a bitmap at pixel-for-pixel on the screen (for rendering grab handles regardless of view scale). Scaling of stroke widths seems also to be very odd. There are big jumps in the width when you pass a certain scale. I asked about GraphicsContext on the wxWidgets dev list and it seems like the response was that no one was really working on it right now, and there aren't really any improvements in the works, though they generally agreed with my comments on things that need to be done. So it may be going in the right direction, but it's not going there any time soon. This is making me think maybe it would be better to implement our own GraphicsContext-like class on top of a platform-independent renderer like Agg or Cairo. That way all rendering will be guaranteed platform independent, and additionally it would give us better access to the innards. I.e. if a feature is missing or a bug is found it could be fixed in the local rendering lib rather than requiring getting the fix into wxWidgets itself and waiting for it to percolate through to wxPython. - In Cairo's favor it's pretty full featured, and will probably eventually be available as an official implementation of wx.GraphicsContext on all platforms (there's already a wxCairoContext in the "generic" directory of wxWidgets' source, just not compiled for Windows usually). On the down side, the only way I've ever seen Cairo compiled on Windows is using GNU tools, so I suspect it doesn't have a very portable build system. There's a pyCairo project I found but it appears to be Unix only. - Agg is not as feature-ful in many respects, and has a very unusual API. But it is quite fast and generates nice-looking output. 
The most promising thing for making a GraphicsContext subclass out of it seemed to me to be the Graphin project, which provides a GC-like API in C implemented using AGG that could easily be wrapped by a Python class. I used Agg on my last project and was happy with everything but the peculiar API. But with this Graphin wrapper it should be less painful. Graphin will probably require a little work to get compiling on platforms besides Windows, though. That said, really I'd like to avoid the above if at all possible. It's doable and would have some benefits, but would end up being a lot work. Really, mostly what I'm seeing as odd with GraphicsContext seem to be things related to the use of GraphicsContext's transform. So it might be easier just to redo things to handle the view transform on my own and let GraphicsContext think it's drawing at 1-to-1 all the time. Really in the end, if you want to give a vector drawing program fancier features, you need to do your own stroking as no standard API can handle things like variable width strokes automatically. In that regard the work that lib2geom (http://lib2geom.sourceforge.net/) folks are doing seems interesting. But most of this stuff is really only of interest to a vector drawing app. I suspect it's less of an issue for a Simulink type of thing where you can probably get by with the simple stroke/fill options the API gives you. --bb |
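[Editor's note] The "handle the view transform on my own and let GraphicsContext think it's drawing at 1-to-1" idea amounts to mapping document coordinates to device pixels before any GC call. A rough sketch, with an assumed zoom/pan view model (not code from pyGEF):

```python
class ViewTransform:
    """Document -> device mapping applied before any GC call,
    so the GC itself always draws 1-to-1 in pixels."""
    def __init__(self, zoom=1.0, pan_x=0.0, pan_y=0.0):
        self.zoom, self.pan_x, self.pan_y = zoom, pan_x, pan_y

    def to_device(self, doc_x, doc_y):
        return (doc_x * self.zoom + self.pan_x,
                doc_y * self.zoom + self.pan_y)

    def to_document(self, dev_x, dev_y):
        # inverse mapping, used for mouse hits
        return ((dev_x - self.pan_x) / self.zoom,
                (dev_y - self.pan_y) / self.zoom)

    def stroke_width(self, doc_width, min_px=1.0):
        # scale stroke widths explicitly and clamp to a minimum,
        # sidestepping the GC's jumpy width scaling noted above
        return max(doc_width * self.zoom, min_px)
```

Because the GC never sees a non-identity transform, pixel-exact artifacts like grab-handle bitmaps can be drawn directly at device coordinates regardless of view scale.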
From: <pyg...@li...> - 2007-06-22 18:09:09
|
On 6/22/07, pyg...@li... <pyg...@li...> wrote:
>
> I would love to have a look at your pyrex module, as I have realized that I would need something like this, and also did not find anything very promising. I would ideally like to include it directly within pyGEF. Maybe for now you can just import it somewhere under pyGEF. Then I can also work on it if I can. Maybe a geo package in the current source?

I've attached the current code as it now stands. [wvb- actually no, the SF mailing list wouldn't let me attach it so I'll send it separately directly to you] I'm adding functions to it daily still as I find the need for this or that. There are a few other handy functions in AGG's affine class that I'd like to implement still. I just haven't had the need for them yet. I'll probably eventually implement a poly/bezier path thing of some sort.

> I did read somewhere that PIL has modules that support these sorts of things, but couldn't find it within 5 minutes so I moved on.

Interesting. I just took a quick look at the code myself and couldn't find anything either. I did find some places where a plain python list was being used to represent an affine matrix. There's an ImageTransform.py, but it doesn't seem to do much of anything by itself.

> I would also like to see how you decompose the affine transform, as I did not have much success (I did not try too hard though). Can you extract both orientation, scale and shear? Or only orientation? I suppose I only really need the orientation.

The polar decomposition. It can split a matrix into R*S where R is orthogonal and S is symmetric. I also separate out the translation part, which is easy. The code I have does it iteratively, but it seems to only take a few iterations for the matrices typical of user applications, which are usually well conditioned and don't stray too far from a pure rotation/scale. The S matrix contains both axial scale as well as shearing. But it seems pretty easy to further factor that one into (axial scaling) * (pure shear):

    # Now S contains all the scale/shear
    # extract axial scaling
    self.scalex = S.sx
    self.scaley = S.sy
    # modify S to represent just shear
    S.shy /= S.sx
    S.shx /= S.sy
    S.sx = 1.0
    S.sy = 1.0

Maybe it's worth putting that in the lib too.

I can't say for sure this is the way to go. There are definitely some advantages to storing angle separately; for instance, with a separate angle it becomes trivial to both interpolate rotations and represent rotations of more than 360 degrees. It helps to be able to represent such a thing if you want to do animations, say interpolate between this thing with angle 0 and this thing with angle 720. It should do two flips rather than just sit there and do nothing. :-) However, my hunch is that for such things, if needed at all, they will be reasonably easy to represent using a separate variable that gives the number of additional full rotations on top of what's stored in the matrix.

> Currently I store the orientation, scale and shear in my AffineTransform class and then create the matrix on the fly. Currently this is a bottleneck as it is done many times.

Well, the need for both representations is going to be there no matter which representation you store. But the matrix form is what's most useful when it comes to rendering, and rendering needs to be fast, so that says to me the matrix is what should be stored. Either way you might find it best to cache the other representation and update it only on demand. Properties give you a nice way to do that in Python.

> Some API considerations.
>
> Which of the following APIs is better:
>
>     class Fig:
>         def __init__(self, x, y, w, h):
>             self.x = x
>             self.y = y
>             ...
>
>         def Move(self, dx, dy):
>             ...
>
> or
>
>     class Fig:
>         def __init__(self, xy, shape):
>             self.xy = xy  # Array of len 2
>             ...
>
>         def Move(self, dxy):
>             ...
>
> or
>
>     class Fig:
>         def __init__(self, xy, w, h):
>             self.xy = xy  # Something like Point2D
>             ...
>
>         def Move(self, dxy):
>             ...
>
> One has to consider the user, programmer and efficiency points of view. Traits has a way of forcing the shape etc. of an array. Then numpy arrays are used to do calculations. This way, however, there is a lot of back and forth conversion. So I personally would prefer not to use simple arrays at all.
>
> I think having a Point2D and Matrix2D (especially if implemented in something like pyrex) is probably the best choice if they directly support all the needed calculations.
>
> Would the API then always use this Point2D as arguments, or should simple x and y arguments be used? There seems to be no consensus between the different graphics-intensive libraries, maybe because of backwards compatibility. And something like wx has a huge API to support all the variations. I am hoping to avoid this.

I'm not very consistent about this myself. On the one hand, if you've got an x and a y it seems wrong to introduce the overhead of creating a new Point object just to pass the x,y to the function. On the other hand, if the library is going to turn around and store it as a Point internally anyway, might as well have the user do that.

One idiom I've kind of taken from Numpy is the 'asarray'. In the geom lib attached there are functions 'asPoint', 'asRect' and 'asAffine'. These do their best to convert the input to my Point/Rect/Affine type. For instance asPoint accepts anything with .x and .y fields, or any sequence of length 2, using val[0] and val[1] as x and y. And of course if what you pass in is a Point to begin with, it just leaves it alone. Then in functions you do:

    def move(self, pt):
        pt = asPoint(pt)
        ...

This allows someone to call it efficiently using a Point if they have one already. Or with a wx.Point if that's what they have, or if they have just an x and a y, they can call it using a tuple: obj.move((x,y)). Of course it's more efficient if they do obj.move(Point(x,y)) [avoids creation of an unnecessary tuple], but it's nice to be flexible, I think.

> Something else that I have been thinking about is improving the performance of drawing. One thing that the Tigris GEF guys are doing is very nice, although it is probably standard practice. They keep track of objects that change, or move. They then build up a dirty region from the bounding boxes of the affected components and only repaint this region. Do you have any idea if this is possible with the GraphicsContext in wxPython?

Sure. Definitely possible. You just set a clipping region. But it's also good to do culling prior to that. You don't even need to call draw() for things whose bounding boxes are outside the dirty region.

> At the moment I also draw all my layers to one bitmap, and then blit this bitmap to the screen. Is it possible to draw each layer to its own bitmap and then blit the bitmaps on top of one another? Any transparency issues? Then I also only have to redraw the affected layers.

Yes, I think that's possible. I was thinking about that myself. That's the only way I can think of to be able to quickly drag one object around that's in the middle of a stack of 20 very full layers. If you force the user to always have a "current layer" then you can cache the above and below subsets into bitmaps. Until pretty recently Photoshop didn't allow for more than one layer to be selected at a time, so it's not unprecedented.

> I have too little knowledge about the underlying graphics systems to make judgment calls on their considerations. If you also don't, maybe you can point me to someone that can help.

Graphics in Python is kind of new territory for me too. I'm most familiar with OpenGL and C++. The performance considerations are very different in Python. It seems it's much harder to predict what will be fast in Python code than with C++ code. Even Python gurus seem to be frequently surprised by the results of timeit tests. But I'm often surprised that Python is as fast as it is when I think about all the layers of indirection you have to go through to call a single function. 2GHz is a heck of a lot of Hertz, I think is the reason.

Regards,
--bb
|
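[Editor's note] The iterative polar decomposition described above (M = R*S with R orthogonal and S symmetric) can be reproduced with NumPy using the standard Newton iteration. This is an independent sketch of the same idea, not the code from the geom lib, and it assumes a well-conditioned 2x2 matrix as the thread does.

```python
import numpy as np

def polar_decompose(M, iters=20):
    """Split matrix M into R (orthogonal) @ S (symmetric) via the
    Newton iteration X <- (X + inv(X).T) / 2, which converges to the
    orthogonal polar factor for well-conditioned matrices."""
    R = M.astype(float).copy()
    for _ in range(iters):
        R = 0.5 * (R + np.linalg.inv(R).T)
    S = R.T @ M          # symmetric scale/shear part
    return R, S

# build a known rotation * axial scale and recover the pieces
angle = np.deg2rad(30)
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
M = rot @ np.diag([2.0, 3.0])
R, S = polar_decompose(M)
```

As the email notes, only a few iterations are needed in practice; the iteration converges quadratically near a pure rotation/scale. The S returned here is where the further split into axial scaling times pure shear would be applied.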
From: <pyg...@li...> - 2007-06-14 00:53:47
|
I see what you mean about using traits as a messaging system now. You can do:

    sender.on_trait_change(callback, name="trait_name")
    sender.trait_name = True  # calls callback

Somebody on the wxPython list pointed me to Louie, which is a pubsub framework available via easy_install. The basic pattern there is

    louie.connect(callback, "a.signal", sender)
    louie.send("a.signal", sender, <whatever args you want>)

Here 'a.signal' can be any string or hashable python object. And the sender doesn't have to be any special class either; it's just an object that specifies the origin. So every button can have a "clicked" signal, but if you louie.connect(callback, "clicked", cool_button) you only get callbacks from cool_button. If you want to know the sender then you have to pass it at send time: louie.send("a.signal", obj, sender=obj)

But at the end of the day they accomplish more or less the same thing.

--bb
|
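[Editor's note] The connect/send pattern described for Louie is easy to mimic in a few lines of plain Python, which shows the mechanics without the library. This is an illustrative stand-in, not Louie's implementation (real Louie also uses weak references so receivers don't keep senders alive).

```python
class Dispatcher:
    """Tiny pubsub keyed on (signal, sender), Louie-style."""
    def __init__(self):
        self._receivers = {}    # (signal, id(sender)) -> [callbacks]

    def connect(self, callback, signal, sender=None):
        # sender=None means "call me for this signal from any sender"
        self._receivers.setdefault((signal, id(sender)), []).append(callback)

    def send(self, signal, sender=None, **kwargs):
        keys = [(signal, id(sender))]
        if sender is not None:
            keys.append((signal, id(None)))   # any-sender receivers too
        for key in keys:
            for cb in self._receivers.get(key, []):
                cb(**kwargs)
```

Keying on the sender is what lets every button have a "clicked" signal while a receiver only hears from cool_button.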
From: <pyg...@li...> - 2007-06-14 00:24:35
|
On 6/14/07, pyg...@li... <pyg...@li...> wrote:

I don't really have a good feeling for what the Editor's responsibilities are from looking at the code. It seems like a lot of what's there is stuff that I might want to own myself at the application level. Like the CommandStacks. On the other hand, the LayerManager seems to me like it should be in the canvas. Seems like much of the functionality in Editor is either simply delegating to someone else (e.g. addfigure/removefigure) or it's implementing functionality that may need to be overridden with something more app-specific (like Load/Save if you don't want to use pickle as your file format).

I guess just aggregating a canvas and its undo stack and related tools into one object is somewhat useful from the App's perspective. Probably wouldn't be a bad idea to push as much of the functionality as possible down to the canvas, though, so it's as easy as possible to replace Editor with custom code.

One thing that Inkscape does is separate out a layer between "view" and "document". This layer deals with caching representations of document objects that are tailored to the specific view. For instance in a drawing app, Bezier curves should be tessellated into line segments according to their size on the screen, which depends on the zoom level of the document. The doc contains Bezier curves; the graphics API used by "View" only knows how to display line segments. So the doc is asked to create a display-list for the current view. The display-list in Inkscape's case has to be only things that Cairo knows how to draw. In our case it might consist of things wx.GraphicsContext knows how to draw. Anyway, just thought I'd mention that. At this point such a middle layer should probably be considered an application-specific addition to a view.

> [Retief:] Ok, so Document should only contain ElementData. So then what I had as Editor should then maybe be called DocumentController/Controller. This will clean up the loading and saving of Documents as it only contains DataModel objects that can be serialized. It is then easy to create a View for/from a Document. I think this makes things clearer.
>
> > Canvas -> View or DocumentView
> >
> > Similarly it seems Editor has-a Canvas whereas in the typical Doc/View paradigm one "Document" can have many "Views".
>
> [Retief:] Yes, so a DocumentController can then have many Views/DocumentViews?
>
> > > Control -> Element or DocumentElement/Component/DocumentControl
> >
> > I like Element, yep.
> >
> > > DataModel -> ElementData or ElementDataModel/DocumentDataModel/ElementModel
> >
> > I like ElementData too, yep.
> >
> > > Figure -> ElementView/Graphic? or GraphicObject/DrawObject/DocumentGraphic
> >
> > I lean towards Graphic or GraphicObject.
>
> [Retief:] I was also leaning towards Graphic, but then the ElementData and DocumentView made me lean toward ElementView. Then again a figure is a graphic, not a view. So I'm happy with Graphic.
>
> [Retief:] I am just starting to convert things to traits, but there are still a few things that I need to sort out. How long have you been using traits? Is there a nice document about the inner workings/implementation of traits, as there are a lot of things that happen in the background that traits "figures out" for itself.

If you mean something other than the "Traits User Manual" I don't know of anything. There are a bunch of examples. But the mailing list is very helpful, as you've found.

--bb
|
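[Editor's note] The "display-list tailored to the view" idea above, e.g. flattening a Bezier into more line segments the further you zoom in, can be sketched as follows. The quadratic Bezier and the segment-count heuristic are illustrative assumptions, not Inkscape's or pyGEF's actual code.

```python
import math

def bezier2(p0, p1, p2, t):
    """Point on a quadratic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

def flatten_for_view(p0, p1, p2, zoom, px_per_seg=8.0):
    """Tessellate a curve into line segments based on its on-screen
    size, so the display-list is rebuilt per view, not per document."""
    # rough on-screen length: control-polygon length times the zoom
    approx = (math.dist(p0, p1) + math.dist(p1, p2)) * zoom
    n = max(2, int(approx / px_per_seg))
    return [bezier2(p0, p1, p2, i / n) for i in range(n + 1)]
```

A zoomed-out view gets a handful of segments while a zoomed-in view gets hundreds, which is exactly why the cache belongs in a view-specific middle layer rather than in the document.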
From: <pyg...@li...> - 2007-06-13 21:41:24
|
> -----Original Message-----
> From: pyg...@li... [mailto:pygef-dev...@li...] On Behalf Of pyg...@li...
> Sent: 13 June 2007 09:29 PM
> To: pyg...@li...
> Subject: Re: [pyGEF-develop] Some python tech: zope.interface and enthought Traits
>
> On 6/13/07, Bill Baxter <wb...@gm...> wrote:
> > [[RETIEF:]] There are a couple of options. In pyGEF I create the new Control (Document element), but don't add it to the document. All controls have a createfigure and a createdragfigure method. When an element is added to the design the createfigure method is used to create the figure that is added to the document. The createdragfigure is used to create the figure that is dragged. This figure is held by the Tool, which adds and removes it from the canvas. All dragfigures should be placed on the drag layer. Then only this layer needs to be redrawn as the figure is moved. The other layers can be drawn as a background bitmap. The CreateCommand, that adds the element to the document, is then only called once the element is placed. This makes undo easy. It also makes the escape easy, as adding the figure to the canvas is not implemented as a command. You generally can't undo the escape from a specific tool, although this could be an interesting feature.
>
> That sounds more or less like how I was doing it. Tool holds onto the new object till creation parameters are fixed, and then the "AddObject" command is issued. But for a more general drawing app you might want the interactive placement step to draw the object in the correct layer while dragging, so it won't necessarily be on top.
>
> > [[RETIEF:]] I can see that I must change the names of some of my classes to make things more clear.
> >
> > Here are some suggestions (please comment):
> >
> > Editor -> Document
>
> Document is often used to mean "DataModel" in things like the wx DocView framework. I'm not sure that's the role Editor is serving.

[Retief:] Ok, so Document should only contain ElementData. So then what I had as Editor should then maybe be called DocumentController/Controller. This will clean up the loading and saving of Documents as it only contains DataModel objects that can be serialized. It is then easy to create a View for/from a Document. I think this makes things clearer.

> > Canvas -> View or DocumentView
>
> Similarly it seems Editor has-a Canvas whereas in the typical Doc/View paradigm one "Document" can have many "Views".

[Retief:] Yes, so a DocumentController can then have many Views/DocumentViews?

> > Control -> Element or DocumentElement/Component/DocumentControl
>
> I like Element, yep.
>
> > DataModel -> ElementData or ElementDataModel/DocumentDataModel/ElementModel
>
> I like ElementData too, yep.
>
> > Figure -> ElementView/Graphic? or GraphicObject/DrawObject/DocumentGraphic
>
> I lean towards Graphic or GraphicObject.

[Retief:] I was also leaning towards Graphic, but then the ElementData and DocumentView made me lean toward ElementView. Then again a figure is a graphic, not a view. So I'm happy with Graphic.

[Retief:] I am just starting to convert things to traits, but there are still a few things that I need to sort out. How long have you been using traits? Is there a nice document about the inner workings/implementation of traits, as there are a lot of things that happen in the background that traits "figures out" for itself.
|
From: <pyg...@li...> - 2007-06-13 19:28:42
|
On 6/13/07, Bill Baxter <wb...@gm...> wrote:
> > [[RETIEF:]] There are a couple of options. In pyGEF I create the new
> > Control (Document element), but don't add it to the document. All controls
> > have a createfigure and a createdragfigure method. When an element is added
> > to the design, the createfigure method is used to create the figure that is
> > added to the document. The createdragfigure method is used to create the
> > figure that is dragged. This figure is held by the Tool, which adds and
> > removes it from the canvas. All drag figures should be placed on the drag
> > layer; then only this layer needs to be redrawn as the figure is moved. The
> > other layers can be drawn as a background bitmap. The CreateCommand, which
> > adds the element to the document, is then only called once the element is
> > placed. This makes undo easy. It also makes escape easy, as adding the
> > figure to the canvas is not implemented as a command. You generally can't
> > undo the escape from a specific tool, although this could be an interesting
> > feature.

That sounds more or less like how I was doing it. The Tool holds onto the
new object until the creation parameters are fixed, and then the
"AddObject" command is issued. But for a more general drawing app you
might want the interactive placement step to draw the object in the
correct layer while dragging, so it won't necessarily be on top.

> > [[RETIEF:]] I can see that I must change the names of some of my classes
> > to make things clearer.
> >
> > Here are some suggestions (please comment):
> >
> > Editor -> Document

Document is often used to mean "DataModel" in things like the wx
DocView framework. I'm not sure that's the role Editor is serving.

> > Canvas -> View or DocumentView

Similarly, it seems Editor has-a Canvas, whereas in the typical Doc/View
paradigm one "Document" can have many "Views".

> > Control -> Element or DocumentElement/Component/DocumentControl

I like Element, yep.
> > DataModel -> ElementData or ElementDataModel/DocumentDataModel/ElementModel

I like ElementData too, yep.

> > Figure -> ElementView / Graphic? or GraphicObject/DrawObject/DocumentGraphic

I lean towards Graphic or GraphicObject.

--bb
|
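[Editor's note: the Doc/View relationship raised in this exchange, one data-model Document observed by many Views, could be sketched as below. The class names follow the renaming proposed in the thread (Document holds element data, Views render it) and are illustrative only, not pyGEF code.]

```python
class Document:
    """Pure data model: holds elements and notifies every attached view."""
    def __init__(self):
        self.elements = []
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def add(self, element):
        self.elements.append(element)
        # One document, many views: every view refreshes on a change.
        for view in self.views:
            view.refresh()


class View:
    """One of possibly many canvases rendering the same document."""
    def __init__(self, document):
        self.document = document
        self.refreshes = 0
        document.attach(self)

    def refresh(self):
        # A real view would repaint its canvas here.
        self.refreshes += 1
```

Because the Document never draws anything itself, it stays trivially serializable, which is the benefit of keeping only ElementData inside it.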