From: <bri...@gm...> - 2006-08-02 00:36:11
On Tue, Aug 01, 2006 at 11:43:57PM +0000, jos...@ju... wrote:
> I thought that's what we had already.. except that yes,
> 'spectra' is better than 'spectrums' :)

sorry, missed that :)

> Ummmm... You could fade in/out between two different grad
> geometry types.. it's just that it won't really look like a smooth
> *deformation* of one geometry to another.
> Similarly for states with different spread modes.. you could
> do the fading in/out between the two, it just won't look like you're
> smoothly deforming repeat to reflect say, because there is no such
> smooth deformation between those two.
>
> I'm not sure that I'm following you here...??

Quick recap of the two methods:

(A) use a single edje part (and thus a single evas object) -
    interpolate the grad 'by hand' in edje. can only do 'same type'
    transitions
(B) use two edje parts (two evas objects) and just fade 'em together.
    slightly less attractive .edc, but more flexible (and a LOT less
    work)

so, we're going with (B)

> > As for images, we also have 'tween' transitions (frame based
> > animation), so fading between states isn't supported.
> >
> You could have frame-by-frame 'tween' transitions within a
> state, as with images.. just tween thru spectra.

Yeah... you COULD. It would be aesthetically ugly and of little use
imo though. So, I'm not going to support it for now. If someone else
wants to add it in after I get the brunt done, feel free :)

> > For now, I'll leave it simple for gradients. If we later want to
> > support blending between gradient states we can.
> >
> > One last thing. For .png (or whatever image format) spectra, a
> > black to white spectrum would just be: [black, white] (2 pixels),
> > right? So, wouldn't they just be the subset of 'stop' based
> > spectra that have equidistant stops? (which all spectra can be
> > represented as). In other words, they'd all get converted into
> > the same format internally, right?
> >
> Internally all will actually get 'converted' to a span of
> pixels!
> It's getting there from whatever the initial description --
> that's the hard part.. When we load from a png image say, we could
> consider it as a set of equidistant stops and do whatever we would
> do with such in general.. or we can just do image-span scaling,
> which is pretty much linear interpolation.. So that's what we do,
> for speed and for simplicity.
> The main reason for allowing image files as sources (and it
> really should be a wx1 image) is just that they have so much
> support, there are so many tools for making images, etc.. You can
> get really interesting spectra from just taking a 'slice' from most
> any image you have around!

makes sense. esp. since we already have optimized blending routines.

...coming along

rephorm