From: Chris C. <ca...@al...> - 2005-05-18 11:25:48
On Wednesday 18 May 2005 11:54, Guillaume Laurent wrote:

> For the record, the main problem is that the previews are generated
> by the model (following the idea that the model handles the raw data,
> and the view asks it for abstract drawing info). But the
> AudioPreviewThread needs to know the size of the rect in which the
> preview will be drawn

This means that (if using the current AudioPreviewThread API) we need to
make a new request to the preview thread each time the tempo changes, in
order to handle tempo scaling properly.

For example: if the tempo is 120bpm from the start of the segment up to a
point 400 pixels into the segment width, and then changes to 140bpm until
the end of the segment 200 pixels later, then we need to make two preview
requests. The first covers the RealTime section starting at the start of
the segment and extending to the RealTime equivalent of that 400-pixel
zone at 120bpm in the current zoom size (with width = 400); the second
starts where the first leaves off and extends for the RealTime equivalent
of 200 pixels at 140bpm in the current zoom size (with width = 200).

> and that size is computed by the view at
> drawing time according to the current zoom level. The model can and
> does compute rectangles from segments, but it's up to the view to
> scale them.

Don't forget that we need to recalculate audio previews when zooming --
they're calculated to a particular resolution. That's one reason we have
the thread; otherwise zooming is slow because we have to wait for the
previews.

Chris