From: M G. <el...@al...> - 2002-02-01 18:44:39
Siggi,

> Günter already answered the "roadmap" part of this.

I apologise for rehashing old design choices. Maybe there was never a
discussion on it ... I don't know. I have only been following this
list for about a month now.

> For now, you can just press "h" to hide the video window if it
> disturbs you while listening to mp3s...

I understand. The issue I have is that xine does not offer a mechanism
to allow the gui to automate this for the user. I imagine it could be
as simple as sending a XINE_EVENT_VIDEO_STARTED and a
XINE_EVENT_VIDEO_FINISHED to the gui. The gui could start with a
hidden rendering context and make it visible only if the stream
contains video. I guess I am not in the majority here in thinking this
would be a good idea, so I guess I'm just SOL. No big deal ;)

> > If the previous seems reasonable ... xine expecting x number of
> > frames to be allocatable considering the vast array of video
> > gpu/memory used these days. I f!
>
> I don't quite get the point here.

Perhaps I was not clear; the point is to increase the likelihood of
optimal video output on all platforms.

> What's the issue?

The issue is that I can't write a video driver that really takes
advantage of what most h/w is capable of in any given situation ...
and that bothers me.

> How would you resolve it?

The best solution I can think of (so far) would be to change the way
demuxers handle frame allocation and format changes. Right now, there
may still be frames queued and waiting for output (hopefully sitting
in video memory) that are in a different frame format than what is
currently being rendered. To create complex flipping chains for a
video overlay, the backsurfaces are all required to have the same
height/width/format as the overlay itself. Because of this, there is
no way to transition cleanly from one frame type to another unless the
frame queue is completely depleted and reallocated for the next
format. The alternative (what happens right now) is that the video
driver hands the decoder a malloc'd piece of memory to render into and
then memcpys (uggg) into the single h/w overlay that is actually used
by the h/w for display.

The other issue I have is with xine making the assumption that a video
card will be able to allocate x number of frames for a given format
(heh) without telling the driver the value of x. To make the best
decision possible (and accommodate all video cards), the driver needs
to know the format (for bit depth) and the width/height to determine
how many frames it can allocate for the decoder. I.e., is 10 frames in
video memory better than 16 frames of system memory + memcpys? Maybe
the h/w supports scaling from both local video memory and AGP memory,
in which case I would allocate the max requested frames. Etc ...

-Matthew
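
P.S. To make the event idea concrete, here is a rough sketch of how
the gui side could look. The two event ids are my proposal (xine does
not send them today), and the surrounding types are stand-ins rather
than real xine API; the only point is that the gui starts hidden and
only maps the window when the engine reports actual video:

  /* hypothetical event ids -- these are the events I am proposing */
  #define XINE_EVENT_VIDEO_STARTED   100
  #define XINE_EVENT_VIDEO_FINISHED  101

  typedef struct { int type; } xine_event_t;        /* stand-in type */
  typedef struct { int video_window_visible; } gui_t;

  static void show_video_window (gui_t *gui) { gui->video_window_visible = 1; }
  static void hide_video_window (gui_t *gui) { gui->video_window_visible = 0; }

  /* listener the gui would register with the engine */
  static void gui_event_listener (void *user_data, xine_event_t *event) {
    gui_t *gui = (gui_t *) user_data;

    switch (event->type) {
    case XINE_EVENT_VIDEO_STARTED:
      show_video_window (gui);   /* stream turned out to contain video */
      break;
    case XINE_EVENT_VIDEO_FINISHED:
      hide_video_window (gui);   /* back to audio-only, hide the window */
      break;
    }
  }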
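
Likewise, the format-change handling I described might look roughly
like this inside a driver. Every name here is invented for the sake of
the sketch (it is not xine's real internals); it just shows "deplete
the queue, then rebuild the flipping chain":

  typedef struct { int width, height, format; } frame_desc_t;
  typedef struct driver_s driver_t;

  /* assumed driver helpers, stubbed for illustration */
  static int  queue_depth (driver_t *drv)              { (void) drv; return 0; }
  static void wait_for_frame_displayed (driver_t *drv) { (void) drv; }
  static void free_flipping_chain (driver_t *drv)      { (void) drv; }
  static int  alloc_flipping_chain (driver_t *drv, const frame_desc_t *d,
                                    int nframes)
                                    { (void) drv; (void) d; return nframes; }

  static int handle_format_change (driver_t *drv,
                                   const frame_desc_t *new_desc,
                                   int nframes) {
    /* backsurfaces in a flipping chain must match the overlay's
       width/height/format, so old-format frames must drain first */
    while (queue_depth (drv) > 0)
      wait_for_frame_displayed (drv);

    free_flipping_chain (drv);
    return alloc_flipping_chain (drv, new_desc, nframes);
  }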
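
And the allocation negotiation could be as simple as one call where
the engine passes the format, the dimensions and the value of x, and
the driver answers with what it can really do. Again, the interface
and the numbers are made up to illustrate the trade-off (10 frames in
video memory vs. 16 in system memory + memcpys):

  typedef enum { MEM_VIDEO, MEM_AGP, MEM_SYSTEM } mem_type_t;

  typedef struct {
    int        usable_frames;  /* frames the driver can actually provide */
    mem_type_t where;          /* which memory pool they would live in   */
  } alloc_reply_t;

  /* bit depth follows from the format; e.g. YUY2 is 2 bytes/pixel */
  static int bytes_per_pixel (int format) {
    return (format == 0 /* say, YUY2 */) ? 2 : 4;
  }

  static alloc_reply_t negotiate_frames (int format, int width, int height,
                                         int requested, long video_mem_free,
                                         int hw_scales_from_agp) {
    alloc_reply_t r;
    long frame_bytes = (long) width * height * bytes_per_pixel (format);
    long fits        = video_mem_free / frame_bytes;

    if (fits >= requested) {
      r.usable_frames = requested;   /* all of x fits in video memory */
      r.where = MEM_VIDEO;
    } else if (hw_scales_from_agp) {
      r.usable_frames = requested;   /* h/w can scale from AGP, use it */
      r.where = MEM_AGP;
    } else {
      r.usable_frames = (int) fits;  /* engine can fall back to system
                                        memory + memcpy for the rest   */
      r.where = MEM_VIDEO;
    }
    return r;
  }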