From: Scott W. <bau...@co...> - 2005-02-15 17:13:54
Dennis Smit <synap@yo...> wrote:

> Internally, the data is pushed to the plugins, but on the frontside
> we pull it. This is because VisInput also has plugins, VisInputPlugins.
> These plugins can act as a capture for alsa, jack, esd and such. We've
> written this because we have a standalone client in mind. So the
> callback is nothing more than the pull implementation that normally
> gets done in the plugin.

The main difference between the two models is that with a pull model on
the front end, you _may_ not know when the pcm data has changed. Looking
at the input plug-ins for ESD and Jack, I see you are simply stuffing the
incoming pcm data into a buffer, which is transferred at some non-specific
later point in time by a polled call to the upload callback. If the data
hasn't changed between polls, Libvisual and the visualizer plugin(s) are
wasting CPU time working on data they've already finished with.

So - you could simply let the application/input plugin report, through
result codes, what has happened since the last call (new_data,
stream_start, stream_end, stream_pause, stream_resume, dropped_frame -
see the first sketch below). If the visualizer is a data recorder, for
instance, it would want to know about dropped frames.

The other possibility is to offer a push interface for audio data. You
simply spec to the plug-ins that they are going to get no less than 512
samples per frame, for example, and let the data receiver buffer incoming
data from the application until a full frame is received (or a
stream_pause or stream_end happens, in which case you pass along the
partial frame with zeros in the rest of the frame buffer). Since all of
the input and visualizer plugins written for xmms/winamp and a bunch of
others work on a push model, porting is one step less difficult (a small
step, I admit, since you are pushing between Libvisual and the plug-in!).
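To make the result-code idea concrete, here's roughly what I have in
mind - the names are invented for illustration, not actual Libvisual API:

    /* Hypothetical status codes the upload callback could return
     * instead of void, so the core can skip unchanged frames. */
    typedef enum {
            VIS_STREAM_NEW_DATA,      /* fresh pcm since last poll */
            VIS_STREAM_NO_CHANGE,     /* nothing new - skip render */
            VIS_STREAM_START,
            VIS_STREAM_END,
            VIS_STREAM_PAUSE,
            VIS_STREAM_RESUME,
            VIS_STREAM_DROPPED_FRAME  /* recorders care about this */
    } VisStreamStatus;

The render loop would then only run the visualizer when it sees
VIS_STREAM_NEW_DATA, and could forward the stream events to plugins that
asked to know about starts, stops and drops.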
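And the receiving side of the push interface is only a handful of lines.
A minimal sketch, assuming 16-bit mono samples and a deliver_frame()
placeholder standing in for the actual hand-off to the plugin:

    #include <string.h>

    #define FRAME_SAMPLES 512

    extern void deliver_frame (const short *frame);  /* placeholder */

    static short framebuf[FRAME_SAMPLES];
    static int   filled = 0;

    /* Called by the application whenever it has pcm data. */
    void audio_push (const short *data, int nsamples)
    {
            while (nsamples > 0) {
                    int n = FRAME_SAMPLES - filled;
                    if (n > nsamples)
                            n = nsamples;
                    memcpy (framebuf + filled, data, n * sizeof (short));
                    filled   += n;
                    data     += n;
                    nsamples -= n;
                    if (filled == FRAME_SAMPLES) {
                            deliver_frame (framebuf);  /* full frame */
                            filled = 0;
                    }
            }
    }

    /* Called on stream_pause or stream_end: zero-pad and flush. */
    void audio_flush (void)
    {
            if (filled > 0) {
                    memset (framebuf + filled, 0,
                            (FRAME_SAMPLES - filled) * sizeof (short));
                    deliver_frame (framebuf);  /* zero-padded tail */
                    filled = 0;
            }
    }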
> I am not sure about the special data types, as we try to keep the
> framework semi specific.

In a philosophical sense, you're writing a library, not a religion. Part
of the joy of doing this comes from seeing folks do wild-ass things you
never dreamt of - but there has to be enough openness in the design to
allow that to happen. Now consider this a "not ready for prime time"
suggestion, because more thought has to go into it - there has to be a
way for the visualizer to call back to deallocate/dispose of the special
data packets created elsewhere and sent to it, for example.

> Unless the need is REALLY there, I don't consider YUV, because it
> adds a load of complexity. I think the basics are actually easy to add
> but the conversion (when needed) between yuv/rgb and vice versa is a
> pain. Tho, if the need is there with sufficient reason,
> we will implement it (and might like some help with it).

Well, if you're interested in VJ'ing, you are going to want music videos
or mpeg eye candy, right? If you want to support videos, then YUV is a
LOT faster than RGB. That's the reason it came into existence - it was
backwards compatible with black & white TVs and took up a lot less
bandwidth than a pure RGB color television signal - and it also has
direct hardware support in most video cards. It's not as difficult as it
sounds - since you're already familiar with SDL, just look at the
YUVOverlay video calls (see the P.S. below). One other plus is that it
scales much more smoothly than RGB in SDL.

> we've got visual_video_blit_overlay. Which for RGB supports alpha
> channels. It's rather fast as it's done in mmx.
> What you describe there, with the overlay is very possible
> at this very moment.

But only if the two VisVideos are of the same type, correct? Or can I
blit an RGB VisVideo onto a GL VisVideo? I guess I was thinking of
something more abstract, where the overlay engine would offer a series of
fonts, and the app could then tell it: "I want this text, in font X at
size Z, to appear as a right-to-left scrolling bar Y pixels high at the
bottom of the screen (or as a static box in the center of the display,
etc.), using the following foreground/background colors for the text and
- if specified - the enclosing rectangle and border."

> Regarding the fullscreen stuff, lvdisplay will be really helping with
> this, it will be multihead aware so you can also use dualhead setups,
> one for control, one with the graphics for example.

This is good stuff! Now, if only SDL offered more than one video display
surface, I could have a small spectrum analyzer and VU meter docked
inside the control interface, and eye-candy or videos running on the
second head!

> Could you point me to some wxWidgets docs ? VisUI is there to
> be extended :)

http://www.wxwidgets.org is the main page;
http://www.wxwidgets.org/lnk_tool.htm#ides lists a couple of dialog box
editors that can save designs in a textual "resource file" format.

While you're pondering cross-platform, may I also suggest you check out
www.portaudio.com and bakefile.sourceforge.net. Somewhere down the road,
I would like to see Libvisual compilable by MS Visual Studio rather than
MinGW, since that lets the app developer drill into the Libvisual source
during debugging sessions (FFMPEG uses MinGW to create Windows DLLs, and
I really miss being able to get inside there when things go wrong). If
you're not using any C99 extensions (those would be a show stopper), the
biggest obstacle is going to be the format of the inline assembler -
it's very different between gcc and Visual C++ (see the P.P.S. below) -
but you could always punt and use NASM if a function call is not too
expensive. But, just like your DLL export interfaces, you only gotta do
it once :-)

S.W.
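P.S. The YUV overlay stuff in SDL really is just four or five calls. A
bare-bones example - the SDL functions are the real 1.2 API, while
fill_yv12() is a stand-in for whatever decodes your video frame:

    #include <SDL.h>

    extern void fill_yv12 (Uint8 **planes, Uint16 *pitches); /* stand-in */

    void show_frame (void)
    {
            /* assumes SDL_Init (SDL_INIT_VIDEO) has already run */
            SDL_Surface *screen  = SDL_SetVideoMode (640, 480, 0,
                                                     SDL_SWSURFACE);
            SDL_Overlay *overlay = SDL_CreateYUVOverlay (320, 240,
                                                         SDL_YV12_OVERLAY,
                                                         screen);
            SDL_Rect dst = { 0, 0, 640, 480 };

            SDL_LockYUVOverlay (overlay);
            fill_yv12 (overlay->pixels, overlay->pitches);
            SDL_UnlockYUVOverlay (overlay);
            SDL_DisplayYUVOverlay (overlay, &dst);

            SDL_FreeYUVOverlay (overlay);
    }

Note that the 320x240 frame is displayed at 640x480 - the scaling is done
in hardware, which is the "scales much more smoothly" bit I mentioned.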
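P.P.S. On the inline assembler: here's the same trivial add in both
dialects, just to show the flavor of the difference. Libvisual's MMX code
would face the same split, only with more lines:

    #ifdef _MSC_VER
    /* Visual C++: Intel syntax, no operand constraints; the compiler
     * inspects the block to track register usage. x86 only. */
    static __inline int asm_add (int a, int b)
    {
            __asm {
                    mov  eax, a
                    add  eax, b
                    mov  a, eax
            }
            return a;
    }
    #else
    /* gcc: AT&T syntax, operands and clobbers declared explicitly. */
    static inline int asm_add (int a, int b)
    {
            __asm__ ("addl %1, %0" : "+r" (a) : "r" (b));
            return a;
    }
    #endif

With NASM you'd instead assemble the routine separately and declare it
extern, which sidesteps both dialects at the cost of a function call.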