From: Dennis S. <sy...@yo...> - 2005-02-15 17:46:04
On Tue, 2005-02-15 at 12:16 -0500, Scott Watson wrote:
> Dennis Smit <synap@yo...> wrote:
> > Internally, the data is pushed to the plugins, but on the frontside
> > we pull it. This is because VisInput also has plugins, VisInputPlugins.
> > These plugins can act as a capture for alsa, jack, esd and such. We've
> > written this because we have a standalone client in mind. So the
> > callback is nothing more than the pull implementation of what normally
> > gets done in the plugin.
>
> The main difference between the two push/pull models is that with a
> pull model on the front end, you _may_ not know when the pcm data
> has changed. Looking at the input plug-ins for ESD and Jack, I see
> you are simply stuffing the incoming pcm data into a buffer, which
> is transferred at some non-specific later point in time by a polled
> call to the upload callback. If the data hasn't changed between
> polls, Libvisual and the visualizer plugin(s) are wasting CPU time
> working on data they've already finished with. So - you could simply
> let the application/input plugin specify with result codes what has
> happened since the last call (new_data, stream_start, stream_end,
> stream_pause, stream_resume, dropped_frame.) If the visualizer is a
> data recorder, for instance, it would want to know about dropped
> frames.

You're overlooking one thing here: even if the same PCM buffer is
presented twice, the plugins still need to draw. That is because they
do not just draw their scope, but also all their colorize, displacement
fieldness etc. Tho it's true that VisAudio might be reanalyzed for
nothing; that will be fixed within the new VisAudio design.

What do you mean by a data recorder? An audio capture plugin? I don't
think you should implement one as a VisActor :) for that kind of stuff
you really want gstreamer (http://gstreamer.net).

> The other possibility is to offer a push interface for audio data.
> You simply spec to the plug-ins that they are going to get no less
> than 512 samples per frame, for example, and let the data receiver
> buffer incoming data from the application until a full frame is
> received (or a stream_pause or stream_end happens, in which case you
> pass along the partial frame with zeros in the rest of the frame
> buffer.) Since all of the input and visualizer plugins written for
> xmms/winamp and a bunch of others work on a push model, porting is
> one step less difficult (a small step, I admit, since you are pushing
> between Libvisual and the plug-in!)

What you could do is the following (metacode):

    VisAudio *audio = visual_audio_new ();

    while (render) {
            /* buffer incoming samples until we have a full frame */
            while (notFull) {
                    audio->pcmdata[0][curindex] = newdataLeft ();
                    audio->pcmdata[1][curindex] = newdataRight ();

                    curindex = newCurBufIndex ();
            }

            visual_audio_analyze (audio);
            visual_actor_run (actor, audio);
    }
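Thinking a bit more about your result-code idea: it could hang off the
upload callback quite naturally. A quick sketch (none of these names
exist in libvisual today, so treat it as pure metacode):

    /* Hypothetical result codes for the input upload callback,
     * straight from Scott's list. Nothing of this exists yet. */
    typedef enum {
            VISUAL_INPUT_NEW_DATA,      /* fresh pcm since the last poll */
            VISUAL_INPUT_STREAM_START,
            VISUAL_INPUT_STREAM_END,
            VISUAL_INPUT_STREAM_PAUSE,
            VISUAL_INPUT_STREAM_RESUME,
            VISUAL_INPUT_DROPPED_FRAME
    } VisInputStatus;

    /* The upload callback could return such a status; whenever it is
     * not NEW_DATA we could at least skip the visual_audio_analyze ()
     * pass, even if the actor still has to draw its frame. */
    typedef VisInputStatus (*VisInputUploadCallbackFunc) (VisInput *input,
                                                          VisAudio *audio,
                                                          void *priv);

That would fix exactly the wasted-analysis case you describe, without
forcing the push model on everybody.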
> > I am not sure about the special data types, as we try to keep the
> > framework semi-specific.
>
> In a philosophical sense, you're writing a library, not a religion.
> Part of the joy of doing this comes from seeing folks do wild-ass
> things you never dreamt of - but there has to be enough openness to
> the design to allow it to happen. Now consider this a "not ready
> for prime time" suggestion, because more thought has to go into it -
> there has to be a way for the visualizer to callback to deallocate/
> dispose of the special data packets created elsewhere and sent to it,
> for example.

Owyeah, I totally agree with this. BUT, we'd rather write a library
that is good at one subject than one that sucks at multiple subjects.
So I like to stay goal focused, and do really super wild stuff using a
libvisual + gstreamer combo for example. Tho as said, special objects
can be pushed to a plugin both as parameters or as VisPluginEnviron
elements. So you can already push whatever kind of data into plugins
(aka it's there) :)

> > Unless the need is REALLY there, I don't consider YUV, because it
> > adds a load of complexity. I think the basics are actually easy to
> > add, but the conversion (when needed) between yuv/rgb and vice versa
> > is a pain. Tho, if the need is there with sufficient reason, we will
> > implement it (and might like some help with it).
>
> Well, if you're interested in VJ'ing, you are going to want music
> videos or mpeg eye candy, right? If you want to support videos, then
> YUV is a LOT faster than RGB (which is the reason it came into
> existence - because it was backwards compatible with black & white
> TV's and took up a lot less bandwidth than a pure RGB color television
> signal, and also because it has direct hardware support in most video
> cards.) It's not as difficult as it sounds - since you're already
> familiar with SDL, just look at the YUVOverlay video calls. One other
> plus is that it scales much more smoothly than RGB in SDL.

Hmms, well yeah, I am also discussing the whole YUV thingy with Vitaly
in private, and we might want to support it. It isn't too hard to add;
what concerns me most is the colorspace conversion between RGB <-> YUV
and between different YUV specs. Also RGB -> YUV isn't too fast either,
and that while visualisation plugins themselves really live in an RGB
world ;). On the other side, we could easily support Xv using
libvisual-display. I am not sure how much code would go into RGB -> YUV,
YUVwhatever <-> YUVwhatever and YUV -> RGB. But one thing we really want
is being able to transform from whatever to whatever, always :) Of
course we would also need a YUV overlay (in mmx as well). You really
have a point tho (I am easily convinced, yeah). We would love some help
with it tho :)

> > We've got visual_video_blit_overlay, which for RGB supports alpha
> > channels. It's rather fast as it's done in mmx. What you describe
> > there, with the overlay, is very possible at this very moment.
>
> But only if the two VisVideo's are of the same type, correct? Or can
> I blit an RGB VisVideo onto a GL VisVideo? I guess I was thinking of
> something more abstract, where the overlay engine would offer a series
> of fonts, and then the app could call the overlay engine "I want this
> text to appear using font X/size Z as a right-to-left scrolling bar Y
> high on the bottom of the screen (or as a static box in the center of
> the display, etc) using the following foreground/background colors for
> the text and the (if specified) enclosing rectangle and border."

No, it's only if they are the same type, at the same depth. Tho using
VisActors, libvisual is capable of automatically transforming depths to
the requested depth. Of course when you're using an overlay you're
talking alpha, so you want everything in ARGB. Regarding GL, how would
you like to overlay it? Like some glDrawPixels call, or? (Vitaly, jump
in here!) I am not really a big gl hero myself actually; is there a way
to safely overlay over a GL frame while a GL scene is rendered?
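What I am vaguely imagining is something like this. Completely untested
sketch, overlay_after_scene is just a made-up helper name, so GL people
please shoot at it:

    #include <GL/gl.h>

    /* Sketch: blend an ARGB overlay on top of an already rendered GL
     * frame. GL_BGRA needs GL 1.2 / EXT_bgra. Is this safe and fast
     * enough while a scene is being rendered? No idea. */
    static void overlay_after_scene (unsigned char *bgra, int width, int height)
    {
            glMatrixMode (GL_PROJECTION);
            glPushMatrix ();
            glLoadIdentity ();
            glOrtho (0, width, 0, height, -1, 1);

            glMatrixMode (GL_MODELVIEW);
            glPushMatrix ();
            glLoadIdentity ();

            glDisable (GL_DEPTH_TEST);
            glEnable (GL_BLEND);
            glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            /* glDrawPixels fills bottom-up from the raster position */
            glRasterPos2i (0, 0);
            glDrawPixels (width, height, GL_BGRA, GL_UNSIGNED_BYTE, bgra);

            glDisable (GL_BLEND);
            glEnable (GL_DEPTH_TEST);

            glPopMatrix ();
            glMatrixMode (GL_PROJECTION);
            glPopMatrix ();
            glMatrixMode (GL_MODELVIEW);
    }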
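And to put some meat on the RGB <-> YUV moaning above: per pixel it is
only a small matrix plus offsets. For example the BT.601 flavour, which
is what the common overlay formats (YV12 and friends) expect, as a plain
C reference (the coefficients are the usual integer approximations; an
mmx version would do several pixels at once):

    /* Reference RGB -> YCbCr (ITU-R BT.601). The different YUV specs
     * mostly differ in these constants plus the chroma subsampling
     * and packing, so a whatever-to-whatever engine is doable. */
    static void rgb_to_ycbcr (int r, int g, int b,
                              unsigned char *y, unsigned char *cb,
                              unsigned char *cr)
    {
            *y  = (unsigned char) ( 16 + (( 66 * r + 129 * g +  25 * b + 128) >> 8));
            *cb = (unsigned char) (128 + ((-38 * r -  74 * g + 112 * b + 128) >> 8));
            *cr = (unsigned char) (128 + ((112 * r -  94 * g -  18 * b + 128) >> 8));
    }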
> > Regarding the fullscreen stuff, lvdisplay will be really helping
> > with this; it will be multihead aware, so you can also use dualhead
> > setups, one for control, one with the graphics, for example.
>
> This is good stuff! Now, if only SDL offered more than one video
> display surface, I could have a small spectrum analyzer and VU meter
> docked inside the control interface, and eye-candy or videos running
> on the second head!

Exactly, we're trying to fix that :). The whole project is huge,
obviously. We maintain around 100k lines of code in CVS (plugins, libs,
programs), we're currently with three developers, and we're all very
busy IRL as well :). So things aren't always going as fast as we would
like to see.

> > Could you point me to some wxWidgets docs? VisUI is there to
> > be extended :)
>
> http://www.wxwidgets.org is the main page,
> http://www.wxwidgets.org/lnk_tool.htm#ides lists a couple of dialog
> box editors that can save designs in a "resource file" textual format.

I am going to browse through these.

> While you're pondering cross-platform, may I also suggest you check out
> www.portaudio.com
> and
> bakefile.sourceforge.net

And these :)

> Somewhere down the road, I would like to see Libvisual compilable by
> MS Visual Studio rather than MinGW, as it then allows the app developer
> to drill into the Libvisual source during debugging sessions (FFMPEG
> uses MinGW to create Windows DLL's, and I really miss being able to
> get inside there when things go wrong.) If you're not using any C99
> extensions (a show stopper) the biggest obstacle is going to be the
> format for the inline assembler (very different between gcc and visual
> c++ - but you could always punt and use NASM if a function call is not
> too expensive.) But, just like your DLL export interfaces, you only
> gotta do it once :-)

Yes, true, I would like this as well, tho I think it's going to be
quite some work. Personally, I won't look into this soonish, where
soonish is within a year.

Your comments are very helpful, and some, if not most, of this is going
to get incorporated into the libvisual core lib. Of course, we always
accept help :)

Thanks,
Dennis
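P.S. On the inline assembler: yeah, the two dialects really have
nothing in common. The same emms instruction in both, just to show the
flavour difference:

    /* gcc / MinGW, AT&T style (what our mmx code uses now) */
    __asm__ __volatile__ ("emms");

    /* Visual C++, Intel style */
    __asm { emms }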