From: Dennis S. <sy...@yo...> - 2005-02-15 19:40:17

Please take a look, comment if needed. If nothing comes in I will start
implementing this tomorrow. The basics:

    /* Registry control */
    struct _VisConfigRegistry {
            char *filename;
            VisList *sections;
    };

    struct _VisConfigRegistrySection {
            char *name;
            char *data;
    };

    VisConfigRegistry *visual_config_registry_new ();
    VisConfigRegistry *visual_config_registry_open (char *configfile);

    VisConfigRegistrySection *visual_config_registry_section_new ();
    VisConfigRegistrySection *visual_config_registry_section (VisConfigRegistry *registry, char *name);
    VisConfigRegistrySection *visual_config_registry_section_open (char *name, char *configfile);

    visual_config_registry_write_by_data (VisConfigRegistry *registry, char *name, char *data);
    visual_config_registry_write (VisConfigRegistry *registry, VisConfigRegistrySection *section);

    // add some methods that write the registry in mem only, handy with avs presets, later on :)

NOTE: There will be some control of where the config file is, and if we
should write, or even read, using the global config VisParamContainer.

    /* Param (de)serialize control */
    visual_param_deserialize_from_section (VisParamEntry *param, VisConfigRegistrySection *section);
    visual_param_deserialize_from_data (VisParamEntry *param, char *data);

    visual_param_serialize_from_section (VisParamEntry *param, VisConfigRegistrySection *section);
    visual_param_serialize_from_data (VisParamEntry *param, char **data);

    visual_param_container_deserialize_namespace (VisParamContainer *container, VisConfigRegistry *registry, const char *namespace);
    // for example "Libvisual:core:actor:oinksie" could be a namespace
    // Libvisual:core:actor:oinksie:blur could be a type member.
    visual_param_container_serialize_namespace (VisParamContainer *container, VisConfigRegistry *registry, const char *namespace);
From: Dennis S. <sy...@yo...> - 2005-02-15 17:46:04

On Tue, 2005-02-15 at 12:16 -0500, Scott Watson wrote:
> Dennis Smit <synap@yo...> wrote:
> > Internally, the data is pushed to the plugins, but on the frontside
> > we pull it. This is because VisInput also has plugins, VisInputPlugins.
> > These plugins can act as a capture for alsa, jack, esd and such. We've
> > written this because we have a standalone client in thought. So the
> > callback is nothing more than the pull implementation that normally
> > gets done in the plugin.
>
> The main difference between the two push/pull models is that with a
> pull model on the front end, you _may_ not know when the pcm data
> has changed. Looking at the input plug-ins for ESD and Jack, I see
> you are simply stuffing the incoming pcm data into a buffer, which
> is transferred at some non-specific later point in time by a polled
> call to the upload callback. If the data hasn't changed between
> polls, Libvisual and the visualizer plugin(s) are wasting CPU time
> working on data they've already finished with. So - you could simply
> let the application/input plugin specify with result codes what
> has happened since the last call (new_data, stream_start, stream_end,
> stream_pause, stream_resume, dropped_frame.) If the visualizer is a
> data recorder, for instance, it would want to know about dropped
> frames.

You're making one mistake here: even if the same PCM buffer is presented
twice, the plugins still need to draw. That is because they do not just draw
their scope, but also all their colorize, displacement fieldness etc. Tho
it's true that VisAudio might be reanalyzed for nothing; this will also be
fixed within the new VisAudio design.

What do you mean by data recorder? An audio capture plugin? I don't think
you should implement one as a VisActor :) for that kind of stuff you really
want gstreamer (http://gstreamer.net).

> The other possibility is to offer a push interface for audio data.
> You simply spec to the plug-ins that they are going to get no less than
> 512 samples per frame, for example, and let the data receiver buffer
> incoming data from the application until a full frame is received (or
> a stream_pause or stream_end happens, in which case you pass along the
> partial frame with zeros in the rest of the frame buffer.) Since all
> of the input and visualizer plugins written for xmms/winamp and a
> bunch of others work on a push model, porting is one step less difficult
> (a small step, I admit, since you are pushing between Libvisual and
> the plug-in!)

What you could do is the following (metacode):

    VisAudio *audio = visual_audio_new ();

    while (render) {
            while (notFull) {
                    audio->pcmdata[0][curindex] = newdataLeft ();
                    audio->pcmdata[1][curindex] = newdataRight ();

                    curindex = newCurBufIndex ();
            }

            visual_audio_analyze (audio);
            visual_actor_run (actor, audio);
    }

> > I am not sure about the special data types, as we try to keep the
> > framework semi specific.
>
> In a philosophical sense, you're writing a library, not a religion.
> Part of the joy of doing this comes from seeing folks do wild-ass
> things you never dreamt of - but there has to be enough openness to
> the design to allow it to happen. Now consider this a "not ready
> for prime time" suggestion, because more thought has to go into it -
> there has to be a way for the visualizer to callback to deallocate/
> dispose of the special data packets created elsewhere and sent to it,
> for example.

Owyeah, I totally agree with this. BUT, we would rather write a library that
is good at one subject than one that sucks at multiple subjects. So I like
to stay goal focused, and do really super wild stuff using a libvisual +
gstreamer combo for example. Tho as said, special objects can be pushed to a
plugin both as parameters or as VisPluginEnviron elements. So you can
already push whatever kind of data into plugins (aka it's there) :)

> > Unless the need is REALLY there, I don't consider YUV, because it
> > adds a load of complexity. I think the basics are actually easy to add
> > but the conversion (when needed) between yuv/rgb and vice versa is a
> > pain. Tho, if the need is there with sufficient reason,
> > we will implement it (and might like some help with it).
>
> Well, if you're interested in VJ'ing, you are going to want music
> videos or mpeg eye candy, right? If you want to support videos, then
> YUV is a LOT faster than RGB (which is the reason it came into existence
> - because it was backwards compatible with black & white TV's and took
> up a lot less bandwidth than a pure RGB color television signal, and also
> because it has direct hardware support in most video cards.) It's not
> as difficult as it sounds - since you're already familiar with SDL, just
> look at the YUVOverlay video calls. One other plus is that it scales
> much more smoothly than RGB in SDL.

Hmms, well yeah, I am also discussing the whole YUV thingy with Vitaly in
private, and we might want to support it. It isn't too hard to add; however,
what concerns me most is the colorspace conversions between RGB <-> YUV and
between different YUV specs. Also, RGB->YUV isn't too fast either, while
visualisation plugins themselves really live in an rgb world ;). On the
other side, we could easily support Xv using libvisual-display. I am not
sure how much code would go into RGB -> YUV, YUVwhatever <-> YUVwhatever and
YUV -> RGB. But one thing we really want is being able to transform from
whatever to whatever, always :) Of course we would also need a YUV overlay
(in mmx as well). You really have a point tho (I am easily convinced, yeah).
We would love some help with it tho :)

> > we've got visual_video_blit_overlay. Which for RGB supports alpha
> > channels. It's rather fast as it's done in mmx.
> > What you describe there, with the overlay is very possible
> > at this very moment.
>
> But only if the two VisVideo's are of the same type, correct? Or can
> I blit an RGB VisVideo onto a GL VisVideo? I guess I was thinking of
> something more abstract, where the overlay engine would offer a series
> of fonts, and then the app could call the overlay engine "I want this
> text to appear using font X/size Z as a right-to-left
> scrolling bar Y high on the bottom of the screen (or as a static box in
> the center of the display, etc) using the following foreground/background
> colors for the text and the (if specified) enclosing rectangle and border."

No, it's only if it's the same type, at the same depth. Tho using VisActors,
libvisual is capable of automatically transforming depths to the requested
depth. Of course, when you're using an overlay, you're talking alpha, so you
want everything in ARGB. Regarding GL, how would you like to overlay it?
Like some glDrawPixels call, or ? (Vitaly, jump in here!) I am not really a
big gl hero myself, actually; is there a way to safely overlay over a GL
frame, while a GL scene is rendered?

> > Regarding the fullscreen stuff, lvdisplay will be really helping with
> > this, it will be multihead aware so you can also use dualhead setups,
> > one for control, 1 with the graphics for example.
>
> This is good stuff! Now, if only SDL offered more than one video display
> surface, I could have a small spectrum analyzer and VU meter docked inside
> the control interface, and eye-candy or videos running on the second head!

Exactly, we're trying to fix that :). The whole project is huge, obviously.
We maintain around 100k lines of code in CVS (plugins, libs, programs), and
we're currently with three developers, and we're all very busy IRL as well
:). So things aren't always going as fast as we would like to see.

> > Could you point me to some wxWidgets docs ? VisUI is there to
> > be extended :)
>
> http://www.wxwidgets.org is the main page,
> http://www.wxwidgets.org/lnk_tool.htm#ides lists a couple of dialog box
> editors that can save designs in a "resource file" textual format.

I am going to browse through these.

> While you're pondering cross-platform, may I also suggest you check out
> www.portaudio.com
> and
> bakefile.sourceforge.net

And these :)

> Somewhere down the road, I would like to see Libvisual compilable by
> MS Visual Studio rather than MinGW, as it then allows the app developer
> to drill into the Libvisual source during debugging sessions (FFMPEG
> uses MinGW to create Windows DLL's, and I really miss being able to
> get inside there when things go wrong.) If you're not using any C99
> extensions (a show stopper) the biggest obstacle is going to be the
> format for the inline assembler (very different between gcc and visual
> c++ - but you could always punt and use NASM if a function call is not
> too expensive.) But, just like your DLL export interfaces, you only
> gotta do it once :-)

Yes, true, I would like this as well, tho I think it's going to be quite
some work. Personally, I won't look into this soonish, where soonish is
within a year.

Your comments are very helpful, and some, if not most, of this is going to
get incorporated into the libvisual core lib. Of course, we always accept
help :)

Thanks, Dennis
From: Scott W. <bau...@co...> - 2005-02-15 17:13:54

Dennis Smit <synap@yo...> wrote:
> Internally, the data is pushed to the plugins, but on the frontside
> we pull it. This is because VisInput also has plugins, VisInputPlugins.
> These plugins can act as a capture for alsa, jack, esd and such. We've
> written this because we have a standalone client in thought. So the
> callback is nothing more than the pull implementation that normally
> gets done in the plugin.

The main difference between the two push/pull models is that with a pull
model on the front end, you _may_ not know when the pcm data has changed.
Looking at the input plug-ins for ESD and Jack, I see you are simply
stuffing the incoming pcm data into a buffer, which is transferred at some
non-specific later point in time by a polled call to the upload callback. If
the data hasn't changed between polls, Libvisual and the visualizer
plugin(s) are wasting CPU time working on data they've already finished
with. So - you could simply let the application/input plugin specify with
result codes what has happened since the last call (new_data, stream_start,
stream_end, stream_pause, stream_resume, dropped_frame.) If the visualizer
is a data recorder, for instance, it would want to know about dropped
frames.

The other possibility is to offer a push interface for audio data. You
simply spec to the plug-ins that they are going to get no less than 512
samples per frame, for example, and let the data receiver buffer incoming
data from the application until a full frame is received (or a stream_pause
or stream_end happens, in which case you pass along the partial frame with
zeros in the rest of the frame buffer.) Since all of the input and
visualizer plugins written for xmms/winamp and a bunch of others work on a
push model, porting is one step less difficult (a small step, I admit, since
you are pushing between Libvisual and the plug-in!)

> I am not sure about the special data types, as we try to keep the
> framework semi specific.

In a philosophical sense, you're writing a library, not a religion. Part of
the joy of doing this comes from seeing folks do wild-ass things you never
dreamt of - but there has to be enough openness to the design to allow it to
happen. Now consider this a "not ready for prime time" suggestion, because
more thought has to go into it - there has to be a way for the visualizer to
callback to deallocate/dispose of the special data packets created elsewhere
and sent to it, for example.

> Unless the need is REALLY there, I don't consider YUV, because it
> adds a load of complexity. I think the basics are actually easy to add
> but the conversion (when needed) between yuv/rgb and vice versa is a
> pain. Tho, if the need is there with sufficient reason,
> we will implement it (and might like some help with it).

Well, if you're interested in VJ'ing, you are going to want music videos or
mpeg eye candy, right? If you want to support videos, then YUV is a LOT
faster than RGB (which is the reason it came into existence - because it was
backwards compatible with black & white TV's and took up a lot less
bandwidth than a pure RGB color television signal, and also because it has
direct hardware support in most video cards.) It's not as difficult as it
sounds - since you're already familiar with SDL, just look at the YUVOverlay
video calls. One other plus is that it scales much more smoothly than RGB in
SDL.

> we've got visual_video_blit_overlay. Which for RGB supports alpha
> channels. It's rather fast as it's done in mmx.
> What you describe there, with the overlay is very possible
> at this very moment.

But only if the two VisVideo's are of the same type, correct? Or can I blit
an RGB VisVideo onto a GL VisVideo? I guess I was thinking of something more
abstract, where the overlay engine would offer a series of fonts, and then
the app could call the overlay engine "I want this text to appear using font
X/size Z as a right-to-left scrolling bar Y high on the bottom of the screen
(or as a static box in the center of the display, etc) using the following
foreground/background colors for the text and the (if specified) enclosing
rectangle and border."

> Regarding the fullscreen stuff, lvdisplay will be really helping with
> this, it will be multihead aware so you can also use dualhead setups,
> one for control, 1 with the graphics for example.

This is good stuff! Now, if only SDL offered more than one video display
surface, I could have a small spectrum analyzer and VU meter docked inside
the control interface, and eye-candy or videos running on the second head!

> Could you point me to some wxWidgets docs ? VisUI is there to
> be extended :)

http://www.wxwidgets.org is the main page, and
http://www.wxwidgets.org/lnk_tool.htm#ides lists a couple of dialog box
editors that can save designs in a "resource file" textual format.

While you're pondering cross-platform, may I also suggest you check out
www.portaudio.com and bakefile.sourceforge.net

Somewhere down the road, I would like to see Libvisual compilable by MS
Visual Studio rather than MinGW, as it then allows the app developer to
drill into the Libvisual source during debugging sessions (FFMPEG uses MinGW
to create Windows DLL's, and I really miss being able to get inside there
when things go wrong.) If you're not using any C99 extensions (a show
stopper) the biggest obstacle is going to be the format for the inline
assembler (very different between gcc and visual c++ - but you could always
punt and use NASM if a function call is not too expensive.) But, just like
your DLL export interfaces, you only gotta do it once :-)

S.W.
From: Dennis S. <sy...@yo...> - 2005-02-15 10:37:46

On Tue, 2005-02-15 at 02:37 -0500, Scott Watson wrote:
> I'm very excited about finding Libvisual! I wanted to take a
> few days to grok how it all works and meshes together before
> I posted my comments, and I _think_ I'm getting a handle on
> this. If I'm completely off-base, feel free to bash me upside
> the head.

Well, I am delighted to see someone excited by libvisual, and thus I will
love to guide you through the mess we wrote :)!

> Before my comments start, let me point out that in case it hasn't
> been reported yet, the frame-limiter for 0.2.0 is broken, at least
> for the XMMS plugin. It's chewing up all available CPU time. I
> profiled one of the simpler plugins (the scope) to check this
> out and when sized very small (roughly 100 x 50 by my eye) the
> "render" callback was still getting called more than 600 times
> per second even though frames were (theoretically) being limited
> to 30/second.

I removed the xmms frame limiter completely; it was very borked, and we will
replace the client completely with a rewrite that uses libvisual-display.
lvdisplay is a library we're working on that provides display abstraction in
the same fashion as libvisual does with visualisation plugins. It's going to
be very powerful, and Vitaly is working on it. You can check out the work in
libvisual-display within the CVS server.

> So here're my initial thoughts and suggestions, for what they're
> worth:
>
> It would appear to me that instead of pcm data being "pushed" to the
> visualizer engine as is done in the visualizer plug-in models put
> forth by XMMS and WinAmp, the pcm data in Libvisual is being "pulled"
> via an Input plugin's VisPluginInputUploadFunc or by implementing
> an upload callback.

Internally, the data is pushed to the plugins, but on the frontside we pull
it. This is because VisInput also has plugins, VisInputPlugins. These
plugins can act as a capture for alsa, jack, esd and such. We've written
this because we have a standalone client in thought. So the callback is
nothing more than the pull implementation that normally gets done in the
plugin.

> The "pull" model works just fine, but Libvisual needs to add
> some means for synchronization and non-audio data or else you'll
> cut out an entire class of visualizations. That's probably
> unclear, so let me give an example:
>
> Let's say I'm decoding an MPEG or AVI file using FFMPEG
> (ffmpeg.sourceforge.net) - as I decode, I'm going to get
> interleaved "packets" of audio and video data. Depending on the
> codec, the display times between individual video frames can vary
> wildly so a simple latency calculation won't synch the video to the
> audio. Instead, the codec (or ffmpeg or the application itself)
> calculates and provides a "presentation time stamp" in stream
> relative time of when to display each frame of video. The video
> visualization plugin's job is to buffer frames, and display them
> when the proper time comes. So, if you stick with the "pull" data
> model, you need to have the ability for the application to expose
> a method for the visualization plugin to get the current playing
> stream time. Likewise, there needs to be a method to query
> visualization plugins to see if they can accept and handle certain
> special data types (so I don't exacerbate entropy sending video
> packets to Goom, for instance) and an API for getting that special
> data to those that do (as simple as a "userdata" callback in
> addition to the "render" callback.)

Well, actually, it is possible to semi push it before render, by writing the
VisInput layer yourself. It isn't very hard. However, you're right. What
we're going to do is rewrite VisAudio within a few months, to add support
for more audio formats and floating point audio, as well as cooler
detections, like BPM detection. So we could take this into the design as
well. I am not sure about the special data types, as we try to keep the
framework semi specific.

> In the same light, consider a Karaoke (mp3+cdg or ogg+cdg) plugin.
> There are actually two data sources: a standard .mp3 or .ogg file,
> and a separate .cdg file that contains the karaoke lyrics and
> graphics that were ripped out of the subchannel data of an audio
> CD+G disc.
>
> A karaoke visualizer would get the .cdg data sent to it as a one-shot
> package at the start of the stream, and before returning from a
> "song_start" callback (and there should be one of these, as well as
> callbacks for "song_pause", "song_resume", and "song_end" so the
> thing ain't chewing up CPU cycles if Joe User needs to pause audio
> to do something CPU-intensive for a bit) it decodes the CDG data
> into frames, and generates presentation time stamps for each frame.
> Again, during song playback, it doesn't care a whiff about pcm data,
> it just wants to monitor stream playback time and display each frame
> synched to the audio output. In fact, there's another standard
> using MIDI with karaoke lyrics that may not generate any pcm data
> at all (and while I'm at it, I might want my non-karaoke MIDI file
> player to generate data for a graphic piano keyboard visualizer in
> pass-through or hardware-synth mode, or use "regular" audio
> visualizers when using a software-synth that creates regular pcm data.)

Aah, here we have a case! :) Well, what I suggest we could do is use the
VisObject VisParam: define a special KaraokeObject, and send this as a
parameter to the plugin's event loop. The event loop gets called every frame
before render, so it's sufficient to synchronise using this.

> In both of the above cases, the "render" callback may or may not
> actually draw a frame if the call happens between two presentation
> time stamps. For that matter, Libvisual should not assume that
> any Actor actually draws a frame during any particular "render"
> call unless the Actor tells Libvisual that it _did_ draw a frame,
> and likewise there needs to be a method or callback to the application
> to let it know that a new frame is available for drawing so that
> an unchanged video buffer isn't getting re-blitted without cause.) As
> an application writer, I obviously have a vested interest in any
> CPU-saving tweak possible.

Alright, we could return '0' or '1' from the render function depending on
whether a frame was drawn; would this be sufficient? We also have an
interest in CPU-saving tweaks, since with VJing we combine multiple visuals
and run them through transforms. This is very important :)

> As far as the interface for querying which special data types a
> visualization plugin can accept, I think something similar to
> the way a WinAmp input plug-in exposes which filetypes it can
> handle would work great. Off the top of my head - here are a
> few special data types of interest:
>
> Streaming Video
> Karaoke CDG
> Stream Tags (i.e. ID3V2 or APE)
>
> Either in addition to, or in place of, the VisSongInfo stuff,
> you could also abstract special data types for:
>
> Artist/Track Title/Album/Year
> Still Images Associated with an artist/track (album cover art,
> artist photos, etc. in JPEG, BMP, GIF, PNG, etc.)

I think we could work this out using the VisObject system, and set up some
guidelines. We could also create a frontend around this. It's very
interesting to see these suggestions, because they contain things I never
thought of!

> I also want to point out an obvious omission here (understandable,
> since you probably weren't considering a video class of visualizers.)
> In addition to RGB and OpenGL display types, there should also be
> a YUV display type. The same rules of no-blit/no-morph between
> dissimilar display types should apply. YUV420P (SDL type:
> SDL_YV12_OVERLAY) format should be sufficient out of the starting
> gate.

Unless the need is REALLY there, I don't consider YUV, because it adds a
load of complexity. I think the basics are actually easy to add, but the
conversion (when needed) between yuv/rgb and vice versa is a pain. Tho, if
the need is there with sufficient reason, we will implement it (and might
like some help with it).

> I also think there should be a non-gui method for getting/setting
> individual visualizer configuration settings by a serialized string.
> The application may or may not be able to decipher the contents of
> the command strings for a particular visualizer plug-in, but they
> can still be thought of as a "bookmark" of what the user likes
> above and beyond the last-used configuration settings. Wouldn't
> it also be nice if the plug-ins exposed language-neutral
> author/copyright/credits/plug-in name & version info to the
> application. A method of getting a list of presets (for those
> visualizers that support them) and selecting one by non-gui means
> would also be cool.

We're working on the serialization :), and i18n support regarding languages
would be nice, yeah :).

> Finally, on my wish list (but I realize the added YUV mode adds
> another layer of complexity to this and alpha-channelled shaped
> text is probably out of the question for that) I'd love to see
> some sort of overlay engine: Perhaps I, as an application author,
> want a scrolling ticker at the bottom of the screen announcing
> sports scores or happy hour specials (!) or some On Screen Display
> info for a TV or radio tuner card when I'm running in full-screen
> mode (or otherwise.) Not that big of a deal for me to add at
> application level, but a guy can dream, right?

We've got visual_video_blit_overlay, which for RGB supports alpha channels.
It's rather fast as it's done in mmx. What you describe there, with the
overlay, is very possible at this very moment. Regarding the fullscreen
stuff, lvdisplay will be really helping with this; it will be multihead
aware, so you can also use dualhead setups, one for control, one with the
graphics, for example. But I suspect that you have the need for YUV, so we
could start investigating how to implement this best :)

> In VisUI, a useful addition would be a tab or page widget for
> implementing multiple dialog box pages. I think wxWidgets has
> about the snazziest way of specifying a platform-neutral dialog
> box that I've seen.

Could you point me to some wxWidgets docs? VisUI is there to be extended :)

> One last question, is frequency spectrum analysis being done even
> for those visualizers that don't need it? If there is, there
> should be a way to turn off those expensive FFT's if they don't
> help a given visualizer (and/or morph).

I agree. You can control it by hand, but not easily. The VisAudio
replacement that is upcoming (but at least 2 months away) will fix this all!
:)

Thanks for your interest, and keep in mind, we're really in diehard
development. We're far from stable, both API and ABI. But on the positive
side, we're open for suggestions, and would like to get it right now and not
in three years when we've had 4 API-stable branches :) Keep your suggestions
coming, as they only improve what we write :)

Thanks, Dennis
From: Scott W. <bau...@co...> - 2005-02-15 07:35:40
|
I'm very excited about finding Libvisual! I wanted to take a few days to grok how it all works and meshes together before I posted my comments, and I _think_ I'm getting a handle on this. If I'm completely off-base, feel free to bash me upside the head. Before my comments start, let me point out that in case it hasn't been reported yet, the frame-limiter for 0.2.0 is broken, at least for the XMMS plugin. It's chewing up all available CPU time. I profiled one of the simpler plugins (the scope) to check this out and when sized very small (roughly 100 X 50 by my eye) the "render" callback was still getting called more than 600 times per second even though frames were (theoretically) being limited to 30/second. So here're my initial thoughts and suggestions, for what they're worth: It would appear to me that instead of pcm data being "pushed" to the visualizer engine as is done in the visualizer plug-in models put forth by XMMS and WinAmp, the pcm data in Libvisual is being "pulled" via an Input plugin's VisPluginInputUploadFunc or by implementing an upload callback. The "pull" model works just fine, but Libvisual needs to add some means for synchronization and non-audio data or else you'll cut out an entire class of visualizations. That's probably unclear, so let me give an example: Let's say I'm decoding an MPEG or AVI file using FFMPEG (ffmpeg.sourceforge.net) - as I decode, I'm going to get interleaved "packets" of audio and video data. Depending on the codec, the display times between individual video frames can vary wildly so a simple latency calculation won't synch the video to the audio. Instead, the codec (or ffmpeg or the application itself) calculates and provides a "presentation time stamp" in stream relative time of when to display each frame of video. The video visualization plugin's job is to buffer frames, and display them when the proper time comes. 
So, if you stick with the "pull" data model, you need to have the ability for the application to expose a method for the visualization plugin to get the current playing stream time. Likewise, there needs to be a method to query visualization plugins to see if they can accept and handle certain special data types (so I don't exacerbate entropy sending video packets to Goom, for instance) and an API for getting that special data to those that do (as simple as a "userdata" callback in addition to the "render" callback.) In the same light, consider a Karaoke (mp3+cdg or ogg+cdg) plugin. There are actually two data sources: a standard .mp3 or .ogg file, and a separate .cdg file that contains the karaoke lyrics and graphics that were ripped out of the subchannel data of an audio CD+G disc. A karaoke visualizer would get the .cdg data sent to it as a one shot package at the start of the stream, and before returning from a "song_start" callback (and there should be one of these, as well as callbacks for "song_pause", "song_resume", and "song end" so the thing ain't chewing up CPU cycles if Joe User needs to pause audio to do something CPU-intensive for a bit) it decodes the CDG data into frames, and generates presentation time stamps for each frame. Again, during song playback, it doesn't care a whiff about pcm data, it just wants to monitor stream playback time and display each frame synched to the audio output. In fact, there's another standard using MIDI with karaoke lyrics that may not generate any pcm data at all (and while I'm at it, I might want my non-karaoke MIDI file player to generate data for a graphic piano keyboard visualizer in pass-through or hardware-synth mode, or use "regular" audio visualizers when using a software-synth that creates regular pcm data.) In both of the above cases, the "render" callback may or may not actually draw a frame if the call happens between two presentation time stamps. 
For that matter, Libvisual should not assume that any Actor actually draws a frame during any particular "render" call unless the Actor tells Libvisual that it _did_ draw a frame, and likewise there needs to be a method or callback to the application to let it know that a new frame is available for drawing, so that an unchanged video buffer isn't getting re-blitted without cause. As an application writer, I obviously have a vested interest in any CPU-saving tweak possible. As far as the interface for querying which special data types a visualization plugin can accept, I think something similar to the way a WinAmp input plug-in exposes which filetypes it can handle would work great. Off the top of my head - here are a few special data types of interest: Streaming Video Karaoke CDG Stream Tags (e.g. ID3v2 or APE) Either in addition to, or in place of, the VisSongInfo stuff, you could also abstract special data types for: Artist/Track Title/Album/Year Still Images Associated with an artist/track (album cover art, artist photos, etc. in JPEG, BMP, GIF, PNG, etc.) I also want to point out an obvious omission here (understandable, since you probably weren't considering a video class of visualizers.) In addition to RGB and OpenGL display types, there should also be a YUV display type. The same rules of no-blit/no-morph between dissimilar display types should apply. YUV420P (SDL type: SDL_YV12_OVERLAY) format should be sufficient out of the starting gate. I also think there should be a non-gui method for getting/setting individual visualizer configuration settings by a serialized string. The application may or may not be able to decipher the contents of the command strings for a particular visualizer plug-in, but they can still be thought of as a "bookmark" of what the user likes above and beyond the last-used configuration settings. Wouldn't it also be nice if the plug-ins exposed language-neutral author/copyright/credits/plug-in name & version info to the application? 
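The serialized-string "bookmark" configuration idea might look roughly like this for an imaginary plugin. The key=value format, struct, and function names are all invented for illustration; the point is that the application treats the string as opaque while the plugin (de)serializes it:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical settings for a made-up demo plugin. */
typedef struct {
    int   blur;
    float speed;
} DemoSettings;

/* Serialize settings to an opaque "bookmark" string the application
 * can store without understanding it. */
static void demo_settings_to_string (const DemoSettings *s,
                                     char *out, size_t len)
{
    snprintf (out, len, "blur=%d;speed=%.2f", s->blur, s->speed);
}

/* Restore settings from a bookmark string; returns 1 on success. */
static int demo_settings_from_string (DemoSettings *s, const char *in)
{
    return sscanf (in, "blur=%d;speed=%f", &s->blur, &s->speed) == 2;
}
```

A real implementation would want versioning and escaping, but even this shape lets an application save and replay user-favorite configurations without any GUI involvement.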
A method of getting a list of presets (for those visualizers that support them) and selecting one by non-gui means would also be cool. Finally, on my wish list (but I realize the added YUV mode adds another layer of complexity to this, and alpha-channelled shaped text is probably out of the question for that) I'd love to see some sort of overlay engine: Perhaps I, as an application author, want a scrolling ticker at the bottom of the screen announcing sports scores or happy hour specials (!) or some On Screen Display info for a TV or radio tuner card when I'm running in full-screen mode (or otherwise.) Not that big of a deal for me to add at application level, but a guy can dream, right? In VisUI, a useful addition would be a tab or page widget for implementing multiple dialog box pages. I think wxWidgets has about the snazziest way of specifying a platform-neutral dialog box that I've seen. One last question: is frequency spectrum analysis being done even for those visualizers that don't need it? If so, there should be a way to turn off those expensive FFTs when they don't help a given visualizer (and/or morph). Keep up the great work! S.W. |
From: Dennis S. <sy...@yo...> - 2005-02-14 23:39:47
|
Heya, the win32 dll is building here under windows. I am now diving into the win32 sdk docs to fill in the defined-out parts (#if defined(VISUAL_OS_WIN32)) is what I mean :) This is while using the mingw environment btw. I think it would be quite fun to support winamp in the near future! :) We would have a cross-operating-system audio visualisation platform! how is that :) Cheers, Dennis On Mon, 2005-02-14 at 23:14 +0200, Vitaly V. Bursov wrote: > On Mon, 14 Feb 2005 21:41:24 +0100 > Dennis Smit <sy...@yo...> wrote: > > > And put it in front of every symbol you export, IE every >EVERY< > > function. > Well, it is a common practice... > > I believe it's not mandatory. I've built a dll library and an import > library. Looks like everything's fine. > > [vitalyb@vb .libs]$ i386-pc-mingw32-objdump -x libvisual-0.dll > ..... > [Ordinal/Name Pointer] Table > [ 0] __lv_initialized > [ 1] __lv_paramcontainer > [ 2] __lv_plugins > [ 3] __lv_plugins_actor > [ 4] __lv_plugins_input > [ 5] __lv_plugins_morph > [ 6] __lv_plugins_transform > [ 7] __lv_plugpath_cnt > [ 8] __lv_plugpaths > [ 9] __lv_progname > [ 10] __lv_userinterface > [ 11] _lv_blit_overlay_alpha32_mmx > [ 12] _lv_log > [ 13] _lv_scale_bilinear_32_mmx > [ 14] visual_actor_get_list > [ 15] visual_actor_get_next_by_name > [ 16] visual_actor_get_next_by_name_gl > .... > [ 371] visual_video_set_palette > [ 372] visual_video_set_pitch > [ 373] win32_sig_handler_sse@4 > .... > > Every non-static symbol was exported, as is the default. > > ============ test.c > int main() > { > return visual_actor_get_list(); > } > ============ > > [vitalyb@vb .libs]$ i386-pc-mingw32-gcc test.c -L. -lvisual.dll > [vitalyb@vb .libs]$ i386-pc-mingw32-objdump -x a.exe > .... 
> The Import Tables (interpreted .idata section contents) > vma: Hint Time Forward DLL First > Table Stamp Chain Name Thunk > 00004000 00004054 00000000 00000000 000041bc 000040a0 > > DLL Name: libvisual-0.dll > vma: Hint/Ord Member-Name Bound-To > 40e8 14 visual_actor_get_list > .... > > So, with GNU tools EXPORT stuff is not necessary. > > But with non-GNU tools you need these at least within the headers, > for successful compilation, I think. Import library can be built > from .def (hm, I don't remember exactly) file. > |
From: Duilio J. P. <dp...@fc...> - 2005-02-14 22:22:13
|
> I think. Import library can be built > from .def (hm, I don't remember exactly) file. You are right, using 'autogen' tool. |
From: Vitaly V. B. <vit...@uk...> - 2005-02-14 21:55:56
|
On Mon, 14 Feb 2005 21:41:24 +0100 Dennis Smit <sy...@yo...> wrote: > And put it in front of every symbol you export, IE every >EVERY< > function. Well, it is a common practice... I believe it's not mandatory. I've built a dll library and an import library. Looks like everything's fine. [vitalyb@vb .libs]$ i386-pc-mingw32-objdump -x libvisual-0.dll ..... [Ordinal/Name Pointer] Table [ 0] __lv_initialized [ 1] __lv_paramcontainer [ 2] __lv_plugins [ 3] __lv_plugins_actor [ 4] __lv_plugins_input [ 5] __lv_plugins_morph [ 6] __lv_plugins_transform [ 7] __lv_plugpath_cnt [ 8] __lv_plugpaths [ 9] __lv_progname [ 10] __lv_userinterface [ 11] _lv_blit_overlay_alpha32_mmx [ 12] _lv_log [ 13] _lv_scale_bilinear_32_mmx [ 14] visual_actor_get_list [ 15] visual_actor_get_next_by_name [ 16] visual_actor_get_next_by_name_gl .... [ 371] visual_video_set_palette [ 372] visual_video_set_pitch [ 373] win32_sig_handler_sse@4 .... Every non-static symbol was exported, as is the default. ============ test.c int main() { return visual_actor_get_list(); } ============ [vitalyb@vb .libs]$ i386-pc-mingw32-gcc test.c -L. -lvisual.dll [vitalyb@vb .libs]$ i386-pc-mingw32-objdump -x a.exe .... The Import Tables (interpreted .idata section contents) vma: Hint Time Forward DLL First Table Stamp Chain Name Thunk 00004000 00004054 00000000 00000000 000041bc 000040a0 DLL Name: libvisual-0.dll vma: Hint/Ord Member-Name Bound-To 40e8 14 visual_actor_get_list .... So, with GNU tools the EXPORT stuff is not necessary. But with non-GNU tools you need these at least within the headers, for successful compilation, I think. An import library can be built from a .def (hm, I don't remember exactly) file. -- Vitaly GPG Key ID: F95A23B9 |
From: Dennis S. <sy...@yo...> - 2005-02-14 20:41:29
|
On Mon, 2005-02-14 at 17:27 -0500, Duilio J. Protti wrote: > Why would this be a disaster? Is EXPORT defined by mingw? > If it is not, you just define EXPORT to the corresponding thing > when building on mingw, or define it to empty when not. And put it in front of every symbol you export, IE every >EVERY< function. So we get: EXPORT VisActor *visual_actor_new (char *name); and this for every call. Or am I getting something wrong? |
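For what it's worth, the way most cross-platform C libraries tame this is exactly what Duilio suggests: a single decoration macro defined once in a shared public header, so only each public declaration carries the prefix and Unix builds are untouched. A sketch (LV_API and BUILDING_LV are invented names, not libvisual's actual ones):

```c
/* Define the decoration once; on non-Windows builds it expands to
 * nothing, so existing Unix code compiles unchanged. */
#if defined(_WIN32)
#  if defined(BUILDING_LV)          /* set by the library's own build */
#    define LV_API __declspec(dllexport)
#  else                             /* consumers of the dll import */
#    define LV_API __declspec(dllimport)
#  endif
#else
#  define LV_API                    /* empty on Unix */
#endif

/* Public declaration, as it would appear in a header. */
LV_API int lv_demo_answer (void);

/* Definition, as it would appear in a .c file. */
int lv_demo_answer (void)
{
    return 42;
}
```

As Vitaly's objdump experiment elsewhere in this thread shows, GNU binutils export every non-static symbol by default, so with mingw the macro mainly matters for non-GNU toolchains.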
From: Duilio J. P. <dp...@fc...> - 2005-02-14 20:35:30
|
Why would this be a disaster? Is EXPORT defined by mingw? If it is not, you just define EXPORT to the corresponding thing when building on mingw, or define it to empty when not. Bye, Duilio. > Heya reading the mingw docs, I came across this: > > http://mingw.sourceforge.net/docs.shtml > > > Here's an example. Cut and paste the following into a file named > dllfct.h: > > #ifdef BUILD_DLL > // the dll exports > #define EXPORT __declspec(dllexport) > #else > // the exe imports > #define EXPORT __declspec(dllimport) > #endif > > // function to be imported/exported > EXPORT void tstfunc (void); > > > Cut and paste the following into a file named dllfct.c: > > #include <stdio.h> > #include "dllfct.h" > > EXPORT void tstfunc (void) > { > printf ("Hello\n"); > } > > > > > > This makes me nearly sick :), is there a way to get around this ?, because > this would be a complete disaster within libvisual :) |
From: Mark K. <ma...@we...> - 2005-02-14 19:49:19
|
The amaroK team is proud to announce version 1.2 of the amaroK audio player! Markey, muesli and mxcl were walking along the bypass the other week when these aliens came down. And they said, "Dudes! Make us the best media-player in the world. Ever!" and we looked at each other, and said "What's wrong with XMMS?" and they said (in unison, which was kinda creepy), "Oh come on dudes! We're aliens, you try pushing 2 pixel square buttons when you have 70-million dpi screens!". So they put us in a slave-camp and we managed to get a new version of amaroK out, but we may have done too good a job because now they want to keep us here forever. Save us! Someone! New in amaroK 1.2: * Full support for Audioscrobbler! Share your music taste with friends on the net. * Generate dynamic playlists based on Audioscrobbler suggestions. * Support for MySQL databases. Now you can keep your Collection on a remote computer. * The playlist has seen vast speed improvements. * 10-band graphic Equalizer. * Many usability improvements. We have made amaroK more accessible to new users and more comfortable for power-users. * Automatic song lyrics display. Shows the lyrics to the song you're currently playing. * Support for your iPod with the all new media-browser. * On screen display has been revamped, now with optional translucency. * Theme your ContextBrowser with custom CSS support. * Support for the latest LibVisual library for stunning visualizations. * Great new amaroK icon "Blue Wolf", made by KDE artist Da-Flow. * Powerful scripting interface, allowing for easy extension of amaroK. Please see our new wiki (http://amarok.kde.org/wiki/) if you want to get started writing a script plugin! The amaroK team --------------- amaroK is a soundsystem-independent audio-player for *nix. Its interface uses a powerful "browser" metaphor that allows you to create playlists that make the most of your music collection. We have a fast development-cycle and super-happy users. 
We also provide pensions and other employment-benefits. "Easily the best media-player for Linux at the moment. Install it now!" - Linux Format Magazine WWW: http://amarok.kde.org WIKI: http://amarok.kde.org/wiki/ IRC: irc.freenode.net #amarok MAIL: ama...@li... |
From: Dennis S. <sy...@yo...> - 2005-02-14 19:43:44
|
Heya reading the mingw docs, I came across this: http://mingw.sourceforge.net/docs.shtml Here's an example. Cut and paste the following into a file named dllfct.h: #ifdef BUILD_DLL // the dll exports #define EXPORT __declspec(dllexport) #else // the exe imports #define EXPORT __declspec(dllimport) #endif // function to be imported/exported EXPORT void tstfunc (void); Cut and paste the following into a file named dllfct.c: #include <stdio.h> #include "dllfct.h" EXPORT void tstfunc (void) { printf ("Hello\n"); } This makes me nearly sick :), is there a way to get around this ?, because this would be a complete disaster within libvisual :) |
From: Dennis S. <sy...@yo...> - 2005-02-14 19:19:04
|
Heya Fellahs, To store plugin settings, and also as a backend for the lvavs preset storage, I'd like to introduce a subsystem that is capable of serializing VisParamEntries and, for that matter, complete VisParamContainers. I'd like to discuss the following: Where do we store our param registry? "~/.libvisual/settings"? And what kind of format do we want? We could go for a binary format, which obviously is fast and easy to implement. Also, using a binary system (and some hacking) we can easily serialize and deserialize objects. We could have 'serialize, deserialize' methods in VisObjects that support (de)serialization from binary data, that is, when the interface is implemented for the object. This could be nice with VisPalette, VisColor and the such. Of course we should be careful not to bloat our object system, but I think an optional (de)serialize interface on VisObject, to (de)serialize objects, is quite a legal thing to do. I personally favor the following: a binary file in "~/.libvisual/settings". What do you people think? Cheers, Dennis |
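A minimal sketch of what the optional binary (de)serialize interface could look like for a small object such as a VisColor. The names and the fixed-width byte encoding are assumptions for illustration, not an existing libvisual API; an explicit per-field encoding sidesteps struct-padding and endianness surprises that dumping raw structs to disk would cause:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a small serializable object like VisColor. */
typedef struct {
    uint8_t r, g, b;
} DemoColor;

/* Write the object into buf field by field; returns bytes written.
 * A real interface would likely be a function pair hung off
 * VisObject, discovered at runtime when the object implements it. */
static size_t demo_color_serialize (const DemoColor *c, uint8_t *buf)
{
    buf[0] = c->r;
    buf[1] = c->g;
    buf[2] = c->b;
    return 3;
}

/* Read the object back; returns bytes consumed. */
static size_t demo_color_deserialize (DemoColor *c, const uint8_t *buf)
{
    c->r = buf[0];
    c->g = buf[1];
    c->b = buf[2];
    return 3;
}
```

The registry section's `data` blob from the proposal above could then simply be the concatenation of such per-object encodings.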
From: Dennis S. <sy...@yo...> - 2005-02-14 18:52:20
|
On Mon, 2005-02-14 at 20:27 +0200, Vitaly V. Bursov wrote: > On Mon, 14 Feb 2005 19:15:26 +0100 > Dennis Smit <sy...@yo...> wrote: > > > On Fri, 2005-02-11 at 16:21 -0500, Duilio J. Protti wrote: > > > You must also choose which installer to use, to start adjusting things > > > for them. > > > > Any suggestions on this? I think we should go for an installer that is > > used a lot and is well known to windows users. > > Inno Setup http://www.jrsoftware.org/isinfo.php should be a good choice. Looks like neat software, I've written it down for when it's time :) Thanks for pointing it out! Cheers, Dennis |
From: Vitaly V. B. <vit...@uk...> - 2005-02-14 18:28:01
|
On Mon, 14 Feb 2005 19:15:26 +0100 Dennis Smit <sy...@yo...> wrote: > On Fri, 2005-02-11 at 16:21 -0500, Duilio J. Protti wrote: > > You must also choose which installer to use, to start adjusting things > > for them. > > Any suggestions on this? I think we should go for an installer that is > used a lot and is well known to windows users. Inno Setup http://www.jrsoftware.org/isinfo.php should be a good choice. -- Vitaly GPG Key ID: F95A23B9 |
From: Dennis S. <sy...@yo...> - 2005-02-14 18:16:13
|
On Fri, 2005-02-11 at 16:25 -0500, Duilio J. Protti wrote: > Oh, and I forgot to mention that some special care must be taken with > structure packing, and keep in mind, when you use asm, that the > function's call convention on Windows is different from that of standard > C, and the inline assembly usage will be different while compiling with > Visual Studio. Yep, we'll have to use gcc :) |
From: Dennis S. <sy...@yo...> - 2005-02-14 18:15:29
|
On Fri, 2005-02-11 at 16:21 -0500, Duilio J. Protti wrote: > You can build the whole thing using Cygwin alone, but in that case the > libraries are linked against libcygwin.dll.a, and you need to distribute > it along with libvisual in your releases. But if you use Mingw, you link > against Windows dlls, so the only thing needed is Windows itself. I think we should go for mingw. I've been playing with it for a bit, not extremely successful, but capable of running configure and building a very small part of libvisual. As Vitaly already pointed out, it's borking a lot :) > > and how can we make sure that > > things like 'dlopen' are statically linked ? > dlopen on Windows requires special care, I will post more on this issue > soon. Yes, please give me some more information regarding this. I think, for the windows platform, we should really focus on a works-out-of-the-box installer, probably including plugins for at least winamp and windows mediaplayer. Is there a way of obtaining the standard header files for the windows platform without the need to buy visual studio, or something? > You must also choose which installer to use, to start adjusting things > for them. Any suggestions on this? I think we should go for an installer that is used a lot and is well known to windows users. Cheers, Dennis |
From: Dennis S. <sy...@yo...> - 2005-02-12 13:57:47
|
On Sat, 2005-02-12 at 02:32 +0200, Vitaly V. Bursov wrote: > Glibc has its own mem* functions, compiler can compile its > own inlined mem* versions. Compiler is the most interesting player > here especially if data size is a constant. > > mmx/sse memcpy and memset can be useful while working with pretty > large framebuffers. That is true, it won't matter that much, because our video buffers, while being multiple megabytes, are still small. But well, maybe in a case like lv-avs, where there are multiple buffers, it matters a few frames.. :) |
From: Dennis S. <sy...@yo...> - 2005-02-12 13:56:12
|
On Fri, 2005-02-11 at 20:44 -0300, Duilio Javier Protti wrote: > > A better autogen.sh, and review our build trees. > > I can work on autogen.sh this weekend. Excellent! > > Libvisual: > > Setup our own memcpy/memset wrappers and accelerate > > these with mmx/sse etc etc. > > I see on the Linux kernel (include/asm-i386/string.h): > > static __inline__ void *__memcpy3d(void *to, const void *from, size_t > len) > { > if (len < 512) > return __constant_memcpy(to, from, len); > return _mmx_memcpy(to, from, len); > } > > so it looks like MMX pays well on buffers > 512 bytes long I did some serious playing with this a few weeks ago. MMX does matter, because you can load/store 8 * 8 bytes (all the mmx regs). However, what matters more is cache control, by using 3dnow or sse. That way I can pump 2 gigabytes a second on my machine; without cache control, 1 gigabyte would take 1.4 sec. > I must finish the gtk1-widget!!! I will work on this Sounds good, I changed a lot in the gtk2 version (not one big function, but a function for every widget). You'll want to look at this. |
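The wrapper idea from the plan could be sketched as a size-based dispatch like the kernel snippet quoted above. This is only a shape sketch with invented names: both paths here are plain `memcpy` placeholders, where a real implementation would swap in an MMX/SSE routine with non-temporal stores for the cache control Dennis describes.

```c
#include <stddef.h>
#include <string.h>

typedef void *(*mem_copy_func) (void *dst, const void *src, size_t n);

/* Small-copy path: let the compiler inline plain memcpy. */
static void *copy_small (void *dst, const void *src, size_t n)
{
    return memcpy (dst, src, n);
}

/* Large-copy path: placeholder for a SIMD routine using
 * cache-bypassing (non-temporal) stores on big framebuffers. */
static void *copy_large (void *dst, const void *src, size_t n)
{
    return memcpy (dst, src, n);
}

/* Mirrors the kernel's 512-byte cutoff: small copies stay on the
 * plain path, framebuffer-sized copies take the accelerated one.
 * Real code would also pick the function pointers once at init,
 * based on detected CPU features. */
static void *visual_mem_copy (void *dst, const void *src, size_t n)
{
    mem_copy_func f = (n < 512) ? copy_small : copy_large;
    return f (dst, src, n);
}
```

Centralizing the dispatch in one wrapper keeps the `#ifdef`/cpuid logic out of every caller, which is presumably the point of the "own memcpy/memset wrappers" item in the plan.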
From: Vitaly V. B. <vit...@uk...> - 2005-02-12 00:33:26
|
On 11 Feb 2005 20:44:43 -0300 Duilio Javier Protti <dp...@fc...> wrote: > > Libvisual: > > Setup our own memcpy/memset wrappers and accelerate > > these with mmx/sse etc etc. > > I see on the Linux kernel (include/asm-i386/string.h): > > static __inline__ void *__memcpy3d(void *to, const void *from, size_t > len) > { > if (len < 512) > return __constant_memcpy(to, from, len); > return _mmx_memcpy(to, from, len); > } > > so it looks like MMX pays well on buffers > 512 bytes long That's kernel :) Glibc has its own mem* functions, compiler can compile its own inlined mem* versions. Compiler is the most interesting player here especially if data size is a constant. mmx/sse memcpy and memset can be useful while working with pretty large framebuffers. -- Vitaly GPG Key ID: F95A23B9 |
From: Duilio J. P. <dp...@fc...> - 2005-02-11 23:32:54
|
> That said, the plan: > > 0.3.0: > General: > Code reviewing. > Bug fixes. > Speed improvements (aka more mmx hacking). > A better autogen.sh, and review our build trees. I can work on autogen.sh this weekend. > Libvisual: > Setup our own memcpy/memset wrappers and accelerate > these with mmx/sse etc etc. I see on the Linux kernel (include/asm-i386/string.h): static __inline__ void *__memcpy3d(void *to, const void *from, size_t len) { if (len < 512) return __constant_memcpy(to, from, len); return _mmx_memcpy(to, from, len); } so it looks like MMX pays well on buffers > 512 bytes long > 0.3.1: > Libvisual: > Introduce VisScript, our math expression evaluator, to > be used by libvisual-avs and G-Force. > (This is the initial VM version, after this > it will have a long development cycle to go) > Introduce VisPipeline, advanced pipeline connection of > VisActors, VisMorphs, VisTransforms. > Rewrite VisBin using VisPipeline. Really good! > Libvisual-widgets: > Good working gtk2 widget. > Good working gtk1 widget. > Good working QT widget. I must finish the gtk1-widget!!! I will work on this Bye, Duilio. |
From: Duilio J. P. <dp...@fc...> - 2005-02-11 19:35:16
|
Oh, and I forgot to mention that some special care must be taken with structure packing, and keep in mind, when you use asm, that the function's call convention on Windows is different from that of standard C, and the inline assembly usage will be different when compiling with Visual Studio. Bye, Duilio. |
From: Duilio J. P. <dp...@fc...> - 2005-02-11 19:32:58
|
Oh, and I forgot to mention that some special care must be taken with structure packing, and keep in mind, when you use asm, that the function's call convention on Windows is different from that of standard C, and the inline assembly usage will be different while compiling with Visual Studio. Bye, Duilio. |
From: Duilio J. P. <dp...@fc...> - 2005-02-11 19:29:36
|
> I'd like to try and get a libvisual.dll working under windows 32. > > Tho I've got >zero< experience with this. So what is needed to run > the autotools, and gcc under windows ?, I strongly suggest using Cygwin+Mingw. Cygwin is a Posix emulation layer for Windows systems. Mingw stands for "Minimalist GNU for Windows". You can build the whole thing using Cygwin alone, but in that case the libraries are linked against libcygwin.dll.a, and you need to distribute it along with libvisual in your releases. But if you use Mingw, you link against Windows dlls, so the only thing needed is Windows itself. > and how can we make sure that > things like 'dlopen' are statically linked ? dlopen on Windows requires special care, I will post more on this issue soon. You must also choose which installer to use, to start adjusting things for them. Bye, Duilio. |
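The special care around dlopen usually starts with a thin portability shim, since native Windows has no dlopen and uses LoadLibrary/GetProcAddress/FreeLibrary instead. A sketch with invented wrapper names (not an existing libvisual API); on the POSIX side, passing NULL opens a handle to the main program:

```c
/* Hypothetical plugin-loading shim: one small translation layer so
 * the rest of the plugin loader stays platform-neutral. */
#if defined(_WIN32)

#include <windows.h>

typedef HMODULE lv_handle_t;

static lv_handle_t lv_dlopen (const char *path)
{
    return LoadLibraryA (path);               /* NULL on failure */
}
static void *lv_dlsym (lv_handle_t h, const char *sym)
{
    return (void *) GetProcAddress (h, sym);
}
static void lv_dlclose (lv_handle_t h)
{
    FreeLibrary (h);
}

#else  /* POSIX */

#include <dlfcn.h>

typedef void *lv_handle_t;

static lv_handle_t lv_dlopen (const char *path)
{
    return dlopen (path, RTLD_LAZY);          /* NULL on failure */
}
static void *lv_dlsym (lv_handle_t h, const char *sym)
{
    return dlsym (h, sym);
}
static void lv_dlclose (lv_handle_t h)
{
    dlclose (h);
}

#endif
```

The remaining differences (the `.dll` vs `.so` suffix when scanning plugin directories, and error reporting via `GetLastError` vs `dlerror`) would live in the same shim.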
From: Vitaly V. B. <vit...@uk...> - 2005-02-11 19:05:56
|
On Fri, 11 Feb 2005 18:10:14 +0100 Dennis Smit <sy...@yo...> wrote: $ unset CC CXX CFLAGS CXXFLAGS (I have a cross-compiler and stuff :) $ ./configure --host=i386-pc-mingw32 $ make fixes/hacks $ make fixes/hacks .... ... and here's a successfully built, non-working libvisual library :) A patch to highlight the problematic places is attached. It's mostly dlopen() related.... -- Vitaly GPG Key ID: F95A23B9 |