From: Dennis S. <sy...@yo...> - 2005-03-08 15:06:20
|
On Tue, 2005-03-08 at 16:23 +0200, Vitaly V. Bursov wrote:
> On Tue, 08 Mar 2005 14:26:46 +0100
> Dennis Smit <sy...@yo...> wrote:
> > On Tue, 2005-03-08 at 15:15 +0200, Vitaly V. Bursov wrote:
> > > I think it would be nice to have an "audio filter" stack, tree or
> > > whatever and some "control" api (on/off, value from a range)
> >
> > Audio filter stack? What do you mean?
>
> Well, Actor gets not a "raw" stream but one processed by filters,
> including channel mixers, speed up/down, pitch, voice removal, etc.
> It would be a nice place to put a resampler too.
>
> Of course some data should be grabbed from a VJ app if there's one.
>
> Just an idea...

This is out of scope for libvisual itself; however, a friend of mine is semi into audio, and I was talking to him about a lib on top of libvisual for audio processing. So that fits nicely :)

> > > I don't think it's a good idea to use a fixed single frequency...
> > > Anyway, does it matter?
> >
> > Well, the more we generalise, the less the Actor plugin has to think.
> > And keep in mind, more than one actor can connect to one VisAudio.
>
> Yeah, probably you're right.

I was thinking we could store samples in a different type, VisAudioSample, timestamp them there and attach a format type. Then make it possible to go from a VisAudio to a few common types, combining the data while converting (we can store a history of samples).

> > > > How large do we want our buffers?
> > >
> > > Large buffer means high latency. Small buffer (up to 64 bytes) means low
> > > latency.
> >
> > True, how many samples of how many bytes are pushed out a second? :)
>
> If the sampling rate is 44100, you'll get 44100 * 2 channels * 16 bit = 176400 bytes/sec :)
> So, with a 2048 sample buffer there will be (2048 * 2 channels * 16 bit) / 176400
> ~= 46 msec of lag if a frame is rendered in a tenth of a millisecond. Or
> around 21 "real" FPS.

OK, so we probably want to fetch with buffers of around 256, and combine multiple samples together if the plugin needs a larger worker array? (Thanks for the calculation, btw :))

> > > Personally I believe the buffer should be variable in size. It will be
> > > useful if we're going to use "audio filters".
> >
> > What do you want to do with these audio filters? What we could
> > also do is take a non-naive approach, where a processed VisAudio
> > is rather a VisAudioPool that negotiates with the different
> > actors. However, this does not make things easier, and I think we should
> > stay goal focused; prove me wrong though :)
>
> Well, I'm not sure...

What about one sample array, from which plugins can ask for an array version themselves? Something like this, in VisAudio:

    VisAudioSample *samples; /* We might want to make this a rotating buffer... anyway, just for concept */
    int samples_count;

    VisAudioSample {
        VisObject object;

        VisAudioSampleFormatType format;
        VisTime timestamp;

        sampledata...
        origdata...
    }

    void visual_audio_convert_to (VisAudio *audio, VisAudioSampleFormatType format, void *data, int size);

> =====
> AudioSource (N channels, interleaved or not)
>  ||     ||
>  \/     ||
> Filter1 ||
>  | \    ||
>  |  \   ||
>  v   v  \/
> Filter2 Filter3
>  |       |
>  v       v
> Actor1  Actor2
> =====
>
> or, if there's a VJ app:
>
> =====
> AudioSource (N channels, interleaved or not, better not)
>  Ch1    Ch2   ...   ChN
>   |      |
>   v      v
> Actor1  Actor2
> =====

Yes, that setup makes a lot of sense. Though for the real connection of this I think we really should introduce 'another' new lib :) Anyway, my focus isn't there at the moment; however, we need to keep the possibility open.

> > > What about audio frame timestamping?
> >
> > GOOD one, putting that on the list. We need that anyway, to use it for
> > the beat detection. Maybe we should keep a general sample history,
> > from which a [3][size] buffer can be derived?
>
> Hm, I still think we should have a small and variable buffer size.
> Buffer size can specify how much data is available (no guarantee
> that frame processing will take exactly N usec);
> a small buffer means low latency;
> all data should be processed to detect beats accurately.

Agreed. I am incorporating all the comments in a new version of the draft.

> It's virtually impossible to sync a/v if libvisual has no control
> over an audio stream (i.e. nobody can delay it - live music).
> The only solution here is a small buffer.

Yep. Since we can use a sample history with the new design, we might want to add a 'delay' control that holds back newer samples for that amount of msecs? Or would it be better to implement this as a plugin within an audio lib on top?

Cheers, Dennis
|
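The 'delay' control and sample-history ideas discussed above could be sketched as a ring of recent sample buffers, read back N pushes behind the write position. All names and sizes here are illustrative, not libvisual API:

```c
#include <assert.h>
#include <string.h>

#define HISTORY_SLOTS 8
#define SLOT_SAMPLES  4 /* tiny for the example; 256+ in practice */

/* Hypothetical sample history: a ring buffer of recent audio buffers. */
typedef struct {
    float slots[HISTORY_SLOTS][SLOT_SAMPLES];
    int   head; /* next slot to write */
} sample_history;

/* Store the newest buffer, overwriting the oldest slot. */
static void history_push(sample_history *h, const float *buf)
{
    memcpy(h->slots[h->head], buf, sizeof h->slots[0]);
    h->head = (h->head + 1) % HISTORY_SLOTS;
}

/* Read the buffer written `delay` pushes ago (0 = most recent).
 * A fixed nonzero delay is exactly the 'hold back newer samples'
 * control proposed above. */
static const float *history_read(const sample_history *h, int delay)
{
    int idx = (h->head - 1 - delay + 2 * HISTORY_SLOTS) % HISTORY_SLOTS;
    return h->slots[idx];
}
```

Whether this lives in VisAudio itself or in a filter lib on top is exactly the open question of the thread.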
From: Vitaly V. B. <vit...@uk...> - 2005-03-08 14:24:11
|
On Tue, 08 Mar 2005 14:26:46 +0100 Dennis Smit <sy...@yo...> wrote:
> On Tue, 2005-03-08 at 15:15 +0200, Vitaly V. Bursov wrote:
> > I think it would be nice to have an "audio filter" stack, tree or
> > whatever and some "control" api (on/off, value from a range)
>
> Audio filter stack? What do you mean?

Well, Actor gets not a "raw" stream but one processed by filters, including channel mixers, speed up/down, pitch, voice removal, etc. It would be a nice place to put a resampler too.

Of course some data should be grabbed from a VJ app if there's one.

Just an idea...

> > > Audio energy registration, calculation (already done).
> > > normalized spectrum (logified).
> > > Internally, use floats to represent audio. (from 0.0 to 1.0 or -1.0 to 1.0 ?)
> > > Use floats to represent the spectrum (from 0.0 to 1.0)
> > > Have conversion code to go from short audio to float.
> >
> > there's K7 (semi) optimized versions within scivi
>
> That is neat! Will look at it.

That's plugin.c, the "USE_ASM_K7" define.

> > I don't think it's a good idea to use a fixed single frequency...
> > Anyway, does it matter?
>
> Well, the more we generalise, the less the Actor plugin has to think.
> And keep in mind, more than one actor can connect to one VisAudio.

Yeah, probably you're right.

> > > How large do we want our buffers?
> >
> > Large buffer means high latency. Small buffer (up to 64 bytes) means low
> > latency.
>
> True, how many samples of how many bytes are pushed out a second? :)

If the sampling rate is 44100, you'll get 44100 * 2 channels * 16 bit = 176400 bytes/sec :)
So, with a 2048 sample buffer there will be (2048 * 2 channels * 16 bit) / 176400 ~= 46 msec of lag if a frame is rendered in a tenth of a millisecond. Or around 21 "real" FPS. Or something like that. :)

> > Personally I believe the buffer should be variable in size. It will be
> > useful if we're going to use "audio filters".
>
> What do you want to do with these audio filters? What we could
> also do is take a non-naive approach, where a processed VisAudio
> is rather a VisAudioPool that negotiates with the different
> actors. However, this does not make things easier, and I think we should
> stay goal focused; prove me wrong though :)

Well, I'm not sure...

=====
AudioSource (N channels, interleaved or not)
 ||     ||
 \/     ||
Filter1 ||
 | \    ||
 |  \   ||
 v   v  \/
Filter2 Filter3
 |       |
 v       v
Actor1  Actor2
=====

or, if there's a VJ app:

=====
AudioSource (N channels, interleaved or not, better not)
 Ch1    Ch2   ...   ChN
  |      |
  v      v
Actor1  Actor2
=====

> > What about audio frame timestamping?
>
> GOOD one, putting that on the list. We need that anyway, to use it for
> the beat detection. Maybe we should keep a general sample history,
> from which a [3][size] buffer can be derived?

Hm, I still think we should have a small and variable buffer size.
Buffer size can specify how much data is available (no guarantee that frame processing will take exactly N usec); a small buffer means low latency; all data should be processed to detect beats accurately.

> > How to guarantee (do our best) to synchronize video with audio?
> > Hard video processing requires some time...
>
> This is hard, I haven't put much thought into this, but timestamping
> should help, right? :)

I really don't know how to synchronize it :) It's virtually impossible to sync a/v if libvisual has no control over an audio stream (i.e. nobody can delay it - live music). The only solution here is a small buffer.

--
Vitaly
GPG Key ID: F95A23B9
|
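The latency arithmetic above can be double-checked with a small sketch. Helper names are mine, not part of any discussed API; note that the lag in milliseconds depends only on the sample count and rate, since channel count and sample width cancel out:

```c
#include <assert.h>

/* Bytes per second of interleaved 16-bit PCM (2 bytes per sample). */
static long pcm_bytes_per_sec(long rate, int channels)
{
    return rate * channels * 2;
}

/* Worst-case buffering lag, in milliseconds, of a buffer holding
 * `samples` sample frames at the given rate (integer truncation). */
static long buffer_latency_ms(long rate, long samples)
{
    return samples * 1000 / rate;
}
```

For the figures in the mail: 44100 Hz stereo gives 176400 bytes/sec, a 2048-sample buffer gives ~46 msec of lag (about 21 "real" FPS), and the proposed 256-sample fetch gives ~5 msec.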
From: Dennis S. <sy...@yo...> - 2005-03-08 13:26:56
|
On Tue, 2005-03-08 at 15:15 +0200, Vitaly V. Bursov wrote:
> I think it would be nice to have an "audio filter" stack, tree or
> whatever and some "control" api (on/off, value from a range)

Audio filter stack? What do you mean?

> > Audio energy registration, calculation (already done).
> > normalized spectrum (logified).
> > Internally, use floats to represent audio. (from 0.0 to 1.0 or -1.0 to 1.0 ?)
> > Use floats to represent the spectrum (from 0.0 to 1.0)
> > Have conversion code to go from short audio to float.
>
> there's K7 (semi) optimized versions within scivi

That is neat! I will look at it.

> I don't think it's a good idea to use a fixed single frequency...
> Anyway, does it matter?

Well, the more we generalise, the less the Actor plugin has to think. And keep in mind, more than one actor can connect to one VisAudio.

> > How large do we want our buffers?
>
> Large buffer means high latency. Small buffer (up to 64 bytes) means low
> latency.

True, how many samples of how many bytes are pushed out a second? :)

> Personally I believe the buffer should be variable in size. It will be
> useful if we're going to use "audio filters".

What do you want to do with these audio filters? What we could also do is take a non-naive approach, where a processed VisAudio is rather a VisAudioPool that negotiates with the different actors. However, this does not make things easier, and I think we should stay goal focused; prove me wrong though :)

> What about audio frame timestamping?

GOOD one, putting that on the list. We need that anyway, to use it for the beat detection. Maybe we should keep a general sample history, from which a [3][size] buffer can be derived?

> How to guarantee (do our best) to synchronize video with audio?
> Hard video processing requires some time...

This is hard, I haven't put much thought into this, but timestamping should help, right? :)

Cheers, Dennis
|
From: Vitaly V. B. <vit...@uk...> - 2005-03-08 13:15:40
|
On Mon, 07 Mar 2005 20:50:23 +0100 Dennis Smit <sy...@yo...> wrote:
> The document:
> ====================================================================================
>
> This document describes the initial VisAudio overhaul design.
>
> Version 0.1
>
> Dennis Smit <sy...@yo...>
> Libvisual project leader
>
> _____________________________________________________________________________________
>
> Goals:
> Ability to mix channels.

I think it would be nice to have an "audio filter" stack, tree or whatever, and some "control" api (on/off, value from a range).

> Audio energy registration, calculation (already done).
> normalized spectrum (logified).
> Internally, use floats to represent audio. (from 0.0 to 1.0 or -1.0 to 1.0 ?)
> Use floats to represent the spectrum (from 0.0 to 1.0)
> Have conversion code to go from short audio to float.

There are K7 (semi) optimized versions within scivi.

> Basic conversion for the frequency (Hz) of the input.
> BPM detection:
>
> http://www.gamedev.net/reference/programming/features/beatdetection/page2.asp
>
> Provide 6 vars to directly detect hits on the six bands.
>
> Open questions:
> How do we want to do the number of samples ? (aka the length of the buffer)
> How to handle multiple channels, channel mixing; how to handle 5.1, 7.1, 9.1 etc ?
>
> Internal audio representation:
> In floats.
> Fixed frequency ? (I personally prefer this)

I don't think it's a good idea to use a fixed single frequency... Anyway, does it matter?

> How large do we want our buffers ?

Large buffer means high latency. Small buffer (up to 64 bytes) means low latency.

Personally I believe the buffer should be variable in size. It will be useful if we're going to use "audio filters".

What about audio frame timestamping?

How to guarantee (do our best) to synchronize video with audio? Hard video processing requires some time...

--
Vitaly
GPG Key ID: F95A23B9
|
From: Dennis S. <sy...@yo...> - 2005-03-08 12:35:51
|
> If some source and destination module have different formats,
> just fire up a gavl_audio_converter_t, and it will do the conversion.
> Furthermore, there are functions for allocating/freeing audio frames
> (with memory alignment) as well as a copy function, which lets you
> easily create an audio buffer.

It looks very advanced, without doubt, yeah.

> I'm using this in my own projects for quite a time now and I never
> got a format which couldn't be handled by gavl.
> The only thing which might eventually be missing is 7.1 (2 side channels),
> but this could be added as well if needed.
>
> I think it would provide a greater flexibility than the
> VisAudioSoundImportType enum :-)

It does, without doubt.

> Whether or not the plugins have restrictions concerning the formats is
> another question. The 512 samples in each frame seem to be hardcoded
> in most plugins and I don't know if this number must become variable.
>
> Concerning the samplerate, most material is 44100 but there are also
> 48000 streams (e.g. from Music DVDs) or 22050 (used by some
> web radio stations). So you could let the plugins handle all these rates
> or use gavl to resample everything to 44100.

The idea is to provide ONE format to the VisActor plugins, since they shouldn't be dealing with all kinds of different formats. However, we should be able to handle input in different formats nicely.

I think that a dependency is going to give us problems, also since we're porting to Windows and Mac OS X. However, would you mind us assimilating features from gavl into our VisAudio? The biggest problem is that we're LGPL and you're GPL.

Thanks, Dennis
|
From: Burkhard P. <pl...@ip...> - 2005-03-08 12:28:41
|
> Yep, totally agree, please clarify the 'quiet' stuff a bit more btw :)

It is set to 1 if the loudness dropped to analog silence during the last processed samples. For subsequent silent frames, it's set to 0.

> I am currently writing the proposal for the VisAudio rewrite, I will
> post that later in the evening. Please take a look at it :)

Did so, see other message.

> Depends, we want to provide good detection, but of course some plugins
> will have need for something very specific that isn't in the library.
>
> We try to implement generalized things in the lib.

Ok, this is what I have now. time_buffer_read contains the pcm samples; after the call, e->loudness, e->beat_detected and e->quiet are updated (e is the lemuria engine, which holds all data belonging to one lemuria instance). It's a copy and paste from an old blursk version and I didn't bother yet to improve it.

#define BEAT_MAX 200

/* Config values from blursk */
#define BEAT_SENSITIVITY 4

typedef struct
{
    int32_t beathistory[BEAT_MAX];
    int beatbase;
    int32_t aged;   /* smoothed out loudness */
    int32_t lowest; /* quietest point in current beat */
    int elapsed;    /* frames since last beat */
    int isquiet;    /* was previous frame quiet */
    int prevbeat;   /* period of previous beat */
} lemuria_analysis;

static int detect_beat(lemuria_analysis * a, int32_t loudness,
                       int *thickref, int *quietref)
{
    int beat, i, j;
    int32_t total;
    int sensitivity;

    /* Incorporate the current loudness into history */
    a->aged = (a->aged * 7 + loudness) >> 3;
    a->elapsed++;

    /* If silent, then clobber the beat */
    if (a->aged < 2000 || a->elapsed > BEAT_MAX)
    {
        a->elapsed = 0;
        a->lowest = a->aged;
        memset(a->beathistory, 0, sizeof a->beathistory);
    }
    else if (a->aged < a->lowest)
        a->lowest = a->aged;

    /* Beats are detected by looking for a sudden loudness after a lull.
     * They are also limited to occur no more than once every 15 frames,
     * so the beat flashes don't get too annoying.
     */
    j = (a->beatbase + a->elapsed) % BEAT_MAX;
    a->beathistory[j] = loudness - a->aged;
    beat = FALSE;
    if (a->elapsed > 15 && a->aged > 2000 && loudness * 4 > a->aged * 5)
    {
        /* Compute the average loudness change, assuming this is a beat */
        for (i = BEAT_MAX / a->elapsed, total = 0;
             --i > 0;
             j = (j + BEAT_MAX - a->elapsed) % BEAT_MAX)
        {
            total += a->beathistory[j];
        }
        total = total * a->elapsed / BEAT_MAX;

        /* Tweak the sensitivity to emphasize a consistent rhythm */
        sensitivity = BEAT_SENSITIVITY;
        i = 3 - abs(a->elapsed - a->prevbeat) / 2;
        if (i > 0)
            sensitivity += i;

        /* If the average change is significantly positive, this is a beat. */
        if (total * sensitivity > a->aged)
        {
            a->prevbeat = a->elapsed;
            a->beatbase = (a->beatbase + a->elapsed) % BEAT_MAX;
            a->lowest = a->aged;
            a->elapsed = 0;
            beat = TRUE;
        }
    }

    /* Thickness is computed from the difference between the instantaneous
     * loudness and the aged loudness. Thus, a sudden increase in volume
     * will produce a thick line, regardless of rhythm.
     */
    if (a->aged < 1500)
        *thickref = 0;
    else
    {
        *thickref = loudness * 2 / a->aged;
        if (*thickref > 3)
            *thickref = 3;
    }

    /* Silence is computed from the aged loudness. The quietref value is
     * set to TRUE only at the start of silence, not throughout the silent
     * period. Also, there is some hysteresis so that silence followed
     * by a slight noise and more silence won't count as two silent
     * periods -- that sort of thing happens during many fade edits, so
     * we have to account for it.
     */
    if (a->aged < (a->isquiet ? 1500 : 500))
    {
        /* Quiet now -- is this the start of quiet? */
        *quietref = !a->isquiet;
        a->isquiet = TRUE;
    }
    else
    {
        *quietref = FALSE;
        a->isquiet = FALSE;
    }

    /* return the result */
    return beat;
}

void lemuria_analysis_perform(lemuria_engine_t * e)
{
    int i, imin, imax, start;
    int32_t delta_sum;
    lemuria_analysis * a = (lemuria_analysis *)e->analysis;

    /* Find the maximum and minimum, with the restriction that
     * the minimum must occur after the maximum.
     */
    for (i = 1, imin = imax = 0, delta_sum = 0; i < 127 / 2; i++)
    {
        if (e->time_buffer_read[0][i] < e->time_buffer_read[0][imin])
            imin = i;
        if (e->time_buffer_read[0][i] > e->time_buffer_read[0][imax])
            imin = imax = i;
        /* note: this read "[i - i]" in the pasted version, which is
         * always index 0; "[i - 1]" is what the comment below implies */
        delta_sum += abs(e->time_buffer_read[0][i] -
                         e->time_buffer_read[0][i - 1]);
    }

    /* Triggered sweeps start halfway between min & max */
    start = (imax + imin) / 2;

    /* Compute the loudness. We don't want to do a full spectrum analysis
     * to do this, but we can guess the low-frequency sound is proportional
     * to the maximum difference found (because loud low frequencies need
     * big signal changes), and that high-frequency sound is proportional
     * to the differences between adjacent samples. We want to be sensitive
     * to both of those, while ignoring the mid-range sound.
     *
     * Because we have only one low-frequency difference, but hundreds of
     * high-frequency differences, we need to give more weight to the
     * low-frequency difference (even though each high-frequency difference
     * is small).
     */
    e->loudness = (((int32_t)e->time_buffer_read[0][imax] -
                    (int32_t)e->time_buffer_read[0][imin]) * 60 + delta_sum) / 75;

    e->beat_detected = detect_beat(a, e->loudness, &(e->thickness), &(e->quiet));
}

--
_____________________________
Dr.-Ing. Burkhard Plaum
Institut fuer Plasmaforschung
Pfaffenwaldring 31
70569 Stuttgart
Tel.: +49 711 685-2187 Fax.: -3102
|
From: Burkhard P. <pl...@ip...> - 2005-03-08 12:16:46
|
Dennis Smit wrote:
> Heya everyone.
>
> I wrote the first version of the new VisAudio API proposal.
>
> It's going to be a complete overhaul, and thus should be done right.
>
> PLEASE read this document, and comment upon it!

If you are brave enough to add another dependency to libvisual, you can use my gavl library for the audio handling:

http://cvs.sourceforge.net/viewcvs.py/gmerlin/gavl/include/gavl/gavl.h?rev=1.24&view=log

It supports the following formats:

- Arbitrary samplerates
- 8, 16 and 32 bit integer and floating point
- 3 different interleave modes: None, All and interleaved pairs of
  channels (e.g. for 5.1 playback on 3 stereo devices)
- Container structs for audio format and audio frame
- Conversion between ALL supported formats, including resampling at
  several quality levels (using libsamplerate) and audio dithering

Format definition looks like this:

/* Sample formats: all multibyte numbers are native endian */
typedef enum
{
    GAVL_SAMPLE_NONE  = 0,
    GAVL_SAMPLE_U8    = 1,
    GAVL_SAMPLE_S8    = 2,
    GAVL_SAMPLE_U16   = 3,
    GAVL_SAMPLE_S16   = 4,
    GAVL_SAMPLE_S32   = 5,
    GAVL_SAMPLE_FLOAT = 6
} gavl_sample_format_t;

/* Interleave modes */
typedef enum
{
    GAVL_INTERLEAVE_NONE = 0, /* No interleaving, all channels separate */
    GAVL_INTERLEAVE_2    = 1, /* Interleaved pairs of channels          */
    GAVL_INTERLEAVE_ALL  = 2  /* Everything interleaved                 */
} gavl_interleave_mode_t;

/*
 * Audio channel setup: This can be used with
 * AC3 decoders to support all speaker configurations
 */
typedef enum
{
    GAVL_CHANNEL_NONE   = 0,
    GAVL_CHANNEL_MONO   = 1,
    GAVL_CHANNEL_STEREO = 2, /* 2 front channels (stereo or dual channels) */
    GAVL_CHANNEL_3F     = 3,
    GAVL_CHANNEL_2F1R   = 4,
    GAVL_CHANNEL_3F1R   = 5,
    GAVL_CHANNEL_2F2R   = 6,
    GAVL_CHANNEL_3F2R   = 7
} gavl_channel_setup_t;

/* Channel IDs */
typedef enum
{
    GAVL_CHID_NONE = 0,
    GAVL_CHID_FRONT,
    GAVL_CHID_FRONT_LEFT,
    GAVL_CHID_FRONT_RIGHT,
    GAVL_CHID_FRONT_CENTER,
    GAVL_CHID_REAR,
    GAVL_CHID_REAR_LEFT,
    GAVL_CHID_REAR_RIGHT,
    GAVL_CHID_LFE
} gavl_channel_id_t;

/* Structure describing an audio format */
typedef struct gavl_audio_format_s
{
    int samples_per_frame; /* Maximum number of samples per frame */
    int samplerate;
    int num_channels;
    gavl_sample_format_t   sample_format;
    gavl_interleave_mode_t interleave_mode;
    gavl_channel_setup_t   channel_setup;
    int lfe;            /* Low frequency effect channel present */
    float center_level; /* linear factor for mixing center to front */
    float rear_level;   /* linear factor for mixing rear to front */
    /* Which channel is stored where */
    gavl_channel_id_t channel_locations[GAVL_MAX_CHANNELS];
} gavl_audio_format_t;

The audio frame then looks like this:

typedef union gavl_audio_samples_u
{
    uint8_t  * u_8;
    int8_t   * s_8;
    uint16_t * u_16;
    int16_t  * s_16;
    uint32_t * u_32;
    int32_t  * s_32;
    float    * f;
} gavl_audio_samples_t;

/* Container for noninterleaved audio channels */
typedef union gavl_audio_channels_u
{
    uint8_t  * u_8[GAVL_MAX_CHANNELS];
    int8_t   * s_8[GAVL_MAX_CHANNELS];
    uint16_t * u_16[GAVL_MAX_CHANNELS];
    int16_t  * s_16[GAVL_MAX_CHANNELS];
    uint32_t * u_32[GAVL_MAX_CHANNELS];
    int32_t  * s_32[GAVL_MAX_CHANNELS];
    float    * f[GAVL_MAX_CHANNELS];
} gavl_audio_channels_t;

/* Audio frame */
typedef struct gavl_audio_frame_s
{
    gavl_audio_samples_t  samples;
    gavl_audio_channels_t channels;
    int valid_samples; /* Real number of samples */
} gavl_audio_frame_t;

If some source and destination modules have different formats, just fire up a gavl_audio_converter_t and it will do the conversion. Furthermore, there are functions for allocating/freeing audio frames (with memory alignment) as well as a copy function, which lets you easily create an audio buffer.

I'm using this in my own projects for quite a time now and I never got a format which couldn't be handled by gavl. The only thing which might eventually be missing is 7.1 (2 side channels), but this could be added as well if needed.

I think it would provide greater flexibility than the VisAudioSoundImportType enum :-)

Whether or not the plugins have restrictions concerning the formats is another question. The 512 samples in each frame seem to be hardcoded in most plugins and I don't know if this number must become variable.

Concerning the samplerate, most material is 44100 but there are also 48000 streams (e.g. from Music DVDs) or 22050 (used by some web radio stations). So you could let the plugins handle all these rates or use gavl to resample everything to 44100.

--
_____________________________
Dr.-Ing. Burkhard Plaum
Institut fuer Plasmaforschung
Pfaffenwaldring 31
70569 Stuttgart
Tel.: +49 711 685-2187 Fax.: -3102
|
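As a rough illustration of the short-to-float conversion the proposal asks for (and which a converter like gavl's performs internally), here is a minimal sketch. It is my own code, not gavl's API: interleaved signed 16-bit stereo in, one float array per channel out, scaled to -1.0 .. 1.0:

```c
#include <assert.h>
#include <stdint.h>

/* De-interleave stereo S16 PCM into per-channel floats in -1.0 .. 1.0.
 * `frames` counts sample frames, so `in` holds 2 * frames shorts.
 * Dividing by 32768 maps INT16_MIN exactly to -1.0; the positive end
 * tops out just below 1.0 (32767 / 32768). */
static void pcm_s16_to_float(const int16_t *in,
                             float *left, float *right, int frames)
{
    int i;
    for (i = 0; i < frames; i++) {
        left[i]  = in[2 * i]     / 32768.0f;
        right[i] = in[2 * i + 1] / 32768.0f;
    }
}
```

Handling the other sample formats and interleave modes listed above is the same pattern repeated per format, which is exactly the combinatorial work a conversion library saves.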
From: Dennis S. <sy...@yo...> - 2005-03-07 20:16:16
|
The module libvisual-hackground has been checked into CVS. It should function as a universal playground for the libvisual hackers. Play around in this module; add your tests, tryouts and examples, and have fun. The code in this module is allowed to be hacky, of low quality, whatever; just see it as a playground/hackground.

I've added some benchmarks and tests. Take a look!

Cheers, Dennis
|
From: Dennis S. <sy...@yo...> - 2005-03-07 19:50:31
|
Heya everyone.

I wrote the first version of the new VisAudio API proposal.

It's going to be a complete overhaul, and thus should be done right.

PLEASE read this document, and comment upon it! Please confirm whether it's good or not. There are some open questions; discuss these. I want to set this in stone, so we can start on it :)

Cheers, Dennis

The document:
====================================================================================

This document describes the initial VisAudio overhaul design.

Version 0.1

Dennis Smit <sy...@yo...>
Libvisual project leader

_____________________________________________________________________________________

Goals:
    Ability to mix channels.
    Audio energy registration, calculation (already done).
    Normalized spectrum (logified).
    Internally, use floats to represent audio (from 0.0 to 1.0 or -1.0 to 1.0 ?).
    Use floats to represent the spectrum (from 0.0 to 1.0).
    Have conversion code to go from short audio to float.
    Basic conversion for the frequency (Hz) of the input.
    BPM detection:
        http://www.gamedev.net/reference/programming/features/beatdetection/page2.asp
        Provide 6 vars to directly detect hits on the six bands.

Open questions:
    How do we want to do the number of samples ? (aka the length of the buffer)
    How to handle multiple channels, channel mixing; how to handle 5.1, 7.1, 9.1 etc ?

Internal audio representation:
    In floats.
    Fixed frequency ? (I personally prefer this)
    How large do we want our buffers ?

API:

enum {
    VISUAL_AUDIO_FROM_SIGNED_96000,
    VISUAL_AUDIO_FROM_SIGNED_48000,
    VISUAL_AUDIO_FROM_SIGNED_44100,
    VISUAL_AUDIO_FROM_SIGNED_32000,
    VISUAL_AUDIO_FROM_SIGNED_22500,
    VISUAL_AUDIO_FROM_SIGNED_11250,
    VISUAL_AUDIO_FROM_SIGNED_8000,
    VISUAL_AUDIO_FROM_UNSIGNED_96000,
    VISUAL_AUDIO_FROM_UNSIGNED_48000,
    VISUAL_AUDIO_FROM_UNSIGNED_44100,
    VISUAL_AUDIO_FROM_UNSIGNED_32000,
    VISUAL_AUDIO_FROM_UNSIGNED_22500,
    VISUAL_AUDIO_FROM_UNSIGNED_11250,
    VISUAL_AUDIO_FROM_UNSIGNED_8000
} VisAudioSoundImportType;

VisAudio {
    VisObject object;

    float sound[3][2048];
    float spectrum[3][256];
    float spectrum_normalized[3][256];
    float energy;

    VisFFTState *fft_state;
    VisAudioBeat *bpm;
};

VisAudioBeat {
    VisObject object;

    ... history[1024][6];
    ... energy[6];
    int bpm;
    int beat;
    int beat_channels[6]; /* Set the specific channel on a beat */
    int active_channel;
    float accuracy;
};

/* Prototypes */
VisAudio *visual_audio_new (void);
int visual_audio_analyze (VisAudio *audio);
int visual_audio_sound_import (VisAudio *audio, VisAudioSoundImportType type, void *sound_buffer, int buffer_length);

VisAudioBeat *visual_audio_beat_new (void);
int visual_audio_beat_analyse (VisAudio *audio);
|
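The gamedev article linked under "BPM detection" boils down to comparing the newest buffer's energy against roughly one second of energy history and flagging a beat when it clearly exceeds the local average (the same scheme, run per frequency band, would drive the six per-band hit vars). A minimal single-band sketch; struct names and the 1.3 threshold are illustrative, not the proposed API:

```c
#include <assert.h>

#define ENERGY_HISTORY 43 /* ~1 sec of 1024-sample buffers at 44100 Hz */

/* Ring buffer of recent per-buffer energies. */
typedef struct {
    float history[ENERGY_HISTORY];
    int   filled;
    int   pos;
} beat_state;

/* Average of the energies currently held in the history. */
static float average_energy(const beat_state *s)
{
    float sum = 0.0f;
    int i;
    for (i = 0; i < s->filled; i++)
        sum += s->history[i];
    return s->filled > 0 ? sum / s->filled : 0.0f;
}

/* Push a new buffer energy; returns 1 if it counts as a beat.
 * No beat is reported until a full second of history exists. */
static int beat_push(beat_state *s, float energy)
{
    int beat = (s->filled == ENERGY_HISTORY) &&
               (energy > 1.3f * average_energy(s));

    s->history[s->pos] = energy;
    s->pos = (s->pos + 1) % ENERGY_HISTORY;
    if (s->filled < ENERGY_HISTORY)
        s->filled++;
    return beat;
}
```

The article additionally derives the threshold from the energy variance instead of a fixed constant, which would be the natural next refinement.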
From: Dennis S. <sy...@yo...> - 2005-03-07 19:07:54
|
On Thu, 2005-03-03 at 14:17 +0100, Burkhard Plaum wrote:
> Hi all,
>
> I checked the libvisual CVS to estimate what's necessary to port
> lemuria to libvisual.
>
> Lemuria has 2 variables, which trigger some changes of the animations:
>
> quiet: Set to 1 if silence began in these 512 samples, 0 else
> beat_detected: Set to 1 if a beat was detected in these 512 samples, 0 else
>
> I have my own routine for computing these values, but I borrowed it from
> blursk some years ago and I think it's not the best possible beat detection.
> In fact I have one song with lots of beats, but lemuria detects
> not a single one of these :-)
>
> From what I see, they could go into the _VisAudio struct and could be
> computed by visual_audio_analyze().

Yep, totally agree. Please clarify the 'quiet' stuff a bit more, btw :)

I am currently writing the proposal for the VisAudio rewrite; I will post it later in the evening. Please take a look at it :)

> Are there any plans to detect such "audio events" in a generic way, or
> should each plugin handle them itself?

Depends. We want to provide good detection, but of course some plugins will need something very specific that isn't in the library. We try to implement generalized things in the lib.

> If someone's interested, I can post my beat detection stuff here.

Please do so!
|
From: Burkhard P. <pl...@ip...> - 2005-03-03 13:13:16
|
Hi all,

I checked the libvisual CVS to estimate what's necessary to port lemuria to libvisual.

Lemuria has 2 variables, which trigger some changes of the animations:

quiet: Set to 1 if silence began in these 512 samples, 0 else
beat_detected: Set to 1 if a beat was detected in these 512 samples, 0 else

I have my own routine for computing these values, but I borrowed it from blursk some years ago and I think it's not the best possible beat detection. In fact I have one song with lots of beats, but lemuria detects not a single one of them :-)

From what I see, they could go into the _VisAudio struct and could be computed by visual_audio_analyze().

Are there any plans to detect such "audio events" in a generic way, or should each plugin handle them itself?

If someone's interested, I can post my beat detection stuff here.

Cheers
Burkhard
|
From: Dennis S. <sy...@yo...> - 2005-03-01 22:47:39
|
Yay to Amarok! Congrats mates.

On Tue, 2005-03-01 at 23:32 +0100, Mark Kretschmann wrote:
> The amaroK team announces version 1.2.1 of the amaroK audio player
>
> Changes relative to version 1.2:
>
> FIX: Made the Tag-Editor only operate on visible items. (BR 100268)
> ADD: Database settings added to the first-run wizard.
> FIX: playlist2html generates UTF-8 output now. (BR 100140)
> FIX: Bitrate/length showed random values for untagged mp3 files. (BR 100200)
> FIX: Crash when recoding stream MetaData without CODEC selected. (BR 100077)
> CHG: Show an additional "Compilations with Artist" box in ContextBrowser.
> ADD: Remember collapse-state of boxes in ContextBrowser. (BR 98664)
> ADD: Display an error when unable to connect to MySQL.
> ADD: Konqueror Sidebar now has full drag and drop support.
> CHG: Replaced "Blue Wolf" icon with Nenad Grujicic's amaroK 1.1
>      icon, due to legal issues.
> ADD: Parameter "%score" shows the current song's score in OSD.
> CHG: When you delete a song within amaroK, it gets removed from
>      the Collection automatically.
> FIX: Directory column in the playlist was eating the first letter.
> ADD: New DCOP call "playlist: setStopAfterCurrent(bool)". (BR 99944)
> FIX: Coverfetcher: Do not crash when no cover was found. (BR 99942)
> FIX: Support for amazon.co.jp was broken.
> CHG: Toolbar items reordered for optimal usability, as suggested by
>      Aaron "Tom Green" Seigo.
> FIX: Show covers for albums containing chars '#' or '?'. (BR 96971 99780)
> ADD: Help file for the playlist2html script.
> ADD: New DCOP call "playlist: int getActiveIndex()".
> ADD: New DCOP call "playlist: playByIndex(int)".
> CHG: Upgraded internal SQLite database to 3.1.3.
> FIX: Update the database after editing tags in playlist. (BR 99593)
> ADD: New DCOP function "player: trackPlayCounter". (BR 99575)
> ADD: .ram playlist support with code from Kaffeine. (BR 96101)
> FIX: amaroK can now determine the correct track-length even for formats
>      unknown to TagLib. Makes it possible to seek e.g. in m4a tracks.
> ADD: Can now pick from multiple Musicbrainz results. Patch from
>      Jonathan Halcrow <gt...@pr...>. (BR 89701)
> ADD: May now set a custom cover on multiple albums in the Cover-Manager.
> ADD: Support relative path of tracks in writing playlists. (BR 91053)
> FIX: Don't inline-edit tags for the whole playlist's selection.
> FIX: Fix "Recode Tags" crash issues. (BR 95041)
> ADD: "Set Custom Cover" can fetch remote images. (BR 90499)
>
> The amaroK team
> ---------------
>
> amaroK is a soundsystem-independent audio-player for *nix. Its interface uses
> a powerful "browser" metaphor that allows you to create playlists that make
> the most of your music collection. We have a fast development-cycle and
> super-happy users. We also provide pensions and other employment-benefits.
>
> "Easily the best media-player for Linux at the moment. Install it now!"
> - Linux Format Magazine
>
> WWW: http://amarok.kde.org
> WIKI: http://amarok.kde.org/wiki/
> IRC: irc.freenode.net #amarok
> MAIL: ama...@li...
|
From: Mark K. <ma...@we...> - 2005-03-01 21:02:22
|
The amaroK team announces version 1.2.1 of the amaroK audio player Changes relative to version 1.2: FIX: Made the Tag-Editor only operate on visible items. (BR 100268) ADD: Database settings added to the first-run wizard. FIX: playlist2html generates UTF-8 output now. (BR 100140) FIX: Bitrate/length showed random values for untagged mp3 files. (BR 100200) FIX: Crash when recoding stream MetaData without CODEC selected. (BR 100077) CHG: Show an additional "Compilations with Artist" box in ContextBrowser. ADD: Remember collapse-state of boxes in ContextBrowser. (BR 98664) ADD: Display an error when unable to connect to MySQL. ADD: Konqueror Sidebar now has full drag and drop support. CHG: Replaced "Blue Wolf" icon with Nenad Grujicic's amaroK 1.1 icon, due to legal issues. ADD: Parameter "%score" shows the current song's score in OSD. CHG: When you delete a song within amaroK, it gets removed from the Collection automatically. FIX: Directory column in the playlist was eating the first letter. ADD: New DCOP call "playlist: setStopAfterCurrent(bool)". (BR 99944) FIX: Coverfetcher: Do not crash when no cover was found. (BR 99942) FIX: Support for amazon.co.jp was broken. CHG: Toolbar items reordered for optimal usability, as suggested by Aaron "Tom Green" Seigo. FIX: Show covers for albums containing chars '#' or '?'. (BR 96971 99780) ADD: Help file for the playlist2html script. ADD: New DCOP call "playlist: int getActiveIndex()". ADD: New DCOP call "playlist: playByIndex(int)". CHG: Upgraded internal SQLite database to 3.1.3. FIX: Update the database after editing tags in playlist. (BR 99593) ADD: New DCOP function "player: trackPlayCounter". (BR 99575) ADD: .ram playlist support with code from Kaffeine. (BR 96101) FIX: amaroK can now determine the correct track-length even for formats unknown to TagLib. Makes it possible to seek e.g. in m4a tracks. ADD: Can now pick from multiple Musicbrainz results. Patch from Jonathan Halcrow <gt...@pr...>. 
(BR 89701) ADD: May now set a custom cover on multiple albums in the Cover-Manager. ADD: Support relative path of tracks in writing playlists. (BR 91053) FIX: Don't inline-edit tags for the whole playlist's selection. FIX: Fix "Recode Tags" crash issues. (BR 95041) ADD: "Set Custom Cover" can fetch remote images. (BR 90499) The amaroK team --------------- amaroK is a soundsystem-independent audio-player for *nix. Its interface uses a powerful "browser" metaphor that allows you to create playlists that make the most of your music collection. We have a fast development-cycle and super-happy users. We also provide pensions and other employment-benefits. "Easily the best media-player for Linux at the moment. Install it now!" - Linux Format Magazine WWW: http://amarok.kde.org WIKI: http://amarok.kde.org/wiki/ IRC: irc.freenode.net #amarok MAIL: ama...@li... |
From: Dennis S. <sy...@yo...> - 2005-03-01 12:37:12
|
On Tue, 2005-03-01 at 04:29 +0100, Jon Øyvind Kjellman wrote: > Thanks, alsa input works ok now, although still upside down (I only > noticed it on the G-Force text.) What am I doing wrong? I am not sure, but I see Vitaly using the following within lvdisplay's glx driver: glMatrixMode(GL_PROJECTION); glPixelZoom(1.0f, -1.0f); glRasterPos2f(-1.0f, 1.0f); glDrawPixels(.... Regarding lvdisplay, it's our in-development display framework :) > In my project I want the option to choose random plug-ins. I started > hacking some nasty code to switch plug-in, but noticed > visual_bin_switch_actor_by_name(). How does this work? Will it create a > new actor? ... destroy the old? ... preserve rundepth (8bit -> 24bit) set > during visual_actor_video_negotiate()? When using a bin, you should use it 'managed'; it supports fading between plugins, however there are some serious problems with it. The whole VisBin will be replaced within this devel cycle to make place for VisPipeline; we will still have a VisBin, but that would serve as an abstract facade to the VisPipeline stuff. I think it's wiser to deal with the actor yourself for now, and sit there till the new code arrives; if you decide otherwise, check the xmms and bmp plugins out. Also the 'morph' example (which is outdated[tm]) might provide some basic information. If it doesn't work out, ask again :) > Not entirely on the topic of libvisual, but in my project I'm rendering > the plug-in in the background with some other things in front of it. > However when I tried with the OpenGL based plug-ins it is a royal mess. It > displays, but there are lots of strange artifacts. I'm pretty sure it's > due to me messing up OpenGL buffers and attributes the plug-in uses and it > messing it up for me. 
The question is (besides any good ideas on how to do > this in a working fashion): are OpenGL plug-ins allowed everything (by > convention) or do they adhere to some rules which may make my life simpler? I am pretty sure this is possible, we had a discussion on this before. AND we want to support blit overlaying in GL, however honestly, I am no GL master. I think Vitaly has to jump in here and help you. So, Vitaly, or any other OpenGL know-it-all, help him out! :) > Is anyone working on a xine-lib input plug-in? Nope, and this would be GREATLY appreciated, also native xine support ;) so they can use lv to render libvisual :) > > I am very interested in this, I want to work up our state of docs, I > > also want to start up a wiki as a source of information and such. > > I've started writing a small tutorial and I'll 'bomb' the example with > relevant comments. I'll post it for review when it's done. Sounds superb, in what format are you writing the tutorial? I would love a docbook, if this is not too hard :) (anything else would be cool of course) > > I understand that you're working on some car multimedia project? Do > > you have urls regarding this, I am very interested in all this! :) > > No urls at the moment. It isn't release worthy yet. But it'll be > open-sourced and I'll post something here when I unleash it on the net > (shouldn't be many weeks.) Ok keep us updated, this is super! Cheers, Dennis |
From: <jo...@so...> - 2005-03-01 03:21:54
|
Thanks, alsa input works ok now, although still upside down (I only noticed it on the G-Force text.) What am I doing wrong? In my project I want the option to choose random plug-ins. I started hacking some nasty code to switch plug-in, but noticed visual_bin_switch_actor_by_name(). How does this work? Will it create a new actor? ... destroy the old? ... preserve rundepth (8bit -> 24bit) set during visual_actor_video_negotiate()? Not entirely on the topic of libvisual, but in my project I'm rendering the plug-in in the background with some other things in front of it. However when I tried with the OpenGL based plug-ins it is a royal mess. It displays, but there are lots of strange artifacts. I'm pretty sure it's due to me messing up OpenGL buffers and attributes the plug-in uses and it messing it up for me. The question is (besides any good ideas on how to do this in a working fashion): are OpenGL plug-ins allowed everything (by convention) or do they adhere to some rules which may make my life simpler? Is anyone working on a xine-lib input plug-in? >> If you're interested, when I get it to work properly I could create a >> short tutorial. However it would need to be reviewed as there probably >> would be some mistakes. > > I am very interested in this, I want to work up our state of docs, I > also want to startup a wiki as a source of information and the such. I've started writing a small tutorial and I'll 'bomb' the example with relevant comments. I'll post it for review when it's done. > I understand that you're working on some car multimedia project ? do > you have urls regarding this, I am very interested about all this! :) No urls at the moment. It isn't release worthy yet. But it'll be open-sourced and I'll post something here when I unleash it on the net (shouldn't be many weeks.) best, Jon Øyvind |
From: Dennis S. <sy...@yo...> - 2005-02-28 21:10:19
|
On Sun, 2005-02-27 at 02:46 +0100, Jon Øyvind Kjellman wrote: > Thanks for the advice, I created a small test program that runs, though I > have some issues: > > 1. The program displays, but doesn't react to the music using alsa input. > mplayer plug-in fails with: > > libvisual ERROR: ./example: inp_mplayer_init(): \ > > Could not open file '/home/jon/.mplayer/mplayer-af_export': No such file > > or directory You have to open the alsa capture gain, using alsamixer (this is incredibly lame, I agree). (it works perfectly with alsa here) (your example that is) > I'm unable to test with others. The examples from CVS failed similarly > 1. When a plug-in is 8-bit, is it always palette based or can it be gray > scale? How do I determine? It's palette based, period :) You retrieve the palette from a VisActor using VisPalette *visual_actor_get_palette (VisActor *actor). > 2. Take a look at display(), are my assumptions about color representation > good? Do plug-ins use an alpha channel? The framework uses alpha channels for overlaying and such, but plugins themselves are (right now) not actively using alpha channels. I think the color ordering seems right, though I am not sure how it would hold up on systems with a different endianness. Though you draw the frame upside down :) (Just pointing out) > 3. Why doesn't visual_actor_new("foobar") return NULL if "foobar" doesn't > exist? It returns a plugin-less VisActor. VisActor is just a facade for the underlying VisActorPlugin stuff, and it helps you with video negotiation. You can check for a plugin's existence by doing: int visual_actor_valid_by_name (const char *name) > 4. Looking at the source. Why does the for-loop printing input names work > (commented out), but the one printing actor names SIGSEGV? 
for(i = visual_actor_get_list()->head; i; i = i->next) printf("%s\n", ((VisPluginRef*)i->data)->info->plugname); Use that, just like with the input_get_list; it's a filtered list from the global plugin registry (VisPluginRefs) list. > A last observation. There exists a function visual_video_new_with_buffer() > which will create a VisVideo object and allocate a buffer. However I have > found no way to re-size the VisVideo object which will also re-size the > buffer. I have to do this with visual_video_allocate_buffer() (btw. the > docs don't mention that this _reallocates_ under certain circumstances.) > Personally, IMHO, I find it a bit inconsistent, the object does keep track > of whether or not it's in charge of buffer resizing and could very well > perform the re-size within visual_video_set_dimension(). After a resize, you can safely call visual_video_allocate_buffer; this will free the old one if it was allocated internally (by new_with_buffer for example), and after that allocate a new buffer. If you manage your own allocated buffer, don't use this function but manage the allocation yourself. > If you're interested, when I get it to work properly I could create a > short tutorial. However it would need to be reviewed as there probably > would be some mistakes. I am very interested in this, I want to work up our state of docs, I also want to start up a wiki as a source of information and such. I understand that you're working on some car multimedia project? Do you have urls regarding this, I am very interested in all this! :) > Best, > Jon Øyvind Kjellman Cheers, Dennis |
From: Dennis S. <sy...@yo...> - 2005-02-28 20:48:57
|
I added a flag to the visual_plugin_get_list function to enable or disable checking for the directories' existence. On Wed, 2005-02-23 at 13:34 -0500, Duilio J. Protti wrote: > For simplicity, it's enough to create the transform dir by hand. I have > had that problem too. It seems to be a small problem in the installation > scripts, or a previously installed libvisual. We will fix that soon. > > Bye, > Duilio, a libvisual developer. |
From: Dennis S. <sy...@yo...> - 2005-02-28 20:42:59
|
Thanks a lot for checking it out! Updated in CVS. > For FreeBSD I guess it should work, see > http://www.freebsd.org/cgi/man.cgi?query=sysctl%283%29&apropos=0&sektion=0&manpath=FreeBSD+5.3-RELEASE+and+Ports&format=html > > OpenBSD also seems to have it: > http://www.openbsd.org/cgi-bin/man.cgi?query=sysctl&apropos=0&sektion=0&manpath=OpenBSD+Current&arch=i386&format=html |
From: Dennis S. <sy...@yo...> - 2005-02-28 17:30:25
|
On Sun, 2005-02-27 at 18:01 +0100, Eelco Schijf wrote: > Hi, > > I just started playing around with libvisual in an attempt to get the hang of > it. So far there's been one thing I haven't gotten to work: detecting a new > song. > To spot a new song I figured that the events function would be the best place > for this. Check the event queue and see if there's a VISUAL_EVENT_NEWSONG > type in the queue. However, nothing happens (tested it with amarok and xmms), > there is no event indicating a new song started playing. What could be the > problem? Is this a libvisual problem or isn't it implemented in the player(s)? > I've pasted the events function below for reference. Amarok does not support this but beep-media-player and xmms should work fine. > Btw, as you can see there's also the VISUAL_EVENT_RESIZE event, which works > fine. > > int liquidmp3_events (VisPluginData *plugin, VisEventQueue *events) > { > VisEvent ev; > VisSongInfo *songinfo; > > // retrieve songinfo > songinfo = visual_plugin_actor_get_songinfo( VISUAL_PLUGIN_ACTOR > (visual_plugin_get_specific(plugin)) ); > > while (visual_event_queue_poll (events, &ev)) { > switch (ev.type) { > case VISUAL_EVENT_RESIZE: > liquidmp3_dimension (plugin, ev.resize.video, > ev.resize.width, ev.resize.height); > break; > > case VISUAL_EVENT_NEWSONG: > printf( "New song detected (%s)", songinfo->song ); > break; > > default: // to avoid warnings > break; > } > } > > return 0; > } > This should actually work, though I advise you to retrieve the VisSongInfo from the VisEvent. struct _VisEventNewSong { VisObject object; /**< The VisObject data. */ VisEventType type; /**< Event type of the event being emitted. */ VisSongInfo *songinfo; /**< Pointer to the VisSongInfo structure containing all the information about * the new song. */ }; so that would end up being "event.newsong.songinfo;". Though this does not explain why it's not detecting new songs; could you send me your code so I can have a try with it :) Good luck, Dennis |
From: Eelco S. <ee...@sc...> - 2005-02-27 17:01:15
|
Hi, I just started playing around with libvisual in an attempt to get the hang of it. So far there's been one thing I haven't gotten to work: detecting a new song. To spot a new song I figured that the events function would be the best place for this. Check the event queue and see if there's a VISUAL_EVENT_NEWSONG type in the queue. However, nothing happens (tested it with amarok and xmms), there is no event indicating a new song started playing. What could be the problem? Is this a libvisual problem or isn't it implemented in the player(s)? I've pasted the events function below for reference. Kind regards, Eelco Schijf Btw, as you can see there's also the VISUAL_EVENT_RESIZE event, which works fine. int liquidmp3_events (VisPluginData *plugin, VisEventQueue *events) { VisEvent ev; VisSongInfo *songinfo; // retrieve songinfo songinfo = visual_plugin_actor_get_songinfo( VISUAL_PLUGIN_ACTOR (visual_plugin_get_specific(plugin)) ); while (visual_event_queue_poll (events, &ev)) { switch (ev.type) { case VISUAL_EVENT_RESIZE: liquidmp3_dimension (plugin, ev.resize.video, ev.resize.width, ev.resize.height); break; case VISUAL_EVENT_NEWSONG: printf( "New song detected (%s)", songinfo->song ); break; default: // to avoid warnings break; } } return 0; } |
From: <jo...@so...> - 2005-02-27 01:39:25
|
Thanks for the advice, I created a small test program that runs, though I have some issues: 1. The program displays, but doesn't react to the music using alsa input. mplayer plug-in fails with: > libvisual ERROR: ./example: inp_mplayer_init(): \ > Could not open file '/home/jon/.mplayer/mplayer-af_export': No such file > or directory I'm unable to test with others. The examples from CVS failed similarly 1. When a plug-in is 8-bit, is it always palette based or can it be gray scale? How do I determine? 2. Take a look at display(), are my assumptions about color representation good? Do plug-ins use an alpha channel? 3. Why doesn't visual_actor_new("foobar") return NULL if "foobar" doesn't exist? 4. Looking at the source. Why does the for-loop printing input names work (commented out), but the one printing actor names SIGSEGV? A last observation. There exists a function visual_video_new_with_buffer() which will create a VisVideo object and allocate a buffer. However I have found no way to re-size the VisVideo object which will also re-size the buffer. I have to do this with visual_video_allocate_buffer() (btw. the docs don't mention that this _reallocates_ under certain circumstances.) Personally, IMHO, I find it a bit inconsistent, the object does keep track of whether or not it's in charge of buffer resizing and could very well perform the re-size within visual_video_set_dimension(). Just my $0.02. If you're interested, when I get it to work properly I could create a short tutorial. However it would need to be reviewed as there probably would be some mistakes. 
Best, Jon Øyvind Kjellman Don't know if the list allows attachments so here's the code: makefile: ========= CC = gcc CFLAGS = -g -std=c99 LIBS = -L/usr/X11R6/lib -lpthread -lm -ldl -lvisual -lglut example: example.c $(CC) $(CFLAGS) $(LIBS) -o $@ $< clean: rm example example.c: ========== #include <GL/glut.h> #include <GL/gl.h> #include <libvisual/libvisual.h> #include <string.h> #include <stdlib.h> VisInput* input = NULL; VisActor* actor = NULL; VisVideo* video = NULL; VisBin* bin = NULL; VisVideoDepth depth = VISUAL_VIDEO_DEPTH_ERROR; GLuint window = 0, width = 640, height = 480; int use_gl = 0; /* Standard glut keyboard function. * Kills the program on any keypress. */ void key_func(unsigned char k, int x, int y) { glutDestroyWindow(window); exit(0); } /* Standard glut resize function. */ void resize(int w, int h) { width = w; height = h; glViewport(0, 0, width, height); if(visual_video_set_dimension(video, width, height) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "Unable to resize video."); if(!use_gl) /* visual_video_set_dimension doesn't resize the buffer so ... */ if(visual_video_allocate_buffer(video) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "Unable to resize buffer."); /* This lets actor know about the format change. */ if(visual_actor_video_negotiate(actor, 0, FALSE, FALSE) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "actor/video negotiation failed."); } /* Standard glut display function. */ void display() { glClear(GL_COLOR_BUFFER_BIT); /* This renders. */ visual_bin_run(bin); if(!use_gl) { /* We have to render video's buffer. */ GLenum format = GL_LUMINANCE; /* 1-byte fallback; note GL_R3_G3_B2 is an internal format and not valid for glDrawPixels. Reasonable safety against reads outside of the buffer. 
*/ GLenum type = GL_UNSIGNED_BYTE; switch(visual_video_bpp_from_depth(depth)) { case 4: format = GL_RGBA; break; case 3: format = GL_RGB; break; case 2: format = GL_RGB; type = GL_UNSIGNED_SHORT_5_6_5; break; /* 16-bit is packed 5-6-5 */ /* case 1: defaults */ } glDrawPixels(width, height, format, type, (void*)video->pixels); } glutSwapBuffers(); glutPostRedisplay(); } int main(int argc, char** argv) { /* GLUT initialization */ glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE); glutInitWindowPosition(100, 100); glutInitWindowSize(width, height); window = glutCreateWindow("LibVisual example"); glutDisplayFunc(display); glutReshapeFunc(resize); glutKeyboardFunc(key_func); /* We init libvisual after glut since some plug-ins need * OpenGL during initialization. */ if(visual_init(&argc, &argv) < 0) visual_log(VISUAL_LOG_ERROR, "Couldn't initialize Libvisual."); char* input_name = NULL; char* actor_name = NULL; for(int i = 1; i < argc; ++i) { if(strcmp(argv[i], "--actor") == 0 && i+1 < argc) actor_name = argv[++i]; else if(strcmp(argv[i], "--input") == 0 && i+1 < argc) input_name = argv[++i]; else if(strcmp(argv[i], "--help") == 0) { printf("LibVisual example.\n"); printf("Use --actor and --input flags to specify\nplug-in and source from list below.\n"); printf("\nAvailable inputs:\n"); VisListEntry* i; for(i = visual_input_get_list()->head; i; i = i->next) printf("%s\n", ((VisPluginRef*)i->data)->info->name); printf("\nAvailable actors:\n"); /* for(i = visual_actor_get_list()->head; i; i = i->next) printf("%s\n", ((VisActor*)i->data)->plugin->info->plugname);*/ for(char* name = visual_actor_get_next_by_name_nogl(NULL); name; name = visual_actor_get_next_by_name_nogl(name)) printf("%s\n", name); for(char* name = visual_actor_get_next_by_name_gl(NULL); name; name = visual_actor_get_next_by_name_gl(name)) printf("%s (OpenGL)\n", name); return 0; } } if(!actor_name) actor_name = "oinksie"; /* Default actor */ if(!input_name) input_name = "alsa"; /* Default input */ if((input = visual_input_new(input_name)) == NULL) 
visual_log(VISUAL_LOG_ERROR, "No input loaded."); /* initializes input plug-in */ if(visual_input_realize(input) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "Unable to realize input."); if((actor = visual_actor_new(actor_name)) == NULL) visual_log(VISUAL_LOG_ERROR, "No actor loaded."); /* initializes actor plug-in */ if(visual_actor_realize(actor) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "Unable to realize actor."); depth = visual_actor_get_supported_depth(actor); if(depth == VISUAL_VIDEO_DEPTH_NONE || depth == VISUAL_VIDEO_DEPTH_ERROR) visual_log(VISUAL_LOG_ERROR, "Received fubar depthflag."); if(!(depth & VISUAL_VIDEO_DEPTH_GL)) { if(depth & VISUAL_VIDEO_DEPTH_32BIT) depth &= VISUAL_VIDEO_DEPTH_32BIT; else if(depth & VISUAL_VIDEO_DEPTH_24BIT) depth &= VISUAL_VIDEO_DEPTH_24BIT; else if(depth & VISUAL_VIDEO_DEPTH_16BIT) depth &= VISUAL_VIDEO_DEPTH_16BIT; else if(depth & VISUAL_VIDEO_DEPTH_8BIT) depth &= VISUAL_VIDEO_DEPTH_8BIT; visual_log(VISUAL_LOG_INFO, "Using %d bit framebuffer.", visual_video_bpp_from_depth(depth) * 8); } else { depth &= VISUAL_VIDEO_DEPTH_GL; use_gl = 1; visual_log(VISUAL_LOG_INFO, "Using OpenGL plug-in."); } if(!use_gl) { /* Plugin isn't OpenGL */ video = visual_video_new_with_buffer(width, height, depth); } else { video = visual_video_new(); if(visual_video_set_depth(video, depth) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "Unable to set depth on video."); } if(visual_actor_set_video(actor, video) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "Unable to set video for actor."); /* This lets actor know about the format of video. */ if(visual_actor_video_negotiate(actor, 0, FALSE, FALSE) != VISUAL_OK) visual_log(VISUAL_LOG_ERROR, "actor/video negotiation failed."); /* Associate input with actor. 
*/ bin = visual_bin_new(); visual_bin_connect(bin, actor, input); visual_bin_realize(bin); glEnable(GL_TEXTURE_2D); glMatrixMode(GL_PROJECTION); glLoadIdentity(); if(use_gl) glFrustum(-0.5, 0.5, -0.5, 0.5, 0.1, 10000); else glOrtho(-1, 1, -1, 1, 0.1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); /* Enters the main (Free)GLUT processing loop. */ glutMainLoop(); return 0; } |
From: Dennis S. <sy...@yo...> - 2005-02-26 16:23:13
|
Moin, I think we should start up a documentation and information strategy. We're getting lack-of-information complaints more often recently. Some people manage very well, but they often have extraordinary experience with programming and the concepts of the topic. So for them an API reference is enough. I think we have to do the following: 1. Explain the API reference with more information, and include a start page within that explains the basic setup. 2. Create docbooks that are written in tutorial style, for application development, and plugin development. 3. Start up an online WIKI (using mediawiki software) so we can share thoughts, documents, howtos, and todos over there. (When one comes I plan to only allow edits for people who subscribe, since wiki vandalism is completely cool nowadays) On the object_destroy, it's true but it's best to exclusively always use: visual_object_unref (VISUAL_OBJECT (object)); Only use _destroy if you REALLY know what you're doing :) Cheers, Dennis On Sat, 2005-02-26 at 12:20 -0500, Duilio J. Protti wrote: > It's true, the examples are really out of date, but that's because they > are not included in the distro, and libvisual is under heavy > development, so API changes occur very often. > > visual_bin_destroy() was deprecated when libvisual moved to its new > object model. Now almost every datatype is within an object, so > destruction is done in a generic fashion through visual_object_destroy > (). > > To compile a program which is using libvisual, just pass the result of > `pkg-config libvisual --cflags` to your compiler, and to link against > libvisual, pass the result of `pkg-config libvisual --libs` to the > linker. > > And you are right, we need a tutorial and probably a FAQ too! > > > Bye, > Duilio. |
From: Duilio J. P. <dp...@fc...> - 2005-02-26 15:28:35
|
It's true, the examples are really out of date, but that's because they are not included in the distro, and libvisual is under heavy development, so API changes occur very often. visual_bin_destroy() was deprecated when libvisual moved to its new object model. Now almost every datatype is within an object, so destruction is done in a generic fashion through visual_object_destroy (). To compile a program which is using libvisual, just pass the result of `pkg-config libvisual --cflags` to your compiler, and to link against libvisual, pass the result of `pkg-config libvisual --libs` to the linker. And you are right, we need a tutorial and probably a FAQ too! Bye, Duilio. |
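[Editor's note: Duilio's pkg-config advice, folded into a Makefile fragment in the style of Jon's makefile later in the thread. Target and file names are just examples:]

```make
CFLAGS += $(shell pkg-config --cflags libvisual)
LDLIBS += $(shell pkg-config --libs libvisual)

example: example.c
	$(CC) $(CFLAGS) -o $@ $< $(LDLIBS)
```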
From: <jo...@so...> - 2005-02-25 22:55:19
|
Hep, got the examples to work (at least it compiles) by commenting out those two. Shouldn't be a problem in the example. Jon Øyvind Kjellman On Fri, 25 Feb 2005 23:52:07 +0100, Jon Øyvind Kjellman <jo...@so...> wrote: > Thanks, but I didn't really get very far with that. I still don't > understand what overall design is, or how it works. > > I'm trying to get the examples to compile but it won't link. > visual_bin_destroy and visual_video_free is missing. I grabbed the > examples from CVS, but I'm compiling against 0.2.0 and make ignores the > examples (configure --enable-examples && make). > > I've tried pure CVS too, but autoconf/configure chokes. Thus I have one > feature request: make an introduction text of some sort!! Just cover the > basic, simple example, linker options, a few caveats and the usual > stuff. So that programmers like myself who stumble over this seemingly > great library can get cracking in an hour. I would have written it > myself, had I known how to use the library. > > Best, > Jon Øyvind Kjellman > > On Fri, 25 Feb 2005 14:13:17 +0100, salsaman <sal...@xs...> wrote: > >> Hi Jon, >> you might want to look at my libvis.c, which is a wrapper to >> init/deinit and create a single frame of video: >> >> http://cvs.sourceforge.net/viewcvs.py/lives/lives-plugins/livido-plugins/libvis.c >> >> Regards, >> Gabriel. |
From: <jo...@so...> - 2005-02-25 22:44:48
|
Thanks, but I didn't really get very far with that. I still don't understand what the overall design is, or how it works. I'm trying to get the examples to compile but it won't link. visual_bin_destroy and visual_video_free are missing. I grabbed the examples from CVS, but I'm compiling against 0.2.0 and make ignores the examples (configure --enable-examples && make). I've tried pure CVS too, but autoconf/configure chokes. Thus I have one feature request: make an introduction text of some sort!! Just cover the basic, simple example, linker options, a few caveats and the usual stuff. So that programmers like myself who stumble over this seemingly great library can get cracking in an hour. I would have written it myself, had I known how to use the library. Best, Jon Øyvind Kjellman On Fri, 25 Feb 2005 14:13:17 +0100, salsaman <sal...@xs...> wrote: > Hi Jon, > you might want to look at my libvis.c, which is a wrapper to init/deinit > and create a single frame of video: > > http://cvs.sourceforge.net/viewcvs.py/lives/lives-plugins/livido-plugins/libvis.c > > Regards, > Gabriel. |