From: Dennis S. <sy...@yo...> - 2005-03-07 19:50:31
Heya everyone.

I wrote the first version of the new VisAudio API proposal. It's going to be
a complete overhaul, and thus should be done right.

PLEASE read this document and comment on it! Let me know whether it's good or
not; there are some open questions, so discuss those too. I want to set this
in stone, so we can start on it :)

Cheers, Dennis

The document:
====================================================================================

This document describes the initial VisAudio overhaul design.

Version 0.1

Dennis Smit <sy...@yo...>
Libvisual project leader

_____________________________________________________________________________________

Goals:
    Ability to mix channels.
    Audio energy registration and calculation (already done).
    Normalized spectrum (log scaled).
    Internally, use floats to represent audio (from 0.0 to 1.0, or -1.0 to 1.0?).
    Use floats to represent the spectrum (from 0.0 to 1.0).
    Have conversion code to go from short audio to float.
    Basic conversion for the frequency (Hz) of the input.
    BPM detection:
        http://www.gamedev.net/reference/programming/features/beatdetection/page2.asp
    Provide 6 vars to directly detect hits on the six bands.

Open questions:
    How do we want to do the number of samples? (aka the length of the buffer)
    How do we handle multiple channels and channel mixing, and how do we
    handle 5.1, 7.1, 9.1, etc.?

Internal audio representation:
    In floats.
    Fixed frequency? (I personally prefer this)
    How large do we want our buffers?

API:

typedef enum {
    VISUAL_AUDIO_FROM_SIGNED_96000,
    VISUAL_AUDIO_FROM_SIGNED_48000,
    VISUAL_AUDIO_FROM_SIGNED_44100,
    VISUAL_AUDIO_FROM_SIGNED_32000,
    VISUAL_AUDIO_FROM_SIGNED_22500,
    VISUAL_AUDIO_FROM_SIGNED_11250,
    VISUAL_AUDIO_FROM_SIGNED_8000,
    VISUAL_AUDIO_FROM_UNSIGNED_96000,
    VISUAL_AUDIO_FROM_UNSIGNED_48000,
    VISUAL_AUDIO_FROM_UNSIGNED_44100,
    VISUAL_AUDIO_FROM_UNSIGNED_32000,
    VISUAL_AUDIO_FROM_UNSIGNED_22500,
    VISUAL_AUDIO_FROM_UNSIGNED_11250,
    VISUAL_AUDIO_FROM_UNSIGNED_8000
} VisAudioSoundImportType;

VisAudio {
    VisObject      object;

    float          sound[3][2048];
    float          spectrum[3][256];
    float          spectrum_normalized[3][256];
    float          energy;

    VisFFTState    *fft_state;
    VisAudioBeat   *bpm;
};

VisAudioBeat {
    VisObject      object;

    ...            history[1024][6];
    ...            energy[6];
    int            bpm;
    int            beat;
    int            beat_channels[6];   /* Set the specific channel on a beat */
    int            active_channel;
    float          accuracy;
};

/* Prototypes */
VisAudio *visual_audio_new (void);
int visual_audio_analyze (VisAudio *audio);
int visual_audio_sound_import (VisAudio *audio, VisAudioSoundImportType type,
        void *sound_buffer, int buffer_length);

VisAudioBeat *visual_audio_beat_new (void);
int visual_audio_beat_analyse (VisAudio *audio);
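The goals above ask for conversion code from short (16-bit) audio to float and
leave the target range (0.0..1.0 versus -1.0..1.0) open. A minimal sketch of
what that import path could look like, assuming interleaved signed 16-bit
stereo input and the -1.0..1.0 range; the function name and layout are
illustrative only, not part of the proposal:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: convert interleaved signed 16-bit stereo samples into
 * two separate float channels in the -1.0 .. 1.0 range. Dividing by 32768.0f
 * maps the full int16_t range onto [-1.0, 1.0). */
static void audio_s16_to_float (float *left, float *right,
                                const int16_t *src, size_t frames)
{
    size_t i;

    for (i = 0; i < frames; i++) {
        left[i]  = src[i * 2 + 0] / 32768.0f;
        right[i] = src[i * 2 + 1] / 32768.0f;
    }
}

The same pattern covers the unsigned cases in the enum after recentering
(subtract 32768 for 16-bit input); which float range to standardize on is
exactly the open question listed in the goals.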
From: Burkhard P. <pl...@ip...> - 2005-03-08 12:16:46
Dennis Smit wrote:
> Heya everyone.
>
> I wrote the first version of the new VisAudio API proposal.
>
> It's going to be a complete overhaul, and thus should be done right.
>
> PLEASE read this document and comment on it!

If you are brave enough to add another dependency to libvisual, you can
use my gavl library for the audio handling:

http://cvs.sourceforge.net/viewcvs.py/gmerlin/gavl/include/gavl/gavl.h?rev=1.24&view=log

It supports the following formats:

- Arbitrary samplerates
- 8, 16 and 32 bit integer and floating point
- 3 different interleave modes: none, all, and interleaved pairs of
  channels (e.g. for 5.1 playback on 3 stereo devices)
- Container structs for audio format and audio frame
- Conversion between ALL supported formats, including resampling at
  several quality levels (using libsamplerate) and audio dithering

The format definition looks like this:

/* Sample formats: all multibyte numbers are native endian */
typedef enum
  {
    GAVL_SAMPLE_NONE  = 0,
    GAVL_SAMPLE_U8    = 1,
    GAVL_SAMPLE_S8    = 2,
    GAVL_SAMPLE_U16   = 3,
    GAVL_SAMPLE_S16   = 4,
    GAVL_SAMPLE_S32   = 5,
    GAVL_SAMPLE_FLOAT = 6
  } gavl_sample_format_t;

/* Interleave modes */
typedef enum
  {
    GAVL_INTERLEAVE_NONE = 0, /* No interleaving, all channels separate */
    GAVL_INTERLEAVE_2    = 1, /* Interleaved pairs of channels          */
    GAVL_INTERLEAVE_ALL  = 2  /* Everything interleaved                 */
  } gavl_interleave_mode_t;

/*
 * Audio channel setup: This can be used with
 * AC3 decoders to support all speaker configurations
 */
typedef enum
  {
    GAVL_CHANNEL_NONE   = 0,
    GAVL_CHANNEL_MONO   = 1,
    GAVL_CHANNEL_STEREO = 2, /* 2 Front channels (Stereo or Dual channels) */
    GAVL_CHANNEL_3F     = 3,
    GAVL_CHANNEL_2F1R   = 4,
    GAVL_CHANNEL_3F1R   = 5,
    GAVL_CHANNEL_2F2R   = 6,
    GAVL_CHANNEL_3F2R   = 7
  } gavl_channel_setup_t;

/* Channel IDs */
typedef enum
  {
    GAVL_CHID_NONE = 0,
    GAVL_CHID_FRONT,
    GAVL_CHID_FRONT_LEFT,
    GAVL_CHID_FRONT_RIGHT,
    GAVL_CHID_FRONT_CENTER,
    GAVL_CHID_REAR,
    GAVL_CHID_REAR_LEFT,
    GAVL_CHID_REAR_RIGHT,
    GAVL_CHID_LFE
  } gavl_channel_id_t;

/* Structure describing an audio format */
typedef struct gavl_audio_format_s
  {
  int samples_per_frame;  /* Maximum number of samples per frame */
  int samplerate;
  int num_channels;
  gavl_sample_format_t   sample_format;
  gavl_interleave_mode_t interleave_mode;
  gavl_channel_setup_t   channel_setup;

  int lfe;            /* Low frequency effect channel present     */
  float center_level; /* linear factor for mixing center to front */
  float rear_level;   /* linear factor for mixing rear to front   */

  /* Which channel is stored where */
  gavl_channel_id_t channel_locations[GAVL_MAX_CHANNELS];
  } gavl_audio_format_t;

The audio frame then looks like this:

typedef union gavl_audio_samples_u
  {
  uint8_t  * u_8;
  int8_t   * s_8;
  uint16_t * u_16;
  int16_t  * s_16;
  uint32_t * u_32;
  int32_t  * s_32;
  float    * f;
  } gavl_audio_samples_t;

/* Container for noninterleaved audio channels */
typedef union gavl_audio_channels_u
  {
  uint8_t  * u_8[GAVL_MAX_CHANNELS];
  int8_t   * s_8[GAVL_MAX_CHANNELS];
  uint16_t * u_16[GAVL_MAX_CHANNELS];
  int16_t  * s_16[GAVL_MAX_CHANNELS];
  uint32_t * u_32[GAVL_MAX_CHANNELS];
  int32_t  * s_32[GAVL_MAX_CHANNELS];
  float    * f[GAVL_MAX_CHANNELS];
  } gavl_audio_channels_t;

/* Audio frame */
typedef struct gavl_audio_frame_s
  {
  gavl_audio_samples_t  samples;
  gavl_audio_channels_t channels;
  int valid_samples;  /* Real number of samples */
  } gavl_audio_frame_t;

If some source and destination module have different formats, just fire
up a gavl_audio_converter_t, and it will do the conversion. Furthermore,
there are functions for allocating/freeing audio frames (with memory
alignment) as well as a copy function, which lets you easily create an
audio buffer.

I have been using this in my own projects for quite some time now, and I
have never come across a format that couldn't be handled by gavl. The
only thing that might be missing is 7.1 (2 side channels), but this
could be added as well if needed.

I think it would provide greater flexibility than the
VisAudioSoundImportType enum :-)

Whether or not the plugins have restrictions concerning the formats is
another question. The 512 samples in each frame seem to be hardcoded in
most plugins, and I don't know if this number must become variable.

Concerning the samplerate, most material is 44100, but there are also
48000 streams (e.g. from Music DVDs) or 22050 (used by some web radio
stations). So you could either let the plugins handle all these rates,
or use gavl to resample everything to 44100.

--
_____________________________
Dr.-Ing. Burkhard Plaum
Institut fuer Plasmaforschung
Pfaffenwaldring 31
70569 Stuttgart
Tel.: +49 711 685-2187
Fax.: -3102
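For orientation, a rough sketch of how such a conversion might be driven from
libvisual's side. The format fields come from the header quoted above, but the
converter calls (gavl_audio_converter_create, gavl_audio_converter_init,
gavl_audio_convert) are written from memory of gavl's usual
create/init/convert lifecycle; their exact signatures should be checked
against gavl.h before relying on this:

#include <gavl/gavl.h>

/* Sketch: convert whatever the input side delivers into the one fixed format
 * the actors would consume (here 44100 Hz, stereo, float, non-interleaved).
 * The converter calls below are assumptions; verify them against gavl.h. */
static gavl_audio_converter_t *setup_converter (const gavl_audio_format_t *in_format,
                                                gavl_audio_format_t *out_format)
{
  gavl_audio_converter_t *cnv;

  out_format->samples_per_frame = in_format->samples_per_frame;
  out_format->samplerate        = 44100;
  out_format->num_channels      = 2;
  out_format->sample_format     = GAVL_SAMPLE_FLOAT;
  out_format->interleave_mode   = GAVL_INTERLEAVE_NONE;
  out_format->channel_setup     = GAVL_CHANNEL_STEREO;
  out_format->lfe               = 0;
  /* center_level, rear_level and channel_locations left out for brevity */

  cnv = gavl_audio_converter_create ();
  gavl_audio_converter_init (cnv, in_format, out_format);

  return cnv;
}

/* Per incoming block: gavl_audio_convert (cnv, in_frame, out_frame); */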
From: Dennis S. <sy...@yo...> - 2005-03-08 12:35:51
> If some source and destination module have different formats, just fire
> up a gavl_audio_converter_t, and it will do the conversion. Furthermore,
> there are functions for allocating/freeing audio frames (with memory
> alignment) as well as a copy function, which lets you easily create an
> audio buffer.

It looks very advanced, without a doubt, yeah.

> I have been using this in my own projects for quite some time now, and I
> have never come across a format that couldn't be handled by gavl. The
> only thing that might be missing is 7.1 (2 side channels), but this
> could be added as well if needed.
>
> I think it would provide greater flexibility than the
> VisAudioSoundImportType enum :-)

It does, without a doubt.

> Whether or not the plugins have restrictions concerning the formats is
> another question. The 512 samples in each frame seem to be hardcoded in
> most plugins, and I don't know if this number must become variable.
>
> Concerning the samplerate, most material is 44100, but there are also
> 48000 streams (e.g. from Music DVDs) or 22050 (used by some web radio
> stations). So you could either let the plugins handle all these rates,
> or use gavl to resample everything to 44100.

The idea is to provide ONE format to the VisActor plugins, since they
shouldn't be dealing with all kinds of different formats. However, we
should be able to handle input in different formats nicely.

I think that a dependency is going to give us problems, also since we're
porting to Windows and Mac OS X. However, would you mind us assimilating
features from gavl into our VisAudio? The biggest problem is that we're
LGPL and you're GPL.

Thanks,
Dennis
From: Burkhard P. <pl...@ip...> - 2005-03-08 16:52:15
Dennis Smit wrote:

> The idea is to provide ONE format to the VisActor plugins, since they
> shouldn't be dealing with all kinds of different formats.

Makes sense.

> However, we should be able to handle input in different formats nicely.

Ok, then a generic audio converter becomes necessary. It could, however,
be a bit simpler than in gavl, because the output format is always the
same.

> I think that a dependency is going to give us problems, also since we're
> porting to Windows and Mac OS X.

The ANSI C part of gavl should be quite portable. The MMX stuff (used for
video only at the moment) should be disabled by the configure script on
unsupported platforms.

> However, would you mind us assimilating features from gavl into our
> VisAudio?

Hmm, depends on how it's done. If single functions are copied and pasted,
I don't want them to become LGPL.

If you write your own implementation based on IDEAS from gavl, it's ok.

> The biggest problem is that we're LGPL and you're GPL.

If I gain something from it, we could solve this.

But the only nonstandard dependency of gavl is libsamplerate, which is
also GPL. So forget resampling (or switch to GPL).

--
_____________________________
Dr.-Ing. Burkhard Plaum
Institut fuer Plasmaforschung
Pfaffenwaldring 31
70569 Stuttgart
Tel.: +49 711 685-2187
Fax.: -3102
From: Dennis S. <sy...@yo...> - 2005-03-08 17:00:45
On Tue, 2005-03-08 at 17:56 +0100, Burkhard Plaum wrote:
> Ok, then a generic audio converter becomes necessary. It could, however,
> be a bit simpler than in gavl, because the output format is always the
> same.

Yep.

> > I think that a dependency is going to give us problems, also since we're
> > porting to Windows and Mac OS X.
>
> The ANSI C part of gavl should be quite portable. The MMX stuff (used for
> video only at the moment) should be disabled by the configure script on
> unsupported platforms.
>
> > However, would you mind us assimilating features from gavl into our
> > VisAudio?
>
> Hmm, depends on how it's done. If single functions are copied and pasted,
> I don't want them to become LGPL.
>
> If you write your own implementation based on IDEAS from gavl, it's ok.

I think we have to use the ideas then, since licensing libvisual under the
GPL is not an option for me.

> > The biggest problem is that we're LGPL and you're GPL.
>
> If I gain something from it, we could solve this.
>
> But the only nonstandard dependency of gavl is libsamplerate, which is
> also GPL. So forget resampling (or switch to GPL).

Bummer. I really want an LGPL library, so we'll have to work out something
different. But you have clearly put a lot of research into gavl, so we can
use the lessons you've learned.

Thanks,
Dennis
From: Burkhard P. <pl...@ip...> - 2005-03-08 17:22:28
> Bummer. I really want an LGPL library,

Why?

--
_____________________________
Dr.-Ing. Burkhard Plaum
Institut fuer Plasmaforschung
Pfaffenwaldring 31
70569 Stuttgart
Tel.: +49 711 685-2187
Fax.: -3102
From: Dennis S. <sy...@yo...> - 2005-03-08 17:33:02
On Tue, 2005-03-08 at 18:26 +0100, Burkhard Plaum wrote:
> > Bummer. I really want an LGPL library,
> Why?

Because I refuse to close the doors for non-GPL-compatible software.

Just a small example: we wouldn't be able to be used by commercial
GStreamer products, or by closed-source media plugins on the Windows
platform. It just closes the door on many possibilities.

Cheers, Dennis
From: Vitaly V. B. <vit...@uk...> - 2005-03-08 13:15:40
On Mon, 07 Mar 2005 20:50:23 +0100
Dennis Smit <sy...@yo...> wrote:

> The document:
> ====================================================================================
>
> This document describes the initial VisAudio overhaul design.
>
> Version 0.1
>
> Dennis Smit <sy...@yo...>
> Libvisual project leader
>
> _____________________________________________________________________________________
>
> Goals:
>     Ability to mix channels.

I think it would be nice to have an "audio filter" stack, tree or
whatever, and some "control" API (on/off, value from a range).

>     Audio energy registration and calculation (already done).
>     Normalized spectrum (log scaled).
>     Internally, use floats to represent audio (from 0.0 to 1.0, or -1.0 to 1.0?).
>     Use floats to represent the spectrum (from 0.0 to 1.0).
>     Have conversion code to go from short audio to float.

There are K7 (semi-)optimized versions within scivi.

>     Basic conversion for the frequency (Hz) of the input.
>     BPM detection:
>         http://www.gamedev.net/reference/programming/features/beatdetection/page2.asp
>     Provide 6 vars to directly detect hits on the six bands.
>
> Open questions:
>     How do we want to do the number of samples? (aka the length of the buffer)
>     How do we handle multiple channels and channel mixing, and how do we
>     handle 5.1, 7.1, 9.1, etc.?
>
> Internal audio representation:
>     In floats.
>     Fixed frequency? (I personally prefer this)

I don't think it's a good idea to use a fixed single frequency...
Anyway, does it matter?

>     How large do we want our buffers?

A large buffer means high latency. A small buffer (up to 64 bytes) means
low latency.

Personally I believe the buffer should be variable in size. It will be
useful if we're going to use "audio filters".

What about audio frame timestamping?

How do we guarantee (or do our best) to synchronize video with audio?
Heavy video processing requires some time...

--
Vitaly

GPG Key ID: F95A23B9
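To make the "audio filter stack with a control API" idea above a little more
concrete, a minimal sketch; every name in it is invented for illustration and
is not part of libvisual or of any existing proposal:

#include <stddef.h>

/* Hypothetical filter-stack sketch: each filter processes float samples in
 * place and exposes a simple on/off switch plus one ranged control value. */
typedef struct _AudioFilter AudioFilter;

struct _AudioFilter {
    const char  *name;
    int          enabled;      /* on/off control            */
    float        value;        /* control value             */
    float        value_min;    /* allowed range for `value` */
    float        value_max;

    /* Process `nr_samples` float samples in place. */
    void       (*process) (AudioFilter *filter, float *samples, int nr_samples);

    AudioFilter *next;         /* next filter in the stack  */
};

/* Run the whole stack over a buffer before it reaches the actors. */
static void filter_stack_run (AudioFilter *stack, float *samples, int nr_samples)
{
    AudioFilter *filter;

    for (filter = stack; filter != NULL; filter = filter->next) {
        if (filter->enabled)
            filter->process (filter, samples, nr_samples);
    }
}

A tree (rather than a linear stack) would simply give each filter a list of
children instead of a single next pointer; the control API stays the same.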
From: Dennis S. <sy...@yo...> - 2005-03-08 13:26:56
On Tue, 2005-03-08 at 15:15 +0200, Vitaly V. Bursov wrote:
> I think it would be nice to have an "audio filter" stack, tree or
> whatever, and some "control" API (on/off, value from a range).

An audio filter stack? What do you mean?

> >     Audio energy registration and calculation (already done).
> >     Normalized spectrum (log scaled).
> >     Internally, use floats to represent audio (from 0.0 to 1.0, or -1.0 to 1.0?).
> >     Use floats to represent the spectrum (from 0.0 to 1.0).
> >     Have conversion code to go from short audio to float.
>
> There are K7 (semi-)optimized versions within scivi.

That is neat! I will look at it.

> I don't think it's a good idea to use a fixed single frequency...
> Anyway, does it matter?

Well, the more we generalise, the less the actor plugin has to think.
And keep in mind, more than one actor can connect to one VisAudio.

> >     How large do we want our buffers?
>
> A large buffer means high latency. A small buffer (up to 64 bytes) means
> low latency.

True. How many samples of how many bytes are pushed out per second? :)

> Personally I believe the buffer should be variable in size. It will be
> useful if we're going to use "audio filters".

What do you want to do with these audio filters? What we could also do is
take a non-naive approach, where a processed VisAudio is rather a
VisAudioPool that negotiates with the different actors. However, this does
not make things easier, and I think we should stay goal focused; prove me
wrong though :)

> What about audio frame timestamping?

GOOD one, putting that on the list. We need that anyway, to use it for the
beat detection. Maybe we should keep a general sample history, from which
a [3][size] buffer can be derived?

> How do we guarantee (or do our best) to synchronize video with audio?
> Heavy video processing requires some time...

This is hard. I haven't put much thought into this, but timestamping
should help, right? :)

Cheers, Dennis
From: Vitaly V. B. <vit...@uk...> - 2005-03-08 14:24:11
On Tue, 08 Mar 2005 14:26:46 +0100
Dennis Smit <sy...@yo...> wrote:

> On Tue, 2005-03-08 at 15:15 +0200, Vitaly V. Bursov wrote:
> > I think it would be nice to have an "audio filter" stack, tree or
> > whatever, and some "control" API (on/off, value from a range).
>
> An audio filter stack? What do you mean?

Well, the actor gets not the "raw" stream but one processed by filters,
including channel mixers, speed up/down, pitch, voice removal, etc. It
would be a nice place to put a resampler too.

Of course, some data should be grabbed from a VJ app if there is one.

Just an idea...

> > >     Audio energy registration and calculation (already done).
> > >     Normalized spectrum (log scaled).
> > >     Internally, use floats to represent audio (from 0.0 to 1.0, or -1.0 to 1.0?).
> > >     Use floats to represent the spectrum (from 0.0 to 1.0).
> > >     Have conversion code to go from short audio to float.
> >
> > There are K7 (semi-)optimized versions within scivi.
>
> That is neat! I will look at it.

That's plugin.c, the "USE_ASM_K7" define.

> > I don't think it's a good idea to use a fixed single frequency...
> > Anyway, does it matter?
>
> Well, the more we generalise, the less the actor plugin has to think.
> And keep in mind, more than one actor can connect to one VisAudio.

Yeah, probably you're right.

> > >     How large do we want our buffers?
> >
> > A large buffer means high latency. A small buffer (up to 64 bytes) means
> > low latency.
>
> True. How many samples of how many bytes are pushed out per second? :)

If the sampling rate is 44100, you get 44100 samples/sec * 2 channels *
2 bytes (16 bit) = 176400 bytes per second :)

So with a 2048-sample buffer there will be a lag of
(2048 * 2 channels * 2 bytes) / 176400 bytes/sec ~= 46 msec, assuming a
frame is rendered in a tenth of a millisecond. That's around 21 "real"
FPS, or something like that. :)

> > Personally I believe the buffer should be variable in size. It will be
> > useful if we're going to use "audio filters".
>
> What do you want to do with these audio filters? What we could also do is
> take a non-naive approach, where a processed VisAudio is rather a
> VisAudioPool that negotiates with the different actors. However, this does
> not make things easier, and I think we should stay goal focused; prove me
> wrong though :)

Well, I'm not sure...

=====
AudioSource (N channels, interleaved or not)
    ||     ||
    \/     ||
  Filter1  ||
    |  \   ||
    |   \  ||
    v    v \/
Filter2  Filter3
    |       |
    v       v
Actor1   Actor2
=====

or, if there's a VJ app:

=====
AudioSource (N channels, interleaved or not, better not)
  Ch1    Ch2   ...   ChN
   |      |
   v      v
Actor1  Actor2
=====

> > What about audio frame timestamping?
>
> GOOD one, putting that on the list. We need that anyway, to use it for the
> beat detection. Maybe we should keep a general sample history, from which
> a [3][size] buffer can be derived?

Hm, I still think we should have a small and variable buffer size:
the buffer size can specify how much data is available (there is no
guarantee that frame processing will take exactly N usec);
a small buffer means low latency;
and all data should be processed to detect beats accurately.

> > How do we guarantee (or do our best) to synchronize video with audio?
> > Heavy video processing requires some time...
>
> This is hard. I haven't put much thought into this, but timestamping
> should help, right? :)

I really don't know how to synchronize it :) It's virtually impossible to
sync a/v if libvisual has no control over the audio stream (i.e. nobody
can delay it - live music). The only solution here is a small buffer.

--
Vitaly

GPG Key ID: F95A23B9
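The latency arithmetic above, replayed as a tiny program so other buffer sizes
are easy to try; the numbers (44100 Hz, 2 channels, 16 bit, 2048 samples) are
taken from the message, the helper itself is only illustrative:

#include <stdio.h>

/* Buffer latency in milliseconds: how much audio a buffer of `samples`
 * frames represents at the given sample rate. Channel count and sample
 * width cancel out of the latency itself; they only affect the byte rate. */
static double buffer_latency_ms (int samples, int samplerate)
{
    return (double) samples / (double) samplerate * 1000.0;
}

int main (void)
{
    int samplerate = 44100;
    int channels = 2;
    int bytes_per_sample = 2;   /* 16 bit */

    /* 44100 * 2 * 2 = 176400 bytes per second */
    int byte_rate = samplerate * channels * bytes_per_sample;

    /* 2048 samples -> ~46.4 ms, i.e. ~21.5 buffers per second */
    printf ("byte rate: %d bytes/sec\n", byte_rate);
    printf ("2048-sample buffer: %.1f ms latency, %.1f buffers/sec\n",
            buffer_latency_ms (2048, samplerate),
            1000.0 / buffer_latency_ms (2048, samplerate));
    printf ("256-sample buffer:  %.1f ms latency\n",
            buffer_latency_ms (256, samplerate));

    return 0;
}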
From: Dennis S. <sy...@yo...> - 2005-03-08 15:06:20
On Tue, 2005-03-08 at 16:23 +0200, Vitaly V. Bursov wrote:
> On Tue, 08 Mar 2005 14:26:46 +0100
> Dennis Smit <sy...@yo...> wrote:
>
> > On Tue, 2005-03-08 at 15:15 +0200, Vitaly V. Bursov wrote:
> > > I think it would be nice to have an "audio filter" stack, tree or
> > > whatever, and some "control" API (on/off, value from a range).
> >
> > An audio filter stack? What do you mean?
>
> Well, the actor gets not the "raw" stream but one processed by filters,
> including channel mixers, speed up/down, pitch, voice removal, etc. It
> would be a nice place to put a resampler too.
>
> Of course, some data should be grabbed from a VJ app if there is one.
>
> Just an idea...

This is out of scope for libvisual itself. However, a friend of mine is
semi into audio, and I was talking to him about a lib on top of libvisual
for audio processing. So that fits nicely :)

> > > I don't think it's a good idea to use a fixed single frequency...
> > > Anyway, does it matter?
> >
> > Well, the more we generalise, the less the actor plugin has to think.
> > And keep in mind, more than one actor can connect to one VisAudio.
>
> Yeah, probably you're right.

I was thinking: we could store samples in a different type,
VisAudioSample, timestamp them there, and attach a format type. Then we
make it possible to go from a VisAudio to a few common types, where the
conversion puts everything together (we can store a history of samples).

> > > >     How large do we want our buffers?
> > >
> > > A large buffer means high latency. A small buffer (up to 64 bytes) means
> > > low latency.
> >
> > True. How many samples of how many bytes are pushed out per second? :)
>
> If the sampling rate is 44100, you get 44100 samples/sec * 2 channels *
> 2 bytes (16 bit) = 176400 bytes per second :)
>
> So with a 2048-sample buffer there will be a lag of
> (2048 * 2 channels * 2 bytes) / 176400 bytes/sec ~= 46 msec, assuming a
> frame is rendered in a tenth of a millisecond. That's around 21 "real"
> FPS.

Ok, so we probably want to fetch with buffers of around 256, and combine
multiple samples if the plugin needs a larger working array?

(Thanks for the calculation, btw :))

> > > Personally I believe the buffer should be variable in size. It will be
> > > useful if we're going to use "audio filters".
> >
> > What do you want to do with these audio filters? What we could also do is
> > take a non-naive approach, where a processed VisAudio is rather a
> > VisAudioPool that negotiates with the different actors. However, this does
> > not make things easier, and I think we should stay goal focused; prove me
> > wrong though :)
>
> Well, I'm not sure...

What about one sample array, from which plugins can ask for an array
version themselves? Something like this in VisAudio:

    VisAudioSample  *samples;   /* We might want to make this a rotating
                                 * buffer... anyway, just for concept */
    int             samples_count;

    VisAudioSample {
        VisObject                 object;

        VisAudioSampleFormatType  format;
        VisTime                   timestamp;

        sampledata...
        origdata...
    }

    void visual_audio_convert_to (VisAudio *audio, VisAudioSampleFormatType format,
            void *data, int size);

> =====
> AudioSource (N channels, interleaved or not)
>     ||     ||
>     \/     ||
>   Filter1  ||
>     |  \   ||
>     |   \  ||
>     v    v \/
> Filter2  Filter3
>     |       |
>     v       v
> Actor1   Actor2
> =====
>
> or, if there's a VJ app:
>
> =====
> AudioSource (N channels, interleaved or not, better not)
>   Ch1    Ch2   ...   ChN
>    |      |
>    v      v
> Actor1  Actor2
> =====

Yes, that setup makes a lot of sense. Though for the real connection of
this I think we really should introduce 'another' new lib :) Anyway, my
focus isn't there at the moment; however, we need to keep the possibility
open.

> > > What about audio frame timestamping?
> >
> > GOOD one, putting that on the list. We need that anyway, to use it for the
> > beat detection. Maybe we should keep a general sample history, from which
> > a [3][size] buffer can be derived?
>
> Hm, I still think we should have a small and variable buffer size:
> the buffer size can specify how much data is available (there is no
> guarantee that frame processing will take exactly N usec);
> a small buffer means low latency;
> and all data should be processed to detect beats accurately.

Agree. I am incorporating all the comments into a new version of the draft.

> It's virtually impossible to sync a/v if libvisual has no control over
> the audio stream (i.e. nobody can delay it - live music). The only
> solution here is a small buffer.

Yep. Since we can use a sample history with the new design, we might want
to add a 'delay' control that holds back newer samples for that amount of
msecs? Or would it be better to implement this as a plugin within an audio
lib on top?

Cheers, Dennis
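A slightly more concrete sketch of the sample-history idea discussed above.
The struct follows the VisAudioSample concept from the message, but the
ring-buffer layout, the field types and the stitching loop are assumptions
made for illustration, not part of the actual proposal:

#include <string.h>

/* Hypothetical, fleshed-out take on the VisAudioSample concept above.
 * VisObject/VisTime are real libvisual types; everything else is assumed. */
typedef struct {
    /* VisObject object;                 object header omitted in this sketch */
    /* VisAudioSampleFormatType format;  original input format                */
    double timestamp;       /* seconds; stands in for VisTime                 */
    float  data[2][256];    /* already converted to float, stereo             */
    int    nr_samples;      /* valid samples per channel                      */
} SampleEntry;

/* Rotating history of recent sample blocks; `head` is the next write slot,
 * so the newest block sits at (head - 1). */
typedef struct {
    SampleEntry entries[32];
    int         head;
    int         count;
} SampleHistory;

/* Pull the most recent `needed` float samples of one channel out of the
 * history, stitching blocks together oldest-to-newest; this is the "combine
 * multiple samples if the plugin needs a larger working array" idea. */
static int history_fill (const SampleHistory *hist, int channel,
                         float *dest, int needed)
{
    int remaining = needed;
    int idx = hist->head;
    int i;

    for (i = 0; i < hist->count && remaining > 0; i++) {
        const SampleEntry *entry;
        int take;

        idx = (idx + 32 - 1) % 32;          /* walk backwards from newest */
        entry = &hist->entries[idx];

        take = entry->nr_samples < remaining ? entry->nr_samples : remaining;

        /* Newest samples end up at the tail of dest. */
        memcpy (dest + (remaining - take),
                entry->data[channel] + (entry->nr_samples - take),
                take * sizeof (float));

        remaining -= take;
    }

    return needed - remaining;  /* samples actually filled (at the tail of dest) */
}

The 'delay' control mentioned above would then just mean reading from a few
entries before the newest one instead of from the head of the history.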
From: Burkhard P. <pl...@ip...> - 2005-03-09 10:55:25
Dennis Smit wrote:

> Just a small example: we wouldn't be able to be used by commercial
> GStreamer products, or by closed-source media plugins on the Windows
> platform.

If you write the best visualization framework on this planet, you have
the chance to make all commercial programs obsolete, so the GPL apps
would win.

> It just closes the door on many possibilities.

Using LGPL closes the door on the possibility of using GPL libs.

Nobody paid me a cent for writing my stuff, so why should I allow others
to make money with it?

Everyone has his own brain, yeah :-)

Cheers
Burkhard
From: Dennis S. <sy...@yo...> - 2005-03-09 11:24:41
On Wed, 2005-03-09 at 12:01 +0100, Burkhard Plaum wrote:
> Dennis Smit wrote:
>
> > Just a small example: we wouldn't be able to be used by commercial
> > GStreamer products, or by closed-source media plugins on the Windows
> > platform.
>
> If you write the best visualization framework on this planet, you have
> the chance to make all commercial programs obsolete, so the GPL apps
> would win.

I am not yet placing my bets on that :)

> > It just closes the door on many possibilities.
>
> Using LGPL closes the door on the possibility of using GPL libs.

It has two sides, yes; I've chosen this one ;)

> Nobody paid me a cent for writing my stuff, so why should I allow others
> to make money with it?

I don't care too much about others making a profit from lv, and I'm not
sure how the other developers think about this topic. But I think that
widespread adoption is more important in this case ;)

> Everyone has his own brain, yeah :-)

Very true :)

Cheers, Dennis