osalp-dev Mailing List for Open Source Audio Library Project (Page 3)
Status: Abandoned
Brought to you by: daservis
This list is closed, nobody may subscribe to it.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | 9 | 4 | – | 1 | 3 | 1 | 2 | 3 | 5 | 4 | 4 | 5 |
| 2001 | 7 | 27 | 7 | 6 | 4 | 2 | 2 | 12 | – | 23 | – | 2 |
| 2002 | – | 3 | 16 | – | 1 | 1 | 4 | 4 | – | – | 1 | – |
| 2003 | – | 1 | 1 | 1 | 3 | – | – | – | – | – | – | – |
| 2004 | 2 | – | – | – | 1 | – | 1 | – | – | – | 10 | – |
| 2005 | – | – | 1 | – | – | – | – | 2 | – | – | – | – |
| 2006 | – | 2 | – | 2 | – | – | – | – | – | – | – | – |
| 2007 | 2 | 2 | – | – | 1 | – | – | – | – | – | – | – |
| 2008 | 3 | – | 1 | – | – | – | – | – | 1 | – | – | – |
| 2009 | 1 | – | – | – | – | – | – | – | – | – | – | – |

---
From: Akos M. <da...@ty...> - 2002-03-06 08:31:07

Darrick Servis wrote:
> This is what you want to do. We'll eventually support the double type. Is
> double the type you're using for mixing?
With all due respect: this is not what I want to do. When I want to
resample, I already have the data read from the soundcard in memory.
This is due to the fact that I may have several targets for the same
input data, with possibly different target sampling rates. Thus the
solution: read at the highest required sampling rate, and downsample for
those targets that require lower rates.
Something like this:
- raw audio from /dev/dsp at 44.1 kHz
+--- target 1: needs audio at 44.1 kHz
+--- target 2: needs audio at 22.05 kHz
+--- target 3: needs audio at 11.025 kHz
\--- target 4: needs audio at 44.1 kHz
I need the double type because I send this audio data to lame or ogg
vorbis for encoding, and they work on doubles.
Akos
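The fan-out described above works with integer-factor decimation, since 22.05 kHz and 11.025 kHz are exact divisors of 44.1 kHz. The following is a self-contained sketch, not OSALP code (the helper name `decimate` is invented for illustration): it averages each group of `factor` input samples as a crude low-pass before dropping them.

```cpp
#include <cstddef>
#include <vector>

// Downsample one channel by an integer factor by averaging each group of
// `factor` input samples. This is a crude low-pass plus decimate; a real
// converter such as OSALP's aflibConverter uses a proper bandlimited FIR
// filter instead, which is why the library is worth using even here.
std::vector<double> decimate(const std::vector<double>& in, std::size_t factor)
{
    std::vector<double> out;
    out.reserve(in.size() / factor);
    for (std::size_t i = 0; i + factor <= in.size(); i += factor) {
        double acc = 0.0;
        for (std::size_t k = 0; k < factor; ++k)
            acc += in[i + k];
        out.push_back(acc / factor);
    }
    return out;
}
```

With this shape, one capture buffer at 44.1 kHz can feed target 1 and 4 directly, target 2 via `decimate(buf, 2)`, and target 3 via `decimate(buf, 4)`.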

---
From: Darrick S. <da...@dc...> - 2002-03-06 05:31:37

This is what you want to do. We'll eventually support the double type. Is double the type you're using for mixing?

aflibConfig in_config, out_config;
aflibStatus status = AFLIB_SUCCESS;
string in_file, out_file;
string in_format, out_format;
aflibAudioFile* input;
aflibAudio* output;   /* the rate converter is an audio chain element */

in_format = "DEVICE";
in_config.setSampleSize(AFLIB_DATA_16S);
in_config.setSamplesPerSecond(44100);
in_config.setChannels(2);
in_file = "/dev/dsp";

/* get input aflibAudioFile object
 * this call will fill in_config with the appropriate values if
 * the filetype supports that
 */
input = new aflibAudioFile(in_format, in_file, &in_config, &status);
if (status != AFLIB_SUCCESS)
{
    aflib_fatal("Error opening input file %s", in_file.c_str());
}

double factor = 0.5;
output = new aflibAudioSampleRateCvt(*input, factor, FALSE, FALSE, FALSE);

aflibData* out_data;
const int BufferSize = 4096;          /* pick a buffer size */
double my_array[2][BufferSize];       /* your own destination buffers */
int num_samples;

/* process it */
do
{
    num_samples = BufferSize;
    out_data = output->process(status, -1, num_samples);
    for (int ch = 0; ch < out_data->getConfig().getChannels(); ch++)
        for (int i = 0; i < num_samples; i++)
            my_array[ch][i] = (double)out_data->getSample(i, ch);
}
while (status == AFLIB_SUCCESS /* or some other condition you want
                                  to set to stop recording */);

delete output;
delete input;
On Tuesday 05 March 2002 04:23 pm, Akos Maroy wrote:
> What I want to do is: I have raw audio read from the soundcard
> (interleaved), and I need to resample it. The ultimate output will be an
> array of samples for each channel (of type double), but of course
> interleaved raw data is good enough (in a short array).
>
> It's all part of a project called darkice, http://darkice.sourceforge.net/
>
> >>> - in inArray, should I supply a 2 channel input with channels
> >
> > If you have more than one channel the data should be consecutive, i.e.
> > all the data for the first channel then all the data for the second.
>
> I see. So maybe it's better to split an interleaved input into two
> channels, and resample them separately?
>
> Thanks for all the info,
>
> Akos

---
From: Akos M. <da...@ty...> - 2002-03-06 00:25:48

Darrick Servis wrote:
> I do agree: don't use the class directly. It's based on Julius Smith's
> resample-1.6 program and is very hacked up to allow for streaming data.
>
> http://www-ccrma.stanford.edu/~jos/resample/Available_Software.html
>
> What exactly is it you are trying to do? I've spent a lot of time on this
> class and the library in general over the past couple of weeks so maybe
> there is a better way to do what you need without using the aflibConvertor
> class directly.

What I want to do is: I have raw audio read from the soundcard (interleaved), and I need to resample it. The ultimate output will be an array of samples for each channel (of type double), but of course interleaved raw data is good enough (in a short array).

It's all part of a project called darkice, http://darkice.sourceforge.net/

> > > - in inArray, should I supply a 2 channel input with channels
> >
> > If you have more than one channel the data should be consecutive, i.e.
> > all the data for the first channel then all the data for the second.

I see. So maybe it's better to split an interleaved input into two channels, and resample them separately?

Thanks for all the info,

Akos
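Splitting an interleaved buffer into the consecutive per-channel layout that the resample call expects could look like the sketch below. This is a generic helper written for illustration, not an OSALP function.

```cpp
#include <cstddef>
#include <vector>

// Split interleaved audio (L R L R ...) into one contiguous buffer per
// channel: all of channel 1's samples, then all of channel 2's, which is
// the consecutive-channel layout described in the thread above.
std::vector<std::vector<short>> deinterleave(const std::vector<short>& in,
                                             std::size_t channels)
{
    std::size_t frames = in.size() / channels;
    std::vector<std::vector<short>> out(channels,
                                        std::vector<short>(frames));
    for (std::size_t f = 0; f < frames; ++f)
        for (std::size_t ch = 0; ch < channels; ++ch)
            out[ch][f] = in[f * channels + ch];
    return out;
}
```

Each `out[ch]` can then be resampled on its own, and the results interleaved back if needed.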

---
From: Darrick S. <da...@dc...> - 2002-03-05 23:59:45

I do agree: don't use the class directly. It's based on Julius Smith's resample-1.6 program and is very hacked up to allow for streaming data.

http://www-ccrma.stanford.edu/~jos/resample/Available_Software.html

What exactly is it you are trying to do? I've spent a lot of time on this class and the library in general over the past couple of weeks, so maybe there is a better way to do what you need without using the aflibConvertor class directly.

On Tuesday 05 March 2002 09:52 am, Bruce Forsberg wrote:
> Akos Maroy wrote:
> > - is inCount an output only parameter, or is it in/out?

inCount is the size of the inArray. This value will also be set by the resample function to the number of samples actually used.

> > - in inArray, should I supply a 2 channel input with channels

If you have more than one channel the data should be consecutive, i.e. all the data for the first channel then all the data for the second.

> > interleaved? - will the outArray contain them interleaved?

No.

> > - for inCount / outCount and the return value: a value of n means:
> >   - n * channels values in the actual array?
> >   - n values in the actual array?
> >
> > For example:
> >
> > aflibConverter * converter = new aflibConverter( true, true, false);
> > converter->initialize( 0.5, 2);
> >
> > short inArray[2048];
> > short outArray[1024];
> > int inCount = 2048;
> > int outCount = 1024;
> >
> > // suppose inArray contains two channel interleaved raw audio data of
> > // 1024 samples
> >
> > int converted = converter->resample( inCount, outCount, inArray, outArray);
> >
> > after this call I get 2048 for inCount and 1024 for the return value.
> > But the data in outArray[0 ... 1024] is no good; if I save it to a file,
> > it's not proper .wav data:

inCount is the number of shorts processed. The return value is the number of samples produced. It doesn't produce a wav file. It's just raw data.

---
From: Bruce F. <for...@tn...> - 2002-03-05 17:50:23

Akos Maroy wrote:
> Hi,
>
> I'm trying to use the class aflibConverter from OSALP to do some raw
> audio data resampling. And I can't get it to work. I read through the
> documentation for the class at
> http://osalp.sourceforge.net/doc/html/class_aflibConverter.html
> but could someone please elaborate more on the parameters to the
> aflibConverter::resample() function? In particular:
>
> - is inCount an output only parameter, or is it in/out?
> - in inArray, should I supply a 2 channel input with channels interleaved?
> - will the outArray contain them interleaved?
> - for inCount / outCount and the return value: a value of n means:
>   - n * channels values in the actual array?
>   - n values in the actual array?
>
> For example:
>
> aflibConverter * converter = new aflibConverter( true, true, false);
> converter->initialize( 0.5, 2);
>
> short inArray[2048];
> short outArray[1024];
> int inCount = 2048;
> int outCount = 1024;
>
> // suppose inArray contains two channel interleaved raw audio data of
> // 1024 samples
>
> int converted = converter->resample( inCount, outCount, inArray, outArray);
>
> after this call I get 2048 for inCount and 1024 for the return value.
> But the data in outArray[0 ... 1024] is no good; if I save it to a file,
> it's not proper .wav data:
>
> $ file result.wav
> result.wav: data
> $ play result.wav
> sox: WAVE: RIFF header not found
>
> I'm trying to get it to work, but so far only resampling with a factor
> of 1 gives me any useful results (which is of course quite useless :).
> Any help would be appreciated.

This class is not really designed to be used by itself. It is certainly difficult to understand. It has also been a while. The best that I can recommend is to look at the aflibAudioSampleRateCvt class. I would also use an audio chain using this class. This will simplify the task for you. This is the way it was intended to be used.

Bruce Forsberg

---
From: Akos M. <da...@ty...> - 2002-03-03 13:04:54

Hi,

I'm trying to use the class aflibConverter from OSALP to do some raw audio data resampling. And I can't get it to work. I read through the documentation for the class at http://osalp.sourceforge.net/doc/html/class_aflibConverter.html but could someone please elaborate more on the parameters to the aflibConverter::resample() function? In particular:

- is inCount an output only parameter, or is it in/out?
- in inArray, should I supply a 2 channel input with channels interleaved?
- will the outArray contain them interleaved?
- for inCount / outCount and the return value: a value of n means:
  - n * channels values in the actual array?
  - n values in the actual array?

For example:

aflibConverter * converter = new aflibConverter( true, true, false);
converter->initialize( 0.5, 2);

short inArray[2048];
short outArray[1024];
int inCount = 2048;
int outCount = 1024;

// suppose inArray contains two channel interleaved raw audio data of
// 1024 samples

int converted = converter->resample( inCount, outCount, inArray, outArray);

After this call I get 2048 for inCount and 1024 for the return value. But the data in outArray[0 ... 1024] is no good; if I save it to a file, it's not proper .wav data:

$ file result.wav
result.wav: data
$ play result.wav
sox: WAVE: RIFF header not found

I'm trying to get it to work, but so far only resampling with a factor of 1 gives me any useful results (which is of course quite useless :). Any help would be appreciated.

Akos
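The `sox: WAVE: RIFF header not found` error above is expected even when resampling works: resample() emits raw PCM samples with no container. To audition raw output with `play`, one can prepend the canonical 44-byte WAV header before the sample bytes. The sketch below is generic illustration, not part of OSALP, and assumes a little-endian host, since WAV header fields are little-endian.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build the canonical 44-byte RIFF/WAVE header for 16-bit PCM data.
// Write this, then the raw samples, to get a file sox/play will accept.
// Assumes a little-endian host (WAV fields are stored little-endian).
std::vector<uint8_t> wav_header(uint32_t data_bytes,
                                uint32_t sample_rate,
                                uint16_t channels)
{
    const uint16_t bits = 16;
    uint32_t byte_rate   = sample_rate * channels * bits / 8;
    uint16_t block_align = channels * bits / 8;
    uint32_t riff_size   = 36 + data_bytes;   // file size minus 8

    std::vector<uint8_t> h(44);
    auto put32 = [&](int off, uint32_t v) { std::memcpy(&h[off], &v, 4); };
    auto put16 = [&](int off, uint16_t v) { std::memcpy(&h[off], &v, 2); };

    std::memcpy(&h[0],  "RIFF", 4);  put32(4, riff_size);
    std::memcpy(&h[8],  "WAVE", 4);
    std::memcpy(&h[12], "fmt ", 4);  put32(16, 16);   // fmt chunk size
    put16(20, 1);                                     // 1 = PCM
    put16(22, channels);             put32(24, sample_rate);
    put32(28, byte_rate);            put16(32, block_align);
    put16(34, bits);
    std::memcpy(&h[36], "data", 4);  put32(40, data_bytes);
    return h;
}
```

For the example in the message, a factor-0.5 conversion of 44.1 kHz stereo would be written with `wav_header(bytes, 22050, 2)` in front of the raw output.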

---
From: Bruce F. <for...@tn...> - 2002-02-27 03:18:26

I made a change to the ogg file format so that it will buffer upon reading. This allows sample rate conversion to occur on ogg files.

Bruce Forsberg

---
From: Darrick S. <da...@dc...> - 2002-02-24 23:48:13

o Changed format of Changelog, putting most recent entries up top.
o Renamed CHANGES to Changelog.
o Renamed TASKS to TODO.
o Fixed bug in aflibAudioFile constructor that caused a segfault if _file_return was NULL.
o Fixed bug in aflibMemCache which caused position = -1 to be cached.
o Updated aflibAudioSampleRateCvt to handle -1 position. Added isChannelsSupported function since the class only handles up to two channels.
o Fixed bug in aflibConvertor which caused too many samples to be processed if the requested number of samples was more than what was actually available (i.e. at the end of file). The fix is a guess at how many samples to process and more work will need to be done to make it more accurate.
o Fixed bug in aflibAudio which caused _cvt to receive the wrong config if a _mix was also added internally.
o Added virtual setPowerMeter and setAudioSpectrum functions to the aflibAudioSpectum class so one may inherit this class and do what they want when these functions are called. This will allow more than one power meter and/or audio spectrum per app.
o In aflibAudio, getName() and compute_segment() are no longer required to be implemented.
o Changed aflibSoxFile module to link to the external libst.a library.
o Added aflibDebug class to consolidate calls to stderr and getenv.
o Added file conversion test scripts to the osalp directory.
o Cleaned up the osalp example to better explain how to use the library.

---
From: Bruce F. <for...@tn...> - 2002-02-16 18:08:26

I have added a new format library to the CVS repository. It was submitted by Jeroen Versteeg. It reads Ogg Vorbis files. For now it only reads. Hopefully in the future it will write as well. For now you will need to install the Ogg Vorbis libraries on your machine to compile this format library.

Bruce Forsberg

---
From: Bruce F. <for...@tn...> - 2001-12-30 04:03:01

I have made some changes to the aflibAudioEdit class. I added a removeSegmentsFromInput function. This removes all edits from a specific input. I also made some changes to misc classes for the GNU 3.0 compilers. I will try to get back to the delay function requested in a couple of days.

Bruce Forsberg

---
From: Rakan <rf...@co...> - 2001-12-06 08:28:08

Hello,
I'm trying to use libaf to analyze wav files (for a machine learning
project). However I'm having a lot of trouble with the library.
Is there a tutorial somewhere I could use?
I had a look at the included examples, but they don't help too much...
All I want to do is open a wav file and analyse its spectrum (frequency
distribution). A quick way to do it would be to use aflibAudioSpectrum in
some way. However, I don't understand the difference between a chain and
a simple file. What is the difference?
I would have thought I needed to use afFile (or afWavFile), but I'm having
trouble with that.
Thanks for your help!
Rakan.
--
"I remember red"
-- Jennifer Eight

---
From: Bruce F. <for...@tn...> - 2001-10-27 04:35:43

Jin-hyuk, Jong wrote:
> Hi all,
> I tried to use the spectrum library (aflibAudioSpectrum),
> but the spectrum data is faster than the sound data, by about 0.5 sec,
> so the graph is not in sync.
>
> The chain is like this:
>
> input---->spectrum---->output
>
> How do I sync the spectrum data?

There is a problem with the spectrum class. It displays data when data is sent through the chain. Therefore you need to select the parameters to it very carefully. Also, if one end is an audio device then these can access data sporadically. To fix this requires some work and is listed in the TASKS file. I plan on buffering the data in this class and then calling your callback in a signal handler or thread.

Bruce Forsberg

---
From: Jin-hyuk, J. <nt...@dr...> - 2001-10-26 09:06:41

Hi all,

I tried to use the spectrum library (aflibAudioSpectrum), but the spectrum data is faster than the sound data, by about 0.5 sec, so the graph is not in sync.

The chain is like this:

input---->spectrum---->output

How do I sync the spectrum data?

---
From: Bruce F. <for...@tn...> - 2001-10-24 04:58:55

Robert Pittana wrote:
> Perhaps I am doing things incorrectly. In your example code I noticed that
> you just copied the config from the input to the output. If I open two
> inputs and use the config from the lower sampling rate input source as a
> parameter into the output then all inputs play at that lower rate. This is
> why I thought I needed to do some manual conversion.

Let me explain how the config data is passed. The first time the process function is called, a check of the audio chain is made. At the start of the chain should be some source of audio data, such as an aflibAudioFile derived class. This sets the config data for the source. This is passed to each member of the audio chain up the chain. A member receives an input config, processes it, and it becomes its output config. Its output config data is passed to the input config of the next audio element in the chain.

When the config data reaches the mixer class, it has a setInputConfig member function. It will set the output config data based upon the input config data from 1 or more inputs. It is coded to output the sample rate that is the highest amongst the inputs. The aflibAudio base class will automatically insert an aflibAudioSampleRateCvt class before any input that needs to have its sample rate converted.

For the final output class in the chain you can set its config data to whatever you want. Any conversions that need to be made will be made in the chain. Of course you need to be careful or all your CPU will be taken up doing conversions.

For the mixer class I need to look at the code in detail, but it might not work for adding inputs dynamically. I need to work on this to make it work.

Bruce Forsberg

---
From: Robert P. <Rob...@am...> - 2001-10-22 14:44:32

Perhaps I am doing things incorrectly. In your example code I noticed that you just copied the config from the input to the output. If I open two inputs and use the config from the lower sampling rate input source as a parameter into the output then all inputs play at that lower rate. This is why I thought I needed to do some manual conversion.

Attached is my source code for your perusal. It's just a mainline hack at the moment until I test all the functionality I will need for the real classes.

Thanks!
-Rob

>>> Bruce Forsberg <for...@tn...> 10/21/01 08:08PM >>>
Robert Pittana wrote:
> Yes it worked fine. Thanks Bruce. Now I'm just fiddling with doing it
> dynamically, i.e. have one playing, then mix another in at some point, then
> another, then one stops and the others keep playing. I figure it'll take
> some accounting to keep track of the mix amp. Does the amp have to add up
> to 100 for all channels?
>
> The main problem I am having now is that when I add the second input it
> starts playing that input from the current position of the first input,
> which results in only the last half of the new input being played.

You're right, it will send the current file position to both files. I need to add something to the library that will allow one to either delay one channel or allow a file position adjustment of some sort. Let me think about this for a couple of days.

> Also, is there a way to mix two inputs with different sampling rates and
> have them come out sounding right? At the moment setting the output
> configuration to the highest sampling makes the other one sound like the
> chipmunks. :) Do I use aflibConverter for this?

In an audio chain you would use the aflibAudioSampleRateCvt class. aflibConverter is a low level class and is difficult to use. When you mixed the two audio files together it should have adjusted both to the highest sample rate. It would have automatically inserted an aflibAudioSampleRateCvt class. There might be a problem when you add a channel dynamically. Please let me know if you did this dynamically and it did not work. If so it is a bug and needs to be fixed.

Thanks,
Bruce Forsberg

---
From: Bruce F. <for...@tn...> - 2001-10-22 01:08:18

Robert Pittana wrote:
> Yes it worked fine. Thanks Bruce. Now I'm just fiddling with doing it
> dynamically, i.e. have one playing, then mix another in at some point, then
> another, then one stops and the others keep playing. I figure it'll take
> some accounting to keep track of the mix amp. Does the amp have to add up
> to 100 for all channels?
>
> The main problem I am having now is that when I add the second input it
> starts playing that input from the current position of the first input,
> which results in only the last half of the new input being played.

You're right, it will send the current file position to both files. I need to add something to the library that will allow one to either delay one channel or allow a file position adjustment of some sort. Let me think about this for a couple of days.

> Also, is there a way to mix two inputs with different sampling rates and
> have them come out sounding right? At the moment setting the output
> configuration to the highest sampling makes the other one sound like the
> chipmunks. :) Do I use aflibConverter for this?

In an audio chain you would use the aflibAudioSampleRateCvt class. aflibConverter is a low level class and is difficult to use. When you mixed the two audio files together it should have adjusted both to the highest sample rate. It would have automatically inserted an aflibAudioSampleRateCvt class. There might be a problem when you add a channel dynamically. Please let me know if you did this dynamically and it did not work. If so it is a bug and needs to be fixed.

Thanks,
Bruce Forsberg
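The "chipmunk" effect described above is what happens when 22.05 kHz samples are played back at 44.1 kHz: every frequency doubles. A sample rate converter fixes this by producing new samples at the target rate. The toy linear-interpolation resampler below illustrates the idea only; it is not OSALP code, and the real aflibAudioSampleRateCvt / aflibConverter path uses Julius Smith's bandlimited algorithm, which avoids the aliasing this simple scheme allows.

```cpp
#include <cstddef>
#include <vector>

// Resample one channel by `factor` (output rate / input rate) using
// linear interpolation between neighboring input samples. factor = 2.0
// upsamples 22.05 kHz to 44.1 kHz so both inputs can be mixed at one rate.
std::vector<double> resample_linear(const std::vector<double>& in, double factor)
{
    if (in.empty()) return {};
    std::size_t out_len = static_cast<std::size_t>(in.size() * factor);
    std::vector<double> out(out_len);
    for (std::size_t i = 0; i < out_len; ++i) {
        double pos = i / factor;                 // position in input samples
        std::size_t i0 = static_cast<std::size_t>(pos);
        std::size_t i1 = (i0 + 1 < in.size()) ? i0 + 1 : i0;
        double frac = pos - i0;
        out[i] = in[i0] * (1.0 - frac) + in[i1] * frac;
    }
    return out;
}
```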

---
From: Robert P. <Rob...@am...> - 2001-10-19 16:46:43

Yes it worked fine. Thanks Bruce. Now I'm just fiddling with doing it dynamically, i.e. have one playing, then mix another in at some point, then another, then one stops and the others keep playing. I figure it'll take some accounting to keep track of the mix amp. Does the amp have to add up to 100 for all channels?

The main problem I am having now is that when I add the second input it starts playing that input from the current position of the first input, which results in only the last half of the new input being played.

Also, is there a way to mix two inputs with different sampling rates and have them come out sounding right? At the moment setting the output configuration to the highest sampling makes the other one sound like the chipmunks. :) Do I use aflibConverter for this?

TIA,
-Rob

>>> Bruce Forsberg <for...@tn...> 10/19/01 12:14AM >>>
Robert Pittana wrote:
> Ok, so now I'm figuring I can do it with the mixer class. But will it be
> possible to mix in a file while one is already playing? I guess it
> will.... I hope.

The mixer class should work just fine, if I understand your problem.

aflibFile ----+
              |----aflibAudioMixer---aflibDevFile
aflibFile ----+

This is how you would create the chain, with two file formats. Both would be mixed and sent to the audio device. See the audio programmers manual section 4.2.7 for an example.

Bruce Forsberg

---
From: Bruce F. <for...@tn...> - 2001-10-19 05:14:53

Robert Pittana wrote:
> Ok, so now I'm figuring I can do it with the mixer class. But will it be
> possible to mix in a file while one is already playing? I guess it
> will.... I hope.

The mixer class should work just fine, if I understand your problem.

aflibFile ----+
              |----aflibAudioMixer---aflibDevFile
aflibFile ----+

This is how you would create the chain, with two file formats. Both would be mixed and sent to the audio device. See the audio programmers manual section 4.2.7 for an example.

Bruce Forsberg

---
From: Robert P. <Rob...@am...> - 2001-10-18 16:40:13

Ok, so now I'm figuring I can do it with the mixer class. But will it be possible to mix in a file while one is already playing? I guess it will.... I hope.

---
From: Robert P. <Rob...@am...> - 2001-10-18 16:09:29

First of all I'd just like to say this library is awesome!

Now I'm looking for a way to support asynchronous playback. I am currently using the Qt library, which does support this but only for wav files. It seems aflib locks out anyone else from playing when it is. I am trying to play wav files over mp3. Is this possible?

TIA
-Rob

---
From: Bruce F. <for...@tn...> - 2001-10-13 21:50:41

I finally got around to installing Linux on my Sun Ultra 5 Sparc machine. I got it installed and tested OSALP and it works fine on Linux on Sparc with no mods.

Bruce Forsberg

---
From: Bruce F. <for...@tn...> - 2001-10-13 03:18:52

I would try the following chain:
mp3stream1 mp3stream2 mp3stream3
| | |
| | |
| +-------------+
| |
| |
| aflibAudioEdit
| |
+---------------------+
|
mixer
|
output
If you use the audio edit class then you can select 3000 samples from input 1 and the rest from input 2. See the programmer manual for a good example of this class.
Bruce Forsberg
> I tried to use the library, but I ran into a simple problem with a mixer
> object. My chain looks very simple:
>
> mp3stream1 mp3stream2
> | |
> | |
> +------+-------+
> |
> mixer
> |
> output
>
> Everything goes OK, but there is one problem. Let's say that we are at
> position 3000 in the mixer. Now I would like to replace mp3stream2 in the
> chain and set a new stream: mp3stream3. From this moment mp3stream1
> should process from offset 3000, and mp3stream3 from offset 0. How can I
> control the offsets of items in the chain, not only the last object in
> the chain?
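The edit described in the quoted question, the first 3000 samples from one stream and then the rest from another starting at its own offset 0, is conceptually a splice. The sketch below shows that operation on plain sample buffers; it is a generic illustration, not the aflibAudioEdit implementation, which works on live chain inputs rather than whole buffers.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Take the first `cut` samples from stream `a`, then continue with all of
// stream `b` from its own offset 0. This is the per-input offset control
// that the aflibAudioEdit recipe above provides inside a chain.
std::vector<short> splice(const std::vector<short>& a,
                          const std::vector<short>& b,
                          std::size_t cut)
{
    std::vector<short> out(a.begin(), a.begin() + std::min(cut, a.size()));
    out.insert(out.end(), b.begin(), b.end());
    return out;
}
```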

---
From: Krzysztof M. <gu...@tm...> - 2001-10-12 11:00:39

hi

I tried to use the library, but I ran into a simple problem with a mixer
object. My chain looks very simple:
mp3stream1 mp3stream2
| |
| |
+------+-------+
|
mixer
|
output
Everything goes OK, but there is one problem. Let's say that we are at
position 3000 in the mixer. Now I would like to replace mp3stream2 in the
chain and set a new stream: mp3stream3. From this moment mp3stream1
should process from offset 3000, and mp3stream3 from offset 0. How can I
control the offsets of items in the chain, not only the last object in
the chain?

Thanks for the reply.
Gustaw
--
.-Post sent on: Oct 12, 2001
| By: Krzysztof Modzelewski |
| gu...@tm... alt//:km...@po... |
`________Linux/Web Developer______________'

---
From: Bruce F. <for...@tn...> - 2001-10-12 03:56:30

Jin-hyuk, Jong wrote:
> Hi all,
> Can I implement an equalizer with this library?
> If it's possible, which class is needed?
> And how?

There is no equalizer class in the library. There is a Butterworth filter class, aflibAudioBWFilter. You might be able to use this multiple times in a BandPass mode to serve as an equalizer, but I don't think this would be very efficient.

Bruce Forsberg
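A graphic equalizer along the lines Bruce suggests is usually built from a bank of second-order band-pass sections, one per frequency band, whose outputs are scaled and summed. The sketch below is generic DSP, not an OSALP class; the coefficients follow the widely used Audio EQ Cookbook band-pass formulas (constant 0 dB peak gain).

```cpp
#include <cmath>

// One second-order band-pass section (biquad). A bank of these at
// different center frequencies, with per-band gains applied to the
// outputs before summing, approximates a graphic equalizer.
class BandPassBiquad {
public:
    BandPassBiquad(double sample_rate, double center_hz, double q)
    {
        const double kPi = 3.14159265358979323846;
        double w0    = 2.0 * kPi * center_hz / sample_rate;
        double alpha = std::sin(w0) / (2.0 * q);
        double a0    = 1.0 + alpha;
        b0_ =  alpha / a0;                 // Audio EQ Cookbook BPF,
        b2_ = -alpha / a0;                 // constant 0 dB peak gain
        a1_ = -2.0 * std::cos(w0) / a0;
        a2_ = (1.0 - alpha) / a0;
    }

    // Process one sample (direct form I).
    double process(double x)
    {
        double y = b0_ * x + b2_ * x2_ - a1_ * y1_ - a2_ * y2_;
        x2_ = x1_; x1_ = x;
        y2_ = y1_; y1_ = y;
        return y;
    }

private:
    double b0_, b2_, a1_, a2_;
    double x1_ = 0, x2_ = 0, y1_ = 0, y2_ = 0;
};
```

As Bruce notes, running many such sections per sample is not especially efficient, but it is the conceptually simplest route.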

---
From: Jin-hyuk, J. <nt...@dr...> - 2001-10-10 08:41:49

Hi all,

Can I implement an equalizer with this library? If it's possible, which class is needed? And how?

Thanks in advance.