How much time you have for this and how much programming experience you
already have are crucial to any decision. When does the project run from
and until? I'm asking because it's important to complete something and
write it up; that matters more than taking on 'the most ambitious goals
possible'. You must think about what your write-up will look like from
day one. Your answers on time and experience will help you and us decide
what to tackle.
I've frequently seen developers try to go for too fine-grained
parallelism. For example, in an animation it is easier to farm out each
frame to a different process than to split one frame between a large
number of processes. To me the simpler division of work is just good
sense: it's smarter to get a good outcome with less work.
I would suggest you target audio effects on long audio tracks. So if
there is a two-hour recording, you process four half-hour segments (plus
a small overlap) in parallel, for example. With this approach you can
probably attack most of the built-in effects at the same time, since
most are derived from the same base class.
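To make the segment idea concrete, here is a minimal sketch, not actual Audacity code: the names applyEffect and processInSegments are hypothetical. Each worker may read a small overlap before its segment (context that effects with memory, such as filters, would need) but writes only its own disjoint range, so the threads do not race.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Stand-in effect: a simple gain applied in place. A real effect would
// also use the read-only context [ctxBegin, begin) to warm up its state.
static void applyEffect(std::vector<float>& samples, std::size_t ctxBegin,
                        std::size_t begin, std::size_t end) {
    (void)ctxBegin;                 // unused by this stateless effect
    for (std::size_t i = begin; i < end; ++i)
        samples[i] *= 0.5f;
}

// Split the buffer into nSegments disjoint write ranges, each with up to
// `overlap` samples of read-only context, and process them in parallel.
void processInSegments(std::vector<float>& samples, unsigned nSegments,
                       std::size_t overlap) {
    const std::size_t n = samples.size();
    const std::size_t segLen = (n + nSegments - 1) / nSegments;
    std::vector<std::thread> workers;
    for (unsigned s = 0; s < nSegments; ++s) {
        std::size_t begin = std::min<std::size_t>(s * segLen, n);
        std::size_t end = std::min(begin + segLen, n);
        std::size_t ctxBegin = begin > overlap ? begin - overlap : 0;
        workers.emplace_back(applyEffect, std::ref(samples),
                             ctxBegin, begin, end);
    }
    for (auto& t : workers)
        t.join();
}
```

Stitching is trivial here because the write ranges are disjoint; an effect that genuinely rewrites its overlap region would instead work on a private copy and copy back only its own range.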
I would put effort into metering: code for measuring the speed with and
without parallelism, code to show you graphically how close you are to
being I/O bound, and so on. The results of this will all go straight
into the write-up [no gain whatsoever on Reverse or Amplify, significant
gains on noise removal, changing pitch without changing speed, etc.].
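A minimal metering sketch, assuming nothing about Audacity's code: time the same CPU-bound workload once sequentially and once split across threads, and report the ratio. The functions work, seconds, and measureSpeedup are all illustrative names.

```cpp
#include <chrono>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Some CPU-bound work on a half-open range of a buffer.
static void work(std::vector<double>& data, std::size_t begin,
                 std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        for (int k = 0; k < 50; ++k)
            data[i] = data[i] * 1.000001 + 0.000001;
}

// Wall-clock seconds taken by a callable.
template <typename F>
double seconds(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Returns sequential time divided by parallel time; > 1 means the
// threaded version was faster.
double measureSpeedup(std::size_t n, unsigned nThreads) {
    std::vector<double> a(n, 1.0), b(n, 1.0);
    double tSeq = seconds([&] { work(a, 0, n); });
    double tPar = seconds([&] {
        std::vector<std::thread> ts;
        std::size_t chunk = n / nThreads;
        for (unsigned i = 0; i < nThreads; ++i)
            ts.emplace_back(work, std::ref(b), i * chunk,
                            i + 1 == nThreads ? n : (i + 1) * chunk);
        for (auto& t : ts)
            t.join();
    });
    return tSeq / tPar;
}
```

Running this on an I/O-free workload gives you the upper bound; comparing it against the same measurement on a real effect shows how much of the time is disk-bound.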
I would be inclined not to go down the OpenMP road. Partly that's
because the free versions of MSVC do not support its pragmas, and we do
not want to require Audacity developers to have the Pro version. Partly
it's because I think you give up too much control over the parallelism.
I'm somewhat shocked that you have to go to Cluster OpenMP to get the
option of sharing work across machines. Once you have a way to farm out
time segments to different threads, farming them out to different
machines should not be so very difficult.
Have a look too at the scripting module. If you want to automate tests,
it's worth getting familiar with it.
Looking forward to hearing more,
On 17/08/2010 10:52, Brian McKenna wrote:
> Hi everyone,
> I've started a parallel programming unit at University. Our major
> project is to take a program that is largely sequential and improve
> its execution time by making certain parts run in parallel. We get
> marked on how significantly we can improve the speed and I thought
> that some parts of audio processing are inherently parallel so it'd be
> nice to work on Audacity.
> I've been looking at each built-in effect to figure out the amount of
> data dependencies. So far, it looks like anything that has low data
> dependencies is largely I/O bound (Amplify, Invert, Reverse, etc) and
> most of the others have a large amount of dependencies.
> Are there any particular areas in Audacity that could use some
> parallelism? Or am I looking at the wrong project entirely?
> Also, what types of parallel technologies would be best to use in
> Audacity? Sadly, it looks like GPGPU libraries aren't very widely
> available so they won't be able to benefit a lot of people. OpenMP
> seems like it could be viable but would it be preferred to just use a
> simple thread pool?
> Brian McKenna