I have written some code which adds support for large data samples
(as described in bug #222865) for sweep.
This means that we can now work with sample files larger than the
available virtual memory. The main concept is to allocate small memory
buffers (windows) and keep all sample data in a swap file on the filesystem.
This requires changing the way sample data is allocated and
accessed, so I've designed a simple API for doing this.
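To make the windowed-buffer idea concrete, here is a minimal, self-contained C sketch of what such an API could look like. This is NOT the actual API from the patch; all names here (`sample_store`, `store_get`, `store_set`, `WINDOW_FRAMES`, etc.) are hypothetical, and a temporary file created with `tmpfile()` stands in for the real swap file. The idea is the same, though: only one small window of frames lives in memory, and it is flushed to and reloaded from the swap file as access moves around.

```c
#include <stdio.h>
#include <stdlib.h>

#define WINDOW_FRAMES 1024  /* frames held in memory at a time (hypothetical size) */

typedef struct {
    FILE *swap;                   /* backing swap file on the filesystem */
    long nr_frames;               /* total frames stored */
    float window[WINDOW_FRAMES];  /* the one in-memory window */
    long win_start;               /* first frame currently in the window */
    int win_valid;                /* does the window hold real data? */
    int win_dirty;                /* does the window need writing back? */
} sample_store;

/* Create a store backed by a swap file, pre-extended to full size. */
static sample_store *store_new(long nr_frames)
{
    sample_store *s = calloc(1, sizeof *s);
    float zero = 0.0f;

    s->swap = tmpfile();          /* real code would handle failure here */
    s->nr_frames = nr_frames;
    s->win_start = -1;
    fseek(s->swap, (nr_frames - 1) * (long)sizeof(float), SEEK_SET);
    fwrite(&zero, sizeof zero, 1, s->swap);
    return s;
}

/* Write the window back to the swap file if it was modified. */
static void store_flush(sample_store *s)
{
    if (s->win_valid && s->win_dirty) {
        fseek(s->swap, s->win_start * (long)sizeof(float), SEEK_SET);
        fwrite(s->window, sizeof(float), WINDOW_FRAMES, s->swap);
        s->win_dirty = 0;
    }
}

/* Make sure the window covering `frame` is loaded into memory. */
static void store_map(sample_store *s, long frame)
{
    long start = (frame / WINDOW_FRAMES) * WINDOW_FRAMES;

    if (s->win_valid && s->win_start == start)
        return;                   /* already mapped: no disk I/O needed */
    store_flush(s);
    fseek(s->swap, start * (long)sizeof(float), SEEK_SET);
    fread(s->window, sizeof(float), WINDOW_FRAMES, s->swap);
    s->win_start = start;
    s->win_valid = 1;
}

static float store_get(sample_store *s, long frame)
{
    store_map(s, frame);
    return s->window[frame - s->win_start];
}

static void store_set(sample_store *s, long frame, float v)
{
    store_map(s, frame);
    s->window[frame - s->win_start] = v;
    s->win_dirty = 1;
}
```

Sequential access stays cheap because consecutive frames hit the same window; only a window change costs a flush and a read. A real implementation would of course keep several windows, check I/O errors, and pick the window size to balance memory use against seek traffic.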
The code is more or less stable (I've tested it a bit) and works fine for me,
but it has some disadvantages: the plugin API needs to change slightly, and
refreshing the sample view takes a lot of disk I/O and time and should be
rewritten. Playing samples works well, though. I'm working on threaded
swap-file I/O, which should improve realtime access to sample data.
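The threaded swap-file I/O mentioned above could be structured as a simple producer/consumer queue: the audio/GUI thread enqueues write requests and returns immediately, while a background thread does the actual `fseek`/`fwrite`. The sketch below shows that pattern with pthreads; it is only an illustration under my own assumptions, not the code from the patch, and all names (`io_queue`, `io_queue_write`, etc.) are made up.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One queued request: write `len` floats at frame offset `start`. */
typedef struct io_req {
    long start;
    size_t len;
    float *data;            /* owned by the request, freed after writing */
    struct io_req *next;
} io_req;

typedef struct {
    FILE *swap;
    io_req *head, *tail;
    int shutdown;
    pthread_mutex_t lock;
    pthread_cond_t cond;
    pthread_t thread;
} io_queue;

/* Background thread: drain the queue, writing each request to the swap file. */
static void *io_worker(void *arg)
{
    io_queue *q = arg;

    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (!q->head && !q->shutdown)
            pthread_cond_wait(&q->cond, &q->lock);
        if (!q->head && q->shutdown) {   /* queue drained and told to stop */
            pthread_mutex_unlock(&q->lock);
            return NULL;
        }
        io_req *r = q->head;
        q->head = r->next;
        if (!q->head)
            q->tail = NULL;
        pthread_mutex_unlock(&q->lock);

        /* The slow disk I/O happens here, off the realtime thread. */
        fseek(q->swap, r->start * (long)sizeof(float), SEEK_SET);
        fwrite(r->data, sizeof(float), r->len, q->swap);
        free(r->data);
        free(r);
    }
}

static io_queue *io_queue_new(FILE *swap)
{
    io_queue *q = calloc(1, sizeof *q);

    q->swap = swap;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->cond, NULL);
    pthread_create(&q->thread, NULL, io_worker, q);
    return q;
}

/* Called from the realtime side: copies the data and returns immediately. */
static void io_queue_write(io_queue *q, long start, const float *data, size_t len)
{
    io_req *r = malloc(sizeof *r);

    r->start = start;
    r->len = len;
    r->data = malloc(len * sizeof(float));
    memcpy(r->data, data, len * sizeof(float));
    r->next = NULL;

    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = r;
    else
        q->head = r;
    q->tail = r;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Drain remaining requests, stop the worker, and flush the swap file. */
static void io_queue_stop(io_queue *q)
{
    pthread_mutex_lock(&q->lock);
    q->shutdown = 1;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
    pthread_join(q->thread, NULL);
    fflush(q->swap);
}
```

With something like this, the caller never blocks on disk; the worker drains any pending writes before exiting, so nothing queued is lost. Reads that miss the in-memory windows would still need care (e.g. waiting for pending writes to that region), which is exactly the kind of detail the real threaded implementation has to solve.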
The attached diff contains the implementation of large data support, with many
fixes to the sweep sources for the large-data API, and a draft of the
documentation (in the doc directory). I've also patched some sweep plugins:
normalise and
The patch is huge (about 52K), so I decided to gzip it.
Let's test it; I'm waiting for feedback, whether you like it or not.
I'll add more detailed information about how it works to the
documentation when it is ready.