SKoT McDonald <skot@...> wrote:
[ Would like to open the floor for comments regarding a batch of
[ broom-sweeping changes to Snd I'd like to implement.
I've never really looked at the MusicKit or SndKit (surprising, I know ;-),
but if you're prepared to make some sweeping changes, I hope you'll be willing
to consider some different approaches than what you've outlined so far. Of
course, everything I've written below is open for comment as well.
You had four major points, so I'll try to summarize them briefly rather than
fully quote them (otherwise this message might be harder to follow):
1) The Snd class currently encapsulates a SndSoundStruct, so it seems you're
suggesting changing this to a new SndFormatStruct.
* What would be the advantage of a new structure? I hope to make a case for
enhancing the Snd object rather than simply defining a new struct. I generally
think it is a bad idea to expose any kind of structure, because you're stuck
with it. If you instead expose accessor methods in your class design, then
subclasses can modify things as needed, rather than having to compute all the
fields in the structure at creation time.
2) You suggest using the existing SndAudioBuffer class within the Snd object
(I guess Snd currently just uses the raw data from its encapsulated
SndSoundStruct).
* This seems like a step in the right direction, but I wonder if there is an
even better way to abstract the sound data.
Also, what about streams? Does the Snd object currently handle non-file data
- e.g. coming from a socket? Is there any way it could?
3) It seems that you're suggesting an added flag to distinguish sound data
that is manually cached (using code that has yet to be written) from a large
file on disk, with capabilities for look-ahead. You point out that this would
prevent random prowling through the sound data (which can be done with the
current design), since some sections may not be cached yet.
* I strongly suggest that you not implement your own caching. This is best
handled by the operating system folks. A minor hitch is that it may be
different for each operating system, but that seems far better than writing
your own caching layer.
For example, NEXTSTEP (and Mac OS X) has very handy memory mapping of files.
You simply tell the system the file name and it will effectively give you a
memory address back. You can prowl through the largest of files at random, and
the system will handle paging efficiently for you. This doesn't directly
support look-ahead, but if your OS-specific code asks the system for the paging
size, all you would need to do is touch one byte in the next page during
non-interrupt time to increase the likelihood that there won't be a page fault
at the last minute when the sound data is needed for playback.
In other words, on the primary OS (I'm assuming that its NEXTSTEP lineage
means MusicKit will feel most at home on Mac OS X ;-), with the right calls,
the caching behavior you want to happen will be automatic. While the current
Snd object may actually be reading the whole file into memory the long way, it
should be rather simple to change the code so it merely maps the file virtually
without actually reading anything (other than stat on the file's inode).
Individual pages (I think they're 8 kbytes these days) will be read on demand,
whether the sound is accessed sequentially or at random, and no data is read
from the file until you actually touch the memory address it is mapped to.
This won't be directly useful for compressed formats, so your Snd object
re-design might reflect some efficient mechanism for obtaining the uncompressed
data from an object that represents the actual compressed data. I don't know,
but in some cases it might be possible to decompress less than the whole file,
e.g. if the compressed format is frame-based.
SUMMARY for 1 - 3:
I see a number of distinct goals that I think are important, so I'll start by
listing them and then see if I can describe an object design that meets these
goals:
1) sound data should be available in as abstract a format as possible,
regardless of the original file format, or whether it was compressed - to me,
this means having a pointer to the data, an indication of its format and
length, and a few other variables like sample rate.
2) any support in the operating system for virtual memory mapping of files
should be utilized where possible, but I don't see any way to handle compressed
files without some indirection - fortunately, an improved Snd class could
handle this more abstractly than a new struct.
3) in-memory structures should probably match the file format, although this
opens up endian mismatches, and can really only be considered an advantage for
uncompressed files - classes can hide this as much as possible.
4) an abstract sound class should be designed which provides everything needed
by the rest of the MusicKit/SndKit, passing through raw data from mapped files
where possible, and translating the data where this isn't possible.
Subclasses of this abstract class would handle specific file formats as well as
basic in-memory sounds.
So, rather than create SndFormatStruct as an improvement on SndSoundStruct, I
would suggest reworking the Snd class so that it could be subclassed instead of
creating a new structure with fields that cannot be subclassed to behave
differently. Just like the NSString class cluster delivers instances of
specialized classes that handle different kinds of strings in the most
efficient manner for that type of string, the same class cluster could be
developed for the SndKit.
My primary goal would be that in the best case - where an uncompressed file is
selected for playback - the Snd object would have a subclass for that specific
file format which maps the entire file into virtual memory without actually
reading anything yet. The -play: method of the resulting Snd instance (probably
a generic method inherited from the master Snd class) would ask for the
address of the sound data using an accessor method, and you might even be able
to pull off look-ahead through a smart implementation of callbacks and a little
computation to predict which byte to touch to force the page fault ahead of
time (but this would be done from user code rather than in any interrupt
callback that might suffer if a page fault occurred).
The only code I ever wrote along these lines was a delegate for the
NXSoundDevice, NXSoundOut, and NXPlayStream group of classes (yes, I'm showing
my age by mentioning NX* class names!), but I am not familiar with whether Snd
has a similar delegate and/or callback. A quick glance at -[Snd play:] shows
notification functions, but I only see begin and end. It would be ideal to
have access to the individual buffers as the sound is being delivered to the
hardware, with a running callback which could be implemented to handle prepping
for the next buffer.
When you don't have the best case - e.g. a compressed file, or maybe sound
data which has just been recorded but not saved to any file yet - there would
be a subclass of Snd which handled the sound data in memory. Again, virtual
memory is your friend here, but I don't see any way to avoid running through
the entire compressed file data to create an uncompressed copy that could be
returned from the public accessor method that is supposed to return the sound
data. Also, any editing of sound data would probably end up pulling in the
entire sound file.
QUESTION: Is there any way in Darwin to hook into the virtual memory system,
or maybe provide a custom file system, such that page faults could be
redirected to go through user code which pulls the data from somewhere other
than a "real" file? I know that NEXTSTEP had a special filesystem for
compressing the swapfile which appeared to be an uncompressed image to the
virtual memory system but was really a much smaller file on the physical disk
(this was a big hit on performance, but some folks lacked disk space - we
wouldn't mind the performance hit, since decompression is unavoidable for these
formats anyway).
4) You suggest breaking Snd into categories, in particular, three files
covering about four functions each. I think you kept all the interfaces
together in one header, so only the implementations are separate.
* I really like the idea of using Objective C categories to reduce source file
size and therefore increase compile speeds during rapid development. Until
Objective C has single method compilation and incremental linking (I think
Microsoft was experimenting with this for C and maybe C++, but I don't live in
that world so I have no idea what made it out the door), the best way for now
is to break up an object into categories. There is a point of diminishing
returns when using this tactic. It's not always worth the trouble to split
pieces of code into a new category. I usually sort my .m source by file size,
and concentrate on the largest chunks of source code in a project. There often
tend to be one or two objects that are an order of magnitude larger than the
others. Once you break these down to a certain point, you don't really want to
take the time to go much further.