The sample player turns each laptop into an instrument. The basic function is to play .wav files.
How are the sounds triggered? Most likely there will be a way to select a bank of sounds (e.g. typing a number key) and a way to play a sound (e.g. typing an alphabetic character). The interactive control is part of this project (not part of the performance interface), so you should coordinate with the performance interface group to integrate your control software with their graphical interface (and if you have graphical components, coordinate on screen layout as well).
The sample player should operate as a JavaSound Synthesizer, taking MIDI messages in and playing sounds. (It would also be possible to have a non-MIDI protocol between the performance gestures -- typing keys, using the mouse -- and the synthesizer, but using MIDI might be helpful for testing and for making the protocol clear.)
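To make the MIDI-in idea concrete, here is a minimal sketch of a JavaSound Receiver that could sit at the front of the sample player. The class name and the lastStarted/lastStopped fields are illustrative assumptions, not the actual project code; a real implementation would trigger .wav playback instead of recording note numbers.

```java
import javax.sound.midi.MidiMessage;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;

// Sketch only: a Receiver that dispatches NOTE_ON/NOTE_OFF messages.
public class SamplePlayerReceiver implements Receiver {
    // Recorded for illustration; a real player would start/stop samples here.
    int lastStarted = -1;
    int lastStopped = -1;

    @Override
    public void send(MidiMessage message, long timeStamp) {
        if (!(message instanceof ShortMessage)) {
            return;
        }
        ShortMessage sm = (ShortMessage) message;
        int note = sm.getData1();
        int velocity = sm.getData2();
        if (sm.getCommand() == ShortMessage.NOTE_ON && velocity > 0) {
            lastStarted = note;      // start the sample for this note
        } else if (sm.getCommand() == ShortMessage.NOTE_OFF
                || sm.getCommand() == ShortMessage.NOTE_ON) {
            // NOTE_ON with velocity 0 is conventionally a note-off
            lastStopped = note;      // ramp the note's envelope to zero
        }
    }

    @Override
    public void close() {
        // no resources to release in this sketch
    }
}
```

A Transmitter on the GUI side would then deliver its messages to this Receiver via send().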
The player must have control over duration, e.g. determined by how long you hold a key. Duration is controlled by multiplying the sample stream by an envelope that ramps to zero when the "note" stops.
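The envelope idea can be sketched as a simple in-place fade: when the note stops, the remaining samples are multiplied by a linear ramp from 1.0 down to 0.0 so the sound fades out rather than clicking off. The class and method names here are illustrative.

```java
// Sketch of the release envelope described above.
public class ReleaseEnvelope {
    // Multiply the buffer by a linear ramp: 1.0 at the start, 0.0 at the end.
    public static void applyRelease(float[] samples) {
        int n = samples.length;
        for (int i = 0; i < n; i++) {
            float gain = 1.0f - (float) i / n;  // decreasing gain
            samples[i] *= gain;
        }
    }
}
```

A real player would apply this only to the tail of the stream after note-off, with a ramp length short enough to feel immediate (a few milliseconds).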
Volume control would be nice.
Other features, such as filters or effects, can be added, but keep in mind that the player must be very simple for others to learn and operate.
The GUI has been integrated with the Performer UI. Running their code will initialize and start up the Sample Player.
The Player has a Bank of Instruments, where an instrument contains a mapping from note names to the absolute paths of the .wav files.
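The bank structure described above might look like the following sketch, where an Instrument is a note-name-to-path map. All class names, note names, and paths here are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: an Instrument maps note names to absolute .wav paths.
public class Instrument {
    private final Map<String, String> noteToPath = new HashMap<>();

    public void addNote(String noteName, String wavPath) {
        noteToPath.put(noteName, wavPath);
    }

    // Returns the .wav path for a note, or null if the note is unmapped.
    public String pathFor(String noteName) {
        return noteToPath.get(noteName);
    }
}
```

A Bank could then simply be a Map from instrument names to Instrument objects, and multiple banks a list of such maps.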
Currently we have two teams, one working on the GUI and the other working on the back end sound player.
Playing a sound is implemented with a KeyListener and currently just plays a sound based on the ASCII value of the key.
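The current behavior could be sketched as below: the note number is taken directly from the key's character code, so each key maps to a fixed sample. The listener class and the commented-out player calls are illustrative, not the actual project code.

```java
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

// Sketch of the KeyListener-based triggering described above.
public class KeyboardInput extends KeyAdapter {
    // Pure mapping from a typed character to a note number.
    public static int noteFor(char keyChar) {
        return keyChar;  // the character code itself, e.g. 'a' -> 97
    }

    @Override
    public void keyPressed(KeyEvent e) {
        int note = noteFor(e.getKeyChar());
        // player.start(note);   // hypothetical back-end call
    }

    @Override
    public void keyReleased(KeyEvent e) {
        // player.stop(noteFor(e.getKeyChar()));   // hypothetical
    }
}
```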
Kris-
Construct basic GUI
Capture Key Presses/Releases
Jeff-
Expand on GUI controls
Map Key Presses/Releases to MIDI messages
Mike-
Set up wav player and threading
Implement note duration detection
Mo-
Create sound bank
Interaction between GUI and wav player
We have a minimal GUI which displays the current note being played
(based on the key currently being pressed). The GUI also includes a
spinner for setting note velocity, and an attribute selector (currently
just a placeholder until we get tighter integration with the Sound Design
team's output). It then generates MIDI messages and sends them to the
back-end.
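The message-generation step could be packaged as below, using JavaSound's ShortMessage to carry the note and the velocity from the spinner. The channel choice and class name are assumptions.

```java
import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.ShortMessage;

// Sketch: build the MIDI messages the GUI hands to the back-end.
public class MidiOut {
    private static final int CHANNEL = 0;  // assumed single channel

    public static ShortMessage noteOn(int note, int velocity)
            throws InvalidMidiDataException {
        return new ShortMessage(ShortMessage.NOTE_ON, CHANNEL, note, velocity);
    }

    public static ShortMessage noteOff(int note)
            throws InvalidMidiDataException {
        return new ShortMessage(ShortMessage.NOTE_OFF, CHANNEL, note, 0);
    }
}
```

On key press the GUI would send noteOn with the spinner's velocity; on key release it would send the matching noteOff.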
The back-end has functionality for loading different instruments into a
bank, and even supports multiple banks of instruments. When a keypress is
registered by the GUI, a note is generated corresponding to the key's pitch
on the current instrument. That note is then played until the GUI detects
that the key has been released.