Hello Folks!

In an earlier post I mentioned that I had managed to display video in a PyQt4 widget by converting a numpy array into a QImage with qimage2ndarray.
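(For reference, that conversion is roughly the following, where frame is the RGB numpy array for one video frame and videoLabel is just whatever QLabel I happen to be drawing into:)

import qimage2ndarray
from PyQt4.QtGui import QPixmap

qimg = qimage2ndarray.array2qimage(frame)           ## frame: RGB numpy array for one frame
videoLabel.setPixmap(QPixmap.fromImage(qimg))       ## videoLabel: the QLabel used as the video surface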

Now I'm trying to play the audio from the video's audio stream, and I'm running into another headache; I can't find many tutorials on how to use cgkit.mediafile for this. Any suggestions? My initial attempt looks something like the following, but it produces no sound at all. I'm not worried about syncing or playing at the proper speed yet; I just want to get some noise through my speakers!

Best regards,
Tim Grove

import cgkit.mediafile
from PyQt4.QtCore import QBuffer, QIODevice, QByteArray
from PyQt4.QtMultimedia import QAudioFormat, QAudioOutput, QAudioDeviceInfo

## reader = a cgkit.mediafile.Media_Read object (already opened on the video file)
## stream = the cgkit.mediafile.AudioStream object I want to play

format = QAudioFormat()
format.setSampleRate(stream.sampleRate)
format.setChannelCount(stream.numChannels)

## I'm unsure of the right values for the next four lines - these are example values taken from the web
format.setCodec("audio/pcm")
format.setSampleSize(16)
format.setByteOrder(QAudioFormat.LittleEndian) 
format.setSampleType(QAudioFormat.SignedInt)
       
## The 'theory' behind the following is to read the audio data into a buffer and play it
## back through my system's (Windows 7, 32-bit) default audio device
output = QAudioOutput(QAudioDeviceInfo.defaultOutputDevice(), format)
buffer = QBuffer()
buffer.open(QIODevice.ReadWrite)
for data in reader.iterData([stream]):
    ## data.samples holds the decoded samples for this chunk (a numpy array, I believe)
    buffer.write(QByteArray(data.samples.tostring()))
buffer.seek(0)          ## rewind so playback starts at the beginning of the buffer
output.start(buffer)    ## start playback once, after the buffer has been filled

## finished - stop playback and release everything
output.stop()
buffer.close()
reader.close()
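For completeness, here is a second sketch I have been toying with: a "push mode" variant that writes the decoded samples straight to the QIODevice returned by output.start(), instead of going through a QBuffer. The file name, the reader.audioStreams attribute, and the assumption that data.samples is a numpy array of signed 16-bit samples are all guesses on my part, so please treat it as a sketch rather than working code:

import cgkit.mediafile
from PyQt4.QtCore import QCoreApplication
from PyQt4.QtMultimedia import QAudioFormat, QAudioOutput, QAudioDeviceInfo

app = QCoreApplication([])                       ## not needed when running inside an existing Qt app

reader = cgkit.mediafile.open("movie.avi")       ## placeholder file name
stream = reader.audioStreams[0]                  ## assuming the streams are exposed like this

fmt = QAudioFormat()
fmt.setSampleRate(stream.sampleRate)
fmt.setChannelCount(stream.numChannels)
fmt.setCodec("audio/pcm")
fmt.setSampleSize(16)
fmt.setByteOrder(QAudioFormat.LittleEndian)
fmt.setSampleType(QAudioFormat.SignedInt)

output = QAudioOutput(QAudioDeviceInfo.defaultOutputDevice(), fmt)
device = output.start()                          ## push mode: start() without a device returns a QIODevice

for data in reader.iterData([stream]):
    ## assuming data.samples is a numpy array of int16 samples
    device.write(data.samples.tostring())
    QCoreApplication.processEvents()             ## keep the event loop turning while we push data

output.stop()
reader.close()

A real implementation would of course have to respect output.bytesFree() and pace the writes, but if even this made a noise I would at least know the sample format is right.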