Thread: [libdc] frames_behind doesn't seem reliable
Capture and control API for IIDC compliant cameras
Brought to you by: ddouxchamps, gordp
From: Michael C. <cr...@wa...> - 2008-04-16 02:14:23
According to the libdc1394 2.x FAQ (http://damien.douxchamps.net/ieee1394/libdc1394/v2.x/faq/) the frames_behind field of a captured frame should contain the number of frames ready to be copied from the capture ring buffer. Specifically, the question "How can I find out if a frame was dropped ..." states that one can test for a (likely) dropped frame by testing frames_behind for equality with the ring buffer size less one.

I am capturing from an AVT Guppy camera at 50 fps using libdc1394 2.0.1 on Mac OS X (both Tiger and Leopard, Intel Core Duo) with a ring buffer size of 50 frames. I am both displaying the captured frames on screen (via the SDL library) and saving them to hard disc as raw video data (using open/write calls) while the system is under load (dirty great big compiles that exercise both CPU and disc access) to verify that the capture software can detect dropped frames. However, frames_behind only ever increments up to 47, which is the ring buffer size less three! Nevertheless, dropped frames are quite noticeable in both the preview window and the saved video stream. This behaviour seems to be at odds with the documentation.

The capture code is also compilable under Linux, and when I get a chance to take a camera to a Linux computer I can run a test there to see whether the same behaviour occurs.

Cheerz
Michael
From: David M. <dcm@MIT.EDU> - 2008-04-16 03:07:28
On Wed, 2008-04-16 at 14:14 +1200, Michael Cree wrote:
> According to the libdc1394 2.x FAQ (http://damien.douxchamps.net/ieee1394/libdc1394/v2.x/faq/)
> the frame_behind field of a captured frame should contain the number
> of frames ready to be copied from the capture ring buffer.
> Specifically the question "How can I find out if a frame was
> dropped ..." states that one can test for a (likely) dropped frame by
> testing frames_behind equality to ring buffer size less one.

Yes, as you've discovered, that advice is mostly useless. It was true when libdc1394 only had one back end, but now that it has three, we can't really make any guarantees about the exact value of frames_behind when the queue is full. In general, though, if frames_behind is "large", your app has a bottleneck and should be streamlined.

There are several reasons a frame might be dropped:

1. The camera doesn't send it. In this case, we can't detect the drop at all, unless the camera has some extra features that enable a frame counter or something like that.

2. An iso packet is corrupt or missing, or a buffer overrun occurs during DMA transfer. In 2.0.1 on Mac OS X, all these cases will result in a visibly corrupted frame. In the latest SVN on Mac OS X, you can weed out these visibly corrupted frames using dc1394_capture_is_frame_corrupt(). This feature will make its way to Linux eventually. However, currently, for each corrupted frame detected in this fashion, you may actually be dropping _two_ consecutive frames. This will also hopefully be remedied in the future.

3. Your frame queue fills. frames_behind will help you detect this case. Generally, if frames_behind starts to get anywhere larger than half of the total buffers, you should think about removing latency in your app or making the queue larger. Once the queue overflows, we can't detect how many frames have been dropped.

Developing a mechanism to actually count the dropped frames is not out of the question here, but I'm not sure that we could make it cross-platform.

-David
From: Michael C. <cr...@wa...> - 2008-04-16 23:35:32
On 16/04/2008, at 2:57 PM, David Moore wrote:
> Yes, as you've discovered, that advice is mostly useless. It was true
> when libdc1394 only had one back end, but now that it has three, we
> can't really make any guarantees about the exact value of frames_behind
> when the queue is full. In general though, if frames_behind is "large",
> your app has a bottleneck and should be streamlined.

Thanks for the quick reply. May I therefore suggest that the advice on the website be updated to reflect the current situation. It would save people like myself from mucking around for a day trying to test for the wrong thing.

For scientific applications (such as the work I am involved in) dropped frames are critical. In our particular applications we can ultimately detect missed or corrupted frames by analysing the captured video sequence (usually off line), using the fact that a frame can be reasonably deterministically predicted from a sequence of prior frames. We end up wasting time writing clever software to detect corruptions of the video sequence when it would be much nicer if the hardware and camera software drivers could report a missed frame, or at the very least the onset of missed frames. Then we would know immediately that we need to treat that particular video sequence with caution.

> 3. Your frame queue fills. frames_behind will help you detect this
> case. Generally, if frames_behind starts to get anywhere larger than
> half of the total buffers, you should think about removing latency in
> your app or making the queue larger.

Yes and no. If the maximum latency in the system is well characterised it may be possible to let the buffer fill to, say, 90%, because you know that longer delays in removing frames from the buffer will never occur. Of course, on a modern OS (such as Linux or Mac OS X) guaranteeing the frequency and maximal length of latencies is generally not possible. You also say that if the buffer half fills, making the queue larger is an option. But if the ongoing latencies are such that, on average, the removal of frames is slower than the capture of frames into the buffer, then no size of buffer is any use for continuous capture of video data: the buffer will overflow at some point.

I tried the code out on a Linux machine (rather old: Alpha CPU, 667 MHz clock) at the 50 fps capture rate that I wanted, but it couldn't keep up under any circumstance; the buffer always overflowed. It could keep up at 30 fps. I found that frames_behind on this system increased to the number of buffers less two and no more. So even for the Linux system the advice on the website is incorrect (and I presume it was written for the Linux drivers in the first place).

I will take your advice and test for the buffer growing to half its size. The system for which this is intended is fast enough that this should never occur under normal conditions.

Michael.
From: Johann S. <j.s...@ir...> - 2008-04-17 22:53:46
Michael Cree wrote:
> I tried the code out on a Linux machine (rather old, Alpha CPU, 667
> MHz CPU clock) and at the 50 fps capture rate that I wanted but it
> couldn't keep up under any circumstance. The buffer always
> overflowed. It could keep up at 30 fps. I found that frames_behind on
> this system increased to number of buffer less 2 and no more. So even
> for the Linux system the advice on the website is incorrect (and I
> presume it was written for the Linux drivers in the first case).

Hi there Michael

We had exactly the same latency/dropped-frames problem in our project. I also observed that frames_behind increases to a maximum of 2 less than the buffer size, so you are not mad (unless I am too). Damien Douxchamps is always very happy to accept patches to the libdc1394 FAQs.

David Moore is right in that there does not seem to be an ideal solution apart from setting a large buffer size and raising the alarm if it fills beyond a predetermined threshold. Memory is cheap nowadays.

We found that the largest latencies were introduced by the kernel filesystem when saving images to disk and the disk buffers filling up. Then everything stops periodically while the backlog is flushed to disk. One can sometimes make a dramatic difference to the maximum latency by forcing regular disk flushing, for example with C code like:

    fsync(fileno(imagefile));

after each image is written (just before its file is closed).

Letting the buffer overflow is a bad idea, not only because of dropped frames, but also because we have found that it sometimes caused other random instabilities which we would rather do without. Later versions of libdc1394 or the Linux kernel may be more stable.

In our application missing frames are not too serious, so I simply flush the buffers when I detect impending overflow. I include some of my C code below for what it's worth. It's for version 1.x of libdc1394 and Camwire, but it should give you the idea.

    /* ------------------------------------------------------------------
       Checks the number of pending filled buffers and flushes some of
       them if there are too many.  This helps to ensure that frame
       numbers are accurately updated when frames are dropped.
       Otherwise buffer overflows may result in the user not knowing if
       frames have been lost.
    */
    static void manage_buffer_level(const Camwire_handle c_handle,
                                    FILE *logfile)
    {
        int total_frames, current_level, num_to_flush;

        camwire_get_num_framebuffers(c_handle, &total_frames);
        if (total_frames < 3) return;
        camwire_get_framebuffer_lag(c_handle, &current_level);
        current_level++;  /* Buffer lag does not count current frame. */

        /* It seems that the DMA buffers sometimes do not fill up
           completely, hence the extra -1 in the if expression below: */
        if (current_level >= total_frames - 1)
        {   /* Hit the ceiling. */
            num_to_flush = total_frames;
            if (camwire_flush_framebuffers(c_handle, num_to_flush,
                                           NULL, NULL) != CAMWIRE_SUCCESS)
                fprintf(stderr, "Could not flush all buffers in "
                        "manage_buffer_level().\n");
            if (logfile != NULL)
            {
                fprintf(logfile, "Frame buffers overflowed.  "
                        "Frame numbers may no longer be in synch.\n");
                fflush(logfile);
            }
        }
        else if (current_level + 0.5 >= BUF_HIGH_MARK*total_frames)
        {
            num_to_flush = current_level - BUF_LOW_MARK*total_frames;
            if (camwire_flush_framebuffers(c_handle, num_to_flush,
                                           NULL, NULL) != CAMWIRE_SUCCESS)
                fprintf(stderr, "Could not flush %d buffers in "
                        "manage_buffer_level().\n", num_to_flush);
        }
        /* else don't flush. */
    }

And I use it like this:

    #define BUF_LOW_MARK 0.1
    #define BUF_HIGH_MARK 0.9

or perhaps, depending on your application:

    #define BUF_LOW_MARK 0.5
    #define BUF_HIGH_MARK 0.5

    manage_buffer_level(c_handle, NULL);

or, with logging:

    manage_buffer_level(c_handle, logfile);

Best regards,
Johann
--
Johann Schoonees
Senior Engineer
Industrial Research Ltd, PO Box 2225, Auckland, New Zealand
Phone +64 9 9203679  Fax +64 9 3028106  www.irl.cri.nz
From: Michael C. <cr...@wa...> - 2008-04-20 22:16:45
Hi Johann!

> We had exactly the same latency/dropped frames problem in our
> project. I also observed the fact that frames_behind increases to a
> maximum of 2 less than the buffer size, so you are not mad (unless I
> am too).

Yeah, it seems to be two behind on Linux and three behind on Mac OS X.

> David Moore is right in that there does not seem to be an ideal
> solution apart from setting a large buffer size and raising the
> alarm if it fills more than a predetermined threshold. Memory is
> cheap nowadays.

True, memory is cheap, and one can increase the buffer to ridiculous sizes, but these answers don't inspire confidence in me that frames really are never ever being missed.

> We found that the largest latencies were introduced by the kernel
> filesystem when saving images to disk and the disk buffers filling
> up. Then everything stops periodically while the backlog is flushed
> to disk.

Yes, I had thought of that. On Mac OS X one can set up a block-contiguous disc file, and turn off caching, with an fcntl() call. I haven't done enough testing to establish whether that is any real advantage. A quick search of the Linux man pages (which are generally so poorly written that they are, at best, next to useless and, at worst, downright misleading) didn't reveal anything other than the fsync() call you suggest.

> In our application missing frames are not too serious, so I simply
> flush the buffers when I detect impending overflow.

In ours, missing frames pose a real problem. Indeed, knowing that each frame is captured at the correct instant is also becoming important. We are calculating the speeds of particles from their shift in position from frame to frame. If a frame is missed, that is ultimately detectable because there will be a spike in the particle's speed.

I have just, in the last couple of days, captured the first video sequence of an experimental run. There is an interesting oscillation in the speed of the particles calculated from the video sequence. There is good reason to believe that it is real and that the particles' speeds are truly oscillating about a fixed value: there is a small periodic force in the experiment. It would be really awesome, indeed, if we are measuring that periodic force. So now I want to be able to completely rule out the possibility that the frames are being captured at slightly irregular times. If the camera/FireWire interface/device drivers can't tell me that, then I am going to have to embark on a set of experiments to verify the regularity of frame capture.

Michael.
--
Dr Michael Cree, Senior Lecturer
Dept. Engineering, University of Waikato,
Private Bag 3105, Hamilton 3240, New Zealand
+64(7)8384301
cr...@wa...
From: Stefan R. <st...@s5...> - 2008-04-20 22:37:37
Michael Cree wrote:
> missing frames pose a real problem.

So you need a camera which timestamps the frames, a realtime¹ OS, and a realtime¹ storage system.

¹ guaranteeing an upper bound on latency

Actually you can't even use FireWire, because it guarantees either delivery or maximum latency, but not both together.
--
Stefan Richter
-=====-==--- -=-- =-=-=
http://arcgraph.de/sr/
From: David M. <dcm@MIT.EDU> - 2008-04-20 22:37:43
On Mon, 2008-04-21 at 10:16 +1200, Michael Cree wrote:
> In ours, missing frames pose a real problem. Indeed, knowing that the
> frame is captured at the correct instant is also becoming important.
> We are calculating speeds of particles by their shift in position from
> frame to frame. If a frame is missed then that is ultimately
> detectable because there will be a spike in the particle's speed.

For your application, I suggest relying on timestamps rather than assuming a frame rate. Even if no frames are dropped, there's no guarantee that your camera is keeping up with its advertised frame rate. If you rely on timestamps to compute your speeds, the computed value will be accurate regardless of any frame jitter or dropped frames.

The timestamps produced by libdc1394 under Mac OS are very accurate and should not exceed 1/8000th of a second of error. The timestamp tells you the time at which the first packet of a frame was received by the ieee1394 hardware. The Linux video1394 timestamps will have more jitter. Juju timestamps are currently non-existent, but when they are implemented they will match the accuracy of Mac OS.

Also, you should investigate whether your particular camera has an internal timestamping capability. This would transmit a timestamp for each frame with the frame data itself, and would be the most accurate of all the methods.

-David
From: Michael C. <cr...@wa...> - 2008-04-20 23:01:57
On 21/04/2008, at 10:37 AM, David Moore wrote:
> For your application, I suggest relying on timestamps rather than
> assuming a frame rate. Even if no frames are dropped, there's no
> guarantee that your camera is keeping up with its advertised frame
> rate. If you rely on timestamps to compute your speeds, the computed
> value will be accurate regardless of any frame jitter or dropped
> frames.

Thanks, David. That sounds like a great idea.

> The timestamps produced by libdc1394 under Mac OS are very accurate
> and should not exceed 1/8000th of a second of error.

That allows me to place an uncertainty on the calculated times. That is useful: it looks like it will be just under 1% precision (for 50 fps), which, while I would've liked better (don't we always :-), is still very useful to know.

> The timestamp tells you the time that the first packet of a frame was
> received by the ieee1394 hardware.

In other words, when the transmission of the frame data down the FireWire cable begins to be received by the computer???

> Also, you should investigate whether your particular camera has an
> internal timestamping capability. This would transmit a timestamp for
> each frame with the frame data itself, and would be the most accurate
> of all methods.

Nice suggestion. We are using an AVT Guppy F033B. I will have to investigate its capabilities further.

Michael.
--
Dr Michael Cree, Senior Lecturer
Dept. Engineering, University of Waikato,
Private Bag 3105, Hamilton 3240, New Zealand
+64(7)8384301
cr...@wa...
From: David M. <dcm@MIT.EDU> - 2008-04-20 23:35:26
On Mon, 2008-04-21 at 11:01 +1200, Michael Cree wrote:
> That allows me to place an uncertainty on calculated times. That is
> useful - it looks like it will be just under 1% precision (for 50fps),
> which while I would've liked better (don't we always :-) is
> still very useful to know.

The ieee1394 packet transfers happen in 8000 Hz increments, so that is the best precision that can be achieved without help from extra camera features.

> > The timestamp tells you the time that the first packet of a frame
> > was received by the ieee1394 hardware.
>
> In other words, when the transmission of the frame data down the
> firewire cable is begun to be received by the computer???

Yes, exactly. What's unknown is the latency between the actual frame capture and the start of transmission by the camera. You also have to hope that this latency is relatively constant and does not jitter.

-David