From: Tim R. <ti...@pr...> - 2014-01-03 18:48:29
Abhishek Madaan wrote:
> The reason we send the control transfer to start the frame is that we
> want to make sure that we always sync up with the frame. We make sure
> that FSYNC goes high and LSYNC goes high, then we activate the data.
> Once FSYNC goes low, we know that we got the full frame, and then we
> terminate the GPIF waveform at the firmware level.
>
> We also tried continuous video streaming, but we lost sync with the
> frame. Instead of terminating the GPIF waveform once we get the first
> frame, we reset it, so the waveform runs forever and continuously
> sends frame data to the host. Even though we make sure that the host
> is always receiving in a separate thread, we get only 58 FPS while
> our camera is running at 60 FPS. I believe this leads to a buffer
> overflow, which causes the frame to go out of phase.

You are describing a timing nightmare here. Have you tried drawing out
a timeline on a whiteboard? This is far more delicate than you think.

You transfer one frame, then you process it, then you send a control
request to trigger another. But given what you are saying, ALL of that
has to happen during the vertical blanking interval. If you take a
little too long, or if you hit an inconvenient context switch, your
control request will arrive just after the start of the next video
frame, in which case you'll drop that frame. Right?

Whereas, if the firmware sent a short packet after each end of frame,
you could queue up two or three frame read requests in your
application, and the firmware could send data continuously. You would
always have a request ready to receive the next frame, and the short
packet would let you get re-aligned to start-of-frame. THAT'S how
streaming video is done.
--
Tim Roberts, ti...@pr...
Providenza & Boekelheide, Inc.