Thanks Ryan, I might try that; I would need to flash a NAND with an updated kernel first.


jumpnow:

Pseudo Code:


*****
Collection Thread:
loop:
    read data from device
    if data is new then
        store data in shared global vars  (lock and unlock var mutex before & after)
end loop

*****
Buffer Thread:
loop:
    if time_elapsed > x then
        grab data from shared global vars  (lock and unlock var mutex before & after)
        place data in preallocated buffer
        if data has been grabbed 100 times then
            trip the write flag
end loop

*****
Write Thread:
loop:
    if write flag then
        copy buffer locally
        clear buffer
        write(local copy)
    sleep for a little bit before checking if the write flag is tripped again
end loop
*****

The purpose of the write thread is to let the OS decide how much CPU time to give to writing the data, in the hope that it can schedule the writes more intelligently than if they were done in the buffer thread, and to keep the write from blocking the buffer thread.
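
In case it helps, here is roughly what that design looks like in C with pthreads. It's a sketch, not my actual code: sample_t, read_device(), the timing constants, the batch mutex, and the log path are all placeholders.

/* Sketch only -- sample_t, read_device(), the timing constants and the
 * log path are placeholders, not the real code. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BATCH_SIZE 100

typedef struct { double vals[8]; } sample_t;     /* placeholder record */

/* latest sample, shared between the collection and buffer threads */
static sample_t shared;
static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

/* preallocated batch plus write flag, shared between buffer and write threads */
static sample_t batch[BATCH_SIZE];
static int write_flag;
static pthread_mutex_t batch_lock = PTHREAD_MUTEX_INITIALIZER;

/* stand-in for the real serial read (which would block until data arrives);
 * returns nonzero when the sample is new */
static int read_device(sample_t *s) { memset(s, 0, sizeof(*s)); return 1; }

static void *collection_thread(void *arg)
{
    sample_t s;
    for (;;) {
        if (read_device(&s)) {
            pthread_mutex_lock(&shared_lock);
            shared = s;                          /* store in shared global vars */
            pthread_mutex_unlock(&shared_lock);
        }
    }
    return NULL;
}

static void *buffer_thread(void *arg)
{
    int n = 0;
    sample_t s;
    for (;;) {
        usleep(5000);                            /* "time_elapsed > x" */
        pthread_mutex_lock(&shared_lock);
        s = shared;                              /* grab the shared data */
        pthread_mutex_unlock(&shared_lock);

        pthread_mutex_lock(&batch_lock);
        batch[n++] = s;                          /* place in preallocated buffer */
        if (n == BATCH_SIZE) {                   /* grabbed 100 times */
            write_flag = 1;                      /* trip the write flag */
            n = 0;                               /* assumes the writer drains in time */
        }
        pthread_mutex_unlock(&batch_lock);
    }
    return NULL;
}

static void *write_thread(void *arg)
{
    sample_t local[BATCH_SIZE];
    int go;
    for (;;) {
        go = 0;
        pthread_mutex_lock(&batch_lock);
        if (write_flag) {
            memcpy(local, batch, sizeof(local)); /* copy buffer locally */
            write_flag = 0;                      /* clear the buffer/flag */
            go = 1;
        }
        pthread_mutex_unlock(&batch_lock);

        if (go) {
            FILE *f = fopen("/media/card/data.bin", "ab");   /* placeholder path */
            if (f) {
                fwrite(local, sizeof(sample_t), BATCH_SIZE, f);
                fclose(f);
            }
        }
        usleep(10000);                           /* sleep before checking the flag again */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_create(&t[0], NULL, collection_thread, NULL);
    pthread_create(&t[1], NULL, buffer_thread, NULL);
    pthread_create(&t[2], NULL, write_thread, NULL);
    pthread_join(t[0], NULL);                    /* threads run until killed */
    return 0;
}

(The mutex around the batch and the flag is only there to keep the sketch self-consistent; the structure is the same as the pseudocode above.)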

The 'hang' that I see is that the collection loop stops updating the shared data while the buffer thread keeps grabbing it at the same rate, so I see the same data repeated for roughly 0.5 seconds' worth of samples, i.e. several lines. So the whole system is not frozen, since the buffer thread is still running. I thought for a while that the collection thread had a bug, but the experiments and fixes I described all point to the data writing as the bottleneck, if I'm not mistaken. Testing by writing to ramfs and having it run flawlessly confirms that suspicion in my mind. What do you think?
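
One way I could pin that down further (just an idea, I haven't actually tried it) would be to time each write call from the write thread and log any that blocks for more than a few milliseconds, roughly along these lines (the 5 ms threshold is arbitrary):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Wrap the file write and report how long it blocked.  The 5 ms threshold
 * and the use of CLOCK_MONOTONIC are just illustrative choices. */
static ssize_t timed_write(int fd, const void *buf, size_t len)
{
    struct timespec t0, t1;
    ssize_t n;
    double ms;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    n = write(fd, buf, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    if (ms > 5.0)
        fprintf(stderr, "write of %zu bytes blocked for %.1f ms\n", len, ms);

    return n;
}

If the stalls in the collection thread line up with the long writes, that would nail it down.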



On Thu, Jan 3, 2013 at 10:17 AM, jumpnowdev <scott@jumpnowtek.com> wrote:
Some code or pseudo code from you might help.

Are you dropping data because the serial reader thread is blocked
by the writer thread?

Or is your whole system somehow freezing while disk writes are
going on, causing you to miss data?

You could test this second case by running your current program
writing to a RAMFS and at the same time running another program
doing disk writes to try and freeze the system.


Hopefully it's the first case.

If so, it's not clear to me what your double-buffering step entails
or how adding an extra copy helps things.


Our typical multi-threaded data routines look like this:

Collection Thread (queue writer)

loop:
  alloc buffer (or more typically grab from a pre-allocated pool)
  read data from device
  add buffer to queue


Disk Thread (queue reader)

loop:
  if queue empty
    sleep
  else
    remove buffer from queue
    write data to disk
    free buffer (or return to pool)


With only one queue writer and reader, moving only the head
or tail respectively, you don't even need locking.
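
A bare-bones C sketch of that single-producer/single-consumer queue might look like the following. This isn't our production code; the names, the queue depth, and the use of C11 atomics on the two indices are just placeholders for however you choose to guarantee the ordering on your platform.

/* Rough sketch of the single-producer / single-consumer queue idea.
 * Only the collection thread advances the tail and only the disk thread
 * advances the head; the C11 atomics just pin down the memory ordering. */
#include <stdatomic.h>
#include <stddef.h>

#define QDEPTH 64                        /* power of two, placeholder size */

struct buf {
    char data[512];                      /* placeholder payload */
    size_t len;
};

static struct buf pool[QDEPTH];          /* pre-allocated buffer pool the
                                            collection thread hands out from */
static struct buf *queue[QDEPTH];
static atomic_uint head, tail;

/* collection thread: returns 0 when the queue is full (out of buffers) */
static int q_put(struct buf *b)
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&head, memory_order_acquire);

    if (t - h == QDEPTH)
        return 0;                        /* truly exceeding system throughput */
    queue[t & (QDEPTH - 1)] = b;
    atomic_store_explicit(&tail, t + 1, memory_order_release);
    return 1;
}

/* disk thread: returns NULL when the queue is empty (go back to sleep) */
static struct buf *q_get(void)
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&tail, memory_order_acquire);
    struct buf *b;

    if (h == t)
        return NULL;
    b = queue[h & (QDEPTH - 1)];
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return b;
}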

The only way the collection (serial reader) thread gets blocked with this algorithm is
if we run out of memory for buffers. But if that's happening then
we are truly exceeding the throughput limits of the system.

Your throughput requirements are not that big.







--
Dan Kuehme
AREA-I
where ideas take flight

1590 N. Roberts Rd., Ste 203
Kennesaw, GA 30144
Phone: 678.594.5227
Fax:     678.594.5228
Cell:     678.653.6662

www.areai.aero