From: Josef W. <jw...@ra...> - 2014-03-10 17:16:47
Hello folks,

I am considering rewriting an application, which I wrote about 10 years ago in C, into CL. This (single-threaded) application takes a DVB-S transponder as input, takes it apart, parses the TS/SI/whatever tables, demuxes the services and serves them to clients as HTTP streams. This used to work great: on a 400MHz Pentium box, four such applications were running, receiving 4 transponders and serving HTTP streams to up to 200 clients. The box was running 24/7 and even managed to survive two soccer world championships :-). The only reason for downtime were kernel upgrades.

The problem is that the structure of the TS/SI/whatever tables is rather complicated, so I'd like to take advantage of CL's macros to parse them. My first experiments in this direction are quite encouraging, so I'd like to go further down this path.

But I am not entirely sure about the GC behavior and how the GC can be tuned. After all, the satellite won't stop sending data at 5 MBytes per second just because my application decides to take a nap with the GC. And the clients won't like it either: every set-top-box will get upset when the A/V stream is delayed too much.

I don't expect the CL implementation to compete with the performance I mentioned above. But I definitely need a way to ensure that the GC won't interrupt my main processing for more than -- say -- 20 milliseconds. This is the time it takes to fill 50% of the hardware ring buffer of the DVB card. 20ms every second (1% overhead) would be great. But 2 seconds once a day (0.0023% overhead) would be fatal. You get the idea: I am willing to trade overhead for responsiveness.

I know I can force GC by calling SB-EXT:GC. And I see there are several options to tune HOW OFTEN the GC is invoked. But I can't find a way to limit the DURATION of GC invocations, so that the rest of the garbage is left for the next GC invocation.
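To make the macro idea concrete, here is roughly the kind of thing I have been experimenting with. DEFINE-TABLE and the field layout below are made up for illustration -- the real SI tables also have sub-byte bit fields and loops, which I leave out here:

    ;; Sketch only: DEFINE-TABLE takes (field-name byte-width) pairs and
    ;; generates a PARSE-<NAME> function that reads fixed-width
    ;; big-endian fields from a byte vector and returns them as a plist.
    (defmacro define-table (name &rest fields)
      (let ((buf (gensym "BUF")) (start (gensym "START")) (pos (gensym "POS")))
        `(defun ,(intern (format nil "PARSE-~A" name)) (,buf ,start)
           (let ((,pos ,start))
             (list
              ,@(loop for (fname width) in fields
                      collect (intern (symbol-name fname) :keyword)
                      collect `(let ((v 0))
                                 (dotimes (i ,width v)
                                   (setf v (+ (* v 256) (aref ,buf ,pos)))
                                   (incf ,pos)))))))))

    ;; Illustrative use (not the real PAT layout, where section_length
    ;; shares its bytes with flag bits):
    (define-table pat-header
      (table-id 1)
      (section-length 2))

    (parse-pat-header #(0 1 44) 0)
    ;; => (:TABLE-ID 0 :SECTION-LENGTH 300)

This sort of declarative table description is exactly what I'd hate to give up by staying in C.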
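As for forcing GC myself: the closest thing I have found is the :GEN argument to SB-EXT:GC, which (if I read the docs right) restricts the collection to the youngest generations, so a nursery-only collection should usually be cheap. Something like the following sketch is what I have in mind -- the 64MB trigger value is just a guess on my part, and note that this still doesn't put a hard limit on the pause:

    ;; Sketch, assuming SBCL's generational collector (gencgc).
    ;; Raise the automatic trigger so SBCL rarely starts a GC on its
    ;; own, then collect the nursery at points where a pause is cheap.
    (setf (sb-ext:bytes-consed-between-gcs) (* 64 1024 1024))

    (defun collect-nursery ()
      "Collect only generation 0 and return the pause in seconds,
    so I can watch whether it stays within the 20ms budget."
      (let ((start (get-internal-real-time)))
        (sb-ext:gc :gen 0)
        (/ (- (get-internal-real-time) start)
           internal-time-units-per-second)))

I could call COLLECT-NURSERY once per ring-buffer cycle. But whether a generation-0 collection actually stays under 20ms presumably depends on the nursery size and my consing rate, which is why I am asking here.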
Or maybe there's a way to move the GC into a separate thread, which would then not interfere with my main working thread?

I know I can reduce the frequency of GC runs by avoiding consing, managing memory pools, and allocating large memory chunks statically. But my problem is _not_ the _frequency_ of GC invocations. My problem is the _duration_ of the invocations. I don't think I can completely avoid consing, so GC will need to run from time to time.

Any thoughts?

-- Josef Wolf jw...@ra...