#31 Killed while retrieving headers

Status: open
Owner: None
Priority: 5
Updated: 2004-10-27
Created: 2004-10-02
Creator: Joseph
Private: No

Similar to a previous post
(https://sourceforge.net/tracker/index.php?
func=detail&aid=810446&group_id=3121&atid=103121),
nget is killed while retrieving headers.

After running: nget -g <group> -i -r <regexp>
it would begin retrieving headers and then be killed.
(e.g. Retrieving headers 18687179-19232729 :
386119/545551/545551 70% 31944B/s 27m27sKilled)
Depending on how many other processes are running it
would progress to different points, but it is always killed.
The above example was run with as many other processes
as possible killed.
Sadly I do not have the option of increasing the
available physical memory, as this particular machine is
somewhat old and has a 64M cap. Brief system specs:
P200MMX, 64MB RAM (60476k swap space), Slackware
Linux 9.1 (kernel 2.4.22).

I have updated to the current version of nget (v0.27)
and am running it with a minimal set of other processes.
Smaller groups are not a problem; however, my ISP
recently outsourced its usenet service, and the current
service has 30+ day retention for some groups, so there
are quite a few headers to download.

Any suggestions? Is there a way to download headers
in chunks to avoid this that I missed in the manpage?
The -x option is not usable either, as the server does not
support it. Any ideas are welcome.

Discussion

    • assigned_to: nobody --> donut
  • user_id=65253

    Unfortunately this is an inherent problem with large groups
    and nget's current cache implementation, and I haven't had
    time recently to do anything about it.

     
Maarten (user_id=1162616)
    2004-11-19

    Any progress on this?

     
user_id=65253

    CVS now has a maxheaders option to limit the number of
    headers retrieved, so you can work around the problem. A
    fix for the underlying issue is still needed, though.
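For anyone hitting the same out-of-memory kill, the workaround might look something like the sketch below. Only the option name (maxheaders) comes from the comment above; the config-file location and the exact key=value syntax are assumptions on my part, so check the manpage shipped with your CVS build before relying on this:

```
# Sketch of a per-user nget config entry (file path and syntax assumed,
# not confirmed). Caps how many headers are fetched in one pass so the
# cache stays within the 64MB machine's memory.
maxheaders=200000
```

If the syntax differs in your build, the idea is the same: limit the header fetch to a lump small enough to fit in memory, then repeat.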