#116 speedcheck fails => undef contentlength

libcurl (356)

% curl -V
curl 7.9 (i686-pc-linux-gnu) libcurl 7.9 (ipv6 enabled)

In transfer.c, function Transfer():
urg = Curl_speedcheck(data, now)

If the speedcheck fails (i.e. the operation is too
slow), it returns CURLE_OPERATION_TIMEOUTED, and this
goes up to Transfer(), then to Curl_Perform(), and finally
to curl_easy_perform().

The problem (I think) is that conn->bytecountp is not
set in this case.

That is a real problem: if you give a CURLOPT_FILE to
libcurl, then curl_easy_perform() has already written to
that file, and there is no clean way to know how many
bytes it wrote.

By the way, I think that *maybe* it should return
CURLE_PARTIAL_FILE if the connection gets too slow.

I'm using curl as part of my PhD thesis in computer
science; I'm in the early stages, experimenting with
scheduling policies for a web crawler. I found this
misbehavior because in web crawlers it's very important
to reschedule web pages if they get too slow, since
web servers have a rather high variance in service time.

I first wanted to use libwww, but unfortunately it's not
MT-safe. Then I found curl, and I think it's great. Thank you.


  • Logged In: YES

    I'm not sure I understand what problem you experience here.

    Did you use curl_easy_getinfo() with CURLINFO_SIZE_DOWNLOAD
    to get the downloaded size after an aborted transfer and it
    returned a bad value?

  • Logged In: NO

    > Did you use curl_easy_getinfo() with
    > CURLINFO_SIZE_DOWNLOAD to get the downloaded
    > size after an aborted transfer and it returned
    > a bad value?

    Yes. That's what happened.

    (The transfer was aborted because I used curl_easy_setopt()

  • Logged In: YES

    Hm, the "downloaded size" returned for
    CURLINFO_SIZE_DOWNLOAD is set in the progress struct by
    the function Curl_pgrsSetDownloadCounter().

    That function is called on every lap of the select() loop
    (whenever anything has actually been downloaded), and it is
    called before the speedcheck calls. So I'd say the counter
    _should_ be updated even when the transfer gets aborted for
    being too slow.

    Can you repeat or debug this somehow? Could it possibly be
    that when you noticed this problem, nothing had been
    downloaded yet, so the download counter was rightfully zero?

  • Logged In: YES

    There's a bug in your test program. Let me quote from the
    curl_easy_getinfo() man page:

        Pass a pointer to a double to receive the total amount
        of bytes that were downloaded.

    It must be a pointer to a double. You passed a pointer to a
    'long', and that probably explains why you got nothing but
    • priority: 5 --> 4
  • Logged In: NO

    You are right! Sorry for wasting your time.

  • Logged In: YES

    No harm done!

    • status: open --> closed-invalid