
#3964 chunked headers leak in data when -handler used

Version: obsolete: 8.5.2
Status: open
Priority: 7
Updated: 2008-06-30
Created: 2008-03-28
Private: No

Support for chunked transfer encoding in http 2.7 introduces a serious incompatibility with previous versions of the http package.

When a -handler argument is supplied, chunk processing is not done, so the returned data has "<size>\n" chunk headers leaked in, together with a terminating "0\n" segment. This means that any existing code calling geturl with a -handler argument (e.g. hv3, but probably also most code that cares about reporting progress during a non-blocking http transfer) will get a corrupted answer. At line 1029 of the new http.tcl, one finds:

    if {[info exists state(-handler)]} {
        set n [eval $state(-handler) [list $sock $token]]
    } elseif {[info exists state(transfer_final)]} {
        ....
    } elseif {[info exists state(transfer)]
              && $state(transfer) eq "chunked"} {
        ....
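
For reference, the framing that leaks through is straightforward to strip after the fact. Below is a minimal de-chunker; it is a hypothetical helper (the name DechunkBody and the in-memory approach are an illustration, not part of http.tcl), assuming the whole framed body has already been received into a string:

```tcl
# Strip HTTP/1.1 chunked framing from a complete response body held
# in memory.  Each chunk is "<hex-size>[;extensions]\r\n<data>\r\n";
# the stream ends with a zero-size chunk.
proc DechunkBody {raw} {
    set out ""
    set pos 0
    while {1} {
        set eol [string first "\r\n" $raw $pos]
        if {$eol < 0} { break }
        set sizeLine [string range $raw $pos [expr {$eol - 1}]]
        # ignore any chunk extensions after ';'
        set sizeHex [lindex [split $sizeLine ";"] 0]
        if {[scan $sizeHex %x size] != 1} { break }
        if {$size == 0} { break }                ;# final "0" chunk
        set start [expr {$eol + 2}]
        append out [string range $raw $start [expr {$start + $size - 1}]]
        set pos [expr {$start + $size + 2}]      ;# skip data + trailing CRLF
    }
    return $out
}
```

The real fix, of course, is for http.tcl to do this incrementally before invoking the -handler callback, not for callers to repair the data afterwards.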

So, as long as state(-handler) exists, chunk processing is not performed. Also, while looking at this new http implementation, I noticed that in the chunk-handling code the socket is temporarily switched back to blocking mode while the chunk-length header is read:

    fconfigure $sock -blocking 1
    set chunk [read $sock $size]
    fconfigure $sock -blocking $bl

which seems quite hazardous to me, since it can freeze a whole application inside an event callback that one would assume to be non-blocking.
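
A non-blocking alternative is possible because gets on a non-blocking channel simply returns -1 (with fblocked reporting 1) until a complete line has arrived. The sketch below is an illustration with assumed names (ParseChunkSize, ChunkSizeReadable); it is not the http.tcl code:

```tcl
# Parse the "<hex-size>[;extensions]" chunk header line.
proc ParseChunkSize {line} {
    set sizeHex [lindex [split [string trim $line] ";"] 0]
    scan $sizeHex %x size
    return $size
}

# Readable-event handler sketch: never switches the socket to
# blocking mode, so the event loop is never stalled.
proc ChunkSizeReadable {sock} {
    if {[gets $sock line] < 0} {
        if {[fblocked $sock]} {
            return    ;# no complete header line yet; wait for next event
        }
        return        ;# EOF: real code would report an error here
    }
    set size [ParseChunkSize $line]
    # ... from here, read up to $size payload bytes per readable event ...
}
```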

Discussion

  • Eric Hassold

    Eric Hassold - 2008-03-28
    • priority: 5 --> 7
     
  • Alexandre Ferrieux

    Just fixed typo in title.
    Will look at this, but no timing guarantees ;-)

     
  • Alexandre Ferrieux

    • summary: chunked headers link in data when -handler used --> chunked headers leak in data when -handler used
     
  • Arjen Markus

    Arjen Markus - 2009-11-13

    I had a problem with http::geturl that seems related to chunked transfers:

    I ran into this issue with http and the -channel option, version 2.7.4,
    running on Windows XP. I wanted to retrieve zip files and did so with code like:

        set outfile [open myzipfile.zip w]
        fconfigure $outfile -translation binary
        ::http::geturl $URL -channel $outfile
        close $outfile

    This led to corrupted zip files that could not be read by the zip::vfs package.
    The corruption was limited to:
    - a line with a number (609) at the start, terminated by carriage-return line-feed
    - the actual contents of the zip file I wanted
    - a line with a number (0) followed by two newlines

    Using code like:

        set token [::http::geturl $URL]
        set outfile [open myzipfile.zip w]
        fconfigure $outfile -translation binary
        puts -nonewline $outfile [::http::data $token]
        close $outfile

    gave me the zip files I wanted.

    According to Pat Thoyts this has to do with chunked data transfers.
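
For files already written this way, one can at least detect the problem. The heuristic below is an editor's sketch (the name LooksChunked is assumed, not from the http package): a framed body starts with a hex size line and ends with the zero-length terminator, whereas a genuine zip file starts with the "PK" signature and so fails the first check.

```tcl
# Heuristic: does this downloaded body still carry chunked framing?
proc LooksChunked {data} {
    # A framed body begins with "<hex-size>[;ext]\r\n" ...
    if {![regexp {^[0-9a-fA-F]+(;[^\r\n]*)?\r\n} $data]} {
        return 0
    }
    # ... and ends with the zero-length final chunk.
    return [regexp {\n0\r?\n\r?\n?$} $data]
}
```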