From: SourceForge.net <no...@so...> - 2009-11-13 09:57:02
Bugs item #1928131, was opened at 2008-03-28 18:07
Message generated for change (Comment added) made by arjenmarkus
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=110894&aid=1928131&group_id=10894

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request,
not just the latest update.

Category: 29. http Package
Group: obsolete: 8.5.2
Status: Open
Resolution: None
Priority: 7
Private: No
Submitted By: Eric Hassold (hassold)
Assigned to: Jeffrey Hobbs (hobbs)
Summary: chunked headers leak in data when -handler used

Initial Comment:
Support for chunked transfer in http 2.7 introduces a serious incompatibility with previous versions of the http package. When a -handler argument is given, chunk processing is not done, so the returned data has "<size>\n" chunk headers leaked into it, along with a terminating "0\n" segment. This means any existing code that calls geturl with a -handler argument (e.g. hv3, but probably also most code that reports progress during a non-blocking http transfer) will receive a corrupted answer.

In the new http.tcl, at line 1029, one can find:

    if {[info exists state(-handler)]} {
        set n [eval $state(-handler) [list $sock $token]]
    } elseif {[info exists state(transfer_final)]} {
        ....
    } elseif {[info exists state(transfer)]
              && $state(transfer) eq "chunked"} {
        ....

so, as long as state(-handler) exists, chunk processing is not performed.

Also, while having a look at this new http implementation, I noticed that in the chunk-handling code the socket is temporarily switched back to blocking mode when reading a chunk:

    fconfigure $sock -blocking 1
    set chunk [read $sock $size]
    fconfigure $sock -blocking $bl

which seems quite hazardous to me, since this might freeze a whole application inside an event callback that one would assume to be non-blocking.
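To make the failure mode concrete, here is a minimal sketch of the dispatch logic quoted above. The proc name DispatchEvent and the flat state array are hypothetical (the real package keeps its state in the token's namespace); the point is only that once state(-handler) exists, the chunked branch can never be reached:

    # Hypothetical reduction of the dispatch quoted from http.tcl.
    # Returns which delivery path a read event would take.
    proc DispatchEvent {stateVar} {
        upvar 1 $stateVar state
        if {[info exists state(-handler)]} {
            return handler          ;# raw bytes, chunk framing included
        } elseif {[info exists state(transfer)]
                  && $state(transfer) eq "chunked"} {
            return dechunk          ;# framing stripped before delivery
        }
        return copy
    }

    # With a -handler registered, the chunked branch is shadowed even
    # though the server sent Transfer-Encoding: chunked:
    array set state {transfer chunked -handler myCallback}
    puts [DispatchEvent state]      ;# -> handler, so "<size>\r\n" leaks

    array unset state -handler
    puts [DispatchEvent state]      ;# -> dechunk

In other words, the -handler test would have to come after (or cooperate with) the de-chunking branches for the pre-2.7 behaviour to be preserved.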
----------------------------------------------------------------------

>Comment By: Arjen Markus (arjenmarkus)
Date: 2009-11-13 10:57

Message:
I had a problem with http::geturl that seems related to chunked transfers: I ran into this issue with http 2.7.4 and the -channel option, running on Windows XP. I wanted to retrieve zip files and did so with code like:

    set outfile [open myzipfile.zip w]
    fconfigure $outfile -translation binary
    ::http::geturl $URL -channel $outfile
    close $outfile

This led to corrupted zip files that could not be read by the zip::vfs package. The corruption was limited:
- a line with a number (609) appeared at the start (terminated with carriage-return line-feed);
- then the actual contents of the zip file I wanted;
- then a line with a number (0) and two newlines.

Using code like:

    set token [::http::geturl $URL]
    set outfile [open myzipfile.zip w]
    fconfigure $outfile -translation binary
    puts -nonewline $outfile [::http::data $token]
    close $outfile

gave me the zip files I wanted. According to Pat Thoyts this has to do with chunked data transfers.

----------------------------------------------------------------------

Comment By: Alexandre Ferrieux (ferrieux)
Date: 2008-06-30 23:47

Message:
Logged In: YES
user_id=496139
Originator: NO

Just fixed a typo in the title. Will look at this, but no timing guarantees ;-)

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=110894&aid=1928131&group_id=10894
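[Editor's note] The corruption pattern Arjen describes is exactly the raw chunked-transfer framing: the "609" line is a hexadecimal chunk size (0x609 = 1545 bytes), the payload follows, and a "0" line terminates the stream. As an illustration, here is a hedged sketch (the proc stripChunkFraming is hypothetical and not part of the http package) that undoes that framing for a simple chunked body with no extensions or trailers:

    # Hypothetical repair sketch for the pattern described above:
    # hex size line, chunk body, trailing CRLF, repeated until a
    # terminating "0" chunk.
    proc stripChunkFraming {data} {
        set result ""
        while {[string length $data]} {
            # Chunk header: hex size terminated by CRLF.
            if {![regexp {^([0-9a-fA-F]+)\r\n} $data -> hexSize]} {
                error "malformed chunk header"
            }
            scan $hexSize %x size
            set data [string range $data \
                          [expr {[string length $hexSize] + 2}] end]
            if {$size == 0} break    ;# terminating "0" chunk
            append result [string range $data 0 [expr {$size - 1}]]
            # Skip the chunk body and its trailing CRLF.
            set data [string range $data [expr {$size + 2}] end]
        }
        return $result
    }

    # Reconstructing Arjen's case: 0x609 = 1545 payload bytes.
    set payload [string repeat A 1545]
    set wire "609\r\n$payload\r\n0\r\n\r\n"
    puts [string equal [stripChunkFraming $wire] $payload]   ;# -> 1

Of course the real fix belongs in the package's -channel/-handler paths, not in post-processing the downloaded file; this only shows why the ::http::data workaround (which goes through the de-chunking code path) produces a clean file.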