#247 S3: Fix wasteful string conversion of large binary object

Status: open
Owner: Darren New
Labels: amazon-s3
Priority: 5
Updated: 2012-02-25
Created: 2012-02-25
Creator: Sam O'Connor
Private: No

In our app, S3::Get is used to download large binary objects (many hundreds of MB) from AWS S3.
The current code stores the downloaded object in a variable "x" and then does: S3::debug "Body: $x"
This forces "x" to be converted from a byte array to a string, and the substitution creates a second huge string "Body: .....".
The patch below instead passes the large object directly from the binary network channel to the result dict entry.

A small test program that downloads a 5 MB file uses 24 MB of memory before the patch versus 9 MB with it.
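
For reference, the same effect can be reproduced without S3 at all. The sketch below is not the reporter's actual test program (big.bin is a placeholder file name); it just shows the byte-array-to-string "shimmering" that the debug call triggers:

    # Read a large file over a binary channel, as the S3 socket does.
    set f [open big.bin rb]
    set x [read $f]        ;# $x holds a pure byte-array Tcl_Obj
    close $f

    # Substituting $x into a string forces a byte-array -> string
    # conversion and allocates a second copy of the data:
    set msg "Body: $x"

    # Storing the value directly keeps the byte-array representation:
    dict set result outbody $x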

--- /System/Library/Tcl/tcllib1.12/amazon-s3/S3.tcl.orig 2012-02-25 15:32:45.000000000 +1100
+++ /System/Library/Tcl/tcllib1.12/amazon-s3/S3.tcl 2012-02-25 15:33:25.000000000 +1100
@@ -580,9 +580,7 @@
     if {[dict exists $thunk orig outchan]} {
         fcopy $s3 [dict get $thunk orig outchan]
     } else {
-        set x [read $s3]
-        dict set thunk outbody $x
-        S3::debug "Body: $x"
+        dict set thunk outbody [read $s3]
     }
     return [S3::nextdo all_done $thunk readable]
 } else {
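
For completeness, a hypothetical caller under the patched code (option names taken from the tcllib S3 man page; the bucket and key names are invented) would receive the body without the extra copy:

    package require S3

    S3::Configure -accesskeyid $env(AWS_ACCESS_KEY_ID) \
        -secretaccesskey $env(AWS_SECRET_ACCESS_KEY)

    # Download into a variable; with the patch the value in $data stays
    # a byte array instead of being shimmered to a string by S3::debug.
    S3::Get -bucket mybucket -resource /big.bin -content data

    # Write the byte array straight back out over a binary channel.
    set out [open big.bin wb]
    puts -nonewline $out $data
    close $out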

Discussion