From: Michal L. <mi...@lo...> - 2007-03-22 12:54:18
Stef Bon wrote:
> But I ran into troubles when I wanted to try it. The description you'll
> find at:
>
> https://sourceforge.net/tracker/index.php?func=detail&aid=1681567&group_id=129981&atid=716425

As I suspected, this happens to files larger than 1 MB. I have debugged the
behaviour and the problem is:

- the content of a file is held in a LONGBLOB field in the DB
- as the data comes in 4 kB chunks, each chunk gets appended to the longblob
- this is done with an "UPDATE ... SET data=CONCAT(data, <new chunk>)"-like
  query
- when the current length of "data" plus the length of the new chunk goes
  over 1 MB, MySQL fails with "Result of concat() was larger than
  max_allowed_packet (1048576) - truncated" and 'data' is set to NULL

So that is the problem in brief. I'm not sure how to solve it, though. The
best approach seems to be splitting the data field into logical "blocks" of,
say, 4 kB. Then, instead of holding the file contents in a single field of a
single row, the database would spread them across a number of related rows
(see the sketch below). It will complicate the write()-call logic a little
bit, but it should both improve write speed and fix this problem. IIRC this
has been proposed on the list, or maybe on the SF tracker, some months ago,
but I haven't followed up on it. Shame on me.

I hope to have a new release ready sometime next week. And if not, well ...
then it will be later ;-)

Michal
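
P.S. For the archives, here's a minimal sketch of how the block-split scheme
could look on the SQL side. The table and column names ("data_blocks",
"inode", "seq") are just illustrative, not what the current code uses:

    -- Hypothetical schema: one row per 4 kB block instead of one
    -- LONGBLOB per file.
    CREATE TABLE data_blocks (
        inode INT UNSIGNED NOT NULL,      -- which file the block belongs to
        seq   INT UNSIGNED NOT NULL,      -- block number within the file
        data  VARBINARY(4096) NOT NULL,   -- at most one 4 kB block per row
        PRIMARY KEY (inode, seq)
    );

    -- Appending a chunk becomes an INSERT of one small row instead of a
    -- CONCAT() over the ever-growing blob, so no statement ever carries
    -- more than ~4 kB of payload:
    INSERT INTO data_blocks (inode, seq, data) VALUES (?, ?, ?);

    -- Rewriting an existing block in the middle of a file:
    INSERT INTO data_blocks (inode, seq, data) VALUES (?, ?, ?)
        ON DUPLICATE KEY UPDATE data = VALUES(data);

    -- Reading a file back in order:
    SELECT data FROM data_blocks WHERE inode = ? ORDER BY seq;

The write() handler would have to split incoming buffers on 4 kB boundaries
and read-modify-write partial blocks at the edges, but each statement stays
far below max_allowed_packet regardless of file size.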