From: Lester H. <hig...@10...> - 2007-03-22 13:07:54
Michal,

On Mon, 25 Sep 2006 at 13:29:17 EDT, I submitted a patch for mysqlfs-0.2
to mys...@li..., under the subject "Patch to mysqlfs-0.2 to break file
data into records of 4k chunks," which does exactly what you describe
below.  You can likely save yourself some time by referring to that
patch.

A positive side-effect that I noted in that email is, "My testing of
this code shows better than a 10-fold increase in write speed, and that
improvement is more pronounced on larger files.  It also shows a 3-fold
increase in read speed."

If you have trouble finding that email, drop me a separate email and
I'll send it to you again.

Sincerely,

--
Lester Hightower


On Fri, 23 Mar 2007, Michal Ludvig wrote:

> Stef Bon wrote:
>
> > But I ran into trouble when I wanted to try it.  The description
> > you'll find at:
> >
> > https://sourceforge.net/tracker/index.php?func=detail&aid=1681567&group_id=129981&atid=716425
>
> As I suspected, this happens with files larger than 1MB.  I have
> debugged the behaviour and the problem is:
> - the content of a file is held in a "LONGBLOB" field in the DB
> - as the data come in 4kB chunks, each chunk is appended to the
>   LONGBLOB
> - this is done with an "UPDATE ... SET data=CONCAT(data, <new chunk>)"
>   style query
> - when the current length of "data" plus the length of "new chunk"
>   goes over 1MB, MySQL fails with:
>   "Result of concat() was larger than max_allowed_packet (1048576) -
>   truncated" and 'data' is set to NULL.
>
> So that is the problem in brief.  I'm not sure how to solve it,
> though.
>
> The best approach seems to be splitting the data field into logical
> "blocks" of, say, 4kB.  Then, instead of having the file contents in a
> single row in a single field in the database, it will be in a number
> of related rows.  It will complicate the write()-call logic a little,
> but it should both improve write speed and fix this problem.  IIRC
> this has been proposed on the list, or maybe on the SF tracker, some
> months ago, but I haven't followed up on it.  Shame on me.
>
> I hope to have a new release ready sometime next week.  And if not,
> well ... then it will be later ;-)
>
> Michal
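
In schematic terms, the failure mode and the block-split layout
described above look roughly like this.  The table and column names
are illustrative only and are not taken from mysqlfs or from either
patch:

    -- Failing pattern: the file's entire current contents pass
    -- through CONCAT(), so once LENGTH(data) + LENGTH(chunk) would
    -- exceed max_allowed_packet (1048576 bytes in Michal's setup),
    -- the result is truncated and 'data' ends up NULL.
    UPDATE inodes
       SET data = CONCAT(data, '<4kB chunk>')
     WHERE inode = 42;

    -- Block-split layout: one row per 4kB block of a file.
    CREATE TABLE data_blocks (
        inode     INT UNSIGNED NOT NULL,  -- file the block belongs to
        seq       INT UNSIGNED NOT NULL,  -- block number within file
        datablock BLOB NOT NULL,          -- at most 4096 bytes of data
        PRIMARY KEY (inode, seq)
    );

    -- write(): each statement carries at most one block, so
    -- max_allowed_packet is never a factor, whatever the file size.
    REPLACE INTO data_blocks (inode, seq, datablock)
        VALUES (42, 3, '<4kB chunk>');

    -- read(): reassemble the file from its blocks in order.
    SELECT datablock
      FROM data_blocks
     WHERE inode = 42
     ORDER BY seq;

Keying on (inode, seq) also means rewriting one block touches only one
row rather than shipping the whole file back through the server, which
is consistent with the write-speed improvement reported above.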