From: Michal L. <mi...@lo...> - 2007-03-22 13:26:30
Hi Lester,

that's right, I have already found it in the archives on SourceForge [*],
but the patch isn't quite readable there. Could you resend it, please?

BTW, have you dealt with mid-file updates in your patch? For instance,
write an 8kB file and then rewrite 2kB of data at offset 3kB, i.e. touch
two 4kB blocks with a single update. And how about truncate()? From what
I could read in the archive, the patch is somewhat incomplete in these
"corner" cases.

Michal

[*] https://sourceforge.net/mailarchive/forum.php?thread_id=30626251&forum_id=50318

Lester Hightower wrote:
> Michal,
>
> On Mon, 25 Sep 2006 at 13:29:17 EDT, I submitted a patch for mysqlfs-0.2
> to mys...@li..., which does exactly what you describe below, under the
> subject "Patch to mysqlfs-0.2 to break file data into records of 4k
> chunks." You can likely save yourself some time by referring to that
> patch. A positive side-effect I noted in that email is: "My testing of
> this code shows better than a 10-fold increase in write speed, and that
> improvement is more pronounced on larger files. It also shows a 3-fold
> increase in read speed."
>
> If you have trouble finding that email, drop me a separate email and
> I'll send it to you again.
>
> Sincerely,
>
> --
> Lester Hightower
>
>
> On Fri, 23 Mar 2007, Michal Ludvig wrote:
>
>> Stef Bon wrote:
>>
>>> But I ran into trouble when I wanted to try it. You will find the
>>> description at:
>>>
>>> https://sourceforge.net/tracker/index.php?func=detail&aid=1681567&group_id=129981&atid=716425
>>
>> As I suspected, this happens with files larger than 1MB. I have
>> debugged the behaviour and the problem is:
>> - the content of a file is held in a "LONGBLOB" field in the DB
>> - as the data come in 4kB chunks, each chunk is appended to the
>>   longblob
>> - this is done with an "UPDATE ... SET data=CONCAT(data, <new chunk>)"
>>   style of query
>> - now, when the current length of "data" plus the length of the new
>>   chunk goes over 1MB, MySQL fails with "Result of concat() was larger
>>   than max_allowed_packet (1048576) - truncated" and 'data' is set to
>>   NULL.
>>
>> That is the problem in brief. I'm not sure how to solve it, though.
>>
>> The best approach seems to be splitting the data field into logical
>> "blocks" of, say, 4kB. Then, instead of keeping the file contents in a
>> single field of a single row, the database will hold them in a number
>> of related rows. It will complicate the write()-call logic a little,
>> but it should both improve write speed and fix this problem. IIRC this
>> was proposed on the list, or maybe on the SF tracker, some months ago,
>> but I haven't followed up on it. Shame on me.
>>
>> I hope to have a new release ready sometime next week. And if not,
>> well ... then it will be later ;-)
>>
>> Michal
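
[Editor's note] The mid-file case asked about above reduces to simple block
arithmetic. The sketch below is only illustrative; BLOCK_SIZE, read_block()
and write_block() are hypothetical names, not the ones used by the patch or
by mysqlfs. A write at an arbitrary offset is mapped onto the 4kB blocks it
touches, and any partially covered block is handled with a read-modify-write.

/* Minimal sketch of per-block write logic; block size and the helper
 * names read_block()/write_block() are illustrative, not mysqlfs API. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define BLOCK_SIZE 4096

/* Hypothetical per-block I/O against the database. */
extern int read_block (long inode, size_t blk, char *buf);       /* fills BLOCK_SIZE bytes  */
extern int write_block(long inode, size_t blk, const char *buf); /* stores BLOCK_SIZE bytes */

static int do_write(long inode, const char *data, size_t size, off_t offset)
{
    if (size == 0)
        return 0;

    size_t first = offset / BLOCK_SIZE;
    size_t last  = (offset + size - 1) / BLOCK_SIZE;
    size_t done  = 0;

    for (size_t blk = first; blk <= last; blk++) {
        char buf[BLOCK_SIZE];
        size_t blk_off = (blk == first) ? (size_t)(offset % BLOCK_SIZE) : 0;
        size_t n = BLOCK_SIZE - blk_off;
        if (n > size - done)
            n = size - done;

        /* Partial block: fetch the existing contents first (read-modify-write). */
        if (blk_off != 0 || n != BLOCK_SIZE) {
            if (read_block(inode, blk, buf) != 0)
                memset(buf, 0, BLOCK_SIZE);   /* block past EOF: start from zeros */
        }
        memcpy(buf + blk_off, data + done, n);
        if (write_block(inode, blk, buf) != 0)
            return -1;
        done += n;
    }
    return (int)size;
}

With that logic, rewriting 2kB at offset 3kB touches block 0 (bytes
3072-4095) and block 1 (bytes 0-1023), each updated as one small row.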
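
[Editor's note] For the per-block layout proposed at the end of the quoted
mail, one possible shape keys each 4kB chunk by (inode, block number). The
data_blocks table, its columns and store_block() below are assumptions for
illustration only, not the actual mysqlfs schema or code. Each write then
replaces one small row instead of CONCAT-ing onto an ever-growing LONGBLOB,
so the statement stays far below max_allowed_packet however large the file is.

/* Sketch: storing one 4kB block as its own row instead of CONCAT-ing onto
 * a single LONGBLOB.  Table and column names are illustrative only.
 *
 *   CREATE TABLE data_blocks (
 *       inode BIGINT NOT NULL,
 *       blkno BIGINT NOT NULL,
 *       data  BLOB,
 *       PRIMARY KEY (inode, blkno)
 *   );
 */
#include <mysql/mysql.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int store_block(MYSQL *db, long inode, long blkno,
                       const char *buf, size_t len)
{
    /* Escaped binary data can grow to at most 2*len+1 bytes. */
    char *esc = malloc(2 * len + 1);
    if (!esc)
        return -1;
    unsigned long esclen = mysql_real_escape_string(db, esc, buf, len);

    size_t qsize = esclen + 256;
    char *query = malloc(qsize);
    if (!query) {
        free(esc);
        return -1;
    }

    /* One small row per block: the same statement appends a new block or
     * rewrites an existing one, and never approaches max_allowed_packet. */
    int qlen = snprintf(query, qsize,
        "INSERT INTO data_blocks (inode, blkno, data) VALUES (%ld, %ld, '%s') "
        "ON DUPLICATE KEY UPDATE data=VALUES(data)",
        inode, blkno, esc);

    int rc = mysql_real_query(db, query, (unsigned long)qlen);
    free(query);
    free(esc);
    return rc ? -1 : 0;
}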
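
[Editor's note] truncate() also becomes straightforward with such a layout:
drop every row past the new length and trim the last partial block. Again
only a sketch, reusing the illustrative data_blocks table from above.

/* Sketch of per-block truncate(); table name and query shape are
 * illustrative, not the mysqlfs implementation. */
#include <mysql/mysql.h>
#include <stdio.h>
#include <sys/types.h>

#define BLOCK_SIZE 4096

static int do_truncate(MYSQL *db, long inode, off_t length)
{
    char query[256];
    long keep = (long)((length + BLOCK_SIZE - 1) / BLOCK_SIZE); /* blocks to keep */

    /* Remove every block at or past the new end of file. */
    int qlen = snprintf(query, sizeof(query),
        "DELETE FROM data_blocks WHERE inode=%ld AND blkno>=%ld",
        inode, keep);
    if (mysql_real_query(db, query, (unsigned long)qlen))
        return -1;

    /* If the new length falls inside a block, trim that block's data. */
    if (length % BLOCK_SIZE != 0) {
        qlen = snprintf(query, sizeof(query),
            "UPDATE data_blocks SET data=LEFT(data, %ld) "
            "WHERE inode=%ld AND blkno=%ld",
            (long)(length % BLOCK_SIZE), inode, keep - 1);
        if (mysql_real_query(db, query, (unsigned long)qlen))
            return -1;
    }
    return 0;
}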