From: William P. <wil...@ya...> - 2011-11-24 18:45:44
Just a follow-up to this problem, with some more details: On November 12 2011 17:53:02 GMT, a user successfully uploaded a "Fig4" data file with 10 taxa x 352,120 characters. On November 22 2011 22:03:41 GMT, he successfully uploaded his "Fig5" data file with 10 taxa x 876,159 characters (even bigger!). But starting November 23, the very same "Fig4" data file wouldn't upload. Even after subdividing this file into smaller files (e.g. 10 x 60,000 each), the uploads still cause the system to choke. So what happened around November 22-23 that has hobbled our ability to ingest large files? Did we, for example, shorten the SQL timeout in response to heavy API hits? (A quick way to check is sketched at the end of this message.)

bp

On Nov 24, 2011, at 12:00 AM, William Piel wrote:

> So after failing to upload several large files to production, I tried uploading a large file to dev (10 taxa x 66,472 characters). The upload page hit a proxy time-out, but the request kept going on and on. Here it is some 6 hours later, still consuming a lot of memory.
>
> It's odd because I don't think this problem used to happen. TreeBASE has files that are considerably larger than this one -- e.g. we had a submission on September 7. And I think a much larger file was uploaded about two weeks ago.
>
> So something weird has happened, with the result that TreeBASE is now underperforming.
>
> In two hours from now this will all be reset because dev will be refreshed from production. But clearly there is a problem, (a) because tasks that TreeBASE used to be able to do, it presently cannot do, and (b) because what it cannot do seems to tie up a lot of memory and CPU, crippling it.
>
> Mattison: Can you think of any changes in terms of memory allocation (or recent upgrades) that may be affecting performance? E.g. were SQL timeouts shortened to deal with the hits we were getting on the API last week?
>
> It's all very vexing.
>
> bp
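
For the SQL-timeout question above, here is a minimal diagnostic sketch. It assumes (the thread doesn't confirm this) that the TreeBASE database is PostgreSQL and is reachable over JDBC; the connection URL and credentials are placeholders, and TimeoutCheck is a hypothetical one-off helper, not part of the TreeBASE codebase.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TimeoutCheck {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details -- substitute the real
            // production settings before running.
            String url = "jdbc:postgresql://localhost:5432/treebase";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement st = conn.createStatement();
                 // statement_timeout is PostgreSQL's server-side limit on how
                 // long a single SQL statement may run; 0 means "no limit".
                 // If this was recently lowered, the long-running inserts for
                 // big character matrices would be cut off mid-upload.
                 ResultSet rs = st.executeQuery("SHOW statement_timeout")) {
                if (rs.next()) {
                    System.out.println("statement_timeout = " + rs.getString(1));
                }
            }
        }
    }

If statement_timeout comes back unchanged, the next places to look would be any Statement.setQueryTimeout() calls in the application itself and the front-end proxy's timeout (e.g. Apache's ProxyTimeout directive), since the reported symptom -- the proxy times out while the request keeps running server-side -- points at a limit in the proxy layer rather than in the database.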