From: Arnold P. <ap...@ma...> - 2005-06-28 15:02:42
|
Hi,

I have two questions about compression, one local and one global.

The local one: In UserList.pm you can now sort by clicking on a label, which uses the GET method (I experimented with a POST method but it didn't look good). The problem is that you need to encode the user_id's of the visible users in the URL, and MSIE has a limit of 2083 characters. What I'm doing is joining the id's with ":" and passing one long string. This way I can handle about 230 visible users. If the string is too long, clicking on a label will return all users. Hopefully most people who are looking at a subset of students will be looking at fewer than 230. What I would like to do is to compress the string first (e.g. with the module Compress::Zlib), which might allow 500-700 visible users. That would mean people would have to install Compress::Zlib. Does anyone have an objection to this? Would this be useful in other places? Is there a better compression module?

The global question: Would it be a good idea to consider and/or experiment with using something like Apache::Dynagzip to compress all WeBWorK output? See http://perl.apache.org/docs/tutorials/client/compression/compression.html for a discussion of this.

Arnie

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767 |
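A minimal sketch of the compression idea above, assuming Compress::Zlib plus the core MIME::Base64 module are available; the helper names and the URL-safe character substitution are illustrative additions, and only the ':'-joined id string comes from the message itself.

    use strict;
    use warnings;
    use Compress::Zlib;                      # provides compress() / uncompress()
    use MIME::Base64 qw(encode_base64 decode_base64);

    # Pack a list of user_id's into one short, URL-safe token.
    sub encode_user_ids {
        my @ids    = @_;
        my $joined = join(':', @ids);                       # same string as now
        my $token  = encode_base64(compress($joined), '');  # '' = no line breaks
        $token =~ tr{+/=}{-_.};      # avoid characters that need URL escaping
        return $token;
    }

    # Recover the original list from the token.
    sub decode_user_ids {
        my ($token) = @_;
        $token =~ tr{-_.}{+/=};
        return split /:/, uncompress(decode_base64($token));
    }

    # Rough check of how much a typical list shrinks.
    my @ids   = map { "student$_" } 1 .. 300;
    my $token = encode_user_ids(@ids);
    printf "raw: %d chars, compressed and encoded: %d chars\n",
        length(join(':', @ids)), length($token);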
From: Davide P.C. <dp...@un...> - 2005-06-28 02:29:20
|
> I think this is just the right solution. The desire to upload or
> download many set definition files or a whole directory of privately
> written problems is one that probably occurs at the beginning and end
> of each semester.

That seemed to be my impression. I'm not sure if there are any security issues to consider. If the tar file had absolute file names (rather than relative path names) it might be possible to write to directories outside the course tree. I'll have to look into the options for tar to see if files could be forced to unpack into a given directory (I assume this is possible). At the least, the table of contents could be looked at to see if it contains absolute paths, and an error reported.

> So the idea would be: The user is responsible for gzip and tar before
> upload and after download. The FileManager takes care of it
> automatically on the server?

Well, I hadn't really thought about the details, but the archive COULD be unpacked automatically. I suppose there could be another checkbox in the upload section that controlled whether archives were unpacked and deleted automatically. Of course, we'll run into Arnie's issue of what to do with files that already exist. :-) I assume that they would be overwritten silently, just as tar does now.

Davide |
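A rough sketch of the two safeguards Davide mentions: scan the archive's table of contents for absolute or parent-relative paths, and force extraction into a fixed directory with tar's -C option. It shells out to a system tar, and the directory variable in the usage comment is a stand-in, not an actual WeBWorK setting.

    use strict;
    use warnings;

    # Refuse archives whose table of contents could escape the target directory.
    sub archive_is_safe {
        my ($archive) = @_;
        open my $list, '-|', 'tar', 'tzf', $archive
            or die "can't list $archive: $!";
        while (my $entry = <$list>) {
            chomp $entry;
            return 0 if $entry =~ m{^/};              # absolute path
            return 0 if $entry =~ m{(^|/)\.\.(/|$)};  # '..' component
        }
        close $list;
        return 1;
    }

    # Unpack into a fixed directory; -C keeps relative paths inside it.
    sub unpack_archive {
        my ($archive, $target_dir) = @_;
        die "unsafe archive\n" unless archive_is_safe($archive);
        system('tar', 'xzf', $archive, '-C', $target_dir) == 0
            or die "tar failed: $?";
    }

    # Hypothetical usage from an upload handler:
    # unpack_archive("$templates_dir/upload.tgz", $templates_dir);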
From: Michael E. G. <ga...@ma...> - 2005-06-28 02:13:55
|
On Jun 27, 2005, at 10:01 PM, Davide P.Cervone wrote: > Folks: > > In response to the recent request to Mike to move large numbers of > files to a WW account, it would be possible to add some sort of "make > archive" and "unpack archive" button to the File Manager that would > allow a user to make a gzipped tar file from a directory, say, (or a > collection of sellected files) or unpack such a file. Do you think > this is an important feature to add? How often will this be > necessary? I suppose that a professor might want to get a collection > of set.def files and their associated headers from a into a single > archive to be able to move them to a new course in a new term. I > suggest gzipped tar format since that will certainly be available, > while the tools for making things like ZIP files might not. Do you > think that would be sufficient? > I think this is just the right solution. The desire to upload or download many set definition files or a whole directory of privately written problems is one that probably occurs at the beginning and end of each semester. Since we have unix accounts we don't notice the need so acutely. What you propose is just what I was thinking of and in fact it's similar to what is currently done in a slightly hacked fashion for Course Administration. If you Export a course it creates a gzipped file in the templates directory for download. When you import a course it has to be from a gzipped file. (The Course Admin change is pretty recent and we can still modify that so that it is compatible with rules used by the file manager.) So the idea would be: The user is responsible for gzip and tar before upload and after download. The FileManager takes care of it automatically on the server? Is there ever a case where you would want to upload a gzipped file and not have it processed? (aside from the CourseAdmin import/exports -- we can handle them as special cases.) Take care, Mike > Davide > > > > ------------------------------------------------------- > SF.Net email is sponsored by: Discover Easy Linux Migration Strategies > from IBM. Find simple to follow Roadmaps, straightforward articles, > informative Webcasts and more! Get everything you need to get up to > speed, fast. http://ads.osdn.com/?ad_id=7477&alloc_id=16492&op=click > _______________________________________________ > OpenWeBWorK-Devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/openwebwork-devel > |
From: Davide P.C. <dp...@un...> - 2005-06-28 02:01:20
|
Folks:

In response to the recent request to Mike to move large numbers of files to a WW account, it would be possible to add some sort of "make archive" and "unpack archive" button to the File Manager that would allow a user to make a gzipped tar file from a directory, say (or a collection of selected files), or to unpack such a file. Do you think this is an important feature to add? How often will this be necessary? I suppose that a professor might want to get a collection of set.def files and their associated headers into a single archive to be able to move them to a new course in a new term. I suggest gzipped tar format since that will certainly be available, while the tools for making things like ZIP files might not. Do you think that would be sufficient?

Davide |
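For concreteness, a sketch of the "make archive" half of this proposal, again shelling out to a system tar; the subroutine and variable names are illustrative, not part of the File Manager.

    use strict;
    use warnings;

    # Bundle selected files or directories (paths relative to $base_dir)
    # into a gzipped tar file inside $base_dir.
    sub make_archive {
        my ($base_dir, $archive_name, @items) = @_;
        my $archive = "$base_dir/$archive_name";
        # -C stores relative paths, so the archive unpacks cleanly into a
        # different course's templates directory next term.
        system('tar', 'czf', $archive, '-C', $base_dir, @items) == 0
            or die "tar failed: $?";
        return $archive;
    }

    # Hypothetical usage: collect a term's set definitions and headers.
    # make_archive($templates_dir, 'fall_sets.tgz', 'setDefs', 'setHeaders');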
From: Michael E. G. <ga...@ma...> - 2005-06-10 20:27:42
|
Hi Gavin,

It's not stupid -- I had to relearn how to do it. Check out a fresh copy of webwork2 using

cvs -d:ext:gl...@cv...:/webwork/cvs/system co -A webwork2

(I don't think you need the -A, but just in case.) Then in the webwork2 directory do this:

cvs update -kk -j rel-2-1a

This will merge all of the 2.1a files with your copy of HEAD. The -j means join and keeps the HEAD files from just being overwritten. The -kk means that certain key variables at the top of the file are interpreted correctly so that you don't get unnecessary collisions. (Without this EVERY file will appear to be changed, because the version information has changed.)

Now you will have a number of files that need to be reconciled (they usually show up with a C in front of them if you use cvs -n update). In these files you'll see sections like:

<<<<<< UserList.pm
old code
==========
new code
>>>>>>>>>> new version number

You need to check the changes you've made against changes that have been made by others and reconcile them. It can take some time, I discovered. :-) Once you are all done, and if possible have been able to test the code locally, you can use cvs commit to add all of these changed files back into the HEAD of webwork2.

Let me know if I can help.

Take care,

Mike

On Jun 10, 2005, at 4:14 PM, P Gavin LaRose wrote:

> Hi Mike,
>
>> John and Gavin -- go ahead and make your changes to the HEAD branch and
>> we'll start refining from there...
>
> Stupid question: how can I merge the changes from the rel-2.1a branch I
> was working in, into HEAD?
>
> Thanks,
> Gavin
>
> On Fri, 10 Jun 2005 at 16:07 Michael E. Gage wrote:
>
>> Hi all,
>>
>> I have tagged all of the files currently in the CVS at rel-2-1-3. All
>> of the changes have also been placed in the rel-2-1-patches branch as
>> well. At this instant the HEAD branch and the rel-2-1-patches branch are
>> the same, as best I can tell.
>>
>> John and Gavin -- go ahead and make your changes to the HEAD branch and
>> we'll start testing and refining from there. Sorry for the delay in
>> getting all the files in sync in the CVS -- it took longer than I thought.
>>
>> Those using WeBWorK for active classes should definitely not update from
>> the HEAD branch for a week or so while we get the bugs sorted out.
>>
>> Take care,
>>
>> Mike
>
> --
> P. Gavin LaRose, Ph.D.    Program Manager (Instructional Tech.)
> Math Dept., University of Michigan
> gl...@um...
> 734.764.6454
> http://www.math.lsa.umich.edu/~glarose/
> "There's no use in trying," [Alice] said. "One can't believe impossible
> things." "I daresay you haven't had much practice," said the Queen.
> - Lewis Carroll |
From: Michael E. G. <ga...@ma...> - 2005-06-10 20:11:17
|
Hi all,

I have tagged all of the files currently in the CVS at rel-2-1-3. All of the changes have also been placed in the rel-2-1-patches branch as well. At this instant the HEAD branch and the rel-2-1-patches branch are the same, as best I can tell.

John and Gavin -- go ahead and make your changes to the HEAD branch and we'll start testing and refining from there. Sorry for the delay in getting all the files in sync in the CVS -- it took longer than I thought.

Those using WeBWorK for active classes should definitely not update from the HEAD branch for a week or so while we get the bugs sorted out.

Take care,

Mike |
From: Michael E. G. <ga...@ma...> - 2005-06-05 21:01:20
|
Hi John, Let's put it in Tuesday. If someone with a production course starts using the new mysql setup it might be hard to back out if there are problems. Thanks for following up on this -- improving the database speed is a pretty important part of what I want to have available for my own courses this fall. I think we'll have enough usage over the summer to get the bugs out by then. Talk to you soon. Take care, Mike On Jun 5, 2005, at 4:31 PM, John Jones wrote: > Michael E. Gage wrote: > >> Hi everyone, >> >> I'd like to fold Gavin's gateway quiz into the main CVS build within >> a couple of weeks. I'd first >> like to have some some smaller user interface and cosmetic details >> fixed -- Arnie has done >> several fixes to the scoring module for example. Once that is done >> we'll label that >> 2.1.2 and add Gavin's modules in. >> >> I've been adding items that allow interconnection of moodle to >> webwork using SOAP, but >> that's pretty independent of the rest of the code, so I've been >> adding it directly to HEAD. It >> won't affect anything unless you actually use the SOAP interface. >> >> If everyone else is ok with this, let's plan to stamp the current >> HEAD as 2.1.2 next Monday (June 6). >> Meantime get any small changes or stability issues into HEAD and hold >> off any >> potentially unstable changes. Once 2.1.2 is released we'll start >> folding in Gateway testing >> and anything else that is ready. >> >> Sound ok? > > Hi Mike, > > If you recall, there was a problem with webwork being slow for big > courses. I had put in the innocuous part of the fix before, but that > part doesn't actually do anything (which is why it is innocuous). > Classes got busy before I could complete the fix. > > I have the rest of the changes now, but they are only lightly tested. > It affects the creation of mysql tables, so in principle, mistakes > could cause all sorts of headaches. Should I commit and let others > test it, or wait until Tuesday? > > John > > > > ------------------------------------------------------- > This SF.Net email is sponsored by: NEC IT Guy Games. How far can you > shotput > a projector? How fast can you ride your desk chair down the office > luge track? > If you want to score the big prize, get to know the little guy. Play > to win an NEC 61" plasma display: http://www.necitguy.com/?r=20 > _______________________________________________ > OpenWeBWorK-Devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/openwebwork-devel > |
From: John J. <jj...@as...> - 2005-06-05 20:32:12
|
Michael E. Gage wrote: > Hi everyone, > > I'd like to fold Gavin's gateway quiz into the main CVS build within a > couple of weeks. I'd first > like to have some some smaller user interface and cosmetic details > fixed -- Arnie has done > several fixes to the scoring module for example. Once that is done > we'll label that > 2.1.2 and add Gavin's modules in. > > I've been adding items that allow interconnection of moodle to webwork > using SOAP, but > that's pretty independent of the rest of the code, so I've been adding > it directly to HEAD. It > won't affect anything unless you actually use the SOAP interface. > > If everyone else is ok with this, let's plan to stamp the current HEAD > as 2.1.2 next Monday (June 6). > Meantime get any small changes or stability issues into HEAD and hold > off any > potentially unstable changes. Once 2.1.2 is released we'll start > folding in Gateway testing > and anything else that is ready. > > Sound ok? Hi Mike, If you recall, there was a problem with webwork being slow for big courses. I had put in the innocuous part of the fix before, but that part doesn't actually do anything (which is why it is innocuous). Classes got busy before I could complete the fix. I have the rest of the changes now, but they are only lightly tested. It affects the creation of mysql tables, so in principle, mistakes could cause all sorts of headaches. Should I commit and let others test it, or wait until Tuesday? John |
From: Samuel H. <sh...@ma...> - 2005-06-01 05:22:09
|
On Tue, 31 May 2005, Michael E. Gage wrote: > I'd like to fold Gavin's gateway quiz into the main CVS build within a > couple of weeks. I'd first like to have some some smaller user > interface and cosmetic details fixed -- Arnie has done several fixes to > the scoring module for example. Once that is done we'll label that > 2.1.2 and add Gavin's modules in. > > I've been adding items that allow interconnection of moodle to webwork > using SOAP, but that's pretty independent of the rest of the code, so > I've been adding it directly to HEAD. It won't affect anything unless > you actually use the SOAP interface. > > If everyone else is ok with this, let's plan to stamp the current HEAD > as 2.1.2 next Monday (June 6). Meantime get any small changes or > stability issues into HEAD and hold off any potentially unstable > changes. Once 2.1.2 is released we'll start folding in Gateway testing > and anything else that is ready. > > Sound ok? Sounds great! -sam |
From: P G. L. <gl...@um...> - 2005-05-31 16:06:41
|
Hi Mike, Sounds great to me. My target for ironing out the bugs I've got left is the 10th. That might slip to the following week depending on how some other things fall out this week. Thanks, Gavin On Tue, 31 May 2005 at 11:58 Michael E. Gage wrote: > Hi everyone, > > I'd like to fold Gavin's gateway quiz into the main CVS build within a > couple of weeks. I'd first like to have some some smaller user > interface and cosmetic details fixed -- Arnie has done several fixes to > the scoring module for example. Once that is done we'll label that > 2.1.2 and add Gavin's modules in. > > I've been adding items that allow interconnection of moodle to webwork > using SOAP, but that's pretty independent of the rest of the code, so > I've been adding it directly to HEAD. It won't affect anything unless > you actually use the SOAP interface. > > If everyone else is ok with this, let's plan to stamp the current HEAD > as 2.1.2 next Monday (June 6). Meantime get any small changes or > stability issues into HEAD and hold off any potentially unstable > changes. Once 2.1.2 is released we'll start folding in Gateway testing > and anything else that is ready. > > Sound ok? > > Take care, > > Mike -- P. Gavin LaRose, Ph.D. Program Manager (Instructional Tech.) Math Dept., University of Michigan gl...@um... "There's no use in trying," [Alice] 734.764.6454 said. "One Can't believe impossible http://www.math.lsa.umich.edu/~glarose/ things." "I daresay you haven't had much practice," said the Queen. - Lewis Carrol |
From: Michael E. G. <ga...@ma...> - 2005-05-31 16:01:08
|
Hi everyone, I'd like to fold Gavin's gateway quiz into the main CVS build within a couple of weeks. I'd first like to have some some smaller user interface and cosmetic details fixed -- Arnie has done several fixes to the scoring module for example. Once that is done we'll label that 2.1.2 and add Gavin's modules in. I've been adding items that allow interconnection of moodle to webwork using SOAP, but that's pretty independent of the rest of the code, so I've been adding it directly to HEAD. It won't affect anything unless you actually use the SOAP interface. If everyone else is ok with this, let's plan to stamp the current HEAD as 2.1.2 next Monday (June 6). Meantime get any small changes or stability issues into HEAD and hold off any potentially unstable changes. Once 2.1.2 is released we'll start folding in Gateway testing and anything else that is ready. Sound ok? Take care, Mike On May 31, 2005, at 11:45 AM, P Gavin LaRose wrote: > Hi guys, > > I got a call from Jeff Holt the other day asking about the gateway/quiz > module that I've been working on for WeBWorK. I'm hoping to have the > bugs > that I found when testing it this spring fixed in the next couple of > weeks. This inspires me to ask if it's something that we should be > thinking about eventually folding back into the main WeBWorK code. > This > is mostly a self-serving thought---if it's there I don't have to > maintain > two installs of the software. And if we're thinking of that, what > steps > are needed and what time-line is realistic. > > This may boil down to the question "how bug-free is the testing > module," > which I don't have a good answer for. I know there are a couple of > things > that I need to fix, obviously, and once those are dealt with don't > know of > anything else. > > Thanks, > Gavin > > -- > P. Gavin LaRose, Ph.D. Program Manager (Instructional Tech.) > Math Dept., University of Michigan > gl...@um... "There's no use in trying," > [Alice] > 734.764.6454 said. "One Can't believe > impossible > http://www.math.lsa.umich.edu/~glarose/ things." "I daresay you > haven't had > much practice," said the > Queen. > - Lewis > Carrol > |
From: Davide P.C. <dp...@un...> - 2005-05-04 18:56:59
|
> To expand on it I think something similar will need to be done for the > "alias" subroutine -- give it a list of places to find images, applets > and so forth. The alias subroutine is a bit of a mess -- it hasn't > been looked at in quite a long time. Yes, it is in need of updating. When I wrote my problems that use LiveGraphics3D for interactive 3D surfaces, I had to make the data files end in .html so that alias would handle them. There really needs to be a way to extend alias to handle additional types in a systematic way. >> I would also like to suggest that loadMacros be able to access >> subdirectories of the macro directory (not by searching them >> automatically, but by making explicit references to them). For >> example, there are several macro files that defined various Parser >> contents, and it would be nice to have a "context" subdirectory of >> the macros directory and then use >> >> loadMacros("context/LimitedVector.pl"); >> > We should add something like this. Would using double colons be > better -- following the perl Module convention? > loadMacros("context::LimitedVector.pl"); I think this is likely to cause confusion. Most of the problem writers will not be working with perl at the package level, so will not know the :: syntax. Also, I suspect that these macro files will not be creating package of their own, and that might cause confusion. Loading something called "context::LimitedVector.pl" would suggest that there will be a "context::LimitedVector" package available after the load, and that may not be the case. These really are file references, and I think problem authors will think of them that way, so "context/LimitedVector.pl" is probably the right format for this. Davide |
From: Michael G. <ga...@ma...> - 2005-05-04 13:42:13
|
On May 3, 2005, at 9:23 PM, Davide P.Cervone wrote: > Folks: > > On the discussion board at > > http://webhost.math.rochester.edu/webworkdocs/discuss/msgReader$3244? > mode=topic&y=2005&m=4&d=22 > > a user asks how to add directories that can be searched for macro > files by loadMacros(). The current implementation doesn't provide for > this, but it would not be hard to do. One possibility would be to add > a variable to global.conf that is an array of directories to be > searched (in the order given). In dangerousMacros around line 260, > you would repalce the explicit check of $macroDirectory and > $courseScriptsDirectory by a loop through the macro directory array. > (The macroDirectory and courseScriptsDirectory variables can then be > dropped.) > This sounds like an excellent idea. To expand on it I think something similar will need to be done for the "alias" subroutine -- give it a list of places to find images, applets and so forth. The alias subroutine is a bit of a mess -- it hasn't been looked at in quite a long time. > I would also like to suggest that loadMacros be able to access > subdirectories of the macro directory (not by searching them > automatically, but by making explicit references to them). For > example, there are several macro files that defined various Parser > contents, and it would be nice to have a "context" subdirectory of the > macros directory and then use > > loadMacros("context/LimitedVector.pl"); > We should add something like this. Would using double colons be better -- following the perl Module convention? loadMacros("context::LimitedVector.pl"); > The curernt loadMacros() will actually do this, but the _init() > routine stuff won't work for this. This can be fixed with a single > line in dangerousMacros. At around line 205, right after > > my $init_subroutine_name = "_${macro_file_name}_init"; > > you should add > > $init_subroutine_name =~ s/[^a-z0-9_]/_/gi; > > to convert any '/' to a '_' in the name. (This also converts any > other special characters to underscores, so that in case the file name > contains characters that are not legal in a perl subroutine name, they > will not cause problems with the _init() routine. This would be a > good thing to do even if you don't like the sub-directory idea.) > > What do you think? > Both ideas seem good to me. What do you think of the double colon idea? Do you think the slash is more straightforward? Take care, Mike > Davide > > > > ------------------------------------------------------- > This SF.Net email is sponsored by: NEC IT Guy Games. > Get your fingers limbered up and give it your best shot. 4 great > events, 4 > opportunities to win big! Highest score wins.NEC IT Guy Games. Play to > win an NEC 61 plasma display. Visit http://www.necitguy.com/?r=20 > _______________________________________________ > OpenWeBWorK-Devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/openwebwork-devel > |
From: Davide P.C. <dp...@un...> - 2005-05-04 01:23:59
|
Folks:

On the discussion board at

http://webhost.math.rochester.edu/webworkdocs/discuss/msgReader$3244?mode=topic&y=2005&m=4&d=22

a user asks how to add directories that can be searched for macro files by loadMacros(). The current implementation doesn't provide for this, but it would not be hard to do. One possibility would be to add a variable to global.conf that is an array of directories to be searched (in the order given). In dangerousMacros around line 260, you would replace the explicit check of $macroDirectory and $courseScriptsDirectory by a loop through the macro directory array. (The macroDirectory and courseScriptsDirectory variables can then be dropped.)

I would also like to suggest that loadMacros be able to access subdirectories of the macro directory (not by searching them automatically, but by making explicit references to them). For example, there are several macro files that define various Parser contexts, and it would be nice to have a "context" subdirectory of the macros directory and then use

loadMacros("context/LimitedVector.pl");

The current loadMacros() will actually do this, but the _init() routine stuff won't work for this. This can be fixed with a single line in dangerousMacros. At around line 205, right after

my $init_subroutine_name = "_${macro_file_name}_init";

you should add

$init_subroutine_name =~ s/[^a-z0-9_]/_/gi;

to convert any '/' to a '_' in the name. (This also converts any other special characters to underscores, so that in case the file name contains characters that are not legal in a perl subroutine name, they will not cause problems with the _init() routine. This would be a good thing to do even if you don't like the sub-directory idea.)

What do you think?

Davide |
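A sketch of the two dangerousMacros changes described above; the @macroDirectories array and the sample paths are hypothetical stand-ins for whatever global.conf variable would actually hold the search list.

    use strict;
    use warnings;

    # Ordered list of directories to search, replacing the explicit
    # $macroDirectory / $courseScriptsDirectory checks.
    my @macroDirectories = (
        '/opt/webwork/courses/myCourse/templates/macros',   # hypothetical paths
        '/opt/webwork/pg/macros',
    );

    sub find_macro_file {
        my ($macro_file_name) = @_;
        foreach my $dir (@macroDirectories) {
            my $path = "$dir/$macro_file_name";
            return $path if -r $path;        # first readable match wins
        }
        die "Can't find macro file $macro_file_name in @macroDirectories\n";
    }

    # Sanitizing the init-routine name, as in the snippet above: a file
    # reference like "context/LimitedVector.pl" becomes the legal subroutine
    # name "_context_LimitedVector_pl_init".
    my $macro_file_name      = 'context/LimitedVector.pl';
    my $init_subroutine_name = "_${macro_file_name}_init";
    $init_subroutine_name =~ s/[^a-z0-9_]/_/gi;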
From: Arnold P. <ap...@ma...> - 2005-03-29 21:30:18
|
Hi, I don't have a solution to the Peekaboo bug but a very reliable way to see the bug in MSIE is to view the WeBWorK questionnaire. I just wanted to record this so that if I don't remember it, a search might bring up this email. Arnie Prof. Arnold K. Pizer Dept. of Mathematics University of Rochester Rochester, NY 14627 (585) 275-7767 |
From: Arnold P. <ap...@ma...> - 2005-03-21 12:45:46
|
At 11:30 AM 3/17/2005, Arnold Pizer wrote: The small test I did showing a dramatic speed up with John's fix of removing "binary" seems to be born out by last night results. Here's a small snippet. Most times seem to be 0 or 1 second which is what we saw last semester with gdbm. [Sun Mar 20 21:49:41 2005] 38879 1111373381 - [/webwork2/mth162/8/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:47 2005] 39785 1111373387 - [/webwork2/mth162/8/5/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:48 2005] 39166 1111373388 - [/webwork2/mth162/8/1/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:49 2005] 39501 1111373389 - [/webwork2/mth165/9/4/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:50 2005] 40245 1111373390 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:51 2005] 40559 1111373391 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:52 2005] 40245 1111373392 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:54 2005] 40961 1111373394 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:57 2005] 38171 1111373397 - [/webwork2/mth161/7/12/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:49:58 2005] 38879 1111373398 - [/webwork2/mth162/8/10/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:00 2005] 38171 1111373400 - [/webwork2/mth161/logout/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:01 2005] 41128 1111373401 - [/webwork2/mth162/8/10/] [runTime = 1.0 sec sql_single] [Sun Mar 20 21:50:01 2005] 39785 1111373401 - [/webwork2/mth162/8/8/] [runTime = 1.0 sec sql_single] [Sun Mar 20 21:50:05 2005] 38590 1111373405 - [/webwork2/mth162/8/5/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:07 2005] 39178 1111373407 - [/webwork2/mth162/8/3/] [runTime = 1.0 sec sql_single] [Sun Mar 20 21:50:08 2005] 39871 1111373408 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:09 2005] 39945 1111373409 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:10 2005] 39871 1111373410 - [/webwork2/mth162/8/10/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:13 2005] 40559 1111373413 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:13 2005] 39871 1111373413 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:14 2005] 41038 1111373414 - [/webwork2/mth162/8/7/] [runTime = 1.0 sec sql_single] [Sun Mar 20 21:50:14 2005] 38171 1111373414 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:15 2005] 41038 1111373415 - [/webwork2/mth162/8/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:15 2005] 41129 1111373415 - [/webwork2/mth162/8/10/] [runTime = 1.0 sec sql_single] [Sun Mar 20 21:50:16 2005] 40961 1111373416 - [/webwork2/mth162/8/5/] [runTime = 1.0 sec sql_single] [Sun Mar 20 21:50:16 2005] 39166 1111373416 - [/webwork2/mth162/8/5/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:18 2005] 40559 1111373418 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:18 2005] 39785 1111373418 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:20 2005] 41038 1111373420 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:20 2005] 39785 1111373420 - [/webwork2/mth165/9/1/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:21 2005] 40559 1111373421 - [/webwork2/mth162/8/7/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:21 2005] 39501 1111373421 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:22 2005] 39945 1111373422 
- [/webwork2/mth165/logout/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:22 2005] 39501 1111373422 - [/webwork2/mth162/8/10/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:22 2005] 40559 1111373422 - [/webwork2/mth162/8/8/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:24 2005] 38171 1111373424 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:24 2005] 40559 1111373424 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:25 2005] 39785 1111373425 - [/webwork2/mth165/9/1/] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:27 2005] 41128 1111373427 - [/webwork2/mth162] [runTime = 0.0 sec sql_single] [Sun Mar 20 21:50:27 2005] 38879 1111373427 - [/webwork2/mth162/8/2/] [runTime = 0.0 sec sql_single] Arnie >At 10:47 AM 3/17/2005, John Jones wrote: > >>What about timing with old SQL.pm with binary removed? If you might have >>a case conflict (database keys which differ only in case), then I >>wouldn't try it until after the semester is over. It would tell us if >>there are gains to be had by making further changes to SQL.pm. >> >>John > > >>What about timing with old SQL.pm with binary removed? If you might have >>a case conflict (database keys which differ only in case), then I >>wouldn't try it until after the semester is over. It would tell us if >>there are gains to be had by making further changes to SQL.pm. >> >>John > >With binary removed there seems to be a dramatic speed up. > >With your new version of SQL.pm > >[Thu Mar 17 11:16:24 2005] 38563 1111076184 - [/webwork2/mth162/7/6/] >[runTime = 2.0 sec sql_single] >[Thu Mar 17 11:16:28 2005] 38563 1111076188 - [/webwork2/mth162/7/6/] >[runTime = 3.0 sec sql_single] >[Thu Mar 17 11:16:31 2005] 38563 1111076191 - [/webwork2/mth162/7/6/] >[runTime = 2.0 sec sql_single] >[Thu Mar 17 11:16:35 2005] 38565 1111076195 - [/webwork2/mth162/7/6/] >[runTime = 3.0 sec sql_single] >[Thu Mar 17 11:16:38 2005] 38565 1111076198 - [/webwork2/mth162/7/6/] >[runTime = 3.0 sec sql_single] >[Thu Mar 17 11:16:41 2005] 38565 1111076201 - [/webwork2/mth162/7/6/] >[runTime = 2.0 sec sql_single] >[Thu Mar 17 11:17:05 2005] 38564 1111076225 - [/webwork2/mth162/8/8/] >[runTime = 3.0 sec sql_single] > >With the old version of SQL.pm but with binary removed > >[Thu Mar 17 11:17:07 2005] 38564 1111076227 - [/webwork2/mth162/8/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:09 2005] 38564 1111076229 - [/webwork2/mth162/8/11/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:42 2005] 48562 1111076262 - [/webwork2/mth162/7/6/] >[runTime = 1.0 sec sql_single] >[Thu Mar 17 11:17:44 2005] 48564 1111076264 - [/webwork2/mth162/7/6/] >[runTime = 1.0 sec sql_single] >[Thu Mar 17 11:17:44 2005] 48562 1111076264 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:45 2005] 48562 1111076265 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:46 2005] 48564 1111076266 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:47 2005] 48564 1111076267 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:48 2005] 48562 1111076268 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:49 2005] 48575 1111076269 - [/webwork2/mth162/8/11/] >[runTime = 1.0 sec sql_single] >[Thu Mar 17 11:17:49 2005] 48564 1111076269 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] >[Thu Mar 17 11:17:50 2005] 48564 1111076270 - [/webwork2/mth162/7/6/] >[runTime = 1.0 sec sql_single] >[Thu Mar 17 11:17:51 2005] 48562 
1111076271 - [/webwork2/mth162/7/6/] >[runTime = 0.0 sec sql_single] > >If using case insenstive keys does not mess up how things are read out >(e.g. you want set "Exam1" to be in that form, not e.g. "EXAM1" or >"exam1"), then it should be easy to enforce that there are no conflicts >when userid's, set names, etc are created. > >Since I can not imagine that we have any database keys which differ only >in case, I'll try this out now. > >Thanks. > >Arnie > > > >------------------------------------------------------- >SF email is sponsored by - The IT Product Guide >Read honest & candid reviews on hundreds of IT Products from real users. >Discover which products truly live up to the hype. Start reading now. >http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click >_______________________________________________ >OpenWeBWorK-Devel mailing list >Ope...@li... >https://lists.sourceforge.net/lists/listinfo/openwebwork-devel Prof. Arnold K. Pizer Dept. of Mathematics University of Rochester Rochester, NY 14627 (585) 275-7767 |
From: John J. <jj...@as...> - 2005-03-17 21:51:16
|
Arnold Pizer wrote: > At 12:00 PM 3/17/2005, John Jones wrote: > >> If we don't care about supporting users/sets which differ only in >> case, then there is one bug to be re-patched (when producing hardcopy >> for a student, webwork checks that the student's user_id matches case >> sensitively with the user_id of the hardcopy being requested - this >> should be changed to insensitively so that students who change >> capitalization of their user_id on login can still get hardcopy). > > > Another option might be to require the login to be case sensitive > (i.e. matches whatever the initial login was set up as). We should > probably do whatever is most standard. I thought of that, but that puts the change in Authen.pm. I didn't look and I wasn't sure if that would mean the case checking on the login name would happen every time a page is generated. Looking at it now, I suppose it could be just checked when verifying passwords since that only happens on login. John |
From: Arnold P. <ap...@ma...> - 2005-03-17 20:30:10
|
At 12:00 PM 3/17/2005, John Jones wrote: >If we don't care about supporting users/sets which differ only in case, >then there is one bug to be re-patched (when producing hardcopy for a >student, webwork checks that the student's user_id matches case >sensitively with the user_id of the hardcopy being requested - this should >be changed to insensitively so that students who change capitalization of >their user_id on login can still get hardcopy). Another option might be to require the login to be case sensitive (i.e. matches whatever the initial login was set up as). We should probably do whatever is most standard. Arnie Prof. Arnold K. Pizer Dept. of Mathematics University of Rochester Rochester, NY 14627 (585) 275-7767 |
From: John J. <jj...@as...> - 2005-03-17 17:09:30
|
Arnold Pizer wrote:

> If using case insensitive keys does not mess up how things are read out
> (e.g. you want set "Exam1" to be in that form, not e.g. "EXAM1" or
> "exam1"), then it should be easy to enforce that there are no
> conflicts when userid's, set names, etc. are created.
>
> Since I cannot imagine that we have any database keys which differ
> only in case, I'll try this out now.

The only things used as database keys are set_id, user_id, and problem_id. The last one is always a number, so no conflicts are possible. You can double check in a given course by looking at the list of sets and list of users in the corresponding instructor pages. The strings are displayed properly, so that isn't an issue.

Enforcing no conflicts on creation is what would happen by default without binary there. Once you have Exam1, you cannot create exam1 because the database says exam1 already matches an existing set. However, if you created Exam1 and exam1 a week ago (when "binary" was there), then webwork would let you do it. Operating now without binary would then muck up the database. So, simply deleting binary is a reasonable short-term solution. In fact, that is what we are using here at the moment.

Longer term, the question is whether to support users/sets which differ only in case. If we want to do that, some other key scheme should probably be used in the database. This is a moderately big hassle. If we don't care about supporting users/sets which differ only in case, then there is one bug to be re-patched: when producing hardcopy for a student, webwork checks that the student's user_id matches case-sensitively with the user_id of the hardcopy being requested. This should be changed to an insensitive match so that students who change the capitalization of their user_id on login can still get hardcopy.

John |
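The trade-off John describes can be seen in the kind of SQL that SQL.pm generates; this is an illustrative DBI fragment with made-up connection details and table name, not the module's actual code.

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection settings.
    my $dbh = DBI->connect('dbi:mysql:webwork', 'webworkWrite', 'secret',
                           { RaiseError => 1 });

    # With MySQL's default case-insensitive collation this comparison can use
    # the index on set_id, but 'Exam1' and 'exam1' count as the same key:
    my $rows = $dbh->selectall_arrayref(
        'SELECT * FROM mth162_set_user WHERE set_id = ?', {}, 'Exam1');

    # Adding BINARY makes the comparison case sensitive, but then the index is
    # no longer used -- the slowdown reported earlier in this thread:
    my $rows_cs = $dbh->selectall_arrayref(
        'SELECT * FROM mth162_set_user WHERE BINARY set_id = ?', {}, 'Exam1');

    # John's patch takes the first (indexed) route and then drops rows whose
    # keys differ only in case on the Perl side (assuming set_id is column 0):
    my @exact = grep { $_->[0] eq 'Exam1' } @$rows;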
From: Arnold P. <ap...@ma...> - 2005-03-17 16:29:53
|
At 10:47 AM 3/17/2005, John Jones wrote: >What about timing with old SQL.pm with binary removed? If you might have >a case conflict (database keys which differ only in case), then I wouldn't >try it until after the semester is over. It would tell us if there are >gains to be had by making further changes to SQL.pm. > >John >What about timing with old SQL.pm with binary removed? If you might have >a case conflict (database keys which differ only in case), then I wouldn't >try it until after the semester is over. It would tell us if there are >gains to be had by making further changes to SQL.pm. > >John With binary removed there seems to be a dramatic speed up. With your new version of SQL.pm [Thu Mar 17 11:16:24 2005] 38563 1111076184 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 11:16:28 2005] 38563 1111076188 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] [Thu Mar 17 11:16:31 2005] 38563 1111076191 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 11:16:35 2005] 38565 1111076195 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] [Thu Mar 17 11:16:38 2005] 38565 1111076198 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] [Thu Mar 17 11:16:41 2005] 38565 1111076201 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 11:17:05 2005] 38564 1111076225 - [/webwork2/mth162/8/8/] [runTime = 3.0 sec sql_single] With the old version of SQL.pm but with binary removed [Thu Mar 17 11:17:07 2005] 38564 1111076227 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:09 2005] 38564 1111076229 - [/webwork2/mth162/8/11/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:42 2005] 48562 1111076262 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single] [Thu Mar 17 11:17:44 2005] 48564 1111076264 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single] [Thu Mar 17 11:17:44 2005] 48562 1111076264 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:45 2005] 48562 1111076265 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:46 2005] 48564 1111076266 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:47 2005] 48564 1111076267 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:48 2005] 48562 1111076268 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:49 2005] 48575 1111076269 - [/webwork2/mth162/8/11/] [runTime = 1.0 sec sql_single] [Thu Mar 17 11:17:49 2005] 48564 1111076269 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] [Thu Mar 17 11:17:50 2005] 48564 1111076270 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single] [Thu Mar 17 11:17:51 2005] 48562 1111076271 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single] If using case insenstive keys does not mess up how things are read out (e.g. you want set "Exam1" to be in that form, not e.g. "EXAM1" or "exam1"), then it should be easy to enforce that there are no conflicts when userid's, set names, etc are created. Since I can not imagine that we have any database keys which differ only in case, I'll try this out now. Thanks. Arnie |
From: John J. <jj...@as...> - 2005-03-17 15:47:07
|
Arnold Pizer wrote: > At 03:45 PM 3/16/2005, John Jones wrote: > > Hi John, > > I was being stupid. I forgot to restart the web server after > changing the SQL.pm file. Doing that gives the times below. There > seems to be a definite improvement using your new SQL.pm but not a > dramatic one. When I did this there was a very light load on the > server. Last semester using gdbm the vast majority of operations had > runTime = 0.0 sec (and most operations are done during heavy use > periods). Also Bill Wheeler reports that he switched to BerkeleyBD > from gdbm because with his loads, gdbm was too slow (because writing > locks applied to the whole database). I assume in his case, > sql_single would be unusable as is. This is one area that needs a lot > of optimization. > > Hopefully the current fix will be sufficient to provide a better > experience for my 162 students. We'll see Sunday night when our next > assignment is due. What about timing with old SQL.pm with binary removed? If you might have a case conflict (database keys which differ only in case), then I wouldn't try it until after the semester is over. It would tell us if there are gains to be had by making further changes to SQL.pm. John > Thu Mar 17 09:00:28 2005] 38570 1111068028 - [/webwork2/mth162/7/6/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:00:35 2005] 38566 1111068035 - [/webwork2/mth162/7/6/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:00:39 2005] 38566 1111068039 - [/webwork2/mth162/7/6/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:00:43 2005] 38566 1111068043 - [/webwork2/mth162/7/6/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:00:46 2005] 38570 1111068046 - [/webwork2/mth162/7/6/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:00:49 2005] 38566 1111068049 - [/webwork2/mth162/7/6/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:03:02 2005] 38564 1111068182 - > [/webwork2/mth162/instructor/progress/] [runTime = 1.0 sec sql_single] > [Thu Mar 17 09:03:18 2005] 38565 1111068198 - > [/webwork2/mth162/instructor/progress/set/7/] [runTime = 6.0 sec > sql_single] > [Thu Mar 17 09:05:11 2005] 38571 1111068311 - [/webwork2/mth162/] > [runTime = 0.0 sec sql_single] > [Thu Mar 17 09:05:13 2005] 38577 1111068313 - [/webwork2/mth162/7/] > [runTime = 0.0 sec sql_single] > [Thu Mar 17 09:05:14 2005] 38571 1111068314 - [/webwork2/mth162/7/1/] > [runTime = 0.0 sec sql_single] > [Thu Mar 17 09:05:20 2005] 38571 1111068320 - [/webwork2/mth162/7/1/] > [runTime = 3.0 sec sql_single] > [Thu Mar 17 09:05:24 2005] 38571 1111068324 - [/webwork2/mth162/7/1/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:05:27 2005] 38571 1111068327 - [/webwork2/mth162/7/1/] > [runTime = 2.0 sec sql_single] > [Thu Mar 17 09:05:31 2005] 38571 1111068331 - [/webwork2/mth162/7/1/] > [runTime = 3.0 sec sql_single] > [Thu Mar 17 09:05:34 2005] 38571 1111068334 - [/webwork2/mth162/7/1/] > [runTime = 2.0 sec sql_single] > >> Arnold Pizer wrote: >> >>> At 02:57 PM 3/16/2005, John Jones wrote: >>> >>> I don't know if I can tell anything from the above or not. I seems >>> submitting an answer does slow stuff down somewhat. How many BD >>> calls are involved in that process? >> >> >> I don't know offhand. My basic approach to figuring this out is >> either throw extra information in the timing log, or to write >> information to a special log file. Including what webwork function >> in SQL is being called, and the sql statement it uses should give an >> idea of how much and what kind of activity is involved in a single >> submission. 
>> >> John >> >> >> ------------------------------------------------------- >> SF email is sponsored by - The IT Product Guide >> Read honest & candid reviews on hundreds of IT Products from real users. >> Discover which products truly live up to the hype. Start reading now. >> http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click >> _______________________________________________ >> OpenWeBWorK-Devel mailing list >> Ope...@li... >> https://lists.sourceforge.net/lists/listinfo/openwebwork-devel > > > Prof. Arnold K. Pizer > Dept. of Mathematics > University of Rochester > Rochester, NY 14627 > (585) 275-7767 > > > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real users. > Discover which products truly live up to the hype. Start reading now. > http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > _______________________________________________ > OpenWeBWorK-Devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/openwebwork-devel |
From: Arnold P. <ap...@ma...> - 2005-03-17 14:23:33
|
At 03:45 PM 3/16/2005, John Jones wrote: Hi John, I was being stupid. I forgot to restart the web server after changing the SQL.pm file. Doing that gives the times below. There seems to be a definite improvement using your new SQL.pm but not a dramatic one. When I did this there was a very light load on the server. Last semester using gdbm the vast majority of operations had runTime = 0.0 sec (and most operations are done during heavy use periods). Also Bill Wheeler reports that he switched to BerkeleyBD from gdbm because with his loads, gdbm was too slow (because writing locks applied to the whole database). I assume in his case, sql_single would be unusable as is. This is one area that needs a lot of optimization. Hopefully the current fix will be sufficient to provide a better experience for my 162 students. We'll see Sunday night when our next assignment is due. Arnie Thu Mar 17 09:00:28 2005] 38570 1111068028 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:00:35 2005] 38566 1111068035 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:00:39 2005] 38566 1111068039 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:00:43 2005] 38566 1111068043 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:00:46 2005] 38570 1111068046 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:00:49 2005] 38566 1111068049 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:03:02 2005] 38564 1111068182 - [/webwork2/mth162/instructor/progress/] [runTime = 1.0 sec sql_single] [Thu Mar 17 09:03:18 2005] 38565 1111068198 - [/webwork2/mth162/instructor/progress/set/7/] [runTime = 6.0 sec sql_single] [Thu Mar 17 09:05:11 2005] 38571 1111068311 - [/webwork2/mth162/] [runTime = 0.0 sec sql_single] [Thu Mar 17 09:05:13 2005] 38577 1111068313 - [/webwork2/mth162/7/] [runTime = 0.0 sec sql_single] [Thu Mar 17 09:05:14 2005] 38571 1111068314 - [/webwork2/mth162/7/1/] [runTime = 0.0 sec sql_single] [Thu Mar 17 09:05:20 2005] 38571 1111068320 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single] [Thu Mar 17 09:05:24 2005] 38571 1111068324 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:05:27 2005] 38571 1111068327 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single] [Thu Mar 17 09:05:31 2005] 38571 1111068331 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single] [Thu Mar 17 09:05:34 2005] 38571 1111068334 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single] >Arnold Pizer wrote: > >>At 02:57 PM 3/16/2005, John Jones wrote: >> >>I don't know if I can tell anything from the above or not. I seems >>submitting an answer does slow stuff down somewhat. How many BD calls are >>involved in that process? > >I don't know offhand. My basic approach to figuring this out is either >throw extra information in the timing log, or to write information to a >special log file. Including what webwork function in SQL is being called, >and the sql statement it uses should give an idea of how much and what >kind of activity is involved in a single submission. > >John > > >------------------------------------------------------- >SF email is sponsored by - The IT Product Guide >Read honest & candid reviews on hundreds of IT Products from real users. >Discover which products truly live up to the hype. Start reading now. >http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click >_______________________________________________ >OpenWeBWorK-Devel mailing list >Ope...@li... 
>https://lists.sourceforge.net/lists/listinfo/openwebwork-devel Prof. Arnold K. Pizer Dept. of Mathematics University of Rochester Rochester, NY 14627 (585) 275-7767 |
From: John J. <jj...@as...> - 2005-03-16 20:46:32
|
Arnold Pizer wrote:

> At 02:57 PM 3/16/2005, John Jones wrote:
>
> I don't know if I can tell anything from the above or not. It seems
> submitting an answer does slow stuff down somewhat. How many DB calls
> are involved in that process?

I don't know offhand. My basic approach to figuring this out is either to throw extra information into the timing log, or to write information to a special log file. Including which webwork function in SQL.pm is being called, and the SQL statement it uses, should give an idea of how much and what kind of activity is involved in a single submission.

John |
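A sketch of the kind of instrumentation John has in mind, assuming one simply appends to a private log file from inside SQL.pm; the log path and the calling convention are placeholders.

    # Call from a schema method just before executing a statement.
    sub debug_sql {
        my ($method, $stmt, @bind) = @_;
        open my $log, '>>', '/tmp/webwork-sql-debug.log' or return;
        printf {$log} "[%s] pid=%d %s: %s (%s)\n",
            scalar(localtime), $$, $method, $stmt, join(',', @bind);
        close $log;
    }

    # e.g. inside list():  debug_sql('list', $stmt, @where_args);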
From: Arnold P. <ap...@ma...> - 2005-03-16 20:30:53
|
At 02:57 PM 3/16/2005, John Jones wrote: Hi John, I made the change using the original version of SQL.pm, but removed the word binary from it. Here are the results: [Wed Mar 16 15:10:29 2005] 57113 1111003829 - [/webwork2/mth162/7/1/] [runTime = 0.0 sec sql_single] [Wed Mar 16 15:11:17 2005] 53856 1111003877 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single] [Wed Mar 16 15:11:27 2005] 53856 1111003887 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single] [Wed Mar 16 15:11:34 2005] 53856 1111003894 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single] Not much difference. The 0.0 was viewing the problem for the first time and the next three were submitting answers. I also decided to try viewing other problems for the first time with the following results: [Wed Mar 16 15:17:10 2005] 54980 1111004230 - [/webwork2/mth162/7/2/] [runTime = 0.0 sec sql_single] [Wed Mar 16 15:17:14 2005] 53856 1111004234 - [/webwork2/mth162/7/3/] [runTime = 2.0 sec sql_single] [Wed Mar 16 15:17:15 2005] 54980 1111004235 - [/webwork2/mth162/7/4/] [runTime = 0.0 sec sql_single] [Wed Mar 16 15:17:18 2005] 53856 1111004238 - [/webwork2/mth162/7/5/] [runTime = 2.0 sec sql_single] [Wed Mar 16 15:17:22 2005] 53856 1111004242 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] Then I just submitted blank answers with the following results: [Wed Mar 16 15:17:42 2005] 55114 1111004262 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] [Wed Mar 16 15:17:46 2005] 55114 1111004266 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Wed Mar 16 15:17:50 2005] 55114 1111004270 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] [Wed Mar 16 15:17:53 2005] 55114 1111004273 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] and now with simple wrong answers: [Wed Mar 16 15:26:23 2005] 56205 1111004783 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Wed Mar 16 15:26:30 2005] 53856 1111004790 - [/webwork2/mth162/7/6/] [runTime = 6.0 sec sql_single] [Wed Mar 16 15:26:37 2005] 53856 1111004797 - [/webwork2/mth162/7/6/] [runTime = 6.0 sec sql_single] [Wed Mar 16 15:26:43 2005] 56205 1111004803 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single] [Wed Mar 16 15:26:47 2005] 56205 1111004807 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single] [Wed Mar 16 15:26:51 2005] 54980 1111004811 - [/webwork2/mth162/7/] [runTime = 1.0 sec sql_single] I don't know if I can tell anything from the above or not. I seems submitting an answer does slow stuff down somewhat. How many BD calls are involved in that process? Arnie >Arnold Pizer wrote: > >>At 02:06 PM 3/10/2005, John Jones wrote: >> >>Hi John, >> >>Thanks very much for the patch. I put it in and tested it a little bit. I >>haven't seen any problems but also I don't see a great deal of >>improvement in speed. I just tested it on one problem inserting correct >>and incorrect answers. With the original SQL.pm it took 8 seconds (I >>only did one attempt) and with the new SQL.pm between 2 and 8 seconds >>(details below). The load avg. on the server was about .35 and mysql >>cpu usage was about 30 or 40%. I guess we will have to wait until my >>next assignment is due to see if this makes a difference but the times >>below are still way too long. > >Hi Arnie, > >One thing you can do for comparison is use the old version of SQL.pm, but >remove the word binary from it (it appears in one place). This should be >ok provided you do not have 2 set names or 2 user id's which differ only >in case. 
This will show the maximum speed gain you can get from working >around this indexing problem. When assigning a set to a class, changing >this was something like a factor of 10 in speed between the original and >the original with "binary" removed. > >John > Prof. Arnold K. Pizer Dept. of Mathematics University of Rochester Rochester, NY 14627 (585) 275-7767 |
From: Arnold P. <ap...@ma...> - 2005-03-16 19:48:39
|
At 02:06 PM 3/10/2005, John Jones wrote: Hi John, Thanks very much for the patch. I put it in and tested it a little bit. I haven't seen any problems but also I don't see a great deal of improvement in speed. I just tested it on one problem inserting correct and incorrect answers. With the original SQL.pm it took 8 seconds (I only did one attempt) and with the new SQL.pm between 2 and 8 seconds (details below). The load avg. on the server was about .35 and mysql cpu usage was about 30 or 40%. I guess we will have to wait until my next assignment is due to see if this makes a difference but the times below are still way too long. Arnie using original SQL.pm [Wed Mar 16 14:03:12 2005] 42868 1110999792 - [/webwork2/mth162/7/1/] [runTime = 8.0 sec sql_single] using new SQL.pm [Wed Mar 16 14:27:14 2005] 53844 1111001234 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single] [Wed Mar 16 14:27:44 2005] 54209 1111001264 - [/webwork2/mth162/7/1/] [runTime = 7.0 sec sql_single] [Wed Mar 16 14:28:00 2005] 54209 1111001280 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single] [Wed Mar 16 14:28:18 2005] 53850 1111001298 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single] [Wed Mar 16 14:28:27 2005] 54434 1111001307 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single] [Wed Mar 16 14:28:37 2005] 54434 1111001317 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single] [Wed Mar 16 14:28:45 2005] 54434 1111001325 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single] [Wed Mar 16 14:30:43 2005] 53844 1111001443 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single] [Wed Mar 16 14:30:56 2005] 53844 1111001456 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single] [Wed Mar 16 14:31:05 2005] 53844 1111001465 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single] [Wed Mar 16 14:31:22 2005] 53844 1111001482 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single] [Wed Mar 16 14:34:10 2005] 53850 1111001650 - [/webwork2/mth162/7/1/] [runTime = 8.0 sec sql_single] [Wed Mar 16 14:35:44 2005] 54964 1111001744 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single] [Wed Mar 16 14:44:09 2005] 55114 1111002249 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single] >Hi, > >In a previous e-mail I had found that the slowdown was due to >case-sensitivity in the mysql index. The index is not case sensitive, so >if you force key elements to treated as case sensitive, then the index is >ignored and the whole thing slows to a crawl with big databases. > >Attached is a version of lib/WeBWorK/DB/Schema/SQL.pm which tries to >adjust for this. If you are making a deletion of a database record, or >putting something into the database, then those calls have to use case >sensitive key elements, so they will remain slow. For select statements, >the call can be not case sensitive, and then we filter out the records >which we really want. > >I have tested this, but it would be better if other people tested it >too. Unlike changes to other files, an error here can mess up an existing >course. > >I thought about other ways to handle this. One I mentioned before was to >try to change the data type of the key fields so that the indexing might >end up case sensitive. I haven't found a way to do that yet. > >If the fact that inserting and deleting information from the database is >still slow is a problem, then another possibility would be to add a unique >numeric key to each table. 
>Then when you want to delete rows from a
>table, you select (case insensitively), pick out the ones which match your
>favorite key fields (including case), then use the unique id number for
>those rows in the actual deletion statement.  A similar strategy might
>work for put type statements.
>
>John
>
>
>
>################################################################################
># WeBWorK Online Homework Delivery System
># Copyright (c) 2000-2003 The WeBWorK Project, http://openwebwork.sf.net/
># $CVSHeader: webwork-modperl/lib/WeBWorK/DB/Schema/SQL.pm,v 1.24 2004/10/22 22:59:52 sh002i Exp $
>#
># This program is free software; you can redistribute it and/or modify it under
># the terms of either: (a) the GNU General Public License as published by the
># Free Software Foundation; either version 2, or (at your option) any later
># version, or (b) the "Artistic License" which comes with this package.
>#
># This program is distributed in the hope that it will be useful, but WITHOUT
># ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
># FOR A PARTICULAR PURPOSE.  See either the GNU General Public License or the
># Artistic License for more details.
>################################################################################
>
>package WeBWorK::DB::Schema::SQL;
>use base qw(WeBWorK::DB::Schema);
>
>=head1 NAME
>
>WeBWorK::DB::Schema::SQL - support SQL access to all tables.
>
>=cut
>
>use strict;
>use warnings;
>use Carp qw(croak);
>use Date::Format;
>
>use constant TABLES => qw(*);
>use constant STYLE  => "dbi";
>
>=head1 SUPPORTED PARAMS
>
>This schema pays attention to the following items in the C<params> entry.
>
>=over
>
>=item tableOverride
>
>Alternate name for this table, to satisfy SQL naming requirements.
>
>=item fieldOverride
>
>A reference to a hash mapping field names to alternate names, to satisfy SQL
>naming requirements.
>
>=back
>
>=cut
>
>
>################################################################################
># constructor for SQL-specific behavior
>################################################################################
>
>sub new {
>    my ($proto, $db, $driver, $table, $record, $params) = @_;
>    my $self = $proto->SUPER::new($db, $driver, $table, $record, $params);
>
>    ## override table name if tableOverride param is given
>    #$self->{table} = $params->{tableOverride} if $params->{tableOverride};
>
>    # add sqlTable field
>    $self->{sqlTable} = $params->{tableOverride} || $self->{table};
>
>    return $self;
>}
>
>################################################################################
># table access functions
>################################################################################
>
>sub count {
>    my ($self, @keyparts) = @_;
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @keynames = $self->sqlKeynames();
>
>    croak "too many keyparts for table $table (need at most: @keynames)"
>        if @keyparts > @keynames;
>
>    my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
>    my $stmt = "SELECT * FROM `$sqlTable` $where";
>    $self->debug("SQL-count: $stmt\n");
>
>    $self->{driver}->connect("ro");
>
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    $sth->execute(@where_args);
>    my ($result) = $sth->fetchall_arrayref;
>
>    $self->{driver}->disconnect();
>    my @arr_res = case_check($result, @keyparts);
>
>    return scalar(@arr_res);
>}
>
>sub list($@) {
>    my ($self, @keyparts) = @_;
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @keynames = $self->sqlKeynames();
>    my $keynames = join(", ", @keynames);
>
>    croak "too many keyparts for table $table (need at most: @keynames)"
>        if @keyparts > @keynames;
>
>    my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
>    my $stmt = "SELECT $keynames FROM `$sqlTable` $where";
>    $self->debug("SQL-list: $stmt\n");
>
>    $self->{driver}->connect("ro");
>
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    $sth->execute(@where_args);
>    my $result = $sth->fetchall_arrayref;
>
>    $self->{driver}->disconnect();
>
>    croak "failed to SELECT: $DBI::errstr" unless defined $result;
>    my @arr_res = case_check($result, @keyparts);
>    return @arr_res;
>}
>
>sub exists($@) {
>    my ($self, @keyparts) = @_;
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @keynames = $self->sqlKeynames();
>
>    croak "wrong number of keyparts for table $table (needs: @keynames)"
>        unless @keyparts == @keynames;
>
>    my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
>    my $stmt = "SELECT * FROM `$sqlTable` $where";
>    $self->debug("SQL-exists: $stmt\n");
>
>    $self->{driver}->connect("ro");
>
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    $sth->execute(@where_args);
>    my ($result) = $sth->fetchall_arrayref;
>
>    $self->{driver}->disconnect();
>
>    croak "failed to SELECT: $DBI::errstr" unless defined $result;
>
>    my @arr_res = case_check($result, @keyparts);
>    return scalar(@arr_res) > 0;
>}
>
>sub add($$) {
>    my ($self, $Record) = @_;
>
>    my @realKeynames = $self->{record}->KEYFIELDS();
>    my @keyparts = map { $Record->$_() } @realKeynames;
>    croak "(" . join(", ", @keyparts) . "): exists (use put)"
>        if $self->exists(@keyparts);
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @fieldnames = $self->sqlFieldnames();
>    my $fieldnames = join(", ", @fieldnames);
>    my $marks = join(", ", map { "?" } @fieldnames);
>
>    my @realFieldnames = $self->{record}->FIELDS();
>    my @fieldvalues = map { $Record->$_() } @realFieldnames;
>
>    my $stmt = "INSERT INTO `$sqlTable` ($fieldnames) VALUES ($marks)";
>    $self->debug("SQL-add: $stmt\n");
>
>    $self->{driver}->connect("rw");
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    my $result = $sth->execute(@fieldvalues);
>    $self->{driver}->disconnect();
>
>    unless (defined $result) {
>        my @realKeynames = $self->{record}->KEYFIELDS();
>        my @keyvalues = map { $Record->$_() } @realKeynames;
>        croak "(" . join(", ", @keyvalues) . "): failed to INSERT: $DBI::errstr";
>    }
>
>    return 1;
>}
>
>sub get($@) {
>    my ($self, @keyparts) = @_;
>
>    return ($self->gets(\@keyparts))[0];
>}
>
>sub gets($@) {
>    my ($self, @keypartsRefList) = @_;
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @keynames = $self->sqlKeynames();
>
>    my @records;
>    $self->{driver}->connect("ro");
>    foreach my $keypartsRef (@keypartsRefList) {
>        my @keyparts = @$keypartsRef;
>
>        croak "wrong number of keyparts for table $table (needs: @keynames)"
>            unless @keyparts == @keynames;
>
>        my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
>        my $stmt = "SELECT * FROM `$sqlTable` $where";
>        $self->debug("SQL-gets: $stmt\n");
>
>        my $sth = $self->{driver}->dbi()->prepare($stmt);
>        $sth->execute(@where_args);
>        my $result = $sth->fetchall_arrayref;
>
>        if (defined $result) {
>            my @record = case_check($result, @keyparts);
>            if (@record) {
>                @record = @{$record[0]}; # adjust for fetchall
>                my $Record = $self->{record}->new();
>                my @realFieldnames = $self->{record}->FIELDS();
>                foreach (@realFieldnames) {
>                    my $value = shift @record;
>                    $value = "" unless defined $value; # promote undef to ""
>                    $Record->$_($value);
>                }
>                push @records, $Record;
>            } else {
>                push @records, undef;
>            }
>        } else {
>            push @records, undef;
>        }
>    }
>    $self->{driver}->disconnect();
>
>    return @records;
>}
>
># getAll($userID, $setID)
>#
># Returns all problems in a given set. Only supported for the problem and
># problem_user tables.
>
>sub getAll {
>    my ($self, @keyparts) = @_;
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>
>    croak "getAll: only supported for the problem_user table"
>        unless $table eq "problem" or $table eq "problem_user";
>
>    my @keynames = $self->sqlKeynames();
>    pop @keynames; # get rid of problem_id
>
>    my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
>    my $stmt = "SELECT * FROM `$sqlTable` $where";
>    $self->debug("SQL-getAll: $stmt\n");
>
>    my @records;
>
>    $self->{driver}->connect("ro");
>
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    $sth->execute(@where_args);
>    my $results = $sth->fetchall_arrayref;
>    my @arr_res = case_check($results, @keyparts);
>
>    foreach my $result (@arr_res) {
>        if (defined $result) {
>            my @record = @$result;
>            my $Record = $self->{record}->new();
>            my @realFieldnames = $self->{record}->FIELDS();
>            foreach (@realFieldnames) {
>                my $value = shift @record;
>                $value = "" unless defined $value; # promote undef to ""
>                $Record->$_($value);
>            }
>            push @records, $Record;
>        }
>    }
>    $self->{driver}->disconnect();
>
>    return @records;
>}
>
>sub put($$) {
>    my ($self, $Record) = @_;
>
>    my @realKeynames = $self->{record}->KEYFIELDS();
>    my @keyparts = map { $Record->$_() } @realKeynames;
>    croak "(" . join(", ", @keyparts) . "): not found (use add)"
>        unless $self->exists(@keyparts);
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @fieldnames = $self->sqlFieldnames();
>    my $fieldnames = join(", ", @fieldnames);
>    my $marks = join(", ", map { "?" } @fieldnames);
>
>    my @realFieldnames = $self->{record}->FIELDS();
>    my @fieldvalues = map { $Record->$_() } @realFieldnames;
>
>    my ($where, @where_args) = $self->makeWhereClause(1, map { $Record->$_() } @realKeynames);
>
>    my $stmt = "UPDATE `$sqlTable` SET";
>    while (@fieldnames) {
>        $stmt .= " " . (shift @fieldnames) . "=?";
>        $stmt .= "," if @fieldnames;
>    }
>    $stmt .= " $where";
>    $self->debug("SQL-put: $stmt\n");
>
>    $self->{driver}->connect("rw");
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    my $result = $sth->execute(@fieldvalues, @where_args);
>    $self->{driver}->disconnect();
>
>    unless (defined $result) {
>        croak "(" . join(", ", @keyparts) . "): failed to UPDATE: $DBI::errstr";
>    }
>
>    return 1;
>}
>
>sub delete($@) {
>    my ($self, @keyparts) = @_;
>
>    return 0 unless $self->exists(@keyparts);
>
>    my $table = $self->{table};
>    my $sqlTable = $self->{sqlTable};
>    my @keynames = $self->sqlKeynames();
>
>    croak "wrong number of keyparts for table $table (needs: @keynames)"
>        unless @keyparts == @keynames;
>
>    my ($where, @where_args) = $self->makeWhereClause(1, @keyparts);
>
>    my $stmt = "DELETE FROM `$sqlTable` $where";
>    $self->debug("SQL-delete: $stmt\n");
>
>    $self->{driver}->connect("rw");
>
>    my $sth = $self->{driver}->dbi()->prepare($stmt);
>    my $result = $sth->execute(@where_args);
>
>    $self->{driver}->disconnect();
>    croak "failed to DELETE: $DBI::errstr" unless defined $result;
>
>    return $result;
>}
>
>################################################################################
># utility functions
>################################################################################
>
>sub makeWhereClause($@) {
>    my ($self, $use_binary, @keyparts) = @_;
>    my $binary_string = "";
>    $binary_string = "BINARY" if ($use_binary);
>    my @keynames = $self->sqlKeynames();
>
>    my $where = "";
>    my @used_keyparts;
>
>    my $first = 1;
>    while (@keyparts) {
>        my $name = shift @keynames;
>        my $part = shift @keyparts;
>
>        next unless defined $part;
>
>        $where .= " AND" unless $first;
>        $where .= " $binary_string $name=?";
>        push @used_keyparts, $part;
>
>        $first = 0;
>    }
>
>    my $clause = $where ? "WHERE$where" : "";
>
>    return ($clause, @used_keyparts);
>}
>
>sub sqlKeynames($) {
>    my ($self) = @_;
>    my @keynames = $self->{record}->KEYFIELDS();
>    return map { "`$_`" } map { $self->{params}->{fieldOverride}->{$_} || $_ } @keynames;
>}
>
>sub sqlFieldnames($) {
>    my ($self) = @_;
>    my @keynames = $self->{record}->FIELDS();
>    return map { "`$_`" } map { $self->{params}->{fieldOverride}->{$_} || $_ } @keynames;
>}
>
>sub debug($@) {
>    my ($self, @string) = @_;
>
>    if ($self->{params}->{debug}) {
>        warn @string;
>    }
>}
>
>
># Temporary debugging function
>
>sub see_check {
>    my $what = shift;
>    my $msg = shift;
>    local *LOG;
>    open(LOG, ">>", "/tmp/check-log");
>    print LOG "$msg\n";
>    if ($what == 0) {
>        close(LOG);
>        return();
>    }
>    my $result = shift;
>    my @keyparts = @_;
>    my ($ind, @arr_res, $jj);
>    @arr_res = @$result;
>    print LOG "\n";
>    @arr_res = @$result;
>    for $ind (0..(scalar(@keyparts)-1)) {
>        print LOG "$keyparts[$ind] <-> ";
>        for $jj (@arr_res) {
>            print LOG " " . $jj->[$ind];
>        }
>        print LOG "\n";
>    }
>    close(LOG);
>}
>
>sub case_check {
>    my $result = shift;
>    my @keyparts = @_;
>#    see_check(1, "In case check:", $result, @keyparts);
>    my ($ind, @arr_res);
>    @arr_res = @$result;
>    for $ind (0..(scalar(@keyparts)-1)) {
>        if (defined($keyparts[$ind])) {
>            @arr_res = grep { $keyparts[$ind] eq $_->[$ind] } @arr_res;
>        }
>    }
>#    see_check(1, "Leaving case check:", [@arr_res], @keyparts);
>    return(@arr_res);
>}
>
>1;

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
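(The unique-numeric-key alternative John floats at the top of his message is only proposed, not implemented in the attached file. Below is a minimal sketch of the idea, assuming each table gained an auto-increment column; the column name row_id and the helper itself are hypothetical, all keyparts are assumed to be defined, and a plain DBI handle stands in for WeBWorK's driver wrapper.)

    use strict;
    use warnings;
    use DBI;  # the caller is expected to pass in an already-connected handle

    # Sketch: delete rows by a hypothetical numeric id column instead of by
    # case-sensitive key fields. Select candidates case-insensitively (so the
    # index can be used), filter for an exact case match in Perl, then delete
    # by row_id only.
    sub delete_by_row_id {
        my ($dbh, $sqlTable, $keynames, @keyparts) = @_;

        # Case-insensitive SELECT: fast, because MySQL can use the index
        # on the key columns.
        my $where = join(" AND ", map { "$_ = ?" } @$keynames);
        my $rows  = $dbh->selectall_arrayref(
            "SELECT row_id, " . join(", ", @$keynames)
                . " FROM `$sqlTable` WHERE $where",
            {}, @keyparts
        );

        # Keep only the rows whose key fields match exactly, including case.
        my @ids;
        for my $row (@$rows) {
            my ($row_id, @values) = @$row;
            my $exact = 1;
            for my $i (0 .. $#{$keynames}) {
                $exact = 0 unless $values[$i] eq $keyparts[$i];
            }
            push @ids, $row_id if $exact;
        }
        return 0 unless @ids;

        # Delete by numeric id only; no case-sensitive key comparison, so the
        # DELETE itself stays cheap.
        my $marks = join(", ", map { "?" } @ids);
        return $dbh->do("DELETE FROM `$sqlTable` WHERE row_id IN ($marks)", {}, @ids);
    }

(The same pattern, select insensitively, filter in Perl, then act on the ids, could presumably be applied to the put path as well, as John suggests.)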