From: Sam H. <sh...@ma...> - 2006-07-21 00:50:34

On Jul 20, 2006, at 6:17 PM, Michael Gage wrote:

> On Jul 20, 2006, at 3:20 PM, Sam Hathaway wrote:
>
>> So you'd think that the solution would be to simply add the "-nosticky" pragma, like we originally tried. However, from looking at the CGI.pm code, I think -nosticky doesn't do what we think it does. What it appears to do is suppress the output of hidden ".cgifields" fields naming each of the radio buttons, checkboxes, and scrolling lists in the form. (And also give the default name ".submit" to unnamed submit buttons.) THAT'S ALL.
>>
>> As far as I can tell, all that ".cgifields" does is ensure that the parameter *names* for checkboxes, radio buttons, and scrolling lists are still known to CGI.pm even if none of them are selected (and thus don't appear in the request). This has very little to do with form field "stickiness", despite statements to the contrary in the docs. (The only connection is that it prevents checkboxes, radio buttons, and scrolling lists from being reset to their defaults if none of their items are selected.) It looks to me like the only way to turn off form field stickiness is to specify "-override=>1" in each field.
>>
>> This seems really weird to me, since it implies that the CGI module documentation is just plain wrong. Could you do me a favor and look at $NOSTICKY in the CGI.pm source and tell me if I'm missing something?
>
> I've looked through the code. As best I can tell you are right. We'll need -overrides in nearly all of the CGI::() calls, since we assumed that our values wouldn't be overridden by values from the param list. I remember that when we did the CGI scripts for WeBWorK 1.x we had a lot of problems with when to use overrides and when they weren't needed. We were probably using GET a good deal more often in that case and actually needed the overrides.
>
> Adding overrides seemed to work. I added several in my first attempt to analyze the problem. I got tired after a while, but it is not that overwhelming a chore -- just tedious. I think we can safely assume that we want -override=>1 in every CGI call with a -name parameter.

I tried attacking this from the other angle -- delete all the params from the CGI object before any form field functions can be called. This makes all requests (GET and POST) under both versions of Apache behave like POST requests under Apache 1. As far as I can tell, this is working. Plus, it's a much less invasive solution than CGIParamShim or CGIeasytags -- it's pretty much a one-liner in a subclass. I've added it as WeBWorK::CGIDeleteParams. I also added it as an alternative in WeBWorK::CGI. It needs additional testing, which I will give it as I continue porting efforts.

> I haven't been doing lots of testing with the standard CGI -- are you still getting weird behavior a lot of the time?

Yeah, CGIParamShim wasn't really solving the problem. But CGIDeleteParams seems to so far.

> We could also decide to go with CGIeasytags. If the extra {} are bothering you, we can be a bit more clever about deciding whether the input to CGI::foo() is of the form:
> (1) CGI::foo({params}, text) -- easy
> (2) CGI::foo(-name=>'foo', -value=>'bar') or
> (3) CGI::foo("name", "foo", "value", "bar") i.e. is supposed to produce namefoovaluebar
> (4) CGI::foo("foo", "bar");
>
> Nothing would be perfect but one could do a lot more than I tried to.

The extra {} are a bit annoying, but what really bugs me about CGIeasytags is that you've had to reimplement parts of CGI (and CGI::Util) to make it compatible with how we've been calling CGI functions. I just recently discovered the rearrange function in CGI::Util. It does most of the parameter munging for the CGI functions, and handles all four of the forms you list above. For example, here's the call in CGI::hidden:

    my($name,$default,$override,@other) =
        rearrange([NAME,[DEFAULT,VALUE,VALUES],[OVERRIDE,FORCE]],@p);

If you think CGI::EasyTags is the way to go, you might consider replacing the parameter parsing with calls to rearrange.

-sam
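The "one-liner in a subclass" idea above can be sketched as follows. This is not the actual WeBWorK::CGIDeleteParams code (which is not shown in the thread); the stand-in parent class below is invented so the sketch runs without CGI.pm installed -- the real subclass would inherit from CGI and call its delete_all method.

```perl
# A minimal stand-in for CGI.pm's param storage, so this sketch runs
# without CGI.pm; the real subclass would inherit from CGI itself.
package FakeCGI;
sub new        { my $class = shift; bless { params => { @_ } }, $class }
sub param      { my ($self, $key) = @_; $self->{params}{$key} }
sub delete_all { my $self = shift; %{ $self->{params} } = (); return }

# The CGIDeleteParams idea: subclass the CGI object and wipe every
# parsed parameter in the constructor, so form-field helpers never
# see "sticky" request values and always fall back to -value/-default.
package DeleteParams;
our @ISA = ('FakeCGI');
sub new {
    my ($class, @args) = @_;
    my $self = $class->SUPER::new(@args);
    $self->delete_all;    # the "one-liner"
    return $self;
}

package main;
my $q = DeleteParams->new(pwd => '/old/path');
print defined $q->param('pwd') ? "sticky\n" : "clean\n";   # prints "clean"
```

With all params deleted up front, every request behaves like a POST under Apache 1, which is exactly the behavior the existing form code assumes.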
From: Michael G. <ga...@ma...> - 2006-07-20 22:17:28

On Jul 20, 2006, at 3:20 PM, Sam Hathaway wrote:

> So you'd think that the solution would be to simply add the "-nosticky" pragma, like we originally tried. However, from looking at the CGI.pm code, I think -nosticky doesn't do what we think it does. What it appears to do is suppress the output of hidden ".cgifields" fields naming each of the radio buttons, checkboxes, and scrolling lists in the form. (And also give the default name ".submit" to unnamed submit buttons.) THAT'S ALL.
>
> As far as I can tell, all that ".cgifields" does is ensure that the parameter *names* for checkboxes, radio buttons, and scrolling lists are still known to CGI.pm even if none of them are selected (and thus don't appear in the request). This has very little to do with form field "stickiness", despite statements to the contrary in the docs. (The only connection is that it prevents checkboxes, radio buttons, and scrolling lists from being reset to their defaults if none of their items are selected.) It looks to me like the only way to turn off form field stickiness is to specify "-override=>1" in each field.
>
> This seems really weird to me, since it implies that the CGI module documentation is just plain wrong. Could you do me a favor and look at $NOSTICKY in the CGI.pm source and tell me if I'm missing something?

I've looked through the code. As best I can tell you are right. We'll need -overrides in nearly all of the CGI::() calls, since we assumed that our values wouldn't be overridden by values from the param list. I remember that when we did the CGI scripts for WeBWorK 1.x we had a lot of problems with when to use overrides and when they weren't needed. We were probably using GET a good deal more often in that case and actually needed the overrides.

Adding overrides seemed to work. I added several in my first attempt to analyze the problem. I got tired after a while, but it is not that overwhelming a chore -- just tedious. I think we can safely assume that we want -override=>1 in every CGI call with a -name parameter.

I haven't been doing lots of testing with the standard CGI -- are you still getting weird behavior a lot of the time?

We could also decide to go with CGIeasytags. If the extra {} are bothering you, we can be a bit more clever about deciding whether the input to CGI::foo() is of the form:

(1) CGI::foo({params}, text) -- easy
(2) CGI::foo(-name=>'foo', -value=>'bar') or
(3) CGI::foo("name", "foo", "value", "bar") i.e. is supposed to produce namefoovaluebar
(4) CGI::foo("foo", "bar");

Nothing would be perfect but one could do a lot more than I tried to.

Take care,
Mike

> -sam
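The dispatch Mike describes -- deciding whether CGI::foo() was called with named -key=>value pairs or positional arguments -- is essentially what CGI::Util's rearrange does. Here is a simplified, core-Perl sketch of that idea (this is my own illustration, not the actual CGI.pm code, which handles more cases):

```perl
# Simplified sketch of rearrange-style dispatch: the first argument
# starting with '-' signals named style; otherwise arguments are
# treated as positional, lined up against the order list.
sub my_rearrange {
    my ($order, @args) = @_;
    if (@args && defined $args[0] && $args[0] =~ /^-/) {
        my %named;
        while (@args) {
            my ($k, $v) = (shift @args, shift @args);
            $k =~ s/^-//;                 # '-name' and 'name' are the same key
            $named{lc $k} = $v;
        }
        return map {
            my @syn = ref $_ ? @$_ : ($_);   # synonym groups like [DEFAULT,VALUE]
            my ($hit) = grep { exists $named{lc $_} } @syn;
            defined $hit ? $named{lc $hit} : undef;
        } @$order;
    }
    # Positional style: values in the order given.
    return @args[0 .. $#$order];
}

my ($name, $value) = my_rearrange([ 'NAME', ['DEFAULT','VALUE'] ],
                                  -name => 'pwd', -value => '/tmp');
print "$name=$value\n";   # prints "pwd=/tmp"
```

The synonym groups are what let the same slot accept -value or -default, mirroring the [DEFAULT,VALUE,VALUES] group in the CGI::hidden call quoted later in the thread.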
From: Sam H. <sh...@ma...> - 2006-07-20 19:20:19

Hi Mike,

In doing some testing, I've discovered that CGI.pm doesn't behave the way we thought it did under Apache 1. For some reason, the default CGI object ($CGI::Q) doesn't seem to be getting parameters from POSTDATA. I've added a Test.pm content generator (/webwork2/coursename/test/) which demonstrates this behavior. Click "Refresh" and note that CGI::param('pwd') returns an empty value under Apache 1, even though there is a "pwd" field in the request. Also note that the $CGI::Q object dumped at the top of the page is essentially empty. If you change the POST to a GET, the parameters show up in the CGI object, and the new "pwd" value from $self->{pwd} isn't used in the hidden field.

We actually rely on this lack of existing values. For example, FileManager relies on the value of $self->{pwd} actually being written out in the line:

    print CGI::hidden({name=>'pwd',value=>$self->{pwd}});

In order for this to happen, CGI has to think that there is no existing "pwd" parameter.

In Apache 2, CGI.pm *is* able to get parameters from POSTDATA. This is the REAL CAUSE of the original parameter problem that we've been working on. CGI.pm failing to get parameter data is not the problem -- it's actually when it succeeds that things break.

So this changes the contours of the problem a bit. Only two forms in WeBWorK use GET requests. They are the database export form in CourseAdmin.pm and the main form in Instructor/Index.pm. Sticky values *could* be a problem in these forms, but as it turns out, they aren't. The rest of WeBWorK's forms use POST, and many form fields have been written with the assumption that the "-value" or "-default" will always be used.

So you'd think that the solution would be to simply add the "-nosticky" pragma, like we originally tried. However, from looking at the CGI.pm code, I think -nosticky doesn't do what we think it does. What it appears to do is suppress the output of hidden ".cgifields" fields naming each of the radio buttons, checkboxes, and scrolling lists in the form. (And also give the default name ".submit" to unnamed submit buttons.) THAT'S ALL.

As far as I can tell, all that ".cgifields" does is ensure that the parameter *names* for checkboxes, radio buttons, and scrolling lists are still known to CGI.pm even if none of them are selected (and thus don't appear in the request). This has very little to do with form field "stickiness", despite statements to the contrary in the docs. (The only connection is that it prevents checkboxes, radio buttons, and scrolling lists from being reset to their defaults if none of their items are selected.) It looks to me like the only way to turn off form field stickiness is to specify "-override=>1" in each field.

This seems really weird to me, since it implies that the CGI module documentation is just plain wrong. Could you do me a favor and look at $NOSTICKY in the CGI.pm source and tell me if I'm missing something?

-sam
From: Davide P. C. <dp...@un...> - 2006-07-17 21:14:41

> There is a significant difference in how fun_cmp displays "correct answers" in WW 1.9 and WW 2. ... For example if one asked for an antiderivative of 7x^12 and gave fun_cmp("(7/13)x^13") as the answer, the student in WW 1.9 sees (7/13)x^13 as the correct answer. In WW 2, the student sees .538462*x^13. Would it be easy (I guess this is a question for Davide) to recover the WW 1 behavior?

Yes, it is easy, and I have made the change in PGanswermacros.pl in the CVS repository.

Davide
From: Arnold P. <ap...@ma...> - 2006-07-17 18:51:15

Hi,

There is a significant difference in how fun_cmp displays "correct answers" in WW 1.9 and WW 2. This has to do with displaying correct answers symbolically. For num_cmp the behavior in WW 1.9 and WW 2 is the same, and it's easy (though maybe not well known) to display correct answers e.g. as 2*cos(6) rather than the difficult to understand 1.92034057330073. The trick is to give the answer as a string --- see http://webhost.math.rochester.edu/webworkdocs/discuss/msgReader$651

Since fun_cmp takes string answers, this behavior was automatic in WW 1. For example, if one asked for an antiderivative of 7x^12 and gave fun_cmp "(7/13)x^13" as the answer, the student in WW 1.9 sees (7/13)x^13 as the correct answer. In WW 2, the student sees .538462*x^13. Would it be easy (I guess this is a question for Davide) to recover the WW 1 behavior?

Arnie

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
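Arnie's example corresponds to a PG problem fragment along the following lines. This is an illustrative sketch (the problem text is invented around the answer from the message); it is only meaningful inside WeBWorK's PG rendering environment, so it is not runnable on its own:

```perl
# PG fragment (illustrative; runs only inside WeBWorK's PG environment):
BEGIN_TEXT
Find an antiderivative of \( 7x^{12} \): \{ ans_rule(20) \}
END_TEXT

# Passing the answer as a *string* is what makes the correct answer
# display symbolically as (7/13)x^13 rather than as .538462*x^13:
ANS(fun_cmp("(7/13)x^13"));
```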
From: Sam H. <sa...@uo...> - 2006-07-05 16:21:09

Hi All,

I just removed the directory moodle_mod from the webwork2 CVS module. Moodle integration code is now in a separate module, wwmoodle.

If you update your webwork2 working copy, you'll see the following error:

    cvs update: Updating moodle_mod
    cvs update: cannot open directory /webwork/cvs/system/webwork2/moodle_mod: No such file or directory
    cvs update: skipping directory moodle_mod

The error is harmless but annoying. To eliminate it, edit the file CVS/Entries in the root of your working copy, and remove this line:

    D/moodle_mod////

Then delete the moodle_mod directory.

-sam
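The two cleanup steps above can be scripted. The sketch below first builds a tiny stand-in working copy (the real one would be your webwork2 checkout) so the commands can be demonstrated end to end; the sed syntax assumes GNU sed's -i:

```shell
set -e
# Stand-in working copy (in a real checkout you would skip this setup):
mkdir -p wc/CVS wc/moodle_mod
printf 'D/moodle_mod////\nD/lib////\n' > wc/CVS/Entries
cd wc
# Step 1: drop the stale directory entry from CVS/Entries.
sed -i '/^D\/moodle_mod\/\/\/\//d' CVS/Entries
# Step 2: remove the now-untracked directory.
rm -rf moodle_mod
```

After this, "cvs update" no longer tries to visit moodle_mod, and the other entries in CVS/Entries are untouched.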
From: Michael G. <ga...@ma...> - 2006-06-25 00:02:50

Hi Davide,

I like this solution. Thanks for all of the clean up work.

Take care,
Mike

On Jun 24, 2006, at 5:12 PM, dpvc via activitymail wrote:

> Log Message:
> -----------
> Changed messages that refer to 'Save as' to use 'Create a copy' instead (which is the current wording for that function).
>
> Problem: Since the default in that pop-up menu is "Rename file" rather than "Create a copy", it is not immediately clear what this message refers to.
>
> I still think "Create a copy" should be the default rather than "Rename file", since the latter only makes sense when you get to the editor from a file linked to a homework set, which is not always the case (e.g., when you use the Library Browser). It would also make these error messages refer to something visible rather than a menu item that is not showing.
From: Sam H. <sh...@ma...> - 2006-06-23 23:29:39

On Jun 21, 2006, at 11:13 AM, Mike Gage via activitymail wrote:

> Log Message:
> -----------
> Committing our local version of wwmoodle modules for bridging moodle and webwork

Hey Mike,

Before I left for the week, I created a new "wwmoodle" CVS module in the system repository, with a copy of the moodle files from wwmoodle beta 4. My plan was to then merge in the changes that we've made locally into that tree. I prefer to have these files in a separate tree, but if you'd rather they be within webwork2, that's fine too.

I also noticed that the original files have CVS $Id$ tags in them. Might Peter have a CVS repository? If so, it might be nice to "adopt" that rather than starting over -- having the complete history would be nice.

-sam
From: Sam H. <sh...@ma...> - 2006-06-09 16:23:10

On Jun 9, 2006, at 10:57 AM, Arnold Pizer wrote:

> You could create a standard dummy course (WeBWorK Instructors) and add the new instructors there. Of course this would mean a few extra steps (logging into the WeBWorK Instructors course) which you are trying to avoid, but it might be cleaner than messing with the Admin course.

Maybe hooks for optional custom actions on course creation/deletion would be appropriate for this. Logging the course name and institution could use this as well. The hooks could be defined as anonymous subroutines in Constants.pm, perhaps.

-sam
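Sam's hook idea could look something like the sketch below. All names here (%COURSE_HOOKS, run_hook, the hook keys) are invented for illustration -- nothing like this existed in Constants.pm at the time; it just shows the "anonymous subroutines in a config file" pattern:

```perl
# Hypothetical hook table, as it might be written in a Constants.pm-style
# config file: optional anonymous subs keyed by event name.
our %COURSE_HOOKS = (
    after_create => sub {
        my ($course_name, %opts) = @_;
        print "created $course_name for $opts{institution}\n";
    },
    after_delete => sub {
        my ($course_name) = @_;
        print "deleted $course_name\n";
    },
);

# The course-management code would call this at the right moments;
# a missing or non-code entry simply means "no custom action".
sub run_hook {
    my ($name, @args) = @_;
    my $hook = $COURSE_HOOKS{$name};
    return ref $hook eq 'CODE' ? $hook->(@args) : undef;
}

run_hook(after_create => 'mth161', institution => 'UR');  # prints "created mth161 for UR"
```

Because the hooks live in configuration, an admin could log course names and institutions, add instructors to a dummy course, or do nothing at all, without touching the core code.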
From: Arnold P. <ap...@ma...> - 2006-06-09 14:59:18

Hi,

You could create a standard dummy course (WeBWorK Instructors) and add the new instructors there. Of course this would mean a few extra steps (logging into the WeBWorK Instructors course) which you are trying to avoid, but it might be cleaner than messing with the Admin course.

Arnie

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
From: P. G. L. <gl...@um...> - 2006-06-09 13:14:21

Hi all,

Philosophically, to me it seems odd to put people in the admin course when they aren't administrators. I can see the argument either way, and certainly don't have a good enough feel for the code base as a whole to tout my reaction as gospel truth. But it seems it would be aesthetically more pleasing to have an 'e-mail all course instructors' option in the admin course, or something similar, rather than simply putting them all in as members of the course. It seems this could be the same as the e-mail students option in a regular class, with some conditional that builds a list of instructors instead of students when the list of people to e-mail is generated.

Probably not even two full cents there, so call it 1.37 cents,
Gavin

--
P Gavin LaRose, PhD | gl...@um... | 734.764.6454
Program Manager, Instructional Technology
Mathematics Dept, University of Michigan
http://www.math.lsa.umich.edu/~glarose/
...you have to respect someone who can spell Tuesday, even if they can't spell it right. -Milne

On Thu, 8 Jun 2006, Sam Hathaway wrote:

> On Jun 7, 2006, at 10:18 PM, Michael Gage wrote:
>
>> Hi everyone,
>>
>> I'd appreciate comments on the following ideas I have for modifying the admin course (CourseAdmin.pm Module).
>>
>> Since I run hosted.webwork.rochester.edu I am frequently setting up new courses for folks and then archiving them at the end of the semester. I have a log file of all of the courses I add, but I need to process that (tracking insertions and deletions) in order to get a mailing list for those currently using the course. My suggested changes would make it easier to email the people responsible for active courses on the server.
>>
>> The classlist of the admin course is currently not being used for much, except to store the names and passwords of those allowed to set up new courses on the server. These folks are all "professors" in the admin course. By default they are all transferred to any new courses created as "professors" so that they can help with any problems that arise.
>>
>> When a new course is created I would like to add the new instructor for the new course as a "student" in the admin course. Students in admin would not be transferred to new courses that are created. By looking at the list of students in admin I would have an automatic way to send email to all "students", i.e. instructors for existing courses on the server.
>>
>> When a course is dropped the instructor of that course would be dropped as a "student" in the admin course. This means there would still be a record of the instructor in the course, but they would not be active and would not normally receive e-mail.
>>
>> Since one person might be an instructor for several courses, I would use courseName_userID as the student id in the admin course, so UR_mth161_gage and UR_mth162_gage would be separate students. I could also use the comments field to store additional information. (or perhaps gage_mth161 and gage_mth162 would be better)
>>
>> I'd appreciate comments or suggestions. I realize that others may use the CourseAdmin somewhat differently so let me know if this would interfere (or help) with your own use of CourseAdmin. It's certainly true that I can write a script to massage the log file and obtain current information about the state of the server, but using the email system that is already set up for the course appeals to me.
>
> I always figured that the admin course was a temporary hack. If we had had a global user table (like Moodle does) we wouldn't be using it at all.
>
> More practically, adding non-administrators to the admin course requires it to have a custom permission setup, one where only professors are allowed to log in. This would have to be defined in the admin course's course.conf, so it would make the admin course setup process slightly more complicated.
>
> You'd also want to double-check that the hasPermissions calls in CourseAdmin are appropriate. Up until now, I've been assuming that only administrators will have access to the admin course at all, so they may be inappropriately lax.
>
> -sam
>
> _______________________________________________
> OpenWeBWorK-Devel mailing list
> Ope...@li...
> https://lists.sf.net/lists/listinfo/openwebwork-devel
From: Sam H. <sh...@ma...> - 2006-06-09 02:52:41

On Jun 7, 2006, at 10:18 PM, Michael Gage wrote:

> Hi everyone,
>
> I'd appreciate comments on the following ideas I have for modifying the admin course (CourseAdmin.pm Module).
>
> Since I run hosted.webwork.rochester.edu I am frequently setting up new courses for folks and then archiving them at the end of the semester. I have a log file of all of the courses I add, but I need to process that (tracking insertions and deletions) in order to get a mailing list for those currently using the course. My suggested changes would make it easier to email the people responsible for active courses on the server.
>
> The classlist of the admin course is currently not being used for much, except to store the names and passwords of those allowed to set up new courses on the server. These folks are all "professors" in the admin course. By default they are all transferred to any new courses created as "professors" so that they can help with any problems that arise.
>
> When a new course is created I would like to add the new instructor for the new course as a "student" in the admin course. Students in admin would not be transferred to new courses that are created. By looking at the list of students in admin I would have an automatic way to send email to all "students", i.e. instructors for existing courses on the server.
>
> When a course is dropped the instructor of that course would be dropped as a "student" in the admin course. This means there would still be a record of the instructor in the course, but they would not be active and would not normally receive e-mail.
>
> Since one person might be an instructor for several courses, I would use courseName_userID as the student id in the admin course, so UR_mth161_gage and UR_mth162_gage would be separate students. I could also use the comments field to store additional information. (or perhaps gage_mth161 and gage_mth162 would be better)
>
> I'd appreciate comments or suggestions. I realize that others may use the CourseAdmin somewhat differently so let me know if this would interfere (or help) with your own use of CourseAdmin. It's certainly true that I can write a script to massage the log file and obtain current information about the state of the server, but using the email system that is already set up for the course appeals to me.

I always figured that the admin course was a temporary hack. If we had had a global user table (like Moodle does) we wouldn't be using it at all.

More practically, adding non-administrators to the admin course requires it to have a custom permission setup, one where only professors are allowed to log in. This would have to be defined in the admin course's course.conf, so it would make the admin course setup process slightly more complicated.

You'd also want to double-check that the hasPermissions calls in CourseAdmin are appropriate. Up until now, I've been assuming that only administrators will have access to the admin course at all, so they may be inappropriately lax.

-sam
From: Michael G. <ga...@ma...> - 2006-06-08 02:19:30

Hi everyone,

I'd appreciate comments on the following ideas I have for modifying the admin course (CourseAdmin.pm Module).

Since I run hosted.webwork.rochester.edu I am frequently setting up new courses for folks and then archiving them at the end of the semester. I have a log file of all of the courses I add, but I need to process that (tracking insertions and deletions) in order to get a mailing list for those currently using the course. My suggested changes would make it easier to email the people responsible for active courses on the server.

The classlist of the admin course is currently not being used for much, except to store the names and passwords of those allowed to set up new courses on the server. These folks are all "professors" in the admin course. By default they are all transferred to any new courses created as "professors" so that they can help with any problems that arise.

When a new course is created I would like to add the new instructor for the new course as a "student" in the admin course. Students in admin would not be transferred to new courses that are created. By looking at the list of students in admin I would have an automatic way to send email to all "students", i.e. instructors for existing courses on the server.

When a course is dropped the instructor of that course would be dropped as a "student" in the admin course. This means there would still be a record of the instructor in the course, but they would not be active and would not normally receive e-mail.

Since one person might be an instructor for several courses, I would use courseName_userID as the student id in the admin course, so UR_mth161_gage and UR_mth162_gage would be separate students. I could also use the comments field to store additional information. (or perhaps gage_mth161 and gage_mth162 would be better)

I'd appreciate comments or suggestions. I realize that others may use the CourseAdmin somewhat differently so let me know if this would interfere (or help) with your own use of CourseAdmin. It's certainly true that I can write a script to massage the log file and obtain current information about the state of the server, but using the email system that is already set up for the course appeals to me.

Take care,
Mike
From: Sam H. <sh...@ma...> - 2006-05-25 20:59:58

Hello WeBWorK Team,

I'm preparing to release WeBWorK and PG 2.2.1. Here are the changes that have gone in so far:

http://devel.webwork.rochester.edu/twiki/bin/view/Webwork/WeBWorKRelease2pt2pt1
http://devel.webwork.rochester.edu/twiki/bin/view/Webwork/PGLanguageRelease2pt2pt1

As usual, please let me know if you have any "pet bugs" you'd like fixed before this release. I'm planning to do the release tomorrow night unless you tell me about bugs. Thanks.

-sam
From: P. G. L. <gl...@um...> - 2006-05-18 12:12:05

> One concern is that even if we split source_file up into multiple fields later, we still end up having to support this format for old set definition files. Probably not a big deal.
>
> Another issue is not having a one-to-one correspondence between problems in the "prototype" UserSet and problems in the versioned UserSet. If that seems like a problem then the second solution would probably be better.

These were, essentially, my concerns, though not so well phrased. I also wondered if there would ever be a case in which one might want to have two problems in a set drawn from the same group and not worry about excluding the possibility of a repeat. In that each problem gets a different seed, that might not be a bad thing in many cases, but I can't think of when it would be desirable. In any event, having multiple problem entries in the UserSet and then just requiring that no group problems are repeated would prevent this.

Gavin

--
P. Gavin LaRose / Instructional Tech.,
Math Dept., University of Michigan
gl...@um...
734.764.6454
http://www.math.lsa.umich.edu/~glarose/
The tough problem is not in identifying winners; it is in making winners of ordinary people. That, after all, is the overwhelming purpose of education. -K.P.Cross

On Thu, 18 May 2006, Sam Hathaway wrote:

> On May 15, 2006, at 11:22 AM, P. Gavin LaRose wrote:
>
>> I think the logical way to do this is to refine the way problems are selected from the group to include an indication of the number of problems to be selected from the group. That is, instead of having the set declare a problem "group:topic1", have the declaration be "group:topic1:N", where N is the number of problems to include. Then when a new version of the set is created and assignProblemToUserSetVersion is called, it would actually add N problems to the user's set version.
>
> This looks good, and I think you should try it.
>
> One concern is that even if we split source_file up into multiple fields later, we still end up having to support this format for old set definition files. Probably not a big deal.
>
> Another issue is not having a one-to-one correspondence between problems in the "prototype" UserSet and problems in the versioned UserSet. If that seems like a problem then the second solution would probably be better.
>
> Speaking of which, I don't think option 2 would be that hard to implement. assignSetVersionToUser could keep a hash and pass it by reference to assignProblemToUserSetVersion, which would record the actual source files chosen for each group and check the existing entries to rule out duplicates.
>
> -sam
From: Sam H. <sh...@ma...> - 2006-05-18 04:43:20

On May 15, 2006, at 11:22 AM, P. Gavin LaRose wrote:

> I think the logical way to do this is to refine the way problems are selected from the group to include an indication of the number of problems to be selected from the group. That is, instead of having the set declare a problem "group:topic1", have the declaration be "group:topic1:N", where N is the number of problems to include. Then when a new version of the set is created and assignProblemToUserSetVersion is called, it would actually add N problems to the user's set version.

This looks good, and I think you should try it.

One concern is that even if we split source_file up into multiple fields later, we still end up having to support this format for old set definition files. Probably not a big deal.

Another issue is not having a one-to-one correspondence between problems in the "prototype" UserSet and problems in the versioned UserSet. If that seems like a problem then the second solution would probably be better.

Speaking of which, I don't think option 2 would be that hard to implement. assignSetVersionToUser could keep a hash and pass it by reference to assignProblemToUserSetVersion, which would record the actual source files chosen for each group and check the existing entries to rule out duplicates.

-sam
From: Jeff H. <je...@vi...> - 2006-05-16 15:49:56
|
We've been using Gavin's gateway testing feature for the last two terms. There have been times where the "group:topic1:N" would have been handy to have. I'd be in favor of adding this in if it's not too much trouble. --Jeff --On Monday, May 15, 2006 11:22 AM -0400 "P. Gavin LaRose" <gl...@um...> wrote: > Hi all, > > This is a question about adding problems to a set from a group of > problems, which is done for gateway tests. This is (only) implemented > for versioned sets (that is, gateway tests). The idea is that one can > define a problem set (gateway test) for which each problem on the set is > drawn from a group of possible problems. (For example, on a derivative > gateway test we might want one "derivatives of trig functions" problem, > one "derivatives of exponentials", etc.) > > This is done in the following manner. > - a set, say, settopic1.def, is created that lists all of the problems > in > the group. then > - in the set definition for the set including a problem from the group > declares the problem source to be "group:topic1" > - this source file name is caught by the subroutine > Instructor::assignProblemToUserSetVersion when a new version of the > set > is being created for the user, and the problem that is actually added > to the user's set is randomly selected from the list of problems in > the > topic1 set. > > I'm now thinking about how to include more than one problem from a > problem group. > > I think the logical way to do this is to refine the way problems are > selected from the group to include an indication of the number of problems > to be selected from the group. That is, instead of having the set declare > a problem "group:topic1", have the declaration be "group:topic1:N", where > N is the number of problems to include. Then when a new version of the > set is created and assignProblemToUserSetVersion is called, it would > actually add N problems to the user's set version. 
> An argument against this might be that it's aesthetically displeasing: the set is being created with Instructor::assignSetVersionToUser, which (currently) makes one call to assignProblemToUserSetVersion for every problem to be added to the set. What I've suggested above would push some of the "set construction" down into the problem assignment (because assignProblemToUserSetVersion could now be doing multiple assignments per "problem"), which seems logically incorrect.
>
> The other option would be to allow multiple problems to be assigned from the group, and to have some check built into assignProblemToUserSetVersion that it's not assigning the same problem that's already been assigned. But that would be rather more difficult to implement.
>
> Any comments?
>
> Thanks,
> Gavin
>
> --
> P. Gavin LaRose / Instructional Tech., Math Dept., University of Michigan
> gl...@um... / 734.764.6454
> http://www.math.lsa.umich.edu/~glarose/
> "The tough problem is not in identifying winners; it is in making winners of ordinary people. That, after all, is the overwhelming purpose of education." -K.P.Cross
>
> _______________________________________________
> OpenWeBWorK-Devel mailing list
> Ope...@li...
> https://lists.sf.net/lists/listinfo/openwebwork-devel
|
From: P. G. L. <gl...@um...> - 2006-05-15 15:22:29
|
Hi all,

This is a question about adding problems to a set from a group of problems, which is done for gateway tests. This is (only) implemented for versioned sets (that is, gateway tests). The idea is that one can define a problem set (gateway test) for which each problem on the set is drawn from a group of possible problems. (For example, on a derivative gateway test we might want one "derivatives of trig functions" problem, one "derivatives of exponentials" problem, etc.)

This is done in the following manner:
- A set, say settopic1.def, is created that lists all of the problems in the group. Then
- the set definition for any set that includes a problem from the group declares the problem source to be "group:topic1".
- This source file name is caught by the subroutine Instructor::assignProblemToUserSetVersion when a new version of the set is being created for the user, and the problem that is actually added to the user's set is randomly selected from the list of problems in the topic1 set.

I'm now thinking about how to include more than one problem from a problem group.

I think the logical way to do this is to refine the way problems are selected from the group to include an indication of the number of problems to be selected from the group. That is, instead of having the set declare a problem "group:topic1", have the declaration be "group:topic1:N", where N is the number of problems to include. Then when a new version of the set is created and assignProblemToUserSetVersion is called, it would actually add N problems to the user's set version.

An argument against this might be that it's aesthetically displeasing: the set is being created with Instructor::assignSetVersionToUser, which (currently) makes one call to assignProblemToUserSetVersion for every problem to be added to the set.
What I've suggested above would push some of the "set construction" down into the problem assignment (because assignProblemToUserSetVersion could now be doing multiple assignments per "problem"), which seems logically incorrect.

The other option would be to allow multiple problems to be assigned from the group, and to have some check built into assignProblemToUserSetVersion that it's not assigning the same problem that's already been assigned. But that would be rather more difficult to implement.

Any comments?

Thanks,
Gavin

--
P. Gavin LaRose / Instructional Tech., Math Dept., University of Michigan
gl...@um... / 734.764.6454
http://www.math.lsa.umich.edu/~glarose/
"The tough problem is not in identifying winners; it is in making winners of ordinary people. That, after all, is the overwhelming purpose of education." -K.P.Cross
|
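The "group:topic1:N" refinement proposed above could be parsed and applied roughly as follows. This is an illustrative Python sketch (the real code would live in Perl inside assignProblemToUserSetVersion, and both helper names here are hypothetical); a bare "group:topic1" keeps the current single-problem behavior, so old set definition files still work:

```python
import random

def parse_group_source(source_file):
    """Split a "group:topic1:N" source string into (group_name, count).

    Returns None for an ordinary .pg source file. A bare "group:topic1"
    defaults to count 1, preserving the existing behavior.
    """
    parts = source_file.split(":")
    if parts[0] != "group":
        return None
    name = parts[1]
    count = int(parts[2]) if len(parts) > 2 else 1
    return name, count

def select_from_group(group_problems, count):
    """Randomly select `count` distinct problems from the group's list."""
    if count > len(group_problems):
        raise ValueError("group defines fewer problems than requested")
    return random.sample(group_problems, count)
```

Using random.sample (selection without replacement) sidesteps the duplicate-problem issue within a single group, though not across groups that share source files.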
From: Sam H. <sh...@ma...> - 2006-04-18 03:19:25
|
On Apr 17, 2006, at 7:56 PM, Davide P. Cervone wrote:

> Sam:
>
> I see that you have recently backported to rel-2-2-dev a number of changes, but I'm a little confused. Now that 2.2 has been released, I don't see why the rel-2-2-dev branch should be changed. Wasn't this supposed to be for the last-minute changes needed for 2.2, with the main HEAD branch for continued development that was not part of 2.2? Since 2.2 is out, shouldn't we just be using HEAD until we get ready for rel-2-3-dev in preparation for the 2.3 release?
>
> Just curious.
>
> Davide

Hi Davide,

I'm planning a 2.2.1 bugfix release, and I'm using rel-2-2-dev to get ready for it.

-sam
|
From: Davide P. C. <dp...@un...> - 2006-04-17 23:56:34
|
Sam:

I see that you have recently backported to rel-2-2-dev a number of changes, but I'm a little confused. Now that 2.2 has been released, I don't see why the rel-2-2-dev branch should be changed. Wasn't this supposed to be for the last-minute changes needed for 2.2, with the main HEAD branch for continued development that was not part of 2.2? Since 2.2 is out, shouldn't we just be using HEAD until we get ready for rel-2-3-dev in preparation for the 2.3 release?

Just curious.

Davide
|
From: Davide P.C. <dp...@un...> - 2006-04-12 14:22:32
|
> I would vote for keeping the overhead low and the processing as simple and straightforward as possible. Since perl uses double precision, errors are quite rare but certainly not so rare that they don't show up.

Double precision, while it sometimes helps, is not enough to prevent it. (My examples of 16-digit numbers are ALREADY double-precision numbers; that's 16 DECIMAL digits, not binary digits.) In the cases where subtractive cancellation is going to occur, more digits often don't help, since you just get more digits that cancel and are still left with just a couple of the least significant digits. After all, if the answer is really SUPPOSED to be zero, and it's ending up as 3.5E-14 because of round-off errors due to holding only a fixed number of digits (and not because that's what the answer REALLY is supposed to be), then if you had twice as many digits, your round-off errors would be occurring twice as far from the decimal place, and you'd still end up with a result with the same precision in the end.

I've seen it happen probably 6 times this year. Certainly not a huge amount, but it's something that could be detected and avoided automatically.

> At least they are reasonably understandable to someone who knows a little numerical analysis.

My experience is that it's not something most people are going to think of.

> It's possible a more complicated system, in addition to possibly slowing things down, might have unanticipated actions which will be harder for people to decipher.

I agree with that in terms of the more aggressive "convert to zero" idea, but in terms of discarding test points based on this, I'm not sure how that would happen. (Unless EVERY test point had subtractive cancellation, and so not enough points could be found; but then the problem developer might want to know that the function involved is numerically unstable, and I have ideas in mind for that anyway.)
Once the checker comes up with the test points, the rest of the test is the same as usual. It's only in choosing the points in the first place that the cancellation is taken into account (in my original suggestion).

> I very much like your idea of providing a numeric stability test. I think the place to put that test would be on the "Edit" problem page (maybe renamed to "Edit and Test").

One thing to keep in mind is that a problem can include many answer checkers, and you may only want to be looking at one at a time. (The current diagnostics include graphs of the functions and other things that take up considerable space, so you might not want to see that for every function if you are trying to analyze only one answer blank.) So this may need to be tied to the answers more tightly, and so might need information that is not available to the editor (which is why I suggested the problem page). We may also want to include a way of selecting what kinds of tests to perform (I have several in mind), whether to include the graphs, and so on. The interface to this needs to be thought out in more detail.

> Since I assume to use your tests, the problem has to load the parser, you could have a bold message that says "Testing methods only available to problems using the parser" if used on an old problem. That way more people might be made aware of the availability and use of the parser.

Since the function checker now calls the Parser behind the scenes unless you request the old answer macros (I don't remember what the default setting is for that), the diagnostics are available for all the old problems as well as new, explicitly parser-based ones. You don't need to load Parser.pl explicitly. Just add diagnostics=>1 to a fun_cmp() call and submit an answer to that answer blank, and you'll see the results.

Davide
|
From: P. G. L. <gl...@um...> - 2006-04-12 13:52:53
|
Hi Davide,

Thanks for confirming what I had suspected was the case in the different results returned by the Parser and non-parser evaluators.

This is a "for your information" follow-up; you may be aware of these ideas already, but they're sufficiently novel to me that I'm forwarding them to the group.

I had an exchange with John Orr at some point about how answer checking is/was done in eGrade(/EDU), as a result of which he referred me to an article he wrote with Fisher and Scott ("Randomized Interval Analysis Checks for the Equivalence of Mathematical Expressions", by TW Fisher, JL Orr and SD Scott; I can find it listed as a preprint on his webpage, <http://www.math.unl.edu/~jorr1/research/abstracts.html>, but it doesn't say there whether it was published).

The upshot of the article as I understand it is that they replace the standard numerical check of equivalence with an interval equivalence: given the standard operations and functions on machine-precision numbers, they define rounded interval versions of these functions and operations that operate on intervals, returning an interval that is guaranteed to include all values of the function when correctly evaluated on the interval. That is, given an interval A = [a1,a2], then f(A) = { f(a) | a in A }, and the rounded interval version of f returns an interval that is guaranteed to include f(A) (plus, of course, possibly additional values). Thus, for each operation or elementary function they have an interval definition that returns an interval in which the exact interval evaluation of the function or operation must lie.

Equivalence of functions is then determined by picking test points and determining the rounded intervals in which the functions evaluated at these points must lie. If at least one of the resulting pairs of intervals is non-overlapping, the functions are judged unequal. This method still has trouble with numerical instability (the intervals generated are large in that case), of course.
Cheers,
Gavin

--
P. Gavin LaRose / Instructional Tech., Math Dept., University of Michigan
gl...@um... / 734.764.6454
http://www.math.lsa.umich.edu/~glarose/
"The tough problem is not in identifying winners; it is in making winners of ordinary people. That, after all, is the overwhelming purpose of education." -K.P.Cross

On Wed, 12 Apr 2006, Davide P.Cervone wrote:

>> I'm afraid I'm going to sully your elegant explanation with a stupid question. I (believe I) understand the issue of cancellation, however poor I may be at predicting it. In the example problem that we discussed before you sent this e-mail, I found that the issue with subtractive cancellation occurred when I used the Parser version of fun_cmp, but not the older version. Is there any reason besides blind luck that the older version didn't also mis-grade the problem?
>
> Yes, there is a reason: the two versions of the checker pick the random points differently (it is a simple matter of the order in which things are done, so the random numbers you get are different in the two orders), so when you switch to the older checker, the chances are you don't get the point near the origin that is the one that has the instability.
>
> Davide
|
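The rounded-interval check Gavin describes can be illustrated with a small sketch. This Python code is not the Fisher/Orr/Scott implementation; it stands in for true directed rounding by widening each endpoint by a few machine epsilons, which still guarantees the exact result is contained in the returned interval:

```python
class Interval:
    """A closed interval [lo, hi] with outward-widened arithmetic.

    Real rounded interval arithmetic uses directed rounding modes; here
    we approximate it by padding each endpoint outward, which preserves
    the containment guarantee the equivalence test relies on.
    """
    EPS = 4 * 2.220446049250313e-16  # a few machine epsilons

    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)

    def _widen(self, lo, hi):
        return Interval(lo - abs(lo) * self.EPS - 1e-300,
                        hi + abs(hi) * self.EPS + 1e-300)

    def __add__(self, other):
        return self._widen(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return self._widen(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return self._widen(min(ps), max(ps))

    def overlaps(self, other):
        return self.lo <= other.hi and other.lo <= self.hi

def equivalent(f, g, points):
    """Judge f and g unequal if their intervals are disjoint at any point."""
    return all(f(Interval(x)).overlaps(g(Interval(x))) for x in points)

# (x+1)*(x-1) and x*x - 1 agree everywhere; x*x + 1 does not.
one = Interval(1.0)
f = lambda x: (x + one) * (x - one)
g = lambda x: x * x - one
h = lambda x: x * x + one
pts = [0.5, 1.7, 3.2]
```

As the article notes, this only ever *disproves* equivalence with certainty; overlapping intervals (including the large ones produced at numerically unstable points) leave the question open.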
From: Davide P.C. <dp...@un...> - 2006-04-12 13:45:14
|
> I'm afraid I'm going to sully your elegant explanation with a stupid question. I (believe I) understand the issue of cancellation, however poor I may be at predicting it. In the example problem that we discussed before you sent this e-mail, I found that the issue with subtractive cancellation occurred when I used the Parser version of fun_cmp, but not the older version. Is there any reason besides blind luck that the older version didn't also mis-grade the problem?

Yes, there is a reason: the two versions of the checker pick the random points differently (it is a simple matter of the order in which things are done, so the random numbers you get are different in the two orders), so when you switch to the older checker, the chances are you don't get the point near the origin that is the one that has the instability.

Davide
|
From: Arnold P. <ap...@ma...> - 2006-04-12 13:27:58
|
At 08:56 AM 4/12/2006, P. Gavin LaRose wrote:

Hi Davide,

I would vote for keeping the overhead low and the processing as simple and straightforward as possible. Since perl uses double precision, errors are quite rare but certainly not so rare that they don't show up. At least they are reasonably understandable to someone who knows a little numerical analysis. It's possible a more complicated system, in addition to possibly slowing things down, might have unanticipated actions which will be harder for people to decipher.

I very much like your idea of providing a numeric stability test. I think the place to put that test would be on the "Edit" problem page (maybe renamed to "Edit and Test"). Profs should have the option of testing a problem with a specific seed (usually errors occur for one "bad" seed) and also the option of testing on a chosen number of random seeds.

Since I assume that to use your tests the problem has to load the parser, you could have a bold message that says "Testing methods only available to problems using the parser" if used on an old problem. That way more people might be made aware of the availability and use of the parser.

Arnie

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
From: P. G. L. <gl...@um...> - 2006-04-12 12:56:27
|
Hi Davide,

I'm afraid I'm going to sully your elegant explanation with a stupid question. I (believe I) understand the issue of cancellation, however poor I may be at predicting it. In the example problem that we discussed before you sent this e-mail, I found that the issue with subtractive cancellation occurred when I used the Parser version of fun_cmp, but not the older version. Is there any reason besides blind luck that the older version didn't also mis-grade the problem?

I'm personally nervous about adding too much overhead to WeBWorK processing, though some of that may be residual from overloading a server back when we were running pre-mod_perl WeBWorK.

Gavin

--
P. Gavin LaRose / Instructional Tech., Math Dept., University of Michigan
gl...@um... / 734.764.6454
http://www.math.lsa.umich.edu/~glarose/
"The tough problem is not in identifying winners; it is in making winners of ordinary people. That, after all, is the overwhelming purpose of education." -K.P.Cross

On Wed, 12 Apr 2006, Davide P. Cervone wrote:

> Folks:
>
> Well, this mailing list has been pretty quiet, so I thought I'd bring up an issue I've been thinking about. On a number of occasions, I've seen WeBWorK reject a correct answer because of numeric instability at one of the test points. This is usually the result of one of the main sources of error in numeric computations: subtractive cancellation.
>
> When two nearly identical numbers are subtracted, most of the digits cancel, leaving something near zero. That is correct and perfectly normal. The problem, however, is that when you are only storing a fixed number of digits, you can go from having 16 digits of precision and cancel out 13 of them, leaving only 3 digits, so you have a massive loss of quality in the results.
> Moreover, these lowest 3 digits are where all the accumulated round-off errors are, so these 3 digits are essentially random; those random digits have been moved up to the most significant digits, and your results are essentially worthless.
>
> For example, if you compute a - b where a = 3.14159265395 and b = 3.14159265213, then you get a - b = .00000000182, and even though your original numbers have 12 significant digits, the result has only three, which is a significant loss of precision. And since these 3 digits are the least significant ones in the original numbers, they are subject to the most error due to accumulated round-off from previous computations. This is how round-off error becomes significant in a computation.
>
> Note also in this example that the result has only three digits, so if this gets compared to a student's answer that is computed in a slightly different way (with a slightly different 3-digit result), the relative error between the two answers will be quite high (it will be in the second or third digit, which is far higher than the usual tolerances, which are in the 4th digit or so). I know that the zeroLevel and zeroLevelTol settings are intended to address this issue, but there is a problem with that approach: the subtractive cancellation can occur at any order of magnitude. For example, with numbers on the order of 1000, when 16 digits are being stored (the usual number for 64-bit floating point reals), if say 12 digits cancel (leaving only 4), the result will be on the order of 1E-8, which is significantly ABOVE the zeroLevelTol of 1E-12, and so will not be detected via that method, and we will go on with only 4 digits of precision to obtain values that are substantially less precise than usual.
>
> The only way out of this currently is to adjust the limits used in the problem to avoid the test points where subtractive cancellation occurs.
> Usually, this means avoiding the points where the function is zero (these points almost always involve subtractive cancellation somehow). But it is not always easy to determine where these will be, especially when it depends on the random parameters, and since so few problem designers think carefully about their domains in any case, this is not a very reliable method even when the zeros are clear.
>
> I have been looking into a method that could help with this sort of problem. Since the Parser handles the various operations (like subtraction) through overloaded operators, it would be possible to check each subtraction to see whether there has been a significant loss of precision (i.e., subtractive cancellation), and track the maximum loss for the evaluation of the function. When selecting test points, if the computation caused a significant loss, the test point could be discarded and another one selected.
>
> A more aggressive alternative would be to have subtraction convert the answer to zero when such a massive loss of precision occurs. This has the advantage of working no matter what the magnitude of the results, but might have unforeseen consequences, and would have to be watched carefully.
>
> The drawback to this approach is that there would be more overhead involved with EVERY computation. Currently, the computations performed when two functions are compared are NOT performed through the overloaded operators (for efficiency), so changing things to allow tracking of cancellation would mean adding a lot of new overhead. It may not be worth the cost.
>
> A less aggressive approach would be to add a numeric stability test to the diagnostic output that is available in the Parser's function checker, so that the problem author could get a report of ranges where subtractive cancellation is a problem, and can adjust the limits to avoid them.
> This was one of the things I had in mind when I wrote the diagnostic display, but hadn't gotten to it. (BTW, we still need a better way to get the diagnostic output into the WW page -- currently it is via warn messages, which fills up the server error log with lots of long messages that are not really errors. But that's a different discussion.) That is, the overhead would only occur at the request of the professor during problem development, not when the student is running the problem, and so would not be such a problem. But it suffers from the flaw that the problem developer needs to do something extra in order to get the diagnostics. (Of course, if we made that easier, say with a button on the professor's screen, and it became a normal part of the development process, that might not be so bad.)
>
> Anyway, I have made some tests, and it IS feasible to track the subtractive cancellation, and so it could be used to avoid the major cause of numeric instability. Any comments?
>
> Davide
|
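Davide's overloaded-subtraction idea can be sketched as follows. This is an illustrative Python sketch only (the Parser's operator overloading is in Perl), and the class name `Tracked` and the 8-digit threshold are arbitrary choices, not anything from the actual codebase:

```python
import math

class Tracked(float):
    """A float whose subtraction records the worst cancellation seen.

    When a - b loses most of its significant digits, the size of the
    result relative to the operands reveals roughly how many digits
    cancelled: loss ~ log10(max(|a|,|b|) / |a - b|).
    """
    max_loss = 0.0  # worst digit loss observed so far (class-wide)

    def __sub__(self, other):
        result = float(self) - float(other)
        scale = max(abs(self), abs(other))
        if scale > 0 and result != 0:  # exact zero is treated as intended
            loss = math.log10(scale / abs(result))
            Tracked.max_loss = max(Tracked.max_loss, loss)
        return Tracked(result)

def unstable_at(f, x, threshold=8):
    """True if evaluating f at x cancels more than `threshold` digits,
    suggesting this test point should be discarded and another chosen."""
    Tracked.max_loss = 0.0
    f(Tracked(x))
    return Tracked.max_loss > threshold

# (1 - cos x)/x^2 is numerically unstable near 0 but fine at x = 1.
f = lambda x: (Tracked(1.0) - Tracked(math.cos(x))) / (x * x)
```

A test-point selector built on this would evaluate the correct answer at each candidate point and reject any point where `unstable_at` fires, which is the less aggressive of the two options Davide describes.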