From: Arnold P. <ap...@ma...> - 2005-02-04 18:49:27
|
Hi,

I'm trying to get some idea of the relative speed of sql_single and gdbm and have been looking at our timing log. This is prompted by some of my students' reaction that WeBWorK is slower this semester than last. Last semester all courses (except MTH 162, which used sql) used gdbm. This semester all courses are using sql_single. It seems gdbm is the fastest, then sql_single, and then sql. Also, sql and sql_single have many more long times compared to gdbm. The details are below, but here is a snapshot:

For gdbm:

68% of the gdbm entries took 0.0 seconds (the log rounds to 1 second)
29% of the gdbm entries took 1.0 seconds
0.01% of the gdbm entries took 10.0 seconds

For sql_single the percentages were:

52%
34%
1.7%

and for sql:

40%
31%
3%

This data came from our timing log, which covers Oct 16, 2004 through Feb 3, 2005 (data before Oct 16 were in a different format).

I'm not sure what we can do with this data, but we should be aware of it. Also, I don't know how BerkeleyDB would compare. Bill Wheeler is using that for both WW1.9 and WW2 at Indiana.

Arnie

Here's the actual data:

There were 702291 total timings for gdbm, 441585 for sql, and 290963 for sql_single.

grep '0.0 sec gdbm' timing.02_03_05 | wc -l gives the first entry, etc.

476652  0.0 sec gdbm
205360  1.0 sec gdbm
 10644  2.0 sec gdbm
  4936  3.0 sec gdbm
  1058  4.0 sec gdbm
  1177  5.0 sec gdbm
  1784  6.0 sec gdbm
   404  7.0 sec gdbm
   153  8.0 sec gdbm
   123  9.0 sec gdbm
    69  10.0 sec gdbm
    58  11.0 sec gdbm
    26  12.0 sec gdbm
    20  13.0 sec gdbm
    23  14.0 sec gdbm
    46  15.0 sec gdbm
    24  16.0 sec gdbm
    14  17.0 sec gdbm
    12  18.0 sec gdbm
     5  19.0 sec gdbm
    14  20.0 sec gdbm

grep '0.0 sec sql_single' timing.02_03_05 | wc -l gives the first entry, etc.

150926  0.0 sec sql_single
100426  1.0 sec sql_single
 22972  2.0 sec sql_single
  7179  3.0 sec sql_single
  3358  4.0 sec sql_single
  2082  5.0 sec sql_single
  1486  6.0 sec sql_single
  1053  7.0 sec sql_single
   840  8.0 sec sql_single
   641  9.0 sec sql_single
   484  10.0 sec sql_single
   374  11.0 sec sql_single
   289  12.0 sec sql_single
   207  13.0 sec sql_single
   190  14.0 sec sql_single
   162  15.0 sec sql_single
   145  16.0 sec sql_single
   129  17.0 sec sql_single
   129  18.0 sec sql_single
    89  19.0 sec sql_single
    94  20.0 sec sql_single

grep '0.0 sec sql' timing.02_03_05 | wc -l gives the first entry, etc.

175395  0.0 sec sql
135452  1.0 sec sql
 35487  2.0 sec sql
 35450  3.0 sec sql
 30331  4.0 sec sql
 12301  5.0 sec sql
  6750  6.0 sec sql
  4583  7.0 sec sql
  3286  8.0 sec sql
  2550  9.0 sec sql
  1657  10.0 sec sql
  1306  11.0 sec sql
   992  12.0 sec sql
   758  13.0 sec sql
   682  14.0 sec sql
   565  15.0 sec sql
   530  16.0 sec sql
   479  17.0 sec sql
   397  18.0 sec sql
   325  19.0 sec sql
   271  20.0 sec sql

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
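[Editorial note: one caveat about the grep counts above: a pattern like '0.0 sec gdbm' also matches lines containing '10.0 sec gdbm' or '20.0 sec gdbm' as a substring, so the low buckets are slightly inflated. A small sketch of a tally that anchors on the full runTime field instead; the helper names are made up, and it assumes the "[runTime = N.0 sec backend]" line format that appears later in this thread.]

```python
import re
from collections import Counter

# Match the whole "runTime = N.0 sec backend" field, so "0.0" never
# matches inside "10.0" or "20.0" the way a bare grep substring does.
ENTRY = re.compile(r"runTime = (\d+\.\d) sec (\w+)")

def tally(lines):
    """Count timing-log entries, keyed by (backend, seconds)."""
    counts = Counter()
    for line in lines:
        m = ENTRY.search(line)
        if m:
            seconds, backend = m.groups()
            counts[(backend, float(seconds))] += 1
    return counts

def percentages(counts, backend):
    """Percentage of a backend's entries in each runtime bucket."""
    total = sum(n for (b, _), n in counts.items() if b == backend)
    return {sec: 100.0 * n / total
            for (b, sec), n in counts.items() if b == backend}
```

Running `percentages(tally(open("timing.02_03_05")), "gdbm")` would reproduce the 68% / 29% style summary above without the substring overlap.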
From: William H. W. <wh...@in...> - 2005-02-04 23:19:59
|
Dear Arnie (et al),

Thanks for your message. I can provide some important information about gdbm, but I can't help with timing comparisons to sql_single and sql, because I'm using BerkeleyDB instead of MySQL.

Last semester, I used gdbm for all of our courses. That was not successful, because gdbm locks the entire database when it needs to write a record. Although that locking strategy was not a problem in WeBWorK1 (because there were only 2 accesses to webwork-database per student submission), it is impractical in WeBWorK2, where webwork-database is used to simulate several tables. At periods of peak demand last semester, the wait time in the queue to access webwork-database would exceed 60 seconds, Apache children would proliferate, RAM would be exhausted, and the server would slow to a crawl as it spent most of its time swapping memory pages.

So this semester, I switched to BerkeleyDB. There has been a big, sometimes painful learning curve, but now things are running smoothly. The execution times are somewhat less than with GDBM. But the crucial improvement is that BerkeleyDB does page locking rather than file locking, so there are no queues to access webwork-database even at peak demand. I've also grown to appreciate and like BerkeleyDB's logging and recovery procedures and its tuning features.

There are some subtleties to using BerkeleyDB in the WeBWorK environment (not least because the documentation for BerkeleyDB.pm has many holes). On several occasions, after making mistakes with the BerkeleyDB utilities, I had to rebuild a few webwork-databases from the transaction.log files. But things are going smoothly now.

Best wishes.

Sincerely,

Bill

--------------------------------------------------------------------------------

On Fri, 4 Feb 2005, Arnold Pizer wrote:

> Date: Fri, 04 Feb 2005 13:50:13 -0500
> From: Arnold Pizer <ap...@ma...>
> To: ga...@ma..., sh...@ma..., jj...@as..., wh...@in..., ope...@li...
> Cc: ap...@ma...
> Subject: relative speed of sql_single and gdbm
>
> [...]
|
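[Editorial note: Bill's point about locking granularity can be sketched abstractly: with whole-file locking every writer queues on a single lock, while page locking only serializes writers that touch the same page. A toy Python illustration; this is not BerkeleyDB's actual API, and the class names and page count are invented for the example.]

```python
import threading

class FileLockedStore:
    """gdbm-style whole-file locking: every write takes one global
    lock, so concurrent writers queue up even for unrelated records."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def put(self, key, value):
        with self._lock:  # all writers serialize here
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

class PageLockedStore:
    """BerkeleyDB-style page locking, simplified: writers contend
    only when their keys hash to the same page."""
    PAGES = 64  # invented for the example

    def __init__(self):
        self._locks = [threading.Lock() for _ in range(self.PAGES)]
        self._pages = [dict() for _ in range(self.PAGES)]

    def put(self, key, value):
        i = hash(key) % self.PAGES
        with self._locks[i]:  # only same-page writers serialize
            self._pages[i][key] = value

    def get(self, key):
        i = hash(key) % self.PAGES
        with self._locks[i]:
            return self._pages[i].get(key)
```

Under the whole-file scheme, a burst of submissions behaves like one long queue (the 60-second waits Bill describes); under the page scheme, writers to different pages proceed in parallel.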
From: John J. <jj...@as...> - 2005-03-10 19:07:29
|
Hi,

In a previous e-mail I had found that the slowdown was due to case sensitivity in the mysql index. The index is not case sensitive, so if you force key elements to be treated as case sensitive, then the index is ignored and the whole thing slows to a crawl with big databases.

Attached is a version of lib/WeBWorK/DB/Schema/SQL.pm which tries to adjust for this. If you are deleting a database record, or putting something into the database, then those calls have to use case-sensitive key elements, so they will remain slow. For select statements, the call can be case insensitive, and then we filter out the records which we really want.

I have tested this, but it would be better if other people tested it too. Unlike changes to other files, an error here can mess up an existing course.

I thought about other ways to handle this. One I mentioned before was to try to change the data type of the key fields so that the indexing might end up case sensitive. I haven't found a way to do that yet.

If the fact that inserting and deleting information from the database is still slow is a problem, then another possibility would be to add a unique numeric key to each table. Then when you want to delete rows from a table, you select (case insensitively), pick out the ones which match your favorite key fields (including case), then use the unique id numbers of those rows in the actual deletion statement. A similar strategy might work for put-type statements.

John
|
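[Editorial note: John's select-then-filter strategy can be sketched with SQLite standing in for MySQL; a COLLATE NOCASE column plays the role of MySQL's case-insensitive index. The table and column names below are only illustrative, not WeBWorK's real schema.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    # COLLATE NOCASE stands in for MySQL's case-insensitive index:
    # comparisons on these columns ignore case and can use the index.
    "CREATE TABLE problem_user ("
    "  user_id TEXT COLLATE NOCASE,"
    "  set_id  TEXT COLLATE NOCASE,"
    "  status  TEXT)"
)
conn.execute("CREATE INDEX pu_idx ON problem_user (user_id, set_id)")
conn.executemany(
    "INSERT INTO problem_user VALUES (?, ?, ?)",
    [("apizer", "set7", "0.5"), ("APizer", "set7", "1.0")],
)

def get_case_sensitive(user_id, set_id):
    # Step 1: case-insensitive SELECT; this one can use the index.
    rows = conn.execute(
        "SELECT user_id, set_id, status FROM problem_user "
        "WHERE user_id = ? AND set_id = ?",
        (user_id, set_id),
    ).fetchall()
    # Step 2: exact-case filtering in the application, which is
    # what the patch's case_check does after each SELECT.
    return [r for r in rows if r[0] == user_id and r[1] == set_id]
```

The case-insensitive SELECT returns both "apizer" and "APizer" rows; the application-side filter then keeps only the exact-case match, so the database never has to do an index-defeating case-sensitive comparison.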
From: Arnold P. <ap...@ma...> - 2005-03-16 19:48:39
|
At 02:06 PM 3/10/2005, John Jones wrote:

Hi John,

Thanks very much for the patch. I put it in and tested it a little bit. I haven't seen any problems, but I also don't see a great deal of improvement in speed. I just tested it on one problem, inserting correct and incorrect answers. With the original SQL.pm it took 8 seconds (I only did one attempt) and with the new SQL.pm between 2 and 8 seconds (details below). The load avg. on the server was about .35 and mysql cpu usage was about 30 or 40%. I guess we will have to wait until my next assignment is due to see if this makes a difference, but the times below are still way too long.

Arnie

using original SQL.pm

[Wed Mar 16 14:03:12 2005] 42868 1110999792 - [/webwork2/mth162/7/1/] [runTime = 8.0 sec sql_single]

using new SQL.pm

[Wed Mar 16 14:27:14 2005] 53844 1111001234 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single]
[Wed Mar 16 14:27:44 2005] 54209 1111001264 - [/webwork2/mth162/7/1/] [runTime = 7.0 sec sql_single]
[Wed Mar 16 14:28:00 2005] 54209 1111001280 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 14:28:18 2005] 53850 1111001298 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]
[Wed Mar 16 14:28:27 2005] 54434 1111001307 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
[Wed Mar 16 14:28:37 2005] 54434 1111001317 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
[Wed Mar 16 14:28:45 2005] 54434 1111001325 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
[Wed Mar 16 14:30:43 2005] 53844 1111001443 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 14:30:56 2005] 53844 1111001456 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 14:31:05 2005] 53844 1111001465 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 14:31:22 2005] 53844 1111001482 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 14:34:10 2005] 53850 1111001650 - [/webwork2/mth162/7/1/] [runTime = 8.0 sec sql_single]
[Wed Mar 16 14:35:44 2005] 54964 1111001744 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single]
[Wed Mar 16 14:44:09 2005] 55114 1111002249 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]

> Hi,
>
> In a previous e-mail I had found that the slowdown was due to
> case-sensitivity in the mysql index. [...]
> John

> ################################################################################
> # WeBWorK Online Homework Delivery System
> # Copyright (c) 2000-2003 The WeBWorK Project, http://openwebwork.sf.net/
> # $CVSHeader: webwork-modperl/lib/WeBWorK/DB/Schema/SQL.pm,v 1.24 2004/10/22 22:59:52 sh002i Exp $
> #
> # This program is free software; you can redistribute it and/or modify it under
> # the terms of either: (a) the GNU General Public License as published by the
> # Free Software Foundation; either version 2, or (at your option) any later
> # version, or (b) the "Artistic License" which comes with this package.
> #
> # This program is distributed in the hope that it will be useful, but WITHOUT
> # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
> # FOR A PARTICULAR PURPOSE.  See either the GNU General Public License or the
> # Artistic License for more details.
> ################################################################################
>
> package WeBWorK::DB::Schema::SQL;
> use base qw(WeBWorK::DB::Schema);
>
> =head1 NAME
>
> WeBWorK::DB::Schema::SQL - support SQL access to all tables.
>
> =cut
>
> use strict;
> use warnings;
> use Carp qw(croak);
> use Date::Format;
>
> use constant TABLES => qw(*);
> use constant STYLE  => "dbi";
>
> =head1 SUPPORTED PARAMS
>
> This schema pays attention to the following items in the C<params> entry.
>
> =over
>
> =item tableOverride
>
> Alternate name for this table, to satisfy SQL naming requirements.
>
> =item fieldOverride
>
> A reference to a hash mapping field names to alternate names, to satisfy SQL
> naming requirements.
>
> =back
>
> =cut
>
> ################################################################################
> # constructor for SQL-specific behavior
> ################################################################################
>
> sub new {
> 	my ($proto, $db, $driver, $table, $record, $params) = @_;
> 	my $self = $proto->SUPER::new($db, $driver, $table, $record, $params);
>
> 	## override table name if tableOverride param is given
> 	#$self->{table} = $params->{tableOverride} if $params->{tableOverride};
>
> 	# add sqlTable field
> 	$self->{sqlTable} = $params->{tableOverride} || $self->{table};
>
> 	return $self;
> }
>
> ################################################################################
> # table access functions
> ################################################################################
>
> sub count {
> 	my ($self, @keyparts) = @_;
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @keynames = $self->sqlKeynames();
>
> 	croak "too many keyparts for table $table (need at most: @keynames)"
> 		if @keyparts > @keynames;
>
> 	my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
> 	my $stmt = "SELECT * FROM `$sqlTable` $where";
> 	$self->debug("SQL-count: $stmt\n");
>
> 	$self->{driver}->connect("ro");
>
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	$sth->execute(@where_args);
> 	my ($result) = $sth->fetchall_arrayref;
>
> 	$self->{driver}->disconnect();
> 	my @arr_res = case_check($result, @keyparts);
>
> 	return scalar(@arr_res);
> }
>
> sub list($@) {
> 	my ($self, @keyparts) = @_;
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @keynames = $self->sqlKeynames();
> 	my $keynames = join(", ", @keynames);
>
> 	croak "too many keyparts for table $table (need at most: @keynames)"
> 		if @keyparts > @keynames;
>
> 	my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
> 	my $stmt = "SELECT $keynames FROM `$sqlTable` $where";
> 	$self->debug("SQL-list: $stmt\n");
>
> 	$self->{driver}->connect("ro");
>
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	$sth->execute(@where_args);
> 	my $result = $sth->fetchall_arrayref;
>
> 	$self->{driver}->disconnect();
>
> 	croak "failed to SELECT: $DBI::errstr" unless defined $result;
> 	my @arr_res = case_check($result, @keyparts);
> 	return @arr_res;
> }
>
> sub exists($@) {
> 	my ($self, @keyparts) = @_;
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @keynames = $self->sqlKeynames();
>
> 	croak "wrong number of keyparts for table $table (needs: @keynames)"
> 		unless @keyparts == @keynames;
>
> 	my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
> 	my $stmt = "SELECT * FROM `$sqlTable` $where";
> 	$self->debug("SQL-exists: $stmt\n");
>
> 	$self->{driver}->connect("ro");
>
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	$sth->execute(@where_args);
> 	my ($result) = $sth->fetchall_arrayref;
>
> 	$self->{driver}->disconnect();
>
> 	croak "failed to SELECT: $DBI::errstr" unless defined $result;
>
> 	my @arr_res = case_check($result, @keyparts);
> 	return scalar(@arr_res) > 0;
> }
>
> sub add($$) {
> 	my ($self, $Record) = @_;
>
> 	my @realKeynames = $self->{record}->KEYFIELDS();
> 	my @keyparts = map { $Record->$_() } @realKeynames;
> 	croak "(" . join(", ", @keyparts) . "): exists (use put)"
> 		if $self->exists(@keyparts);
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @fieldnames = $self->sqlFieldnames();
> 	my $fieldnames = join(", ", @fieldnames);
> 	my $marks = join(", ", map { "?" } @fieldnames);
>
> 	my @realFieldnames = $self->{record}->FIELDS();
> 	my @fieldvalues = map { $Record->$_() } @realFieldnames;
>
> 	my $stmt = "INSERT INTO `$sqlTable` ($fieldnames) VALUES ($marks)";
> 	$self->debug("SQL-add: $stmt\n");
>
> 	$self->{driver}->connect("rw");
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	my $result = $sth->execute(@fieldvalues);
> 	$self->{driver}->disconnect();
>
> 	unless (defined $result) {
> 		my @realKeynames = $self->{record}->KEYFIELDS();
> 		my @keyvalues = map { $Record->$_() } @realKeynames;
> 		croak "(" . join(", ", @keyvalues) . "): failed to INSERT: $DBI::errstr";
> 	}
>
> 	return 1;
> }
>
> sub get($@) {
> 	my ($self, @keyparts) = @_;
>
> 	return ($self->gets(\@keyparts))[0];
> }
>
> sub gets($@) {
> 	my ($self, @keypartsRefList) = @_;
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @keynames = $self->sqlKeynames();
>
> 	my @records;
> 	$self->{driver}->connect("ro");
> 	foreach my $keypartsRef (@keypartsRefList) {
> 		my @keyparts = @$keypartsRef;
>
> 		croak "wrong number of keyparts for table $table (needs: @keynames)"
> 			unless @keyparts == @keynames;
>
> 		my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
> 		my $stmt = "SELECT * FROM `$sqlTable` $where";
> 		$self->debug("SQL-gets: $stmt\n");
>
> 		my $sth = $self->{driver}->dbi()->prepare($stmt);
> 		$sth->execute(@where_args);
> 		my $result = $sth->fetchall_arrayref;
>
> 		if (defined $result) {
> 			my @record = case_check($result, @keyparts);
> 			if (@record) {
> 				@record = @{$record[0]}; # adjust for fetchall
> 				my $Record = $self->{record}->new();
> 				my @realFieldnames = $self->{record}->FIELDS();
> 				foreach (@realFieldnames) {
> 					my $value = shift @record;
> 					$value = "" unless defined $value; # promote undef to ""
> 					$Record->$_($value);
> 				}
> 				push @records, $Record;
> 			} else {
> 				push @records, undef;
> 			}
> 		} else {
> 			push @records, undef;
> 		}
> 	}
> 	$self->{driver}->disconnect();
>
> 	return @records;
> }
>
> # getAll($userID, $setID)
> #
> # Returns all problems in a given set. Only supported for the problem and
> # problem_user tables.
>
> sub getAll {
> 	my ($self, @keyparts) = @_;
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
>
> 	croak "getAll: only supported for the problem_user table"
> 		unless $table eq "problem" or $table eq "problem_user";
>
> 	my @keynames = $self->sqlKeynames();
> 	pop @keynames; # get rid of problem_id
>
> 	my ($where, @where_args) = $self->makeWhereClause(0, @keyparts);
>
> 	my $stmt = "SELECT * FROM `$sqlTable` $where";
> 	$self->debug("SQL-getAll: $stmt\n");
>
> 	my @records;
>
> 	$self->{driver}->connect("ro");
>
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	$sth->execute(@where_args);
> 	my $results = $sth->fetchall_arrayref;
> 	my @arr_res = case_check($results, @keyparts);
>
> 	foreach my $result (@arr_res) {
> 		if (defined $result) {
> 			my @record = @$result;
> 			my $Record = $self->{record}->new();
> 			my @realFieldnames = $self->{record}->FIELDS();
> 			foreach (@realFieldnames) {
> 				my $value = shift @record;
> 				$value = "" unless defined $value; # promote undef to ""
> 				$Record->$_($value);
> 			}
> 			push @records, $Record;
> 		}
> 	}
> 	$self->{driver}->disconnect();
>
> 	return @records;
> }
>
> sub put($$) {
> 	my ($self, $Record) = @_;
>
> 	my @realKeynames = $self->{record}->KEYFIELDS();
> 	my @keyparts = map { $Record->$_() } @realKeynames;
> 	croak "(" . join(", ", @keyparts) . "): not found (use add)"
> 		unless $self->exists(@keyparts);
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @fieldnames = $self->sqlFieldnames();
> 	my $fieldnames = join(", ", @fieldnames);
> 	my $marks = join(", ", map { "?" } @fieldnames);
>
> 	my @realFieldnames = $self->{record}->FIELDS();
> 	my @fieldvalues = map { $Record->$_() } @realFieldnames;
>
> 	my ($where, @where_args) = $self->makeWhereClause(1, map { $Record->$_() } @realKeynames);
>
> 	my $stmt = "UPDATE `$sqlTable` SET";
> 	while (@fieldnames) {
> 		$stmt .= " " . (shift @fieldnames) . "=?";
> 		$stmt .= "," if @fieldnames;
> 	}
> 	$stmt .= " $where";
> 	$self->debug("SQL-put: $stmt\n");
>
> 	$self->{driver}->connect("rw");
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	my $result = $sth->execute(@fieldvalues, @where_args);
> 	$self->{driver}->disconnect();
>
> 	unless (defined $result) {
> 		croak "(" . join(", ", @keyparts) . "): failed to UPDATE: $DBI::errstr";
> 	}
>
> 	return 1;
> }
>
> sub delete($@) {
> 	my ($self, @keyparts) = @_;
>
> 	return 0 unless $self->exists(@keyparts);
>
> 	my $table = $self->{table};
> 	my $sqlTable = $self->{sqlTable};
> 	my @keynames = $self->sqlKeynames();
>
> 	croak "wrong number of keyparts for table $table (needs: @keynames)"
> 		unless @keyparts == @keynames;
>
> 	my ($where, @where_args) = $self->makeWhereClause(1, @keyparts);
>
> 	my $stmt = "DELETE FROM `$sqlTable` $where";
> 	$self->debug("SQL-delete: $stmt\n");
>
> 	$self->{driver}->connect("rw");
>
> 	my $sth = $self->{driver}->dbi()->prepare($stmt);
> 	my $result = $sth->execute(@where_args);
>
> 	$self->{driver}->disconnect();
> 	croak "failed to DELETE: $DBI::errstr" unless defined $result;
>
> 	return $result;
> }
>
> ################################################################################
> # utility functions
> ################################################################################
>
> sub makeWhereClause($@) {
> 	my ($self, $use_binary, @keyparts) = @_;
> 	my $binary_string = "";
> 	$binary_string = "BINARY" if ($use_binary);
> 	my @keynames = $self->sqlKeynames();
>
> 	my $where = "";
> 	my @used_keyparts;
>
> 	my $first = 1;
> 	while (@keyparts) {
> 		my $name = shift @keynames;
> 		my $part = shift @keyparts;
>
> 		next unless defined $part;
>
> 		$where .= " AND" unless $first;
> 		$where .= " $binary_string $name=?";
> 		push @used_keyparts, $part;
>
> 		$first = 0;
> 	}
>
> 	my $clause = $where ? "WHERE$where" : "";
>
> 	return ($clause, @used_keyparts);
> }
>
> sub sqlKeynames($) {
> 	my ($self) = @_;
> 	my @keynames = $self->{record}->KEYFIELDS();
> 	return map { "`$_`" }
> 		map { $self->{params}->{fieldOverride}->{$_} || $_ } @keynames;
> }
>
> sub sqlFieldnames($) {
> 	my ($self) = @_;
> 	my @keynames = $self->{record}->FIELDS();
> 	return map { "`$_`" }
> 		map { $self->{params}->{fieldOverride}->{$_} || $_ } @keynames;
> }
>
> sub debug($@) {
> 	my ($self, @string) = @_;
>
> 	if ($self->{params}->{debug}) {
> 		warn @string;
> 	}
> }
>
> # Temporary debugging function
>
> sub see_check {
> 	my $what = shift;
> 	my $msg = shift;
> 	local *LOG;
> 	open(LOG, ">>", "/tmp/check-log");
> 	print LOG "$msg\n";
> 	if ($what == 0) {
> 		close(LOG);
> 		return ();
> 	}
> 	my $result = shift;
> 	my @keyparts = @_;
> 	my ($ind, @arr_res, $jj);
> 	@arr_res = @$result;
> 	print LOG "\n";
> 	for $ind (0..(scalar(@keyparts)-1)) {
> 		print LOG "$keyparts[$ind] <-> ";
> 		for $jj (@arr_res) {
> 			print LOG " " . $jj->[$ind];
> 		}
> 		print LOG "\n";
> 	}
> 	close(LOG);
> }
>
> sub case_check {
> 	my $result = shift;
> 	my @keyparts = @_;
> 	# see_check(1, "In case check:", $result, @keyparts);
> 	my ($ind, @arr_res);
> 	@arr_res = @$result;
> 	for $ind (0..(scalar(@keyparts)-1)) {
> 		if (defined($keyparts[$ind])) {
> 			@arr_res = grep { $keyparts[$ind] eq $_->[$ind] } @arr_res;
> 		}
> 	}
> 	# see_check(1, "Leaving case check:", [@arr_res], @keyparts);
> 	return (@arr_res);
> }
>
> 1;

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
From: John J. <jj...@as...> - 2005-03-16 20:46:32
|
Arnold Pizer wrote:

> At 02:57 PM 3/16/2005, John Jones wrote:
>
> I don't know if I can tell anything from the above or not. It seems
> submitting an answer does slow stuff down somewhat. How many DB calls
> are involved in that process?

I don't know offhand. My basic approach to figuring this out is either to throw extra information into the timing log, or to write information to a special log file. Including which webwork function in SQL is being called, and the sql statement it uses, should give an idea of how much and what kind of activity is involved in a single submission.

John
|
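[Editorial note: the instrumentation John describes, logging which DB-layer function runs and with what SQL statement, could look something like this decorator sketch. The names are hypothetical and the real change would go into the Perl DB layer, not Python.]

```python
import functools
import time

# One (function name, sql statement, elapsed seconds) tuple per call,
# so a single submission's database activity can be audited afterward.
call_log = []

def timed_db_call(func):
    """Wrap a DB-layer function so each call is recorded with its
    SQL statement and wall-clock duration."""
    @functools.wraps(func)
    def wrapper(stmt, *args, **kwargs):
        start = time.perf_counter()
        try:
            return func(stmt, *args, **kwargs)
        finally:
            call_log.append(
                (func.__name__, stmt, time.perf_counter() - start)
            )
    return wrapper

@timed_db_call
def select(stmt, *params):
    # stand-in for the real DBI prepare/execute round trip
    return []
```

After one student submission, dumping `call_log` would show exactly how many SELECT/UPDATE/INSERT round trips that submission triggered and which ones dominate the runtime.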
From: Arnold P. <ap...@ma...> - 2005-03-17 14:23:33
|
At 03:45 PM 3/16/2005, John Jones wrote:

Hi John,

I was being stupid. I forgot to restart the web server after changing the SQL.pm file. Doing that gives the times below. There seems to be a definite improvement using your new SQL.pm, but not a dramatic one. When I did this there was a very light load on the server. Last semester, using gdbm, the vast majority of operations had runTime = 0.0 sec (and most operations are done during heavy-use periods). Also, Bill Wheeler reports that he switched to BerkeleyDB from gdbm because with his loads gdbm was too slow (because write locks applied to the whole database). I assume in his case, sql_single would be unusable as is. This is one area that needs a lot of optimization.

Hopefully the current fix will be sufficient to provide a better experience for my 162 students. We'll see Sunday night when our next assignment is due.

Arnie

[Thu Mar 17 09:00:28 2005] 38570 1111068028 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:00:35 2005] 38566 1111068035 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:00:39 2005] 38566 1111068039 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:00:43 2005] 38566 1111068043 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:00:46 2005] 38570 1111068046 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:00:49 2005] 38566 1111068049 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:03:02 2005] 38564 1111068182 - [/webwork2/mth162/instructor/progress/] [runTime = 1.0 sec sql_single]
[Thu Mar 17 09:03:18 2005] 38565 1111068198 - [/webwork2/mth162/instructor/progress/set/7/] [runTime = 6.0 sec sql_single]
[Thu Mar 17 09:05:11 2005] 38571 1111068311 - [/webwork2/mth162/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 09:05:13 2005] 38577 1111068313 - [/webwork2/mth162/7/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 09:05:14 2005] 38571 1111068314 - [/webwork2/mth162/7/1/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 09:05:20 2005] 38571 1111068320 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
[Thu Mar 17 09:05:24 2005] 38571 1111068324 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:05:27 2005] 38571 1111068327 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 09:05:31 2005] 38571 1111068331 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
[Thu Mar 17 09:05:34 2005] 38571 1111068334 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]

> Arnold Pizer wrote:
>
> > At 02:57 PM 3/16/2005, John Jones wrote:
> >
> > I don't know if I can tell anything from the above or not. It seems
> > submitting an answer does slow stuff down somewhat. How many DB calls
> > are involved in that process?
>
> I don't know offhand. My basic approach to figuring this out is either to
> throw extra information into the timing log, or to write information to a
> special log file. Including which webwork function in SQL is being called,
> and the sql statement it uses, should give an idea of how much and what
> kind of activity is involved in a single submission.
>
> John
>
> _______________________________________________
> OpenWeBWorK-Devel mailing list
> Ope...@li...
> https://lists.sourceforge.net/lists/listinfo/openwebwork-devel

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
From: Arnold P. <ap...@ma...> - 2005-03-16 20:30:53
|
At 02:57 PM 3/16/2005, John Jones wrote:

Hi John,

I made the change using the original version of SQL.pm, but removed the word binary from it. Here are the results:

[Wed Mar 16 15:10:29 2005] 57113 1111003829 - [/webwork2/mth162/7/1/] [runTime = 0.0 sec sql_single]
[Wed Mar 16 15:11:17 2005] 53856 1111003877 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single]
[Wed Mar 16 15:11:27 2005] 53856 1111003887 - [/webwork2/mth162/7/1/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 15:11:34 2005] 53856 1111003894 - [/webwork2/mth162/7/1/] [runTime = 5.0 sec sql_single]

Not much difference. The 0.0 was viewing the problem for the first time and the next three were submitting answers. I also decided to try viewing other problems for the first time, with the following results:

[Wed Mar 16 15:17:10 2005] 54980 1111004230 - [/webwork2/mth162/7/2/] [runTime = 0.0 sec sql_single]
[Wed Mar 16 15:17:14 2005] 53856 1111004234 - [/webwork2/mth162/7/3/] [runTime = 2.0 sec sql_single]
[Wed Mar 16 15:17:15 2005] 54980 1111004235 - [/webwork2/mth162/7/4/] [runTime = 0.0 sec sql_single]
[Wed Mar 16 15:17:18 2005] 53856 1111004238 - [/webwork2/mth162/7/5/] [runTime = 2.0 sec sql_single]
[Wed Mar 16 15:17:22 2005] 53856 1111004242 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]

Then I just submitted blank answers, with the following results:

[Wed Mar 16 15:17:42 2005] 55114 1111004262 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
[Wed Mar 16 15:17:46 2005] 55114 1111004266 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Wed Mar 16 15:17:50 2005] 55114 1111004270 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
[Wed Mar 16 15:17:53 2005] 55114 1111004273 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]

and now with simple wrong answers:

[Wed Mar 16 15:26:23 2005] 56205 1111004783 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Wed Mar 16 15:26:30 2005] 53856 1111004790 - [/webwork2/mth162/7/6/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 15:26:37 2005] 53856 1111004797 - [/webwork2/mth162/7/6/] [runTime = 6.0 sec sql_single]
[Wed Mar 16 15:26:43 2005] 56205 1111004803 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
[Wed Mar 16 15:26:47 2005] 56205 1111004807 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Wed Mar 16 15:26:51 2005] 54980 1111004811 - [/webwork2/mth162/7/] [runTime = 1.0 sec sql_single]

I don't know if I can tell anything from the above or not. It seems submitting an answer does slow stuff down somewhat. How many DB calls are involved in that process?

Arnie

> Arnold Pizer wrote:
>
> > At 02:06 PM 3/10/2005, John Jones wrote:
> > [...]
>
> Hi Arnie,
>
> One thing you can do for comparison is use the old version of SQL.pm, but
> remove the word binary from it (it appears in one place). This should be
> ok provided you do not have 2 set names or 2 user id's which differ only
> in case. This will show the maximum speed gain you can get from working
> around this indexing problem. When assigning a set to a class, changing
> this was something like a factor of 10 in speed between the original and
> the original with "binary" removed.
>
> John

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
From: John J. <jj...@as...> - 2005-03-17 15:47:07
|
Arnold Pizer wrote:

> At 03:45 PM 3/16/2005, John Jones wrote:
>
> Hi John,
>
> I was being stupid. I forgot to restart the web server after
> changing the SQL.pm file. Doing that gives the times below. There
> seems to be a definite improvement using your new SQL.pm, but not a
> dramatic one. When I did this there was a very light load on the
> server. Last semester using gdbm, the vast majority of operations had
> runTime = 0.0 sec (and most operations are done during heavy-use
> periods). Also, Bill Wheeler reports that he switched to BerkeleyDB
> from gdbm because with his loads gdbm was too slow (because write
> locks applied to the whole database). I assume in his case
> sql_single would be unusable as is. This is one area that needs a lot
> of optimization.
>
> Hopefully the current fix will be sufficient to provide a better
> experience for my 162 students. We'll see Sunday night when our next
> assignment is due.

What about timing with the old SQL.pm with binary removed? If you might have a case conflict (database keys which differ only in case), then I wouldn't try it until after the semester is over. It would tell us if there are gains to be had by making further changes to SQL.pm.

John

> [Thu Mar 17 09:00:28 2005] 38570 1111068028 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:00:35 2005] 38566 1111068035 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:00:39 2005] 38566 1111068039 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:00:43 2005] 38566 1111068043 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:00:46 2005] 38570 1111068046 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:00:49 2005] 38566 1111068049 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:03:02 2005] 38564 1111068182 - [/webwork2/mth162/instructor/progress/] [runTime = 1.0 sec sql_single]
> [Thu Mar 17 09:03:18 2005] 38565 1111068198 - [/webwork2/mth162/instructor/progress/set/7/] [runTime = 6.0 sec sql_single]
> [Thu Mar 17 09:05:11 2005] 38571 1111068311 - [/webwork2/mth162/] [runTime = 0.0 sec sql_single]
> [Thu Mar 17 09:05:13 2005] 38577 1111068313 - [/webwork2/mth162/7/] [runTime = 0.0 sec sql_single]
> [Thu Mar 17 09:05:14 2005] 38571 1111068314 - [/webwork2/mth162/7/1/] [runTime = 0.0 sec sql_single]
> [Thu Mar 17 09:05:20 2005] 38571 1111068320 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
> [Thu Mar 17 09:05:24 2005] 38571 1111068324 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:05:27 2005] 38571 1111068327 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]
> [Thu Mar 17 09:05:31 2005] 38571 1111068331 - [/webwork2/mth162/7/1/] [runTime = 3.0 sec sql_single]
> [Thu Mar 17 09:05:34 2005] 38571 1111068334 - [/webwork2/mth162/7/1/] [runTime = 2.0 sec sql_single]
>
>> Arnold Pizer wrote:
>>
>>> At 02:57 PM 3/16/2005, John Jones wrote:
>>>
>>> I don't know if I can tell anything from the above or not. It seems
>>> submitting an answer does slow stuff down somewhat. How many DB
>>> calls are involved in that process?
>>
>> I don't know offhand. My basic approach to figuring this out is
>> either to throw extra information into the timing log, or to write
>> information to a special log file. Including which webwork function
>> in SQL is being called, and the sql statement it uses, should give an
>> idea of how much and what kind of activity is involved in a single
>> submission.
>>
>> John
>>
>> -------------------------------------------------------
>> SF email is sponsored by - The IT Product Guide
>> Read honest & candid reviews on hundreds of IT Products from real users.
>> Discover which products truly live up to the hype. Start reading now.
>> http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
>> _______________________________________________
>> OpenWeBWorK-Devel mailing list
>> Ope...@li...
>> https://lists.sourceforge.net/lists/listinfo/openwebwork-devel
>
> Prof. Arnold K. Pizer
> Dept. of Mathematics
> University of Rochester
> Rochester, NY 14627
> (585) 275-7767
|
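[Editor's sketch: John's idea of mining the timing log can be illustrated quickly. The line format below is taken from the log samples in this thread; the script and its function names are a hypothetical illustration, not part of WeBWorK.]

```python
import re
from collections import Counter

# Matches the runTime field of timing-log lines quoted in this thread, e.g.
# [Thu Mar 17 09:00:28 2005] 38570 1111068028 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
LINE_RE = re.compile(r"\[runTime = (\d+\.\d) sec (\w+)\]")

def histogram(lines):
    """Tally (driver, seconds) pairs, like the grep '0.0 sec gdbm' counts."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            seconds, driver = m.groups()
            counts[(driver, float(seconds))] += 1
    return counts

sample = [
    "[Thu Mar 17 09:00:28 2005] 38570 1111068028 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]",
    "[Thu Mar 17 09:05:11 2005] 38571 1111068311 - [/webwork2/mth162/] [runTime = 0.0 sec sql_single]",
]
print(histogram(sample))
```

Run over the whole log, the resulting counts give the per-driver distribution directly, without one grep per bucket.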
From: Arnold P. <ap...@ma...> - 2005-03-17 16:29:53
|
At 10:47 AM 3/17/2005, John Jones wrote:

>What about timing with old SQL.pm with binary removed? If you might have
>a case conflict (database keys which differ only in case), then I wouldn't
>try it until after the semester is over. It would tell us if there are
>gains to be had by making further changes to SQL.pm.
>
>John

With binary removed there seems to be a dramatic speed-up.

With your new version of SQL.pm:

[Thu Mar 17 11:16:24 2005] 38563 1111076184 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 11:16:28 2005] 38563 1111076188 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
[Thu Mar 17 11:16:31 2005] 38563 1111076191 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 11:16:35 2005] 38565 1111076195 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
[Thu Mar 17 11:16:38 2005] 38565 1111076198 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
[Thu Mar 17 11:16:41 2005] 38565 1111076201 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
[Thu Mar 17 11:17:05 2005] 38564 1111076225 - [/webwork2/mth162/8/8/] [runTime = 3.0 sec sql_single]

With the old version of SQL.pm, but with binary removed:

[Thu Mar 17 11:17:07 2005] 38564 1111076227 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:09 2005] 38564 1111076229 - [/webwork2/mth162/8/11/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:42 2005] 48562 1111076262 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single]
[Thu Mar 17 11:17:44 2005] 48564 1111076264 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single]
[Thu Mar 17 11:17:44 2005] 48562 1111076264 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:45 2005] 48562 1111076265 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:46 2005] 48564 1111076266 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:47 2005] 48564 1111076267 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:48 2005] 48562 1111076268 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:49 2005] 48575 1111076269 - [/webwork2/mth162/8/11/] [runTime = 1.0 sec sql_single]
[Thu Mar 17 11:17:49 2005] 48564 1111076269 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
[Thu Mar 17 11:17:50 2005] 48564 1111076270 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single]
[Thu Mar 17 11:17:51 2005] 48562 1111076271 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]

If using case-insensitive keys does not mess up how things are read out (e.g. you want set "Exam1" to be in that form, not e.g. "EXAM1" or "exam1"), then it should be easy to enforce that there are no conflicts when user id's, set names, etc. are created.

Since I cannot imagine that we have any database keys which differ only in case, I'll try this out now.

Thanks.

Arnie
|
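[Editor's sketch: the creation-time check Arnie describes is simple to state in code. This is a hypothetical illustration, not WeBWorK's actual admin code; the function and variable names are invented.]

```python
def conflicts_ignoring_case(new_key, existing_keys):
    """Return the existing key that new_key collides with when case is
    ignored, or None if there is no collision. This mirrors what MySQL's
    default (non-binary) string comparison would reject on insert."""
    folded = {k.lower(): k for k in existing_keys}
    return folded.get(new_key.lower())

# A set named "Exam1" blocks creating "exam1" or "EXAM1", but not "Exam2":
print(conflicts_ignoring_case("exam1", ["Exam1", "hw2"]))  # Exam1
print(conflicts_ignoring_case("Exam2", ["Exam1", "hw2"]))  # None
```

Running this check when a user id or set name is created would keep the database free of keys that differ only in case, which is the precondition for removing "binary" safely.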
From: Arnold P. <ap...@ma...> - 2005-03-21 12:45:46
|
At 11:30 AM 3/17/2005, Arnold Pizer wrote:

The small test I did showing a dramatic speed-up with John's fix of removing "binary" seems to be borne out by last night's results. Here's a small snippet. Most times seem to be 0 or 1 second, which is what we saw last semester with gdbm.

[Sun Mar 20 21:49:41 2005] 38879 1111373381 - [/webwork2/mth162/8/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:47 2005] 39785 1111373387 - [/webwork2/mth162/8/5/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:48 2005] 39166 1111373388 - [/webwork2/mth162/8/1/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:49 2005] 39501 1111373389 - [/webwork2/mth165/9/4/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:50 2005] 40245 1111373390 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:51 2005] 40559 1111373391 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:52 2005] 40245 1111373392 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:54 2005] 40961 1111373394 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:57 2005] 38171 1111373397 - [/webwork2/mth161/7/12/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:49:58 2005] 38879 1111373398 - [/webwork2/mth162/8/10/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:00 2005] 38171 1111373400 - [/webwork2/mth161/logout/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:01 2005] 41128 1111373401 - [/webwork2/mth162/8/10/] [runTime = 1.0 sec sql_single]
[Sun Mar 20 21:50:01 2005] 39785 1111373401 - [/webwork2/mth162/8/8/] [runTime = 1.0 sec sql_single]
[Sun Mar 20 21:50:05 2005] 38590 1111373405 - [/webwork2/mth162/8/5/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:07 2005] 39178 1111373407 - [/webwork2/mth162/8/3/] [runTime = 1.0 sec sql_single]
[Sun Mar 20 21:50:08 2005] 39871 1111373408 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:09 2005] 39945 1111373409 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:10 2005] 39871 1111373410 - [/webwork2/mth162/8/10/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:13 2005] 40559 1111373413 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:13 2005] 39871 1111373413 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:14 2005] 41038 1111373414 - [/webwork2/mth162/8/7/] [runTime = 1.0 sec sql_single]
[Sun Mar 20 21:50:14 2005] 38171 1111373414 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:15 2005] 41038 1111373415 - [/webwork2/mth162/8/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:15 2005] 41129 1111373415 - [/webwork2/mth162/8/10/] [runTime = 1.0 sec sql_single]
[Sun Mar 20 21:50:16 2005] 40961 1111373416 - [/webwork2/mth162/8/5/] [runTime = 1.0 sec sql_single]
[Sun Mar 20 21:50:16 2005] 39166 1111373416 - [/webwork2/mth162/8/5/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:18 2005] 40559 1111373418 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:18 2005] 39785 1111373418 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:20 2005] 41038 1111373420 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:20 2005] 39785 1111373420 - [/webwork2/mth165/9/1/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:21 2005] 40559 1111373421 - [/webwork2/mth162/8/7/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:21 2005] 39501 1111373421 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:22 2005] 39945 1111373422 - [/webwork2/mth165/logout/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:22 2005] 39501 1111373422 - [/webwork2/mth162/8/10/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:22 2005] 40559 1111373422 - [/webwork2/mth162/8/8/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:24 2005] 38171 1111373424 - [/webwork2/mth162/8/6/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:24 2005] 40559 1111373424 - [/webwork2/mth162/8/9/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:25 2005] 39785 1111373425 - [/webwork2/mth165/9/1/] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:27 2005] 41128 1111373427 - [/webwork2/mth162] [runTime = 0.0 sec sql_single]
[Sun Mar 20 21:50:27 2005] 38879 1111373427 - [/webwork2/mth162/8/2/] [runTime = 0.0 sec sql_single]

Arnie

>At 10:47 AM 3/17/2005, John Jones wrote:
>
>>What about timing with old SQL.pm with binary removed? If you might have
>>a case conflict (database keys which differ only in case), then I
>>wouldn't try it until after the semester is over. It would tell us if
>>there are gains to be had by making further changes to SQL.pm.
>>
>>John
>
>With binary removed there seems to be a dramatic speed-up.
>
>With your new version of SQL.pm:
>
>[Thu Mar 17 11:16:24 2005] 38563 1111076184 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
>[Thu Mar 17 11:16:28 2005] 38563 1111076188 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
>[Thu Mar 17 11:16:31 2005] 38563 1111076191 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
>[Thu Mar 17 11:16:35 2005] 38565 1111076195 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
>[Thu Mar 17 11:16:38 2005] 38565 1111076198 - [/webwork2/mth162/7/6/] [runTime = 3.0 sec sql_single]
>[Thu Mar 17 11:16:41 2005] 38565 1111076201 - [/webwork2/mth162/7/6/] [runTime = 2.0 sec sql_single]
>[Thu Mar 17 11:17:05 2005] 38564 1111076225 - [/webwork2/mth162/8/8/] [runTime = 3.0 sec sql_single]
>
>With the old version of SQL.pm, but with binary removed:
>
>[Thu Mar 17 11:17:07 2005] 38564 1111076227 - [/webwork2/mth162/8/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:09 2005] 38564 1111076229 - [/webwork2/mth162/8/11/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:42 2005] 48562 1111076262 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single]
>[Thu Mar 17 11:17:44 2005] 48564 1111076264 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single]
>[Thu Mar 17 11:17:44 2005] 48562 1111076264 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:45 2005] 48562 1111076265 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:46 2005] 48564 1111076266 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:47 2005] 48564 1111076267 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:48 2005] 48562 1111076268 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:49 2005] 48575 1111076269 - [/webwork2/mth162/8/11/] [runTime = 1.0 sec sql_single]
>[Thu Mar 17 11:17:49 2005] 48564 1111076269 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>[Thu Mar 17 11:17:50 2005] 48564 1111076270 - [/webwork2/mth162/7/6/] [runTime = 1.0 sec sql_single]
>[Thu Mar 17 11:17:51 2005] 48562 1111076271 - [/webwork2/mth162/7/6/] [runTime = 0.0 sec sql_single]
>
>If using case-insensitive keys does not mess up how things are read out
>(e.g. you want set "Exam1" to be in that form, not e.g. "EXAM1" or
>"exam1"), then it should be easy to enforce that there are no conflicts
>when user id's, set names, etc. are created.
>
>Since I cannot imagine that we have any database keys which differ only
>in case, I'll try this out now.
>
>Thanks.
>
>Arnie

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
From: John J. <jj...@as...> - 2005-03-17 17:09:30
|
Arnold Pizer wrote:

> If using case-insensitive keys does not mess up how things are read out
> (e.g. you want set "Exam1" to be in that form, not e.g. "EXAM1" or
> "exam1"), then it should be easy to enforce that there are no
> conflicts when userid's, set names, etc are created.
>
> Since I cannot imagine that we have any database keys which differ
> only in case, I'll try this out now.

The only things used as database keys are set_id, user_id, and problem_id. The last one is always a number, so no conflicts are possible. You can double-check in a given course by looking at the list of sets and list of users in the corresponding instructor pages.

The strings are displayed properly, so that isn't an issue. Enforcing no conflicts on creation is what would happen by default without binary there. Once you have Exam1, you cannot create exam1 because the database says exam1 already matches an existing set. However, if you created Exam1 and exam1 a week ago (when "binary" was there), then operating now without binary mucks up the database.

So, simply deleting binary is a reasonable short-term solution. In fact, that is what we are using here at the moment. Longer term, the question is whether to support users/sets which differ only in case. If we want to do that, some other key scheme should probably be used in the database. This is a moderately big hassle.

If we don't care about supporting users/sets which differ only in case, then there is one bug to be re-patched: when producing hardcopy for a student, webwork checks that the student's user_id matches case-sensitively with the user_id of the hardcopy being requested. This should be changed to insensitively, so that students who change the capitalization of their user_id on login can still get hardcopy.

John
|
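[Editor's sketch: the hardcopy re-patch John describes amounts to comparing the two ids case-insensitively. A minimal illustration, assuming the fix is a plain string comparison; the function name is invented and the real check lives in WeBWorK's hardcopy code, which is not shown in this thread.]

```python
def may_fetch_own_hardcopy(login_user_id, requested_user_id):
    # Compare case-insensitively, consistent with MySQL's default
    # (non-binary) comparison, so a student who logs in as "JSmith"
    # can still fetch hardcopy stored under "jsmith".
    return login_user_id.lower() == requested_user_id.lower()

print(may_fetch_own_hardcopy("JSmith", "jsmith"))  # True
print(may_fetch_own_hardcopy("jsmith", "jdoe"))    # False
```

The alternative discussed below, enforcing exact capitalization at login, would put the check in Authen.pm instead.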
From: Arnold P. <ap...@ma...> - 2005-03-17 20:30:10
|
At 12:00 PM 3/17/2005, John Jones wrote:

>If we don't care about supporting users/sets which differ only in case,
>then there is one bug to be re-patched (when producing hardcopy for a
>student, webwork checks that the student's user_id matches case
>sensitively with the user_id of the hardcopy being requested - this should
>be changed to insensitively so that students who change capitalization of
>their user_id on login can still get hardcopy).

Another option might be to require the login to be case-sensitive (i.e. to match whatever the initial login was set up as). We should probably do whatever is most standard.

Arnie

Prof. Arnold K. Pizer
Dept. of Mathematics
University of Rochester
Rochester, NY 14627
(585) 275-7767
|
From: John J. <jj...@as...> - 2005-03-17 21:51:16
|
Arnold Pizer wrote:

> At 12:00 PM 3/17/2005, John Jones wrote:
>
>> If we don't care about supporting users/sets which differ only in
>> case, then there is one bug to be re-patched (when producing hardcopy
>> for a student, webwork checks that the student's user_id matches case
>> sensitively with the user_id of the hardcopy being requested - this
>> should be changed to insensitively so that students who change
>> capitalization of their user_id on login can still get hardcopy).
>
> Another option might be to require the login to be case sensitive
> (i.e. matches whatever the initial login was set up as). We should
> probably do whatever is most standard.

I thought of that, but that puts the change in Authen.pm. I didn't look, and I wasn't sure if that would mean the case checking on the login name would happen every time a page is generated. Looking at it now, I suppose it could be checked just when verifying passwords, since that only happens on login.

John
|