From: <jas...@us...> - 2002-11-15 21:55:41
|
Update of /cvsroot/genex/genex-server/Genex/ArrayDesign
In directory usw-pr-cvs1:/tmp/cvs-serv16636/Genex/ArrayDesign
Modified Files:
ArrayDesign.pm
Log Message:
new
Index: ArrayDesign.pm
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/ArrayDesign/ArrayDesign.pm,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -d -r1.6 -r1.7
|
|
From: <jas...@us...> - 2002-11-15 21:55:41
|
Update of /cvsroot/genex/genex-server/Genex/Array
In directory usw-pr-cvs1:/tmp/cvs-serv16636/Genex/Array
Modified Files:
Array.pm
Log Message:
new
Index: Array.pm
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/Array/Array.pm,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -d -r1.6 -r1.7
|
|
From: <jas...@us...> - 2002-11-15 21:55:40
|
Update of /cvsroot/genex/genex-server/Genex/AM_SuspectSpots
In directory usw-pr-cvs1:/tmp/cvs-serv16636/Genex/AM_SuspectSpots
Modified Files:
AM_SuspectSpots.pm
Log Message:
new
Index: AM_SuspectSpots.pm
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/AM_SuspectSpots/AM_SuspectSpots.pm,v
retrieving revision 1.29
retrieving revision 1.30
diff -C2 -d -r1.29 -r1.30
|
|
From: <jas...@us...> - 2002-11-15 21:55:40
|
Update of /cvsroot/genex/genex-server/Genex/AM_Spots
In directory usw-pr-cvs1:/tmp/cvs-serv16636/Genex/AM_Spots
Modified Files:
AM_Spots.pm
Log Message:
new
Index: AM_Spots.pm
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/AM_Spots/AM_Spots.pm,v
retrieving revision 1.29
retrieving revision 1.30
diff -C2 -d -r1.29 -r1.30
*** AM_Spots.pm 9 Nov 2002 00:35:34 -0000 1.29
--- AM_Spots.pm 15 Nov 2002 21:55:07 -0000 1.30
***************
*** 999,1003 ****
# first we ensure that the container instance is really in the DB
my $sql = $dbh->create_select_sql( COLUMNS=>['pba_pk'],
! FROM=>[$table_name],
WHERE=>qq[pba_pk='$lt_pkey'],
);
--- 999,1003 ----
# first we ensure that the container instance is really in the DB
my $sql = $dbh->create_select_sql( COLUMNS=>['pba_pk'],
! FROM=>[Bio::Genex::PhysicalBioAssay->table_or_viewname($dbh)],
WHERE=>qq[pba_pk='$lt_pkey'],
);
***************
*** 1010,1014 ****
# ok to proceed with insert
my $header = shift @{$matrix};
! push(@{$header},'pba_fk');
$sql = $dbh->create_insert_sql($table_name,
$header,
--- 1010,1024 ----
# ok to proceed with insert
my $header = shift @{$matrix};
! push(@{$header},'pba_fk','ams_pk');
!
! # pre-fetch the next value of the sequence
! my $seq = 'GENEX_ID_SEQ';
! my $seq_sql = qq[SELECT nextval('"$seq"'::text)];
! my $seq_sth = $dbh->prepare($seq_sql)
! or $dbh->error(@error_args,
! sql=>$seq_sql,
! message=>"Couldn't prepare nextval for sequence $seq",
! );
!
$sql = $dbh->create_insert_sql($table_name,
$header,
***************
*** 1019,1026 ****
sql=>$sql);
foreach my $row (@{$matrix}) {
! my $sth = $sth->execute(@{$row},$lt_pkey)
or $dbh->error(@error_args,
message=>"Couldn't execute insert sql with args: "
! . join(',',@{$row}),
sth=>$sth,
sql=>$sql);
--- 1029,1050 ----
sql=>$sql);
foreach my $row (@{$matrix}) {
! $seq_sth->execute()
! or $dbh->error(@error_args,
! message=>"Couldn't execute nextval from sequence $seq",
! sth=>$seq_sth,
! sql=>$seq_sql);
! my $pkey = $seq_sth->fetchrow_arrayref();
! $dbh->error(@error_args,
! message=>"Couldn't fetch nextval from sequence $seq",
! sth=>$seq_sth,
! sql=>$seq_sql)
! unless defined $pkey && $pkey->[0];
!
! # DBI won't re-run the SQL until we finish
! $seq_sth->finish();
! $sth->execute(@{$row},$lt_pkey,$pkey->[0])
or $dbh->error(@error_args,
message=>"Couldn't execute insert sql with args: "
! . join(',',@{$row},$pkey->[0]),
sth=>$sth,
sql=>$sql);
|
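The AM_Spots.pm diff above switches to pre-fetching each primary key from the `GENEX_ID_SEQ` PostgreSQL sequence before every row insert, then passing that key along with the row values. A minimal Python sketch of the same prefetch-then-insert pattern, using sqlite3 with a one-row counter table standing in for the sequence (the `seq`, `am_spots`, and column names here are illustrative, not the GeneX schema):

```python
import sqlite3

def nextval(cur, seq="genex_id_seq"):
    # Emulate PostgreSQL's nextval(): bump the counter and return it.
    cur.execute("UPDATE seq SET val = val + 1 WHERE name = ?", (seq,))
    cur.execute("SELECT val FROM seq WHERE name = ?", (seq,))
    return cur.fetchone()[0]

def insert_rows(conn, rows, pba_fk):
    cur = conn.cursor()
    pkeys = []
    for row in rows:
        # Pre-fetch the key for this row, as the diff does,
        # then supply it explicitly with the insert.
        pk = nextval(cur)
        cur.execute(
            "INSERT INTO am_spots (value, pba_fk, ams_pk) VALUES (?, ?, ?)",
            (row, pba_fk, pk),
        )
        pkeys.append(pk)
    conn.commit()
    return pkeys

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seq (name TEXT, val INTEGER)")
conn.execute("INSERT INTO seq VALUES ('genex_id_seq', 0)")
conn.execute("CREATE TABLE am_spots (value TEXT, pba_fk INTEGER, ams_pk INTEGER)")
print(insert_rows(conn, ["a", "b", "c"], 7))  # → [1, 2, 3]
```

Prefetching the key client-side, rather than letting a column default fill it in, lets the code know each new row's primary key without a follow-up query, at the cost of one extra round trip per row.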
|
From: <jas...@us...> - 2002-11-15 21:55:40
|
Update of /cvsroot/genex/genex-server/Genex/AM_FactorValues
In directory usw-pr-cvs1:/tmp/cvs-serv16636/Genex/AM_FactorValues
Modified Files:
AM_FactorValues.pm
Log Message:
new
Index: AM_FactorValues.pm
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/AM_FactorValues/AM_FactorValues.pm,v
retrieving revision 1.29
retrieving revision 1.30
diff -C2 -d -r1.29 -r1.30
|
|
From: <td...@us...> - 2002-11-15 21:54:56
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv16958
Modified Files:
Tag: Rel-1_0_1-branch
runtree.pl
Log Message:
Syntax etc.
Index: runtree.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/runtree.pl,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** runtree.pl 15 Nov 2002 20:16:45 -0000 1.1.2.1
--- runtree.pl 15 Nov 2002 21:54:53 -0000 1.1.2.2
***************
*** 44,50 ****
$sth->execute();
! my ($tree_pk, $name, $fi_input_fk) =
$sth->fetchrow_array();
! # TODO - die if bad tree name
if ($sth->fetchrow_array())
--- 44,54 ----
$sth->execute();
! my ($tree_pk, $name) =
$sth->fetchrow_array();
! if (!defined($name))
! {
! $dbh->disconnect;
! die "Tree: $treeName doesn't exist in db";
! }
if ($sth->fetchrow_array())
***************
*** 54,87 ****
# determine log name
!
! my $logfile = "./treelog.txt";
open(LOG, "> $logfile") or die "Unable to open $logfile: $!\n";
print LOG `date`;
print LOG "\n Running tree: $treeName owner: $treeOwner\n";
! processTree($dbh, $tree_pk, $fi_input_fk, LOG);
close(LOG);
# TODO - stat logfile
! # TODO - insert logfile into file_info
! #my $fields = "('file_name','timestamp', 'owner', 'comments', 'checksum')";
! $stm = "insert into file_info (file_name) values ($logfile)";
! $sth = $dbh->prepare($stm);
! $sth->execute();
! $dbh->commit();
!
! $stm = "select from file_info where 'file_name' = $filename";
! $sth = $dbh->prepare($stm);
! $sth->execute;
!
! my $fi_pk = $sth->fetchrow_array;
!
! #update tree record with logfile fk
! $stm = "update tree set fi_log_fk = $fi_pk";
! $sth = $dbh->prepare($stm);
! $sth->execute();
! $dbh->commit();
!
$dbh->disconnect();
} #runChain
--- 58,76 ----
# determine log name
! $stm="select file_name from file_info,tree where fi_pk = fi_log_fk";
! $sth=$dbh->prepare($stm);
! $sth->execute;
!
! my ($logfile) = $sth->fetchrow_array;
open(LOG, "> $logfile") or die "Unable to open $logfile: $!\n";
print LOG `date`;
print LOG "\n Running tree: $treeName owner: $treeOwner\n";
! processTree($dbh, $tree_pk, LOG);
close(LOG);
# TODO - stat logfile
! $sth->finish;
$dbh->disconnect();
} #runChain
***************
*** 89,96 ****
sub processTree
{
! my ($dbh, $tree_pk, $fi_input_fk, $logfile) = @_;
my ($stm, $sth);
! print $logfile "Processing tree pk: $tree_pk input: $fi_input_fk\n";
$stm="select node_pk, an_fk from node" .
--- 78,85 ----
sub processTree
{
! my ($dbh, $tree_pk, $logfile) = @_;
my ($stm, $sth);
! print $logfile "Processing tree pk: $tree_pk\n";
$stm="select node_pk, an_fk from node" .
***************
*** 290,294 ****
while (my ($fileprefix) = $sth2->fetchrow_array)
{
! $userParams =~ /$fileprefix\s*([\S]+)\s/;
my $filename = $1;
insertFile($dbh, $node_pk, $filename);
--- 279,283 ----
while (my ($fileprefix) = $sth2->fetchrow_array)
{
! $userParams =~ /$fileprefix\s*'+([\S]+)'+\s/;
my $filename = $1;
insertFile($dbh, $node_pk, $filename);
|
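The last hunk of the runtree.pl diff tightens the filename-extraction regex from `/$fileprefix\s*([\S]+)\s/` to `/$fileprefix\s*'+([\S]+)'+\s/`, so that quoted parameter values are captured without their quotes. A small Python illustration of the difference between the two patterns (the `--outfile` prefix and parameter string are made up for the example):

```python
import re

params = "--outfile 'out.demo1atxt' --logfile 'run.log' "
prefix = "--outfile"

# Old pattern: grabs the next whitespace-delimited token, quotes and all.
old = re.search(re.escape(prefix) + r"\s*(\S+)\s", params)
# New pattern: requires surrounding quotes and captures only their contents.
new = re.search(re.escape(prefix) + r"\s*'+(\S+)'+\s", params)

print(old.group(1))  # → 'out.demo1atxt' (quotes included in the capture)
print(new.group(1))  # → out.demo1atxt
```

With the old pattern the captured value would carry the literal quotes into later SQL and file operations; the new one anchors on the quotes and strips them.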
|
From: <td...@us...> - 2002-11-15 21:54:35
|
Update of /cvsroot/genex/genex-server/site/webtools/analysis/test
In directory usw-pr-cvs1:/tmp/cvs-serv16784
Modified Files:
Tag: Rel-1_0_1-branch
demo demo.cfg demo1.cfg demo2 demo2.cfg demo3.cfg demo4.cfg
depopulate populate test_tree.pl
Log Message:
syntax etc.
Index: demo
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 7,11 ****
=head1 SYNOPSIS
! ./demo --demoUser <whatever>
--infile <filename> --outfile <filename> --logfile <filename>
--- 7,11 ----
=head1 SYNOPSIS
! ./demo --demoUser <whatever> --demoSys <whatever>
--infile <filename> --outfile <filename> --logfile <filename>
***************
*** 26,29 ****
--- 26,30 ----
my $logfile = '';
my $demoUser = '';
+ my $demoSys = '';
getOptions();
***************
*** 70,73 ****
--- 71,75 ----
'logfile=s' =>\$logfile,
'demoUser=s' =>\$demoUser,
+ 'demoSys=s' =>\$demoSys,
);
} #have command line args
Index: demo.cfg
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo.cfg,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo.cfg 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo.cfg 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 24,28 ****
# analysis
name = demo
! cmdstr = ./demo
#analysis_filetypes_link
--- 24,28 ----
# analysis
name = demo
! cmdstr = ./analysis/test/demo
#analysis_filetypes_link
Index: demo1.cfg
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo1.cfg,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo1.cfg 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo1.cfg 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 33,37 ****
# analysis
name = demo1
! cmdstr = ./demo1
#analysis_filetypes_link
--- 33,37 ----
# analysis
name = demo1
! cmdstr = ./analysis/test/demo1
#analysis_filetypes_link
Index: demo2
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo2,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo2 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo2 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 27,30 ****
--- 27,31 ----
my $path = '';
my $graphFormat = '';
+ my $jpgoutfile = '';
getOptions();
***************
*** 72,75 ****
--- 73,77 ----
'graphFormat=s' =>\$graphFormat,
'path=s' =>\$path,
+ 'jpgoutfile=s' =>\$jpgoutfile,
);
} #have command line args
Index: demo2.cfg
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo2.cfg,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo2.cfg 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo2.cfg 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 33,37 ****
# analysis
name = demo2
! cmdstr = ./demo2
#analysis_filetypes_link
--- 33,37 ----
# analysis
name = demo2
! cmdstr = ./analysis/test/demo2
#analysis_filetypes_link
Index: demo3.cfg
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo3.cfg,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo3.cfg 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo3.cfg 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 24,28 ****
# analysis
name = demo3
! cmdstr = ./demo3
#analysis_filetypes_link
--- 24,28 ----
# analysis
name = demo3
! cmdstr = ./analysis/test/demo3
#analysis_filetypes_link
Index: demo4.cfg
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/demo4.cfg,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** demo4.cfg 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- demo4.cfg 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 24,28 ****
# analysis
name = demo4
! cmdstr = ./demo4
#analysis_filetypes_link
--- 24,28 ----
# analysis
name = demo4
! cmdstr = ./analysis/test/demo4
#analysis_filetypes_link
Index: depopulate
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/depopulate,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** depopulate 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- depopulate 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 1,6 ****
./test_tree.pl --action remove
! ./add_analysis.pl --configfile analysis/demo4.cfg --action remove
! ./add_analysis.pl --configfile analysis/demo3.cfg --action remove
! ./add_analysis.pl --configfile analysis/demo2.cfg --action remove
! ./add_analysis.pl --configfile analysis/demo1.cfg --action remove
! ./add_analysis.pl --configfile analysis/demo.cfg --action remove
--- 1,8 ----
./test_tree.pl --action remove
! ../../add_analysis.pl --configfile demo4.cfg --action remove
! ../../add_analysis.pl --configfile demo3.cfg --action remove
! ../../add_analysis.pl --configfile demo2.cfg --action remove
! ../../add_analysis.pl --configfile demo1.cfg --action remove
! ../../add_analysis.pl --configfile demo.cfg --action remove
! rm /tmp/initial_in
! # should delete rows created in file_info from runTree
Index: populate
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/populate,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** populate 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- populate 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 1,6 ****
! ./add_analysis.pl --configfile analysis/demo.cfg --action insert
! ./add_analysis.pl --configfile analysis/demo1.cfg --action insert
! ./add_analysis.pl --configfile analysis/demo2.cfg --action insert
! ./add_analysis.pl --configfile analysis/demo3.cfg --action insert
! ./add_analysis.pl --configfile analysis/demo4.cfg --action insert
./test_tree.pl --action insert
--- 1,13 ----
! ../../add_analysis.pl --configfile ./demo.cfg --action insert
! ../../add_analysis.pl --configfile ./demo1.cfg --action insert
! ../../add_analysis.pl --configfile ./demo2.cfg --action insert
! ../../add_analysis.pl --configfile ./demo3.cfg --action insert
! ../../add_analysis.pl --configfile ./demo4.cfg --action insert
./test_tree.pl --action insert
+ cat > /tmp/initial_in << EOF
+ This is the
+ initial input
+ for
+ the
+ demo test scripts.
+ EOF
Index: test_tree.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/analysis/test/Attic/test_tree.pl,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** test_tree.pl 15 Nov 2002 20:44:41 -0000 1.1.2.1
--- test_tree.pl 15 Nov 2002 21:54:32 -0000 1.1.2.2
***************
*** 38,56 ****
# insert initial input file values
$stm = "insert into file_info (node_fk, file_name, use_as_input, fi_comments)"
! . "values (NULL, '/home/tdj4m/files/initial_in', 't', 'initial input file for demo')";
}
else
{
! $stm = "select fi_input_fk from tree where tree_name = 'demoTree'";
my $sth = $dbh->prepare($stm);
$sth->execute();
! my ($fi_input_fk) = $sth->fetchrow_array;
! $stm = "delete from file_info where fi_pk = $fi_input_fk";
}
- my $sth = $dbh->prepare($stm);
- $sth->execute();
- $dbh->commit();
}
--- 38,64 ----
# insert initial input file values
$stm = "insert into file_info (node_fk, file_name, use_as_input, fi_comments)"
! . "values (NULL, '/tmp/initial_in', 't', 'initial input file for demo');";
! $stm .="insert into file_info (node_fk, file_name, use_as_input, fi_comments)"
! . "values (NULL, '/tmp/treelog.txt', 'f', 'logfile for demo');";
! my $sth = $dbh->prepare($stm);
! $sth->execute();
! $dbh->commit();
}
else
{
! $stm = "select fi_input_fk, fi_log_fk from tree where tree_name = 'demoTree'";
my $sth = $dbh->prepare($stm);
$sth->execute();
! while (my ($fi_ifk, $fi_lfk) = $sth->fetchrow_array)
! {
! $stm = "delete from file_info where fi_pk = $fi_ifk;";
! $stm .= "delete from file_info where fi_pk = $fi_lfk;";
! my $sth = $dbh->prepare($stm);
! $sth->execute();
! $dbh->commit();
! }
}
}
***************
*** 65,74 ****
if ($action eq "insert")
{
! $stm = "select fi_pk from file_info where file_name like '%initial_in%'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $fi_ifk = $sth->fetchrow_array;
! $stm = "insert into tree (tree_name, fi_input_fk) values ('demoTree', $fi_ifk)";
}
else
--- 73,87 ----
if ($action eq "insert")
{
! $stm = "select fi_pk from file_info where file_name like '%treelog%'";
$sth = $dbh->prepare($stm);
$sth->execute();
+ my $fi_lfk = $sth->fetchrow_array;
+ $stm = "select fi_pk from file_info where file_name like '%initial_in%'";
+ $sth = $dbh->prepare($stm);
+ $sth->execute();
my $fi_ifk = $sth->fetchrow_array;
!
! $stm = "insert into tree (tree_name, fi_input_fk, fi_log_fk) values ('demoTree', $fi_ifk, $fi_lfk)";
}
else
***************
*** 193,197 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demoLogfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 206,210 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demoLogfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 204,208 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demoOutput.demoouttxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 217,221 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demoOutput.demoouttxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 244,248 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'outa.demo1atxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 257,261 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/outa.demo1atxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 255,259 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'outb.demo1btxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 268,272 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/outb.demo1btxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 266,270 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo1Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 279,283 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo1Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 286,290 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo2Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 299,303 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo2Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 297,301 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo2Output.demo2outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 310,314 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo2Output.demo2outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 308,312 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo2Output.jpg', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 321,325 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo2Output.jpg', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 337,341 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo3Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 350,354 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo3Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 348,352 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo3Output.demo3outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 361,365 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo3Output.demo3outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 377,381 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo4Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 390,394 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo4Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 388,392 ****
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, 'demo4Output.demo4outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 401,405 ----
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
! " values ($node_fk, '/tmp/demo4Output.demo4outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 450,454 ****
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
! " values ($node_fk, '/home/tdj4m/files/initial_in', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
--- 463,467 ----
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
! " values ($node_fk, '/tmp/initial_in', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
***************
*** 473,487 ****
($node_fk, $an_key) = $sth->fetchrow_array;
- # $stm = "select spn_pk from sys_parameter_names " .
- # "where sp_name like '%infile%' and an_fk = $an_key";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
- # $spn_fk = $sth->fetchrow_array;
-
- # $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
- # " values ($node_fk, 'XXX', $spn_fk);";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
-
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%demo1Sys%' and an_fk = $an_key";
--- 486,489 ----
***************
*** 514,528 ****
($node_fk, $an_key) = $sth->fetchrow_array;
- # $stm = "select spn_pk from sys_parameter_names " .
- # "where sp_name like '%infile%' and an_fk = $an_key";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
- # $spn_fk = $sth->fetchrow_array;
-
- # $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
- # " values ($node_fk, 'demoOutput', $spn_fk);";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
-
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%path%' and an_fk = $an_key";
--- 516,519 ----
***************
*** 543,574 ****
($node_fk, $an_key) = $sth->fetchrow_array;
- # $stm = "select spn_pk from sys_parameter_names " .
- # "where sp_name like '%infile%' and an_fk = $an_key";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
- # $spn_fk = $sth->fetchrow_array;
-
- # $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
- # " values ($node_fk, 'outa', $spn_fk);";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
-
- #-demo4 params: infile
- $stm = "select node_pk, an_pk from node,analysis " .
- " where an_fk = an_pk and an_name = 'demo4'";
- $sth = $dbh->prepare($stm);
- $sth->execute();
- ($node_fk, $an_key) = $sth->fetchrow_array;
-
- # $stm = "select spn_pk from sys_parameter_names " .
- # "where sp_name like '%infile%' and an_fk = $an_key";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
- # $spn_fk = $sth->fetchrow_array;
-
- # $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
- # " values ($node_fk, 'outb', $spn_fk);";
- # $sth = $dbh->prepare($stm);
- # $sth->execute();
}
elsif ($action eq "remove")
--- 534,537 ----
|
|
From: <jas...@us...> - 2002-11-15 21:51:31
|
Update of /cvsroot/genex/genex-server/Genex
In directory usw-pr-cvs1:/tmp/cvs-serv15690/Genex
Modified Files:
ChangeLog Genex.pm.in
Log Message:
* Genex.pm.in (Repository):
replaced old calls to tablename() with table_or_viewname()
Index: ChangeLog
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/ChangeLog,v
retrieving revision 1.118
retrieving revision 1.119
diff -C2 -d -r1.118 -r1.119
*** ChangeLog 9 Nov 2002 00:48:02 -0000 1.118
--- ChangeLog 15 Nov 2002 21:51:26 -0000 1.119
***************
*** 1,2 ****
--- 1,7 ----
+ 2002-11-15 Jason E. Stewart <ja...@op...>
+
+ * Genex.pm.in (Repository):
+ replaced old calls to tablename() with table_or_viewname()
+
2002-11-08 Jason E. Stewart <ja...@op...>
Index: Genex.pm.in
===================================================================
RCS file: /cvsroot/genex/genex-server/Genex/Genex.pm.in,v
retrieving revision 1.49
retrieving revision 1.50
diff -C2 -d -r1.49 -r1.50
*** Genex.pm.in 9 Nov 2002 00:30:52 -0000 1.49
--- Genex.pm.in 15 Nov 2002 21:51:27 -0000 1.50
***************
*** 800,804 ****
$sql = $dbh->create_select_sql(
COLUMNS=>[$raw_fkey_name],
! FROM=>[$self->tablename($dbh)],
WHERE=>$WHERE);
$val = $dbh->selectall_arrayref($sql)->[0][0];
--- 800,804 ----
$sql = $dbh->create_select_sql(
COLUMNS=>[$raw_fkey_name],
! FROM=>[$self->table_or_viewname($dbh)],
WHERE=>$WHERE);
$val = $dbh->selectall_arrayref($sql)->[0][0];
***************
*** 833,837 ****
croak "Couldn't find fkey for $pkey_name in " . $fkey_ref->table_name()
unless defined $column_name;
! my $calling_class = $self->tablename();
my $class_to_fetch = 'Bio::Genex::' . $fkey_ref->table_name();
eval "require $class_to_fetch";
--- 833,837 ----
croak "Couldn't find fkey for $pkey_name in " . $fkey_ref->table_name()
unless defined $column_name;
! my $calling_class = $self->table_or_viewname($dbh);
my $class_to_fetch = 'Bio::Genex::' . $fkey_ref->table_name();
eval "require $class_to_fetch";
|
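The log message above describes replacing calls to `tablename()` with `table_or_viewname()` when building SELECT statements. A hedged Python sketch of what such a dispatch could look like; the class layout, attribute names, and the table/view strings are assumptions for illustration, not the actual Bio::Genex API:

```python
class GenexClass:
    """Sketch: an object backed by a table, optionally exposed via a view."""
    table = None
    view = None

    def tablename(self):
        # Old behaviour: always name the underlying table.
        return self.table

    def table_or_viewname(self):
        # New behaviour: prefer the view when one is defined,
        # so generated SQL reads from the right relation.
        return self.view if self.view is not None else self.table

class PhysicalBioAssay(GenexClass):
    table = "physicalbioassay"
    view = "physicalbioassay_view"

class ArrayDesign(GenexClass):
    table = "arraydesign"

print(PhysicalBioAssay().table_or_viewname())  # → physicalbioassay_view
print(ArrayDesign().table_or_viewname())       # → arraydesign
```

The point of such a method is that callers generating SQL never need to know whether a class is table-backed or view-backed; the diffs above simply route every FROM clause through the one accessor.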
|
From: <td...@us...> - 2002-11-15 20:46:07
|
Update of /cvsroot/genex/genex-server/site/webtools/analysis/test
In directory usw-pr-cvs1:/tmp/cvs-serv24421
Added Files:
Tag: Rel-1_0_1-branch
demo demo1 demo1.cfg demo2 demo2.cfg demo3 demo3.cfg demo4
demo4.cfg demo.cfg depopulate populate test_tree.pl
Log Message:
Initial draft - for testing runtree
--- NEW FILE: demo ---
#!/usr/bin/perl -w
=head1 NAME
demo - sample analysis for building analysis tree
=head1 SYNOPSIS
./demo --demoUser <whatever>
--infile <filename> --outfile <filename> --logfile <filename>
=head1 DESCRIPTION
This script is a placeholder/example for building analysis trees.
=cut
use strict;
use Getopt::Long 2.13;
#command line options
my $debug ='';
my $infile = '';
my $outfile = '';
my $logfile = '';
my $demoUser = '';
getOptions();
#create logfile
open (LOG, "> $logfile") or die "Unable to open $logfile\n";
#open input file
open (IN, "$infile") or die "Unable to open $infile\n";
#open output file
open (OUT, "> $outfile") or die "Unable to open $outfile\n";
print LOG "Running demo: in: $infile out: $outfile\n";
print LOG "Params: demoUser: $demoUser\n";
#write contents of input file to output file
while( my $line = <IN>)
{
print OUT "Demo: $line";
}
#close all
close (IN);
close (OUT);
close (LOG);
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'help|?' => \$help,
'debug!' => \$debug,
'infile=s' =>\$infile,
'outfile=s' =>\$outfile,
'logfile=s' =>\$logfile,
'demoUser=s' =>\$demoUser,
);
} #have command line args
if ($help)
{
usage();
};
} # getOptions
sub usage
{
print "Usage: \n";
print "./demo --demoUser <string> --infile <filename> --outfile <filename> --logfile <filename>\n";
} # usage
--- NEW FILE: demo1 ---
#!/usr/bin/perl -w
=head1 NAME
demo - sample analysis for building analysis tree
=head1 SYNOPSIS
./demo --demo1user <?> --demo1Sys <?> --demo2Sys <?> --outfilea <filename>
--infile <filename> --outfileb <filename> --logfile <filename>
=head1 DESCRIPTION
This script is a placeholder/example for building analysis trees.
=cut
use strict;
use Getopt::Long 2.13;
#command line options
my $debug ='';
my $infile = '';
my $outfilea = '';
my $outfileb = '';
my $logfile = '';
my $demo1user = '';
my $demo1Sys = '';
my $demo2Sys = '';
getOptions();
#create logfile
open (LOG, "> $logfile") or die "Unable to open $logfile\n";
#open input file
open (IN, "$infile") or die "Unable to open $infile\n";
#open output file
open (OUTA, "> $outfilea") or die "Unable to open $outfilea\n";
open (OUTB, "> $outfileb") or die "Unable to open $outfileb\n";
print LOG "Demo1: in: $infile outa: $outfilea outb: $outfileb\n";
print LOG "Params: user: $demo1user sys1: $demo1Sys sys2: $demo2Sys\n";
#write contents of input file to output file
while( my $line = <IN>)
{
print OUTA "A OUT: $line";
print OUTB "B OUT: $line";
}
#close all
close (IN);
close (OUTA);
close (OUTB);
close (LOG);
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'help|?' => \$help,
'debug!' => \$debug,
'infile=s' =>\$infile,
'outfilea=s' =>\$outfilea,
'outfileb=s' =>\$outfileb,
'logfile=s' =>\$logfile,
'demo1user=s' =>\$demo1user,
'demo1Sys=s' =>\$demo1Sys,
'demo2Sys=s' =>\$demo2Sys,
);
} #have command line args
if ($help)
{
usage();
};
} # getOptions
sub usage
{
print "Usage: \n";
} # usage
--- NEW FILE: demo1.cfg ---
## filetypes
filetype = name=demoouttxt
filetype = comment=Input file for demo1 analysis
filetype = arg_name=--infile
filetype = name=demo1atxt
filetype = comment=Text output for demo analysis
filetype = arg_name=--outfilea
filetype = name=demo1btxt
filetype = comment=Text output for demo analysis
filetype = arg_name=--outfileb
filetype = name=log
filetype = comment=Log file
filetype = arg_name=--logfile
#extension
extension = filetype=demoouttxt
extension = ext=demoouttxt
extension = filetype=demo1btxt
extension = ext=demo1btxt
extension = filetype=demo1atxt
extension = ext=demo1atxt
extension = filetype=log
extension = ext=txt
# analysis
name = demo1
cmdstr = ./demo1
#analysis_filetypes_link
analysisfile = filetype =demoouttxt
analysisfile = input =1
analysisfile = filetype =demo1atxt
analysisfile = input =0
analysisfile = filetype =demo1btxt
analysisfile = input =0
analysisfile = filetype =log
analysisfile = input =0
#user_parameter_names
up = name=--demo1user
up = display_name=Generic parameter for demo1
up = type=text
up = name=--logfile
up = display_name=Filename for demo1 logfile
up = type=text
up = default=demo1log
up = name=--outfileb
up = display_name=Filename for output b
up = type=text
up = default=demo1b
up = name=--outfilea
up = display_name=Filename for output a
up = type=text
up = default=demo1a
#sys_parameter_names
sp = name=--demo1Sys
sp = name=--demo2Sys
sp = name=--infile
--- NEW FILE: demo2 ---
#!/usr/bin/perl -w
=head1 NAME
demo2 - sample analysis for building analysis tree
=head1 SYNOPSIS
./demo2 --path <whatever> --graphFormat <jpg|png|pdf> --jpgoutfile <filename>
--infile <filename> --outfile <filename> --logfile <filename>
=head1 DESCRIPTION
This script is a placeholder/example for building analysis trees.
=cut
use strict;
use Getopt::Long 2.13;
#command line options
my $debug ='';
my $infile = '';
my $outfile = '';
my $logfile = '';
my $path = '';
my $graphFormat = '';
my $jpgoutfile = '';
getOptions();
#create logfile
open (LOG, "> $logfile") or die "Unable to open $logfile\n";
#open input file
open (IN, "$infile") or die "Unable to open $infile\n";
#open output file
open (OUT, "> $outfile") or die "Unable to open $outfile\n";
print LOG "Running demo2: in: $infile out: $outfile jpg: $jpgoutfile\n";
print LOG "Params: graphFormat: $graphFormat path: $path\n";
#write contents of input file to output file
while( my $line = <IN>)
{
print OUT "Demo2: $line";
}
#close all
close (IN);
close (OUT);
close (LOG);
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'help|?' => \$help,
'debug!' => \$debug,
'infile=s' =>\$infile,
'outfile=s' =>\$outfile,
'logfile=s' =>\$logfile,
'graphFormat=s' =>\$graphFormat,
'jpgoutfile=s' =>\$jpgoutfile,
'path=s' =>\$path,
);
} #have command line args
if ($help)
{
usage();
};
} # getOptions
sub usage
{
print "Usage: \n";
print "./demo2 --graphFormat <jpg|png|pdf> --path <string> --infile <filename> --outfile <filename> --logfile <filename> --jpgoutfile <filename>\n";
} # usage
--- NEW FILE: demo2.cfg ---
## filetypes
filetype = name=demoouttxt
filetype = comment=Text input for demo2 analysis
filetype = arg_name=--infile
filetype = name=demo2outtxt
filetype = comment=Text output for demo2 analysis
filetype = arg_name=--outfile
filetype = name=demo2outjpg
filetype = comment=Graphical output for demo2 analysis
filetype = arg_name=--jpgoutfile
filetype = name=log
filetype = comment= Demo2 log file
filetype = arg_name=--logfile
#extension
extension = filetype=demoouttxt
extension = ext=demoouttxt
extension = filetype=demo2outtxt
extension = ext=demo2outtxt
extension = filetype=demo2outjpg
extension = ext=jpg
extension = filetype=log
extension = ext=txt
# analysis
name = demo2
cmdstr = ./demo2
#analysis_filetypes_link
analysisfile = filetype =demoouttxt
analysisfile = input =1
analysisfile = filetype =demo2outtxt
analysisfile = input =0
analysisfile = filetype =demo2outjpg
analysisfile = input =0
analysisfile = filetype =log
analysisfile = input =0
#user_parameter_names
up = name =--graphFormat
up = display_name = File type of graphical output
up = type=radio png pdf jpg
up = default=jpg
up = name=--outfile
up = display_name=Output file name for demo2
up = type=text
up = name=--logfile
up = display_name=Log filename for demo2
up = type=text
up = default=demo2log
up = name=--jpgoutfile
up = display_name=Output file name for demo2 graphical output
up = type = text
up = default=demo2graph
#sys_parameter_names
sp = name=--infile
sp = name=--path
sp = default=./
--- NEW FILE: demo3 ---
#!/usr/bin/perl -w
=head1 NAME
demo3 - sample analysis for building analysis tree
=head1 SYNOPSIS
./demo3 --user <whatever>
--infile <filename> --outfile <filename> --logfile <filename>
=head1 DESCRIPTION
This script is a placeholder/example for building analysis trees.
=cut
use strict;
use Getopt::Long 2.13;
#command line options
my $debug ='';
my $infile = '';
my $outfile = '';
my $logfile = '';
my $user = '';
getOptions();
#create logfile
open (LOG, "> $logfile") or die "Unable to open $logfile\n";
#open input file
open (IN, "$infile") or die "Unable to open $infile\n";
#open output file
open (OUT, "> $outfile") or die "Unable to open $outfile\n";
print LOG "Running demo3: in: $infile out: $outfile\n";
print LOG "Params: user: $user\n";
#write contents of input file to output file
while( my $line = <IN>)
{
print OUT "Demo3: $line";
}
#close all
close (IN);
close (OUT);
close (LOG);
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'help|?' => \$help,
'debug!' => \$debug,
'infile=s' =>\$infile,
'outfile=s' =>\$outfile,
'logfile=s' =>\$logfile,
'user=s' =>\$user,
);
} #have command line args
if ($help)
{
usage();
};
} # getOptions
sub usage
{
print "Usage: \n";
print "./demo3 --user <string> --infile <filename> --outfile <filename> --logfile <filename>\n";
} # usage
--- NEW FILE: demo3.cfg ---
## filetypes
filetype = name=demo1atxt
filetype = comment=Input file for demo3 analysis
filetype = arg_name=--infile
filetype = name=demo3outtxt
filetype = comment=Text output for demo3 analysis
filetype = arg_name=--outfile
filetype = name=log
filetype = comment=Log file
filetype = arg_name=--logfile
#extension
extension = filetype=demo1atxt
extension = ext=demo1atxt
extension = filetype=demo3outtxt
extension = ext=demo3outtxt
extension = filetype=log
extension = ext=txt
# analysis
name = demo3
cmdstr = ./demo3
#analysis_filetypes_link
analysisfile = filetype =demo1atxt
analysisfile = input =1
analysisfile = filetype =demo3outtxt
analysisfile = input =0
analysisfile = filetype =log
analysisfile = input =0
#user_parameter_names
up = name =--user
up = display_name = Generic parameter for user demo3
up = type =text
up = name=--outfile
up = display_name=Output file name for demo3
up = type=text
up = name=--logfile
up = display_name=Log file name for demo3
up = type=text
up = default=demo3log
#sys_parameter_names
sp = name=--infile
--- NEW FILE: demo4 ---
#!/usr/bin/perl -w
=head1 NAME
demo4 - sample analysis for building analysis tree
=head1 SYNOPSIS
./demo4 --user <whatever>
--infile <filename> --outfile <filename> --logfile <filename>
=head1 DESCRIPTION
This script is a placeholder/example for building analysis trees.
=cut
use strict;
use Getopt::Long 2.13;
#command line options
my $debug ='';
my $infile = '';
my $outfile = '';
my $logfile = '';
my $user = '';
getOptions();
#create logfile
open (LOG, "> $logfile") or die "Unable to open $logfile\n";
#open input file
open (IN, "$infile") or die "Unable to open $infile\n";
#open output file
open (OUT, "> $outfile") or die "Unable to open $outfile\n";
print LOG "Running demo4: in: $infile out: $outfile\n";
print LOG "Params: user: $user\n";
#write contents of input file to output file
while( my $line = <IN>)
{
print OUT "Demo4: $line";
}
#close all
close (IN);
close (OUT);
close (LOG);
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'help|?' => \$help,
'debug!' => \$debug,
'infile=s' =>\$infile,
'outfile=s' =>\$outfile,
'logfile=s' =>\$logfile,
'user=s' =>\$user,
);
} #have command line args
if ($help)
{
usage();
};
} # getOptions
sub usage
{
print "Usage: \n";
print "./demo4 --user <string> --infile <filename> --outfile <filename> --logfile <filename>\n";
} # usage
--- NEW FILE: demo4.cfg ---
## filetypes
filetype = name=demo4intxt
filetype = comment=Input file for demo4 analysis
filetype = arg_name=--infile
filetype = name=demo4outtxt
filetype = comment=Text output for demo4 analysis
filetype = arg_name=--outfile
filetype = name=log
filetype = comment=Log file
filetype = arg_name=--logfile
#extension
extension = filetype=demo4intxt
extension = ext=demo1btxt
extension = filetype=demo4outtxt
extension = ext=demo4outtxt
extension = filetype=log
extension = ext=txt
# analysis
name = demo4
cmdstr = ./demo4
#analysis_filetypes_link
analysisfile = filetype =demo4intxt
analysisfile = input =1
analysisfile = filetype =demo4outtxt
analysisfile = input =0
analysisfile = filetype =log
analysisfile = input =0
#user_parameter_names
up = name =--user
up = display_name = Generic parameter for user demo4
up = type=text
up = name=--outfile
up = display_name=Output file name for demo4
up = type=text
up = name=--logfile
up = display_name=Log file name for demo4
up = type=text
up = default=demo4log
#sys_parameter_names
sp = name=--infile
--- NEW FILE: demo.cfg ---
## filetypes
filetype = name=demointxt
filetype = comment=Input file for demo analysis
filetype = arg_name=--infile
filetype = name=demoouttxt
filetype = comment=Text output for demo analysis
filetype = arg_name=--outfile
filetype = name=log
filetype = comment=Log file
filetype = arg_name=--logfile
#extension
extension = filetype=demointxt
extension = ext=demointxt
extension = filetype=demoouttxt
extension = ext=demoouttxt
extension = filetype=log
extension = ext=txt
# analysis
name = demo
cmdstr = ./demo
#analysis_filetypes_link
analysisfile = filetype =demointxt
analysisfile = input =1
analysisfile = filetype =demoouttxt
analysisfile = input =0
analysisfile = filetype =log
analysisfile = input =0
#user_parameter_names
up = name=--demoUser
up = display_name=Generic parameter for demo analysis
up = type=text
up = name=--outfile
up = display_name=Output filename
up = type=text
up = default=demoout
up = name=--logfile
up = display_name=Logfile filename
up = type=text
up = default=demolog
#sys_parameter_names
sp = name=--infile
sp = name=--demoSys
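
The `*.cfg` files above use a flat `section = key=value` line format in which repeated `filetype =` lines accumulate into records, with each `name=` entry starting a new record. The sketch below is an illustration only of how such lines could be grouped — the real parser lives in `add_analysis.pl`, which is not part of this commit:

```perl
#!/usr/bin/perl -w
use strict;

# Group repeated "filetype = key=value" lines into records; a new
# "name=..." entry starts a new record.  Illustrative sketch only --
# not the actual add_analysis.pl parsing code.
sub parse_filetypes {
    my @lines = @_;
    my (@records, $current);
    for my $line (@lines) {
        next unless $line =~ /^\s*filetype\s*=\s*(\w+)\s*=\s*(.*?)\s*$/;
        my ($key, $value) = ($1, $2);
        if ($key eq 'name') {
            $current = { name => $value };
            push @records, $current;
        } elsif ($current) {
            $current->{$key} = $value;
        }
    }
    return @records;
}

my @cfg = (
    'filetype = name=demointxt',
    'filetype = comment=Input file for demo analysis',
    'filetype = arg_name=--infile',
    'filetype = name=log',
    'filetype = comment=Log file',
    'filetype = arg_name=--logfile',
);
my @types = parse_filetypes(@cfg);
print scalar(@types), " filetypes parsed\n";   # 2 filetypes parsed
print $types[0]{arg_name}, "\n";               # --infile
```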
--- NEW FILE: depopulate ---
./test_tree.pl --action remove
./add_analysis.pl --configfile analysis/demo4.cfg --action remove
./add_analysis.pl --configfile analysis/demo3.cfg --action remove
./add_analysis.pl --configfile analysis/demo2.cfg --action remove
./add_analysis.pl --configfile analysis/demo1.cfg --action remove
./add_analysis.pl --configfile analysis/demo.cfg --action remove
--- NEW FILE: populate ---
./add_analysis.pl --configfile analysis/demo.cfg --action insert
./add_analysis.pl --configfile analysis/demo1.cfg --action insert
./add_analysis.pl --configfile analysis/demo2.cfg --action insert
./add_analysis.pl --configfile analysis/demo3.cfg --action insert
./add_analysis.pl --configfile analysis/demo4.cfg --action insert
./test_tree.pl --action insert
--- NEW FILE: test_tree.pl ---
#!/usr/bin/perl -w
use DBI;
require "sessionlib.pl";
use Getopt::Long 2.13;
#command line options
my $action = '';
getOptions();
my $dbh = new_connection();
if ($action eq "insert")
{
act_file_info($dbh, $action);
act_tree($dbh, $action);
act_nodes($dbh, $action);
act_user_parameter_values($dbh, $action);
act_sys_parameter_values($dbh, $action);
}
elsif ($action eq "remove")
{
act_sys_parameter_values($dbh, $action);
act_user_parameter_values($dbh, $action);
act_file_info($dbh, $action);
act_nodes($dbh, $action);
act_tree($dbh, $action);
}
$dbh->disconnect();
sub act_file_info
{
my ($dbh, $action) = @_;
if ($action eq "insert")
{
# insert initial input file values
$stm = "insert into file_info (node_fk, file_name, use_as_input, fi_comments)"
. "values (NULL, '/home/tdj4m/files/initial_in', 't', 'initial input file for demo')";
}
else
{
$stm = "select fi_input_fk from tree where tree_name = 'demoTree'";
my $sth = $dbh->prepare($stm);
$sth->execute();
my ($fi_input_fk) = $sth->fetchrow_array;
$stm = "delete from file_info where fi_pk = $fi_input_fk";
}
my $sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
}
# insert tree
sub act_tree
{
my ($dbh, $action) = @_;
my ($stm, $sth);
if ($action eq "insert")
{
$stm = "select fi_pk from file_info where file_name like '%initial_in%'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $fi_ifk = $sth->fetchrow_array;
$stm = "insert into tree (tree_name, fi_input_fk) values ('demoTree', $fi_ifk)";
}
else
{
$stm = "delete from tree where tree_name = 'demoTree'";
}
$sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
}
# insert node
sub act_nodes
{
my ($dbh, $action) = @_;
my ($stm, $sth);
$stm = "select tree_pk from tree where tree_name = 'demoTree'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $tree_fk = $sth->fetchrow_array;
if ($action eq "insert")
{
$stm = "select an_pk from analysis where an_name = 'demo'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $an_fk_demo = $sth->fetchrow_array;
$stm = "select an_pk from analysis where an_name = 'demo1'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $an_fk_demo1 = $sth->fetchrow_array;
$stm = "select an_pk from analysis where an_name = 'demo2'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $an_fk_demo2 = $sth->fetchrow_array;
$stm = "select an_pk from analysis where an_name = 'demo3'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $an_fk_demo3 = $sth->fetchrow_array;
$stm = "select an_pk from analysis where an_name = 'demo4'";
$sth = $dbh->prepare($stm);
$sth->execute();
my $an_fk_demo4 = $sth->fetchrow_array;
$stm = "insert into node (tree_fk, an_fk, parent_key) values " .
"($tree_fk, $an_fk_demo, NULL)";
$sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
$stm = "select node_pk from node where an_fk = $an_fk_demo";
$sth = $dbh->prepare($stm);
$sth->execute();
my $node_pk_demo = $sth->fetchrow_array;
$stm = "insert into node (tree_fk, an_fk, parent_key) values ".
"($tree_fk, $an_fk_demo1, $node_pk_demo)";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "insert into node (tree_fk, an_fk, parent_key) values ".
"($tree_fk, $an_fk_demo2, $node_pk_demo)";
$sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
$stm = "select node_pk from node where an_fk = $an_fk_demo1";
$sth = $dbh->prepare($stm);
$sth->execute();
my $node_pk_demo1 = $sth->fetchrow_array;
$stm = "insert into node (tree_fk, an_fk, parent_key) values ".
"($tree_fk, $an_fk_demo3, $node_pk_demo1)";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "insert into node (tree_fk, an_fk, parent_key) values ".
"($tree_fk, $an_fk_demo4, $node_pk_demo1)";
}
else
{
$stm = "delete from node where tree_fk = $tree_fk";
}
$sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
}
#insert into user_parameter_values
sub act_user_parameter_values
{
my ($dbh, $action) = @_;
my ($stm, $sth);
if ($action eq "insert")
{
# demo - params demoUser outfile logfile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo'";
$sth = $dbh->prepare($stm);
$sth->execute();
my ($node_fk, $an_key) = $sth->fetchrow_array;
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%logfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
my $upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demoLogfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%outfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demoOutput.demoouttxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%demoUser%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'Cruella de Ville', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
# demo1 - params demo1User outfilea outfileb logfile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo1'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%demo1user%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'Jasper', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%outfilea%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'outa.demo1atxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%outfileb%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'outb.demo1btxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%logfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo1Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
# demo2 - params graphFormat outfile logfile jpgoutfile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo2'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%logfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo2Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%outfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo2Output.demo2outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%jpgoutfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo2Output.jpg', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%graphFormat%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'Horace', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
# demo3 - params outfile user logfile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo3'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%logfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo3Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%outfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo3Output.demo3outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%user%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'Perdita', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
# demo4 - params outfile user logfile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo4'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%logfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo4Logfile.log', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%outfile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'demo4Output.demo4outtxt', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select upn_pk from user_parameter_names " .
"where up_name like '%user%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$upn_fk = $sth->fetchrow_array;
$stm = "insert into user_parameter_values (node_fk, up_value, upn_fk)" .
" values ($node_fk, 'Pongo', $upn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
}
elsif ($action eq "remove")
{
$stm = "select node_pk from node, tree where tree_fk = tree_pk and ".
" tree_name = 'demoTree'";
my $node_fk;
my $sth2 = $dbh->prepare($stm);
$sth2->execute();
while (($node_fk) = $sth2->fetchrow_array)
{
$stm = "delete from user_parameter_values where node_fk = $node_fk";
$sth = $dbh->prepare($stm);
$sth->execute();
}
}
$dbh->commit();
}
#insert into sys_parameter_values
sub act_sys_parameter_values
{
my ($dbh, $action) = @_;
my ($stm, $sth);
if ($action eq "insert")
{
#-demo params: infile, demoSys
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo'";
$sth = $dbh->prepare($stm);
$sth->execute();
my ($node_fk, $an_key) = $sth->fetchrow_array;
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%infile%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
my $spn_fk = $sth->fetchrow_array;
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
" values ($node_fk, '/home/tdj4m/files/initial_in', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%demoSys%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$spn_fk = $sth->fetchrow_array;
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
" values ($node_fk, 'Arial', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
#-demo1 params: demo1Sys, demo2Sys
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo1'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
# $stm = "select spn_pk from sys_parameter_names " .
# "where sp_name like '%infile%' and an_fk = $an_key";
# $sth = $dbh->prepare($stm);
# $sth->execute();
# $spn_fk = $sth->fetchrow_array;
# $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
# " values ($node_fk, 'XXX', $spn_fk);";
# $sth = $dbh->prepare($stm);
# $sth->execute();
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%demo1Sys%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$spn_fk = $sth->fetchrow_array;
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
" values ($node_fk, 'Sebastian', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%demo2Sys%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$spn_fk = $sth->fetchrow_array;
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
" values ($node_fk, 'Ursala', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
#-demo2 params: infile, path
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo2'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
# $stm = "select spn_pk from sys_parameter_names " .
# "where sp_name like '%infile%' and an_fk = $an_key";
# $sth = $dbh->prepare($stm);
# $sth->execute();
# $spn_fk = $sth->fetchrow_array;
# $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
# " values ($node_fk, 'demoOutput', $spn_fk);";
# $sth = $dbh->prepare($stm);
# $sth->execute();
$stm = "select spn_pk from sys_parameter_names " .
"where sp_name like '%path%' and an_fk = $an_key";
$sth = $dbh->prepare($stm);
$sth->execute();
$spn_fk = $sth->fetchrow_array;
$stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
" values ($node_fk, 'Eric', $spn_fk);";
$sth = $dbh->prepare($stm);
$sth->execute();
#-demo3 params: infile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo3'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
# $stm = "select spn_pk from sys_parameter_names " .
# "where sp_name like '%infile%' and an_fk = $an_key";
# $sth = $dbh->prepare($stm);
# $sth->execute();
# $spn_fk = $sth->fetchrow_array;
# $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
# " values ($node_fk, 'outa', $spn_fk);";
# $sth = $dbh->prepare($stm);
# $sth->execute();
#-demo4 params: infile
$stm = "select node_pk, an_pk from node,analysis " .
" where an_fk = an_pk and an_name = 'demo4'";
$sth = $dbh->prepare($stm);
$sth->execute();
($node_fk, $an_key) = $sth->fetchrow_array;
# $stm = "select spn_pk from sys_parameter_names " .
# "where sp_name like '%infile%' and an_fk = $an_key";
# $sth = $dbh->prepare($stm);
# $sth->execute();
# $spn_fk = $sth->fetchrow_array;
# $stm = "insert into sys_parameter_values (node_fk, sp_value, spn_fk)" .
# " values ($node_fk, 'outb', $spn_fk);";
# $sth = $dbh->prepare($stm);
# $sth->execute();
}
elsif ($action eq "remove")
{
$stm = "select node_pk from node, tree where tree_fk = tree_pk and ".
" tree_name = 'demoTree'";
my $sth2 = $dbh->prepare($stm);
$sth2->execute();
while (my ($node_fk) = $sth2->fetchrow_array)
{
$stm = "delete from sys_parameter_values where node_fk = $node_fk";
$sth = $dbh->prepare($stm);
$sth->execute();
}
}
$dbh->commit();
}
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'action=s' => \$action,
'help|?' => \$help,
);
} #have command line args
usage() if $help;
if (($action ne "insert") && ($action ne "remove"))
{
print "Must specify a valid action.\n";
usage();
}
} # getOptions
sub usage
{
print "Usage: \n";
print " ./test_tree.pl --action=<insert|remove> \n";
exit;
}
Update of /cvsroot/genex/genex-server/site/webtools/analysis
In directory usw-pr-cvs1:/tmp/cvs-serv24040
Added Files:
Tag: Rel-1_0_1-branch
qualityControl.rw statAnalysis.rw westfallYoung.rw Rwrapper.pl
sourceSPLUSbioinfo.ssc
Log Message:
Moved from webtools dir
--- NEW FILE: qualityControl.rw ---
# This function performs quality control and normalization on
# gene chip data. The function is intended to handle data for
# up to 20 chips.
#
# INPUT: geneData - dataframe with Probe.Set.Name and signals for all chips
#        outFileBase - base filename for text and graphical output
#        conditions - a vector describing the experiment set
#              length is the number of conditions
#              vector[condition] is the number of replicates for that condition
#
# OUTPUT: sensitivity and specificity
#         outFileBase.txt - text analysis info
#         outFileBase.<graphFormat> - graphical analysis
#
# FUNCTIONALITY: The function derives two output filenames from
#    outFileBase.  One will contain a plot of the data.  The other will
#    contain normalized values and specificity and sensitivity.
#
#    The convertCols.pl script can be used to convert data from
#    MAS and dChip to the format expected by this function.
#
qualityControl <- function(geneData, outFileBase, conditions)
{
# include necessary functions
source("sourceSPLUSbioinfo.ssc")
#remove control spots
#expected format Probe.Set.Name -Signal columns named "set.chip"
geneData <- geneData[substring(geneData$Probe.Set,1,4)!='AFFX',]
#remove Probe.Set.Name and make type numeric for the applies below
geneSigs <- as.matrix(geneData[,-1])
geneSigs.N <- btwn.norm(geneSigs)
#results to confirm btwn.norm is effective
normData <- cbind(apply(geneSigs,2,median,na.rm=T),
apply(geneSigs,2,quantile,0.75,na.rm=T)
-apply(geneSigs,2,quantile,0.25,na.rm=T),
apply(geneSigs.N,2,median,na.rm=T),
apply(geneSigs.N,2,quantile,0.75,na.rm=T)
-apply(geneSigs.N,2,quantile,0.25,na.rm=T))
# output data after normalization
outFile <- paste(outFileBase, ".txt", sep="")
write("Adjusted data values:", outFile, append=FALSE)
write(t(normData), outFile, ncolumns=4, append=TRUE)
write("\n\n", outFile, append=TRUE)
#Threshold and log adjusted avg difference
geneSigs.N[geneSigs.N<1] <- 1
geneSigs.N <- data.frame(logb(geneSigs.N,base=2))
# output graph data
outFileGph <- paste(outFileBase, ".", gphFormat, sep="")
if (gphFormat == "jpg")
{
bitmap(file=outFileGph, type="jpeg")
pairs(geneSigs.N,pch=".")
graphics.off()
}
if (gphFormat == "pdf")
{
pdf(file=outFileGph)
pairs(geneSigs.N, pch=".")
dev.off()
}
#calculate values for specificity and sensitivity
tmp.cor <- cor(geneSigs.N,use="complete.obs")
write("Correlation: ", outFile, append=TRUE)
write(t(tmp.cor), outFile, ncolumns=9, append=TRUE)
write("\n", outFile, append=TRUE)
numConditions <- length(conditions)
meanList <- c()
x <- 1
rowbase <- 1
while (x <= numConditions)
{
replicates <- conditions[x]
rowIdx <- rowbase
rowLess <- rowbase + replicates - 1
colLess <- rowbase + replicates
while (rowIdx < rowLess)
{
colIdx <- rowIdx + 1
while (colIdx < colLess)
{
meanList <- c(meanList, tmp.cor[rowIdx,colIdx])
colIdx <- colIdx + 1
}
rowIdx <- rowIdx + 1
}
rowbase <- rowIdx+1
x <- x + 1
}
specificity <- mean(meanList)
sensitivity <- specificity - mean(
setdiff(tmp.cor[lower.tri(tmp.cor)], meanList))
#output sensitivity and specificity values
write("Specificity: ", outFile, append=TRUE)
write(specificity, outFile, append=TRUE)
write("\n", outFile, append=TRUE)
write("Sensitivity: ", outFile, append=TRUE)
write(sensitivity, outFile, append=TRUE)
write("\n", outFile, append=TRUE)
lowess.nor.mult(geneSigs.N, outFile)
return(sensitivity, specificity)
} #qualityControl
# This function performs lowess normalization on
# gene chip data. The function is intended to handle data for
# up to 20 chips.
#
# INPUT: array with columns of data representing pairs on which to perform
# lowess normalization
# outfile - name of output file for output data
#
# OUTPUT: out - array of x, y normalization values.
# outfile - text analysis info
#
# FUNCTIONALITY:
# This function will assume the first column in the input array is the column
# to compare all other columns against. It then performs lowess normalization
# for each column.
#
lowess.nor.mult <- function(datacols, outFile)
{
out <- c()
#drop rows with missing values up front so x and all diff columns stay aligned
#(removing NAs pairwise inside the loop would shrink x across iterations)
datacols <- datacols[!apply(is.na(datacols), 1, any),]
numCols <- ncol(datacols)
x <- datacols[,1]
for (i in 2:numCols)
{
y <- datacols[,i]
fit <- lowess(x+y, y-x)
# moc: changed from approx(fit,x+y)
diff.fit <- approx(fit$x,fit$y,x+y)
diffy <- y - diff.fit$y
out <- cbind(out, diffy)
}
outData <- cbind(x, out)
write("Lowess norm values: ", outFile, append=TRUE)
write(outData, outFile, ncolumns=numCols, append=TRUE)
write("\n", outFile, append=TRUE)
return(out)
}
### MAIN ###
### REPLACE inputDataFile ###
geneData <- read.delim("inputDataFile")
### REPLACE conds
conditions <- c(conds)
### REPLACE graphFormat
gphFormat <- "graphFormat"
### REPLACE outputFile ###
qualityControl(geneData, "outputFile", conditions)
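
The `### REPLACE token ###` markers above flag placeholders that a wrapper substitutes into the `.rw` template before the script is run. Judging by the Rwrapper.pl file named in this commit, that substitution could look like the sketch below — an assumption about the mechanism, not Rwrapper.pl's actual code:

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical sketch of the placeholder substitution a wrapper such as
# Rwrapper.pl could perform on a .rw template before execution.
sub fill_template {
    my ($text, %subs) = @_;
    for my $token (keys %subs) {
        # replace every literal occurrence of the token with its value
        $text =~ s/\Q$token\E/$subs{$token}/g;
    }
    return $text;
}

my $line   = 'geneData <- read.delim("inputDataFile")';
my $filled = fill_template($line, inputDataFile => '/tmp/signals.txt');
print "$filled\n";   # geneData <- read.delim("/tmp/signals.txt")
```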
--- NEW FILE: statAnalysis.rw ---
# This function performs traditional fold change computation on
# gene chip data. The function is intended to handle data for
# up to 20 chips.
#
# INPUT: dataframe on which no log transformation has been done
# outFile - output filename for text output
# conditions - a vector describing experiment set
# length is the number of conditions
# vector[condition] is number of replicates for condition
#
# OUTPUT:
# outFile.out - text analysis info
#
# FUNCTIONALITY:
#
#
traditionalFoldChange <- function(notLogData, conditions, outFile, labels)
{
numConditions <- length(conditions)
curCol <- 1
avgCols <- c()
colAvg <- c()
for (x in 1:numConditions)
{
hybrids <- conditions[x]
#mode must be numeric
# divide by num replicates
sumCol <- as.array(apply(notLogData[,curCol:(hybrids + curCol -1)],1,sum)/hybrids)
curCol <- curCol + hybrids
colAvg <- rbind(colAvg, sumCol)
}
numAverages <- nrow(colAvg)
tradFC <- c()
i <- 1
for (y in 1:numAverages)
{
for (z in 1:numAverages)
{
if (y < z)
{
tradFC <- cbind(tradFC, (colAvg[y,] / colAvg[z,]))
dimnames(tradFC)[[2]][i] <- paste(labels[y], labels[z], sep='-')
i <- i + 1
}
}
}
tradFC.N <- logb(tradFC, base=2)
outCols <- ncol(tradFC.N)
write("Traditional Fold Change: ", outFile, append=TRUE)
write("Log 2 transformation of ratio of averages: ", outFile, append=TRUE)
write(dimnames(tradFC.N)[[2]], outFile, ncolumns=outCols, append=TRUE)
write(t(tradFC.N), outFile, ncolumns=outCols, append=TRUE)
write("\n", outFile, append=TRUE)
return (tradFC.N)
}
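The fold-change computation reduces to: average the replicate columns of each condition, then take the log2 ratio of the averages for every distinct condition pair. A minimal Python sketch of the same arithmetic (illustrative only; column grouping follows the conds convention documented in Rwrapper.pl):

```python
import math
from itertools import combinations

def traditional_fold_change(rows, conditions):
    """log2 ratio of per-condition replicate means, per condition pair.

    conditions[k] is the replicate count of condition k; the columns of
    each row are grouped in that order, as in traditionalFoldChange.
    """
    results = []
    for row in rows:
        means, col = [], 0
        for reps in conditions:
            means.append(sum(row[col:col + reps]) / reps)
            col += reps
        results.append([math.log2(a / b) for a, b in combinations(means, 2)])
    return results
```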
logFoldChange <- function(logData, conditions, outFile, labels)
{
numConditions <- length(conditions)
curCol <- 1
avgCols <- c()
colAvg <- c()
for (x in 1:numConditions)
{
hybrids <- conditions[x]
#mode must be numeric
sumCol <- as.array(apply(logData[,curCol:(hybrids + curCol -1)],1,sum)/hybrids)
curCol <- curCol + hybrids
colAvg <- rbind(colAvg, sumCol)
}
numAverages <- nrow(colAvg)
logFC <- c()
i <- 1
for (y in 1:numAverages)
{
for (z in 1:numAverages)
{
if (y < z)
{
logFC <- cbind(logFC, (colAvg[y,] - colAvg[z,]))
dimnames(logFC)[[2]][i] <- paste(labels[y], labels[z], sep='-')
i <- i + 1
}
}
}
outCols <- ncol(logFC)
write("LogFoldChange: ", outFile, append=TRUE)
write("Average difference of log 2 transformation: ", outFile, append=TRUE)
write(dimnames(logFC)[[2]], outFile, ncolumns=outCols, append=TRUE)
write(t(logFC), outFile, ncolumns=outCols, append=TRUE)
write("\n", outFile, append=TRUE)
return(logFC)
}
tTest <- function(logData, conditions, outFile, labels)
{
# tTest needs to be calculated for each distinct pair of conditions
numConds <- length(conditions)
yColStart <- 1
allResults <- c()
for (y in 1:numConds)
{
zColStart <- 1
yColEnd <- yColStart + conditions[y] - 1
for (z in 1:numConds)
{
if (y < z)
{
results <- data.frame(matrix(NA,dim(logData)[1],3))
names(results) <- c("t.stat", "T.p.value","bonferr")
zColEnd <- zColStart + conditions[z] -1
A <- logData[,yColStart:yColEnd]
B <- logData[,zColStart:zColEnd]
mean.A <- apply(A, 1, mean, na.rm=T)
mean.B <- apply(B, 1, mean, na.rm=T)
var.A <- apply(A, 1, var, use="complete.obs")
var.B <- apply(B, 1, var, use="complete.obs")
results$t.stat <- (mean.A - mean.B)/
sqrt( var.A/(ncol(A)-1) + var.B/(ncol(B)-1))
p.value <- pt(results$t.stat, ncol(A) + ncol(B) -2)
results$T.p.value <- apply( cbind(p.value, 1- p.value), 1, min)
results$bonferr <- ifelse(results[,2]*dim(results)[1]>1,1,results[,2]*dim(results)[1])
colPrefix <- paste(labels[y], labels[z], sep="-")
num <- length(names(results))
for (t in 1:num)
{
names(results)[t] <- paste(colPrefix, names(results)[t], sep="-")
}
if (length(allResults) == 0)
{
allResults <- results
} else
{
allResults <- cbind(allResults, results)
}
}
zColStart <- zColStart + conditions[z]
}
yColStart <- yColStart + conditions[y]
}
outCols <- ncol(allResults)
write("Results from T-test: ", outFile, append=TRUE)
write(names(allResults), outFile, ncolumns=outCols, append=TRUE)
write(t(allResults), outFile, ncolumns=outCols, append=TRUE)
write("\n", outFile, append=TRUE)
return(allResults)
} #tTest
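The per-gene statistic in tTest is the mean difference divided by a pooled standard error. A Python sketch mirroring the formula exactly as the R code writes it (variances divided by n - 1, carried over from the code above as-is rather than the textbook n form):

```python
import math

def gene_t_stat(a, b):
    """Per-gene t statistic matching the tTest function above."""
    def mean(v):
        return sum(v) / len(v)

    def var(v):  # sample variance (n - 1 denominator)
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    return (mean(a) - mean(b)) / math.sqrt(
        var(a) / (len(a) - 1) + var(b) / (len(b) - 1))
```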
LPE <- function(logData, conditions, outFile, labels)
{
source("sourceLPE.ssc")
# needs to be calculated for each distinct pair of conditions
numConds <- length(conditions)
yColStart <- 1
allResults <- c()
i <- 0
for (y in 1:numConds)
{
zColStart <- 1
yColEnd <- yColStart + conditions[y] - 1
for (z in 1:numConds)
{
if (y < z)
{
zColEnd <- zColStart + conditions[z] -1
A <- logData[,yColStart:yColEnd]
B <- logData[,zColStart:zColEnd]
var.A <- baseOlig.error(A, q=0.05)
var.B <- baseOlig.error(B, q=0.05)
AB.lpe <- lpe(A, B, var.A, var.B)
index <- conditions[y] + conditions[z] + 5
allResults <- cbind(allResults, AB.lpe[,index])
i <- i + 1
dimnames(allResults)[[2]][i] <- paste(labels[y], labels[z], 'pvalue', sep='-')
}
zColStart <- zColStart + conditions[z]
}
yColStart <- yColStart + conditions[y]
}
outCols <- ncol(allResults)
write("Results from LPE: ", outFile, append=TRUE)
write(dimnames(allResults)[[2]], outFile, ncolumns=outCols, append=TRUE)
write(t(allResults), outFile, ncolumns=outCols, append=TRUE)
write("\n", outFile, append=TRUE)
return(allResults)
}#LPE
### MAIN ###
### REPLACE inputDataFile
geneData <- read.delim("inputDataFile")
### REPLACE conds
conditions <- c(conds)
### REPLACE outputFile
outFile <- "outputFile.txt"
### REPLACE condLabels
labels <- c(condLabels)
# prepare data
source("sourceSPLUSbioinfo.ssc")
geneData <- geneData[substring(geneData$Probe.Set,1,4)!='AFFX',]
geneSigs <- as.matrix(geneData[,-1])
geneSigs.N <- btwn.norm(geneSigs)
geneSigs.N[geneSigs.N<1] <- 1
write("Statistical Analysis Results", outFile, append=FALSE)
traditionalFoldChange(geneSigs.N, conditions, outFile, labels)
geneSigs.N <- logb(geneSigs.N,base=2)
logFoldChange(geneSigs.N, conditions, outFile, labels)
tTest(geneSigs.N, conditions, outFile, labels)
LPE(geneSigs.N, conditions, outFile, labels)
--- NEW FILE: westfallYoung.rw ---
### MAIN ###
### REPLACE inputDataFile
geneData <- read.delim("inputDataFile")
### REPLACE conds
conditions <- c(conds)
### REPLACE outputFile
outFile <- "outputFile.txt"
### REPLACE condLabels
labels <- c(condLabels)
# prepare data
source("sourceSPLUSbioinfo.ssc")
geneData <- geneData[substring(geneData$Probe.Set,1,4)!='AFFX',]
geneSigs <- as.matrix(geneData[,-1])
geneSigs.N <- btwn.norm(geneSigs)
geneSigs.N[geneSigs.N<1] <- 1
geneSigs.N <- logb(geneSigs.N,base=2)
#Westfall & Young
library(multtest)
numConds <- length(conditions)
yColStart <- 1
allResults <- c()
for (y in 1:numConds)
{
zColStart <- 1
yColEnd <- yColStart + conditions[y] - 1
for (z in 1:numConds)
{
if (y < z)
{
zColEnd <- zColStart + conditions[z] - 1
A <- geneSigs.N[,yColStart:yColEnd]
Albl <- rep(0, conditions[y])
B <- geneSigs.N[,zColStart:zColEnd]
Blbl <- rep(1, conditions[z])
classlbl <- c(Albl, Blbl)
colPrefix <- paste(labels[y], labels[z], sep="-")
AB <- cbind(A,B)
resT<-mt.maxT(AB, classlbl, B=1E8)
num <- length(names(resT))
for (t in 1:num)
{
names(resT)[t] <- paste(colPrefix, names(resT)[t],
sep="-")
}
if (length(allResults) == 0)
{
allResults <- resT
} else
{
allResults <- cbind(allResults, resT)
}
}
zColStart <- zColStart + conditions[z]
}
yColStart <- yColStart + conditions[y]
}
outCols <- ncol(allResults)
write("Results from Westfall & Young: ", outFile, append=FALSE)
write(names(allResults), outFile, ncolumns=outCols, append=TRUE)
write(t(allResults), outFile, ncolumns=outCols, append=TRUE)
write("\n", outFile, append=TRUE)
--- NEW FILE: Rwrapper.pl ---
#!/usr/bin/perl -w
use strict;
use Getopt::Long 2.13;
=head1 NAME
Rwrapper - spawn and report on specified R analysis
=head1 SYNOPSIS
./Rwrapper --kind <qualityControl|statAnalysis> --email <addy to notify>
--settings <name=arg>
Multiple settings are possible and represent values in the script
that should be changed--essentially a command line arg hack.
Current settings include: conds, inputDataFile, outputFile,
path, graphFormat.
./Rwrapper --kind qualityControl
--email "tdj\@virginia.edu"
--settings conds="2,4,5"
--settings condLabels='"min10","min10","hr4"'
--settings inputDataFile="dChipAfter.txt"
--settings outputFile="custom"
--settings fileURI="http://www.somelink.net"
--settings graphFormat="pdf"
--settings path="/home/tdj"
=head1 DESCRIPTION
The Rwrapper is one component of a web solution to spawn
various R analyses. If an email address is specified,
mail will be sent to that user when the analysis is
complete. The kind command line parameter specifies
the type of analysis to be performed. The settings
parameters name variables that need to be set inside the
R environment before the analysis function is called.
Definition of settings parameters:
conds - The number of elements in this list specify the
number of conditions being analyzed. Each element specifies
the number of replicates for a condition. These values are
expected to correspond to the order of the data in the
inputDataFile. Thus, if conds="2,4,5", then the data file
should contain 3 conditions. The first two columns contain values
for two replicates of the first condition. The next four columns
contain values for four replicates of the second condition. The
last five columns contain data for five replicates of the third
condition.
condLabels - a column name for the condition - used to make results
more meaningful. Note that R doesn't like column names to start
with a number and will prefix such labels with an X
inputDataFile - the name of the file containing signal data. Should
correspond to the condition structure as specified above. Currently
expects tab separated data.
outputFile - basename of the file to store output in. Each analysis
will output a text file named "basename.txt". If other output files
are generated (i.e. graphical) they will take the form "basename.jpg"
or "basename.pdf"
graphFormat - preferred format for graphical output. Accepts jpg or
pdf.
fileURI - (optional) used by genex web interface to specify where the
result file will be posted. Affects the email notification.
path - where to write temporary files. Defaults to ".".
=cut
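The conds convention above (replicate counts given in data-column order) can be made concrete with a small sketch. This Python helper is hypothetical and not part of Rwrapper:

```python
def condition_columns(conds):
    """Map the conds setting (e.g. "2,4,5") to per-condition column ranges.

    Columns are 0-based here; the data file lists all replicates of the
    first condition, then the second, and so on.
    """
    ranges, start = [], 0
    for reps in (int(c) for c in conds.split(",")):
        ranges.append(range(start, start + reps))
        start += reps
    return ranges
```

So conds="2,4,5" describes a file with 3 conditions spread over 11 columns: the first two, the next four, and the last five.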
#command line options
my $kind = "" ;
my $email = "";
my %settings;
my @fileURI=();
#main
getOptions();
# check for lock file
my $lockFile = $settings{path} . $kind . ".lck";
if (-s $lockFile)
{
die "You can only run one $kind analysis at a time\n";
}
else
{
open(LOCK, "> $lockFile") || die "Unable to open $lockFile:$!\n";
print LOCK $$;
close(LOCK);
}
my @kindList;
# if kind isn't specified perform all analyses
if ($kind eq "")
{
@kindList = ("qualityControl", "statAnalysis", "westfallYoung");
}
else
{
push(@kindList, $kind);
}
foreach my $temp (@kindList)
{
$kind = $temp;
parseSettings($lockFile);
initiateAnalysis($lockFile);
}
emailNotify(@fileURI);
#remove lock file
clearLock($lockFile);
# end main
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
GetOptions('kind=s' => \$kind,
'email=s' => \$email,
'settings=s' => \%settings,
'fileURI=s' => \@fileURI,
'help|?' => \$help);
} #have command line args
else
{
$help = 1;
}
if ($help)
{
print "Usage: \n";
print "./Rwrapper.pl \n --kind [qualityControl | statAnalysis | westfallYoung] \n";
print "--email <email address - escape the \@ sign> \n";
print "--fileURI <value> --fileURI <value>\n";
print "--settings <name>=<value>\n";
print " setting names: inputDataFile conds condLabels outputFile";
print " path graphFormat\n";
exit;
};
# verify that, if an email address is specified, we have a fileURI
if (($email ne "") && ($#fileURI == -1))
{
die "You must specify a fileURI setting if an email setting is specified";
}
$settings{path} = "./" if (!exists $settings{path});
# ensure that path ends in /
my $temp = $settings{path};
my $lastchar = chop($temp);
$settings{path} .= "/" if ($lastchar ne "/");
# where is inputFile - user might give us complete path to input file
# or may give us name and assume we will prepend path
if (exists($settings{inputDataFile}))
{
if (! -s $settings{inputDataFile})
{
my $other = $settings{path} . $settings{inputDataFile};
$settings{inputDataFile} = $other if (-s $other);
}
}
#what about output file - if they don't supply specific path info
#then prepend settings path
if (exists($settings{outputFile}))
{
if ($settings{outputFile} !~ /\//)
{
$settings{outputFile} = $settings{path} . $settings{outputFile};
}
}
} # getOptions
sub parseSettings
{
my $lockFile = shift;
open(INFILE, "$kind.rw") || die "Couldn't open $kind.rw: $!\n";
open(OUTFILE, "> $settings{path}/$kind.r") ||
((clearLock($lockFile)) &&
(die "Couldn't open $settings{path}/$kind.r: $!\n"));
my $line;
while ($line = <INFILE>)
{
if ($line =~ /### REPLACE ([\S]*)/)
{
#print "Line: $line\n";
#print "About to replace $1\n";
my $match = $1;
my $nextline = <INFILE>;
# check needed to handle non-existent string
if (!exists $settings{$match})
{
clearLock($lockFile);
die "Require $match parameter for $kind\n";
}
else
{
$nextline =~ /(.*)$match(.*)/;
$line = "$1$settings{$match}$2\n";
}
}
print OUTFILE $line;
} #while
close(INFILE);
close(OUTFILE);
} #parseSettings
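The template convention implemented by parseSettings (a "### REPLACE name" marker line names a placeholder to substitute on the following line; the marker line itself is dropped from the output) can be sketched in Python. This helper is illustrative, not part of the toolchain:

```python
def substitute(lines, settings):
    """Apply the '### REPLACE name' convention used by parseSettings."""
    out, it = [], iter(lines)
    for line in it:
        if line.startswith("### REPLACE"):
            name = line.split()[2]
            if name not in settings:
                raise KeyError(f"Require {name} parameter")
            # the placeholder is replaced on the *next* line;
            # the marker line is not copied to the output
            out.append(next(it).replace(name, settings[name]))
        else:
            out.append(line)
    return out
```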
sub initiateAnalysis
{
my $lockFile = shift;
if (($kind ne "qualityControl") &&
($kind ne "statAnalysis") &&
($kind ne "westfallYoung"))
{
clearLock($lockFile);
die "Invalid command type: $kind\n";
}
# note: this must manually be kept in sync with qualityControl requirements
if ($kind eq "qualityControl")
{
if ((!exists $settings{conds}) ||
(!exists $settings{inputDataFile}) ||
(!exists $settings{outputFile}) ||
(!exists $settings{graphFormat}))
{
clearLock($lockFile);
die "Settings conds, inputDataFile, outputFile, and graphFormat must have data for a quality control analysis\n";
}
if (($settings{graphFormat} ne "pdf") &&
($settings{graphFormat} ne "jpg") &&
($settings{graphFormat} ne "png"))
{
clearLock($lockFile);
die "graphFormat must be pdf, jpg, or png\n";
}
}
#note: this must manually be kept in sync with statAnalysis requirements
if ($kind eq "statAnalysis")
{
if ((!exists $settings{conds}) ||
(!exists $settings{inputDataFile}) ||
(!exists $settings{outputFile}))
{
clearLock($lockFile);
die "Settings conds, inputDataFile, and outputFile must have data for stat analysis\n";
}
}
#note: this must manually be kept in sync with westfallYoung requirements
if ($kind eq "westfallYoung")
{
if ((!exists $settings{conds}) ||
(!exists $settings{inputDataFile}) ||
(!exists $settings{outputFile}))
{
clearLock($lockFile);
die "Settings conds, inputDataFile, and outputFile must have data for a Westfall-Young analysis\n";
}
}
my $script = $settings{path} . "$kind.r";
my $outScript = $settings{path} . $kind . ".Rout";
my $done = `R CMD BATCH $script $outScript`;
} #initiateAnalysis
sub emailNotify
{
my @fURI = @_;
if ($email)
{
if ($email =~ /.+\\@.+/)
{
open (MAIL, "| mail -s \"Statistical Analysis Complete\" $email") or
warn "Can't send mail to $email: $!\n";
print MAIL "Your statistical analysis is complete.\n";
print MAIL "File(s) are at: \n ";
foreach (@fURI)
{
print MAIL "$_\n";
}
close (MAIL);
}
else
{
warn "Email address needs a \\ in front of \@. Remember to use double quotes from the command line.\n";
}
}
} # emailNotify
sub clearLock
{
my $lockFile = shift;
unlink($lockFile);
}
=head1 NOTES
=head1 AUTHOR
Teela James
--- NEW FILE: sourceSPLUSbioinfo.ssc ---
##################################################################
##### NIH S-PLUS Training in Bioinformatics, Feb. 20, 2002 #######
##### Jae K. Lee and Jennifer Gibson #######
##### Division of Biostatistics and Epidemiology #######
##### University of Virginia School of Medicine #######
##### Michael OConnell #######
##### Insightful Corporation #######
##### This file contains code for functions #######
##### Companion file contains code for running analyses #######
##### assumes running from users\splusbioinfo #######
##### data in users\splusbioinfo\NIH #######
##### scripts in users\splusbioinfo #######
####### LPE SIGNIFICANCE EVALUATION ##########################################################
# Significance evaluation based on Local-Pooled-Error (LPE) distribution #
# Nadon R., Shi P., Skandalis A., Woody E., Hubschle H., Susko E., Rghei N., #
# Ramm P. (2001). Statistical inference methods for gene expression arrays, #
# Proceedings of SPIE, BIOS 2001, Microarrays: Optical Technologies and Informatics, 46-55. #
# Lee J.K. (2002). Discovery and validation of microarray gene expression patterns, #
# LabMedia International, to appear. #
##############################################################################################
pvals <- function(diffmat,sdmat)
{
ngenes <- dim(diffmat)[1]
nrepls <- dim(diffmat)[2]
pright <- matrix(NA, nrow=ngenes, ncol=nrepls)
pleft <- matrix(NA, nrow=ngenes, ncol=nrepls)
for(i in 1:ngenes)
{
if(i%%500==0) print(i)
for (j in 1:nrepls)
{
if(!is.na(diffmat[i,j]))
pright[i,j] <- 1-pnorm(diffmat[i,j],mean=0,sd=sdmat[i,j])
pleft[i,j] <- pnorm(diffmat[i,j],mean=0,sd=sdmat[i,j])
}
}
#Calculate left tail product of p-values
prprod <- apply(pright,1,FUN=prod)
#Calculate right tail product of p-values
plprod <- apply(pleft,1,FUN=prod)
#Keep the minimum of the left tail product, right tail product
minp <- apply(cbind(prprod,plprod),1,FUN=min)
pout <- cbind(diffmat,prprod,plprod,2*minp)
return(pout)
}
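For one gene, pvals multiplies the per-replicate tail probabilities in each direction and doubles the smaller product. A Python sketch of that computation for a single gene (illustrative; the function names here are not from the original):

```python
import math

def normal_cdf(x, sd):
    """P(N(0, sd^2) <= x) via the error function."""
    return 0.5 * (1 + math.erf(x / (sd * math.sqrt(2))))

def two_sided_p(diffs, sds):
    """Two-sided significance as in pvals: product of per-replicate
    tail probabilities in each direction, doubled minimum."""
    pright = math.prod(1 - normal_cdf(d, s) for d, s in zip(diffs, sds))
    pleft = math.prod(normal_cdf(d, s) for d, s in zip(diffs, sds))
    return 2 * min(pright, pleft)
```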
####### Lowess normalization between two channels or two chips ###############
# Yang, Y.H., Dudoit, S., Luu, P., and Speed, T.P. (2001). #
# Normalization for cDNA microarray data, Proceedings of SPIE, BIOS 2001, #
# Microarrays: Optical Technologies and Informatics, 141-152 #
##############################################################################
lowess.nor <- function(x,y)
{
# x = log(cy3 or chip1) and y = log(cy5 or chip2)
out <- list()
na.point <- (1:length(x))[!is.na(x) & !is.na(y)]
x <- x[na.point]; y <- y[na.point]
fit <- lowess(x+y, y-x)
# moc: changed from approx(fit,x+y)
diff.fit <- approx(fit$x,fit$y,x+y)
out$y <- y - diff.fit$y
out$x <- x
return(out)
}
####### Inter-Quartile-Range NORMALIZATION #######
### Find interquartile range ###
iqrfn <- function(x)
{
quantile(x,0.75,na.rm=T)-quantile(x,0.25,na.rm=T)
}
### IRQ normalization between channels or chips ###
btwn.norm <- function(tmp)
{
#Adjust IQ ranges to be the same as max of IQRs
divisor <- matrix(rep(apply(tmp,2,iqrfn)/max(apply(tmp,2,iqrfn)),
dim(tmp)[1]), nrow=dim(tmp)[1],byrow=T)
tmp.adj <- tmp/divisor
#Adjust medians to be the same as max of medians
adjustment <- matrix(rep(max(apply(tmp.adj,2,median,na.rm=T))-apply(tmp.adj,2,median,na.rm=T),
dim(tmp.adj)[1]),nrow=dim(tmp.adj)[1],byrow=T)
tmp.adj2 <- tmp.adj+adjustment
return(tmp.adj2)
}
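btwn.norm rescales every column so its inter-quartile range matches the largest IQR, then shifts every column so its median matches the largest median. A Python sketch of the same two steps (it assumes linear-interpolation quantiles, which may differ slightly from the S-PLUS default):

```python
def iqr_normalize(columns):
    """Between-chip normalization mirroring btwn.norm: equalize IQRs
    to the maximum IQR, then equalize medians to the maximum median."""
    def quantile(v, q):  # linear interpolation between sorted values
        s = sorted(v)
        idx = q * (len(s) - 1)
        lo = int(idx)
        frac = idx - lo
        return s[lo] if frac == 0 else s[lo] * (1 - frac) + s[lo + 1] * frac

    iqrs = [quantile(c, 0.75) - quantile(c, 0.25) for c in columns]
    scaled = [[x * max(iqrs) / i for x in c] for c, i in zip(columns, iqrs)]
    top = max(quantile(c, 0.5) for c in scaled)
    return [[x + top - quantile(c, 0.5) for x in c] for c in scaled]
```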
### Functions for clustering and heatmap ###
flip2.fn<-function(x) {
# flips a matrix x side-to-side (for display)
# x is a matrix
rows<-dim(x)[1]
for (i in 1:rows)
{
x[i,]<-rev(x[i,])
}
x
}
#######
distcor.fn<-function(x) {
# computes distance matrix based on correlation
# x is a matrix
# distances are computed among rows of x
# this function is very slow and could be optimized
if (!is.matrix(x)) {stop("X must be a matrix")}
rows <- dim(x)[1]
if (rows>100) {cat("Lots of rows, this will be slow.\n")}
distmat<-matrix(NA,ncol=rows,nrow=rows)
for (i in 1:rows)
{
cat(i,"\n")
for (j in i:rows)
{
this.cor<-cor(x[i,],x[j,],na.method="omit")
distmat[i,j]<-this.cor
distmat[j,i]<-this.cor
}
}
distmat
}
#######
distcor2.fn <-function(x) {
# computes distance matrix based on correlation
# computes distance among rows of x
if (!is.matrix(x)) {stop("X must be a matrix")}
rows <- dim(x)[1]
cols <- dim(x)[2]
means <- apply(x,1,mean, na.rm=T)
sigmas <- sqrt(apply(x,1,var, na.method="omit"))
distmat <- matrix(NA,ncol=rows,nrow=rows)
x <- sweep(x,1,means)
myna <- function(x) {sum(is.na(x))}
row.nas <- apply(x,1,myna)
#ij.nas is the number which are not NAs
ij.nas <- (1-is.na(x))%*%(1-t(is.na(x)))
#zero will cause no contribution to sum of squares
x[is.na(x)] <- 0
#this could maybe be replaced by a sweep
for (i in 1:rows)
{
x[i,]<-x[i,]/sigmas[i]
}
for (i in 1:rows)
{
#uncomment the line below to print iteration number while running
#cat(i,"\n")
sumsq <- (x[i,])%*%(t(x))
#sumsq is a vector. this.cor is a vector
this.cor <- ((1/(ij.nas[i,]-1)))*sumsq
distmat[i,] <- this.cor
}
distmat
}
#######
plclust2.fn<-function(x.h,rotate.me=F, colors=NA, lwd=1, ...) {
# this function is like the plclust function, but allows a 90 degree
# rotation of the dendrogram and color coding.
# x.h is a hierarchical clustering object (the output of hclust)
# get info from plclust
x.pl <- plclust(x.h,las=1,labels=F,plot=F)
# set up the plotting area
if (!rotate.me)
{
plot(1,1,type="n",xlim=c(1,max(x.h$order)),
ylim=c(min(x.pl$y),max(x.pl$yn)),yaxt="n",bty="n",xlab="",ylab="", ...)
}
else
{
plot(1,1,type="n",ylim=c(1,max(x.h$order)),
xlim=c(-max(x.pl$yn),-min(x.pl$y)),xaxt="n",bty="n",xlab="",ylab="", ...)
}
# prepare for plotting
n <- max(x.h$order)
nodecolors <- rep(1,(n-1))
if (any(is.na(colors)))
{
colors <- rep(1,n)
}
# plot each branch of the tree
for (i in 1:(n-1))
{
temp <- x.h$merge[i,]
if (temp[1]<0)
{
x1 <- x.pl$x[abs(temp[1])]
# y1<-x.pl$y[abs(temp[1])]
# use the line below for leaves which extend to the boundary
y1 <- (-min(x.pl$y))
}
else
{
x1<-x.pl$xn[abs(temp[1])]
y1<-x.pl$yn[abs(temp[1])]
}
x2 <- x.pl$xn[i]
y2 <- x.pl$yn[i]
if (temp[2]<0)
{
x3 <- x.pl$x[abs(temp[2])]
# y3<-x.pl$y[abs(temp[2])]
# use the line below for leaves which extend to the boundary
y3 <- (-min(x.pl$y))
}
else
{
x3 <- x.pl$xn[abs(temp[2])]
y3 <- x.pl$yn[abs(temp[2])]
}
# set up colors for this branch
# right side
if(temp[1]<0)
{
color1<-colors[abs(temp[1])]
}
else
{
color1<-nodecolors[temp[1]]
}
# left side
if(temp[2]<0)
{
color3<-colors[abs(temp[2])]
}
else { color3<-nodecolors[temp[2]]
}
# middle
if(color1==color3)
{
color2<-color1
nodecolors[i]<-color2
}
else
{
color2<-1
}
# draw the branch
if (!rotate.me)
{
lines(c(x1,x1),c(y1,y2),col=color1,lwd=lwd)
lines(c(x1,x3),c(y2,y2),col=color2,lwd=lwd)
lines(c(x3,x3),c(y2,y3),col=color3,lwd=lwd)
# lines(c(x1,x1,x3,x3),c(y1,y2,y2,y3))
}
else
{
lines(c(-y1,-y2),c(x1,x1),col=color1,lwd=lwd)
lines(c(-y2,-y2),c(x1,x3),col=color2,lwd=lwd)
lines(c(-y2,-y3),c(x3,x3),col=color3,lwd=lwd)
}
}
# end of loop over all branches
}
###################### readMicroarrayData ##############################
# This function is used to read the raw experimental data from yeast #
# genome microarrays posted on http://genome-www.stanford.edu/swisnf/ #
# and generate a dataset, my.s2, that is used in a mixed model #
# analysis of these data in the vein of Wolfinger, et al. (2000). #
# A complete dataset, my.s2.complete, of which my.s2 is a subset, #
# can also be generated by this function. #
# #
# To use this function, download all 12 datasets (*.txt files) from #
# the URL listed above to a directory. In this directory, rename #
# the 12 downloaded files to a sequence of files with filenames #
# distinguished only by the numbers 1, 2, ..., 12. For example, one #
# could do the following: #
# move snf2ypda.txt sudarsanam1.txt #
# move snf2ypdc.txt sudarsanam2.txt #
# move snf2ypdd.txt sudarsanam3.txt #
# move snf2mina.txt sudarsanam4.txt #
# move snf2minc.txt sudarsanam5.txt #
# move snf2mind.txt sudarsanam6.txt #
# move swi1ypda.txt sudarsanam7.txt #
# move swi1ypdc.txt sudarsanam8.txt #
# move swi1ypdd.txt sudarsanam9.txt #
# move swi1mina.txt sudarsanam10.txt #
# move swi1minc.txt sudarsanam11.txt #
# move swi1mind.txt sudarsanam12.txt. #
# Note that these 12 txt files should be the only 12 txt files #
# contained in this directory. No other txt files except these 12 #
# should be in this directory during the running of this function. #
# #
# Then, run this function in S-PLUS 6 by #
# my.s2 <- readMicroarrayData("path of the directory") #
# my.s2.complete will also be generated by running the function. #
# #
# Note: It takes about 2 minutes on a 1.4 GHz machine to run. #
########################################################################
readMicroarrayData <- function(path=".",out1="my.s2.complete",out2="my.s2",where=1)
{
files <- dos(paste("dir", paste("\"", path, "\"", sep = ""), "/B /A:-D"))
files <- files[grep(".[Tt][Xx][Tt]",files)]
PathNames <- paste(path, "/", files, sep="")
DatasetNames <- substring(files, 1, nchar(files)-4)
maxnchar <- max(nchar(DatasetNames))
for (i in 1:length(files)) {
assign(DatasetNames[i], importData(PathNames[i], type="ASCII",
delimiter="\t", separateDelimiters=T))
NonEmptyRows <- which(!is.na(get(DatasetNames[i])[,1]) |
!is.na(get(DatasetNames[i])[,2]))
assign(DatasetNames[i],get(DatasetNames[i])[NonEmptyRows,])
data <- get(DatasetNames[i])
data[,"TYPE"] <- casefold(data[,"TYPE"],upper=T)
data[,"NAME"] <- casefold(data[,"NAME"],upper=T)
data[,"GENE"] <- casefold(data[,"GENE"],upper=T)
NAME.NA.Rows <- which(is.na(get(DatasetNames[i])[,"NAME"]))
GENE.NA.Rows <- which(is.na(get(DatasetNames[i])[,"GENE"]))
if (length(NAME.NA.Rows) > 0)
data[NAME.NA.Rows,"NAME"] <- data[NAME.NA.Rows,"TYPE"]
if (length(GENE.NA.Rows) > 0)
data[GENE.NA.Rows,"GENE"] <- data[GENE.NA.Rows,"NAME"]
data[,"NAME"] <- as.factor(data[,"NAME"])
data[,"GENE"] <- as.factor(data[,"GENE"])
data[,"TYPE"] <- as.factor(data[,"TYPE"])
nchari <- nchar(DatasetNames[i])
if (nchari == maxnchar-1)
my.array <- as.integer(substring(DatasetNames[i],nchari,nchari))
else if (nchari == maxnchar)
my.array <- as.integer(substring(DatasetNames[i],nchari-1,nchari))
spot <- as.integer(row.names(get(DatasetNames[i])))
assign(DatasetNames[i], cbind(data, data.frame(SPOT=spot)))
Flag0Rows <- which(get(DatasetNames[i])[,"FLAG"] == 0)
assign(DatasetNames[i],get(DatasetNames[i])[Flag0Rows,])
if (my.array <= 3) strain <- "snf2rich"
else if (my.array <= 6) strain <- "snf2mini"
else if (my.array <= 9) strain <- "swi1rich"
else if (my.array <= 12) strain <- "swi1mini"
my.diff <- get(DatasetNames[i])[,"CH1I"] - get(DatasetNames[i])[,"CH1B"]
logi <- rep(NA, length(my.diff))
logi[which(my.diff > 0)] <- logb(my.diff[which(my.diff > 0)], base=2)
Mutant <- cbind(get(DatasetNames[i]),
data.frame(ARRAY=my.array,STRAIN=strain,DIFF=my.diff,LOGI=logi))
strain <- "wildtype"
my.diff <- get(DatasetNames[i])[,"CH2I"] - get(DatasetNames[i])[,"CH2B"]
logi <- rep(NA, length(my.diff))
logi[which(my.diff > 0)] <- logb(my.diff[which(my.diff > 0)], base=2)
Wildtype <- cbind(get(DatasetNames[i]),
data.frame(ARRAY=my.array,STRAIN=strain,DIFF=my.diff,LOGI=logi))
assign(DatasetNames[i],
data.frame(rbind(Mutant,Wildtype),row.names=1:(2*dim(Mutant)[1])))
if(i == 1) my.s2.complete <- get(DatasetNames[1])
else my.s2.complete <- rbind(my.s2.complete, get(DatasetNames[i]))
}
my.s2.complete[,"STRAIN"] <- as.character(my.s2.complete[,"STRAIN"])
my.s2.complete <- my.s2.complete[order(my.s2.complete$STRAIN),]
my.s2.complete[,"STRAIN"] <- as.factor(my.s2.complete[,"STRAIN"])
my.s2.complete <- data.frame(my.s2.complete[order(my.s2.complete$ARRAY,
my.s2.complete$SPOT,
my.s2.complete$GENE,
my.s2.complete$NAME,
my.s2.complete$STRAIN),],
row.names=1:dim(my.s2.complete)[1])
my.s2 <- data.frame(gene=my.s2.complete[,"GENE"],name=my.s2.complete[,"NAME"],
array=my.s2.complete[,"ARRAY"],spot=my.s2.complete[,"SPOT"],
strain=my.s2.complete[,"STRAIN"],logi=my.s2.complete[,"LOGI"])
assign(out1,my.s2.complete,where=1)
assign(out2,my.s2,where=1)
return(my.s2)
}
|
|
From: <td...@us...> - 2002-11-15 20:44:10
|
Update of /cvsroot/genex/genex-server/site/webtools/analysis/test In directory usw-pr-cvs1:/tmp/cvs-serv24170/test Log Message: Directory /cvsroot/genex/genex-server/site/webtools/analysis/test added to the repository --> Using per-directory sticky tag `Rel-1_0_1-branch' |
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv14745
Removed Files:
Tag: Rel-1_0_1-branch
qualityControl.rw qualitycontrol.R Rwrapper.pl statAnalysis.rw
westfallYoung.rw sourceSPLUSbioinfo.ssc
Log Message:
Moved to new analysis directory
--- qualityControl.rw DELETED ---
--- qualitycontrol.R DELETED ---
--- Rwrapper.pl DELETED ---
--- statAnalysis.rw DELETED ---
--- westfallYoung.rw DELETED ---
--- sourceSPLUSbioinfo.ssc DELETED ---
|
|
From: <td...@us...> - 2002-11-15 20:17:17
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv13602
Added Files:
Tag: Rel-1_0_1-branch
runtree.pl
Log Message:
Almost done initial draft
--- NEW FILE: runtree.pl ---
#!/usr/bin/perl -w
#TODO - change to accommodate version number as part of name
=head1 NAME
runtree - initiates execution of an analysis tree stored in the db
=head1 SYNOPSIS
./runtree.pl --treeName <tree_name> --treeOwner <owner>
=head1 DESCRIPTION
=cut
# use strict;
use DBI;
use Getopt::Long 2.13;
require "sessionlib.pl";
#command line options
my $debug ='';
my $treeName ='';
my $owner='';
getOptions();
runChain($treeName, $owner);
sub runChain
{
my ($treeName, $treeOwner) = @_;
my $dbh = new_connection();
# TODO - how do we check owner stuff - remember to make unnecessary
my $stm=
"select * from tree where tree_name = '$treeName'";
my $sth = $dbh->prepare($stm);
$sth->execute();
my ($tree_pk, $name, $fi_input_fk) =
$sth->fetchrow_array();
# TODO - die if bad tree name
if ($sth->fetchrow_array())
{
die "Too many tree records with name $name and owner $owner.\n";
}
# determine log name
my $logfile = "./treelog.txt";
open(LOG, "> $logfile") or die "Unable to open $logfile: $!\n";
print LOG `date`;
print LOG "\n Running tree: $treeName owner: $treeOwner\n";
processTree($dbh, $tree_pk, $fi_input_fk, LOG);
close(LOG);
# TODO - stat logfile
# TODO - insert logfile into file_info
#my $fields = "('file_name','timestamp', 'owner', 'comments', 'checksum')";
$stm = "insert into file_info (file_name) values ('$logfile')";
$sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
$stm = "select fi_pk from file_info where file_name = '$logfile'";
$sth = $dbh->prepare($stm);
$sth->execute;
my $fi_pk = $sth->fetchrow_array;
#update tree record with logfile fk
$stm = "update tree set fi_log_fk = $fi_pk";
$sth = $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
$dbh->disconnect();
} #runChain
sub processTree
{
my ($dbh, $tree_pk, $fi_input_fk, $logfile) = @_;
my ($stm, $sth);
print $logfile "Processing tree pk: $tree_pk input: $fi_input_fk\n";
$stm="select node_pk, an_fk from node" .
" where tree_fk = $tree_pk and parent_key is NULL";
$sth=$dbh->prepare($stm);
$sth->execute();
my ($node_pk, $an_fk) = $sth->fetchrow_array;
runNodeRecur($dbh, $tree_pk, $node_pk, $an_fk, $logfile);
} # processTree
sub runNodeRecur
{
my ($dbh, $tree_pk, $node_pk, $an_fk, $logfile) = @_;
my ($stm, $sth);
my @files;
my $files;
#process this node
runNode($dbh, $node_pk, $an_fk, $logfile);
# get all output files from the node
$stm = "select file_name from file_info where node_fk = $node_pk";
$sth = $dbh->prepare($stm);
$sth->execute();
while (($files) = $sth->fetchrow_array)
{
push(@files, $files);
}
#process its children
$stm = "select node_pk, an_fk from node" .
" where tree_fk = $tree_pk and parent_key = $node_pk";
$sth= $dbh->prepare($stm);
$sth->execute();
# is this necessary? - we should check file existence when we run the
# child
while (my ($child_node_pk, $child_an_fk) = $sth->fetchrow_array )
{
# determine input file for children
# note: it is highly likely that the input file is the same for all
# children, namely the output file of the current node
# however, it is conceivable that the current node has more than
# one output file and different children need different ones
# therefore, we need to figure it out for each option.
$stm="select arg_name, extension from analysis_filetypes_link," .
" filetypes,extension where input='t' and " .
"analysis_filetypes_link.ft_fk = filetypes.ft_pk and " .
"extension.ft_fk = filetypes.ft_pk and an_fk = $child_an_fk";
my $sth2 = $dbh->prepare($stm);
$sth2->execute();
my $child_input = "";
my $have_all_inputs = 1;
while (my ($prefix, $ext) = $sth2->fetchrow_array)
{
my $found = 0;
# foreach needed input file
foreach my $infile (@files)
{
$infile =~ /.*\.(.*)/;
my $file_ext = $1;
if ($ext eq $file_ext)
{
if ($found == 1)
{
warn "Great trauma! There are two appropriate input files";
}
$found = 1;
# insert the name into the input file name value in
# parameters table
$stm = "select spn_pk from sys_parameter_names where sp_name = '$prefix' and an_fk = $child_an_fk";
my $sth3 = $dbh->prepare($stm);
$sth3->execute();
my $spn_fk = $sth3->fetchrow_array;
$stm = "insert into sys_parameter_values " .
" (node_fk, spn_fk, sp_value) values " .
" ($child_node_pk, $spn_fk, '$infile')";
$sth3 = $dbh->prepare($stm);
$sth3->execute();
$dbh->commit();
}
}
if ($found == 0)
{
warn "No input file with type: $ext for node: $child_node_pk\n";
$have_all_inputs = 0;
}
}
runNodeRecur($dbh, $tree_pk, $child_node_pk, $child_an_fk,
$logfile) if $have_all_inputs;
}
} # runNodeRecur
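The file-matching step in runNodeRecur pairs each required input extension of a child node with one output file of its parent, warning on duplicates and bailing out when an extension has no match. A Python sketch of that matching logic (hypothetical helper, not part of runtree.pl):

```python
def match_inputs(required, produced):
    """Pair required (arg_name, extension) inputs with produced files.

    Returns a mapping of arg name -> chosen file, or None when some
    required extension has no matching file; on duplicate matches the
    first file wins (runtree.pl warns in that case).
    """
    chosen = {}
    for prefix, ext in required:
        hits = [f for f in produced if f.rsplit(".", 1)[-1] == ext]
        if not hits:
            return None          # child cannot run without this input
        chosen[prefix] = hits[0]
    return chosen
```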
sub runNode
{
my ($dbh, $node_pk, $an_fk, $logfile) = @_;
my ($stm, $sth);
# get command string from analysis
$stm = "select cmdstr from analysis where an_pk = $an_fk";
$sth = $dbh->prepare($stm);
$sth->execute();
my $cmdStr = $sth->fetchrow_array();
if (!defined $cmdStr)
{
print $logfile "Unable to get cmdstr for analysis key: $an_fk";
warn "Unable to get cmdstr for analysis key: $an_fk";
}
# get user parameters for the node
my $userParams = " ";
$stm = "select up_value, upn_fk from user_parameter_values " .
" where node_fk = $node_pk";
$sth = $dbh->prepare($stm);
$sth->execute();
while( my ($up_value, $upn_fk) = $sth->fetchrow_array)
{
$stm = "select up_name, up_default from user_parameter_names " .
"where upn_pk=$upn_fk";
my $sth2 = $dbh->prepare($stm);
$sth2->execute();
my ($up_name, $up_default) = $sth2->fetchrow_array();
if ($sth2->fetchrow_array)
{
warn "Uh oh!! Never should have multiple returns!";
}
$userParams .= $up_name . " '$up_value' ";
# TODO - do I need to use default or does Tom set?
}
# get system parameters for the node
my $sysParams = " ";
$stm = "select sp_value, spn_fk from sys_parameter_values " .
" where node_fk = $node_pk";
$sth = $dbh->prepare($stm);
$sth->execute();
while( my ($sp_value, $spn_fk) = $sth->fetchrow_array)
{
$stm = "select sp_name, sp_default from sys_parameter_names " .
"where spn_pk=$spn_fk";
my $sth2 = $dbh->prepare($stm);
$sth2->execute();
my ($sp_name, $sp_default) = $sth2->fetchrow_array();
if ($sth2->fetchrow_array)
{
warn "Uh oh!! Never should have multiple returns!";
}
$sysParams .= $sp_name . " '$sp_value' ";
# TODO - do I need to use default or does Tom set?
}
# execute the command
my $cmd = $cmdStr . $userParams . $sysParams ;
print $logfile "Running cmd:\n$cmd\n";
my $rc = system("$cmd");
warn "Bad return value from: $cmd" if $rc;
#verify existence of output files and insert into file_info table
$stm="select arg_name from analysis_filetypes_link," .
" filetypes,analysis,node where input='f' and " .
" ft_pk=ft_fk and analysis_filetypes_link.an_fk = an_pk ".
" and an_pk=node.an_fk and node_pk = $node_pk";
my $sth2 = $dbh->prepare($stm);
$sth2->execute();
while (my ($fileprefix) = $sth2->fetchrow_array)
{
$userParams =~ /$fileprefix\s*([\S]+)\s/;
my $filename = $1;
insertFile($dbh, $node_pk, $filename);
}
} # runNode
sub insertFile
{
my ($dbh, $node_fk, $file) = @_;
#TODO - set these values!
my $timestamp = 0;
my $uai= 't';
my $comments="Output from node $node_fk";
my $checksum="check:-)";
my $stm = "insert into file_info (node_fk, file_name, timestamp," .
" use_as_input, fi_comments, fi_checksum) values ($node_fk," .
" '$file', $timestamp, '$uai', '$comments', '$checksum')";
my $sth= $dbh->prepare($stm);
$sth->execute();
$dbh->commit();
}
sub getOptions
{
my $help;
if (@ARGV > 0 )
{
# format
# string : 'var=s' => \$var,
# boolean : 'var!' => \$var,
GetOptions(
'owner=s' => \$owner,
'treeName=s' => \$treeName,
'help|?' => \$help,
'debug!' => \$debug,
);
} #have command line args
if ($treeName eq "")
{
print "Must specify a treeName to run.\n";
usage();
}
if ($owner eq "")
{
print "Must specify an owner to run.\n";
usage();
}
usage() if ($help);
} # getOptions
sub usage
{
print "Usage: \n";
print " ./runTree --treeName=<treeName> --owner=<owner>\n";
exit;
} #usage
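The input-matching loop in runNodeRecur above pairs each required input filetype of a child analysis with one of the current node's output files by comparing file extensions, warning on duplicates and flagging the child as not runnable when an input is missing. A minimal sketch of that matching logic, written in Python for illustration (the function name, warning text, and return shape are assumptions, not part of the Perl script):

```python
import os

def match_inputs(required, files):
    """Pair each required input (arg_name, extension) with a produced file
    whose extension matches -- a sketch of the loop in runNodeRecur.
    Returns (assignments, have_all_inputs)."""
    assignments = {}
    have_all = True
    for arg_name, ext in required:
        # extension after the last dot, like the Perl regex /.*\.(.*)/
        matches = [f for f in files
                   if os.path.splitext(f)[1].lstrip(".") == ext]
        if len(matches) > 1:
            print(f"warning: more than one appropriate input file for .{ext}")
        if matches:
            assignments[arg_name] = matches[0]
        else:
            print(f"warning: no input file with type: {ext}")
            have_all = False
    return assignments, have_all

required = [("--infile", "dat"), ("--mask", "msk")]
files = ["node7.dat", "node7.log"]
print(match_inputs(required, files))
```

As in the Perl version, the child node is only run when every required input filetype found a matching file.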
|
|
From: <td...@us...> - 2002-11-15 20:16:13
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv13408
Modified Files:
Tag: Rel-1_0_1-branch
add_analysis.pl
Log Message:
Almost done initial draft
Index: add_analysis.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/add_analysis.pl,v
retrieving revision 1.1.2.3
retrieving revision 1.1.2.4
diff -C2 -d -r1.1.2.3 -r1.1.2.4
*** add_analysis.pl 12 Nov 2002 16:32:28 -0000 1.1.2.3
--- add_analysis.pl 15 Nov 2002 20:16:05 -0000 1.1.2.4
***************
*** 65,70 ****
{
# insert appropriate values in appropriate tables
- # we need to add filetypes (optional)
- act_filetype($dbh, $config->get('filetype'), $action);
# we need to add the analysis
--- 65,68 ----
***************
*** 72,78 ****
$action);
# we need to add the links to filetypes
act_file_links($dbh, $config->get('analysisfile'),
! $config->get('name'), $action);
# we need to add the sysparams
--- 70,79 ----
$action);
+ # we need to add filetypes (optional)
+ my $keysref = act_filetype($dbh, $config->get('filetype'), $action);
+
# we need to add the links to filetypes
act_file_links($dbh, $config->get('analysisfile'),
! $config->get('name'), $action, $keysref);
# we need to add the sysparams
***************
*** 83,87 ****
# insert appropriate values in extension table
! act_extension($dbh, $config->get('extension'), $action);
}
elsif ($action eq "remove")
--- 84,88 ----
# insert appropriate values in extension table
! act_extension($dbh, $config->get('extension'), $action, $keysref);
}
elsif ($action eq "remove")
***************
*** 91,99 ****
# delete appropriate values in extension table
! act_extension($dbh, $config->get('extension'), $action);
# we need to delete the links to filetypes
act_file_links($dbh, $config->get('analysisfile'),
! $config->get('name'), $action);
# we need to delete the sysparams
--- 92,100 ----
# delete appropriate values in extension table
! act_extension($dbh, $config->get('extension'), $action, []);
# we need to delete the links to filetypes
act_file_links($dbh, $config->get('analysisfile'),
! $config->get('name'), $action, []);
# we need to delete the sysparams
***************
*** 152,155 ****
--- 153,157 ----
my $record;
my $table = "filetypes";
+ my @keys = ();
my @records = parse_into_records(@$filetype);
***************
*** 162,167 ****
my $arg_name = $rec{arg_name};
- checkValidFields($table, [ "name", "arg_name" ], ["name", "comment", "arg_name"],
- $record);
if ($action eq "remove")
--- 164,167 ----
***************
*** 171,174 ****
--- 171,176 ----
else
{
+ checkValidFields($table, [ "name", "arg_name" ],
+ ["name", "comment", "arg_name"], $record);
$stm = "insert into $table (ft_name, ft_comments, arg_name) " .
"values ('$name', '$comment', '$arg_name');";
***************
*** 179,182 ****
--- 181,207 ----
}
$dbh->commit();
+
+ foreach $record (@records)
+ {
+ my %rec = %$record;
+ my $name = $rec{name};
+ my $comment = $rec{comment};
+ my $arg_name = $rec{arg_name};
+
+ if ($action eq "insert")
+ {
+ $stm = "select ft_pk from $table where ft_name = '$name' " .
+ " and ft_comments = '$comment' and arg_name = '$arg_name'";
+ my $sth = $dbh->prepare( $stm );
+ $sth->execute();
+ my $lastkey;
+ while (my @row = $sth->fetchrow_array())
+ {
+ ($lastkey) = @row;
+ }
+ push(@keys, $lastkey);
+ }
+ }
+ return(\@keys);
} # act_filetype
***************
*** 203,211 ****
sub act_extension
{
! my ($dbh, $ext, $action) = @_;
my $stm = "";
my $record;
my $table="extension";
my @records = parse_into_records(@$ext);
--- 228,237 ----
sub act_extension
{
! my ($dbh, $ext, $action, $keysref) = @_;
my $stm = "";
my $record;
my $table="extension";
+ my @keys = @$keysref;
my @records = parse_into_records(@$ext);
***************
*** 215,219 ****
my %rec = %$record;
- checkValidFields($table, [ "filetype","ext" ], ["filetype", "ext"], $record);
my $filetype = $rec{filetype};
--- 241,244 ----
***************
*** 225,258 ****
my $ft_fk = $stm->fetchrow_array();
! if (!defined $ft_fk)
{
warn "Unable to get filetypes key value for $filetype";
}
! else
{
- if ($action eq "remove")
- {
$stm = "delete from $table where ft_fk = '$ft_fk'";
! }
! else
! {
! $stm = "insert into $table (ft_fk, extension) " .
! "values ('$ft_fk', '$ext');";
! }
! print "$stm\n" if $debug;
! my $sth = $dbh->prepare( $stm );
! $sth->execute();
}
}
! } # act_file_links
sub act_file_links
{
! my ($dbh, $filelist, $name, $action) = @_;
my $stm = "";
my $record;
my $table="analysis_filetypes_link";
my @records = parse_into_records(@$filelist);
--- 250,301 ----
my $ft_fk = $stm->fetchrow_array();
!
! if ((!defined $ft_fk) && ($action eq "insert"))
{
warn "Unable to get filetypes key value for $filetype";
}
!
! if (($action eq "remove") && (defined $ft_fk))
{
$stm = "delete from $table where ft_fk = '$ft_fk'";
! print "$stm\n" if $debug;
! my $sth = $dbh->prepare( $stm );
! $sth->execute();
! }
! if (($action eq "insert") && (defined $ft_fk))
! {
! checkValidFields($table, [ "filetype","ext" ],
! ["filetype", "ext"], $record);
! while (! numIn($ft_fk, @keys))
! {
! $ft_fk = $stm->fetchrow_array;
! break() if (!defined $ft_fk);
! }
!
! $stm = $dbh->prepare("select ext_pk from extension where " .
! " ft_fk = $ft_fk and extension = '$ext'");
! $stm->execute();
! my $exists = $stm->fetchrow_array();
! if (!defined $exists)
! {
! $stm = "insert into $table (ft_fk, extension) " .
! "values ($ft_fk, '$ext');";
! print "$stm\n" if $debug;
! my $sth = $dbh->prepare( $stm );
! $sth->execute();
! }
}
}
! } # act_extension
sub act_file_links
{
! my ($dbh, $filelist, $name, $action, $keysref) = @_;
my $stm = "";
my $record;
my $table="analysis_filetypes_link";
+ my @keys = @$keysref;
my @records = parse_into_records(@$filelist);
***************
*** 274,278 ****
my %rec = %$record;
- checkValidFields($table, [ "filetype","input" ], ["filetype", "input"], $record);
my $filetype = $rec{filetype};
--- 317,320 ----
***************
*** 280,305 ****
# select the filetypes pk for the filetype
! $stm = $dbh->prepare("select ft_pk from filetypes where ft_name= '$filetype'");
$stm->execute();
! my $ft_fk = $stm->fetchrow_array();
! if (!defined $ft_fk)
{
warn "Unable to get filetypes key value for $filetype";
}
! else
{
- if ($action eq "remove")
- {
$stm = "delete from $table where an_fk = '$an_fk'";
! }
! else
! {
! $stm = "insert into $table (an_fk, ft_fk, input) " .
! "values ('$an_fk', '$ft_fk', '$input');";
! }
! print "$stm\n" if $debug;
! my $sth = $dbh->prepare( $stm );
! $sth->execute();
}
}
--- 322,359 ----
# select the filetypes pk for the filetype
! $stm = $dbh->prepare("select ft_pk from filetypes where ft_name= '$filetype' ");
!
$stm->execute();
! my $ft_pk = $stm->fetchrow_array;
!
!
! if ((!defined $ft_pk) && ($action eq "insert"))
{
warn "Unable to get filetypes key value for $filetype";
}
!
! if (($action eq "remove") && (defined $ft_pk))
{
$stm = "delete from $table where an_fk = '$an_fk'";
! print "$stm\n" if $debug;
! my $sth = $dbh->prepare( $stm );
! $sth->execute();
! }
!
! if (($action eq "insert") && (defined $ft_pk))
! {
! checkValidFields($table, [ "filetype","input"],
! ["filetype", "input"], $record);
! while (! numIn($ft_pk, @keys))
! {
! $ft_pk = $stm->fetchrow_array;
! break() if (! defined $ft_pk);
! }
! $stm = "insert into $table (an_fk, ft_fk, input) " .
! "values ('$an_fk', '$ft_pk', '$input');";
! print "$stm\n" if $debug;
! my $sth = $dbh->prepare( $stm );
! $sth->execute();
}
}
***************
*** 430,433 ****
--- 484,503 ----
} # checkValidFields
+ sub numIn
+ {
+ my ($scalar) = shift;
+ my @array = @_;
+
+ my $in = 0;
+ foreach my $item (@array)
+ {
+ if ($item == $scalar)
+ {
+ $in=1;
+ }
+ }
+ return($in);
+ }
+
sub usage
{
***************
*** 436,437 ****
--- 506,518 ----
exit;
} # usage
+
+ =head1 NOTES
+
+ This script is not intended to be a great database configuration tool.
+ It is designed to load demo*.cfg for testing. Note that if you remove
+ a cfg, it may remove extension types needed by other analysis. I could
+ fix this, but have deemed that it isn't worth the time right now.
+
+ =head1 AUTHOR
+
+ Teela James
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv22806
Modified Files:
Tag: Rel-1_0_1-branch
edit_sample1.pl edit_sample2.pl sql_lib.pl edit_sample1.html
Log Message:
added billing code to the order web pages
Index: edit_sample1.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/edit_sample1.pl,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** edit_sample1.pl 24 Oct 2002 18:26:53 -0000 1.1.2.1
--- edit_sample1.pl 15 Nov 2002 15:16:28 -0000 1.1.2.2
***************
*** 53,56 ****
--- 53,63 ----
}
+ {
+ my $sth = getq("get_billing_code", $dbh);
+ $sth->execute($ch{oi_pk}) || die "Query get_billing_code execute failed. $DBI::errstr\n";
+ ($ch{billing_code}) = $sth->fetchrow_array();
+ }
+
+
my $allhtml = readfile("edit_sample1.html");
Index: edit_sample2.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/edit_sample2.pl,v
retrieving revision 1.1.2.2
retrieving revision 1.1.2.3
diff -C2 -d -r1.1.2.2 -r1.1.2.3
*** edit_sample2.pl 12 Nov 2002 22:34:36 -0000 1.1.2.2
--- edit_sample2.pl 15 Nov 2002 15:16:28 -0000 1.1.2.3
***************
*** 4,11 ****
use CGI;
use CGI::Carp qw(fatalsToBrowser);
- # use HTTP::Request::Common qw(POST);
- # use LWP::UserAgent;
! require "sessionlib.pl";
main:
--- 4,9 ----
use CGI;
use CGI::Carp qw(fatalsToBrowser);
! require "./sessionlib.pl";
main:
***************
*** 18,21 ****
--- 16,21 ----
my $message = write_sample($dbh, $us_fk, $q);
+ write_order($dbh, $us_fk, $q);
+
if ($q->param("add_sample"))
{
***************
*** 158,159 ****
--- 158,171 ----
}
+ sub write_order
+ {
+ my $dbh = $_[0];
+ my $us_fk = $_[1];
+ my $q = $_[2];
+ my %ch = $q->Vars();
+ my $sth = getq("update_billing_code", $dbh);
+
+ $sth->execute($ch{billing_code},
+ $ch{oi_pk}) || die "Query update_billing_code execute failed. $DBI::errstr\n";
+
+ }
Index: sql_lib.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/sql_lib.pl,v
retrieving revision 1.1.2.6
retrieving revision 1.1.2.7
diff -C2 -d -r1.1.2.6 -r1.1.2.7
*** sql_lib.pl 12 Nov 2002 22:34:36 -0000 1.1.2.6
--- sql_lib.pl 15 Nov 2002 15:16:28 -0000 1.1.2.7
***************
*** 59,63 ****
elsif ($q_name eq "update_tree_name")
{
! $sql = "update tree set tree_name=? where tree_pk=?";
}
--- 59,71 ----
elsif ($q_name eq "update_tree_name")
{
! $sql = "update tree set tree_name=trim(?) where tree_pk=?";
! }
! elsif ($q_name eq "update_billing_code")
! {
! $sql = "update billing set billing_code=trim(?) where oi_fk=?";
! }
! elsif ($q_name eq "get_billing_code")
! {
! $sql = "select billing_code from billing where oi_fk=?";
}
Index: edit_sample1.html
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/edit_sample1.html,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** edit_sample1.html 24 Oct 2002 18:26:53 -0000 1.1.2.1
--- edit_sample1.html 15 Nov 2002 15:16:28 -0000 1.1.2.2
***************
*** 20,24 ****
<table border=0 cellpadding=3 cellspacing=0>
<tr>
! <td>Order Number: </td><td>{order_number}<br></td>
</tr>
<tr>
--- 20,27 ----
<table border=0 cellpadding=3 cellspacing=0>
<tr>
! <td>Order number: </td><td>{order_number}<br></td>
! </tr>
! <tr>
! <td>Billing code: </td><td><input type="text" name="billing_code" value="{billing_code}"><br></td>
</tr>
<tr>
|
|
From: <tw...@us...> - 2002-11-15 14:24:48
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv16754
Modified Files:
Tag: Rel-1_0_1-branch
vacuum.pl
Log Message:
Better comments.
Index: vacuum.pl
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/vacuum.pl,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** vacuum.pl 15 Nov 2002 14:15:09 -0000 1.1.2.1
--- vacuum.pl 15 Nov 2002 14:24:41 -0000 1.1.2.2
***************
*** 1,4 ****
--- 1,11 ----
#!/usr/bin/perl
+ # Step 1.
+ # You must create /var/lib/pgsql/p.dat
+ # with the postgres password in clear text.
+ # Make that file 600 rw------- so that only postgres can read it.
+
+
+ # Step 2.
# Run from cron.
# Copy the lines below to /var/lib/pgsql/crontab.txt
***************
*** 22,25 ****
--- 29,35 ----
#0 4 * * * /var/www/html/genex/webtools/vacuum.pl
+ # Step 3.
+ # The results are in a log file
+ # /var/lig.pgsql/vacuum.log
***************
*** 29,33 ****
my $p_file = "/var/lib/pgsql/p.dat"; # clear text password
! my $e_file = "/var/lib/pgsql/error.txt"; # log file
main:
--- 39,43 ----
my $p_file = "/var/lib/pgsql/p.dat"; # clear text password
! my $e_file = "/var/lib/pgsql/vacuum.log"; # log file
main:
|
|
From: <tw...@us...> - 2002-11-15 14:15:16
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv13256
Added Files:
Tag: Rel-1_0_1-branch
vacuum.pl
Log Message:
Script for user postgres to call to vacuum the db.
--- NEW FILE: vacuum.pl ---
#!/usr/bin/perl
# Run from cron.
# Copy the lines below to /var/lib/pgsql/crontab.txt
# Uncomment the last line.
# Leave the other comments for reference
# Do:
# crontab crontab.txt
# Verify with:
# crontab -l
# (that is dash ell)
# field allowed values
# ----- --------------
# minute 0-59
# hour 0-23
# day of month 1-31
# month 1-12 (or names, see below)
# day of week 0-7 (0 or 7 is Sun, or use names)
# Run at zero minutes, 04:00 every day.
#0 4 * * * /var/www/html/genex/webtools/vacuum.pl
use strict;
use CGI;
use DBI;
my $p_file = "/var/lib/pgsql/p.dat"; # clear text password
my $e_file = "/var/lib/pgsql/error.txt"; # log file
main:
{
my $passwd;
if ($ENV{LOGNAME} ne "postgres")
{
die "This vacuum script can only be run as user postgres\n";
}
if (-e $p_file)
{
$passwd = `cat $p_file`;
}
else
{
die "Cannot find $p_file. Exiting.\n";
}
chomp($passwd);
#
# Put the password in an external file. That file should be postgres read-only.
# AutoCommit must be 1 to disable transactions. VACUUM by its nature must perform
# its own internal commits, and therefore it cannot be part of another transaction.
#
my $connect_string = "dbi:Pg:dbname=genex;host=localhost;port=5432";
my $dbargs = {AutoCommit => 1, PrintError => 1};
my $dbh = DBI->connect($connect_string,
"postgres",
"$passwd",
$dbargs);
write_log("Connected...\n");
my $sth;
my $start;
my $end;
$sth = $dbh->prepare("vacuum analyze");
$start = time();
$sth->execute();
$end = time();
my $va_time = $end - $start;
write_log("Vacuum analyze complete...\n");
$sth = $dbh->prepare("vacuum");
$start = time();
$sth->execute();
$end = time();
my $v_time = $end - $start;
write_log("Vacuum complete...\n");
$sth = $dbh->prepare("vacuum full");
$start = time();
$sth->execute();
$end = time();
my $vf_time = $end - $start;
write_log("Vacuum full complete...\n");
$dbh->disconnect();
write_log("vacuum analyze: $va_time seconds\n");
write_log("vacuum: $v_time seconds\n");
write_log("vacuum full: $vf_time seconds\n");
}
sub write_log
{
open(LOG_OUT, ">> $e_file") || die "Could not open $e_file for write\n";
print LOG_OUT "$_[0]\n";
close(LOG_OUT);
chmod(0600, $e_file);
}
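The comment in the script above explains why the DBI handle is opened with AutoCommit => 1: VACUUM performs its own internal commits and cannot run inside another transaction. SQLite's VACUUM has the identical restriction, so the same point can be demonstrated with only the Python standard library (this sketch is a cross-language illustration, not part of the PostgreSQL setup above):

```python
import sqlite3

# SQLite's VACUUM, like PostgreSQL's, refuses to run inside a transaction.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (x integer)")
conn.execute("insert into t values (1)")
conn.commit()

conn.execute("insert into t values (2)")   # opens an implicit transaction
try:
    conn.execute("VACUUM")                 # fails while a transaction is open
except sqlite3.OperationalError as exc:
    print("inside a transaction:", exc)

conn.rollback()

# isolation_level=None puts the connection in autocommit mode,
# analogous to DBI's AutoCommit => 1, and VACUUM then succeeds.
conn.isolation_level = None
conn.execute("VACUUM")
print("autocommit mode: VACUUM ok")
```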
|
|
From: <tw...@us...> - 2002-11-14 21:55:33
|
Update of /cvsroot/genex/genex-server/site/webtools
In directory usw-pr-cvs1:/tmp/cvs-serv28471
Modified Files:
Tag: Rel-1_0_1-branch
edit_study1.html
Log Message:
wrap="virtual" on the textareas
Index: edit_study1.html
===================================================================
RCS file: /cvsroot/genex/genex-server/site/webtools/Attic/edit_study1.html,v
retrieving revision 1.1.2.1
retrieving revision 1.1.2.2
diff -C2 -d -r1.1.2.1 -r1.1.2.2
*** edit_study1.html 24 Oct 2002 18:26:53 -0000 1.1.2.1
--- edit_study1.html 14 Nov 2002 21:55:29 -0000 1.1.2.2
***************
*** 25,29 ****
</td></tr>
<tr><td>
! <div align="right">Study Comments </div></td><td> <textarea name="comments" rows=5 cols=40>{comments}</textarea>
</td></tr>
--- 25,29 ----
</td></tr>
<tr><td>
! <div align="right">Study Comments </div></td><td> <textarea name="comments" rows=5 cols=40 wrap="virtual">{comments}</textarea>
</td></tr>
***************
*** 86,90 ****
<td valign="top"><div align="right">Notes </div></td>
<td>
! <textarea name="notes" rows="5" cols="40">{notes}</textarea>
</td>
</tr>
--- 86,90 ----
<td valign="top"><div align="right">Notes </div></td>
<td>
! <textarea name="notes" rows="5" cols="40" wrap="virtual">{notes}</textarea>
</td>
</tr>
|
|
From: <jas...@us...> - 2002-11-14 21:23:08
|
Update of /cvsroot/genex/genex-server
In directory usw-pr-cvs1:/tmp/cvs-serv13469
Modified Files:
	ChangeLog
Log Message:
usual
Index: ChangeLog
===================================================================
RCS file: /cvsroot/genex/genex-server/ChangeLog,v
retrieving revision 1.105
retrieving revision 1.106
diff -C2 -d -r1.105 -r1.106
*** ChangeLog	14 Nov 2002 19:45:45 -0000	1.105
--- ChangeLog	14 Nov 2002 21:23:05 -0000	1.106
***************
*** 3,6 ****
--- 3,7 ----
  	* Install (Repository):
  	added Mason subdirs
+ 	ensure that GENEX_BIN_DIR gets created before copying files
  2002-11-08  Jason E. Stewart  <ja...@op...>
|
From: <jas...@us...> - 2002-11-14 21:22:55
|
Update of /cvsroot/genex/genex-server
In directory usw-pr-cvs1:/tmp/cvs-serv13337
Modified Files:
Install
Log Message:
* Install (Repository):
ensure that GENEX_BIN_DIR gets created before copying files
Index: Install
===================================================================
RCS file: /cvsroot/genex/genex-server/Install,v
retrieving revision 1.13
retrieving revision 1.14
diff -C2 -d -r1.13 -r1.14
*** Install 14 Nov 2002 19:45:17 -0000 1.13
--- Install 14 Nov 2002 21:22:52 -0000 1.14
***************
*** 195,198 ****
--- 195,200 ----
genex_mkdir($DIR, 777) unless -d $DIR;
+ genex_mkdir($GENEX_BIN_DIR) unless -d $GENEX_BIN_DIR;
+
# --------------- controlled vocabs ----------------
# make the directory for the controlled vocabularies
***************
*** 221,225 ****
$DIR = "$VARS{CGIDIR}/$VARS{CYBERT_DIR}"; # Brevity && Clarity!
genex_mkdir($DIR) unless -d $DIR;
- genex_mkdir($GENEX_BIN_DIR) unless -d $GENEX_BIN_DIR;
genex_system("cd $MOTHERDIR/CyberT-dist; cp CyberT*pl $DIR;
cp munge4R.pl cyberfilter.pl genex_reaper.pl $VARS{GENEX_BIN_DIR}");
--- 223,226 ----
|
|
From: <jas...@us...> - 2002-11-14 19:45:48
|
Update of /cvsroot/genex/genex-server
In directory usw-pr-cvs1:/tmp/cvs-serv28355
Modified Files:
	ChangeLog
Log Message:
usual
Index: ChangeLog
===================================================================
RCS file: /cvsroot/genex/genex-server/ChangeLog,v
retrieving revision 1.104
retrieving revision 1.105
diff -C2 -d -r1.104 -r1.105
*** ChangeLog	9 Nov 2002 00:48:15 -0000	1.104
--- ChangeLog	14 Nov 2002 19:45:45 -0000	1.105
***************
*** 1,2 ****
--- 1,7 ----
+ 2002-11-14  Jason E. Stewart  <ja...@op...>
+ 
+ 	* Install (Repository):
+ 	added Mason subdirs
+ 
  2002-11-08  Jason E. Stewart  <ja...@op...>
|
From: <jas...@us...> - 2002-11-14 19:45:20
|
Update of /cvsroot/genex/genex-server
In directory usw-pr-cvs1:/tmp/cvs-serv28200
Modified Files:
Install
Log Message:
* Install (Repository):
added Mason subdirs
Index: Install
===================================================================
RCS file: /cvsroot/genex/genex-server/Install,v
retrieving revision 1.12
retrieving revision 1.13
diff -C2 -d -r1.12 -r1.13
*** Install 14 Nov 2002 04:32:41 -0000 1.12
--- Install 14 Nov 2002 19:45:17 -0000 1.13
***************
*** 182,188 ****
print STDERR "\n\nInstalling the mason scripts..\n\n";
! $DIR = $VARS{GENEX_WORKSPACE_DIR}; # Brevity && Clarity!
genex_mkdir($DIR) unless -d $DIR;
- # genex_system("cd $MOTHERDIR/G2G/mason; cp -r * $DIR");
# the mason data dir needs to be world writable
--- 182,191 ----
print STDERR "\n\nInstalling the mason scripts..\n\n";
! $DIR = $VARS{GENEX_WORKSPACE_DIR};
! genex_mkdir($DIR) unless -d $DIR;
! $DIR = "$VARS{GENEX_WORKSPACE_DIR}/comps";
! genex_mkdir($DIR) unless -d $DIR;
! $DIR = "$VARS{GENEX_WORKSPACE_DIR}/workspace-comps";
genex_mkdir($DIR) unless -d $DIR;
# the mason data dir needs to be world writable
|
|
From: <man...@us...> - 2002-11-14 04:32:44
|
Update of /cvsroot/genex/genex-server
In directory usw-pr-cvs1:/tmp/cvs-serv12252
Modified Files:
INSTALL Install
Log Message:
edits of INSTALL, Install
Index: INSTALL
===================================================================
RCS file: /cvsroot/genex/genex-server/INSTALL,v
retrieving revision 1.22
retrieving revision 1.23
diff -C2 -d -r1.22 -r1.23
*** INSTALL 13 Nov 2002 21:45:17 -0000 1.22
--- INSTALL 14 Nov 2002 04:32:40 -0000 1.23
***************
*** 59,63 ****
apt-get install postgresql postgresql-dev libdbd-pg-perl expat \
! perlSGML libxeres17 libxeres17-dev libxml-xerces-perl libxaw7-dev
--- 59,64 ----
apt-get install postgresql postgresql-dev libdbd-pg-perl expat \
! perlSGML libxeres17 libxeres17-dev libxml-xerces-perl libxaw7-dev \
! apache-dev libapache-mod-perl libdevel-symdump-perl
***************
*** 177,181 ****
Apache::Session - used authentication and session management scripts
Apache::Request - used by HTML::Mason (Mason will catch the depedency and
! install it if using CPAN.)
Digest::MD5 - used by LoginUtils.pm
DBI - used by everything (this *is* a gene expression DATABASE)
--- 178,183 ----
Apache::Session - used authentication and session management scripts
Apache::Request - used by HTML::Mason (Mason will catch the depedency and
! install it if using CPAN; requires the apache-dev deb or
! the apache libs&includes)
Digest::MD5 - used by LoginUtils.pm
DBI - used by everything (this *is* a gene expression DATABASE)
Index: Install
===================================================================
RCS file: /cvsroot/genex/genex-server/Install,v
retrieving revision 1.11
retrieving revision 1.12
diff -C2 -d -r1.11 -r1.12
*** Install 7 Nov 2002 15:00:40 -0000 1.11
--- Install 14 Nov 2002 04:32:41 -0000 1.12
***************
*** 257,263 ****
my $R = $VERBOSE ? 'R' : 'R --silent';
! genex_system("$R INSTALL --library=$LOCAL_LIB/R/library $MOTHERDIR/CyberT-dist/xgobi_1.2-2.tar.gz");
# and copy over the substituted xgobi R lib as well
! genex_system("cd $MOTHERDIR/CyberT-dist; cp xgobi $LOCAL_LIB/R/library/xgobi/R");
slow() if ($SLOW);
--- 257,263 ----
my $R = $VERBOSE ? 'R' : 'R --silent';
! genex_system("$R INSTALL --library=$LOCAL_LIB/R/library $MOTHERDIR/CyberT-dist/xgobi_1.2-7.tar.gz");
# and copy over the substituted xgobi R lib as well
! genex_system("cd $MOTHERDIR/CyberT-dist; cp xgobi.R $LOCAL_LIB/R/library/xgobi/R");
slow() if ($SLOW);
***************
*** 446,450 ****
# Harry is foolishly working in the CVS checkout dir.
# & genex/index.shtml -> genex_info.shtml, not analysis_tools.shtml
! genex_system("cd $MOTHERDIR/top_level; cp *.shtml $HTMLDIR/$GENEX_DIR;");
}
--- 446,450 ----
# Harry is foolishly working in the CVS checkout dir.
# & genex/index.shtml -> genex_info.shtml, not analysis_tools.shtml
! genex_system("cd $MOTHERDIR/top_level; cp *html $HTMLDIR/$GENEX_DIR;");
}
|
|
From: <man...@us...> - 2002-11-14 04:30:38
|
Update of /cvsroot/genex/genex-server/CyberT-dist
In directory usw-pr-cvs1:/tmp/cvs-serv11680
Added Files:
	index.html.in xgobi.R.in xgobi_1.2-7.tar.gz
Log Message:
new versions of xgobi, index.html
--- NEW FILE: index.html.in ---
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"
          "http://www.w3.org/TR/REC-html40/loose.dtd">
<HTML>
<HEAD>
<TITLE>Welcome to Cyber-T</TITLE>
</HEAD>
<BODY>
<H2>Welcome to Cyber-T</H2>

Cyber-T is a Web interface designed to reliably detect changes in gene
expression in large scale gene expression experiments and operates on a set
of functions written in R by
<A HREF="mailto:td...@uc...?subject=CyberT">Tony Long</A> and
<A HREF="mailto:hj...@ta...?subject=CyberT">Harry Mangalam</A>.
The functions are based on Bayesian statistical approaches and extensions
developed by
<A HREF="mailto:pf...@ic...?subject=CyberT">Pierre Baldi</a> and Tony Long.
<P>
<HR noshade size=5>
<A HREF="%%CGI_ROOT_URL%%/%%CYBERT_DIR%%/CyberT-7.1.form.pl?DATATYPE=P">Go to the <b>PAIRED DATA</b> analysis page</A>,
<BR>if you have 2-dye data (such as would be generated by the usual glass
slide arrays probed with cy3/cy5-labelled cDNA).
<br><b>Now with Xgobi visualization option.</b>
<br><b>Now with VNC 'X11-in-browser' option.</b>
<HR noshade size=2>
<A HREF="%%CGI_ROOT_URL%%/%%CYBERT_DIR%%/CyberT-7.1.form.pl?DATATYPE=CE">Go to the <b>Control+Experimental</b> analysis page</A>,
<BR>if you have Affymetrix-based data, which consists of separate control
and experimental arrays.
<br><b>Now with Xgobi visualization option.</b>
<br><b>Now with VNC 'X11-in-browser' option.</b>
<HR noshade size=2>
<A HREF="CTHelp.html">Browse thru the Help Page</A>
(a concatenation of all the various help sections).
<HR noshade size=2>
<A HREF="CTHelp.html#LocalNonWebInstall">How to install and use ONLY the hdarray R library</A>
<P>
<A HREF="hdarray">Download only the hdarray R library</A>.
<HR noshade size=2>
<A HREF="CTHelp.html#LocalWebInstall">How to download and install the entire Web interface</A>.
<HR noshade size=5>
Questions or suggestions pertaining to the interface?
<br>Contact: <a href="mailto:hj...@ta...?subject=CyberT">Harry Mangalam</a>
<HR noshade size=5>
<P>
<h3>Some background:</h3>

It has recently become possible to study genome-wide patterns of gene
expression. This is accomplished by hybridizing labeled cDNA or cRNA
representing transcribed genes to thousands of probes arrayed in a manner
such that the identity of each probe is known based on its position in the
array. Data from these experiments consist of gene expression measures
obtained from different tissue types or under different experimental
treatments. In order to establish the statistical significance of any
observed differences in expression between experimental conditions,
treatments will often be replicated. This web page employs statistical
tests, based on the t-test, which can be conveniently used on high-density
array data to test for statistically significant differences between sample
sets. These t-tests employ either the observed variance among replicates
within treatments or a Bayesian estimate of the variance among replicates
within treatments based on a prior distribution obtained from a local
estimate of the standard deviation.
<P>
<h3>Some advice:</h3>

If you're familiar with Unix/Linux and the XWindows interface, and
especially the <A HREF="http://lib.stat.cmu.edu/R/CRAN/">R statistical
language</A>, you may well find direct manipulations of the R primitives
less trouble and more flexible than installing the Web interface locally.
If you hate all things Unix, you may well prefer the Web interface, but if
you do these analyses a lot, you (and we) will want (you) to install and
run them locally.
</BODY>
</HTML>
--- NEW FILE: xgobi.R.in ---
.packageName <- "xgobi"

quadplot <- function(mat4, pointlabs = rownames(mat4),
                     vertexlabs = paste(1:4),
                     normalize = median(abs(c(mat4))) > 1)
{
    mat4 <- if(is.data.frame(mat4)) data.matrix(mat4) else as.matrix(mat4)
    n <- nrow(mat4)
    m <- ncol(mat4)
    if(m != 4) stop("Data matrix `mat4' must have four columns.")
    if(normalize)
        mat4 <- mat4 / c(mat4 %*% c(1,1,1,1)) ## == sweep(mat4, 1, apply(mat4,1,sum), "/")
    rt3  <- 1/sqrt(3)
    rt14 <- 1/sqrt(14)
    projct <- cbind(c(0, -1,-4, 5)*(rt3*rt14),
                    c(3, -1,-1,-1)* rt3/2,
                    c(0, -3, 2, 1)* rt14)
    tetralines <- cbind(c(1,1,1,2,2,3), c(2,3,4,3,4,4))
    mat3 <- (rbind(diag(4), mat4) - 1/4) %*% projct
    if(is.null(pointlabs)) pointlabs <- as.character(1:n)
    xgobi(mat3, lines = tetralines, rowlab = c(vertexlabs, pointlabs),
          resource=c("*showLines: True", "*showAxes: False"))
    invisible(mat3) # or invisible()
}

reggeom <- function(matrx=matrix(c(
    0,5780,-1156,3468,3468,3468,-867,4335,0,0,-612,4080,5440,2652,3468,3420,3468,
    0, 0,4624,3468,3468, 0,3468,0,3468,4624,2448,1020,1360,3264,3264,3456,3456,
    0, 0, 0,4624,0, 0,0,0,0,0,0,0,0,0,0,0,0), nrow=17, ncol=3),
    collab=c("U","V","W"),
    rowlab=c("o","x1","x2","y","b","c","d","e","f","g","h","k","m","p","q","r","s"),
    colors=NULL, glyphs=NULL, erase=NULL,
    lines=matrix(c(1,6,8,1,11,7,1,1,5,6,6,15,17,8,5,9,1,9,10,
                   6,8,2,11,7,3,4,5,4,4,15,17,5,5,9,7,9,10,3), nrow=19, ncol=2),
    linecolors=c("red", "yellow", "yellow", "yellow", "yellow", "yellow",
                 "orchid", "green", "green", "red", "skyblue", "skyblue",
                 "skyblue", "white", "white", "white", "slateblue",
                 "slateblue", "slateblue"),
    resources=c("*showLines: True", "*showAxes: False", "*showPoints: False",
                "*XGobi*PlotWindow.height: 500",
                "*XGobi*PlotWindow.width: 500",
                "*XGobi*VarPanel.width: 50"),
    title="Regression Geometry",
    vgroups = c(1,1,1),
    std = "msd", # dev = 1.5, #default is 2
    nlinkable = NULL,
    subset = NULL,
    display = NULL)
{
    xgobi(matrx=matrx, collab=collab, rowlab=rowlab, colors=colors, glyphs=glyphs,
          erase=erase, lines=lines, linecolors=linecolors,
          resources=resources, title=title, vgroups=vgroups, std=std, ## dev=dev,
          nlinkable=nlinkable, subset=subset, display=display,
          permfile=permfile)
}

## These should really match the *brushColor[0-9] `fallback resources' in
## XGOBISRC/src/xgobitop.h :
## [these are ok for the "Dec. 1999" version of xgobi]:
xgobi.colors.default <- c("DeepPink", "OrangeRed1", "DarkOrange", "Gold",
                          "Yellow", "DeepSkyBlue1", "SlateBlue1", "YellowGreen",
                          "MediumSpringGreen", "MediumOrchid")

if(!exists("Sys.sleep", mode = "function")) {
    warning("\n*** Your R version is outdated.\n*** Consider upgrading!!\n")
    Sys.sleep <- function(time) system(paste("sleep",time))
}

############################### xgobi #####################################
xgobi <- function(matrx,
                  collab = dimnames(matrx)[[2]],
                  rowlab = dimnames(matrx)[[1]],
                  colors = NULL,
                  glyphs = NULL,
                  erase = NULL,
                  lines = NULL,
                  linecolors = NULL,
                  resources = NULL,
                  title = deparse(substitute(matrx)),
                  vgroups= NULL,
                  std = "mmx",
                  nlinkable = NULL,
                  subset = NULL,
                  display= NULL,
                  keep = FALSE,
                  fprefix= "xgobi-",
                  permfile = "/tmp/xgobi-R.junk")
{
    x <- if(is.expression(matrx) || is.character(matrx)) eval(matrx) else matrx
    if(is.data.frame(x)) x <- data.matrix(x)
    if (any(is.infinite(x[!is.na(x)])))
        stop("Sorry, xgobi can't handle Inf's")
    if (!is.null(title) && !is.character(title))
        stop("title must be a character string")
    # dfile <- tempfile(paste(fprefix, abbreviate(gsub("[^A-Za-z0-9]","",title), 5), sep=""))
    # file(description = "", open = "", blocking = TRUE,
    #      encoding = getOption("encoding"))
    # next line is HJMs, trying to fix the temp directory problem..
    # dfile <- file(permfile, "w")
    # have to do this extra assignment bc file() returns a connection object
    # and tempfile() returns a STRING that's the NAME of a temp file that is
    # guaranteed to be unique. Now it's set OK..
    filehandle <- file(permfile, "w")
    dfile = permfile
    write.table(x, file = dfile, quote = FALSE,
                row.names = FALSE, col.names = FALSE)
    if(!keep) on.exit(unlink(dfile), add = TRUE)
    args <- paste("-std", std) ##, "-dev", dev)

    ## Column / Var labels ###
    if (!is.null(collab)) {
        if (!is.vector(collab) || !is.character(collab))# check data type
            stop("The `collab' argument needs to be a character vector")
        if (!missing(collab) && length(collab) != NCOL(x))
            stop("`collab' has wrong length (not matching NCOL(x))")
        cat(collab, file = (colfile <- paste(dfile, ".col", sep="")), sep="\n")
        if(!keep) on.exit(unlink(colfile), add = TRUE)
    }
    ## Row / Case labels ###
    if (!is.null(rowlab)) {
        if (!is.vector(rowlab) || !is.character(rowlab))
            stop("The `rowlab' argument needs to be a character vector")
        if (!missing(rowlab) && length(rowlab) != NROW(x))
            stop("`rowlab' has wrong length (not matching NROW(x))")
        cat(rowlab, file = (rowfile <- paste(dfile, ".row", sep="")), sep="\n")
        if(!keep) on.exit(unlink(rowfile), add = TRUE)
    }
    ## Variable groups ##
    if (!is.null(vgroups)) {
        if (!is.vector(vgroups) || !is.numeric(vgroups))
            stop("The `vgroups' argument needs to be a numeric vector")
        cat(vgroups, file=(vgfile <- paste(dfile,".vgroups",sep="")), sep="\n")
        if(!keep) on.exit(unlink(vgfile), add = TRUE)
    }
    ## Colors ##
    if (!is.null(colors)) {
        if (!is.vector(colors) || !is.character(colors))
            stop("The `colors' argument needs to be a character vector")
        cat(colors, file = (clrfile <- paste(dfile,".colors",sep="")), sep="\n")
        if(!keep) on.exit(unlink(clrfile), add = TRUE)
    }
    ## Glyphs ##
    if (!is.null(glyphs)) {
        if (!is.vector(glyphs) || !is.numeric(glyphs))
            stop("The `glyphs' argument needs to be a numeric vector")
        glyphfile <- paste(dfile, ".glyphs", sep = "")
        cat(glyphs, file = glyphfile, sep = "\n")
        if(!keep) on.exit(unlink(glyphfile), add = TRUE)
    }
    ## Erase ##
    if (!is.null(erase)) {
        if (!is.vector(erase) || !is.numeric(erase))
            stop("The `erase' argument needs to be a numeric vector")
        erasefile <- paste(dfile, ".erase", sep = "")
        cat(erase, file = erasefile, sep = "\n")
        if(!keep) on.exit(unlink(erasefile), add = TRUE)
    }
    ## Connected lines ##
    if (!is.null(lines)) {
        if (!is.matrix(lines) || !is.numeric(lines) || dim(lines)[2] != 2)
            stop("The `lines' argument must be a numeric 2-column matrix")
        linesfile <- paste(dfile, ".lines", sep = "")
        unlink(linesfile)# in case it existed
        if (nrow(lines) > 0) {
            for (i in 1:nrow(lines))
                cat(lines[i, ], "\n", file = linesfile, append = TRUE)
        }
        if(!keep) on.exit(unlink(linesfile), add = TRUE)
        ## Line colors ##
        if (!is.null(linecolors)) {
            if (!is.vector(linecolors) || !is.character(linecolors))
                stop("The `linecolors' argument must be a character vector")
            linecolorfile <- paste(dfile, ".linecolors", sep = "")
            cat(linecolors, file = linecolorfile, sep = "\n")
            if(!keep) on.exit(unlink(linecolorfile), add = TRUE)
        }
    }
    ## Resources ##
    if (!is.null(resources)) {
        if (!is.vector(resources) || !is.character(resources))
            stop("The `resources' argument must be a character vector")
        resourcefile <- paste(dfile, ".resources", sep = "")
        cat(resources, file = resourcefile, sep = "\n")
        if(!keep) on.exit(unlink(resourcefile), add = TRUE)
    }
    ## nlinkable ##
    if (!is.null(nlinkable)) {
        nlinkable <- as.integer(nlinkable)
        if (length(nlinkable) > 1)
            stop("The `nlinkable' argument must be a scalar integer")
        linkablefile <- paste(dfile, ".nlinkable", sep = "")
        cat(nlinkable, "\n", file = linkablefile)
        if(!keep) on.exit(unlink(linkablefile), add = TRUE)
    }
    ## subset ##
    subsetarg <- ""
    if (!is.null(subset)) {
        subset <- as.integer(subset)
        if (length(subset) > 1)
            stop("The `subset' argument must be a scalar integer")
        if (subset == 0 || subset > nrow(x))
            stop("The `subset' argument must be >0 and <= nrows")
        subsetarg <- paste(" -subset ", subset, sep = "")
        args <- paste(args, subsetarg, sep = " ")
    }
    if (!is.null(display)) {
        if (!is.character(display))
            warning("display must be a character string")
        else args <- paste("-display", display, args)
    }
    args <-
paste("-title", paste("'", title, "'", sep = ""), args) ### Note to installer: ### Here you will need to specify the path to the xgobi executable ### on your system (here we assume it *is* in the user's PATH :) command <- paste("%%XGOBI%%", args, dfile, "&") cat(command, "\n") s <- system(command, FALSE) ## Now wait a bit before unlinking all the files via on.exit(.) : if(!keep) Sys.sleep(3) invisible(s) } ######################################### xgvis ############################### xgvis <- function(dmat = NULL, edges = NULL, pos = NULL, rowlab = dimnames(dmat)[[1]], colors = NULL, glyphs = NULL, erase = NULL, lines = NULL, linecolors = NULL, resources = NULL, display = NULL, keep = FALSE, fprefix= "xgvis-") { if (is.null(edges) && is.null(pos) && is.null(dmat)) stop("One of dmat, edges, or pos must be present") basefile <- tempfile(fprefix) ## distance matrix ### if (!is.null(dmat)) { dmat <- eval(dmat) if (any(isinf <- is.infinite(dmat[!is.na(dmat)]))) { warning("xgvis can't handle Inf's in dmat; replaced with NA") dmat[isinf] <- NA } dfile <- paste(basefile, ".dist", sep="") write(t(dmat), file = dfile, ncolumns = ncol(dmat)) if(!keep) on.exit(unlink(dfile), add=TRUE) } ## Edges ### if (!is.null(edges)) { # check data type if (!is.matrix(edges) || !is.numeric(edges) || dim(edges)[2] != 2) stop("The `edges' argument must be a numeric 2-column matrix") edgesfile <- paste(basefile, ".edges", sep="") if (nrow(edges) > 0) { write(t(edges), file = edgesfile, ncol=2) } if(!keep) on.exit(unlink(edgesfile), add=TRUE) } ## position matrix ### if (!is.null(pos)) { pos <- eval(pos) if (any(isinf <- is.infinite(pos[!is.na(pos)]))) { warning("xgvis can't handle Inf's in pos; replaced with NA") pos[isinf] <- NA } pfile <- paste(basefile, ".pos", sep="") write(t(pos), file = pfile, ncolumns = ncol(pos)) if(!keep) on.exit(unlink(pfile), add = TRUE) } ## Row / Case labels ### if (!is.null(rowlab)) { if (!is.vector(rowlab) || !is.character(rowlab))# check data type stop("The 
`rowlab' argument needs to be a character vector") if (!missing(rowlab) && length(rowlab) != NROW(dmat)) stop("`rowlab' has wrong length (not matching NROW(dmat))") cat(rowlab, file = (rowfile <- paste(basefile, ".row", sep="")), sep="\n") if(!keep) on.exit(unlink(rowfile), add = TRUE) } ## Colors ### if (!is.null(colors)) { # check data type if (!is.vector(colors) || !is.character(colors)) stop("The `colors' argument needs to be a character vector") colorfile <- paste(basefile, ".colors", sep="") write(colors, file = colorfile, ncol=1) if(!keep) on.exit(unlink(colorfile), add = TRUE) } ## Glyphs ### if (!is.null(glyphs)) { # check data type if (!is.vector(glyphs) || !is.numeric(glyphs)) stop("The `glyphs' argument needs to be a numeric vector") glyphfile <- paste(basefile, ".glyphs", sep="") write(glyphs, file = glyphfile, ncol=1) if(!keep) on.exit(unlink(glyphfile), add = TRUE) } ## Erase ### if (!is.null(erase)) { # check data type if (!is.vector(erase) || !is.numeric(erase)) stop("The `erase' argument needs to be a numeric vector") erasefile <- paste(basefile, ".erase", sep="") write(erase, file = erasefile, ncol=1) if(!keep) on.exit(unlink(erasefile), add = TRUE) } ## Connected lines ### if (!is.null(lines)) { # check data type if (!is.matrix(lines) || !is.numeric(lines) || dim(lines)[2] != 2) stop("The `lines' argument must be a numeric 2-column matrix") linesfile <- paste(basefile, ".lines", sep="") if (nrow(lines) > 0) { write(t(lines), file = linesfile, ncol=2) if(!keep) on.exit(unlink(linesfile), add = TRUE) } } ## Line colors ### if ((!is.null(lines) || !is.null(edges)) && !is.null(linecolors)) { # check data type if (!is.vector(linecolors) || !is.character(linecolors)) stop("The `linecolors' argument must be a character vector") linecolorfile <- paste(basefile, ".linecolors", sep="") write(linecolors, file = linecolorfile, ncol=1) if(!keep) on.exit(unlink(linecolorfile), add = TRUE) } ## Resources ### if (!is.null(resources)) { # check data type if 
(!is.vector(resources) || !is.character(resources)) stop("The `resources' argument must be a character vector") resourcefile <- paste(basefile, ".resources", sep="") write(resources, file = resourcefile, ncol=1) if(!keep) on.exit(unlink(resourcefile), add = TRUE) } ### Note to installer: ### Here you need to specify the path to the xgvis executable / batch file ### on your system. command <- paste("xgvis", basefile, "&") cat(command, "\n") ## dos: ## invisible(dos(command, multi= F, minimized=T, output.to.S=F, translate=T)) s <- system(command, FALSE) ## Now wait a bit before unlinking all the files via on.exit(.) : if(!keep) Sys.sleep(3) invisible(s) } --- NEW FILE: xgobi_1.2-7.tar.gz --- (This appears to be a binary file; contents omitted.) |
|
From: <man...@us...> - 2002-11-13 21:45:20
|
Update of /cvsroot/genex/genex-server
In directory usw-pr-cvs1:/tmp/cvs-serv29657
Modified Files:
INSTALL
Added Files:
Porting.txt Using.CPAN
Log Message:
edited INSTALL, broke out some text to additional files
--- NEW FILE: Porting.txt ---
Some notes on Alien Platforms
=============================
* GeneX has been developed on Linux and Solaris and those are the current
systems we support. Porting to another Unix-based system should not be
difficult but we can't support all platforms obviously.
* That said, the system is Open Source and we encourage people to try to
break it by porting it to other platforms (that which does not kill us
makes us stronger..) We've had requests to port it to various other
platforms; below are some preliminary results and thoughts on this.
* Mac OSX - as essentially BSD Unix under the hood, this should be a pretty
straight port and as Daniel E Sabath <dsabath@u.washington.edu> discovered,
it pretty much is. Gotchas are that since the g77 package isn't widely
available for OSX, you may have to grab a binary package of R that has it
built with another Fortran compiler, and due to a strange quirk of
packaging, Apple has included the non-GNU version of several file utilities
so you'll have to go get the utilities and compile them yourself. The
installer should catch this deviation and tell you in a (possibly hard to
understand) error message that starts:
"Ugh! What gibberish! The [app] you have certainly does not
taste like GNU [app]."
and then it'll tell you where to get the appropriate one.
* Microsoft Windows NT & 2000 - these creatures are apparently POSIX compliant
at some level and in a miracle of programming, the Cygnus people have
provided a toolkit which enables them to behave like a unix system - the
Cygwin tools:
http://sources.redhat.com/cygwin/ [home]
or
http://xfree86.cygwin.com/docs/howto/cygwin-installing.html [fr XFree86]
which gives Windows almost all the resources of Linux/Unix, including an X
server if you install the XFree86 package as well. It's a huge package
(175MB installed), but it's free, very high quality, and capable. Other
more complex systems have been ported to this environment and as far as I
know, almost all of the subsystems that GeneX requires can be compiled and run
on the Cygwin platform.
HOWEVER, we have not done this and we'll probably not have the time for a
while so if there are any enterprising young hackers out there with time on
their grubby little hands looking to help out, this would obviously be a
useful thing to do.
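The GNU-utilities check that the installer performs can be approximated by
hand before running it. The sketch below is an assumption-based illustration,
not the installer's actual code: it relies only on the fact that GNU tools
answer '--version', which the stock Apple/BSD file utilities do not.

```shell
# Rough stand-in for the installer's GNU check (e.g. on Mac OSX).
# Assumption: GNU versions of the file utilities identify themselves in
# their --version output; BSD versions fail on the flag entirely.
if ls --version 2>/dev/null | grep -q GNU; then
    echo "GNU fileutils found"
else
    echo "non-GNU fileutils: fetch and build the GNU versions first"
fi
```

Running this for each required utility (ls, cp, sort, ...) tells you up front
which ones need to be replaced before the installer complains.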
--- NEW FILE: Using.CPAN ---
Using the CPAN shell to install Perl Modules
--------------------------------------------
To install the required Perl modules (listed in the INSTALL file), we
STRONGLY recommend using the CPAN shell.
To learn more about using CPAN run 'perldoc CPAN' or 'man CPAN'. If you
plan to install the modules in the default perl library, you will need
to run it as root. In short:
$ su
<type-your-root-password>
$ cpan
(If you are using a version of Perl before 5.8.0, you may not have
the 'cpan' program on your system, so you will need to run the
following command instead: 'perl -MCPAN -e shell')
If the person who installed perl has not yet configured CPAN, it
will ask you the following question:
Are you ready for manual configuration? [yes]
Just type 'no' and hit enter. Perl will auto-configure CPAN for
you, and this should work fine. However, if the person who installed
CPAN did not configure the URL list, it will force you to
proceed. The defaults should all work fine, and when you get to the
URL configuration, just pick a server close to you.
You should now get the CPAN prompt, 'cpan> '. To download,
configure, test, and install any perl module, just type:
install Full::Module::Name
at the 'cpan>' prompt. One other really nice feature of CPAN is
that if a module has defined pre-requisites, then CPAN will pause
building the current module and fetch, build, and install the
pre-requisites first, then continue building the current module.
For example, to install 'CGI.pm' enter:
cpan> install CGI
Perl will first download a number of module control files, and
then fetch the CGI tarball, and will automatically run the normal
perl module install procedure:
perl Makefile.PL
make
make test
make install
If, for some reason, Perl encounters errors while running 'make
test', it will halt and will not install the module. If you decide
it is a minor error, and want to install anyway, you can enter the
following:
cpan> force install Full::Module::Name
and perl will not stop at errors.
If you don't know exactly what you need to install you can ask the
CPAN shell for some hints:
cpan> i /CGI/
asks for all info on names that contain 'CGI' (which are quite
numerous). You can then select the one you need and install it.
Remember, before starting CPAN to install DBD::Pg, you must set the
POSTGRES_INCLUDE and POSTGRES_LIB environment variables, or the
build will fail at the 'perl Makefile.PL' phase.
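Putting those last two points together, a non-interactive DBD::Pg install
might look like the sketch below. The two Postgres paths are assumptions for
a typical Debian-style layout, not values taken from this document; point
them at wherever your Postgres headers and libraries actually live.

```shell
# Tell DBD::Pg's Makefile.PL where the Postgres headers and libraries live.
# ASSUMED paths (Debian-style layout) -- adjust for your installation.
export POSTGRES_INCLUDE=/usr/include/postgresql
export POSTGRES_LIB=/usr/lib/postgresql

# With the variables set, the non-interactive equivalent of
# 'cpan> install DBD::Pg' would be (run as root):
#   perl -MCPAN -e 'install DBD::Pg'
echo "POSTGRES_INCLUDE=$POSTGRES_INCLUDE"
echo "POSTGRES_LIB=$POSTGRES_LIB"
```

Setting the variables in the same shell that launches CPAN is what matters;
exporting them in a different terminal will not help the build.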
Index: INSTALL
===================================================================
RCS file: /cvsroot/genex/genex-server/INSTALL,v
retrieving revision 1.21
retrieving revision 1.22
diff -C2 -d -r1.21 -r1.22
*** INSTALL 20 Oct 2002 21:17:37 -0000 1.21
--- INSTALL 13 Nov 2002 21:45:17 -0000 1.22
***************
*** 3,54 ****
# CVS $Id$
- There are some basic pieces that you need to have in place before you
- start.
-
- Some notes on Alien Platforms
- =============================
-
- * GeneX has been developed on Linux and Solaris and those are the current
- systems we support. Porting to another Unix-based system should not be
- difficult but we can't support all platforms obviously.
-
- * That said, the system is Open Source and we encourage people to try to
- break it by porting it to other platforms (that which does not kill us
- makes us stronger..) We've had requests to port it to various other
- platforms; below are some preliminary results and thoughts on this.
-
- * Mac OSX - as essentially BSD Unix under the hood, this should be a pretty
- straight port and as Daniel E Sabath <dsabath@u.washington.edu> discovered,
- it pretty much is. Gotchas are that since the g77 package isn't widely
- available for OSX, you may have to grab a binary package of R that has it
- built with another Fortran compiler, and due to a strange quirk of
- packaging, Apple has included the non-GNU version of several file utilities
- so you'll have to go get the utilities and compile them yourself. The
- installer should catch this deviation and tell you in a (possibly hard to
- understand) error message that starts:
- "Ugh! What gibberish! The [app] you have certainly does not
- taste like GNU [app]."
- and then it'll tell you where to get the appropriate one.
-
- * Microsoft Windows NT & 2000 - these creatures are apparantly POSIX compliant
- at some level and in a miracle of programming, the Cygnus people have
- provided a toolkit which enables them to behave like a unix system - the
- Cygwin tools:
- http://sources.redhat.com/cygwin/ [home]
- or
- http://xfree86.cygwin.com/docs/howto/cygwin-installing.html [fr XFree86]
- which gives Windows almost all the resources of Linux/Unix, including an X
- server if you install the XFree86 package as well. It's a huge package
- (175MB installed), but it's free, very high quality, and capable. Other
- more complex systems have been ported to this environment and as far as I
- know, almost of the subsystems that GeneX requires can be compiled and run
- on the Cygwin platform.
-
- HOWEVER, we have not done this and we'll probably not have the time for a
- while so if there are any enterprising young hackers out there with time on
- their grubby little hands looking to help out, this would obviously be a
- useful thing to do.
-
-
What you need before you configure GeneX
========================================
--- 3,6 ----
***************
*** 97,102 ****
are installing a package, such as debian's libdbd-pg-perl, then
you will not need them.
!
! * gnu textutils
* gnu fileutils
* sendmail (or a sendmail replacement such as exim)
--- 49,71 ----
are installing a package, such as debian's libdbd-pg-perl, then
you will not need them.
!
! For Debian:
! if you add the following lines to your /etc/apt/sources.list:
!
! #Diane trout's libxerces site
! deb http://woldlab.caltech.edu/software woldlab main
!
! the following line (as root) will install and configure all the pieces noted
!
! apt-get install postgresql postgresql-dev libdbd-pg-perl expat \
! perlSGML libxeres17 libxeres17-dev libxml-xerces-perl libxaw7-dev
!
!
! Required Utilities
! ==================
! The following utilities are included in all Linux distributions and are
! installed with many proprietary Unices:
!
! * gnu textutils
* gnu fileutils
* sendmail (or a sendmail replacement such as exim)
***************
*** 107,111 ****
esp. the tutorial: <URL http://httpd.apache.org/docs/howto/ssi.html>
this can often be done by adding the following lines to your [httpd.conf]
! file and then rebooting.
# Allow Server-side Includes
--- 76,80 ----
esp. the tutorial: <URL http://httpd.apache.org/docs/howto/ssi.html>
this can often be done by adding the following lines to your [httpd.conf]
! or [commonhttpd.conf] file and then rebooting.
# Allow Server-side Includes
***************
*** 114,120 ****
AddHandler server-parsed .shtml
DirectoryIndex index.html index.htm index.shtml index.cgi
! Optional:
! * R (>= 1.1.1) => http://cran.r-project.org
R is needed by many of the analysis tools. You will be able to
install genex without it, but you will not be able to use many
--- 83,92 ----
AddHandler server-parsed .shtml
DirectoryIndex index.html index.htm index.shtml index.cgi
+
! Optional Applications:
! ======================
!
! * R (>= 1.5.1) => http://cran.r-project.org
R is needed by many of the analysis tools. You will be able to
install genex without it, but you will not be able to use many
***************
*** 123,134 ****
* g77 GNU Fortran compiler (req. to compile R from source)
* ghostscript => http://www.cs.wisc.edu/~ghost
* xgobi/xgobi => http://www.research.att.com/areas/stat/xgobi
* vncserver => http://www.uk.research.att.com/vnc
* mpage => http://rpmfind.net/linux/RPM/mpage.html
* libexpat => http://sourceforge.net/projects/expat/
* perlSGML => http://genex.ncgr.org/genex/download/genex-server/perlSGML.2001Jan23.tar.gz
! supplies the dtd2html utility to generate HTML versions of the DTDs
* xcluster => http://genome-www.stanford.edu/~sherlock/cluster.html
! * jpythonc => http://www.jpython.org
!! These should be installed BEFORE ..repeat.. !!
--- 95,111 ----
* g77 GNU Fortran compiler (req. to compile R from source)
* ghostscript => http://www.cs.wisc.edu/~ghost
+ (often installed by default)
* xgobi/xgobi => http://www.research.att.com/areas/stat/xgobi
* vncserver => http://www.uk.research.att.com/vnc
* mpage => http://rpmfind.net/linux/RPM/mpage.html
* libexpat => http://sourceforge.net/projects/expat/
+ (if you didn't use the apt-get command above)
* perlSGML => http://genex.ncgr.org/genex/download/genex-server/perlSGML.2001Jan23.tar.gz
! (supplies the dtd2html utility to generate HTML versions
! of the DTDs - if you didn't use the apt-get command above)
* xcluster => http://genome-www.stanford.edu/~sherlock/cluster.html
! * jythonc => http://www.jython.org (only to recompile the jar file
! from supplied python - usually only needed by especially
! masochistic developers)
!! These should be installed BEFORE ..repeat.. !!
***************
*** 151,155 ****
must be modified. To do this you must edit the Postgres
configuration file called pg_hba.conf (installed under
! /etc/postgresql in debian, and ??? under RedHat). You must ensure
that the first two lines after all the comments look like:
--- 128,133 ----
must be modified. To do this you must edit the Postgres
configuration file called pg_hba.conf (installed under
! /etc/postgresql in debian, and /var/lib/pgsql/data/pg_hba.conf
! under Mandrake and ??? under RedHat). You must ensure
that the first two lines after all the comments look like:
***************
*** 165,257 ****
IF YOU FAIL TO SET THIS UP PROPERLY, THE INSTALLATION OF THE GENEX
DB WILL FAIL.
! 2) Perl & Using the CPAN shell to install Modules
! ==============================================
!
! We also require the following modules that are not included with the
! standard perl library:
!
! Term::ReadKey - used by Configure
! CGI - used by most cgi scripts
! XML::Xerces - used by the GeneXML utils
! Apache::Session - used authentication and session management scripts
! Digest::MD5 - used by LoginUtils.pm
! DBI - used by everything (this *is* a gene expression DATABASE ;-)
! DBD::Pg - for the Postgres installation (see Known Problems below)
! HTML::Mason - for the new WWW/CGI applications
! Class::ObjectTemplate::DB - used by everything
! Bio::MAGE - used for importing and exporting MAGE-ML
!
! Using the CPAN shell to install Perl Modules
! --------------------------------------------
!
! To install any of the above, we STRONGLY recommend using the CPAN shell. To
! learn more about using CPAN run 'perldoc CPAN' or 'man CPAN'. If you plan to
! install the modules in the default perl library, you will need to run it as
! root. In short:
!
! $ su
! <type-your-root-password>
! $ cpan
!
! (If you are using a version of Perl before 5.8.0, you may not have
! the 'cpan' program on your system, so you will need to run the
! following command instead: 'perl -MCPAN -e shell')
!
!
! If the person who installed perl has not yet configured CPAN, it
! will ask you the following question:
!
! Are you ready for manual configuration? [yes]
!
! Just type 'no' and hit enter. Perl will auto-configure CPAN for
! you, and this should work fine. However, if the person who installed
! CPAN did not configure the URL list, it will force you to
! proceed. The defaults should all work fine, and when you get to the
! URL configuration, just pick a server close to you.
!
! You should now get the CPAN prompt, 'cpan> ', to download,
! configure, test, and install any perl module, just type:
!
! install Full::Module::Name
!
! at the 'cpan>' prompt. One other really nice feature of CPAN, is
! that if a module has defined pre-requisites, then CPAN will pause
! building the current module and fetch, build, and install the
! pre-requisites first, then continue building the current module.
! For example, to install 'CGI.pm' enter:
!
! cpan> install CGI
!
! Perl will first download a number of module control files, and
! then fetch the CGI tarball, and will automatically run the normal
! perl module install procedure:
!
! perl Makefile.PL
! make
! make test
! make install
! If, for some reason, if Perl encounters errors while running make
! test, Perl will halt and will not install the module. If you decide
! it is a minor error, and want to install anyway, you can enter the
! following:
- cpan> force install Full::Module::Name
! and perl will not stop at errors.
! If you don't know exactly what you need to install you can ask the
! CPAN shell for some hints:
! cpan> i /CGI/
! asks for all info on names that contain 'CGI' (which are quite
! numerous). You can then select the one you need and install it.
! Remember, before starting CPAN to install DBD::Pg, you must set the
! POSTGRES_INCLUDE and POSTGRES_LIB environment variables, or the
! build will fail at the 'perl Makefile.PL' phase.
Known Problems using CPAN to install these modules
--- 143,197 ----
IF YOU FAIL TO SET THIS UP PROPERLY, THE INSTALLATION OF THE GENEX
DB WILL FAIL.
+
+ REMOTE ACCESS
+ =============
+ To enable the GeneX 2 DB to be accessible remotely via TCP, install as
+ a standalone system and then configure for TCP access. The additional work
+ is trivial. You need to change the 'pg_hba.conf' file to include a line
+ like the one below (changing the parameters for your case).
! # TYPE DATABASE IP_ADDRESS MASK AUTH_TYPE AUTH_ARGUMENT
! host genex 192.168.1.0 255.255.255.0 password
! which allows machines from a private net (192.168.1.0, masked 255.255.255.0)
! to access only the genex DB if they are known postgres users and supply the
! correct postgresql password. There are more (and less) stringent
! authentication schemes described in the comments in 'pg_hba.conf'.
! the postgres configuration file 'postgresql.conf' (usually kept in the
! same place as the other config files) must include the line:
! tcpip_socket = 1
! (and it does in the default Debian configuration, so Debian users need not
! do additional config changes).
! 2) Perl Modules
! ============
! We require the following modules that are usually not included with the
! standard perl library:
! -- Modules that should install easily --
! Term::ReadKey - used by Configure
! CGI - used by most cgi scripts
! Apache::Session - used by authentication and session management scripts
! Apache::Request - used by HTML::Mason (Mason will catch the dependency and
! install it if using CPAN.)
! Digest::MD5 - used by LoginUtils.pm
! DBI - used by everything (this *is* a gene expression DATABASE)
! HTML::Mason - for the new WWW/CGI applications (has lots of dependencies;
! use the CPAN shell or else)
! Class::ObjectTemplate::DB - used by everything
! Tie::IxHash - required for Bio::MAGE (and if Bio::MAGE cannot be
! installed by CPAN, it will miss the dependency and skip it.)
! -- Modules with known problems (see below) --
+ DBD::Pg - for the Postgres installation (see Known Problems below)
+ XML::Xerces - used by the GeneXML utils (see Known Problems below)
+ Bio::MAGE - used for importing and exporting MAGE-ML; may have to
+ download it manually if CPAN refuses to acknowledge its
+ existence... try:
+ http://search.cpan.org/CPAN/authors/id/J/JA/JASONS/Bio-MAGE-2002-09-02_0.tar.gz
Known Problems using CPAN to install these modules
***************
*** 281,292 ****
2) download version 1.7.0 of XML::Xerces from Apache:
! http://xml.apache.org/dist/xerces-p/stable/
! unpack it and follow the build instructions in the README file
! * Ron Ophir notes that when he tried to use CPAN, it bugged him to install
! 'Bundle::libnet' in order to be able to install NET::FTP. This may cause
! a dependency problem which results in the upgrade of your entire Perl
! package. Be careful about following this advice.
Installation script Assumptions
--- 221,236 ----
2) download version 1.7.0 of XML::Xerces from Apache:
! http://xml.apache.org/dist/xerces-p/stable/
! Unpack it and follow the build instructions in the README file
! NOTE: 'make test' may note a number of failures; if it passed most
! of the tests and did not cause any cores, it's probably OK (the
! failures are probably due to some stringent 'locale' tests).
! * Ron Ophir notes that when he tried to use CPAN, it bugged him
! to install 'Bundle::libnet' in order to be able to install
! 'NET::FTP'. This may cause dependency problems which result in the
! upgrade of your entire Perl package. Be careful about following
! this advice.
Installation script Assumptions
***************
*** 425,430 ****
==============================
In a nutshell:
!
% tar zxf GeneX-Server-X.Y.Z.tar.gz
% cd GeneX-Server-X.Y.Z/
--- 369,382 ----
==============================
In a nutshell:
! From CVS:
!
! cvs -d:pserver:ano...@cv...:/cvsroot/genex co \
! genex-server
!
! From a tarball:
!
% tar zxf GeneX-Server-X.Y.Z.tar.gz
+
+ From there, it's the same.
% cd GeneX-Server-X.Y.Z/
***************
*** 434,441 ****
% make install
! The basic approach is, once having installed the parts above, cd to the
! directory formed by untarring the distribution tar file
! (GeneX-Server-[version#]), and running 'make configure'. This will use
! the Perl executable that comes first in your path.
After configuration, you can install the system by running 'make
--- 386,393 ----
% make install
! Once having installed the parts above, cd to the directory formed by
! untarring the distribution tar file (GeneX-Server-[version#]) and
! run 'make configure'. This will use the Perl executable that comes
! first in your path.
After configuration, you can install the system by running 'make
|