From: Harry M. <hj...@ta...> - 2004-07-02 15:52:57
Thanks, Karen. Am forwarding to the list.

ksc...@gm... wrote:
> Hi Harry,
>
> this is just about perfect. a couple little comments
>
> 1) the Affy software can also emit a plain text 'metric' file
> which contains a single value for each *GENE*, not
> probe pair.
>
> 2) the GCOS/MAS5 summary value per transcript is
> expressed as a *function* of the Average Difference
> of all the mm/pm differences. it's actually a trimmed
> mean, and then they perform a one-step Tukey
> procedure. whatever. it's pretty much the mean.
>
> 3) yes, the data is first scaled for each chip, with a target
> mean intensity of 500 (a common target value). then,
> because it's been shown that the ABSENT, PRESENT
> and MARGINAL calls are pretty much random for signal
> values < 100, these low-intensity values are ignored.
> that's the easy-cheezy way of doing it. a better way
> is to use RMA.
>
> 4) Affy provides you with 3 basic metrics:
> the signal (ave diff between pm/mm's),
> the call (ABSENT, PRESENT, MARGINAL),
> the p-value (how significant is the call)
>
> ===2-color =======================
>
> 5) i don't understand this sentence:
> "For detecting each transcript there are not as many
> spots, but there can be more characterization data for
> each spot."

I think I meant that for 2-color image analysis of each spot, there are not as many semi-independent estimates of expression (just one spot per transcript), but in many of the analysis outputs I've seen, there are tens of variables describing the fitness of that spot (columns describing roundness/donutness, emission spectra, emission profile, dip number, etc. - in SMD, there were >60 variables per spot).

> 6) Not sure whether RMA works for 2-color.
> it's designed for Affy, I think.

--
Cheers, Harry
Harry J Mangalam - 949 856 2847 (vox; email for fax) - hj...@ta...
<<plain text preferred>>
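To make point 3 concrete, here is a minimal Perl sketch of the per-chip scaling and low-intensity filtering described above. The target mean of 500 and the 100-intensity floor come from the email; the subroutine name and interface are invented for illustration:

    use strict;
    use warnings;
    use List::Util qw(sum);

    # Scale one chip's signal values to a target mean intensity of 500,
    # then drop values below 100, whose ABSENT/PRESENT/MARGINAL calls
    # are essentially random (per point 3 above).
    sub scale_and_filter {
        my ($signals, $target_mean, $floor) = @_;
        $target_mean ||= 500;
        $floor       ||= 100;
        my $factor = $target_mean / (sum(@$signals) / @$signals);
        return [ grep { $_ >= $floor } map { $_ * $factor } @$signals ];
    }

As the email notes, this is the "easy-cheezy" approach; RMA is the more principled alternative.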
From: Harry J M. <hj...@ta...> - 2004-07-01 22:41:46
Hi All (esp. KAS, to verify that I've got this right),

Off the phone with Karen & Jennifer (just back from OR) discussing what kinds of processing (as opposed to analysis) they'll need for the incoming GE data. Forgive a lot of background, but it may be useful to keep together.

Affy data tends to come to the 'GE analyst' as a set of CEL and/or DAT files. The DAT files are Affy's proprietary image format; they can be converted to JPG or TIFF files in the analysis process, tho they'll lose dynamic range in this process. The CEL files are binary data files which can be converted to tab-delimited text files with a variety of utilities. Karen generally brings the DAT files into the Affy-supplied MAS5 or GCOS software to convert them to a CEL dataset and then reduces that data for analysis.

The CEL data represents the full dataset - for each gene or transcript, there are 11-25 probe pairs (pm & mm), each ~25 bp long. Each probe pair has several parameters, so the entire dataset is quite large. (The Affy software can also emit a plain text 'metric' file which contains a single value for each probe pair.) GeneX wants to store all of this data if it's going to be used for a lab and analysis system. If it's going to be used as a large-scale repository, it probably wants to store just the summary, which is one value per transcript.

The input data is integrated/reduced by the GCOS sw to a single summary value per transcript, usually expressed as the Average Difference of all the mm/pm differences. There are various transforms that can be applied to this summary - BioC does an RMA analysis which normalizes to a log-transform of the values (and the MAS5 sw is supposed to include that as well for next year). Also, many users normalize to an intensity value of ~500 and ignore those values less than 100. (KAS - is this before or after the Average Difference calc'n??) The Affy sw also provides an 'Absent' column which means that the transcript measurement is bad, not necessarily absent.

KAS says that recently the chip-to-chip correlation for Affy is astonishingly high - on the order of 98-99% - so chip-to-chip normalization does not appear to be as big a deal as it once was and many people ignore it. Just the same, Affy does provide on-chip control spots for background, spike-in controls and QC areas to provide this info.

=======================

For 2-color experiments, the raw data is differently filtered images; after optimizing the image and finding the spots, the ratios are calculated by simply dividing the background-subtracted summary values for each spot. For this approach, slide-to-slide normalization is more critical, and the experiments have to provide control spots and spiked-in positives for this purpose. For detecting each transcript there are not as many spots, but there can be more characterization data for each spot. Normalizations can be done via the RMA process or via lowess normalization - both available in the BioConductor pkg.

In both Affy and 2-color assays, the ultimate goal is a data frame of rows and columns containing 1 row per gene ID or transcript, with a number of columns of values, 1 for each kind of experimental procedure. These are all normalization processes - transforms that reduce the number of variables per transcript and hopefully increase their accuracy.

--
Cheers, Harry
Harry J Mangalam - 949 856 2847 (v&f) - hj...@ta...
<<plain text preferred>>
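The 2-color ratio step described above is just a background-subtracted division per spot. A minimal Perl sketch; the hash keys are hypothetical field names, not any actual image-analysis output format:

    use strict;
    use warnings;

    # Background-subtracted ratio for one spot in a 2-color assay:
    # divide the background-corrected summary value of one channel
    # by that of the other, as described in the 2-color section above.
    sub spot_ratio {
        my ($spot) = @_;
        my $ch1 = $spot->{ch1_signal} - $spot->{ch1_background};
        my $ch2 = $spot->{ch2_signal} - $spot->{ch2_background};
        return undef if $ch2 <= 0;    # guard against a zero/negative denominator
        return $ch1 / $ch2;
    }

Slide-to-slide normalization (lowess or RMA, per the email) would then be applied to the resulting ratios.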
From: <ja...@op...> - 2004-07-01 20:48:00
Hey All,

I'm pushing ahead with streamlining the use of external applications in genex. When I tested my original concept last December I hacked it up rather quickly, and hard-coded a lot of pieces that I knew I would have to modify later. So I'm going through and making it a flexible framework that can support many different types of applications, while also ensuring that it is convenient for curators to add tools. If it's too complicated for the curators to add tools, they won't want to do it, the users will get frustrated because Genex won't have the functionality they want, and so they won't use Genex.

Anyhow, I'm documenting the pieces as I go along so that it will be simpler to write a curator's guide to adding tool support to Genex.

The basic concept is simple: adding an external tool is just another type of Protocol. The trick is how to set up all the pieces so the tool gets recognised by the Mason GUIs. For example, what value should the Protocol 'type' be? There are currently two important values: mba-filter and dba-analysis. The first is used when copying data from an MBA table to the Scratch table, the second when running an external data processing or analysis tool.

Another issue is how to configure the Procedure and what Parameters need to be defined. Procedure has a number of important parameters, such as call_style, type, path, and name. path is the file system path used when running an external application; name is used for stored procedures and other methods directly callable by Mason (the webserver). call_style determines how arguments are passed (long options, e.g. --foo; short options, e.g. -a; or order, e.g. foo(a,b,c)). Finally, the framework expects the Procedure to define certain system parameters that will be used when running the tools; these must all be documented in order for curators to understand what is happening.

Cheers,
jas.
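To illustrate the call_style idea, a dispatcher might assemble an external tool's argument list roughly like this. This is a sketch of the concept, not the actual genex framework code:

    use strict;
    use warnings;

    # Build an argument list according to a Procedure's declared
    # call_style. $params is an array ref of [name, value] pairs,
    # already sorted into parameter order.
    sub build_args {
        my ($call_style, $params) = @_;
        my @args;
        for my $p (@$params) {
            my ($name, $value) = @$p;
            if ($call_style eq 'long-options') {
                push @args, "--$name", $value;   # e.g. --foo bar
            }
            elsif ($call_style eq 'short-options') {
                push @args, "-$name", $value;    # e.g. -f bar
            }
            else {                               # 'order': positional only
                push @args, $value;              # e.g. foo(a, b, c)
            }
        }
        return @args;
    }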
From: Brandon K. <ki...@ca...> - 2004-06-30 01:42:30
Hi Guys,

I finally got GeneX installed... I removed all previous versions of MAGE-Perl I could find on the machine and re-installed. Next up, loading some of our data. I'll get back to you guys in the near future with results and/or questions.

-Brandon
From: <ja...@op...> - 2004-06-29 18:23:30
Hey All,

Here are the directions for using a stored procedure SQL method as a data processing procedure in genex. Here are the steps we will take:

1) add any utility SQL methods that are needed
2) use the Mason GUI to add the Procedure to the DB
3) use the Mason GUI to add the Protocol to the DB and associate the Procedure from 2)

We will create a test example.

1) Copy the utility procedure in SQL/Pg/copy_array.sql into a psql shell.

2) Go to the main workspace screen:

   http://<YOUR_SERVER>/genex/mason/login/workspace/workspace.html

Log in as a user with curator privilege - the 'genex_test_curator' user is available with password 'test'. Once logged in, click on the 'Annotation' tab. Scroll down until you see the 'Curator Annotation' navbar. Click on the 'Insert -> Stored Procedure' navbar link. This will take you to:

   /genex/mason/login/workspace/insert/genex-stored-procedure-insert.html

Set 'Procedure Name' => 'compute_ratio'
Set 'Source Code Language' => 'procedural-language'
Erase the 'required you must fill me in' text from the 'Source Code' textarea.
Open SQL/Pg/compute_ratio.plpgsql and copy the whole file into the textarea.
Set 'Write Group' => 'genex_test_curator'
Set 'Number of Parameters' => 2
Hit the 'Add Parameter' button.
==> The page will be refreshed with new options for 2 Parameters

For Parameter 1:
Set 'Name' => dba_pk   !!! this name is important
Set 'Type' => predefined

For Parameter 2:
Set 'Name' => zero_threshold
Set 'DataType' => float
Set 'Type' => user_supplied
Set 'Default Value' => 0.01
Set 'Description' => value to be used in case the denominator of the ratio is zero

Hit the 'Enter Procedure' button. You should get a 'Congratulations!!' message.

3) Click on the Annotation tab again, and scroll down again to the 'Curator Annotation' navbar. Click on the 'Insert -> Data Processing Protocol' link. This will take you to:

   /genex/mason/login/workspace/insert/genex-protocol-insert.html

Set 'Write Group' => 'genex_test_curator'
Set 'Name' => 'compute_ratio'
Set 'Type' => 'dba_analysis'   !!!! This is important
Set 'Procedure' => 'compute_ratio' (should be the only choice)

Hit the 'Submit Protocol' button. You should be taken to the 'Genex Job Status' page, and in about 10 seconds the page should refresh with a 'Success' message. Congratulations! You have successfully entered your first data processing Protocol into genex.

Now to use it on some data. Hit the back button on your browser and click on the 'Workspace' tab. This should present you with a single experiment. Check the checkbox for the experiment and click the 'Analyze Single Experiment' button. This should take you to:

   /genex/mason/login/workspace/display-experiment-bioassays.html

There are three MBAs and one DBA in this experiment. Check the checkbox of the single DBA (the table on the bottom), choose the 'compute_ratio' entry from the drop-down menu next to the 'Run Analysis' button (it should be the only entry), and hit the 'Run Analysis' button. This should take you to a page asking you to confirm the value for the 'zero_threshold' parameter - choose any floating point value you wish and hit the 'Set Parameters' button.

This should send you to a page with a 'PERMISSION DENIED' error. Welcome to the genex security system. You are logged in as the genex_test_curator user; although this user is a curator, that doesn't give the user any special ability to see or modify data that lives in the USER tables - it only gives permission to insert and modify data in CURATOR tables. The stored procedure we are using as a test is very simplistic - when it copies the DBA it preserves all of the parameters of the source DBA, including the write group setting. So if that original write group excluded us, we are creating a new DBA that we don't have permission to modify - hence the PERMISSION DENIED error.

Hit the 'Logout' link on the bar under the tabs, and log in again as the 'genex_test' user with password 'test'. Now repeat the same process for running the stored procedure, and you will get a successful result - because when we copy the DBA, we are in the write group, and so we can compute the ratio.

To see the result, choose the 'Workspace' tab. You will now see that the Experiment has a new DBA. To look at the data, check the Experiment, and hit the 'Analyze Single Experiment' button again. Check the box for the new DBA and hit the 'Export DBA Data for Analysis' button. This will take you to a page saying the data has been written to a file. Follow the 'here' link and you will see a (short) tab-delimited file with your test data in it.

I hope this simple test is a useful intro to the DB. Any and all comments, criticisms, and suggestions are welcome.

Cheers,
jas.
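Behind the Mason GUI, running a stored procedure like compute_ratio ultimately reduces to an ordinary database call. A minimal Perl/DBI sketch, assuming the procedure takes (dba_pk, zero_threshold) as set up in the walkthrough; the DSN, credentials, example key, and the assumption that it returns the new DBA's primary key are all for illustration only:

    use strict;
    use warnings;
    use DBI;

    # Invoke compute_ratio directly, roughly as the framework might
    # after collecting parameter values from the confirmation page.
    my $dbh = DBI->connect('dbi:Pg:dbname=genex', 'genex', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    my ($new_dba_pk) = $dbh->selectrow_array(
        'SELECT compute_ratio(?, ?)',
        undef,
        42,      # dba_pk: primary key of the source DBA (the predefined parameter)
        0.01,    # zero_threshold: the user-supplied parameter, default 0.01
    );

    print "New DBA: $new_dba_pk\n";
    $dbh->disconnect;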
From: <ja...@op...> - 2004-06-29 13:08:26
Hey All,

I've switched port 80 on genex2 to be running apache1 instead of apache2 (SVN). SVN needs to run on port 80 because sf.net blocks all the other ports, so if we want to update the sf.net site we need to reactivate apache2 on port 80. Just an FYI.

Cheers,
jas.
From: <ja...@op...> - 2004-06-29 13:06:34
Hey All,

I've just committed all my latest work. This requires the following:

1) svn update - grab the new files
2) make configure - registers the new DB version
3) sudo make install - this will install all three components: files, modules, *and DB*

Upgrading the DB in this manner will cause a loss of *all* data - the changes needed in the QT tables were too big to warrant the time for a DB update script.

Note: this fixes the MAGE version problem which Brandon discovered - there was a bogus $VERSION string in MAGE.pm.

I will add a note in a followup email on how to use the new data processing code.

Cheers,
jas.
From: <ja...@op...> - 2004-06-29 05:13:54
Harry Mangalam <hj...@ta...> writes:

> What about either the cybert app - it decays to a commandline R app -
> or, if you want to avoid R completely, Gavin's xcluster? - it's a plain
> vanilla C app.

Hey Harry,

So the long and the short of it is, I'd be happy to test out hooking up CyberT. If I understand it correctly, it takes multiple columns of input, where each column is data from a single chip, and produces a single column of output which is a statistical measure of the expression values for the genes. Is this correct?

In order to do this I need to know the following:

1) How is it run? Directly by invoking R? Or from a wrapper script?
2) What components are needed to make it work? There is some C file that must be compiled to a .so file, yes?
3) What are the required arguments to 1)? I need the names, descriptions, and expected data type (int, float, bool, string).
4) What are the optional arguments to 1)? I need the names, descriptions, and expected data type (int, float, bool, string).
5) What is the format of the input data file (tab-delimited I assume, rows being genes, and columns chips)? Does it allow a header? Does it honor a comment character? How does it handle non-existent data points?
6) How is the input data given to 1)? Through a command line option (--input)? Via stdin?
7) How is the output file specified to 1)? Through a command line option (--output)? Via stdout?

Once I'm armed with this, I can create a CyberT Procedure with all the needed Parameters. Then we can create a data processing Protocol that can be invoked using multiple DBAs and producing a single DBA as an output file.

Also, I would need the latest files.

Cheers,
jas.
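For reference, here is a minimal Perl sketch of reading the kind of input file the questions assume - tab-delimited, one row per gene, one column per chip, with a header row. The header, comment, and missing-value handling here are guesses; confirming CyberT's real format is exactly the point of questions 5-7:

    use strict;
    use warnings;

    # Read a tab-delimited expression matrix: gene IDs in the first
    # column, one chip per remaining column.
    sub read_matrix {
        my ($path) = @_;
        open my $fh, '<', $path or die "Can't open $path: $!";
        chomp( my $header = <$fh> );
        my (undef, @chips) = split /\t/, $header;   # assume a header row
        my %data;
        while ( my $line = <$fh> ) {
            chomp $line;
            next if $line =~ /^#/ || $line eq '';   # guessed comment character
            my ($gene, @values) = split /\t/, $line, -1;
            # treat empty cells as missing data points
            $data{$gene} = [ map { length($_) ? $_ : undef } @values ];
        }
        close $fh;
        return ( \%data, \@chips );
    }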
From: <ja...@op...> - 2004-06-29 04:59:25
Harry Mangalam <hj...@ta...> writes:

> What about either the cybert app - it decays to a commandline R app -
> or, if you want to avoid R completely, Gavin's xcluster? - it's a plain
> vanilla C app.
>
> I think he makes it available from his web site, but if not, I'll send
> you the latest version I have direct.

Hey Harry,

I seem to be making a distinction that is confusing to others. Let me clear this up. I see Genex being hooked up to two types of tools:

1) data processing tools
2) data analysis tools

CyberT is a possibility because I think it is a data *processing* tool, but xcluster isn't because it is a data *analysis* tool. Unless I misunderstand it completely, xcluster is a clustering program, and Genex is not set up to store any kind of clustering data (hierarchical nodes with values associated with genes). We *could* down the line, but we aren't there, and after talking with you I was pretty convinced no one would *want* that anyway.

The data processing tools (DP tools) are for massaging the data - normalizing, averaging, subtracting background, doing ratios, weeding out bad data by doing things like dye-swapping correction, across-chip normalization, and other statistical methods (like CyberT??). The whole idea behind DP tools is to make the data set small and of high quality so that you can perform analysis on it. The output data type of these programs is just a data matrix where the rows are genes - if we've merged multiple arrays or multiple spots on an array - and the columns are expression values. This *is* the type of data that Genex can handle, and this is the type of data that can easily be stored back into the DB.

From my limited understanding, researchers have a standard data processing pipeline which they apply to their data to arrive at the final spot data - after which they begin their analysis. My goal was to allow them to do all that work directly within genex instead of needing to export the data as tab-delimited files and run the same pipeline on disk files.

*After* the DP tools have done their job, analysis takes place - the output data type could be anything under the sun, and genex can't handle every possible output type, so that will have to be done by exporting data as tab files and doing the work on disk. We can archive the files for them as part of the experiment, but there's currently no way to use the DB to query the data.

I hope this distinction I'm making between data *processing* and data *analysis* is slightly more clear. If I should be using other terms, please let me know.

Cheers,
jas.
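For concreteness, the DP-tool output matrix described above - one row per gene, one column of expression values per chip - might look like this (gene IDs and values are invented for illustration):

    gene_id      chip1    chip2    chip3
    AT1G01010    512.3    498.7    530.1
    AT1G01020     87.6     91.2     79.4
    AT1G01030   1204.8   1187.5   1220.3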
From: Harry M. <hj...@ta...> - 2004-06-28 20:37:09
What about either the cybert app - it decays to a commandline R app - or, if you want to avoid R completely, Gavin's xcluster? - it's a plain vanilla C app. I think he makes it available from his web site, but if not, I'll send you the latest version I have direct.

hjm

Jason E. Stewart wrote:
> Hey All,
>
> I'd like a short piece of advice. In the process of adding test data
> to the DB install I'd like to add a few useful data processing tools
> so that people can test out how data processing in genex works.
>
> I am making my own test apps that don't do very much - average, ratio,
> etc. But I thought it would be more useful to hook up something that a
> researcher might actually *use*.
>
> Any suggestions? Something in R might be OK, but probably not
> bioconductor for my first attempt. Let that wait til I've got the
> system up and running.
>
> Cheers,
> jas.

--
Cheers, Harry
Harry J Mangalam - 949 856 2847 (vox; email for fax) - hj...@ta...
<<plain text preferred>>
From: <ja...@op...> - 2004-06-28 20:21:28
Hey All,

OK, I didn't make my self-appointed goal of getting my data processing changes committed by Monday, sorry. I ran into a snag I didn't anticipate with needing to revamp the QT Dimension stuff - which is all working quite nicely now. Also, adding scratch views to the DB is really, REALLY simple thanks to using MAGE QT Dimensions.

I wasn't able to finish the data processing testing because the code that exports the DBA data uses the QT Dimension info to figure out which column to export - it picks the column of type 'derived_signal' - but because I was lazy when I first wrote the scratch table insertion code, I didn't actually use QT Dimensions. So today when I ran my data processing code I got a lovely error: No QT Dimension for table 'one_channel_scratch_view'...

Now, thanks to the magic of Perl, 'one_channel_scratch_view' has a QT Dimension! So tomorrow I hope to polish it all off.

Thanks for your patience,
jas.
From: <ja...@op...> - 2004-06-28 20:03:29
Hey All,

I'd like a short piece of advice. In the process of adding test data to the DB install I'd like to add a few useful data processing tools so that people can test out how data processing in genex works.

I am making my own test apps that don't do very much - average, ratio, etc. But I thought it would be more useful to hook up something that a researcher might actually *use*.

Any suggestions? Something in R might be OK, but probably not bioconductor for my first attempt. Let that wait til I've got the system up and running.

Cheers,
jas.
From: Harry M. <hj...@ta...> - 2004-06-28 18:46:18
Hi Hrishi,

I think this web site provides code to work with Zope, a Python-powered content management system. As such, what this URL points to is code that provides the use of OOo to Zope (python gets passed a doc, and then the MIME type says to call OOo and feed it the doc, returning the OOo-processed doc to the browser or for further operations).

So it's similar to what we want for ROO, but it's a bit backwards for what we really want: in this case Zope/python is the calling app and OOo is the processing agent. We want OOo to be the calling app and python/R to be the processing agent. It does show how the process works in the reverse direction, tho, and that's valuable. The PyUNO docs also show a similar flow, with an external python interacting with OOo more as peers, and this seems to be a similar process.

I never saw that site before either - thanks.

Harry

Hrishikesh Deshmukh wrote:
> http://cvs.bluedynamics.org/viewcvs/SmartSections/plugins/

--
Cheers, Harry
Harry J Mangalam - 949 856 2847 (vox; email for fax) - hj...@ta...
<<plain text preferred>>
From: <ja...@op...> - 2004-06-28 09:11:36
Hey All,

If you read the two bugs that just flashed past - I just discovered a major issue with the QuantitationType table. It was never upgraded to use OntologyEntrys and was still using free text strings for every column. If it were important, I could modify my simple DB upgrade script to handle this - export the existing QTs, drop the table, reimport the QTs. But it would take me a while to write that procedure and test it. Since at this point the only data in the QT table is the test data that is loaded at DB install time, I feel it is smarter to just drop the table during an upgrade and reload the test data.

Because of this, I am also going to perform a second difficult-to-upgrade modification on the other two QT-dependent tables: QT Dimension and QT DimensionLink. All three of these tables are currently user tables - i.e. they have row-level security. In reviewing the process of actually adding QT Dimensions to the DB, I've admitted that this makes no sense. These tables should be curator tables with public data, just like ArrayDesign, Reporter, etc.

Harry and Jennifer, what is the status of the DB on genex1? Was it ever upgraded? Has data been entered?

Cheers,
jas.
From: SourceForge.net <no...@so...> - 2004-06-28 08:55:15
Bugs item #981106, was opened at 2004-06-28 02:55
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116453&aid=981106&group_id=16453

Category: DB Schema
Group: Genex-2
Status: Open
Resolution: None
Priority: 9
Submitted By: Jason E. Stewart (jason_e_stewart)
Assigned to: Jason E. Stewart (jason_e_stewart)
Summary: QT Tables want to be curator tables

Initial Comment:
Since it looks like the QT table is going to be heavily modified, I might as well make this change, too. It makes no sense to track security for these objects; they want to be curator controlled.
From: SourceForge.net <no...@so...> - 2004-06-28 08:53:08
Bugs item #981103, was opened at 2004-06-28 02:53
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116453&aid=981103&group_id=16453

Category: DB Schema
Group: Genex-2
Status: Open
Resolution: None
Priority: 9
Submitted By: Jason E. Stewart (jason_e_stewart)
Assigned to: Jason E. Stewart (jason_e_stewart)
Summary: QuantitationType Table needs OntologyEntries not strings

Initial Comment:
The QT table was never converted to use OntologyEntrys; it is still using strings. When entering QT Dimensions we need to convert from MAGE OE entries to Genex OE entries. I'm afraid this is going to require a table drop and reload.
From: <ja...@op...> - 2004-06-27 16:31:08
Hey All,

I didn't get as far as I'd hoped to today (primarily because of a big loooong nap smack in the middle of the day), but I needed to overhaul one of the earliest Mason GUIs I built (workspace.html) because, well, it sucked - full of spaghetti code, because I didn't properly understand Mason when I originally wrote it. Hopefully I understand Mason better now, and the code is easier to maintain.

Genex now has a GUI that properly inserts an external application for data processing as a Procedure into the DB, and after I overhauled my original GUI for adding data processing protocols, you can associate the Procedure with top-level Protocols which can be executed from the Mason Experiment Analysis GUI.

I was hoping to also get a chance to overhaul the old external program execution code - the piece that exports the data from the DB, runs the external program, reads in the output, and stores it back to the DB - but alas, I slept too long. So that will have to wait until tomorrow.

By SOB tomorrow (US East Coast time, that is) I intend to have a fully working version which can support adding both SQL stored procs and external applications as data processing procedures, as well as code to run them against data in the DB and store the results back to the DB. I also intend to have this committed to SVN and tested on genex2. I'm not sure whether I'll have sufficient time to make a nice HOWTO, so I may have to write it in an email and let someone else make the HOWTO.

Cheers,
jas.
From: <ja...@op...> - 2004-06-26 15:40:10
Hi All,

I did a bit more work on SQL stored procs as data processing procedures. I decided a two-step approach was best:

1) Enter the Procedure and Parameter information
2) Create a Protocol using the newly created Procedure

I felt this encouraged better reusability; at least it seemed more natural to me.

I began work on the external data processing tools, and discovered that Procedure desperately needed additional columns (see the SF bugs I created). The GUI framework for this tool was simply stolen from the sproc GUI above - but the internal logic must be changed. Then I have to overhaul the process for executing an external program, including adding the input/output filter mechanism Harry and I discussed. That will be tomorrow's work.

Finally, today I wrote a DB upgrade script to complement the schema diff I am maintaining. The DB script does things like update the Controlled Vocabularies which have been modified - additions only, so that fkeys are preserved. I hope to have this work committed by start of business Monday so that Caltech can test out the new code. That includes all the schema changes, new sample data, sample data processing tools, and some new docs to explain how curators can add DP tools and how users can access the tools once added.

G'night all,
jas.
From: SourceForge.net <no...@so...> - 2004-06-26 15:01:40
Bugs item #980298, was opened at 2004-06-26 09:01
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116453&aid=980298&group_id=16453

Category: DB Schema
Group: Genex-2
Status: Open
Resolution: None
Priority: 9
Submitted By: Jason E. Stewart (jason_e_stewart)
Assigned to: Jason E. Stewart (jason_e_stewart)
Summary: Procedure needs extra columns

Initial Comment:
In order to make data processing procedures work we need extra information:

* parameter call style (order, long option names, short option names)
* path (can't overload name for this)
From: SourceForge.net <no...@so...> - 2004-06-26 14:57:30
Bugs item #980297, was opened at 2004-06-26 08:57
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116453&aid=980297&group_id=16453

Category: DB Schema
Group: Genex-2
Status: Open
Resolution: None
Priority: 9
Submitted By: Jason E. Stewart (jason_e_stewart)
Assigned to: Jason E. Stewart (jason_e_stewart)
Summary: Parameter needs extra columns

Initial Comment:
In order to make data processing procedures work we need extra information:

* parameter order
* data type
* is_required
From: <ja...@op...> - 2004-06-26 05:34:00
Brandon King <ki...@ca...> writes:

> I've almost got GeneX installed

Hey, great! It seems that Harry's hard work on the installer has paid off - that was a lot fewer messages than the previous time you installed genex.

> but I ran into an error with
> MAGE. The version numbers aren't matching up properly and causing
> it to fail. I've removed what I believed to be the old version of
> MAGE and had the GeneX install script reinstall MAGE. Here's the
> error I'm getting:

[snip]

> I assume line 14 in qtdim-insert.pl needs to be updated with version
> 36.0731000580321?

Most definitely *not*. That is a bogus version number, because I didn't understand how Perl's automatic version detection system works. A long time ago Bio::MAGE used strings for version numbers, and those strings get converted into ridiculous numbers like the one above. Recently I discovered this error, and have since adopted a different version strategy:

   yyyymmdd.v

for example:

   20040513.4

which is the fourth version on May 13, 2004. The version from Perl/Bio-MAGE/Makefile.PL is:

   20020902.6

So if you don't see that, it means that either Bio-MAGE is not getting installed, or you have an old version lying around somewhere. I would first remove *all* old versions, and then run:

   sudo make install_modules

from within the genex-server root (or don't bother using sudo if you're running it as root). Check to see that the MAGE modules actually get installed - they should be third in the list, after Class-ObjectTemplate and Class-ObjectTemplate-DB.

Cheers,
jas.
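In other words, the fix is to declare the version as a plain number rather than a string. A minimal sketch of the new scheme - illustrative, not the actual MAGE.pm contents:

    use strict;
    use warnings;

    # New scheme: a plain yyyymmdd.v number. (The old scheme stored a
    # string, which Perl's automatic version check mangled into numbers
    # like 36.0731000580321, per the explanation above.)
    our $VERSION = 20020902.6;    # the 6th version on Sep 2, 2002

    # Callers can then require a minimum version the usual way:
    #   use Bio::MAGE 20020902.6;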
From: Brandon K. <ki...@ca...> - 2004-06-25 23:18:29
Harry Mangalam wrote:
> Hi Brandon,
> Jason has probably gone to sleep and he would know, but I'd agree with
> you - just change the version numbers to match & try again - I've
> never seen that version mismatch error before.

Hi Harry,

I tried to change line 14 of qtdim-insert.pl to use the same version number, but then I got the same error, even though the numbers match perfectly.

------------------------------------------------------
/usr/local/genex/bin/insert-test-data.pl --dbname genex --username genex --password <hidden> --dir /home/king/proj/genex/genex-server
Adding quantarray QuantitationTypeDimension ...
/usr/local/genex/bin/qtdim-insert.pl --dbname genex --username genex --password <hidden> /home/king/proj/genex/genex-server/DB/curated_data/quantarray.xml --name=QuantArray --abbrev_name=dqa --version=3.0 --feat="1.2.3.4" --data_start_regex1=^Begin\s+Data --data_start_regex2=^Number --data_end_regex=^End
Bio::MAGE version 36.0731000580321 required--this is only version 36.0731000580321 at /usr/local/genex/bin/qtdim-insert.pl line 14.
BEGIN failed--compilation aborted at /usr/local/genex/bin/qtdim-insert.pl line 14.
Died at /usr/local/genex/bin/insert-test-data.pl line 88.
Died at Perl/scripts/gendb.pl line 221.

FATAL ERROR
I got an error when I ran the DB installer.
------------------------------------------------------

> Did you install your own version of the MAGE module? The GeneX
> install is supposed to install the compatible svn version along with
> the GeneX server code.

I tried installing the one that comes with the Genex svn checkout. I'm not sure what's going on there.

> harry
From: Harry M. <hj...@ta...> - 2004-06-25 19:33:24
Hi Brandon,

Jason has probably gone to sleep and he would know, but I'd agree with you - just change the version numbers to match & try again - I've never seen that version mismatch error before.

Did you install your own version of the MAGE module? The GeneX install is supposed to install the compatible svn version along with the GeneX server code.

harry

Brandon King wrote:
> Hi Guys,
> I've almost got GeneX installed, but I ran into an error with MAGE.
> The version numbers aren't matching up properly and causing it to fail.
> I've removed what I believed to be the old version of MAGE and had the
> GeneX install script reinstall MAGE. Here's the error I'm getting:
>
> Bio::MAGE version 20020902.3 required--this is only version
> 36.0731000580321 at /usr/local/genex/bin/qtdim-insert.pl line 14.
> BEGIN failed--compilation aborted at
> /usr/local/genex/bin/qtdim-insert.pl line 14.
> Died at /usr/local/genex/bin/insert-test-data.pl line 88.
> Died at Perl/scripts/gendb.pl line 221.
>
> FATAL ERROR
>
> I assume line 14 in qtdim-insert.pl needs to be updated with version
> 36.0731000580321?
>
> -Brandon

--
Cheers, Harry
Harry J Mangalam - 949 856 2847 (vox; email for fax) - hj...@ta...
<<plain text preferred>>
From: Brandon K. <ki...@ca...> - 2004-06-25 19:15:07
Hi Guys,

I've almost got GeneX installed, but I ran into an error with MAGE. The version numbers aren't matching up properly and causing it to fail. I've removed what I believed to be the old version of MAGE and had the GeneX install script reinstall MAGE. Here's the error I'm getting:

--------------------------------------------------------
/usr/local/genex/bin/insert-test-data.pl --dbname genex --username genex --password <hidden> --dir /home/king/proj/genex/genex-server
Adding quantarray QuantitationTypeDimension ...
/usr/local/genex/bin/qtdim-insert.pl --dbname genex --username genex --password <hidden> /home/king/proj/genex/genex-server/DB/curated_data/quantarray.xml --name=QuantArray --abbrev_name=dqa --version=3.0 --feat="1.2.3.4" --data_start_regex1=^Begin\s+Data --data_start_regex2=^Number --data_end_regex=^End
Bio::MAGE version 20020902.3 required--this is only version 36.0731000580321 at /usr/local/genex/bin/qtdim-insert.pl line 14.
BEGIN failed--compilation aborted at /usr/local/genex/bin/qtdim-insert.pl line 14.
Died at /usr/local/genex/bin/insert-test-data.pl line 88.
Died at Perl/scripts/gendb.pl line 221.

FATAL ERROR
I got an error when I ran the DB installer.
--------------------------------------------------------

I assume line 14 in qtdim-insert.pl needs to be updated with version 36.0731000580321?

-Brandon
From: <ja...@op...> - 2004-06-25 18:06:24
Hey All,

I just added a Mason GUI for running data processing algorithms that are SQL stored procedures. Just as a test, I ported one of Michael Pear's old scripts from Genex1. Here's the outline of how to make it work:

1) Someone has to write the SQL and add it to the DB
2) There is a Mason GUI for adding a Protocol for a sproc
3) Then the sproc shows up in the data processing algorithms drop-down menu on the analyze experiment page

If the procedure has specified parameters of type 'user_supplied', the Mason GUI will pop up a page asking the user to supply the values. I believe this is *very* useful - if I do say so myself - because it means that we don't have to make a special application page for each external app that we want to run (like the old CyberT page). The page is automatically generated by Mason whenever the tool is run. This is done through the magic of the new Protocol/Step/Procedure model we stole from ESTAP.

When the sproc is added as a Protocol, the curator can add any number of Parameters, specifying the following info for each one (see the sketch after this message):

* optional or required
* user-supplied or predefined
* default value
* description
* parameter order (for passing the parameters in the proper order to the procedure call - for languages that don't use named parameters)

So if the Procedure has any Parameters of type 'user-supplied', it asks the user to confirm the values of the Parameters, giving the default value (if any was given). If the Parameter is given a good enough description, the page is quite meaningful. Plus, the description for the Procedure itself can be added to the top of the page - I haven't done this yet (just thought of it).

I've only just begun to test this out, so I don't have a 'best practices' document yet, but it is important how the sprocs are named, and how the parameters are named as well. Once I have a few more test cases I'll commit the code so others can try it out. Then I'll move on to running external applications (like Perl scripts) as data processing tools. This was working previously, but it broke a while back.

Cheers,
jas.
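A sketch of the Parameter metadata described above, as a Perl data structure. The fields mirror the bullet list, and the example values echo the compute_ratio walkthrough earlier in the thread; the actual representation inside genex may differ:

    use strict;
    use warnings;

    # Parameter metadata a curator might supply when wrapping a stored
    # procedure as a data processing Protocol.
    my @parameters = (
        {
            name        => 'dba_pk',
            type        => 'predefined',      # filled in by the framework
            is_required => 1,
            order       => 1,                 # positional call styles need this
        },
        {
            name        => 'zero_threshold',
            type        => 'user_supplied',   # Mason asks the user to confirm
            data_type   => 'float',
            is_required => 0,
            default     => 0.01,
            description => 'value used when the denominator of the ratio is zero',
            order       => 2,
        },
    );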