From: Egon W. <ego...@gm...> - 2012-03-18 07:07:01
Hi all,

I am very happy to let you know that the below-mentioned NRNB project, Bioclipse and the CDK have (again) found their way into the Google Summer of Code. The main projects for this GSoC are WikiPathways with PathVisio and Cytoscape, but there are in total 4 project ideas that involve these two Blue Obelisk projects.

Two project ideas involve writing Bioclipse plugins, one for PathVisio, one for the Cytoscape huge graph rendering functionality. When selecting content in these two widgets, they will pass around selection events, allowing further detail to be visualized in Bioclipse's other views. Two other ideas involve the CDK. One is a plugin for PathVisio showing molecular properties related to NMR and mass spectrometry (metabolomics oriented). The other idea is to update the Cytoscape plugin for visualizing chemical structures, which was in fact developed in a previous GSoC.

I invite all students who want a really cool summer job to brush up their programming skills, write a terrific application (there are limited seats; only the best applications will be accepted; starting coding and interacting early with developers always helps), and hack on Bioclipse or the CDK this year in the GSoC 2012! All project ideas can be found at http://nrnb.org/gsoc/ and also read the details below.

With kind regards,

Egon

---------- Forwarded message ----------
From: Alexander Pico <ap...@gl...>
Date: Sun, Mar 18, 2012 at 2:35 AM
Subject: [wp-discuss] Google Summer of Code 2012
To: wikipathways-discuss <wik...@go...>

As part of NRNB, we will once again be participating in the Google Summer of Code (GSoC) program.

http://nrnb.org/gsoc

GSoC is a global program that funds student programmers around the world to write code for open source projects. This will be our 6th year participating and we have a lot of projects to choose from. If you are a student interested in coding for our open source projects, then check out the link above and apply before April 6th. If you know of any students who might be interested, then forward this announcement, mention it in the classroom or post the attached flyer.

http://www.booki.cc/gsocstudentguide/what-is-google-summer-of-code/

The application period is March 26 - April 6th. Hop on the discussion mailing list and run your ideas by us before applying to improve your application.

- Alex

--
You received this message because you are subscribed to the Google Groups "wikipathways-discuss" group.
To post to this group, send email to wik...@go....
To unsubscribe from this group, send email to wik...@go....
For more options, visit this group at http://groups.google.com/group/wikipathways-discuss?hl=en.

--
Dr E.L. Willighagen
Postdoctoral Researcher
Department of Bioinformatics - BiGCaT
Maastricht University (http://www.bigcat.unimaas.nl/)
Homepage: http://egonw.github.com/
LinkedIn: http://se.linkedin.com/in/egonw
Blog: http://chem-bla-ics.blogspot.com/
PubList: http://www.citeulike.org/user/egonw/tag/papers
From: Steffen N. <sne...@ip...> - 2009-12-02 14:02:52
Hi,

for quite some time we have struggled to map the controlled vocabularies and ontologies from the mzML (and mzIdentML) world into the schema of CML. The following examples are taken from the mzML tiny example file:

    <cvList count="2">
      <cv id="MS" fullName="Proteomics Standards Initiative Mass Spectrometry Ontology" version="2.26.0" URI="http://psidev.cvs.sourceforge.net/*checkout*/psidev/psi/psi-ms/mzML/controlledVocabulary/psi-ms.obo"/>
      <cv id="UO" fullName="Unit Ontology" version="14:07:2009" URI="http://obo.cvs.sourceforge.net/*checkout*/obo/obo/ontology/phenotype/unit.obo"/>
    </cvList>

    <cvParam cvRef="MS" accession="MS:1000569" name="SHA-1" value="1234567890123456"/>
    <cvParam cvRef="MS" accession="MS:1000130" name="positive scan" value=""/>
    <cvParam cvRef="MS" accession="MS:1000045" name="collision energy" value="35" unitCvRef="UO" unitAccession="UO:0000266" unitName="electronvolt"/>

Just for reference I include the link to the CML schema, http://msbi.ipb-halle.de/~sneumann/cmlschema.html, which has been generated by http://xml.fiforms.org/xs3p/ using (I hope) the current CML XSD schema.

I'd like to ask for input, especially from CML people, on how these can be expressed as valid CML. Then it should be no problem to specify which bits of the mzML CV metadata can be specified for mzAnnotate, and to create the dict and schematron (almost?) automatically from the mzML mapping file.

My initial attempt is:

    <cml xmlns="http://www.xml-cml.org/schema"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      -> cvRef:MS="http://psidev.cvs.sourceforge.net/*checkout*/psidev/psi/psi-ms/mzML/controlledVocabulary/psi-ms.obo dict/psims.xml"
      -> cvRef:UO="http://obo.cvs.sourceforge.net/*checkout*/obo/obo/ontology/phenotype/unit.obo dict/unit.xml"
         xsi:schemaLocation="http://www.xml-cml.org/schema schema.xsd">

    <metadata dictRef="cvRef:MS" id="MS:1000569" name="SHA-1">1234567890123456</metadata>

or

    <metadata dictRef="cvRef:MS" id="MS:1000569" name="SHA-1" content="1234567890123456"/>

and

    <metadata dictRef="cvRef:MS" id="MS:1000130" name="positive scan"/>

but I am lost where to place the units for the collision energy, and I am unsure about the correct definition in the <cml ...> tag. Also, I haven't started creating those dict/psims.xml and dict/unit.xml yet.

Thoughts?

Yours,
Steffen

--
IPB Halle  AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3  http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470 / +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE  Fax. +49 (0) 345 5582 - 1409
From: Egon W. <ego...@gm...> - 2009-07-03 06:34:21
Hi Steffen,

sorry, missed this email... almost back to zero in my inbox, and still discovering new emails :)

On Fri, Jun 12, 2009 at 10:35 PM, Steffen Neumann <sne...@ip...> wrote:
> we currently have the problem that the recently introduced foreign key
> constraints cause SQL exceptions if you try to save a Bean which has
> no / a non-existing reference:

Yes, I noted this too by now, with that in-memory database...

> org.postgresql.util.PSQLException:
> ERROR: insert or update on table "metchar_putative_metabolites_identities"
> violates foreign key constraint "metchar_putative_metabolites_identitie_metobserv_output_id_fkey"
> Detail: Key (metobserv_output_id)=(67) is not present in table "metobserv_output".

Yeah, the 'problem' is that explicit foreign keys require all beans to be constructed... that is, you cannot do a partial construct of a data structure, and have to give beans for all links you make... actually, this would not be so bad...

> I currently see two alternative solutions:
>
> 1) Revert the foreign key constraints, such that (possibly inconsistent)
>    data can enter the DB, hoping that some following INSERTs fix that.

Yes, that's sort of the current behavior...

> 2) Enforce the correct ordering of Beans, and either catch the
>    SQLException, or verify via our exists() that the referenced entity
>    is there.

This equals fixing the unit tests? What do you mean with 'correct ordering of Beans'?

Egon
--
Post-doc @ Uppsala University
http://chem-bla-ics.blogspot.com/
From: SourceForge.net <no...@so...> - 2009-06-30 19:45:32
Bugs item #2814806, was opened at 2009-06-30 19:45
Message generated for change (Tracker Item Submitted) made by sneumann
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2814806&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Steffen Neumann (sneumann)
Assigned to: Egon Willighagen (egonw)
Summary: Change redirection from metware.sf.net

Initial Comment:
to the new wiki location http://sourceforge.net/apps/mediawiki/metware/index.php?title=Main_Page

Yours, Steffen

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2814806&group_id=210511
From: Steffen N. <sne...@ip...> - 2009-06-23 09:34:25
Hi,

this is a set of notes. Some of them will be elaborated, some will become bug reports.

Yours, Steffen

* Experiment Description is missing; the hypothesis is quite specific: a true or false hypothesis
* Experiment has "Experiment set name": where does "set" come from?
* growthenvironment.growthprotocol is a free text, where it could be a protocol_id?
* @Stephan: is the cultivation medium (Anzuchtmedium) part of the cultivation protocol (Anzuchtprotokoll)?
* Treatment needs more fields. At least a name. We have e.g. "Basta sprayed", "mock treatment"
* @Stephan: is Treatment part of TreatmentProtocol?
* Collection: Absolute 1.4.08 / Relative 6 weeks old
* Where to put the date of application of a protocol? => metmeta_event needs to be used everywhere you apply a protocol
* Pooling and Aliquotation (?!) are Protocols?
* Does ISAcreator know about Pooling?
* "Run Machine Setup Reference" is missing in the database?!
* Does bean.save() need a sister method bean.update()?

--
IPB Halle  AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3  http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470 / +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE  Fax. +49 (0) 345 5582 - 1409
From: Steffen N. <sne...@ip...> - 2009-06-12 20:48:57
On Fri, 2009-06-12 at 22:35 +0200, Steffen Neumann wrote:
> 2) Enforce the correct ordering of Beans, and either catch the
>    SQLException, or verify via our exists() that the referenced entity
>    is there.

Which means more intelligent tests ;-)

Steffen
From: Steffen N. <sne...@ip...> - 2009-06-12 20:35:50
Hi,

we currently have the problem that the recently introduced foreign key constraints cause SQL exceptions if you try to save a Bean which has no / a non-existing reference:

    org.postgresql.util.PSQLException:
    ERROR: insert or update on table "metchar_putative_metabolites_identities"
    violates foreign key constraint "metchar_putative_metabolites_identitie_metobserv_output_id_fkey"
    Detail: Key (metobserv_output_id)=(67) is not present in table "metobserv_output".

I currently see two alternative solutions:

1) Revert the foreign key constraints, such that (possibly inconsistent) data can enter the DB, hoping that some following INSERTs fix that.

2) Enforce the correct ordering of Beans, and either catch the SQLException, or verify via our exists() that the referenced entity is there (a sketch follows after this message).

Thoughts?

Yours,
Steffen
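To make option 2 concrete, here is a minimal sketch of the ordered-save approach. It assumes the bean classes expose the save() and exists() methods mentioned in this thread; the MetwareBean interface and the class name are hypothetical stand-ins for the generated metbeans classes, not the actual API.

    // Sketch only; the real metbeans classes generated from the SKOS are assumed
    // to provide save() and exists(), as discussed in this thread.
    interface MetwareBean {
        boolean exists() throws Exception;  // true if the row is already in the DB
        void save() throws Exception;       // INSERTs the row
    }

    class OrderedSave {
        // Save the referenced (parent) bean first, then the dependent (child) bean,
        // so a foreign key such as the one on metchar_putative_metabolites_identities
        // always points at an existing row.
        static void saveInOrder(MetwareBean parent, MetwareBean child) throws Exception {
            if (!parent.exists()) {
                parent.save();
            }
            if (!parent.exists()) {
                throw new IllegalStateException(
                    "referenced entity still missing; not saving the dependent bean");
            }
            child.save();
        }
    }

The same idea generalises to longer chains: always persist the bean at the referenced end of each foreign key before the bean that carries the reference.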
From: SourceForge.net <no...@so...> - 2009-05-27 15:47:49
Bugs item #2797445, was opened at 2009-05-27 15:47
Message generated for change (Tracker Item Submitted) made by sneumann
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2797445&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Steffen Neumann (sneumann)
Assigned to: Nobody/Anonymous (nobody)
Summary: metware tracker needs moderation on metware-devel

Initial Comment:
[17:44] <sneumann> why do we still have to moderate the sf.net bug reports on the metware-devel ?
[17:44] <sneumann> https://lists.sourceforge.net/lists/admin/metware-devel/privacy/sender
[17:45] <sneumann> has no...@sf... in its "List of non-member addresses whose postings should be automatically accepted."

As list administrator, your authorization is requested for the following mailing list posting:

List: Met...@li...
From: no...@so...
Subject: [ metware-Bugs-2797426 ] InstallDemoData does not follow full sample path
Reason: Message has implicit destination

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2797445&group_id=210511
From: SourceForge.net <no...@so...> - 2009-05-27 15:16:30
Bugs item #2797433, was opened at 2009-05-27 15:16
Message generated for change (Tracker Item Submitted) made by sneumann
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2797433&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Steffen Neumann (sneumann)
Assigned to: Egon Willighagen (egonw)
Summary: Concepts with same name and different Definition

Initial Comment:

    grep ";Date\"" metware.skos

    <skos:Concept rdf:about="&metware;Date">
      <skos:definition>Date and time when the Data File was uploaded.</skos:definition>
    <skos:Concept rdf:about="&metware;Date">
      <skos:definition>Date on which the data preprocessing analysis has been performed.</skos:definition>

You can find more of these via

    grep "<skos:Concept" metware.skos | sort | uniq -d

    <skos:Concept rdf:about="&metware;Date">
    <skos:Concept rdf:about="&metware;FactorName">
    <skos:Concept rdf:about="&metware;Location">
    <skos:Concept rdf:about="&metware;MonoisotopicMass">
    <skos:Concept rdf:about="&metware;Name">
    <skos:Concept rdf:about="&metware;ProtocolName">

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2797433&group_id=210511
From: SourceForge.net <no...@so...> - 2009-05-27 15:04:01
Bugs item #2797426, was opened at 2009-05-27 17:03
Message generated for change (Tracker Item Submitted) made by egonw
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2797426&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 9
Private: No
Submitted By: Egon Willighagen (egonw)
Assigned to: Egon Willighagen (egonw)
Summary: InstallDemoData does not follow full sample path

Initial Comment:
It should exemplify:

    run <- analysismaterial <- aliquot <- sample <- explant <- single plant

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2797426&group_id=210511
From: Egon W. <ego...@gm...> - 2009-05-26 20:26:49
Hi all,

pending replying to Steffen's recent proposals, I worked on the current SKOS versions of the ArMet and MetWare ontologies, managed to get Bioclipse2 to validate the SKOS, and added the involved JavaScripts to BigMet/onto/validate/...

During this process, I cleaned up the SKOS of both, and added axioms, so that skos:Concepts from the ArMet skos:ConceptScheme are now also rdf:type ArmetConcept, and likewise for MetwareConcept. I found that this was needed because the current set-up uses skos:broader to link MetWare derivatives to their ArMet ancestors. There is rumour of SKOS-Mapping to do this, but I have not found something definite... I need this to test that all ArMet concepts have equivalents in MetWare... (this will later be used for the MSI ontology too, which extends ArMet, and we will want to be compatible and thus extend the MSI onto too)

Anyway, the current sources use OWL axioms and only type the skos:ConceptScheme "&armet;armet" as armet:ArmetConceptScheme too... the armet:ArmetConcept rdf:type-ing follows from the axioms when using a reasoning SPARQL engine like Pellet, as I do in the RDF feature of Bioclipse2.

So, what does this mean practically... I have now written the first few unit tests for the SKOS files (a stand-alone sketch of this kind of check follows after this message), testing these assumptions:

* all skos:Concept in armet.skos must be an ArmetConcept
* all skos:Collection in armet.skos must be an ArmetCollection

Test results:

    RDFStore: 1068 triples
    Model is valid.
    Collections not in the Armet scheme: 0
    Concepts not in the Armet scheme: 0

* all skos:Concept in metware.skos must be a MetwareConcept
* all skos:Collection in metware.skos must be a MetwareCollection

Test results:

    RDFStore: 2718 triples
    Model is valid.
    Collections not in the Metware scheme: 0
    Concepts not in the Metware scheme: 19
    [[http://metware.sf.net/onto/DataPreprocessingAnalysis, null],
     [http://metware.sf.net/onto/Organism, null],
     [http://metware.sf.net/onto/Run, null],
     [http://metware.sf.net/onto/User, null],
     [http://metware.sf.net/onto/Observation, null],
     [http://metware.sf.net/onto/Protocol, null],
     [http://metware.sf.net/onto/Machine, null],
     [http://metware.sf.net/onto/Metabolite, null],
     [http://metware.sf.net/onto/Project, null],
     [http://metware.sf.net/onto/AnalysisMaterial, null],
     [http://metware.sf.net/onto/Experiments, null],
     [http://metware.sf.net/onto/Tissue, null],
     [armet:mchSetupRef, null],
     [armet:runID, null],
     [http://metware.sf.net/onto/Institute, null],
     [armet:PreprationMethod, null],
     [http://metware.sf.net/onto/Experiment, null],
     [armet:archiveRef, null],
     [http://metware.sf.net/onto/Organs, null]]

Clearly, I need to clean up the metware.skos a bit more. Three principal problems:

* incorrect namespacing: http://metware.sf.net/onto/Protocol should be http://metware.sf.net/onto/#Protocol
* there are a few put in the armet namespace in the metware.skos
* 19 concepts are not assigned to a ConceptScheme yet

Will look at this tomorrow...

Egon
--
Post-doc @ Uppsala University
http://chem-bla-ics.blogspot.com/
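The tests above run as JavaScripts inside Bioclipse2 with a reasoning engine (Pellet), so the rdf:type triples are inferred from the axioms. The following is only a rough stand-alone sketch of the same kind of check using Jena without reasoning, so the typing triples must be asserted explicitly; the MetwareConcept class URI and the file path are assumptions taken from this thread, not verified against the actual SKOS file.

    import com.hp.hpl.jena.rdf.model.*;
    import com.hp.hpl.jena.vocabulary.RDF;

    public class MetwareSkosCheck {
        public static void main(String[] args) {
            String SKOS = "http://www.w3.org/2004/02/skos/core#";
            String METWARE = "http://metware.sf.net/onto/#";   // assumed namespace

            Model model = ModelFactory.createDefaultModel();
            // RDF/XML serialization and the BigMet resource layout are assumed here
            model.read("file:src/main/resources/metware.skos");

            Resource skosConcept = model.createResource(SKOS + "Concept");
            Resource metwareConcept = model.createResource(METWARE + "MetwareConcept");

            // Report every skos:Concept that is not (explicitly) typed as a MetwareConcept.
            int missing = 0;
            ResIterator it = model.listSubjectsWithProperty(RDF.type, skosConcept);
            while (it.hasNext()) {
                Resource concept = it.nextResource();
                if (!model.contains(concept, RDF.type, metwareConcept)) {
                    System.out.println("Concept not in the Metware scheme: " + concept);
                    missing++;
                }
            }
            System.out.println(missing + " concepts without an rdf:type MetwareConcept");
        }
    }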
From: SourceForge.net <no...@so...> - 2009-05-26 12:24:10
Bugs item #2796813, was opened at 2009-05-26 12:24
Message generated for change (Tracker Item Submitted) made by nobody
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2796813&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
Summary: Namespace URIs propagate to SQL

Initial Comment:
The recent change in the Namespace URIs, metware:[895] BigMet/trunk/src/main/resources/metware.skos, now results in very ugly (if not illegal) SQL schema names if ant.properties defines pgsql as target.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2796813&group_id=210511
From: Steffen N. <sne...@ip...> - 2009-05-25 14:19:41
Hi,

with the components of the schema in place, we need to fix issues with relationships. The main problem seems to be the lack of SQL stuff in the SKOS concepts. Is there a way to generate some of the SQL attributes in the SKOS? Or at least verify they're there?

1) One of the main problems is that currently we can't find an SQL join path between project and datafile (a JDBC sketch of the join chain follows after this message).

   So a run is measured from Analysis Material
     -> metmeta_run.metmeta_analysis_material_id
   So an Analysis Material is obtained from an aliquot
     -> metmeta_analysis_material.metmeta_aliquot_id
   So an Aliquot is obtained / a run is measured from an aliquot
     -> metmeta_run.metmeta_aliquot_id
   An aliquot is taken from a Sample
     -> metmeta_aliquot.metmeta_sample_id

   Actually, there is possibly pooling, which is probably a protocol.

2) The current metmeta_run.metmeta_sample_id should probably be removed due to the above.

3) Currently we have metraw_datafileset sitting between metmeta_run and metraw_datafile:

   Run 1:1 DataFileSet
   DataFileSet 1:N Datafile

   IMHO we can collapse that to Run 1:N Datafile, unless DataFileSet provides any information beyond the Datafile.

4) Currently one Run can result in multiple Datafiles. This is e.g. the case if you have one native file in 3 different converted formats (CDF, mzML, ASCII peaklist), or if a machine has multiple outputs, such as MS and UV data. But then I'd suggest to have these as different runs on different machines from the same aliquot. At some stage you want to say "Give me all MS files in negative mode from my experiment".

5) <skos:Concept rdf:about="RunMchSetupRef"> currently has no SQL tables assigned. It'll be a challenge to do that well, because it is very generic.

Yours,
Steffen

--
IPB Halle  AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3  http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470 / +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE  Fax. +49 (0) 345 5582 - 1409
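A rough sketch of the join chain from point 1, as plain JDBC against the generated PostgreSQL schema. The foreign key column names come from the message above; the "id" primary key columns, the metmeta_sample table name, and the connection details are assumptions and may not match the actual generated schema.

    import java.sql.*;

    public class RunToSampleJoin {
        public static void main(String[] args) throws SQLException {
            // Connection details are placeholders for a local metware database.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/metware", "metware", "secret")) {

                // run -> analysis material -> aliquot -> sample, following the
                // foreign key columns listed in point 1; "id" primary keys are assumed.
                String sql =
                    "SELECT r.id AS run_id, s.id AS sample_id " +
                    "FROM metmeta_run r " +
                    "JOIN metmeta_analysis_material am ON r.metmeta_analysis_material_id = am.id " +
                    "JOIN metmeta_aliquot a ON am.metmeta_aliquot_id = a.id " +
                    "JOIN metmeta_sample s ON a.metmeta_sample_id = s.id";

                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println("run " + rs.getLong("run_id")
                            + " comes from sample " + rs.getLong("sample_id"));
                    }
                }
            }
        }
    }

Extending the chain further towards project and datafile would only need two more joins once the missing links discussed in points 3 and 4 are settled.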
From: Steffen N. <sne...@ip...> - 2009-05-21 22:45:42
Hi,

to enforce some referential integrity I have added metware:references attributes to the sqlfields in the metware.skos. This allows creating a better reverse-engineered schema.

I didn't succeed in getting nice diagrams from either mysql-workbench (ex-DBDesigner) or pgdesigner, but I got a nice example using postgresql autodoc:

http://msbi.ipb-halle.de/~sneumann/metware/postgresschema.pdf

This should be a start to find all those "loose ends": tables which currently have no serious link to another table.

Yours,
Steffen
From: SourceForge.net <no...@so...> - 2009-05-21 21:36:00
Bugs item #2795085, was opened at 2009-05-21 21:35
Message generated for change (Tracker Item Submitted) made by sneumann
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2795085&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Steffen Neumann (sneumann)
Assigned to: Nobody/Anonymous (nobody)
Summary: PGSQL schema not created

Initial Comment:
The generated SQL for Postgres needs CREATE SCHEMA name; statements. Possibly using Collections as mentioned in #2073760.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2795085&group_id=210511
From: Egon W. <ego...@gm...> - 2009-04-08 07:28:37
I think I'm going to submit Metware as example SKOS...

---------- Forwarded message ----------
From: Antoine Isaac <ai...@fe...>
Date: Wed, Apr 8, 2009 at 12:38 AM
Subject: Request for Implementation Input: SKOS Simple Knowledge Organization System
To: SKOS <pub...@w3...>, sem...@w3..., DC-...@ji..., dc-...@ji..., ec...@np..., pub...@w3..., pub...@w3..., fac...@ya..., NK...@oc..., pub...@w3..., di...@in..., in...@li..., ont...@ma..., sew...@li..., pub...@w3..., pub...@w3..., pub...@w3...
Cc: SWD WG <pub...@w3...>

The W3C Semantic Web Deployment Working Group is pleased to announce the publication of a Candidate Recommendation for the Simple Knowledge Organization System (SKOS) Reference:

http://www.w3.org/TR/2009/CR-skos-reference-20090317/

A new Working Draft of the accompanying SKOS Primer has also been published:

http://www.w3.org/TR/2009/WD-skos-primer-20090317/

The Working Group now *calls for implementations.* We would like to hear of any vocabulary (thesaurus, classification system, subject heading system, taxonomy or other KOS) or mapping between vocabularies that has been published in the Web as machine-readable data using SKOS, and/or has been made available via programmatic services using SKOS. We would also like to hear of any software that has the capability to read and/or write SKOS data, and/or can check whether a given SKOS dataset is consistent with the SKOS data model.

If you would like to notify us of a vocabulary, vocabulary mapping, and/or software as a SKOS implementation, *please send an email to pub...@w3... before 30 April 2009*, providing the information described below. Please also begin the subject line with "SKOS Implementation:".

== Vocabulary Implementations ==

If you are notifying us of one or more vocabularies or vocabulary mappings as an implementation, please provide *at least* the following information:

* vocabulary title(s) (e.g. Library of Congress Subject Headings)
* name of person and/or organisation responsible for the implementation
* a list of the SKOS constructs used (e.g. skos:Concept, skos:ConceptScheme, skos:inScheme, skos:broader, skos:prefLabel, skos:closeMatch ... etc.)
* URL(s) where the published SKOS data may be obtained, if the data are publicly available

We would also welcome any further information you care to provide, however this is *not mandatory*. For example, we would be interested to know the scope and size of the vocabulary, what it is primarily used for, in what languages the vocabulary is provided, any other URLs describing the vocabulary or providing further information.

== Software Implementations ==

If you are notifying us of software as an implementation, please provide *at least* the following information:

* name of the software (e.g. SKOSEd)
* name of person and/or organisation responsible for the implementation
* URLs for software home page and/or download location if publicly downloadable
* can the software read SKOS data?
* can the software write SKOS data?
* can the software check consistency of SKOS data with respect to the SKOS data model?

For more information on what we mean by reading, writing or checking SKOS data, see:

http://www.w3.org/2006/07/SWD/wiki/SKOSImplementationReport

We would also welcome any further information you care to provide, however this is *not mandatory*. For example, we would be interested in the main purpose and functionality of the software, the programming language and/or software frameworks used, details of the SKOS constructs which are supported, any other URLs describing the software or providing further information.

Kind regards,

Alistair Miles and Antoine Isaac
on behalf of the W3C Semantic Web Deployment Working Group

--
Post-doc @ Uppsala University
http://chem-bla-ics.blogspot.com/
From: SourceForge.net <no...@so...> - 2009-03-26 12:12:11
Bugs item #2714322, was opened at 2009-03-26 12:11
Message generated for change (Tracker Item Submitted) made by sneumann
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2714322&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Steffen Neumann (sneumann)
Assigned to: Egon Willighagen (egonw)
Summary: BigMet build broken after onto -> resources transition

Initial Comment:
[10:02] <sneumann> egonw: the build.xml et al stuff in BigMet
[10:02] <sneumann> is not yet adapted to onto -> resources ?
[10:02] <sneumann> Or just not committed ?
[10:04] <egonw> sorry
[10:04] <egonw> mom
[10:10] <egonw> crap
[10:10] <egonw> please file a bug report

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2714322&group_id=210511
From: Steffen N. <sne...@ip...> - 2009-03-24 12:27:20
On Tue, 2009-03-24 at 11:59 +0000, Susanna Sansone wrote:
> Hi Steffen,
>
> would it be ok with you if we 'advertise' this ongoing work on the ISA website?

Sure! Please mention "Under development".

http://metware.sourceforge.net/

Yours,
Steffen

--
IPB Halle  AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3  http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470 / +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE  Fax. +49 (0) 345 5582 - 1409
From: Steffen N. <sne...@ip...> - 2009-03-18 13:30:00
Hi,

during the Metabolomics Workshop [1] we got to know the ISA-Tab [2] format, introduced by Philippe Rocca-Serra. ISA-Tab can describe (among others) a set of metabolomics profiling experiments, users, experiment factors, plants, ... so it is a perfect match as an input format to MetWare.

The next thing that will be built is a standalone import utility, which reads the ISA-Tab CSV files, creates MetBeans from the information (using classes from metbeans.jar and bigmet.jar) and persists them in the metware database using the save() methods of our metbeans classes. During the first steps, this ISA-Tab <-> MetWare binding will be done manually, and for things like "experiment" and "user" the mapping should be trivial (a rough sketch follows after this message).

Once we have some (likely incomplete) import of ISA-Tab into MetWare, the next step would be to build a SKOS-ified ISA-Tab, which allows a mapping of terms from ISA-world to MetWare-world. Once we have that, the core of the import utility can be rewritten to use the ISA-SKOS for building the corresponding MetWare objects, so that the converter can remain fixed, even if the ontologies evolve.

Egonw, your opinion? Suggestions?

Yours,
Steffen

[1] http://www.elixir-europe.org/page.php?page=metabolomics_workshop
[2] http://isatab.sourceforge.net/

--
IPB Halle  AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3  http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470 / +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE  Fax. +49 (0) 345 5582 - 1409
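As a feel for what the first, manual binding step could look like: a minimal sketch that reads one tab-separated ISA-Tab study file and turns each row into a bean persisted via save(). The file name, the "Sample Name" column, and the SampleBean placeholder class are illustrative assumptions, not the actual metbeans API.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    public class IsaTabImportSketch {

        // Placeholder for a real metbeans class; the generated beans would
        // expose setters and the save() method mentioned above.
        static class SampleBean {
            String name;
            void setName(String name) { this.name = name; }
            void save() { System.out.println("would INSERT sample '" + name + "'"); }
        }

        public static void main(String[] args) throws IOException {
            // ISA-Tab study files (s_*.txt) are tab-separated with a header row.
            try (BufferedReader in = new BufferedReader(new FileReader("s_study.txt"))) {
                String headerLine = in.readLine();
                if (headerLine == null) return;               // empty file, nothing to import
                List<String> header = Arrays.asList(headerLine.split("\t", -1));
                int sampleCol = header.indexOf("Sample Name"); // assumed ISA-Tab column header
                if (sampleCol < 0) throw new IOException("no Sample Name column found");

                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split("\t", -1);
                    if (sampleCol >= fields.length) continue; // skip short / malformed rows
                    SampleBean sample = new SampleBean();
                    sample.setName(fields[sampleCol]);
                    sample.save();                            // persist via the bean layer
                }
            }
        }
    }

The later SKOS-driven version would replace the hard-coded column lookup with a lookup through the ISA-SKOS to MetWare-SKOS mapping.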
From: Malick H. <heu...@ch...> - 2008-10-20 12:35:19
An important set of commands that can be used to see which peaks are omitted:

    # Assuming that you have created an object xraw with the xcmsRaw function call.
    # This command filters out all last mz's from your xraw@filterLine slot
    > bound <- unique(as.numeric(sub("([^@]*).*", "\\1",
          (grep("@", unlist(strsplit(levels(xraw@filterLine), " ")), value=TRUE)))))

    # Assuming that you have created an object xfrag with the xcmsFragments function call.
    # This command compares your column xfrag@peaks[,5] (storing the xf@peaks mz values)
    # with all the filterLine mz's and keeps the mz in noise that are not present in
    # xfrag@peaks[,5]. The second command converts the noise back to numeric values.
    > noise <- setdiff(as.character(bound), as.character(round(xfrag@peaks[,5], 2)))
    > noise <- as.numeric(noise)

    # Now you need to know a priori how many peaks are returned by xf@peaks and you can
    # cbind the xfrag@peaks[,5] column to the xfrag@peaks[,6] column while omitting the
    # ms1 peaks, so that they properly bind to each other. I have for example 48 peaks
    # in xf@peaks and 5 ms1 peaks, so I start the cbind for xf@peaks[,6] as follows:
    > cbind(round((xfrag@peaks)[xfrag@peaks[,2],5], 2), xfrag@peaks[5:48,6])
    # This allows to see if the filterLine mz and xf@peaks mz are the same;
    # noise can be called to see which peaks are skipped ;-)

Quoting Malick Heuvel <heu...@ch...>:

> With this message I ask some attention for a new xcmsFragments. I've added 5 new
> columns in the xcmsFragments xf@peaks table:
> * CID, * filterLine, * AquisitionNumber of a FilterLine, * last mz of a filterLine
>
> In this way we can compare for a Peak if its corresponding msnParentPeak's mz value
> corresponds to the last mz in FilterLine. Still a note to be aware of.
>
> There are cases that an mzxml file has more last mz's in unique(FilterLines) which
> are not present in xf@peaks. How to deal with those missing peaks: are they noise
> peaks, and when to state that they are noise?
>
> The aquisitionNumber in xf@peaks tells for each peak from which scan it was derived
> and corresponds with the "Scan num=" in your mzxml file. The same Scan in mzxml has
> a filterLine in xf@peaks, which should correspond with the mz in column "last mz of
> a filterLine" in xf@peaks. I verified that this was the case for an FTMS experiment
> with unusual settings. Please your attention and comments. I want to be sure that
> it's failsafe.
>
> Regards
>
> Malick A.D Heuvel
> Netherlands Metabolomics Centre
> Gorlaeus Laboratories
> PO Box 9502
> 2300 RA Leiden
> The Netherlands

Malick A.D Heuvel
Netherlands Metabolomics Centre
Gorlaeus Laboratories
PO Box 9502
2300 RA Leiden
The Netherlands
From: Steffen N. <sne...@ip...> - 2008-10-15 10:04:51
On Wed, 2008-10-15 at 10:34 +0200, Malick Heuvel wrote:
> With this message I ask some attention for a new xcmsFragments. I've added 5 new
> columns in the xcmsFragments xf@peaks table:
> * CID, * filterLine, * AquisitionNumber of a FilterLine, * last mz of a filterLine

CID is definitely needed in xf@peaks.

> In this way we can compare for a Peak if its corresponding msnParentPeak's mz value
> corresponds to the last mz in FilterLine. Still a note to be aware of.

This should appear as

    warning("Parent Peak mz " + parentmz + " different from Filterline " + filterlinemz + " Settings")

when constructing the xf@peaks table.

> There are cases that an mzxml file has more last mz's in unique(FilterLines) which
> are not present in xf@peaks. How to deal with those missing peaks: are they noise
> peaks, and when to state that they are noise?

So you mean there is a Filterline, e.g.

    FTMS - c NSI d Full ms5 351.22@cid35.00 333.21@cid35.00 175.11@cid35.00 119.09@cid35.00 [50.00-130.0]

which has a lastmz of 119.09. That means you record an MS5 spectrum with peaks between mz 50-130 from the MS4 ion with mz 119.09, but the MS4 ion 119.09 is not present in xf@peaks? Then one needs to check with e.g. vendor software whether your 119.09 is noise or a real peak which has been overlooked by xcms. In the first case, an MS5 spectrum obtained from a noisy MS4 parent should not be part of xf@peaks in the first place. Should it?

> The aquisitionNumber in xf@peaks tells for each peak from which scan it was derived
> and corresponds with the "Scan num=" in your mzxml file. The same Scan in mzxml has
> a filterLine in xf@peaks, which should correspond with the mz in column "last mz of
> a filterLine" in xf@peaks. I verified that this was the case for an FTMS experiment
> with unusual settings.

Are you suggesting here to create the tree for these FTMS experiments using the aquisitionNumber to look up the parent/child relationships? That's definitely a good idea if aquisitionNumbers are available.

Yours,
Steffen

--
IPB Halle  AG Massenspektrometrie & Bioinformatik
Dr. Steffen Neumann  http://www.IPB-Halle.DE
Weinberg 3  http://msbi.bic-gh.de
06120 Halle  Tel. +49 (0) 345 5582 - 1470 / +49 (0) 345 5582 - 0
sneumann(at)IPB-Halle.DE  Fax. +49 (0) 345 5582 - 1409
From: Malick H. <heu...@ch...> - 2008-10-15 08:40:31
With this message I ask some attention for a new xcmsFragments. I've added 5 new columns in the xcmsFragments xf@peaks table:

* CID
* filterLine
* AquisitionNumber of a FilterLine
* last mz of a filterLine

In this way we can compare for a Peak if its corresponding msnParentPeak's mz value corresponds to the last mz in FilterLine. Still a note to be aware of.

There are cases that an mzxml file has more last mz's in unique(FilterLines) which are not present in xf@peaks. How to deal with those missing peaks: are they noise peaks, and when to state that they are noise?

The aquisitionNumber in xf@peaks tells for each peak from which scan it was derived and corresponds with the "Scan num=" in your mzxml file. The same Scan in mzxml has a filterLine in xf@peaks, which should correspond with the mz in column "last mz of a filterLine" in xf@peaks. I verified that this was the case for an FTMS experiment with unusual settings.

Please your attention and comments. I want to be sure that it's failsafe.

Regards

Malick A.D Heuvel
Netherlands Metabolomics Centre
Gorlaeus Laboratories
PO Box 9502
2300 RA Leiden
The Netherlands
From: Steffen N. <sne...@ip...> - 2008-09-16 14:06:08
Hi,

just listening to this one:

https://admin.adobe.acrobat.com/_a300965365/p19677840/

There are some basics of facelets in there, and it includes a demo of the new code completion stuff for *.xhtml in WTP 3.0.x and -- very interesting -- a demo of a simple tag library. I guess the RDFa one we have is still a JSP variant, rather than pure facelets.

The release mentioned in the webinar has happened in the meantime:

http://www.eclipse.org/projects/project_summary.php?projectid=webtools

Yours,
Steffen
From: SourceForge.net <no...@so...> - 2008-09-15 06:34:36
Bugs item #2112260, was opened at 2008-09-15 15:34
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2112260&group_id=210511

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: tjeerdabma (tjeerdabma)
Assigned to: Nobody/Anonymous (nobody)
Summary: SKOS sqlmapping namespace error

Initial Comment:
The Metware-SKOS ( http://metware.svn.sourceforge.net/viewvc/metware/BigMet/trunk/src/main/onto/metware.skos ) refers to a sqlmapping namespace:

http://metware.sourceforge.net/onto/sqlmapping/

But it doesn't exist at the specified location.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=1014034&aid=2112260&group_id=210511
From: Malick H. <heu...@ch...> - 2008-09-14 11:43:53
Dear all,

I'm trying to use XCMS in combination with RODBC to tweak dbimport.R so that we can send XCMS data to the NMC platform. As I understand it, I need to load the RODBC package in R, so I try to install version RODBC_1.2.3.tar.gz first on 5.0.51a-3ubuntu5.1. But the install fails and I don't know why. I've read the README and INSTALL docs of this R package. The README of this package says:

    An ODBC driver manager needs to be installed, as well as an ODBC driver for each
    database to be used. Testing is done using unixODBC (http://www.unixODBC.org), but
    iODBC (http://www.iODBC.org) has also been used. The RODBC package is installed in
    the standard way (R CMD INSTALL RODBC) and needs the ODBC driver manager header
    files and library (-lodbc or -lodbc32 or -liodbc) to be available. Use the configure
    options --with-odbc-include and --with-odbc-lib or environment variables ODBC_INCLUDE
    and ODBC_LIBS to set the include and library paths as needed.

This message looks straightforward: I either need the ODBC driver manager iODBC or unixODBC on 5.0.51a-3ubuntu5.1. I verified that I have unixODBC version 2.2.11 with the following command:

    malick@malick-desktop:~/Desktop$ odbcinst -j
    unixODBC 2.2.11
    DRIVERS............: /etc/odbcinst.ini
    SYSTEM DATA SOURCES: /etc/odbc.ini
    USER DATA SOURCES..: /home/malick/.odbc.ini

The RODBC readme also says something about the configure options --with-odbc-include and --with-odbc-lib or environment variables ODBC_INCLUDE and ODBC_LIBS, so my ODBC driver manager must be configured well. In the manuals at http://www.unixodbc.org/ I've read some more info about unixODBC. In the manual "A guide to using unixODBC without the GUI" I've read:

    odbcinst.ini
    This contains a section heading that provides a name for the driver, so for the
    example below PostgreSQL to indicate a Postgres driver. The following lines contain
    a description and then the important bits. The Driver and Setup paths point to the
    ODBC driver and setup libs. The setup lib is used when you click on Add in ODBCConfig
    to add a new DSN, but as this document is about not using the GUI tools, this is not
    that important for us. Far more important is the Driver entry (vital in fact). This
    is the library that the driver manager will dynamically load when SQLConnect or
    SQLDriverConnect is called for that DSN. If this points to the wrong place the DSN
    will not work. If the dlopen() fails the DSN will not work. The fileusage entry is
    added by the odbcinst program, so if you are using a text editor, you will need to
    add it yourself.

    templates
    odbcinst expects to be supplied with a template file. If you are adding a driver for
    the above entry the template file would contain the following

      [PostgreSQL]
      Description = PostgreSQL driver for Linux & Win32
      Driver = /usr/local/lib/libodbcpsql.so
      Setup = /usr/local/lib/libodbcpsqlS.so

    and you would invoke odbcinst with the following arguments, assuming that you have
    created a file template_file with the above entries in:

      odbcinst -i -d -f template_file

    The args to odbcinst are as follows: -i install, -d driver, -f name of template file.

    Threads
    Since 1.6, if the driver manager was built with thread support you may add another
    entry to each driver entry. For example

      [PostgreSQL]
      Description = PostgreSQL driver for Linux & Win32
      Driver = /usr/local/lib/libodbcpsql.so
      Setup = /usr/local/lib/libodbcpsqlS.so
      Threading = 2

    This entry alters the default thread serialization level. More details can be found
    in the file DriverManager/__handles.c in the source tree.

    [.]odbc.ini
    The contents of the odbc.ini files are a bit more complicated, but they follow just
    the same format as the odbcinst.ini entries. These are complicated by each driver
    requiring different entries. The entries for all the drivers supplied with the
    distribution are included below for reference. The entries may be added in the same
    way using odbcinst, or a text editor. A sample entry to match the above driver could be

      [PostgreSQL]
      Description = Test to Postgres
      Driver = PostgreSQL
      Trace = Yes
      TraceFile = sql.log
      Database = nick
      Servername = localhost
      UserName =
      Password =
      Port = 5432
      Protocol = 6.4
      ReadOnly = No
      RowVersioning = No
      ShowSystemTables = No
      ShowOidColumn = No
      FakeOidIndex = No
      ConnSettings =

    And this may be written to a template file, and inserted in the ini file for the
    current user by

      odbcinst -i -s -f template_file

    The individual entries of course may vary. The Driver line is used to match the
    [section] entry in the odbcinst.ini file, and the Driver line in the odbcinst file
    is used to find the path for the driver library; this is loaded and the connection
    is then established. It's possible to replace the driver entry with a path to the
    driver itself. This can be used, for example, if the user can't get root access to
    set up anything in /etc (less important now because of the movable etc path).
    For example

      [PostgreSQL]
      Description = Test to Postgres
      Driver = /usr/local/lib/libodbcpsql.so
      Trace = Yes
      TraceFile = sql.log
      Database = nick
      Servername = localhost
      UserName =
      Password =
      Port = 5432
      Protocol = 6.4
      ReadOnly = No
      RowVersioning = No
      ShowSystemTables = No
      ShowOidColumn = No
      FakeOidIndex = No
      ConnSettings =

    Templates
    The templates for the included drivers are...

    Postgres

      [PostgreSQL]
      Description = Test to Postgres
      Driver = PostgreSQL
      Trace = Yes
      TraceFile = sql.log
      Database = nick
      Servername = localhost
      UserName =
      Password =
      Port = 5432
      Protocol = 6.4
      ReadOnly = No
      RowVersioning = No
      ShowSystemTables = No
      ShowOidColumn = No
      FakeOidIndex = No
      ConnSettings =

    Mini SQL

      [Mini SQL]
      Description = MiniSQL
      Driver = MiniSQL
      Trace = No
      TraceFile =
      Host = localhost
      Database =
      ConfigFile =

    MySQL

      [MySQL-test]
      Description = MySQL test database
      Trace = Off
      TraceFile = stderr
      Driver = MySQL
      SERVER = 192.168.1.26
      USER = pharvey
      PASSWORD =
      PORT = 3306
      DATABASE = test

Also, this manual is very clear: I need to ensure that /etc/odbcinst.ini and /etc/odbc.ini contain the to-be-used DSN drivers and the to-be-used database params. To verify which ODBC drivers I have on my 5.0.51a-3ubuntu5.1 I did:

    malick@malick-desktop:~/Desktop$ cd /usr/lib/odbc/
    malick@malick-desktop:/usr/lib/odbc$ ls
    libesoobS.so   libmyodbc.so  libodbcdrvcfg1S.so  libodbcminiS.so  libodbcnnS.so
    libodbctxt.so  liboplodbcS.so  libsapdbS.so  psqlodbca.la  psqlodbcw.la
    libmimerS.so   libnn.so  libodbcdrvcfg2S.so  libodbcmyS.so  libodbcpsqlS.so
    libodbctxtS.so  liboraodbcS.so  libtdsS.so  psqlodbca.so  psqlodbcw.so

This looks nice: I have libodbcpsqlS.so and libmyodbc.so, the drivers for respectively PostgreSQL and MySQL. So I'm first configuring unixODBC to DSN to MySQL and the metware db. At http://www.math.ias.edu/doc/MyODBC-2.50.39/INSTALL I've read some more info about the howtos. I did the following:

    malick@malick-desktop:/usr/lib/odbc$ sudo gedit /etc/odbcinst.ini
    [sudo] password for malick:

And added the content:

    Description = MyODBC Driver
    Driver = /usr/lib/odbc/libmyodbc.so
    Setup = /usr/lib/odbc/libodbcmyS.so
    UsageCount = 2
    FileUsage = 1

    malick@malick-desktop:/usr/lib/odbc$ sudo gedit /etc/odbc.ini

And added the content:

    [ODBC Data Sources]
    odbcname = MyODBC 3.51 Driver DSN

    [odbcname]
    Driver = /usr/lib/odbc/libmyodbc.so
    Description = MyODBC 3.51 Driver DSN
    SERVER = localhost
    PORT =
    USER = root
    Password = root
    Database = metware
    OPTION = 3
    SOCKET =
    Tracefile = /var/log/iodbc.trace
    Trace = 1

    [Default]
    Driver = /usr/local/lib/libmyodbc.so
    Description = MyODBC 3.51 Driver DSN
    SERVER = localhost
    PORT =
    USER = root
    Password = root
    Database = metware
    OPTION = 3
    SOCKET =

Once the files are created and saved I should install them with "odbcinst -i -d -f /etc/odbcinst.ini" to install the driver and "odbcinst -i -s -l -f /etc/odbc.ini" to install the DSN. I try this as follows:

    malick@malick-desktop:/usr/lib/odbc$ odbcinst -i -s -f /etc/odbcinst.ini
    odbcinst: iniOpen failed on /etc/odbcinst.ini.
    malick@malick-desktop:/usr/lib/odbc$ odbcinst -i -d -f /etc/odbcinst.ini
    odbcinst: iniOpen failed on /etc/odbcinst.ini.
    malick@malick-desktop:/usr/local/lib$ odbcinst -i -s -l -f /etc/odbc.ini
    odbcinst: SQLWritePrivateProfileString failed with General error request failed.
    odbcinst: SQLWritePrivateProfileString failed with General error request failed.

I can test the DSN install by listing the data sources as follows:

    malick@malick-desktop:/usr/local/lib$ odbcinst -s -q
    [odbcname]
    [Default]

These messages tell me that something went wrong for iniOpen and the installation of the DSN failed. I'm not sure if I have to waste my time on these messages, so I leave it as something on the to-do list. Next I try to use the unixODBC isql program:

    malick@malick-desktop:/usr/lib/odbc$ isql
    **********************************************
    * unixODBC - isql                            *
    **********************************************
    * Syntax                                     *
    *   isql DSN [UID [PWD]] [options]           *
    * Options                                    *
    *   -b         batch (no prompting etc)      *
    *   -dx        delimit columns with x        *
    *   -x0xXX     delimit columns with XX,      *
    *              where x is in hex, ie 0x09    *
    *              is tab                        *
    *   -w         wrap results in an HTML table *
    *   -c         column names on first row     *
    *              (only used when -d)           *
    *   -mn        limit column display width    *
    *              to n                          *
    *   -v         verbose                       *
    *   -lx        set locale to x               *
    *   --version  version                       *
    * Notes                                      *
    *   isql supports redirection and piping     *
    *   for batch processing.                    *
    * Examples                                   *
    *   cat My.sql | isql WebDB MyID MyPWD -w    *
    *   Each line in My.sql must contain         *
    *   exactly 1 SQL command except for the     *
    *   last line which must be blank.           *
    * Please visit;                              *
    *   http://www.unixodbc.org                  *
    *   ph...@co...                              *
    *   ni...@ea...                              *
    **********************************************

    malick@malick-desktop:/usr/lib/odbc$ isql dsn
    [ISQL]ERROR: Could not SQLConnect
    malick@malick-desktop:/usr/lib/odbc$ isql dsn | grep odbc.ini
    [ISQL]ERROR: Could not SQLConnect
    malick@malick-desktop:/usr/lib/odbc$ isql -v dsn
    [01000][unixODBC][Driver Manager]Can't open lib '/usr/local/lib/libmyodbc.so' : /usr/local/lib/libmyodbc.so: cannot open shared object file: No such file or directory
    [ISQL]ERROR: Could not SQLConnect
    malick@malick-desktop:/usr/lib/odbc$ cd /usr/local/lib/
    malick@malick-desktop:/usr/local/lib$ ls
    eclipse  pkgconfig  python2.5  R  site_ruby
    malick@malick-desktop:/usr/local/lib$ cp /usr/lib/odbc/libmyodbc.so /usr/local/lib/
    malick@malick-desktop:/usr/local/lib$ ls
    eclipse  libmyodbc.so  pkgconfig  python2.5  R  site_ruby
    malick@malick-desktop:/usr/local/lib$ sudo isql -v dsn
    [S1000][unixODBC][MySQL][ODBC 3.51 Driver]Access denied for user 'root'@'localhost' (using password: NO)
    [ISQL]ERROR: Could not SQLConnect
    malick@malick-desktop:/usr/local/lib$ sudo isql -v metware root root
    +---------------------------------------+
    | Connected!                            |
    |                                       |
    | sql-statement                         |
    | help [tablename]                      |
    | quit                                  |
    |                                       |
    +---------------------------------------+
    SQL>

Ack, this looks better: I have set up the MySQL driver for unixODBC. Next I'll try to install RODBC:

    malick@malick-desktop:/usr/local$ R CMD INSTALL /home/malick/Desktop/RODBC_1.2-3.tar.gz
    * Installing to library '/home/malick/R/i686-pc-linux-gnu-library/2.7'
    * Installing *source* package 'RODBC' ...
    checking for gcc... gcc -std=gnu99
    checking for C compiler default output file name... a.out
    checking whether the C compiler works... yes
    checking whether we are cross compiling... no
    checking for suffix of executables...
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether gcc -std=gnu99 accepts -g... yes
    checking for gcc -std=gnu99 option to accept ANSI C... none needed
    checking how to run the C preprocessor... gcc -std=gnu99 -E
    checking for egrep... grep -E
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking sql.h usability... no
    checking sql.h presence... no
    checking for sql.h... no
    checking sqlext.h usability... no
    checking sqlext.h presence... no
    checking for sqlext.h... no
    configure: error: "ODBC headers sql.h and sqlext.h not found"
    ERROR: configuration failed for package 'RODBC'
    ** Removing '/home/malick/R/i686-pc-linux-gnu-library/2.7/RODBC'
    malick@malick-desktop:/usr/local$

This looks like there are 2 headers missing, sql.h and sqlext.h. The same error is generated if I try:

    R CMD INSTALL /home/malick/Desktop/RODBC_1.2-3.tar.gz --with-odbc-include --with-odbc-lib

So I try it with another method by setting the export paths for ODBC_INCLUDE and ODBC_LIBS as follows:

    malick@malick-desktop:/usr/local/lib$ export ODBC_INCLUDE=/usr/local/
    malick@malick-desktop:/usr/local/lib$ export ODBC_LIBS=/usr/local/lib/
    malick@malick-desktop:/usr/local/lib$ export | grep ODBC
    declare -x ODBC_INCLUDE="/usr/local/"
    declare -x ODBC_LIBS="/usr/local/lib/"
    malick@malick-desktop:/usr/local/lib$

Even this doesn't help: the error "configure: error: ODBC headers sql.h and sqlext.h not found" is generated and I don't know how to solve it.

Malick A.D Heuvel
Netherlands Metabolomics Centre
Gorlaeus Laboratories
PO Box 9502
2300 RA Leiden
The Netherlands
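The configure failure above usually only means that the ODBC development headers are missing: the unixODBC runtime is installed, but sql.h and sqlext.h ship in a separate development package. A minimal sketch of the likely fix, assuming a Debian/Ubuntu system where that package is called unixodbc-dev (the exact package name is an assumption about this particular machine):

    # Install the unixODBC development headers (provides /usr/include/sql.h and sqlext.h),
    # then retry the RODBC build without any ODBC_INCLUDE / ODBC_LIBS overrides.
    sudo apt-get install unixodbc-dev
    R CMD INSTALL /home/malick/Desktop/RODBC_1.2-3.tar.gz

With the headers in the default include path, the --with-odbc-include option and the environment variables from the README should not be needed at all.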