From: Richard C. <ri...@cy...> - 2006-06-06 06:55:33
|
AJ, I think most of these are fixed in CVS already. As the message says, you can set allow_call_time_pass_reference to true in your php.ini file to make the warnings go away. Best, Richard

On 5 Jun 2006, at 08:44, AJ Chen wrote:
> I also repeatedly get the following warning messages in my apache server error log. It would be great if it can be fixed. Thanks, --AJ
>
> [Sun Jun 04 23:37:28 2006] [error] [client 127.0.0.1] PHP Warning: Call-time pass-by-reference has been deprecated - argument passed by value; If you would like to pass it by reference, modify the declaration of [runtime function name](). If you would like to enable call-time pass-by-reference, you can set allow_call_time_pass_reference to true in your INI file. However, future versions may not support this any longer. in C:\\apache\\Apache2\\htdocs\\wordpress\\wp-content\\plugins\\web2xpress\\rdfapi-php\\api\\model\\Model.php on line 238, referer: http://localhost/wordpress/wp-admin/post-new.php
> [the identical warning is logged for Model.php lines 296, 411 and 415, and for DbStore.php lines 568 (twice) and 612 (twice)] |
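The workaround Richard mentions is a one-line php.ini change; the directive name comes straight from the warning text (the setting was removed in later PHP versions, so treat it as a stopgap rather than a real fix):

    ; php.ini -- silences the call-time pass-by-reference deprecation warning
    allow_call_time_pass_reference = On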
From: AJ C. <ca...@gm...> - 2006-06-06 02:58:36
|
I created a ResModel for an instance of class Experiment (from my own ontology called "exp") and wrote it into an RDF file using $model->saveAs($save_file). In the RDF file, the class instance starts with

<rdf:Description rdf:ID="i">

How do I tell the ResModel that it is an instance of the Experiment class? I want the RDF written as

<rdf:RDF ...>
  <exp:Experiment rdf:ID="i">
  ...

Thanks, AJ |
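One way to get a typed element such as <exp:Experiment rdf:ID="i"> is to give the resource an explicit rdf:type triple before saving. A minimal sketch with the plain Model/Statement API (the exp namespace URI and file names are placeholders, the addNamespace() call is an assumption, and whether the RDF/XML writer abbreviates the typed resource can depend on serializer settings):

    define("RDFAPI_INCLUDE_DIR", "rdfapi-php/api/");
    include(RDFAPI_INCLUDE_DIR . "RdfAPI.php");

    $expNS = "http://example.org/exp#";                 // placeholder namespace
    $model = ModelFactory::getDefaultModel();

    $instance = new Resource($expNS . "i");
    $rdfType  = new Resource(RDF_NAMESPACE_URI . "type");
    $expClass = new Resource($expNS . "Experiment");

    // the explicit type statement is what lets a writer emit <exp:Experiment ...>
    // instead of a plain <rdf:Description ...>
    $model->add(new Statement($instance, $rdfType, $expClass));
    $model->addNamespace("exp", $expNS);                // assumption: prefix registration helper
    $model->saveAs("experiment.rdf");

With ResModel the equivalent is usually to create the class and the type property as resources and add the same statement through the underlying model.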
From: AJ C. <ca...@gm...> - 2006-06-06 01:43:19
|
I also repeatedly get the following warning messages in my apache server error log. It would be great if it can be fixed. Thanks, --AJ

[Sun Jun 04 23:37:28 2006] [error] [client 127.0.0.1] PHP Warning: Call-time pass-by-reference has been deprecated - argument passed by value; If you would like to pass it by reference, modify the declaration of [runtime function name](). If you would like to enable call-time pass-by-reference, you can set allow_call_time_pass_reference to true in your INI file. However, future versions may not support this any longer. in C:\\apache\\Apache2\\htdocs\\wordpress\\wp-content\\plugins\\web2xpress\\rdfapi-php\\api\\model\\Model.php on line 238, referer: http://localhost/wordpress/wp-admin/post-new.php

The identical warning is logged at the same timestamp for Model.php lines 296, 411 and 415, and for DbStore.php lines 568 (twice) and 612 (twice). |
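For anyone hitting the same warnings in their own code, the fix the message alludes to is to move the & from the call site into the function declaration; a generic PHP sketch (not RAP's actual code):

    // deprecated call-time pass-by-reference: the "&" at the call site triggers the warning
    function increment($n) { $n++; }
    $count = 0;
    increment(&$count);

    // fixed: declare the parameter by reference instead, and call without "&"
    function incrementByRef(&$n) { $n++; }
    incrementByRef($count);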
From: Richard C. <ri...@cy...> - 2006-05-30 11:07:33
|
Hi AJ,

On 30 May 2006, at 09:00, AJ Chen wrote:
> I debugged it with Jena and fixed the RDF file. Now I can read/write ontology in RDF/XML with RAP. RAP is great! I'm using it to develop plugins for semantic web support. :-)
> Another question: how do I control which default namespaces are used? There are some default namespaces that I don't want written into the rdf file. I know they can be removed by calling the remove method on the model. But if there is a config file listing these namespaces, it will be easier to add and remove them. Please give me a pointer.

Have a look at api/constants.php, there's this section:

// ----------------------------------------------------------------------------------
// RDQL, SPARQL and parser default namespace prefixes
// ----------------------------------------------------------------------------------
$default_prefixes = array(
    RDF_NAMESPACE_PREFIX => RDF_NAMESPACE_URI,
    RDF_SCHEMA_PREFIX => RDF_SCHEMA_URI,
    'xsd' => 'http://www.w3.org/2001/XMLSchema#',
    OWL_PREFIX => OWL_URI,
    'dc' => 'http://purl.org/dc/elements/1.1/',
    'dcterms' => 'http://purl.org/dc/terms/',
    'vcard' => 'http://www.w3.org/2001/vcard-rdf/3.0#',
    'kb_sys' => 'http://purl.org/knowledgebay/ontology/sys#',
    'kb_person' => 'http://purl.org/knowledgebay/ontology/person#',
    'kb_keyword' => 'http://purl.org/knowledgebay/ontology/keyword#',
    'kb_lecture' => 'http://purl.org/knowledgebay/ontology/lecture#',
    'kb_location' => 'http://purl.org/knowledgebay/ontology/location#'
);

Just remove those you don't want.

Richard

> Thanks,
> AJ
>
> On 5/29/06, Richard Cyganiak <ri...@cy...> wrote:
> > Hi AJ,
> > [...] |
From: Richard C. <ri...@cy...> - 2006-05-29 12:40:21
|
Hi AJ,

Can you please post the complete RDF file (or make it available online and send a link)? Have you checked it against the RDF validator [1]? Which PHP version are you using? The snippet looks fine and RAP shouldn't have trouble with it.

Cheers,
Richard

[1] http://www.w3.org/RDF/Validator/

On 29 May 2006, at 05:45, AJ Chen wrote:
> Hi, I'm new to RAP, trying to use RAP to load a DOAP file. But there is a fatal error:
>
> PHP Fatal error: RDFAPI error (class: parser; method: generateModel): XML-parser-error 201 in Line 13 of input document.
>
> Line 13 is <doap:shortdesc> in the following rdf section:
> <doap:Project>
>   <doap:name>Redland RDF Application Framework</doap:name>
>   <doap:homepage rdf:resource="http://www.redland.opensource.ac.uk/" />
>   <doap:created>2000-06-21</doap:created>
>   <doap:shortdesc>
>     A library for the Resource Description Framework (RDF) allowing it parsed from XML, stored, queried and manipulated.
>   </doap:shortdesc>
> ....
>
> my code to load the ontology is:
>
> $ontModel = ModelFactory::getOntModel(MEMMODEL,RDFS_VOCABULARY);
> $ontModel->load("doap_redland.rdf");
>
> Any suggestion for loading a DOAP file? I assume RAP knows how to load an instance of any class like doap:Project. If this is not true, then how can I get RAP to work with a custom ontology?
>
> Thanks,
> AJ |
From: AJ C. <ca...@gm...> - 2006-05-29 03:45:13
|
Hi, I'm new to RAP, trying to use RAP to load a DOAP file. But there is a fatal error:

PHP Fatal error: RDFAPI error (class: parser; method: generateModel): XML-parser-error 201 in Line 13 of input document.

Line 13 is <doap:shortdesc> in the following rdf section:

<doap:Project>
  <doap:name>Redland RDF Application Framework</doap:name>
  <doap:homepage rdf:resource="http://www.redland.opensource.ac.uk/" />
  <doap:created>2000-06-21</doap:created>
  <doap:shortdesc>
    A library for the Resource Description Framework (RDF) allowing it parsed from XML, stored, queried and manipulated.
  </doap:shortdesc>
....

my code to load the ontology is:

$ontModel = ModelFactory::getOntModel(MEMMODEL,RDFS_VOCABULARY);
$ontModel->load("doap_redland.rdf");

Any suggestion for loading a DOAP file? I assume RAP knows how to load an instance of any class like doap:Project. If this is not true, then how can I get RAP to work with a custom ontology?

Thanks, AJ |
From: Richard C. <ri...@cy...> - 2006-05-10 11:22:56
|
What problem?

On 8 May 2006, at 13:30, khang pham wrote:
> I get a problem while using Unicode (for example, Vietnamese UTF-8) in RAP.
> Please help me!
>
> Thank you very much!
>
> Best regards! |
From: khang p. <sut...@ya...> - 2006-05-08 12:30:30
|
I get a problem while using Unicode (for example, Vietnamese UTF-8) in RAP. Please help me! Thank you very much! Best regards! |
From: SourceForge.net <no...@so...> - 2006-04-26 16:29:43
|
Bugs item #1477061, was opened at 2006-04-26 18:29
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=503361&aid=1477061&group_id=63257

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Parsers
Group: Current CVS
Status: Open
Resolution: None
Priority: 3
Submitted By: Richard Cyganiak (cyganiak)
Assigned to: Nobody/Anonymous (nobody)
Summary: PHP generates notice when parsing RDF/XML

Initial Comment:
Here's the first bug for the new tracker :-)

Parsing the Protégé Camera ontology (available at http://protege.stanford.edu/plugins/owl/owl-library/camera.owl ) produces several notices in RdfParser:

Notice: Undefined index: first_blank_node_id in /Users/richard/rap/htdocs/rdfapi-php/api/syntax/RdfParser.php on line 1877

Here's some simple code to show the problem:

<?php
define("RDFAPI_INCLUDE_DIR", "rdfapi-php/api/");
include(RDFAPI_INCLUDE_DIR . "RdfAPI.php");
$model = ModelFactory::getDefaultModel();
$model->load('http://protege.stanford.edu/plugins/owl/owl-library/camera.owl');
?>

I'm running this with PHP 4.3.4 on OS X.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=503361&aid=1477061&group_id=63257 |
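RAP's actual patch isn't shown here, but the usual shape of a fix for an "Undefined index" notice is to test the key with isset() (or initialize it) before reading it; a generic sketch with a hypothetical state array:

    // generic pattern, not the real RdfParser code: read an array key only
    // if it was set for this document, otherwise fall back to a default
    $state = array();                                    // hypothetical parser state
    $firstBlankNodeId = isset($state['first_blank_node_id'])
        ? $state['first_blank_node_id']
        : null;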
From: Richard C. <ri...@cy...> - 2006-04-26 16:19:11
|
Hi all, We've set up a bug tracker for RAP at SourceForge. It's here: http://sourceforge.net/tracker/?group_id=63257&atid=503361 The tracker will serve as a visible list of issues that have not yet been addressed. I've set up the tracker to send a notification mail for every new bug to this list. I'm not sure if this is a good idea and will switch it off if anyone thinks it is annoying. Best, Richard |
From: Christian B. <chr...@re...> - 2006-04-19 14:26:26
|
Thank you Richard,

On Wed, 2006-04-19 at 12:19 +0200, Richard Cyganiak wrote:
> You can cover the simpler cases "the hard way" by finding the owl:domain and rdfs:domain triples in your model (or InfModel if you can).

That's exactly what I've done, even if I was hoping for something better... anyway, I'll manage.

Thanks again, ciao!
Christian. |
From: Richard C. <ri...@cy...> - 2006-04-19 10:19:04
|
Christian,

On 13 Apr 2006, at 21:34, Christian Barbato wrote:
> I really don't understand if there is a simple way to get a list of all the properties of a class or of an individual (in OntModel). With "simple" I mean something like $class->listProperties() (I know a method with that name already exists, but it doesn't do what I'm looking for).

In short: As far as I know, there is no simple way to do that. Finding out which properties are available for a given class requires OWL reasoning in many cases. RAP doesn't have enough support for OWL reasoning to do that properly.

You can cover the simpler cases "the hard way" by finding the owl:domain and rdfs:domain triples in your model (or InfModel if you can).

Sorry about that.

Richard

> Thanks!
> Christian. |
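For what it's worth, a minimal sketch of that "hard way" using the plain triple API: find every statement whose predicate is rdfs:domain and whose object is the class, and treat the subjects as candidate properties (the class URI and ontology file are placeholders, and the public triples member follows MemModel):

    define("RDFAPI_INCLUDE_DIR", "rdfapi-php/api/");
    include(RDFAPI_INCLUDE_DIR . "RdfAPI.php");

    $model = ModelFactory::getDefaultModel();
    $model->load("myontology.owl");                          // placeholder file

    $classUri   = "http://example.org/onto#MyClass";         // placeholder class
    $rdfsDomain = new Resource(RDF_SCHEMA_URI . "domain");

    // find(null, rdfs:domain, <class>) returns a model holding the matching statements
    $matches = $model->find(null, $rdfsDomain, new Resource($classUri));
    foreach ($matches->triples as $statement) {
        echo $statement->getSubject()->getURI() . "\n";      // a property declared on the class
    }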
From: Richard C. <ri...@cy...> - 2006-04-19 10:10:44
|
Hi Antonia,

On 14 Apr 2006, at 09:19, 양경아 wrote:
> I'm a beginner for RAP and PHP.

Welcome!

> I'm trying to get OntClass by using listClasses()
>
> $array = ontModel->listClass();

Variables in PHP *always* have a dollar sign in front of the name. So it should be $ontModel. The method is also called listClasses, not listClass.

> foreach( $array as $resource){
>     $ontClass = (OntClass) $resource;

Usually you don't do typecasting in PHP. This is not Java ;-) You still need to turn the resource into an OntClass though:

$ontClass = $ontModel->createOntClass($resource->getURI());

> $arrayClass = $ontClass->listSubClasses(true);
> }
>
> You know It occurs an error.

(In general, it's helpful if you say *what* error occurs. This often saves us some time.)

> Is there anybody to teach me a tip of this problem.

Here's a simple complete script that works for me:

include(RDFAPI_INCLUDE_DIR . "RdfAPI.php");
$model = ModelFactory::getDefaultModel();
$model->load('http://protege.stanford.edu/plugins/owl/owl-library/camera.owl');
$ontModel = ModelFactory::getOntModelForBaseModel($model, RDFS_VOCABULARY);
$array = $ontModel->listClasses();
foreach($array as $resource){
    echo "Listing subclasses of " . $resource->getURI() . "\n";
    $ontClass = $ontModel->createOntClass($resource->getURI());
    $arrayClass = $ontClass->listSubClasses(true);
    foreach ($arrayClass as $subclass) {
        echo $subclass->getLabel() . "\n";
    }
}

Best,
Richard

> Sincerely
>
> Antonia |
From: <sem...@ya...> - 2006-04-14 07:19:34
|
Hello~ I'm a beginner for RAP and PHP. I'm trying to get OntClass by using listClasses():

$array = ontModel->listClass(); // to get all classes in ontModel
foreach( $array as $resource){
    $ontClass = (OntClass) $resource; // to get OntClass data type
    $arrayClass = $ontClass->listSubClasses(true); // to get sub classes in each class
}

You know it occurs an error. Is there anybody who can give me a tip for this problem?

Sincerely
Antonia |
From: Christian B. <chr...@re...> - 2006-04-13 19:35:10
|
Hi to all, I really don't understand if there is a simple way to get a list of all the properties of a class or of an individual (in OntModel). With "simple" I mean something like $class->listProperties() (I know a method with that name already exists, but it doesn't do what I'm looking for). Thanks! Christian. |
From: Benjamin N. <bn...@ap...> - 2006-04-13 14:01:13
|
Hi, I'm about to release a new ARC "multi-store" in the next few days which will allow app-specific store tuning (e.g. customizable table layouts, hash generation, and index structures). As AndyS said, insert speed is a bottleneck with large stores, and there is a trade-off between insert and query speed. I haven't tested the scalability of the new ARC stores yet, but some of them should work fine in an Mtriple environment. It always depends on the type of queries, of course. A "6 degrees of separation" query will kill even small stores ;)

However, ARC doesn't cover the whole syntax of SPARQL as RAP does, and some of the store's features are still experimental (e.g. the split-up table space), at least in mysql 4.x.

hth a bit, more concrete info to be available soonish,
benjamin

--
Benjamin Nowack
Kruppstr. 100
45145 Essen, Germany
http://www.bnode.org/ |
From: Seaborne, A. <and...@hp...> - 2006-04-13 13:04:31
|
-------- Original Message --------
> From: Chris Bizer <>
> Date: 12 April 2006 16:50
>
> [...]
>
> > For inclusion in Wikipedia, dealing with about 10 Mio triples split into 1 Mio RDF datasets is probably necessary.
>
> Too much for RAP, too much for appmosphere (Benjamin?), and I guess even hard for Jena, Redland and Co if the queries become more complicated.

That would be surprising. Although it depends on the query greatly, graphs up to 100e6 triples should be no problem. They take a while to load though :-)

http://esw.w3.org/topic/LargeTripleStores
http://esw.w3.org/topic/LargeQuadStores

We're using a 10e6 triple graph for regular testing at the moment with a new database store for Jena, and we use the same data with the existing Jena database solution. (The choice of 10e6 is based on being big enough to show effects of scale but small enough to be manageable, as we keep reloading due to schema experimentation.) Real testing is on 100e6 triples.

Support for interactive use is trickier, especially if the queries are arbitrary, as it is possible to write queries that are always going to have large intermediate results. If the app does not allow the user to (indirectly) write an arbitrary query, then a little care with queries should make interactive use possible on data up to 100e6 triples and beyond. Steve Harris (3Store) has a lot of experience with this.

At 1e6 triples, it is more a matter of running in memory (if you can afford the system resources).

Is there a SPARQL protocol driver for RAP? If so, the database can be any RDF system you want, and the RAP application can issue requests over the SPARQL protocol.

> [...]
>
> > Java toolkits are not an option since Wikipedia requires the use of free software (and free Java implementations probably don't support current RDF stores).

Jena runs with IKVM and GNUClasspath. It also runs on .Net and Mono via IKVM. My experiences with the current IKVM have been very good, and they would suggest that most RDF/Java toolkits will run quite adequately these days. It wasn't true a while ago, but things have moved on rapidly recently.

Hope that helps,
Andy

> [...] |
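As a side note to Andy's question: even without a dedicated driver, a SPARQL protocol request is just an HTTP GET with the query URL-encoded, so a PHP application can talk to any external store that exposes an endpoint; a rough sketch against a hypothetical endpoint URL:

    // rough sketch of a SPARQL protocol GET request; the endpoint is hypothetical
    $endpoint = "http://localhost:2020/sparql";
    $query    = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 20";
    $url      = $endpoint . "?query=" . urlencode($query);

    // file_get_contents() can fetch URLs when allow_url_fopen is enabled
    $resultXml = file_get_contents($url);
    echo $resultXml;    // SPARQL XML results, to be parsed by the caller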
From: Markus <ma...@ai...> - 2006-04-12 18:13:07
|
On Wednesday 12 April 2006 20:08, Markus Krötzsch wrote:
> Hi Chris,
>
> thanks for the quick answer. [...]

Oops -- accidentally pressed a key and the email was gone :-) ... Anyway, that was basically it. I am still eager to hear about more practical experiences with RAP, Appmosphere, and Redland.

Regards,

Markus

--
Markus Krötzsch
Institute AIFB, University of Karlsruhe, D-76128 Karlsruhe
ma...@ai... phone +49 (0)721 608 7362
www.aifb.uni-karlsruhe.de/WBS/ fax +49 (0)721 693 717 |
From: Markus <ma...@ai...> - 2006-04-12 18:09:54
|
Hi Chris,

thanks for the quick answer. I take from your words that RAP might not yet be quick enough *in general*. On the other hand, no single tool really meets all our needs (especially since the Java stores are out), and RAP at least appears to be well-maintained and evolving. Also I like that RAP can be configured for various settings (e.g. with various levels of inferencing), so we could allow people to switch on complex features if they have smaller wikis.

My inquiry was rather general. We have a lot of data, but we do not need all of these functions to be very fast. What we really do is:

== Standard wiki usage ==

* On normal article *views* (by far the most common operation), at most some simple reads are needed (if the article is not in cache and certain annotations are used). The same is true for *previews* during editing.
* On every article *write*, the store has to be updated (delete + write). This could be optimized by checking for actual changes in the RDF.

== Semantic features ==

* Further simple reads occur for exporting RDF. This could be optimized by caching.
* Complex queries shall be supported in a simplified inline syntax: users add queries to the article source, and the article then shows the result lists. These lists need to be updated regularly, but not on every change. So, if it is not affordable to do live-updates for the query results, updating result lists included in articles once a day might also be acceptable. This is quite an extreme case (and might not be motivating for contributors who want to see their changes have immediate effect), but it illustrates that we are somewhat flexible.

What we really need is to guarantee that the standard usage is hardly slowed down at all. The added semantic features are somewhat optional: we need a certain amount to convince anyone to use the extension, but we can be restrictive to ensure acceptable performance. It would also be OK to restrict queries wrt. complexity or size of result set. Our problem with evaluation is that we do not have real testing data until the extension is active in some major wiki, but that we need to ensure some amount of scalability before that.

I would also like to learn more about the current capabilities of Appmosphere. My impression was that its RDF-store and query features are rather new -- is it currently recommended for major productive use? Having an integrated API of RAP and Appmosphere would clearly be great for our setting.

Redland is the third store that we really consider. Since it seems to be a one-man project, I wonder whether its future development is secured (e.g. the demos on the site were all disabled when Dave Beckett switched to Yahoo!).

Concerning 3Store, I thought that they have a document-centric approach where you first load a large RDF document and then ask queries. Whatever the performance of the querying is, we could not afford to reload the whole data every time someone makes a change. The PHP binding of 3Store is realized by making calls to shell commands from PHP.

On Wednesday 12 April 2006 17:50, Chris Bizer wrote:
> [...]

--
Markus Krötzsch
Institute AIFB, University of Karlsruhe, D-76128 Karlsruhe
ma...@ai... phone +49 (0)721 608 7362
www.aifb.uni-karlsruhe.de/WBS/ fax +49 (0)721 693 717 |
From: <au...@in...> - 2006-04-12 16:25:19
|
We have used Powl, with RAP as a basis, with quite large knowledge bases (0.5M triples). My experience is that performance is largely determined by the underlying database, if you try to encode as much as possible in SQL queries. In Powl we have gone this way and enhanced RAP with quite a lot of API functions triggering fairly complex SQL queries. For sure it would be better to use SPARQL instead, but at the time we started to work on Powl, SPARQL was not yet available, and I still have the impression that crucial features are missing (e.g. aggregations). From my point of view it would be good if someone could integrate SPARQL-to-SQL query rewriting into RAP (e.g. port Benjamin's work). We are actually working on an intelligent caching strategy allowing for selective cache object invalidation on updates, and on its implementation for RAP. Cheers, Sören |
From: Richard C. <ri...@cy...> - 2006-04-12 15:55:26
|
Hi Markus,

Great work you guys are doing with Semantic MediaWiki.

Just my two cents: RAP's SPARQL engine is not optimized for accessing database models. It does much heavy lifting in PHP code and I guess it will be rather slow in such a setup. (Disclaimer: I've never actually used it with a DbModel. Tobias, please correct me if I'm getting something wrong.)

If you need a high-performance triple store for a PHP app, I think you should evaluate Benjamin Nowack's ARC (there was some talk about integrating this into RAP -- is this still being considered?). There's not much else in native PHP. For really good performance you want an external triple store. If Java is forbidden, that leaves pretty much only 3Store, which reputedly is very fast, does cool stuff with SPARQL, and AFAIK has some kind of PHP interface.

(This is all just my personal opinion, not backed by actual experience, and I'm not a core RAP developer.)

Best,
Richard

On 11 Apr 2006, at 16:40, Markus Krötzsch wrote:
> Hi.
>
> We consider using RAP as a quadstore for Semantic MediaWiki (see http://wiki.ontoworld.org). [...] |
From: Chris B. <ch...@bi...> - 2006-04-12 15:51:10
|
Hi Markus,

> We consider using RAP as a quadstore for Semantic MediaWiki (see http://wiki.ontoworld.org).

Interesting.

> In the long run, we are interested in inferencing, but for now Wikipedia-size scalability is most important.

Hmm, sorry, to my knowledge there are no systematic comparisons of the performance of RAP with other RDF toolkits.

We did some relatively unsystematic performance testing when we implemented different features, but the results are outdated by now.

Sören Auer and Bart Pieterse (both cc'ed) have used RAP in bigger projects and I guess they are the best sources for practical experiences with the performance of RAP with bigger real-world datasets.

My general impression is that, as PHP itself is still slower than languages like Java or C, RAP is also slow and its performance cannot be compared with toolkits like Jena or Sesame. Sören might disagree with me on this point.

> Are there recent evaluations concerning the performance of the different storage models? In particular, we are interested in scalability of the following functions:
>
> 1 SPARQL queries:
> 1.1 general performance

Around one second for a medium complex query against a data set with 100 000 triples in memory, much slower if the data set is in a database. Tobias Gauss can give you details.

A PHP alternative for SPARQL queries against data sets which are stored in a database is Benjamin's appmosphere toolkit, http://www.appmosphere.com/pages/en-arc. He does smarter SPARQL-to-SQL rewriting than RAP and should theoretically be faster.

> 1.2 performance of "join-intensive" queries (involving long chains of triples)
> 1.3 performance of datatype queries (e.g. selecting/sorting results by some xsd:int or xsd:decimal)
> 1.4 performance for partial result lists (e.g. getting only the first 20)
> 2 simple read access (e.g. getting all triples of a certain pattern or RDF dataset)

OK with models up to 100 000 triples. Don't know about bigger models. Sören?

> 3 write access
> 3.1 adding triples to an existing store
> 3.2 deleting selected triples from the store

Should be OK. I think Sören implemented some workarounds for bulk updates.

> 4 impact of RDF dataset features/named graph functionality

About 5% slower than operations on classic RDF models.

> For inclusion in Wikipedia, dealing with about 10 Mio triples split into 1 Mio RDF datasets is probably necessary.

Too much for RAP, too much for appmosphere (Benjamin?), and I guess even hard for Jena, Redland and Co if the queries become more complicated.

> We are working on useful update and caching strategies to reduce access to the RDF store, but a rather high number of parallel requests still is to be expected (though normal reading of articles will not touch the store). It would also be possible to restrict to certain types of queries if this leads to improved performance.
>
> We currently use RAP as an RDF parser for importing ontologies into Semantic MediaWiki. For querying our RDF data, we consider reusing an existing triplestore such as Redland or RAP, but also using SQL queries directly. Java toolkits are not an option since Wikipedia requires the use of free software (and free Java implementations probably don't support current RDF stores).

If current RDF stores means Named Graph stores, then you could use a combination of Jena and NG4J. Jena is BSD-licensed and supports SPARQL. NG4J adds an API for manipulating Named Graph sets. See: http://www.wiwiss.fu-berlin.de/suhl/bizer/ng4j/

> I can imagine that one can already find performance measures for RAP somewhere on the web -- sorry if I missed this.

Not that I know of. But all efforts in that direction are highly welcomed.

Cheers,

Chris

> Best regards,
>
> Markus
>
> --
> Markus Krötzsch
> Institute AIFB, University of Karlsruhe, D-76128 Karlsruhe
> ma...@ai... phone +49 (0)721 608 7362
> www.aifb.uni-karlsruhe.de/WBS/ fax +49 (0)721 693 717

--
Chris Bizer
Freie Universität Berlin
Phone: +49 30 838 54057
Mail: ch...@bi...
Web: www.bizer.de |
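For readers who want to reproduce the kind of in-memory measurement Chris describes, a sketch of timing a two-pattern SPARQL query against a RAP memory model; the sparqlQuery() entry point, data file and predicates are assumptions, so check them against the RAP release you actually run:

    define("RDFAPI_INCLUDE_DIR", "rdfapi-php/api/");
    include(RDFAPI_INCLUDE_DIR . "RdfAPI.php");

    $model = ModelFactory::getDefaultModel();
    $model->load("dataset.rdf");                          // placeholder data set

    $query = 'SELECT ?doc ?name WHERE {
                 ?doc <http://purl.org/dc/elements/1.1/creator> ?person .
                 ?person <http://xmlns.com/foaf/0.1/name> ?name . }';

    // PHP 4 compatible wall-clock timing
    list($usec, $sec) = explode(" ", microtime());
    $start = (float)$usec + (float)$sec;

    $result = $model->sparqlQuery($query);                // assumption: convenience method

    list($usec, $sec) = explode(" ", microtime());
    echo count($result) . " rows in " . ((float)$usec + (float)$sec - $start) . "s\n";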
From: Markus <ma...@ai...> - 2006-04-11 14:41:05
|
Hi.

We consider using RAP as a quadstore for Semantic MediaWiki (see http://wiki.ontoworld.org). In the long run, we are interested in inferencing, but for now Wikipedia-size scalability is most important. Are there recent evaluations concerning the performance of the different storage models? In particular, we are interested in scalability of the following functions:

1 SPARQL queries:
1.1 general performance
1.2 performance of "join-intensive" queries (involving long chains of triples)
1.3 performance of datatype queries (e.g. selecting/sorting results by some xsd:int or xsd:decimal)
1.4 performance for partial result lists (e.g. getting only the first 20)
2 simple read access (e.g. getting all triples of a certain pattern or RDF dataset)
3 write access
3.1 adding triples to an existing store
3.2 deleting selected triples from the store
4 impact of RDF dataset features/named graph functionality

For inclusion in Wikipedia, dealing with about 10 Mio triples split into 1 Mio RDF datasets is probably necessary. We are working on useful update and caching strategies to reduce access to the RDF store, but a rather high number of parallel requests still is to be expected (though normal reading of articles will not touch the store). It would also be possible to restrict to certain types of queries if this leads to improved performance.

We currently use RAP as an RDF parser for importing ontologies into Semantic MediaWiki. For querying our RDF data, we consider reusing an existing triplestore such as Redland or RAP, but also using SQL queries directly. Java toolkits are not an option since Wikipedia requires the use of free software (and free Java implementations probably don't support current RDF stores).

I can imagine that one can already find performance measures for RAP somewhere on the web -- sorry if I missed this.

Best regards,

Markus

--
Markus Krötzsch
Institute AIFB, University of Karlsruhe, D-76128 Karlsruhe
ma...@ai... phone +49 (0)721 608 7362
www.aifb.uni-karlsruhe.de/WBS/ fax +49 (0)721 693 717 |
From: Richard C. <ri...@cy...> - 2006-04-11 12:08:35
|
Hi,

Parsing the Protégé Camera ontology (available at http://protege.stanford.edu/plugins/owl/owl-library/camera.owl ) produces several notices in RdfParser:

Notice: Undefined index: first_blank_node_id in /Users/richard/rap/htdocs/rdfapi-php/api/syntax/RdfParser.php on line 1877

Here's some simple code to show the problem:

<?php
define("RDFAPI_INCLUDE_DIR", "rdfapi-php/api/");
include(RDFAPI_INCLUDE_DIR . "RdfAPI.php");
$model = ModelFactory::getDefaultModel();
$model->load('http://protege.stanford.edu/plugins/owl/owl-library/camera.owl');
?>

I'm running this with PHP 4.3.4 on OS X.

Best,
Richard |
From: Richard C. <ri...@cy...> - 2006-04-11 11:45:23
|
Hi Yang,

On 11 Apr 2006, at 10:47, 양경아 wrote:
> I am a new user of RDFAPI-PHP and I can't work out how to read an owl file.
> I want to mount all of the classes, properties and instances defined in an owl file into memory.
> Exactly, I can load the file but don't know how I can get the classes, properties and instances.
>
> OntModel only supports the API listClasses() to get classes. Am I right??
> listClasses() returns the Resource data type. How can I change Resource to OntClass?

If the resource returned by listClasses() has a URI, then you can use $ontModel->createOntClass($uri) to retrieve a matching OntClass object. I don't know if it is possible with an anonymous class. OntModel also has createOntProperty and createIndividual methods.

Hope that helps,
Richard

> This is the part of the code to get classes in the owl file.
> ==================================================
> //create new memory model
> $ontModel = ModelFactory::getOntModel(MEMMODEL,RDFS_VOCABULARY);
>
> //load the parse document
> $ontModel->load(ROOT_DIRECTORY."software.owl");
>
> // load classes
> $array = $ontModel->listClasses("http://ozzy.cbnu.ac.kr/ontology/");
>
> foreach( $array as $resource ){
>     echo $resouce->getLabel();
>     // How can I get the classes, properties and instances ??
> }
> ==================================================
>
> I'll be very happy if you give me an answer.
> Thank you.
> Yang. |