From: George A. <geo...@gm...> - 2013-04-04 17:02:48
Hi,

When will the next update of dotNetRDF be released? A lot of bugs in the existing 0.9 version have been rectified, but a binary is not yet available.

--
Regards,
George Abraham
M.Tech CSE, VIT University
Director - alpha Techno Consultancy
m: +91-8870456778 | e: geo...@gm..., geo...@vi...
w: alpha Techno Consultancy <http://alphatechnoconsultancy.webs.com/>
From: Thomas F. <tho...@sp...> - 2013-03-29 11:14:02
Dear all,

We have experienced a System.OutOfMemoryException with the latest version of dotNetRDF and the following code executed repeatedly (multiple times in a loop) against a Sesame server:

Dim endpoint = New SparqlRemoteEndpoint(New Uri("http://localhost:8080/sesame/repositories/my_repo"))
Dim queryString As SparqlParameterizedString = New SparqlParameterizedString()
queryString.Namespaces.AddNamespace("annot", New Uri(oAppSettingsReader.GetValue("BaseUriSite", GetType(System.String)) & "/annotations.owl#"))
queryString.CommandText = "SELECT DISTINCT ?text WHERE {?annotation annot:onContent <" & _uriDocument & "> ; annot:onContentPart """ & ContentPart & """ ; annot:text ?text ; annot:isValid ""false""^^xsd:boolean . }"
Dim results As SparqlResultSet = endpoint.QueryWithResultSet(queryString.ToString)
For Each result As SparqlResult In results
    Console.WriteLine(DirectCast(result.Value("text"), LiteralNode).Value)
Next
results.Dispose()

(Note the use of results.Dispose, which does not seem to have any effect on memory.)

This version of the code, however, works fine (no memory leak):

Dim store = New Storage.SesameHttpProtocolVersion6Connector("http://localhost:8080/sesame/", "my_repo")
Dim queryString As SparqlParameterizedString = New SparqlParameterizedString()
queryString.Namespaces.AddNamespace("annot", New Uri(oAppSettingsReader.GetValue("BaseUriSite", GetType(System.String)) & "/annotations.owl#"))
queryString.CommandText = "SELECT DISTINCT ?text WHERE {?annotation annot:onContent <" & _uriDocument & "> ; annot:onContentPart """ & ContentPart & """ ; annot:text ?text ; annot:isValid ""false""^^xsd:boolean . }"
Dim results As [Object] = store.Query(queryString.ToString)
For Each result As SparqlResult In DirectCast(results, SparqlResultSet)
    Console.WriteLine(DirectCast(result.Value("text"), LiteralNode).Value)
Next

Do you see anything wrong or missing in the first snippet above that could lead to a memory leak? We are fairly new to dotNetRDF, so we might have missed something; in that case would someone be kind enough to fix the first code snippet and send it back? Or is it that a Sesame server should not be queried with a SparqlRemoteEndpoint?

Thanks,
Thomas & Eric

--
Thomas Francart - Sparna
Consultant Indépendant
Data, Sémantique, Contenus, Connaissances
web : http://sparna.fr, blog : http://francart.fr
Tel : +33 (0)6.71.11.25.97
Fax : +33 (0)9.58.16.17.14
Skype : francartthomas
From: Rob V. <rv...@do...> - 2013-03-28 18:24:34
Hi

I'm not sure what the problem could be; can you provide the exact steps you take in order to aid debugging? Also, I'm not sure if the tool will work with ASP.Net MVC applications; it was designed and tested only against ASP.Net Web Applications.

Regards,

Rob

From: Alans Alexer <ala...@gm...>
Date: Tuesday, March 26, 2013 9:12 PM
To: Rob Vesse <rv...@do...>
Subject: Problem with Creating SPARQL Endpoint

> Hi, Rob:
> I'm a beginner in ASP.Net, and I want to create a SPARQL Endpoint by following your example here:
> https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/UserGuide/ASP/Deploying%20with%20rdfWebDeploy
> I followed every step in your automated rdfWebDeploy example, but I got this error: "can't parse Default Web Site's website ID" (the original message wasn't in English, so I just translated the meaning). I've tried it both on VS2012+IIS Express and VS2010+IIS 7; the problem is the same. The web application I created was just the one VS2010 provided (New -> Project -> ASP.NET MVC 2 Web Application).
> Could you please help me figure out where the problem is? I can provide you with more information if necessary, thanks a lot!
From: Rob V. <rv...@do...> - 2013-03-25 18:55:43
Hi Gerd

The NuGet packages should include the PDB files necessary for the VS debugger to allow you to view the code. It is possible that the Just My Code option may be the culprit: go to Tools > Options > Debugging > General and uncheck the Just My Code option, and then hopefully the debugger should pick up the source from the included PDBs.

This is not something I can easily reproduce in my own environment because obviously the source files exist for me.

Rob

On 3/23/13 4:52 AM, "Gerd Gröner" <gr...@un...> wrote:

>Dear Rob,
>
>I am using dotNetRDF (version 0.9.0.2110) in F# in Visual Studio,
>installed via NuGet.
>In particular, I use "SparqlRemoteEndpoint".
>When debugging my program several source files are not found and the
>debugger asks for them. As the expected / original address an absolute
>(local) path is given, e.g.,
>
>c:\Users\rvesse\Documents\mercurial\dotnetrdf\Libraries\core\net40\Core\BaseEndpoint.cs
>c:\Users\rvesse\Documents\mercurial\dotnetrdf\Libraries\core\net40\Query\SPARQLResultSet.cs
>c:\Users\rvesse\Documents\mercurial\dotnetrdf\Libraries\core\net40\Parsing\Handlers\ResultSetHandler.cs
>...
>
>Do you have any ideas?
>
>Thanks a lot!
>
>Regards,
>Gerd
From: Rob V. <rv...@do...> - 2013-03-22 21:46:34
I will try and take a proper look at your example Monday,

Rob

From: Dennis Ludl <Den...@St...>
Reply-To: dotNetRDF User Help and Support <dot...@li...>
Date: Wednesday, March 20, 2013 10:01 PM
To: dotNetRDF User Help and Support <dot...@li...>
Subject: Re: [dotNetRDF-Support] Out of Memory Exception

> Hi Rob,
> thanks for your fast response. I think you're right about the few billion results as the source of the OOM exception. I've added 2 files with some simple sample data. What I want to do with this:
>
> First of all, select some data from the real.txt example, e.g.
>
> ?s real:id "1234".
> ?s real:parts ?part.
> ?part real:weight ?weight.
> FILTER(?weight >= 200)
> ?part real:shapeRepresentation ?shape.
>
> So ?shape would contain an object which refers to a node in the graph of "geo.txt". As you can see in the image (TripleStore Image), the graph of "geo.txt" is tree based. The red lines are inferenced properties (children is transitive; all the other properties are subclasses of children. Same for parent).
> What I want to achieve:
> In the first step I selected 1 shape in the example above. In this case it would be Scene1-Group5-Shape1. So as my final result I want to create a graph which contains only the path Scene1-Group5-Shape1 (Result Image) as well as all the DEF nodes. (I need to serialize all the data in this graph, so I want a graph which only contains relevant data. With the resulting graph I could simply query ?s ?p ?o and serialize the results.)
> I thought about deleting all subnodes of the shapes I don't want to use, like I wrote in my email before. So something like that:
>
> ?s1 ?p ?o.
> FILTER EXISTS(?shape ont:children ?o)
>
> That's the step with the OOM exception. (The Group node - Group3 - wouldn't be deleted, but that's not critical; it's enough to delete Shape + all childnodes.)
> I thought about another solution now and came up with something like this:
>
> ?s real:id "A340Full".
> ?s real:part ?parts.
> ?parts real:shapeRepresentation ?shapes.
> ?shapes ont:parent ?parents.
> ?shapes ont:children ?children.
>
> This works fine and I get my results instantly. But I can't get further. Right now I just have all the nodes of my graph, stored in 3 variables (?parents, ?shapes and ?children). Sadly, I don't have any properties except ont:parent and ont:children, but I need the other properties between the nodes too (e.g. ont:DEF). I'm not sure how to get the properties of all the nodes out of such a situation. If I try something like ?children ?p ?o as a next step it takes a long time (I stopped the query after 15 minutes).
>
> Thank you,
> Dennis
>
> Am 21.03.13 schrieb Rob Vesse <rv...@do...>:
>> Hi Dennis
>>
>> From what you have described, the problem is in part down to the nature of your query/data and partly due to the evaluation strategy that dotNetRDF uses.
>>
>> dotNetRDF builds up intermediate results as it goes along; if there are a lot of possibilities this can lead to very large intermediate results and high memory usage on relatively small datasets.
>>
>> I will try and break down your query a little and explain what I mean:
>>
>> ?id test:id "123". //Returns 1 Node
>> ?id test:parts ?parts. //Returns ~2000 nodes.
>>
>> Since there is a single ID but it is associated with ~2000 parts, the intermediate results at this stage of processing will be ~2,000 results.
>>
>> ?parts test:children ?children //?children would contain about 40k nodes
>>
>> But once we join with this pattern the intermediate results are now around ~42,000 results (depending on your data, could be more/less); up to this point you are likely fine.
>>
>> ?s ?p ?o.
>>
>> When you add this pattern then you are in trouble, since that basically returns your entire triple store (~112,000 triples) and it is a cross product since there are no shared variables; intermediate results are now ~4,704,000,000 results.
>> So dotNetRDF likely OOMs long before it is able to finish evaluating the cross product or start considering the MINUS. I don't know anything about the evaluation strategy used in AllegroGraph, but given that you are generating so many intermediate results it is likely due to the same problem.
>>
>> If you want to figure out exactly at what point dotNetRDF fails you can try following the first part of Debugging SPARQL Queries [1]; this uses the ExplainQueryProcessor to print evaluation explanations, including intermediate result counts, to the console. This will tell you exactly how many results each join is generating and let you see at what point the query fails.
>>
>> I'm not sure I exactly understand your intentions with the last pattern; would it not make more sense to ask ?parts ?p ?o?
>>
>> Maybe if you explained your problem with a trivial data sample and desired results I might be able to help come up with an alternative SPARQL query that doesn't run into this problem. Without changing the query I would suspect you will struggle to run this query on most triple stores, and those that are capable may take a very long time to give you an answer.
>>
>> Hope this helps,
>>
>> Rob
>>
>> [1]: https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/HowTo/Debug%20SPARQL%20Queries
>>
>> From: Dennis Ludl <Den...@St...>
>> Reply-To: dotNetRDF User Help and Support <dot...@li...>
>> Date: Wednesday, March 20, 2013 5:17 PM
>> To: dotNetRDF User Help and Support <dot...@li...>
>> Subject: [dotNetRDF-Support] Out of Memory Exception
>>
>>> Hi,
>>> I have some problems with Out of Memory exceptions right now.
>>> I have a graph in a triplestore with about ~112k triples. Right now I want to query this graph to get all nodes MINUS some specific nodes (and all subnodes of those nodes; the graph is tree based). test:children would return all subnodes of a specific node.
>>> My query looks like this:
>>>
>>> SELECT ?s ?p ?o
>>> FROM <...>
>>> WHERE
>>> {
>>> ?id test:id "123". //Returns 1 Node
>>> ?id test:parts ?parts. //Returns ~2000 nodes.
>>> ?parts test:children ?children //?children would contain about 40k nodes
>>> ?s ?p ?o.
>>> MINUS { ?children ?p ?o. } //Returns about 60k triples
>>> }
>>>
>>> When I try to execute the query I always get an Out of Memory exception. I tried it with dotNetRDF's in-memory TripleStore and also with AllegroGraph (on a VM with 2GB RAM).
>>> I'm not sure if it should be possible to handle queries with variables which contain > 20k results, so I just wanted to ask if there's something wrong with the query or whether it's just not possible to do queries like this.
>>>
>>> Thank you,
>>> Dennis
>>>
>>> ------------------------------------------------------------------------------
>>> Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics. Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_d2d_mar
>>> dotNetRDF-Support mailing list
>>> dot...@li...
>>> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support
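The intermediate-result arithmetic in the reply above can be checked with a quick back-of-the-envelope calculation. This is a plain-Python sketch, not dotNetRDF code; the figures (1 id, ~2,000 parts, ~42,000 results after the children join, ~112,000 triples in the store) are taken directly from the thread, and the only thing verified here is the multiplication behind the cross-product blow-up:

```python
# Back-of-the-envelope check of the intermediate-result sizes quoted above.
ids = 1                  # ?id test:id "123"        -> 1 node
parts = 2_000            # ?id test:parts ?parts    -> ~2,000 results
after_children = 42_000  # join with ?parts test:children ?children
store_triples = 112_000  # ?s ?p ?o matches every triple in the store

# ?s ?p ?o shares no variables with the earlier patterns, so the join
# degenerates into a cross product of the two solution sets:
cross_product = after_children * store_triples
print(cross_product)  # 4704000000 intermediate results
```

This is why the query OOMs before MINUS is even considered: the engine must materialize roughly 4.7 billion intermediate solutions first.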
From: Dennis L. <Den...@St...> - 2013-03-21 05:01:23
@prefix geo: <http://localhost/uploads/geo.xml#> .
@prefix ont: <http://www.test.org/ontologies/ont#> .
@prefix datatype: <http://www.test.org/ontologies/datatypes#> .

geo:Scene1 a ont:Scene ;
    ont:children geo:Scene1-Group3, geo:Scene1-Group5 .

geo:Scene1-Group3 a ont:Group ;
    ont:children geo:Scene1-Group3-Shape1 .

geo:Scene1-Group3-Shape1 a ont:Shape ;
    ont:DEF geo:Scene1-Group3-Shape1-@DEF1 .

geo:Scene1-Group3-Shape1-@DEF1 a ont:SFString ;
    datatype:hasValue "blabla1"^^xsd:string .

geo:Scene1-Group5 a ont:Group ;
    ont:children geo:Scene1-Group5-Shape1 .

geo:Scene1-Group5-Shape1 a ont:Shape ;
    ont:DEF geo:Scene1-Group5-Shape1-@DEF1 .

geo:Scene1-Group5-Shape1-@DEF1 a ont:SFString ;
    datatype:hasValue "blabla2"^^xsd:string .
From: Rob V. <rv...@do...> - 2013-03-21 03:02:36
Hi Dennis

From what you have described, the problem is in part down to the nature of your query/data and partly due to the evaluation strategy that dotNetRDF uses.

dotNetRDF builds up intermediate results as it goes along; if there are a lot of possibilities this can lead to very large intermediate results and high memory usage on relatively small datasets.

I will try and break down your query a little and explain what I mean:

?id test:id "123". //Returns 1 Node
?id test:parts ?parts. //Returns ~2000 nodes.

Since there is a single ID but it is associated with ~2000 parts, the intermediate results at this stage of processing will be ~2,000 results.

?parts test:children ?children //?children would contain about 40k nodes

But once we join with this pattern the intermediate results are now around ~42,000 results (depending on your data, could be more/less); up to this point you are likely fine.

?s ?p ?o.

When you add this pattern then you are in trouble, since that basically returns your entire triple store (~112,000 triples) and it is a cross product since there are no shared variables; intermediate results are now ~4,704,000,000 results. So dotNetRDF likely OOMs long before it is able to finish evaluating the cross product or start considering the MINUS. I don't know anything about the evaluation strategy used in AllegroGraph, but given that you are generating so many intermediate results it is likely due to the same problem.

If you want to figure out exactly at what point dotNetRDF fails you can try following the first part of Debugging SPARQL Queries [1]; this uses the ExplainQueryProcessor to print evaluation explanations, including intermediate result counts, to the console. This will tell you exactly how many results each join is generating and let you see at what point the query fails.

I'm not sure I exactly understand your intentions with the last pattern; would it not make more sense to ask ?parts ?p ?o?
Maybe if you explained your problem with a trivial data sample and desired results I might be able to help come up with an alternative SPARQL query that doesn't run into this problem. Without changing the query I would suspect you will struggle to run this query on most triple stores, and those that are capable may take a very long time to give you an answer.

Hope this helps,

Rob

[1]: https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/HowTo/Debug%20SPARQL%20Queries

From: Dennis Ludl <Den...@St...>
Reply-To: dotNetRDF User Help and Support <dot...@li...>
Date: Wednesday, March 20, 2013 5:17 PM
To: dotNetRDF User Help and Support <dot...@li...>
Subject: [dotNetRDF-Support] Out of Memory Exception

> Hi,
> I have some problems with Out of Memory exceptions right now.
> I have a graph in a triplestore with about ~112k triples. Right now I want to query this graph to get all nodes MINUS some specific nodes (and all subnodes of those nodes; the graph is tree based). test:children would return all subnodes of a specific node.
> My query looks like this:
>
> SELECT ?s ?p ?o
> FROM <...>
> WHERE
> {
> ?id test:id "123". //Returns 1 Node
> ?id test:parts ?parts. //Returns ~2000 nodes.
> ?parts test:children ?children //?children would contain about 40k nodes
> ?s ?p ?o.
> MINUS { ?children ?p ?o. } //Returns about 60k triples
> }
>
> When I try to execute the query I always get an Out of Memory exception. I tried it with dotNetRDF's in-memory TripleStore and also with AllegroGraph (on a VM with 2GB RAM).
> I'm not sure if it should be possible to handle queries with variables which contain > 20k results, so I just wanted to ask if there's something wrong with the query or whether it's just not possible to do queries like this.
>
> Thank you,
> Dennis
From: Dennis L. <Den...@St...> - 2013-03-21 00:29:43
Hi,

I have some problems with Out of Memory exceptions right now. I have a graph in a triplestore with about ~112k triples. Right now I want to query this graph to get all nodes MINUS some specific nodes (and all subnodes of those nodes; the graph is tree based). test:children would return all subnodes of a specific node.

My query looks like this:

SELECT ?s ?p ?o
FROM <...>
WHERE
{
  ?id test:id "123". //Returns 1 Node
  ?id test:parts ?parts. //Returns ~2000 nodes.
  ?parts test:children ?children //?children would contain about 40k nodes
  ?s ?p ?o.
  MINUS { ?children ?p ?o. } //Returns about 60k triples
}

When I try to execute the query I always get an Out of Memory exception. I tried it with dotNetRDF's in-memory TripleStore and also with AllegroGraph (on a VM with 2GB RAM). I'm not sure if it should be possible to handle queries with variables which contain > 20k results, so I just wanted to ask if there's something wrong with the query or whether it's just not possible to do queries like this.

Thank you,
Dennis
From: Rob V. <rv...@do...> - 2013-03-19 16:18:05
Hi Moritz

Without any kind of minimal code sample it is hard to say what the problem might be. Are you sure it is specifically the ParseFromString() method and not the actual query evaluation?

There is a known issue that queries which use a lot of SPARQL expressions (things with FILTER, BIND, project expressions, aggregates, etc.) may run vastly slower under the debugger. This has to do with the fact that errors in expression evaluation propagate via exceptions; under the debugger this means that VS has to check, on every exception thrown, whether it meets the Break on Exception criteria, which depending on your expressions, query and data can be very expensive.

There have also been some changes in an internal data structure used in the 0.8/0.9 releases which can cause similar behaviors; these have been fixed for the in-progress 1.0 release and I have noticed an improvement in query evaluation performance under the debugger.

Rob

On 3/18/13 4:34 AM, "Moritz Eberl" <ebe...@go...> wrote:

>Hi Rob,
>
>I'm working with the SparqlParser from dotNetRDF, and since I've updated
>to version 0.9 (from 0.6.1) the ParseFromString method is really
>slow during debug sessions. When I'm running my application normally, it
>works as fast as it should.
>Do you have any idea what the reason for this behaviour could be?
>
>I use Visual Studio Express, so I have a little trouble building and
>debugging the library myself.
>
>Thanks in advance for your help,
>best regards,
>Moritz
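The cost Rob describes is a general one: propagating evaluation errors by throwing exceptions is far more expensive than returning an in-band error value, and a debugger that inspects every thrown exception amplifies the cost further. A minimal, language-neutral sketch of the two styles (plain Python, not dotNetRDF; the function names are made up for illustration):

```python
# Two ways of signalling an evaluation error, as in SPARQL expression
# evaluation where e.g. a type error makes an expression yield "error".

def divide_with_exception(a, b):
    # Error propagated by throwing: every throw is work for the runtime,
    # and far more for an attached debugger that must inspect it.
    try:
        return a / b
    except ZeroDivisionError:
        return None  # the "error" result

def divide_with_sentinel(a, b):
    # Error propagated as an in-band value: no exception machinery at all.
    return a / b if b != 0 else None

print(divide_with_exception(10, 0))  # None
print(divide_with_sentinel(10, 2))   # 5.0
```

Both functions compute the same results; the difference only shows up in throughput when errors are frequent and a debugger is attached, which matches the FILTER-heavy slowdown described above.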
From: Rob V. <rv...@do...> - 2013-03-12 16:46:03
Hi Sura

Apologies, but this appears to be a bug in how the API finds DatatypeProperties; this has been filed as CORE-339 [1]. The bug is that there is a typo in the constant for the URI for datatype properties, meaning it doesn't correctly match datatype properties. We will get a fix made shortly and it will be available in the next release; in the meantime you can use the following code to retrieve DatatypeProperties:

foreach (OntologyProperty prop in ontologyGraph.GetProperties(ontologyGraph.CreateUriNode(new Uri(NamespaceMapper.OWL + "DatatypeProperty"))))
{
    Console.WriteLine(prop);
}

Apologies for the inconvenience,

Rob

[1] http://dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=339

From: Sura Monday <sur...@ya...>
Reply-To: Sura Monday <sur...@ya...>, dotNetRDF User Help and Support <dot...@li...>
Date: Monday, March 11, 2013 2:54 PM
To: "dot...@li..." <dot...@li...>
Subject: [dotNetRDF-Support] ontologyGraph.AllProperties does not retrieve DatatypeProperties

> Hi,
> I am using dotNetRDF 0.9.0, and am trying to retrieve the DatatypeProperties in my ontology.
> My ontology has DatatypeProperties, as well as ObjectProperties, as shown below:
>
> <owl:DatatypeProperty rdf:about="&ve;toName">
>   <rdfs:label xml:lang="en">toName</rdfs:label>
>   <rdfs:domain rdf:resource="&ve;CommunicationMessage"/>
>   <rdfs:range rdf:resource="&xsd;string"/>
> </owl:DatatypeProperty>
>
> <owl:ObjectProperty rdf:about="&ve;isSituationOf">
>   <rdfs:label xml:lang="en">isSituationOf</rdfs:label>
>   <rdfs:domain rdf:resource="&ve;Situation"/>
>   <rdfs:subPropertyOf rdf:resource="&ve;isSnapshotPartOf"/>
> </owl:ObjectProperty>
>
> However, the below code fragment returns only ObjectProperties:
>
> ontologyGraph = new OntologyGraph();
> FileLoader.Load(ontologyGraph, "ve-ontology.owl");
>
> foreach (OntologyProperty prop in ontologyGraph.AllProperties)
>     Console.WriteLine(prop);
>
> Can someone please point me to the way to retrieve the DatatypeProperties in my ontology?
> > Thanks in advance > /Sura > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to tackle > endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev__________________________________________ > _____ dotNetRDF-Support mailing list dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support |
From: Sura M. <sur...@ya...> - 2013-03-11 21:54:18
Hi,

I am using dotNetRDF 0.9.0, and am trying to retrieve the DatatypeProperties in my ontology. My ontology has DatatypeProperties, as well as ObjectProperties, as shown below:

<owl:DatatypeProperty rdf:about="&ve;toName">
  <rdfs:label xml:lang="en">toName</rdfs:label>
  <rdfs:domain rdf:resource="&ve;CommunicationMessage"/>
  <rdfs:range rdf:resource="&xsd;string"/>
</owl:DatatypeProperty>

<owl:ObjectProperty rdf:about="&ve;isSituationOf">
  <rdfs:label xml:lang="en">isSituationOf</rdfs:label>
  <rdfs:domain rdf:resource="&ve;Situation"/>
  <rdfs:subPropertyOf rdf:resource="&ve;isSnapshotPartOf"/>
</owl:ObjectProperty>

However, the below code fragment returns only ObjectProperties:

ontologyGraph = new OntologyGraph();
FileLoader.Load(ontologyGraph, "ve-ontology.owl");

foreach (OntologyProperty prop in ontologyGraph.AllProperties)
    Console.WriteLine(prop);

Can someone please point me to the way to retrieve the DatatypeProperties in my ontology?

Thanks in advance
/Sura
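Independent of the dotNetRDF bug discussed in this thread, telling the two property kinds apart is purely a matter of matching the owl:DatatypeProperty type URI. The sketch below is a hypothetical illustration in Python's standard-library XML parser, using a simplified copy of the ontology fragment above with the &ve; entities expanded to a made-up http://example.org/ve# base URI:

```python
import xml.etree.ElementTree as ET

OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# Simplified version of the ontology fragment from the thread; the &ve;
# entities are expanded to an illustrative base URI so the parser needs
# no DTD.
doc = """<rdf:RDF xmlns:rdf="{rdf}" xmlns:owl="{owl}">
  <owl:DatatypeProperty rdf:about="http://example.org/ve#toName"/>
  <owl:ObjectProperty rdf:about="http://example.org/ve#isSituationOf"/>
</rdf:RDF>""".format(rdf=RDF, owl=OWL)

root = ET.fromstring(doc)
# Select only elements whose type is owl:DatatypeProperty -- the same
# URI match that the buggy constant in CORE-339 failed to make.
datatype_props = [el.attrib["{%s}about" % RDF]
                  for el in root.iter("{%s}DatatypeProperty" % OWL)]
print(datatype_props)  # ['http://example.org/ve#toName']
```

Rob's workaround does the equivalent inside dotNetRDF by constructing the owl:DatatypeProperty URI node explicitly and passing it to GetProperties().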
From: Rob V. <rv...@do...> - 2013-02-23 17:41:28
Hi All

Various users have recently reported or enquired about a NullReferenceException that can be seen when running dotNetRDF 0.8.2/0.9.0 under the VS debugger. Rest assured that this error is completely harmless and is handled by the responsible code; it appears to be down to a strange quirk of how the debugger reports errors.

See http://www.dotnetrdf.org/blogitem.asp?blogID=74 for a fuller explanation of the history of this issue.

Best Regards,

Rob Vesse
From: Rob V. <ra...@ec...> - 2013-02-22 17:21:25
Hi Ross Yes this is a known issue but it is also a non-issue, see CORE-292 [1] Basically an external library we use does a check to see if the hash function provided supports null keys and catches the NullReferenceException and swallows it. For some reason in the 0.8.x releases VS would report this as unhandled even when it was actually handled. The fix we made in 0.9.0 for this was to include the PDBs in the NuGet packages which results in VS correctly ignoring the error most of the time. However you can still see this error if you have Break on Thrown set for CLR exceptions. The error is completely harmless and if you keep clicking Continue in the debugger the code will proceed to run fine, unfortunately each Graph instance has 4-7 instances of the culprit data structure under the hood so you will have to click a whole bunch of times. There will be a more comprehensive fix for this coming care of an updated version of the external library which we will use for 1.0.0 which should eliminate even the thrown exception. Hope this helps, Rob p.s. I was awarded my PhD officially in December, thanks for the good wishes :) [1]: http://dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=292 On 2/22/13 7:12 AM, "Ross Horne" <ros...@gm...> wrote: >Hi Rob, > >We're having difficulty with the dotNetRDF libraries. The latest 4.0 >library works well on mono. However, when we try through Visual Studio >10.0 for .NET 4.0, when trying to create a new graph, as follows, > >Graph g = new Graph(); > >we get the following error: > >"NullReferenceException was unhandled by user code." > >Is this a known problem with a particular combination of version? >Thank you for any pointers. > >Also, good luck with your viva, I notice you have submitted! > >Regards, > >Ross |
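The probe Rob describes (try hashing a null key, catch the failure, carry on) is a common feature-detection pattern. A minimal sketch in Python, standing in for the .NET library's NullReferenceException handling, shows why a debugger set to Break on Thrown stops here even though the exception never escapes:

```python
# Probe whether a hash function tolerates null (None) keys by simply
# trying it. The exception is thrown and immediately handled -- exactly
# why a "Break on Thrown" debugger stops here although no error escapes.

def supports_null_keys(hash_fn):
    try:
        hash_fn(None)
        return True
    except TypeError:  # the .NET analogue catches NullReferenceException
        return False

print(supports_null_keys(len))  # False: len(None) raises TypeError
print(supports_null_keys(id))   # True: id() accepts any object
```

Because each Graph creates several such data structures, the probe runs several times per Graph, which is why the debugger has to be continued repeatedly, as described above.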
From: Rob V. <rv...@do...> - 2013-02-22 00:35:29
Hi Kenneth

Glad you solved your problem.

One thing I will suggest for future reference is that occasionally some servers may misinterpret the Accept header that dotNetRDF sends and reply with HTML even when RDF formats are available. You can try to work around this by using the two-argument form of LoadFromUri(), where the second argument is an instance of a parser (IRdfReader) that you want to use. If you do this then dotNetRDF will tailor the Accept header to refer only to that format rather than providing one that references all supported formats. This can force some servers to send back RDF instead of HTML.

Hope this helps,

Rob

From: Kenneth Gangstoe <sp...@gm...>
Reply-To: dotNetRDF User Help and Support <dot...@li...>
Date: Thursday, February 21, 2013 6:57 AM
To: dotNetRDF User Help and Support <dot...@li...>
Subject: Re: [dotNetRDF-Support] GetTriplesWithSubject

> Never mind this, just noticed I did my query against www.geonames.org instead of sws.geonames.org!
>
> On Thu, Feb 21, 2013 at 3:03 PM, Kenneth Gangstoe <sp...@gm...> wrote:
>> Thanks for the feedback!
>>
>> When I was digging deeper into fetching geonames information, it seems geonames uses a redirect for their resources:
>>
>> "GeoNames is using 303 (See Other) redirection to distinguish the Concept (thing as is) from the Document about it.
>>
>> For the town Embrun in France we have these two URIs :
>> [1] http://sws.geonames.org/3020251/
>> [2] http://sws.geonames.org/3020251/about.rdf
>>
>> The first URI [1] stands for the town in France. You use this URI if you want to refer to the town. The second URI [2] is the document with the information geonames has about Embrun. The geonames web server is configured to redirect requests for [1] to [2]. The redirection tells Semantic Web Agents that Embrun is not residing on the geonames server but that geonames has information about it instead."
>> >> When I do a >> >> graph.LoadFromUri("http://www.geonames.org/3161732/"); >> >> the server actually redirects to >> >> http://www.geonames.org/3161732/bergen.html >> >> and not to >> >> http://sws.geonames.org/3161732/about.rdf >> >> as I would expect. >> >> This causes a big problem, as DotNetRdf then parses the HTML content instead >> of the RDF content, basically giving me very few triples instead of the full >> set. >> >> I'm not sure what in DotNetRdfs HttpWebRequest thats makes it redirect to a >> webpage instead of the rdf data. I've tried setting the UserAgent to >> something random, but it didn't work. Any ideas? >> >> On Fri, Feb 15, 2013 at 9:58 PM, Rob Vesse <rv...@do...> wrote: >>> Hi Kenneth >>> >>> What you are doing looks perfectly sensible, the problem you appear to have >>> is that the URI that some of your data is linking to is not the same as the >>> URIs in the data retrieved from that URI. This is ultimately a data quality >>> issue and there is not much you can do about it other than complain to the >>> data provider and ask them to clean their data up. >>> >>> As far as fixing your code I would suggest keeping a HashSet<Uri> or the >>> URIs you have resolved and updating that as you go so that when you run into >>> this kind of data problem you don't end up in a endless resolution cycle >>> since you will be able to tell that you already tried to resolve that URI. >>> >>> Hope this helps, >>> >>> Rob >>> >>> From: Kenneth Gangstoe <sp...@gm...> >>> Reply-To: dotNetRDF User Help and Support >>> <dot...@li...> >>> Date: Friday, February 15, 2013 5:51 AM >>> To: <dot...@li...> >>> Subject: [dotNetRDF-Support] GetTriplesWithSubject >>> >>>> Hi, >>>> >>>> I am currently creating some code to resolve an RDF source. I'm pretty new >>>> to dotNetRdf, so sorry in advance if this is a silly mistake. >>>> >>>> Here is what I do: >>>> >>>> 1. 
Initialize store and load initial graph >>>> >>>> IInMemoryQueryableStore store = new TripleStore()) >>>> UriLoader.Load(graph, new Uri(resource)); >>>> >>>> 3. Start resolving objects that are UriNodes: >>>> >>>> foreach (Triple triple in graph.Triples) >>>> { >>>> switch (triple.Object.NodeType) >>>> { >>>> case NodeType.Uri: >>>> { >>>> string predicateTitle = ResolveTitle(triple.Predicate, store); >>>> string objectTitle = ResolveTitle(triple.Object, store); >>>> ... >>>> >>>> >>>> What ResolveTitle basically does is that it searches the Store for any >>>> matching subjects: >>>> >>>> string ResolveTitle(INode node, IInMemoryQueryableStore store) >>>> { >>>> IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); >>>> >>>> If it can't find any subjects in the store that matches the node, it >>>> resolves the URI, and puts it into the store >>>> >>>> Graph g = new Graph(); >>>> g.LoadFromUri(node.Uri); >>>> store.Add(g); >>>> >>>> and tries the matching again. >>>> >>>> This approach seems to work fine, except that in some cases the Object >>>> looks like this: >>>> >>>> triple.Object {http://sws.geonames.org/3161732} VDS.RDF.INode >>>> {VDS.RDF.UriNode} >>>> >>>> while the loaded Graph in the store has Triples that look like this: >>>> >>>> VDS.RDF.Triple {http://sws.geonames.org/3161732/ , >>>> http://www.geonames.org/ontology#name , Bergen} >>>> >>>> As you can see, the Subject has a trailing slash, while the Object does not >>>> have a trailing slash. >>>> >>>> So when it tries to find any matching triples: >>>> >>>> IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); >>>> >>>> it returns none, since they do not match. >>>> >>>> Am I misunderstanding something? >>>> >>>> Is there a way to make sure my object and the subject in the store is >>>> actually the same? 
>>>> >>>> Also, it would be nice with some feedback if my approach to this is totally >>>> wrong :) >>>> >>>> Best regards, >>>> Kenneth >>>> --------------------------------------------------------------------------- >>>> --- Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall >>>> before the end March 2013 and get the hardware for free! Learn more. >>>> http://p.sf.net/sfu/sophos-d2d-feb_________________________________________ >>>> ______ dotNetRDF-Support mailing list >>>> dot...@li...https://lists.sourceforge.net/lists/ >>>> listinfo/dotnetrdf-support >>> >>> ---------------------------------------------------------------------------- >>> -- >>> The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, >>> is your hub for all things parallel software development, from weekly >>> thought >>> leadership blogs to news, videos, case studies, tutorials, tech docs, >>> whitepapers, evaluation guides, and opinion stories. Check out the most >>> recent posts - join the conversation now. http://goparallel.sourceforge.net/ >>> _______________________________________________ >>> >>> dotNetRDF-Support mailing list >>> dot...@li... >>> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support >>> >> > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. Make your web apps faster with > AppDynamics Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb____________________________________________ > ___ dotNetRDF-Support mailing list dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support |
From: Kenneth G. <sp...@gm...> - 2013-02-21 14:57:25
|
Never mind this, just noticed I did my query against www.geonames.orginstead of sws.geonames.org! On Thu, Feb 21, 2013 at 3:03 PM, Kenneth Gangstoe <sp...@gm...> wrote: > Thanks for the feedback! > > When I was digging deeper into fetching geonames information, it seems > geonames uses a redirect for their resources: > > "GeoNames is using *303 (See Other) redirection* to distinguish the * > Concept* (thing as is) from the *Document* about it. > > For the town *Embrun* in France we have these two URIs : > [1] http://sws.geonames.org/3020251/ > [2] http://sws.geonames.org/3020251/about.rdf > > The first URI [1] stands for the town in France. You use this URI if you > want to refer to the town. The second URI [2] is the document with the > information geonames has about *Embrun*. The geonames web server is > configured to redirect requests for [1] to [2]. The redirection tells > Semantic Web Agents that *Embrun* is not residing on the geonames server > but that geonames has information about it instead." > > When I do a > > graph.LoadFromUri("http://www.geonames.org/3161732/"); > > the server actually redirects to > > http://www.geonames.org/3161732/bergen.html > > and not to > > http://sws.geonames.org/3161732/about.rdf > > as I would expect. > > This causes a big problem, as DotNetRdf then parses the HTML content > instead of the RDF content, basically giving me very few triples instead of > the full set. > > I'm not sure what in DotNetRdfs HttpWebRequest thats makes it redirect to > a webpage instead of the rdf data. I've tried setting the UserAgent to > something random, but it didn't work. Any ideas? > > On Fri, Feb 15, 2013 at 9:58 PM, Rob Vesse <rv...@do...> wrote: > >> Hi Kenneth >> >> What you are doing looks perfectly sensible, the problem you appear to >> have is that the URI that some of your data is linking to is not the same >> as the URIs in the data retrieved from that URI. 
This is ultimately a data >> quality issue and there is not much you can do about it other than complain >> to the data provider and ask them to clean their data up. >> >> As far as fixing your code I would suggest keeping a HashSet<Uri> or the >> URIs you have resolved and updating that as you go so that when you run >> into this kind of data problem you don't end up in a endless resolution >> cycle since you will be able to tell that you already tried to resolve that >> URI. >> >> Hope this helps, >> >> Rob >> >> From: Kenneth Gangstoe <sp...@gm...> >> Reply-To: dotNetRDF User Help and Support < >> dot...@li...> >> Date: Friday, February 15, 2013 5:51 AM >> To: <dot...@li...> >> Subject: [dotNetRDF-Support] GetTriplesWithSubject >> >> Hi, >> >> I am currently creating some code to resolve an RDF source. I'm pretty >> new to dotNetRdf, so sorry in advance if this is a silly mistake. >> >> Here is what I do: >> >> 1. Initialize store and load initial graph >> >> IInMemoryQueryableStore store = new TripleStore()) >> UriLoader.Load(graph, new Uri(resource)); >> >> 3. Start resolving objects that are UriNodes: >> >> foreach (Triple triple in graph.Triples) >> { >> switch (triple.Object.NodeType) >> { >> case NodeType.Uri: >> { >> string predicateTitle = ResolveTitle(triple.Predicate, store); >> string objectTitle = ResolveTitle(triple.Object, store); >> ... >> >> >> What ResolveTitle basically does is that it searches the Store for any >> matching subjects: >> >> string ResolveTitle(INode node, IInMemoryQueryableStore store) >> { >> IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); >> >> If it can't find any subjects in the store that matches the node, it >> resolves the URI, and puts it into the store >> >> Graph g = new Graph(); >> g.LoadFromUri(node.Uri); >> store.Add(g); >> >> and tries the matching again. 
>> >> This approach seems to work fine, except that in some cases the Object >> looks like this: >> >> triple.Object {*http://sws.geonames.org/3161732*} VDS.RDF.INode >> {VDS.RDF.UriNode} >> >> while the loaded Graph in the store has Triples that look like this: >> >> VDS.RDF.Triple {*http://sws.geonames.org/3161732/* , >> http://www.geonames.org/ontology#name , Bergen} >> >> As you can see, the Subject has a trailing slash, while the Object does >> not have a trailing slash. >> >> So when it tries to find any matching triples: >> >> IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); >> >> it returns none, since they do not match. >> >> Am I misunderstanding something? >> >> Is there a way to make sure my object and the subject in the store is >> actually the same? >> >> Also, it would be nice with some feedback if my approach to this is >> totally wrong :) >> >> Best regards, >> Kenneth >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall >> before the end March 2013 and get the hardware for free! Learn more. >> http://p.sf.net/sfu/sophos-d2d-feb_______________________________________________dotNetRDF-Support mailing list >> dot...@li... >> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support >> >> >> >> ------------------------------------------------------------------------------ >> The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, >> is your hub for all things parallel software development, from weekly >> thought >> leadership blogs to news, videos, case studies, tutorials, tech docs, >> whitepapers, evaluation guides, and opinion stories. Check out the most >> recent posts - join the conversation now. >> http://goparallel.sourceforge.net/ >> _______________________________________________ >> >> dotNetRDF-Support mailing list >> dot...@li... 
>> https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support >> >> > |
From: Kenneth G. <sp...@gm...> - 2013-02-21 14:03:28
|
Thanks for the feedback! When I was digging deeper into fetching geonames information, it seems geonames uses a redirect for their resources: "GeoNames is using *303 (See Other) redirection* to distinguish the *Concept* (thing as is) from the *Document* about it. For the town *Embrun* in France we have these two URIs: [1] http://sws.geonames.org/3020251/ [2] http://sws.geonames.org/3020251/about.rdf The first URI [1] stands for the town in France. You use this URI if you want to refer to the town. The second URI [2] is the document with the information geonames has about *Embrun*. The geonames web server is configured to redirect requests for [1] to [2]. The redirection tells Semantic Web Agents that *Embrun* is not residing on the geonames server but that geonames has information about it instead." When I do a graph.LoadFromUri("http://www.geonames.org/3161732/"); the server actually redirects to http://www.geonames.org/3161732/bergen.html and not to http://sws.geonames.org/3161732/about.rdf as I would expect. This causes a big problem, as dotNetRDF then parses the HTML content instead of the RDF content, basically giving me very few triples instead of the full set. I'm not sure what in dotNetRDF's HttpWebRequest makes it redirect to a webpage instead of the RDF data. I've tried setting the UserAgent to something random, but it didn't work. Any ideas? On Fri, Feb 15, 2013 at 9:58 PM, Rob Vesse <rv...@do...> wrote: > Hi Kenneth > > What you are doing looks perfectly sensible, the problem you appear to > have is that the URI that some of your data is linking to is not the same > as the URIs in the data retrieved from that URI. This is ultimately a data > quality issue and there is not much you can do about it other than complain > to the data provider and ask them to clean their data up. 
> > As far as fixing your code I would suggest keeping a HashSet<Uri> or the > URIs you have resolved and updating that as you go so that when you run > into this kind of data problem you don't end up in a endless resolution > cycle since you will be able to tell that you already tried to resolve that > URI. > > Hope this helps, > > Rob > > From: Kenneth Gangstoe <sp...@gm...> > Reply-To: dotNetRDF User Help and Support < > dot...@li...> > Date: Friday, February 15, 2013 5:51 AM > To: <dot...@li...> > Subject: [dotNetRDF-Support] GetTriplesWithSubject > > Hi, > > I am currently creating some code to resolve an RDF source. I'm pretty new > to dotNetRdf, so sorry in advance if this is a silly mistake. > > Here is what I do: > > 1. Initialize store and load initial graph > > IInMemoryQueryableStore store = new TripleStore()) > UriLoader.Load(graph, new Uri(resource)); > > 3. Start resolving objects that are UriNodes: > > foreach (Triple triple in graph.Triples) > { > switch (triple.Object.NodeType) > { > case NodeType.Uri: > { > string predicateTitle = ResolveTitle(triple.Predicate, store); > string objectTitle = ResolveTitle(triple.Object, store); > ... > > > What ResolveTitle basically does is that it searches the Store for any > matching subjects: > > string ResolveTitle(INode node, IInMemoryQueryableStore store) > { > IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); > > If it can't find any subjects in the store that matches the node, it > resolves the URI, and puts it into the store > > Graph g = new Graph(); > g.LoadFromUri(node.Uri); > store.Add(g); > > and tries the matching again. 
> > This approach seems to work fine, except that in some cases the Object > looks like this: > > triple.Object {*http://sws.geonames.org/3161732*} VDS.RDF.INode > {VDS.RDF.UriNode} > > while the loaded Graph in the store has Triples that look like this: > > VDS.RDF.Triple {*http://sws.geonames.org/3161732/* , > http://www.geonames.org/ontology#name , Bergen} > > As you can see, the Subject has a trailing slash, while the Object does > not have a trailing slash. > > So when it tries to find any matching triples: > > IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); > > it returns none, since they do not match. > > Am I misunderstanding something? > > Is there a way to make sure my object and the subject in the store is > actually the same? > > Also, it would be nice with some feedback if my approach to this is > totally wrong :) > > Best regards, > Kenneth > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall > before the end March 2013 and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb_______________________________________________dotNetRDF-Support mailing list > dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support > > > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, > is your hub for all things parallel software development, from weekly > thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most > recent posts - join the conversation now. > http://goparallel.sourceforge.net/ > _______________________________________________ > dotNetRDF-Support mailing list > dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support > > |
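As the follow-up message above points out, the RDF for a GeoNames feature is served from the sws.geonames.org host; the www host redirects to the HTML page. A minimal sketch, using the `UriLoader` already shown in this thread and the feature ID from the example, of loading the RDF by starting from the sws URI so the 303 redirect lands on about.rdf:

```csharp
using System;
using VDS.RDF;
using VDS.RDF.Parsing;

class GeoNamesSketch
{
    static void Main()
    {
        IGraph g = new Graph();

        // sws.geonames.org answers a request for the feature URI with a
        // 303 See Other pointing at .../about.rdf; UriLoader follows the
        // redirect and picks an RDF parser from the response content type.
        UriLoader.Load(g, new Uri("http://sws.geonames.org/3161732/"));

        Console.WriteLine(g.Triples.Count + " triples loaded");
    }
}
```

(Per the "Never mind" follow-up above, querying the www host was the actual mistake; sws.geonames.org is the Linked Data endpoint.)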
From: Rob V. <rv...@do...> - 2013-02-19 20:25:34
|
Hi Roberta Answers inline: From: Roberta Rascazzo <ra...@ho...> Date: Tuesday, February 19, 2013 9:58 AM To: Rob Vesse <rv...@do...> Subject: help - Fuseki > Hi Robert, > I'm particularly interested in the possibility of interfacing with a Fuseki > server on which I have a dataset containing data. I'd like to know some more > details and examples about this issue. > How can I visualize triples in my Windows application? This is generally speaking up to you, there are APIs that can help you format Nodes/Graphs as strings for display but not really for GUI elements. See the Formatting API [1] documentation for some information on this. > > How can I update data on the server? See the documentation on Triple Store Integration [2] > > Could you give me some example please, specific for a Fuseki server? Specifically for Fuseki see [3] though that primarily only covers creating a connection, general purpose examples are shown on the Triple Store Integration page > > I hope you can help me... If you have more specific questions please let me know, emails to the support mailing list CC'd on this mail (which requires subscription) are preferable since then the whole community can benefit from your question. > > > I congratulate you a lot for your work. Thanks for the compliment, Rob [1] https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/UserGuide/Formatting%20API [2] https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/UserGuide/Triple%20Store%20Integration [3] https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/UserGuide/Storage/Fuseki > > > Sincerely, > Roberta > |
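As a companion to the links above, a hedged sketch of talking to Fuseki from dotNetRDF. The server URL and dataset name are placeholders, and the exact endpoint path depends on the Fuseki configuration; the linked Fuseki storage page is the authoritative reference:

```csharp
using System;
using VDS.RDF;
using VDS.RDF.Query;
using VDS.RDF.Storage;

class FusekiSketch
{
    static void Main()
    {
        // Placeholder endpoint: Fuseki's default port with a dataset named "ds".
        var fuseki = new FusekiConnector("http://localhost:3030/ds/data");

        // Read: run a SPARQL query against the dataset.
        var results = fuseki.Query("SELECT * WHERE { ?s ?p ?o } LIMIT 10") as SparqlResultSet;
        if (results != null)
        {
            foreach (SparqlResult r in results)
            {
                Console.WriteLine(r.ToString());
            }
        }

        // Write: apply a SPARQL Update to the server.
        fuseki.Update("INSERT DATA { <http://example.org/s> <http://example.org/p> \"o\" }");
    }
}
```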
From: Rob V. <rv...@do...> - 2013-02-19 19:26:23
|
Hi Dennis I don't know of any obvious reason why you should end up with an extra slash in one URI, certainly the triple store should not be doing that. That looks like a data issue however with the examples you have given I cannot reproduce this, it is possible there is some error in your original data files that did not translate in your email since I cannot reproduce this myself. Can you provide example data files that reproduce the issue? Rob From: Dennis Ludl <Den...@St...> Reply-To: Dennis Ludl <Den...@St...>, dotNetRDF User Help and Support <dot...@li...> Date: Friday, February 15, 2013 3:08 PM To: dotNetRDF User Help and Support <dot...@li...> Subject: Re: [dotNetRDF-Support] Variable passing > I've implemeted it with the VALUES clause now and it works perfectly fine > (performance is way better now). But I came across another question while > comparing nodes for the VALUES clause. > I have two RDF files (turtle syntax) which I load both of them to a triple > store with LoadFromFile. > Here are two examples: > > File 1: > <http://example.org/example.ttl#1-part0> a bla:Part; > bla:hasFile <http://localhost/uploads/xyz.xml#Bla1>; > > File 2: > <http://localhost/uploads/xyz.xml#Bla1> a bla:File. > > So I would expact that 1 node "http://localhost/uploads/xyz.xml#Bla1" is > created and used for the triples from both files. My problem right now is that > it reads: > > File1: > http://localhost/uploads/xyz.xml#Bla1 > > File 2: > http:///localhost/uploads/xyz.xml#Bla1 > > I'm not sure if the additional / is an issue with the triplestore, but I can't > find any issues in the RDF files. Could it be the case, that the triplestore > adds the additional / to create 2 seperate nodes? > > > ------------------- > Hi Rob, > thanks for your answer. Populating over the results value by value (1 query > per result) leads to very long processing times for me (A few k values per > result -> 2-3minutes). Thats way to long for my application. 
I'm not able to > reduce the number of results and checked some benchmarks which shows that > current TripleStores can process ~200-700 (simple) queries per second. So > right now I'm trying to reduce my used queries to a minimum. > Thanks for your suggestion with the VALUES clause, this looks like very > promising. I'll implement it that way and hope for a better performance. > > Thank you, > Dennis > ------------------- > Hi Dennis > > The code you suggest should work fine, make your query to get the values for > ?var and then loop over the results populating and making a new query. The > downside to this is that you have to make a query per result from your first > query. > > The other option if you are trying to get to minimize the number of queries > would be to use the results from the first query to build up a VALUES clause > for the second query and then execute that getting back all the final values > in one go, see http://www.w3.org/TR/sparql11-query/#inline-data > <http://www.w3.org/TR/sparql11-query/#inline-data> for syntax of the VALUES > clause. > > Hope this helps, > > Rob > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, is > your hub for all things parallel software development, from weekly thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most recent > posts - join the conversation now. > http://goparallel.sourceforge.net/____________________________________________ > ___ dotNetRDF-Support mailing list dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support |
From: Dennis L. <Den...@St...> - 2013-02-15 23:09:10
|
I've implemented it with the VALUES clause now and it works perfectly fine (performance is way better now). But I came across another question while comparing nodes for the VALUES clause. I have two RDF files (turtle syntax), both of which I load into a triple store with LoadFromFile. Here are two examples: File 1: <http://example.org/example.ttl#1-part0> a bla:Part; bla:hasFile <http://localhost/uploads/xyz.xml#Bla1>; File 2: <http://localhost/uploads/xyz.xml#Bla1> a bla:File. So I would expect that 1 node "http://localhost/uploads/xyz.xml#Bla1" is created and used for the triples from both files. My problem right now is that it reads: File 1: http://localhost/uploads/xyz.xml#Bla1 File 2: http:///localhost/uploads/xyz.xml#Bla1 I'm not sure if the additional / is an issue with the triplestore, but I can't find any issues in the RDF files. Could it be the case that the triplestore adds the additional / to create 2 separate nodes? ------------------- Hi Rob, thanks for your answer. Populating over the results value by value (1 query per result) leads to very long processing times for me (a few k values per result -> 2-3 minutes). That's way too long for my application. I'm not able to reduce the number of results and checked some benchmarks which show that current TripleStores can process ~200-700 (simple) queries per second. So right now I'm trying to reduce my used queries to a minimum. Thanks for your suggestion with the VALUES clause, this looks very promising. I'll implement it that way and hope for better performance. Thank you, Dennis ------------------- Hi Dennis The code you suggest should work fine, make your query to get the values for ?var and then loop over the results populating and making a new query. The downside to this is that you have to make a query per result from your first query. 
The other option if you are trying to minimize the number of queries would be to use the results from the first query to build up a VALUES clause for the second query and then execute that getting back all the final values in one go, see http://www.w3.org/TR/sparql11-query/#inline-data for syntax of the VALUES clause. Hope this helps, Rob |
From: Dennis L. <Den...@St...> - 2013-02-15 21:40:33
|
Hi Rob, thanks for your answer. Populating over the results value by value (1 query per result) leads to very long processing times for me (a few k values per result -> 2-3 minutes). That's way too long for my application. I'm not able to reduce the number of results and checked some benchmarks which show that current TripleStores can process ~200-700 (simple) queries per second. So right now I'm trying to reduce my used queries to a minimum. Thanks for your suggestion with the VALUES clause, this looks very promising. I'll implement it that way and hope for better performance. Thank you, Dennis ------------------- Hi Dennis The code you suggest should work fine, make your query to get the values for ?var and then loop over the results populating and making a new query. The downside to this is that you have to make a query per result from your first query. The other option if you are trying to minimize the number of queries would be to use the results from the first query to build up a VALUES clause for the second query and then execute that getting back all the final values in one go, see http://www.w3.org/TR/sparql11-query/#inline-data for syntax of the VALUES clause. Hope this helps, Rob |
From: Rob V. <rv...@do...> - 2013-02-15 21:02:59
|
Hi Dennis The code you suggest should work fine, make your query to get the values for ?var and then loop over the results populating and making a new query. The downside to this is that you have to make a query per result from your first query. The other option if you are trying to get to minimize the number of queries would be to use the results from the first query to build up a VALUES clause for the second query and then execute that getting back all the final values in one go, see http://www.w3.org/TR/sparql11-query/#inline-data for syntax of the VALUES clause. Hope this helps, Rob From: Dennis Ludl <Den...@St...> Reply-To: Dennis Ludl <Den...@St...>, dotNetRDF User Help and Support <dot...@li...> Date: Friday, February 15, 2013 11:54 AM To: <dot...@li...> Subject: [dotNetRDF-Support] Variable passing > Hi, > i have a short question. > Is it possible to pass variables to other queries? > > e.g. a Query > SELECT ?var > WHERE > { > ?var bla:bla "30". > } > > ?var would be filled with all nodes which match this criteria. I need to pass > the nodes from ?var to another query, so i would like to do something like > this: > > var queryString = new SparqlParameterizedQuery(@" > SELECT ?final > WHERE > { > ?var bla:creator ?final. > } > "); > queryString.AddVariable(results.Variable["var"]). > > In my case it's not possible to do this in 1 query because I have to do some > processing between the queries and need a specific structure. > Is there any possibility to archive this? > > Thank you, > Dennis > > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, is > your hub for all things parallel software development, from weekly thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most recent > posts - join the conversation now. 
> http://goparallel.sourceforge.net/____________________________________________ > ___ dotNetRDF-Support mailing list dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support |
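The VALUES approach Rob describes can be sketched as follows. This is hedged: the data file name and the predicate URIs are placeholders, and an in-memory `TripleStore` stands in for whatever store Dennis is actually querying; `SparqlFormatter` is used so each bound node is serialised in valid SPARQL syntax:

```csharp
using System;
using System.Text;
using VDS.RDF;
using VDS.RDF.Query;
using VDS.RDF.Writing.Formatting;

class ValuesClauseSketch
{
    static void Main()
    {
        var store = new TripleStore();
        store.LoadFromFile("data.ttl"); // placeholder data file

        // First query: collect the bindings for ?var.
        var first = (SparqlResultSet)store.ExecuteQuery(
            "SELECT ?var WHERE { ?var <http://example.org/bla> \"30\" }");

        // Build a VALUES clause from the bindings; SparqlFormatter emits
        // each node in valid SPARQL syntax (<uri>, "literal", etc.).
        var formatter = new SparqlFormatter();
        var values = new StringBuilder("VALUES ?var { ");
        foreach (SparqlResult r in first)
        {
            values.Append(formatter.Format(r.Value("var")));
            values.Append(' ');
        }
        values.Append('}');

        // Second query: one execution covers all the collected values.
        var second = (SparqlResultSet)store.ExecuteQuery(
            "SELECT ?final WHERE { " + values + " ?var <http://example.org/creator> ?final }");
        foreach (SparqlResult r in second) Console.WriteLine(r.ToString());
    }
}
```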
From: Rob V. <rv...@do...> - 2013-02-15 20:59:40
|
Hi Kenneth What you are doing looks perfectly sensible, the problem you appear to have is that the URI that some of your data is linking to is not the same as the URIs in the data retrieved from that URI. This is ultimately a data quality issue and there is not much you can do about it other than complain to the data provider and ask them to clean their data up. As far as fixing your code I would suggest keeping a HashSet<Uri> or the URIs you have resolved and updating that as you go so that when you run into this kind of data problem you don't end up in a endless resolution cycle since you will be able to tell that you already tried to resolve that URI. Hope this helps, Rob From: Kenneth Gangstoe <sp...@gm...> Reply-To: dotNetRDF User Help and Support <dot...@li...> Date: Friday, February 15, 2013 5:51 AM To: <dot...@li...> Subject: [dotNetRDF-Support] GetTriplesWithSubject > Hi, > > I am currently creating some code to resolve an RDF source. I'm pretty new to > dotNetRdf, so sorry in advance if this is a silly mistake. > > Here is what I do: > > 1. Initialize store and load initial graph > > IInMemoryQueryableStore store = new TripleStore()) > UriLoader.Load(graph, new Uri(resource)); > > 3. Start resolving objects that are UriNodes: > > foreach (Triple triple in graph.Triples) > { > switch (triple.Object.NodeType) > { > case NodeType.Uri: > { > string predicateTitle = ResolveTitle(triple.Predicate, store); > string objectTitle = ResolveTitle(triple.Object, store); > ... > > > What ResolveTitle basically does is that it searches the Store for any > matching subjects: > > string ResolveTitle(INode node, IInMemoryQueryableStore store) > { > IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); > > If it can't find any subjects in the store that matches the node, it resolves > the URI, and puts it into the store > > Graph g = new Graph(); > g.LoadFromUri(node.Uri); > store.Add(g); > > and tries the matching again. 
> > This approach seems to work fine, except that in some cases the Object looks > like this: > > triple.Object {http://sws.geonames.org/3161732} VDS.RDF.INode > {VDS.RDF.UriNode} > > while the loaded Graph in the store has Triples that look like this: > > VDS.RDF.Triple {http://sws.geonames.org/3161732/ , > http://www.geonames.org/ontology#name , Bergen} > > As you can see, the Subject has a trailing slash, while the Object does not > have a trailing slash. > > So when it tries to find any matching triples: > > IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); > > it returns none, since they do not match. > > Am I misunderstanding something? > > Is there a way to make sure my object and the subject in the store is actually > the same? > > Also, it would be nice with some feedback if my approach to this is totally > wrong :) > > Best regards, > Kenneth > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall before > the end March 2013 and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb____________________________________________ > ___ dotNetRDF-Support mailing list dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support |
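Rob's HashSet&lt;Uri&gt; suggestion might be worked into Kenneth's ResolveTitle roughly like this. A sketch only: the title-extraction logic is elided, and the method shape follows the code in the question above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using VDS.RDF;

class ResolutionTracker
{
    // URIs we have already tried to resolve, successfully or not.
    private readonly HashSet<Uri> _resolved = new HashSet<Uri>();

    public string ResolveTitle(INode node, IInMemoryQueryableStore store)
    {
        var uriNode = node as IUriNode;
        if (uriNode == null) return null;

        var triples = store.GetTriplesWithSubject(node).ToList();
        if (triples.Count == 0 && _resolved.Add(uriNode.Uri))
        {
            // First encounter: fetch the URI once, then retry the match.
            // If the fetched data uses a slightly different subject URI
            // (e.g. a trailing slash) the match still fails, but this URI
            // will never be fetched a second time.
            var g = new Graph();
            g.LoadFromUri(uriNode.Uri);
            store.Add(g);
            triples = store.GetTriplesWithSubject(node).ToList();
        }

        // ... extract a title from 'triples' as in the original code ...
        return triples.Count > 0 ? triples.First().Object.ToString() : null;
    }
}
```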
From: Dennis L. <Den...@St...> - 2013-02-15 20:21:38
|
Hi, I have a short question. Is it possible to pass variables to other queries? e.g. a query SELECT ?var WHERE { ?var bla:bla "30". } ?var would be filled with all nodes which match this criterion. I need to pass the nodes from ?var to another query, so I would like to do something like this: var queryString = new SparqlParameterizedQuery(@" SELECT ?final WHERE { ?var bla:creator ?final. } "); queryString.AddVariable(results.Variable["var"]). In my case it's not possible to do this in 1 query because I have to do some processing between the queries and need a specific structure. Is there any possibility to achieve this? Thank you, Dennis |
From: Kenneth G. <sp...@gm...> - 2013-02-15 13:51:27
|
Hi, I am currently creating some code to resolve an RDF source. I'm pretty new to dotNetRdf, so sorry in advance if this is a silly mistake. Here is what I do: 1. Initialize store and load initial graph IInMemoryQueryableStore store = new TripleStore(); UriLoader.Load(graph, new Uri(resource)); 2. Start resolving objects that are UriNodes: foreach (Triple triple in graph.Triples) { switch (triple.Object.NodeType) { case NodeType.Uri: { string predicateTitle = ResolveTitle(triple.Predicate, store); string objectTitle = ResolveTitle(triple.Object, store); ... What ResolveTitle basically does is that it searches the Store for any matching subjects: string ResolveTitle(INode node, IInMemoryQueryableStore store) { IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); If it can't find any subjects in the store that match the node, it resolves the URI, and puts it into the store Graph g = new Graph(); g.LoadFromUri(node.Uri); store.Add(g); and tries the matching again. This approach seems to work fine, except that in some cases the Object looks like this: triple.Object {*http://sws.geonames.org/3161732*} VDS.RDF.INode {VDS.RDF.UriNode} while the loaded Graph in the store has Triples that look like this: VDS.RDF.Triple {*http://sws.geonames.org/3161732/* , http://www.geonames.org/ontology#name , Bergen} As you can see, the Subject has a trailing slash, while the Object does not have a trailing slash. So when it tries to find any matching triples: IEnumerable<Triple> triples = store.GetTriplesWithSubject(node); it returns none, since they do not match. Am I misunderstanding something? Is there a way to make sure my object and the subject in the store are actually the same? Also, it would be nice to get some feedback if my approach to this is totally wrong :) Best regards, Kenneth |
From: Rob V. <rv...@do...> - 2013-02-14 20:25:49
|
This has been added as of commit https://bitbucket.org/dotnetrdf/dotnetrdf/commits/c048d8dc84de8e03db66ebaecb08bba0331c90df If you are able to build from source then you can try this out now, I've tested against the latest Franz-provided AllegroGraph AMIs and it appears to work fine. Rob From: Yossi Cohen <yos...@li...> Reply-To: dotNetRDF User Help and Support <dot...@li...> Date: Tuesday, February 5, 2013 10:06 AM To: DotNetRDF mailing-list <dot...@li...> Subject: Re: [dotNetRDF-Support] AllegroGraphConnector & IUpdateableStorage > Okay, > Thanks. > > Date: Tue, 5 Feb 2013 09:43:07 +0000 > From: rv...@do... > To: dot...@li... > Subject: Re: [dotNetRDF-Support] AllegroGraphConnector & IUpdateableStorage > > Yes I guess it should, the connector was developed for 3.x and then later > updated for 4.0 at which point I don't believe it supported SPARQL update. > > It should be relatively easy to add support for updates, filed as CORE-308 > <http://dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=308> > > Note that as a workaround you can use GenericUpdateProcessor to apply updates > over an AllegroGraphConnector. Then once an updated version of dotNetRDF with > IUpdateableStorage supported on AllegroGraphConnector is available you won't > need to change your code since GenericUpdateProcessor defers to the provided > update implementation if the IStorageProvider instance implements > IUpdateableStorage > > Rob > > From: Yossi Cohen <yos...@li...> > Reply-To: dotNetRDF User Help and Support > <dot...@li...> > Date: Tuesday, February 5, 2013 11:30 AM > To: DotNetRDF mailing-list <dot...@li...> > Subject: SPAM-LOW: [dotNetRDF-Support] AllegroGraphConnector & > IUpdateableStorage > >> Shouldn't AllegroGraphConnector implement IUpdateableStorage ? >> ----------------------------------------------------------------------------- >> - Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall >> before the end March 2013 and get the hardware for free! Learn more. 
>> http://p.sf.net/sfu/sophos-d2d-feb___________________________________________ >> ____ dotNetRDF-Support mailing list >> dot...@li...https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support > > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall before > the end March 2013 and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb > _______________________________________________ dotNetRDF-Support mailing list > dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support > > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer Buy your Sophos next-gen firewall before > the end March 2013 and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb____________________________________________ > ___ dotNetRDF-Support mailing list dot...@li... > https://lists.sourceforge.net/lists/listinfo/dotnetrdf-support |
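Until a build containing that commit is released, the GenericUpdateProcessor workaround Rob describes can be sketched as follows. Hedged: the server URL, catalog and store names are placeholders, and the constructor/parser details should be checked against the dotNetRDF documentation:

```csharp
using VDS.RDF.Parsing;
using VDS.RDF.Storage;
using VDS.RDF.Update;

class AllegroUpdateSketch
{
    static void Main()
    {
        // Placeholder server, catalog and store names.
        var agraph = new AllegroGraphConnector("http://localhost:10035", "catalog", "store");

        // GenericUpdateProcessor implements SPARQL Update on top of the
        // provider's load/save operations; once AllegroGraphConnector gains
        // IUpdateableStorage support the processor defers to that instead,
        // so this code keeps working unchanged.
        var processor = new GenericUpdateProcessor(agraph);
        var cmds = new SparqlUpdateParser().ParseFromString(
            "INSERT DATA { <http://example.org/s> <http://example.org/p> <http://example.org/o> }");
        processor.ProcessCommandSet(cmds);
    }
}
```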