From: Rob V. <rv...@vd...> - 2010-06-21 13:22:25
The exact behaviour of SaveGraph() with regards to how it affects an existing Graph of the same URI is left up to the implementer as it may be very dependent on the underlying store. The XML comments for the interface state that implementations should note their exact behaviour in the comments for the SaveGraph() method. This is typically stated in the remarks section rather than in the method summary, though I'll make sure that methods clearly state this in their summaries from now on.

I have updated all the comments on the existing implementations as of revision 789. Will also update the online documentation shortly to highlight this.

Rob

----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: Monday, June 21, 2010 11:52 AM
To: rv...@vd...
Subject: Re: Updating/inserting-into DB-backed graph without doing LoadGraph

> Do NOT use the SaveGraph() method as that typically overwrites any existing
> Graph completely rather than merging with it (this is the behaviour for
> Virtuoso)

Great, that likely explains a lot of weirdness. Maybe that method should have a warning sticker, either in the online docs or in the intellisense comments...

Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Michael F. <fr...@gm...> - 2010-06-21 10:51:56
> Do NOT use the SaveGraph() method as that typically overwrites any existing
> Graph completely rather than merging with it (this is the behaviour for
> Virtuoso)

Great, that likely explains a lot of weirdness. Maybe that method should have a warning sticker, either in the online docs or in the intellisense comments...

Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Rob V. <rv...@vd...> - 2010-06-21 09:48:26
The best way to do this with the existing API is to create a new empty graph, use that Graph to create the nodes you need and generate two lists of triples - those to be added and those to be removed - and then call the UpdateGraph() method passing in your graph's URI and the two lists. Do NOT use the SaveGraph() method as that typically overwrites any existing Graph completely rather than merging with it (this is the behaviour for Virtuoso).

If you are willing to use the very latest (and potentially unstable) builds from SVN then revision 780 contains a new WriteOnlyStoreGraph class which fills in a hole in the API which your question highlights. This provides a write-only view onto a Graph stored in some arbitrary store, so you pass in a Graph URI and an IGenericIOManager (in your case an instance of a VirtuosoManager) and then use the Assert() and Retract() methods as usual and your changes will get automatically persisted to the backing Store. Note that this persistence is batched in the background so changes aren't persisted instantaneously; it's worth explicitly calling the Dispose() method on the Graph when you're done with it as this forces any remaining unpersisted changes to be persisted to the Store.

Regards,

Rob Vesse

----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: Monday, June 21, 2010 10:23 AM
To: dot...@li..., rv...@vd...
Subject: SPAM-LOW: Updating/inserting-into DB-backed graph without doing LoadGraph

I have a very large graph which I persist in a SQL-database (switching to Virtuoso). Looking at the dotNetRDF API, it looks like you have to have a graph object to create nodes and triples. So, I've been doing manager.LoadGraph(), inserting/updating and then manager.SaveGraph(). Loading the graph takes a lot of time and consumes a lot of memory though. What's the performant way to do this? Should I just new up a new graph, set the various namespaces and then hope the manager merges it into the existing one when I do SaveGraph()?

Regards
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
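For concreteness, the UpdateGraph() approach described above looks roughly like this. This is a minimal sketch only: the Virtuoso connection settings, URIs and literal values are illustrative, and the exact constructor and UpdateGraph() overload signatures are assumptions based on the description in this thread rather than quotes from the documentation.

//Sketch of the UpdateGraph() approach described above
//(requires: using System; using System.Collections.Generic; using VDS.RDF; using VDS.RDF.Storage;)
VirtuosoManager manager = new VirtuosoManager("localhost", 1111, "DB", "username", "password");

//A scratch Graph used only to create Nodes - it is never saved itself
Graph g = new Graph();
UriNode subj = g.CreateUriNode(new Uri("http://example.org/thing"));
UriNode pred = g.CreateUriNode(new Uri("http://example.org/label"));

List<Triple> additions = new List<Triple>();
additions.Add(new Triple(subj, pred, g.CreateLiteralNode("new value")));

List<Triple> removals = new List<Triple>();
removals.Add(new Triple(subj, pred, g.CreateLiteralNode("old value")));

//Merge the changes into the existing Graph in the store without loading it first
manager.UpdateGraph(new Uri("http://example.org/myGraph"), additions, removals);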
From: Michael F. <fr...@gm...> - 2010-06-21 09:15:32
I have a very large graph which I persist in a SQL-database (switching to Virtuoso). Looking at the dotNetRDF API, it looks like you have to have a graph object to create nodes and triples. So, I've been doing manager.LoadGraph(), inserting/updating and then manager.SaveGraph(). Loading the graph takes a lot of time and consumes a lot of memory though. What's the performant way to do this? Should I just new up a new graph, set the various namespaces and then hope the manager merges it into the existing one when I do SaveGraph()?

Regards
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Rob V. <rv...@do...> - 2010-06-21 07:28:50
Sorry for the slow reply, I was away over the weekend.

Ok so it's not an issue of Blank Nodes and it doesn't appear to be an issue with the query. My next suggestion would be that it might be due to the way your SPARQL endpoint is configured. Are you creating a SPARQL endpoint in an ASP.Net application and then querying against it? If so then you may need to check the configuration of your endpoint. By default the endpoint is configured with the LoadMode setting set to OnDemand which means that it only starts with the default graph loaded in memory and only loads other graphs from the SQL store as and when queries demand them. If it is not already set to something else I would recommend setting the LoadMode setting to PreloadAll which ensures that all the data is loaded from the store before it answers any queries. Another possible explanation for your missing triples is that if you've set this to PreloadAllAsync then the endpoint will start answering queries before it has fully loaded the data, which means that until the data finishes loading your queries will return incomplete results.

If this is not the cause of your issue then is it possible for you to send me some sample data so that I can do some tests and debugging to determine what the cause of the problem might be?

Thanks,

Rob Vesse

----------------------------------------
From: "Michael Friis" <fr...@gm...>
Sent: Thursday, June 17, 2010 10:02 PM
To: rv...@vd...
Subject: Re: [dotNetRDF-develop] Triples evident in database not visible via Sparql query

> Of course none of the above may be the actual cause of your issue but
> without seeing any data or SPARQL queries I can only speculate.

Here's a SPARQL query which returns no results (I've checked the prefixes):

select distinct ?obj
where {
?obj eb-owl:isaliasof ?subj
}

If I do this to the database:

select * from NODES n
where n.nodeValue like '%isalias%'

... I can see that the eb-owl:isaliasof node has nodeId 4047

If I then do:

select *
from TRIPLES t
where t.triplePredicate = 4047

... I get over a thousand rows.

Note that this seems to work after initial data load, but it then borks as I add more triples...

Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Michael F. <fr...@gm...> - 2010-06-17 21:01:27
> Of course none of the above may be the actual cause of your issue but
> without seeing any data or SPARQL queries I can only speculate.
Here's a SPARQL query which returns no results (I've checked the prefixes):
select distinct ?obj
where {
?obj eb-owl:isaliasof ?subj
}
If I do this to the database:
select * from NODES n
where n.nodeValue like '%isalias%'
... I can see that the eb-owl:isaliasof node has nodeId 4047
If I then do:
select *
from TRIPLES t
where t.triplePredicate = 4047
... I get over a thousand rows.
Note that this seems to work after initial data load, but it then
borks as I add more triples...
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Rob V. <rv...@vd...> - 2010-06-17 16:13:27
Hi Michael
So I'm not entirely sure what the issue is from your description though I have a possible idea. Firstly I need to know what your SPARQL queries look like as without seeing those I can only guess at the issue so if you could send those that would be very helpful. Also if possible can you send some data that will reproduce the issue - a subset of your data which shows the issue would be sufficient.
Secondly I note that you state "entities/nodes" in your report which leads me to believe that some of the things are blank nodes - is this correct? If they are then you may not be getting the behaviour you expect from SPARQL for several reasons:
1) Blank Nodes in a SPARQL query act as anonymous variables rather than node matches. For example the query SELECT * WHERE { _:bnode ?p ?o } would not get all the predicates and objects associated with a specific Blank Node as you might be expecting but rather gets the predicates and objects of every single triple in the store since _:bnode is an anonymous variable.
2) Blank Nodes are scoped to a specific Graph so if you've added the data as several separate graphs then the Blank Node _:bnode in one is not the same as _:bnode in another though they'll have the same Node ID in the database hence why a SQL query would join them while a SPARQL query would not.
3) Blank Node labels in the output get automatically rewritten in a number of instances by the SPARQL engine to attempt to uniquely distinguish them, since identical labels may have been used across different graphs in the store and as per point 2 these are not the same node.
Of course none of the above may be the actual cause of your issue but without seeing any data or SPARQL queries I can only speculate.
Regards,
Rob Vesse
----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: 17 June 2010 15:41
To: dot...@li..., rv...@vd...
Subject: Triples evident in database not visible via Sparql query
(this report hasn't been boiled down completely, apologies)
I have a large graph where I initially assert a lot of triples. These
triples are then visible just fine through Sparql queries. I then
assert some more triples which are not really related to the old ones
(although they concern the same entities/nodes). This causes some of
the original triples to disappear from sparql queries. The specific
triples that are not visible, are those where one of my entities
(UriNodes) are subjects instead of being objects. I can find the
relevant nodes and triples fine in the database (I use MS SQL Server),
they just don't show up when I run Sparql queries.
What's going on?
Here's an example SQL query that shows me the relevant nodes and
triples are in place:
select n2.*, n.*
from TRIPLES t
inner join NODES n on t.tripleObject = n.nodeID
inner join NODES n2 on t.tripleSubject = n2.nodeID
where t.tripleSubject = 27
select n2.*, n.*
from TRIPLES t
inner join NODES n on t.tripleSubject = n.nodeID
inner join NODES n2 on t.tripleObject = n2.nodeID
where t.tripleObject = 27
Regards
Michael
From: Michael F. <fr...@gm...> - 2010-06-17 14:40:48
(this report hasn't been boiled down completely, apologies)

I have a large graph where I initially assert a lot of triples. These triples are then visible just fine through Sparql queries. I then assert some more triples which are not really related to the old ones (although they concern the same entities/nodes). This causes some of the original triples to disappear from Sparql queries. The specific triples that are not visible are those where one of my entities (UriNodes) are subjects instead of being objects. I can find the relevant nodes and triples fine in the database (I use MS SQL Server), they just don't show up when I run Sparql queries.

What's going on?

Here's an example SQL query that shows me the relevant nodes and triples are in place:

select n2.*, n.*
from TRIPLES t
inner join NODES n on t.tripleObject = n.nodeID
inner join NODES n2 on t.tripleSubject = n2.nodeID
where t.tripleSubject = 27

select n2.*, n.*
from TRIPLES t
inner join NODES n on t.tripleSubject = n.nodeID
inner join NODES n2 on t.tripleObject = n2.nodeID
where t.tripleObject = 27

Regards
Michael
From: Michael F. <fr...@gm...> - 2010-06-10 19:21:16
> Hope this helped explain things a little, if you want more
> information/examples please let me know.

Right, thanks a bunch. I think I might try and resurrect linqtordf, I don't much like writing SPARQL :-). dotNetRDF would still be useful for updating the store though.

Regards
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Rob V. <rv...@vd...> - 2010-06-10 13:46:37
Hi Michael
When I said dotNetRDF works with Linq I meant it is designed so you can use it with Microsoft's System.Linq namespace.
LinqToRdf is a now inactive project (afaik) developed by the guy who now runs Semantic Overflow which was a Linq style wrapper around the also inactive SemWeb library which mapped from Linq methods into SPARQL queries.
So when I say dotNetRDF supports Linq what I'm talking about is the fact that many objects and methods are defined as IEnumerable<T> which means that all of Microsoft's Linq methods for IEnumerable<T> can be used.
For example you can do something like the following:
Graph g = new Graph();
//Get some data from somewhere...
//Use the Linq OrderBy() method to order Triples by predicate
foreach (Triple t in g.Triples.OrderBy(item => item.Predicate))
{
//Do something with each Triple
}
Or equally you can use the natural language style syntax to do things:
//Use Linq natural language syntax with a where clause to select Triples with a Blank Node subject
IEnumerable<Triple> ts = from t in g.Triples where t.Subject.NodeType == NodeType.Blank select t;
So basically any of the methods defined by the Enumerable class in the System.Linq namespace can be used against many of the objects and their methods from the dotNetRDF library - http://msdn.microsoft.com/en-us/library/bb345746%28v=VS.100%29.aspx
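A couple more of the standard operators, composed in the same way (a minimal sketch; the predicate URI here is purely illustrative):

//Count the Triples using a particular predicate with Where() and Count()
UriNode pred = g.CreateUriNode(new Uri("http://example.org/someProperty"));
int count = g.Triples.Where(t => t.Predicate.Equals(pred)).Count();

//Or check whether any Triple has a Literal object using Any()
bool hasLiteralObjects = g.Triples.Any(t => t.Object.NodeType == NodeType.Literal);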
Hope this helped explain things a little, if you want more information/examples please let me know.
Regards,
Rob Vesse
----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: 09 June 2010 21:55
To: rv...@vd...
Subject: dotNetRDF and Linq
Hi Rob, you mention in this answer that dotNetRDF works with Linq:
http://www.semanticoverflow.com/questions/448/what-are-microsofts-offerings-in-the-semantic-world/458#458
Could you describe this in more detail? I'm looking at LinqtoRDF, but
it seems kinda old and kinda broke...
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Rob V. <rv...@vd...> - 2010-06-07 13:28:11
Hi Michael

I've never had such an error reported before, so could you please provide a complete test case that I can use to attempt to reproduce this, in order that I can determine what the cause of the problem might be. If you can send me an example graph and examples of the query/queries you are attempting to execute then I will be able to investigate the issue and attempt to find a solution for you.

Thanks,

Rob Vesse

----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: 07 June 2010 13:35
To: dot...@li...
Subject: [dotNetRDF-develop] Insufficient Error with MSSSQLServer datastore

Hello

I tend to get "There is insufficient system memory in resource pool 'internal' to run this query." errors when running queries against a fairly modest graph stored in an MS SQL Server datastore. Is this to be expected? Is there scope for optimising the database using indexes or something else?

Regards
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Michael F. <fr...@gm...> - 2010-06-07 12:34:41
Hello

I tend to get "There is insufficient system memory in resource pool 'internal' to run this query." errors when running queries against a fairly modest graph stored in an MS SQL Server datastore. Is this to be expected? Is there scope for optimising the database using indexes or something else?

Regards
Michael
--
http://friism.com
(+45) 27122799
Sapere aude
From: Rob V. <rv...@vd...> - 2010-06-04 08:53:02
Hi Michael
Thanks for spotting this; yes, this is a bug and it has been fixed in SVN as of revision 752.
Thanks,
Rob Vesse
----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: 03 June 2010 23:32
To: dot...@li...
Subject: [dotNetRDF-develop] Operator overloading
Imagine "graph" being an empty graph, this code returns false:
graph.GetUriNode("operator-overloaders-should-be-shot") == null
... this is due to wonky operator overloading (or maybe I don't get
the semantics of node equality).
Michael
From: Rob V. <rv...@vd...> - 2010-06-04 08:51:48
Hi Michael

All the hash codes for Nodes and Triples in dotNetRDF (and most other internal objects which need their own hash codes) are based upon using GetHashCode() on a string representation of the object in question. Yes, hash codes are not stable across .Net versions and platform architectures, but we do not require them to be since the vast majority of the time these hash codes are only being used in memory for the purpose of storage and lookup in various hash code based data structures and algorithms. Hash codes just need to be fast and efficient to compute, which the .Net string class's GetHashCode() implementation is regardless of .Net version/architecture, since we want to compute the hash codes once and then store the value and just return it when needed.

While we could potentially design and implement our own hash code algorithm for these things there is no point reinventing the wheel, and it would almost certainly take far more effort to implement than the potential benefit of finding/designing an algorithm which generated codes with sufficient uniqueness and efficiency.

With regards to their use for database identity this is a pragmatic design decision which makes a trade off between read/write speed and data instability. Hash codes are used as part of database identity only for our own SQL based stores, simply because it makes a significant difference in speed and most of the time you'll create and access your data on the same architecture so hash code instability won't be an issue. Since it is a potential issue the database code is all designed to take account of this hash code instability and work around it automatically and seamlessly. Actual database identity is based on numeric identifiers and the hash codes are only used as a means to speed up and cache conversions between Nodes and database IDs.

Regards,

Rob Vesse

----------------------------------------
From: Michael Friis <fr...@gm...>
Sent: 03 June 2010 16:05
To: dot...@li...
Subject: [dotNetRDF-develop] Use of string.GetHashCode() for database identety

As far as I can determine from the code in e.g. UriNode.cs, dotNetRDF uses string.GetHashCode() for database identity. This is bad design because the string hash codes are not stable across .Net versions nor across architectures:
http://stackoverflow.com/questions/2099998/hash-quality-and-stability-of-string-gethashcode-in-net

Regards
Michael
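The compute-once-and-cache pattern Rob describes can be sketched roughly as follows. This is an illustrative simplification, not the actual dotNetRDF source; the class and member names are made up.

//Illustrative only - shows the "hash the string form once and cache it" idea described above
//(requires: using System;)
public class CachedHashNode
{
    private readonly string _stringForm;
    private readonly int _hashCode;

    public CachedHashNode(string stringForm)
    {
        this._stringForm = stringForm;
        //Compute the hash of the string representation once, up front...
        this._hashCode = stringForm.GetHashCode();
    }

    //...and just return the stored value on every subsequent call
    public override int GetHashCode()
    {
        return this._hashCode;
    }

    public override bool Equals(object obj)
    {
        CachedHashNode other = obj as CachedHashNode;
        return other != null && this._stringForm.Equals(other._stringForm);
    }

    public override string ToString()
    {
        return this._stringForm;
    }
}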
From: Michael F. <fr...@gm...> - 2010-06-04 08:42:17
Great, didn't realise the hashes in the database were used for caching only.

Michael

On Fri, Jun 4, 2010 at 10:36 AM, Rob Vesse <rv...@vd...> wrote:
> Hi Michael
>
> All the hash codes for Nodes and Triples in dotNetRDF (and most other internal objects which need their own hash codes) are based upon using GetHashCode() on a string representation of the object in question. Yes hash codes are not stable across .Net versions and platform architectures but we do not require them to be since the vast majority of the time these hash codes are only being used in memory for the purpose of storage and lookup in various hash code based data structures and algorithms. Hash codes just need to be fast and efficient to compute which the .Net string classes GetHashCode() implementation is regardless of .Net version/architecture since we want to compute the hash codes once and then store the value and just return it when needed.
>
> While we could potentially design and implement our own hash code algorithm for these things there is no point reinventing the wheel and it would almost certainly take far more effort to implement than the potential benefit of finding/designing an algorithm which generated codes with sufficient uniqueness and efficiency.
>
> With regards to their use for database identity this is a pragmatic design decision which makes a trade off between read/write speed and data instability. Hash codes are used as part of database identity only for our own SQL based stores simply because it makes a significant difference in speed and most of the time you'll create and access your data on the same architecture so hash code instability won't be an issue. Since it is a potential issue the database code is all designed to take account of this hash code instability and work around it automatically and seamlessly. Actual database identity is based on numeric identifiers and the hash codes are only used as a means to speed up and cache conversions between Nodes and database IDs.
>
> Regards,
>
> Rob Vesse
>
> ________________________________
> From: Michael Friis <fr...@gm...>
> Sent: 03 June 2010 16:05
> To: dot...@li...
> Subject: [dotNetRDF-develop] Use of string.GetHashCode() for database identety
>
> As far as I can determine from the code in eg. UriNode.cs, dotNetRDF uses string.GetHashCode() for database identety. This is bad design because the string hashcodes are not stable accross .Net version nor accross architecture:
> http://stackoverflow.com/questions/2099998/hash-quality-and-stability-of-string-gethashcode-in-net
>
> Regards
> Michael

--
http://friism.com
(+45) 27122799
Sapere aude
From: Michael F. <fr...@gm...> - 2010-06-03 22:32:00
Imagine "graph" being an empty graph, this code returns false:
graph.GetUriNode("operator-overloaders-should-be-shot") == null
... this is due to wonky operator overloading (or maybe I don't get
the semantics of node equality).
Michael
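Until the fix noted above (SVN revision 752), a null check that bypasses the overloaded == operator sidesteps the problem. This is a generic C# workaround rather than anything from the dotNetRDF documentation, and the URI is illustrative:

//Workaround sketch: avoid the overloaded == operator when testing for null
//(requires: using System; using VDS.RDF;)
Graph graph = new Graph();

//GetUriNode() returns null here because the graph is empty
UriNode node = graph.GetUriNode(new Uri("http://example.org/missing"));

//Object.ReferenceEquals ignores operator overloads, so this is reliably true
bool isMissing = Object.ReferenceEquals(node, null);

//Casting to object has the same effect
bool isMissingToo = ((object)node) == null;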
From: Michael F. <fr...@gm...> - 2010-06-03 15:04:46
As far as I can determine from the code in e.g. UriNode.cs, dotNetRDF uses string.GetHashCode() for database identity. This is bad design because the string hash codes are not stable across .Net versions nor across architectures:
http://stackoverflow.com/questions/2099998/hash-quality-and-stability-of-string-gethashcode-in-net

Regards
Michael
From: Rob V. <rv...@do...> - 2010-05-24 09:55:26
Hi Alexander

Sorry for the slowness of reply but I have been rather ill lately.

Why exactly do you need SkipLocalParsing for the VirtuosoManager? If parsing fails then the manager still submits the query to Virtuoso anyway as it assumes that you've used Virtuoso specific syntax which the SPARQL Parser doesn't understand. Is there a particular scenario/test case you can give me that demonstrates a need for this?

Rob Vesse

From: Alexander Sidorov [mailto:ale...@gm...]
Sent: 17 May 2010 07:42
To: dotNetRDF Developer Discussion and Feature Request
Subject: [dotNetRDF-develop] VirtuosoManager SkipLocalParsing

Hi Rob,

Could you please implement SkipLocalParsing for VirtuosoManager?

Regards,
Alexander
From: Alexander S. <ale...@gm...> - 2010-05-17 06:42:11
Hi Rob,

Could you please implement SkipLocalParsing for VirtuosoManager?

Regards,
Alexander
From: Rob V. <rv...@do...> - 2010-04-28 15:57:14
Hi Tana
Thanks for the feedback
This behaviour of the Notation3Writer is by design. While quotes should be escaped in a normal literal, e.g.
"Normal literal with a \"quote\" in it"
they are not required to be escaped in a long literal (though they may be escaped). For example, the official test suite for the Turtle syntax (which is a subset of Notation 3) includes a test which uses unescaped quotes in a long literal, e.g.
"""A long literal with unescaped "quotes" is acceptable"""
If you are aware of other libraries/parsers which won't properly handle such literals then I would be happy to change this behaviour for future releases.
By the way I released version 0.2.2 just over a week ago which contains some minor bug fixes and some major SPARQL optimisations which you might want to upgrade to.
Best regards,
Rob Vesse
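For reference, the behaviour described above can be reproduced with a short sketch along these lines. The subject/predicate URIs and output file name are illustrative, and whether the writer actually picks the long literal form in a given case is its own decision:

//Sketch reproducing the behaviour described above
//(requires: using System; using VDS.RDF; using VDS.RDF.Writing;)
Graph g = new Graph();
UriNode subj = g.CreateUriNode(new Uri("http://example.org/subject"));
UriNode pred = g.CreateUriNode(new Uri("http://example.org/comment"));
LiteralNode lit = g.CreateLiteralNode("This is some text \"That contains quotes\"");
g.Assert(new Triple(subj, pred, lit));

//The quotes in the literal may come out inside a long literal ("""...""") rather than as \" escapes
Notation3Writer writer = new Notation3Writer();
writer.Save(g, "example.n3");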
----------------------------------------
From: "Tana Isaac" <Tan...@bb...>
Sent: 28 April 2010 13:34
To: dot...@li...
Subject: [dotNetRDF-bugs] Notation3Writer not escaping double quotes
Hi there,
Loving your work :-).
Got a problem though using v0.2.1.24471:
When a graph contains a literal node that contains double quotes the quotes aren't escaped by the Notation3Writer.
E.g.
...
graph.CreateLiteralNode("This is some text \"That contains quotes\"");
...
Outputs """This is some text "That contains quotes"""";
I believe it should output """This is some text \"That contains quotes\"""";
Cheers,
Tana Isaac
http://www.bbc.co.uk
From: Rob V. <rv...@do...> - 2010-04-28 15:47:02
Hi Koos
Ok I have now made further changes to the SparqlRemoteEndpoint class so
there are SetProxyCredentials() methods for setting credentials on a proxy
server provided you've already set a proxy server. If your credentials for
the request and the proxy are the same you can set the
UseCredentialsForProxy property to indicate that the same credentials are
used. If you think it'd be useful I'll add some more overloads which set
the Proxy and Credentials at the same time?
With regards to SPARQL endpoints it may depend on the query and how you've
configured your endpoint to operate plus what your backing store is.
Firstly the SPARQL engine in dotNetRDF passes all but 5 of the official
SPARQL test suite, 3 fails are regarding unicode normalization behaviour
and 2 are regarding very complex joins. So it is possible that your
queries fall into one of these categories or that SemWeb itself is less
closely aligned to the official SPARQL specification (I've never tested
this so don't know) and so it's answers include incorrect things that
dotNetRDF excludes.
The other possibility is to do with endpoint configuration, for example if
you have a SQL backed store and have set the load behaviour to be
asynchronous the endpoint will answer queries before the data has fully
loaded so you will get incomplete results until the endpoint has fully
loaded the data. Also if you haven't set the load mode it defaults to
OnDemand which means it only loads data as it thinks it needs it so your
queries may not be causing it to load all the relevant data or it may not
load any data at all.
The third possibility is that if you've populated a SQL store your data may
be incomplete depending on how you've done this. A lot of the writing to
the store is done in the background and while the code is designed to
ensure it finishes writing depending on how your program was terminated you
may not write all data to the database.
If you could send me some test cases i.e. example queries, data and
web.config file snippets (with anything sensitive removed/obfuscated)
then I could give you a better answer on this.
Regards,
Rob Vesse
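Put together, the methods mentioned above would be used along these lines. This is a sketch only: the endpoint and proxy addresses are made up, and the exact parameter types of SetCredentials()/SetProxy()/SetProxyCredentials() are assumptions since they aren't spelled out in this thread.

//Sketch of using the credential/proxy support described above
//(requires: using System; using VDS.RDF.Query;)
SparqlRemoteEndpoint endpoint = new SparqlRemoteEndpoint(new Uri("http://example.org/sparql"));

//Credentials for the endpoint itself
endpoint.SetCredentials("username", "password");

//Proxy server details, with separate proxy credentials...
endpoint.SetProxy(new Uri("http://proxy.example.com:8080"));
endpoint.SetProxyCredentials("proxyUser", "proxyPassword");

//...or reuse the endpoint credentials for the proxy instead
endpoint.UseCredentialsForProxy = true;

//Queries made through the endpoint then pick this information up automatically
SparqlResultSet results = endpoint.QueryWithResultSet("SELECT * WHERE { ?s ?p ?o } LIMIT 10");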
----------------------------------------
From: "Strydom, Koos" <jst...@ha...>
Sent: 28 April 2010 11:52
To: rv...@do...
Subject: RE: [dotNetRDF-bugs] RdfXmlParser error with Root Element not found

Hi Rob,

Thanks for that, it helped heaps. One thing you need to add to that is you'll need credentials in most cases for a proxy server.

I did come across some things where I query our old SemWeb sparql endpoint with a specific sparql query and it returned the expected results, but if I query my new dotnetRDF endpoint I do not get the same results. Both triple stores were populated with the same generated rdf and the respective parsers. Is that possible, or where should I look for the problem?

Regards,
Koos

----------------------------------------
From: Rob Vesse [mailto:rv...@do...]
Sent: Wednesday, 28 April 2010 2:23 AM
To: Strydom, Koos
Subject: RE: [dotNetRDF-bugs] RdfXmlParser error with Root Element not found

Hi Koos

I've committed a change to the repository which instead adds SetCredentials(), ClearCredentials(), SetProxy() and ClearProxy() methods which let you set/unset credentials and proxy server as required. The credentials and/or proxy are stored as member variables in the SparqlRemoteEndpoint instance. Then I add some stuff to the implementation of the ExecuteQuery() method so that it just adds this information to the request as is necessary. This avoids adding too many overloads to the API and means you don't have to specify credentials/proxy for every single request.

Hope that helps

Rob Vesse

----------------------------------------
From: "Strydom, Koos" <jst...@ha...>
Sent: 26 April 2010 23:17
To: rv...@do..., "dotNetRDF Bug Report tracking and resolution" <dot...@li...>
Subject: RE: [dotNetRDF-bugs] RdfXmlParser error with Root Element not found

Hi Rob,

Thanks for the reply. I'll put a test case together and forward it to you.

In the meantime I came across another opportunity :)

I need remote SparqlEndpoint authentication and in some cases also WebProxy support. We do access a fair number of remote Sparql endpoints, most of which require authentication, and some need to be able to query endpoints from behind firewalls.

I've started to create some overloads for the ExecuteQuery(), QueryInternal(), QueryWithResultSet() and QueryWithResultGraph() methods to include NetworkCredentials, and another set also including proxy server information and Credentials. I'm not sure if that is the best way to implement authentication but initial testing suggests that it should work just fine.

Regards,
Koos

----------------------------------------
From: Rob Vesse [mailto:rv...@do...]
Sent: Tuesday, 27 April 2010 7:58 AM
To: dotNetRDF Bug Report tracking and resolution
Subject: Re: [dotNetRDF-bugs] RdfXmlParser error with Root Element not found

Hi Koos

I will go ahead and add a similar method to SVN once I have proper internet again. Flew out to the states today for a conference and the hotel wi-fi can't cope this time of day with all the techies trying to use the wi-fi at the same time.

If you have a minimal test case example that reproduces the issue so I could look into why the bug happens and potentially fix it, that would be very helpful.

Yes, I have made a start on Sparql Update support. Not sure how quickly that'll progress as I've only really done initial backend API outlines at the moment and haven't started properly on parser and execution support.

Thanks,
Rob

----------------------------------------
From: "Strydom, Koos" <jst...@ha...>
Sent: Sunday, April 25, 2010 5:05 AM
To: dot...@li...
Subject: [dotNetRDF-bugs] RdfXmlParser error with Root Element not found

Hi Rob,

As I mentioned to you before, I'm actively in the process of implementing your library in our open source project. In doing so I came across an issue where I could not parse an rdf document that's been generated dynamically, because it returned with the error "Root element not found" although the RDF document was valid.

So to solve my problem I added a method to the RdfXmlParser class as follows:

public void Load(IGraph g, XmlDocument document)
{
    try
    {
        RdfXmlParserContext context = new RdfXmlParserContext(g, document, this._traceparsing);
        this.Parse(context);
    }
    ...

This did the trick for me.

Also I noticed that you've started on implementing Sparql Update in the library. That's very exciting.

Best Regards,
Koos Strydom
From: Alexander S. <ale...@gm...> - 2010-04-27 18:55:59
Sorry, Rob. Looks like it is a SPARQL-endpoint problem.

2010/4/27 Rob Vesse <rv...@do...>
> Hi Alexander
>
> I will look into this asap when I can as am in the states at the moment for work.
>
> Do you have an example query and data that reproduces this issue?
>
> Thanks,
> Rob
>
> ------------------------------
> From: "Alexander Sidorov" <ale...@gm...>
> Sent: Saturday, April 24, 2010 8:15 PM
> To: dot...@li...
> Subject: [dotNetRDF-develop] SparqlConnector ORDER BY
>
> Hi Rob,
>
> Looks like SparqlConnector doesn't follow order of ORDER BY clause.
>
> I use SparqlConnector this way:
>
> 1. Create SparqlConnector instance
> 2. Call sparqlConnector.Query and cast result to SparqlResultSet
> 3. Enumerate SparqlResultSet
>
> Regards,
> Alexander
From: Rob V. <rv...@do...> - 2010-04-26 22:00:36
Hi Alexander

I will look into this asap when I can as am in the states at the moment for work.

Do you have an example query and data that reproduces this issue?

Thanks,
Rob

----------------------------------------
From: "Alexander Sidorov" <ale...@gm...>
Sent: Saturday, April 24, 2010 8:15 PM
To: dot...@li...
Subject: [dotNetRDF-develop] SparqlConnector ORDER BY

Hi Rob,

Looks like SparqlConnector doesn't follow order of ORDER BY clause.

I use SparqlConnector this way:

1. Create SparqlConnector instance
2. Call sparqlConnector.Query and cast result to SparqlResultSet
3. Enumerate SparqlResultSet

Regards,
Alexander
From: Alexander S. <ale...@gm...> - 2010-04-24 19:14:51
Hi Rob,

Looks like SparqlConnector doesn't follow order of ORDER BY clause.

I use SparqlConnector this way:

1. Create SparqlConnector instance
2. Call sparqlConnector.Query and cast result to SparqlResultSet
3. Enumerate SparqlResultSet

Regards,
Alexander
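For reference, the usage pattern described in this message looks roughly like the following. This is a hedged sketch: the endpoint URI and query are made up, and the exact SparqlConnector constructor overload and namespaces are assumptions rather than something confirmed in this thread.

//Sketch of the three steps above
//(requires, roughly: using System; using VDS.RDF.Query; using VDS.RDF.Storage;)
SparqlRemoteEndpoint endpoint = new SparqlRemoteEndpoint(new Uri("http://example.org/sparql"));
SparqlConnector connector = new SparqlConnector(endpoint);

//1 & 2: run the query and cast the result
object results = connector.Query("SELECT ?s WHERE { ?s ?p ?o } ORDER BY ?s");
SparqlResultSet rs = results as SparqlResultSet;

//3: enumerate the result set - the ordering issue reported here would show up in this loop
if (rs != null)
{
    foreach (SparqlResult r in rs)
    {
        Console.WriteLine(r["s"].ToString());
    }
}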
From: Rob V. <rv...@do...> - 2010-04-16 14:19:42
Hi Alexander

Here's a couple of examples based on code taken from one of my unit tests:

//Create the Nodes
Graph g = new Graph();
UriNode u = g.CreateUriNode(new Uri("http://www.google.com"));
LiteralNode l = g.CreateLiteralNode("http://www.google.com/");

Console.WriteLine("Created a URI and Literal Node both referring to 'http://www.google.com'");
Console.WriteLine("String form of URI Node is:");
Console.WriteLine(u.ToString());
Console.WriteLine("String form of Literal Node is:");
Console.WriteLine(l.ToString());
Console.WriteLine("Hash Code of URI Node is " + u.GetHashCode());
Console.WriteLine("Hash Code of Literal Node is " + l.GetHashCode());
Console.WriteLine("Hash Codes are Equal? " + u.GetHashCode().Equals(l.GetHashCode()));
Console.WriteLine("Nodes are equal? " + u.Equals(l));

//Create some plain and typed literals which may have colliding Hash Codes
LiteralNode plain = g.CreateLiteralNode("test^^http://example.org/type");
LiteralNode typed = g.CreateLiteralNode("test", new Uri("http://example.org/type"));

Console.WriteLine();
Console.WriteLine("Created a Plain and Typed Literal where the String representations are identical");
Console.WriteLine("Plain Literal String form is:");
Console.WriteLine(plain.ToString());
Console.WriteLine("Typed Literal String form is:");
Console.WriteLine(typed.ToString());
Console.WriteLine("Hash Code of Plain Literal is " + plain.GetHashCode());
Console.WriteLine("Hash Code of Typed Literal is " + typed.GetHashCode());
Console.WriteLine("Hash Codes are Equal? " + plain.GetHashCode().Equals(typed.GetHashCode()));
Console.WriteLine("Nodes are equal? " + plain.Equals(typed));

Note that not only can two Literal Nodes have equivalent string representations while having different values, but as you can see a UriNode can have an equivalent representation to a LiteralNode. When you look at their hash codes and equality they will be non-equal, since the actual values are different.

Hope that clarifies things a bit more for you

Rob

From: Alexander Sidorov [mailto:ale...@gm...]
Sent: 15 April 2010 19:47
To: dot...@li...
Subject: [dotNetRDF-develop] Typed literal

Hi Rob,

Thank you for clarifications.

Could you provide an example of different values that can generate equivalent strings? I'm just curious :)

Regards,
Alexander