From: <tr...@do...> - 2011-11-25 21:54:40

A new comment has been added to the following issue.

Title: Add MySql support for the new ADO Store backend
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
Created By: Rob Vesse
Date: 2011-11-25 09:53 PM

Comment: Bumping this to the 0.6.0 release as no one is asking for this urgently and most .NET devs will tend to prefer SQL Server anyway.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=79

If you no longer wish to receive notifications, please visit http://www.dotnetrdf.org/tracker/UserProfile.aspx and change your notification options.
From: <tr...@do...> - 2011-11-25 21:53:34

The following issue has been updated by Rob Vesse:

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
- Status changed from "Confirmed" to "Completed"
- Resolution changed from "none" to "Fixed"
- Percent complete changed from "30%" to "100%"

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
From: <tr...@do...> - 2011-11-25 21:53:19

A new comment has been added to the following issue.

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
Created By: Rob Vesse
Date: 2011-11-25 09:52 PM

Comment: Fixed in SVN revision 1965. It turns out the problem is not due to streaming/batching of the data, but rather to a function call within triple enumeration (which is the bulk of SPARQL processing) that required a database query each time. By caching the results of this function internally, the queries become highly responsive, finishing in around 0.5 seconds (compared to 0.1 seconds for a database server on localhost). Note that this fix also vastly improves local performance (5x) compared to my initial testing to investigate this issue. Thanks for bringing this to my attention; this fix will be in the 0.5.1 release that will be out in the next week or so.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
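The fix described above, caching the results of a per-item database lookup so that triple enumeration does not issue one query per call, is essentially memoization. The following is an illustrative sketch only; the class and member names are hypothetical and are not dotNetRDF's actual internals.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: wraps an expensive per-key database lookup in a
// dictionary cache, so repeated enumeration only queries once per key.
class CachingLookup
{
    private readonly Dictionary<int, string> _cache = new Dictionary<int, string>();
    private readonly Func<int, string> _databaseLookup;

    public CachingLookup(Func<int, string> databaseLookup)
    {
        _databaseLookup = databaseLookup;
    }

    public string GetValue(int id)
    {
        string value;
        if (!_cache.TryGetValue(id, out value))
        {
            // Only distinct ids ever reach the database
            value = _databaseLookup(id);
            _cache[id] = value;
        }
        return value;
    }
}
```

Over a slow network link this turns N round trips into at most one per distinct key, which matches the reported speedup from over a minute down to about half a second.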
From: <tr...@do...> - 2011-11-23 23:09:02

The following issue has been added to a project that you are monitoring.

Title: Backend should store namespace prefixes
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
Created By: Rob Vesse
Milestone: 0.6.0 Beta
Category: Core Library Integration
Priority: Normal
Type: New Feature

Description: The new backend does not store namespace prefixes and several users have complained about this missing feature. Revise the schema appropriately to allow for storing prefixes on a per graph basis.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=148
From: <tr...@do...> - 2011-11-23 07:42:40

The following issue has been updated by Rob Vesse:

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
- Percent complete changed from "10%" to "30%"

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
From: <tr...@do...> - 2011-11-23 07:42:03

A new comment has been added to the following issue.

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
Created By: Rob Vesse
Date: 2011-11-23 07:41 AM

Comment: Started work on the code changes required for this; initial changes committed as SVN revision 1964. This is not yet finished, nor is there yet a code path to enable this alternate behaviour.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
From: <tr...@do...> - 2011-11-22 17:04:05

The following issue has been added to a project that you are monitoring.

Title: In a VB.Net WinForms environment multi-threaded Triple Store Writers fail
Project: Core Library (dotNetRDF.dll)
Created By: Rob Vesse
Milestone: 0.5.1 Beta
Category: Writing
Priority: High
Type: Improvement

Description: In a VB.Net WinForms environment WaitHandle.WaitAll() cannot be called unless a user explicitly creates a new thread in MTA mode in which they then invoke an ITripleStoreWriter, which is less than ideal. The TriGWriter already supports the IMultiThreadedWriter interface which allows MTA behaviour to be disabled; other writers should also support this. The default threading mode for these writers should also be controllable via the Options static class (just like the default compression level is currently) so that users can simply set single threaded writing mode globally and then invoke extension methods like SaveToFile() without having to instantiate a writer directly.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=147
From: <tr...@do...> - 2011-11-22 03:36:15

The following issue has been updated by Rob Vesse:

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
- Status changed from "Planned" to "Confirmed"
- Percent complete changed from "0%" to "10%"

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
From: <tr...@do...> - 2011-11-22 03:35:53

A new comment has been added to the following issue.

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
Created By: Rob Vesse
Date: 2011-11-22 03:34 AM

Comment: So this definitely appears to be a network related issue. On a local instance with 418 triples the query executes in 0.5s. With the same data on a Windows Server 2008 box located back in the UK, accessed from my apartment in California, the same query takes 1m 13s. I will try to debug further to see how many queries have to go over the network to serve this query. Also I wonder whether this may be an issue related to streaming the data with a DbDataReader versus just dumping it into memory with a DbDataAdapter; depending on the outcome of this I may add an operation mode whereby the provider will request query results all at once. If this is the case, it will not be the default mode of operation because it reverses any attempts to reduce memory usage, which was one of the key aims of the new backend.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
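The trade-off mentioned above, streaming rows with a DbDataReader versus buffering the whole result set with a DbDataAdapter, can be sketched in plain ADO.NET as follows. The table name, column layout and connection string here are placeholders, not the actual ADO Store schema.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// Hypothetical sketch of the two fetch strategies; "Triples" and the
// s/p/o columns are illustrative, not the real dotNetRDF schema.
static class TripleFetch
{
    // Streaming: rows are pulled from the server as you iterate, so memory
    // usage stays low, but a slow network link is paid for during iteration.
    public static void StreamTriples(string connectionString, Action<int, int, int> process)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT s, p, o FROM Triples", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    process(reader.GetInt32(0), reader.GetInt32(1), reader.GetInt32(2));
            }
        }
    }

    // Buffered: the adapter dumps the entire result set into memory up
    // front, trading memory for fewer network round trips.
    public static DataTable LoadTriples(string connectionString)
    {
        var table = new DataTable();
        using (var adapter = new SqlDataAdapter("SELECT s, p, o FROM Triples", connectionString))
        {
            adapter.Fill(table);
        }
        return table;
    }
}
```

This is why buffering was considered only as an opt-in mode: it helps on high-latency links but works against the low-memory goal of the new backend.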
From: <tr...@do...> - 2011-11-22 01:21:27

A new comment has been added to the following issue.

Title: 0.5.0 MS SQL backend poor performance
Project: Data.Sql (dotNetRDF.Data.Sql.dll)
Created By: Koos
Date: 2011-11-22 12:50 AM

Comment: The SQL server is somewhere else on our network and it's a 2008 R2 server. We subsequently created another db on the same server for the 0.4.0 db. As it is, at this stage our 0.5.0 deployment is not usable at all. Thanks for the quick response.

More information on this issue can be found at http://www.dotnetrdf.org/tracker/Issues/IssueDetail.aspx?id=146
From: Rob V. <rv...@do...> - 2011-10-06 09:43:42

Hi Gianluca

Namespaces are no longer persisted in the new SQL Store; they were only ever persisted as a convenience in the old store and the decision was made not to do this in the new store. If this is an important feature to you it could potentially be added back into future versions of the store, but we would only do this as a vanity feature, i.e. it would not change how we persist URIs into the store.

To handle namespaces you should use the APIs: when you create a Graph into which you are going to load data, use the NamespaceMap property to register the relevant namespaces for your data, to be used when outputting it, e.g.

Graph g = new Graph();
g.NamespaceMap.Add("ex", new Uri("http://example.org"));
g.NamespaceMap.Add("foaf", new Uri("http://xmlns.com/foaf/0.1/"));

If you use a whole bunch of namespaces regularly it might be easiest to create an extension method that you can simply call to automatically register all your common namespaces.

SqlGraph has been deprecated for a while and was superseded by the StoreGraphPersistenceWrapper, which is a wrapper graph that can be placed around any graph backed by an IGenericIOManager instance and ensures any changes are persisted. This has some advantages over the old SqlGraph in terms of how it is implemented, the number of supported stores and its ability to provide API level transactionality, i.e. changes are made in-memory first and then either persisted (using the Flush() method) or discarded (using the Discard() method).

Usage is roughly as follows, see [2] for another example:

//Assume you have a MicrosoftAdoManager open already in a variable called manager
//Create a wrapper - this constructor loads the graph from the store if the third parameter is false
//Please check [1] for exact behaviour as some constructors require a preloaded base graph while others do not
StoreGraphPersistenceWrapper wrapper = new StoreGraphPersistenceWrapper(manager, new Uri("http://example.org/graph"), false);

//Now make some changes to the Graph...

//Finally persist them
wrapper.Flush();

Note that if you don't explicitly flush the changes, the wrapper will automatically persist outstanding changes when you Dispose() of it, or when it gets finalized if you didn't call Dispose() explicitly.

Hope this helps; let me know if you have more questions on this.

Best Regards,

Rob Vesse

[1]: http://dotnetrdf.org/api/index.asp?Topic=VDS.RDF.StoreGraphPersistenceWrapper
[2]: http://www.dotnetrdf.org/content.asp?pageID=Working%20with%20Graphs

----------------------------------------
From: "gianluca" <gia...@in...>
Sent: 05 October 2011 14:09
To: rv...@do...
Subject: Some problems with MicrosoftAdoManager

Rob, I'm trying to use the new MicrosoftAdoManager class with SQL Server. However, there seem to be some problems with namespaces, which are not persisted when saving the graph. It seems they are completely ignored, even in the database. How are they supposed to be handled? Are you planning to add a new SqlGraph class backed by MicrosoftAdoManager?

TIA,
gianluca
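The extension-method suggestion in the reply above can be sketched like this; the method name and the set of prefixes beyond the two shown in the original email are illustrative choices, not part of the dotNetRDF API.

```csharp
using System;
using VDS.RDF;

// Hypothetical helper: registers a set of commonly used prefixes on any
// graph in one call, as suggested in the email above. The method name
// and the "dc" prefix are illustrative additions.
static class NamespaceExtensions
{
    public static void RegisterCommonNamespaces(this IGraph g)
    {
        g.NamespaceMap.Add("ex", new Uri("http://example.org"));
        g.NamespaceMap.Add("foaf", new Uri("http://xmlns.com/foaf/0.1/"));
        g.NamespaceMap.Add("dc", new Uri("http://purl.org/dc/elements/1.1/"));
    }
}

// Usage:
// Graph g = new Graph();
// g.RegisterCommonNamespaces();
```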
From: Rob V. <rv...@do...> - 2011-09-28 08:15:15

Hi Dan

Thanks for pointing this out; the suggested change has been committed as of revision 1900. The 0.5.0 release went out just over two weeks ago so unfortunately it has missed that, but it will be in the next release (0.5.1), which is scheduled for the end of November.

Cheers,

Rob Vesse

----------------------------------------
From: "Dan Smith" <da...@al...>
Sent: 28 September 2011 07:27
To: dot...@li...
Subject: [dotNetRDF-Develop] Previous uses of new async keyword

Hi Rob, I am trying the new beta release from svn for 0.5. In C# 5 the new "async" keyword will be present, and it is currently available with the use of the async CTP. This prevents me from building the current version directly from svn. There are four places where this arises with variable names:

SqlTripleStore.cs Lines: 268, 273, 289
DatasetFileManager.cs Lines: 68, 73
DataFactories.cs Lines: 338, 380, 381
StorageFactories.cs Lines: 169, 208, 209

Changing these variable names from async to isAsync allows the projects to be built, and the changes are non-breaking. I hope this can make it into the 0.5 release.

Cheers, Dan

--
Dan Smith
+1 608-213-2867
Algenta Technologies, LLC
http://www.algenta.com
From: Dan S. <da...@al...> - 2011-09-28 06:27:18

Hi Rob,

I am trying the new beta release from svn for 0.5. In C# 5 the new "async" keyword will be present, and it is currently available with the use of the async CTP. This prevents me from building the current version directly from svn. There are four places where this arises with variable names:

SqlTripleStore.cs Lines: 268, 273, 289
DatasetFileManager.cs Lines: 68, 73
DataFactories.cs Lines: 338, 380, 381
StorageFactories.cs Lines: 169, 208, 209

Changing these variable names from async to isAsync allows the projects to be built, and the changes are non-breaking. I hope this can make it into the 0.5 release.

Cheers, Dan

--
Dan Smith
+1 608-213-2867
Algenta Technologies, LLC
http://www.algenta.com
From: Rob V. <rv...@do...> - 2011-09-13 13:53:03

Hi All

Those of you who have been using our legacy SQL store will be aware that we have been working to deprecate and replace it for a while now, and our 0.5.0 release marks the introduction of our new ADO Store. This new store provides many advantages including better abstraction between code and database schema, improved security settings, SQL Azure support and much more memory efficient SPARQL Query over SQL databases. The new store is provided in a separate library, dotNetRDF.Data.Sql.dll, which you will find under the Data/Sql/ directory in the latest download packages.

We've now added some basic documentation on using the store and how to migrate existing stores; please see [1] and [2] for details. We hope the migration process should be relatively simple and easy; if you encounter any problems please drop us an email and we'll work to address them ASAP. If you have any questions, requests for additional documentation etc. please let us know.

Best Regards,

Rob Vesse

[1]: http://www.dotnetrdf.org/blogitem.asp?blogID=52
[2]: http://www.dotnetrdf.org/content.asp?pageID=rdfSqlStorage
From: Rob V. <rv...@do...> - 2011-09-12 14:41:22

Hi All

We are pleased to announce that dotNetRDF 0.5.0 Beta has been released, as well as the 0.3.0 Alpha release of the associated dotNetRDF Toolkit. You can get the latest library from [1] and the latest toolkit from [2].

The latest release of the library brings the following new features and improvements:

- Bug fixes for various issues including major crash fixes for Silverlight/Windows Phone 7
- Basic .Net serialization support for some core classes e.g. INode, IGraph, SparqlResultSet
- New asynchronous callback based APIs for various features to provide better Silverlight/Windows Phone 7 support and to make it easier to write async RDF/SPARQL based applications
- New features in the Configuration API and our ASP.Net integration to provide detailed control of writer behaviour with regards to things like compression, pretty printing, DTD usage etc - see [3] for details
- Added a new ExplainQueryProcessor that can be used to explain the evaluation of queries using our Leviathan engine
- Refactored the LeviathanQueryProcessor to make it much easier to extend
- Support for some experimental advanced algebra optimisations such as parallelised union evaluation (disabled by default)
- Expanded the Handlers API so that you can use it to stream process RDF and SPARQL Results from a wider variety of data sources including SPARQL endpoints, any IGenericIOManager instance and SPARQL Queries in general

This release also marks the introduction of a new packaging model where some features are incorporated into separate DLLs from the core library. As of this release, as well as the Core library (dotNetRDF.dll) - which now comes in 4 different versions: .Net 3.5, .Net 3.5 Client Profile, Silverlight 4 and Silverlight 4 for Windows Phone 7 - you will also find two data providers: Data.Virtuoso and Data.Sql.

Data.Virtuoso (dotNetRDF.Data.Virtuoso.dll) contains the OpenLink Virtuoso functionality previously found in the core library, thus allowing us to remove OpenLink.Data.Virtuoso.dll as a dependency from the core library so users who don't need this feature have a smaller dependency footprint.

Data.Sql (dotNetRDF.Data.Sql.dll) contains our new SQL backend called the ADO store, which currently supports Microsoft SQL Server and SQL Azure only. This backend replaces the legacy SQL store from the core library, and we'll be providing documentation and tooling for migrating from the old format to the new format in the next few days, so please keep an eye on the mailing list.

Note - As another dependency change, dotNetRDF now uses Newtonsoft.Json.Net35.dll since the main Json.Net build is now targeted at .Net 4.0. We plan to move to .Net 4.0 as the primary build next year, though we will continue to provide a .Net 3.5 build.

All of these libraries are now also available via NuGet for those using VS2010 and the NuGet plugin.

The latest release of the Toolkit incorporates the following changes:

- Support for our new SQL backend in Store Manager
- Namespace declarations from queries are now used to format SELECT results in Store Manager and SparqlGUI
- Query Explanation features in SparqlGUI
- Support for configuring advanced options when using the Save With functionality in rdfEditor
- Faster and more memory efficient syntax validation in rdfEditor
- New rdfSqlStorage tool for managing our new SQL backend

As always we aim to ensure that the library and toolkit continue to stabilise and improve over time; if you have any suggestions, feedback, bug reports, patches etc. please let us know via the mailing lists.

Finally we'd like to thank the following people who've contributed to this release:

- Graham Moore and Khalil Ahmed for input on the query engine refactoring
- Daniel Bittencourt, Rodrigo de Castro Reis, Rafael Dias Araujo and Guillherme Alcantra Dias for input on the .Net serialization support
- Koos Strydom and Robert P DeCarlo for input on the new SQL backend
- Koos Strydom and Rahul Patil for input on the new advanced writer configuration for ASP.Net features
- Jim Rhyne for input on Store Manager and SparqlGUI improvements regarding formatting of SELECT results
- Sid John and Steve S for feedback and debugging help with Silverlight/Windows Phone 7 issues
- Paul Hermans for input on the improved Save With functionality in rdfEditor
- Csaba Gonczi
- Mike Grove

Apologies to anyone I've unintentionally missed off this list; thanks again for all your contributions.

Best Regards,

Rob Vesse

[1]: http://www.dotnetrdf.org/content.asp?pageID=Download%20dotNetRDF
[2]: http://www.dotnetrdf.org/content.asp?pageID=Download%20dotNetRDF%20Toolkit%20for%20Windows
[3]: http://www.dotnetrdf.org/content.asp?pageID=Configuration%20API%20-%20HTTP%20Handlers#OutputSettings
From: Rob V. <rv...@do...> - 2011-09-06 10:03:24

Hi Steve

I am not a Windows Phone expert so it is hard to tell what might be happening. Firstly, how are you measuring the memory used? If you are using GC.GetTotalMemory() then the GC has the option of invoking a garbage collection when you call that method; if you've set the boolean parameter to true it will wait for that collection before reporting memory used, which will eliminate any temporary objects that have yet to be collected. In general I would expect the Windows Phone GC to be fairly proactive at freeing up memory, as obviously a phone has limited RAM. If you aren't already, you should try using the Windows Phone 7 memory usage metrics; see http://blogs.msdn.com/b/mikeormond/archive/2010/12/16/monitoring-memory-usage-on-windows-phone-7.aspx for a nice blog post on this.

In terms of memory usage for the result set, this may be very small depending on the size of your results and the values involved. Typically a triple takes around 1 KB (assuming you don't have really long literals), so an individual node takes about 300 bytes or so. So say you had 10 results with 3 values for each result, you'd only use about 10 KB of memory (possibly less depending on the exact values). If your result set has fewer columns or all the values are very short then this memory usage may be far less. A SparqlResultSet has minimal memory overhead since internally it is a couple of lists, and similarly an individual SparqlResult is internally a small dictionary. In contrast, IGraph implementations typically exhibit much higher memory overhead because they incorporate indexes on the triples. For example the default Graph implementation, which is fully indexed, takes ~1.7 KB per triple on average, as opposed to a non-indexed implementation like NonIndexedGraph which requires only ~1 KB per triple (i.e. there is an extra 0.7 KB per triple for indexing).

Also your memory usage may be affected by temporary objects: if you created a SparqlRemoteEndpoint instance local to a method in order to make the query and checked the initial memory usage in that method, then that object would be out of scope and potentially GCd by the time you check the final memory usage.

Hope that helps,

Rob

----------------------------------------
From: "Steve S" <s....@li...>
Sent: 05 September 2011 18:56
To: "Rob Vesse" <rv...@do...>
Subject: RE: [dotNetRDF-Develop] WP7 Memory Usage

Hi Rob, I am trying to get the memory usage of Windows Phone 7 when getting a response from a SPARQL endpoint via HTTP using dotNetRDF for Windows Phone 7. I am calculating the memory used before the request is sent and then just after receiving the reply (in the ResultsCallBack method). I was expecting the second memory calculation to be higher than the first, but instead it is either the same or lower than the first reading. Is that normal for WP7, or is this because of the asynchronous callback, or am I doing something wrong?

Regards, Steve
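A minimal sketch of the measurement approach discussed above, using GC.GetTotalMemory with forceFullCollection set to true so that uncollected temporaries do not distort either reading. The helper class and method names are illustrative, not part of dotNetRDF.

```csharp
using System;

// Hypothetical helper: reports the managed-heap delta across an action.
// Passing true to GC.GetTotalMemory forces a collection first, so dead
// temporary objects are excluded from both readings.
class MemoryProbe
{
    public static long MeasureDelta(Action action)
    {
        long before = GC.GetTotalMemory(true);
        action();
        long after = GC.GetTotalMemory(true);
        // May be negative: a collection during the action can free more
        // memory than the action allocated, which is exactly the effect
        // Steve observed.
        return after - before;
    }
}
```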
From: evan chua-y. <sem...@gm...> - 2011-08-18 23:24:49

subscribe
From: Rob V. <rv...@do...> - 2011-08-16 09:07:45

Hi Steve

The APIs you are using are very new and haven't been in a public release yet, hence the lack of mention in the documentation. There is no built-in way to get the query execution time for these APIs, and particularly for remote queries it would be a somewhat inaccurate measure since the time taken to execute includes a number of overheads - HTTP and network communication, actual execution, parsing of results etc.

Assuming you are not using the third state parameter for anything else, you could use it to pass in a DateTime, e.g.

Endpoint.QueryWithResultSet(query, this.ResultsCallback, DateTime.Now);

Then in your callback you can do:

DateTime? start = state as DateTime?;
if (start != null)
{
    TimeSpan elapsed = DateTime.Now - start.Value;
    //Do what you want with the execution time...
}

Hope that helps; if you have further questions please let me know.

Regards,

Rob

----------------------------------------
From: "Steve S" <s....@li...>
Sent: 15 August 2011 15:30
To: dot...@li...
Subject: [dotNetRDF-Develop] SPARQL query execution time

Hello, I am trying to develop a client application for Windows Phone 7 using the dotNetRDF Windows Phone build which sends a SPARQL query to a remote endpoint via HTTP and retrieves the result.

Endpoint.QueryWithResultSet(query, this.ResultsCallback, null);

I wanted to know if there is a method to get the time taken to execute the query, like the "QueryTime" function shown in the tutorial "Querying with SPARQL": http://www.dotnetrdf.org/content.asp?pageID=Querying%20with%20SPARQL

Regards, Steve
From: Steve S <s....@li...> - 2011-08-15 14:29:34

Hello, I am trying to develop a client application for Windows Phone 7 using the dotNetRDF Windows Phone build which sends a SPARQL query to a remote endpoint via HTTP and retrieves the result.

Endpoint.QueryWithResultSet(query, this.ResultsCallback, null);

I wanted to know if there is a method to get the time taken to execute the query, like the "QueryTime" function shown in the tutorial "Querying with SPARQL": http://www.dotnetrdf.org/content.asp?pageID=Querying%20with%20SPARQL

Regards, Steve
From: Sid J. <si...@ya...> - 2011-08-01 13:37:29

Hello,

I am trying to create an application in Windows Phone 7 that will query a remote SPARQL endpoint (such as DBpedia). However, when trying to run the following code, the emulator freezes without giving any error message.

private void button1_Click(object sender, RoutedEventArgs e)
{
    SparqlRemoteEndpoint endpoint = new SparqlRemoteEndpoint(new Uri("http://dbpedia.org/sparql"), "http://dbpedia.org");

    //Make a SELECT query against the Endpoint
    SparqlResultSet results = endpoint.QueryWithResultSet("SELECT DISTINCT ?Concept WHERE {[] a ?

    foreach (SparqlResult result in results)
    {
        textBox1.Text = result.ToString();
    }
}

It seems that the following line is causing the problem:

SparqlResultSet results = endpoint.QueryWithResultSet("SELECT DISTINCT ?Concept WHERE {[] a ?

Any reason why it is doing this? I currently have the following references in my project:

- HtmlAgilityPack.WindowsPhone.dll
- dotNetRDF.WindowsPhone.dll
- Newtonsoft.Json.Silverlight.dll

Thanks, Sid
From: Rob V. <rv...@do...> - 2011-07-26 12:49:26

Hi all

In addition to our email support, we will now formally track issues through an online tracker at http://www.dotnetrdf.org/tracker/. You can report bugs and/or request features there directly if you so wish, and we will be using it to track the progress of our work on all future releases.

Regards,

Rob Vesse
dotNetRDF Lead Developer

================================================================
Developer Discussion & Feature Request - dot...@li...
Bug Reports - dot...@li...
User Help & Support - dot...@li...
Website: http://www.dotnetrdf.org
User Guide: http://www.dotnetrdf.org/content.asp?pageID=User%20Guide
API: http://www.dotnetrdf.org/api/
================================================================
From: Strydom, K. <jst...@ha...> - 2011-07-23 23:51:16

Hi Rob,

I think option one would be the preferred upgrade path.

Regards,

Koos Strydom

----------------------------------------
From: Rob Vesse [mailto:rv...@do...]
Sent: Friday, 22 July 2011 7:28 PM
To: Strydom, Koos; rpd...@be...
Cc: dot...@li...
Subject: Update on progress with new SQL backend

Hi Koos and Rob

So I wanted to give you an update on the upcoming new SQL backend and get your input on a few points.

Design and Performance

The new SQL backend has a schema fairly similar to the existing backend (so read/write performance is pretty similar), but it has been designed to do all the IO using stored procedures, since this allows the schema to change in future (or alternative schemas to be used) provided that they support the necessary stored procedures. The code has also been designed to take advantage of some new API features to provide reasonable performance for out of memory SPARQL, i.e. you can make queries and updates against the database without having to load all the data into memory. This makes the system much more scalable in the long term since you don't need so much memory to make requests against the store, though this results in a speed trade-off: in-memory query is typically much faster but requires significantly more memory. One of the main advantages of this new out of memory query mechanism is that the memory usage is much lower and there is no time overhead of waiting to load data into memory before answering queries. So particularly for people deploying the backend in environments with limited resources, e.g. shared hosting, it will be much more usable.

Security

The new backend also improves on security, providing a set of roles for users which determine what actions they can take on the store, the point being that this makes it much easier to secure user interaction with the store. Currently the roles I'm incorporating are as follows:

* Admin - Full read/write privileges plus the ability to completely empty the store (though not destroy the database schema)
* Read/Write - Full read/write privileges
* Read/Insert - Full read privileges and can insert new data but not overwrite/delete existing data
* Read Only - Full read privileges but no write privileges

If you guys can think of any other useful roles for the backend, please let me know. Note that someone with read/write privileges can effectively empty the store completely by deleting one graph at a time, but only the admin can do it in a single command. Additionally only the admin can actually empty the nodes table, which has the effect of resetting the store.

Code Changes

In order to use the new backend you will have to make some minor code/configuration changes, which I will provide documentation for nearer the time; hopefully this will not be too disruptive. One thing to note is that as part of a wider strategy regarding reduction of dependencies in the core library, the new SQL backend will reside in a separate library, dotNetRDF.Data.Sql.dll, which allows us to maintain the code separately and add support for additional database back ends (e.g. MySql, Oracle etc.) in future without increasing dependencies in the core library. In the initial release only Microsoft SQL Server will be supported; hopefully MySql support will follow in a subsequent release, and Oracle eventually, since there are issues with getting Oracle licenses for the products required to actually develop and test against.

Upgrade Paths

The main thing I wanted to get your opinion on was how you would prefer to upgrade your existing databases. The options I am considering are as follows:

1. Provide a standalone tool which can be run at the command line and will migrate from the old format to the new one. Migration could be either in-place or to a new empty database. For in-place migration the old data could be maintained and made restorable for rollbacks if desired.

2. Make the upgrade silent and automatic within the code. This has issues in that it puts the upgrade mechanism into the library and creates a potentially very long delay when you first access your store with the new code, as the upgrade has to take place.

3. Do a partial silent upgrade. As the database schemas are not too dissimilar, it may be possible to do a partial upgrade by creating a set of stored procedures within the existing database. This would make it act like a new backend, but the performance would likely be worse than the new backend. The advantage of this option is that unlike 2 the upgrade would be very quick and barely noticeable, and unlike either of the other options it would not modify your existing data in any way, so you could continue using old code to talk to it as well as the new code.

My personal preference is Option 1, but as users of the existing SQL backend I'd appreciate your feedback on this. Whatever is decided, Option 1 will probably be provided in addition to either of the other options - ideally I'd prefer to just have Option 1 and not bother with Options 2 and 3, but I'm happy to work with whatever you'd prefer.

Best Regards,

Rob Vesse
From: Rob V. <rv...@do...> - 2011-07-22 09:29:50
|
Hi Koos and Rob

I wanted to give you an update on the upcoming new SQL backend and get your input on a few points.

Design and Performance

The new SQL backend has a schema fairly similar to the existing backend (so read/write performance is pretty similar), but it has been designed to do all its IO through stored procedures, since this allows the schema to change in future (or alternative schemas to be used) provided they support the necessary stored procedures. The code has also been designed to take advantage of some new API features to provide reasonable performance for out-of-memory SPARQL, i.e. you can run queries and updates against the database without having to load all the data into memory. This makes the system much more scalable in the long term since you don't need as much memory to make requests against the store, though there is a speed trade-off: in-memory query is typically much faster but requires significantly more memory. The main advantages of this new out-of-memory query mechanism are that memory usage is much lower and there is no overhead of waiting for data to load into memory before answering queries, so it will be much more usable for people deploying the backend in environments with limited resources, e.g. shared hosting.

Security

The new backend also improves security by providing a set of roles which determine what actions users may take on the store, the point being that this makes it much easier to secure user interaction with the store. Currently the roles I'm incorporating are as follows:

* Admin - Full read/write privileges plus the ability to completely empty the store (though not destroy the database schema)
* Read/Write - Full read/write privileges
* Read/Insert - Full read privileges; can insert new data but not overwrite/delete existing data
* Read Only - Full read privileges but no write privileges

If you can think of any other useful roles for the backend please let me know. Note that someone with read/write privileges can effectively empty the store by deleting one graph at a time, but only the admin can do it in a single command. Additionally, only the admin can empty the nodes table, which has the effect of resetting the store.

Code Changes

To use the new backend you will have to make some minor code/configuration changes, which I will document nearer the time; hopefully this will not be too disruptive. One thing to note is that, as part of a wider strategy of reducing dependencies in the core library, the new SQL backend will reside in a separate library, dotNetRDF.Data.Sql.dll, which allows us to maintain the code separately and add support for additional database back ends (e.g. MySQL, Oracle) in future without increasing dependencies in the core library. The initial release will support only Microsoft SQL Server; MySQL support will hopefully follow in a subsequent release, and Oracle eventually, since there are issues with obtaining Oracle licenses for the products required to actually develop and test against.

Upgrade Paths

The main thing I wanted your opinion on is how you would prefer to upgrade your existing databases. The options I am considering are as follows:

1. Provide a standalone command-line tool which migrates from the old format to the new one. Migration could be either in-place or to a new empty database. For in-place migration the old data could be retained and made restorable for rollbacks if desired.
2. Make the upgrade silent and automatic within the code. The problem with this is that it puts the upgrade mechanism into the library and creates a potentially very long delay the first time you access your store with the new code, as the upgrade has to take place.
3. Do a partial silent upgrade. As the database schemas are not too dissimilar, it may be possible to do a partial upgrade by creating a set of stored procedures within the existing database. This would make it act like the new backend, though performance would likely be worse. The advantage of this option is that, unlike Option 2, the upgrade would be very quick and barely noticeable, and unlike either of the other options it would not modify your existing data in any way, so you could continue using old code to talk to it as well as the new code.

My personal preference is Option 1, but as users of the existing SQL backend I'd appreciate your feedback on this. Whatever is decided, Option 1 will probably be provided in addition to either of the other options. Ideally I'd prefer to have just Option 1 and not bother with Options 2 and 3, but I'm happy to work with whatever you'd prefer.

Best Regards,

Rob Vesse |
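The stored-procedure-only IO design described in this email can be sketched with plain ADO.NET. This is an illustrative sketch only: the connection string, procedure name, parameter and column names below are invented for the example and are not the actual contract of dotNetRDF's ADO store.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcSketch
{
    static void Main()
    {
        // Hypothetical connection string for the example.
        using (var connection = new SqlConnection(
            "Server=localhost;Database=RdfStore;Integrated Security=true"))
        {
            connection.Open();

            // All IO goes through stored procedures rather than inline SQL,
            // so the underlying schema can change as long as the procedure
            // contract stays the same. "GetGraphTriples" is a made-up name.
            using (var cmd = new SqlCommand("GetGraphTriples", connection))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@graphUri", "http://example.org/graph");

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Hypothetical result column.
                        Console.WriteLine(reader["Subject"]);
                    }
                }
            }
        }
    }
}
```

This shape also explains the role model above: database roles can be granted EXECUTE on only the procedures appropriate to them (e.g. a read-only role gets the read procedures but not the insert/delete ones), so the procedure layer doubles as the security boundary.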
From: Rob V. <rv...@do...> - 2011-07-20 12:45:43
|
Hi Rodrigo

I've started adding some basic support to the core library for .NET serialization (ISerializable and IXmlSerializable implementations); the implementations are not yet complete or fully tested. The intention is to provide serialization for INode implementations, Triples, some IGraph implementations (complex implementations like GraphPersistenceWrapper won't be serializable), SparqlResult and SparqlResultSet. While this is not yet in a state I'd recommend you test, does it address your needs in this area?

Regards, Rob Vesse

----------------------------------------
From: "Rodrigo de Castro Reis" <rod...@in...>
Sent: 05 July 2011 14:03
To: "Rob Vesse" <rv...@do...>, "dotNetRDF Developer Discussion and Feature Request" <dot...@li...>
Subject: RES: [dotNetRDF-Develop] RES: SparqlResultSet and SparqlResult

Hi Rob,

We work a lot with serialization and we had to change dotNetRDF core classes to achieve this (adding Serializable and DataMember attributes). I think a nice, generic solution would be for SparqlResultSet and SparqlResult to be interfaces, so that when I create the query I can indicate my custom implementation of those interfaces. When I don't indicate a custom implementation, the current behaviour would apply.

Regards, Rodrigo Reis | Product Owner - Research Team rod...@in... | www.invit.com.br | @sigainvit Office: +55 34 3223.4000 - 11 2372.1072 | Mobile: +55 34 9661.7499

From: Rob Vesse [mailto:rv...@do...]
Sent: Tuesday, July 05, 2011 9:53 AM
To: Rodrigo de Castro Reis; 'dotNetRDF Developer Discussion and Feature Request'
Cc: Rafael Dias Araújo
Subject: RE: [dotNetRDF-Develop] RES: SparqlResultSet and SparqlResult

Hi Rodrigo

Ok, as I see it there are a few options here:

Option 1 - Convert to a DataTable or some other serializable structure yourself, e.g. create your own static method to do the conversion into something you can effectively serialize.

Option 2 - dotNetRDF implements .NET serialization on all its core types, or provides wrapper types that can be used where serialization is desired.

Option 3 - Wait for the next release of the library (currently slated for August), which brings API enhancements that allow you to control the processing of SPARQL results as they are produced. This reduces overhead because you don't need intermediate data structures; you can just create a class that processes the results directly. This may not be a viable solution for you if you are tied to using .NET serialization.

If your preferred option is number 2 I'll add it to the todo list, but I can't promise it'll be done for the next release as I already have plenty of other work to do for that release.

Regards, Rob Vesse

From: Rodrigo de Castro Reis [mailto:rod...@in...]
Sent: 05 July 2011 13:30
To: dotNetRDF Developer Discussion and Feature Request; Rob Vesse
Cc: Rafael Dias Araújo
Subject: RES: [dotNetRDF-Develop] RES: SparqlResultSet and SparqlResult

We tried to serialize the DataTable and got exception [1] because the DataTable had no name; after setting the DataTable's name, we got exception [2] because LiteralNode is not serializable. Maybe the DataTable column type doesn't need to be INode; maybe it could just be string.

[1] System.InvalidOperationException was unhandled by user code
  Message=There was an error generating the XML document.
  Source=System.Xml
  StackTrace:
    at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
    at System.Xml.Serialization.XmlSerializer.Serialize(Stream stream, Object o, XmlSerializerNamespaces namespaces)
    at QueryTest._Default.ExecuteQuery(Object sender, EventArgs e) in C:\Source\Utils\SemanticTestTools\DomainGraphQueryTest\QueryTest\Default.aspx.cs:line 53
    at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
  InnerException: System.InvalidOperationException
    Message=Cannot serialize the DataTable. DataTable name is not set.
    Source=System.Data
    StackTrace:
      at System.Data.DataTable.WriteXmlSchema(XmlWriter writer, Boolean writeHierarchy)
      at System.Data.DataTable.System.Xml.Serialization.IXmlSerializable.WriteXml(XmlWriter writer)
      at System.Xml.Serialization.XmlSerializationWriter.WriteSerializable(IXmlSerializable serializable, String name, String ns, Boolean isNullable, Boolean wrapped)
      at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterDataTable.Write1_DataTable(Object o)
    InnerException:

[2] System.InvalidOperationException was unhandled by user code
  Message=There was an error generating the XML document.
  Source=System.Xml
  StackTrace:
    at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
    at System.Xml.Serialization.XmlSerializer.Serialize(Stream stream, Object o, XmlSerializerNamespaces namespaces)
    at QueryTest._Default.ExecuteQuery(Object sender, EventArgs e) in C:\Source\Utils\SemanticTestTools\DomainGraphQueryTest\QueryTest\Default.aspx.cs:line 53
    at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
  InnerException: System.InvalidOperationException
    Message=Type 'VDS.RDF.LiteralNode, dotNetRDF, Version=0.4.1.0, Culture=neutral, PublicKeyToken=331e64d1a8b5942b' does not implement IXmlSerializable interface therefore can not proceed with serialization.
    Source=System.Data
    StackTrace:
      at System.Data.XmlDataTreeWriter.XmlDataRowWriter(DataRow row, String encodedTableName)
      at System.Data.XmlDataTreeWriter.SaveDiffgramData(XmlWriter xw, Hashtable rowsOrder)
      at System.Data.NewDiffgramGen.Save(XmlWriter xmlw, DataTable table)
      at System.Data.DataTable.WriteXml(XmlWriter writer, XmlWriteMode mode, Boolean writeHierarchy)
      at System.Xml.Serialization.XmlSerializationWriter.WriteSerializable(IXmlSerializable serializable, String name, String ns, Boolean isNullable, Boolean wrapped)
      at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterDataTable.Write1_DataTable(Object o)
    InnerException:

Rodrigo Reis | Product Owner - Research Team rod...@in... | www.invit.com.br | @sigainvit Office: +55 34 3223.4000 - 11 2372.1072 | Mobile: +55 34 9661.7499

From: Rodrigo de Castro Reis [mailto:rod...@in...]
Sent: Tuesday, July 05, 2011 9:03 AM
To: Rob Vesse; 'dotNetRDF Developer Discussion and Feature Request'
Cc: Rafael Dias Araújo
Subject: [dotNetRDF-Develop] RES: SparqlResultSet and SparqlResult

Since we have heavy performance requirements, I was trying to do more direct serialization, with no conversion step. But converting to a DataTable will work.

Regards, Rodrigo Reis | Product Owner - Research Team rod...@in... | www.invit.com.br | @sigainvit Office: +55 34 3223.4000 - 11 2372.1072 | Mobile: +55 34 9661.7499

From: Rob Vesse [mailto:rv...@do...]
Sent: Tuesday, July 05, 2011 5:51 AM
To: 'dotNetRDF Developer Discussion and Feature Request'
Cc: Rodrigo de Castro Reis
Subject: RE: [dotNetRDF-Develop] SparqlResultSet and SparqlResult

SparqlResult implements IEnumerable so that you can directly enumerate over the columns it represents, i.e. the set of key-value pairs of variables and values. This is useful since some queries return result sets where different results contain different columns, e.g. queries using UNION. Yes, SparqlResultSet is effectively a table, so you enumerate over the SparqlResult instances, which are the rows. If you need to serialize a SparqlResultSet you could cast it to a DataTable and serialize that instead; SparqlResultSet defines an explicit cast to DataTable, so that is doable.

Regards, Rob Vesse

From: Rodrigo de Castro Reis [mailto:rod...@in...]
Sent: 04 July 2011 15:18
To: dot...@li...
Subject: [dotNetRDF-Develop] SparqlResultSet and SparqlResult

Why does the class SparqlResult implement IEnumerable? I think SparqlResult is one row of the SparqlResultSet: I think of SparqlResultSet as a table and SparqlResult as a row. Am I missing something? We're providing an endpoint (web service) for SPARQL queries and these classes are not serializable, which is why we started questioning this.

Regards, Rodrigo Reis | Product Owner - Research Team rod...@in... | www.invit.com.br | @sigainvit Office: +55 34 3223.4000 - 11 2372.1072 | Mobile: +55 34 9661.7499 |
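The .NET serialization support Rob describes at the top of this thread follows the standard ISerializable pattern: a GetObjectData method plus a protected deserialization constructor. A hedged sketch of that pattern for a simple triple-like type; the type, field and key names here are illustrative only and are not dotNetRDF's actual implementation.

```csharp
using System;
using System.Runtime.Serialization;

// Illustrative type only; dotNetRDF's actual INode/Triple classes differ.
[Serializable]
public class SimpleTriple : ISerializable
{
    public string Subject { get; private set; }
    public string Predicate { get; private set; }
    public string Object { get; private set; }

    public SimpleTriple(string s, string p, string o)
    {
        Subject = s;
        Predicate = p;
        Object = o;
    }

    // Deserialization constructor required by the ISerializable pattern;
    // formatters invoke it via reflection when rebuilding the object.
    protected SimpleTriple(SerializationInfo info, StreamingContext context)
    {
        Subject = info.GetString("s");
        Predicate = info.GetString("p");
        Object = info.GetString("o");
    }

    // Called by formatters to capture the object's state for serialization.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("s", Subject);
        info.AddValue("p", Predicate);
        info.AddValue("o", Object);
    }
}
```

Each serializable type carries this pair of members, which is why the support has to be added type by type across INode implementations, Triples, graphs and result sets.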
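Rob's Option 1 (convert the results to a serializable structure yourself) hinges on the two requirements surfaced by the stack traces in this thread: the DataTable must have its TableName set, and its columns must hold XML-serializable types such as strings rather than INode. A minimal standalone sketch using only standard .NET types; in real code the rows would be populated from your query results (e.g. by calling ToString() on each node value), and the column names here are invented for the example.

```csharp
using System;
using System.Data;
using System.IO;
using System.Xml.Serialization;

class ResultTableSketch
{
    static void Main()
    {
        // The table name is required, otherwise XML serialization fails
        // ("DataTable name is not set", exception [1] above).
        var table = new DataTable("Results");

        // String columns sidestep exception [2]: types like LiteralNode do
        // not implement IXmlSerializable, but strings serialize fine.
        table.Columns.Add("s", typeof(string));
        table.Columns.Add("o", typeof(string));
        table.Rows.Add("http://example.org/subject", "\"literal value\"");

        // DataTable implements IXmlSerializable, so XmlSerializer can write it.
        var serializer = new XmlSerializer(typeof(DataTable));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, table);
            Console.WriteLine(writer.ToString());
        }
    }
}
```

The same conversion could be wrapped in a static helper that walks a result set and adds one string row per result, which is exactly the shape of Option 1.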
From: Rob V. <ra...@ec...> - 2011-05-20 13:55:58
|
Hi All

We are pleased to announce a new release of dotNetRDF. dotNetRDF is a free and open source (GPL/LGPL/MIT licensed) RDF/Semantic Web library written in C# for .NET 3.5 and higher. The new version is 0.4.1 Beta. Below is a quick summary of new features; you can read more online at [1] or see the full Change Log in the download package. Downloads are available via our website [2] or via SourceForge [3].

Key New Features/Improvements
* Full SPARQL 1.1 Query and Update support (passes the entire official test suite as it currently stands)
* Significantly improved parser subsystem
* Various bug fixes for 0.4.0 issues

We'd like to acknowledge the following people whose bug reports, patches and suggestions have helped shape this release:
* Graham Moore
* Laurent Lefort
* Felipe Santos
* Sergey Novikov
* Adonis Damian
* Bob Morris

Regards, Rob Vesse

[1] http://www.dotnetrdf.org/blogitem.asp?blogID=44
[2] http://www.dotnetrdf.org/content.asp?pageID=Download%20dotNetRDF
[3] https://sourceforge.net/projects/dotnetrdf/files/Library/0.4.1%20Beta/dotNetRDF_library_041_beta.zip/download |