jrdf-general Mailing List for Java RDF Binding
Status: Inactive
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2005 |     |     |     |     |     |     |     |     | 2   | 1   | 1   |     |
| 2006 | 1   |     | 3   | 6   |     | 3   | 1   |     | 2   |     | 1   |     |
| 2008 |     |     |     |     |     |     |     |     | 6   | 9   | 11  | 8   |
| 2009 | 1   | 3   |     | 1   |     |     | 3   |     |     |     |     |     |
| 2010 |     |     | 7   | 11  | 7   | 2   | 3   |     |     |     | 1   |     |
| 2011 |     |     | 2   | 1   | 1   |     |     |     |     |     | 1   |     |
From: Andrew N. <and...@gm...> - 2011-05-02 00:23:16
|
Hi, sorry this is a bit late. JRDF doesn't support bindings like that. I think what you want is something like Jenabeans or Empire (https://github.com/mhgrove/Empire).

On Friday, 22 April 2011, Ryan Heaton <sto...@us...> wrote:
> Hi.
>
> I'm looking for a library that will bind the Dublin Core RDF XML to Java objects using JAXB. Is this the right project for something like that? Does anyone know if something like that is available? If that isn't available, what's the possibility that I could be added as a contributor to the project to work on it?
>
> Eventually, I'd want the artifacts to be available via the central Maven repo, too...
>
> Thanks!
>
> -Ryan
|
From: Ryan H. <sto...@us...> - 2011-04-21 22:58:50
|
Hi. I'm looking for a library that will bind the Dublin Core RDF XML to Java objects using JAXB. Is this the right project for something like that? Does anyone know if something like that is available? If that isn't available, what's the possibility that I could be added as a contributor to the project to work on it? Eventually, I'd want the artifacts to be available via the central Maven repo, too... Thanks! -Ryan |
From: Andrew N. <and...@gm...> - 2011-03-24 21:07:49
|
Hi, I think you have confused the two. There are two different JARs:

jrdf-0.5.6.3.jar - the library, with an example program that is run by default (org.jrdf.example.RdfXmlParserExample).
jrdf-gui-0.5.6.3.jar - a Swing application that runs SPARQL queries.

If you want the GUI, download: http://sourceforge.net/projects/jrdf/files/jrdf/jrdf-0.5.6.3/jrdf-gui-0.5.6.3.jar/download

On Thursday, March 24, 2011, Jean-Marc Vanel <jea...@gm...> wrote:
> Hi
>
> It does not work as written in:
> http://code.google.com/p/jrdf/wiki/GettingStarted
>
> java -jar ~/Téléchargements/jrdf-0.5.6.3.jar
>
> prints many lines, ending with:
>
> Graph: [http://planetrdf.com/, http://purl.org/rss/1.0/items, be10739b-40e9-4234-87c5-fd5693a35240#23]
> Total number of statements: 305
>
> and STOPS :( !
>
> --
> Jean-Marc Vanel
> Déductions SARL - Consulting, services, training,
> Rule-based programming, Semantic Web
> http://jmvanel.free.fr/ - EulerGUI, a turntable GUI for Semantic Web + rules, XML, UML, eCore, Java bytecode
> +33 (0)6 89 16 29 52 -- +33 (0)1 39 55 58 16
> ( we rarely listen to voice messages, please send a mail instead )
|
From: Jean-Marc V. <jea...@gm...> - 2011-03-24 13:18:36
|
Hi

It does not work as written in:
http://code.google.com/p/jrdf/wiki/GettingStarted

java -jar ~/Téléchargements/jrdf-0.5.6.3.jar

prints many lines, ending with:

Graph: [http://planetrdf.com/, http://purl.org/rss/1.0/items, be10739b-40e9-4234-87c5-fd5693a35240#23]
Total number of statements: 305

and STOPS :( !

--
Jean-Marc Vanel
Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
http://jmvanel.free.fr/ - EulerGUI, a turntable GUI for Semantic Web + rules, XML, UML, eCore, Java bytecode
+33 (0)6 89 16 29 52 -- +33 (0)1 39 55 58 16
( we rarely listen to voice messages, please send a mail instead )
|
From: Andrew N. <and...@gm...> - 2010-11-24 20:18:18
|
Hi,

JRDF prevents this approach by design - it prevents blank nodes created in one graph from being added to another - because JRDF stores a map of longs to an object, and the blank node ids can clash. There was an idea that you could create GUIDs, but that doesn't help with blank nodes that are equivalent. I think the traditional approach is to try to map each blank node to an equivalent blank node. If you just want to avoid this error, you have to create a map between the two graphs' blank nodes. You would do something like:

if graph 1's node is a blank node
    get graph 2's equivalent blank node (check if it's in the map, else create a new blank node in graph 2 and map it to graph 1's node)
    use graph 2's blank node in place of graph 1's for the insert

The complexity of doing that across lots of graphs soon reaches its limit.

The molecule idea is a bit different. The idea is you decompose your graphs into molecules, send them across the network, and merge them/add them to a graph. The code is a little unpolished. You can't use in-memory objects; you have to serialize them to text and then deserialize them, as it was always designed to be distributed rather than local. The main idea is:

Graph srcGraph = JRDF_FACTORY.getGraph();
MoleculeGraph destGraph = GLOBAL_JRDF_FACTORY.getGraph();
...
TextToMoleculeGraph graphBuilder = getGraphBuilder(destGraph);
String graphAsString = graphToMoleculeText(srcGraph);
graphBuilder.parse(new StringReader(graphAsString));
while (graphBuilder.hasNext()) {
    Molecule molecule = graphBuilder.next();
    destGraph.add(molecule);
}

Subversion access to Sourceforge is down at the moment, but I'll put the full example, called CopyGraphExample.java, in:
http://jrdf.svn.sourceforge.net/viewvc/jrdf/trunk/jrdf/src/java/org/jrdf/example/

For merging take a look at:
http://jrdf.svn.sourceforge.net/viewvc/jrdf/trunk/jrdf/src/java/org/jrdf/example/performance/DecomposerPerformance.java?view=markup

On 24 November 2010 19:29, Vojtech Toman <voj...@gm...> wrote:
> Andrew, [Deleted some details]
> Basically, I think that what I want is to loop over a sequence of RDF documents, create a Graph for each document and then merge all these graphs into one according to the RDF merge rules.
>
> The naive approach below does not work for me:
>
> Graph resultGraph = MemoryJRDFFactory.getFactory().getGraph();
>
> for (Source source : sources) {
>     InputStream is = serialize(source);
>
>     Graph seqGraph = MemoryJRDFFactory.getFactory().getGraph();
>     Parser parser = new GraphRdfXmlParser(seqGraph, new MemMapFactory());
>     parser.parse(is, "");
>
>     ClosableIterable<Triple> triples = seqGraph.find(AnyTriple.ANY_TRIPLE);
>     try {
>         for (final Triple triple : triples) {
>             resultGraph.add(triple);
>         }
>     } finally {
>         triples.iterator().close();
>     }
> }
>
> I get errors like this one:
>
> org.jrdf.graph.local.index.nodepool.ExternalBlankNodeException: Failed to add triple.
> org.jrdf.graph.local.index.nodepool.ExternalBlankNodeException: The node returned by the nodeId (6) was not the same blank node. Got: 799d7bdf-51b7-4c51-b6b9-78824b9f5ece#6, expected: 94a6ac03-8c1e-494a-8da7-06d8fd5085e6#12
>
> (I get this error even with a single input RDF document)
>
> I also tried to use the MoleculeGraph as the result graph and GraphDecomposer to iterate over the molecules in the sub-graphs, but I got the same error.
>
> I would be very grateful if you could point me to the right direction, or tell me what I am doing wrong.
>
> Thank you very much for your reply.
>
> Best regards,
> Vojtech Toman
|
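To make the blank-node mapping recipe above concrete, here is a minimal sketch under stated assumptions: it reuses the find/ClosableIterable idiom from Vojtech's snippet, the BlankNodeMappingCopy class and mapBlankNode helper are hypothetical, and it assumes the JRDF element factory exposes createBlankNode() and that Graph.add accepts subject, predicate and object nodes - check the JRDF javadoc for the exact signatures, package paths and checked exceptions.

    import java.util.HashMap;
    import java.util.Map;
    import org.jrdf.graph.*; // sketch-level import; adjust to your JRDF version

    public class BlankNodeMappingCopy {
        // Maps each of the source graph's blank nodes to its stand-in in the destination.
        private final Map<BlankNode, BlankNode> nodeMap = new HashMap<BlankNode, BlankNode>();

        public void copy(Graph srcGraph, Graph destGraph) throws Exception {
            ClosableIterable<Triple> triples = srcGraph.find(AnyTriple.ANY_TRIPLE);
            try {
                for (Triple t : triples) {
                    SubjectNode s = t.getSubject();
                    ObjectNode o = t.getObject();
                    // Replace graph 1's blank nodes with graph 2's equivalents.
                    if (s instanceof BlankNode) {
                        s = mapBlankNode((BlankNode) s, destGraph);
                    }
                    if (o instanceof BlankNode) {
                        o = mapBlankNode((BlankNode) o, destGraph);
                    }
                    destGraph.add(s, t.getPredicate(), o);
                }
            } finally {
                triples.iterator().close();
            }
        }

        private BlankNode mapBlankNode(BlankNode srcNode, Graph destGraph) throws Exception {
            BlankNode mapped = nodeMap.get(srcNode);
            if (mapped == null) {
                // Assumed factory method: create the destination's equivalent node once.
                mapped = destGraph.getElementFactory().createBlankNode();
                nodeMap.put(srcNode, mapped);
            }
            return mapped;
        }
    }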
From: Andrew N. <and...@gm...> - 2010-07-05 12:34:08
|
It looks like this is a difference between the built in implementation in Java 6 and Woodstox. https://sourceforge.net/tracker/?func=detail&aid=2981078&group_id=96347&atid=616053 Thanks. On 5 July 2010 20:26, Mark <jav...@gm...> wrote: > The fix is in: > > GraphToRdfXmlWithHeader > > from; > FACTORY.setProperty("javax.xml.stream.isRepairingNamespaces", "true"); > to: > FACTORY.setProperty("javax.xml.stream.isRepairingNamespaces", true); > > On 5 July 2010 11:05, Mark <jav...@gm...> wrote: >> Hi when trying to use the RdfWriter in 0.5.6.1 I get the following error: >> >> Java 1.6 >> >> @Test >> public void testWriter(){ >> RdfReader rdfReader = new RdfReader(); >> StringWriter stringWriter = new StringWriter(); >> Graph graph = rdfReader.parseNTriples(new >> File("c:/temp/test.nt")); >> BlankNodeRegistry nodeRegistry = new MemBlankNodeRegistryImpl(); >> nodeRegistry.clear(); >> RdfNamespaceMap map = new MemRdfNamespaceMap(); >> RdfWriter writer = new RdfXmlWriter(nodeRegistry, map); >> writer.write(graph, stringWriter); >> System.out.println(writer.toString()); >> } >> >> test.nt contents: >> >> <jis://node/695A9A26-A7BD-C51A-4133-3F5B25AD2744> <jis://file/name> >> "695A9A26-A7BD-C51A-4133-3F5B25AD2744" . >> <jis://node/695A9A26-A7BD-C51A-4133-3F5B25AD2744> <jis://hash/sha1> >> "e8bdf20c69b402bd569c9ced5bdc5d63bba323ed" . >> >> java.lang.ClassCastException: java.lang.String cannot be cast to >> java.lang.Boolean >> at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.init(XMLStreamWriterImpl.java:234) >> at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.<init>(XMLStreamWriterImpl.java:216) >> at com.sun.xml.internal.stream.XMLOutputFactoryImpl.createXMLStreamWriter(XMLOutputFactoryImpl.java:193) >> at com.sun.xml.internal.stream.XMLOutputFactoryImpl.createXMLStreamWriter(XMLOutputFactoryImpl.java:110) >> at org.jrdf.writer.rdfxml.GraphToRdfXmlWithHeader.createXmlStreamWriter(GraphToRdfXmlWithHeader.java:119) >> at org.jrdf.writer.rdfxml.GraphToRdfXmlWithHeader.write(GraphToRdfXmlWithHeader.java:95) >> at org.jrdf.writer.rdfxml.RdfXmlWriter.tryWrite(RdfXmlWriter.java:134) >> at org.jrdf.writer.rdfxml.RdfXmlWriter.write(RdfXmlWriter.java:110) >> at org.jrdf.writer.rdfxml.RdfXmlWriter.write(RdfXmlWriter.java:105) >> at com.js.bug.jrdf.RdfWriterBug.testWriter(RdfWriterBug.java:26) >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) >> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) >> at java.lang.reflect.Method.invoke(Method.java:597) >> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) >> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) >> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) >> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) >> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) >> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) >> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) >> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) >> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) >> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) >> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) >> at 
org.junit.runners.ParentRunner.run(ParentRunner.java:236) >> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) >> at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) >> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) >> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) >> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) >> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) >> > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > Jrdf-general mailing list > Jrd...@li... > https://lists.sourceforge.net/lists/listinfo/jrdf-general > |
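The contract at issue here is javax.xml.stream.XMLOutputFactory.setProperty(String, Object): the JDK 6 built-in implementation casts the value for isRepairingNamespaces to Boolean when the stream writer is created (hence the ClassCastException in the trace above), while Woodstox happens to tolerate the String form as well. A minimal standalone reproduction of the fixed call, independent of JRDF:

    import java.io.StringWriter;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamException;
    import javax.xml.stream.XMLStreamWriter;

    public class RepairingNamespacesDemo {
        public static void main(String[] args) throws XMLStreamException {
            XMLOutputFactory factory = XMLOutputFactory.newInstance();

            // Pass a Boolean, not the String "true": the JDK 6 writer casts
            // this value to Boolean inside XMLStreamWriterImpl.init().
            factory.setProperty(XMLOutputFactory.IS_REPAIRING_NAMESPACES, Boolean.TRUE);

            XMLStreamWriter writer = factory.createXMLStreamWriter(new StringWriter());
            writer.writeStartDocument();
            writer.writeEmptyElement("ok");
            writer.writeEndDocument();
            writer.close();
        }
    }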
From: Mark <jav...@gm...> - 2010-07-05 10:26:19
|
The fix is in: GraphToRdfXmlWithHeader from; FACTORY.setProperty("javax.xml.stream.isRepairingNamespaces", "true"); to: FACTORY.setProperty("javax.xml.stream.isRepairingNamespaces", true); On 5 July 2010 11:05, Mark <jav...@gm...> wrote: > Hi when trying to use the RdfWriter in 0.5.6.1 I get the following error: > > Java 1.6 > > @Test > public void testWriter(){ > RdfReader rdfReader = new RdfReader(); > StringWriter stringWriter = new StringWriter(); > Graph graph = rdfReader.parseNTriples(new > File("c:/temp/test.nt")); > BlankNodeRegistry nodeRegistry = new MemBlankNodeRegistryImpl(); > nodeRegistry.clear(); > RdfNamespaceMap map = new MemRdfNamespaceMap(); > RdfWriter writer = new RdfXmlWriter(nodeRegistry, map); > writer.write(graph, stringWriter); > System.out.println(writer.toString()); > } > > test.nt contents: > > <jis://node/695A9A26-A7BD-C51A-4133-3F5B25AD2744> <jis://file/name> > "695A9A26-A7BD-C51A-4133-3F5B25AD2744" . > <jis://node/695A9A26-A7BD-C51A-4133-3F5B25AD2744> <jis://hash/sha1> > "e8bdf20c69b402bd569c9ced5bdc5d63bba323ed" . > > java.lang.ClassCastException: java.lang.String cannot be cast to > java.lang.Boolean > at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.init(XMLStreamWriterImpl.java:234) > at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.<init>(XMLStreamWriterImpl.java:216) > at com.sun.xml.internal.stream.XMLOutputFactoryImpl.createXMLStreamWriter(XMLOutputFactoryImpl.java:193) > at com.sun.xml.internal.stream.XMLOutputFactoryImpl.createXMLStreamWriter(XMLOutputFactoryImpl.java:110) > at org.jrdf.writer.rdfxml.GraphToRdfXmlWithHeader.createXmlStreamWriter(GraphToRdfXmlWithHeader.java:119) > at org.jrdf.writer.rdfxml.GraphToRdfXmlWithHeader.write(GraphToRdfXmlWithHeader.java:95) > at org.jrdf.writer.rdfxml.RdfXmlWriter.tryWrite(RdfXmlWriter.java:134) > at org.jrdf.writer.rdfxml.RdfXmlWriter.write(RdfXmlWriter.java:110) > at org.jrdf.writer.rdfxml.RdfXmlWriter.write(RdfXmlWriter.java:105) > at com.js.bug.jrdf.RdfWriterBug.testWriter(RdfWriterBug.java:26) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) > at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) > at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) > at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) > at org.junit.runners.ParentRunner.run(ParentRunner.java:236) > at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) > at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) > at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) > at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) > at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) > at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) > |
From: Mark <jav...@gm...> - 2010-07-05 10:05:25
|
Hi, when trying to use the RdfWriter in 0.5.6.1 I get the following error (Java 1.6):

@Test
public void testWriter() {
    RdfReader rdfReader = new RdfReader();
    StringWriter stringWriter = new StringWriter();
    Graph graph = rdfReader.parseNTriples(new File("c:/temp/test.nt"));
    BlankNodeRegistry nodeRegistry = new MemBlankNodeRegistryImpl();
    nodeRegistry.clear();
    RdfNamespaceMap map = new MemRdfNamespaceMap();
    RdfWriter writer = new RdfXmlWriter(nodeRegistry, map);
    writer.write(graph, stringWriter);
    System.out.println(writer.toString());
}

test.nt contents:

<jis://node/695A9A26-A7BD-C51A-4133-3F5B25AD2744> <jis://file/name> "695A9A26-A7BD-C51A-4133-3F5B25AD2744" .
<jis://node/695A9A26-A7BD-C51A-4133-3F5B25AD2744> <jis://hash/sha1> "e8bdf20c69b402bd569c9ced5bdc5d63bba323ed" .

java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Boolean
    at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.init(XMLStreamWriterImpl.java:234)
    at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.<init>(XMLStreamWriterImpl.java:216)
    at com.sun.xml.internal.stream.XMLOutputFactoryImpl.createXMLStreamWriter(XMLOutputFactoryImpl.java:193)
    at com.sun.xml.internal.stream.XMLOutputFactoryImpl.createXMLStreamWriter(XMLOutputFactoryImpl.java:110)
    at org.jrdf.writer.rdfxml.GraphToRdfXmlWithHeader.createXmlStreamWriter(GraphToRdfXmlWithHeader.java:119)
    at org.jrdf.writer.rdfxml.GraphToRdfXmlWithHeader.write(GraphToRdfXmlWithHeader.java:95)
    at org.jrdf.writer.rdfxml.RdfXmlWriter.tryWrite(RdfXmlWriter.java:134)
    at org.jrdf.writer.rdfxml.RdfXmlWriter.write(RdfXmlWriter.java:110)
    at org.jrdf.writer.rdfxml.RdfXmlWriter.write(RdfXmlWriter.java:105)
    at com.js.bug.jrdf.RdfWriterBug.testWriter(RdfWriterBug.java:26)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
|
From: Andrew N. <and...@gm...> - 2010-06-09 21:50:11
|
My memory is a little fuzzy, but I think your best bet would be: http://topquadrant.com/products/TB_Suite.html - they have a free edition.

On 10 June 2010 06:51, Kevin Daly <kev...@is...> wrote:
> I'm looking for a Java Swing based UI for allowing a user to enter SPARQL queries. Most of what I have seen is simply a text editor that accepts any input, then validates the SPARQL and either works or tells the user to try again. I'm hoping for something smarter, something that can inspect the ontology and provide guidance to a user for inputting a query.
>
> Have you seen anything like that, or even anything better than a simple text panel? I could write my own, but do not have the time/resources to do that on my current project.
>
> Thanks for any thoughts you might have.
>
> Kevin Daly
|
From: Kevin D. <kev...@is...> - 2010-06-09 21:06:35
|
I'm looking for a Java Swing based UI for allowing a user to enter SPARQL queries. Most of what I have seen is simply a text editor that accepts any input, then validates the SPARQL and either works or tells the user to try again. I'm hoping for something smarter, something that can inspect the ontology and provide guidance to a user for inputting a query.

Have you seen anything like that, or even anything better than a simple text panel? I could write my own, but do not have the time/resources to do that on my current project.

Thanks for any thoughts you might have.

Kevin Daly
|
From: Mark <jav...@gm...> - 2010-05-27 22:45:26
|
Thanks Andrew. On 27 May 2010 22:44, Andrew Newman <and...@gm...> wrote: > I've made the following changes: > * It only adds the RDF namespace by default - it no longer adds owl, > purl etc by default. > * You can configure it to produce local name spaces (as I'm calling > the feature anyway) by setting the property > "org.jrdf.writer.rdfxml.writeLocalNamespace" to "true". > > ------------------------------------------------------------------------------ > > _______________________________________________ > Jrdf-general mailing list > Jrd...@li... > https://lists.sourceforge.net/lists/listinfo/jrdf-general > |
From: Andrew N. <and...@gm...> - 2010-05-27 21:44:25
|
I've made the following changes:

* It only adds the RDF namespace by default - it no longer adds owl, purl etc. by default.
* You can configure it to produce local namespaces (as I'm calling the feature, anyway) by setting the property "org.jrdf.writer.rdfxml.writeLocalNamespace" to "true".
|
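The message does not say how the writeLocalNamespace property is picked up. If it is read as an ordinary JVM system property, enabling the feature would look like the one-liner below - a guess, to be verified against the checked-in JRDF source before relying on it:

    // Hypothetical usage: assumes the RDF/XML writer reads this flag from
    // System properties, which this thread does not confirm.
    System.setProperty("org.jrdf.writer.rdfxml.writeLocalNamespace", "true");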
From: Andrew N. <and...@gm...> - 2010-05-25 22:08:47
|
Sorry it took so long. I've just checked in a version where closing the resources used by the graph is on the close() method of the graph interface. JRDFFactory no longer has a close method. As long as you create a graph and close it, you don't seem to get a memory leak. I was able to successfully create, open, and close the same two graphs 100,000 times on the default memory settings (I put a loop around getExistingTriples() in PerformanceExample).

On 14 April 2010 18:53, Andrew Newman <and...@gm...> wrote:
> Thanks for your work looking into this. I'll have a quick look as soon as I can.
>
> From memory, graph.close is empty - the assumption has been that graphs can just stop being used.
>
> On Wednesday, April 14, 2010, Benedikt Forchhammer <b.f...@gm...> wrote:
>> Hi again,
>>
>> Creating and closing new graph instances over and over again leads to an OutOfMemoryError. I have done some testing and it's pretty bad; for a small graph I can only create about 7500 Graph instances before the program shuts down. Closing a graph after usage does not seem to clean things up completely... (see attachments).
>>
>> The problem can be "solved" by using a shared graph instance instead; in terms of concurrency issues that seems to work for me when I protect the instance by that WriteLock which I mentioned earlier.
|
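The usage pattern this change enables, as a sketch under stated assumptions: MemoryJRDFFactory is borrowed from Vojtech's example earlier in this list (its import path depends on your JRDF version), the loop body is a placeholder for real work, and the exact exceptions Graph.close() declares are not shown in this thread, so a blanket throws clause stands in for them.

    import org.jrdf.graph.Graph;
    // import for MemoryJRDFFactory omitted; package path depends on your JRDF version

    public class OpenCloseLoop {
        public static void main(String[] args) throws Exception {
            // Mirrors the test described above: repeatedly create and close
            // graphs without leaking memory, now that cleanup lives on close().
            for (int i = 0; i < 100000; i++) {
                Graph graph = MemoryJRDFFactory.getFactory().getGraph();
                try {
                    // ... parse, query, or modify the graph here ...
                } finally {
                    graph.close(); // releases the graph's resources
                }
            }
        }
    }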
From: Andrew N. <and...@gm...> - 2010-05-13 10:38:26
|
Cool. Well that's where I was headed anyway. I'll try and integrate your suggestions soon. On 13/05/2010, at 7:19 AM, Mark <jav...@gm...> wrote: > Andrew, > > I figured out a work-arround > > I created my own RdfNamespaceMapImpl > > removing: > //throw new NamespaceException("Partial uri: " + partial + " is not > mapped to a namespace."); > > also i removed some defaults > private void initNamespaces() { > add("rdf", getPartialUri(RDF.BASE_URI.toString())); > //add("rdfs", getPartialUri(RDFS.BASE_URI.toString())); > //add("owl", "http://www.w3.org/2002/07/owl#"); > //add("dc", "http://purl.org/dc/elements/1.1/"); > //add("dcterms", "http://purl.org/dc/terms/"); > } > > I then wired in my own PredicateObjectWriterImpl > > and changed - a bit hacky at the moment but works ok, basically writes > a default xmlns > > private void writePredicate(PredicateNode predicate) throws > WriteException, XMLStreamException { > if (!(predicate instanceof URIReference)) { > throw new WriteException("Unknown predicate node type: " + > predicate.getClass().getName()); > } > String resourceName = names.replaceNamespace((URIReference) predicate); > int lastIndexOf = predicate.toString().lastIndexOf('/'); > > //xmlStreamWriter.writeStartElement(resourceName); > String[] split = resourceName.split(":"); > xmlStreamWriter.writeStartElement(split[1]); > xmlStreamWriter.writeNamespace("xmlns", > predicate.toString().substring(0,lastIndexOf)+"/"); > } > > finally I wired it all up using my own RdfXmlWriter - for which I > removed the call to names.load(graph); > > The resultant graph is not as compact but it is compatable with the > output created by sesame. > > Thanks for the great api. > > Mark > > On 12 May 2010 22:06, Andrew Newman <and...@gm...> wrote: >> No it's not - the implementation of RdfXmlDocument and ResourceWriter >> are current hard coded into RdfXmlWriter. >> >> I'll try and make some small changes to make these changeable but >> implementing it properly would take some more time (I'm still working >> on trying the previous problem at the moment of creating lots of >> graphs). >> >> On 13 May 2010 04:08, Mark <jav...@gm...> wrote: >>> Is there a way to limit the automatic namespace assignment. >>> >>> I don't want <!ENTITY ns5 'jis://file/'>, I would rather prefer: >>> >>> <name xmlns="jis://file/">2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</name> >>> >>> instead of: >>> >>> <ns5:name>2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</ns5:name> >>> >>> >>> I this possible with the current API? >>> >> >> ------------------------------------------------------------------------------ >> >> _______________________________________________ >> Jrdf-general mailing list >> Jrd...@li... >> https://lists.sourceforge.net/lists/listinfo/jrdf-general >> > > ------------------------------------------------------------------------------ > > _______________________________________________ > Jrdf-general mailing list > Jrd...@li... > https://lists.sourceforge.net/lists/listinfo/jrdf-general |
From: Mark <jav...@gm...> - 2010-05-12 21:19:17
|
Andrew,

I figured out a workaround. I created my own RdfNamespaceMapImpl, removing:

//throw new NamespaceException("Partial uri: " + partial + " is not mapped to a namespace.");

I also removed some defaults:

private void initNamespaces() {
    add("rdf", getPartialUri(RDF.BASE_URI.toString()));
    //add("rdfs", getPartialUri(RDFS.BASE_URI.toString()));
    //add("owl", "http://www.w3.org/2002/07/owl#");
    //add("dc", "http://purl.org/dc/elements/1.1/");
    //add("dcterms", "http://purl.org/dc/terms/");
}

I then wired in my own PredicateObjectWriterImpl and changed the following - a bit hacky at the moment, but it works OK; basically it writes a default xmlns:

private void writePredicate(PredicateNode predicate) throws WriteException, XMLStreamException {
    if (!(predicate instanceof URIReference)) {
        throw new WriteException("Unknown predicate node type: " + predicate.getClass().getName());
    }
    String resourceName = names.replaceNamespace((URIReference) predicate);
    int lastIndexOf = predicate.toString().lastIndexOf('/');

    //xmlStreamWriter.writeStartElement(resourceName);
    String[] split = resourceName.split(":");
    xmlStreamWriter.writeStartElement(split[1]);
    xmlStreamWriter.writeNamespace("xmlns", predicate.toString().substring(0, lastIndexOf) + "/");
}

Finally, I wired it all up using my own RdfXmlWriter, for which I removed the call to names.load(graph);

The resultant graph is not as compact but it is compatible with the output created by Sesame.

Thanks for the great API.

Mark

On 12 May 2010 22:06, Andrew Newman <and...@gm...> wrote:
> No it's not - the implementation of RdfXmlDocument and ResourceWriter are currently hard-coded into RdfXmlWriter.
>
> I'll try and make some small changes to make these changeable, but implementing it properly would take some more time (I'm still working on the previous problem of creating lots of graphs at the moment).
>
> On 13 May 2010 04:08, Mark <jav...@gm...> wrote:
>> Is there a way to limit the automatic namespace assignment?
>>
>> I don't want <!ENTITY ns5 'jis://file/'>, I would rather prefer:
>>
>> <name xmlns="jis://file/">2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</name>
>>
>> instead of:
>>
>> <ns5:name>2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</ns5:name>
>>
>> Is this possible with the current API?
|
From: Andrew N. <and...@gm...> - 2010-05-12 21:06:50
|
No it's not - the implementation of RdfXmlDocument and ResourceWriter are currently hard-coded into RdfXmlWriter.

I'll try and make some small changes to make these changeable, but implementing it properly would take some more time (I'm still working on the previous problem of creating lots of graphs at the moment).

On 13 May 2010 04:08, Mark <jav...@gm...> wrote:
> Is there a way to limit the automatic namespace assignment?
>
> I don't want <!ENTITY ns5 'jis://file/'>, I would rather prefer:
>
> <name xmlns="jis://file/">2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</name>
>
> instead of:
>
> <ns5:name>2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</ns5:name>
>
> Is this possible with the current API?
|
From: Mark <jav...@gm...> - 2010-05-12 18:09:07
|
Is there a way to limit the automatic namespace assignment?

I don't want <!ENTITY ns5 'jis://file/'>, I would rather prefer:

<name xmlns="jis://file/">2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</name>

instead of:

<ns5:name>2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</ns5:name>

This is what I have:

URI TEST_NODE = create("jis://node/FD144428-774B-0301-FB90-1CBF2134326D");
URI TEST_PROPERTY = create("jis://file/name");
Graph graph = JRDF_FACTORY.getNewGraph();
GraphElementFactory elementFactory = graph.getElementFactory();
TripleFactory tripleFactory = graph.getTripleFactory();
tripleFactory.addTriple(TEST_NODE, TEST_PROPERTY, "2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C");
Writer.writeNTriples(new File("c:/temp/test.nt"), graph);
Writer.writeRdfXml(new File("c:/temp/test.rdf"), graph);

<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE rdf:RDF[
<!ENTITY dc 'http://purl.org/dc/elements/1.1/'>
<!ENTITY rdfs 'http://www.w3.org/2000/01/rdf-schema#'>
<!ENTITY owl 'http://www.w3.org/2002/07/owl#'>
<!ENTITY rdf 'http://www.w3.org/1999/02/22-rdf-syntax-ns#'>
<!ENTITY ns5 'jis://file/'>
<!ENTITY dcterms 'http://purl.org/dc/terms/'>
]><rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:owl="http://www.w3.org/2002/07/owl#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:ns5="jis://file/" xmlns:dcterms="http://purl.org/dc/terms/">
  <rdf:Description rdf:about="jis://node/FD144428-774B-0301-FB90-1CBF2134326D">
    <ns5:name>2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</ns5:name>
  </rdf:Description>
</rdf:RDF>

Ideally I would like:

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about="jis://node/FD144428-774B-0301-FB90-1CBF2134326D">
    <name xmlns="jis://file/">2F143D1D-5C57-CAC8-0DE1-6F380FE7C20C</name>
  </rdf:Description>
</rdf:RDF>

Is this possible with the current API?

Cheers

Mark
|
From: Andrew N. <and...@gm...> - 2010-04-14 08:53:55
|
Thanks for your work looking into this. I'll have a quick look as soon as I can.

From memory, graph.close is empty - the assumption has been that graphs can just stop being used.

On Wednesday, April 14, 2010, Benedikt Forchhammer <b.f...@gm...> wrote:
> Hi again,
>
> Creating and closing new graph instances over and over again leads to an OutOfMemoryError. I have done some testing and it's pretty bad; for a small graph I can only create about 7500 Graph instances before the program shuts down. Closing a graph after usage does not seem to clean things up completely... (see attachments).
>
> The problem can be "solved" by using a shared graph instance instead; in terms of concurrency issues that seems to work for me when I protect the instance by that WriteLock which I mentioned earlier.
|
From: Benedikt F. <b.f...@gm...> - 2010-04-13 17:26:09
|
Hi again, Creating and closing new graph instances over and over again leads to an OutOfMemoryError. I have done some testing and it's pretty bad; For a small graph I can only create about 7500 Graph instances before the program shuts down. Closing a graph after usage does not seem to clean things up completely... (see attachments). The problem can be "solved" by using a shared graph instance instead; in terms of concurrency issues that seems to work for me when I protect the instance by that WriteLock which I mentioned earlier. Regards, Ben 2010/4/9 Benedikt Forchhammer <b.f...@gm...>: > Hi, > > I have done some testing and it turns out that neither sparql queries > nor graph changes can be done concurrently at the moment... > > I tried using a simple ReadWriteLock (i.e. exclusive write access, > multiple read access), using > java.util.concurrent.locks.ReentrantReadWriteLock, but I still got > bunch of exceptions (not only the ConcurrentModificationException > which I mentioned earlier; for a list including stacktraces see > attachment). > > When I switched to using a fully exclusive lock (i.e. only one thread > has access no matter what the thread does), the whole thing started to > behave... Probably at the cost of performance (not verified). > > I also started using separate graph instances for all graph access > (even the sparql queries); this did not seem to change anything, so > there might be a bug in there somewhere like you mentioned. > > Ben > > On 29 March 2010 23:31, Benedikt Forchhammer <b.f...@gm...> wrote: >> Hi, >> Thanks for the quite reply! >> >> On 29 March 2010 22:07, Andrew Newman <and...@gm...> wrote: >>> Yep, it's safest to create a graph for each thread - but I'm not sure >>> that's the error you're getting - if you're just doing queries against >>> the graph you should get a concurrent modification error - it could be >>> a bug in the OptimizingQueryEngine. >> >> I think it was actually a modification and a query at the same time... >> >>> You could implement your own locking or I would do it is use something >>> like Spring's transaction management to add transactions around >>> methods (see the FooService): >>> http://static.springsource.org/spring/docs/3.0.x/reference/html/transaction.html >> >> Thanks for the link. I am a bit hesitant of adding another "big >> library" to my project, but I will have a look... >> For a start I think I am going to try to use separate graphs for >> writes and see if that solves the problem... >> >> Thank you, >> Ben >> > |
From: Benedikt F. <b.f...@gm...> - 2010-04-09 13:36:44
|
Okay, you guys are the experts... :-) Lazy iterators sound like a good idea; from what you've explained I imagine that concurrency could cause issues here again; e.g. what happens if an entry is removed while the pointer is still "in use" by one of the result iterators? On 9 April 2010 13:28, Yuan-Fang Li <liy...@gm...> wrote: > On Fri, Apr 9, 2010 at 9:38 PM, Benedikt Forchhammer > <b.f...@gm...> wrote: >> >> Hm, indexing over "all permutations of spo" doesn't sound like a >> feasible solution, especially with large datasets... ;-) > > I don't think this is infeasible. Since we only index over spo, all the > permutations are just 6 indices, instead of the current 3. It uses more disk > space, but disks are cheap these days. There's a paper about implementing > it: http://people.csail.mit.edu/tdanford/6830papers/weiss-hexastore.pdf. > However, they only implemented an in-memory prototype but not a persistent > store. > >> >> I don't know if this helps, but for my use case, I only need to sort >> on one type of literal which is bound by one predicate... and I would >> have no problem with manually specifying the respective index >> somewhere (e.g. on graph creation); after all that's what I have to do >> for relational databases as well. >> >> What do you guys mean by "lazy result iterators"? > > At the moment all intermediate result sets are stored in memory. So if a > query, or some part of a graph pattern is not very selective, memory > consumption is going to be high. Lazy iterators hold "pointers" to actual > triples in the triple store so they should use less memory. > >> >> Cheers, >> Ben >> >> On 6 April 2010 21:53, Andrew Newman <and...@gm...> wrote: >> > On 6 April 2010 23:11, Yuan-Fang Li <liy...@gm...> wrote: >> >> >> >> On Mon, Apr 5, 2010 at 10:13 PM, Andrew Newman >> >> <and...@gm...> >> >> wrote: >> >>> >> >>> I guess there are two ways to add the ORDER BY support. >> >>> >> >>> The simplest way to add it is after producing a result set and then >> >>> ordering and limiting that. I don't think that would be any faster - >> >>> it would just be more convenient. >> >>> >> >>> The other way I have thought about it previously was trying to >> >>> maintain sorting through the processing (joins etc). >> >>> >> >> >> >> I think this is the right way to go since if the graph is large and the >> >> query is not very selective, retrieving all results and then sorting >> >> the >> >> result set may take a long time. Keeping things sorted in intermediate >> >> results should be faster. ORDER BY and LIMIT as a post-processing step >> >> implies that a lot of unnecessary joins, etc., may need to be done. >> >> The 2nd way will require, I think, indexing over all permutations of >> >> spo. >> >> To be more memory-friendly, returning "lazy" result iterators may be >> >> the way >> >> to go. :-) >> >> >> > >> > Yeah, I did see this as an opportunity to get the lazy work in. >> > >> > >> > ------------------------------------------------------------------------------ >> > Download Intel® Parallel Studio Eval >> > Try the new software tools for yourself. Speed compiling, find bugs >> > proactively, and fine-tune applications for parallel performance. >> > See why Intel Parallel Studio got high marks during beta. >> > http://p.sf.net/sfu/intel-sw-dev >> > _______________________________________________ >> > Jrdf-general mailing list >> > Jrd...@li... 
>> > https://lists.sourceforge.net/lists/listinfo/jrdf-general >> > >> >> >> ------------------------------------------------------------------------------ >> Download Intel® Parallel Studio Eval >> Try the new software tools for yourself. Speed compiling, find bugs >> proactively, and fine-tune applications for parallel performance. >> See why Intel Parallel Studio got high marks during beta. >> http://p.sf.net/sfu/intel-sw-dev >> _______________________________________________ >> Jrdf-general mailing list >> Jrd...@li... >> https://lists.sourceforge.net/lists/listinfo/jrdf-general > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > Jrdf-general mailing list > Jrd...@li... > https://lists.sourceforge.net/lists/listinfo/jrdf-general > > |
From: Yuan-Fang Li <liy...@gm...> - 2010-04-09 12:28:24
|
On Fri, Apr 9, 2010 at 9:38 PM, Benedikt Forchhammer < b.f...@gm...> wrote: > Hm, indexing over "all permutations of spo" doesn't sound like a > feasible solution, especially with large datasets... ;-) > I don't think this is infeasible. Since we only index over spo, all the permutations are just 6 indices, instead of the current 3. It uses more disk space, but disks are cheap these days. There's a paper about implementing it: http://people.csail.mit.edu/tdanford/6830papers/weiss-hexastore.pdf. However, they only implemented an in-memory prototype but not a persistent store. > > I don't know if this helps, but for my use case, I only need to sort > on one type of literal which is bound by one predicate... and I would > have no problem with manually specifying the respective index > somewhere (e.g. on graph creation); after all that's what I have to do > for relational databases as well. > What do you guys mean by "lazy result iterators"? > At the moment all intermediate result sets are stored in memory. So if a query, or some part of a graph pattern is not very selective, memory consumption is going to be high. Lazy iterators hold "pointers" to actual triples in the triple store so they should use less memory. > > Cheers, > Ben > > On 6 April 2010 21:53, Andrew Newman <and...@gm...> wrote: > > On 6 April 2010 23:11, Yuan-Fang Li <liy...@gm...> wrote: > >> > >> On Mon, Apr 5, 2010 at 10:13 PM, Andrew Newman <and...@gm... > > > >> wrote: > >>> > >>> I guess there are two ways to add the ORDER BY support. > >>> > >>> The simplest way to add it is after producing a result set and then > >>> ordering and limiting that. I don't think that would be any faster - > >>> it would just be more convenient. > >>> > >>> The other way I have thought about it previously was trying to > >>> maintain sorting through the processing (joins etc). > >>> > >> > >> I think this is the right way to go since if the graph is large and the > >> query is not very selective, retrieving all results and then sorting the > >> result set may take a long time. Keeping things sorted in intermediate > >> results should be faster. ORDER BY and LIMIT as a post-processing step > >> implies that a lot of unnecessary joins, etc., may need to be done. > >> The 2nd way will require, I think, indexing over all permutations of > spo. > >> To be more memory-friendly, returning "lazy" result iterators may be the > way > >> to go. :-) > >> > > > > Yeah, I did see this as an opportunity to get the lazy work in. > > > > > ------------------------------------------------------------------------------ > > Download Intel® Parallel Studio Eval > > Try the new software tools for yourself. Speed compiling, find bugs > > proactively, and fine-tune applications for parallel performance. > > See why Intel Parallel Studio got high marks during beta. > > http://p.sf.net/sfu/intel-sw-dev > > _______________________________________________ > > Jrdf-general mailing list > > Jrd...@li... > > https://lists.sourceforge.net/lists/listinfo/jrdf-general > > > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > Jrdf-general mailing list > Jrd...@li... > https://lists.sourceforge.net/lists/listinfo/jrdf-general > |
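To make the "6 indices instead of 3" point concrete, here is a sketch of the hexastore idea from the linked paper - purely illustrative, not JRDF's actual index layout:

    // Each permutation of (subject, predicate, object) gets its own sorted
    // index, so any triple pattern, whatever combination of positions is
    // bound, can be answered by a single range scan over one ordering.
    enum TripleOrder {
        SPO, SOP,  // subject-first indices
        PSO, POS,  // predicate-first indices
        OSP, OPS   // object-first indices
    }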
From: Benedikt F. <b.f...@gm...> - 2010-04-09 11:58:54
|
I stumbled on a post about the problem with order-by performance and the sparql order-by implementation: http://www.mail-archive.com/pub...@w3.../msg00328.html The post is about AllegroGraph but I think some points might be relevant anyway... On 9 April 2010 12:38, Benedikt Forchhammer <b.f...@gm...> wrote: > Hm, indexing over "all permutations of spo" doesn't sound like a > feasible solution, especially with large datasets... ;-) > > I don't know if this helps, but for my use case, I only need to sort > on one type of literal which is bound by one predicate... and I would > have no problem with manually specifying the respective index > somewhere (e.g. on graph creation); after all that's what I have to do > for relational databases as well. > > What do you guys mean by "lazy result iterators"? > > Cheers, > Ben > > On 6 April 2010 21:53, Andrew Newman <and...@gm...> wrote: >> On 6 April 2010 23:11, Yuan-Fang Li <liy...@gm...> wrote: >>> >>> On Mon, Apr 5, 2010 at 10:13 PM, Andrew Newman <and...@gm...> >>> wrote: >>>> >>>> I guess there are two ways to add the ORDER BY support. >>>> >>>> The simplest way to add it is after producing a result set and then >>>> ordering and limiting that. I don't think that would be any faster - >>>> it would just be more convenient. >>>> >>>> The other way I have thought about it previously was trying to >>>> maintain sorting through the processing (joins etc). >>>> >>> >>> I think this is the right way to go since if the graph is large and the >>> query is not very selective, retrieving all results and then sorting the >>> result set may take a long time. Keeping things sorted in intermediate >>> results should be faster. ORDER BY and LIMIT as a post-processing step >>> implies that a lot of unnecessary joins, etc., may need to be done. >>> The 2nd way will require, I think, indexing over all permutations of spo. >>> To be more memory-friendly, returning "lazy" result iterators may be the way >>> to go. :-) >>> >> >> Yeah, I did see this as an opportunity to get the lazy work in. >> >> ------------------------------------------------------------------------------ >> Download Intel® Parallel Studio Eval >> Try the new software tools for yourself. Speed compiling, find bugs >> proactively, and fine-tune applications for parallel performance. >> See why Intel Parallel Studio got high marks during beta. >> http://p.sf.net/sfu/intel-sw-dev >> _______________________________________________ >> Jrdf-general mailing list >> Jrd...@li... >> https://lists.sourceforge.net/lists/listinfo/jrdf-general >> > |
From: Benedikt F. <b.f...@gm...> - 2010-04-09 11:47:47
|
Hi,

I have done some testing and it turns out that neither SPARQL queries nor graph changes can be done concurrently at the moment...

I tried using a simple ReadWriteLock (i.e. exclusive write access, multiple read access), using java.util.concurrent.locks.ReentrantReadWriteLock, but I still got a bunch of exceptions (not only the ConcurrentModificationException which I mentioned earlier; for a list including stacktraces see attachment).

When I switched to using a fully exclusive lock (i.e. only one thread has access no matter what the thread does), the whole thing started to behave... probably at the cost of performance (not verified).

I also started using separate graph instances for all graph access (even the SPARQL queries); this did not seem to change anything, so there might be a bug in there somewhere like you mentioned.

Ben

On 29 March 2010 23:31, Benedikt Forchhammer <b.f...@gm...> wrote:
> Hi,
> Thanks for the quick reply!
>
> On 29 March 2010 22:07, Andrew Newman <and...@gm...> wrote:
>> Yep, it's safest to create a graph for each thread - but I'm not sure that's the error you're getting - if you're just doing queries against the graph you shouldn't get a concurrent modification error - it could be a bug in the OptimizingQueryEngine.
>
> I think it was actually a modification and a query at the same time...
>
>> You could implement your own locking, or the way I would do it is to use something like Spring's transaction management to add transactions around methods (see the FooService):
>> http://static.springsource.org/spring/docs/3.0.x/reference/html/transaction.html
>
> Thanks for the link. I am a bit hesitant of adding another "big library" to my project, but I will have a look...
> For a start I think I am going to try to use separate graphs for writes and see if that solves the problem...
>
> Thank you,
> Ben
|
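A sketch of the fully exclusive locking described above: one lock guards every graph operation, reads included, since a ReadWriteLock with concurrent readers was not enough in these tests. The ExclusiveGraph wrapper and GraphOperation callback are hypothetical scaffolding, not JRDF API.

    import java.util.concurrent.locks.ReentrantLock;
    import org.jrdf.graph.Graph;

    // Hypothetical wrapper: serialize ALL access to a shared Graph instance.
    public class ExclusiveGraph {
        private final ReentrantLock lock = new ReentrantLock();
        private final Graph graph;

        public ExclusiveGraph(Graph graph) {
            this.graph = graph;
        }

        // Queries and updates alike go through here, one thread at a time.
        public <T> T withGraph(GraphOperation<T> op) throws Exception {
            lock.lock();
            try {
                return op.apply(graph);
            } finally {
                lock.unlock();
            }
        }

        public interface GraphOperation<T> {
            T apply(Graph graph) throws Exception;
        }
    }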
From: Benedikt F. <b.f...@gm...> - 2010-04-09 11:38:24
|
Hm, indexing over "all permutations of spo" doesn't sound like a feasible solution, especially with large datasets... ;-) I don't know if this helps, but for my use case, I only need to sort on one type of literal which is bound by one predicate... and I would have no problem with manually specifying the respective index somewhere (e.g. on graph creation); after all that's what I have to do for relational databases as well. What do you guys mean by "lazy result iterators"? Cheers, Ben On 6 April 2010 21:53, Andrew Newman <and...@gm...> wrote: > On 6 April 2010 23:11, Yuan-Fang Li <liy...@gm...> wrote: >> >> On Mon, Apr 5, 2010 at 10:13 PM, Andrew Newman <and...@gm...> >> wrote: >>> >>> I guess there are two ways to add the ORDER BY support. >>> >>> The simplest way to add it is after producing a result set and then >>> ordering and limiting that. I don't think that would be any faster - >>> it would just be more convenient. >>> >>> The other way I have thought about it previously was trying to >>> maintain sorting through the processing (joins etc). >>> >> >> I think this is the right way to go since if the graph is large and the >> query is not very selective, retrieving all results and then sorting the >> result set may take a long time. Keeping things sorted in intermediate >> results should be faster. ORDER BY and LIMIT as a post-processing step >> implies that a lot of unnecessary joins, etc., may need to be done. >> The 2nd way will require, I think, indexing over all permutations of spo. >> To be more memory-friendly, returning "lazy" result iterators may be the way >> to go. :-) >> > > Yeah, I did see this as an opportunity to get the lazy work in. > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > Jrdf-general mailing list > Jrd...@li... > https://lists.sourceforge.net/lists/listinfo/jrdf-general > |