This list is closed; nobody may subscribe to it.
From: Bigdata by S. L. <bi...@sy...> - 2014-08-11 15:16:07
Check out the latest blog post on Bigdata by SYSTAP, LLC.

bigdata(R) is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates.

In the 08/11/2014 edition: Inline URIs

** Inline URIs **
------------------------------------------------------------
By Mike Personick on Aug 07, 2014 06:21 am

There is a commit working its way through CI right now in the 1.3 maintenance branch. This commit puts in place a mechanism to inline many different types of URIs directly into the statement indices, including UUIDs. The new mechanism and how to use it are described in the ticket. This change is backward compatible with old journals, but to take advantage of it you would need to reload the data so that the URIs can be inlined. It should be possible to inline all sorts of URIs with this new mechanism. Inlining terms is a great way to save space in the indices and improve performance, as it eliminates the need to round-trip terms through the dictionary indices.
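The space argument is easy to check with a small sketch. The snippet below (purely illustrative: the flag and extension byte values are invented here and are not Bigdata's actual wire format) packs a UUID into the fixed 18-byte inline layout described elsewhere on this list (1 flags byte, 1 extension byte, 16 UUID bytes), rather than routing the full lexical form through the dictionary indices:

```python
import uuid

# Illustrative flag/extension byte values -- NOT Bigdata's actual codec;
# they stand in for the "1 flags byte + 1 extension byte" of an inline IV.
FLAGS_INLINE_URI = 0x01
EXTENSION_UUID = 0x02

def pack_inline_uuid(u: uuid.UUID) -> bytes:
    """Pack a UUID into an 18-byte fixed-width key component:
    1 flags byte + 1 extension byte + 16 UUID bytes."""
    return bytes([FLAGS_INLINE_URI, EXTENSION_UUID]) + u.bytes

def unpack_inline_uuid(b: bytes) -> uuid.UUID:
    """Recover the UUID from the 18-byte inline form (lossless round trip)."""
    assert len(b) == 18
    assert b[0] == FLAGS_INLINE_URI and b[1] == EXTENSION_UUID
    return uuid.UUID(bytes=b[2:])
```

An inlined term costs a fixed 18 bytes in each statement-index entry and never touches TERM2ID/ID2TERM, which is where the round-trip savings come from.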
** Recent Articles: **
------------------------------------------------------------
* MapGraph processes nearly 30 billion edges per second on GPU cluster
* Significant increases in transaction throughput for RWStore
* Bigdata powers Linked Open Data portal for the Bavarian State Library (German Library)
* Yahoo7 using bigdata
* Formalized model for Reification Done Right: RDF* and SPARQL* semantics

============================================================
Copyright © 2014 SYSTAP, LLC. All rights reserved. You are receiving this email because you subscribed to receive information about Bigdata, a fully open-source high-performance graph database supporting the RDF data model and RDR. Bigdata operates as an embedded database or over a client/server REST API. Bigdata supports high availability and dynamic sharding, and both the Blueprints and Sesame APIs.

Our mailing address is: SYSTAP, LLC, 4501 Tower Road, Greensboro, NC 27410, USA.
From: Bigdata by S. L. <bi...@sy...> - 2014-08-04 15:01:07
Check out the latest blog post on Bigdata by SYSTAP, LLC.

In the 08/04/2014 edition: MapGraph processes nearly 30 billion edges per second on GPU cluster

** MapGraph processes nearly 30 billion edges per second on GPU cluster **
------------------------------------------------------------
By Bryan Thompson on Jul 30, 2014 03:44 am

We have been working on a multi-GPU version of MapGraph. The new code performs Breadth-First Search over 4.3 billion directed edges of a scale-free graph at 29 billion traversed edges per second (29 GTEPS) on a cluster with 64 K20 GPUs. Using the platform described in the technical report, MapGraph is 3x faster than the YarcData(R) Urika(R) appliance at what we estimate to be 1/3 of the hardware cost, for a 9x improvement in price/performance. Using K40 GPUs, a cluster of 81 GPUs has nearly 1 TB of RAM, the maximum available for the YarcData appliance. However, unlike YarcData, the MapGraph solution can scale to even larger GPU deployments. Contact us directly if you are interested in custom applications of this emerging disruptive technology.

Cray, YarcData, and Urika are trademarks of Cray, Inc.
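The 9x price/performance figure follows directly from the two ratios quoted above; a quick sanity check using only the numbers in the post:

```python
# Figures quoted in the post above.
speedup = 3.0           # MapGraph vs. the YarcData Urika appliance
cost_ratio = 1.0 / 3.0  # estimated hardware cost relative to the appliance

# Price/performance improvement = throughput gain per unit of cost.
price_performance = round(speedup / cost_ratio, 6)
assert price_performance == 9.0

# Per-GPU throughput for the 64-GPU BFS run: 29 GTEPS aggregate.
gteps_per_gpu = 29.0 / 64
assert round(gteps_per_gpu, 2) == 0.45
```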
From: Bigdata by S. L. <bi...@sy...> - 2014-07-21 15:31:06
Check out the latest blog post on Bigdata by SYSTAP, LLC.

In the 07/21/2014 edition:
* Significant increases in transaction throughput for RWStore
* Bigdata powers Linked Open Data portal for the Bavarian State Library (German Library)

** Significant increases in transaction throughput for RWStore **
------------------------------------------------------------
By Bryan Thompson on Jul 18, 2014 11:29 am

We have made some significant changes in the RWStore to improve the throughput for incremental transactions. These changes address a problem where scattered IOs lead to a bottleneck in the disk system, and they also reduce the total number of bytes written to the disk. The benefit can be very large for SATA disks, but it is substantial for SAS and SSD disks as well. How substantial? We observe throughput increasing by 3-4x over our baseline configurations for incremental data load of LUBM.

First, a bit of background. Bigdata uses clustered indices for everything. This includes the dictionary indices (TERM2ID, ID2TERM, and BLOBS) and the statement indices (SPO, POS, and OSP). In quads mode, we use a different set of clustered indices for the statements (SPOC, POSC, etc.). Some of these indices naturally have good locality on update, especially the ID2TERM and SPO/SPOC indices, and the CSPO index (in quads mode). These indices will always show good locality for transaction updates, since we sort the index writes and then write on the indices in ascending order for maximum cache effect.
However, the statement indices that start with O (OSP, OCSP) always have very poor locality. This is because the Object position varies quite a bit across the statements in any given transaction update. As a result, most index pages for the OSP/OCSP indices that are touched by a transaction will be dirtied by a single tuple during the transaction. The same problem exists, to a somewhat lesser extent, with the P indices (POS, POCS, PCSO). The TERM2ID index normally has decent locality, but if you are using UUIDs or similar globally unique identifiers in your URLs, that will cause a scattered update profile on the TERM2ID index. The best practice we recommend here is to create an inline IV type for your UUID-based URLs so that they are automatically converted into fixed-length IVs (18 bytes: 1 flags byte, 1 extension byte, and 16 bytes for the UUID). This removes the UUID-based URLs completely from the dictionary indices; they are inlined into the statement indices instead, at 18 bytes per URL.

The solution for these scattered updates is to (a) reduce the branching factors to target a 1024-byte page size (or less) for the indices with scattered update patterns (this reduces the number of bytes written to the disk); (b) enable the small-slot optimization in the RWStore (this ensures good locality on the disk for the indices with scattered update patterns); and (c) optionally reduce the write retention queue capacity for those indices (this reduces the GC overhead associated with them; there is little benefit to a high retention queue if the access pattern for the index is scattered).

Small-slot processing will be in the 1.3.2 release. To enable it before then, you need to be using branches/BIGDATA_RELEASE_1_3_0 at r8568 or above. The current advice to reduce IO in update transactions is:

* Default the BTree branching factor to 256.
* Set the default BTree retention to 4000.
* Enable the small-slot optimization.
* Override the branching factors for OSP/OCSP and POS/POSC to 64.

To do this, you need to modify your properties file and/or specify the following when creating a new namespace within bigdata:

# Enable the small-slot optimization.
com.bigdata.rwstore.RWStore.smallSlotType=1024

# Set the default B+Tree branching factor.
com.bigdata.btree.BTree.branchingFactor=256

# Set the default B+Tree write retention queue capacity.
com.bigdata.btree.writeRetentionQueue.capacity=4000

The branching factor overrides need to be made for each index in each triple store or quad store instance. For example, the following properties override the branching factors for the default bigdata namespace, which is "kb". You need to do this for each namespace that you create:

com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor=400
com.bigdata.namespace.kb.spo.SPO.com.bigdata.btree.BTree.branchingFactor=1024
com.bigdata.namespace.kb.spo.OSP.com.bigdata.btree.BTree.branchingFactor=64
com.bigdata.namespace.kb.spo.POS.com.bigdata.btree.BTree.branchingFactor=64

The small-slot optimization will take effect when you restart bigdata. The changes to the write retention queue capacity and the branching factors will only take effect when a new triple store or quad store instance is created. We still need to examine the impact on query performance from changing these various branching factors. In principle, the latency of the index is proportional to the height of the B+Tree, which grows as the log of the number of tuples with the branching factor as the base; the cost of a smaller branching factor is therefore sub-linear. Testing on BSBM 100M reveals that reduced branching factors for the indices with scattered update patterns (as recommended above) do not impact query performance.
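The sub-linear relationship can be made concrete with a small sketch. Assuming idealized, fully packed pages (a simplification; real page occupancy is lower, so treat the result as a lower bound), the height of a B+Tree over n tuples with branching factor b is roughly ceil(log_b(n)):

```python
import math

def btree_height(num_tuples: int, branching_factor: int) -> int:
    """Approximate B+Tree height, assuming fully packed pages."""
    if num_tuples <= 1:
        return 1
    return math.ceil(math.log(num_tuples) / math.log(branching_factor))

# Dropping the branching factor from 256 to 64 on a 100M-tuple index
# (the BSBM 100M scale mentioned above) adds roughly one level:
assert btree_height(100_000_000, 256) == 4
assert btree_height(100_000_000, 64) == 5
```

Roughly one extra page read per lookup, which is consistent with the BSBM 100M observation that the reduced branching factors did not measurably hurt query performance.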
** Bigdata powers Linked Open Data portal for the Bavarian State Library (German Library) **
------------------------------------------------------------
By Bryan Thompson on Jul 17, 2014 09:45 am

We are pleased to announce that the Bayerische Staatsbibliothek is in production with bigdata powering their public SPARQL endpoint. A description of the provided dataset can be found at the Datahub. For more information, please contact lo...@bs...
From: Jeremy J C. <jj...@sy...> - 2014-07-08 21:31:55
Hi Bryan,

The commit I made a while back that may help address the issue of query hints is this one:

Jeremy

=====
Subversion Revision: 8192
Test and fix for trac904: FILTER EXISTS || TRUE. AbstractJoinGroupOptimizer needs to recurse into the filter expression looking for EXISTS.
From: Bigdata by S. L. <bi...@sy...> - 2014-07-08 15:15:25
** Check out MapGraph @ GraphLab Conference 2014 **
------------------------------------------------------------
** "Think like a Vertex": get up to speed with MapGraph and Bigdata's GAS abstraction **

If you haven't checked out the latest with SYSTAP's MapGraph platform, which enables massively parallel graph processing on GPUs, come see us in San Francisco (7/21) at the GraphLab Conference. Get up to speed with our latest SIGMOD 2014 paper, "MapGraph: A High Level API for Fast Development of High Performance Graph Analytics on GPUs". Also check out the RDF GAS API, now available in the 1.3.1 Bigdata release, which lets you use the Gather-Apply-Scatter (GAS) abstraction to write a wide variety of graph traversal, graph mining, and similar classes of algorithms on the current Bigdata platform. Discounted registration to the GraphLab Conference 2014 is available.
From: Bryan T. <br...@sy...> - 2014-07-03 23:06:48
I am not sure that union is inefficient. The basic execution mechanism runs both sides of the union concurrently, and their outputs are merged at the downstream operator. What the query analyzer does not do for union is identify common subexpressions and decide either to lift them out or push them down into the union. This can cause problems with cardinality. Also, because union is not a subquery, it is always evaluated left to right (but in parallel across the union). There could be cases where bottom-up evaluation would do better, but that is a very general observation.

If you recall, BIND is not eagerly evaluated. Try replacing it with a constant and see if that fixes (or changes) things. This could well be the culprit.

I am surprised to learn that these are turned into the same query. Unless I am mistaken, ASK is basically a slice using LIMIT 1 and OFFSET 0. FILTER (NOT) EXISTS is a subquery using a solution set hash join. These seem like very different queries to me.

Bryan

> On Jul 3, 2014, at 6:37 PM, Jeremy J Carroll <jj...@sy...> wrote:
>
> ASK
> WHERE {
>   { BIND( <https://temp-base-image-2-jjc.syapse.com/bdm/api/kbobject/123> as ?key ) }
>   ?record rdf:type/syapse:subClassOf ?key .
>   { ?record sys:owner <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> }
>   UNION
>   { ?record sys:assignedProject / syapse:isPrivate false . }
>   UNION
>   { ?record sys:assignedProject / syapse:member <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> . }
> }
>
> and
>
> SELECT ?key
> WHERE {
>   { BIND( <https://temp-base-image-2-jjc.syapse.com/bdm/api/kbobject/123> as ?key ) }
>   FILTER EXISTS {
>     ?record rdf:type/syapse:subClassOf ?key .
>     { ?record sys:owner <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> }
>     UNION
>     { ?record sys:assignedProject / syapse:isPrivate false . }
>     UNION
>     { ?record sys:assignedProject / syapse:member <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> . }
>   }
> }
>
> are basically the same query; in fact the latter appears to be transformed into the former, but with my data set the first takes 50ms and the second 3000ms.
>
> Given that what I really want to do is to filter about 50 results, and the latter then takes more like 20 seconds, whereas I suspect the former will take about 3, I am likely to have to do a join in the client code, which seems all wrong!
>
> Any ideas? Are there workarounds for this other than making the ASK queries myself rather than using FILTER EXISTS?
>
> (Also, I am aware that the UNION is pretty inefficient in bigdata too; I can reformulate my kb to get rid of the UNION, but ....)
>
> ------------------------------------------------------------------------------
> Open source business process management suite built on Java and Eclipse
> Turn processes into business applications with Bonita BPM Community Edition
> Quickly connect people, data, and systems into organized workflows
> Winner of BOSSIE, CODIE, OW2 and Gartner awards
> http://p.sf.net/sfu/Bonitasoft
> _______________________________________________
> Bigdata-developers mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
From: Jeremy J C. <jj...@sy...> - 2014-07-03 22:58:22
This version seems to get nearly the same performance as the ASK. This looks like a need for an optimizer that adds the subselect LIMIT 1 automatically on a FILTER EXISTS:

SELECT ?key
WHERE {
  { BIND( <https://temp-base-image-2-jjc.syapse.com/bdm/api/kbobject/kew:KbCuration> as ?key ) }
  FILTER EXISTS {
    { SELECT ?key {
        ?record rdf:type/syapse:subClassOf ?key .
        { ?record sys:owner <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> }
        UNION
        { ?record sys:assignedProject / syapse:isPrivate false . }
        UNION
        { ?record sys:assignedProject / syapse:member <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> . }
      } LIMIT 1
    }
  }
}

Jeremy J Carroll
Principal Architect
Syapse, Inc.
From: Jeremy J C. <jj...@sy...> - 2014-07-03 22:37:54
ASK
WHERE {
  { BIND( <https://temp-base-image-2-jjc.syapse.com/bdm/api/kbobject/123> as ?key ) }
  ?record rdf:type/syapse:subClassOf ?key .
  { ?record sys:owner <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> }
  UNION
  { ?record sys:assignedProject / syapse:isPrivate false . }
  UNION
  { ?record sys:assignedProject / syapse:member <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> . }
}

and

SELECT ?key
WHERE {
  { BIND( <https://temp-base-image-2-jjc.syapse.com/bdm/api/kbobject/123> as ?key ) }
  FILTER EXISTS {
    ?record rdf:type/syapse:subClassOf ?key .
    { ?record sys:owner <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> }
    UNION
    { ?record sys:assignedProject / syapse:isPrivate false . }
    UNION
    { ?record sys:assignedProject / syapse:member <https://temp-base-image-2-jjc.syapse.com/bdm/api/syuser/17> . }
  }
}

are basically the same query; in fact the latter appears to be transformed into the former, but with my data set the first takes 50ms and the second 3000ms.

Given that what I really want to do is to filter about 50 results, and the latter then takes more like 20 seconds, whereas I suspect the former will take about 3, I am likely to have to do a join in the client code, which seems all wrong!

Any ideas? Are there workarounds for this other than making the ASK queries myself rather than using FILTER EXISTS?

(Also, I am aware that the UNION is pretty inefficient in bigdata too; I can reformulate my kb to get rid of the UNION, but ....)
From: Bryan T. <br...@sy...> - 2014-06-30 13:24:57
This is an update to an earlier message concerning trac #936 (support larger metabit allocations, http://trac.bigdata.com/ticket/936). The conversion of the metabits from the current storage strategy (in an allocation slot) to the new storage strategy (demi-spaces near the head of the file) now only occurs if the metabits region would exceed the size of the maximum allocation slot. This means that the RWStore retains binary compatibility for any file that could be written or read by previously released versions. See http://wiki.bigdata.com/wiki/index.php/DataMigration for more information. That information is also inline below.

As of 1.3.2, new and old RWStore instances will be automatically converted to use a demi-space for the metabits IFF the maximum size of the metabits region is exceeded. The maximum size of the metabits region is determined by the maximum allocator slot size, which defaults to the recommended value of 8k (8192 bytes). Before conversion, the metabits (which identify the addresses of the allocators) are stored in a single allocation slot on the store. After conversion, the metabits are stored in two alternating demi-spaces near the head of the RWStore file structure. This conversion permits the addressing of more allocators than can be stored in an allocation slot. Older code is NOT able to read the RWStore after conversion. However, older code was unable to address more metabits than would fit into a single allocator, and so could not have read or written a store which addressed more than 8k of metabits. A utility class (MetabitsUtil.java) exists to convert between these two operational modes for the metabits. However, it is not possible to convert an RWStore to the older (non-demi-space) mode once the number of allocators is greater than the maximum slot size for the RWStore, since the allocators can no longer be stored in an allocation slot.

If the maximum allocator slot size has been overridden from the default / recommended 8k, then the conversion point is likewise the overridden maximum slot size.

1. RWStore version before conversion: 0x0400
2. RWStore version after conversion: 0x0500

Thanks,
Bryan
From: Bryan T. <br...@sy...> - 2014-06-25 21:50:14
I am not aware of any issues with that mechanism. Is the JVM suffering extreme GC? You might also check the server logs to see if there is a problem with the EXPLAIN rendering for that query. I believe that this is the QueryLog class. The code is quite defensive, but it might be hitting a problem that prevents correct rendering.

Thanks,
Bryan

> On Jun 25, 2014, at 3:04 PM, Jeremy J Carroll <jj...@sy...> wrote:
>
> At the moment, I have a bigdata instance running a 'difficult' SPARQL query, and the query details view does not show the detail.
>
> What are the issues with this? Is there stuff I can do?
>
> Jeremy
From: Jeremy J C. <jj...@sy...> - 2014-06-25 21:04:35
|
At the moment I have a bigdata instance running a 'difficult' SPARQL query, and the query details view does not show the detail … What are the issues with this … is there stuff I can do? Jeremy |
From: Bryan T. <br...@sy...> - 2014-06-23 19:50:27
|
Yes. > On Jun 23, 2014, at 12:35 PM, Jeremy J Carroll <jj...@sy...> wrote: > > Our use case is sufficiently similar to: > "named graphs that are used by all of your customers and that you want to store once and then access from each customer" > > > It sounds like we should proceed with what we are currently doing, which is a little clunky, and simply note an enhancement request at this point: to be able to "see" some named graphs from one namespace in a different namespace. > > Jeremy |
From: Jeremy J C. <jj...@sy...> - 2014-06-23 18:36:03
|
Our use case is sufficiently similar to: "named graphs that are used by all of your customers and that you want to store once and then access from each customer" It sounds like we should proceed with what we are currently doing, which is a little clunky, and simply note an enhancement request at this point: to be able to "see" some named graphs from one namespace in a different namespace. Jeremy |
From: Bryan T. <br...@sy...> - 2014-06-23 17:51:34
|
Are you talking about bigdata vocabulary declaration classes, named graphs that are used by all of your customers and that you want to store once and then access from each customer, or bigdata multi-tenancy namespaces in which you want to store some data that would be visible to all of your customers? I think you mean one of the latter two.

Yes, you could expose it via a SERVICE. The issue is that query optimization will only occur within the SERVICE call and within the outer query. It will not be able to transparently optimize over the available access paths from the shared namespace and the customer namespace. But it will work out of the box.

I think that the better way to do this is to explicitly reference the shared namespace in the query so it can optimize across the available indices. There are some provisions that would partially support this, but the code will not get you all the way there. The AST operators are annotated with the namespace. You could annotate them so as to target the correct namespace, and we could develop a syntax for this.

There is an obvious security concern for this cross-tenant namespace query execution, since the tenant-level security would no longer be purely applied to the SPARQL end point URL. But there is another technical problem as well. The statement indices for the shared namespace and the per-tenant namespaces would need to share a common set of lexicon indices. If they did not, then we would need to add materialization steps to ensure that IVs had their cached Value set when crossing from one multi-tenancy namespace to another.

SERVICE is the only simple, out-of-the-box way to do this.

Bryan

> On Jun 23, 2014, at 11:33 AM, Jeremy J Carroll <jj...@sy...> wrote: > > > > We have two conceptually fairly separate aspects of knowledge in our system: > > - vocabularies to do with genetics etc. (same for all customers) > - the genetic data of our customers > > One thing that would work quite nicely for us would be a namespace include feature, where we define the vocabs in one namespace and then include them in other namespaces - and then the vocabs appear everywhere > > I note this could be achieved by a service call … and I was wondering if there is some other way that is supported > > As is, I am thinking of having a journal file with the vocabs preloaded, and then copying it before adding the customer data > > Jeremy |
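[Editor's sketch] The SERVICE-based workaround Bryan describes could look roughly like the query below. The namespace name, endpoint URL, and predicate names are all hypothetical (only the NanoSparqlServer-style `/namespace/{name}/sparql` endpoint layout is assumed), so treat this as an illustration rather than a tested recipe:

```sparql
# Query a per-tenant namespace while pulling shared vocabulary terms
# from a second "vocab" namespace via a SPARQL 1.1 SERVICE call.
# Endpoint URL and predicate names are illustrative only.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/schema#>

SELECT ?gene ?label
WHERE {
  # tenant-local data
  ?patient ex:hasVariantIn ?gene .
  # shared vocabulary exposed through its own SPARQL end point
  SERVICE <http://localhost:9999/bigdata/namespace/vocab/sparql> {
    ?gene rdfs:label ?label .
  }
}
```

As noted above, the optimizer treats the SERVICE group as a unit, so join ordering across the two namespaces is not transparent - but this form works out of the box.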
From: Jeremy J C. <jj...@sy...> - 2014-06-23 17:33:45
|
We have two conceptually fairly separate aspects of knowledge in our system:

- vocabularies to do with genetics etc. (same for all customers)
- the genetic data of our customers

One thing that would work quite nicely for us would be a namespace include feature, where we define the vocabs in one namespace and then include them in other namespaces - and then the vocabs appear everywhere.

I note this could be achieved by a service call … and I was wondering if there is some other way that is supported.

As is, I am thinking of having a journal file with the vocabs preloaded, and then copying it before adding the customer data.

Jeremy |
From: Jeremy J C. <jj...@sy...> - 2014-06-23 17:06:47
|
Hi, at Syapse we are using Sentry to co-ordinate our error logging and management. I notice that there is a SentryAppender for log4j available: https://github.com/getsentry/raven-java/tree/master/raven-log4j I am wondering if anyone has war stories about successful, or less than successful, use of such an approach - please share either on- or off-list. Thanks, Jeremy |
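[Editor's sketch] For anyone trying the same setup, the wiring might look something like the fragment below. The appender class name, property names, and DSN are all assumptions (check them against the raven-log4j README), not taken from this thread:

```properties
# Hypothetical log4j.properties fragment adding a Sentry appender
# alongside an existing console appender.
log4j.rootLogger=WARN, console, sentry

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %c - %m%n

# Class name and dsn property are assumptions based on raven-log4j docs.
log4j.appender.sentry=net.kencochrane.raven.log4j.SentryAppender
log4j.appender.sentry.dsn=https://publicKey:secretKey@app.getsentry.com/1
# Only forward ERROR and above to Sentry to limit noise.
log4j.appender.sentry.threshold=ERROR
```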
From: Bigdata by S. L. <bi...@sy...> - 2014-06-23 15:16:57
|
Check out the latest blog post on Bigdata by SYSTAP, LLC. View this email in your browser (http://us8.campaign-archive2.com/?u=807fe42a6d19f97994387207d&id=c961003d9f&e=085d8ae40c) Updates from ** bigdata ------------------------------------------------------------ bigdata(R) is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates. In the 06/23/2014 edition: * Yahoo7 using bigdata ** Yahoo7 using bigdata (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=2679d2fdca&e=085d8ae40c) ------------------------------------------------------------ By Bryan Thompson on Jun 18, 2014 03:43 pm One of the major successes that people point to for the semantic web is the semantic publishing platform at the BBC. We are pleased to announce that Yahoo7 has rolled out a semantic publishing platform based on bigdata. Read more about the Yahoo7 experience and how they have doubled their users' time on site using semantic publishing and bigdata. http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=8a95527358&e=085d8ae40c This is just one more in a list of major semantic web success stories built around the bigdata platform:

* EMC – data and host management solutions in data centers around the world (slides (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=ee08afb6b5&e=085d8ae40c) from SEMTECH 2012, NYC)
* Autodesk – graph management for the Autodesk cloud ecosystem (SEMTECH 2013, SF)
* Yahoo7 – semantic publishing (today)

Contact us if you want to be the next success. 
Read in browser » (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=fb89f395f5&e=085d8ae40c)

** Recent Articles: ------------------------------------------------------------

** Formalized model for Reification done Right: RDF* and SPARQL* semantics. (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=a228fb7412&e=085d8ae40c)
** Bigdata and Blueprints (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=8532030a22&e=085d8ae40c)
** MapGraph at GRADES 2013 (SLC, June 22nd) (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=9a51118284&e=085d8ae40c)
** Bigdata Release 1.3.1 (HA Load Balancer, Blueprints, RDR, new Workbench) (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=e46a3d7d6c&e=085d8ae40c)
** Website re-design (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=5886f22dbe&e=085d8ae40c)

============================================================ Copyright © 2014 SYSTAP, LLC, All rights reserved. You are receiving this email as you've subscribed to receive information about Bigdata, a fully open-source high-performance graph database supporting the RDF data model and RDR. Bigdata operates as an embedded database or over a client/server REST API. Bigdata supports high-availability and dynamic sharding. Bigdata supports both the Blueprints and Sesame APIs. 
Our mailing address is: SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 USA ** unsubscribe from this list (http://bigdata.us8.list-manage.com/unsubscribe?u=807fe42a6d19f97994387207d&id=400be6035d&e=085d8ae40c&c=c961003d9f) ** update subscription preferences (http://bigdata.us8.list-manage.com/profile?u=807fe42a6d19f97994387207d&id=400be6035d&e=085d8ae40c) Email Marketing Powered by MailChimp http://www.mailchimp.com/monkey-rewards/?utm_source=freemium_newsletter&utm_medium=email&utm_campaign=monkey_rewards&aid=807fe42a6d19f97994387207d&afl=1 |
From: Bryan T. <br...@sy...> - 2014-06-20 23:55:41
|
http://trac.bigdata.com/ticket/936 automatically rolls the rwstore forward to have two metabits regions in order to increase the maximum number of allocations that can be made against the store. This issue was driven by large numbers of small raw records, especially on the id2term index. The new code works with the older journals. But it does have a side effect that means the older code can no longer interpret the journal once it goes through a commit with the new code. This change will be part of the 1.3.2 release. Thanks, Bryan |
From: Bigdata by S. L. <bi...@sy...> - 2014-06-16 15:15:55
|
Check out the latest blog post on Bigdata by SYSTAP, LLC. View this email in your browser (http://us8.campaign-archive1.com/?u=807fe42a6d19f97994387207d&id=55a41b4c7e&e=085d8ae40c) Updates from ** bigdata ------------------------------------------------------------ bigdata(R) is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates. In the 06/16/2014 edition: * Formalized model for Reification done Right: RDF* and SPARQL* semantics. ** Formalized model for Reification done Right: RDF* and SPARQL* semantics. (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=7983f11a21&e=085d8ae40c) ------------------------------------------------------------ By Bryan Thompson on Jun 16, 2014 08:39 am Olaf Hartig has developed a formal model of the “Reification Done Right” concepts [1]. The model formalizes an extension to both RDF (known as RDF*) and SPARQL (known as SPARQL*). These extensions define a backwards-compatible relationship between the RDF data model and the SPARQL query language, and an alternative perspective on RDF Reification. The RDF* and SPARQL* models are introduced and formally described in Foundations of an Alternative Approach to Reification in RDF (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=ec0cc68de6&e=085d8ae40c). The key contributions of this paper are:

* Formal extensions of the RDF data model and the SPARQL algebra that reconcile RDF Reification with statement-level metadata;
* An extended syntax for TURTLE that permits easy interchange of statements about statements;
* An extended syntax for SPARQL that makes it easy to express queries and data for statements about statements; and
* Rewrite rules that may be used to translate RDF* into RDF and SPARQL* into SPARQL.

RDF* and SPARQL* allow statements to appear as Subjects or Objects in other statements. 
Statements about these “inline” statements can be interpreted as if they were statements about statements. The paper shows that this is equivalent to statements about reified RDF statement models. For example, the following statements declare a name for some resource “:bob”, an age for :bob, and provide assertions about how and where that age was obtained:

:bob foaf:name "Bob" .
<<:bob foaf:age 23>> dct:creator <http://example.com/crawlers#c1> ;
    dct:source <http://example.net/homepage-listing.html> .

and then queried using:

SELECT ?age ?src WHERE {
   ?bob foaf:name "Bob" .
   <<?bob foaf:age ?age>> dct:source ?src .
}

In both cases the << >> notation denotes a statement appearing as the Subject or Object of another statement. Further, statements may become bound to variables as shown in this alternative syntax:

SELECT ?age ?src WHERE {
   ?bob foaf:name "Bob" .
   BIND( <<?bob foaf:age ?age>> AS ?t ) .
   ?t dct:source ?src .
}

The paper proves that these examples are equivalent using RDF Reification. That is, RDF Reification already gives us a mechanism to represent, interchange, and query statements about statements. However, the paper also shows that statements about statements may be modeled and queried within the database in a wide variety of different physical schemas that allow great efficiency and data density when compared to naive indexing of RDF statement models. This gives database designers enormous freedom in how they choose to represent those statements about statements and helps to counter the impression that RDF databases are necessarily bad for problems requiring link attributes. 
For example, any of the following physical schemas could be used to represent these statements about statements:

* Explicitly model the statements about statements as reified RDF statement models;
* Associate a “statement identifier” with each statement in the database and then use it to represent statements about statements;
* Directly embed the statement “:bob foaf:age 23” into the representation of each statement about that statement (inlining within the statement indices using variable-length and recursively embedded encodings of the Subject and Object of a statement); and
* Extend the (s,p,o) table to include additional columns, in this case dct:creator and dct:source. This can be advantageous when some metadata predicate has a maximum cardinality of one and is used for most statements in the database (for example, this could be used to create an efficient bi-temporal database with statement-level metadata columns for the business-start-time, business-end-time, and transaction-time for each assertion).

By clarifying the formal semantics of RDF Reification and offering a simplified syntax for data interchange, query, and update, database designers and database users can now more easily and confidently model domains that require statement-level metadata. There is a long list of such domains, including domains that model events, domains that require link attributes, sparse matrices, the property graph model, etc.

Bigdata supports RDF* and SPARQL* for the efficient interchange, query, and update of statements about statements. Today, this is enabled through the “SIDS” option:

com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=true

This enables the historical mechanism for efficient statements about statements in bigdata. In the future, we plan to add support for RDF* and SPARQL* in the quads mode of the platform as well. This will allow statement-level metadata to co-exist seamlessly with the named graphs model. 
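[Editor's sketch] A minimal journal properties fragment enabling the SIDS option mentioned above might look like this. Only the statementIdentifiers line comes from the post; the journal path is illustrative and the other property names are assumptions based on common bigdata configurations, to be checked against the bigdata documentation:

```properties
# Hypothetical journal configuration enabling statement identifiers (SIDS).
com.bigdata.journal.AbstractJournal.file=/var/data/bigdata/sids.jnl
# SIDS are a triples-mode feature; quads-mode RDF* support is planned.
com.bigdata.rdf.store.AbstractTripleStore.quads=false
# Enable the historical statements-about-statements mechanism (from the post).
com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=true
```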
Thanks, Bryan

[1] http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=1f3e1bac62&e=085d8ae40c

Read in browser » (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=ca96f6d66f&e=085d8ae40c)

** Recent Articles: ------------------------------------------------------------

** Bigdata and Blueprints (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=d625acfee9&e=085d8ae40c)
** MapGraph at GRADES 2013 (SLC, June 22nd) (http://bigdata.us8.list-manage1.com/track/click?u=807fe42a6d19f97994387207d&id=1c7937d5bb&e=085d8ae40c)
** Bigdata Release 1.3.1 (HA Load Balancer, Blueprints, RDR, new Workbench) (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=f9a256073e&e=085d8ae40c)
** Website re-design (http://bigdata.us8.list-manage.com/track/click?u=807fe42a6d19f97994387207d&id=df2b266c7e&e=085d8ae40c)

============================================================ Copyright © 2014 SYSTAP, LLC, All rights reserved. You are receiving this email as you've subscribed to receive information about Bigdata, a fully open-source high-performance graph database supporting the RDF data model and RDR. Bigdata operates as an embedded database or over a client/server REST API. Bigdata supports high-availability and dynamic sharding. Bigdata supports both the Blueprints and Sesame APIs. 
|
From: Bryan T. <br...@sy...> - 2014-06-09 19:46:33
|
I have updated the documentation on the REST API wiki page [1] to indicate the presence of the "bigdata" context path in the web application since 1.3.1: http://localhost:port/bigdata The "bigdata" context path was not consistently present prior to this release. This was fixed when we moved to jetty 9.1 and cleaned up the jetty deployment model. Thanks, Bryan [1] http://wiki.bigdata.com/wiki/index.php/NanoSparqlServer |
From: Peter A. <ans...@gm...> - 2014-06-04 20:25:09
|
Hi Bryan, Sesame-2.8 is the RDF-1.1 release where these new mime types will be supported. Cheers, Peter > On 5 Jun 2014, at 4:21 am, Bryan Thompson <br...@sy...> wrote: > > @Jeremy: I think that this is a sesame issue > > @Mike: Does openrdf 2.7 support RDF 1.1 for ntriples? > > --snip-- > The original ntriples from RDF 1.0 is an ASCII format with mime type text/plain; the current recommendation has mime type application/ntriples and is utf8 > > Experimentally bigdata construct queries support the RDF 1.0 version but not the RDF 1.1 version > > Should I add a trac item about this, or is this more a sesame issue …. |
From: Bryan T. <br...@sy...> - 2014-06-04 18:21:41
|
@Jeremy: I think that this is a sesame issue @Mike: Does openrdf 2.7 support RDF 1.1 for ntriples? --snip-- The original ntriples from RDF 1.0 is an ASCII format with mime type text/plain; the current recommendation has mime type application/n-triples and is UTF-8 Experimentally bigdata construct queries support the RDF 1.0 version but not the RDF 1.1 version Should I add a trac item about this, or is this more a sesame issue … |
From: Jeremy J C. <jj...@sy...> - 2014-06-04 18:12:24
|
I am looking at how to save data from a graph in a bigdata instance to a file in some standard format. As far as I can tell, I am supposed to be using a simple CONSTRUCT query like this one:

CONSTRUCT { ?s ?p ?o } WHERE { GRAPH <http://localhost:8000/graph/vocabulary> { ?s ?p ?o } }

Because I want it to be moderately git-friendly, I am inclined to go with ntriples and then pass the output through sort (this is not ideal, particularly with blank nodes, but it is cheap and cheerful; there is a public-domain NlogN-complexity set of shell scripts on a dropped patent application in my name that does a better job with blank nodes than sort [further discussion of this off-list please]).

The original ntriples from RDF 1.0 is an ASCII format with mime type text/plain; the current recommendation has mime type application/n-triples and is UTF-8. Experimentally, bigdata construct queries support the RDF 1.0 version but not the RDF 1.1 version. Should I add a trac item about this, or is this more a sesame issue …

Jeremy |
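[Editor's sketch] The practical difference between the two N-Triples serializations shows up with non-ASCII literals; the triple below is invented for illustration, but the encoding rules are those of the respective specs:

```
# RDF 1.0 N-Triples (US-ASCII, mime type text/plain):
# non-ASCII characters must be written as \uXXXX escapes.
<http://example.org/place/1> <http://www.w3.org/2000/01/rdf-schema#label> "Caf\u00E9" .

# RDF 1.1 N-Triples (UTF-8, mime type application/n-triples):
# the same literal may be written directly.
<http://example.org/place/1> <http://www.w3.org/2000/01/rdf-schema#label> "Café" .
```

Both forms denote the same triple, which is why a writer that only emits the RDF 1.0 form is still usable for ASCII-safe round-tripping.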
From: Bryan T. <br...@sy...> - 2014-05-30 17:45:29
|
Anzo does do a quads mode construct with bigdata, but you are correct. I do not think that is out of the box for bigdata. Bryan > On May 30, 2014, at 12:46 PM, Jeremy J Carroll <jj...@sy...> wrote: > > > thanks for the reply > > >> On May 30, 2014, at 9:41 AM, Bryan Thompson <br...@sy...> wrote: >> >> A CONSTRUCT will do this and you could conneg for your choice of supported quads writers. > > I thought CONSTRUCT only worked for graphs, not quads … > > Glancing at http://www.w3.org/TR/2013/REC-sparql11-http-rdf-update-20130321/ > it doesn't feel that, even if we implemented that (which is on my to-do list but pretty low down), it would really help - because the unit of operation is the graph > > upwards and onwards or something like that! > > Jeremy > |
From: Jeremy J C. <jj...@sy...> - 2014-05-30 16:46:37
|
thanks for the reply

On May 30, 2014, at 9:41 AM, Bryan Thompson <br...@sy...> wrote: > A CONSTRUCT will do this and you could conneg for your choice of supported quads writers.

I thought CONSTRUCT only worked for graphs, not quads …

Glancing at http://www.w3.org/TR/2013/REC-sparql11-http-rdf-update-20130321/ it doesn't feel that, even if we implemented that (which is on my to-do list but pretty low down), it would really help - because the unit of operation is the graph.

upwards and onwards or something like that!

Jeremy |