|
From: Jeremy J C. <jj...@sy...> - 2016-03-17 21:51:31
|
As part of our health check that ensures we know when our systems go down, we send the following query to all our Blazegraph instances every minute:

ASK
FROM <http://does-not-exist.example.org/an/empty/named/graph>
FROM NAMED <http://does-not-exist.example.org/an/empty/named/graph>
WHERE { }

Normally this is very quick! However, during a period of heavy write activity using INSERT-by-POST commands that seemed to be taking around 30s (this is from logs one level out, so there is room for doubt), we were occasionally seeing these queries - the trivial ASK - take over 5s.

Is this expected? My general belief was that writes and concurrent reads did not strongly conflict in Blazegraph.

Jeremy
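For concreteness, here is a minimal standalone Java sketch of issuing this check via the SPARQL protocol over HTTP. The endpoint URL and the 5s timeout are illustrative assumptions (our actual checker is not this code):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AskHealthCheck {
    public static void main(final String[] args) throws Exception {
        final String query = "ASK"
                + " FROM <http://does-not-exist.example.org/an/empty/named/graph>"
                + " FROM NAMED <http://does-not-exist.example.org/an/empty/named/graph>"
                + " WHERE { }";
        // Endpoint URL is an assumption (stock NanoSparqlServer layout).
        final URL url = new URL("http://localhost:9999/bigdata/sparql");
        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000); // flag the instance if the trivial ASK exceeds 5s
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("query=" + URLEncoder.encode(query, "UTF-8"))
                    .getBytes("UTF-8"));
        }
        // HTTP 200 plus a boolean result body means the instance is answering.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}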
|
From: Bryan T. <br...@sy...> - 2016-03-17 21:54:12
|
5s is probably a major GC event.

Bryan
|
From: Bryan T. <br...@sy...> - 2016-03-17 21:55:06
|
You might try the EST_CARD interface for a lightweight request. It will return a range count without the overhead of parsing a SPARQL request.
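For example, a minimal Java sketch of hitting that interface. The endpoint URL is an assumption (stock NanoSparqlServer layout), and ESTCARD with no s/p/o/c bindings is assumed to report the range count over all statements:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EstCardCheck {
    public static void main(final String[] args) throws Exception {
        // ESTCARD is answered from index metadata, with no SPARQL parsing.
        final URL url = new URL("http://localhost:9999/bigdata/sparql?ESTCARD");
        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000); // a slow answer counts as a failed check
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // small XML with the range count
            }
        }
    }
}

Bryan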
|
From: Jeremy J C. <jj...@sy...> - 2016-03-18 22:53:52
|
I checked the GC logs. We have no GC stoppage of more than a second for days on either side of the slow query. The slow query was on March 6th; there was a 3s stoppage on Feb 29th and a 1.8s stoppage on March 9th.

Any other thoughts?

Jeremy
|
From: Bryan T. <br...@sy...> - 2016-03-19 18:58:08
|
Do you have full logs for the server during this interval?

Bryan
|
From: Jeremy J C. <jj...@sy...> - 2016-03-21 16:41:01
|
The logs I have are:
- full GC logs
- Python-level logs of HTTP requests and errors/warnings (of our application server, not the Blazegraph HTTP requests)

I can map those logs onto SPARQL requests approximately, and I see that at the point where I got the long wait there was a single large create operation using INSERT-by-POST, taking approximately 30s (including Python processing), of which normally much more than 50% is Blazegraph time.

Very minor other activity was going on, and no interesting GC activity.

Jeremy
|
From: Bryan T. <br...@sy...> - 2016-03-21 19:12:13
|
Is the thought then that there might be some barrier that is blocking the start of a new query during some part of that large create?

Bryan
|
From: Jeremy J C. <jj...@sy...> - 2016-03-21 20:27:40
|
Yes - or blocking some other part of the process ...

Jeremy
|
From: Bryan T. <br...@sy...> - 2016-03-21 20:38:33
|
That is possible. There are some barriers around commit processing that
might prevent new tx starts. So you could have a commit that was taking a
few seconds and during that time new queries might block. In particular,
AbstractJournal.commitNow() takes the following lock.
    final WriteLock lock = _fieldReadWriteLock.writeLock();
    lock.lock();
That lock is contended by some other code paths. For example,
getIndexLocal() in the same class needs that lock if there is a cache miss.
We do use a read/write lock there, but once commitNow() gets the write lock
no other thread will be able to proceed.
It is possible that we could defer acquiring that lock. The slowest part of
the commit is flushing the indices to the disk. If we take a lock that is
specific to the commit, and then wait until we have already flushed the
indices to the disk, we might be able to reduce this latency. Of course,
such changes raise the possibility of lock ordering problems which could
lead to a deadlock. So we would have to take a pretty close look at this
before making the change.
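To make that concrete, here is the shape such a change might take. This is a sketch only, not the actual Blazegraph code path, and the flush and root-block helpers below are hypothetical stand-ins:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: flush the dirty indices (the slow part) BEFORE taking the
// journal's write lock, so readers are excluded only for the short
// root-block update at the end of the commit.
class CommitSketch {
    private final ReentrantReadWriteLock _fieldReadWriteLock =
            new ReentrantReadWriteLock();

    void commitNow() {
        flushWriteSetToDisk(); // slow: no exclusive lock held yet
        _fieldReadWriteLock.writeLock().lock(); // short critical section
        try {
            writeCommitRecordAndRootBlock(); // cheap atomic switch-over
        } finally {
            _fieldReadWriteLock.writeLock().unlock();
        }
    }

    private void flushWriteSetToDisk() { /* stand-in for the real flush */ }
    private void writeCommitRecordAndRootBlock() { /* stand-in */ }
}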
Thanks,
Bryan
|
|
From: Jeremy J C. <jj...@sy...> - 2016-03-21 22:24:23
|
OK, looking at the implementation of commitNow(), I take it to be a fairly low-level piece of code where holding the write lock seems reasonable. The easiest way to reduce the length of time that the write lock is held is to either reduce the amount of data being written or increase the disk speed.

To understand how long the write lock is held, it looks like we should enable INFO level on the logger; then we will see commit times, which will give us an idea of how bad the problem is.

If we cannot reduce the commit time sufficiently, we will get back to you.

Does that sound like a reasonable plan on this issue?
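Concretely, I assume that means a log4j.properties line like the following (the logger name is my guess from the class under discussion):

log4j.logger.com.bigdata.journal.AbstractJournal=INFO

Jeremy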
|
From: Bryan T. <br...@sy...> - 2016-03-21 22:26:53
|
Certainly. Also, as I indicated, it may be possible to reduce the lockout period. In principle, it seems that flushing the write cache outside of the lock would be safe. But this would need a careful code review.

Thanks,
Bryan
|
From: Jeremy J C. <jj...@sy...> - 2016-03-22 17:26:17
|
With more testing I found that when repeatedly inserting a few million triples into a journal holding two or three billion triples, the journal size normally does not change; but when it did grow, it grew by about 40G (the journal is about 500G).

I am wondering where in the code that file growth happens. I suspect that could easily take 5 seconds and lock out other activity while it is happening.

The latency reported in the commitNow function was pretty acceptable, at 13051 nanoseconds.

Jeremy
|
From: Bryan T. <br...@sy...> - 2016-03-22 17:31:32
|
It is extended in RWStrategy.truncate(), which calls through to
RWStore.establishExtent(). I would recommend turning on the txLog logger.
This provides better information about the different phases of the commit
and their latency.
    public void truncate(final long extent) {
        m_store.establishExtent(extent);
    }
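For example, assuming the txLog logger keeps the name used in the AbstractJournal source (com.bigdata.txLog is an assumption), the log4j.properties fragment would be:

log4j.logger.com.bigdata.txLog=INFO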
Bryan
|
|
From: Joakim S. <joa...@bl...> - 2016-03-28 21:17:49
|
Hi,

Do you plan to add support for JSONP in your SPARQL endpoint, to overcome cross-domain restrictions?
|
From: Stas M. <sma...@wi...> - 2016-03-28 23:06:52
|
Hi!

> Do you plan to add support for JSONP in your SPARQL endpoint to overcome cross-domain restrictions?

I think cross-domain restrictions can also be worked around by having a proxy in front of Blazegraph that adds the header 'Access-Control-Allow-Origin: *'.
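For example, a minimal nginx sketch of such a proxy (the listen port, upstream address, and path are assumptions):

server {
    listen 80;
    location /bigdata/ {
        # Forward to the Blazegraph instance; adjust address and path.
        proxy_pass http://127.0.0.1:9999/bigdata/;
        add_header Access-Control-Allow-Origin "*";
    }
}

--
Stas Malyshev
sma...@wi...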