This list is closed, nobody may subscribe to it.
Messages by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 |  | 19 | 8 | 25 | 16 | 77 | 131 | 76 | 30 | 7 | 3 |  |
| 2011 |  |  |  |  | 2 | 2 | 16 | 3 | 1 |  | 7 | 7 |
| 2012 | 10 | 1 | 8 | 6 | 1 | 3 | 1 |  | 1 |  | 8 | 2 |
| 2013 | 5 | 12 | 2 | 1 | 1 | 1 | 22 | 50 | 31 | 64 | 83 | 28 |
| 2014 | 31 | 18 | 27 | 39 | 45 | 15 | 6 | 27 | 6 | 67 | 70 | 1 |
| 2015 | 3 | 18 | 22 | 121 | 42 | 17 | 8 | 11 | 26 | 15 | 66 | 38 |
| 2016 | 14 | 59 | 28 | 44 | 21 | 12 | 9 | 11 | 4 | 2 | 1 |  |
| 2017 | 20 | 7 | 4 | 18 | 7 | 3 | 13 | 2 | 4 | 9 | 2 | 5 |
| 2018 |  |  |  | 2 |  |  |  |  |  |  |  |  |
| 2019 |  |  | 1 |  |  |  |  |  |  |  |  |  |
From: Bryan T. <br...@sy...> - 2014-02-19 19:59:04
|
You need to clear out your cookies for the last hour. Then you can choose a different login. Bryan From: Mike Personick <mi...@sy...<mailto:mi...@sy...>> Date: Wednesday, February 19, 2014 2:55 PM To: Bryan Thompson <br...@sy...<mailto:br...@sy...>>, Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration I logged in using my gmail account credentials, which I do not want to use, I want to use my systap credentials. Now when I logout and try to log back in, there is no way for me to change which open id account I want to log in under, it just goes straight to the gmail credentials. From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 12:51 PM To: Mike Personick <mi...@sy...<mailto:mi...@sy...>>, Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration Martyn has to do a migration for your username. What you do is login with Google's OpenID. Martyn has scripts to port your OpenID login into the wiki and trac systems. I do not know if he has already run those scripts for you, or if you need to login before he can run the script. I think that it might be the latter, in which case you will need to let him know that you have logged in and then he can do the magic script dance for you. Bryan From: Mike Personick <mi...@sy...<mailto:mi...@sy...>> Date: Wednesday, February 19, 2014 2:49 PM To: Bryan Thompson <br...@sy...<mailto:br...@sy...>>, Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration I'm not exactly sure I understand how to log in to the new trac such that I still have access to my old tickets under my sourceforge user name. From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 5:51 AM To: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration We have done the basic project migration. Developers probably need to check out a CLEAN COPY of bigdata from the new SVN repository (see https://sourceforge.net/projects/bigdata/). If you have any uncommitted edits, then will need to be applied to that clean checkout before they can be committed to the new SVN repository location! Thanks, Bryanb From: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>> Date: Tuesday, February 18, 2014 10:50 AM To: "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: [Bigdata-developers] Bigdata Allura Migration Hi all, Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2. One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts. 
The URLs to access the applications are: http://trac.bigdata.com http://wiki.bigdata.com Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles. If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy...<mailto:ma...@sy...> with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated! cheers - Martyn Preliminary Schedule - Wednesday 19th February Trac and Wiki 1) 0800 GMT Freeze new Ticket creation and updates on Trac Freeze updates to Wiki Migrate Trac and Wiki data 2) 1200 GMT Trac and Wiki will be available from new locations for read access 3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining. Allura SVN A) 1200 GMT Freeze any updates to SVN. Commit local changes to developer sub-branches where possible. B) 1400 GMT commence migration of SVN to Allura C) Notification of new SVN URLs will be emailed when ready |
From: Bryan T. <br...@sy...> - 2014-02-19 19:57:26
|
You need to do a new checkout from the new SVN. Did you do that? Then you have to migrate any changes. That is why Martyn sent out notice before the change over…

Bryan

On 2/19/14 2:50 PM, "Mike Personick" <mi...@sy...> wrote:

>I cannot commit using my sourceforge credentials.
>
>On 2/19/14 6:50 AM, "Bryan Thompson" <br...@sy...> wrote:
>
>>FYI, the new SVN repository should be ready for use.
>>Thanks,
>>Bryan
>>
>>On 2/19/14 8:15 AM, "SourceForge.net" <nor...@in...> wrote:
>>
>>>Your code repository in upgraded project bigdata is now ready for use.
>>>
>>>Old repository url: http://bigdata.svn.sourceforge.net/svnroot/bigdata
>>>
>>>New repository checkout command: svn checkout --username=thompsonbry svn+ssh://tho...@sv.../p/bigdata/code/ bigdata-code
>>>
>>>You should do a checkout using the new repository location. The old repository is read-only now.
>>>
>>>For more detailed instructions on migrating to your new repo, please see https://sourceforge.net/p/forge/community-docs/Repository%20Upgrade%20FAQ/
>>>
>>>--
>>>SourceForge.net has sent this mailing to you as a registered user of the SourceForge.net site to convey important information regarding your SourceForge.net account or your use of SourceForge.net services. If you have concerns about this mailing please contact our Support team per: http://sourceforge.net/support
>>
>>------------------------------------------------------------------------------
>>Managing the Performance of Cloud-Based Applications
>>Take advantage of what the Cloud has to offer - Avoid Common Pitfalls.
>>Read the Whitepaper.
>>http://pubads.g.doubleclick.net/gampad/clk?id=121054471&iu=/4140/ostg.clktrk
>>_______________________________________________
>>Bigdata-developers mailing list
>>Big...@li...
>>https://lists.sourceforge.net/lists/listinfo/bigdata-developers
> |
From: Mike P. <mi...@sy...> - 2014-02-19 19:56:09
|
I logged in using my gmail account credentials, which I do not want to use, I want to use my systap credentials. Now when I logout and try to log back in, there is no way for me to change which open id account I want to log in under, it just goes straight to the gmail credentials. From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 12:51 PM To: Mike Personick <mi...@sy...<mailto:mi...@sy...>>, Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration Martyn has to do a migration for your username. What you do is login with Google's OpenID. Martyn has scripts to port your OpenID login into the wiki and trac systems. I do not know if he has already run those scripts for you, or if you need to login before he can run the script. I think that it might be the latter, in which case you will need to let him know that you have logged in and then he can do the magic script dance for you. Bryan From: Mike Personick <mi...@sy...<mailto:mi...@sy...>> Date: Wednesday, February 19, 2014 2:49 PM To: Bryan Thompson <br...@sy...<mailto:br...@sy...>>, Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration I'm not exactly sure I understand how to log in to the new trac such that I still have access to my old tickets under my sourceforge user name. From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 5:51 AM To: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration We have done the basic project migration. Developers probably need to check out a CLEAN COPY of bigdata from the new SVN repository (see https://sourceforge.net/projects/bigdata/). If you have any uncommitted edits, then will need to be applied to that clean checkout before they can be committed to the new SVN repository location! Thanks, Bryanb From: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>> Date: Tuesday, February 18, 2014 10:50 AM To: "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: [Bigdata-developers] Bigdata Allura Migration Hi all, Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2. One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts. The URLs to access the applications are: http://trac.bigdata.com http://wiki.bigdata.com Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles. 
If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy...<mailto:ma...@sy...> with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated! cheers - Martyn Preliminary Schedule - Wednesday 19th February Trac and Wiki 1) 0800 GMT Freeze new Ticket creation and updates on Trac Freeze updates to Wiki Migrate Trac and Wiki data 2) 1200 GMT Trac and Wiki will be available from new locations for read access 3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining. Allura SVN A) 1200 GMT Freeze any updates to SVN. Commit local changes to developer sub-branches where possible. B) 1400 GMT commence migration of SVN to Allura C) Notification of new SVN URLs will be emailed when ready |
From: Bryan T. <br...@sy...> - 2014-02-19 19:52:20
|
Martyn has to do a migration for your username. What you do is login with Google's OpenID. Martyn has scripts to port your OpenID login into the wiki and trac systems. I do not know if he has already run those scripts for you, or if you need to login before he can run the script. I think that it might be the latter, in which case you will need to let him know that you have logged in and then he can do the magic script dance for you. Bryan From: Mike Personick <mi...@sy...<mailto:mi...@sy...>> Date: Wednesday, February 19, 2014 2:49 PM To: Bryan Thompson <br...@sy...<mailto:br...@sy...>>, Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration I'm not exactly sure I understand how to log in to the new trac such that I still have access to my old tickets under my sourceforge user name. From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 5:51 AM To: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration We have done the basic project migration. Developers probably need to check out a CLEAN COPY of bigdata from the new SVN repository (see https://sourceforge.net/projects/bigdata/). If you have any uncommitted edits, then will need to be applied to that clean checkout before they can be committed to the new SVN repository location! Thanks, Bryanb From: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>> Date: Tuesday, February 18, 2014 10:50 AM To: "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: [Bigdata-developers] Bigdata Allura Migration Hi all, Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2. One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts. The URLs to access the applications are: http://trac.bigdata.com http://wiki.bigdata.com Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles. If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy...<mailto:ma...@sy...> with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated! 
cheers - Martyn Preliminary Schedule - Wednesday 19th February Trac and Wiki 1) 0800 GMT Freeze new Ticket creation and updates on Trac Freeze updates to Wiki Migrate Trac and Wiki data 2) 1200 GMT Trac and Wiki will be available from new locations for read access 3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining. Allura SVN A) 1200 GMT Freeze any updates to SVN. Commit local changes to developer sub-branches where possible. B) 1400 GMT commence migration of SVN to Allura C) Notification of new SVN URLs will be emailed when ready |
From: Mike P. <mi...@sy...> - 2014-02-19 19:51:18
|
I cannot commit using my sourceforge credentials. On 2/19/14 6:50 AM, "Bryan Thompson" <br...@sy...> wrote: >FYI, the new SVN repository should be ready for use. >Thanks, >Bryan > >On 2/19/14 8:15 AM, "SourceForge.net" <nor...@in...> >wrote: > >>Your code repository in upgraded project bigdata is now ready for use. >> >>Old repository url: http://bigdata.svn.sourceforge.net/svnroot/bigdata >> >>New repository checkout command: svn checkout --username=thompsonbry >>svn+ssh://tho...@sv.../p/bigdata/code/ bigdata-code >> >>You should do a checkout using the new repository location. The old >>repository is read-only now. >> >>For more detailed instructions on migrating to your new repo, please see >>https://sourceforge.net/p/forge/community-docs/Repository%20Upgrade%20FAQ >>/ >> >>-- >>SourceForge.net has sent this mailing to you as a registered user of >>the SourceForge.net site to convey important information regarding >>your SourceForge.net account or your use of SourceForge.net services. >>If you have concerns about this mailing please contact our Support >>team per: http://sourceforge.net/support > > >-------------------------------------------------------------------------- >---- >Managing the Performance of Cloud-Based Applications >Take advantage of what the Cloud has to offer - Avoid Common Pitfalls. >Read the Whitepaper. >http://pubads.g.doubleclick.net/gampad/clk?id=121054471&iu=/4140/ostg.clkt >rk >_______________________________________________ >Bigdata-developers mailing list >Big...@li... >https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Mike P. <mi...@sy...> - 2014-02-19 19:49:53
|
I'm not exactly sure I understand how to log in to the new trac such that I still have access to my old tickets under my sourceforge user name. From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 5:51 AM To: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration We have done the basic project migration. Developers probably need to check out a CLEAN COPY of bigdata from the new SVN repository (see https://sourceforge.net/projects/bigdata/). If you have any uncommitted edits, then will need to be applied to that clean checkout before they can be committed to the new SVN repository location! Thanks, Bryanb From: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>> Date: Tuesday, February 18, 2014 10:50 AM To: "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: [Bigdata-developers] Bigdata Allura Migration Hi all, Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2. One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts. The URLs to access the applications are: http://trac.bigdata.com http://wiki.bigdata.com Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles. If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy...<mailto:ma...@sy...> with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated! cheers - Martyn Preliminary Schedule - Wednesday 19th February Trac and Wiki 1) 0800 GMT Freeze new Ticket creation and updates on Trac Freeze updates to Wiki Migrate Trac and Wiki data 2) 1200 GMT Trac and Wiki will be available from new locations for read access 3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining. Allura SVN A) 1200 GMT Freeze any updates to SVN. Commit local changes to developer sub-branches where possible. B) 1400 GMT commence migration of SVN to Allura C) Notification of new SVN URLs will be emailed when ready |
From: Bryan T. <br...@sy...> - 2014-02-19 13:52:05
|
FYI, the new SVN repository should be ready for use. Thanks, Bryan On 2/19/14 8:15 AM, "SourceForge.net" <nor...@in...> wrote: >Your code repository in upgraded project bigdata is now ready for use. > >Old repository url: http://bigdata.svn.sourceforge.net/svnroot/bigdata > >New repository checkout command: svn checkout --username=thompsonbry >svn+ssh://tho...@sv.../p/bigdata/code/ bigdata-code > >You should do a checkout using the new repository location. The old >repository is read-only now. > >For more detailed instructions on migrating to your new repo, please see >https://sourceforge.net/p/forge/community-docs/Repository%20Upgrade%20FAQ/ > >-- >SourceForge.net has sent this mailing to you as a registered user of >the SourceForge.net site to convey important information regarding >your SourceForge.net account or your use of SourceForge.net services. >If you have concerns about this mailing please contact our Support >team per: http://sourceforge.net/support |
From: Bryan T. <br...@sy...> - 2014-02-19 13:12:04
|
Just a caution – the SVN repository is still being imported. That means that it is currently empty. I will post another announcement once the SVN import is finished. Bryan From: Bryan Thompson <br...@sy...<mailto:br...@sy...>> Date: Wednesday, February 19, 2014 7:51 AM To: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>>, "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: Re: [Bigdata-developers] Bigdata Allura Migration We have done the basic project migration. Developers probably need to check out a CLEAN COPY of bigdata from the new SVN repository (see https://sourceforge.net/projects/bigdata/). If you have any uncommitted edits, then will need to be applied to that clean checkout before they can be committed to the new SVN repository location! Thanks, Bryanb From: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>> Date: Tuesday, February 18, 2014 10:50 AM To: "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: [Bigdata-developers] Bigdata Allura Migration Hi all, Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2. One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts. The URLs to access the applications are: http://trac.bigdata.com http://wiki.bigdata.com Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles. If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy...<mailto:ma...@sy...> with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated! cheers - Martyn Preliminary Schedule - Wednesday 19th February Trac and Wiki 1) 0800 GMT Freeze new Ticket creation and updates on Trac Freeze updates to Wiki Migrate Trac and Wiki data 2) 1200 GMT Trac and Wiki will be available from new locations for read access 3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining. Allura SVN A) 1200 GMT Freeze any updates to SVN. Commit local changes to developer sub-branches where possible. B) 1400 GMT commence migration of SVN to Allura C) Notification of new SVN URLs will be emailed when ready |
From: Bryan T. <br...@sy...> - 2014-02-19 12:52:44
|
We have done the basic project migration. Developers probably need to check out a CLEAN COPY of bigdata from the new SVN repository (see https://sourceforge.net/projects/bigdata/). If you have any uncommitted edits, then will need to be applied to that clean checkout before they can be committed to the new SVN repository location! Thanks, Bryanb From: Martyn Cutcher <ma...@sy...<mailto:ma...@sy...>> Date: Tuesday, February 18, 2014 10:50 AM To: "big...@li...<mailto:big...@li...>" <big...@li...<mailto:big...@li...>> Subject: [Bigdata-developers] Bigdata Allura Migration Hi all, Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2. One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts. The URLs to access the applications are: http://trac.bigdata.com http://wiki.bigdata.com Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles. If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy...<mailto:ma...@sy...> with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated! cheers - Martyn Preliminary Schedule - Wednesday 19th February Trac and Wiki 1) 0800 GMT Freeze new Ticket creation and updates on Trac Freeze updates to Wiki Migrate Trac and Wiki data 2) 1200 GMT Trac and Wiki will be available from new locations for read access 3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining. Allura SVN A) 1200 GMT Freeze any updates to SVN. Commit local changes to developer sub-branches where possible. B) 1400 GMT commence migration of SVN to Allura C) Notification of new SVN URLs will be emailed when ready |
From: Martyn C. <ma...@sy...> - 2014-02-19 08:04:09
|
TRAC AND WIKI FREEZE

Please do not create or modify Trac issues or Wiki pages until further notice.

- Martyn |
From: Martyn C. <ma...@sy...> - 2014-02-18 17:07:39
|
Hi all,

Very soon we will be migrating the Bigdata sourceforge account to the new Allura platform. We have taken the decision to continue to use the existing Trac issue management system and MediaWiki, but will now host these applications ourselves. We have set up the applications using a server on Amazon EC2.

One thing that will change is the user management previously integrated with sourceforge. Instead we have configured the new applications to use OpenID. Users will login using their Google account. This leads to a few issues for existing users, since in some cases we need to be able to identify the new OpenID users with their original accounts.

The URLs to access the applications are:

http://trac.bigdata.com
http://wiki.bigdata.com

Most users can simply login to both using an OpenID account - we have restricted the trac login to a required Google account and are aware of the inconsistency of login options between the wiki and trac, but please bear with us while we try and iron out the wrinkles.

If you have previously created or commented on trac tickets, or edited wiki articles then we will need to run some scripts to tie your new OpenID user to the existing trac or wiki user. If so then please email me at ma...@sy... with your old and new user names for trac and/or wiki as appropriate and I will try to get your configuration updated ASAP. Your patience will be greatly appreciated!

cheers - Martyn

Preliminary Schedule - Wednesday 19th February

Trac and Wiki
1) 0800 GMT: Freeze new Ticket creation and updates on Trac. Freeze updates to Wiki. Migrate Trac and Wiki data.
2) 1200 GMT: Trac and Wiki will be available from new locations for read access.
3) Developers should login to trac.bigdata.com and wiki.bigdata.com and mail me their user names for joining.

Allura SVN
A) 1200 GMT: Freeze any updates to SVN. Commit local changes to developer sub-branches where possible.
B) 1400 GMT: Commence migration of SVN to Allura.
C) Notification of new SVN URLs will be emailed when ready. |
From: Bryan T. <br...@sy...> - 2014-02-11 14:06:25
|
Ok. It sounds like the Unicode is two things: 1. The HTML FORM might not be Unicode clean. Can you point to what we need to be doing there? 2. The files were using a format that correctly defaults to US-ASCII as described at [1] and in the various formats linked from that page. I've copied the developer list on this. If you reply all, you need to subscribe to the developers list first. I have very little control (none) over the forum spam policy. However, once we migrate to the SF Allura system (very soon) the new forums will have a much smarter anti-spam policy (or so they tell us). Thanks Bryan [1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlSe rver#MIME_Types On 2/11/14 4:57 AM, "Andreas Kahl" <ka...@bs...> wrote: >Bryan, > >please find my reply to the forum in this mail. I cannot post there >sensibly because everything containing URIs is considered SPAM and >blocked. Please feel free to publish the text below if you have more >privileges in sourceforge.net. > >Currently I am re-converting our data to TURTLE; a new load of the >dataset should be finished by the end of next week. > >Best Regards >Andreas > >While writing some small tester application I found out that the problem >is not as severe as it looks using curl. At the moment I am unsure >whether we need a ticket/program change in Bigdata. The only thing to >think of would be Bigdata defaulting to ASCII when reading NTRIPLES >instead of UTF-8, and UTF-8-INSERT\'s not working from the html-form. > >If I use Sesame\'s SPARQLRepository and send >[code]PREFIX dc: <http://purl.org/dc/elements/1.1/> >INSERT DATA { <http://example/book1> dc:title \"München\".}[/code] > >I get a correct \'ü\' in the result for >[code]PREFIX dc: <http://purl.org/dc/elements/1.1/> >SELECT * { <http://example/book1> dc:title ?o } LIMIT 10[/code] >Although the INSERT does not work through the html-SPARQL form. > >I also checked the LOAD command: As described in the documentation it >processes UTF-8 correctly except for NTRIPLES which requires ASCII and >\\u-Sequences for non-ASCII chars. > >The outputs of my tester program are: >[code]############################################## >Running query LOAD <file:/tmp/test.nt> >Result: o=\"M��nchen\" >UTF-8-Test FAILED > >############################################## >Running query LOAD <file:/tmp/test.ttl> >Result: o=\"München\" >UTF-8-Test OK > >############################################## >Running query LOAD <file:/tmp/test.xml> >Result: o=\"München\" >UTF-8-Test OK > >############################################## >Running query PREFIX dc: <http://purl.org/dc/elements/1.1/> >INSERT DATA >{ ><http://example/book1> dc:title \"München\". 
>} >Result: o=\"München\" >UTF-8-Test OK[/code] > >My tester code: >[code]package de.bsb_muenchen.bigdatautftester; > >import java.util.HashMap; >import java.util.LinkedList; >import java.util.List; >import java.util.logging.Level; >import java.util.logging.Logger; >import org.openrdf.query.Binding; >import org.openrdf.query.BindingSet; >import org.openrdf.query.MalformedQueryException; >import org.openrdf.query.QueryLanguage; >import org.openrdf.query.TupleQuery; >import org.openrdf.query.TupleQueryResult; >import org.openrdf.query.Update; >import org.openrdf.query.UpdateExecutionException; >import org.openrdf.repository.RepositoryConnection; >import org.openrdf.repository.RepositoryException; >import org.openrdf.repository.sparql.SPARQLRepository; > >/** > * > * @author ak > */ >public class BigdataUtfTester { > > private static final String MÜNCHEN = \"München\"; > private static final List<String> updateQueries = new LinkedList<>(); > > static { > updateQueries.add(\"LOAD <file:/tmp/test.nt>\"); > updateQueries.add(\"LOAD <file:/tmp/test.ttl>\"); > updateQueries.add(\"LOAD <file:/tmp/test.xml>\"); > updateQueries.add(\"PREFIX dc: ><http://purl.org/dc/elements/1.1/>\\n\" > + \"INSERT DATA\\n\" > + \"{\\n\" > + \"<http://example/book1> dc:title \\\"München\\\".\\n\" > + \"}\"); > } > > public static void main(String[] args) throws Exception { > SPARQLRepository endpoint = new >SPARQLRepository(\"http://server:8080/bigdata/sparql\"); > endpoint.initialize(); > HashMap<String, String> headers = new HashMap<>(); > headers.put(\"Accept-Charset\", \"UTF-8\"); > endpoint.setAdditionalHttpHeaders(headers); > RepositoryConnection connection = endpoint.getConnection(); > for (String updSparql : updateQueries) { > deleteTestTriple(connection); > >System.out.println(\"##############################################\\nRunn >ing query \".concat(updSparql)); > Update update = >connection.prepareUpdate(QueryLanguage.SPARQL, updSparql); > update.execute(); > //Check if München was inserted correctly: > TupleQuery query = >connection.prepareTupleQuery(QueryLanguage.SPARQL, \"PREFIX dc: ><http://purl.org/dc/elements/1.1/>\\n\" > + \"SELECT * { <http://example/book1> dc:title ?o } >LIMIT 10\"); > TupleQueryResult result = query.evaluate(); > List<String> bindingNames = result.getBindingNames(); > for (String bindingName : bindingNames) { > BindingSet bindings = result.next(); > Binding binding = bindings.getBinding(bindingName); > System.out.println(\"Result: \" + binding.toString()); > if >(binding.getValue().stringValue().compareToIgnoreCase(MÜNCHEN) == 0) { > System.out.println(\"UTF-8-Test OK\\n\"); > } else { > System.out.println(\"UTF-8-Test FAILED\\n\"); > } > } > } > connection.close(); > } > > private static void deleteTestTriple(RepositoryConnection connection) >{ > try { > Update deleteResource = >connection.prepareUpdate(QueryLanguage.SPARQL, \"DELETE WHERE >{<http://example/book1> ?p ?o.}\"); > deleteResource.execute(); > } catch (UpdateExecutionException | RepositoryException | >MalformedQueryException ex) { > >Logger.getLogger(BigdataUtfTester.class.getName()).log(Level.SEVERE, >null, ex); > } > } >}[/code] > >This is the POM to make Maven download the required libraries: >[code]<?xml version=\"1.0\" encoding=\"UTF-8\"?> ><project xmlns=\"http://maven.apache.org/POM/4.0.0\" >xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" >xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 >http://maven.apache.org/xsd/maven-4.0.0.xsd\"> > <modelVersion>4.0.0</modelVersion> > 
<groupId>de.bsb_muenchen</groupId> > <artifactId>BigdataUtfTester</artifactId> > <version>1.0-SNAPSHOT</version> > <packaging>jar</packaging> > <dependencies> > <dependency> > <groupId>org.openrdf.sesame</groupId> > <artifactId>sesame-runtime</artifactId> > <version>2.7.8</version> > <type>jar</type> > </dependency> > <dependency> > <groupId>org.apache.logging.log4j</groupId> > <artifactId>log4j</artifactId> > <version>2.0-beta9</version> > <type>pom</type> > </dependency> > <dependency> > <groupId>commons-logging</groupId> > <artifactId>commons-logging</artifactId> > <version>1.1.3</version> > <classifier>api</classifier> > </dependency> > </dependencies> > <properties> > <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> > <maven.compiler.source>1.7</maven.compiler.source> > <maven.compiler.target>1.7</maven.compiler.target> > </properties> ></project>[/code] |
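The finding in the thread above is that Turtle and RDF/XML loads handle UTF-8 correctly, while N-Triples is read as ASCII and therefore needs Unicode escape sequences for non-ASCII characters. As a minimal, generic illustration of that workaround (plain Java, not part of Bigdata or Sesame, and covering only characters in the Basic Multilingual Plane), an escaping helper might look like this:

```java
public final class NTriplesAsciiEscape {

    /**
     * Escapes non-ASCII characters as backslash-u followed by four hex digits,
     * so literals can be written into an ASCII-only N-Triples file.
     * Characters outside the Basic Multilingual Plane are not handled here.
     */
    public static String escape(final String s) {
        final StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            final char c = s.charAt(i);
            if (c <= 0x7F) {
                sb.append(c);
            } else {
                sb.append(String.format("\\u%04X", (int) c));
            }
        }
        return sb.toString();
    }

    public static void main(final String[] args) {
        // Prints the escaped form of "München": 'M', then the escape for
        // U+00FC, then "nchen" (assuming this source file is compiled as UTF-8).
        System.out.println(escape("München"));
    }
}
```

A LOAD of an N-Triples file escaped this way should round-trip through an ASCII-only parser without the mojibake shown in the test output above.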
From: Bryan T. <br...@sy...> - 2014-01-29 22:25:12
|
Maybe. Can you file a ticket with SourceForge? Bryan -------- Original message -------- From: Antoni Mylka Date:01/29/2014 9:20 AM (GMT-05:00) To: Bryan Thompson Cc: big...@li... Subject: Re: [Bigdata-developers] Inference in BigData Hi Bryan, Thanks for the answers. Now I think I understand a bit more. The mailing list search still doesn't work for me. The URL you posted yields no results. When I search in both mailing lists, I only get hits from bigdata-commit. When I choose bigdata-developers, I get no hits. Perhaps it's a Sourceforge issue that's not visible in the US. -- Antoni Myłka Software Engineer basis06 AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 311 32 22 http://www.basis06.ch - source of smart business ----- Ursprüngliche Mail ----- Von: "Bryan Thompson" <br...@sy...> An: "Antoni Mylka" <ant...@ba...>, big...@li... Gesendet: Mittwoch, 29. Januar 2014 14:22:40 Betreff: Re: [Bigdata-developers] Inference in BigData Antoni, I think that most of your points are accurate. RDFS+ was a term that Jim Hendler was using for a while. It predates the standardization of these inference profiles. - Bigdata places an emphasis on a subset of inference rules that support scalable applications and which are "interesting". A lot of the standard rules are not very useful. For example, the range/domain queries do not impose a constraint. Many "inferences" are best captured by annotating the data as it is loaded. The application very often knows exactly how to decorate the instances and can ensure that the relevant properties are in place. When this approach works, it is less costly than asking the database to compute or maintain those inferences. - The set of rules in the FastClosure or FullClosure program obviously effects the inferences that are drawn. There can be custom rules inserted into these classes. - There is backward chaining for "everything is a resource", at least in the Journal/HA deployment model. This is pretty much a useless inference and definitely is not one that should be materialized. - I do find search hits on the developers list. For example, this URI: > https://sourceforge.net/search/index.php?group_id=191861&type_of_search=mli sts&q=RTO&ml_name[]=bigdata-developers&posted_date_start=&posted_date_end=& form_submit=Search Another theme that has come up several times with customers and recently on the forum are ways to compute the inferences independent of the target database for improved scaling, handling quads model inference using custom workflows, etc. I will try to touch briefly on these points. - Several customers use a pattern where they manage the inference in a temporary store or temporary journal. They compute the delta against the ground truth database, collect that data using a change log listener, and then apply the delta to the target database. This can be used to scale the inference workload independent of the query workload. It can also be used to partition the inference problem, either within the domain (this is often possible in real world applications) or across multiple triple store instances in a given database (e.g., multi-tenancy). This model can also work well with durable queues or map/reduce processes that feed updates into the database through an inference workflow. This can lead to *very* scalable design patterns. - The main reason why we do not support inference in quads mode is the question of which named graphs are the sources and the target for the ground triples and the inferred triples. 
You can use an inference workflow to make explicit application decisions about these issues. - You can use an inference workflow to partition the inference problem and scale inference independent of query for the highly available replication cluster. The HA cluster allows you to scale the query throughput linearly. By factoring out (and potentially partitioning) the inference workload, you not only remove a significant burden from the leader, but you can also scale the inference throughput independent of the query throughput if you can partition the inference problem. - The horizontally scaled architecture only supports database-at-once inference (versus incremental truth maintenance). You can use an inference workload to partition the inference problem and scale the inference problem independent of the triple store for scale-out. Thanks, Bryan |
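As a rough sketch of the workflow described above (compute inference off to the side, then apply the delta to the target database), the fragment below uses the same Sesame 2.x client API as the tester program elsewhere on this list. A local RDFS inferencer stands in for the temporary store, the endpoint URL and file path are placeholders, blank nodes are assumed absent, and Bigdata's change-log listener machinery for computing a true delta against ground truth is not shown.

```java
import java.io.File;

import org.openrdf.model.Statement;
import org.openrdf.query.QueryLanguage;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.RepositoryResult;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.repository.sparql.SPARQLRepository;
import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.ntriples.NTriplesUtil;
import org.openrdf.sail.inferencer.fc.ForwardChainingRDFSInferencer;
import org.openrdf.sail.memory.MemoryStore;

public class ExternalInferenceSketch {
    public static void main(String[] args) throws Exception {
        // 1. Compute the closure of an update batch in a local, inferencing store.
        SailRepository staging = new SailRepository(
                new ForwardChainingRDFSInferencer(new MemoryStore()));
        staging.initialize();
        RepositoryConnection stage = staging.getConnection();
        stage.add(new File("/tmp/update.ttl"), "http://example/", RDFFormat.TURTLE);

        // 2. Collect ground plus inferred triples.  A real workflow would filter
        //    axiomatic triples and diff against the target's ground truth.
        StringBuilder insert = new StringBuilder("INSERT DATA {\n");
        RepositoryResult<Statement> all = stage.getStatements(null, null, null, true);
        while (all.hasNext()) {
            Statement st = all.next();
            insert.append(NTriplesUtil.toNTriplesString(st.getSubject())).append(' ')
                  .append(NTriplesUtil.toNTriplesString(st.getPredicate())).append(' ')
                  .append(NTriplesUtil.toNTriplesString(st.getObject())).append(" .\n");
        }
        insert.append("}");
        stage.close();
        staging.shutDown();

        // 3. Ship the batch to the target Bigdata endpoint as one SPARQL UPDATE.
        SPARQLRepository target = new SPARQLRepository("http://server:8080/bigdata/sparql");
        target.initialize();
        RepositoryConnection cxn = target.getConnection();
        try {
            cxn.prepareUpdate(QueryLanguage.SPARQL, insert.toString()).execute();
        } finally {
            cxn.close();
        }
    }
}
```

Partitioning the input and running several such staging stores in parallel is how this pattern scales inference independently of query, as described in the message above.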
From: Antoni M. <ant...@ba...> - 2014-01-29 14:21:06
|
Hi Bryan, Thanks for the answers. Now I think I understand a bit more. The mailing list search still doesn't work for me. The URL you posted yields no results. When I search in both mailing lists, I only get hits from bigdata-commit. When I choose bigdata-developers, I get no hits. Perhaps it's a Sourceforge issue that's not visible in the US. -- Antoni Myłka Software Engineer basis06 AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 311 32 22 http://www.basis06.ch - source of smart business ----- Ursprüngliche Mail ----- Von: "Bryan Thompson" <br...@sy...> An: "Antoni Mylka" <ant...@ba...>, big...@li... Gesendet: Mittwoch, 29. Januar 2014 14:22:40 Betreff: Re: [Bigdata-developers] Inference in BigData Antoni, I think that most of your points are accurate. RDFS+ was a term that Jim Hendler was using for a while. It predates the standardization of these inference profiles. - Bigdata places an emphasis on a subset of inference rules that support scalable applications and which are "interesting". A lot of the standard rules are not very useful. For example, the range/domain queries do not impose a constraint. Many "inferences" are best captured by annotating the data as it is loaded. The application very often knows exactly how to decorate the instances and can ensure that the relevant properties are in place. When this approach works, it is less costly than asking the database to compute or maintain those inferences. - The set of rules in the FastClosure or FullClosure program obviously effects the inferences that are drawn. There can be custom rules inserted into these classes. - There is backward chaining for "everything is a resource", at least in the Journal/HA deployment model. This is pretty much a useless inference and definitely is not one that should be materialized. - I do find search hits on the developers list. For example, this URI: > https://sourceforge.net/search/index.php?group_id=191861&type_of_search=mli sts&q=RTO&ml_name[]=bigdata-developers&posted_date_start=&posted_date_end=& form_submit=Search Another theme that has come up several times with customers and recently on the forum are ways to compute the inferences independent of the target database for improved scaling, handling quads model inference using custom workflows, etc. I will try to touch briefly on these points. - Several customers use a pattern where they manage the inference in a temporary store or temporary journal. They compute the delta against the ground truth database, collect that data using a change log listener, and then apply the delta to the target database. This can be used to scale the inference workload independent of the query workload. It can also be used to partition the inference problem, either within the domain (this is often possible in real world applications) or across multiple triple store instances in a given database (e.g., multi-tenancy). This model can also work well with durable queues or map/reduce processes that feed updates into the database through an inference workflow. This can lead to *very* scalable design patterns. - The main reason why we do not support inference in quads mode is the question of which named graphs are the sources and the target for the ground triples and the inferred triples. You can use an inference workflow to make explicit application decisions about these issues. - You can use an inference workflow to partition the inference problem and scale inference independent of query for the highly available replication cluster. 
The HA cluster allows you to scale the query throughput linearly. By factoring out (and potentially partitioning) the inference workload, you not only remove a significant burden from the leader, but you can also scale the inference throughput independent of the query throughput if you can partition the inference problem. - The horizontally scaled architecture only supports database-at-once inference (versus incremental truth maintenance). You can use an inference workload to partition the inference problem and scale the inference problem independent of the triple store for scale-out. Thanks, Bryan |
From: Bryan T. <br...@sy...> - 2014-01-29 13:23:26
|
Antoni, I think that most of your points are accurate. RDFS+ was a term that Jim Hendler was using for a while. It predates the standardization of these inference profiles. - Bigdata places an emphasis on a subset of inference rules that support scalable applications and which are "interesting". A lot of the standard rules are not very useful. For example, the range/domain queries do not impose a constraint. Many "inferences" are best captured by annotating the data as it is loaded. The application very often knows exactly how to decorate the instances and can ensure that the relevant properties are in place. When this approach works, it is less costly than asking the database to compute or maintain those inferences. - The set of rules in the FastClosure or FullClosure program obviously effects the inferences that are drawn. There can be custom rules inserted into these classes. - There is backward chaining for "everything is a resource", at least in the Journal/HA deployment model. This is pretty much a useless inference and definitely is not one that should be materialized. - I do find search hits on the developers list. For example, this URI: > https://sourceforge.net/search/index.php?group_id=191861&type_of_search=mli sts&q=RTO&ml_name[]=bigdata-developers&posted_date_start=&posted_date_end=& form_submit=Search Another theme that has come up several times with customers and recently on the forum are ways to compute the inferences independent of the target database for improved scaling, handling quads model inference using custom workflows, etc. I will try to touch briefly on these points. - Several customers use a pattern where they manage the inference in a temporary store or temporary journal. They compute the delta against the ground truth database, collect that data using a change log listener, and then apply the delta to the target database. This can be used to scale the inference workload independent of the query workload. It can also be used to partition the inference problem, either within the domain (this is often possible in real world applications) or across multiple triple store instances in a given database (e.g., multi-tenancy). This model can also work well with durable queues or map/reduce processes that feed updates into the database through an inference workflow. This can lead to *very* scalable design patterns. - The main reason why we do not support inference in quads mode is the question of which named graphs are the sources and the target for the ground triples and the inferred triples. You can use an inference workflow to make explicit application decisions about these issues. - You can use an inference workflow to partition the inference problem and scale inference independent of query for the highly available replication cluster. The HA cluster allows you to scale the query throughput linearly. By factoring out (and potentially partitioning) the inference workload, you not only remove a significant burden from the leader, but you can also scale the inference throughput independent of the query throughput if you can partition the inference problem. - The horizontally scaled architecture only supports database-at-once inference (versus incremental truth maintenance). You can use an inference workload to partition the inference problem and scale the inference problem independent of the triple store for scale-out. Thanks, Bryan On 1/29/14 4:13 AM, "Antoni Mylka" <ant...@ba...> wrote: >Hi, > >I've been trying to wrap my head around inference in Bigdata. 
This stuff >is probably obvious to you. I've gathered my findings in seven yes/no >statements. I would be very grateful for a true/false answer and maybe >links to further docs. > >1. The only definition of "RDFS Plus" is in the book "Semantic Web for >the working ontologist". It's not a standard by any means. The statement >"Bigdata supports RDFS Plus" means "It is possible to configure bigdata >to provide the kind of inference described in that book." > >2. The inference in Bigdata depends on exactly three things: > - the axioms - i.e. triples that are in the graph in the beginning and >cannot be removed > - the closure - i.e. inference rules > - the InferenceEngine, is obtained from the AbstractTripleStore, >contains additional configuration like "forwardChainRdfTypeRdfsResource" >etc. The InferenceEngine config is read by the FastClosure and >FullClosure classes. > >3. When I want to be sure that the inferencing goes according to my >needs: I need to understand the meaning of exactly twelve configuration >options and make sure they have correct values: > >com.bigdata.rdf.store.AbstractTripleStore.axiomsClass >com.bigdata.rdf.store.AbstractTripleStore.closureClass >... all 10 properties defined in com.bigdata.rdf.rules.InferenceEngine > >4. The default settings of the above options (OwlAxioms, FastClosure, >default InferenceEngine settings) yield a ruleset that is not formally >defined anywhere. It's not full RDFS (e.g. rules RDFS4a and RDFS4b are >disabled by default) nor OWL. > >5. If I want to follow some written standard and have all the RDFS >entailment rules from >http://www.w3.org/TR/2004/REC-rdf-mt-20040210/#rules, or OWL 2 RL/RDF >from >http://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_u >sing_Rules I need to take care about it myself. There are no canned >"standard" settings, that I could enable with a flick of a switch. If I >need any of that, I'll need to set 12 configuration options and maybe >even write my own Axioms and BaseClosure subclasses. The classes have to >be wrapped in a jar, placed in WEB-INF/lib and shipped with my BigData >distribution. > >6. When configuring inference - I need to understand the performance >tradeoffs, and be sure that I really need ALL the rules. Every new rule >means slower database. The default settings are fast and scalable. > >7. The only complete and authoritative documentation of ALL available >Bigdata configuration options is in the code. I need to search for all >interfaces named "Options" and see the javadocs of the constants there. >Each constant X is accompanied by a DEFAULT_X constant with the default >value. > >BTW: The mailing list search at >http://sourceforge.net/search/?group_id=191861&type_of_search=mlists only >covers bigdata-commit. I couldn't find any search for bigdata-developers. > >Best Regards > >-- >Antoni Myłka >Software Engineer > >basis06 AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 311 32 22 >http://www.basis06.ch - source of smart business > >-------------------------------------------------------------------------- >---- >WatchGuard Dimension instantly turns raw network data into actionable >security intelligence. It gives you real-time visual feedback on key >security issues and trends. Skip the complicated setup - simply import >a virtual appliance and go from zero to informed in seconds. >http://pubads.g.doubleclick.net/gampad/clk?id=123612991&iu=/4140/ostg.clkt >rk >_______________________________________________ >Bigdata-developers mailing list >Big...@li... 
>https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Antoni M. <ant...@ba...> - 2014-01-29 09:32:02
|
Hi,

I've been trying to wrap my head around inference in Bigdata. This stuff is probably obvious to you. I've gathered my findings in seven yes/no statements. I would be very grateful for a true/false answer and maybe links to further docs.

1. The only definition of "RDFS Plus" is in the book "Semantic Web for the Working Ontologist". It's not a standard by any means. The statement "Bigdata supports RDFS Plus" means "It is possible to configure bigdata to provide the kind of inference described in that book."

2. The inference in Bigdata depends on exactly three things:
   - the axioms, i.e. triples that are in the graph in the beginning and cannot be removed
   - the closure, i.e. inference rules
   - the InferenceEngine, which is obtained from the AbstractTripleStore and contains additional configuration like "forwardChainRdfTypeRdfsResource" etc. The InferenceEngine config is read by the FastClosure and FullClosure classes.

3. When I want to be sure that the inferencing goes according to my needs, I need to understand the meaning of exactly twelve configuration options and make sure they have correct values:

   com.bigdata.rdf.store.AbstractTripleStore.axiomsClass
   com.bigdata.rdf.store.AbstractTripleStore.closureClass
   ... all 10 properties defined in com.bigdata.rdf.rules.InferenceEngine

4. The default settings of the above options (OwlAxioms, FastClosure, default InferenceEngine settings) yield a ruleset that is not formally defined anywhere. It's not full RDFS (e.g. rules RDFS4a and RDFS4b are disabled by default) nor OWL.

5. If I want to follow some written standard and have all the RDFS entailment rules from http://www.w3.org/TR/2004/REC-rdf-mt-20040210/#rules, or OWL 2 RL/RDF from http://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules, I need to take care of it myself. There are no canned "standard" settings that I could enable with a flick of a switch. If I need any of that, I'll need to set 12 configuration options and maybe even write my own Axioms and BaseClosure subclasses. The classes have to be wrapped in a jar, placed in WEB-INF/lib and shipped with my BigData distribution.

6. When configuring inference, I need to understand the performance tradeoffs and be sure that I really need ALL the rules. Every new rule means a slower database. The default settings are fast and scalable.

7. The only complete and authoritative documentation of ALL available Bigdata configuration options is in the code. I need to search for all interfaces named "Options" and see the javadocs of the constants there. Each constant X is accompanied by a DEFAULT_X constant with the default value.

BTW: The mailing list search at http://sourceforge.net/search/?group_id=191861&type_of_search=mlists only covers bigdata-commit. I couldn't find any search for bigdata-developers.

Best Regards

--
Antoni Myłka
Software Engineer

basis06 AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 311 32 22
http://www.basis06.ch - source of smart business |
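To make point 3 above concrete, here is a small sketch of how those options get passed in when the store is opened through the Sesame integration. The two AbstractTripleStore keys are copied from the list above; the class names used as values and the fully qualified InferenceEngine key follow Bigdata's usual naming pattern but are assumptions, so verify them against the Options interfaces mentioned in point 7.

```java
import java.util.Properties;

import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class InferenceConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();

        // The two top-level knobs from point 3.  The values are the defaults
        // named in point 4; the package-qualified class names are assumptions.
        props.setProperty("com.bigdata.rdf.store.AbstractTripleStore.axiomsClass",
                "com.bigdata.rdf.axioms.OwlAxioms");
        props.setProperty("com.bigdata.rdf.store.AbstractTripleStore.closureClass",
                "com.bigdata.rdf.rules.FastClosure");

        // One of the ten InferenceEngine options; the full key shown here is an
        // assumed "<owning class>.<option>" spelling, check the Options javadoc.
        props.setProperty(
                "com.bigdata.rdf.rules.InferenceEngine.forwardChainRdfTypeRdfsResource",
                "false");

        // Journal/backing-store options (file location, buffer mode, etc.) omitted.

        BigdataSail sail = new BigdataSail(props);
        BigdataSailRepository repo = new BigdataSailRepository(sail);
        repo.initialize();
        try {
            // ... load data; the closure is computed according to the classes above.
        } finally {
            repo.shutDown();
        }
    }
}
```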
From: Jeremy J C. <jj...@sy...> - 2014-01-25 00:06:15
|
This is trac 807. I did quite a bit of pre-work, including:

a) tests
b) ensuring that the query hint is visible in the right part of the code [this was a little difficult]

Jeremy J Carroll
Principal Architect
Syapse, Inc.

On Jan 23, 2014, at 4:22 PM, Bryan Thompson <br...@sy...> wrote:

> Is the problem that the native distinct needs to support SPOC? If so, please file a ticket and link it to this ticket and assign it to me. I will take care of that.
> Thanks,
> Bryan
>
>> On Jan 23, 2014, at 5:43 PM, "Jeremy J Carroll" <jj...@sy...> wrote:
>>
>> I have committed a fix to trac 804 in r7825.
>>
>> The following issues should be noted:
>>
>> 1) I have added a simple new class com.bigdata.rdf.model.BigdataQuadWrapper whose purpose is to have an equality contract based on quads, not triples.
>>
>> 2) There is interaction between the new code and the native distinct mode in construct (which is turned on for analytic mode updates):
>> - if we are doing an INSERT or DELETE or both,
>> - and we have a template that may be quad patterns from multiple graphs,
>> - then we use a simple filter based on BigdataQuadWrapper rather than the native distinct filter, which looks like a much more scalable/robust piece of code.
>>
>> In triples mode, and/or if there is only one graph being updated, the new code is not used.
>>
>> Jeremy J Carroll
>> Principal Architect
>> Syapse, Inc.
>>
>> ------------------------------------------------------------------------------
>> CenturyLink Cloud: The Leader in Enterprise Cloud Services.
>> Learn Why More Businesses Are Choosing CenturyLink Cloud For
>> Critical Workloads, Development Environments & Everything In Between.
>> Get a Quote or Start a Free Trial Today.
>> http://pubads.g.doubleclick.net/gampad/clk?id=119420431&iu=/4140/ostg.clktrk
>> _______________________________________________
>> Bigdata-developers mailing list
>> Big...@li...
>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
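For readers following along, the idea of an "equality contract based on quads, not triples" can be sketched in a few lines against the Sesame model API. This is an illustration of the concept only, not the actual com.bigdata.rdf.model.BigdataQuadWrapper source.

```java
import java.util.Objects;

import org.openrdf.model.Statement;

/**
 * Illustration only: wraps a Sesame Statement so that equals/hashCode cover
 * (subject, predicate, object, context) rather than just (subject, predicate,
 * object), which is what a DISTINCT filter over quads needs.
 */
public final class QuadEqualityWrapper {

    private final Statement stmt;

    public QuadEqualityWrapper(final Statement stmt) {
        this.stmt = stmt;
    }

    public Statement statement() {
        return stmt;
    }

    @Override
    public boolean equals(final Object o) {
        if (this == o) return true;
        if (!(o instanceof QuadEqualityWrapper)) return false;
        final Statement other = ((QuadEqualityWrapper) o).stmt;
        return stmt.getSubject().equals(other.getSubject())
                && stmt.getPredicate().equals(other.getPredicate())
                && stmt.getObject().equals(other.getObject())
                // The context (named graph) may be null in triples mode.
                && Objects.equals(stmt.getContext(), other.getContext());
    }

    @Override
    public int hashCode() {
        return Objects.hash(stmt.getSubject(), stmt.getPredicate(),
                stmt.getObject(), stmt.getContext());
    }
}
```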
From: Bryan T. <br...@sy...> - 2014-01-24 00:22:48
|
Is the problem that the native distinct needs to support SPOC? If so, please file a ticket and link it to this ticket and assign it to me. I will take care of that.

Thanks,
Bryan

> On Jan 23, 2014, at 5:43 PM, "Jeremy J Carroll" <jj...@sy...> wrote:
>
> I have committed a fix to trac 804 in r7825
>
> The following issues should be noted:
>
> 1) I have added a simple new class com.bigdata.rdf.model.BigdataQuadWrapper whose purpose is to have an equality contract based on quads, not triples
>
> 2) there is interaction between the new code and the native distinct mode in construct (which is turned on for analytic mode updates)
> - if we are doing an INSERT or DELETE or both
> - and we have a template that may be quad patterns from multiple graphs
> - then we use a simple filter based on BigdataQuadWrapper rather than the native distinct filter, which looks like a much more scalable/robust piece of code
>
> In triples mode and/or if there is only one graph being updated, the new code is not used.
>
> Jeremy J Carroll
> Principal Architect
> Syapse, Inc.
|
From: Bryan T. <br...@sy...> - 2014-01-23 23:33:50
|
I generally go to the list of closed tickets in reverse time.

-------- Original message --------
From: Jeremy J Carroll
Date: 01/23/2014 6:06 PM (GMT-05:00)
To: Big...@li...
Subject: [Bigdata-developers] release note entries???

When I fix a bug, is there somewhere I am meant to make a one-liner for eventual customer consumption, or is creating release notes a batch task done before each release?

On some projects I have worked on we had a discipline where the commit for each bug fix also modified the draft release notes. As the lead on one such project it still fell on me to tidy these notes up before the release, but it was a lot less painful than poring over the svn or git logs.

Jeremy J Carroll
Principal Architect
Syapse, Inc.
|
From: Jeremy J C. <jj...@sy...> - 2014-01-23 23:15:04
|
I had left the log4j trace setting

log4j.logger.com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate=TRACE

which is way too expensive with real data!

Jeremy J Carroll
Principal Architect
Syapse, Inc.

On Jan 23, 2014, at 3:08 PM, Jeremy J Carroll <jj...@sy...> wrote:

> Still assessing … this is an early warning: I went back to my other work and immediately got an OOME on a supposedly unrelated SPARQL update.
>
> Jeremy J Carroll
> Principal Architect
> Syapse, Inc.
|
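A hedged aside, not from the thread: the log4j.logger.* syntax above is log4j 1.2 property configuration, so the same logger can also be dialed back at runtime through the 1.2 API. The class below is a hypothetical illustration, not part of Bigdata.

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietAst2BOpUpdateLogging {
    public static void main(String[] args) {
        // Drop the verbose AST2BOpUpdate logger back to a cheap level
        // (assumes the deployment is using log4j 1.2, as the property syntax suggests).
        Logger.getLogger("com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate")
              .setLevel(Level.WARN);
    }
}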
From: Jeremy J C. <jj...@sy...> - 2014-01-23 23:08:35
|
Still assessing … this is an early warning: I went back to my other work and immediately got an OOME on a supposedly unrelated SPARQL update.

Jeremy J Carroll
Principal Architect
Syapse, Inc.
|
From: Jeremy J C. <jj...@sy...> - 2014-01-23 23:06:25
|
When I fix a bug, is there somewhere I am meant to make a one-liner for eventual customer consumption, or is creating release notes a batch task done before each release?

On some projects I have worked on we had a discipline where the commit for each bug fix also modified the draft release notes. As the lead on one such project it still fell on me to tidy these notes up before the release, but it was a lot less painful than poring over the svn or git logs.

Jeremy J Carroll
Principal Architect
Syapse, Inc.
|
From: Jeremy J C. <jj...@sy...> - 2014-01-23 22:44:02
|
I have committed a fix to trac 804 in r7825

The following issues should be noted:

1) I have added a simple new class com.bigdata.rdf.model.BigdataQuadWrapper whose purpose is to have an equality contract based on quads, not triples

2) there is interaction between the new code and the native distinct mode in construct (which is turned on for analytic mode updates)
- if we are doing an INSERT or DELETE or both
- and we have a template that may be quad patterns from multiple graphs
- then we use a simple filter based on BigdataQuadWrapper rather than the native distinct filter, which looks like a much more scalable/robust piece of code

In triples mode and/or if there is only one graph being updated, the new code is not used.

Jeremy J Carroll
Principal Architect
Syapse, Inc.
|
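The "equality contract based on quads, not triples" in point 1 can be pictured with the self-contained sketch below. It is not the actual com.bigdata.rdf.model.BigdataQuadWrapper; the Stmt and QuadWrapper classes are hypothetical stand-ins that only illustrate why wrapping changes what a distinct filter keeps.

import java.util.LinkedHashSet;
import java.util.Objects;
import java.util.Set;

public class QuadWrapperSketch {

    // Hypothetical statement whose equality deliberately ignores the graph (triple semantics).
    static final class Stmt {
        final String s, p, o, c;
        Stmt(String s, String p, String o, String c) { this.s = s; this.p = p; this.o = o; this.c = c; }
        @Override public boolean equals(Object other) {
            if (!(other instanceof Stmt)) return false;
            final Stmt t = (Stmt) other;
            return s.equals(t.s) && p.equals(t.p) && o.equals(t.o); // context not compared
        }
        @Override public int hashCode() { return Objects.hash(s, p, o); }
    }

    // Wrapper that restores quad semantics: the context participates in equals/hashCode.
    static final class QuadWrapper {
        final Stmt stmt;
        QuadWrapper(Stmt stmt) { this.stmt = stmt; }
        @Override public boolean equals(Object other) {
            if (!(other instanceof QuadWrapper)) return false;
            final Stmt t = ((QuadWrapper) other).stmt;
            return stmt.equals(t) && stmt.c.equals(t.c);
        }
        @Override public int hashCode() { return Objects.hash(stmt.s, stmt.p, stmt.o, stmt.c); }
    }

    public static void main(String[] args) {
        final Stmt inGraphA = new Stmt("eg:b", "rdf:type", "eg:c", "eg:a");
        final Stmt inGraphB = new Stmt("eg:b", "rdf:type", "eg:c", "eg:A");

        final Set<Stmt> distinctByTriple = new LinkedHashSet<>();
        distinctByTriple.add(inGraphA);
        distinctByTriple.add(inGraphB);

        final Set<QuadWrapper> distinctByQuad = new LinkedHashSet<>();
        distinctByQuad.add(new QuadWrapper(inGraphA));
        distinctByQuad.add(new QuadWrapper(inGraphB));

        System.out.println(distinctByTriple.size()); // 1: the two graphs collapse into one statement
        System.out.println(distinctByQuad.size());   // 2: one statement per graph survives
    }
}

A simple in-memory set of such wrappers is enough to be correct for multi-graph templates, which is roughly the trade-off point 2 describes against the more scalable native distinct filter.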
From: Jeremy C. <jj...@gm...> - 2014-01-23 19:20:50
|
Just FYI - I have identified the root cause of the issue, which has to do with the filter field in ASTConstructIterator, and I believe I can fix it.

Jeremy
|
From: Jeremy J C. <jj...@sy...> - 2014-01-23 02:12:13
|
I have been making some progress trying to understand an update issue (trac 804). In particular I moved it from a heisenbug to something predictable and simple:

1) DROP ALL;
   prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
   INSERT DATA {
     GRAPH <eg:a> { <eg:b> rdf:type <eg:c> }
     GRAPH <eg:A> { <eg:b> rdf:type <eg:c> }
   }

2) DELETE {
     GRAPH <eg:a> { ?olds ?oldp ?oldo }
     GRAPH <eg:A> { ?olds ?oldp ?oldo }
   }
   WHERE {
     GRAPH <eg:a> { ?olds ?oldp ?oldo }
   }

And (2) does not work, deleting only one triple.

Drilling through the code, so far I have got to this, e.g. in AST2BOpUpdate:

final QuadData quadData = (insertClause == null ? deleteClause : insertClause).getQuadData();

// Flatten the original WHERE clause into a CONSTRUCT
// template.
final ConstructNode template = quadData
        .flatten(new ConstructNode(context));

// Set the CONSTRUCT template (quads patterns).
queryRoot.setConstruct(template);

This seems questionable because a construct query returns triples (according to the spec; I have not yet looked at the code), whereas the quadData contains quads. And indeed the duplicate triple/different quad gets lost immediately after this.

I will pick this up again tomorrow, but wondered if anyone else had any input at this stage.

Jeremy J Carroll
Principal Architect
Syapse, Inc.
|
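For intuition about the flatten step described above, here is a toy, non-Bigdata model of it (the Quad class and asTriple method are hypothetical, not the real QuadData/ConstructNode API): turning the quad template of (2) into a triples-only CONSTRUCT-style template drops the graph term, so the <eg:a> and <eg:A> entries collapse into a single triple and only one of the two deletions can be reconstructed afterwards.

import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class FlattenSketch {

    // Hypothetical quad pattern from the DELETE template: a graph term plus a triple.
    static final class Quad {
        final String graph, s, p, o;
        Quad(String graph, String s, String p, String o) {
            this.graph = graph; this.s = s; this.p = p; this.o = o;
        }
        // "Flattening" to a triples-only template keeps only the triple part.
        String asTriple() { return s + " " + p + " " + o; }
    }

    public static void main(String[] args) {
        // The two quad patterns from the DELETE clause in (2).
        final List<Quad> quadTemplate = Arrays.asList(
                new Quad("eg:a", "eg:b", "rdf:type", "eg:c"),
                new Quad("eg:A", "eg:b", "rdf:type", "eg:c"));

        final Set<String> constructTemplate = quadTemplate.stream()
                .map(Quad::asTriple)
                .collect(Collectors.toCollection(LinkedHashSet::new));

        // Only one entry survives, so only one graph's statement can be deleted.
        System.out.println(constructTemplate);        // [eg:b rdf:type eg:c]
        System.out.println(constructTemplate.size()); // 1
    }
}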